Body - Language - Communication: Volume 2 9783110302028, 9783110300802

Volume II of the handbook offers a unique collection of exemplary case studies, presented in five chapters and 99 articles.


English Pages 1086 [1084] Year 2014



Table of contents:
VI. Gestures across cultures
73. Gestures in South Africa
74. Gestures in the Sub-Saharan region
75. Gestures in West Africa: Left hand taboo in Ghana
76. Gestures in West Africa: Wolof
77. Gestures in South America: Spanish and Portuguese
78. Gestures in South American indigenous cultures
79. Gestures in native South America: Ancash Quechua
80. Gestures in native Mexico and Central America: The Mayan Cultures
81. Gestures in native Northern America: Bimodal talk in Arapaho
82. Gestures in Southwest India: Dance theater
83. Gestures in China: Universal and culturally specific characteristics
84. Gestures and body language in Southern Europe: Italy
85. Gestures in Southern Europe: Children’s pragmatic gestures in Italy
86. Gestures in Southwest Europe: Portugal
87. Gestures in Southwest Europe: Catalonia
88. Gestures in Western Europe: France
89. Gestures in Northern Europe: Children’s gestures in Sweden
90. Gestures in Northeast Europe: Russia, Poland, Croatia, the Czech Republic, and Slovakia
VII. Body movements - Functions, contexts, and interactions
91. Body posture and movement in interaction: Participation management
92. Proxemics and axial orientation
93. The role of gaze in conversational interaction
94. Categories and functions of posture, gaze, face, and body movements
95. Facial expression and social interaction
96. Gestures, postures, gaze, and movement in work and organization
97. Gesture and conversational units
98. The interactive design of gestures
99. Gestures and mimicry
100. Gestures and prosody
101. Structuring discourse: Observations on prosody and gesture in Russian TV-discourse
102. Body movements in political discourse
103. Gestures in industrial settings
104. Identification and interpretation of co-speech gestures in technical systems
105. Gestures, postures, gaze, and other body movements in the 2nd language classroom interaction
106. Bodily interaction (of interpreters) in music performance
107. Gestures in the theater
108. Contemporary classification systems
109. Co-speech gestures: Structures and functions
110. Emblems or quotable gestures: Structures, categories, and functions
111. Semantics and pragmatics of symbolic gestures
112. Head shakes: Variation in form, function, and cultural distribution of a head movement related to “no”
113. Gestures in dictionaries: Physical contact gestures
114. Ring-gestures across cultures and times: Dimensions of variation
115. Gesture and taboo: A cross-cultural perspective · Heather Brookes
VIII. Gesture and language
116. Pragmatic gestures · Lluís Payrató and Sedinha Teßendorf
117. Pragmatic and metaphoric – combining functional with cognitive approaches in the analysis of the “brushing aside gesture”
118. Recurrent gestures
119. A repertoire of German recurrent gestures with pragmatic functions
120. The family of Away gestures: Negation, refusal, and negative assessment
121. The cyclic gesture
122. Kinesthemes: Morphological complexity in co-speech gestures
123. Gesture families and gestural fields
124. Repetitions in gesture
125. Syntactic complexity in co-speech gestures: Constituency and recursion
126. Creating multimodal utterances: The linear integration of gestures into speech
127. Gestures and location in English
128. Gestural modes of representation as techniques of depiction
129. Levels of abstraction
130. Gestures and iconicity
131. Iconic and representational gestures
132. Gestures and metonymy
133. Ways of viewing metaphor in gesture
134. The conceptualization of time in gesture
135. Between reference and meaning: Object-related and interpretant-related gestures in face-to-face interaction · Ellen Fricke
136. Deixis, gesture, and embodiment from a linguistic point of view
137. Pointing by hand: Types of reference and their influence on gestural form
IX. Embodiment - The body and its role for cognition, emotion, and communication
138. Gestures and cognitive development
139. Embodied cognition and word acquisition: The challenge of abstract words
140. The blossoming of children’s multimodal skills from 1 to 4 years old
141. Gestures before language: The use of baby signs
142. Gestures and second language acquisition
143. Further changes in L2 Thinking for Speaking?
144. Gesture and the neuropsychology of language
145. Gestures in aphasia
146. Body movements and mental illness: Alterations of movement behavior associated with eating disorders, schizophrenia, and depression
147. Bodily communication and deception
148. Multi-modal discourse comprehension
149. Cognitive operations that take place in the Perception-Action Loop
150. Gesture and working memory
151. Body movements in robotics
152. Gestures, postures, gaze, and movements in computer science
153. The psychology of gestures and gesture-like movements in non-human primates
154. An evolutionary perspective on facial behavior
155. On the consequences of living without facial expression
156. Multimodal forms of expressing emotions: The case of interjections
157. Some issues in the semiotics of gesture: The perspective of comparative semiotics
158. Embodied meaning, inside and out: The coupling of gesture and mental simulation
159. Embodied and distributed contexts of collaborative remembering
160. Living bodies: Co-enacting experience
161. Aproprioception, gesture, and cognitive being
162. Embodying audio-visual media: Concepts and transdisciplinary perspectives
163. Cinematic communication and embodiment
164. The discovery of the acting body
165. Expressive movements in audio-visual media: Modulating affective experience
166. Expressive movement and metaphoric meaning making in audio-visual media
167. Gesture as interactive expressive movement: Inter-affectivity in face-to-face communication
X. Sign language - Visible body movements as language
168. Linguistic structures in a manual modality: Phonology and morphology in sign languages
169. The grammaticalization of gestures in sign languages
170. Nonmanual gestures in sign languages
171. Enactment as a (signed) language communicative strategy
172. Gestures in sign-language

Body – Language – Communication (HSK 38.2)

Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science / Manuels de linguistique et des sciences de communication. Mitbegründet von Gerold Ungeheuer (†). Mitherausgegeben 1985–2001 von Hugo Steger.

Herausgegeben von / Edited by / Édités par Herbert Ernst Wiegand. Band 38.2

De Gruyter Mouton

Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. Edited by Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem. Volume 2

De Gruyter Mouton

ISBN 978-3-11-030080-2
e-ISBN 978-3-11-030202-2
ISSN 1861-5090

Library of Congress Cataloging-in-Publication Data: A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek: The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2014 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: Meta Systems Publishing & Printservices GmbH, Wustermark
Printing and binding: Hubert & Co. GmbH & Co. KG, Göttingen
Cover design: Martin Zech, Bremen
Printed on acid-free paper. Printed in Germany.
www.degruyter.com

Contents

Volume 2

Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2.). Berlin/Boston: De Gruyter Mouton.

VI. Gestures across cultures

73. Gestures in South Africa · Heather Brookes . . . 1147
74. Gestures in the Sub-Saharan region · Heather Brookes and Victoria Nyst . . . 1154
75. Gestures in West Africa: Left hand taboo in Ghana · James Essegbey . . . 1161
76. Gestures in West Africa: Wolof · Christian Meyer . . . 1169
77. Gestures in South America: Spanish and Portuguese · Monica Rector . . . 1175
78. Gestures in South American indigenous cultures · Sabine Reiter . . . 1182
79. Gestures in native South America: Ancash Quechua · Joshua Shapero . . . 1193
80. Gestures in native Mexico and Central America: The Mayan Cultures · Penelope Brown . . . 1206
81. Gestures in native Northern America: Bimodal talk in Arapaho · Richard Sandoval . . . 1215
82. Gestures in Southwest India: Dance theater · Rajyashree Ramesh . . . 1226
83. Gestures in China: Universal and culturally specific characteristics · Shumeng Hou and Wing Chee So . . . 1233
84. Gestures and body language in Southern Europe: Italy · Marino Bonaiuto and Tancredi Bonaiuto . . . 1240
85. Gestures in Southern Europe: Children’s pragmatic gestures in Italy · Maria Graziano . . . 1253
86. Gestures in Southwest Europe: Portugal · Isabel Galhano-Rodrigues . . . 1259
87. Gestures in Southwest Europe: Catalonia · Lluís Payrató . . . 1266
88. Gestures in Western Europe: France · Dominique Boutet and Simon Harrison . . . 1272
89. Gestures in Northern Europe: Children’s gestures in Sweden · Mats Andrén . . . 1282
90. Gestures in Northeast Europe: Russia, Poland, Croatia, the Czech Republic, and Slovakia · Grigory E. Kreydlin . . . 1289

VII. Body movements - Functions, contexts, and interactions

91. Body posture and movement in interaction: Participation management · Ulrike Bohle . . . 1301
92. Proxemics and axial orientation · Jörg Hagemann . . . 1310
93. The role of gaze in conversational interaction · Mardi Kidwell . . . 1324
94. Categories and functions of posture, gaze, face, and body movements · Beatrix Schönherr . . . 1333
95. Facial expression and social interaction · Pio E. Ricci Bitti . . . 1342
96. Gestures, postures, gaze, and movement in work and organization · Marino Bonaiuto, Stefano De Dominicis and Uberta Ganucci Cancellieri . . . 1349
97. Gesture and conversational units · Ulrike Bohle . . . 1360
98. The interactive design of gestures · Irene Kimbara . . . 1368
99. Gestures and mimicry · Irene Kimbara . . . 1375
100. Gestures and prosody · Dan Loehr . . . 1381
101. Structuring discourse: Observations on prosody and gesture in Russian TV-discourse · Nicole Richter . . . 1392
102. Body movements in political discourse · Fridanna Maricchiolo, Marino Bonaiuto and Augusto Gnisci . . . 1400
103. Gestures in industrial settings · Simon Harrison . . . 1413
104. Identification and interpretation of co-speech gestures in technical systems · Timo Sowa . . . 1419
105. Gestures, postures, gaze, and other body movements in the 2nd language classroom interaction · Alexis Tabensky . . . 1426
106. Bodily interaction (of interpreters) in music performance · Richard Ashley . . . 1432
107. Gestures in the theater · Erika Fischer-Lichte . . . 1440
108. Contemporary classification systems · Ulrike Bohle . . . 1453
109. Co-speech gestures: Structures and functions · Fridanna Maricchiolo, Stefano De Dominicis, Uberta Ganucci Cancellieri, Angiola Di Conza, Augusto Gnisci and Marino Bonaiuto . . . 1461
110. Emblems or quotable gestures: Structures, categories, and functions · Lluís Payrató . . . 1474
111. Semantics and pragmatics of symbolic gestures · Isabella Poggi . . . 1481
112. Head shakes: Variation in form, function, and cultural distribution of a head movement related to “no” · Simon Harrison . . . 1496
113. Gestures in dictionaries: Physical contact gestures · Ulrike Lynn . . . 1502
114. Ring-gestures across cultures and times: Dimensions of variation · Cornelia Müller . . . 1511
115. Gesture and taboo: A cross-cultural perspective · Heather Brookes . . . 1523

VIII. Gesture and language

116. Pragmatic gestures · Lluís Payrató and Sedinha Teßendorf . . . 1531
117. Pragmatic and metaphoric – combining functional with cognitive approaches in the analysis of the “brushing aside gesture” · Sedinha Teßendorf . . . 1540
118. Recurrent gestures · Silva H. Ladewig . . . 1558
119. A repertoire of German recurrent gestures with pragmatic functions · Jana Bressem and Cornelia Müller . . . 1575
120. The family of Away gestures: Negation, refusal, and negative assessment · Jana Bressem and Cornelia Müller . . . 1592
121. The cyclic gesture · Silva H. Ladewig . . . 1605
122. Kinesthemes: Morphological complexity in co-speech gestures · Ellen Fricke . . . 1618
123. Gesture families and gestural fields · Ellen Fricke, Jana Bressem and Cornelia Müller . . . 1630
124. Repetitions in gesture · Jana Bressem . . . 1641
125. Syntactic complexity in co-speech gestures: Constituency and recursion · Ellen Fricke . . . 1650
126. Creating multimodal utterances: The linear integration of gestures into speech · Silva H. Ladewig . . . 1662
127. Gestures and location in English · Mark Tutton . . . 1677
128. Gestural modes of representation as techniques of depiction · Cornelia Müller . . . 1687
129. Levels of abstraction · Ulrike Lynn . . . 1702
130. Gestures and iconicity · Irene Mittelberg . . . 1712
131. Iconic and representational gestures · Irene Mittelberg and Vito Evola . . . 1732
132. Gestures and metonymy · Irene Mittelberg and Linda Waugh . . . 1747
133. Ways of viewing metaphor in gesture · Alan Cienki and Cornelia Müller . . . 1766
134. The conceptualization of time in gesture · Kensy Cooperrider, Rafael Núñez and Eve Sweetser . . . 1781
135. Between reference and meaning: Object-related and interpretant-related gestures in face-to-face interaction · Ellen Fricke . . . 1788
136. Deixis, gesture, and embodiment from a linguistic point of view · Ellen Fricke . . . 1803
137. Pointing by hand: Types of reference and their influence on gestural form · Ewa Jarmolowicz-Nowikow . . . 1824

IX. Embodiment - The body and its role for cognition, emotion, and communication

138. Gestures and cognitive development · Martha W. Alibali . . . 1833
139. Embodied cognition and word acquisition: The challenge of abstract words · Anna M. Borghi . . . 1841
140. The blossoming of children’s multimodal skills from 1 to 4 years old · Aliyah Morgenstern . . . 1848
141. Gestures before language: The use of baby signs · Lena Hotze . . . 1857
142. Gestures and second language acquisition · Marianne Gullberg . . . 1868
143. Further changes in L2 Thinking for Speaking? · Gale A. Stam . . . 1875
144. Gesture and the neuropsychology of language · Pierre Feyereisen . . . 1886
145. Gestures in aphasia · Pierre Feyereisen . . . 1898
146. Body movements and mental illness: Alterations of movement behavior associated with eating disorders, schizophrenia, and depression · Hedda Lausberg . . . 1905
147. Bodily communication and deception · Siegfried L. Sporer . . . 1913
148. Multi-modal discourse comprehension · Seana Coulson and Ying Choon Wu . . . 1922
149. Cognitive operations that take place in the Perception-Action Loop · Stephanie Huette and Michael Spivey . . . 1929
150. Gesture and working memory · Susan Wagner Cook . . . 1936
151. Body movements in robotics · Ipke Wachsmuth and Maha Salem . . . 1943
152. Gestures, postures, gaze, and movements in computer science: Embodied agents · Stefan Kopp . . . 1948
153. The psychology of gestures and gesture-like movements in non-human primates · Katja Liebal . . . 1955
154. An evolutionary perspective on facial behavior · Marc Mehu . . . 1962
155. On the consequences of living without facial expression · Kathleen Rives Bogart, Jonathan Cole, and Wolfgang Briegel . . . 1969
156. Multimodal forms of expressing emotions: The case of interjections · Ulrike Stange and Damaris Nübling . . . 1982
157. Some issues in the semiotics of gesture: The perspective of comparative semiotics · Göran Sonesson . . . 1989
158. Embodied meaning, inside and out: The coupling of gesture and mental simulation · Tyler Marghetis and Benjamin K. Bergen . . . 2000
159. Embodied and distributed contexts of collaborative remembering · Lucas M. Bietti . . . 2008
160. Living bodies: Co-enacting experience · Elena Clare Cuffari and Thomas Wiben Jensen . . . 2016
161. Aproprioception, gesture, and cognitive being · Liesbet Quaeghebeur, Susan Duncan, Shaun Gallagher, Jonathan Cole and David McNeill . . . 2026
162. Embodying audio-visual media: Concepts and transdisciplinary perspectives · Jan-Hendrik Bakels . . . 2048
163. Cinematic communication and embodiment · Christina Schmitt and Sarah Greifenstein . . . 2061
164. The discovery of the acting body · Sarah Greifenstein and Hermann Kappelhoff . . . 2070
165. Expressive movements in audio-visual media: Modulating affective experience · Thomas Scherer, Sarah Greifenstein and Hermann Kappelhoff . . . 2081
166. Expressive movement and metaphoric meaning making in audio-visual media · Christina Schmitt, Sarah Greifenstein and Hermann Kappelhoff . . . 2092
167. Gesture as interactive expressive movement: Inter-affectivity in face-to-face communication · Dorothea Horst, Franziska Boll, Christina Schmitt and Cornelia Müller . . . 2112

X. Sign language - Visible body movements as language

168. Linguistic structures in a manual modality: Phonology and morphology in sign languages · Onno Crasborn . . . 2127
169. The grammaticalization of gestures in sign languages · Esther van Loon, Roland Pfau and Markus Steinbach . . . 2133
170. Nonmanual gestures in sign languages · Annika Herrmann and Nina-Kristin Pendzich . . . 2149
171. Enactment as a (signed) language communicative strategy · David Quinto-Pozos . . . 2163
172. Gestures in sign-language · Sherman Wilcox . . . 2170

Volume 1

Introduction · Cornelia Müller . . . 1

I. How the body relates to language and communication: Outlining the subject matter

1. Exploring the utterance roles of visible bodily action: A personal account · Adam Kendon . . . 7
2. Gesture as a window onto mind and brain, and the relationship to linguistic relativity and ontogenesis · David McNeill . . . 28
3. Gestures and speech from a linguistic perspective: A new field and its history · Cornelia Müller, Silva H. Ladewig and Jana Bressem . . . 55
4. Emblems, quotable gestures, or conventionalized body movements · Sedinha Teßendorf . . . 82
5. Framing, grounding, and coordinating conversational interaction: Posture, gaze, facial expression, and movement in space · Mardi Kidwell . . . 100
6. Homesign: When gesture is called upon to be language · Susan Goldin-Meadow . . . 113
7. Speech, sign, and gesture · Sherman Wilcox . . . 125

II. Perspectives from different disciplines

8. The growth point hypothesis of language and gesture as a dynamic and integrated system · David McNeill . . . 135
9. Psycholinguistics of speech and gesture: Production, comprehension, architecture · Pierre Feyereisen . . . 156
10. Neuropsychology of gesture production · Hedda Lausberg . . . 168
11. Cognitive Linguistics: Spoken language and gesture as expressions of conceptualization · Alan Cienki . . . 182
12. Gestures as a medium of expression: The linguistic potential of gestures · Cornelia Müller . . . 202
13. Conversation analysis: Talk and bodily resources for the organization of social interaction · Lorenza Mondada . . . 218
14. Ethnography: Body, communication, and cultural practices · Christian Meyer . . . 227
15. Cognitive Anthropology: Distributed cognition and gesture · Robert F. Williams . . . 240
16. Social psychology: Body and language in social interaction · Marino Bonaiuto and Fridanna Maricchiolo . . . 258
17. Multimodal (inter)action analysis: An integrative methodology · Sigrid Norris . . . 275
18. Body gestures, manners, and postures in literature · Fernando Poyatos . . . 287

III. Historical dimensions

19. Prehistoric gestures: Evidence from artifacts and rock art · Paul Bouissac . . . 301
20. Indian traditions: A grammar of gestures in classical dance and dance theatre · Rajyashree Ramesh . . . 306
21. Jewish traditions: Active gestural practices in religious life · Roman Katsman . . . 320
22. The body in rhetorical delivery and in theater: An overview of classical works · Dorota Dutsch . . . 329
23. Medieval perspectives in Europe: Oral culture and bodily practices · Dmitri Zakharine . . . 343
24. Renaissance philosophy: Gesture as universal language · Jeffrey Wollock . . . 364
25. Enlightenment philosophy: Gestures, language, and the origin of human understanding · Mary M. Copple . . . 378
26. 20th century: Empirical research of body, language, and communication · Jana Bressem . . . 393
27. Language – gesture – code: Patterns of movement in artistic dance from the Baroque until today · Susanne Foellmer . . . 416
28. Communicating with dance: A historiography of aesthetic and anthropological reflections on the relation between dance, language, and representation · Yvonne Hardt . . . 427
29. Mimesis: The history of a notion · Gunter Gebauer and Christoph Wulf . . . 438

IV. Contemporary approaches

30. Mirror systems and the neurocognitive substrates of bodily communication and language · Michael A. Arbib . . . 451
31. Gesture as precursor to speech in evolution · Michael C. Corballis . . . 466
32. The co-evolution of gesture and speech, and downstream consequences · David McNeill . . . 480
33. Sensorimotor simulation in speaking, gesturing, and understanding · Marcus Perlman and Raymond W. Gibbs . . . 512
34. Levels of embodiment and communication · Jordan Zlatev . . . 533
35. Body and speech as expression of inner states · Eva Krumhuber, Susanne Kaiser, Arvid Kappas and Klaus R. Scherer . . . 551
36. Fused Bodies: On the interrelatedness of cognition and interaction · Anders R. Hougaard and Gitte Rasmussen . . . 564
37. Multimodal interaction · Lorenza Mondada . . . 577
38. Verbal, vocal, and visual practices in conversational interaction · Margret Selting . . . 589
39. The codes and functions of nonverbal communication · Judee K. Burgoon, Laura K. Guerrero and Cindy H. White . . . 609
40. Mind, hands, face, and body: A sketch of a goal and belief view of multimodal communication · Isabella Poggi . . . 627
41. Nonverbal communication in a functional pragmatic perspective · Konrad Ehlich . . . 648
42. Elements of meaning in gesture: The analogical links · Geneviève Calbris . . . 658
43. Praxeology of gesture · Jürgen Streeck . . . 678
44. A “Composite Utterances” approach to meaning · N. J. Enfield . . . 689
45. Towards a grammar of gestures: A form-based view · Cornelia Müller, Jana Bressem and Silva H. Ladewig . . . 707
46. Towards a unified grammar of gesture and speech: A multimodal approach · Ellen Fricke . . . 733
47. The exbodied mind: Cognitive-semiotic principles as motivating forces in gesture · Irene Mittelberg . . . 755
48. Articulation as gesture: Gesture and the nature of language · Sherman Wilcox . . . 785
49. How our gestures help us learn · Susan Goldin-Meadow . . . 792
50. Coverbal gestures: Between communication and speech production · Uri Hadar . . . 804
51. The social interactive nature of gestures: Theory, assumptions, methods, and findings · Jennifer Gerwing and Janet Bavelas . . . 821

V. Methods

52. Experimental methods in co-speech gesture research · Judith Holler . . . 837
53. Documentation of gestures with motion capture · Thies Pfeiffer . . . 857
54. Documentation of gestures with data gloves · Thies Pfeiffer . . . 868
55. Reliability and validity of coding systems for bodily forms of communication · Augusto Gnisci, Fridanna Maricchiolo and Marino Bonaiuto . . . 879
56. Sequential notation and analysis for bodily forms of communication · Augusto Gnisci, Roger Bakeman and Fridanna Maricchiolo . . . 892
57. Decoding bodily forms of communication · Fridanna Maricchiolo, Angiola Di Conza, Augusto Gnisci and Marino Bonaiuto . . . 904
58. Analysing facial expression using the facial action coding system (FACS) · Bridget M. Waller and Marcia Smith Pasqualini . . . 917
59. Coding psychopathology in movement behavior: The movement psychodiagnostic inventory · Martha Davis . . . 932
60. Laban based analysis and notation of body movement · Antja Kennedy . . . 941
61. Kestenberg movement analysis · Sabine C. Koch and K. Mark Sossin . . . 958
62. Doing fieldwork on the body, language, and communication · N. J. Enfield . . . 974
63. Video as a tool in the social sciences · Lorenza Mondada . . . 982
64. Approaching notation, coding, and analysis from a conversational analysis point of view · Ulrike Bohle . . . 992
65. Transcribing gesture with speech · Susan Duncan . . . 1007
66. Multimodal annotation tools · Susan Duncan, Katharina Rohlfing and Dan Loehr . . . 1015
67. NEUROGES – A coding system for the empirical analysis of hand movement behavior as a reflection of cognitive, emotional, and interactive processes · Hedda Lausberg . . . 1022
68. Transcription systems for gestures, speech, prosody, postures, and gaze · Jana Bressem . . . 1037
69. A linguistic perspective on the notation of gesture phases · Silva H. Ladewig and Jana Bressem . . . 1060
70. A linguistic perspective on the notation of form features in gestures · Jana Bressem . . . 1079
71. Linguistic Annotation System for Gestures (LASG) · Jana Bressem, Silva H. Ladewig and Cornelia Müller . . . 1098
72. Transcription systems for sign languages: A sketch of the different graphical representations of sign language and their characteristics · Brigitte Garcia and Marie-Anne Sallandre . . . 1125

Appendix

Organizations, links, reference publications, and periodicals . . . 2177

Indices

Authors Index . . . 2179
Subject Index . . . 2194

VI. Gestures across cultures

73. Gestures in South Africa

1. Overview
2. Quotable gestures/emblems
3. Variation in gestural behavior
4. Gesture and language typology
5. Cross-cultural variation and gestural pragmatics
6. Culture and gestural development
7. Conclusion
8. References

Abstract

Studies on gestures in South Africa contribute to research on quotable gestures/emblems, gestural variation, language typology and co-speech gesture, cross-cultural variation and gestural pragmatics, and the relationship between culture and the nature of co-speech gesturing and its development. Studies of the quotable gestural repertoire of urban Bantu language speakers examine the semantic and structural characteristics of these gestures as well as their communicative and social functions. Social meanings attached to gestures and gestural behavior influence variation in gestural behavior based on situational context, age, gender, and social identity. Inter-ethnic comparisons show that different cultural groups ascribe different meanings and pragmatic values to gesture use and other non-verbal behaviors. Language structure influences the types of co-speech gestures Zulu speakers employ. Cross-linguistic comparative work demonstrates that cultural values shape the nature of discourse genres, such as narratives, and consequently the kinds of co-speech gestures Zulu speakers use when narrating. These differences can account for why some features of co-speech gestures develop differently in Zulu-speaking children’s narratives.

1. Overview

Studies of gesture in South Africa have focused on the nature, function, and social meanings of gestures and gestural use among urban Bantu language speakers in Johannesburg townships (Brookes 2001, 2004, 2005, 2011), variation in gestural behavior (Brookes 2004, 2005; Kunene 2010), cross-cultural variation in gesture and gestural pragmatics (Kunene 2010; Ribbens 2007; Schutte 2001; Scott and Charteris 1986), the nature and development of co-speech gesturing among Zulu speaking children and adults (Kunene 2010), and the influence of culture on gestural development (Kunene 2010). Opondo (2006) refers to the prominent role of gesture in South African Zulu song and dance, and there is reference in one study to the use of gesture in traditional South Sotho children's games (Ntsihlele 2007). However, these two studies do not provide any systematic analysis of gesture use. Studies on gesture in South Africa contribute to five areas of gesture research: quotable gestures/emblems, variation in gestural behavior, language typology and gesture, cross-cultural variation and gestural pragmatics, and the impact of cultural norms on gesture and gestural development.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1147–1153


2. Quotable gestures/emblems

Urban Bantu language speakers in Johannesburg townships make use of a large repertoire of quotable gestures/emblems. These gestures have established forms and meanings, can occur independently of spoken language and are part of a recognized gestural vocabulary. A list of quotable gestures can be found in Brookes (2004) with photographs and descriptions of their forms and meanings with an analysis of their semantico-grammatical types and semantic domains. Township residents also make use of a kinesic code comprising a set of gestures specifically for hailing minibus taxis, a major form of transport in the region. Woolf (2010) has published drawings of the different taxi gestures and their corresponding destinations used in Johannesburg. Similarly to Poggi's (1983) and Payrató's (1993) analyses of Italian and Catalan quotable gestures, the South African repertoire of quotable gestures can be divided into two main semantico-grammatical categories, lexical gestures (equivalent to single words that can convey different communicative acts depending on context), and holophrastic gestures (complete communicative acts whose performative function does not vary) (Brookes 2004). A number of lexical gestures have related stabilized variations in the movement of the stroke, manner of performance, or the positioning/orientation of the hand that convey established communicative acts called derived holophrastic gestures (Poggi 1983). The semantic domains of lexical gestures include everyday objects and activities, e.g., gestures for telephone, lock, eat, and sleep. A proportion of lexical gestures represent common activities, objects, and topics of conversation among young men, such as gestures for drinking alcohol, gambling, soccer, clothing items, sex, and marijuana.
Similarly to other repertoires of quotable gestures (see Kendon 1981; Payrató 1993), most of the holophrastic gestures in the South African repertoire are gestures of interpersonal control: commands, apologies, refusals, insults, promises (42 percent), and evaluative comments about others (39 percent). Twelve percent are expressions of one's personal state and seven percent are gestures that comment about general states of affairs. There is also a small proportion of quotable gestures that function like lexical gestures in that they can convey different communicative acts, but unlike lexical gestures are not closely tied to a single meaning. Rather, this kind of gesture expresses a range of polysemous meanings related to a core underlying abstract concept or semantic theme. Kendon (2004) suggests that some of these types of gestures represent values that are of particular importance to the community in which they occur. Brookes (2001) analyzed the role of one of the most prominent of these concept gestures in the South African repertoire. The gesture involves the first and fourth fingers directed towards the eyes while the hand moves diagonally up and downwards across the face. Users commonly gloss this gesture as clever in the sense of being streetwise and city slick. It expresses a range of meanings all related to the underlying concept of seeing or perception, such as "You are streetwise", "Look out", "Be alert", and "I see you" (as a greeting). It also accompanies words that describe a person as quick thinking, witty, and entertaining, characteristics considered streetwise. The gesture functions as an act of approval and inclusion that expresses the core interactional function of distinguishing between insider and outsider status in black urban communities.
The clever gesture symbolizes this streetwise and city slick identity among urban black South Africans who wish to see themselves as part of a modern progressive urban African identity in contrast to the backward, primitive, and tribal African from the rural areas.

Longitudinal ethnographic work on South African quotable gestures shows that while some quotable gestures endure over long periods of time, others fall into disuse, change their forms, or are replaced as new gestures emerge to become part of the established gestural vocabulary (Brookes 2004, 2011). Quotable gestures that mark common speech acts such as greeting, negation, or agreement, that express key ideological concerns such as the clever gesture, or that express taboo topics are a consistent part of the gestural repertoire. However, in the case of some taboo topics, a new form may replace the old form of the gesture when it becomes too well known (Brookes 2004). Several lexical gestures representing objects and practices that are no longer popular have died out. In one case, a new quotable gesture emerged for the Human Immunodeficiency Virus (HIV) in response to the speech taboo against suggesting a person was sick or had died because of HIV. As the stigma decreased with the introduction of antiretroviral drugs that can prevent death, so has the use of the gesture (Brookes 2011). Township residents use quotable gestures as a substitute for speech when speech is impossible, over long distances, or in secret exchanges, or when speech is inappropriate, for example to avoid interrupting an ongoing interaction. Speakers also use quotable gestures in conjunction with speech for rhetorical effect. However, use of quotable gestures is most prominent in the communicative interactions of male youth in their late teens and twenties. Young men integrate quotable gestures into spontaneous co-speech gesturing. This gesturing occurs in conjunction with a male youth anti-language spoken on the township streets sometimes referred to as Tsotsitaal or Iscamtho (Brookes 2001).
Skillful use of speech and gesture in entertaining ways is part of linguistic performance that indexes a streetwise and city slick identity crucial for gaining and maintaining access, acceptance, and status within male social networks. An analysis of how male youth use quotable gestures in relation to speech shows that they function in similar ways to co-speech gestures (Brookes 2005). Quotable gestures not only depict aspects of the content of utterances, they are also integrally involved in marking discourse structure and regulating exchanges. Quotable gestures represent, modify, or add information to what is said in speech, and their manner of performance contributes to the illocutionary force of the message, indicates the type of speech act performed, and directs and regulates the interaction. Most importantly, they structure discourse into information units, with gesture phrases visually demarcating spoken units while the gesture stroke gives prominence to the information focus of the message. The intended textual meaning plays an important role in how speakers organize gestures in relation to spoken units of information. The greater the rhetorical effect, the better the linguistic performance, and gestures play a key role in these linguistic performances.

3. Variation in gestural behavior

Observations of gestural behavior among Bantu language speakers in black urban townships around Johannesburg show that the way in which speakers use gesture varies (Brookes 2001, 2004, 2005). Speakers modify the types of gestures they use, gestural frequency, and use of space based on interlocutor identity, age, gender, and social distance. Two thirds of the quotable gesture repertoire is used by the general population, while the other third is associated mainly with use by male youth, with only two percent of quotable gestures considered to be exclusively women's gestures (Brookes 2004). Gestural behavior indexes social distance and respect, the latter being a key value among black South Africans. Excessive gesturing is seen as disrespectful. Gestures become more frequent and prominent in informal contexts where participants are of equal status. Although gesture is most highly elaborated among male youth in peer group interactions, the way young men gesture among their peers is not appropriate in other contexts, nor is it appropriate for women to gesture in a similar way. Gestures and gestural style may be a key part of expressing a male streetwise and city slick township identity, but excessive use of gesture indicates disrespect and suggests delinquency or criminality in township society. Gestural styles also index different "sub-cultural" affiliations among male youth. The gendered nature of gestural variation has also been observed in work on the development of Zulu children's gestures (Kunene 2010). Boys of 11 and 12 years and adult males used a larger physical gestural space than females in the same age cohorts in elicited narratives and also employed quotable gestures, unlike their female counterparts (Kunene 2010). Kunene (2010) attributes these differences to cultural expectations of linguistic performance among male peers during adolescence and early adulthood.

4. Gesture and language typology

Kunene (2010) has provided the first account of how Bantu language structure shapes the nature of co-speech gesturing. She compared gestural development in the narratives of French and Zulu children and adults and found that the greater number of representational gestures among Zulu speakers when compared to French speakers was partly due to Zulu connectives having subject markers that track the referent. Cohesive devices such as "as" and "then" gave rise to representational gestures that tracked the referent among Zulu speakers. As Zulu is a pro-drop language, it does not require a lexical subject. Concords or agreement prefixes refer to the subject by indicating its subject class. Anaphoric reference can be ambiguous, for example when a speaker refers to an object and subject from the same class. The speaker may then disambiguate the message by using representational gestures to refer to the referents. Kunene (2010) also observed a developmental aspect to reference tracking with connectives. As Zulu speakers move towards the adult norm, they increasingly use class neutral connectors that require more representational gestures to disambiguate the referent. Kunene (2010) suggests that the move to class neutral markers would partly explain why Zulu speaking adults used more representational gestures than Zulu speaking children in her study.

5. Cross-cultural variation and gestural pragmatics

Scott and Charteris (1986) compared Southern Africans' interpretations of Morris et al.'s (1979) twenty emblems/gestures with those of Europeans. They sampled a hundred Southern African Caucasians comprising equal numbers of males and females and an unspecified number of Southern Africans of African descent in major cities in South Africa and Zimbabwe. The authors did not include the data from those participants of African descent due to methodological problems. However, they noted that Southern Africans of African descent did not recognize seven out of twenty emblems, and the meanings they provided were quite disparate from those of both Europeans and their Southern African Caucasian counterparts. Comparing Southern African Caucasians' knowledge of Morris et al.'s (1979) twenty gestures with the European responses, nine gestures elicited similar interpretations from both groups. Their results showed higher levels of recognition and semantic congruence for gestures that are established emblems in Britain, from where a large proportion of South African and Zimbabwean Caucasians originate. However, overall lack of congruence led them to conclude that gestures are mostly culture dependent and that Morris et al.'s (1979) gesture inventory was unsuitable for interethnic comparisons in Southern Africa. Ribbens (2007) and Schutte (2001) have carried out inter-ethnic comparisons of nonverbal behavior between English and Afrikaans speaking South Africans of European descent and Bantu language speakers. They found that the two groups ascribed different values to certain gestures and non-verbal behaviors. Ribbens (2007) and Schutte (2001) identified pointing and beckoning with the forefinger, a gesture commonly used by South Africans of European descent to tell a person to "come", as highly offensive to Bantu language speakers. Schutte (2001) describes the different meanings cultural groups attach to various handshake forms and their manner of performance. He also identifies the "purse hand" as appropriate for indicating the height of a person among Bantu language speakers, who regard the hand with palm down, often used by South Africans of European descent to describe a person's height, as offensive. The negative value ascribed to some gestural forms also extends to other non-verbal behaviors, such as substituting a smile for a spoken greeting, a common practice among English speaking South Africans but considered offensive by Bantu language speakers (Ribbens 2007), and appropriate eye contact or avoidance in interactions (Schutte 2001).
Several studies on speech acts in Zulu, such as requests and politeness markers (de Kadt 1992, 1994, 1995) and expressing gratitude (Wood 1992), in which gestures rather than speech express "please" and "thank you", suggest a more prominent role for gestures as politeness markers among Bantu language speakers. Ribbens (2007) points out that these different cultural patterns of coordination between speech and gesture can result in misinterpretation of speaker intention in intercultural communication.

6. Culture and gestural development

Kunene's (2010) comparative work on the development of discourse and co-speech gestures in French and Zulu children's narratives (five to twelve years) shows that co-speech gestures increase with age in both cases, and Zulu and French speaking children younger than ten years are at a similar level of multimodal development in relation to the adult norm (Kunene 2010). Types of gesture also change with age (Colletta, Pellenq, and Guidetti 2010). French and Zulu speaking children produce a higher proportion of nonrepresentational type gestures than French and Zulu speaking adults. Similarly to French children, Zulu speaking children had a higher proportion of integrating type gestures that add preciseness to their speech, while adults had more supplementary gestures that provide additional information (Kunene 2010). However, there were some developmental differences that could be attributed to cultural norms. Zulu speakers tended towards more detailed narrative accounts when telling a story and produced more narrative clauses than French speakers, who provided more summarized versions with more non-narrative clauses as they moved towards the adult norm. Zulu speakers produced more gestures with their narratives as well as more representational and supplementary gestures than their French counterparts, who had more discursive and framing gestures. Kunene attributes these differences to how speakers from different cultures perceive the nature of the storytelling task. Cultural expectations require that Zulu speakers provide detailed sequential accounts, while French speakers tend towards a more succinct overview of the main narrative events (Kunene 2010). Zulu adults also gestured significantly more than 5 to 6 year olds and 9 to 10 year olds, while French adults' gesture rate did not differ significantly from 9 to 10 year olds but only from 5 to 6 year olds. Kunene suggests that the nature of Zulu orature requires a level of co-speech gesturing that is not yet fully mastered by 10 or even 12 years of age.

7. Conclusion

These studies involving longitudinal ethnographic fieldwork, context-of-use studies, and comparative work provide insights into how sociocultural norms shape the nature of gestures, gestural behavior, and gestural development. South Africa provides a rich context in which to study gesture, with a diversity of languages and cultures for cross-cultural comparison along the lines of repertoires and form-meaning associations, the relationship between language typology and gesture, the relationship among culture, language, and cognition, and the pragmatics of communication in which gesture plays a key role. The social, political, and historical context in which ethnic groups were kept separate under apartheid and the process of reintegration in the post-apartheid era may be a suitable context in which to address questions that scholars of gesture have raised (Kita 2009). How does cultural contact influence gestures? How do gestures spread and how do their meanings change or adapt with cultural integration? How do cultural differences shape language, gesture, and cognition? How are gesture use and gestural pragmatics influenced by cultural ideas and values? Other than the studies discussed in this review, no systematic comparative studies have been undertaken among different South African ethnic groups to address these questions. Nevertheless, the studies reviewed make an important contribution to understanding the role of gesture as a communicative and social tool by placing gesture use in its sociocultural context.

Acknowledgements

This work is based on research supported by the National Research Foundation, South Africa, under Grants 77955 and 75318. Any opinions and conclusions are those of the author and not of the University of Cape Town or the National Research Foundation.

8. References

Brookes, Heather J. 2001. O clever 'He's streetwise.' When gestures become quotable: The case of the clever gesture. Gesture 1(2): 167–184.
Brookes, Heather J. 2004. A first repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Brookes, Heather J. 2005. What gestures do: Some communicative functions of quotable gestures in conversations among black urban South Africans. Journal of Pragmatics 37(12): 2044–2085.
Brookes, Heather J. 2011. Amangama Amathathu 'Three Letters': The emergence of a quotable gesture. Gesture 11(2): 194–218.


Colletta, Jean-Marc, Catherine Pellenq and Michelle Guidetti 2010. Age-related changes in co-speech gesture and narrative: Evidence from French children and adults. Speech Communication 52(6): 565–576.
De Kadt, Elizabeth 1992. Requests as speech acts in Zulu. South African Journal of African Languages 12(3): 101–106.
De Kadt, Elizabeth 1994. Towards a model for the study of politeness in Zulu. South African Journal of African Languages 14(3): 103–112.
De Kadt, Elizabeth 1995. I must be seated to talk to you: Taking non-verbal politeness strategies into account. Pragmatics and Language Learning 6: 143–153.
Kendon, Adam 1981. Geography of gesture. Semiotica 37(1/2): 129–163.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kita, Sotaro 2009. Cross-cultural variation of speech-accompanying gesture: A review. Language and Cognitive Processes 24(2): 145–167.
Kunene, Ramona 2010. A comparative study of the development of multimodal narratives in French and Zulu children and adults. Ph.D. dissertation, Department of Linguistics, University of Grenoble 3.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O'Shaughnessy 1979. Gestures: Their Origins and Distribution. A New Look at the Human Animal. London: Jonathan Cape.
Ntsihlele, Flora 2007. Games, gestures and learning in Basotho children's play songs. Ph.D. dissertation, University of South Africa.
Opondo, Patricia 2006. Song-Gesture-Dance: Redefined aesthetics in the performance continuum as South African women's indigenous groups explore new frontiers. Critical Arts 20(2): 61–74.
Payrató, Lluís 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20(3): 193–216.
Poggi, Isabella 1983. Le analogie tra gesti e interiezioni. Alcune osservazioni preliminari. In: Franca Orletti (ed.), Comunicare nella vita quotidiana, 117–133. Bologna: Il Mulino.
Ribbens, Rita 2007. Misinterpretation of speaker intent in a multilingual workforce. Communicare 6(2): 71–88.
Schutte, Paul 2001. Potensiële nieverbale spanning in die kommunikasie tussen Eurosentriese en Afrosentriese kulture – 'n perspektief op tweedetaalonderrig. Journal for Language Teaching 35(1): 12–24.
Scott, P. A. and J. Charteris 1986. Gesture identification: Southern African ratings with European responses. International Journal of Psychology 21(6): 753–768.
Wood, Marilyn 1992. Expressing gratitude in Zulu: A speech act study emphasising communication competence. In: Robert Herbert (ed.), Language and Society in Africa: The Theory and Practice of Sociolinguistics, 265–275. Johannesburg: Witwatersrand University Press.
Woolf, Susan 2010. Taxi Hand Signs. Johannesburg: Susan Woolf.

Heather Brookes, Cape Town (South Africa)


74. Gestures in the Sub-Saharan region

1. Introduction
2. Pointing
3. Repertoires of conventional and quotable gestures
4. Counting gestures
5. Gestures in oral narratives
6. Ideophones and gestures
7. Gestures in pre-colonial times: The trans-Atlantic slave trade and the diaspora
8. Conclusion
9. References

Abstract

Most of the studies on gesture in sub-Saharan Africa focus on documenting the forms and meanings of conventionalized gestures such as pointing, repertoires of quotable gestures, and counting gestures. An important aspect of these studies, particularly work on pointing, has been to highlight how cultural and interactive norms shape gestural behavior. The role of gestures in oral storytelling and other art forms has also been a particular area of interest in the African context. Work on oral narratives has also focused on the relationship between ideophones and gestures. Studies on gestures in the African diaspora give support to other work showing the persistence of gestures over time. Many of these studies on gesture in sub-Saharan Africa highlight the conscious and often explicit importance attached to gesture and bodily conduct in many African cultures.

1. Introduction

The first comprehensive overview of studies on gestures and gesture use in Sub-Saharan Africa was published in French in 1971 by Baduel-Mathon. In her overview, Baduel-Mathon provides a bibliography and description of gestures that occur in three West African language families: the Agni-Ashanti, the Manding, and the Yoruba. Since then, a number of studies have been published that also make an important contribution to knowledge about gestures and gestural behavior, particularly in relation to pointing, quotable gestures, oral narratives, and ideophones.

2. Pointing

There have been two substantial studies on pointing practices in Sub-Saharan Africa. Kita and Essegbey (2001) analyze pointing in Ghana and Orie (2009) examines pointing among speakers of Yoruba in Nigeria. Both studies demonstrate that pointing practices are shaped by socio-cultural factors. Kita and Essegbey (2001) and Orie (2009) describe how different functions and social values are ascribed to the left and right hands and to the use of both hands. These functions and values affect how speakers use them when gesturing and pointing. Many African cultures associate the left hand with negative values and actions, and therefore there is a taboo against using it for giving, receiving, or eating. Pointing with the left hand is taboo among Yoruba speakers in Nigeria, in Ghana, and in many cultures in sub-Saharan Africa (see Kita and Essegbey 2001; Orie 2009; Wilkins 2003). Pointing with the left hand is also taboo among the Igbo, the Iyala and the Hausa (Nigeria), the Gikuyu and the Luya (Kenya), and among the Chichewa (Malawi) (Orie 2009). Orie (2009) suggests index finger pointing is also socially circumscribed in many African cultures. In her study of the Yoruba, pointing with the whole hand is generally viewed as more polite. Orie (2009) points out that index finger pointing at people may be taboo under certain conditions because of cultural beliefs about its supernatural powers. Index finger pointing is subject to social restrictions in terms of social hierarchy relating to age and status, with status more important than age (Orie 2009). Orie (2009) documents a range of different forms of pointing among the Yoruba regulated by social and occupational status and context. The Yoruba also use five different lip points with gaze as a key component. Mouth pointing is also governed by social status and age. Nose pointing is derogatory, and head pointing also has restrictions in relation to status and age. Again, lip pointing, head pointing, eye pointing, and gaze are governed by socio-cultural constraints relating to age, status, and context. The only other study of head gestures in Africa that we have found is McClave's (2007) study of head movements among the Turkana, nomadic pastoralists in northwestern Kenya. She bases her analysis on data from films made in the 1970s by the anthropologists David and Judith MacDougall. McClave compares Turkana head movements (Turkana belongs to the Nilo-Saharan language family) with head movements among speakers of four different languages belonging to three unrelated language families.
The Turkana do not have head movements for “yes” and “no.” However, similarly to speakers of Egyptian Arabic, Bulgarian, Korean, and African-American English, the Turkana have the same head movements for indicating inclusivity (lateral head sweep), to mark individual items on a list, and head orientation to refer to and locate a non-present referent in space.

3. Repertoires of conventional and quotable gestures

A number of publications contain descriptions of conventionalized and quotable gestures belonging to different cultural groups. Some of these accounts go back to the nineteenth century, such as Sibree (1887), who lists some gestures used in Madagascar. Another early account of African gestures is given in Westermann (1907), who describes gestures of the Ewe in Ghana as well as other forms of non-verbal communication, for example, conventions for marking the way or relevant spots in an area. Glauning and Huber (1904) describe gestural conventions for greeting in East Africa. In his account of the Hausa people and culture, Tremearne (1913) gives a list of 30 Hausa gestures. After these studies, there seems to be a break in the study of gestures in the first half of the twentieth century. The publications of Baduel-Mathon (1969, 1971) signal a renewed interest in (quotable) gestures. One of the most important contributions to gesture studies in Africa is Creider's work. Creider (1977) published various gesture studies including repertoires of conventional gestures for four Kenyan languages. In addition, three publications appeared on Swahili gestures and one on gestures in Central Africa (Claessen 1985; Eastman and Omar 1985; Hochegger 1978). Creider (1977) documented a total of 72 quotable gestures in East Africa among the Luo, Kipsigis, and Samburu, who speak Nilotic languages, and the Gusii, who speak a Bantu language. The Luo, Kipsigis, and Gusii are geographically adjacent to one another. Sixty-eight percent of the quotable gestures identified were common to all four groups. He also compared their gestures to gesture vocabularies for North America and Colombia (Saitz and Cervenka 1972). The East African groups had 24 percent and 31 percent of gestures in common with the North American and Colombian repertoires. The gestures that were common are also found among many other groups, as they represent common human interactions, actions, or depictions of space and size. In addition to this work, Creider (1978) presents an analysis of the relation between intonation, tone, and gesture in Luo, followed by a cross-linguistic comparison of this relation in Luo, Kipsigis, and Gusii (1986). Creider shows that "there is a close relationship between the character of certain kinds of body movements and the intonational structure of a language" (1986: 148). He finds cross-linguistic differences in the alignment of body movements/gestures with speech depending on whether stress is used to mark pause groups, and whether stress is used to mark emphasis. Eastman and Omar (1985) describe verbal independent gestures that are used only without speech and verbal dependent gestures that are only used with speech among Kenyan coastal Swahili speakers. The latter they call verbal/visual or gestural/speech units because speech and gesture combine to create a specific meaning that separately would be meaningless. Verbal independent gestures range from gestures that can be glossed with a sentence to gestures that are purely exclamations and have no verbal gloss or equivalent. In 1983, Omar and Morell produced a video recording of Swahili gestures demonstrating their form and use. Claessen (1985) also gives an account of the gestures of native speakers of Swahili, including a set of body-based measure gestures, in which the arm is used as a measuring stick that is delimited by the other arm/hand.
A similar type of gesture is mentioned for the Luo in Kenya (Creider 1977). Hochegger (1978) gives a richly illustrated repertoire of conventional gestures in Central Africa but does not specify the language groups involved. Kirk and Burton (1976) take an experimental approach to conventional gestures, using a judged similarity test with speakers of Maasai and Kikuyu to determine whether speakers classify gestures according to their meaning or their form. Speakers were found to assess similarity among emblems according to their verbal glosses rather than formal features. More recently, Brookes (2001, 2004, 2005, 2011) has published several articles on South African quotable gestures and their communicative functions among urban Zulu and South Sotho speakers in Johannesburg townships.

4. Counting gestures Several studies have focused on the use of gestures for counting in various African cultures (e.g. Caprile 1995; Zaslavsky 1999). Gerdes and Cherinda (1993) describe counting among the Yao of Malawi and Mozambique, the Makonde of Mozambique, the Shambaa of Tanzania and Kenya, and the Sotho of Lesotho. Hollis (1909) cites a unique set of fourteen counting gestures among the Kenyan Nandi. Gulliver (1958) claims that speakers of Arusha Maasai use virtually the same counting gestures and so does Creider (1977) for speakers of Luo, Kipsigis, and Samburu. All languages in which this set of non-iconic counting gestures is found are part of the Nilotic language family. A more recent publication is Caprile (1995), who analyzes the relation between counting gestures and the spoken numeral systems in four Central Sudanic languages as spoken in Chad. Another recent publication is Siebicke (2002), who presents an analysis of counting,

74. Gestures in the Sub-Saharan region

including counting gestures, in Samo, a language of Burkina Faso. Number gestures seem to occupy a special position in oral narratives in Iraqw, a Cushitic language of Tanzania. In Iraqw stories, numbers are typically not spoken but conveyed by gesture. The audience then verbalizes the number, which in turn is confirmed by the storyteller (Maarten Mous, personal communication, August 30, 2013).

5. Gestures in oral narratives

There have been a number of studies of gestures and the use of the body in oral narratives (Calame-Griaule 1977; Klassen 1999; Konrad 1994; Kunene 2010; Sorin-Barreteau 1996). In Africa, oral storytelling is still a significant part of daily life, and informal storytelling is saturated with gestures (Klassen 1999). Klassen (1999) has studied hand gestures, body movements, and posture in Shona storytelling in Zimbabwe. She examines the semantic relation of gesture to speech and identifies four ways in which gestures in storytelling are imitative: gestures can reenact an action or diagram it; they can metaphorically illustrate an abstract concept; they can place a story and its components in the speaker's gestural space to represent various aspects of the story; and they can show direction, mood, pacing, and attitude, including the reaction of one character to another. Klassen (2004) also points out that the timing of gestures, corresponding with what the speaker wishes to emphasize, visually provides the shape of the story. Gestures increase as the story nears its climax and may even replace speech at this point (Eastman 1992; Klassen 1999). Gestures, and particularly bodily movements, represent character and map objects and actions, as well as making transparent the form and moral dimensions of a narrative. Klassen (1999, 2004) also points out how body posture cues the type of story being told, its believability, and the level of artistry of the storyteller. The storyteller's position, such as sitting, may be a metaphor for social relations. Body posture and movement have strong moral connotations. Changes in body position often mark the structure of the story, occurring when there is a change of scene or genre (e.g., from talking to singing) in the story (Klassen 1999).
Similar kinds of observations are made by Calame-Griaule (1977) in her analysis of gestures accompanying a Touareg story from Niger, as well as by Konrad (1994) in her analysis of gestures accompanying a trickster story in the Ewe language of Togo. Other work on storytelling is that of Sorin-Barreteau (1996), who documents 628 conventional gestures for actions in the Mofu-Gudur language of Cameroon, as used in storytelling. In addition to oral literature, gestures may also play a role in other art forms. Thus, Groß (1997) presents an analysis of gestures and body positions in the Adzogbo dance of the Ewe people in Ghana. Thompson, Nsondé, and Dianteill (2002) look in detail at conventional gestures and body postures of the Kongo culture in central Africa, as evident in (ceremonial) face-to-face interactions, dance, martial arts, and statues.

6. Ideophones and gestures

There have been several studies of ideophones and gestures (Dingemanse 2011; Klassen 1999; Kunene 1965). Gestures that accompany ideophones function differently from other gestures in storytelling (Klassen 1999). These gestures show the quality and length of the action and are essential to understanding the ideophone's precise meaning, because



these are usually idiomatic and only locally understood (Klassen 1999). Klassen also points out that ideophones for body movement have accompanying gestures that depict not only the movement but also the moral character of the story's character. Among the South Sotho, gestures co-occur with or substitute for ideophones and may even cause a new word to be coined (Kunene 1965). Dingemanse's (2011) extensive work on ideophones in Siwu, the language of the Mawu people in Ghana, found that previous claims that gestures almost always occur with ideophones are too strong. He argues that discourse type plays a role in the occurrence of gestures with ideophones: gestures are more likely to occur with ideophones in "telling". He also found that depictive gestures are more likely to occur with ideophones and to be synchronized with them. Dingemanse (2011) suggests that the "tight coupling" of depictive gestures with ideophones is due to both being holistic depictions and two components of a single performative act.

7. Gestures in pre-colonial times: The trans-Atlantic slave trade and the diaspora

Extensive descriptions of conventional gestures do not seem to be found in publications prior to the nineteenth century. However, the use of gestures in communication in early contacts between Europeans and Africans has been mentioned in various earlier sources. Fayer (2003) reconstructs linguistic practices, including the use of gestures, in the Atlantic slave trade, based on descriptions of "sign language" in the journals of explorers, traders, travelers, missionaries, and plantation owners. The accounts of gesture use cited in this article go back as far as the fifteenth century. Fayer concludes, however, that reliance on African interpreters largely outweighed the systematic use of gestural communication for bridging the linguistic gap between the various parties. What has become clear, however, is that African gestures have been retained and transmitted by Africans crossing the Atlantic, as is evident in analyses of gestures in African diaspora communities. There are a number of studies describing gestures, body postures, and stance taking in the African diaspora. Several focus on the use and function of non-verbal communication in marking identities and framing conversations, e.g., Cooke (1972), Goodwin and Alim (2010), and Kochman (1972). Some of these studies focus on the African origin of non-verbal behavior in African diaspora communities. Well-known examples of gestures found in various communities, both in Africa and in Guyana and the West Indies, are the "cut-eye" and "suck-teeth" gestures, as described by Rickford and Rickford (1976). A detailed analysis of the use of the suck-teeth gesture in Guyana is presented in Patrick and Figueroa (2002).
The study by Thompson, Nsondé, and Dianteill (2002) mentioned above on gesture and posture in the Kongo culture actually aims at identifying similarities between the Kongo culture and African diaspora cultures in South America.

8. Conclusion

Although we have tried to present a comprehensive overview of publications in this area, we may have omitted studies published in languages other than English and French. Nevertheless, this review shows that studying gestures in Africa yields insights into the social, linguistic, and cognitive aspects of human gestural behavior. These studies highlight the conscious and often explicit importance attached

to gesture and bodily conduct in many African cultures. Eastman and Omar (1985), Creider (1977, 1978, 1986), and Orie (2009) provide important descriptions of how people in many different African cultures have indigenous terminology for talking about gestural and other forms of non-verbal behavior. Olofson (1974) provides a detailed description of Nigerian Hausa terminology for facial expressions, gaze, and hand gestures, based on theatrical stage directions as well as interviews. However, there is a dearth of studies on gestures and gestural behavior in Africa, and much more needs to be done.

Acknowledgements

This work is based on research supported by the National Research Foundation, South Africa, under Grants 77955 and 75318. Any opinions and conclusions are those of the authors and not of the University of Cape Town or the National Research Foundation.

9. References

Baduel-Mathon, Céline 1969. Pour une sémiologie du geste en Afrique Occidentale. Semiotica 3(3): 245–255.
Baduel-Mathon, Céline 1971. Le langage gestuel en Afrique occidentale: Recherches bibliographiques. Journal de la Société des Africanistes 41(2): 203–249.
Brookes, Heather J. 2001. O clever 'He's streetwise'. When gestures become quotable: The case of the clever gesture. Gesture 1(2): 167–184.
Brookes, Heather J. 2004. A first repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Brookes, Heather J. 2005. What gestures do: Some communicative functions of quotable gestures in conversations among black urban South Africans. Journal of Pragmatics 37: 2044–2085.
Brookes, Heather J. 2011. Amangama Amathathu 'Three Letters': The emergence of a quotable gesture. Gesture 11(2): 194–218.
Calame-Griaule, Geneviève 1977. Pour une étude des gestes narratifs. In: Geneviève Calame-Griaule (ed.), Langage et Cultures Africaines: Essais d'Ethnolinguistique, 303–358. Paris: Maspero.
Caprile, Jean-Pierre 1995. Morphogenèse numérale et techniques du corps: des gestes et des nombres en Afrique Centrale. Intellectica 1(20): 83–109.
Claessen, A. 1985. Investigation into the patterns of non-verbal communication behaviour related to conversational interaction between mother tongue speakers of Swahili. In: Joan Maw and David Parkin (eds.), Swahili Language and Society, 159–193. Vienna: Afro-Pub.
Cooke, Benjamin G. 1972. Nonverbal communication among Afro-Americans: An initial classification. In: Thomas Kochman (ed.), Rappin' and Stylin' Out: Communication in Urban Black America, 32–64. Chicago: University of Illinois Press.
Creider, Chet A. 1977. Towards a description of East African gestures. Sign Language Studies 14: 1–20.
Creider, Chet A. 1978. Intonation, tone groups and body motion in Luo conversation. Anthropological Linguistics 20(7): 327–339.
Creider, Chet A. 1986. Interlanguage comparisons in the study of the interactional use of gesture: Progress and prospects. Semiotica 62(1–2): 147–164.
Dingemanse, Mark 2011. The meaning and use of ideophones in Siwu. Ph.D. dissertation, Radboud University, Nijmegen.
Eastman, Carol M. 1992. Swahili interjections: Blurring language-use/gesture-use boundaries. Journal of Pragmatics 18(2–3): 273–287.
Eastman, Carol M. and Yahya Ali Omar 1985. Swahili gestures: Comments (vielezi) and exclamations (viingizi). Bulletin of the School of Oriental and African Studies, University of London 48(2): 321–332.



Fayer, Joan M. 2003. African interpreters in the Atlantic slave trade. Anthropological Linguistics 45(3): 281–295.
Gerdes, Paulus and Marcos Cherinda 1993. Words, gestures and symbols. UNESCO Courier 46(11): 37–40.
Glauning, Friedrich von and Max Huber 1904. Forms of salutation amongst natives of East Africa. Journal of the Royal African Society 3(11): 288–299.
Goodwin, Marjorie Harness and H. Samy Alim 2010. "Whatever (neck roll, eye roll, teeth suck)": The situated coproduction of social categories and identities through stancetaking and transmodal stylization. Journal of Linguistic Anthropology 20(1): 179–194.
Groß, Ulrike 1997. Analyse und Deskription textueller Gestik im Adzogbo (Ewe) unter Berücksichtigung kommunikationstheoretischer Aspekte. Ph.D. dissertation, University of Köln.
Gulliver, Philip 1958. Counting with the fingers by two East African tribes. Tanganyika Notes and Records 51: 259–262.
Hochegger, Hermann 1978. Le Langage Gestuel en Afrique Centrale. Bandundu: Publications CEEBA.
Hollis, Alfred C. 1909. The Nandi: Their Language and Folk-Lore. Oxford: Clarendon Press.
Kirk, Lorraine and Michael Burton 1976. Physical versus semantic classification of nonverbal forms: A cross-cultural experiment. Semiotica 17(4): 315–338.
Kita, Sotaro and James Essegbey 2001. Pointing left in Ghana: How a taboo on the use of the left hand influences gestural practice. Gesture 1(1): 73–95.
Klassen, Doreen H. 1999. "You can't have silence with your palms up": Ideophones, gesture, and iconicity in Zimbabwean Shona women's ngano (storysong) performance. Ph.D. dissertation, Indiana University.
Klassen, Doreen H. 2004. Gestures in African oral narrative. In: Philip M. Peek and Kwesi Yankah (eds.), African Folklore: An Encyclopedia, 298–303. New York/London: Routledge.
Kochman, Thomas (ed.) 1972. Rappin' and Stylin' Out: Communication in Urban Black America. Chicago: University of Illinois Press.
Konrad, Zinta 1994. Ewe Comic Heroes: Trickster Tales in Togo. New York: Garland Publications.
Kunene, Daniel P. 1965. The ideophone in Southern Sotho. Journal of African Languages 4(1): 19–39.
Kunene, Ramona 2010. A comparative study of the development of multimodal narratives in French and Zulu children and adults. Ph.D. dissertation, University of Grenoble 3.
McClave, Evelyn 2007. Potential cognitive universals: Evidence from head movements in Turkana. In: Susan D. Duncan, Justine Cassell and Elena T. Levy (eds.), Gesture and the Dynamic Dimension of Language, 91–98. Amsterdam/Philadelphia: John Benjamins.
Olofson, Harold 1974. Hausa language about gestures. Anthropological Linguistics 16: 25–39.
Omar, Sheih Y. A. and Karen L. Morell 1983. Swahili Gestures: A Dual Language Production. Seattle, WA: University of Washington Instructional Media Services.
Orie, Olanike O. 2009. Pointing the Yoruba way. Gesture 9(2): 237–261.
Patrick, Peter L. and Esther Figueroa 2002. Kiss-teeth. American Speech 77(4): 383–397.
Rickford, John R. and Angela E. Rickford 1976. Cut-eye and suck-teeth: African words and gestures in New World guise. The Journal of American Folklore 89(353): 294–309.
Saitz, Robert L. and Edward J. Cervenka 1972. Handbook of Gestures: Colombia and the United States. The Hague: Mouton and Co.
Sibree, James 1884. Notes on relics of the sign and gesture language among the Malagasy. The Journal of the Anthropological Institute of Great Britain and Ireland 13: 174–183.
Siebicke, Larissa 2002. Die Samo in Burkina Faso: Zahlen und Zählen im San im Vergleich zu seinen Nachbarsprachen. Afrikanistische Arbeitspapiere (AAP) 69: 5–61.
Sorin-Barreteau, Liliane 1996. Le langage gestuel des Mofu-Gudur au Cameroun. Ph.D. dissertation, University of Paris V.
Thompson, Robert F., Jean de Dieu Nsondé and Erwan Dianteill 2002. Le Geste Kongo. Paris: Musée Dapper.
Tremearne, Arthur J. N. 1913. Hausa Superstitions and Customs: An Introduction to the Folk-lore and the Folk. London: Bale, Sons and Danielsson.



Westermann, Diedrich 1907. Zeichensprache des Ewevolkes in Deutsch-Togo. Mitteilungen des Seminars für Orientalische Sprachen 10(3): 1–14.
Wilkins, David 2003. Why pointing with the index finger is not a universal (in sociocultural and semiotic terms). In: Sotaro Kita (ed.), Pointing: Where Language, Culture and Cognition Meet, 171–216. Mahwah, NJ: Lawrence Erlbaum Associates.
Zaslavsky, Claudia 1999. Africa Counts: Number and Pattern in African Cultures. Chicago: Lawrence Hill Books.

Heather Brookes, Cape Town (South Africa) Victoria Nyst, Leiden (The Netherlands)

75. Gestures in West Africa: Left hand taboo in Ghana

1. Introduction
2. Data collection
3. Results
4. Implicational hierarchy and familiarity
5. Conclusion
6. References

Abstract

Several communities in Ghana, as in many other African countries, observe a restriction on left-hand use (henceforth "taboo"). This paper reports on two studies carried out on the left-hand taboo in Ghana. The first study was conducted with Sotaro Kita among the Anlo (Ewe) people in Keta in the Volta Region of Ghana. The second, more recent study was conducted among the Dwang people in Kwame Danso in the Brong Ahafo Region. The left-hand taboo occurs in three main domains, namely eating, giving and receiving, and pointing. These three domains form an implicational hierarchy such that people aware of the giving and receiving taboo necessarily know of the eating taboo, while those who know about the pointing taboo also know of the giving and receiving taboo. There are different ways to mitigate the negativity associated with left-hand use, especially as regards giving and receiving. Furthermore, the taboo itself has given rise to a number of pointing gestures such as semi-pointing and hyper-contra-lateral pointing (cf. Kita and Essegbey 2001).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1161–1169.

1. Introduction

Several years ago, when I was an undergraduate student at the University of Ghana, I was eating rice with a spoon held in my left hand when a friend (let's call him Sam) came by. Following customary practice, I invited him to come and eat the rice from the same bowl with me. Sam picked up a spoon and was about to commence eating when he noticed that I was holding mine in my left hand. He stopped and told me that he wouldn't be


able to eat the food if I ate with the spoon in my left hand. A cousin also recently narrated an experience he had: he was buying a piece of cloth at a stall in the market when someone stopped by to enquire about the price of one of the cloths on display. The stall owner got upset because the man had pointed out the cloth with his left hand. She complained until the man left without making a purchase. My cousin joked that the woman was willing to forgo a sale just because of her disapproval of the use of the left hand.

Ghana has restrictions on the use of the left hand which come in different forms. The most basic one involves a prohibition against eating with the left hand (henceforth "eating taboo"). Since we eat most meals with our fingers, a child is taught from a tender age to use only the right hand to eat. This means that even though I am left-handed, I can only eat with my right fingers, and, thus far, I haven't encountered any Ghanaian brought up in Ghana who eats with his or her left fingers. It is clear from my incident with Sam that attitudes differ when it comes to the use of cutlery; my parents did not try to get me to change that. The next restriction concerns giving and receiving things (henceforth "give/receive taboo"). In this case too, parents guide children right from infancy to use their right hand when they give or receive a gift. The image of a parent pushing away a child's extended left hand and pulling out the right hand to receive something is one that is known to every Ghanaian. A similar situation has been observed in Tanzania, where Brain (1977) reports a parent smacking the left hand of a child as young as 12 months old when he extended it to receive a banana. The final restriction concerns pointing with the left hand (henceforth "pointing taboo"), either for the purpose of indicating a directed path or the direction toward a location.
In fact most Ghanaian languages have proverbs that caution against violating the pointing taboo. The one in Dwang, a Guang language spoken in the Brong Ahafo region of Ghana, is ɔkó mabe me kebene sέre bétwɔɔre me Awú 'no one points to Awu with his/her left hand' (Awu is the town of the paramount chief). Yet, despite the fact that an equivalent of this proverb exists in almost every Ghanaian language, not everyone knows that the restriction on the use of the left hand extends to pointing. The three taboos therefore constitute an implicational hierarchy, as represented below, such that knowledge of the one on the right implicates knowledge of the one(s) on the left:

Eat → give/receive → point

There are a number of ways to mitigate a violation of the give/receive taboo. First of all, if it occurs because the violator's right hand is soiled, then he or she could use the left hand to give/receive while at the same time extending the soiled hand. Among the Ewe, this may be accompanied by the expression emia hee 'note the left', to which the response is asié 'it is a hand.' Some people use the expression alone, without extending the soiled hand. The Ga in the capital city Accra would say miiha bo abekum 'I am giving you the left', while the Akans, the largest linguistic community, say memma wo benkum 'I am not giving you the left'. These expressions show that the taboo is not restricted to one specific linguistic community.

The restrictions on left hand use are not peculiar to Ghana. They have been reported in several parts of Africa (Brain 1977; Orie 2009). Harris (1990: 197) also writes: "Most Western countries today hold liberal views about left hand uses but in each case, there was a time when left hand use was forbidden or strongly discouraged for certain acts." Not surprisingly, the restrictions have consequences for gesture in Ghanaian communities. In this paper, I report on the results of a study carried out with Kita in Keta, a

town in the Volta region of Ghana, and a smaller, more recent one carried out in Denu and Tegbi (both in the Volta region) and in Kwame Danso, a newly created district capital in the Brong Ahafo region. The Keta study sought to find out people's knowledge of the restrictions and how that affected their pointing gestures. The discussion of this part of the study draws heavily on Kita and Essegbey (2001). The more recent study sought to establish how people deal with the dimensions of the taboo and what influences its violation. The rest of the paper is organized as follows: in section 2 I discuss data collection; section 3 looks at the results; section 4 discusses the impact on gesturing of differing awareness of the taboos and of familiarity; and section 5 concludes the paper.

2. Data collection

2.1. Pointing study

As stated in the introduction, the Keta study sought to establish how the left hand taboo affects pointing gestures and, secondly, the contexts in which Ghanaians produce taboo-defying gestures. To accomplish this, we had an interviewer stand at a T-junction in Keta, facing the direction of oncoming traffic. There he stopped people coming towards him to ask them for directions to Rose Pavilion and Chief Amegashie's palace. The former was a few blocks away to his right (but to the left of anyone facing him), while the latter was located to his left. After getting the directions, he would then interview the direction-givers about their knowledge of and views on the left hand taboo. The interviews were surreptitiously recorded from a distance. After the interview, participants were debriefed and consent was obtained to use the material. In all, 28 participants were interviewed, of whom two declined permission. We therefore used the interviews and directions of the remaining 26.

2.2. Hand-restriction study

The more recent study aimed at testing the effect of restricting the right hand on the give/receive and pointing taboos. I told interviewees that they were expected to identify species of fish in a book and give directions to specific locations. Unlike in the Keta study, participants undertook the task while they were eating, which also meant that they were seated. In all, we interviewed 11 people spread over three locations: 3 in Denu, 2 in Tegbi, and 6 in Kwame Danso. Those from Denu and Tegbi, like their counterparts in the Keta study, were native speakers of Ewe, a Kwa language, while those in Kwame Danso were native speakers of Dwang, a Guang language also of the Kwa family. In addition, they were all bilingual in Akan, the dominant regional language. The interviewees in Denu were a mother, daughter, and son, and they performed the task while eating together (see Fig. 75.6). I handed a booklet to the daughter, the elder of the two children, and asked her to provide the Ewe names of species of fish found on a page in it. I then asked her to pass the booklet on to her brother, who was in turn asked to pass it on to his mother. Note that because the book was given to them while they were eating, they were confronted with the prospect of simply collecting it with their left hand and thereby breaking the taboo, using their right hand and thereby soiling the book, or collecting it with the left hand while accompanying it with the right. After they had attempted to name the fishes, I asked them in turn to tell me the location of two places not far from where we



were. Finally, if they used their left hand at any time during the interview, I asked them why; if they did not, I asked them why not. The Tegbi scenario was similar, with the differences that the two people were interviewed separately and that I had an elderly woman hand the booklet to them, although I did the questioning. In the Kwame Danso case, the participants were questioned by an interviewer who was also a native speaker of Dwang. Note that, unlike in the Keta study, these participants knew from the outset that they were being filmed, and they knew the person who interviewed them.

3. Results

The results are divided into three sub-sections. In the first I simply look at the number of people who used the left hand and those who did not. I then turn to strategies for avoiding the use of the left hand. Finally, I look at the types of left-hand gestures made by those who did use it.

3.1. Tally

Of the 26 people whose directions and interviews we used for the Keta study, 16 reported that they knew about the left hand taboo while 10 were not particularly aware of it. As the implicational hierarchy above shows, not everyone is aware that the restriction extends to pointing. Not surprisingly, 9 out of the 10 people who were not aware that the restriction extended to pointing used the left hand at least once in their pointing gestures; only one of them did not. In contrast, 5 of the people who were aware of the restriction did not use the left hand. That still leaves a whopping 11 who did. For the recent study, 5 people collected the book with their left hand only, 5 extended the right hand simultaneously, and 1 person actually held it with both hands, thereby soiling it. The same number of people (i.e., 5) used their left hand to return it, although they were not the same people, and the one person who collected it with both hands returned it in the same manner. Regarding pointing, 10 out of the 11 people made a left hand pointing gesture at least once.

3.2. Avoiding left-hand use

Although very few people avoided the left hand entirely, there were strategies for minimizing its use. One involved immobilizing the left hand, while the other involved contra-lateral and hyper-contra-lateral pointing. In Kita and Essegbey (2001) we reported that interviewees assumed a "respect posture", which involves placing both hands on the buttocks with the palms facing outwards. Since the people needed to give directions with a hand, they kept only the left hand at the back and used the right. This is illustrated by Fig. 5 of Kita and Essegbey (2001), which I reproduce below as Fig. 75.1a. Note that the man giving directions in Fig. 75.1a has the left hand firmly placed on his buttocks with the palm facing outwards. For those in a seated position, immobilization involves placing the elbow firmly on the thigh or on the arm of a chair, if it has one. This limits the mobility of the left hand and forces them to use only the right hand, as illustrated by Madame B in Fig. 75.1b, who was one of the Kwame Danso interviewees. Interviewees also engaged in a lot of contra-lateral pointing in order to avoid using the left hand, and a few engaged in "hyper-contra-lateral" pointing whereby they had


Fig. 75.1a: Respect position and hyper-contra-lateral pointing

Fig. 75.1b: Immobilized hand and hyper-contra-lateral pointing

to strain to get their right hand to point to something behind them to the left. Note that in Fig. 75.1a and 75.1b, the interviewees have their right arms practically wrapped around their necks in order to point with the right hand to a location on the left.

3.3. The different types of left hand use

As indicated in the section on the tally, interviewees did use the left hand in a number of situations. These are "semi-pointing", the bi-manual strategy, and what I characterize as the "left-right asymmetry". They are discussed in turn.

3.3.1. Semi-pointing

In Kita and Essegbey (2001: 78), semi-pointing is described thus:

Semi-pointing is performed only with the left hand, and has the following formal characteristics. [It] is performed below the waist, usually with a fully-stretched arm. In some cases all fingers are extended, and in other cases only the index finger is extended. It makes a small movement to the left or to the left-front to indicate a direction away from the body. The right hand is also simultaneously but separately pointing to the left, and it is either in its preparation phase or in its hold phase.

An illustration from Fig. 2 of the paper is provided below:

Fig. 75.2: Semi-pointing



The interviewee is pointing to Rose Pavilion which, if she faced the interviewer, would be to her left. She therefore turns slightly and uses her right hand to point out the location while at the same time making a smaller pointing gesture with her left index finger. In Kita and Essegbey (2001), we wondered whether the interviewer was aware of that gesture. Since then, I have asked several Ghanaians whether they consider the semi-pointing gesture to be pointing, and they replied in the negative. In fact, for a number of them, it was only when I actually drew their attention to it that they noticed that the person was doing something with her left finger as well. It is therefore safe to conclude, as we did tentatively in Kita and Essegbey (2001), that semi-pointing does not violate the left hand taboo. Because of their seated position, interviewees in the later study did not make semi-pointing gestures.

3.3.2. Bi-manual strategy

We noted in Kita and Essegbey (2001) that this is part of a general principle according to which the use of the left hand is not considered offensive when it is used together with the right. This can be when one is receiving an object or pointing, as illustrated in Fig. 75.3a and 75.3b:

Fig. 75.3a: Receiving booklet with both hands

Fig. 75.3b: Pointing with both hands

In Fig. 75.3a Mr D. collects the booklet with the left hand while supporting it with the right hand. In Fig. 75.3b, YB partially extends both arms in order to point to a location on his left.

3.3.3. Left-right asymmetry

Unlike semi-pointing gestures, some left-hand gestures are quite pronounced, yet they still occupy much less space than those made with the right hand. The three pointing gestures by Mr C. illustrate this. In Fig. 75.4a, Mr C. says mɔ́ febo kontragye ném faéyε a εbo kebená sé 'when you take the main road and go it is on the left.' He opens the left hand outwards at the mention of kebená sé (literally 'left top'). However, as the figure shows, the left arm is kept quite close to the body, thereby reducing the gesture space. Mr C. quickly retracts it, and as he continues ɔkpé ném ɔkyɔe kalé a, ɔbo kebená sé 'the road that branches like this, it is on the left,' he uses the right hand instead in an expansive contra-lateral pointing gesture (Fig. 75.4b).


Fig. 75.4a: Reduced left-hand use

Fig. 75.4b: Extended contra-lateral right-hand use

Fig. 75.4c: Extended right-hand use

Fig. 75.5a: Pistol gesture points away

Fig. 75.5b: Contra-lateral pointing in interviewer’s direction

Miss L's left-hand gesture is not reduced like Mr C's. Instead of extending her left index finger in the leftward direction (which is also the direction of the interviewer), she curls her wrist and points a pistol-shaped hand forward. By so doing she avoids pointing her extended left index finger at the interviewer. Note that when she makes a contra-lateral pointing gesture, she does point her right index finger in his direction.

4. Implicational hierarchy and familiarity

In this section I discuss the effect of the implicational hierarchy, as well as of familiarity between participants, on gesturing. Recall that the interviewees in Denu were a mother, daughter, and son. The daughter collected the booklet from me with her left hand accompanied by the right hand and did most of her pointing gestures with her right hand. However, she gave the booklet to her younger brother with her left hand and he, in turn, received it with his left hand. Fig. 75.6 shows that the handing over and receipt of the booklet between mother and son was also with the left hand. When I asked them at the end why this was the case, the daughter replied that it is not proper to collect things with the left hand. She said she gave it to her brother with her left hand because he is her brother and, besides, he is younger than she is. For her, therefore, the familiarity brought about by kinship, as well as the fact that her brother is younger, played a role in her violating the taboo. Her brother thought the restriction does not apply when one is eating. He did not know that one could mitigate the left hand taboo by accompanying the left hand with the right. Their mother admitted that she had not taught them. She said that growing up she respected the restrictions until she went to work as a teller at a bank, where they were trained to ignore them. This is because they constantly had to use both hands in interaction with customers, most often receiving or giving money with one hand while entering figures in a ledger with the other. Invariably, it was the left hand that they ended up using to hand money to customers.

Fig. 75.6: Giving booklet with, and receiving it with, the left hand only

Note that the implicational hierarchy proposed above means that some people may be particular about some taboos while completely oblivious to others. Mr D. in Tegbi studiously accompanied the left hand with the right when he wanted to collect the booklet (see Fig. 75.3a). However, when the time came for him to point out directions, he used only his left hand to do so. He later explained that he did not know that pointing with the left hand is a taboo. The final person who used the left hand out of ignorance was seven-year-old K. from Kwame Danso. K. had apparently been told about the give/receive taboo. However, he did not know about the pointing taboo. Neither did he know what to do about the give/receive taboo when the right hand is occupied. As a result he received the booklet with his left hand and used the same hand to make pointing gestures. This suggests that while parents teach children at an early age to give and receive things with the right hand, the intricacies of what to do when the right hand is occupied are not comprehended until later.

5. Conclusion

This paper has shown that while every Ghanaian knows that there are restrictions on the use of the left hand, knowledge of the restrictions differs from person to person and, to some extent, this determines where and when people use the left hand. For instance, those who know the give/receive taboo but not the pointing one will point with the left hand even while avoiding the use of the left hand to collect/receive a thing or, if they have to use it, accompanying it with the right hand. Even those who know all the restrictions still use the left hand. In many cases, this use is noticeably different from that of the right hand, as with semi-pointing, which is not noticed and therefore not considered a taboo violation. Other such uses are the reduced gesture space for the left hand and pistol-hand pointing. My study shows that some people, including those aware of the taboo, do break it. Some do it unconsciously and apologize when their attention is drawn to it. Others do so because of familiarity with their interlocutor, or because they do not attach much importance to the taboo. The position of the mother interviewed in Denu reflects the influence of Western traditions (i.e., banking) on changes in attitudes towards the taboo (cf. Payne 1981). Her daughter also remarked that they see on TV that “people abroad” use their left hand all the time. She said that therefore in school they do not pay particular attention to the taboo except when they are with a particularly strict teacher.


6. References

Brain, James L. 1977. Handedness in Tanzania: The physiological aspect. Anthropos 72: 108–192.
Harris, Lauren J. 1990. Cultural influences on handedness: Historical and contemporary theory and evidence. In: Stanley Coren (ed.), Left Handedness: Behavioral Implications and Anomalies, 195–258. Amsterdam: North-Holland.
Kita, Sotaro and James Essegbey 2001. Pointing left in Ghana: How a taboo on the use of the left hand influences gestural practice. Gesture 1: 73–94.
Orie, Olanike O. 2009. Pointing the Yoruba way. Gesture 9(2): 237–261.
Payne, Monica A. 1981. Incidence of left handedness for writing: A study of Nigerian primary schoolchildren. Journal of Cross-Cultural Psychology 12(2): 233–239.

James Essegbey, Gainesville (USA)

76. Gestures in West Africa: Wolof

1. Introduction
2. Gesture and touch in the organization of Wolof conversations
3. Conclusion
4. References

Abstract

This chapter focuses on the role of contact gestures and touch in the organization of conversations among the Wolof of Northwestern Senegal. As an example, it analyzes a short interactional episode and pays special attention to the role of these gestures in turn-taking, turn-allocation, and attention management. It will become apparent in the analysis that, in general, constant bodily contact is common in Wolof conversations. Conversational functions that are fulfilled in Western conversations by gaze (selection of addressee, signaling of listenership) are taken over by touch and contact gestures among the Wolof. Moreover, gestures are often combined with touch or even performed in tactile ways, so that the body of the interlocutor is used as a resource. In the course of an interaction, a gesture can be transformed into touch and vice versa. The hands employed in conversation offer a constant resource for different interactional moves. The Wolof co-interactants, as it appears, use their senses (as semiotic resources) in a different way than assumed by canonical Conversation Analysis and Goffmanian interactionist sociology.

1. Introduction

In this chapter, I focus on the role of gesture and touch in conversational organization among the Wolof of Northwestern Senegal. Analyzing a short episode of interaction as an illustration, I will pay particular attention to turn-taking, turn-allocation, and the management of the participation framework. In the past, the type of gestures relevant for these activities has been called “pragmatic” by Kendon (2004) and Streeck (2009). According to Streeck (2009: 179), gestures are pragmatic “when they themselves enact a communicative function”, for example “when a raised hand, palm facing the interlocutor, admonishes him to wait his turn”. Streeck emphasizes, however, that “pragmatic gestures are an unruly bunch: speakers show all manner of idiosyncrasies in making them” (2009: 181). They are coupled with interaction units “such as turns, turn-construction units, speech acts, and speech act sequences” (Streeck 2009: 179). Kendon (2004: 159) adds that the interactive and interpersonal functions of pragmatic gestures equally include “indicating to whom a current utterance is addressed, to indicate that a current speaker, though not actually speaking, is nevertheless still claiming a role as speaker (still ‘holding the floor’)” and to “regulate turns at talk, as in raising a hand to request a turn, or pointing to someone to give them a turn”. Both authors give more examples that cannot be considered here. Streeck emphasizes the multifunctionality of the hand, which serves as a means not only for “action and expression”, but also for “cognition and knowledge acquisition” (Streeck 2009: 39). Although Streeck himself mostly refers to interaction with objects or the environmental situation, what he says is equally true for interactions and interpersonal communication:

The hands often participate in tactile (and haptic) and visual contexts at once, which provides for easily intelligible connections between the two sensory realms. Hand-gestures enable translations between the senses. Thus, in a powerful way, the dual nature of the hand is recruited for communicative purposes; tactile features of the world, presently available only to a single party, are visually broadcast to everyone present. (Streeck 2009: 70)

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1169–1175

Yet it is precisely the role of touch in conversational organization that has been widely ignored in the theoretical literature on interaction. The literature on touch appears to be highly biased, since it works on the premise that touch is the “most intimate way of communicating” (Jones 1994: 18). The only studies that accord touch the status of a “semiotic resource” (Goodwin 2000) are inquiries into medical interaction. For Heath (1986: 50), for example, medical interaction is particularly interesting because the patients are at the same time the objects of inquiry (tactile, among others) and the subjects of social interaction. Nishizaka (2007) explores the “multi-sensory accomplishment of reference” in the examination of pregnant women by midwives in Japan. With regard to cross-cultural differences in touching behavior, some scholars (cf. Watson 1970) distinguish between whole “contact” and “non-contact” cultures. Although touching has mostly been excluded from gesture studies, Efron ([1941] 1972: 120), in his inventory of Jewish gestures in New York City, conceives of “grasping of [the] wrist or of wearing apparel”, “shaking”, “poking”, and “pulling” as “[e]nergetic modi of physical persuasion”. An example is the Jewish “buttonholing”: the act of fumbling with the jacket of the interlocutor as an expression of affection (Efron 1972: 132, 135).

2. Gesture and touch in the organization of Wolof conversations

Some of the functions that Goodwin (1981) has assigned to gaze in American communication (such as speaker allocation at turn transition or the display of addresseehood and hearer roles) are taken over in Wolof conversations by gesture and touch. The first example shows a situation in which Maggat (MG) competes with Jajji (JJ) for Ba’s (BA) attention. He does so not only through vocal devices but also through gesture and touch. In the transcripts presented here, the line ‘To’ describes the touching behavior, ‘Gt’ provides a transcript of the other manual gestures, ‘Gz’ marks the direction of eye gaze, ‘St’ indicates the still image referred to, and ‘Tr’ gives an English gloss of the speech.

Fig. 76.1: “My Fulani”

In 01, Maggat (MG) tries to get Ba’s (BA) attention through the employment of several different resources. For one, he uses verbal summoning devices (“I tell you”, “you see”) and restarts in his utterance. Secondly, he touches, or better, grasps Ba’s (BA) right foot with his right hand. Thirdly, he addresses Ba (BA) with gaze and gesture. One can see quite well that he first addresses him with gaze, then withdraws his gaze and makes an “open hand horizontal palm down” gesture. The gesture is executed with a slow horizontal movement, as if to calm down or softly stop its addressee. While Maggat (MG) gazes at Ba (BA) during its preparation, he withdraws his gaze during the proper performance of the gesture. In 04, Maggat (MG) again tries to acquire Ba’s (BA) attention by grasping his foot and then by pointing at him with a one-beat hand gesture. Since in 05 Jajji (JJ) makes a concurrent utterance, Maggat (MG), in 06, again seeks Ba’s (BA) attention using verbal summons, another pointing beat, and touch. In this moment, he not only grasps Ba’s (BA) foot but also shakes it as an augmentation of the attention request, maybe in an urgent reaction to Jajji’s (JJ) utterance in 05.

Fig. 76.2: “My Fulani”

Fig. 76.3: “My Fulani”

Fig. 76.4: “My Fulani”

Shortly afterwards, Maggat (MG) succeeds in breaking up Ba (BA) and Jajji’s (JJ) renewed dyadic interaction by pulling Jajji’s (JJ) hand out of Ba’s (BA) in 51. Jajji’s (JJ) hand had rested in Ba’s (BA) for 20 seconds. Maggat’s (MG) pulling gesture is supported by a vocal summons. In what follows, Maggat (MG) invokes a traditional dictum that he visualizes by performing a gesture using the fingers of Jajji’s (JJ) hand. In 52–54, in examples 76.2 and 76.3, he takes Jajji’s (JJ) hand as an object with which to perform a “counting gesture” (cf. Creider 1977: 6–8). In doing so, he gazes intently at the gesture performed with both his and Jajji’s (JJ) right hands, presumably in order to draw Jajji’s (JJ) attention to it. Accordingly, Jajji (JJ) also gazes at his hand resting in Maggat’s (MG). The practice of looking at one’s own gesture and thereby drawing the attention of the interlocutor towards it has already been described by Streeck (1993).


Fig. 76.5: “My Fulani”

Jajji (JJ) is virtually forced to direct his attention to Maggat (MG). He gazes at him and reacts with vocal hearer signals (continuers) and head nods. Maggat (MG) thus seems to have succeeded in virtually pulling Jajji (JJ) out of the dyad with Ba (BA) and into his own.

3. Conclusion

We have seen that in conversations among individuals of equal rank among the Wolof, gestures are sometimes combined with touch or even performed in tactile ways, using the co-interactant as co-performer and object. In the course of an interaction, a gesture can be transformed into touch and vice versa. The hands employed in conversation in particular offer a constant resource for different interactional moves. In general, constant bodily contact is common in Wolof conversations. However, since some semiotic resources (touch, gesture, vocal signals) substitute for others (gaze), the senses are by no means employed in one overall way, as Watson’s distinction between “contact” and “non-contact” cultures would suggest. Rather, Wolof conversations are evidence of a “contact culture” in some situations and of a “non-contact culture” in others. This entails that an interaction might be established and maintained by different resources, including mutual gaze, gesture, and bodily contact, according to the situation.


The Wolof co-interactants, as it appears, use their senses (as semiotic resources) in a different way than assumed by canonical Conversation Analysis and Goffmanian interactionist sociology.

4. References

Creider, Chet A. 1977. Towards a description of East African gestures. Sign Language Studies 14: 1–20.
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton. First published [1941].
Goodwin, Charles 1981. Conversational Organization: Interaction between Speakers and Hearers. New York: Academic Press.
Goodwin, Charles 2000. Action and embodiment within situated human interaction. Journal of Pragmatics 32: 1489–1522.
Heath, Christian 1986. Body Movement and Speech in Medical Interaction. Cambridge: Cambridge University Press.
Jones, Stanley E. 1994. The Right Touch: Understanding and Using the Language of Physical Contact. Cresskill: Hampton.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Nishizaka, Aug 2007. Hand touching hand: Referential practice at a Japanese midwife house. Human Studies 30(3): 199–217.
Streeck, Jürgen 1993. Gesture as communication I: Its coordination with gaze and speech. Communication Monographs 60(4): 275–299.
Streeck, Jürgen 2009. Gesturecraft: The Manu-facture of Meaning. Amsterdam: John Benjamins.
Watson, O. Michael 1970. Proxemic Behavior: A Cross-Cultural Study. The Hague: Mouton.

Christian Meyer, Bielefeld (Germany)

77. Gestures in South America: Spanish and Portuguese

1. Historical overview
2. Research
3. Studies on body language and gestures in some South American countries
4. References

Abstract

This article is an overview of research being done on body language and gestures in South America. There is a historical introduction to the development of these studies, mainly within the fields of communication and semiotics. The most significant institutions and literary productions are mentioned. Emphasis is given to countries with more relevant work. A specific but detailed bibliography is included.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1175–1181


1. Historical overview

Body language and gesture studies in South America have had as their main objectives the subjects of meaning-making and conceptualization, as well as inquiries aimed at overcoming the dichotomy between verbal and nonverbal communication, and between analogical and digital forms of communication in cultural relations. In the academic world, the study of signs began in the 1970s with linguistics. Saussure’s (1972) book was translated into both Spanish and Portuguese. Besides linguistics, literary signs were initially studied through structuralist and poststructuralist approaches. The theories of Barthes, Greimas, and Eco were used as academic models. Lotman’s model was embraced by cultural studies. From semiology to semiotics, there was a shift in the 1990s to Peircean semiotics. The books that influenced scholars on body language in the Spanish-speaking countries were authored by Kendon, Harris, and Key (1975), Laver and Hutcheson (1972), Prieto (1967), and Sarduy (1973). Weil and Tompakov’s book (13th edition, 1999) was translated into several languages and has been a best-seller since its appearance. Rector and Trinta (1986, 1990) contributed studies of gestures in Brazilian culture. From the 1970s onwards, books and articles have been published on nonverbal communication, influenced mainly by the models of Ekman and Friesen (1969), Bouissac (1973), and Desmond Morris (1979). In the 1980s, body language studies were active, but they were soon replaced by cultural studies at the beginning of the 1990s. Edward T. Hall (1977) gave way to Stuart Hall. At the end of the 20th century, McNeill’s (1992, 2000) model started being used for the interaction of gestures and speech. The aspects of gesture, body language, and kinesics that have been most developed are mainly facial signs, gaze, haptics (tactile communication), proxemics, and chronemics. However, nonverbal communication does not occupy a predominant space in South America. It has always been part of another discipline or field of knowledge.

The creation of the Latin American Federation of Semiotics (FELS) in 1987 united researchers in Latin America. The federation publishes the journal deSignis, of which two volumes were dedicated to gestures: deSignis 3 (2002), “Los Gestos, Sentidos y Prácticas”, and deSignis 14 (2009), “El Gusto Latino”. An extensive bibliography on different countries, with numerous articles on body language and gestures, can be found in Signa (Signa: Revista de la Asociación Española de Semiótica 7, 1998), entitled “Panorama de la semiótica en el ámbito hispánico”, with articles on Chile, Mexico, Puerto Rico, Venezuela, and Uruguay, and in a second volume (Signa 9, 2000) with articles on Colombia and Argentina. The International Conference on Gestures (Oporto, Portugal, 2000), which counted a large number of participants from South America, also gives insight into the interest in the field through its proceedings (Gestos: Uso e Significado, 2003, and Gestures: Meaning and Use, 2003).

Initially, the theoretical framework for studies on gestures was the lexical method, using segmentation of the flux of movements and following linguistic models. The entries combine representations and graphic description. Since the represented gestures can be rendered verbally, glossaries and dictionaries of gestures started being published. The evolution of the studies shows that the initial lexical units and syntagms developed from “word” to “discourse” in an interaction of actions and reactions whenever there is an emitter and a receptor, who have to adjust constantly in their nonverbal interaction.


2. Research

As to gestures, two tendencies can be seen in recent studies: an American one and a European one. There are the followers of McNeill and Kendon, and there are those with a cultural and social approach who follow the French Greimasian group led by Fontanille and Coquet, distinguishing a body of existence (the worldly body) and a body of experience (the body itself) from a third, socialized body (the body as a cultural configuration). The followers of McNeill work with varieties of gestures, how they function, and how they express thoughts. The concept of the “growth point”, that is, gestures and speech studied jointly as a unit, is frequently used. Kendon’s communication in face-to-face interaction is a model for interaction in everyday conversations and for how the different roles in the construction of utterances function. Fontanille’s theory is used by scholars who work with the visual arts and mass communication, from the written to the visual world, as stated in the title of his book Sémiotique du Lisible au Visible: Sémiotique du Texte et de l’Image. Coquet’s theory is used as a framework for analyzing philosophical points of view and the phenomenology of language. Besides the traditional fields of research, there are some new areas in which body communication plays a major role: computer engineering, with human-computer interaction; organizational semiotics, based on insights into organized behavior and enacted social practices; Inter Psi (Laboratório de Psicologia Anomalística e Processos Psicossociais, University of São Paulo), an integration of the study of semiotics, interconnectivity, and consciousness; and speech therapy, integrating nonverbal signs and gestures to coach TV anchors and presenters on how to convincingly convey their messages; news reporters have also begun this training (Cotes).

With the universe perfused by signs, different ideas have been welcomed, developing in diverse directions to try to explain the world, especially in business, marketing, traveling, and job interviews. In Latin America, studies contrasting local culture and nonverbal communication with those of other cultures, especially North American culture, are in demand. Their importance lies in avoiding misunderstandings: words and gestures that are innocent in one culture are offensive in another. Etiquette is also part of these studies, dealing with expectations regarding good manners so as to avoid faux pas. Intercultural competence is essential to grasp the variation of perception and behavior in the modern world.

3. Studies on body language and gestures in some South American countries

An overview of the main research fields and current ideas in some countries is given below. The selection depended on the organizations that foster these studies and on the individuals who drive such activities themselves.

3.1. Argentina

In the 1970s, Argentina was at the forefront in Latin America. France produced a boom in linguistics and semiology. Luis J. Prieto contributed greatly before moving to Europe. The foundation of the Argentinian Semiotic Society in the 1970s consolidated the previous work. Its journal LENGUAjes was already innovative in its graphic covers. Names such as Juan Carlos Indart, Óscar Steimberg, Óscar Traversa, Claudio Guerri, and Eliseo Verón have led innovative research. Ana Laura Maizels is working on the dimension of gestures of the Argentinian president, Cristina Fernández de Kirchner. The recent publication Nonágono Semiótico, un Modelo Operativo para la Investigación Cualitativa (Claudio Guerri, Martín Acebal, and Cristina Voto 2013) is an example of the state of the art in semiotic studies in Argentina and of how a conceptual model can serve as a framework for qualitative analyses.

3.2. Brazil

Initially, in the 1970s, Algirdas J. Greimas’s line of research influenced scholars such as Edward Lopes and Eduardo Peñuela Cañizal, who created a study group and the journal BACAB-Estudos Semiológicos, which later became Significação, Revista Brasileira de Semiótica. In the 1980s, body language and gesture studies became popular with the translation of books by Corraze, Davis, Fast, and Hall. In the 1990s, cultural studies took over the academic world and emphasis was given to text and discourse studies, currently a major concern for linguistics in Brazil, intersecting with semiotics. Studies of signs in Brazil focus, on the one hand, on social research linked to behavior and, on the other hand, on multimedia and technology. Nonverbal communication, especially tacesics, has been used intensively when dealing with patients in the medical field. The only institution that has been able to develop semiotics as a discipline in itself is the Pontifical Catholic University of São Paulo. Body language and gestures are integrated into its program in communication and semiotics, created by Maria Lúcia Santaella Braga. This program has an online journal, Estudos Semióticos. Directly or indirectly linked to this program are the following institutes and centers: Instituto Brasileiro de Linguagem Corporal (Brazilian Institute of Body Language), Programa em Linguística Aplicada e Estudos da Linguagem (Program in Applied Linguistics and Language Studies), and Centro de Estudos do Corpo (Center for Studies of the Body); Gícia Amorim is one of the main contributors.

3.3. Chile

In Chile, besides the traditional fields mentioned above, there is a major research trend concerning the role that memory and language play in studies of cognitive processes. Some papers review the theory of how gestures facilitate working-memory tasks in both children and adults, and how gestures influence the memory of students with developmental disorders and intellectual disabilities. Gestures also play a role in metaphoric understanding in the cognitive sciences. Rafael del Villar Muñoz has contributed to the study of mass communication and of visual and audiovisual tools for education. He also directs the journal Revista Chilena de Semiótica.

3.4. Colombia

Advertising has played an important role in sign studies since the foundation of the Uruguayan Association of Semiotic Studies by Lisa Block de Behar. This first phase was under European structuralist and poststructuralist influence. Kristeva’s semanalysis influenced women’s studies. Hilia Moreira focuses on television (teletheatre) and Tania Modleski on feminine issues. The gaze on women led to studies of the female body, with emphasis on hidden aspects of culture such as menstruation and other “shameful” body aspects. Fernando Andacht studies the ‘disappeared bodies’ (los desaparecidos), with an ideological and political input. The anthropologist Zandra Pedraza Gómez has several publications, mainly on the body and bio-politics. Armando Silva contributes works on semiotics and social and audiovisual communication, as does Jesús Martín-Barbero, born in Spain but living in Colombia since 1967.

3.5. Mexico

Although research in Mexico is not focused exclusively on body language, many studies are indirectly linked to the subject. The PowerPoint presentation La semiótica en México (https://www.google.com/search?q=semiotica+en+mexico&rlz=1C1TSNF_enUS437US437&aq=f&oq=semiotica+en+mexico&aqs=chrome.0.59j0l2.11005&sourceid=chrome&ie=UTF-8 (02/27/2013)) gives an overview of the activities going on in this country, as well as a bibliography. Alfredo Cid Jurado is a main figure dealing with body language on television from a methodological perspective.

3.6. Venezuela

As in other South American countries, interest in the study of signs in Venezuela started under French European influence, followed later by Peirce’s theory. Scholars were mainly influenced by the Groupe de Recherches Sémio-Linguistiques of the École des Hautes Études en Sciences Sociales (Paris). Most scholars dedicated themselves to text analysis, Finol to myths, and Andrés García Ildarraz to spatial and design analysis. There has also been interest in applying gestures to improving business relations. The Asociación Venezolana de Semiótica edits books on semiotics. Number 8 of the Colección de Semiótica Latinoamericana is on Semióticas del cuerpo (‘Semiotics of the body’). José Enrique Finol has been a major influence in promoting these studies.

4. References

This is a small sample of the works on body language from some countries in Latin America and of the authors and books that influenced their writers. We mainly refer to authors mentioned in the article. Many publications are not specific to body language, but involve social and mass communication or television and audiovisual projects in which the body is part of the study.

Birdwhistell, Ray L. 1970. Kinesics and Context. Philadelphia: University of Pennsylvania Press; New York: Ballantine Books.
Block de Behar, Lisa 1973. El Lenguaje de la Publicidad. Buenos Aires: Siglo XXI.
Bouissac, Paul 1973. La Mesure des Gestes: Prolégomènes à la Sémiotique Gestuelle. The Hague: Mouton.
Camargo, Paulo Sérgio de 2010. Linguagem Corporal: Técnicas para Aprimorar Relacionamentos Pessoais e Profissionais. São Paulo: Summus.
Cascudo, Luís da Câmara 1976. História dos Nossos Gestos, uma Pesquisa da Mímica do Brasil. São Paulo: Melhoramentos.
Colón, Eliseo and Monica Rector (eds.) 2009. El Gusto Latino. (deSignis 14.) Buenos Aires: La Crujía-FELS.


Coquet, Jean-Claude 1973. Sémiotique Littéraire: Contribution à l’Analyse Sémantique du Discours. Paris: Mame.
Coquet, Jean-Claude 1982. Sémiotique: L’École de Paris. Paris: Hachette.
Cotes, Claudia 2002. Articulando voz e gesto no telejornalismo. In: Leslie Ferreira and Marta Silva (eds.), Saúde Vocal: Práticas Fonoaudiológicas, 267–288. São Paulo: Roca.
Cotes, Claudia and Leslie Piccolatto 2002. A gestualidade no telejornal. Los Gestos, Sentidos y Prácticas – deSignis 3: 143–157.
Davis, Flora 1975. El Lenguaje de los Gestos. Buenos Aires: Emecé.
Davis, Flora 1979. A Comunicação Não-Verbal. Trans. Antonio Dimas. São Paulo: Summus.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1(1): 49–98.
Escudero Chauvel, Lucrecia and Monica Rector (eds.) 2002. Los Gestos: Sentidos y Prácticas. (deSignis 3.) Buenos Aires: La Crujía-FELS.
Fontanille, Jacques 1995. Sémiotique du Visible: Des Mondes de Lumières. Paris: PUF.
Fontanille, Jacques 2004. Soma et Séma: Figures du Corps. Paris: Maisonneuve and Larose.
Greiner, Christine 2005. Corpo: Pistas para Estudos Indisciplinares. São Paulo: Annablume.
Greiner, Christine and Cláudia Amorim (eds.) 2003. Leituras do Corpo. São Paulo: Annablume.
Guerri, Claudio 2009. Aportes a una teoría del diseño: De la teoría de la delimitación al lenguaje gráfico TDE. Ph.D. dissertation, Universidad de Buenos Aires.
Guerri, Claudio, Martín Acebal and Cristina Voto 2013. Nonágono Semiótico, un Modelo Operativo para la Investigación Cualitativa. Buenos Aires: Eudeba, Universidad de Buenos Aires.
Hall, Edward T. 1977. A Dimensão Oculta. Trans. Sonia Coutinho. Rio de Janeiro: Francisco Alves.
Kemp, Kênia 2005. Corpo Modificado, Corpo Livre? São Paulo: Paulus.
Kendon, Adam, Richard M. Harris and Mary Ritchie Key (eds.) 1975. The Organization of Behavior in Face-to-Face Interaction. The Hague: Mouton.
Laver, John and Sandy Hutcheson (eds.) 1972. Communication in Face-to-Face Interaction. Harmondsworth: Penguin Books.
Magariños de Morentín, Juan 1987. Semiotic diagnosis of marketing culture. In: Jean Umiker-Sebeok (ed.), Marketing and Semiotics: New Directions in the Study of Signs for Sale, 497–520. Berlin: Mouton de Gruyter.
Martín-Barbero, Jesús and Germán Rey 1999. Los Ejercicios del Ver: Hegemonía Audiovisual y Ficción Televisiva. Barcelona: Editorial Gedisa.
Martinell Gifre, Emma 1992. La Comunicación entre Españoles e Indios: Palabras y Gestos. Madrid: Editorial MAPFRE.
Masotta, Óscar 1970. La Historieta en el Mundo Moderno. Buenos Aires: Paidós.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: The University of Chicago Press.
McNeill, David 2000. Language and Gesture. Cambridge: Cambridge University Press.
Meo Zilio, Giovanni 1960. El Lenguaje de los Gestos en el Río de la Plata. Montevideo: Libertad.
Meo Zilio, Giovanni 1980–1983. Diccionario de Gestos: España e Hispanoamérica. Bogotá: Instituto Caro y Cuervo.
Moreira, Hilia 1994. Cuerpo de Mujer: Reflexión sobre lo Vergonzante. Montevideo: Trilce.
Morris, Charles 1946. Signs, Language and Behavior. New York: Prentice Hall.
Morris, Desmond 1979. Gestures, Their Origins and Distribution. New York: Stein and Day.
Pedraza Gómez, Zandra 2007. Políticas y Estéticas del Cuerpo en América Latina. Bogotá: Universidad de los Andes, CESO.
Pedraza Gómez, Zandra 2011. En Cuerpo y Alma: Visiones del Progreso y de la Felicidad. Educación, Cuerpo y Orden Social en Colombia (1833–1987). Second edition. Bogotá: Universidad de los Andes, CESO.
Pedraza Gómez, Zandra 2011. Regímenes estético-políticos: el orden del cuerpo en América Latina. In: Luis Henrique Sacchi Santos and Paula Regina Costa Ribeiro (eds.), Corpos, Gênero e Sexualidade: Instâncias e Práticas de Produção nas Políticas de Própria Vida, 33–46. Rio Grande: FURG.
Pires, Beatriz Ferreira 2005. O Corpo como Suporte da Arte: Piercing, Implante, Escarificação, Tatuagem. São Paulo: Editora Senac.
Polito, Reinaldo 1996. Gestos e Postura. 13th edition. São Paulo: Saraiva.
Polito, Reinaldo 2006. Como Falar Corretamente e Sem Inibições. 11th edition. São Paulo: Saraiva.
Poyatos, Fernando 2002. Nonverbal Communication across Disciplines. Volume 3. Amsterdam/Philadelphia: John Benjamins.
Prieto, Luis J. 1967. Mensajes y Señales. Barcelona: Seix Barral.
Rector, Monica and Aluizio R. Trinta 1986. Comunicação Não-Verbal: A Gestualidade Brasileira. Petrópolis: Vozes. First published [1985].
Rector, Monica and Aluízio R. Trinta 1990. Comunicação do Corpo. 4th edition. São Paulo: Ática.
Rector, Monica and Isabella Poggi (eds.) 2003. Gestos: Uso e Significado. Oporto: Edições Universidade Fernando Pessoa.
Rector, Monica, Isabella Poggi and Nadine Trigo (eds.) 2003. Gestures: Meaning and Use. Oporto: Edições Universidade Fernando Pessoa.
Rodrigues, José Carlos 1983. Tabu do Corpo. Rio de Janeiro: Achiamé.
Saitz, Robert L. and Edward J. Cervenka 1973. Colombian and North American Gestures. The Hague: Mouton.
Sankey, María Rayo 1986. El galanteo: descripción cinésica y análisis semiótico. Morphé 2: 111–137.
Santaella, Lucia 2004. Corpo e Comunicação: Sintoma da Cultura. São Paulo: Paulus.
Sant’Anna, Denise Bernuzzi de 1995. Políticas do Corpo. São Paulo: Estação Liberdade.
Sant’Anna, Denise Bernuzzi de 2001. Corpos de Passagem: Ensaios sobre a Subjetividade Contemporânea. São Paulo: Estação Liberdade.
Sarduy, Severo 1973. Gestos. Barcelona: Editorial Seix Barral.
Saussure, Ferdinand de 1972. Cours de Linguistique Générale. Paris: Payot.
SELITEN@T 1998. Signa: Revista de la Asociación Española de Semiótica 7. Madrid: UNED.
SELITEN@T 2000.
Signa: Revista de la Asociacio´n Espan˜ola de Semio´tica 9. Madrid: UNED. Souza, Clarisse de 2005. The Semiotic Engineering of Human-Computer Interaction. Cambridge: The Massachusetts Institute of Technology Press. ´ scar 1993. Semio´tica de los Medios Masivos. Buenos Aires: Atuel. Steimberg, O ´ scar 1980. El Cine de Animacio´n: Cuerpo y Relato. (LENGUAjes 4.) Buenos Aires: Traversa, O Tierra Baldı´a. Vero´n, Eliseo and Lucrecia Escudero (eds.) 1997. Telenovela. Ficcio´n Popular y Mutaciones Culturales. Buenos Aires: Gedisa. Villac¸a, Nizia 1998. Em Nome do Corpo. Rio de Janeiro: Rocco. Villac¸a, Nizia 1999. Em Pauta: Corpo, Globalizac¸a˜o e Novas Tecnologias. Rio de Janeiro: Mauad/ CNPq. Villac¸a, Nizia, Fred Goes and Esther Kosovski 1999. Que Corpo e´ Esse? Novas Perspectivas. Rio de Janeiro: Mauad. Villar Mun˜oz, Rafael del 2004. Corpus Digitalis. Semio´ticas del Mundo Digital. Barcelona: Gedisa. Villegas, Juan 1994. Negotiating Performance: Gender, Sexuality, and Theatricality Latin/o America. Duke: Duke University Press. Weil, Pierre and Roland Tompakov 1999. O Corpo Fala, a Linguagem Silenciosa da Comunicacc¸a˜o Na˜o-Verbal. 9th edition. Petro´polis: Vozes.

Monica Rector, Chapel Hill (USA)


VI. Gestures across cultures

78. Gestures in South American indigenous cultures

1. Introduction
2. Pointing gestures for spatial and temporal orientation
3. Ideophones as vocal gestures
4. A non-verbal mode of communication in a multilingual setting?
5. Prospects
6. References

Abstract

The use of gesture in South American indigenous communities has only recently – with the more large-scale documentation and description of these languages – attracted the interest of scientific research. The few studies available so far are descriptive as well as cognitively oriented. While Núñez and Sweetser (2006) focus on how a conceptualization of time is reflected in the use of language and co-speech gestures by Aymara speakers, Floyd (in prep.) analyzes the spatial and temporal orientation of speakers of Nheengatú in a multimodal framework of language description. Reiter (2012), in her study of ideophones in Awetí, investigates the accompanying gesture production.

1. Introduction

While Mesoamerican indigenous cultures came into the focus of modern linguistic research as early as the 1930s and 1940s, the native languages of South America continued to be studied predominantly by missionaries who had been trained in linguistics. They have received broader scholarly attention only in recent times, after considerable progress in the field of documentary linguistics. The same applies to research in South America on conceptualization across cultures, for which co-speech gestures have recently become important indicators. (For Mesoamerica, the famous study of the Hopi's conceptualization of time, carried out in the 1940s by the American linguist Benjamin Lee Whorf, must be mentioned in this context [cf. Carroll 1956]. Whorf's analysis was critically assessed by Malotki [1983]. According to Malotki, the crucial point about Hopi is that it emphasizes aspect over tense, which does not say anything obvious about the conceptualization of time.) A first study to investigate the conceptual structuring of spatial information as revealed in spontaneous gestures in two related cultures was carried out by Kita, Danziger, and Stolz (2001). Comparing the hand gestures accompanying the narratives of Mopan Mayan people from Belize and Yucatec Mayans from Mexico, the authors found that the two cultures differ in their use of the projected lateral (left–right) axis to represent location, motion, time flow, or plot development. From this differential use of gesture in the two cultures, Kita, Danziger, and Stolz conclude that there is also a cross-cultural difference in the conceptual structuring of space. A similar study, further outlined below, was carried out by Núñez and Sweetser (2006) in two Aymara communities of Chile. Other studies of gesture in two Tupian groups of Brazil are rather descriptive: Floyd (in prep.)
focuses on the use of pointing gestures for the indication of time, and Reiter (2012, 2013) analyses the interaction between co-speech gestures and ideophones.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1182–1193

These latter can be considered verbal gestures and have been the object of study in various South American indigenous languages under different perspectives. A noteworthy approach to ideophones in a lowland variety of Quechua, carried out by Nuckolls (1996, 2001), will be presented in more detail in section 3 of this overview.

2. Pointing gestures for spatial and temporal orientation

Pointing gestures of different shapes were observed very early in South American indigenous languages. Key (1962: 94), when conducting fieldwork among Amazonian groups of Bolivia, mentioned that "many Indian tribes point with their lips; we recorded the Movima, Tacana, and the Ayoreo as using this gesture. But there were differences in executing this gesture; the Movimas do not accompany it with a thrust of the head as the other tribes do, but simply protrude their lips to point to an object." More recently, Guillaume (personal communication) observed that in Cavineña, a Bolivian language of the Tacanan family, demonstratives are accompanied by lip pointing. The use of pointing gestures, including lip pointing, in combination with demonstratives has also been noted for Hup, a Nadahup (Makú) language spoken in the multiethnic Vaupés region at the Colombian–Brazilian border (Epps 2005: 247). Throughout her grammar, Epps points out the importance of gesture in disambiguating or specifying information expressed by language (for instance, with numerals or with spatial deictics, e.g., to indicate a body part or the height or size of an entity). A different kind of pointing gesture, adding temporal information to verb phrases, has been observed by Floyd (in prep.) for Nheengatú. This Tupi-Guaranian language, a descendant of Tupinambá which in colonial times had developed into a lingua franca throughout Brazil, is still spoken natively by members of different ethnic origin in some places along the Rio Negro in the state of Amazonas in Northwest Brazil. According to Floyd, these gestures, which can be performed by hands and eye gaze or by eye gaze alone, are in several respects unlike other co-speech gestures and rather resemble signs used in sign languages. Their frame of reference is constituted by absolute sun positions along an east–west axis, i.e., their indexical properties are constrained. Such an absolute frame of reference was first observed by Haviland (1993) among the Guugu Yimithirr of Australia. These gestures in Nheengatú are highly conventionalized and context-independent. In comparison to Nheengatú temporal adverbs for the time of day, they are semantically more fine-grained, making distinctions which are more or less equivalent to the hours of the day. Thus they do not convey redundant information. Floyd further shows that they are internally complex: punctual events can be expressed by pointing to specific positions, and durative events by a sweeping movement between two positions. In this respect the gestures also combine or interact with the lexical semantics of the verbs they co-occur with (cf. Floyd in prep.: 20, Fig. 3). Floyd proposes an analysis of this language across modalities, i.e., to treat the Nheengatú gestures like the conventionalized grammatical elements of spoken or signed languages. According to the author, such gestures are "fully grammatical within the visual mode" and "can be readily characterized in morphosyntactic terms" (Floyd in prep.: 5), either as elements of predicate modifiers, when occurring in combination with spoken deictics or adverbials, or as modifiers of predicates on their own (Floyd in prep.: 16, Tab. 1). Floyd assumes that over time the more independent gestures have lost their spoken deictic component. He also observes that gestures produced together with a temporal adverbial are less tightly bound to their associated speech, whereas when


occurring by themselves as verb modifiers their temporal meaning is fully expressed without any spoken support. Floyd (in prep.: 5) argues that "since gesture can become increasingly abstract and conventionalized through grammaticalization processes in signed languages, it is unclear why the same would not be possible for co-speech gesture". From his observations of Nheengatú temporal gestures he concludes that they represent a counterexample to the idea that gestures do not grammaticalize as long as there is a spoken channel available in communication, and he postulates a multimodal approach for descriptive grammars in order to check whether this also applies to other languages (Floyd in prep.: 23).

Pointing gestures also play a crucial role in the study carried out by Núñez and Sweetser (2006) among speakers of Aymara. The authors investigated the spatial conceptualization of time in this language by analyzing linguistic and gestural data complementarily. The analysis is based on ethnographic interviews, i.e., data which was not recorded in an experimental setting especially designed for the purpose of investigation. As a starting point for their analysis, Núñez and Sweetser proposed a taxonomy of spatial metaphorical mappings of time which focuses on reference points. While in most other research time is conceptualized in terms of motion in space, i.e., moving-Ego as opposed to moving-time metaphors, in their conceptualization a primary distinction is drawn between time-reference-point and Ego-reference-point structures (see section 2.2 in Núñez and Sweetser [2006] for details; see also Cooperrider, Sweetser, and Núñez [this volume]). They found that in Aymara a static time model is predominant which localizes future events behind and past events in front of Ego.
According to the authors, this mapping pattern of time onto space differs remarkably from the dynamic Ego-reference-point patterns shared by most other languages documented so far, which conceptualize future and past in the reverse order and in which time is most typically metaphorically conceived in terms of relative motion in linear space. The authors argue that co-speech gestures are an additional source of information where linguistic data alone cannot resolve questions regarding the cognitive processing of certain concepts as reflected in metaphors. Metaphoric speech has been observed to be often accompanied by metaphoric gesture. This kind of co-speech gesturing is less conscious and monitored than language and thus provides the researchers with a unique opportunity to visualize the cognitive processing (cf. Núñez and Sweetser 2006: 403). In Aymara, nouns meaning "front" and "back" can be used to refer to the past or to the future respectively, as illustrated in examples (1) and (2) (see Núñez and Sweetser 2006: 415–417):

(1) nayra mara
    eye/sight/front year
    'last year'

(2) qhipa pacha
    back/behind time
    'future time'

Unlike patterns in English and other languages, however, the dominant pattern in Aymara does not overtly mark a reference point in order to indicate whether an event occurred before or after another event or whether the event lies in the past or in the future of the Ego. In order to obtain convergent evidence for one time model or the other, the authors

interviewed 30 Aymara men and women between 38 and 84 years of age in two regions in the Andean highlands of northern Chile. These were either monolinguals or bilingual Aymara and Spanish speakers with varying degrees of proficiency in the two languages. This factor had a clear impact on the gestural performance, since the ten Aymara monolinguals or bilinguals with no grammatical Spanish made the gestures to be expected from the Aymara linguistic data: when referring to the past they pointed in front of them, and when referring to the future they pointed behind themselves. Of the five fluent speakers of Spanish, only one gestured like the Aymara-dominant speakers, while the other four showed a reverse pattern, corresponding to the Spanish metaphors for time reference. Other speakers, whose proficiency in either language lay between these extremes, gestured in both directions for future as well as past, but here, too, the respective dominant language had a major influence on the gestured time concept. A related factor was the age of the speaker, which determined whether s/he had been exposed to formal education in the national language or not. The Aymara gestures also confirmed that in the dominant time model a static Ego was in most cases taken as the reference point: whenever speakers referred to two events in the past, they pointed to a more distant location in front of them to indicate that one event was further in the past than another one, which they located nearer to themselves (cf. Núñez and Sweetser 2006: 430, 431). By pointing in an upward angle to the front, Aymara speakers further expressed that a reference event lies further in the past than events located by low pointing. Sweeping movements, similar to Floyd's observations in Nheengatú, in Aymara, too, indicate periods of time.
In order to account for the deviant model in Aymara, Núñez and Sweetser (2006: 403) suggest that the temporal metaphor system of this language encodes "aspects of humans' basic embodied experience of the environment" which are different from those encoded in other languages. The Aymaras' static mapping of the future onto the space behind and the past onto the space in front of the person is interpreted by Núñez and Sweetser (2006: 440) as indicating a strong emphasis on visual perception as a source of knowledge in Aymara culture, i.e., what is known (the past) is in front of Ego, what is unknown (the future) is in the back. Linguistically this further correlates with an evidential system in which the source of knowledge of a reported piece of information, whether obtained by eyesight or not, is obligatorily marked. A question which arises in this context and should be further explored in future studies is why the same mapping pattern does not occur in other languages with visual evidential systems.

3. Ideophones as vocal gestures

Ideophones all over the world have often been associated with gesture. A close relation between African ideophones and gesture was pointed out early, in Samarin's (1971) review of studies on various Bantu languages and in Kunene's (1978) study of the Bantu language Southern Sotho. McGregor (2002: 335) refers to ideophones as "vocal gestures" to account for their demonstrative use in Australian languages. Nuckolls (2001: 277), in the context of her investigation of these elements in Pastaza Quechua oral narratives, has termed them "verbal gestures", "hybrid forms combining properties from what are traditionally circumscribed as verbal and gestural domains". Ideophones can be broadly defined as "marked words that depict sensory imagery" (Dingemanse 2011: 1). This definition contains all important characteristics of ideophones across languages: They are words with specifiable meanings which are marked


in various ways, e.g., in being uttered with marked prosody or in having sound-symbolic properties and expressive morphology such as reduplication. They further depict rather than describe perceptually salient features of events. Ideophones can refer to complex events by themselves, and they can be integrated in discourse. The most common means of integrating them is by "quotative indexes", according to Güldemann (2008: 275): words functioning as markers of direct reported discourse and as predicators for certain invariant elements such as ideophones and "representational gesture". Such gestures, according to Güldemann (2008: 278), refer to "the represented world [and] are so salient vis-à-vis speech that they must be viewed as the major meaning-bearing units". Both ideophones and this type of iconic gesture Güldemann more generally subsumes under the term "mimetic signs". Ideophones have been mentioned and partially described for many South American indigenous languages, where they are part of the verbal arts and occur most notably in oral narrative discourse (see Reiter [2012: 9–43] for an overview). A detailed semantic description of ideophones in Pastaza Quechua, taking into account their semiotic distinctiveness, is given by Nuckolls (1996). Nuckolls describes and schematizes the ideophones in this Quechua variety of the Ecuadorian lowlands as subentries of specific "image schemata" in the sense used by Lakoff (1987) and Johnson (1987: 19) as "embodied patterns of meaningfully organized experience". One of the uses of ideophones, according to Nuckolls, is to restate a verb meaning as a kind of verbal gesture. To account for this, Nuckolls (2001: 277) refers to Kita's (1997) two-dimensional semantic framework.
According to this theory, ideophones are different from other words in that their meanings are primarily represented in what Kita calls an "affecto-imagistic" dimension where they can be directly experienced, similar to the meanings of iconic gestures. In an utterance of the type given in example (3) the ideophone thus depicts the same activity which is referred to by the finite verb, in this case the ideophone ling, describing any act of insertion into an enclosed space, in relation to the verb satina ('insert') (cf. Nuckolls 2001: 278, 11):

(3) Chi washa-ga ling ling ling ling ling ling sati-sha-ga nina-ta hapi-chi-nau-ra.
    that back-TOP IDEO insert-COREF-TOP fire-ACC catch-CAUS-3PL.PST
    'After that, inserting (the peppers) ling ling ling ling ling ling, they lit the fire.'

Nuckolls further notes various degrees of abstraction for ideophones in Pastaza Quechua, which can undergo semantic changes comparable to those of grammaticalizing forms. She illustrates these with the ideophone tak, which can have punctual and completive uses (see Nuckolls [2001: 280–282] for the whole "paradigm"). This ideophone has the concrete meaning of a contact between two surfaces, accompanied by an audible sound. In a different context it can depict a soundless contact between surfaces. Further, it can be used in contexts where the notion of direct contact has disappeared, as illustrated in (4) (cf. Nuckolls 2001: 281, 14).

(4) Na kay-bi-ga dziriri dziriri dziriri dziriri dziriri tak chupa-ta hawa-y.
    new here-LOC-TOP IDEO IDEO tail-ACC above-LOC
    'Then here (the snake coiled itself) dziriri dziriri dziriri dziriri dziriri and (placed) its tail tak above.'


Finally, tak can be transferred from one-dimensional contact into three-dimensional space, expressing a contact that surrounds or fills up. This is illustrated in (5) (cf. Nuckolls 2001: 282, 16).

(5) Tak kipiri-kpi-ga?
    IDEO hug-SWRF-TOP
    'And what if he hugged you tak?'

According to Nuckolls (2001: 282), in these contexts tak implies a grammatical notion of completive aspect. A study of ideophones which not only describes their phonological, morphosyntactic, and semantic properties but also explores their close relation to co-speech gestures is presented in Reiter (2012) for Awetí, a Tupian language spoken by a small indigenous community in Central Brazil. In order to be able to draw general conclusions from the heterogeneous Awetí corpus data about how ideophones and gesture interact in communication, identical and similar ideophones, as well as those occurring in comparable syntactic and discourse contexts, were chosen, described together with their accompanying gestures, and contrasted with each other. The gesture studies are based on more than four hours of non-elicited ethnographic discourse (myths, personal and historical narratives, descriptions, explanations) recorded on video. The analysis of the available data suggests that ideophones in Awetí are consistently accompanied by gesture and that this does not apply to other words (e.g., verbs or nouns), where gestures co-occur less systematically and to a much lesser degree. Furthermore, Awetí ideophones and gestures are completely synchronous in that the gesture stroke always falls on the ideophone. Such a noticeable correlation was first described by Kita (1997: 392) with respect to Japanese adverbial ideophones. Kita, who investigated the interaction between ideophones, spontaneous iconic co-speech gestures, and expressive prosody, could further show that the prosodic peak, if existent in an utterance, also falls together with the utterance of the ideophone, an observation for which corroborative evidence could be found in Awetí (cf. Reiter 2012: 302–308).
Regarding the shape of the accompanying gesture, a distinction had to be drawn between ideophones depicting the manner component in motion events, typically occurring in syntactic structures like (6), where their meaning combines with that of a verbal predicate (cf. Reiter 2012: 338, 4), and ideophones expressing other activities, as illustrated in (7), where both (nominalized) verb and ideophone refer to the same activity (cf. Reiter 2012: 430, 10).

(6) Powowowo, o-to a'yn.
    IDEO.fly 3-go.VI PART
    'It flew off.' (lit.: '(There was) powowowo, it went off.')

(7) I=po-mõj-tu azo=kyty me, pupupupupupupu mu'jẽ.
    3=ANTI-cook.VI-NOM 1PL.EXCL=for PART IDEO.simmer ready
    'She cooks them for us, pupupupupu (until it's) ready.'


The semantic difference between these two types is also reflected in the co-occurring gestures: while the former are accompanied by deictic gestures, only occasionally containing descriptive components, the latter are entirely iconic in nature. Gestures co-occurring with ideophones of this type further vary, depending on the discourse prominence of the event depicted by the ideophone (cf. Reiter 2012: 450–451, Tab. 14) or on the degree of an ideophone's syntactic integration. The latter was the result of the analysis of the gestures accompanying a full "paradigm", given by various occurrences of the ideophone pupupu ('boiling') and of a formally related verb pupure with the same meaning (cf. Reiter 2012: 422–432). The occurrences, ranging from independent ideophones over different types of their syntactic integration to lexical verbs, showed that with an increase of grammatical structure there was a decrease in iconicity, of the gesture as well as of the ideophone itself, which was gradually losing its marked prosody. This suggests that ideophones can indeed be considered a kind of hybrid phenomenon between iconic sign and arbitrary language. In motion events, ideophones depict the "manner" component, while the co-occurring gestures encode the "path" component. This path information conveyed by gesture depends on the path which is lexically encoded in the motion verb of an adjacent clause associated with the ideophone, or which is expressed by other means, e.g., by a locative adjunct. While most motion verbs in Awetí encode the motion and the path component of a motion event and provide information on the figure by grammatical structure, an ideophone encodes motion, manner, and information on the figure. Both ideophones and lexical verbs can refer to a motion event by themselves or in combination, occurring in adjacent clauses.
Since gestures synchronizing with ideophones depicting motion events always encode a path which is not encoded in the ideophone, they vary with the respective verb or other elements they relate to. If the manner component, depicted by the ideophone, is a central piece of information in discourse, it can additionally be depicted by the co-occurring gesture (cf. Reiter 2012: 433–452; Reiter 2013). Gestures accompanying ideophones other than those of motion events depict salient features of the activity referred to, whereas gestures produced together with verbs referring to the same activities often depict objects, i.e., participants of the event referred to. Finally, it could be shown that ideophone-accompanying gestures in Awetí are mostly conventionalized, and this apparently not only in narrative discourse, where they may be part of a learned repertoire of narrative techniques. Examples from Awetí confirm that gesture and speech are interrelated with regard to ideophones but that the type and shape of the gesture co-occurring with an ideophone also seem to be determined by other factors. Consequently, the multimodality of any specific ideophone does not, in principle, include gesture as an invariant component. A certain autonomy of the meaning-bearing types of gesture is also pointed out in Güldemann's (2008: 277) mimesis approach, according to which ideophones and representational gestures are both "mimetic signs" of semantic representation, which – due to the fact that they are produced by different media (speech organs, body) – can but need not occur simultaneously. In Awetí discourse other elements of speech are also gestured, but not as systematically as ideophones. Verbs of motion, for example, are most often accompanied by a gesture, while verbs of other semantic classes are only occasionally gestured. Participants, when introduced in the discourse, are often gesturally depicted in size, height, width, or shape.
Other gestures seem to have the function of highlighting the intonation structure of the utterance. Furthermore, speakers vary with regard to the

frequency of gesture production. Another noteworthy observation is that co-speech gestures are abundantly used in narrative discourse. As a possible explanation it was suggested that gestures serve the story-tellers as a technique for memorizing long narratives.

4. A non-verbal mode of communication in a multilingual setting?

A profitable area for further exploring gesture production in discourse is the Upper Xingu, in the southern region of the Parque Indígena do Xingu, since 1961 the first national reserve in Mato Grosso/Central Brazil. Since the beginnings of Portuguese colonization, the area has been continuously inhabited by different ethnic groups speaking mutually unintelligible languages of various major families. These groups soon established a common cultural system characterized by ceremonial cooperation, intermarriage, and economic interdependencies, coupled with a specific language policy which keeps the languages – seen as the primary badge of an ethnic identity – apart on an individual level. In accordance with this policy, a person is only allowed to speak the language of the village s/he grew up in and – if different – the language(s) of his/her parents. Of immediately surrounding languages, i.e., the language of the other-ethnic spouse and/or of the village one moved to as an adult, the person usually has a passive competence. According to archeological findings and oral history, Arawak-speaking groups founded the cultural system and were joined by Carib-speaking groups from about 1400 onward and, in the mid 18th century, by Tupian peoples (cf. Franchetto 2000: 117; Heckenberger 2000). Currently, there are still ten languages spoken in the Upper Xinguan society. Although even today ethnic groups from other places are relocated in adjacent areas within the reserve, they are not integrated in the common cultural system. Already Karl von den Steinen, a German explorer and the first visitor to the Upper Xingu in the late 19th century, characterized the discourse of the Bakaïrí – a Carib people now living outside the area – as multimodal, describing phenomena such as prosodic markedness, reduplication, ideophones, lip pointing, and hand gestures (cf. Steinen 1894: 70–72).
He further claimed that the gestures used among the Bakaı¨rı´ were conventionalized, since exactly the same gestures occurred in his encounters with all other ethnic groups in the area: “Ich darf wohl gleich erwähnen, dass sich die Mimik der Bakaı¨rı´ mutatis mutandis mit mehr oder weniger Temperament bei allen Stämmen wiederholte, dass nur die Interjektionen verschieden, die Geberden aber genau dieselben waren” (Steinen 1894: 71) [I should mention right away that the facial expressions of the Bakaı¨rı´ were repeated mutatis mutandis in more or less lively manner in all other ethnic groups, that only interjections were different, the gestures, however, were exactly the same.] (my translation). Steinen’s following description of a “stone axe pantomime” performed by the Bakaı¨rı´ (1894: 71) suggests that most of the “interjections” he mentions are “ideophones” according to the above definition. Steinen (1894: 71⫺72) also observed that these gestures were not only used in interaction with him as an outsider but also in interethnic communication with the neighboring groups: “Sie waren sparsamer mit diesen Lauten und Geberden in ihrer eigenen Unterhaltung, allein sie verfügten doch über die Hülfssprache ausdrucksvoller Bewegung in reichem Masse und bedienten sich ihrer im Verkehr mit anderen Stämmen […]”, [They were more economical with these sounds and gestures in their own conversation, but they had a large repertoire of this auxiliary language of expressive movements and used it in their interaction with other tribes] (my translation). This suggests that in the multi-

ethnic and multilingual Upper Xingu area there may have existed, and may still exist, a common gestural code which at that time may even have functioned as a lingua franca, similar to the sign language used among the indigenous peoples of the Great Plains in North America. In the Great Plains, European colonialism had triggered the development of this non-verbal lingua franca, but the basic conditions for its formation and its gestural manifestations were very different from those in the Upper Xingu. The indigenous groups of the Great Plains had been in only sparse contact with each other until horses, (re-)introduced to the Americas at the end of the 15th century in the course of European colonization, reached the region. According to Taylor (1996), "trade may have been an important stimulus in the development of sign language, and it was certainly an important factor in its diffusion after the rise of horse nomadism" (see also www.handtalkbook.com for several accounts in sign language). While the Great Plains lingua franca was a fully-fledged sign language, Steinen described the Upper Xingu non-verbal code as conventionalized, but still iconic and not too abstract to be understood by outsiders. Different authors (e.g., Basso 1973, 2009; Franchetto 2000; Reiter 2010; Seki 2011) have observed that gesture, among other non-verbal modalities, plays an important role in this area, where individuals are usually allowed to speak only the language of their ethnic group and where Portuguese has only recently begun to establish itself as a modern lingua franca. Basso (1973: 8) attributed the fact that none of the indigenous languages has established itself as a lingua franca to each language's role as a symbol of group identity. She distinguished between "personal" and "non-personal situations" of interethnic communication.
In the former, bilinguals play an important role in conveying information from one ethnic group to another; in the latter, i.e., formal encounters and intertribal ceremonial occasions, non-verbal codes dominate. These can be gestures and performances of activities as well as body paint designs and ornamentation. The gestures in these contexts are described as extremely ritualized and appear in a fixed order, i.e., they cannot be combined in a different way to convey other information (cf. Basso 1973: 6–8). The development of a common gestural code, however, may also have been of importance in interpersonal relationships. Due to intermarriage, every Upper Xinguan village has inhabitants who do not speak the language of their direct environment. During their marriage they become passive bilinguals, as do their spouses, each understanding the other's language while speaking his or her own, whereas their children, entitled to speak the languages of both parents, develop into fluent bilinguals. The role of gesture in the verbal interactions of these interethnic couples has not yet been studied. One may, however, hypothesize that one function of iconic co-speech gesture, which occurs especially often in story-telling, is to let family members from other ethnic groups participate in the event, giving them the opportunity to recognize a myth or historical narrative that forms part of a common oral literature. A comparison of the gestures and their use by professional story-tellers from the different ethnic groups belonging to Upper Xinguan society would be a starting point for investigating a cultural technique which is currently on the verge of disappearing. For ideophones, on the other hand, it has been noted that they have started to assimilate across the languages of the Upper Xingu, even though their different origins are still known in the speech communities (cf. Reiter 2012: 460–461, Tab. 5.3).

5. Prospects

As this short overview has shown, the study of gestures in South American indigenous languages is still in its early stages. The recently growing interest seems

to be a direct consequence of the large documentation corpora which have been assembled since the late 1990s. These corpora for the first time include a relevant proportion of video data, which turns attention to multimodality, including non-verbal cues in communication, and which, thanks to refined recording methods, provides material for gesture analysis (cf. Floyd in prep.: 23 for his critical remarks on his own initial inability to perceive the meaningfulness of temporal gestures, owing to an attentional bias toward audio data and elicitation). In addition, for many previously undescribed languages at least concise grammars are now available, which give researchers who focus on gesture analysis an opportunity to approach the linguistic data. It should be added, however, that in order to fully understand gesture use in the discourse of an Amerindian language it is necessary to have access to a reasonable amount of ethnographic information, which can often only be provided if the researcher has actually spent some time within the respective linguistic community and closely cooperates with community members familiar with the gesture conventions. While interest in this non-verbal mode of communication is growing among researchers of South American indigenous languages, it can at the same time be observed that gesture production decreases or changes with advancing acculturation and the speakers' growing proficiency in a national language. Many of the native languages of South America continue to be highly endangered, even though various countries currently invest in bilingual education programs for their indigenous populations.
That the dominant language has an impact on the conceptualization of time and its expression in speech-accompanying gestures was demonstrated by Núñez and Sweetser (2006: 442), who conclude their study with a pessimistic observation: "Sadly, this rare pattern of linguistic and cognitive construal may be vanishing (at least from northern Chile), thus diminishing the rich cultural diversity of our world". Other data from large documentation corpora impressively show that the speakers who make abundant use of ideophones and gestures in their discourse are mostly those with little access to formal education, often older members of the communities (cf. various South American documentation corpora [e.g., Kuikuro, Awetí, Cashinahua] under http://dobes.mpi.nl/projects; examples of the use of co-speech gestures and ideophones are an older woman's explanation of how to collect honey in the Kuikuro corpus and various myths told by professional story-tellers in the Awetí corpus and by older community members in the Cashinahua corpus). The tendency of literacy to "remove[] language from the body of the speaker" (Nuckolls 1996: 134) has been reported from many different places. In some communities there are still professional story-tellers who learn to use non-verbal cues as part of their artistic repertoire in order to animate their performances. These formerly widespread manifestations of indigenous verbal arts are often not passed on to the younger generations, losing importance vis-à-vis television and other media technologies of Western culture (cf. England 2009: 207–208). For these reasons, oral narratives have been the focus of many documentation projects. In addition, in order to capture what is still left of the native cultures, including the use of non-verbal communicative techniques, initiatives have been set up to encourage younger community members to document their elders on video.
One such initiative is the NGO vídeo nas aldeias ('video in the villages') (http://www.videonasaldeias.org.br) in Brazil, funded by UNESCO and the Norwegian Embassy. Hopefully, these endeavors will help to preserve unique manifestations of rich cultural practices which, in pre-industrial times, may also have played a major role in the communication of Western societies.

Abbreviations used in the examples: ACC – accusative, ANTI – antipassive, CAUS – causative, COREF – co-reference, 1PL.EXCL – 1st person plural exclusive, IDEO – ideophone, LOC – locative, NOM – nominalizer, PL – plural, PART – particle, PST – past tense, SWRF – switch reference, TOP – topicalizer, vi – intransitive verb.

6. References

Basso, Ellen 1973. Portuguese Relationship Terms in Kalapalo Encounters. Language in Society 2(1): 1–21.
Basso, Ellen 2009. Civility and Deception in Two Kalapalo Ritual Forms. In: Gunter Senft and Ellen B. Basso (eds.), Ritual Communication, 243–269. Oxford/New York: Berg.
Carroll, John B. 1956. Language, Thought and Reality: Selected Writings of Benjamin Lee Whorf. Cambridge, Mass.: Technology Press of Massachusetts Institute of Technology.
Cooperrider, Kensy, Eve Sweetser and Rafael Núñez this volume. The conceptualization of time in gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1781–1788. Berlin/Boston: De Gruyter Mouton.
Dingemanse, Mark 2011. The Meaning and Use of Ideophones in Siwu. Doctoral Dissertation. Nijmegen: Radboud Universiteit Nijmegen.
England, Nora 2009. To Tell a Tale: The Structure of Narrated Stories in Mam, A Mayan Language. International Journal of American Linguistics 75(2): 207–231.
Epps, Patience 2005. A Grammar of Hup. Doctoral Dissertation. Charlottesville: University of Virginia. www.etnolinguistica.org/tese:epps-2005.
Floyd, Simeon in preparation. Grammar across Modes: Celestial Gesture and Temporality in the Nheengatu Verb Phrase. Manuscript, Max Planck Institute for Psycholinguistics, 1–33.
Franchetto, Bruna 2000. Línguas e História no Alto Xingu. In: Bruna Franchetto and Michael Heckenberger (eds.), Os Povos do Alto Xingu. História e Cultura, 111–156. Rio de Janeiro: UFRJ.
Güldemann, Tom 2008. Quotative Indexes in African Languages: A Synchronic and Diachronic Survey. Empirical Approaches to Language Typology 34. Berlin: Mouton de Gruyter.
Haviland, John B. 1993. Anchoring, Iconicity, and Orientation in Guugu Yimithirr Pointing Gestures. Journal of Linguistic Anthropology 3(1): 3–45.
Heckenberger, Michael 2000. Epidemias, Índios Bravos e Brancos: Contato Cultural e Etnogênese do Alto Xingu. In: Bruna Franchetto and Michael Heckenberger (eds.), Os Povos do Alto Xingu. História e Cultura, 77–110. Rio de Janeiro: UFRJ.
Johnson, Mark 1987. The Body in the Mind. Chicago: University of Chicago Press.
Key, Mary Ritchie 1962. Gestures and Responses: A Preliminary Study among some Indian Tribes of Bolivia. Studies in Linguistics 16(3–4): 92–99.
Kita, Sotaro 1997. Two-dimensional Semantic Analysis of Japanese Mimetics. Linguistics 35(2): 379–415.
Kita, Sotaro, Eve Danziger and Christel Stolz 2001. Cultural Specificity and Spatial Schemas, as Manifested in Spontaneous Gestures. In: Merideth Gattis (ed.), Spatial Schemas and Abstract Thought, 115–146. Cambridge: MIT Press.
Kunene, Daniel P. 1978. The Ideophones in Southern Sotho. Marburger Studien zur Afrika- und Asienkunde Serie A, Band 11. Berlin: Dietrich Reimer.
Lakoff, George 1987. Women, Fire and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press.
Malotki, Ekkehart 1983. Hopi Time. A Linguistic Analysis of the Temporal Concepts in the Hopi Language. Berlin: Mouton de Gruyter.

McGregor, William 2002. Verb Classification in Australian Languages. Empirical Approaches to Language Typology 25. Berlin: Mouton de Gruyter.
Nuckolls, Janis B. 1996. Sounds like Life: Sound-Symbolic Grammar, Performance and Cognition in Pastaza Quechua. New York/Oxford: OUP.
Nuckolls, Janis B. 2001. Ideophones in Pastaza Quechua. In: F. K. Erhard Voeltz and Christa Kilian-Hatz (eds.), Ideophones, 271–285. Typological Studies in Language 44. Amsterdam/Philadelphia: John Benjamins.
Núñez, Rafael E. and Eve Sweetser 2006. With the Future behind Them: Convergent Evidence from Aymara Language and Gesture in Crosslinguistic Comparison of Spatial Construals of Time. Cognitive Science 30: 401–450.
Reiter, Sabine 2010. Linguistic Vitality in the Awetí Indigenous Community: A Case Study from the Upper Xingu Multilingual Area. In: José Antonio Flores Farfán and Fernando F. Ramallo (eds.), New Perspectives on Endangered Languages. Bridging Gaps between Sociolinguistics, Documentation and Language Revitalization, 119–146. Amsterdam/Philadelphia: John Benjamins.
Reiter, Sabine 2012. Ideophones in Awetí. Doctoral Dissertation. Kiel: University of Kiel.
Reiter, Sabine 2013. The Multi-modal Representation of Motion Events in Awetí Discourse. CogniTextes [En ligne] 9.
Samarin, William J. 1971. Survey of Bantu Ideophones. African Language Studies 7: 130–168.
Seki, Lucy 2011. Alto Xingu: uma Área Linguística? In: Bruna Franchetto (ed.), Alto Xingu. Uma Sociedade Multilíngue, 57–85. Rio de Janeiro: Editora do Museu do Índio. http://www.ppgasmuseu.etc.br/publicacoes/altoxingu.html.
Steinen, Karl von den 1894. Unter den Naturvölkern Zentralbrasiliens. Berlin: Hoefer and Vohsen. http://biblio.etnolinguistica.org/steinen-1894_unter_den_naturvolkern.
Taylor, Allan R. 1996. Nonspeech Communication Systems. In: Ives Goddard (ed.), Handbook of North American Indians, Vol. 17: Languages, 275–289. Washington, DC: Smithsonian Institution Press.

Sabine Reiter, Belém (Brazil)

79. Gestures in native South America: Ancash Quechua

1. Introduction: Geographic, ethnographic, and linguistic context of study
2. Summary of data collected
3. Body and world: The phenomenological approach to gesture
4. Look, point, handle: The sequence of manual gesture and gaze
5. Conclusion: Going beyond the body and embodiment
6. References

Abstract

This chapter examines spatial gestures among speakers of the Ancash Quechua language. In addition to a summary of findings from a study of 115 spatial gestures made with hand and head, the value of a phenomenological approach to gesture is presented. Such a perspective is particularly valuable in the study of gestures that refer to the surrounding landscape.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1193–1206

By looking at patterns across communicative and non-communicative practices and habits, it is possible to gain a better understanding of gesture's role in the emergence, maintenance, transmission, and evolution of local ways of relating to and thinking about the surrounding world. This chapter examines one such pattern: the relations among the relative sequence of gaze and manual gesture, the characteristics of Huaripampa's landscape, and local ways of wayfinding and orientation. Specifically, the normative sequence of gaze-then-hand correlates with the use of the landscape as a cue for orientation and a mnemonic for the locations of places and paths. Further, the gaze-then-hand sequence is shown to facilitate the "handling" of the earth's physical forms in iconic gestures. In conclusion, these facts suggest that rather than embodying the landscape, this procedure for engaging manually with space involves disembodying the always-already embodied world through gazes and points.

1. Introduction: Geographic, ethnographic, and linguistic context of study

This chapter draws on a study of spatial gestures among Quechua speakers in the Peruvian highland town of Huaripampa (pop. approx. 1,200), located in the department of Ancash, 20 km south of and 500 meters above the departmental capital, Huaraz. The population of Huaripampa and the closely related smaller communities of Canray Grande and Canray Chico is dedicated primarily to agricultural and pastoral work. However, migratory labor also forms an important sector of the economy: many residents spend anywhere from one to twenty years working in Huaraz, Lima, or other cities. While in Huaripampa, locals farm areas extending from the town center roughly 1.5 km west, 2 km north, 0.3 km south, and 5 km east, and ranging from 3,400 to 3,800 meters above sea level. Further, residents take animals to pasture within this range and to nearby Ruric Canyon. The canyon's entrance is roughly 14 km north-by-northeast of the town center; it extends 7 km further, from 4,000 to 4,400 meters above sea level. Huaripampa and Canray Grande occupy two plateaus separated by the Sawan River. This territory is marked by roughly a dozen smaller mountains and surrounded by glacier-capped peaks, the tallest of which is Huantsan (6,369 meters). One goal of this chapter is to argue for the relevance of the social and geological configuration of this territory to understanding the particularities of Huaripampa Quechua speakers' gestures. I have conducted ethnographic and linguistic fieldwork in Huaripampa periodically from 2010 to 2013. This chapter draws on a set of 115 gestures transcribed from 70 minutes of video recorded at five locations with seven participants. I also draw on observations and notes from previous fieldwork.
The transcribed gestures are coded for location of recording, hand shape, orientation of gesture and gaze, orientation of movement in gesture and gaze, angle of pointing gestures, torso movements, relative sequence of co-occurring gazes and gestures, referent of gestures, accuracy of pointing gestures, and origo transposition. The Ancash Quechua language (Adelaar 2004; Julca Guerrero 2009; Parker 1976) is an agglutinative SOV language with extensive derivational morphology (Larsen 2008) and a complex aspectual system (Hintz 2011). The department of Ancash has one of the densest populations of Quechua speakers, with the most extensive dialectal variation. Most relevant here is the way Quechua speakers in Huaripampa talk about spatial relations. While there are in theory words for left and right, they are rarely, if ever, used. The same is true of cardinal directions. My research suggests that the

terms denoting "up" and "down" are used to speak about east and west, respectively. Ambivalent cases are also frequent because, over larger distances, east generally is up and west down. The words for up and down are part of a paradigm of fourteen nouns that denote directions (Tab. 79.1) and intrinsic relations such as "inside" and "behind."

Tab. 79.1: Ancash Quechua Directional Nouns

QUECHUA   ENGLISH
Rara      Up; above; east
Hana      Up; above; east
Uma       Up; above; east
Witsay    Upward direction; easterly direction
Ura       Down; below; west
Hawa      Base; down; below; west?
Ruri      Inside; underneath
Tsimpa    Front; facing
Frenti    Front; facing
Qipa      Behind; back
Waqta     Behind; other side
Kinray    Side
Washa     Side; same level
Kuchun    Border; edge

Further spatial information can be conveyed by six case suffixes (Tab. 79.2). There are three deictic terms, kay, tsay/hay, and taqay, which I will gloss here roughly as proximal, medial, and distal, respectively. The proximal and medial terms are also both used frequently for discourse functions. The verbal deictic suffix -mu denotes movement toward the origo when affixed to motion verbs, and location at a remove from the origo when affixed to non-motion verbs. In what follows I argue that a phenomenologically grounded understanding of the relation between language, body, landscape, and culture (see box 1) is essential in explaining the relevance of spatial gestures beyond their own systematicity. The emerging study of gestures in language challenges traditional assumptions about human communication, for example the independence of spoken language as a semiotic modality (e.g., Enfield 2009; Kendon 1980; McNeill 1992). Gesture has also been shown to be significant to the study of cognition (e.g., Cienki and Müller 2008; Kendon 1986; Kita 2003; McNeill 2005). I argue further that gesture is a domain of investigation that can deepen understandings of how populations develop, embody, and transmit locally specific ways of relating to the

Tab. 79.2: Ancash Quechua Case Suffixes

CASE SUFFIX   SPATIAL GLOSS
-ta           To (accusative)
-man          Toward (goal)
-chaw         In, at, on (locative)
-pa           Through, about, via (genitive)
-pita/-piq    From (ablative)
-kama/-yaq    Up to, until (limitative)
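The case-suffix paradigm above lends itself to a simple lookup structure. The following sketch is not from the chapter: the function name is hypothetical, the suffix inventory is taken from Tab. 79.2 (allomorphs listed separately), and words are assumed to be written with the suffix concatenated to the stem (the chapter itself writes them hyphenated, as in washa-pa).

```python
# Spatial case suffixes from Tab. 79.2; allomorphs (-pita/-piq, -kama/-yaq)
# are entered as separate keys. Helper name is illustrative only.
CASE_SUFFIXES = {
    "-ta": "to (accusative)",
    "-man": "toward (goal)",
    "-chaw": "in, at, on (locative)",
    "-pa": "through, about, via (genitive)",
    "-pita": "from (ablative)",
    "-piq": "from (ablative)",
    "-kama": "up to, until (limitative)",
    "-yaq": "up to, until (limitative)",
}

def gloss_spatial_suffix(word: str):
    """Return (stem, gloss) if the word ends in a spatial case suffix, else None."""
    # Try longer suffixes first so that e.g. "-pita" is not misread as "-ta".
    for suffix in sorted(CASE_SUFFIXES, key=len, reverse=True):
        ending = suffix.lstrip("-")
        if word.endswith(ending) and len(word) > len(ending):
            return word[: -len(ending)], CASE_SUFFIXES[suffix]
    return None

print(gloss_spatial_suffix("washapa"))    # washa-pa, as in example (1) below
print(gloss_spatial_suffix("Rurichaw"))
```

A real morphological parser would of course have to handle suffix stacking and non-spatial homophones; this sketch only illustrates the one-suffix case.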

box 1
I use culture to refer to the habitual or regular practices that are shared systematically but not homogeneously across a population, and to the resulting patterns of associations of meanings and materials (Bourdieu 1977, 1984; Sapir 2002; Silverstein 2004).

environment. In Streeck's words, "even our habitual motor-patterns are cultural phenomena, while it is the very nature of our bodies to make the acquisition of cultural patterns possible" (2013: 83). I begin with a brief summary of patterns in the data collected. I then discuss how a phenomenological perspective has been brought into the study of gesture, and how it may be particularly relevant in the case of spatial gesture. In the final section I explore this perspective in relation to the sequence of gaze and manual gesture.

2. Summary of data collected

2.1. Hand shapes
Of 115 gestures, 63 were index finger points, 21 were points made with gaze alone, 13 were made with the entire hand, 11 with finger bunches, 3 with thumb points, 2 with closed fists, and 2 with both index and middle fingers. In sum, the majority of hand shapes were index finger points, while the next most frequent case involved no use of the hand, only gaze. There is also a clear correlation in the data, noted across diverse languages (Haviland 1993; Wilkins 2003), between the angle of a pointing gesture and the distance of the referent: pointing higher up indicates a more distant referent.
– Index finger: 55%
– Gaze only: 18%
– Full hand: 11%
– Finger bunch: 10%
– Thumb: 3%
– Fist: 2%
– Two fingers: 2%
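The percentages follow from the raw counts by rounding to the nearest whole percent. A quick check (the counts are those given in the text; the script itself is not part of the chapter):

```python
# Hand-shape counts from section 2.1 (n = 115).
counts = {
    "index finger": 63,
    "gaze only": 21,
    "full hand": 13,
    "finger bunch": 11,
    "thumb": 3,
    "fist": 2,
    "two fingers": 2,
}
total = sum(counts.values())
assert total == 115  # categories are exhaustive and non-overlapping

# Round each share to the nearest whole percent.
percentages = {shape: round(100 * n / total) for shape, n in counts.items()}
print(percentages)
```

Rounding this way reproduces the listed figures (55, 18, 11, 10, 3, 2, 2); note that such rounded shares need not sum to exactly 100.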

2.2. Gaze and torso
Of 115 gestures, 81 included a gaze that indicated the referent, 34 involved manual gestures unaccompanied by gaze, and 13 included torso movements. In sum, well over half of the recorded gestures were accompanied by gazes toward the referent.
– Gaze points (with or without manual gestures): 70%
– Gestures without gaze: 30%
– Torso movements: 11%

2.3. Sequential relation of gaze and manual gesture
Of 115 gestures, the sequence of gaze and manual gesture was measured in 32 cases for the preparation and hold phases and in 29 cases for the retraction phase.

In the preparation phase, 25 out of 32 gestures involved the gaze moving toward the target before the hand, while the hand moved first in only 2 cases. In 5 cases the preparation involved the simultaneous movement of head and hand.
– Gaze first: 78%
– Hand first: 6%
– Simultaneous: 16%
In the hold phase, 29 out of 32 cases involved the gaze reaching the held position before the hand, while the hand arrived first in only one case. In two cases, both head and hand reached the target position simultaneously.
– Gaze first: 91%
– Hand first: 3%
– Simultaneous: 6%
In the release phase, 18 out of 29 cases involved the gaze moving away from the held position before the hand, while only 5 cases involved the hand moving away first. In six cases, the release was simultaneous.
– Gaze first: 62%
– Hand first: 17%
– Simultaneous: 21%
Finally, in three cases, gaze and hand moved completely separately; that is, the retraction of one ended before the preparation of the other began. In all three cases, the manual gesture began after the gaze. In sum, in all gesture phases, gaze consistently occurred prior to manual movement.
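The gaze-before-hand finding can be tallied per phase from the counts in the text (preparation and hold n = 32, release n = 29). This small script is an illustrative check, not part of the chapter:

```python
# (gaze_first, hand_first, simultaneous) counts per gesture phase, section 2.3.
phases = {
    "preparation": (25, 2, 5),
    "hold": (29, 1, 2),
    "release": (18, 5, 6),
}

for phase, (gaze, hand, simult) in phases.items():
    n = gaze + hand + simult
    share = round(100 * gaze / n)  # percentage of gaze-first cases
    print(f"{phase}: n={n}, gaze first in {share}% of cases")
    # Gaze-first cases outnumber all other cases combined in every phase.
    assert gaze > hand + simult
```

The computed gaze-first shares (78%, 91%, 62%) match the figures in the lists above.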

2.4. Co-occurrence with deictic utterance
Of 115 gestures, 51 co-occurred with speech including deictic elements. Of these, 24 were proximal, 14 distal, 9 medial, and 7 included the verbal deictic. In sum, slightly less than half of the recorded gestures co-occurred with deictic speech, and of these, proximal forms were most common.
– Deictic-gesture co-occurrences: 43%
– Proximal: 47%
– Distal: 28%
– Medial: 18%
– Verbal: 14%

2.5. Co-occurrence with spatial case suffixes
Of 115 gestures, 54 co-occurred with speech marked with case suffixes that conveyed spatial meaning. Of these, 22 were "through/about," 15 were locatives, 11 were goal, 4 were ablative, and 2 were accusative. In sum, nearly half of the gestures co-occurred with spatial case suffixes. Of these, -pa ('through/about') was by far the most common.
– Spatial case suffix co-occurrences: 47%
– -pa (GEN): 41%
– -chaw (LOC): 28%
– -man (GOAL): 20%
– -pita (ABL): 7%
– -ta (ACC): 4%

2.6. Co-occurrence with utterances conveying motion or location
Of 115 gestures, 70 co-occurred with utterances conveying information about location only, 35 co-occurred with utterances conveying information about movement (many of which also implied location information), and 10 were ambivalent.
– Location: 61%
– Movement: 30%
– Ambivalent: 9%
The ambivalent cases all involved the conjunction of a noun with the suffix -pa, but without a verb. Because this suffix can denote a path "through," "about," or "via," as well as location in a general proximity, it must be disambiguated by the surrounding discourse. In four cases the surrounding discourse suggested movement, and in two cases location. A further case involved the "fictive movement" (Streeck 2009: 136) of the path of a road. Finally, three cases involved true ambivalence: the utterance could be interpreted both as a path toward a place and as the location of that place. This is because the suffix -pa can function to anchor the vector of the path toward the target to a landmark and/or to locate the target in the general proximity of a landmark (Weber 1996: 286). The following utterance illustrates this point: (1)

Taqay    washa-pa         Waraqayuq
distal   same.level-GEN   Waraqayuq
'Waraqayuq is around over yonder' or 'Waraqayuq is through over yonder'

During this utterance, the speaker also makes an index-finger pointing gesture accompanied by a gaze in the same direction. The arm is fully extended to the speaker's right side and lifted at an angle of 50 degrees from the horizon, and the index finger is bent slightly to point behind the speaker's back, suggesting a vector to the northeast. Just before pronouncing the name of the place described, Waraqayuq, the speaker slightly lifts both torso and arm, then brings them down with the first syllable of the word so that the pointing hand comes to rest at an angle of 30 degrees from the horizon. Both the movement and the first held phase can be interpreted in a way consistent with the meaning of -pa: the downward movement metaphorically indicates the proximity of Waraqayuq to the general area indicated by the first, higher point, while the first held phase of the point, co-occurring with washa-pa, indicates the path. The second held phase, co-occurring with Waraqayuq, indicates the precise location.


2.7. Origo transposition, gesture accuracy, and frames of reference
Of 115 gestures, 18 involved utterances with a transposed origo (see box 2), 91 involved utterances with origos consistent with the place of the speech event, and 6 were indeterminate. All non-transposed pointing gestures toward physical referents were spatially accurate to within approximately 10 degrees of error. Of the 18 gestures that involved speech with transposed origos, there were four clear cases in which the points involved absolute orientation. In other words, the point could only indicate the referent if the origo was imagined to be at a location specified in the interaction rather than at the place of interaction itself. In the other cases no determination could be made for one of the following reasons: (i) I was unfamiliar with the location of the referent, (ii) the gesture indicated only a direction and thus did not involve a transposition of the origo, or (iii) the referent, origo, and location of interaction formed a straight line such that there would be no distinction between a transposed and a non-transposed gesture. It is critical to note that speakers never produced points in a relative frame of reference in cases of transposed origos. In sum, speakers clearly can and do produce transposed gestures, and in doing so likely utilize an absolute frame of reference. Nevertheless, there is also a strong preference for non-transposed gestures. This may be related to the argument in section 4 that Ancash Quechua speakers' gestural habits reflect their reliance on looking at the landscape for orientation and wayfinding, but I hesitate to draw any conclusion without more extensive data.
– Non-transposed gestures: 79%
– Transposed gestures: 16%
– Gestures with indeterminate transposition: 5%

box 2
The origo is the source for interpreting deictic meaning (Bühler 1982) or, more generally, the ground of any indexical reference (Hanks 1990: 38). As Le Guen (2011) showed, frames of reference are only involved in pointing gestures in which the origo is other than the location of the speech event (transposed pointing gestures).

3. Body and world: The phenomenological approach to gesture

It would be in some sense absurd to leave the body out of an account of gesture, as has been shown by researchers who have argued for its experiential basis (e.g., Kendon 2004; Müller 1998; Streeck 2009). Such phenomenologically informed work has shown that gestures originate in physical practices and lived experiences and have meaning by virtue of their contiguity with them. It is by virtue of having experienced typing on a keyboard that I can communicate my plan to go write to another person by waving my fingers in the air in front of me. But can a phenomenological approach to gesture do more than simply show that gestures mean through indexical links to experience? Specifically, I ask why
we should stop with the body. Drawing on Ingold's critique of the concept of materiality, Streeck wrote that gestures should be understood "as the work of those who inhabit or dwell in the world" (Streeck 2009: 83). So why exclude the world in which the body dwells? In the typing gesture, the manual sign refers iconically to its referent in a classic Peircean sense, but it does so by means of the indexical link between the hand movement (the sign-vehicle) and the experience (which serves here as interpretant). In the case of communicating about the land, however, the indexical ties connecting gestures to the experience of being in and moving through the earth are more complex. What indexical links make it possible for the hand to establish an iconic relation to the landscape? No one can pick up a mountain peak or a river, nor can anyone manually reproduce the action of walking toward or arriving at these places. Yet gestures are readily used to refer to such places and actions. Just as my waving fingers would be meaningless to you if you did not share the experience of typing, a silent point in the direction of a house currently out of sight cannot evoke the image of its resident if knowledge of the house's location is not shared among interlocutors. This example shows how a simple act of communication requires the coordination of body, geography, language, and social relations. It would of course be possible to analyze such cases in terms of the internal consistencies and patterns of gesture-speech co-occurrence, hand shapes, frames of reference, etc. But what can we learn from an approach that attends not only to patterns in communicative practices themselves, but also to how they pattern with non-communicative practices and background knowledge such as social relations, land use, habits of movement through the landscape, and cultural meanings of surrounding places and paths?
Such approaches may help not only to understand gesture, but also the role it plays in the emergence, maintenance, transmission, and evolution of local ways of relating to and thinking about the surrounding world. Further, this approach opens questions that pertain to cognition, culture, and language. Does bodily movement play an instrumental role in the process by which particular ways of thinking about space become shared in a population? Is this necessarily contiguous with linguistic factors, or can it overlap with linguistic groups? Might gestures mediate the role of nonverbal practice in language change? Such questions can be fruitfully investigated only by attending to patterns across communicative and non-communicative practices and knowledge. In the following section I examine such a pattern in my data.

4. Look, point, handle: The sequence of manual gesture and gaze

In section 2.3 I described the finding that manual gestures and gaze pattern together in a regular way in spatial gestures. Specifically, manual pointing gestures are regularly preceded by a gaze in the same direction. I interpret this as an indication of a special relation between the visual and haptic experience of one's surroundings. Whether this relation is universal or culturally specific remains to be seen. However, I would suggest that the particular relation in the case presented here has to do with a locally salient way of perceiving the surrounding world. In Huaripampa, the landscape is such that the most efficient and effective way of orienting oneself is by looking at the shape of the land. The contours of the mountains, the slope of the land, and the position of the sun all provide important information (see Fig. 79.1). But beyond getting one's bearings, the shape of the world also serves as a mnemonic for locating distant places. This is strongly attested in my data and in many more hours of observation and recording: speakers' predominant strategy for

79. Gestures in native South America: Ancash Quechua


Fig. 79.1: Looking east across Huaripampa from one of the five recording locations.

locating distant places is to point out a visible landmark, either verbally or physically, and then use absolute or direct (Danziger 2010) spatial description to place the target in relation to this landmark. Clearly relevant here is the fact that in the vast majority of instances of pointing that involved both head and hand movements, the gaze moved toward, reached, and returned from the target before the hand or finger did. This interlocking pattern of gaze and hand suggests that visual experience provides a fundamental basis for making the landscape "graspable" for meaningful manipulation through manual gesture.

Streeck notes that the nature of hands obviates the dichotomy between sign-oriented (creative) and embodiment-oriented (presupposing) theories of meaning, because the hands are used both for data gathering and for sign production (2009: 69). While the dichotomy that Streeck mentions is itself questionable – Merleau-Ponty (1962) essentially argued that perception (not "data gathering") is itself a creative act – the observation that the hand is frequently involved both in experiencing and in representing the same phenomenon is insightful. But the same is true of gaze. We use eye and head movements on the one hand to locate and follow the movements of others, ourselves, and things in the world, and on the other hand to communicate about these places and paths.

The following example demonstrates how hands are used to "handle" the landscape, as well as the role of gaze in this process. An older man from Huaripampa is explaining how fish that come down the river from Pamparahu Lake are killed when the river joins another that is contaminated with poison. I have transcribed gesture and gaze only in the first part of the utterance, as it is the only part relevant here.


(2)

      (.)   Haynam   taqay    tinku     encuentro-chaw   na
            then     distal   meeting   meeting-loc      already
      Gaze:  prep | hold ............................................. | retract
      Hand:  prep | 1 .............. | ............... | beat | beat
      "Then, once in yonder meeting [of two rivers],"

(3)

      wanu-tsi-lla-n    pobre   llullu   pescadito-kuna-ta
      die-caus-just-3   poor    tender   little.fish-pl-acc
      "it just kills the poor, tender little fish."

Fig. 79.2a: Index finger point, arm nearly fully extended, 20 degrees from horizon.

Fig. 79.2b: Index and middle fingers separated, in same position as the 79.2a point.

Before beginning the utterance, the speaker starts to move his gaze toward the direction of the soon-to-be-mentioned river confluence, a place obscured by the contour of the land from the current location. As the utterance begins, the gaze reaches the target direction just as his hand begins to move toward the same target. On reaching the second word, a distal deictic, the index finger has reached its holding position, which extends through the pronunciation of the Quechua word for river confluence, tinku. Then, the speaker says ‘encuentro’, the Spanish equivalent of tinku, at the same moment quickly changing the shape of his hand. He now extends both index and middle finger. While his arm remains extended along the same vector as before, the new hand-shape completely changes its mode of meaning (McNeill 1992; Mittelberg 2008). The index finger simply indicated that the vector of the finger and arm together pointed toward the referent tinku. The two fingers together, however, represent the physical form of the rivers’ meeting, while still maintaining the vector toward its location. In other words, the hand’s mode of signification shifted from indexical to iconic, while the referent simultaneously shifted from location to physical form. This series of gestures involved an order consistent with the rest of my data. First, the gaze locates the referent, then the pointing hand,


and only after this is it possible for the hand to begin to engage in a semiotic relation to the referent's physical form.

In conclusion, pointing gestures accompanying Quechua speech in Huaripampa are normatively preceded by a gaze in the same direction. Furthermore, this sequence is itself a prerequisite for engaging gesturally and semiotically with the physical form of the land. But beyond telling us about normative practices in gesture, these facts also suggest that local residents orient themselves to the world around them by looking at the lay of the land. The fact that gaze often precedes the beginning of the utterance also supports this conclusion. This gaze is by no means necessary to locate a place, however: the same sequence of gaze and manual gesture occurs when referents are repeated one after another, and in conjunction with places in plain sight. Speakers surely could point first, and on a small number of occasions they do (this may also be tied to the dynamics of the interaction, which is unfortunately beyond this chapter's scope). The gaze-first sequence thus seems to be habitual rather than instrumental. This further supports the conclusion that the sequence of gaze and manual gesture speaks to the way residents orient themselves in the landscape. Specifically, the sequence reflects the habitual experience of looking around to get one's bearings, find the attitude of the sun, or pick out landmarks relevant to finding one's way.

These facts open the question of whether this normative sequence is reflected in the gestural habits of other populations (see box 3). Would we find a difference along the lines of Ingold's distinction between transport – movement from point to point – and wayfaring – movement along a path (2011: 149)? For example, how would these data compare to the gestural habits of a group of New York subway commuters who are accustomed to moving from stop to stop?
Ingold argues that such a group would perceive the path itself as irrelevant, as opposed to "wayfarers" like hunters, for whom the path is an important source of knowledge, as much the goal of travel as the destination itself. Whether this distinction bears out in gestural habits is an interesting question for further research. Investigating the phenomenological aspects of gesture is fundamentally a task that crosses disciplinary methodologies, as it ultimately requires studying language, cognition, local geographies, cultural practices, and social relations. As Sheets-Johnstone (2011) wrote, phenomenology should not be taken as a speculative philosophy, but rather as one that can be validated (or invalidated) and that leads to a trans-disciplinary task.
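The gaze-then-point regularity described above rests on comparing onset times of gaze and hand movements in time-aligned annotations. As a minimal illustration – with hypothetical tier names, data, and timings, not the author's actual coding scheme – such a sequencing check could be sketched as:

```python
from dataclasses import dataclass

# Illustrative only: a toy representation of time-aligned annotation
# intervals (as exported from, e.g., a tiered annotation tool), used to
# test whether gaze onset toward a target precedes the manual point.

@dataclass
class Interval:
    tier: str      # e.g., "gaze" or "hand" (hypothetical tier names)
    target: str    # label for the direction or referent
    start: float   # onset in seconds
    end: float     # offset in seconds

def gaze_precedes_hand(annotations, target):
    """True if the earliest gaze at `target` begins before the earliest
    manual point at `target`; None if either movement is unannotated."""
    gaze = min((a.start for a in annotations
                if a.tier == "gaze" and a.target == target), default=None)
    hand = min((a.start for a in annotations
                if a.tier == "hand" and a.target == target), default=None)
    if gaze is None or hand is None:
        return None
    return gaze < hand

# Invented timings loosely modeled on example (2): gaze reaches the
# target direction before the pointing hand starts to move.
utterance = [
    Interval("gaze", "tinku", 0.10, 1.80),
    Interval("hand", "tinku", 0.45, 2.30),
]
print(gaze_precedes_hand(utterance, "tinku"))  # True
```

Applied across a corpus of annotated utterances, a check of this kind would quantify how regularly the gaze-first sequence holds, and flag the small number of point-first cases mentioned above for closer interactional analysis.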

box 3
Related research: Kita (2003) described a gaze-then-point sequence among Tokyo residents pointing to unseen referents. Kita's explanation, however, is that this facilitates "the conceptual choice between LEFT and RIGHT" (2003: 325). In this cognition-centered approach, there is little room to explain the same pattern in a population that practices constant dead reckoning and speaks a language that relies predominantly on absolute and intrinsic frames of reference. Cienki (2005) presented experimental findings that objectifying points or gazes accompanied gestures representing actual physical referents but not those representing metaphoric images, supporting the possibility that such distancing helps to mark the relevance of the referent's physical existence.


5. Conclusion: Going beyond the body and embodiment

Just as the body and its movements are necessary to knowledge, communication, and life (Ingold 2011; Sheets-Johnstone 2011), the paths, places, things, and materials that course through the world are motives and means for living, communicating, and knowing. Hanks' (1990) investigation of deixis among Yucatec Maya speakers moved in this direction, engaging Merleau-Ponty's philosophy to explain the role of the body and its proprioception in the practice of deictic reference. In doing so, he suggested going beyond the philosopher's notion of schéma corporel to what he called "the corporeal field" (1962: 85). This concept was intended to go beyond the body itself by including a space that is contextually and culturally defined, can be inhabited by co-participants in an interaction, and can be transposed (as in the case of a transposed origo) to other locations (1962: 94).

The data presented in this chapter draw attention to cases in which the corporeal field expands to include distant parts of the landscape. I have shown that spatial gestures in Ancash Quechua allow for this expansion of the corporeal field by engaging with the world first through vision, then through pointing gestures that direct attention, and finally through gestures that bring the world into close semiotic contact with the body. If we consider this as a way of embodying representations of the world, what exactly is being embodied? When confronting communication about a wide-ranging territory, the very concept of embodiment becomes problematic, as it re-inscribes the same separation of body and world that the gestural practices I have described aim to overcome by creating the semiotic conditions for handling the earth, even transforming body parts and their movements into places and paths.
To return to Merleau-Ponty: if it is the particular ways in which the body perceives the world that produce the "known" world, then embodiment is not the problem, since the world is always already embodied. Instead, the problem is how to disembody the world, to objectify it as a target of reference, and to make it semiotically, linguistically, and gesturally manipulable (see box 3). The data presented here represent a strategy for doing just that: looking and pointing both contribute to the distancing and objectification of knowledge about the landscape so that it can then become an object of the semiotic processes of communication.

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. 1224697 and the Wenner-Gren Foundation for Anthropological Research. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation or the Wenner-Gren Foundation. This work would not have been possible without the participation, understanding, and curiosity of the residents of Huaripampa and Canray Grande, especially Mary Luz Roberta Huerta Cacha, Rolando Marcelo Huerta Cacha, Angelica Gloria Cacha, Pascual León Villanueva, Rubén Alejo Trejo, Donato Molina Rojas, and Marco Mallqui Villanueva, as well as the assistance of my Quechua instructor, César Vargas Arce. I also thank Alan Cienki, Cornelia Müller, and Michael Lempert for their guidance with early Ancash Quechua gesture data, and Bruce Mannheim, Barbra Meek, and Webb Keane for their support in the formation of my project.


6. References

Adelaar, Willem 2004. The Languages of the Andes. Cambridge: Cambridge University Press.
Bourdieu, Pierre 1977. Outline of a Theory of Practice. Cambridge: Cambridge University Press.
Bourdieu, Pierre 1984. Distinction: A Social Critique of the Judgment of Taste. Cambridge: Harvard University Press.
Bühler, Karl 1982. The deictic field of language and deictic words. In: Robert Jarvella and Wolfgang Klein (eds.), Speech, Place, and Action: Studies in Deixis and Related Topics, 9–30. Chichester: Wiley.
Cienki, Alan 2005. Image schemas and gesture. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 421–442. Berlin: Mouton de Gruyter.
Cienki, Alan and Cornelia Müller 2008. Metaphor and Gesture. Amsterdam: John Benjamins.
Danziger, Eve 2010. Deixis, gesture, and cognition in spatial Frame of Reference typology. Studies in Language 34(1): 167–185.
Enfield, N. J. 2009. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge/New York: Cambridge University Press.
Hanks, William 1990. Referential Practice: Language and Lived Space among the Maya. Chicago: University of Chicago Press.
Haviland, John 1993. Anchoring, iconicity, and orientation in Guugu Yimithirr pointing gestures. Journal of Linguistic Anthropology 3(1): 3–45.
Hintz, Daniel 2011. Crossing Aspectual Frontiers: Emergence, Evolution, and Interwoven Semantic Domains in South Conchucos Quechua Discourse. Berkeley: University of California Press.
Ingold, Tim 2000. The Perception of the Environment: Essays on Livelihood, Dwelling and Skill. London: Routledge.
Ingold, Tim 2011. Being Alive: Essays on Movement, Knowledge and Description. New York/London: Routledge.
Julca Guerrero, Félix 2009. Quechua Ancashino: Una Mirada Actual. Lima: CARE Perú.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary R. Key (ed.), Relationship of Verbal and Nonverbal Communication, 207–228. The Hague: Mouton.
Kendon, Adam 1986. Some reasons for studying gesture. Semiotica 62(1/2): 3–28.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kita, Sotaro 2003. Interplay of gaze, hand, torso orientation, and language in pointing. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 307–328. London: Lawrence Erlbaum Associates.
Larsen, Helen 2008. Los Sufijos Derivacionales del Verbo en el Quechua de Ancash. Lima: Instituto Lingüístico de Verano.
Le Guen, Olivier 2011. Modes of pointing to existing spaces and the use of frames of reference. Gesture 11(3): 271–307.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
Merleau-Ponty, Maurice 1962. Phenomenology of Perception. Translated from the French by Colin Smith. London: Routledge.
Mittelberg, Irene 2008. Peircean semiotics meets conceptual metaphor: Iconic modes in gestural representations of grammar. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 145–184. Amsterdam: Benjamins.
Müller, Cornelia 1998. Iconicity and gesture. In: Serge Santi, Isabelle Guaïtella, Christian Cavé and Gabrielle Konopczynski (eds.), Oralité et Gestualité: Communication Multimodale, Interaction, 321–328. Paris: L'Harmattan.
Parker, Gary 1976. Gramática Quechua: Ancash-Huailas. Lima: Instituto de Estudios Peruanos.
Sapir, Edward 2002. The patterning of culture. In: Judith T. Irvine (ed.), The Psychology of Culture, 103–123. Berlin: Mouton de Gruyter.


Sheets-Johnstone, Maxine 2011. The Primacy of Movement. Expanded second edition. (Advances in Consciousness Research 82.) Amsterdam: John Benjamins.
Silverstein, Michael 2004. "Cultural" concepts and the language-culture nexus. Current Anthropology 45(5): 621–652.
Streeck, Jürgen 2009. Gesturecraft: The Manu-facture of Meaning. Amsterdam: John Benjamins.
Streeck, Jürgen 2013. Interaction and the living body. Journal of Pragmatics 46(1): 69–90.
Weber, David 1996. Una Gramática del Quechua del Huallaga (Huánuco). Lima: Ministerio de Educación.
Wilkins, David P. 2003. Why pointing with the index finger is not a universal (in socio-cultural and semiotic terms). In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 171–215. London: Lawrence Erlbaum Associates.

Joshua Shapero, Ann Arbor (USA)

80. Gestures in native Mexico and Central America: The Mayan cultures

1. Introduction
2. Bodily aspects of speaking: Kinesics and gaze
3. Co-speech gestures
4. References

Abstract

The systematic study of kinesics, gaze, and gestural aspects of communication in Central American cultures is a recent phenomenon, most of it focusing on the Mayan cultures of southern Mexico, Guatemala, and Belize. This article surveys ethnographic observations and research reports on bodily aspects of speaking in three domains: gaze and kinesics in social interaction, indexical pointing in adult and caregiver-child interactions, and co-speech gestures associated with "absolute" (geographically based) systems of spatial reference. In addition, it reports how the indigenous co-speech gesture repertoire has provided the basis for developing village sign languages in the region. It is argued that studies of the embodied aspects of speech in the Mayan areas of Mexico and Central America have contributed to the typology of gestures and of spatial frames of reference. They have refined our understanding of how spatial frames of reference are invoked, communicated, and switched in conversational interaction, and of the importance of co-speech gestures in understanding language use, language acquisition, and the transmission of culture-specific cognitive styles.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1206–1215.

1. Introduction

What kinds of differences might there be in gestures and other embodied aspects of communication across cultures? There are cultural conventions governing the deployment of


gaze and bodily deportment, as well as emblematic or "quotable" gestures which, like words, are conventionalized in particular communities and vary widely (Kita 2009). But some have argued (e.g., McNeill 1992) that iconic gestures are "natural", directly reflecting thinking. The corresponding assumption is that their meanings are more or less the same across cultures. Yet most gestures are spatial, and if the cognition that drives them varies – for example, spatial cognition associated with different frames of reference – then what look like similar gestures might have rather different cognitive representations. This article surveys work in Central America – broadly construed as the region extending from Mexico to Panama – that addresses these issues, focusing mainly on the Mayan areas.

There are over 100 named indigenous groups in Mexico and Central America. (The Mexican CDI [National Commission for the Development of Indigenous Peoples] identifies 62 indigenous groups in Mexico; the website www.native-languages.org lists 52 for the rest of Central America.) Some 16 million people identify with these groups, speaking languages from around 20 distinct language stocks. They live in varying degrees of integration with the surrounding dominant culture, although all are subject to its legal and educational systems. In many of these communities the indigenous language is still spoken and a strong sense of identification with the native culture is maintained. Indigenous languages spoken in this area range from the highly endangered Lacandon (with only a few elderly speakers) to Yukatek Maya (with nearly a million speakers). These indigenous peoples of Mexico and Central America have been the focus of intense linguistic and anthropological study, especially in the past 50 years. As a result there are hundreds of published descriptions of indigenous languages and ethnographic descriptions of their ways of life, culture, and belief systems.
Yet what we can distill from these descriptions concerning everyday practices of bodily comportment, kinesics, and gesture is very limited. Occasionally a linguistic description reports on associated gestural practices. For instance, Zavala (2000: 144) mentions hand-shape gestures associated with measuring the size of non-present objects among the Akatek Maya, and suggests that this non-linguistic classification system, which is even more specific than the linguistic ones, reveals that classification is deeply embedded in Akatek cultural routines. Similar size gestures are among the eleven emblematic gestures of the Tzintzuntzan Tarascans documented by Foster (1949: 237); these, he argues, appear to be widespread in Mexico and have no special relationship to Indian identity. Vogt (1969: 239–240) describes a ubiquitous interactional marker of seniority displayed in a greeting practice among the Tzotzil Maya of Zinacantán, where a lower-status man bows before a higher-status one and is released from the bow by the latter putting the back of his hand on the bower's head. But in general, the local habits of everyday social interaction are invisible in the reports of linguists and anthropologists who have lived among these cultural groups.

A major exception is linguistic anthropologists, who over the past 50 years have carried out many studies of social interaction among particular indigenous groups. One of the earliest was Sherzer's (1983) study of the communicative practices of the Kuna of Panama, which described in detail their practice of "lip pointing" (1983: 169–176). But the majority of this work is concentrated in the Mayan areas of southeastern Mexico, Guatemala, and Belize, and these studies are the source of most of what is known about the bodily communicative practices of native groups in naturally occurring interactions in this part of the world.
Here I concentrate on three aspects where the available information for Mayan groups is particularly rich: gaze and kinesics in social interaction, indexical pointing among adults and between caregivers and infants, and co-speech gestures associated with spatial words and spatial descriptions.


2. Bodily aspects of speaking: Kinesics and gaze

Kinesic deportment in interaction differs across social groups – for example, some tolerate much closer physical proximity than others when initiating an encounter or casually interacting. In the Tzeltal Mayan community of Tenejapa in southern Mexico, a person approaching the house of another with the purpose of interacting will initiate interaction from 20 feet or more away, calling greetings to someone who replies invisibly from within the house. The whole interaction may take place from this distance, or, if the visitor is summoned to sit down for sustained interaction, participants tend to arrange themselves at least 6 to 10 feet apart. Adults are comfortable conversing at length from a distance of 20 feet, and only intimates out of view of the public (or children, or drunks) interact with a physical separation of just a foot or two. A norm of physical restraint governs the control of the body in public situations (Brown 1979; see also Tax 1964), which constrains gesture and influences the nature of co-speech bodily communication practices.

The deployment of gaze while interacting also varies across cultural groups. Among the Tzeltal Maya of Tenejapa, adults generally follow a practice of gaze avoidance which predisposes them to arrange themselves side by side rather than face to face while conversing, and to join gaze only intermittently and briefly during interaction. The same applies to the Tzotzil of Zinacantán, where direct eye contact is an index of close friendship (Freeman 1989). In these communities, prolonged mutual gaze and animated gesture are features of conflict situations – e.g., court cases – which contribute directly to the communication of hostility in these contexts (Brown 1990).
The absence of direct gaze in most contexts is associated with a marked tendency for conversationalists to occupy themselves with "displacement activities", engaging hands and eyes in physical actions like weaving, smoking, or fiddling with objects (Tax 1964). Brown and Levinson (2005) relate gaze avoidance in Tenejapa to a characteristic of the Tzeltal conversational response system. Rather than utilizing gaze and facial expressions as a resource for rapid communication of response (as, for instance, the Rossel Islanders of Papua New Guinea do), the Tzeltal response system relies on extensive verbal repetition (Brown 1998); this appears to be an areal feature found in many Mesoamerican indigenous communities (Brown, Le Guen, and Sicoli 2010).

A comparative study of gaze practices during question-answer sequences in casual interactions in three unrelated cultures (Rossano, Brown, and Levinson 2009) found that the Tzeltal Maya deploy gaze somewhat differently than speakers in two other cultural contexts, Italians and Rossel Islanders in Papua New Guinea. The gaze behavior of question-speakers is similar, but Tzeltal question-recipients gaze at their interlocutor much less than do Italians or Rossel Islanders. Tzeltal interactors also showed significantly less mutual gaze in the question-answer context. In all three cultural settings alike, speakers gaze more during questions initiating repair than in information questions or confirmation requests. But unlike in the other two languages, where recipient gaze seems to be related to doing recipiency, in Tzeltal the absence of recipient gaze was not a good predictor of lack of response to the question. This is not because Tzeltal recipients are looking at something else specific; rather, they look down or into the middle distance, displaying different "home positions" for the eyes when not looking at the other's face. Nonverbal signals of recipiency (nods, headshakes) are infrequent (Brown, Le Guen, and Sicoli 2010).
The Tzeltal system of verbal recipiency thus seems built on the assumption that gaze is not expected as an indicator of engaged recipiency.


In short, comparative work on gaze practices shows that the deployment of gaze in interaction is systematic and interactionally managed, but not entirely in the same ways across cultures. Mesoamerican cultures appear in general to be relatively guarded in physical expressiveness and mutual gaze, prompting the hypothesis that this interactional restraint derives from earlier Mesoamerican cultures, which were more hierarchically organized than those of the present-day populations.

3. Co-speech gestures

The two types of co-speech gesture that have been investigated in depth in indigenous Central America are connected with spatial reference: pointing gestures and spatially iconic gestures. Deictic gestures of pointing at things in the environment, and the "presentational deixis" of the linguistic accompaniments to handing objects to others, are contexts where language and gesture inseparably carry the message together. Hanks' (1990) detailed study of deictics in Yukatek Maya language use emphasizes deixis as a "referential practice" involving conceptualized bodily spaces, not only of a speaker but, for some deictic terms, of the speech scene including other participants; what is embodied in this case is not a property of an individual but of multiple interacting bodies. Hanks' work demonstrates the many complex ways in which body spaces are involved in deictic usage.

The most thorough study of gesture in naturally occurring conversations for this region is Haviland's work on the Zinacantec Tzotzil. Like Hanks, Haviland emphasizes the nature of pointing as part of the linguistic system of determiners and pronouns (2003: 139). Pointing in Zinacantán is morphologically complex, distinguishing reference to individuals from mere direction: the index finger is used for individual referents located in a particular direction, while a flat hand (palm held vertically, thumb up) is used for vectors or directions. Other body parts (chin, lips, eyes), as well as objects held in the hand (tools), may be used to point with. Different aspects of a gesture's form relate to the direction, shape, and proximity of the referent. These are not, Haviland argues, simple referring devices, but complex semantic portmanteaux analogous to spoken demonstratives (Haviland 2003: 162).

The deictic gestures of caretakers with their infants are also well documented in the Mayan area (de León 1998, 2011 and Haviland 1998, 2000 for Tzotzil; Brown 2011 and Liszkowski et al.
2011 for Tzeltal; Le Guen 2011a for Yukatek). Adults and child caretakers index-finger point for infants, drawing their attention to things that will attract them (e.g., birds, chickens) and warning against things that they should be trained to fear (strangers, dogs). In Tenejapa they do so regularly from the time the child is about 10 months old, and, despite a comparatively low level of infant-caregiver interaction during the first 10 months, Tenejapan Tzeltal babies index-finger point at objects, drawing an interlocutor into joint attention towards them, at about the same age as babies in other cultures where interaction with infants is more intensive (Brown 2011; Liszkowski et al. 2011), suggesting a universal basis for pointing. Gestural routines between infants and their caregivers (e.g., holding out and withholding an object) develop well before the baby produces words, and caregivers interpret infants' gestures as having referential and speech-act (e.g., imperative) significance. Infants' first words are produced in routines combined with gestures in ways familiar from the study of infant-caregiver interactions in other societies (de León 1998; Haviland 2005).


The most extensive work on bodily aspects of communication in this region has to do with co-speech gestures associated with spatial reference. Here a phenomenon identified in connection with spatial language has prompted extensive investigation into the co-speech gestures associated with talk about spatial locations. The phenomenon is "absolute" frames of reference, which are a feature of spatial language in many communities throughout the Maya region. When locating an object in relation to another, speakers can take different perspectives, choosing from among three distinct frames of reference (Levinson 2003): they can use an axis projected from the speaker's own viewpoint (a "relative" or "egocentric" frame of reference, as in "to the left of the tree"), an axis projected from the reference object (an "intrinsic" frame of reference, as in "at the front of the car"), or an axis utilizing vectors extrinsic to the scene (an "absolute" or "geocentric" frame of reference, as in "north of the church").

In the Mayan area, spatial language and thinking rely heavily on geocentric frames of reference, and this goes along with a remarkable tendency to gesturally represent direction of motion using an absolute frame of reference and to correctly orient pointing and spatial-relating gestures in relation to their real-world referents' locations (Levinson 2003). Haviland (2000), Levinson (2003), and Le Guen (2006, 2009, 2011a, 2011b) have described in detail how the pointing gestures of different Mayan groups conventionally use "correct" geographic orientation for what is being pointed out, even when it is far away and out of sight. Locations and events that involve directional vectors are pointed to with precisely oriented gestures, and characteristics of the terrain and the relative location of objects are gesturally indicated.
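The contrast between relative and absolute frames of reference can be made concrete with a small computational sketch. This is purely illustrative – the coordinates, headings, and function names are invented for the example, not drawn from the studies discussed here – showing how the same scene yields different descriptions depending on the frame of reference adopted. The egocentric case is deliberately simplified: only the speaker's facing direction is used to project left/right.

```python
import math

def absolute_description(reference, target):
    """Geocentric description: cardinal direction of target from reference
    on a (x = east, y = north) grid, picking the dominant axis."""
    dx, dy = target[0] - reference[0], target[1] - reference[1]
    if abs(dy) >= abs(dx):
        return "north" if dy > 0 else "south"
    return "east" if dx > 0 else "west"

def relative_description(heading_deg, reference, target):
    """Egocentric description: project the reference->target vector onto
    the speaker's own axes. heading_deg: facing direction (0 = north,
    90 = east, compass convention)."""
    dx, dy = target[0] - reference[0], target[1] - reference[1]
    angle = math.degrees(math.atan2(dx, dy))   # compass angle of the vector
    rel = (angle - heading_deg) % 360          # rotate into speaker's axes
    if rel < 45 or rel >= 315:
        return "in front of"
    if rel < 135:
        return "to the right of"
    if rel < 225:
        return "behind"
    return "to the left of"

# A ball lying west of a tree, described by a speaker facing north:
tree, ball = (0.0, 0.0), (-3.0, 0.5)
print(absolute_description(tree, ball))            # "west" (of the tree)
print(relative_description(0.0, tree, ball))       # "to the left of" (the tree)
print(relative_description(180.0, tree, ball))     # "to the right of" (the tree)
```

The point the sketch makes is the one at issue in the comparative literature: the absolute description stays constant however the speaker turns, while the relative description flips with the speaker's orientation, so seemingly similar gestures in the two systems encode quite different underlying representations.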
Haviland’s example (2003: 149–150) of a man describing the location of some trees in a woodland some distance away shows how complex spatial configurations are conveyed with gestures accompanying a general verbal spatial description (“down”, “below”, “above”): an absolute pointing gesture (toward the actual location of the woodland) is accompanied by iconic indications of the terrain (fingers wiggling), the location of particular trees (a backhand sweep in a directionally accurate direction “above” a different group of trees), and the spatial configuration of objects there. Such descriptions can make use of transposition, where speakers transpose their perspective to another location and gesture absolutely from there, requiring interlocutors to imagine themselves transposed to that location in order to interpret the gestures (Haviland 1996, 2000). Haviland’s work illustrates the semiotic complexity of pointing gestures, distinguishing local spaces, anchored in Mayan contexts to the geography, from interactional space, which is free from geographic reality; narrated spaces are laminated over these (2000: 36). Absolute gestures are found in utterances both with and without accompanying cardinal direction words. Le Guen’s work on Yukatek Maya (2006, 2009, 2011a, 2011b) shows that even when speakers are not sure where the cardinal directions are, they point accurately to places and maintain an absolutely oriented mental map of their territory. The results of a series of experiments comparing men’s and women’s knowledge of the semantics of spatial terms, their performance on nonlinguistic tasks, and the gestures they produced show striking gender differences in knowledge of the semantics of spatial terms but an equal preference for a geocentric frame of reference in nonverbal tasks. Le Guen’s conclusion is that the preferred frame of reference in Yukatek Maya is detectable only through the analysis of co-speech gesture, not through speech alone.
The reliable spatial accuracy of gestures accompanying speech is likely an important element promoting children’s acquisition of the absolute spatial reference system (Brown and Levinson 2000, 2009; Le Guen 2011a).

Preferred spatial frame of reference has also been shown to influence gesturing on the lateral axis. Kita, Danziger, and Stolz (2001) and Danziger (2008) report that the Mopan Maya of Belize and the Yukatek Maya of Mexico have different preferred frames of reference: Mopan uses only intrinsic frames, whereas Yukatek uses relative and absolute frames as well. The Mopan pattern of habitual language use correlates with an asymmetry in the conceptualization of space: the Mopan treat the two sides of a represented body symmetrically, displaying insensitivity to mirror-image reversals. Analogously, in their gesturing, to-the-right and to-the-left relations do not play a contrastive role. When telling traditional mythical stories, the Mopan, in line with the absence in Mopan of both a linguistic distinction between left and right and a relative frame of reference, do not use the lateral axis contrastively, whereas the Yukatek Maya do. In gestures representing contrasting aspects of motion (e.g., source vs. goal) and location (e.g., two different entities located in distinct places), the Mopan tended to use sagittally differentiated gestures, while the Yukatek used the lateral axis to distinguish them. This distinction extended to gestural representations of time, which in these data were represented sagittally by the Mopan but aligned on the lateral axis by the Yukatek. Kita, Danziger, and Stolz (2001) argue that this difference in gesturing is not just a thinking-for-speaking effect, but reflects deeper differences in spatial cognition in these two communities. Le Guen and Pool Balam (2012) observe that for Yukatek geocentric coders, metaphorical pointing for time (e.g., to the back for the past) appears to be prohibited.
Their explanation is this: since the Yukatek Maya make use of the full range of the gestural space for actual reference to objects in real space, using this geocentric frame of reference presumes that any point in any direction is by default a reference to an existing direction or an existing place identified in the speech or the context. The whole space surrounding the speaker (the gestural space) is thus relevant for spatial reference, and only two parts of the surrounding space are co-opted for time reference. Although Yukatek Maya speakers do not use a linear metaphorical representation of time, there is still a space-to-time metaphorical mapping. The “now” or “precise/specific” time is indicated by pointing towards the space at the speaker’s feet, i.e., mapped onto the spatial “here”. In accordance with a spatial “up is far/remote” rule, remote time (either past or future) is gestured towards the space above the speaker’s head. Additionally, time unfolding is represented via a cyclical metaphor, using a corresponding “rolling” gesture. A contrasting but still nonlinear representation of time in gestures is documented for another Mayan language in a recent dissertation on the co-speech gestures of Chol speakers, where temporal progression is represented not as unidirectional movement along an abstract timeline but as dyadic, non-linear connections between events, often with separate movements in different directions (Rodriguez 2013). Further evidence for the flexibility of gestural time reference, despite the predominance of an absolute spatial linguistic system, comes from the Tzeltal Maya of Tenejapa (Brown 2012). A linguistically preferred frame of reference is not a straitjacket; speakers can use more than one to switch perspectives.
Danziger’s work (2008, 2010) on spatial language and deictic gestures among the Mopan Maya has motivated her to propose an additional ego-based frame of reference (“direct”), alongside the standard three (absolute, relative, and intrinsic), which she argues is better able to account for frame of reference usage in co-speech gestures. She analyses a narrative telling in which, at one point, the speaker shifts linguistically from subjunctive (Irrealis) to completive (Realis) inflexion and correspondingly to a new perspective revealed in gestures, which switch from frontal (in local space) to lateral, absolutely anchored in the geography. Danziger (2008) claims that this is a case of gestural self-repair that “literally makes visible” the narrative’s switch from invoking a virtual non-oriented (Irrealis) space, where gesture occurs in front of the body, to a view of a real place located in relation to the speaker’s own body, marked by lateral gestures.

Gestures can metaphorically refer in domains other than that of time. For example, sociocentric pointing (e.g., pointing to the house of an associated relative to refer to an individual) is a conventional form of person reference in these Mayan communities (Brown 2007; Haviland 2003, 2007). Pointing to or touching parts of one’s own body while referring to the body part of another in a narration is another example of how speakers transpose gestures to imagined spaces.

Given the large repertoire of conventionalized gestures in the Mesoamerican region, it is perhaps not surprising that indigenous natural sign languages draw upon this repertoire for linguistic signs. Reports of these “village sign systems” are mostly limited to documenting the repertoire of signs in a particular community, along with sociolinguistic observations on attitudes to their use (Du Bois 1978; Fox Tree 2009; Johnson 1991; Schuman 1980; Schuman and Cherry-Shuman 1981). It is argued that these indigenous sign languages are autonomous from the spoken language in the community, and entirely distinct from the sign languages promoted nationally (Mexican Sign Language or LSM in Mexico, Lensegna in Guatemala). There is some evidence of correspondences between the indigenous signs across different communities widely separated geographically, as well as of similarities to hand shape gestures depicted in Mayan hieroglyphs (Du Bois 1978; Fox Tree 2009), suggesting a possible pre-conquest source for these languages.
More detailed ongoing work on Yukatek Maya home sign systems in two villages (Le Guen 2011b, 2012) shows that the repertoires of signs developed in these communities are remarkably similar, and draw heavily on the rich repertoire of co-speech emblematic gestures available in the surrounding communicative activities of hearing Yukatek Maya. In particular, Le Guen shows how Yukatek Maya time co-speech gestures have been promoted into time signs in the two villages, and how Yukatek Maya signers have preserved a non-linear metaphorical representation of time inherited from the surrounding culture. The sign language is not restricted to deaf people and their families; most people in the community command it to some degree. Deaf people are therefore not isolated from the main avenues of productivity and interaction available in the community (Le Guen 2011a; see also Danziger 1996; Fox Tree 2009).

In conclusion, studies of the embodied aspects of speech in the Mayan areas of Mexico and Central America have revealed some interesting cultural characteristics of gaze, kinesics, and spatial gestures. They have contributed to the typology of gestures by identifying and characterizing the kinds of gestures that accompany languages in which an absolute frame of reference is dominant, in contrast to an intrinsic or relative frame of reference. They have been important in refining our understanding of how spatial frames of reference are invoked, communicated, and switched in conversational interaction. In addition, they have provided evidence for the importance of co-speech gestures in understanding language use and language acquisition, at least in the domain of spatially relevant utterances, and shown the important role of gesture in transmitting culture-specific cognitive styles both across generations and across languages (as in the case of spoken Yucatec Maya and the signed language of deaf Yucatec Mayas).
Finally, they have contributed to the increasing ethnographic evidence of the linguistic and sociocultural complexity of communicative gestures and signs, and have added to the theoretical sophistication of discourse taking an embodiment perspective on human communication.

80. Gestures in native Mexico and Central America: The Mayan cultures

4. References

Brown, Penelope 1979. Language, Interaction, and Sex Roles in a Mayan Community: A Study of Politeness and the Position of Women. Ph.D. dissertation, University of California, Berkeley.
Brown, Penelope 1990. Gender, politeness and confrontation in Tenejapa. Discourse Processes 13(1): 123–141.
Brown, Penelope 1998. Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech. Journal of Linguistic Anthropology 8(2): 197–221.
Brown, Penelope 2007. Principles of person reference in Tzeltal conversation. In: N. J. Enfield and Tanya Stivers (eds.), Person Reference in Interaction: Linguistic, Cultural, and Social Perspectives, 172–202. Cambridge: Cambridge University Press.
Brown, Penelope 2011. The cultural organization of attention. In: Alessandro Duranti, Elinor Ochs and Bambi B. Schieffelin (eds.), Handbook of Language Socialization, 29–55. Oxford: Blackwell.
Brown, Penelope 2012. Time and space in Tzeltal: Is the future uphill? In: Asifa Majid, Lera Boroditsky and Alice Gaby (eds.), special issue, Frontiers in Psychology 3: 212.
Brown, Penelope, Olivier Le Guen and Mark Sicoli 2010. Dialogic repetition in Tzeltal, Yucatec, and Zapotec conversation. Paper delivered at the International Conference on Conversation Analysis (ICCA10), Mannheim, Germany.
Brown, Penelope and Stephen C. Levinson 2000. Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In: Larry Nucci, Geoffrey Saxe and Elliot Turiel (eds.), Culture, Thought, and Development, 167–197. Mahwah, NJ: Erlbaum.
Brown, Penelope and Stephen C. Levinson 2005. Comparative feedback: Cultural shaping of response systems in interaction. Paper delivered at the American Anthropological Association meetings, Washington, D.C.
Brown, Penelope and Stephen C. Levinson 2009. Language as mind tools: Learning how to think through speaking. In: Jiangsheng Guo, Elena Lieven, Nancy Budwig, Susan Ervin-Tripp, Keiko Nakamura and Şeyda Özçalışkan (eds.), Crosslinguistic Approaches to the Psychology of Language: Research in the Tradition of Dan Isaac Slobin, 451–464. New York: Psychology Press.
Danziger, Eve 1996. The communicative creation of language: A Mayan case study. Paper delivered at the 95th Annual Meeting of the American Anthropological Association, San Francisco, CA, November 20–24.
Danziger, Eve 2008. Deixis, gesture and spatial frame of reference. Chicago Linguistic Society 39: 105–122.
Danziger, Eve 2010. Deixis, gesture and cognition in spatial Frame of Reference typology. Studies in Language 34(1): 167–185.
de León, Lourdes 1998. The emergent participant. Journal of Linguistic Anthropology 8(2): 131–161.
de León, Lourdes 2011. Language socialization and multiparty participation frameworks. In: Alessandro Duranti, Elinor Ochs and Bambi B. Schieffelin (eds.), Handbook of Language Socialization, 81–111. Oxford: Blackwell.
Du Bois, John W. 1978. Mayan sign language: An ethnography of non-verbal communication. Paper presented at the 77th Annual Meeting of the American Anthropological Association, Los Angeles.
Foster, George 1949. Empire’s Children: The People of Tzintzuntzan. Washington, D.C.: Smithsonian Institution, Institute of Social Anthropology, Publication No. 6.
Fox Tree, Erich 2009. Meemul Tziij: An indigenous sign language complex of Mesoamerica. Sign Language Studies 9(3): 324–366.
Freeman, Susan Tax 1989. Notes from the Chiapas Project, Zinacantan, summer 1959. In: Victoria R. Bricker and Gary H. Gossen (eds.), Ethnographic Encounters in Southern Mesoamerica: Essays in Honor of Evon Zartman Vogt, Jr., 89–100. Albany, NY: Institute for Mesoamerican Studies.
Hanks, William 1990. Referential Practice: Language and Lived Space among the Maya. Chicago: University of Chicago Press.
Haviland, John B. 1996. Projections, transposition, and relativity. In: John J. Gumperz and Stephen C. Levinson (eds.), Rethinking Linguistic Relativity, 271–323. Cambridge: Cambridge University Press.
Haviland, John B. 1998. Early pointing gestures in Zinacantán. Journal of Linguistic Anthropology 8(2): 162–196.
Haviland, John B. 2000. Pointing, gesture spaces, and mental maps. In: David McNeill (ed.), Language and Gesture, 13–46. Cambridge: Cambridge University Press.
Haviland, John B. 2003. How to point in Zinacantán. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 139–170. Mahwah, NJ/London: Lawrence Erlbaum Associates.
Haviland, John B. 2005. Directional precision in Zinacantec deictic gestures: (cognitive?) preconditions of talk about space. Intellectica 2–3(41–42): 25–54.
Haviland, John B. 2007. Principles of person reference in Tzeltal conversation. In: N. J. Enfield and Tanya Stivers (eds.), Person Reference in Interaction: Linguistic, Cultural, and Social Perspectives, 172–202. Cambridge: Cambridge University Press.
Johnson, Robert E. 1991. Sign language, culture and community in a traditional Yucatec Maya village. Sign Language Studies 73: 461–474.
Kita, Sotaro 2009. Cross-cultural variation of speech-accompanying gesture: A review. Language and Cognitive Processes 24(2): 145–167.
Kita, Sotaro, Eve Danziger and Christel Stolz 2001. Cultural specificity of spatial schemas, as manifested in spontaneous gestures. In: Merideth Gattis (ed.), Spatial Schemas and Abstract Thought, 115–146. Cambridge, MA: MIT Press.
Le Guen, Olivier 2006. L’organisation et l’apprentissage de l’espace chez les Mayas Yucatèques du Quintana Roo, Mexique. Ph.D. dissertation, Université Paris X-Nanterre.
Le Guen, Olivier 2009. Geocentric gestural deixis among Yucatec Maya (Quintana Roo, Mexico). In: 18th IACCP Book of Selected Congress Papers, 123–136. Athens, Greece: Pedio Books Publishing.
Le Guen, Olivier 2011a. Speech and gesture in spatial language and cognition among the Yucatec Mayas. Cognitive Science 35(5): 905–938.
Le Guen, Olivier 2011b. Modes of pointing to existing spaces and the use of frames of reference. Gesture 11(3): 271–307.
Le Guen, Olivier 2012. An exploration in the domain of time: From Yucatec Maya time gestures to Yucatec Maya Sign Language time signs. In: Ulrike Zeshan and Connie de Vos (eds.), Endangered Sign Languages in Village Communities: Anthropological and Linguistic Insights, 209–249. Berlin: Mouton de Gruyter & Ishara Press.
Le Guen, Olivier and Lorena I. Pool Balam 2012. No metaphorical timeline in gesture and cognition among Yucatec Mayas. Frontiers in Cultural Psychology 3: 217.
Levinson, Stephen C. 2003. Space in Language and Cognition: Explorations in Cognitive Diversity. Cambridge: Cambridge University Press.
Liszkowski, Ulf, Penelope Brown, Tara Callaghan, Akira Takada and Connie de Vos 2012. A prelinguistic gestural universal of human communication. Cognitive Science 36: 698–713.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
Rodriguez, Lydia 2013. Thinking Gesture: Gesture and Speech in Chol Maya. Ph.D. dissertation, Department of Anthropology, University of Virginia.
Rossano, Federico, Penelope Brown and Stephen C. Levinson 2009. Gaze, questioning, and culture. In: Jack Sidnell (ed.), Comparative Studies in Conversation Analysis, 187–249. Cambridge: Cambridge University Press.
Schuman, Malcolm K. 1980. The sound of silence in Nohya: A preliminary account of sign language use by the deaf in a Maya community in Yucatan, Mexico. Language Sciences 2(1): 144–173.
Schuman, Malcolm K. and Mary M. Cherry-Shuman 1981. A brief annotated sign list of Yucatec Maya Sign Language. Language Sciences 3(1): 124–185.
Sherzer, Joel 1983. Kuna Ways of Speaking: An Ethnographic Perspective. Austin, TX: University of Texas Press.
Tax, Susan 1964. Displacement activity in Zinacantan. América Indígena 24(2): 111–121.
Vogt, Evon Z. 1969. Zinacantán: A Maya Community in the Highlands of Chiapas. Cambridge: The Belknap Press.
Zavala, Roberto 2000. Multiple classifier systems in Akatek (Mayan). In: Gunter Senft (ed.), Systems of Nominal Classification, 114–146. Cambridge: Cambridge University Press.

Penelope Brown, Nijmegen (The Netherlands)

81. Gestures in native Northern America: Bimodal talk in Arapaho

1. Introduction
2. Social and historical background
3. The bimodal format and lexical gestures
4. An extended example
5. Conclusion
6. References

Abstract

Arapaho bimodal talk is the interactional use of language that integrates speech and a large repertoire of conventional gestures. This chapter examines a practice of bimodal talk that uses a two-part grammatical format. Each part of the format features a distinct speech–gesture arrangement, with some formal repetition and semantic overlap between the two parts. Speakers employ this format to display their perspective on a social position they are taking. The bimodal properties of the practice allow recipients of the talk to take the speaker’s perspective, which motivates them to display affiliation with the speaker’s position. As an important feature of Arapaho language use, bimodal talk provides strong support for the concept of multimodal language.

1. Introduction

[The Arapaho] are known as among the best in gesture speech, and used it to such an extent that, until recently, it was supposed their vocal language was so poor as to make it necessary; in fact, some people had stated that to such a degree were they dependent on signs that they could not carry on a conversation in the dark. (Clark [1885] 1982: 39)

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1215–1226

Cursory 19th-century documentation of Arapaho, such as that of William P. Clark (1982) above, suggests that the integration of vocal speech and gesture in Arapaho was well beyond what Euro-Americans were accustomed to. However, linguists’ general disinterest in and disavowal of gesture during the 20th century mean that the most thorough documentation of Arapaho has ignored its multimodal features. This documentation bias has affected other Native American languages with similar multimodal practices (e.g. Farnell 1995). For the Northern Arapaho (Wind River Reservation, Wyoming), a documentary corrective has recently been developed: the “Arapaho Conversational Database” (2011). This video-based interactional corpus provides striking evidence of the multimodal character of Arapaho. Through documentation supported by this corpus, 19th-century discourses of Arapaho as an exotic species of language can be replaced by an understanding of how Arapaho might be exemplary of the general multimodal nature of language.

The idea that language is generally multimodal is a common theme in the work of gesture scholars. Focusing on gesture use in natural interaction, many have challenged the idea that talk, or language in action, is prototypically an exchange of vocal speech. For example, Goodwin (2000: 1519) shows that the action potential of gesture is as diverse as that of speech: seemingly simple iconic gestures that occur with speech, such as numeric hand shapes, “can carry propositional information and function as individual actions”, while hand points can function “as components of multimodal actions”. To highlight this potential as a matter of information, Enfield (2009) demonstrates the various ways in which gesture can add iconic, indexical, and symbolic content to the information structure of “composite utterances”. Such evidence suggests that speech and gesture are inherently collaborative, and so, as Kendon (2011) argues, language is, at the very least, bimodal. This chapter supports this claim by presenting evidence from Arapaho that the bimodal interface itself is a semiotic resource rich with action potential.
Specifically, the chapter examines a variety of conventional gestures as they are integrated with speech through a special two-part grammatical format. Speakers employ this format as part of an interactional practice for developing a shared perspective. The term bimodal talk is thus used not only to underscore the unique bimodal structure of certain utterances of Arapaho, but also to underscore how these utterances are geared toward social action.

2. Social and historical background

Arapaho is a Native American language of the Great Plains region. The vocal repertoire is polysynthetic and historically situated within the Algonquian language family (see Cowell and Moss Sr. 2008). The repertoire of conventional gestures can be historically situated elsewhere. Arapaho incorporates features of an alternate sign language, Plains Indian Sign Language (PISL), so called for its primary use as a pre-20th-century lingua franca amongst Great Plains tribes (see Davis 2011). Additionally, Arapaho incorporates a pervasive and highly accurate set of local-geographic pointing practices, reflecting a cultural symbolism that is in many other ways highly tuned to the landscape (cf. Anderson 2001). According to Levinson (2003), such pointing practices constitute an “absolute gesture system” and are tied to a cultural specialization in way-finding or navigation. For the Arapaho, absolute pointing practices would have been fundamental to their pre-20th-century nomadic lifestyle. The “Arapaho Conversational Database” (2011) provides evidence that the vocal and gestural repertoires are often integrated in regular talk, as a characteristic feature of Arapaho. Thus, bimodal talk persists despite the loss of Plains Indian Sign Language as a lingua franca long ago, the general decline of traditional geographic knowledge (Cowell and Moss Sr. 2003), and the generally endangered state of the Arapaho language (fewer than 100 fluent speakers, all over the age of 60). The next sections examine bimodal talk as an activity of ordinary language use.

3. The bimodal format and lexical gestures

When Arapaho speakers engage in bimodal talk they integrate conventional gestures and vocal speech. There are many types of conventional gesture and different ways that gesture and speech are integrated. This section examines lexical gestures (a type of “quotable” gesture; see Kendon 1992), and the next section examines other gesture types. Both sections examine how gestures are integrated with speech through a specific grammatical format, the bimodal format. According to research on other languages, for a typical gesture to impinge on the content of the talk, a speaker must develop a “lexical affiliate” for the gesture by positioning it simultaneously with some segment of vocal speech (e.g. Schegloff 1984). However, because conventional gestures are at least partially symbolic (in the Peircean sense; cf. Enfield 2009) and there is such a variety and quantity of Arapaho conventional gestures, Arapaho gesture and speech can be semantically related in ways that are not dependent on their simultaneous temporal positioning. The bimodal format is exemplary of this unique quality of bimodal talk. By putting the bimodal format into action, speakers bring salient visual detail into collaboration with nuanced verbal detail in order to reinforce, for recipients of the talk, both the speaker’s perspective and the content of the talk itself. The bimodal format is typically employed in situations where a shared perspective is a primary interactional goal. The bimodal format consists of two components in series, a base and a sequel, which are formally distinct from one another except for some repeated material. In the example below, vocal prefixes are appended to a lexical gesture in the base, while in the sequel the same prefixes are appended to a fully vocalized verb.
(Gesture shapes are captured in the stills and lettered; the letters on the top line of the example correspond to still letters; right-angled brackets signify transitions from one gesture shape to the next; gesture and speech are temporally aligned with respect to one another.) The speaker here has been describing dramatic changes to Arapaho reservation life over the last century. From the speaker’s perspective, the most dramatic period came just after men returned from World War II. The men had returned with a lifestyle that highly contrasted with Arapaho traditions. The speaker uses the base-sequel format to mark this perspective, describing it as a sort of implosion of Arapaho life. In the base, there is a series of lexical gestures employed in collaboration with vocal elements. Still A shows the hands apart in a gesture that indicates “the Arapaho community”. Still B has the hands coming together for a singular clap that is held for a moment (see Fig. 81.1). Similar gestures are used as standalone expressions for “gun shot” or “explosion” (cf. Clark 1982: 173). Because the gesture in still A depicts the community as a type of bounded space, the gesture in still B conventionally denotes an explosion while simultaneously depicting a collapse of the bounded space that symbolizes the community. Relating the drastic change suffered by the community to an implosion, the gestures thus work together to create a metaphor that could not be realized with the same depth through the idiom of vocal speech (cf. Cienki and Müller 2008). Additionally, the clap comes right before wohei ‘okay’, which as a marker of transition works to reinforce the clap’s metaphorical significance as the moment of implosion.


Fig. 81.1: Example of the bimodal format with lexical gestures; Arapaho Conversational Database, File 32a, TC 8:32, Speaker #45

The final gesture, combining stills C and D, starts with clenched hands that then snap out while the hands move upwards. This gesture indicates “fire” (cf. Clark 1982: 173), which reinforces the “implosion” reading of the prior gesture. The fire gesture is vocally prefixed with an intensifier and ne’ ‘what follows’ to metaphorically qualify the fire as the disastrous result of the implosion. As a coherent statement, the base signifies the metaphorical implosion of the Arapaho community. The sequel repeats the two vocal prefixes but appends them to the vocalized verb nonsoo-’ ‘confusion-it’, adding that the Arapaho world became very disordered. As the base and sequel components use a repetition of linguistic material to signal conceptual coherence with one another, the total utterance expresses the idea that the implosion of the Arapaho community resulted not so much in the destruction of material life as the destruction of cultural life. It might seem that instead of a two-part format this example is rather evidence of a word search, where the gesture in the base projects what is finally vocalized in the sequel. However, to be sure, there is no hesitation or extra glottal cut off that would indicate such. Rather, as the example makes evident, speakers employ the bimodal format to bring specific detail to a social position they are taking. Stivers (2008: 31–32) argues that such detail provides recipients of the talk with “the means to understand what it was like to experience the event being reported through the eyes of the teller”. For the bimodal format to work in this way, information is distinctly structured from the base to the sequel components. As in the example, the base structure is dominated by gesture. This works to visually detail a speaker’s perspective and thus mark a position the speaker is taking.
A gesture-dominated base on its own introduces visual structure into a discourse dominated by verbal structure and thus leaves some amount of interpretative work for recipients. The sequel, then, by being more verbally elaborate, makes the discourse-sequential connection of the whole bimodal format more explicit, as is the case in the example. Thus, the bimodal format works to maximize both visual and verbal information, making it a crucial speaker resource for detailing a perspective on a social position and thereby providing recipients with the information needed to adopt the perspective.

4. An extended example

A speaker uses the bimodal format to display a social position and give recipients of the talk detailed access to the perspective from which the position was developed. This section shows that, through such access, recipients are motivated to display their affiliation with the speaker position. The section examines an interactional example in which the bimodal format encompasses many types of conventional gesture. The situation involves a speaker evaluating the status of a non-Arapaho person who wants to learn the Arapaho language, a culturally sensitive matter. By employing the bimodal format, the speaker motivates recipients to affiliate with his evaluative position. The three participants in view are sitting in a side-by-side, or low eye-contact, formation, which is typical of casual Arapaho interactions.

Fig. 81.2: Pseudonyms with Arapaho Conversational Database speaker numbers in parentheses, ordered from viewer’s left to right: FV (#5), TR (#3), IW (#52); File 24b, TC 4:26

The main speaker, FV, and the two other men, TR and IW, are fluent Arapaho speakers. The woman being referred to, Ann, is working the camera for this video documentation. She is sitting to FV’s right, next to the camera, and out of view. Ann is not a fluent speaker of Arapaho, and so she is not treated as a normal participant. The culminating action is an associative placement, which is the association of persons through a place. To display a position of support for Ann’s rights to be niibeethinono’eiyeitit ‘one who wants to learn Arapaho’, FV uses the bimodal format to structure an associative placement in which Ann is associated with a well respected person through their place-based life convergence. The example, in its entirety, is given below. (Parentheses enclosing a letter signify a gesture shape that is similar to the lettered gesture without parentheses. Equal signs signify the continuation of a gesture from one gesture line to the next. For other conventions, see section 3.)

1220

VI. Gestures across cultures

1. FV   (gaze at Ann)
        (0.7)
2. FV   BBBB
        neh’eeno nih’oo3ousei 3ii’oku-t nii-beet-hinono’eiyeiti-t
        this white.woman sit-3.S IMP-want.to-spk.Arap-3.S
        ‘This white woman sitting here wants to speak Arapaho.’
        (1.4)
3. IW   (head nod)
        [lines 1–3: evaluative preface]
4. FV   (C) (C) (C) > > > > C C C C C =
        noosou-neyei3ei’i-t niithuutiino
        still.go.to.school-3.S where- around.here
5. FV   =CCCCCCCCCCCCC=
        too3-iihi’ nii- niineniiniicie
        near-ADV IMP- Denver
6. FV   =CCCCCCCCCCCCCCCCCCCCCCCCCCC=
        niituhh ho’nookeeni-’ ni’ii3eihini-’ hini’ boulder
        IMPERF- uhh rocky-0S called.-0S that Boulder, CO
        ‘She is still going to school where...here. Near Denver, where uhh, it is rocky as that Boulder is called.’
7. IW   (head nod)
8. FV   =C>D C>DDDDDDD
        nee’ee nee’ee- nee’eeteihi-t
        that.is- that.is- that.is.where.X.is.from-3.S
        ‘That’s where she is from.’
9. TR   (head nod)
        [lines 4–9: base]
10. FV  EEEEEEEEEEEEEEEEEEEE=
        nooxeihi’ nooxeihi’ neh’eeno Andy nooxeihi’ hii- hi’in
        maybe maybe this Andy maybe ?- that
11. FV  =EEEEEEE
        neyei3eibeee-t hinono’eitiit
        teach-3.S Arapaho.lang.
        ‘Maybe this Andy, maybe that one who teaches Arapaho language.’
12. IW  yeah
        (1.3)
13. FV  FFFFFFFF
        nehe’ hi- hi-neyei3- neyei3eihii
        this 3S- 3S-stud- student
        ‘This is his student’
14. IW  (head nod)
        [lines 10–14: sequel]

Fig. 81.3: Example of the bimodal format used in an evaluation; TC 4:26

The subsections to follow discuss the parts of this example, focusing on how the speaker integrates the conventional gestures and the speech through the bimodal format to develop an evaluative position.

4.1. The evaluative preface and a gestural modifier

In line 1 and still A, FV initiates the sequence by gazing at Ann (see Fig. 81.4). In line 2, FV juxtaposes Ann’s status as a non-Arapaho (nih’oo3ousei ‘white woman’) and niibeethinono’eiyeitit ‘one who wants to learn Arapaho language’ to preface the evaluation. Still B shows a gestural modifier, which is simultaneous with hinono’eiyeiti ‘to speak Arapaho’. This gesture has a superlative function, as it is often used to modify mentions of venerated things or people (cf. Davis 2010: 145). Thus, to display his position that the situation is particularly worthy of evaluation, FV creates a striking asymmetry in the juxtaposition of Ann, a non-Arapaho, and the venerated Arapaho language. Arapaho recipients display their affiliation with such interactional positioning by responding with simple head nods (cf. Stivers 2008), which IW does after FV comes to completion in line 2. Head nods, then, continue to mark key developments of FV’s talk as he defines his evaluative position through the bimodal format.

Fig. 81.4: Lines 1–3, evaluative preface and gestural modifier; TC 4:26

4.2. The base component, a geographic point, and a gestural link

Next, FV produces a gesture-dominant utterance to instantiate the base component of the bimodal format. The utterance is held together by a geographic point that FV transitions into a person point. The transition constitutes a gestural link, a specific bimodal practice of upgrading the status of one of the referents by visually associating it with the other referent. In this case, FV employs a gestural link to construct an associative placement. To start, FV formulates a setup for the associative placement.

Fig. 81.5: Lines 4–7, geographic point to Boulder; TC 4:32

The setup begins in line 4 where FV identifies Ann as a (university) student. From the end of line 4 to line 6 FV conducts a word search for where she goes to school, which is sustained throughout by the geographic point in still C (cf. Schegloff 1984). The point is precisely directed toward Boulder and angled up to indicate distance (about 400 miles). As the word search culminates in the English place name Boulder, it is evident that FV is using the word search to display an avoidance of an Arapaho-language place name for Boulder. This is because Boulder has two relevant values: First, Boulder is an important area of the Arapaho ancestral homelands; second, Andy, the well-respected Arapaho linguist and head of the interactional video documentation project, has the University of Colorado in Boulder as his home institution. The English formulation Boulder indexes the latter, and so the word search works to constitute the geographic point within an attentional frame where the university is the relevant feature of the place indicated (cf. Goodwin 2006). As this particular moment illustrates, places are part of the rich structure that participants must sequentially develop for semiotic availability. This bimodal formulation of Boulder, then, allows for a subsequent use of the Boulder point in the sequel without any vocal qualifications.

Holding the Boulder point in the beginning of line 8, FV constructs an associative placement by describing Boulder as the place where Ann is from while concurrently redirecting the point toward her (see Fig. 81.6).

Fig. 81.6: Lines 8–9, gestural link and associative placement; TC 4:42

The gestural link maintains the hand shape throughout and thus ends with a forefinger point at Ann. As Ann is within the participation space, the use of a forefinger to point at her is somewhat marked, a thumb point being normally deployed for such person reference. The forefinger point therefore reinforces the gestural link as a practice for doing something beyond transitioning from one point to another. It is rather a practice in which a speaker takes a position by displaying an association between two referents so that a questionable activity involving one referent can be culturally grounded through the other referent. Here, FV uses a gestural link to culturally ground Ann’s desire to learn Arapaho and thereby upgrade her status. Constructed as an associative placement, the link works by visually detailing Ann’s life convergence with Andy in Boulder, as his research assistant in the language documentation project. Again, such perspectival detail motivates affiliative responses by recipients, such as the head nod in line 9.

4.3. The sequel component, repeated points, and a morphological pointing contrast

After bringing the gesture to rest, FV achieves sequential closure of the evaluation through the sequel. Here, FV is more explicit about the sequential implications of the base’s associative placement. So, while the conceptual coherence of the sequel with the base is signaled by repeating the points to Boulder and Ann, there is no gestural link but rather an increase of vocalized information. In line 10, FV vocally introduces Andy for the first time (see Fig. 81.7).

1224

VI. Gestures across cultures

Fig. 81.7: Lines 10–12, repeat of geographic point; TC 4:48

The use of the Boulder point in the base is here reinforced by a description of Andy as a teacher of Arapaho language (note that there is no vocalization of place). Additionally, in line 13, FV explicitly states the association between Andy and Ann while pointing at her with his thumb (see Fig. 81.8).

Fig. 81.8: Lines 13–14, repeat of person point but with morphological contrast; TC 4:56

As a thumb point, this gesture occurs in morphological contrast with the prior point to Ann in line 8. Again, doing different work than forefinger points, thumb points generally construe the referred-to person as a part of the participation framework. Given Ann’s ambiguous status as a participant (being a non-speaker but within the participation space), the thumb point works symbolically to highlight FV’s position: Regardless of Ann’s status as a non-Arapaho outsider, she should be treated as a possible interactional participant and, through such acts, encouraged as an Arapaho-language student. The subtle actions of these gestures together with the more explicit verbal descriptions thus work to foreground the significance of the base’s associative placement.

5. Conclusion

This chapter has provided a partial sketch of bimodal talk in Arapaho. A variety of conventional gestures constitute a rich gestural repertoire, including lexical gestures, geographic pointing, and gestural linking. Such gestures were examined through two examples, both of which highlighted a bimodal format consisting of a base and a sequel component. This format underscores how, in bimodal talk, speakers can build linguistic relationships between gesture and vocal speech that are semiotically rich and conceptually coherent. Additionally, the format is not just a matter of style or artistry. A speaker employs it to articulate and display a detailed social position as well as the perspective through which the position was developed. Detailed access to a perspective allows recipients of the talk to adopt the perspective and thus motivates them to display an affiliation with the speaker’s position. The semiotic richness and action potential of Arapaho bimodal practices demonstrate some of the complex possibilities at the interface between vocal speech and gesture, underscoring the truly multimodal nature of language.

Acknowledgements

I am very grateful to Dr. Andrew Cowell and the Northern Arapaho Nation for providing me with the opportunities and materials to learn about Arapaho. I also thank the following for their helpful feedback on this chapter: Andrew Cowell, Joshua Raclaw, Nina Jagtiani, Matthew Ingram, and Irina Vagner.

6. References

Arapaho Conversational Database 2011. Data collected and processed by Dr. Andrew Cowell, University of Colorado, 2007–2011. Funded by Hans Rausing ELDP. Deposited at ELAR Archive, SOAS, University of London, Sept. 2011.
Anderson, Jeffrey D. 2001. The Four Hills of Life: Northern Arapaho Knowledge and Life Movement. Lincoln: University of Nebraska Press.
Cienki, Alan and Cornelia Müller 2008. Metaphor, gesture, and thought. In: Raymond W. Gibbs Jr. (ed.), The Cambridge Handbook of Metaphor and Thought, 483–501. Cambridge: Cambridge University Press.
Clark, William P. 1982. The Indian Sign Language. Lincoln: University of Nebraska Press. First published [1885].
Cowell, Andrew and Alonzo Moss Sr. 2003. Arapaho place names in Colorado: Form and function, language and culture. Anthropological Linguistics 45(4): 349–389.
Cowell, Andrew and Alonzo Moss Sr. 2008. The Arapaho Language. Boulder: University of Colorado Press.
Davis, Jeffrey E. 2010. Hand Talk: Sign Language among American Indian Nations. New York: Cambridge University Press.
Enfield, N. J. 2009. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press.
Farnell, Brenda 1995. Do You See What I Mean? Plains Indian Sign Talk and the Embodiment of Action. Austin, TX: University of Texas Press.
Goodwin, Charles 2000. Action and embodiment within situated human interaction. Journal of Pragmatics 32: 1489–1522.
Goodwin, Charles 2006. Human sociality as mutual orientation in a rich interactive environment: Multimodal utterances and pointing in aphasia. In: N. J. Enfield and Stephen C. Levinson (eds.), Roots of Human Sociality: Culture, Cognition and Interaction, 97–125. Oxford: Berg.
Kendon, Adam 1992. Some recent work from Italy on quotable gestures (emblems). Journal of Linguistic Anthropology 2(1): 92–108.
Kendon, Adam 2011. Vocalisation, speech, gesture, and the language origins debate: An essay review on recent contributions. Gesture 11(3): 349–370.



Levinson, Stephen C. 2003. Space in Language and Cognition: Explorations in Cognitive Diversity. Cambridge: Cambridge University Press.
Schegloff, Emanuel A. 1984. On some gestures’ relation to talk. In: J. Maxwell Atkinson and John Heritage (eds.), Structures of Social Action, 266–296. Cambridge: Cambridge University Press.
Stivers, Tanya 2008. Stance, alignment, and affiliation during storytelling: When nodding is a token of affiliation. Research on Language and Social Interaction 41(1): 31–57.

Richard Sandoval, Boulder (USA)

82. Gestures in Southwest India: Dance theater

1. Introduction
2. A worldview represented in symbols
3. Gestures and geometry in pure dance nritta
4. Gestural movements, geometric patterns, and complex abstract concepts
5. Conclusion
6. References

Abstract

A study of gestural articulation in Indian dance theatre using Müller’s (1998: 123, 2009: 514) form-based linguistic analysis exposed processes of conceptualization as each conventionalized hand gesture takes several meanings (Ramesh volume 1). Also, an analysis of pure dance gestures using Laban Movement Analysis revealed underlying spatial relationships and inner connectivity patterns. Based on this, a further analysis is presented in this article, in which a pure dance gestural movement in the dance style Bharatanatyam is correlated to geometric symbols and signs used in the Indian context. These are abstractions of concepts related to life and are seen in symbols used in religious practices, architectural designs, and works of art. The concepts often find description in terms of the physiology of the human body, prompting Vatsyayan (1996, 1997) to describe the body as a metaphor for these concepts. Based on the inherent geometric patterns and subsequent embodied experiences, an additional correlation of gestural movement to image schematic structures discussed in Cognitive Linguistics exemplifies how the body is not a metaphor but can be understood as the source or basis for the conceptualizations represented in the symbols. Pure dance gestures then have the function of reinforcing embodied experiences underlying conceptualizations of life’s phenomena.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1226–1233.

1. Introduction

Gesture has been related to thought and imagery and is considered an important constitutive element in the unfolding of these in the dynamics of communication (McNeill 1992, 2005). Gestures expose the mechanisms underlying imagistic processes. As research in co-speech gestures (Kendon 2004; McNeill 1992, 2005; Müller 1998, 2009, 2010) reveals, they bring to visualization, or make obtainable, objects, ideas and thoughts, concrete and abstract, and the imagery underlying these. This holds for gestures in the Indian context as well. Used extensively in ritual practices and performing arts, they carry different functions in each.

An analysis of gestural articulation of the hands in Indian performing arts based on the mimetic modes of representation proposed by Müller (1998: 123, 2009: 514) has been presented in the article in volume 1 of this Handbook. I have discussed there how each conventionalized hand gesture can take several meanings and briefly described how the hands depict objects while the eyes express emotions associated with the object. I also analyzed an example of how gestures in pure dance movements called nritta in the dance style Bharatanatyam also establish spatial relationships and inner connectivity patterns. I suggested that a further analysis that reveals crystalline forms in the movements would correlate these to the principles of architecture, the Vastusastra. In this article I will extend the discussion to how and why the practice of gestural movement in the pure dance context of India seems to underscore inherent geometric structures. Geometric structures and crystalline forms underlie the signs and symbols used in representing the Indian worldview, as some concepts presented in section 2 will reveal. They are seen to give form to the conceptualizations of life’s phenomena and get represented in works of art as geometric shapes, as section 2.1 will briefly present. Their correspondence to the human body and perceptual experiences suggests a grounding of these in embodied experiences.
In section 3 I will therefore first identify, in a pure dance movement, the geometric patterns underlying these symbols, and then correlate these in section 3.1 to image schematic structures defined in Cognitive Linguistics. (The discussion of how these patterns can be related to image schematic structures introduces a further dimension of linguistic form-based analysis and ushers in a contemporary discussion of gestures used in the Indian context.) I finally discuss in section 4 how, due to inherent geometric patterns and embodied experiences arising from spatial relationships and inner connectivity patterns, bodily actions in gestural movement and their correspondence to image schemas reveal an embodied experiential grounding for the geometric concepts related to the Indian worldview.

2. A worldview represented in symbols

The Indian worldview has brought forth a number of geometric symbols and signs, understood as being designed to enable a comprehension of the relationship between man and the universe. These symbols are understood as giving form to the formless aspects of life. Vatsyayan (1997: 11) describes how the organization of the universe is represented by the stambha ‘pillar’. It stands at the center of the earth and represents “perfect balance of the earth and sky” (1997: 10). The idea of movement in the universe has been equated with a wheel and its hub, represented by a circle with a center. The center is the unmanifest, the point of “restful stillness” (1997: 19). Elsewhere she (1996: 51) describes the pillar or pole as growing out of the bija ‘seed’. Space and time are represented in these concepts, extended by the figure of man in a circle to represent man in space and the rhythm of movement, with the cosmic rhythm moving in a spiral. Similarly, everything in the universe, including man, is the manifestation of the five elements space, wind, fire, water, and earth. Circle, triangle and square, e.g., represent water, fire and earth respectively.



These shapes also constitute what is called a yantra, a geometric symbol used in religious practices. These geometric patterns are considered to be potent diagrams. The sri cakra, e.g., has a square, within which are several enclosures constituted by 43 triangles, a sixteen-petalled lotus, an eight-petalled lotus, and a central point (Pappu 2008: 15). Each enclosure represents various aspects of life, from emotions to body, mind, intellect, body activities such as digestion or elimination, creation, destruction, the geographic directions, senses, the five elements, and so on. Pappu defines the objective of worshipping this particular symbol as establishing a “oneness” between “knower, known and knowledge” (2008: 68). The ancient science of architecture, vastusastra, mentions shape, orientation, measure, rhythm, and energy as important constitutive elements. Schmieke (2000: 34, 66–70) explains form as the most basic among these, because the form of an enclosed space defines how energies find orientation, measure and rhythm. The square and its three-dimensional form, the cube, are, for instance, seen as the simplest, most stable and most harmonizing units of space. All the geographic directions are represented here. When space is enclosed in a cube, nature’s forces get organized in it into pulsating spatial pulls corresponding to the geographic directions and forces of nature. These influence man’s physical, emotional and mental well-being and are therefore important when planning the construction of buildings. Thus the square and the cube are given great relevance. As seen above, the square builds the fundamental shape of the yantras. The architectural plan of all temples is also designed along these lines (Vatsyayan 1997: 73–99). Vatsyayan (1996, 1997) refers in various chapters of her work to how the abstract geometric conceptualizations of life-phenomena are defined in terms of the human body.
She sees the body as a metaphor that has been used to explain concepts related to the world. The understanding of the world as pillar is over-layered by the vertical man standing in space. The unmanifest center is equated with the navel. The womb also represents the unmanifest. At the same time, the geometric patterns used for worship represent the human being as a whole as seen above, embedded in the larger framework of factors that constitute and influence life. Also, the important constitutive elements of vastu, namely shape, orientation, measure, rhythm, and energy, are constitutive elements of movement and thus dance as well. The human body here suggests an understanding of the embodied basis of these concepts.

2.1. Geometric representations of a worldview in works of art

The complex relationships of life’s phenomena, understood as geometric patterns and correlated to the human body, find representation as such through abstractions. The shapes mentioned above are incorporated not only into rituals. Vatsyayan (1997) particularly discusses how the geometric shapes of square and circle appear in all forms of representation, be they works of art, temple architecture, or dance movements. Particularly a circle or square, or a circle within a square, with or without figures, is prevalent in works of art seen in temples, caves, etc. She (1996: 48–56) elaborates on how the concept of man and the universe is implicit in the performing arts as well. In the figures she presents (1997: 53–56, 122–123) showing dance postures in relation to a circle, its center and diameters, she compares some basic stances of dance forms like Bharatanatyam to these and discusses physiological aspects as well. However, I think a more detailed analysis of the body movement would provide insight into whether the movements only correspond to or represent symbols, as the other representations discussed above do, or are the very source of such geometric conceptualizations of life’s phenomena. The next two sections will therefore present two distinct forms of analysis, as mentioned in the introduction, to discuss how geometric patterns in body movement arise intrinsically and correlate both to the symbols discussed above and to image schemas of Cognitive Linguistics.

3. Gestures and geometry in pure dance nritta

For the following analysis I return to the movement I analyzed in my earlier article (Ramesh volume 1), illustrated in Figure 20.1 therein, to identify the geometric shapes and correlate them to the crystalline forms and geometric shapes that represent concepts of the Indian worldview. The figure is presented below once again as Fig. 82.1 for a better understanding of the analysis. I will draw from a more detailed analysis (Ramesh 2008) and also refer to the earlier analysis in volume 1, in both of which I used Laban Movement Analysis and Bartenieff Fundamentals.

Fig. 82.1: A pure dance movement

(i) In Fig. 82.1 one sees a deep-seated posture with a gradated rotation together with a flexion at the femoral joint and a flexion of the outward-turned knees. The feet are placed together with the ankles facing forward. Physiologically this stance, or the seatedness of it, comes from sitting into the pelvic floor with a firm placement of the feet. This physiological component is also its stabilizing factor, with the extended pelvic floor acting as support. It is the basic stance in dance, called the ardhamandala. The geometric shape it is correlated to is the square, with the heels, knees and tailbone in dynamic alignment with each other.

(ii) In the leg gesture coming out of this stance, the right foot is lifted and placed on the heel to the side, at the distance of the stretched leg. In doing so the foot traces a perpendicular triangle. In the final stance, called the alidham, one discerns two triangles if one takes into consideration the dynamic alignment between heel and tailbone, which constructs an experiential line or trajectory along which the right foot is lifted vertically before being placed on the side.

(iii) In the gestural movement of the hands, one sees an opening circular motion ending in a straight line of the arms in a diameter, the scaffolding of a square here. If the hands are extended first upwards before going to the sides, one can use the vertical dimension of up-down, thus reflecting the verticality of pillar or column. The center experienced here is the navel. It supports the movement.

(iv) There is a tilt of the torso to the side the foot is stretched to. It however retains its verticality along the head-tailbone connection. The vertical alignment along with the navel center is its supporting factor.

(v) The whole movement is along a vertical plane, the final posture taking a Wall Shape. In other words, only two dimensions, vertical and horizontal, are involved here.

Squares, triangles, lines and circles are the outcome of the stances and movements seen here. The navel center and pelvic center of support, which do not move during movement execution, give the notion of stillness that is also referred to in the Indian worldview. This one example illustrates how geometric shapes created in gestural movements of Indian performing arts correlate to the same geometric symbols which are used to represent concepts underlying an Indian worldview. At the same time it also reveals how creating geometric patterns and shapes is inherent to movement of the body. Then body architecture correlates with space architecture. I would say, movement in space creates shapes. Gestures here do not appear to be a mere representation of the geometric symbols denoting life’s phenomena. They create these geometric patterns through movement and the factors supporting it. The next section will correlate these movements and their geometry with image schematic structures to enable an understanding of how they could be the structuring patterns of embodied experience.

3.1. Image schematic structures in dance movement

Image schemas, according to Cognitive Linguistic research, structure motor-sensory experiences grounded in motor actions of the body. They are contours of such embodied experiences, or recurring patterns of these (Johnson 2005: 15–34). Several studies in Cognitive Linguistic research (Hampe 2005) discuss how embodied experiences get transformed into mental representations by way of image schematic structures. These structures reflect spatial relationships and geometric patterns and shapes. Mittelberg (2010) has related the shapes gestures reveal, such as a cup-shaped hand, to the image schema container (2010: 361), or the hands moving away from each other to source-path-goal (2010: 364), thus providing evidence that the kind of imagery revealed in the geometric shapes and diagrammatic patterns of gestures can be related to the notion of image schemas. In a similar vein, the cup shape of the hands seen in the movement pattern in Fig. 82.1 can be related to container, and the outward movement of the hands reflects source-path-goal. There is also verticality in the hand movement, which begins at the chest with an up-down axis as illustrated, going into a side-side axis. This movement also reveals center-periphery, that is, moving from center to the distal edges, as I analyzed in the earlier article. The circular movement of the hands creates a cycle. Due to the deep-seatedness, the centeredness at the navel, the vertical alignment, the posture of the legs, and the counter-tension provided by the arms and legs as discussed in the earlier article, the movement experience the pattern provides has a balancing, stabilizing and grounding function for the human body. There is thus balance and support. It also reflects containment. The whole posture creates the Shape of a Wall along a Vertical Plane.

4. Gestural movements, geometric patterns, and complex abstract concepts

In this analysis one sees how the geometric patterns and shapes that are defined in conjunction with the Indian worldview are similar to those executed in the dance movement. Also, they are created by the body through movement, based on its physical structure and its need to balance, on its physiology, and on its kinesthetic nature. They are thus inherent to movement of the human body. The embodied experience of these movements is also inherent, because of the connectivity patterns (see Hackney 1998 for reference) the movements establish, as discussed in the analysis in volume 1. Geometric patterns, their embodied experience, and inner connectivity patterns hence inform each other. Where there is a geometric pattern, there is an inner connectivity pattern, and thus an experience. The geometric structures of movement as such provide the contours of experience. The image schematic correspondence of the dance movement presented above reveals that these contours of the inherent embodied experiences the motor actions provide, which are representative of all other pure dance movement as well, can be recruited for conceptualization in the same way as discussed in Cognitive Linguistics. They can be recruited here for structuring concepts such as verticality, circle, center, support, and enclosure, represented in the geometric symbols discussed in the Indian context. As is evident, the geometry of the movement itself gets conceptualized in the symbols, based on experiential factors. Damasio (2003), in another body of research, postulates how the mind in its creative ability can abstract from mental images emerging from bodily experiences to symbolically represent objects and events “such as a sign or number” (2003: 204). Symbolic representations, as an activity of the mind, are thus the outcome of bodily actions and their experience. The mind, in other words, creates abstractions of its own mental images of motor actions.
Damasio sees these experiences grounded in emotions and feelings, which he calls the substrate for thought and mental abilities (2003: 106, 194–196, 204). Such an understanding would also account for the gestural representation of emotions in the Indian context. These then have to be analyzed in further studies.

5. Conclusion

The short exemplary analysis of the gestural representation of geometric patterns that correspond to the symbols in the Indian context enables the conclusion that the body not only represents, but is the basis for, symbolic conceptualizations. It exposes how the three-dimensionality of the human body, coupled with its ability to orchestrate and integrate gestures of several body parts, intrinsically enables geometric shapes, spatial relationships and inner connectivity patterns. These in turn enable embodied experiences from which, as image schematic structures reveal, higher cognitive abilities recruit. The Indian performing arts context thus reveals how gestures not only represent thought and imagery. They seem to have the function of reinforcing embodied experiences by activating the patterns underlying them.

6. References

Damasio, Antonio R. 2003. Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. Orlando: Harcourt.
Hackney, Peggy 1998. Making Connections: Total Body Integration Through Bartenieff Fundamentals. New York/London: Routledge.
Hampe, Beate 2005. From Perception to Meaning: Image Schemas in Cognitive Linguistics. Berlin: Mouton de Gruyter.
Johnson, Mark 2005. The philosophical significance of image schemas. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 15–33. Berlin: Mouton de Gruyter.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago/London: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
Mittelberg, Irene 2010. Geometric and image-schematic patterns in gesture space. In: Vyvyan Evans and Paul Chilton (eds.), Language, Cognition and Space: The State of the Art and New Directions, 351–385. London/Oakville: Equinox.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Linguistics Encyclopedia, 510–518. London/New York: Routledge.
Müller, Cornelia 2010. Mimesis und Gestik. In: Martin Vöhler, Christiane Voss and Gertrud Koch (eds.), Die Mimesis und ihre Künste, 149–187. München: Wilhelm Fink Verlag.
Pappu, Venugopala Rao 2008. Science of Sri Cakra. Chennai: Pappus Academic and Cultural Trust (PACT).
Ramesh, Rajyashree 2008. Culture and cognition in Bharatanatyam. Integrated Movement Studies Certification Program Application Project. Unpublished document.
Ramesh, Rajyashree volume 1. Indian traditions: A grammar of gestures in dance, theatre and ritual. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication / Körper – Sprache – Kommunikation. Handbücher zur Sprach- und Kommunikationswissenschaft (HSK 38.1), 305–320. Berlin/Boston: De Gruyter Mouton.

Schmieke, Marcus 2000. Die Kraft lebendiger Räume. Das große Vastu-Buch. Aarau, Schweiz: AT Verlag.
Vatsyayan, Kapila 1996. Bharata: The Natyasastra. Delhi: Sahitya Akademi.
Vatsyayan, Kapila 1997. The Square and the Circle of the Indian Arts. New Delhi: Abhinav Publications.

Rajyashree Ramesh, Frankfurt (Oder) (Germany)

83. Gestures in China: Universal and culturally specific characteristics

1. Introduction
2. Cross-cultural differences in gestures
3. Future directions
4. References

Abstract

People around the world gesture; however, there are cross-cultural differences in their gestures. In this chapter, we focus on the gestures produced by Chinese (Mandarin) speakers and highlight the culturally specific and universal characteristics of their gestures. Previous research has shown that Chinese adult speakers tend to gesture less often than English-speaking adults, possibly because of the influence of Confucianism. Interestingly, the opposite pattern is found among Chinese caregivers, who gesture more frequently than American caregivers, suggesting that Chinese and American caregivers might socialize their children in different ways. Besides gesture frequency, Chinese speakers gesture about manner and path (the semantic components of motion events) differently from English, Turkish, and Japanese speakers. In spite of these cross-cultural differences, gestures produced by Mandarin-speaking children also show culturally universal characteristics. In particular, gestures produced by Chinese and American children show sensitivity to discourse-pragmatic features. Moreover, when asked to describe motion events with their hands only, Chinese, English, Turkish, and Spanish speakers produce gestures in the same syntactic order, suggesting a universal pattern in the nonverbal representation of motion events. Future research should examine how bilinguals speaking Chinese and another language gesture, and whether their gestures display the culturally specific and universal characteristics found in Chinese monolinguals.

1. Introduction

Speakers from all cultural and linguistic backgrounds move their hands and arms when they talk (Feyereisen and de Lannoy 1991; Mead 2009; Wundt 1921); e.g., a speaker points to the right while saying, "The library is over there". Such hand and arm movements are referred to as gestures. Interestingly, these gestures are spontaneously created by speakers, who often are not conscious of moving their hands or arms when talking (McNeill 1992). There are several types of gestures. Iconic gestures bear direct resemblance to the referents they represent, e.g., a speaker flaps his hands when describing a bird; deictic gestures are pointing movements, e.g., a speaker points to a table or to an abstract space; beats are simple and rapid hand movements that accompany the rhythmical pulsation of speech, e.g., a speaker flips his index finger outward while talking; and emblems carry culture-specific meanings, e.g., a speaker forms a V-shape with his index and middle fingers, meaning "victory" in some cultures (McNeill 1992).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 1233–1240

2. Cross-cultural differences in gestures

While speakers around the world do gesture, their gestures show cross-cultural differences (see the review in Kita 2003). In particular, each cultural group has a distinct set of emblems (Kendon 1992, 2004; Morris et al. 1979; Payrató 1993, 2008; Sparhawk 1978). Italy, for example, is one of the cultures known for abundant emblematic gestures: tapping the side of the forehead indicates that someone is crazy, while tapping the nose with the index finger indicates that someone is clever. Interestingly, emblematic gestures with the same form can carry different meanings across cultures. For example, the hand-waving gesture means "good-bye" in the US but "come here" in Japan (Archer 1997). In addition to cross-cultural differences in emblems, there are also cross-cultural differences in gesture frequency. For instance, it has been suggested that Italians gesture more often than English speakers (e.g., Barzini 1964) and pay attention to gesture more often when listening than Americans do (Graham and Argyle 1975). Yet the evidence for these claims is based almost entirely on research on Western cultures.

2.1. Culturally specific characteristics of gestures produced by Chinese speakers

To date, very few studies have examined the gestures produced by speakers in Asian cultures (e.g., Chinese) and compared them to those produced by speakers in Western cultures. It is possible that Chinese speakers gesture less often than English speakers, because Confucianism is the cornerstone of traditional Chinese culture. According to Confucius' philosophical thinking, one should always be calm, collected, and controlled; body posture should therefore be formal and self-attentive. As a result, it is not uncommon to find Chinese speakers avoiding body movements, including hand movements, when conversing with others; otherwise, they might be considered impolite (Axtell 1993). Given that Chinese speakers tend to be reserved, they might produce fewer gestures to express thoughts and ideas when talking, compared to English speakers. So (2010) examined this issue and found that Chinese (Mandarin)-speaking adults seemed to gesture less often than their English-speaking counterparts. In her study, Mandarin- and English-speaking adults were individually shown two cartoons depicting a cat chasing a bird and asked to describe the scenes in their native languages to an experimenter. Her findings showed that the English-speaking adults produced more representational gestures (abstract deictic and iconic gestures) and non-representational gestures (concrete deictic gestures, emblems, and speech beats) than the Mandarin-speaking adults, suggesting that American culture is a relatively high-gesture culture and Chinese culture a relatively low-gesture culture.

Interestingly, although Chinese speakers gesture less often than English speakers when talking to adults, the opposite pattern is observed in caregivers interacting with their children. Previous research has shown that Chinese mothers were three times more likely than American mothers to produce gestures when talking to their children (Goldin-Meadow and Saltzman 2000). So, Lim, and Tan (2012) observed a similar pattern. In their study, Chinese and American caregivers were asked to engage in spontaneous conversations with their children; a standardized bag of toys, books, pictures, and puzzles was provided to facilitate communication. Speech was transcribed and gestures were coded. Following Özçalışkan and Goldin-Meadow (2005), hand movements that involved direct manipulation of an object (e.g., placing a toy on the floor) or were part of a ritualized game (e.g., putting a puzzle piece in a puzzle slot) were not counted as gestures. The findings showed that Chinese (Mandarin)-speaking caregivers were more likely than English-speaking caregivers to ask their children to identify objects (e.g., "What is this?" "How do you label this object?"). More importantly, Mandarin-speaking caregivers were also more likely to produce pointing gestures when asking object-identifying questions. These results are consistent with previous findings that Mandarin-speaking caregivers are more likely than English-speaking caregivers to give instructions to their children during interactions, e.g., Zhe4ge4 shi4 ping2guo3 ('This is an apple') (Goldin-Meadow and Saltzman 2000; Tardif, Gelman, and Xu 1999).
The reason for these cultural differences in caregivers' gesture rates is unknown. One possibility is that Chinese and American caregivers socialize their children differently. Specifically, Chinese caregivers might have a heightened interest in instructing their children, and this interest is manifested in both their verbal and nonverbal modalities. In addition, they might have an explicit goal for which referent names their children should have acquired. For example, a Chinese caregiver points to a bird in a book while saying "What is it?" and expects the child to label the referent (bird) "on demand". Not only do Chinese caregivers gesture more often than their American counterparts; Chinese children also gesture more frequently than American children. By observing free-play activities and spontaneous conversations between children and their caregivers, So, Demir, and Goldin-Meadow (2010) found that four- to six-year-old Chinese (Mandarin)-speaking children gestured about referents more often than their English-speaking counterparts. This finding might suggest that children learn gesture behaviors and patterns from their caregivers. Besides gesture frequency, researchers have also discovered that Chinese speakers gesture about motion differently from other speakers. Talmy (2000) classified languages into three categories according to how they package the semantic components of a motion event, manner and path, in various linguistic forms. Manner refers to the way an entity moves, while path refers to the direction in which it moves. English is considered a satellite-framed language: it encodes manner of movement in the main verb and path in an associated satellite, e.g., "the man rolls (main verb = manner) down (satellite = path) the hill" (Talmy 1985, 1991). Mandarin Chinese, considered an equipollently-framed language, encodes manner in the first serial verb and path in the second serial verb, e.g., "the man hill roll (first verb = manner) descend (second verb = path)"


(Gao 2001; Slobin 2004). In contrast, Turkish is a verb-framed language: it encodes path, not manner, in the main verb, and either encodes manner in a separate subordinate clause or omits manner entirely, e.g., "the man descends (main verb = path) the hill rolling (subordinate clause = manner)". These cross-linguistic differences in encoding manner and path also influence the way speakers gesture. In Chui's (2012) study, adult participants viewed a seven-minute cartoon episode of the "Mickey Mouse and Friends" series and then retold it to an adult listener. Results showed that Chinese speakers predominantly expressed path information in their gestures. Previous studies, by contrast, have found different gesture patterns in English, Turkish, and Japanese participants (Kita and Özyürek 2003; Kita et al. 2007; Özyürek and Kita 1999; Özyürek et al. 2005). In particular, Japanese and Turkish speakers tend to use separate gestures for path and manner, whereas English speakers tend to combine path and manner in one gesture.

2.2. Culturally universal characteristics of gestures produced by Chinese speakers

Although gestures produced by Chinese speakers (both adults and children) differ from those produced by other speakers, some characteristics of their gestures appear to be culturally universal. For example, gestures produced by Chinese (Mandarin)- and English-speaking children show sensitivity to discourse-pragmatic principles. So, Demir, and Goldin-Meadow (2010) examined the referents conveyed in young children's speech and classified them into two categories: referents that have to be specified, i.e., 3rd person and new referents, and referents that do not have to be specified, i.e., 1st/2nd person and given referents. Both Mandarin- and English-speaking children tended to use nouns when indicating 3rd person and new referents but pronouns or null arguments when indicating 1st/2nd person and given referents. That is, children in both groups tended to use less specified forms (pronouns, null arguments) for referents that did not need to be specified (1st/2nd person, 3rd person given) but specified forms (nouns) for referents that did (3rd person new referents), suggesting that their speech followed the discourse-pragmatic principles of person and information status (Clancy 1993; Greenfield and Smith 1976). Previous research has also shown that children learning other languages develop sensitivity to discourse-pragmatic features in early childhood (e.g., Italian and Inuktitut: Allen 2000; Allen and Schröder 2003; Serratrice 2005; Korean: Clancy 1993; Hindi: Narasimhan, Budwig, and Murty 2005; Romance: Paradis and Navarro 2003). Interestingly, children also displayed such sensitivity in their gestures.
Both Mandarin- and English-speaking children in So, Demir, and Goldin-Meadow's (2010) study produced gestures more often when indicating 3rd person and new referents than when indicating 1st/2nd person and given referents (see also So, Lim, and Tan 2012). They also gestured more often for 3rd person referents conveyed ambiguously by less explicit referring expressions (pronouns, null arguments) than for those conveyed by explicit referring expressions (nouns). The development of discourse-pragmatic strategies in young children can be attributed to parental input. To date, very few studies have investigated the relation between parental input and the development of children's sensitivity to discourse-pragmatic features. In one of the few such studies, Guerriero, Oshima-Takane, and Kuriyama (2006) followed English- and Japanese-speaking children for more than a year,


observing conversations between the children and their parents. They found consistent language-specific discourse patterns in the English-speaking parents but not in the Japanese-speaking parents; in turn, the English-speaking children developed discourse-pragmatic strategies earlier than the Japanese-speaking children. These findings suggest that children may learn the discourse-pragmatic features of their language from their parents' speech. On this view, Chinese- and English-speaking children might likewise acquire discourse-pragmatic skills from their parents' speech and gestures. Extending So, Demir, and Goldin-Meadow (2010), So and Lim (2012) examined whether caregivers gestured new referents more often than given referents. While Chinese caregivers gestured more often than American caregivers, both groups produced more gestures when asking their children to identify new referents than when asking them to identify given referents, thus following the discourse-pragmatic principle of information status. More importantly, both Chinese and American children were responsive to their caregivers' discourse-appropriate gestures: in response to questions about new referents, children in both groups were better able to identify the referents when the questions were accompanied by gestures than when they were not. Thus, the way Chinese and American caregivers gesture may shape their children's sensitivity to discourse-pragmatic features. So far we have discussed research comparing the gestures produced by Chinese speakers to those of speakers from other cultural and linguistic backgrounds, which has found both culturally specific and universal characteristics. This research focused on gestures accompanying speech (i.e., co-speech gestures). Recently, researchers have found that we also gesture while thinking silently (i.e., co-thought gestures; Chu and Kita 2011).
Previous research has found culturally universal characteristics in these co-thought gestures. Goldin-Meadow et al. (2008) asked speakers from four different linguistic backgrounds (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (gesture task) and a non-communicative task (transparency task). These four languages have different predominant word orders. Like English and Spanish speakers, Chinese speakers typically use the order actor(Ar)–act(A)–patient(P) to describe an event such as "a woman twisting a knob"; Turkish speakers, however, use actor(Ar)–patient(P)–act(A) [woman-knob-twist] to describe the same event. In the gesture task, the four groups of adults were asked to describe transitive (e.g., a girl gives a flower to a man) and intransitive (e.g., a girl waves) motion events with their hands only (not their mouths). In the transparency task, participants were asked to reconstruct an event by stacking transparencies one by one onto a peg to form a single representation. The findings showed that, despite the cross-linguistic differences in predominant word order in speech, all four groups gestured or reconstructed the motion events in the same order: actor-patient-act. This order might thus be a robust natural order that humans use when asked to represent events nonverbally. It is also the order found in the earliest stages of newly evolving gestural languages, and thus may reflect a natural disposition that humans exploit not only when asked to represent events nonverbally, but also when creating language anew.

3. Future directions

To summarize, Chinese speakers display both culturally universal and culturally specific characteristics in their gestures. What about bilinguals who speak Chinese and another language


such as English? The question of interest is whether such bilinguals gesture like Chinese monolinguals when speaking Chinese, or whether their gestures display the characteristics of gestures produced by monolinguals of the other language. To date, very few studies have addressed this issue. So (2010) recruited Chinese-English bilinguals, Chinese (Mandarin) monolinguals, and English monolinguals and showed them two cartoons depicting a cat chasing a bird. The three groups of participants then described the cartoons to native speakers, and their speech utterances and gestures were coded. Results showed that when speaking Mandarin, the Chinese-English bilinguals produced more representational gestures (i.e., iconic and abstract deictic gestures) than the Chinese monolinguals, but a number comparable to that of the English monolinguals. When speaking English, the Chinese-English bilinguals produced numbers of representational and nonrepresentational gestures (i.e., concrete deictic gestures and speech beats) similar to those of the English monolinguals. These findings suggest that the Chinese-English bilinguals were influenced by the high-gesture American culture when speaking, and thus that gesture frequency may transfer from a high-gesture to a low-gesture language. However, such transfer was found only for representational gestures, not for nonrepresentational gestures. In the future, more work should investigate how Chinese culture influences the way Chinese speakers gesture and the extent to which immersion in other cultures influences how bicultural speakers gesture.

4. References

Allen, Shanley E.M. 2000. A discourse-pragmatic explanation for argument representation in child Inuktitut. Linguistics 38(3): 483–521.
Allen, Shanley E.M. and Heike Schröder 2003. Preferred argument structure in early Inuktitut spontaneous speech data. In: John W. Du Bois, Lorraine E. Kumpf and William J. Ashby (eds.), Preferred Argument Structure: Grammar as Architecture for Function, 301–338. Amsterdam: John Benjamins.
Archer, Dane 1997. Unspoken diversity: Cultural differences in gestures. Qualitative Sociology 20(1): 79–105.
Axtell, Roger E. 1993. Do's and Taboos Around the World. New York: John Wiley and Sons.
Barzini, Luigi 1964. The Italians. New York: Simon and Schuster.
Chu, Mingyuan and Sotaro Kita 2011. The nature of gestures' beneficial role in spatial problem solving. Journal of Experimental Psychology: General 140(1): 102–116.
Chui, Kawai 2012. Cross-linguistic comparison of representations of motion in language and gesture. Gesture 12(1): 40–61.
Clancy, Patricia 1993. Preferred argument structure in Korean acquisition. In: Eve V. Clark (ed.), Proceedings of the 25th Annual Child Language Research Forum, 307–314. Stanford, CA: Center for the Study of Language and Information.
Feyereisen, Pierre and Jacques-Dominique de Lannoy 1991. Gestures and Speech: Psychological Investigations. Cambridge: Cambridge University Press.
Gao, Yang 2001. Foreign Language Learning: 1 + 1 > 2. Beijing: Peking University Press.
Goldin-Meadow, Susan and Jody Saltzman 2000. The cultural bounds of maternal accommodation: How Chinese and American mothers communicate with deaf and hearing children. Psychological Science 11(4): 307–314.
Goldin-Meadow, Susan, Wing Chee So, Asli Özyürek and Carolyn Mylander 2008. The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences 105(27): 9163–9168.


Graham, Jean Ann and Michael Argyle 1975. A cross-cultural study of the communication of extra-verbal meaning by gesture. International Journal of Psychology 10(1): 57–67.
Greenfield, Patricia M. and Joshua H. Smith 1976. The Structure of Communication in Early Language Development. New York: Academic Press.
Guerriero, A. Sonia, Yuriko Oshima-Takane and Yoko Kuriyama 2006. The development of referential choice in English and Japanese: A discourse-pragmatic perspective. Journal of Child Language 33(4): 823–857.
Kendon, Adam 1992. Some recent work from Italy on quotable gestures (emblems). Journal of Linguistic Anthropology 2(1): 92–108.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kita, Sotaro 2003. Pointing: A foundational building block of human communication. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 1–8. Mahwah, NJ: Lawrence Erlbaum Associates.
Kita, Sotaro and Asli Özyürek 2003. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48(1): 16–32.
Kita, Sotaro, Asli Özyürek, Shanley Allen, Amanda Brown, Reyhan Furman and Tomoko Ishizuka 2007. Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production. Language and Cognitive Processes 22(8): 1212–1236.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press.
Mead, George H. 2009. Mind, Self, and Society: From the Standpoint of a Social Behaviorist. Chicago: University of Chicago Press.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O'Shaughnessy 1979. Gestures. London: Triad/Granada.
Narasimhan, Bhuvana, Nancy Budwig and Lalita Murty 2005. Argument realization in Hindi caregiver-child discourse. Journal of Pragmatics 37(4): 461–495.
Özçalışkan, Şeyda and Susan Goldin-Meadow 2005. Gesture is at the cutting edge of early language development. Cognition 96(3): B101–B113.
Özyürek, Asli and Sotaro Kita 1999. Expressing manner and path in English and Turkish: Differences in speech, gesture, and conceptualization. In: Proceedings of the Twenty-First Annual Conference of the Cognitive Science Society, 507–512. Mahwah, NJ/London: Lawrence Erlbaum.
Özyürek, Asli, Sotaro Kita, Shanley E.M. Allen, Reyhan Furman and Amanda Brown 2005. How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. Gesture 5(1–2): 219–240.
Paradis, Johanne and Samuel Navarro 2003. Subject realization and crosslinguistic interference in the bilingual acquisition of Spanish and English: What is the role of the input? Journal of Child Language 30(2): 371–393.
Payrató, Lluís 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20(3): 193–216.
Payrató, Lluís 2008. Past, present, and future research on emblems in the Hispanic tradition: Preliminary and methodological considerations. Gesture 8(1): 5–21.
Serratrice, Ludovica 2005. The role of discourse pragmatics in the acquisition of subjects in Italian. Applied Psycholinguistics 26(3): 437–462.
Slobin, Dan I. 2004. The many ways to search for a frog: Linguistic typology and the expression of motion events. In: Sven Strömqvist and Ludo Verhoeven (eds.), Relating Events in Narrative: Typological and Contextual Perspectives, 219–257. Mahwah, NJ: Lawrence Erlbaum Associates.
So, Wing Chee 2010. Cross-cultural transfer in gesture frequency in Chinese-English bilinguals. Language and Cognitive Processes 25(10): 1335–1353.
So, Wing Chee, Özlem Ece Demir and Susan Goldin-Meadow 2010. When speech is ambiguous, gesture steps in: Sensitivity to discourse-pragmatic principles in early childhood. Applied Psycholinguistics 31(1): 209–224.


So, Wing Chee, Jia-Yi Lim and Seok-Hui Tan 2012. Sensitivity to information status in discourse: Gesture precedes speech in unbalanced bilinguals. Applied Psycholinguistics 1(1): 1–25.
Sparhawk, Carol M. 1978. Contrastive-identificational features of Persian gesture. Semiotica 24(1–2): 49–86.
Talmy, Leonard 1985. Lexicalization patterns: Semantic structure in lexical forms. In: Timothy Shopen (ed.), Language Typology and Syntactic Description, Volume 3, 57–149. Cambridge: Cambridge University Press.
Talmy, Leonard 1991. Path to realization: A typology of event conflation. Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society, 480–519, February 15–18. Berkeley, CA: Berkeley Linguistics Society.
Talmy, Leonard 2000. Toward a Cognitive Semantics, Volume II: Typology and Process in Concept Structuring. Cambridge, MA: MIT Press.
Tardif, Twila, Susan A. Gelman and Fan Xu 1999. Putting the "noun bias" in context: A comparison of English and Mandarin. Child Development 70(3): 620–635.
Wundt, Wilhelm M. 1921. Elements of Folk Psychology: Outlines of a Psychological History of the Development of Mankind. London: G. Allen and Unwin.

Shumeng Hou, Hong Kong (Hong Kong)
Wing Chee So, Hong Kong (Hong Kong)

84. Gestures and body language in Southern Europe: Italy

1. Introduction: Geographical, historical, and cultural background
2. Knowledge about gestures in Italy: A historical sketch
3. Research about Italians' gestures
4. Gesture in the Italian lifestyle
5. References

Abstract

Italy and the Italians are presented as a unique place and population with respect to bodily communication. This is argued on the basis of three background features characterizing Italy and the Italians: natural history, social history, and culture. At all three levels, a high degree of diversity emerged across times and places, producing great variety, and therefore complexity, in both verbal and bodily communication. The relevance of communication, and particularly of its bodily forms, for Italian territories and populations in different ages is also evident in exemplary philosophical, artistic, and scientific testimonies spanning 2500 years. Two examples of contemporary cross-cultural research, one classical and one new, show the importance of carefully considering contextual features when describing Italian (vs. non-Italian) samples' hand gestures as supposedly higher in gestural rate, higher in the degree of speech-content-related gestures, higher in gestures relating to nearby objects, and higher in wide movements and in space occupation (via movements and vocalizations). Gesture can be considered both the aesthetic and the pragmatic code of the Italian lifestyle, playing a major role in both visualizing and shaping social life, or at least being highly reputed and credited for doing so. Ritual events such as Il Palio di Siena epitomize this and can be used to illustrate it.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 1240–1253

1. Introduction: Geographical, historical, and cultural background

The present chapter gives an overview, necessarily partial and exemplifying rather than complete and systematic, of gestures and other forms of bodily communication and their relation to speech and language, with reference to Italy and the Italian people. Before entering into the issue, some relevant background information is given in order to set a proper interpretative framework for the more specific sections that follow. It has to be remembered that Italy has a rather recent history as a modern nation: it was born only in 1861, although it certainly enjoys a long and ancient past. Across the centuries, moreover, Italy has been shaped by exchanges at many different levels (military, political, religious, commercial, artistic, scientific, etc.) with many different places and cultures, both European (literally all European areas) and non-European (particularly African, Middle Eastern, Balkan, and Asian from the earliest times; American and Oceanian only in more recent ages). This peculiar historical condition is inextricably intertwined with Italy's peculiar geographical (and geo-political) condition. In strictly geographical terms, Italy is about 1,200 km long from north to south; its east-west width is about 530 km in the north, but for most of the country it is only between about 120 and 240 km. Moreover, it is almost completely surrounded by the Mediterranean Sea (which literally means "sea between the lands"), on which many southern and eastern European, North African, and Middle Eastern Asian countries also face, while in the north it shares the Alps with the major central European countries. This peculiar geo-political setting gives Italy, to put it simply, at least three important sources of inner diversity: a natural, a historical, and a cultural one.
First of all, a natural diversity: Italy's latitude range, coupled with its extensive geo-morphological, micro-climatic, and vegetative variety, produces extreme differences in climate, landscape, and ecosystem, ranging from glaciers in the North to arid areas on the Southern islands and inland; the additional influence of historical and cultural factors made Italy one of the richest places for biodiversity in Europe, as well as one of the global biodiversity hotspots (according to the European Environment Agency, http://www.eea.europa.eu/soer/countries/it/soertopic_view?topic=biodiversity). Recent studies show that biological and linguistic diversity co-occur in biodiversity hotspots (Gorenflo et al. 2012). Secondly, a historical diversity: since the beginning of Italian history, the peninsula and the islands have been inhabited by different ethnic and linguistic groups, and in many historical periods Italian provinces, regions, or larger areas have been administered by different groups (sometimes even within similar geographical areas): these derived from indigenous populations or from populations outside the peninsula, coming either from the nearby and bordering areas (Greeks, Germans, French, Spaniards, etc.) or from distant and very distant ones (Normans, Arabs, Turks, Eastern populations). Thirdly, and as a result of the previous two, a cultural diversity: Italy, far from being a uniform country, is a cultural microcosm where people sharing a common ancient geographical and historical past are differentiated by equally important endemic local differences affecting an infinite list of everyday
life features: from food and drinks (or even slight but crucial differences in the recipes for the same dish) to the daily agenda and the proper hours for usual habits; from dress codes to sexual norms; from cultural values to environmental and architectural styles. Just as an example, one may recall the so-called campanilismo phenomenon (from campanile, a church's bell tower but also, by extension, one's own hometown), namely a kind of parochialism: people are attached to their own town and at the same time very competitive with other towns, usually against one specific town or a few. Actually this phenomenon can apply within the very same town, where several campanili are usually located: this is the case, for example, of the Contrade (a sort of neighborhood or district) in the city of Siena, where they have remained a meaningful social reality since medieval times, reaching their peak at il Palio di Siena on July 2nd and August 16th every year (see the final section below). The importance of such a varied cultural panorama within the same geo-historical political reality is perfectly synthesized by a renowned sentence from Massimo Taparelli, marchese d'Azeglio, one of the patriots active in the establishment of the Kingdom of Italy in 1861, who very few years later wrote in his memoirs (1867: 5): "pur troppo s'è fatta l'Italia, ma non si fanno gl'Italiani" ['regrettably Italy is made, but Italians are not']. The three inner Italian diversities mentioned above (in natural history, in social history, and finally in culture) imply that different parts of Italy have been exposed to, and have developed within, dramatically different natural, historical, and cultural conditions. This necessarily influenced the habits and customs of the local populations: people, according to their places, have been exposed to diverse climates, different historical events, and varied cultures.
Any attempt to describe some normative Italian feature in general must therefore reckon with the paradoxical nature of such an effort, since variation is the rule rather than the exception. This holds particularly true for any communicative feature, whether bodily or verbal. The most striking evidence of this is in the domain of verbal language: first of all, in the palette of Italian dialects, which survives after more than 150 years of unified Italy and notwithstanding many decades of mass-media and new-media communication broadcast in Italian. Italy's dialects, though they can be grouped into a finite number of families, are basically infinite in their specific forms. Strictly speaking, they are mere variations of the Italian language (itself derived from Latin via the Florentine dialect). But in many other cases they are truly distinct territorial languages of Italy, spoken by several million people in a region, or by hundreds of thousands or thousands of people in a province, whether in the North (e.g., Venetian or Lombard), the Centre (e.g., the archipelago of languages in le Marche), the South of the peninsula (e.g., Neapolitan), or on the islands (e.g., Sicilian or Sardinian). Moreover, many Italian areas host bilingual communities, or communities using a language which is neither Italian, nor a dialect, nor one of the regional or provincial territorial languages close to Italian; and some, though not all, of these can have the status of official languages used in both oral and written communication. These further territorial languages can be French, German, Greek, Catalan, Croatian, Slovenian, Albanian, Ladin, Occitan, etc. On top of this there are also non-territorial languages (i.e., those not limited to certain geographical areas) linked to recently immigrated populations (e.g., Romanian) or to nomadic populations such as the Romani or Sinti.
There are good examples of Italy's communicative variation within bodily communication as well: the head movement performed to negate is realized, in the Northern Italian regions, along a horizontal axis (a left-right shake of the head, or sideways nod,
as in Western Europe and most of the world). On the contrary, in Sicily (the southernmost Italian region, located on the island of Sicily right in the center of the Mediterranean Sea), the head movement used to negate, whether alone or co-occurring with verbal or nonverbal vocalizations, is realized along a vertical axis (i.e., similarly to an affirmative nod, though starting in the upward direction): this form is similar to the one found in other Mediterranean areas, such as Greece, Turkey, and Albania, or in their neighboring Balkan areas (Macedonia, Bulgaria). This introductory section therefore serves as a caveat, alerting the reader against any simplification when considering knowledge about Italian linguistic or bodily communication, given the complex background very briefly sketched above. Of course, a proper homage to such an articulated Italian reality cannot be paid within the boundaries of this chapter. What can be offered here is just a pale illustration of such richness and diversity, via the presentation of a few varied examples: they cannot cover the issue in any properly systematic way (it would probably deserve a volume or an encyclopedia in itself), but the reader can taste, or maybe just smell, the abundance of communicative intricacies when focusing on Italy. Moreover, only gestures and bodily communication in relation to speech and language are in focus here, not verbal language per se, which, as stated above with regard to dialects and other languages in Italy, is a hugely complex phenomenon in itself, embracing an endless variety of idioms. After this introduction, the chapter is organized in three main sections: the first gives an idea of the historical attention devoted, across the centuries within Italy, to developing knowledge about bodily communication.
The second section presents bits of classical empirical evidence in the form of systematic research focusing on Italians, integrated with an example from recent research data on gesture occurrence in conversation, comparing Italian dyads with a matched sample from a distant culture. The third and final section offers some qualitative examples from a famous Italian event.

2. Knowledge about gestures in Italy: A historical sketch

Interestingly, Italy was probably the home of the first systematic intellectual interest in communication as an object of study, at least in the so-called Western world. Indeed, it could be said that an institutionalized form of meta-communication about verbal and bodily communication was born and grew up in Italy, at the very early stages of Western philosophical thinking, with the school of the Greek Sophist philosophers on the island of Sicily in the 5th century BC (Corax and Tisias, in the city of Siracusa): they were particularly acute in starting the study and teaching of the interplay between verbal and bodily communication for persuasive purposes, founding a new discipline, namely rhetoric (e.g., Billig 1987). These Greek roots then gave rise to a rich Latin tradition in rhetoric, which spread over several centuries from Rome, the caput mundi, across the whole ancient world: Cicero's (1st century BC) and Quintilian's (1st century AD) treatises on oratory are just a couple of milestones within an impressively prolific production by many different authors, who paid minute attention to verbal language without overlooking the importance of the bodily features which reciprocally join it in a global communicative act on the grounds of rhetoric, reasoning, argumentation, and persuasion. When dealing with the delivery features of oratory and focusing on speech-related gesture, Cicero already referred to ‘body language’ (sermo corporis) or
‘eloquence of the body’ (eloquentia corporis). Quintilian distinguished, within delivery, between ‘voice’ (vox) and ‘body posture and carriage’ (gestus). Gestures were important in the Romans' everyday life, as in the greeting ritual too: the verbal Ave (Latin for ‘be well’, corresponding to the contemporary Italian Salve or Salute as an opening greeting) or Vale (Latin for ‘farewell’, corresponding to the contemporary Italian Arrivederci or Addio as a leave-taking greeting) was coupled with a 45-degree upward extension of arm and hand (merely hinted at when the setting was an informal one). More specific kinds of greetings were used, for example, in the army, where each man's right hand grasped the counterpart's forearm. Gestures in the ancient Roman world were an important practice in everyday life (e.g., the prototypical thumb down and thumb up), as also shown by coeval Roman visual arts, where gesturing figures were continuously represented in mosaics and sculptures. A renewed interest in the human being, first during Humanism and then during the Renaissance, is evident also in the blossoming interest in interpersonal communication in the Italian cradle of the Renaissance, the region of Tuscany, where outstanding local writers and genius painters used their talents to shed light on verbal and bodily communication features as well. Already in 1435, in his De Pictura [On Painting], the architect, writer, mathematician, cryptographer, humanist, linguist, philosopher, musician, and archaeologist Leon Battista Alberti (belonging to a Florentine family, though resident in Florence only for part of his life) explicitly recommended the use in visual arts of a pointing figure (the so-called indigitazione), which subsequently became a focus for many geniuses and talents of the 15th-16th centuries, such as Leonardo da Vinci, Michelangelo, Raffaello, and Tiziano, among others.
One of the best examples, specifically focusing on hand gestures, is Leonardo da Vinci's "Last Supper" fresco (realized when he was in Milan, ca. 1494-1498), as are many of his paintings (e.g., especially the first version of the "Virgin of the Rocks", 1483-1486, but also the second version, 1494-1508). In his drawings he studied many different bodily communication features, focusing again on hand gestures realized in both dyadic and group settings, but also on facial expressions, postures, face and body shapes, etc. (e.g., VV.AA. 2003). The relevance of hand gestures in Da Vinci's paintings was subsequently magnified in Goethe's (1817) careful analysis of each single Apostle's hand gesture in the "Last Supper". This attention to communication cross-fertilized among different disciplines. In the domain of literature, the attention covers communication's public and high-profile implications, such as politics (Machiavelli 1513), as well as more local and everyday dynamics, such as etiquette (Della Casa 1558). In the following historical period, the study of national languages arose in several countries with the creation of national academies, the first of which was in fact in Italy (the Accademia della Crusca, already in the 16th century). The following centuries, in particular, saw the development of the scientific method and of the Enlightenment: coherently, the approach to the study of communication started to become more systematic. This is evident in Italy, for example, with an Italian from Naples: the dramatist and philosopher Giovanni Battista Della Porta, who wrote De humana physiognomonia (1586), a treatise on comparative physiognomy.
Later on, Giovanni Bonifacio, an Italian from Rovigo, a Northern city in the Veneto region, wrote L'arte de' cenni con la quale formandosi favella visibile, si tratta della muta eloquenza, che non è altro che un facondo silentio ['The art of gestures, with which a visible speech is formed: it treats of mute eloquence, which is nothing but an eloquent silence'] (1616). Italy then hosted what
is probably the first scientific approach to the study of hand gesture, by Andrea de Jorio (1832). Hailing from Procida, a small island in the gulf of Naples, and later Abbot of the Naples Cathedral, he was a writer, ethnographer, and archaeologist based in Naples. In his masterpiece, still quoted in many contemporary scientific papers in the field of gesture studies, he made the first modern attempt to create a gesture taxonomy, with specific attention to emblems, giving detailed and systematic descriptions through both written and iconic means. He systematically described emblematic gestures (i.e., those having a codified and culturally shared meaning) on the basis of those observed among his contemporary Neapolitans, and he linked them with those observed in other contemporary sources (e.g., in 19th-century novels or in archaeological remains from the past). In this way he strove for valid and reliable knowledge, generalizing it across different contexts (though via qualitative, not quantitative, techniques and methods). For example, he devotes about thirty pages to describing the possible uses of the sign of the horns (de Jorio [1832] 2000: 138-173). His careful attention to both the idiographic and the nomothetic dimensions of research is remarkable: he stays very close to the actual single datum, while at the same time aiming at the construction of general knowledge. This can be appreciated in a passage like the following one, discussing gestures used to represent money, where it is also evident how he relies, for his observational data gathering, not only on literary, archaeological, and everyday sources, but also on cross-cultural ones (the original iconic part is omitted for the sake of brevity):

Rubbing the tips of the fingers of thumb and index finger lightly together. This gesture indicates the act of enumerating coins, and hence it denotes money. It is widely used, not only among us but in many other nations.
This identical gesture is used in Canada to indicate money. Having asked two very respectable missionaries, Mr. Mason and Thomas Maguire of Quebec, if, among their people, a gesture was used to express 'money', they at once performed this gesture, and in just the same way as it is done by our compatriots; they added that the same gesture was also used among the savages of Canada […]. The gesture is sometimes done with two hands when one wants to show the riches in which Tizio or Cajo are swimming; or the great amount of money that has been promised, by adding the gesture for molto ('much'). (De Jorio 2000: 181, italics in the original)

Such a contribution largely anticipates subsequent scientific milestones on gestures and bodily communication: it arrived 40 years before Darwin's contribution, about 70 years before Wundt's, and about 140 years before Ekman and Friesen's.

3. Research on Italians' gestures

To stress once more the specifically Italian interest in bodily communication, and in hand gesture in particular, one may recall here the contemporary Roman artist Sergio Lombardo who, during the 1960s, produced several paintings for the well-known series Gesti tipici ('Typical gestures'): they focus on the details of various kinds of co-speech gestures, enlarging them, thus magnifying their features and drawing the viewers' attention to their shapes and their coordination with speech. There is also a small 20th-century modern photographic version of de Jorio's idea, by the Italian designer Bruno Munari (1963).


Indeed, from the 20th century onwards, bodily communication began to be placed more systematically under the microscope of scientific inquiry. Probably the first systematic contribution comparing gestures, and particularly speech-related ones, across different cultures is by David Efron ([1941] 1972), and he specifically chose to focus on Italians. For two years he studied two ethnic groups in New York City: Jewish Yiddish-speaking immigrants (Lithuanians) and immigrants from southern Italy (Sicilians). He collected data using different techniques (drawing, photography, video) and found, among other things, significant differences between the two cultural groups. The Italians differed from the other ethnic group in the way gestures were realized: Italians tended to use both arms; they needed more space for their gesticulation, and their gesture movements tended to be wider, extending beyond the face and chest area; Italians stood mostly apart from one another and performed more fluid movements. The Italians also differed in the kinds of gesture they performed most frequently: they displayed a range of symbolic (emblematic) gestures, i.e., gestures with standard cultural meanings, many of them corresponding to those surveyed by de Jorio in southern Italy over a century before; while the other ethnic group preferred beats and ideographs. These results show that individuals in each ethnic group were using bodily communication similarly to the community of their country of origin, in both the way they gestured and the frequency of their preferred kinds of gestures. Interestingly, Efron also noticed that among second-generation individuals who had assimilated more to the host culture, the gestural differences between the two ethnic groups were smaller: they tended to use the American bodily communication codes they had been exposed to in the USA, rather than the gestural habits of their roots.
On the whole, these results show the relevance of the cultural influences an individual is exposed to in determining both the way of gesturing and the kinds of gestures used. Specifically, Italians were characterized by wider gesturing and by a preference for emblems. In a data set gathered by some of us about ten years ago and still unpublished (Bonaiuto, Maricchiolo, and Orlacchio 2005), the research aim was close to Efron's, namely to check whether a gesture taxonomy already tested on Italian samples could also be tested on a very different culture. Nine Italian dyads (from a few different regional backgrounds) observed in Rome were compared to nine dyads observed in Burkina Faso (from the Mossi ethnic group). A West African country, Burkina Faso is in a dramatically different situation from Italy in many respects, and poles apart in some, such as ecosystem, history, economy, religion, language, literacy, and health conditions. Data were gathered under relatively controlled conditions in both samples: same task (free conversation for ten minutes about a given everyday topic), same setting (two semi-frontal chairs), same gender and age group (female students, average age around 20-21 years, university level in Italy and high-school level in Burkina Faso). In the Burkinabé sample only, participants were asked to speak for five minutes in their country's official language (French) and for five minutes in their mother tongue (Mòoré, belonging to the Niger-Kordofanian languages). Results showed that all the coded categories (according to the taxonomy used in Bonaiuto, Gnisci, and Maricchiolo 2002; Maricchiolo, Gnisci, and Bonaiuto 2012; Gnisci, Maricchiolo, and Bonaiuto this volume; Maricchiolo et al. this volume) were observed in both samples, thus suggesting that the main kinds of co-speech gestures are fairly universal: cohesive, rhythmic, and ideational on the speech-linked side, and adaptors on the non-speech-linked side.
When coding with more detailed categories, only one exception emerged: the Burkinabé sample showed a specific kind of rhythmic gesture, a hint of hand clapping (occurring for about 4% of
their total gesturing), never observed in this nor in any previous Italian sample coded with that taxonomy. The Burkinabé intra-sample comparison showed no difference in total gestures produced in the two language conditions (f% = 50% of total co-speech gestures produced using French vs. f% = 50% using Mòoré), nor in the occurrence of any gesture category as a result of the language shift: this comparison thus shows that the rate and kind of gesture production within the same culture were not affected by the language used, once setting, topic, and persons were kept constant. Turning to the inter-sample comparison (Burkinabés vs. Italians), results showed that the Italian young women gestured only slightly less (f = 3175, i.e., roughly 10% less) than the Burkinabé ones (f = 3363, or 3519 if the hinted clapping rhythmic gesture is also counted). More significant differences emerged when comparing the occurrence of specific gesture categories: Italians (vs. Burkinabés) produced significantly more illustrative gestures (metaphorics and deictics, but not iconics) and object-adaptors, and fewer cohesives and self-adaptors. No significant differences were noted in the use of rhythmics, of emblems (which, however, strongly tended to be used more by Burkinabés), or of person-adaptors. A more detailed analysis showed further differences in the use of specific cohesive sub-categories (some used more by Italians, some preferred by Burkinabés). Thus, in an observational study focusing on a more controlled situation than previous cross-cultural gesture studies, results indicate that 96% of the gestures produced by the Italian women were of the same kinds as those produced by the Burkinabé women: this suggests that the main categories and subcategories of co-speech gestures are fairly universal across cultures and across languages (between or within cultures).
What differentiates Italian culture from another one (Burkinabé) is the gesture rate and kind: a slightly lower rate and a significant preference for gestures illustrating contents of the speech or involving the manipulation of objects in the nearby space. The specificity of Italian communication can regard vocal features too. For example, interruptions, at least in Southern Italy, are a much more common and accepted phenomenon than in many more "Northern" cultures (see, for example, Gnisci et al. 2012): within a dyadic conversation, people tolerate a higher number of interruptions; moreover, negative interruptions (i.e., those expressing disagreement) are perceived similarly to positive ones (i.e., those expressing agreement). From the few data reported above, it could be hypothesized that Italians, on average, use the same co-speech gesture categories as other cultures do; however, they gesticulate more than some (but not necessarily all) other cultures, and they especially tend to show a higher proportion of speech-content-related gestures and of gestures relating them to objects in the nearby space. This latter feature can also be related to data showing that Italian gestures tend to be displayed with wider movements and a bigger occupation of space, as well as to vocal features showing a greater use of the conversational space via verbal interruptions and the Italians' acceptance of negative interruptions as if they were positive ones. Having said that, and remembering that the present chapter does not aim at any systematic review, further studies should test such a communicative profile and specific hypotheses about it, as well as work toward elaborating and/or extending it.
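As a sanity check, the frequency figures reported above can be recomputed with a few lines of arithmetic (a minimal sketch in Python; the gesture counts are those given in the text, while the variable and function names are ours):

```python
# Counts reported for the Italian vs. Burkinabé comparison (see text above).
italian_total = 3175
burkinabe_total = 3363            # excluding the hinted hand-clapping gestures
burkinabe_total_with_clap = 3519  # including them

def pct_less(a, b):
    """How much smaller a is than b, as a percentage of b."""
    return 100 * (b - a) / b

# The "roughly 10% less" figure matches the Burkinabé total that includes clapping:
print(round(pct_less(italian_total, burkinabe_total_with_clap), 1))  # 9.8
print(round(pct_less(italian_total, burkinabe_total), 1))            # 5.6

# The clapping gestures themselves amount to about 4% of the Burkinabé total,
# consistent with the figure quoted in the text:
clapping = burkinabe_total_with_clap - burkinabe_total
print(round(100 * clapping / burkinabe_total_with_clap, 1))          # 4.4
```

This is purely illustrative bookkeeping, but it shows that the rounded percentages quoted in the text are internally consistent with the raw frequencies.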
On the basis of the arguments presented above, it is however crucial that any further study on Italian gesturing deal with at least the following three issues:
(i) Italy's internal range of diversity: communicative features of different regional areas, particularly between Southern and Northern Italians, can differ as much as in cross-cultural comparisons;
(ii) the specific matching sample: Italians can score higher or lower on a given communicative feature depending on the specific contrast sample (e.g., see above, Jewish or Burkinabé);
(iii) the specific conditions under which the comparison is carried out: it is important to adopt methodological approaches that increase the internal validity of the research conclusions, for example by keeping as many potential confounding variables as constant as possible (as attempted in the Bonaiuto, Maricchiolo, and Orlacchio 2005 data set reported above, where some situational and personal features were kept constant across the Italian and non-Italian samples).

4. Gesture in the Italian lifestyle

To summarize, it can be concluded that, for many historical reasons, some of which have been briefly sketched in this chapter, bodily communication, though a universal phenomenon, has a special status when it comes to Italy. To synthesize, and in a narrative rather than scientific register, it could be said that gesturing is part of the Italian way of life. This surely has to do with the phenomenon itself, i.e., some peculiarities of Italian gestures (of emblems particularly, but not only, and of their emphatic usage), which have attracted specific scientific attention from leading scholars in the field, who carefully studied Italian gestures while repeatedly spending time in Italy and within Italian culture (see, among others, Kendon 1995, 2004). However, the relevance of gesturing, and of communication more generally, in Italy and for Italians should also be understood in the light of the continuous meta-communicative competence that the local culture has cultivated with regard to all communication phenomena, whether verbal or bodily, as briefly sketched in the first parts of this chapter. This competence has been continuously developed for at least 2,500 years, enriched by the cross-contamination of a worldwide range of languages and cultures merging in Italy's territories and populations. More generally, the specific relevance given to the visual forms of communication is part of the code of the Italian style: the fact that communication's iconic features enjoy a special status within Italian communication processes is perfectly coherent with, and inscribed in, the particular status of the Italian history of the arts, particularly of the visual ones.
This holds true, for example, for organizational communication too, where Italian design, both graphic and industrial, has traditionally represented a world-class excellence (as evident from any visit to the design section of any leading modern and contemporary art museum in the world). Visual and iconic culture, in landscape, arts, applied arts, craftsmanship, industry, technology, and services, is one of the things that epitomizes Italy (a similar assertion could probably be made for other senses too, such as taste and hearing). No wonder, then, that the visual and iconic features of communication enjoy a special status in Italy and for Italians: it has been, and is, in their ecosystem, in their natural and social history, in their culture. After all, il Belpaese ('the beautiful country') is the classical poetic appellation for Italy, probably since its first uses in Dante's and Petrarca's verses in the 13th and 14th centuries, owing to its mild climate, cultural heritage, and natural endowment. One piece of contemporary data illustrates this best: Italy currently tops the UNESCO list of World Heritage Sites by country, with 49 sites in 2013 (whc.unesco.org/en/list/stat#d2). Parallel to this first, aesthetic code of Italian communication, a second, pragmatic code can be identified. In general, the status held by communication, and
especially by its interpersonal dimension, for Italians can be readily appreciated by referring to a single piece of contemporary data on mobile phones and modern technologies (continuously updated by many different sources): when comparing the diffusion of, say, mobile phone numbers per inhabitant, or similar indexes, among the world's countries, Italy usually ranks near the top of the list; moreover, compared to other countries, its usage is higher especially for one-to-one direct connections (voice and text), rather than for other usages available particularly via smartphones (such as web browsing). This exemplifies the strong Italian interest in interpersonal networking and exchange, particularly in the form of everyday direct informal conversation. This role of communication, both verbal and bodily, as a tool for coping with everyday issues has always been present in Italian events. To exemplify this, it is particularly interesting to focus on an event which has been constantly held across many centuries: il Palio di Siena. During this unique ancient horse race, which has roots in similar medieval events and took its present form during the 16th and 17th centuries, it is possible to observe the whole panorama of communication features, rich in shapes and colors, as well as to understand their convergence on pragmatic grounds. A proper analysis, even one limited to its communication features, would be impossible here, simply because the complexity of the event touches many different levels. For the sake of the present argument, let us just very briefly illustrate how the bodily, and particularly gestural, features of communication come into play in coordination with the verbal ones, in both spectators' and jockeys' interactions.
Ten jockeys, each representing one of the seventeen Contrade ('districts') of the city of Siena, randomly drawn for each of the two runs held every year, compete for il Palio ('the prize') within a strong, long-lasting intergroup scenario (set by the Contrade system). Though the contest is a race, communication enters it directly and dramatically as one of the contest's main tools. For many centuries, bodily and verbal communication has not simply surrounded the event; rather, it has contributed to creating the whole event itself, with the full paraphernalia of intergroup relations within a conflictual scenario. This can be observed both at the level of the spectators and at the level of the actors, whose communication becomes an active tool in the contest itself, as well as in the communication between the two sides of the show, the audience and the horse-jockey pairs. The race is just the climax after many days: more specifically, in the last few hours before the race, each Contrada's supporters (contradaioli) gather at the center of, and all around, the square where the race track is located (Siena's central square, Piazza del Campo). Here, in the hours preceding the race, lay viewers as well as contradaioli are drawn into the climax and at the same time contribute to inflaming it: the full range of body-speech communicative paraphernalia then appears in all its most flamboyant manifestations within the audience (e.g., Fig. 84.1a-c). Many in the audience wear the exterior apparel colors, stable across the centuries, representing each Contrada in the intergroup confrontation, nowadays most commonly in the foulard or neckerchief displaying one's specific Contrada belonging (Fig. 84.1a and b); the audience also displays all the mercurial bodily features within specific dyadic exchanges (Fig. 84.1c).
Moreover, all these phenomena also characterize bi-directional public-stage communication, such as that between one section of the audience, i.e., one Contrada's supporters, and their own horse-jockey, via both the shared uniform colours and body-speech coordination (e.g., Fig. 84.2a). Finally, communication at il Palio di Siena plays a major role on the stage itself, among the jockeys: during the long and


VI. Gestures across cultures

Fig. 84.1a, b, c: Bodily and verbal communication in the audience at il Palio di Siena on the 16th July 2013 (photographs by Marino Bonaiuto).

delicate initial phase preceding the start – when the horses have to be aligned according to a complicated procedure (called la mossa) – the jockeys can communicate among themselves to set strategies, reciprocal vetoes, alliances, and agreements. This typically takes the form of mounted horse-jockey dyads interacting closely, both just before and while aligning in the starting area marked by 'two ropes' (i canapi). During the whole pre-start procedure, they are already playing a part of the contest – possibly discussing money and other relevant resources or transaction issues able to affect the impending race – thus partly shaping the race outcome simply by communicating verbally and bodily before the actual run (Fig. 84.2b and c).

Fig. 84.2a, b, c: Bodily and verbal communication between Contrada la Torre's ('the Tower') jockey and its contradaioli, reciprocally pointing at each other (associated by their burgundy colour, 2a), and between the horse-jockeys (2b and 2c): l'Onda ('the Wave', in its white and light-blue colours), who is going to win the race, communicates with two competitors – il Bruco ('the Caterpillar', in 2b) and la Lupa ('the She-wolf', in 2c) – immediately before the start at il Palio di Siena on the 16th July 2013 (photographs by Marino Bonaiuto).

Together with the two picture series above, it can be concluded that old-time events such as il Palio di Siena ritually represent, among other things, the relevant and crucial role – or at least Italians' profound values and beliefs about it – that bodily-speech communication plays in Italian life, affecting its outcomes and its ends.



Acknowledgments

The present work, though conducted without any financial support, greatly benefited from a few persons' generous help during August 2013. Domitilla Harding's and Paul Getty's Tuscan open-heartedness made it possible for the first author to have a truly unique experience at il Palio di Siena, as well as to enjoy a restorative and idea-generating atmosphere which, thanks also to Mafalda von Hessen, inspired part of this work. Flavia Bonaiuto's and Gabriella Bartoli's hospitality and care in Caprarola then provided both authors with peace and freedom from mundane duties, facilitating the elaboration of ideas.

5. References

Alberti, Leon Battista 1435. De Pictura. First published [1540].
Billig, Michael 1987. Arguing and Thinking. A Rhetorical Approach to Social Psychology. Cambridge: Cambridge University Press.
Bonaiuto, Marino, Augusto Gnisci and Fridanna Maricchiolo 2002. Proposta e verifica empirica di una tassonomia dei gesti delle mani nell'interazione di piccolo gruppo. Giornale Italiano di Psicologia 29: 777–807.
Bonaiuto, Marino, Fridanna Maricchiolo and Tiziana Orlacchio 2005. Cultura e gestualità delle mani durante la conversazione: un confronto tra donne native dell'Italia e del Burkina Faso. Paper presented at the Workshop "Intersoggettività, identità e cultura", Università di Urbino, 14th–15th September, Urbino.
Bonifacio, Giovanni 1616. L'Arte de' Cenni con la Quale Formandosi Favella Visibile, si Tratta della Muta Eloquenza, che Non È Altro che un Facondo Silentio. Vicenza: Francesco Grossi.
Cicero, Marcus Tullius 55 BC. De Oratore.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A translation of 'La mimica degli antichi investigata nel gestire napoletano'. With an introduction and notes by Adam Kendon. Bloomington: Indiana University Press. First published Fibreno, Naples [1832].
Della Casa, Giovanni 1558. Galateo Ovvero de' Costumi. Venezia: Niccolò Bevilacqua.
Della Porta, Giovanni Battista 1586. De Humana Physiognomonia. Vico Equense: G. Cacchi.
Efron, David 1972. Gesture and Environment. New York: King's Crown Press. First published in [1941].
Gnisci, Augusto, Fridanna Maricchiolo and Marino Bonaiuto this volume. Reliability and validity of coding system. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 879–892. Berlin/Boston: De Gruyter Mouton.
Gnisci, Augusto, Ida Sergi, Elvira De Luca and Vanessa Errico 2012. Does frequency of interruptions amplify the effect of various types of interruptions? Experimental evidence. Journal of Nonverbal Behavior 36(1): 39–57.
Goethe, Johann Wolfgang 1817. Joseph Bossi über Leonard da Vinci Abendmahl zu Mayland. Über Kunst und Alterthum, 3. Italian translation: Il Cenacolo di Leonardo, Abscondita: Milano, 2004.
Gorenflo, Larry J., Suzanne Romaine, Russell A. Mittermeier and Kristen Walker-Painemilla 2012. Co-occurrence of linguistic and biological diversity in biodiversity hotspots and high biodiversity wilderness areas. PNAS Proceedings of the National Academy of Sciences 109(21): 8032–8037.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23: 247–279.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Machiavelli, Niccolò 1513. Il Principe. First published in Florence by Bernardo di Giunta and in Rome by Antonio Blado d'Asola, 1532.


Maricchiolo, Fridanna, Stefano De Dominicis, Uberta Ganucci Cancellieri, Angiola Di Conza, Augusto Gnisci and Marino Bonaiuto this volume. Co-speech gestures: Structures and functions. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1461–1474. Berlin/Boston: De Gruyter Mouton.
Maricchiolo, Fridanna, Augusto Gnisci and Marino Bonaiuto 2012. Coding hand gestures: A reliable taxonomy and a multi-media support. In: Anna Esposito, Antonietta M. Esposito, Alessandro Vinciarelli, Rüdiger Hoffmann and Vincent C. Müller (eds.), Cognitive Behavioural Systems, Lecture Notes in Computer Science 7403: 405–416. Berlin, Heidelberg: Springer.
Munari, Bruno 1963. Supplemento al Dizionario Italiano – Supplement to the Italian Dictionary. Mantova: Corraini.
Quintilian, Marcus Fabius 90–96 AD. Institutio Oratoria.
Taparelli D'Azeglio, Massimo 1867. I Miei Ricordi. Firenze: Barbera.
VV.AA. 2003. Léonard de Vinci. Dessins et Manuscrits. Paris: Réunion des Musées Nationaux.

Marino Bonaiuto, Rome (Italy) Tancredi Bonaiuto, Rome (Italy)

85. Gestures in Southern Europe: Children's pragmatic gestures in Italy

1. Introduction
2. Data
3. Analysis: Children's use of pragmatic gestures
4. Conclusion
5. References

Abstract

The paper focuses on the production of pragmatic gestures with the three functions identified by Kendon (2004) – performative, modal and parsing – in narratives produced by 33 Italian children aged between 4 and 10. Results show that the ability to use gestures with the three different pragmatic functions is correlated with the capacity to structure a text with a hierarchical organization and to comment on one's own production, since when pragmatic gestures come into use, the children also produce different types of textual connectives. The findings provide further evidence to support the tight correlation between speech and gesture.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1253–1258

1. Introduction

Speech-associated gestures can convey various kinds of information, expressing both referential content (i.e., indicating concrete or abstract entities, representing the size or



the shape of an object, enacting an action) and pragmatic meanings (Kendon 2004; McNeill 1992). Kendon (2004) defines the pragmatic function of gestures as follows: "any of the ways in which gestures may relate to features of an utterance's meaning that are not a part of its referential meaning or propositional content" (Kendon 2004: 158). He proposes three varieties of this function: performative, modal and parsing. Gestures serving a performative function indicate the speech act that the speaker is expressing (such as a refusal, a question, an offer); gestures having a modal function show the speaker's own mental attitude towards his discourse, indicating the interpretative key of the discourse; finally, gestures serving a parsing function contribute to making visible the process of organization and the structure of the discourse, punctuating or stressing its components. Pragmatic gestures can, in sum, express the rhetorical purposes of an utterance (Kendon 1995). In contrast to referential gestures, pragmatic ones are less frequently discussed in the literature. Studies focusing on these gestures provide illustrations of the use of specific hand shapes and descriptions of their contexts of use, with examples drawn from adult conversations (e.g., mano a borsa and mani giunte, Kendon 1995; pistol hand, Seyfeddinipur 2004; palm up gestures, Ferré 2011; Kendon 2004; Müller 2004; Streeck 2009). Surprisingly little is known about the use of pragmatic gestures in children. A few studies describe the use of some highly conventionalized pragmatic gestures in children at early stages of language development. It has been shown, for instance, that already at the age of two, children are able to produce gestures that realize a great variety of communicative acts, such as negating and asserting, telling someone to shut up, waving hello/goodbye, indicating that something is finished, etc. (Guidetti 2003).
Other studies, investigating older children’s use of gesture in narratives and dyadic conversations, show that pragmatic gestures increase with age as a function of children’s symbolic and pragmatic abilities. McNeill (1992) observed, for example, that the production of gestures connected to discourse elaboration rather than to its referential content appears around the age of 6, when the child begins to produce gestures signaling the structure of the text (such as beats) and those expressing more abstract contents (e.g., metaphoric gestures). Other studies looking at interactions between French adults and children aged from 6 to 11 found a correlation between the emergence of textual abilities (as shown in the alternation between narrative and metanarrative sequences) and more complex multimodal behavior (increase of pragmatic gestures, more posture shifts and better gaze management) (Colletta 2004, 2009). The study presented here provides an illustration of how Italian children use pragmatic gestures with the three functions proposed by Kendon while producing a narrative discourse. The focus of the chapter is on the qualitative differences in the use of such gestures observed at three different ages. The chapter also centers on the connection between the development of the use of such gestures and the development of textual abilities.

2. Data

The multimodal corpus used for this study consists of 33 video recordings of cartoon retellings, made as part of a study of gesture and speech development in the narrative discourse of Italian children aged between 4 and 10 (Graziano 2009). Participants were divided into three age groups (4–5; 6–7; 8–10 years), each containing 11 subjects. After viewing a silent cartoon ("Pingu at Christmas time"), children were asked to tell the story to an adult who had seen the cartoon with them. The adult was instructed only to listen to the story and to avoid interrupting the child. The recordings were made in an environment familiar to the children (either at school or at home). All participants are native speakers of Italian, some living in Naples, others in Rome.

2.1. Gesture and speech coding

For the purposes of this analysis, only the pragmatic gestures produced by the children were taken into consideration. Applying Kendon's (2004) definition reported above, they were divided into performatives, modals, and parsing gestures. In order to investigate the connection between the use of pragmatic gestures with the three functions and the child's ability to structure a text, an analysis of connectives (taken as a measure of discourse cohesion) was also conducted. Connectives are verbal elements serving to connect different parts of the text, making clear the relations existing among them. They can be conjunctions, prepositions, adverbs, discursive signals and interjections (Lundquist 1994). Connectives were divided into the following functional categories: temporals, causals, adversatives, additives, explicatives, metatextuals, interactives (Bazzanella 1994; Halliday and Hasan 1976).

3. Analysis: Children's use of pragmatic gestures

Pragmatic gestures were found in all age groups, although their proportion gradually increased with age, reaching a distribution equal to that of referential gestures in the older children (for details, see Graziano 2009). Similarly, all children produced gestures with the three pragmatic functions. However, as will be detailed below, differences emerged from a qualitative point of view, especially with respect to modal and parsing gestures. Beginning with performative gestures, the ones used by children at all ages observed were headshakes and head nods, mainly produced in connection with verbal expressions of negation or assertion. They thus expressed the same meaning expressed in words. However, in older children (from 6 to 10), these gestures also appeared in association with comments on the speaker's own verbal production. In such cases, the comments were introduced by interjections signaling a reformulation (e.g., voglio dire 'I want to say', 'I mean') with which the child usually corrects a mistake in the verbal production. The analysis of modal gestures, i.e., gestures indicating the interpretative frame of the utterance, revealed that the type of information that children express with these gestures became more diversified with age. An illustration is provided by the different uses of a gestural expression frequently employed with this function: a gesture in which the hand is rotated outwards to reveal the palm, while at the same time moving backward and laterally (the so-called PL gesture, which is the manual part of a gestural ensemble that may include a shoulder shrug; see the account in Kendon 2004: 275–281; see also Streeck 2009). Whereas 4- and 5-year-old children produced this gesture only at the end of the discourse, to signal that the utterance the gesture accompanies has to be considered the conclusion of the discourse, older children used this gesture with more diversified meanings.
For example, an 8-year-old child used this gesture to indicate that what she was saying had to be interpreted as something obvious (a use of this gesture documented for adults in Kendon 2004). Recalling a scene in which the characters prepare cookies, the child said that the mother penguin spread some cream on the cookies but, having previously said that the penguin children had already eaten the cream, she commented quel che era rimasto 'what was left'. This comment is accompanied by a PL, here having the function of conveying the meaning of obviousness not expressed in speech. The gesture, together with the intonation, is therefore used to provide the interpretative key of this utterance. Interestingly, older children also produced interactive connectives, such as the so-called modulators (connectives serving to reinforce or mitigate the propositional content of the utterance, such as phrasal expressions like diciamo 'let's say', mi sembra 'I think'; Bazzanella 1994), which, in contrast, were never produced by younger children. This finding confirms previous studies showing that the use of verbal expressions of evaluation and interpretation only appears around school age (Baumgartner, Devescovi, and D'Amico 2000; Berman and Slobin 1994). We can thus affirm that the ability to use gestures with a modal function develops in parallel with the ability to verbally express comments and evaluations. Like these, the use of modal gestures requires the ability to comment upon and control one's own production, as well as the ability to manage the interaction with the interlocutor in order to maintain his attention. Finally, the analysis of parsing gestures showed that younger children used these gestures only when listing a series of objects or events in the story. That is, they produced conventional gestures of enumeration (extending the fingers consecutively) as they marked the designation of each element of a list.
Older children, in contrast, used parsing gestures in metanarrative (McNeill 1992) or orientation clauses (Labov 1972; Labov and Waletzky 1967), that is, clauses in which the narrative frame is explicitly mentioned (for instance, when presenting the setting of the story or introducing the characters, when signaling the shift towards a new sequence, etc.). In other words, older children used parsing gestures to mark the internal articulation of their narration. The gesture used in such instances was often a palm up open hand gesture, with the hand extended into immediate frontal space (Kendon 2004; Müller 2004). The use of these gestures reflects the emerging metanarrative competence that appears, again, around school age (Berman 2001; Berman and Slobin 1994; Peterson 1990; Peterson and McCabe 1983). An interesting parallel was found in the different use of temporal and metatextual connectives between the groups of children. Younger children tended to use temporal connectives to express relations of posteriority among events (e.g., poi, dopo 'then'). As shown in the above-mentioned studies, younger children tend to retell the events in a sequential order regardless of the actual relation that links them. Older children, in contrast, produced a greater variety of forms, expressing other types of temporal relations (anteriority: prima che 'before'; simultaneity: mentre 'while'). This reflects their greater ability to structure the narrative text (see the above-mentioned studies). A similar observation can be made for the use of metatextual connectives (discourse structure markers – such as allora 'well'; basta 'the end' – whose function is to segment the flow of discourse).
While younger children used them only to mark the beginning and the end of the discourse, older children used them also to mark other parts of the discourse structure, such as the beginning of a particular sequence (like a flashback), the resumption of the flow of the speech, the shift to another part of the story. Sometimes, in older children, these connectives were also accompanied by a parsing gesture. Considering these parallelisms, we can affirm that the ability to use gestures with a parsing function and the capacity of planning and structuring a narration develop in tandem.


4. Conclusion

The aim of this paper has been to illustrate differences in the use of pragmatic gestures by Italian children of three different age groups while producing a narrative. Moreover, it also aimed at comparing the use of pragmatic gestures to the production of textual connectives. The analysis has shown that there is a strong connection between the use of gestures with the three pragmatic functions and the use of textual connectives, in particular temporal, metatextual and interactive connectives. In order to interpret the findings, we must consider that the type of text the children were asked to produce, a narrative, is a complex type of discourse. Narration is a social activity, characterized by an interactive dimension (Losh et al. 2000; Reilly et al. 2004). A good narrative must thus accomplish two kinds of functions: referential (i.e., recalling a series of facts) and evaluative (i.e., interpreting and evaluating them) (Labov 1972; Labov and Waletzky 1967). A competent narrator must organize the story in a precise structure, normally composed of a sequence of episodes and events that develop hierarchically in time and are connected by logical and causal relationships (Mandler and Johnson 1977; Stein and Glenn 1979). At the same time, the narrator must provide evaluative comments on events and on the characters' behaviours. In this way, he shows his own point of view on the story and attracts and orients the listener's flow of attention (Berman and Slobin 1994; Labov 1972; Peterson and McCabe 1983). Acquiring such competence is a slow process, as we can see both in the use of connectives and in the use of pragmatic gestures. As we have observed, the ability to use a larger repertoire of temporal connectives, through which different types of temporal relations may be expressed, and the ability to use connectives as discourse structure markers are paralleled by the ability to use parsing gestures.
Similarly, the capacity to use modal gestures to convey a greater variety of meanings corresponds to the ability to use connectives with the function of modulators. The ability to integrate gestural and verbal units serving similar functions is an important sign of the parallel development of the two modalities. The findings provide further evidence to support the view of speech and gesture as an integrated system.

5. References

Baumgartner, Emma, Antonella Devescovi and Simonetta D'Amico 2000. Il Lessico Psicologico. Origine ed Evoluzione. Roma: Carocci.
Bazzanella, Carla 1994. Le Facce del Parlare: Un Approccio Pragmatico all'Italiano Parlato. Firenze: La Nuova Italia.
Berman, Ruth A. and Dan I. Slobin 1994. Relating Events in Narrative: A Crosslinguistic Developmental Study. Hillsdale, NJ: Lawrence Erlbaum Associates.
Colletta, Jean-Marc 2004. Le Développement de la Parole chez l'Enfant Âgé de 6 à 11 Ans. Corps, Langage et Cognition. Hayen: Mardaga.
Colletta, Jean-Marc 2009. Comparative analysis of children's narratives at different ages. A multimodal approach. Gesture 9(1): 61–97.
Ferré, Gaëlle 2011. Functions of three open-palm hand gestures. Multimodal Communication 1(1): 5–20.
Graziano, Maria 2009. Rapporto fra lo sviluppo della competenza verbale e gestuale nella costruzione di un testo narrativo in bambini dai 4 ai 10 anni. Unpublished doctoral dissertation. SESA – Scuola Europea di Studi Avanzati – Università degli Studi "Suor Orsola Benincasa", Napoli, Italy & Université Stendhal – Grenoble 3, Grenoble, France.
Guidetti, Michèle 2003. Pragmatique et Psychologie du Développement. Comment Communiquent les Jeunes Enfants. Paris: Belin.
Halliday, Michael Alexander K. and Ruqaiya Hasan 1976. Cohesion in English. London: Longman.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Labov, William 1972. Language in the Inner City: Studies in the Black English Vernacular. Oxford: Basil Blackwell.
Labov, William and Joshua Waletzky 1967. Narrative analysis: Oral versions of personal experience. In: June Helm (ed.), Essays on the Verbal and Visual Arts, 12–44. Seattle: University of Washington Press.
Losh, Molly, Ursula Bellugi, Judy S. Reilly and Diane Anderson 2000. Narrative as a social engagement tool: the excessive use of evaluation in narratives from children with Williams syndrome. Narrative Inquiry 10(2): 265–299.
Lundquist, Lita 1994. La Cohérence Textuelle: Syntaxe, Sémantique, Pragmatique. Frederiksberg C: Samfundslitteratur.
Mandler, Jean M. and Nancy S. Johnson 1977. Remembrance of things parsed. Story structure and recall. Cognitive Psychology 9(1): 111–151.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: Chicago University Press.
Müller, Cornelia 2004. Forms and uses of the Palm Up Open Hand. A case of a gesture family? In: Roland Posner and Cornelia Müller (eds.), The Semantics and Pragmatics of Everyday Gestures, 234–256. Berlin: Weidler Buchverlag.
Peterson, Carole 1990. The who, when and where of early narratives. Journal of Child Language 17(2): 433–455.
Peterson, Carole and Allyssa McCabe 1983. Developmental Psycholinguistics: Three Ways of Looking at a Child's Narrative. New York: Plenum Press.
Reilly, Judy S., Molly Losh, Ursula Bellugi and Beverly Wulfeck 2004. "Frog, where are you?" Narratives in children with specific language impairment, early focal injury and Williams syndrome. Brain and Language 88(2): 229–247.
Seyfeddinipur, Mandana 2004. Meta-discursive gestures from Iran: Some uses of the 'Pistol Hand'. In: Roland Posner and Cornelia Müller (eds.), The Semantics and Pragmatics of Everyday Gestures, 205–216. Berlin: Weidler Buchverlag.
Stein, Nancy L. and Christine G. Glenn 1979. An analysis of story comprehension in elementary school children. In: Roy O. Freedle (ed.), New Directions in Discourse Processing, 53–119. Norwood: Ablex.
Streeck, Jürgen 2009. Gesturecraft: The Manu-facture of Meaning. (Gesture Studies 2.) Amsterdam: John Benjamins.

Maria Graziano, Lund (Sweden)


86. Gestures in Southwest Europe: Portugal

1. Introduction
2. Linguistics
3. Sign language/Gesture acquisition: neurological and cognitive perspectives
4. Performance studies: music and contemporary dance
5. Computer Science
6. Concluding remarks
7. References

Abstract

This article is the result of a survey of scientific research on body, language, and communication carried out in Portugal. It is not an exhaustive list, as only the most salient and recent developments of research on body and communication have been taken into account. These scientific endeavors can be grouped into four disciplinary areas: Linguistics, Sign Language Studies, Performance Studies, and Computer Science. Research is more intensive and advanced in the technologically oriented areas, such as Computer Science, Neuroscience, and Performance Studies, as well as in research oriented to the application of bodily communication in various contexts and for different purposes. As for Linguistics, although there is not yet full awareness of the importance of gesture in the study of language, there are already a few scholars committed to ethnographic and cognitive research on multimodality in interaction, sign language, and sign language acquisition.

1. Introduction

It would be misleading to talk about a representative body of research on gesture in Portugal. Nevertheless, there are isolated studies and projects, developed within different scientific areas, in which body movements are either a primary or a secondary object of study. This article provides a review of some former and recent approaches to Portuguese co-speech gestures and body movements in different contexts and from distinct disciplinary perspectives. The emphasis placed on each approach depends on the advancement of its respective studies and projects. The article is divided into four sections, each corresponding to a disciplinary area: Linguistics, including the ethnographic and cognitive orientation in the multimodal analysis of face-to-face interaction; Sign Language Studies, focusing on gesture acquisition from a neurological and cognitive perspective; Performance Studies, which include music and contemporary dance; and, finally, Computer Science, where the analysis of body movements and facial expressions is important for the creation of avatars, with special emphasis on sign language avatars. All these approaches are interdisciplinary and/or transdisciplinary, and most of them depend on and contribute to the development of new technologies, as is the case, for instance, with sign language acquisition research. Since it would be impossible to consider every single work on gesture, some of the more salient examples from each specific area of "gesture" research in Portugal will be briefly outlined in the following sections.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1259–1266



2. Linguistics

The first references to gestures in communication in Portugal can be found in modern Philology and Ethnology (see Basto 1938; Vasconcelos 1886), reflecting the reception of the work of European 19th-century scholars. It is curious to note that Vasconcelos already described most of the functions now attributed to co-speech gestures (see Vasconcelos 1886: 97–98). The same author also wrote about the "figa" (Vasconcelos [1925] 1996), a quotable gesture also described by de Jorio in his account of the Neapolitan gesture la fica (de Jorio 2000: 214–219). In the 20th century, the linguist Herculano de Carvalho mentioned gesticulation and posture in speech as responsible for the display of attitudes and emotions in a silent fashion. He referred to gestures as "oral elements of language", or "nonverbal language", which he considered "other kinds of language" (Carvalho 1983: 63). However, despite his recognized influence on 20th-century Portuguese linguistics, his reflections did not result in any systematic study of the subject. In the year 2000, the first conference exclusively devoted to gesture in Portugal was held in Porto at the Fernando Pessoa University, strongly contributing to the reception of Gesture Studies in Linguistics. In fact, this event and the discussion with the founders of Gesture Studies were pivotal for the beginning of a research project on multimodal communication whose results were published some years later (Galhano-Rodrigues 2007b). This project consisted of a holistic analysis of co-speech body movements (head, torso, and upper-limb movements, gaze, and facial expressions) and of their relation(s) with speech (including prosody). The analysis of some sequences of face-to-face interactions was qualitative, and its objective was to detect how the polyfunctional and polysemic properties formally described for the different kinds of conversational signals could also be recognized in co-speech body movements.
A further aim was to find out whether there were any formal regularities in body movements corresponding to specific conversational functions (see Galhano-Rodrigues 2007a, 2007b). This micro-analytical study lacks a comprehensive and detailed systematization of forms (regular movement features and configurations) correlated with conversational functions and speech. Furthermore, although it emphasizes the absence of a hierarchy between verbal and nonverbal modalities, this dichotomy is still present. However, as it pursues a multimodal perspective on interaction, it brings together theoretical approaches from many different orientations in Conversation and Discourse Analysis and other disciplinary fields, offering a flexible and dynamic methodology (i.e., one capable of embracing further theoretical approaches) for a holistic micro-analysis of the linguistic, prosodic, and kinetic elements of interaction. Further works consisted of isolated case studies, namely "Gesticulating with the feet" and "Multimodality in Interpretation" (Galhano-Rodrigues 2007c), and a comparison of gesture spaces, interactional spaces, and gesture amplitude in Angolan and European Portuguese speakers (Galhano-Rodrigues 2010). Recent research has centred on the analysis of multimodality in Portuguese as spoken in different cultures, with emphasis on two interconnected topics: (i) pointing gestures and correlated speech, considering context, interpersonal expectations, the target pointed at, and other related factors, and (ii) embodied spaces (mental spaces and the structuring of, and need for, space[s] in social life) and embodied cultures (postural/movement habits resulting from cultural practices).

86. Gestures in Southwest Europe: Portugal


The relevance of multimodality in face-to-face communication is slowly beginning to attract the attention of a limited number of scholars in Portuguese Linguistics, and it is fair to say that the tight link between speech and body movements in utterance production has not been fully acknowledged yet. Nevertheless, some young researchers from other disciplines have taken up the challenge of embracing gesture studies, further contributing to the development of this field in Portugal. Multimodality in simultaneous interpreting is being explored by Elena Zagar Galvão (University of Porto) (Galvão 2009; Galhano-Rodrigues and Galvão 2010), herself a professional interpreter. Her research topic, which has been recognized as innovative within the area of Interpreting Studies, aims at comparing simultaneous interpreters’ gesture and speech production with speakers’ gesture and speech production, in order to describe the gestures produced within a highly complex cognitive and social activity and to establish how interpreters’ gestures, or bodily actions, can contribute to the construction of the meaning they are called upon to convey. The results of this research could have far-reaching implications for both the theory and the teaching of interpreting. Multimodality in intercultural interactions in forensic contexts is the focus of a research project conducted by Ana Paula Lopes (2012). The corpus analyzed consists of footage of interactions between native speakers of Portuguese and English of representative ages and socio-cultural groups. Combining the methodology used in Galhano-Rodrigues (2007a) with other methodological approaches from Psychology and the Forensic Sciences, she aims at identifying and describing multimodal cues which can be correlated with the strategies used to construct meaning. These strategies depend on inter-subjectivities and on interpersonal attitudes and expectations, as well as on cultural values and practices.
The objective is to establish a general framework of categories relevant to the specific context of forensic interactions, more precisely, to intercultural interactions in forensic contexts. This framework should account for the complexity of the multiple phenomena to be detected at the different, but intertwined, levels of interaction. The cues which may cause misunderstandings and/or influence judges’ decisions are important issues to be pursued in this research project.

3. Sign language/gesture acquisition: Neurological and cognitive perspectives

Anabela Cruz-Santos, from the University of Minho, CIED-UMinho, has been responsible for introducing a cognitively oriented perspective in the study of gesture in Portugal. She first came into contact with gesture research in 2004, during her doctoral program at the University of Wisconsin-Madison, USA. She was supervised by Julia Evans, currently director of the Child Language and Cognitive Processes Lab (SDSU), California. As a consequence, she developed a strong interest in gesture acquisition and is supervising various studies on natural gesture acquisition and use in children with and without hearing impairments. Considering natural gestures as one of the primary bases for the acquisition and development of communicative (linguistic and gestural) competences, these studies seek to establish how hearing can affect both the acquisition and the types of natural gestures used by Portuguese children (Lima 2011). A neurological approach to research in Portuguese Sign Language is pursued by Ana Mineiro, researcher at the Institute of Health Sciences of the Portuguese Catholic University and the Institute of Theoretical and Computational Linguistics (ILTEC) in Lisbon. As the leader of the project “Longitudinal Corpus of Portuguese Sign Language”, she is interested in collecting a vast amount of data on sign language acquisition (http://corpusaquilgp.ics.lisboa.ucp.pt:8080/). Mineiro also collaborates with Ana M. Abrantes (PI), Maria V. Nunes, and Alexandre Castro-Caldas (Castro-Caldas 2009) in the project “Thinking, Learning and Talking in the Deaf Way”. Their aim is to understand how the deaf categorize reality and to describe prototypes in deaf cognition (see Carmo et al. in press; Mineiro et al. 2009; Morais et al. 2011).

4. Performance studies: Music and contemporary dance

Researchers from the performance areas have been considering the analysis of body movements with both educational and artistic objectives. An example of these orientations is INET-MD, the Institute of Ethnomusicology – Research Centre on Music and Dance, a multidisciplinary research center comprising three schools of different universities (website: http://www.fcsh.unl.pt/inet/). The project “TeDance, Technologically Expanded Dance”, coordinated by Daniel Tércio (UTL), relies on a multidisciplinary team consisting of specialists from engineering, biomechanics, choreography, theater, and dance studies. Its aim is to develop interaction between the physical and virtual worlds in dance (http://www.fmh.utl.pt/dance/tedance/).

An interdisciplinary approach within the area of Performance Studies is being pursued by Jorge Salgado Correia (University of Aveiro) (Correia 2003). The author proposes a theoretical methodology for the comprehension and study of musical performance, which can be applied to other contexts. Based on psychological, cognitive, and neurological studies of ontogenetic and communicative issues in the relation between the body and the environment, the author presents his perspective on the body’s involvement in the making of meaning, considering two premises, which presuppose the existence of: (i) a bodily/physical memory, a stock of knowledge based on experience, from which meaning is constructed; and (ii) a correspondence between this stock of experience and a stock of emotions assimilated through experience. The latter premise is the basis for new imaginative combinations of sense, gesture, and emotions. These combinations are non-verbally displayed by the performer as bodily constructions of emotional narratives, while at the same time being understood/re-interpreted by the listener as a mimetic reaction to music.
Imagination works from the most basic cognitive level to the most complex rational structure and builds on humans’ capacity for making sense in the representation of new creative acts. Imagination is seen as essential in aesthetic (musical) experiences, in other words, in the musical gesture – not only from the interpreter’s/performer’s but also from the listener’s point of view. Resorting to the “mimesis hypothesis”, he explains that musical communication is only possible if the gestures produced by performers find adequate listeners. This model contributes to explaining the conceptualization of performers’ work processes (for further research see Correia 2008, 2009). Following this framework, Ana Carrolo (2009) studies interactional and synchronizing strategies used in the communication between blind and non-visually-impaired musicians in group performance. She focuses on the role of motor activity in both conveying and receiving information, on blind people’s relation to their body in the context of performance, on how they use expressiveness and hear other people’s music, and on the kind of information they can perceive and the images this information suggests.

António Salgado (Escola Superior de Artes de Espetáculo, Porto) explores “gestures” in singers’ facial expressions and voice in relation to the expression of emotions during musical performance. Salgado (2006, 2011) shows the need for raising singers’ awareness of the innate bodily mechanisms of the expression of emotions which underlie their communication with the audience in a subconscious way. This awareness could contribute to improving self-perception during performance and to achieving greater expressiveness during musical execution, and, consequently, to a more effective and interactive communication with the audience. Alejandro Laguna (2012) considers body movements in the specific context of dance technique lessons. He proposes a methodology for the analysis of a triadic inter-subjective model: the learner, the teacher, and the dance accompanist. The objective of his project is to establish how different kinds of features (prosodic, musical, and visual-kinetic) in teachers’ instructions and movements, as well as in the music played, can lead to deviations in the learner’s performance. The project “TKB – A Transmedia Knowledge Base for contemporary dance”, directed by Carla Fernandes (Universidade Nova de Lisboa), is another example of the integration of gesture studies in dance studies. The project aimed at the creation of a relational and annotated digital archive for contemporary dance and of a video annotation tool for choreographers to be used in real time (webpage: http://tkb.fcsh.unl.pt). One of Fernandes’ objectives was to analyze dancers’ body movements in relation to their spoken utterances according to criteria other than those traditionally used by choreographers (Fernandes 2010).
For this purpose, three performances of the Portuguese choreographer Rui Horta were thoroughly analyzed. For the micro-analysis of body movements, the team (Dimakopoulou, Galhano-Rodrigues, Fernandes, and Santos) established the criteria for the creation of a glossary with the necessary vocabulary for a uniform classification of movements. The annotation process and its subsequent online publication on an archive platform are described in two master’s theses (Dimakopoulou 2011; Santos 2011). The “1st Musical Itineraries Forum: Music and Movies” (19–20 June 2010) and the “2nd Musical Itineraries Forum: Music and Gesture” (28–30 October 2011), both held in Lisbon, are testimonies to the importance of exploring body movements in music and dance studies.

5. Computer Science

In the field of computer science, numerous projects explore multimodality in human-computer interaction with a view to developing interfaces and avatars. Here are some examples. “LIFEisGAME – LearnIng of Facial Emotions usIng Serious GAMEs” is a project developed by researchers from the University of Porto, the Microsoft Development Center, and the University of Texas at Austin (http://www.portointeractivecenter.org/lifeisgame). Through the synthesis of realistic virtual characters and markerless motion-capture technology, this project aims to develop relaxing and interactive games which help individuals with autism spectrum disorders to recognize emotions in facial expressions. It involves 30 researchers, among them Veronica Orvalho (PI), from the School of Sciences of the University of Porto and the Telecommunications Institute; Miguel Sales Dias, Director of the Microsoft Language Development Center; and Cristina Queirós and António Marques, from the School of Psychology and Educational Sciences of the University of Porto.

Ana Mineiro is also engaged in the field of computer science, with the objective of modeling an avatar for Portuguese Sign Language. This avatar should be able to translate text from Portuguese into Portuguese Sign Language. The team is composed of two computational linguists, José P. Ferreira and Mara Moita (ILTEC); three deaf researchers, Patrícia Carmo, Amílcar Morais, and Jorge Barreto (UCP); a web designer, Ricardo Oliveira (UCP); and two computer scientists, Rui de Almeida (ISCTE) and Michael Filhol (University of Paris 11, CNRS). Miguel Sales Dias (ISCTE) is a consultant for this project (see Moita et al. in press).

Vítor Sá, together with Cornelius Malerczyk and Michael Schnaider (2002), from the School of Social Sciences of the Portuguese Catholic University, works on the development of speech, gesture, and even smell recognition. His work on pointing gestures is of special interest, as it examines 3D hand configurations and index-finger positions in relation to speech in the context of multimodal interaction.

6. Concluding remarks

The collective and individual research projects outlined above reveal a heightened awareness of the pertinence of body movements in communication in the more technologically oriented disciplines, such as the computer sciences, the neurosciences, and performance studies, as well as in research oriented towards the application of bodily communication in various contexts and for different purposes. However, the reception of the theoretical background of gesture studies and its integration in the analysis of conversation and interaction date only from the very beginning of the 21st century. Thus, in Portugal there is still a long path ahead in the study of multimodality in face-to-face interaction from a linguistic perspective. Other fields, such as the educational sciences and translation studies, seem to be more receptive to the importance of studying gesture in the specific contexts of intercultural communication, interpreting, language impairment, and sign language acquisition.

7. References

Basto, Cláudio 1938. A linguagem dos gestos em Portugal. Esboço etnográfico. Revista Lusitana 36(1–4): 5–72.
Carmo, Patrícia, Ana Mineiro, Ronice Müller de Quadros and Alexandre Castro-Caldas in press. Handshape is the Hardest Path in Portuguese Sign Language Acquisition. Sign Language and Linguistics. Amsterdam: John Benjamins.
Carrolo, Ana José 2009. Corpo, música e invisualidade: interacção e sincronia na música de conjunto. Master’s thesis, Universidade de Aveiro, Departamento de Comunicação e Arte.
Carvalho, José G. Herculano 1983. Teoria da Linguagem. Natureza do Fenómeno Linguístico e a Análise das Línguas, Volume I, 6th edition. Coimbra: Coimbra Editora Limitada.
Castro-Caldas, Alexandre 2009. Brain mechanisms for Sign Language. Cadernos de Saúde 1, especial Línguas Gestuais: 7–13.


Correia, Jorge Salgado 2003. Investigating musical performance as embodied socio-emotional meaning construction: Finding an effective methodology for interpretation. Ph.D. dissertation, University of Sheffield, UK.
Correia, Jorge Salgado 2008. Do performer and listener share the same musical meaning? Estudios de Psicología 29(1): 49–69.
Correia, Jorge Salgado 2009. Developing musicality in the teaching of performance. In: Stephen Malloch and Colwyn Trevarthen (eds.), Communicative Musicality: Exploring the Basis of Human Companionship, 597–610. Oxford: Oxford University Press.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. Bloomington/Indianapolis: Indiana University Press.
Dimakopoulou, Paraskevi 2011. Towards the creation of an annotation system and a digital archive platform for contemporary dance. M.A. dissertation, Faculdade de Engenharia, Universidade do Porto.
Fernandes, Carla 2010. Looking for the linguistic knowledge behind the curtains of contemporary dance: The case of Rui Horta’s creative process. Art, Brain and Language. Special issue of International Journal of Arts and Technology 3(2–3): 235–250.
Galhano-Rodrigues, Isabel 2007a. Body in interpretation. Nonverbal communication of speaker and interpreter and its relation to words and prosody. In: Peter Schmitt and Heike Jüngst (eds.), Translationsqualität (Leipziger Studien zur angewandten Linguistik und Translatologie), 739–753. Frankfurt a. M.: Peter Lang Verlag.
Galhano-Rodrigues, Isabel 2007b. O Corpo e a Fala. Lisboa: FCG/FCT.
Galhano-Rodrigues, Isabel 2007c. How do feet gesticulate? Paper presented at the 5th Conference of the ISGS (International Society for Gesture Studies), Northwestern University, Evanston, Illinois, 18–21 June 2007.
Galhano-Rodrigues, Isabel 2010. Gesture space and gesture choreography in European Portuguese and African Portuguese interactions: A pilot study of two cases.
In: Stefan Kopp and Ipke Wachsmuth (eds.), GW 2009 (LNAI 5934), 23–33. Heidelberg: Springer Verlag.
Galhano-Rodrigues, Isabel and Elena Zagar Galvão 2010. The importance of listening with one’s eyes: A case study of multimodality in simultaneous interpreting. In: Jorge Díaz Cintas, Anna Matamala and Josélia Neves (eds.), New Insights into Audiovisual Translation and Media Accessibility: Media for All 2, 241–253. Amsterdam: Rodopi.
Galvão, Elena Zagar 2009. Speech and gesture in the booth – A descriptive approach to multimodality in simultaneous interpreting. In: Dries de Crom (ed.), Selected Papers of the CETRA Research Seminar in Translation Studies 2008.
Laguna, Alejandro 2012. Transmodalidad y divergencia informacional en la enseñanza de danza. Cuadernos de Música, Artes Visuales y Artes Escénicas 7(2): 43–63.
Lopes, Ana Paula 2012. Analysis of multimodality in face-to-face interaction applied in a multicultural criminal context. Paper presented at the 6th International Conference on Multimodality, London, 22–24 August 2012.
Mineiro, Ana, Liliana Duarte, Joana Pereira and Isabel Morais 2009. Adding other pieces to the Portuguese Sign Language lexicon puzzle. Cadernos de Saúde 1, especial Línguas Gestuais: 83–98.
Moita, Mara, Patrícia Carmo, Helena Carmo, José Pedro Ferreira and Ana Mineiro in press. Estudos preliminares para a modelização de um avatar para a LGP: os descritores fonológicos. Cadernos de Saúde, Lisboa.
Morais, Amílcar, João C. Jardim, Ana Silva and Ana Mineiro 2011. Para além das mãos: elementos para o estudo da expressão facial em Língua Gestual Portuguesa (LGP). Cadernos de Saúde 1(2): 135–142.
Sá, Vítor J., Cornelius Malerczyk and Michael Schnaider 2002. Vision-based interaction within a multimodal framework. Selected Readings in Computer Graphics 2001. Stuttgart: Fraunhofer IRB Verlag. Webpage: http://virtual.inesc.pt/10epcg/actas/pdfs/sa.pdf.


Salgado, António 2006. A cognitive feedback study for improving emotional expression in solo vocal music performance. Performance-Online 2(1). Available at: www.performanceonline.org.
Salgado, António 2011. Investigating e-motional meaning in music performance. ISSTP Journal, Tension in Performance 2(1): 19–24.
Santos, David 2011. Dramaturgia multimodal: Anotação digital de corpora multimodais. M.A. dissertation, Faculdade de Engenharia, Universidade do Porto.
Vasconcelos, José Leite de 1886. Linguagem. Ensaio Antropológico Apresentado á Eschola Medica do Porto Como Dissertação Inaugural. Porto: Typographia Occidental.
Vasconcelos, José Leite de 1996. Signum Salomonis. A Figa. A Barba em Portugal. Estudos de Etnografia Comparativa. Lisboa: Publicações Dom Quixote. First published [1925].

Isabel Galhano-Rodrigues, Porto (Portugal)

87. Gestures in Southwest Europe: Catalonia

1. The tradition of studies of Catalan gestures. Studies of emblematic gestures
2. Emblematic gestures. Analysis and comments
3. Other studies. Issues, trends and gaps in the analysis of Catalan gestures
4. References

Abstract

The tradition of studies of Catalan gestures began with the article by Joan Amades (1957), discussed and completed years later by Mascaró (1981) and Payrató (1989, 1993, 2013). These works are basically a series of studies on Catalan emblems and, in part, a reflection on the characteristics of the gesticulation of Catalan speakers. Recent studies on various aspects of Catalan gestures have focused on the concept of multimodality, particularly on the relationship between the production and the interpretation of messages, which comprise verbal, vocal, and gestural constituents.

1. The tradition of studies of Catalan gestures. Studies of emblematic gestures

We owe the first collection of Catalan autonomous gestures, or emblems, to Joan Amades (1957). Notably in tune with current interpretations, Amades understood the linguistic nature of the gesture (at least in the broad sense) and its relevance for the ethnology of the language. His repertoire comprises 299 entries and 96 photographs of gestures which can be considered autonomous and emblematic. In fact, he speaks literally about “linguistic” (and non-linguistic) gestures, on the one hand, and about “general”, “individual”, and “indeterminate” gestures on the other, but he does not produce an accurate characterization of the items. Neither does he establish reliable relationships between gestures (their morphological variants, for example); sometimes, variants of the same unit appear as different gestures (Mascaró 1981: 27).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1266–1272


Many years later, in a short article, Jaume Mascaró (1981) set out the foundations of what would eventually become the study of Catalan gestures, providing specific criteria for comparison and contrast. Payrató (1989, 1993) followed the line traced by Mascaró, with the intention of establishing a basic repertoire of Catalan emblems associable with the colloquial speech of Barcelona, and Martí (1992) recorded and studied gestures from Alghero. There is still much work to be done on the differences in the gestures used throughout the Catalan-speaking territories. A project of this kind, a gestural dialect atlas compiled from an ethnological perspective, would be the best possible tool for expanding our current knowledge (see particularly Payrató 2006). For a recent panoramic study recording traditional information and sketching the relationship of Catalan gestures with emblems from other cultures, see Payrató (2013).

2. Emblematic gestures. Analysis and comments

Below is a selection of some of the best-known, traditional, and representative emblems used in the Catalan-speaking countries, grouped by subject. A more complete list of a basic repertoire of Catalan emblems established for the Catalan of Barcelona, with semantic labels, verbal descriptions, and corresponding illustrations, can be found in Payrató (1989, 1993, 2007, 2013). With a methodology that combines an encoding test and a (later) decoding test (performed by different informants), a basic corpus of units is built up. Within the established repertoire, (prototypical) emblems (autonomous from speech, with a communicative goal, illocutionary force, a semantic core, and a social nature) are distinguished from pseudo-emblems (which do not present all the emblematic features). In an analogy with speech acts, the most common gestures are those of an assertive and directive nature.

2.1. Gestures of mockery, insult, and threat

Cat. pam i pipa (literally ‘span and pipe’; “the Shanghai Gesture”, or the Thumb Nose, in Morris et al.’s 1979 list) is one of the best-known and most widespread Catalan emblems. The name has been exported to Latin America, and especially to Argentina, where it is known as el pito catalán (literally ‘the Catalan whistle’). The gesture is often documented in Catalan literature, is included in current dictionaries, and its name appears with dialectal variants (jutipiris in the Balearic Islands, for example). Cat. llengota or llengotes (‘to stick out one’s tongue’) is also well represented as a gesture of derision. Nowadays the gestures of insult are the “horns”, la botifarra (literally ‘the big sausage’), and the traditionally called dit impúdic (the ‘impudent finger’, in Latin digitus impudicus). These gestures can be considered as separate, different items (“the Vertical Forearm Jerk”, “the Horn Sign”, and “the Finger”, with the finger raised [see Morris et al. 1979]), but they can also be understood as related items linked by family resemblance (see illustrations in Payrató 2003: 78).

2.2. Gestures of conjuration, giving orders, or making requests

Cat. figa (‘the Fig’, Morris et al. 1979) is the gesture of conjuration par excellence in Catalonia and is also collected in current dictionaries. As in Spanish and English, Cat. tocar ferro (‘to touch iron’) and Cat. tocar fusta (‘to touch wood’) are also typical expressions (both included in dictionaries, the first one more widespread) to prevent bad luck.


With respect to orders or requests, the gesture performed to bring the receiver closer is normally made with the palm facing the face of the issuer (or the ground) and the fingers folding back; to have the receiver move away, the gesture is usually made with the palm parallel to the ground and the fingers stretching out. Similar indications can be given to make somebody hurry (oscillation of the stretched fingers), to ask someone to speak more loudly or more softly (in the first case with the palm up, and in the second with the palm down, as in the gesture requesting calm), to obtain an answer (a soft backward head movement) or, vice versa, to ask somebody to be quiet (many emblems are used for this purpose: holding the lips, placing the index finger in front of the lips, closing the lips like a zipper, etc.). The diversity of gestural metaphors and metonymies is evident in these cases, as well as in others requesting that something be finished or stopped (e.g. calling “time” by making a T with the hands, pretending to cut the air with scissors, etc.), expressing that someone is talking too much in a quantitative sense (opening and closing the hand), or that someone talks too much in the sense that (s)he cannot keep a secret (the index finger touches the tip of the tongue; Cat. anar-se’n de la llengua, literally ‘to go too far with the tongue’). Other hand gestures serve to ask a favor (hands in the praying position), to ask for keys (as if opening a lock in the air), to ask the time (the index finger touches the wrist), a cigarette (making a V with the fingers near the lips), or the bill (signing in the air), to ask if one can eat or drink something (a hand approaching the mouth with the fingers together, or with a separate, stretched thumb), or to indicate a phone call (raising the fist to the ear), etc.

2.3. Gestures to answer and to indicate states

The non-verbal yes and no are performed by head movements (forward and sideways, respectively, i.e., nodding or shaking one’s head). Rejection can also be expressed by moving the head back in a strong movement, and denial is expressed with the stretched index finger moving sideways like a metronome. To indicate “good” or “bad” the thumb is raised or lowered, or a circle is made with thumb and forefinger to indicate “good” (the Thumb Up and the Ring, see Morris et al. 1979). Warnings include the act of pulling down the skin beneath the eye (“look”; the Eyelid Pull, see Morris et al. 1979) and also tapping the nose with the index finger (“to sniff something”, the Nose Tap, see Morris et al. 1979). Kissing one’s fingers and then opening them immediately has a long tradition (“delicious”, “excellent”; the Fingertips Kiss, see Morris et al. 1979). Some of these emblems can be traced back to Ancient Rome (Fornés and Puig 2008). Expressions of doubt or approximate evaluation (“more or less”) are usually made with oscillations of the flat hand or with sideways head movements, and indifference or ignorance is expressed by shrugging one’s shoulders with a downward grimace of the lips, often while showing the palms (the three movements can be combined in various ways). Slapping the forehead manifests an oversight or that we have just noticed something. To swear, the fingers making a cross are kissed; innocence is revealed by showing the palms, and surprise by raising the eyebrows.

2.4. Gestures to describe other people’s traits and to describe situations, objects, or actions

Touching one’s nose with the index finger (or the thumb) denotes that someone is drunk or drinks too much, and rotating the index finger near the temple indicates that the person to whom the gesture is applied is crazy or talking nonsense. To indicate impertinence, there are still samples of a gesture (especially in the Balearic Islands) in which the thumb touches one side of the jaw and the rest of the hand the other, with the palm facing inwards (Payrató 2008). The gesture is associated with the Catalan expression tenir barra (‘to have a nerve/to have a cheek’, literally ‘to have a jaw’). This gesture is probably exclusive to Catalan and synonymous with another one that is also exclusive and has the same meaning: to grab one’s cheeks and pull them out, Cat. ser un galtes, literally ‘to be a cheek’. However, today the most common gesture in this domain comes from Spanish and consists of slapping one’s cheek, corresponding to the Spanish verbal expression caradura, literally ‘hard face’ (Payrató 2008), based on another metaphor reflecting shamelessness with (hard) facial features. Another gesture that appears to be exclusive to Catalan is associated with the expression no bufar cullera (literally ‘not to blow a spoon’, meaning ‘not to eat anything’ or ‘not to understand a word’) and is performed with a vertical hand, which is closed quickly in front of the mouth. The metaphor to indicate meanness involves showing the fist with the knuckles facing up, while to indicate that someone is a flatterer, people make gestures simulating bouncing a ball, or brushing or shampooing someone (Payrató 2008). To denote a “thief” or “theft” the fingers rotate and fold. Extending the index fingers side by side with the palms facing the ground suggests some sort of union or complicity. Rhythmically opening and closing the fingers of one hand (or both) signifies that a place is full to overflowing. If the motion is made rubbing the tips of the index and middle fingers, the gesture refers to money. A small quantity is represented by showing a minute space between the thumb and the index finger.
For surprise or admiration, the hand shakes repeatedly, and functions as an exclamation or an adverb of quantity. As regards deictic gestures, particularly those of person deixis, “I” is denoted by the index finger pointing to the chest (or with the palm on the chest), and the rest (“you”, “he”, “we”, etc.) with the index finger pointing away from the body of the issuer. The basic spatial deictic item, for “here”, is usually marked with the index finger pointing to the ground, and “there” with the index finger up, also away from the body.

2.5. Internal variation and borrowings. Emblems from Alghero and other areas

The Catalan situation is very interesting because it combines a language with rather small regional variation (at least compared with other Romance languages, such as Italian) with the fact that it is in contact with many different languages: with Spanish (in the Iberian Peninsula), with French (in Roussillon), and with Italian and Sardinian (in Alghero, Sardinia). So we find interesting combinations of languages and emblematic gestures, which reflect numerous cases of gestural and cultural interference on the one hand, and the contrast between emblems that are language-dependent and those that are language-independent on the other (Payrató 2008, 2013). For example, the case of caradura mentioned above clearly comes from Spanish and is intimately linked with the language. Catalan shares other language-dependent emblems with Spanish: for instance, the Catalan expression fer el pilota (‘to flatter’, simulating the bouncing of a ball), also mentioned above, is completely unknown in other linguistic and cultural domains due to its dependency on phraseology. Instead, in Alghero, Martí (1992) reports three typical gestures that we usually associate with Italian but which also occur in the variety of Catalan spoken in this city: with Morris et al.’s (1979) names, the Ear Touch (“homosexual”), the Cheek Stroke (“handsome”), and the Cheek Screw (“very good”). In the Catalan spoken in Roussillon we also find gestures typically associated with standard French.

2.6. Gestures, popular phraseology, and figures of speech

Catalan contains a large number of expressions which have been established on the basis of gestures. It also contains examples of the opposite process: emblems which have been created from verbal expressions. Among the first cases, certain body postures are often interpreted in a symbolic, emblematic way, i.e., with a recurring, socially accepted meaning: folded arms, arm over arm, or not moving a finger indicate a “lazy person”. Sometimes the verbal expression is a description of the gesture, as a movement understood in purely physical terms: rubbing one’s hands (“joy”), washing one’s hands (“deliberately ignoring something”), shrugging one’s shoulders (“showing ignorance or indifference”), etc. In other cases the gesture has promoted verbal expressions which are related to its morphology: for example, in the emblem for robbery (see above, fingers rotate and fold), Cat. tocar l’arpa (literally ‘to play the harp’), and in the case of insulting or rejection gestures, Cat. per aquí es pengen els paraigües (literally ‘here the umbrellas are hung’, in the botifarra, the Forearm Jerk), or Cat. puja aquí dalt i balla (literally ‘come up here and dance’), in the digitus impudicus. Gesture and speech can be expressed simultaneously, as in the case of the “fig” (see above), a gesture of conjuration accompanied by ritual, popular expressions such as Cat. La figa i la flor/per tu sí, per mi no, literally ‘The fig and the flower/for you yes, for me no’. Certain emblems have originated in speech, for instance, in recent innovations, tallar (‘to cut’ [imitating the movement of scissors]) has the meaning of ceasing to talk or interact; making a T with the hands (see above) is a neologism (as a gesture) imported from sport into everyday contexts (also in the sense of stopping an action).
Other emblematic neologisms come from Spanish or may be parallel creations based on the same metaphors and metonymies (Payrató 2008, 2013).

3. Other studies. Issues, trends and gaps in the analysis of Catalan gestures
The contrastive study of internal varieties of Catalan and their associated gestures, as well as the comparison between Catalan and neighboring languages and cultures, can provide a great deal of useful information. In the Romance tradition, the Catalan language has often been considered a bridge between the Ibero-Romance and the Gallo-Romance regions, but any statement of this kind must be corroborated with empirical data to avoid the risk of relativism. This also applies to the gestural domain, where Catalan culture can be seen as a bridge between a Southern (“contact”) culture and a Northern (“non-contact”) culture. In the specific field of coverbal gesticulation, Fitó (2009) pioneered the study of nonverbal deictics associated with verbal deixis. He provides descriptive data that are also valuable for contrasting varieties and styles in the performance of Catalan speakers (bilingual Catalan/Spanish; see Payrató and Fitó 2008, a multilingual, audiovisual corpus in Catalan, Spanish, and English). Payà (2004) also studies different aspects of Catalan

87. Gestures in Southwest Europe: Catalonia


gesticulation in relation to prosody, specifically the relationship between intonational groups, their final features, and their corresponding gestures. Borràs-Comes and Prieto (2011) analyze the contribution of visual aspects to the perception of prosody. Alturo (2004) also relates the verbal message to nonverbal production in the semantic domain, where the representation of states and the representation of processes are constructed and distinguished. Similarly, Lloberes and Payrató (2011) approach the concept of discourse coherence from a pragmatic, multimodal point of view, showing how the contribution of coverbal gestures is crucial. In fact it is here, in the field of multimodality, and specifically regarding the relationship between oral and gestural elements, that we can expect more fruitful studies on the characteristics of Catalan gestures in the future.

4. References
Alturo, Núria 2004. Hipòtesis sobre la representació multimodal (verbal i gestual) dels esdeveniments. In: Lluís Payrató, Núria Alturo and Marta Payà (eds.), Les Fronteres del Llenguatge. Lingüística i Comunicació no Verbal, 141–153. Barcelona: PPU – Universitat de Barcelona.
Amades, Joan 1957. El gest a Catalunya. Anales del Instituto de Lingüística VI: 88–148. Mendoza: Universidad Nacional del Cuyo.
Borràs-Comes, Joan and Pilar Prieto 2011. ‘Seeing tunes.’ The role of visual gestures in tune interpretation. Laboratory Phonology 2(2): 355–380.
Fitó, Jaume 2009. El gest i la dixi d’espai en textos instructius. Caplletra 46: 9–41.
Fornés, M. Antònia and Mercè Puig 2008. El Porqué de Nuestros Gestos. La Roma de Ayer en la Gestualidad de Hoy. Barcelona: Octaedro – Edicions UIB.
Lloberes, Marina and Lluís Payrató 2011. Pragmatic coherence as a multimodal feature: Illustrative cospeech gestures, events and states. In: Lluís Payrató and Josep Maria Cots (eds.), The Pragmatics of Catalan, 215–246. Berlin/Boston: De Gruyter Mouton.
Martí, Joan 1992. Apunts sobre la comunicació no verbal dels algueresos. Revista de l’Alguer 3: 33–50. [First published as “Nonverbale Kommunikation: die Gestik”. In: Joan Martí 1986. L’Alguer. Kulturanthropologische Monographie einer sardischen Stadt, 337–364. Berlin: Reimer.]
Mascaró, Jaume 1981. Notes per a un estudi de la gestualitat catalana. Serra d’Or 259: 25–28.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O’Shaughnessy 1979. Gestures. Their Origins and Distribution. London: Jonathan Cape.
Payà, Marta 2004. Interacció del grup tonal i el gest en el discurs: una aproximació d’anàlisi multimodal. In: Lluís Payrató, Núria Alturo and Marta Payà (eds.), Les Fronteres del Llenguatge. Lingüística i Comunicació no Verbal, 155–172. Barcelona: PPU – Universitat de Barcelona.
Payrató, Lluís 1989. Assaig de dialectologia gestual. Aproximació pragmàtica al repertori d’emblemes del català de Barcelona. Universitat de Barcelona. Publicacions de la Universitat de Barcelona, 1991. http://www.tdx.cat/handle/10803/1687.
Payrató, Lluís 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20(3): 193–216.
Payrató, Lluís 2003. What does ‘the same gesture’ mean? A reflection on emblems, their organization and their interpretation. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures: Meaning and Use, 73–81. Porto: Edições Universidade Fernando Pessoa.
Payrató, Lluís 2006. Breus apunts introductoris per a un projecte de dialectologia catalana del gest. In: Joan Veny (ed.), Estudis de Llengua i Literatura Catalanes/LII, 309–322. Barcelona: Publicacions de l’Abadia de Montserrat.
Payrató, Lluís 2007. Formes de comunicació no verbal. In: Joan Soler and Roser Ros (eds.), Tradicionari. Volum 7: La narrativa popular, 70–84. Barcelona: Enciclopèdia Catalana.
Payrató, Lluís 2008. Past, present, and future research on emblems in the Hispanic tradition: preliminary and methodological considerations. Gesture 8(1): 5–21.


VI. Gestures across cultures

Payrató, Lluís 2013. El Gest Nostre de Cada Dia. La Cultura al Cos: la Gestualitat Emblemàtica com a Patrimoni de la Cultura Popular. Barcelona: Publicacions de l’Abadia de Montserrat.
Payrató, Lluís and Jaume Fitó (eds.) 2008. Corpus Audiovisual Plurilingüe. Barcelona: Publicacions i Edicions de la Universitat de Barcelona.

Lluís Payrató, Barcelona (Spain)

88. Gestures in Western Europe: France

1. Introduction
2. Pre-eighteenth century
3. Gesture studies in eighteenth century France
4. Gesture studies in nineteenth century France
5. Gesture studies in twentieth century France
6. Gesture studies in contemporary France
7. Conclusion
8. References

Abstract
In this entry, we approach the relationship between gesture and culture by examining the place that gesture studies have been accorded in France. By tracing how attitudes towards gestures in France changed from the pre-eighteenth-century period up until contemporary France, we demonstrate the various ways that gestures in France have been understood, used, studied, and even abandoned (if only on rare occasions).

1. Introduction
The understanding of gesture depends on the culture within which it is situated. Yet answering the question of how a specific community understands, apprehends, or even uses “gesture” is a major challenge. The ability to perceive someone’s nationality from the way they gesture would be a remarkable one, but allocating a particular character to each nationality is more of a pitfall than a framework. Except for Efron’s (1972) comparative description, no major investigation has been able to establish an inventory of gesture or a typology of gestural practices that is unique to any one country. In this paper, we propose to approach the cultural specificity of gesture in France from the perspective of the place that gesture studies have been accorded in historical descriptions. This place provides the roots of the gesture studies being developed in France today and provides a backdrop against which to consider their orientation.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1272–1282

2. Pre-eighteenth century
The Middle Ages and Renaissance periods were marked by early religious thought and by references persisting from antiquity. A prescriptive approach to gesture dominated,


with the emphasis in this period lying not on understanding what gesture was but on describing what gesture should be. In this period, it was the role of gestures in antiquity, especially with regard to pantomime, that began to attract the first scholarly interest in gestures. The two leading theorists of the art of oratory, Cicero and Quintilian, did not explicitly focus on gesture but recognized the need for orators to receive training in bodily action (gymnasticus). In L’Institution Oratoire, Quintilian warns the orator against using particular gestures which he considers vulgar or evocative of “gestes de théâtre” (Quint. Inst. 11.3.102–104, 123, 125; see Aldrete 1999: 69). Exaggerated, large, repeated movements were seen as unworthy of Roman nobility. The values of self-restraint and modesty in gesture, seen as social markers (see Graf 1991: 47), were incompatible with the pervasive pantomime observable in the games of ancient Rome. Actors were regularly expelled from Rome, if not from Italy (Aldrete 1999: 54). The actio of aristocratic rhetoric goes so far as to deride the farcical gesturing of the plebs. Cicero captures this view when he writes that

On all those emotions a proper gesture ought to attend; not the gesture of the stage, expressive of mere words, but one showing the whole force and meaning of a passage, not by gesticulation, but by emphatic delivery, by a strong and manly exertion of the lungs, not imitated from the theatre and the players, but rather from the camp and the palaestra. The action of the hand should not be too affected, but following the words rather than, as it were, expressing them by mimicry (L’Orateur, III, LIX; translation: http://pages.pomona.edu/~cmc24747/sources/cic_web/de_or_3.htm).

In the fourth century, the Church Fathers, including Ambrose and St. Augustine, consider modesty, a virtue from Roman times that could be expressed through a moderated use of gestures, one of the four cardinal virtues. Consequently, the body (exterior, foris) and its manifestations (gestures and attitudes) could reflect by analogy the state of the soul (interior, intus). Thus, inappropriate gestures could degrade the soul, while using gesture modestly would preserve it. The Church’s complex relationship to the body (seen either as an embodiment of the spoken word, as a vector of original sin, or even as sacrificial in the Eucharist) leads toward the redemptive manipulation of the body. One way to redeem the body is the self-restraint of gestures. In fifth-century France, monasticism establishes a set of rules in order to discipline the body. Referring to the gestures made by actors in his De Doctrina Christiana, Augustine (II, XXV: 38) emphasizes the misunderstanding of people unfamiliar with theatre and highlights that gestures are culturally marked (see Augustin et al. 1997). He situates gesture as a kind of sign and presents it as a conventional one (versus a natural one), placing gestures in a sub-category labeled superflue et excessive. Going beyond this critique of pantomime, which is in any case marginal to his theory of signs (a theory that influenced scholarship throughout the Middle Ages), Augustine highlights that gestures and facial expressions also express the obviousness of spontaneous movements and are therefore natural: [all men aim at a certain degree of likeness in their choice of signs, that the signs may as far as possible be like the things they signify] (“Nos traits sont comme un miroir où viennent se refléter les divers mouvements de notre âme”, De Doctrina Christiana 2, 1, 2; English translation: http://www.abbaye-saint-benoit.ch/saints/augustin/doctrine/).
Augustine refuses to commit to a categorization of gestural signs as either natural or


conventional: [Now whether these signs, like the expression or the cry of a man in grief, follow the movement of the mind instinctively and apart from any purpose, or whether they are really used with the purpose of signification, is another question, and does not pertain to the matter in hand] (“Quant à décider si les signes expriment les mouvements de l’âme indépendamment de la volonté, comme l’aspect du visage et le cri de la douleur; ou si, en réalité, ils ne les expriment qu’en vertu d’une convention arbitraire, c’est une question étrangère à notre sujet, et que nous laissons de côté comme inutile.” De Doctrina Christiana 2, 2, 3; English translation: http://www9.georgetown.edu/faculty/jod/augustine/ddc2.html). This exclusion of gestures, and more generally of the body, holds sway through the Middle Ages and spills over somewhat into the Renaissance period (Schmitt 1990), a period characterized by the need to behave humbly. Following the Reformation and Counter-Reformation, Europe is divided in its understanding of gesture and its capacity for pictorial representation. From the perspective of the Catholics, the Council of Trent (1545–1563) encourages the idea that gesture achieves representation via images and thereby provides the body its unique capacity for representation (Lühr 2002). Instigated by the Jesuits, the Counter-Reformation in France provokes an evolution in the theories of oratory. The driving forces include works by Carlo Borromeo (Counter-Reformation) and Pierre de la Ramée (Anglicized as “Peter Ramus”; Reformation), two writers with high stakes in persuasion among Catholics and Protestants. While these two authors reject theatrical action in favor of oratory, Louis Carbone (1595) and Nicolas Caussin (1619) distinguish two kinds of actio and, in addition, retain the theater, which they consider a place of artifice and civility. Eloquence can be achieved through gesture equally for the preacher and for the honest man.
Jesuit educational precepts even stipulate one hour per week to be devoted to oratory for students. In 1620, Louis de Cresolles revises and illustrates Quintilian’s actio (Cressolles, Verdun, and Cramoisy 1620). He describes gestures in a book that becomes selectively but widely distributed, frequently offered as a reward to good students. The body and its gestural manifestation now go beyond the realm of preaching to include communication in the public sphere. As an artifact, gesture can imitate styles from antiquity (see Conte 2007), paving the way for Classicism. The body-based understanding of gesture in this pre-eighteenth-century period gradually evolves from the need for discipline and self-restraint to the need to understand how the body works in its own terms. This evolution means respecting the natural movement of the body, guaranteeing a balance between arrogance and debauchery. Moving towards the eighteenth century, the art of imitation, which is more or less that of theatre, no longer violates decency. Rather, gesturing is seen as motivated by the very nature of the human body.

3. Gesture studies in eighteenth century France
As an outlet for passion, the body in the eighteenth century is primarily perceived as a vehicle for expressing emotion, or even for a “language of action”. The approaches developed in the previous century garner support during the Enlightenment. In Critical Reflections on Poetry and Painting, l’Abbé Du Bos (1719) establishes an aesthetic of gesture that influences this entire period. What is beautiful must also be capable of arousing sensitivity in the audience, instilling the desired feelings: gesture holds its own discourse, one that is capable of moving onlookers.


The role of pantomime in ancient theatre attracts a great deal of attention in this period. This attention serves as a basis for the eponymous entry in the Encyclopédie of 1772 by the Chevalier de Jaucourt (Diderot and d’Alembert 1772). In a table of contents summarizing the categories of the Encyclopédie (called “the figurative system of human knowledge”), gesture appears under:

(i) the auspices of Reason,
(ii) the human sciences,
(iii) the rubric Logic,
(iv) grammar, and
(v) signs.

Gestures have the status of understanding; they compose signs and intermingle with written characters. They become a specific object of study and lead to insights that attract as much attention as those drawn from hieroglyphs. Gestures are defined in the Encyclopédie as the beginnings of infant communication, where they are allocated the primary function of expression. Because they are governed by the laws of nature, gestures are also embellishments, language’s natural ornamentation. Gesture is defined under two principal rubrics: Dance and Declamation. Dance, or saltation in the ancient meaning (i.e., the action of leaping), is approached from the perspective of the Roman and Greek theatres. Dance directly links sensitivity to gesture, [which always occurs at the moment where the soul feels the need] (“qui part toujours au moment où l’âme éprouve le sentiment”; Diderot and d’Alembert 1772: 651). As a natural and irrepressible phenomenon studied outside of the moral sciences, dance expresses the impulsion of the soul. Facial expression, on the other hand, is a spontaneous manifestation of feelings and thus cannot be reproduced by the art of imitation. Under the second rubric, Declamation (Diderot and d’Alembert 1772: 652), gesture becomes an expression of passion, one capable of moving the audience. Gesture instills into the souls of observers all the feelings that it is capable of depicting in all its beautiful disarray. Gesture indeed has the vocation of a sign because, beyond the passion that provokes it, it communicates to the souls of onlookers ineffable and authentic feelings in accordance with the arrangement of nature. With this understanding in place, a new paradigm opens up for the study of gesture for two centuries: as natural signs of human passion de jure, gestures are amenable to analysis de facto. Gestures are now viewed as relating more to the soul than to the body; more to the theatre than to the realm of discourse.
Appearing ineffable, gestures cannot be objectified. Having escaped the constraints placed on the body by the Church Fathers, gesture is granted the expression of the soul’s passions during the Enlightenment period. From this point on, for the following two centuries, the body is not analyzed in and of itself, and the study of gesture shifts from the natural sciences into the humanities. The paradox is striking: while being natural, gesture is nonetheless studied as a convention. Gesture suffers from having been part of pantomime and the art of oratory in antiquity, a glorious and elevated topic in the past, but now finished and decadent. In 1746, Condillac speaks of gesture in the following way: [We know neither the tone nor the gesture with which they were accompanied, both of which will have acted powerfully on the soul of the audience] (“Nous ne connaissons ni le ton ni le geste dont ils


étaient accompagnés, et qui devaient agir puissamment sur l’âme des auditeurs” [Condillac [1746] 2001: 119]). According to Condillac, gestures were a pre-adamic language of action. Gestures and facial expressions come directly from the natural bodily expression associated with a perception (for example, hunger provokes a communicable discomfort). As first an operation of understanding, perception does not require an arbitrary link between things and their bodily expression. As a preverbal category, the materiality of gestures would not be motivated by thought. This sensualist understanding seems to have persisted until today, in and beyond France. Gesture is inseparable from feeling. In expressing a feeling through a natural link, gesture can naturally arouse the same feeling in the soul of the interlocutor without passing through a process of reason. Following Condillac, gestures exhibit a gradual normalization. Already at this point, the beginnings emerge of something akin to Kendon’s (1988) continuum: from gesticulation, through emblems, to pantomime (a definition of gesticulation is offered in the Encyclopédie, by Edme François Mallet, as gestures that appear “affected” and “too frequent”). In McNeill (2005), gestures along the continuum acquire an increase in linguistic properties as well as an increase in gestural conventionality. This understanding is basically already in place in the eighteenth century, especially in the work of l’abbé de l’Épée, who united Deaf students in a school in Paris in 1760 and presented his educational method based on teaching sign language in 1774. L’abbé de l’Épée extends earlier efforts by Pedro Ponce de León in the sixteenth century and by Étienne de Fay in Amiens from 1718, but on a larger scale. In 1791, his school becomes the National Institute for Deaf-Mutes in Paris (l’Institut National des Sourds-Muets de Paris).
In this century, analyses emerge of the relationship between the “language of the Deaf”, thought, and the order and nature of words. These analyses were inaugurated by Diderot in 1751, pursued by l’abbé de l’Épée in 1774, then by Desloges, a Deaf factory worker, in 1779 in relation to education. Finally, more descriptive and lexicographical approaches to signed languages appear with l’abbé Ferrand in 1783, and a series of dictionaries appears regularly throughout the nineteenth century (Bonnal 2005).

4. Gesture studies in nineteenth century France
For gesture studies in France, the highly political nineteenth century is an experimental century, both for education in the case of signed language and for theatre with its numerous creations in pantomime. For signed language, legislation introduced in the previous century led institutions for the young deaf to multiply: in 1792, Deaf education becomes a concern of the state, and in 1796, the National Assembly creates six schools across the country. Teachers of signed language require descriptions of the language and manuals to help them teach it. In this century, 27 works propose inventories of signs, and in 1825 Bébian publishes a writing system for sign language (Bébian 1825). Teaching at these schools is led by Deaf teachers, several of whom remain icons in the French Deaf community today, such as Ferdinand Berthier and Laurent Clerc (who co-founded Gallaudet College in Washington). But in 1880, the Milan Convention imposes oralism upon deaf education for the next century, until 1991. This imposition in France is based on an alliance between Republicans and Catholics: on the one hand, Republicans in search of national repair through linguistic standardization after the defeat by Prussia (1870); on the other hand, Catholics advocating the primacy of speech as presented in the Bible.


Pantomimic gestural creation in the modern sense was born in France with Jean-Baptiste Deburau (1796–1846) in his interpretations of Pierrot, from 1819 onwards at the Théâtre des Funambules (Paris). After 1680, it was the Comédie-Française that enjoyed the monopoly of dance and speech performances in the Parisian region. This privilege encouraged the creation of “the theater of mimes” on the boulevard du Temple from 1759 onwards (Jean-Baptiste Nicolet’s Arlequin). Abolished in 1789, this royal privilege reappears in an imperial form between 1806 and 1864. Hundreds of silent dramatic representations about love, hate, and passionate crimes are created on ‘Crime Boulevard’ (boulevard du Temple) up until 1862. The techniques used in these mimodrames spread across Europe, through the circus with the clown Joseph Grimaldi in England, and also through silent movies. Charlie Chaplin is central to this tradition. The pantomime is taken up in more elaborate forms by Étienne Decroux (mobile life sculptures from 1932 onwards as well as abstract pantomime) and Marcel Marceau (pantomime de style), whose character “Bip” has represented an icon of pantomime the world over since 1947. The nineteenth century brought further educational experiments with gestures as well as unprecedented pantomimic creations. Once again, the focus of gesture swayed between theater and education. But because of stigmatization (the institutionalization of sign language in the teaching of deaf children; the privilege of spoken representation at some theatres), experiments with gesture weaken and eventually disappear, also by way of decree. Gestural descriptions for pantomimes in booklets, as for manuals of sign language, suffer on the one hand from the almost exclusive use of French and on the other hand from a descriptive terminology that is metaphorical (already pointed out by Diderot 1751). Graphic arts then become the only way of capturing gesture, but an objective method for transcription is lacking.
At the end of the nineteenth century, the relationship between medicine and the graphic arts constitutes a major advance in the physiological analysis and notation of the mechanisms of gestures and facial expressions. In 1862, Duchenne de Boulogne tested the electrical stimulation of facial expressions under experimental control (Duchenne 1862, 1990). Trying to identify the muscle mechanisms of the expression of emotions, he used photography to capture the facial expressions that his method elicited, allowing for comparison with pictorial representations in classic works of art. From 1883, studying the appearance of movement in animals and men, Étienne-Jules Marey develops chronophotography (Marey, Demery, and Pages 1883). This allows him to retrieve information about a body in motion. Improving his recording system, he incidentally created the first electric camera in history. Purchased by the Lumière brothers (les frères Lumière), this camera became the first technological contribution born from the study of human gesture.

5. Gesture studies in twentieth century France
Besides the teaching of mime throughout the 1930s, 40s, and even the 50s (for example by Étienne Decroux and Jacques Lecoq 1997), which instigated a renewed interest in the body in theater, Antonin Artaud founded the Théâtre de la Cruauté (‘Theater of Cruelty’), which stipulated that speech was not the vehicle of thought. For Artaud, there is no representation, only presentation (enactment), locating expression in the body, while speech is seen as only incantatory. This theater eventually made much intellectual noise but attracted very few performers. In 1934, Marcel Mauss poses the problem of the classification of variation in ethnology, in particular the unclassifiable character of what he calls, in his eponymous article, “body


techniques” (Mauss 1934). These techniques are part of a three-fold consideration of man: biological, sociological, and psychological. In one line of thought, these three disciplinary fields are ordered in relation to the body. The social prestige of the person or group applying a body technique precedes the act of imitation, within which psychological and biological factors are intertwined, all of which emerge as inseparable. Mauss defines a body technique as a traditional and efficient act that can be understood as a mechanical or physical act. The body is itself the first object and technical means, before being an instrument. Mauss reveals the inextricable entanglement of biology, physics (more precisely biomechanics), social psychology, and sociology, which has characterized French studies of gesture in a lasting way. First comes the theory of the externalization of human faculties by André Leroi-Gourhan in the early 1960s, a paleoanthropologist who shows how the main actions assigned to objects carved or shaped by the human lineage derive from functions of the body (Leroi-Gourhan 1964). The body therefore contains a substrate of functions capable of projecting features onto objects. This theory of the externalization of human faculties continues the tradition of interdisciplinary approaches to the body that was also characteristic of Merleau-Ponty in 1945, and already documents the relationship between mimetic action and perception, later to be found within the mirror neuron system (Rizzolatti and Arbib 1998). A rupture in approaches to gesture occurs in 1968 with the 10th issue of the journal Langages. The structuralism of the école sémiotique de Paris takes gestures out of the field of linguistics by defining the body and its descriptive categories as a priori semiotic: gestures are just a manifestation of something which is semiotic at another level (the body).
In the issue’s introductory article, Greimas announces that the body and its gestures are not in themselves an object of study for semiotics (Greimas 1968). It is the semiotic categories by which we apprehend gestures that interest this group. For this semiotic school, gesture is not language-like: the meaning of the world that a gesture conveys is inseparable from the phenomenological events through which it is expressed. Gestures exhibit too much iconicity and analogy to be linguistic; gestural forms are not dissociable from their materiality and not arbitrary enough to be part of language. Ontological structuralism triumphs in France during this period because of the complete desubstantialization of any material support it sought to examine and its singular quest for systems of differentiation. Ontological structuralism rejects gesture from the field of linguistics. In the tradition of nineteenth-century philosophical physicians (such as Philippe Pinel, Jean-Étienne-Dominique Esquirol and Jean-Martin Charcot), a key player in the reintroduction of gesture studies in France is Jacques Cosnier. Functionalist, interactionist, and multidisciplinary (with a background in medicine and ethnology), Cosnier propagates the thought of Anglo-Saxon authors and offers a reflection on the classification of gestures (Brossard and Cosnier 1984). Cosnier carries out empirical studies alternating with theoretical investigations. He was responsible for the concept of “échoïsation”, or mimetic synchrony, through which he explains the internalization and externalization of phenomena of affective alignment that facilitate the perception of others’ emotions (Cosnier 1996). Cosnier was able to bring his work to light and disseminate it by building on the gesture inventory established by Geneviève Calbris and Jacques Montredon (1986), as well as by analyzing the semiophysical principles governing co-verbal gestures (Calbris 1990, 2011).
For her part, Geneviève Calbris accompanied, if not in some cases preceded, the direction of gesture studies towards the realm of metaphor. Grounded in examples from


audiovisual corpora, Calbris’ studies reveal an extremely detailed reading of gestures and indicate roots within the image schema theory of Johnson (1987) and Lakoff (1987). Calbris continued to participate in the dissemination of a cultural typology of French gestures, identifying a common cultural basis that still serves as an important reference in the field of gesture studies (see Calbris 2011). The early 1990s also saw the development of studies in French Sign Language (LSF). These studies are mainly characterized by the inclusion of a language outside the laboratory, reclaimed by the Deaf community only ten or so years earlier, deeply scarred by a hundred years of prohibition (the Milan Convention) and transmitted mainly in specialized residential schools outside traditional educational settings. This language, now free from stigmatization, relies heavily on the productivity of signs. Approaches towards iconicity in French Sign Language unite gesture researchers in France, especially since the work of Christian Cuxac (1983, 1996, 2000). Cuxac defends the idea of iconicity as a structuring principle, if not a genetic factor, for sign language. Through the body and its articulatory means, the perceptual-practical experience of Deaf people undergoes anamorphosis. Besides this primary iconicity (“productive signs”), less iconic signs, or signs with a degenerated iconicity, may standardize the vocabulary of the language, without the productive signs ever disappearing (maintained by their generic function). The narrow sense of Saussurean arbitrariness of the sign that ontological structuralism advocated is questioned, reopening the integration of French Sign Language, despite its iconic principle, into the field of linguistics. The differential principle that is foundational to the linguistic system (absolute arbitrariness), and that Saussure opposed to the referential principle (Saussure 1916), does not mean that iconicity should be ignored.
Being visual and only rarely audible, the signature of the objects that surround us is essentially artifactual. Representation in a visual modality, as in signed languages, operates under this strong constraint of similarity. For spoken languages, in contrast, the mode of representation is sound, which differs from the mode of the objects represented. Because of this disconnection between sensory modalities, the similarity constraint leaves the acoustic and phonological system complete freedom to deploy forms independently of their referents, thus leading to the idea of radical arbitrariness. In both types of languages, there is a system of differences between signs. One system is arbitrary for reasons of sensory disjunction between referents and signs; the other is iconic because of sensory similarity between sign and referent. Arbitrariness is thus a factor external to language and does not constitute an a priori for language. This argument, outlined by Cuxac, is valid not only for signed languages but also for gesture. It shows that arbitrariness is neither part of a language requirement nor capable of accounting for the linguistic character of gestures.

6. Gesture studies in contemporary France

At the time of writing this article, the field of gesture studies is flourishing in France. The study of gesture has benefitted from an internationalisation of research and the incorporation of new frameworks and methods of analysis. Advances in technology for collecting, transcribing, and analyzing gesture have fuelled the desire in France to digitize, share, and mine various multimodal corpora electronically. Academic degree programs and training sessions regularly unite students of gesture across France. Gesture is studied in the natural and computer sciences, but also attracts attention in several


disciplines in the human and social sciences, including Linguistics (Sciences du Langage), English Linguistics (Linguistique anglaise), Anthropology, and French as a Foreign Language studies (FLE – Français Langue Étrangère). Building on research in the field of conversation analysis, one line of research that has developed in France is the application of multimodality in the domain of interaction studies. The analysis of linguistic and gestural or “embodied” resources proceeds hand-in-hand with sequential analysis of ongoing interactions, often associated with or framed by workplace practice (pioneered notably by Mondada within the ICAR research team in Lyon; e.g. Mondada 2011). Approaches to multimodal analysis of data from specific languages have widened their scope from focusing on manual gestures to systematically including segmental and supra-segmental properties in both the verbal and gestural modalities. Within a linguistics framework, there is an increasing focus on quantitative studies based on language data collected in experimental and naturally occurring conversational contexts (for example, work by Ferré 2011 and several Ph.D. dissertations focusing on multimodality, either complete or nearing completion, at universities around France). The appreciation of gesture in rhetoric characteristic of previous centuries has been revived in studies seeking to help train future language teachers. The pedagogical implications of an understanding of gesture have been the basis of several empirical studies in the field of French as a Foreign Language (Tellier 2008; Tellier and Stam 2012) and also of an embodied approach to gesture in the area of applied English linguistics (Lapaire 2006). As far as specific domains are concerned, language acquisition from a multimodal perspective has been a particular strength of the French research profile.
Numerous large-scale projects, publicly funded by the French National Research Agency (Agence Nationale de la Recherche – ANR), have significantly propelled this line of research. Key figures in this domain are located across France, such as Jean-Marc Colletta (Grenoble), Michèle Guidetti (Toulouse), and Maya Hickmann and Aliyah Morgenstern (Paris). Researchers have also adopted multimodal approaches in the fields of phonetics and phonology, with work on spoken French published by scholars such as Ferré and Morel.

7. Conclusion

By documenting approaches towards gesture in France from before the eighteenth century up to contemporary France, we hope to have demonstrated the various ways that gestures in France have been understood, used, studied, and even sometimes abandoned. While comparisons of gestural practices across communities may reveal similarities and differences in the culturally marked and structural aspects of gestures, our approach provides an insight into the origins and development of what is currently known about gestures and other forms of bodily communication in France.

8. References

Aldrete, Gregory S. 1999. Gestures and Acclamations in Ancient Rome. Baltimore: The Johns Hopkins University Press.
Augustin, Madeleine Moreau, Martine Dulaey, Goulven Madec and Isabelle Bochet 1997. Oeuvres de saint Augustin. 11/2, La Doctrine Chrétienne (De doctrina Christiana). Paris: Institut d’études augustiniennes.
Bébian, Auguste 1825. Mimographie, ou Essai d’écriture mimique, propre à régulariser le langage des sourds-muets. Paris: Louis Colas.


Bonnal, Françoise 2005. Sémiogenèse de la langue des signes française: Étude critique des signes de la langue des signes française attestés sur support papier depuis le XVIIIe siècle et nouvelles perspectives de dictionnaires (reproduction de). Université de Toulouse-Le Mirail, Lille.
Brossard, Alain and Jacques Cosnier 1992. La Communication Non-Verbale. Neuchâtel: Delachaux et Niestlé.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington/Indianapolis: Indiana University Press.
Calbris, Geneviève 2011. Elements of Meaning in Gestures. Amsterdam/Philadelphia: John Benjamins.
Calbris, Geneviève and Jacques Montredon 1986. Des Gestes et des Mots pour le Dire. Paris: Clé international.
Carbone, Lodovico 1595. Divinus Orator vel de Rhetorica diuina libri septem. Venise: apud societatem Minimam.
Caussin, Nicolas 1619. Eloquentiae Sacrae et Humanae Parallela. Paris: Chappelet.
Cicero, Marcus Tullius 2011. De Oratore, Volume 1–1. Edited by David Mankin. Cambridge: Cambridge University Press.
Condillac, Étienne Bonnot de 2001. Essay on the Origin of Human Knowledge. Cambridge: Cambridge University Press. First published [1746].
Conte, Sophie 2007. Louis de Cressolles: Le savoir au service de l’action oratoire. XVIIe Siècle 4(237): 653–667.
Cosnier, Jacques 1996. Les gestes du dialogue, la communication non verbale. Psychologie de la Motivation 21: 129–138.
Cressolles, Louis, Nicholas de Verdun and Sébastien Cramoisy 1620. Theatrum Veterum Rhetorum, Oratorum, Declamatorum, quos in Graecia Nominabant Sophistai, Expositum Libris Quinque. In Quibus Omnis Eorum Disciplina, & Dicendi ac Docendi ratio, moresque Produntur, vitia Damnantur, & magni Utriusque linguae Illustrantur & Emaculantur Scriptores. Auctore Ludovico Cresollio Armorico è Societate Jesu, Volume 1–1. Paris: Sumptibus Sebastiani Cramoisy, via Jacobaea, sub Ciconiis. M. DC. XX.
Cuxac, Christian 1983. Le Langage des Sourds. Paris: Payot.
Cuxac, Christian 1996. Fonctions et structures de l’iconicité dans les langues des signes; analyse descriptive d’un idiolecte parisien de la Langue des Signes Française. PhD dissertation, Université René Descartes, Paris V.
Cuxac, Christian 2000. La Langue des Signes Française (LSF): Les Voies de l’Iconicité. Paris/Gap: Ophrys.
Decroux, Etienne and Patrick Pezin 2003. Étienne Decroux, Mime Corporel: Textes, Études et Témoignages. Saint-Jean-de-Védas: l’Entretemps.
Diderot, Denis and Jean le Rond d’Alembert 1772. L’Encyclopédie de Diderot et d’Alembert: Ou Dictionnaire Raisonné des Sciences, des Arts et des Métiers. Paris: Briasson, David, Le Breton and Durand.
Duchenne, Guillaume Benjamin 1862. Mécanisme de la Physionomie Humaine, ou Analyse Électro-Physiologique de l’Expression des Passions. Paris: J. Renouard libraire.
Duchenne, Guillaume Benjamin 1990. The Mechanism of Human Facial Expression. Cambridge: Cambridge University Press.
Du Bos, Jean-Baptiste 1719. Critical Reflections on Poetry, Painting and Music. Thomas Nugent (trans.). London: John Nourse.
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton de Gruyter. First published [1941].
Ferré, Gaëlle 2011. Functions of three open-palm hand gestures. Multimodal Communication 1(1): 5–20.
Graf, Fritz 1991. Gestures and Conventions: The Gestures of Roman Actors and Orators. Cambridge: Polity Press.
Greimas, Algirdas J. 1968. Conditions d’une sémiotique du monde naturel. Langages 3(10): 3–35.
Johnson, Mark 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. Chicago: University of Chicago Press.


Kendon, Adam 1988. How gestures can become like words. In: Fernando Poyatos (ed.), Cross-Cultural Perspectives in Nonverbal Communication, 131–141. Toronto/Lewiston, NY: Hogrefe and Huber Publishers.
Lakoff, George 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press.
Lapaire, Jean-Rémi 2006. La Grammaire Anglaise en Mouvement. Paris: Hachette Education.
Lecoq, Jacques, Jean-Gabriel Carasso and Jean-Claude Lallias 1997. Le Corps poétique: un Enseignement de la Création Théâtrale, Volume 1–1. Arles: Actes Sud.
Lühr, Berit 2002. The language of gestures in some of El Greco’s altarpieces. Ph.D. dissertation, Department of History of Art, University of Warwick.
Marey, Étienne-Jules, Georges Demery and Pages 1883. Etudes Photographiques sur la Locomotion de l’Homme et des Animaux. Paris: Gauthier-Villars.
Mauss, Marcel 1934. Les Techniques du Corps. Jean-Marie Tremblay. http://classiques.uqac.ca/classiques/mauss_marcel/socio_et_anthropo/6_Techniques_corps/Techniques_corps.html.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
Mondada, Lorenza 2011. Understanding as an embodied, situated and sequential achievement in interaction. Journal of Pragmatics 43(2): 542–552.
Quintilian, Marcus Fabius and Jean Cousin 2003. Institution Oratoire. Tome VI, Livres X et XI. Paris: Les Belles Lettres.
Rizzolatti, Giacomo and Michael A. Arbib 1998. Language within our grasp. Trends in Neurosciences 21(5): 188–194.
Saussure, Ferdinand de 1916. Cours de Linguistique Générale. Edited by Charles Bally and Albert Sechehaye, with the collaboration of Albert Riedlinger. Lausanne: Payot.
Schmitt, Jean-Claude 1990. La Raison des Gestes dans l’Occident Médiéval. Paris: Gallimard.
Tellier, Marion 2008. The effect of gestures on second language memorisation by young children. Gesture 8(2): 219–235.
Tellier, Marion and Gale Stam 2012. Stratégies verbales et gestuelles dans l’explication lexicale d’un verbe d’action. In: Véronique Rivière (ed.), Spécificités et diversité des interactions didactiques, 357–374. Paris: Riveneuve éditions.

Dominique Boutet, Paris (France)
Simon Harrison, Ningbo (China)

89. Gestures in Northern Europe: Children’s gestures in Sweden

1. Research on Swedish children’s gestures
2. General developmental patterns
3. Conventionality in children’s gestures
4. Conclusions
5. References

Abstract

What are Swedish children’s gestures like? How do they change over time in development? How do they relate to spoken language? Is there anything particularly Swedish about them?

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1282–1289


These questions are all discussed in this paper. The main message is that the overarching patterns of gestural development in Swedish children are similar to what has been found in studies of children in other cultures, but that conventionality (in the sense of culturally established forms of conduct) is a pervasive and somewhat underestimated aspect of children’s gestures. Finally, a distinction between different levels of conventionality is presented (typified and normative conventionality) that makes possible a more nuanced discussion of cultural aspects of gesture.

1. Research on Swedish children’s gestures

The question of what makes a gesture “Swedish” can be understood in at least two different ways. On the one hand, it may refer broadly to any qualities of gestures produced by Swedes, whether they are similar to or different from what is found in other cultural communities. On the other hand, it might refer more narrowly to qualities of gestures of Swedish persons that are specifically Swedish in character, different from what is found in at least some other cultures. Both the broad and the narrow conception of the question are discussed in this text.

2. General developmental patterns

In a study of early communicative development in 228 Swedish children, Eriksson and Berglund (1999) found that at least 90% of the children around 8–12 months produced deictic gestures such as point, show, and give. Also common are gestures grounded in routine forms of interaction, like bye-bye, peekaboo, and pick-me-up (raising the hands to be picked up), and action schemes associated with the conventionalized use of particular types of objects (drinking from a container with liquid or putting a telephone to the ear). These action schemes are not yet typically performed in the gestural sense of acting as if drinking, but rather in a more “literal” sense, as precursors to the later-developing enactive gestural variants of these actions. Up to at least 3 years, enactive gestures are mainly either directed towards the present surroundings (pretending to hit a doll present in the room), or they involve actual handling of a physical object (pretending to drink from a toy cup), or both (holding an empty spoon towards someone’s mouth to “feed” them) (Andrén 2010). Enactive iconic gestures produced in a purely detached “gesture space”, without connection to the material environment, do not appear in substantial frequency until later.

Deictic gestures are the most frequently performed gestures in children around 12 months. This continues to be the case for several years, and there seems to be a peak around 21 months (Andrén 2010). At this age there is also a peak in the number of utterances that involve both gesture and speech (a kind of “two-unit” utterance across modalities), and most children have recently started to produce two-word utterances in speech. Speech-only utterances are slightly less common than gesture+speech utterances, and gesture-only utterances are rare in comparison.
Around 24 months, when children are not only able to produce some two-word utterances but when the mean length of utterance (MLU) reaches two words, a reorganization takes place. Following this milestone there is a sudden decrease in gesture+speech utterances, which become less common than speech-only utterances; this is also visible as a sudden drop in overall gesture frequency.


Curiously, a pattern almost identical to the 24-month pattern is repeated when the mean length of utterance reaches three words, around 29 months. Directly after 29 months there is, again, a sudden decrease in the number of gesture+speech utterances relative to speech-only utterances, as well as a corresponding decrease in overall gesture frequency. This repeated pattern underscores the strong relation between changes in the use of gesture and milestones in the development of spoken language.

Are the developmental patterns described in this section unique to Swedish children? The answer is no. They are all very similar to, or fit well with, what has been reported in research on children in a range of other countries and cultures. Since the bulk of child gesture research has been carried out in a Western context, it is justified to ask whether a Western bias may be involved in this conclusion. Whole continents like Africa and South America are still almost absent from the literature, and we do not know for sure whether the patterns are indeed universal, but it seems reasonable to work under the hypothesis that they are, especially since the few studies on children from other parts of the world, e.g. Japan (Blake et al. 2005) or Thailand (Zlatev and Andrén 2009), confirm rather than challenge these basic patterns. But time will tell.

3. Conventionality in children’s gestures

In the previous section I argued that several overarching patterns of gestural development are similar across many, or possibly all, cultures. This may seem to imply that conventionality (culturally established forms of conduct) is not important in children’s gestures. That is not true! Conventionality is a pervasive feature of children’s gestures. Guidetti and Nicoladis (2008: 109) come to similar conclusions:

If our reasoning is correct, then infants may use primarily conventional gestures, as well as gestures that they have learned by acting in the world (such as ‘pick-me-up’). There is a curious lack of […] spontaneous, non-conventional gestures that seem to be created on the spot to convey meaning […]. (Guidetti and Nicoladis 2008: 109)

The same kind of reasoning holds for the development of emotional expressions in Swedish children (Gerholm 2007): on the one hand, the overall structure of this development is similar to that in other cultures; on the other hand, there is a pervasive social and conventionalized component in this development. I will now discuss how conventionality may enter into different kinds of gestures, both in general and with reference to what is found in Swedish children.

3.1. Emblems

Emblems are the most obviously conventionalized kind of gestures, as they are conventionalized by definition. They have conventionalized forms paired with conventionalized meanings, shared within a cultural community, and are in some respects similar to words in spoken language. In addition to constituting recognizable types of gestures that are commonly known in a cultural community (typified conventionality, Andrén 2010), they also have certain criteria of correctness (normative conventionality, Andrén 2010), a stronger form of conventionalization than typified conventionality alone. Performing an emblem like the thumbs-up gesture with some other finger extended than the thumb will not be perceived merely as atypical, as is the case with gestures which are only conventionalized


on the level of typified conventionality, but as incorrect, because the normative constraints are violated.

One emblem already mentioned is the head-shake for refusal, disagreement, or negation. Eriksson and Berglund (1999) found the head-shake already at 8 months in 7% of the children, with a gradual increase up to 16 months, at which point 83% of the children produced this gesture. This gesture is neither universal nor specific to Sweden, but is found in many cultures, including geographically distant places like Thailand. Head-shake and nodding (for affirmation) are the second and third most common gestures found in children both in Sweden and Thailand, surpassed in frequency only by pointing (Zlatev and Andrén 2009). Several other emblems are found in children in both of these cultures. Some emblems are indeed very widespread in the world. Another emblem, the wai gesture, exists only in the Thai children (Zlatev and Andrén 2009). This is a respectful greeting performed first, and sometimes only, by the person of lower status in a communicative encounter, and it is not used in Sweden. Other emblems found in Swedish children, such as bye-bye, hello (raising a hand to greet someone), and gone (hands held out laterally to the side with the palms up), are also found in several other cultures. There are no documented emblems commonly used by Swedish children that are known to be used only in Sweden.

Kendon (2004) distinguished between referential gestures, which contribute directly to the propositional content of what is “said”, and pragmatic gestures, which are more conventionalized and mark what kind of utterance something is. Children produce a larger proportion of pragmatic gestures, relative to referential gestures, as they grow older. In Italian children retelling the plot of Pingu cartoons, Graziano (2009) found 19% pragmatic gestures at 4 years, 31% at 6 years, and 48% at 9 years.
In Swedish children, also retelling Pingu cartoon plots, only 35% pragmatic gestures are found at 10 years (Forssell and Mustaniemi 2012). Swedish children thus seem to use fewer of the conventionalized pragmatic gestures than Italian children, although several of the actual gestures involved are found in both cultures.

3.2. Iconic gestures and conventionality

In iconic gestures, there is some form of resemblance between the gestural form and the meaning invoked by the gesture. The predominant kind of iconic gestures found in children are enactive gestures, performed “as if” some action were carried out, typically to signify the type of action itself or to signify the object involved in the action. Other kinds of iconic gestures, found in adults, are rare in children and will not be discussed here. There are currently no reasons to believe that there are cultures where children do not produce enactive gestures. There is indeed a seeming “naturalness” and “universality” to these gestures, which in principle allows anyone to spontaneously invent gestures that illustrate some recognizable type of action, even when there is no pre-existing conventionalized gesture for it. It is nevertheless important to realize that enactive gestures often depend on familiarity with conventionalized uses of objects (Andrén 2010; Rodríguez and Moro 2008): a spoon is for eating, a telephone for making telephone calls, a comb for combing the hair, and so forth. Such uses of objects are often taken for granted as somehow evident and transparent in meaning, but on second thought, they are obviously culturally dependent. What if someone produced an empty-handed gesture of turning the steering wheel of a car, as Tea (26 months) does in Fig. 89.1, in a culture where


Fig. 89.1: Tea (26 months) acts “as if” turning the steering wheel of a car

nobody has ever seen or heard of a car? The seeming transparency and naturalness of the gesture immediately vanish.

Enactive gestures are not conventionalized in the same sense as emblems. Whereas emblems are conventionalized as gestures, enactive gestures are typically not, since it is rather the action signified by the enactive gesture that forms a typified convention. One should nonetheless pay close attention to the fact that there is, in enactive gestures, a direct overlap between the expressive mode (bodily action) and what is signified (bodily action). In this sense, one could say that enactive gestures draw directly on knowledge of conventionalized bodily performances. Tea simply wouldn’t produce the gesture in Fig. 89.1 if she weren’t familiar with the cultural practice of handling the steering wheel of a car. In contrast to emblems, enactive gestures are mainly a matter of typified conventions and less of normative conventionality: a steering wheel can be handled in somewhat different ways without necessarily being “wrong”. There is also another sense in which children do not necessarily spontaneously “invent” the iconic gestures they perform. Many enactive gestures come about as a result of imitating enactive gestures produced by parents, rather than imitating the actions signified by those gestures. In these ways, Swedish children’s iconic gestures deeply reflect the kinds of actions, enactive gestures, artifacts, and toys they encounter in a Swedish context. They are not so much a matter of spontaneous invention on the part of the children as a convention-based affair.

3.3. Deictic gestures and conventionality

Are children’s deictic gestures like point and show also conventionalized? There are no known cultures where these gestures are not used at all, which indicates that they may be universal. But there is also evidence that the prototypical index finger hand shape is not necessarily invented by children themselves. Blind children, who cannot see other people’s pointing gestures directly, do not seem to make spontaneous use of the index-finger pointing hand shape in their pointing gestures (Iverson 1998). Junefelt (1987) found the same thing, and also noted that this hand shape, like several other conventionalized gestural shapes, was explicitly taught by the parent. Furthermore, the use of “the same” gesture, such as index finger pointing, across cultures does not necessarily mean that the gesture is used in precisely the same ways. Zlatev and Andrén (2009) found that Thai children do make use of the index finger pointing hand shape, but less often than Swedish children, and that Thai children tended to employ other forms of pointing especially often when referring to people (it is considered rude to point to people in Thailand, more so than in Sweden). The conventionality involved in index finger pointing is, in most cultures, mainly a matter of typified conventions, because a lot of variation in form is typically allowed without apparent violations of normative constraints. Another indication of their status as typified conventions is the fact that ‘point’ (peka) and ‘show’ (visa) exist as verbs in most (or all?) spoken languages, and children learn early on to respond to utterances such as “point to the cookie you want!” by performing, precisely, a pointing gesture. There are also cultures where normative constraints apply to index finger pointing (Wilkins 2003), but not so in Sweden.

Iconic gestures, and some of the emblems, have some similarities with open class words (like nouns or verbs) in the sense that they constitute an essentially open-ended list of possible gestures that are not used by every individual. Deictic gestures (like point and show) and some very frequent emblems (like nodding and head-shake) are more reminiscent of closed class words (like pronouns).
The closed class gestures, if one can call them that, are concerned with essential properties of language, such as reference (deictic gestures), negation (head-shake), and affirmation (nodding), rather than with much more restricted situations (like indicating that someone is silly with an emblematic gesture), and they are the kind of recurrent gestures that one finds in every individual in a certain culture, such as Sweden (Eriksson and Berglund 1999). The closed class gestures resemble closed class words in the sense that they belong to the most central and most strongly conventionalized parts of the communicative system. Several of them are part of Sign Language (Bergman 2012). Closed class gestures are not only the most frequently produced gestures in children between 18 and 30 months, but also the ones most often coordinated with speech, whereas iconic gestures are less often coordinated with speech (Andrén 2010). This stands in opposition to the common idea that conventionalized gestures are somehow more independent of spoken language: just because emblems are typically comprehensible even in the absence of speech (they are sometimes called “autonomous gestures”), this does not mean that they are normally produced without speech. The closed class gestures are rather the ones that are most strongly tied to, and most similar to, spoken language in young children, in several respects (Andrén in press).

4. Conclusions

There are several general, possibly universal, patterns in children’s gestural development, but this universality in no way implies that the actual gestures themselves are not conventionalized. In fact, the vast majority of children’s gestural expressions are conventionalized in some way, rather than being spontaneous inventions (see Guidetti and Nicoladis 2008).


I have argued that the emblems and deictic gestures used by Swedish children are conventionalized, but not in the sense of being uniquely Swedish. They are also found in many other cultures. However, even the use of “the same” gesture across two or more cultures often turns out to be slightly different when it is scrutinized more closely how exactly it is used, perhaps in relation to particularities in the structure of the language spoken or to conventions of appropriate conduct. More comparative studies would be needed to shed more substantial light on the ways in which Swedish children’s use of gestures may in fact be specifically Swedish. As for children’s enactive iconic gestures, they can be considered universal when conceived of as a general mode of gestural representation, but at the same time specific gestures often depend deeply on familiarity with the conventional usage of various kinds of objects. Hence, enactive gestures too reflect conventionalized bodily practices stemming from the cultural community in which they are learned. The distinction between typified and normative conventionality (Andrén 2010) allows one to see that just because a gesture does not qualify as an emblem (normative conventionality), this does not mean that it has to be non-conventional: there is also the possibility of typified conventions, as in the case of many of children’s gestures. One should also note that conventionality (symbolicity) does not stand in opposition to other semiotic grounds for meaning, such as indexicality (in deictic gestures) and iconicity. A given gesture often rests on several semiotic grounds at the same time, and conventionality is frequently one of them, especially in children.

5. References

Andrén, Mats 2010. Gestures from 18 to 30 months. Ph.D. thesis, Centre for Languages and Literature, Lund University.
Andrén, Mats in press. Multimodal constructions in children: Is the headshake part of language? Gesture.
Bergman, Brita 2012. Barns Tidiga Teckenspråksutveckling, Volume XXII of Forskning om Teckenspråk [Early Sign Language Development in Children, Volume XXII of Research on Sign Language]. Stockholm University: Department of Linguistics.
Blake, Joanna, Grace Vitale, Patricia Osborne and Esther Olshansky 2005. A cross-cultural comparison of communicative gestures in human infants during the transition to language. Gesture 5(1–2): 201–217.
Eriksson, Mårten and Eva Berglund 1999. Swedish early communicative development inventories: Words and gestures. First Language 19(55): 55–90.
Forssell, Boel and Kirsi Mustaniemi 2012. Gesters funktion i narrativer: En jämförelse mellan barn med språkstörning och barn med typisk språkutveckling [The Function of Gestures in Narration: A comparison between children with language impairment and children with typical language development]. Master’s thesis, Department of Clinical Sciences, Lund University.
Gerholm, Tove 2007. Socialization of verbal and nonverbal emotive expressions in young children. Ph.D. thesis, Department of Linguistics, Stockholm University.
Graziano, Maria 2009. Rapporto fra lo sviluppo della competenza verbale e gestuale nella costruzione di un testo narrativo in bambini dai 4 ai 10 anni. Unpublished Ph.D. thesis, SESA, Università degli studi “Suor Orsola Benincasa”, Napoli and Université Stendhal, Grenoble.
Guidetti, Michèle and Elena Nicoladis 2008. Introduction to special issue: Gestures and communicative development. First Language 28(2): 107–115.
Iverson, Jana M. 1998. Gesture when there is no visual model. In: Jana M. Iverson and Susan Goldin-Meadow (eds.), The Nature and Functions of Gesture in Children’s Communication, 89–100. San Francisco: Jossey-Bass Publishers.

Junefelt, Karin 1987. Blindness and child-adjusted communication. Ph.D. thesis, Department of Nordic Languages, Stockholm University.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Rodríguez, Cintia and Christiane Moro 2008. Coming to agreement: Object use by infants and adults. In: Jordan Zlatev, Timothy Racine, Chris Sinha and Esa Itkonen (eds.), The Shared Mind: Perspectives on Intersubjectivity, 89–114. Amsterdam: John Benjamins.
Wilkins, David 2003. Why pointing with the index finger is not a universal (in sociocultural and semiotic terms). In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 171–215. Mahwah, NJ: Lawrence Erlbaum Associates.
Zlatev, Jordan and Mats Andrén 2009. Stages and transitions in children's semiotic development. In: Jordan Zlatev, Mats Andrén, Marlene Johansson Falck and Carita Lundmark (eds.), Studies in Language and Cognition, 380–401. Newcastle: Cambridge Scholars.

Mats Andrén, Lund (Sweden)

90. Gestures in Northeast Europe: Russia, Poland, Croatia, the Czech Republic, and Slovakia

1. Russia
2. Poland
3. Croatia
4. The Czech Republic and Slovakia
5. References

Abstract

Studies on body, language, and communication in the Slavic countries (Russia, Poland, Croatia, the Czech Republic, and Slovakia) have been conducted along several scientific lines. One set of problems focuses on the human body, the other on somatic objects as "parts" of and "partners" to natural language and oral communication.

1. Russia

1.1. Moscow

In Moscow, a research group was founded at the seminar on nonverbal semiotics at the Russian State University for the Humanities in 1991. The group started pioneering investigations in the interdisciplinary field that addresses the relation of the human body to the Russian language, Russian body language, and the oral communication typical of Russian native speakers. The group is headed by Prof. Grigory Kreydlin from the Institute of Linguistics (Russian State University for the Humanities) and includes (among others) several young scholars: Peter Arkadyev, Anna Kadykova, Alexander Letuchy, and Svetlana Pereverzeva (all from the same Institute). Since it began, the group has launched several scientific projects, two of which are now completed.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1289–1299

The first project is The Dictionary of Russian Gestures (see Grigorjeva, Grigoryev, and Kreydlin 2001). It is a specialized nonverbal dictionary whose lexicon comprises various Russian body language signs, i.e., gestures proper, postures, facial expressions, meaningful glances, meaningful body movements, and complex verbal-nonverbal forms (manners). Its primary goal is to present a complete, rigorous, and consistent description of the main emblematic gestures in Russian body language. (Emblematic gestures, or emblems, in the broad meaning of the word, are body signs of a particular semiotic type: emblems have autonomous and distinct lexical meanings, and they are capable of codifying and communicating their meanings irrespective of verbal context.) The lexicon of The Dictionary of Russian Gestures comprises emblems belonging to many semantic types. The lexicographic information in The Dictionary of Russian Gestures includes, inter alia, the physical representation of an entry gesture unit, its typical Russian name, and the syntactic properties of the gesture. Lexical entries also present semantic definitions of the gesture, its stylistic specification, etymology, examples of usage, etc. All in all, the vocabulary consists of about 120 emblematic gestures (about 60 dictionary entries), which can occur in different styles and modes of speech. The information within the dictionary entries is distributed into 17 areas, or domains; among them are several domains that contain verbal information associated with the emblems. In constructing The Dictionary of Russian Gestures, the authors aimed to present all the information about body language and natural language units in one readable and formally specified metalanguage, or language of description.
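The entry structure just described can be pictured as a simple record type. The following is a minimal illustrative sketch only: the field names and the example entry are invented for this illustration and do not reproduce the dictionary's actual 17 domains or its metalanguage.

```python
from dataclasses import dataclass, field

@dataclass
class GestureEntry:
    """Hypothetical sketch of one entry in an emblem dictionary."""
    name: str                       # typical Russian name of the gesture
    physical_form: str              # how the gesture is articulated
    syntactic_properties: str       # e.g. whether it can replace a verbal answer
    semantic_definition: str        # the emblem's autonomous lexical meaning
    stylistic_specification: str = ""
    etymology: str = ""
    usage_examples: list = field(default_factory=list)

# An invented example entry, for illustration only.
shrug = GestureEntry(
    name="pozhatj plechami ('to shrug')",
    physical_form="both shoulders briefly raised and lowered",
    syntactic_properties="may accompany or replace a verbal answer",
    semantic_definition="'I don't know'; indifference",
    usage_examples=["Asked where the keys were, he just shrugged."],
)
```

A record of this kind makes it easy to see why a uniform metalanguage matters: every emblem, whatever its semantic type, is described in the same fixed set of domains.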
The Dictionary of Russian Gestures is the first and preliminary step towards creating two formal semiotic models – a model for the comparative analysis of verbal and nonverbal semiotic systems and a model of their interaction in communication. The cognitive and conceptual analyses of both human body signs and their behavior have demanded studies of the phenomena of body, body parts, and corporeality. The core of these studies is the description of the semiotic conceptualization of the human body (Arkadyev, Kreydlin, and Letuchiy 2008a; Kreydlin 2007; Pereverzeva 2009). The notion of the semiotic conceptualization of the human body as it exists in Russian culture and communication reflects the ideas and notions of ordinary, or unsophisticated, Russian speakers concerning the body and other somatic objects. It is a formal model for the representation of the so-called "naïve semiotic picture of the body and body parts" and is a useful generalization of the older notion known by the name linguistic conceptualization. Semiotic conceptualization demonstrates how the body and its parts are represented in the human mind and how they are codified in natural language and/or body language signs. If one wants to learn how the human body is reflected in these two codes, one must have a common basis for their joint and comparative description. The second project of the Moscow group, "Body parts in the Russian language and culture", is aimed at the construction of this common basis. The researchers aim to perform the following general tasks: (i) to construct the Russian semiotic conceptualization of the body; (ii) to analyze the phenomenon of corporeality and compare the expressive possibilities that the Russian semiotic codes possess; and (iii) to provide common grounds for future cross-linguistic and cross-cultural analyses of semiotic conceptualizations of the human body.

The resources of semiotic conceptualization provide a solution to some problems of speech-and-gesture interaction. The main results of the project are:
(i) the description of different properties of somatic objects – those that are reflected both in verbal and nonverbal signs, sequences of signs, or composite speech-and-gesture utterances;
(ii) the exploration of the linguistic names, or nominations, of somatic objects, basically of their semantics and pragmatics, as well as of idiomatic expressions involving them;
(iii) the classification and explanation of certain peculiarities of the corporeal behavior of interlocutors in dialogues;
(iv) the characterization of general mechanisms and the statement of some rules that regulate the behavior of Russian speakers in different types of oral discourse; and
(v) the description of certain regularities in the co-functioning of units that belong either to the Russian language or to Russian body language in different types of communicative acts.
In contrast to a traditional lexicographic approach, in this new approach researchers describe the human body and corporeality in terms of several sets or classes. These are (among others): (i) the set of features of the somatic objects explored, together with the values of these features; (ii) the class of linguistic features, including the properties of the names of the somatic objects and the collocations of those names; and (iii) the set of gestures performed with these somatic objects. While working on the project, the participants describe both verbal and nonverbal expressions of different features as well as many relevant properties of the possessors of somatic objects (gender, age, race, state of health, etc.). Here are some examples of features of somatic objects investigated in the project. "Mereology", or "meronymy", reflects the part–whole relations between the body (or a given body part) and other body parts, e.g., arm – body, finger – hand, nail – finger, etc.
The feature "actions" is divided into two sub-features: "actions performed by a somatic object" and "actions performed on a somatic object"; see the Russian gesture ruki vverh! ('hands up!'), which refers to an action performed by the hands, and the linguistic expression polozhitj ruki na plechi ('to put one's hands on someone's shoulders'), which contains two words referring to the somatic objects "hands" and "shoulders". The expression conveys the idea of an action performed with the hands on someone's shoulders. Knowledge of the topography and locations of body parts helps one understand the meaning of many natural-language phraseological units. In Russian culture, the idea that not speaking is strongly connected with holding the tongue behind one's teeth is encapsulated in the phraseological expression derzhat' jazyk za zubami ('to hold one's tongue behind one's teeth'). The position of the tongue codified in it leads to complete silence, and that accounts for the existence of another, derivative meaning of the expression – 'not to say something you shouldn't say'. The research agenda consists of the following blocks:


(i) classification of somatic objects according to their various properties. The human body consists of body parts (head, stomach, back, etc.), parts of these parts (face, navel), organs (heart, eyes), liquids (blood, tears), holes (mouth, nostrils), bones, etc. (Kreydlin and Pereverzeva 2009);
(ii) analysis of the conceptualization of somatic objects, such as head, breast, fist, legs, nostrils, fingers, shoulders, etc.;
(iii) typology of the features and properties of somatic objects and their values (Arkadyev, Kreydlin, and Letuchiy 2008b); e.g., three big classes of features are distinguished: structural, physical, and functional;
(iv) description of the distribution and consideration of some interesting peculiarities of the co-occurrence of words and gestures as they manifest themselves in the genre of the academic lecture; and
(v) construction of an electronic database system that summarizes the results of the explorations.
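The electronic database of item (v) can be pictured as a two-way mapping between somatic objects and feature–value pairs, queried in either direction. The following is a minimal hypothetical sketch; the object names, features, and lookup functions are invented for illustration and do not reproduce the actual system's schema.

```python
# Hypothetical feature database: somatic object -> {feature: value}.
FEATURES = {
    "hand":  {"class": "body part", "form": "flat", "function": "grasping"},
    "heart": {"class": "organ", "form": "rounded"},
    "blood": {"class": "liquid"},
}

def features_of(obj):
    """Which features (and values) are recorded for a given somatic object?"""
    return FEATURES.get(obj, {})

def objects_with(feature, value):
    """Which somatic objects carry a given feature value?"""
    return sorted(o for o, fs in FEATURES.items() if fs.get(feature) == value)

# e.g. objects_with("class", "organ") -> ["heart"]
```

Even this toy version shows the two directions of lookup that such a database must support: from object to features, and from a feature (or combination of features) back to the objects that carry it.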

The database system provides, inter alia, answers to the following questions: (i) What features are relevant for a given somatic object? (ii) What somatic objects are characterized by some feature or combination of features? (iii) What is the set of characteristics that pertain to a somatic object? In addition, the database system presents the typical Russian verbal expressions of the relevant features and their values.

Another type of research is being conducted by Natalia Sukhova from the Russian State University (Moscow). She is exploring some cognitive and functional aspects of the interaction between the prosodic and kinetic characteristics of an oral dialogue, considering two major research questions: (i) What are the general mechanisms that govern the processes of speech-and-gesture production? (ii) What are the basic working instruments that can provide information about these mechanisms, and how can they be discovered while analyzing gesture-and-speech interaction in a communicative act? The research material consists of two documentaries produced by British film companies, "Diana. Princess of Wales – Her Life" (1997) and "Churchill – A Video Biography" (1997) (the overall length is 5 hours 48 minutes, comprising 363 fragments). For the purposes of the research, Sukhova has chosen 60 audio and visual fragments (32 minutes in all). The people in the documentaries belong to the upper middle or upper classes of English society; their ages vary from 40 to 70. The author is studying prosodic-kinetic complexes in monologue speech acts; these are units that include a kinetic phrase and the prosodic nucleus of a syntagma, which function jointly in the utterance (Sukhova 2004, 2006). The aim is to verify the hypothesis that different intentions of speakers and different pragmatic meanings of utterances give rise to different prosodic-kinetic complexes.
The communicative and pragmatic types of utterances, together with their illocutionary intentions, serve as a starting point for the investigation of monologue production. First,

the research material is divided into monologue fragments, each being ascribed to one of two communicative-pragmatic types according to the speakers' communicative intentions. The second step consists in the intention-based grouping of these fragments into two major categories, statements and estimations. Statements (13% of all cases) convey information new to the listener, while estimation utterances (the remaining 87%) express the speaker's attitude towards people, situations, actions, events, etc., and at the same time convey some new information about them. In producing an utterance, a speaker has several communicative aims:
(i) the aim to provide the conditions for the subsequent communicative act with the addressee;
(ii) the aim to express his or her intentions (cases of lying and some other sorts of speech-gesture manipulation are not considered here); and
(iii) the aim of monologue adjustment or, in a dialogical situation, dialogue adjustment.
These aims are attained by various prosodic and kinetic tools. The project demonstrates the existence of deep cognitive links between gestures and non-segmental phonological units, and it also considers and describes the meaning-making instruments that underlie the functioning of discourse units.

1.2. Ivanovo

The mainstream idea of the Ivanovo school of nonverbal communication and its representation in English literary texts is to study how verbal and nonverbal codes are related within a communication act. Of special interest are the issues of the interrelation between the verbal component and the nonverbal component of a communication act, which illuminates and clarifies some of the most enigmatic and important theoretical questions concerning communication. In particular, in studying the verbal and nonverbal components in different sorts of texts registering human emotions, the Ivanovo research group mainly deals with situations in which the verbal and nonverbal components express different emotional assessments (e.g., the verbal part expresses a positive emotion while the nonverbal part manifests a negative emotion, or vice versa). The work of the Ivanovo linguists and semioticians falls into several groups: (i) semantic and pragmatic analyses of nonverbal manifestations of emotions in English literary texts of different genres and styles (Ganina and Kartashkova 2006; Vansyatskaya and Kartashkova 2005); (ii) phraseological units corresponding to some types of nonverbal behavior (Kartashkova and Mayakina 2007; Mayakina 2006); and (iii) the gender factor influencing the behavior of nonverbal units and their representation in works of fiction (Tarasova 2006). The nonverbal manifestations of emotions are studied in non-congruent utterances, which are characterized by a discord between verbal and nonverbal factors. It is shown that in most cases of utterance usage, semantic analyses are insufficient for determining what kind of emotion is experienced by the communicator. It is the pragmatic context that

can make evident or clarify the pole of the emotional scale. Analyses carried out on the pragmatic level display the ways in which different implications can be inferred from the nonverbal component. The same holds true for cases of communicative failure. A salient component of communicative interaction is silence. Silence as a reaction to either the verbal or the nonverbal component is studied within two types of communication acts: those that include an unambiguous nonverbal component and those that include an ambiguous one. Within the former type, genuine emotion is displayed through the nonverbal channel, whereas the latter requires pragmatic context. Emotions as displayed through nonverbal behavior have also been studied from the point of view of gender. It has been statistically proven that English men are no less emotional than women. The set of nonverbal components describing masculine and feminine behavior is practically the same; the difference lies in the way men and women express their emotions. In expressing negative emotions, the main gender differences are observed in the visual, facial, and respiratory types of nonverbal components, while in expressing positive emotions the gender differences appear in the use of the tactile and facial nonverbal components. Various types of combinations of nonverbal components are described which simultaneously express the emotions of male and female communicators. This proves once more the multi-channel character of the nonverbal sign code. A systematic description of the language correlates of different sorts of nonverbal components focuses on the feature of gender and constitutes a separate part of the research. Of special interest is the analysis of phonosemantic (sound-symbolic) verbs, adjectives, and other language units.
It has been shown that masculine language predominantly involves correlates of nonverbal components expressing sharpness, strength, or voice, while feminine correlates are characterized by a great number of words marking unsteadiness of the voice. Gender also lies at the heart of studies on how different kinds of speech acts combine with one or another kind of nonverbal unit. Nonverbal behavior is shown to be largely dependent on the type of communication act – with the same sex or with the opposite sex. Presuming that a literary text consists of phrases that describe the verbal and nonverbal components of a communication act, the nonverbal semiotic and linguistic scholars from Ivanovo assert that the typical ordering patterns of verbal and nonverbal components in a text are: (i) a single nonverbal component follows the verbal component; (ii) the verbal component is in pre- or postposition to a single nonverbal component or a number of them; (iii) a number of nonverbal components follow the verbal component; (iv) a single nonverbal component, or a number of them, precedes the verbal component; (v) a single nonverbal component, or a number of them, is in pre- or postposition to the verbal component. They show that in congruent communication acts (as described in the text) emotions are first expressed verbally; then follows the nonverbal component (more often a single component than a group), which manifests the intensity of the emotion and the multi-channel character of nonverbal communication. As for non-congruent communication acts (as described in the text), emotions are expressed through the nonverbal channel, preceding the verbal part (which often masks the emotion).

Nowadays some Ivanovo scholars continue studying phraseological units that describe or correspond to nonverbal behavior, mostly English nonverbal behavior combined with verbal behavior. Two types of behavior are distinguished – controlled and uncontrolled – and the phraseological units considered fall into these two classes correspondingly. The semantic analysis of these classes of phraseological units explains the different types of emotional reactions at the idiomatic level of the English language and shows some relations between the communicators as well. Some phraseological units are shown to stress the intensity of the emotions experienced, while others display socio-cultural patterns of behavior.

2. Poland

The young Polish scholar Agnieszka Szczepaniak from the Institute of Polish Philology (University of Wrocław), supervised by Prof. Anna Dąbrowska from the same Institute, is working on the project "Cultural and non-cultural aspects of nonverbal communication on the example of Polish, Greek and British gestures". She provides a comparative empirical analysis of Polish gestures and their close equivalents in Greek and British culture. Szczepaniak uses Ekman and Friesen's method (Ekman and Friesen 1969, 1971), adjusting it to collect as much information as possible about the usage of gestures and their profile characteristics. She observes and records everyday conversations of Polish, Greek, and English native speakers, eliciting cultural gestures, i.e., gestures typical of the corresponding cultures, and compiling a list of gestures together with the words and phrases associated with these nonverbal units. Once the list has been obtained, the author asks 16 people of different gender, age, education, etc. to read the verbal units and to combine them with the gestures of their own culture. The recipients then have to show those gestures in front of a camera; the purpose is to make records of gestures for each country separately. The next part of the experiment is to decode the gestures: new informants (also 16 people) are invited to watch the recorded fragments and to explain the meanings of the verbal and nonverbal units. After this stage of the experiment the author has in her possession the final list of gestures and can compare the nonverbal signs from the different countries. In addition, the list of gestures can be used as the lexicon for a small cross-cultural nonverbal dictionary, which may serve educational purposes.

Another center of gesture studies in Poland can be found at Jagiellonian University (Kraków), led by Prof. Jolanta Antas in the Institute of Communication Theory of the Faculty of Polish Studies. Prof. Antas has been researching gesture since the early 1990s, focusing on the relations between concepts and their verbal and gestural expression. Some of this work has been conducted with her colleague, Dr. Aneta Załazińska, who has also studied the nonverbal structure of dialogue. Dr. Beata Drabik-Frączek from the same institute is extending gesture research to the study of communication by people with aphasia. The works and projects of many Polish students of semiotics were presented at the international conference "GESPIN 2009: Gesture and Speech in Interaction" in Poznań, September 24–26, 2009. Gestures forming a symbolic system, gestures as conventionalized signs, gestures as a medium of expression, as well as a unified multimodal grammar of gesture and speech – these are the separate divisions that cover all the studies presented at this conference. The topics and contents of these explorations can be divided into several blocks.

One group can be called "Analysis of gestures, gesture-and-speech utterances and their functioning in different discourses" (everyday communication, psychotherapy communication, philological discourse, taboo-related discourse, and some other types of communication). These works show convincingly that body movements are really intertwined with language and communication and constitute an integrated ensemble with vocal language. Among them are studies of speech and gesture interaction within motion events as they are represented in Polish culture and discourse (Lis 2009), nonverbal cues in relationship-focused integrative psychotherapy sessions (Pawelczyk 2009), some peculiarities of speech-gesture interaction in Polish narratives (Malisz et al. 2009), the role of gestures in taboo-related discourses (Wachowska 2009), and the semantics and pragmatics of gestures in dialogues about death (Biela-Wołońciej 2009). Historical perspectives on the study of body, language, and communication constitute the second group of research. The investigations of some Polish scholars are connected with the origin and acquisition of gestures and gesture-like movements. One of them, Orzechowski (2009), assumes that it was the bipedal posture that freed the upper limbs from locomotion and made them available for gesturing. He asserts that gestures possess the capacity for the projection of spatial features, and conveying information related to space constitutes an important element of coordinating the gestural actions of a social group, aimed at meeting goals that are essential for the survival of the group. Several works on body movements as they are studied in computer science form a separate field of Polish scholars' investigations.
Gestures, postures, gaze, and movements in computer science (the problem of embodied agents, the modeling of body movements, including gestures, for embodied agents, and some theoretical and practical implications for the lives and communication of hearing-impaired people) – all these key topics lie at the centre of the third group of studies. The problems of improving the functioning of the Thetos system, a computer system intended for the translation of Polish texts into Polish Sign Language, and the experimental identification and translation of emotions into body and sign languages are also favorite topics of Polish semioticians (see, among others, Romaniuk and Suszczańska 2009).

3. Croatia

The projects of Bogdanka Pavelin Lešić, Associate Professor at the Faculty of Humanities and Social Sciences, Department of Romance Languages, University of Zagreb, continue a long tradition of Croatian nonverbal research. It starts from the works of the linguist Petar Guberina (1913–2005), who paved the way for a genuinely linguistic analysis of speech as a multimodal phenomenon, taking into account the importance of rhythm, intonation, and gestures as optimal factors in the structuring of the utterance, and consequently in the acquisition of language and language development. His work was influenced by the famous Geneva School of Languages and Linguistics (Charles Bally, Jean Piaget). Guberina was the first in Croatia to rank nonverbal units (gestures and facial expressions) alongside accentuation, rhythm, tempo, intonation, intensity, and silences, regarded as values or qualities of the spoken language. He also stressed the importance of taking into account the situation in which the utterance takes place, and he developed the well-known verbotonal system for the education of hearing-impaired people and of foreign language learners. The first of B. Pavelin Lešić's projects lies in the field of French gesture studies and is a natural extension of her pre-Ph.D. thesis on French emblems entitled "La réception


de la mimogestuelle française" and of her Ph.D. thesis on oral speech gestures in face-to-face dialogues, "La posturomimogestuelle dans l'échange langagier en face à face". The second project was P. Guberina's project "Polisenzorika slušanja" ('Multiple sensory audition'). This multidisciplinary project covered 14 different subprojects, and B. Pavelin Lešić was responsible for the subproject "Spaciogramatika: gramatika jezika i gramatika prostora" ('Spatiogrammar: grammar of language and grammar of space'). In this project, audition (hearing) was regarded not only as an acoustic phenomenon but as a multimodal object: sound, movement, and space are represented as separate and equal components of communication. The research was based on videotaped spoken interactions, and its objective was to describe the functions of some gestures, postures, and facial expressions that occur in oral discourse. These nonverbal units facilitate the communication of hearing-impaired individuals. A form of multimodal transcription was invented to present all these units in a uniform scheme. The investigations carried out within this project were presented in a series of publications (Pavelin Lešić 2002a, 2002b). The main topic of Pavelin Lešić's present research is people's face-to-face interactions. She studies them from a multidisciplinary point of view that involves linguistics, phonetics, semiotics, psycholinguistics, communication theory, and the ethnography of communication. Her special interest lies in the domain of sound and movement synergy in speech pragmatics. Natural language cannot be reduced to a one-level, segmental linguistic system; it also involves an open repertoire of suprasegmental and kinesic means of expression. Referring to the visual suprasegmental manifestations, the researcher uses the term posturomimogestuality, taking into account the importance of global body movement.
This term covers not only meaningful hand movements (often referred to as gestures); posturomimogestuality also includes nonverbal manifestations reflecting the semantic, syntactic, and pragmatic aspects of the global utterance. Some other Croatian scholars deal with the body and bodily signs used in rhetorical communication. There is research on gestures in political speech conducted at the Department of Phonetics at the University of Zagreb, explorations in the field of social communication with a focus on the semantics and pragmatics of semiotic units in everyday discourse, Polish-Croatian comparative gesture studies (Pintarić 2002), and investigations of body movements in Russian literature (namely Gogol's works; see Vojvodić 2006).

4. The Czech Republic and Slovakia

In the Czech Republic, the first studies in the field of nonverbal communication were conducted only in the 1970s. The pioneers were Jaro Křivohlavý and Jaromír Janoušek, who specialized in psychiatry and psychology and thus focused on normal and pathological human nonverbal behaviour, as well as on the specifics of the dialogue between psychiatrist and client, on the one hand, and between clients in a small group, on the other. Their studies were later developed in the works of Zdeněk Vybíral and Oldřich Tegze. The only book in the Czech Republic on the nonverbal semiotics of everyday communication is Zdeněk Klein's atlas of Czech gestures, Atlas Sémantických Gest (Klein 1998), which, again, was written for the purposes of psychiatry. The dictionary contains 143 gestures, each of which is denoted by a triadic code: the active body part – the passive body part – the index number. Each unit is given a short description of its performance accompanied by a sketch and a photograph, a semantic label, and the percentage results of its interpretation by men and women. The aim of the atlas was to help psychiatrists discover the features of normal and abnormal gestural behaviour. Zdeněk Klein (1944–2000), an ethologist by education, worked in the Prague Psychiatric Centre and taught ethology at the faculty of natural science at Charles University. His main research interests were dermatoglyphs and the semiotics of nonverbal communication. Because of his political views he was prohibited from teaching and research work for a long time, rights he could regain only in 1990. In this last decade of his life he published his Atlas and several papers. Although Klein's interdisciplinary method, based on biology, psychology, and semiotics, was unique, and Klein himself was very popular among his students, he had no followers. This may explain why the term semantic gesture, widely used by Klein (as an analogue of Ekman's emblematic gesture, "a gesture with shared meaning" (Klein 1998: 24)), was not carried over later.

Although the Czech Republic and Slovakia were a single state for a long time and their scholars have been in close interaction, the situation in Slovakia is a little different. The only scholar who developed an original Slovak approach to nonverbal semiotics was Josef Mistrík (1921–2000). A professor of linguistics, specialist in stylistics, handwriting expert, and theatre-lover, Mistrík contributed to the study of gesture and speech interaction in Slovakia by introducing an interdisciplinary joint approach, created on the basis of socio-psychological and philological approaches to nonverbal communication (see Mistrík 1998, 1999).

5. References

Arkadyev, Peter M., Grigory E. Kreydlin and Alexander B. Letuchiy 2008a. Sravnitel'ny analiz verbal'nykh i neverbal'nykh znakovykh kodov (postanovka zadachi i sposob ee resheniya) [Comparative analysis of verbal and nonverbal semiotic codes (the task and the methodology)]. In: A. V. Bondarko, G. I. Kustova and R. I. Rozina (eds.), Dinamicheskiye Modeli. Slovo. Predlozheniye. Tekst, 439–449. Moscow: Jazyki slav'anskikh kultur.
Arkadyev, Peter M., Grigory E. Kreydlin and Alexander B. Letuchiy 2008b. Semioticheskaya konceptualizaciya tela i ego chastey. I. Priznak "Forma" [Semiotic conceptualization of the body and its parts. I. The feature "form"]. Voprosy Yazykoznaniya 6: 78–97.
Biela-Wołońciej, Aleksandra 2009. Verbal and nonverbal coping with difficult topics. GESPIN Gesture and Speech in Interaction Proceedings, Poznań, 24–26 September 2009, volume 1.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98.
Ekman, Paul and Wallace V. Friesen 1971. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17(2): 124–129.
Ganina, Vera and Faina I. Kartashkova 2006. Emotions and nonverbal behavior of people. Unpublished manuscript, Ivanovo State University.
Grigorjeva, Svetlana A., Nikolay V. Grigorjev and Grigory E. Kreydlin 2001. Slovarj Jazyka Russkih Zhestov [The Dictionary of Russian Gestures]. (Wiener Slawistischer Almanach, Sonderband 49.) Moscow/Vienna: Jazyki russkoj kultury.
Kartashkova, Faina I. and Marina A. Mayakina 2007. The general characterization of the dictionary model of phraseological units (semantic derivatives of nonverbal components of communication). In: Olga Karpova and Faina I. Kartashkova (eds.), Essays on Lexicon, Lexicography, Terminography in Russian, American and Other Cultures, 223–235. Newcastle: Cambridge Scholars Publishing.
Klein, Zdeněk 1998. Atlas Sémantických Gest [The Atlas of Semantic Gestures]. Prague: HZ editio.
Kreydlin, Grigory E. 2007. Leksikografiya zhestov i ikh nominacij (slovari i bazy dannykh) [Lexicographic description of gestures and their nominations (dictionaries and databases)]. In: Materialy VII Mezhdunarodnoy shkoly-seminara "Sovremennaya leksikografiya: global'nye problemy i nacional'nye resheniya", 17–19. Ivanovo.
Lis, Magdalena 2009. Motion events in Polish: Speech and gesture. GESPIN Gesture and Speech in Interaction Proceedings, Poznań, 24–26 September 2009, volume 1.
Mayakina, Marina A. 2006. Phraseological collocations describing nonverbal behavior (pragmatic and lexicographic aspects). Ph.D. dissertation, Ivanovo State University.
Mistrík, Josef 1998. Pohyb Ako Reč [Gesture as Speech]. Bratislava: Národné divadelné centrum.
Mistrík, Josef 1999. Vektory Komunikácie [Vectors of Communication], 2nd edition. Bratislava: Univerzita Komenského.
Orzechowski, Sylwester 2009. Why hands? Why gestures? Origins of human gestures. GESPIN Gesture and Speech in Interaction Proceedings, Poznań, 24–26 September 2009, volume 1.
Pawelczyk, Joanna 2009. 'Your head is moving, do it out loud': Therapist's uptake of clients' nonverbal cues in relationship-focused integrative psychotherapy sessions. GESPIN Gesture and Speech in Interaction Proceedings, Poznań, 24–26 September 2009, volume 1.
Pavelin Lešić, Bogdanka 2002a. Statut et rôle du mouvement dans la communication orale en face à face. In: R. Renard (ed.), Apprentissage d'une Langue Étrangère Seconde. La Phonétique Verbotonale, Tome 2, 71–87. Bruxelles: De Boeck Université.
Pavelin Lešić, Bogdanka 2002b. Le Geste à la Parole. Toulouse: Presses Universitaires du Mirail.
Pereverzeva, Svetlana I. 2009. Human body in the Russian language and culture: The features of body and body parts. GESPIN Gesture and Speech in Interaction Proceedings, Poznań, 24–26 September 2009, volume 1.
Pintarić, Neda 2002. Pragmemi u Komunikaciji [Pragmemes in Communication]. Zagreb: Zavod za lingvistiku Filozofskoga fakulteta Sveučilišta u Zagrebu.
Romaniuk, Julia and Nina Suszczańska 2009. Studies on emotion in the Thetos system. GESPIN Gesture and Speech in Interaction Proceedings, Poznań, 24–26 September 2009, volume 1.
Sukhova, Natalya V. 2004. Vzaimodeistvie prosodii i neverbal’nykh sredstv v monologe (na materiale angliiskikh dokumental’nykh filmov) [Interaction between prosody and nonverbal means in monologue speech (on the English documentaries)]. Ph.D. dissertation, Moscow State Linguistic University. Sukhova, Natalya V. 2006. Vzaimodeistvie prosodicheskogo jadra i kineticheskoi frazy v raznykh kommunikativno-pragmaticheskikh tipakh monologicheskikh vyskazyvanii [Interaction between a prosodic nucleus and a kinetic phrase in different communicative and pragmatic types of monologue utterances]. Moscow Linguistic Journal 9(1): 51⫺67. Tarasova, O. 2006. Gender behavior from the angle of correlation between verbal and nonverbal components. Ph.D. dissertation, Ivanovo State University. Vansyatskaya, Elena and Faina I. Kartashkova 2005. Nonverbal components of communication in English literary text. Unpublished Manuscript. Ivanovo State University. Vojvodic´, Jasmina 2006. Gesta, Tijelo, Kultura: Gestikulacijski Aspekti u Djelu Nikolaja Gogolja [Gesture, Body, Culture]. Zagreb: Disput. Wachowska, Monika 2009. The role of gesture in taboo-related discourse. GESPIN Gesture and Speech in Interaction proceedings, Poznan, 24⫺26 September 2009, volume 1.

Grigory E. Kreydlin, Moscow (Russia)

VII. Body movements – Functions, contexts, and interactions

91. Body posture and movement in interaction: Participation management

1. Introduction
2. Scientific-historical background of context analysis and the approach to multimodal communication
3. Empirical research on interaction synchrony/coordination
4. Comparison
5. Conclusion
6. References

Abstract

The synchronization of body posture and movement in interaction has been investigated under varying terminologies and from quite different perspectives and schools of thought. The phenomenon of self- and interaction synchrony was first observed by Condon and his colleagues (Condon 1976, 1980; Condon and Ogston 1966, 1967, 1971; Condon and Sander 1974) on the basis of frame-by-frame analyses of videotaped interactions. While they interpreted it in etic terms, Kendon (1990a), within the framework of context analysis, studied the function(s) of several instances of movement coordination in communication. For reasons of scientific history, his findings were long disregarded and have only recently come to be recognized in the newly developing approach to multimodal communication (Deppermann and Schmitt 2007). Researchers in this latter framework empirically validate conversation analytic concepts in light of their inherent multimodality. After some terminological distinctions are introduced, the second section presents context analysis and the approach to multimodal communication against their scientific-historical background. The third section gives an overview of the leading questions, methodology, basic empirical findings, and interpretations of exemplary representatives: Condon (Condon 1976, 1980; Condon and Ogston 1966, 1967, 1971; Condon and Sander 1974), Kendon (1990c, 1990d, 1990e), and Deppermann and Schmitt (2007) together with Mondada and Schmitt (2010b). The fourth section reconstructs the development of research on multimodal interaction through a comparison of these approaches.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1301–1310

1. Introduction

Whenever two or more people engage in a focused encounter, an observer may notice a fine-tuning of their body movements and postures. This phenomenon has been investigated using a variety of terminologies and from quite different perspectives or schools of thought. The terminology itself is telling: While in social psychology the communicative value of "motor mimicry" is investigated in experimental studies (e.g., Bavelas et al.


1986, 1988), context analysis began with an etic analysis of "self- and interactional synchrony" (Condon 1976, 1980; Condon and Ogston 1966, 1967, 1971; Condon and Sander 1974) and later investigated its communicative functions under the label of "movement coordination" (Kendon 1990c). Likewise, in the newly developing approach to multimodal communication, "intrapersonal" and "interpersonal coordination" have become a prominent object of investigation from a functional perspective (Deppermann and Schmitt 2007). Despite their varying theoretical and methodological backgrounds, researchers agree that bodily coordination is an indispensable prerequisite for any verbal exchange. The phenomenon itself can be further differentiated: On the one hand, interaction synchrony encompasses matching behavior, with postural congruence as one of the most frequently matched behaviors, which "may involve crossing the legs and/or arms, leaning, head propping, or any number of other positions. […] When the listener's behavior is a mirror image of the speaker's, this form of matching is called mirroring" (Knapp and Hall 1997: 278; original emphasis). Mirroring in turn may take the form of "reflection symmetry" or "rotation symmetry" (Bavelas et al. 1986). On the other hand, "interaction synchrony" may refer simply to "the ongoing co-occurrence of changes in movement and speech by each of the two interactants. Changes […] refer to the initiation, termination, speed and/or duration direction of the behaviors under study" (Knapp and Hall 1997: 280). While these definitions capture the notion of interaction synchrony in social psychology and, in a more limited sense, that of context analysis, in the approach to multimodal communication coordination is described in terms of its interactive functions rather than defined.

2. Scientific-historical background of context analysis and the approach to multimodal communication

2.1. Context analysis

Context analysis is most prominently represented by the work of Adam Kendon, who republished some of his most influential papers in 1990. The following exposition is based on Kendon's introduction to this collection, in which he renders "some context for Context Analysis" (Kendon 1990b). Context analysis was strongly shaped by the cooperation of various scholars at the Institute for Advanced Studies in the Behavioral Sciences at Palo Alto, who were brought together by a question posed by Frieda Fromm-Reichmann, a psychiatrist who wondered what bodily cues she relied on in diagnosing her patients. Colleagues from several disciplines then contributed to the development of a new perspective on body behavior in interaction. First of all, in line with information theory, information was defined as whatever is new for a given recipient. This abstract definition of information allowed bodily behavior to be assessed as informative. Furthermore, given their technical origins, information theorists were concerned with the problem of distinguishing signal from noise. In human interaction, the question of what counts as signal and what as noise depends on the perspective or focus of attention. Cybernetics, in turn, allowed communication to be analyzed as a self-regulating system. Participants were no longer seen as speaker and hearer reacting to one another, but as constantly adapting their own behavior to their recipient's response through continuous monitoring and frame attunement. Both these theories called for


investigating interaction in its authentic context – similar to what was already standard practice in anthropology – and required filming or videotaping interaction in order to collect "specimens", that is, audio-visual documents of naturally occurring interactions. These audio-visual data in turn called for a rigorous methodology for identifying functional units, which was provided by structural linguistics, where methods and concepts for segmenting the stream of behavior into functional units had already been developed. While initial studies focused on the behavior of individuals in interaction, later the interaction system itself came under scrutiny.

2.2. Approach to multimodal communication

During the last decade, German Gesprächsforschung (conversation analysis in the broader sense) split into two different research strands, both of which widen the scope of conversation analytic studies: on the one hand, interactional linguistics, which investigates the interactional use of linguistic structure (previously widely neglected in conversation analysis, or treated rather unsystematically and beyond the influence of linguistic theory) (see Selting and Couper-Kuhlen 2000, 2001); on the other hand, the approach to multimodal communication, which surmounts the longstanding restriction to verbal data by widening the focus to include body movements in interaction (Deppermann and Schmitt 2007; Mondada and Schmitt 2010b). The approach to multimodal communication developed out of regular meetings of a group of (former) conversation analytic researchers at the Institute for German Language (IDS) in Mannheim and the Laboratoire ICAR (Centre National de la Recherche Scientifique and Université de Lyon) in Lyon. Since 2006, the group has intensified its cooperation in a joint research project on openings in contrastive perspective. In recent years, it has become increasingly apparent that theoretically and methodologically accounting for the genuine multimodality of interaction has fundamentally changed the research program, in addition to raising new research questions, calling for new methods of analysis, and ultimately leading to a reconceptualization of some basic categories established in conversation analysis (Deppermann and Schmitt 2007; Mondada and Schmitt 2010b; Schmitt 2006).

3. Empirical research on interaction synchrony/coordination

3.1. Context analysis

The phenomenon of self- and interaction synchrony was first highlighted in the work of Condon, who in a series of studies (Condon 1976, 1980; Condon and Ogston 1966, 1967, 1971; Condon and Sander 1974) investigated whether human behavior is better analyzed as the organization of patterns of behavior or as the combination of discrete units of behavior (see e.g., Condon 1980: 49). Careful frame-by-frame microanalysis of sound films revealed that the body movements of a speaker are precisely coordinated with his speech, a phenomenon that Condon termed self-synchrony (Condon and Ogston 1966: 342). Until then, the role of listener behavior had been disregarded. Equally careful microanalyses showed that listeners organize their body movements in relation to speech as well, so that, to quote his famous metaphor, not only "the body of the speaker dances in time with speech. Further, the body of the listener dances in rhythm with that of the speaker!" (Condon and Ogston 1966: 338).


Condon explicitly opted for an etic perspective, describing behavior in physical terms without looking for its precise communicative function(s). These studies on self- and interaction synchrony were taken up by Kendon (1990c), most explicitly in his study on "movement coordination in social interaction" (originally published in 1970), yet from a functional perspective, taking into account the specific contexts of the observed behaviors (see Kendon 1990c: 94). In this study, Kendon investigated how participants in a focused encounter coordinate their body movements. Most significantly, Kendon showed that axial participants, that is, the speaker and his direct addressee, come to share a rhythm of movements, whereas non-axial participants only indirectly synchronize their movements by attending to the speaker (see Kendon 1990c: 103, 111). Synchronization is most prominent at the beginning and at the end of speaking turns. While at the beginning and throughout the speaking turn the primary addressee, by synchronizing his behavior with that of the speaker, enables the speaker to see whether and how he is understood, at the end of a turn the current primary addressee may initiate a different movement, which is then taken up by the current speaker. Kendon interprets this as the current addressee's advance warning that he wants to take the turn, and "further, it may be that in overtly 'beating time' to [the current speaker's] speech, he may thereby facilitate the precise timing of his own entry as a speaker […]" (Kendon 1990c: 104). Kendon observed different functions for mere synchronization and mirroring: "when, in an interchange, speaker and listener mirror one another's postures, if there is a change in posture which does not reflect a change in the relationship, such posture shifts often occur synchronously, and in these instances we may again get movement mirroring.
This is in contrast to those occasions when there is a change in the relationship between the participants in the interchange, for instance when one starts to ask the other questions. We may then see synchronous posture shifts, or head position shifts, but the movements are differentiated, not mirrored" (Kendon 1990c: 104). In another study, "A description of some human greetings" (Kendon 1990d, originally published in 1973 together with Andrew Ferber), Kendon demonstrated that the rhythmical coordination of movements serves as a preparation for the greeting itself. There he shows how the synchronization of body movements can be used as a less risky way (compared to gaze direction and body orientation towards the possible interlocutor) of signaling one's willingness to greet someone should the greeting be reciprocated (Kendon 1990d: 171). In his study "Spatial organization in social encounters: the F-formation system", Kendon (1990e) investigates how participants manage to establish and maintain an F-formation, that is, "a spatial-orientational relationship, in which the space between them is one to which they have equal, direct, and exclusive access" (Kendon 1990e: 209). Kendon shows that entering an F-formation requires "cooperative action between himself and members of the existing system" (Kendon 1990e: 230). As in the work of Condon, Kendon's notion of coordination appears to be restricted to the mirroring and/or synchronization of body posture and body motion. Yet, while Condon proposes an etic analysis, both perspectives are present in the work of Kendon: the investigation of forms or patterns of behavior from a functional perspective, and the investigation of the means by which a specific function, in his case the establishment and maintenance of a common focus, is realized.
While synchronization is the focus of the study on movement coordination in social interaction, in his other studies, the leading question is how people come to share a common focus of attention. In line with


Goffman (esp. Goffman 1971), Kendon considers a shared focus of attention to be the prerequisite for any focused encounter. In this perspective, synchronization cannot be disentangled from other means of establishing and maintaining an F-formation such as gaze organization and spatial-positional orientation of the participants (see the account in Kendon 1990f).

3.2. Approach to multimodal communication

Whereas in context analysis the decision to use filmed material arose from the need to capture "specimens" of bodily communication in interaction, for the approach to multimodal communication the study of coordination was a rather unexpected byproduct of its methodological decision to analyze videotapes instead of audio recordings of interactions. As a starting point for analysis, the approach to multimodal communication takes conversation analytic concepts and categories and aims at their empirical validation and critical revision (see Mondada and Schmitt 2010b: 43). Consequently, many papers in the approach to multimodal communication explicitly focus on central conversation analytic concepts such as turn-taking (Mondada 2006, 2007; Schmitt 2005), understanding in interaction as a situated, sequential, and embodied practice (Deppermann, Schmitt, and Mondada 2010; Mondada 2011), and openings (in contrastive perspective, all the contributions gathered in Mondada and Schmitt 2010a). Yet, accounting theoretically and methodologically for the genuine multimodality of interaction brings up objects of investigation that had not previously been addressed in conversation analysis (Mondada and Schmitt 2010b: 26–29). Among them, the phenomenon of coordination has attracted so much attention that an entire anthology has been dedicated to it (Deppermann and Schmitt 2007). In their theoretical foundation of coordination, Deppermann and Schmitt (2007) distinguish between intrapersonal and interpersonal coordination, which at first glance corresponds to Condon's notion of self- and interaction synchrony: Intrapersonal coordination encompasses those activities by which participants adjust and/or time their own behaviors in the multiple modes of expression – verbal expression, facial expression, gaze, gesture, body position, spatial orientation, and others.
Interpersonal coordination encompasses the temporal, spatial, and multimodal adjustment of one's own acts and behaviors to those of other participants. Yet, in contrast to Condon's etic analysis, coordination in the sense of Deppermann and Schmitt (2007) is considered a functional category. The authors define coordination as behaviors in all modes of bodily expression that co-occur with and enable verbal contributions, yet cannot be seen as goal-oriented contributions in themselves (Deppermann and Schmitt 2007: 22–23). Some of the basic findings in the approach to multimodal communication confirm Kendon's observations of some forty years earlier. In short, it has become apparent that it is through the coordinated multimodal behavior of participants – encompassing their spatial-orientational positioning, the organization of their gaze, and the synchronization of their body movements and postures – that the conditions for a focused encounter, under which a verbal exchange can take place, are established and maintained. Before the first words are exchanged, potential interlocutors must establish a participation frame and identify each other as willing participants in an intended interaction (Mondada and Schmitt 2010b: 38). Body orientation plays an important role, since it is by modifying their pacing, approaching, orienting towards one another, and organizing head orientation and gaze that potential participants may anticipate the opening of an encounter


(Mondada and Schmitt 2010b: 38). Especially in the opening phase, the establishment of a common interaction space, what Kendon termed F-formation, is crucial. The specifics of this interaction space may already project thematic, pragmatic, and social aspects of the future interaction (Mondada and Schmitt 2010b: 39).

4. Comparison

Both approaches developed in a number of ways in relation to conversation analysis: as an addition to it, a continuation of it, and in contrast to it. Consequently, a comparative perspective provides a deeper understanding of how the three approaches relate to each other. Context analysis adapted the methodology of descriptive/structural linguistics for the analysis of bodily behavior. Owing to the cognitive turn in linguistics, it long stood in the shadow of generative grammar. Furthermore, context analysis and conversation analysis developed as mutually exclusive fields of research. Given the dominance of conversation analysis and its longstanding restriction to verbal interaction, context analysis was widely ignored in linguistics and in sociology. Thus, during the last decade, the newly developing approach to multimodal communication has only hesitantly acknowledged the pioneering work of Kendon. Despite their rather ambivalent relation to conversation analysis, both approaches share some common assumptions with conversation analysis as well as with each other: All three refrain from experimental as well as induced data, relying instead on authentic data (naturally occurring social interaction); all three opt for an inductive approach – instead of counting occurrences of pre-established categories, they derive their categories from the analysis itself. Yet they differ essentially in that conversation analysis concentrates on verbal interaction alone, while context analysis and the approach to multimodal communication have from the beginning based their analyses on filmed and videotaped interaction. As a consequence, both (i) focus on multimodal communication, not on verbal means alone, and (ii) treat simultaneity as equally important as sequentiality: (i) No mode of communication is given priority over the others; all modes of communication are of equal interest, unless participants treat one as more relevant, e.g.
for the core activity. But whereas Kendon concentrates on the role of body behavior in establishing and maintaining the conditions under which focused encounters, and within them verbal exchanges, become possible, researchers in the approach to multimodal communication consider coordination to be intrinsically involved in any focused encounter itself, not mere framing of or a prerequisite for it. Moreover, they postulate a dialectic relationship between speech and the other modalities (see Mondada and Schmitt 2010b: 37). (ii) Neither approach operates with a merely chronological notion of time; nevertheless, they differ in their notions of temporality insofar as context analysis (at least Condon) looks at the temporal relationship of (the beginnings and endings of) movements as well as changes in bodily configurations, whereas the approach to multimodal communication investigates the functional relation of any body movement to the interactive accomplishment of some joint activity. Condon opts for an etic analysis, supposing "that synchrony appears to occur in relation to the etic rather than the emic segmentations of behavior" (Condon and Ogston 1966: 339, original emphasis). Meanwhile, researchers in the approach to multimodal communication investigate how time is emically structured by participants (Oloff 2010: 175). In this sense, the chosen terminology reflects the different approaches: While "self- and interaction synchrony" refer exclusively to the rhythm/pacing of body movements, "coordination" refers to its function. Consequently, researchers in the approach to multimodal communication look not only at the mere succession of body postures and changes thereof, but at their sequentiality in the sense of projection forces. They show that coordination does not just precede the verbal exchange but already establishes the frame and projects the specifics of the upcoming verbal exchange. These analyses thus confirm an essential observation in the study of gestures, namely that pre-positioned gestures "render the scene in which the talk arrives a prepared scene" (Schegloff 1984: 291). Last but not least, the authors differ in their explanations of the phenomenon of synchronization: Condon explained self- and interactional synchrony by common underlying neurological processes (see Condon 1976: 305, 309), given the latency time of 50 milliseconds (Condon 1980: 56). Bavelas et al. propose a parallel processing theory suggesting that "internal reactions and communicative responses […] function independently, and it is the communicative situation that determines the visible behavior" (Bavelas et al. 1986: 322). The authors do not deny the possibility that a stimulus may evoke inner responses, yet they explain the overt mirroring behavior in strictly communicative terms, that is, not as an expression of inner states, but rather as representations to one another.
Kendon develops a cognitive perspective and explains interaction synchrony in terms of comprehension (see Kendon 1990c: 113); ultimately, he considers synchrony a means of coordinating expectations among participants (see Kendon 1990c: 114). Researchers in the approach to multimodal communication explain any observable form of coordination by reference to its functional relatedness to the specific situation.

5. Conclusion

In social-psychological research, open questions concern the contexts under which more or less synchronization may be observed, its perception by participants, and the methods for its investigation (Knapp and Hall 1997: 284). In interaction analytic research, the functions of coordination and their theoretical status await conclusive determination. Kendon, for his part, did not define his notion of coordination explicitly, but it seems to be restricted to mirroring and synchronization of body posture, body motion, and spatial-orientational positioning. Nonetheless, he does take into account the communicative and interactive functions of other modalities as well (e.g., gaze and facial expression). Although he clearly recognized the functions of coordination as fundamental for any focused encounter, what is lacking in his studies is an integrated or comprehensive account of intrapersonal as well as interpersonal coordination in the sense explicated by the approach to multimodal communication (Deppermann and Schmitt 2007). Deppermann and Schmitt, by contrast, integrate and analyze the manifold requirements of coordination in interaction in much more detail. Meanwhile, in the approach to multimodal communication the theoretical status of coordination remains rather vague: while in Mondada and Schmitt (2010b) coordination


is referred to as one of the new objects of investigation in the multimodal perspective, in Deppermann and Schmitt (2007) multimodality is said to be one of the constitutive aspects of coordination. Likewise, the modalities – such as voice, prosodic structure, gesticulation, facial expression, body posture, body orientation, spatial positioning, and forms of movement (walking, standing, sitting) – are said to constitute resources in the processing of the aspects of coordination (temporality, dimensionality, multimodality, and multi-person-orientation) (Deppermann and Schmitt 2007: 25), and at the same time to constitute levels of multimodality itself (Mondada and Schmitt 2010b: 24–25). Many explorative studies on coordination have by now been conducted, yet it remains to be determined where the phenomenon finds its place in an integrated theory of communication. For an etic perspective on synchronization and motor mimicry, further insights might be gained from recent research on mirror neurons (Knoblich and Jordan 2002; Rotondo and Boker 2002). The emic perspective, in turn, empirically validates many of Goffman's (1971) observations on focused encounters. It may be further inspired by recent work on performativity in cultural studies (Fischer-Lichte and Wulf 2001, 2004).

Acknowledgements

Many thanks to Hildegard Gornik and Joe Couve de Murville for their careful reading and valuable comments on an earlier version of this paper.

6. References

Bavelas, Janet Beavin, Alex Black, Charles R. Lemery and Jennifer Mullett 1986. "I show you how I feel": Motor mimicry as a communicative act. Journal of Personality and Social Psychology 50(2): 322–329.
Bavelas, Janet Beavin, Alex Black, Nicole Chovil, Charles R. Lemery and Jennifer Mullett 1988. Form and function in motor mimicry: Topographic evidence that the primary function is communicative. Human Communication Research 14(3): 275–299.
Condon, William S. 1976. An analysis of behavioral organization. Sign Language Studies 13: 285–318.
Condon, William S. 1980. The relation of interaction synchrony to cognitive and emotional processes. In: Mary R. Key (ed.), The Relationship of Verbal and Nonverbal Communication, 49–65. The Hague: Mouton.
Condon, William S. and William D. Ogston 1966. Sound film analysis of normal and pathological behavior patterns. Journal of Nervous and Mental Disease 143(4): 338–347.
Condon, William S. and William D. Ogston 1967. A segmentation of behavior. Journal of Psychiatric Research 5(3): 221–235.
Condon, William S. and William D. Ogston 1971. Speech and body motion synchrony of the speaker-hearer. In: David L. Horton and James J. Jenkins (eds.), Perception of Language, 150–173. Columbus, OH: Merrill.
Condon, William S. and Lewis W. Sander 1974. Neonate movement is synchronized with adult speech: Interaction participation in language acquisition. Science 183(4120): 99–101.
Deppermann, Arnulf and Reinhold Schmitt 2007. Koordination. Zur Begründung eines neuen Forschungsgegenstandes. In: Reinhold Schmitt (ed.), Koordination. Analysen zur multimodalen Interaktion, 15–93. Tübingen: Gunter Narr Verlag.
Deppermann, Arnulf, Reinhold Schmitt and Lorenza Mondada 2010. Agenda and emergence: Contingent and planned activities in a meeting. Journal of Pragmatics 42(6): 1700–1718.
Fischer-Lichte, Erika and Christoph Wulf (eds.) 2001. Theorien des Performativen. Paragrana. Internationale Zeitschrift für historische Anthropologie 10(1).
Fischer-Lichte, Erika and Christoph Wulf (eds.) 2004. Praktiken des Performativen. Paragrana. Internationale Zeitschrift für historische Anthropologie 13(1).
Goffman, Erving 1971. Relations in Public: Microstudies of the Public Order. New York: Basic Books.
Kendon, Adam 1990a. Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press.
Kendon, Adam 1990b. Some context for context analysis: A view of the origins of structural studies of face-to-face interaction. In: Adam Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, 15–49. Cambridge: Cambridge University Press.
Kendon, Adam 1990c. Movement coordination in social interaction: Some examples described. In: Adam Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, 91–115. Cambridge: Cambridge University Press.
Kendon, Adam 1990d. A description of some human greetings. In: Adam Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, 153–207. Cambridge: Cambridge University Press.
Kendon, Adam 1990e. Spatial organization in social encounters: The F-formation system. In: Adam Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, 209–237. Cambridge: Cambridge University Press.
Kendon, Adam 1990f. Behavioral foundations for the process of frame-attunement in face-to-face interaction. In: Adam Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, 239–262. Cambridge: Cambridge University Press.
Knapp, Mark and Judith Hall 1997. The effects of gesture and posture on human communication. In: Mark Knapp and Judith Hall, Nonverbal Communication in Human Interaction, 4th edition, 223–261. Fort Worth: Harcourt Brace College Publishers.
Knoblich, Günther and Jerome Scott Jordan 2002. The mirror system and joint action. In: Maxim I. Stamenov and Vittorio Gallese (eds.), Mirror Neurons and the Evolution of Brain and Language, 115–124. Amsterdam/Philadelphia: John Benjamins.
Mondada, Lorenza 2006. Participants' online analysis and multimodal practices: Projecting the end of the turn and the closing of the sequence. Discourse Studies 8(1): 117–129.
Mondada, Lorenza 2007. Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies 9(2): 194–225.
Mondada, Lorenza 2011. Understanding as an embodied, situated and sequential achievement in interaction. Journal of Pragmatics 43(2): 542–552.
Mondada, Lorenza and Reinhold Schmitt (eds.) 2010a. Situationseröffnungen. Zur multimodalen Herstellung fokussierter Interaktionen. Tübingen: Narr Francke Attempto Verlag.
Mondada, Lorenza and Reinhold Schmitt 2010b. Zur Multimodalität von Situationseröffnungen. In: Lorenza Mondada and Reinhold Schmitt (eds.), Situationseröffnungen. Zur multimodalen Herstellung fokussierter Interaktionen, 7–52. Tübingen: Narr Francke Attempto Verlag.
Oloff, Florence 2010. Ankommen und Hinzukommen: Zur Struktur der Ankunft von Gästen. In: Lorenza Mondada and Reinhold Schmitt (eds.), Situationseröffnungen. Zur multimodalen Herstellung fokussierter Interaktionen, 171–228. Tübingen: Narr Francke Attempto Verlag.
Rotondo, Jennifer L. and Steven M. Boker 2002. Behavioral synchronization in human conversational interaction. In: Maxim I. Stamenov and Vittorio Gallese (eds.), Mirror Neurons and the Evolution of Brain and Language, 151–162. Amsterdam/Philadelphia: John Benjamins.
Schegloff, Emanuel 1984. On some gestures' relation to talk. In: Maxwell Atkinson and John Heritage (eds.), Structures of Social Action: Studies in Conversation Analysis, 266–296. Cambridge: Cambridge University Press.


VII. Body movements – Functions, contexts, and interactions

Schmitt, Reinhold 2005. Zur multimodalen Struktur von turn-taking. Gesprächsforschung – Online-Zeitschrift zur verbalen Interaktion 6: 17–61.
Schmitt, Reinhold 2006. Videoaufzeichnungen als Grundlage für Interaktionsanalysen. Deutsche Sprache 34(1–2): 18–31.
Selting, Margret and Elizabeth Couper-Kuhlen 2000. Argumente für die Entwicklung einer Interaktionalen Linguistik. Gesprächsforschung. Online-Zeitschrift zur verbalen Interaktion 1: 76–95. (www.gespraechsforschung-ozs.de)
Selting, Margret and Elizabeth Couper-Kuhlen 2001. Forschungsprogramm Interaktionale Linguistik. Linguistische Berichte 187: 257–287.

Ulrike Bohle, Hildesheim (Germany)

92. Proxemics and axial orientation

1. Introduction
2. A selective and critical overview of the research
3. Outlook
4. References

Abstract

The regulation of distance and the body orientation of communicating partners are semiotic resources that are relevant, or can be assumed to be relevant, in every co-present interaction situation. As this overview of the research shows, this is borne out by studies that systematically relate proxemic activities and axial alignment to the accomplishment of specific interactive tasks.

1. Introduction

An elaborate and secret code that is written nowhere, known by none, and understood by all. (Sapir 1928: 137)

Non-verbal signals such as facial expressions, gestures, and physical movements that are learnt or acquired in the course of socialization and enculturation frequently occur in communicative situations in conjunction with verbal signals. However, whilst the structure of a language involves discrete units (sounds, words, phrases, constructions, and expressions among others), non-verbal signals must first be identified as such in order for their meaning to be interpreted in context – if they are not grasped intuitively as a result of strong conventions (see Grammer 2004: 3448–3449). Thus non-verbal, as well as verbal, signals act as signifiers: they can be perceived and interpreted and therefore lead to conclusions (inferences) of a contingent (index), causal (symptom), associative (icon), or rule-based type (symbol) (see Keller 1995: 113–132). Relevant components of the context for the interpretation of non-verbal signals are, firstly, aspects of the communicative situation (including the characteristics and features of the interlocutors) and, secondly, the verbal signals that are always involved in a concurrent understanding of non-verbal signals (see Grammer 2004: 3474). In this way, the production of a structure of interaction order comprises different types of signals and signs and is also involved in their mutual interaction. Non-verbal signals are therefore not just interpretable due to a given context, but at the same time they provide the conditions of the context under which the verbal part of the communication is interpreted. Non-verbal signals are correspondingly also understood as contextualization cues (see Auer 1986; Auer and di Luzio 1992; Gumperz 1982, 1992). Until recently, studies of the connection between verbal and non-verbal signals in the comprehension process have been carried out principally within the field of kinetic-motor behavior in communication (Birdwhistell 1952, 1967, 1970), in particular studies of facial expressions (Ehlich and Rehbein 1982; Ekman, Friesen, and Ellsworth 1974; Ekman, Friesen, and Hager 2002; Huber 1983; Scherer and Wallbott 1979 among others) and of gestures (Argyle 1972; Armstrong, Stokoe, and Wilcox 1995; Egidi 2000; Ekman and Friesen 1969; Feyereisen 1997; Feyereisen and de Lannoy 1991; Hübler 2001; Kendon 1972, 2002; Kresse and Feldmann 1999; McNeill 1992, 2000; Morris 1995; Müller 1998; Müller and Posner 2004; Scheflen 1976; Schmauser and Noll 1998 among others). In contrast, two aspects of non-verbal behavior, as related to the verbal part of communication, have rarely been studied specifically: proxemic behavior (see Hall 1969, 1974, 2003), that is, the spatial relationship between interacting participants (for an overview, see Aiello 1987), and axial orientation, that is, leaning towards or away from the communicating partner with the head or body.

(Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1310–1323)
Contributions to research in this area stem mainly from sociologists, anthropologists, ethnographers, or psychologists. They can be distinguished in that they focus more on the identification and categorization of non-verbal activities and the development of new concepts, and less on the interplay with the verbal aspects of communication. The reason for this is that much of this research is based (explicitly or implicitly) on a working definition of non-verbal research that can be attributed to the most prominent researcher in proxemic behavior, Edward T. Hall: "Nonverbal communication, defined (by Hall) as communication that does not involve the exchange of words" (Rogers, Hart, and Miike 2002: 10). From this perspective, it is above all the emotional and interpersonal attitudes expressed in proxemic and axial behavior that have come to the fore. In case studies, however, the interaction of distance and axial orientation with other non-verbal components has been investigated (Argyle and Ingham 1972; Kendon 1973). Nevertheless, there are few in-depth studies of how non-verbal activities such as distance regulation and forward inclination link with concurrent verbal activity, in terms of the organization of discourse or thematic control. More recently, however, a concept of discourse analysis has been established under the term multimodal interaction, in which the observation of participants focuses equally on verbal and non-verbal aspects of communication, in terms of the construction of an interactive order by all those involved; this anthroposemiotic term also encompasses proxemics and axial orientation (see, e.g., contributions in Schmitt 2007). How the relationship between non-verbal and verbal signals is to be evaluated in principle remains controversial: are non-verbal activities such as the regulation of interactive distance and forward inclination – analogous to the differentiation in the functions of hand gestures in connection with spoken language according to Ekman and Friesen (1969) – complementary to speech (or contrastive), or do they also function as substitutes for verbal activities (as emblems)? In general terms, the question is: what possible communicative functions do non-verbal behaviors like interactive distance and inclination regulation fulfill? The following selective and critical overview of research seeks to ascertain if, and to what extent, distance behavior and axial orientation are considered in research that also considers the verbal element of communication. In so doing, the focus rests on studies in which conclusions are drawn, or can at least be inferred, about tendencies or preferences in the body positioning of interlocutors in two- or more-person discourses. The presentation offers only an approximate organization according to discipline and methodological approach to the phenomena. A clear delineation of the disciplines involved is not always possible, because of the interconnection of scientific theories.

2. A selective and critical overview of the research

2.1. Semiotics

Rauscher (1986) outlines the intentional use of non-verbal signals using the example of distance regulation. From the semiotic perspective, interactive distance is portrayed as a sign system whose signs (products) are not only used intentionally, but whose intentional use, in hindsight, is also the subject of reflection (see also Ravelli and Stenglin 2008). The "language of space" as a proxemic communication schema can be reconstructed as follows: "Sender A, at an interactive distance Dx – as opposed to other possible interactive distances D1 … Dn with D1…n ≠ Dx – intends to communicate an item E by Dx to Receiver B, who attributes the conventional meaning E to Dx. The intention of A involves that B recognizes this intention" (Rauscher 1986: 449). (All German direct quotes have been translated into English in order to ensure a more even flow of language.) The problem here is that the "distance-meaning pairing" (Dx → E) is assumed to be conventionally established – expressed in the construction grammar paradigm, this would correspond to "proxemic constructions". The phenomenon of distance, in particular, invites the assumption that the measurable element (distance) could result in specific meanings being attributed. Since distance between interacting partners cannot fundamentally be fixed in terms of set classifications and boundaries, but is much more negotiated and assigned relevance interactively, a model of interaction distance as a signal system with physically measurable elements does not accurately represent the relational or dynamic character of personal space behavior. A "semiotic" use of proxemic relationships can be explained in a similar way to Rauscher's reconstruction cited above, but primarily with the aid of Grice's so-called basic model (see Grice 1979; Meggle 1981; Rolf 1994).
This would have the advantage that the same model could be applied to both verbal and non-verbal behavior in describing how meaning is attributed.

2.2. Anthropology and social psychology

The existence of four distance zones, first described by Hall (1969), which regulate proxemic behavior and reflect or determine the combined roles and status of the interlocutors, is undisputed. As a result of the work of the anthropologist Hall, the various culturally conditioned dimensions of the zones of intimate proximity, personal space, social space, and public space can be distinguished. With the exception of public space, these distance zones have biological origins to a certain extent; ethological findings indicate that they are part of species-specific forms of territorial behavior (see, for example, the difference between flight, defense, and critical distance in Hediger 1934). The respecting of these distance zones is mutually required by the interlocutors; deliberate encroachments are either sanctioned or become a topic of verbal discussion. In order to discover more precisely how the interlocutors handle these distance zones when they are part of a two- or more-person discourse, it should be observed that, depending on the occasion of the interaction and the role and status of the interlocutors, the physical spatial arrangement is either accepted as a pre-established rule (and generally observed in the course of the interaction) or negotiated. The latter is particularly valid in communicative situations in which the interacting partners have the possibility of freely adopting a (physical) position for discourse, or of changing position, such as communicating while standing. It can be deduced from these cases that certain positions (distancing, or leaning towards or away) are favored in certain constellations and that this also correlates, in part, with particular interactive tasks, i.e., that interactive distance behavior and axial orientation interact systematically with the respective verbal aspect of the communication. Argyle (1975), linking the findings of behavioral research, ethnology, and experimental social psychology, also highlighted that this spatial aspect, particularly forward or backward inclination, plays a significant role in communication.
Thus, he indicates that the distance between interlocutors is also controlled by the extent to which they are able to perceive auditory and kinetic signals: "If another person is too close only part of him can be seen; if he is too far away his facial expression cannot be seen. Orientation also affects how much can be seen; however, if he adopts the best orientation for seeing – head-on – the looker is also fully exposed himself, perhaps more than he wants to be" (Argyle 1975: 155). This last remark makes it clear that not just physical, but also cognitive and social-psychological factors are assumed to be responsible for the adopting or changing of certain physical positions during discourse. Spatial behavior is always "encoded and decoded in terms of interpersonal attitudes, and in other ways, […] though it may not be intended to communicate at all" (Argyle 1975: 312; see also Kalverkämper 1998: 1341). Argyle (1975: 300–306) also observes that distance behavior and axial orientation are connected by a kind of inverse relationship: a frontal orientation is linked to a greater distance, a side-by-side orientation to a smaller distance. Experiments confirm the preference for certain axial positionings in different social relationships with the interacting partner: competition with a face-to-face orientation, cooperation with a side-by-side orientation (see Cook 1970; Sommer and Becker 1969). Incidentally, the latter can be used, to a certain extent, as a somatic explanation of how the expression of solidarity ("to stand shoulder-to-shoulder") may have arisen. The favored conversation position when sitting at a triangular table is a ninety-degree orientation (except under culturally determined special conditions; see Argyle 1975: 307–312 on this topic).
The positioning "across the corner" thus suggests both cooperation and competition, enables a shared perception of events and processes external to the discourse, and is moreover open to further potential interlocutors. These are important indicators that certain formations are influenced by both the type of social relationship and the opportunity for verbal communication.
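At bottom, Hall's zone model described above is a partition of measurable interpersonal distance into nested ranges. As a purely illustrative sketch, the classification can be written in a few lines; the boundary values used here are the approximate figures commonly cited from Hall (1969) for North American subjects, not fixed universals, since the chapter stresses that the dimensions are culturally conditioned and interactively negotiated.

```python
# Illustrative sketch of Hall's (1969) four distance zones as nested ranges.
# Boundary values are the approximate figures Hall reported for North
# American middle-class subjects; actual dimensions are culturally
# conditioned, so these numbers are only one possible parameterization.

HALL_ZONES = [
    ("intimate", 0.0, 0.45),          # up to ~45 cm
    ("personal", 0.45, 1.2),          # ~45 cm to ~1.2 m
    ("social",   1.2, 3.6),           # ~1.2 m to ~3.6 m
    ("public",   3.6, float("inf")),  # beyond ~3.6 m
]

def classify_distance(meters):
    """Map a measured interpersonal distance (in meters) to a zone label."""
    if meters < 0:
        raise ValueError("distance cannot be negative")
    for label, lower, upper in HALL_ZONES:
        if lower <= meters < upper:
            return label
    return "public"

# Example: a conversation at roughly arm's length (0.8 m) falls into the
# personal zone under this parameterization.
```

Such a lookup captures only the static taxonomy; as the critique of Rauscher's sign-system model above makes clear, the interactively negotiated character of distance is precisely what a fixed classification cannot represent.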


2.3. Psychology

In psychology, the regulation of distance and axial orientation as articulated elements of speech-related behavior has been studied, within the parameters of research into emotion, mainly for diagnostic purposes (see Wallbott 2003: 561). According to the well-known functional classification of Ekman and Friesen (1969), non-verbal behavior can be divided into five categories (illustrators, adaptors, emblems, regulators, and affect displays). It is above all certain body postures that fall within the category of affect displays; however, it can be assumed that distance and axial orientation, understood as "dynamic body postures" in their interactive role, can also take on the role of regulators. Salewski (1993) considers in depth the spatial distance behavior in interaction, as developed within the concept of "personal space" (Hayduk 1981a, 1981b, 1983). The concept of personal space posits an intra-individually constant, but inter-individually different, personal sphere, which has the value of a personality trait. According to the justified criticism of Patterson (1975, 1976), the personal space concept should be re-categorized as an interpersonal space concept. Personal space only becomes visible when its boundaries are violated; however, such a crossing of boundaries can only be initiated by others (by bordering on or overlapping the personal space) – but probably also by proxies for others, such as cameras, as might be studied experimentally. Salewski further sought a model that might explain the measurable adoption of a particular interaction space; Knowles' (1980) affiliative conflict theory appears to be a suitable candidate. At its core this theory is based on the equilibrium theory of Argyle and Dean (1965), which holds that eye contact and interaction space behaviors arise from a need to balance affiliation and distance.
Knowles (1980) considers the tendency to approach as fulfilling above all the desire for contact and feedback, whereas the tendency to keep one's distance is motivated by the fear of rejection by others or of the unwilling and public exposure of inner states. The motives for approaching and keeping distance mentioned here probably explain not just the maintenance of particular interaction zones, but go some way towards explaining which observable actions of the interlocutors are responsible for their positioning. With respect to the empirical study above and its criticism of the personal space concept, Salewski (1993: 71–96) concludes that personal space behavior is not only an interpersonal phenomenon, but also that even "the expectation of one interlocutor causes the other to display a particular personal space behavior". This correlation between the interpersonal relationship (or its assumption) and interaction space behavior had already been documented by Leipold (1963). He was able to show that interaction space behavior correlates systematically with the expected attitude of the other: if one interlocutor believes that the other has a negative attitude towards him or her, he or she will maintain a physical distance (independently of how the other behaves in the actual communication situation); if he or she believes that the other has a positive attitude, he or she will seek physical proximity. Leipold designed his experiment around the roles of teacher and pupil. However, it cannot be ruled out that, even under non-institutional conditions and outside the laboratory, personal spaces are linked systematically to the ideas and constructions that the interlocutors mutually assume about the presence or absence of affinity – independently of whether these are justified or not. With regard to axial orientation, Knapp (1979: 324) expands on Leipold's findings by observing that, if communication takes place while standing, the shoulders of the person with higher status tend to be directed towards the person of lower status. This occurs, however, independently of the quality of the attitude towards the person of higher status. The findings identified here point to the fundamental assumption that interaction space and axial orientation are influenced and directed by various factors, which can strengthen or hinder potential outcomes. Such factors include cultural norms and conventions, situational elements (cooperation and competition), personality variables (gender, age, status, popularity, etc.), as well as non-proxemic non-verbal behaviors (facial expression and gestures among others) and, of course, verbal communication elements (see Hayduk 1983 for a structured overview of this topic).
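The compensatory logic of Argyle and Dean's (1965) equilibrium theory invoked above can be caricatured in a few lines of code. The theory itself is qualitative; the additive intimacy measure, the weights, and the normalizing distance below are entirely hypothetical, serving only to illustrate the prediction that when proximity is perturbed, gaze adjusts in the opposite direction so that overall intimacy stays near a preferred level.

```python
# Toy sketch of Argyle and Dean's (1965) equilibrium idea. The weighting
# scheme is hypothetical (the theory is qualitative): intimacy is treated
# as a weighted sum of proximity and eye contact, and when one channel is
# perturbed the other compensates to hold intimacy near a preferred level.

def compensate_eye_contact(distance_m, preferred_intimacy=1.0,
                           w_proximity=1.0, w_gaze=1.0, max_distance=3.6):
    """Return the eye-contact level (0..1) that restores equilibrium."""
    # Proximity contribution grows as distance shrinks; normalized to 0..1
    # using an arbitrary cut-off (here borrowing Hall's ~3.6 m boundary).
    proximity = max(0.0, 1.0 - distance_m / max_distance)
    gaze = (preferred_intimacy - w_proximity * proximity) / w_gaze
    return min(1.0, max(0.0, gaze))

# The model predicts Argyle and Dean's compensation pattern: moving closer
# (smaller distance) yields less eye contact, and vice versa.
```

The sketch also makes the limits of the equilibrium account visible: it treats intimacy as a scalar balanced within one individual, whereas the findings of Salewski and Leipold cited above show that expectations about the other's attitude shape the behavior as well.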

2.4. Psychotherapeutic research

In the field of psychotherapy, Scheflen (1964) in particular has drawn attention to the "significance of posture in communication systems". Scheflen adopts Birdwhistell's view that "the communication system as a whole [is] an integrated arrangement of structural units, originating from kinesic, tactile, verbal and other elements" (Scheflen 1964: 320). That elements appear next to each other is by no means coincidental; indeed, the interaction between individual elements is coordinated or synchronized, as, for example, the movements of the head, eyes, and hands with the change in intonation at the end of verbal phrases. In this context, Scheflen also describes aspects of posture that are used analogously in larger units of communication. In addition to standardized configurations of posture, he includes markers for "points, positions and presentations" (Scheflen 1964: 320–324), as well as indicators of the relationships between interlocutors. If the interlocutors form a group, they define themselves through their body position and the placing of their limbs. "If group members are standing or are able to move the furniture, they will tend to form a circle" (Scheflen 1964: 326), in order to demarcate a territory within which to interact. The group action of creating such an "enclosure" is directed towards the equilibrium of the group. It serves to maintain social control within the group (see Scheflen 1976: 38–50). By adopting particular (body) positions for discourse, access to the group and availability within the group are regulated. Scheflen (1964: 326) sees this as the realization of the inclusive and non-inclusive function of (body) posture (in relation to others). He differentiates further between interactive (face-to-face orientation) and complementary (side-by-side orientation) types of activity, although here he only considers particular seating arrangements and no forms of discourse in which positions are freely adopted.
In this context, Scheflen makes the pertinent observation that, if particular postures are imitated directly or reproduced as a mirror image by the interlocutors, this reflects viewpoints or roles – further evidence that non-verbal signals can function as cues or aids to the contextualization or interpretation of the "uptake". If we consider the focus of the most recent studies of non-verbal processes in psychotherapy, it can be noted that other non-verbal elements (for example, smiling, touching, or various modes of expressing emotions) are emphasized, while interaction space behavior and axial orientation have been relegated to a place of medium interest (see, for example, Hermer and Klinzing 2004).

2.5. Non-verbal communication

In his study of the role of visible behavior in the organization of social interaction, Kendon (1973: 35–37) states that there are two significant conditions for the spatial arrangement of groups of people: the adoption or maintenance of a particular distance from the other interacting participants, as well as an axial orientation that allows the other interlocutors to be faced by turning the head through an angle of less than 90 degrees. Such "configurations" indicate the status of those involved. "The particular form that the configuration as a whole assumes reflects the type of occasion and the kind of role relationships prevailing in the gathering" (Kendon 1973: 37). Circular configurations often indicate equal rights. Triangles, semi-circles, or parallelograms "tend to have a 'head' position at which the member with the most rights to participation is usually located" (Kendon 1973: 39). At the same time, the distance between the interacting participants in similar configurations can vary according to the environment. Thus, groups in public spaces often move closer together than in private spaces, because in "open" spaces the territory demarcated for interaction by the group has to withstand potential intruders (see Kendon 1973: 37–38). Exactly how such configurations originate, what dynamics they exhibit, and how they disband is the research focus called for by Kendon's work. With respect to the roles of the participants in multi-person discourses, Kendon distinguishes between speakers, active listeners, non-axial listeners, and those who have temporarily withdrawn from the interaction (see Kendon 1973: 52–54). Speaker and active listeners are connected by a so-called axis of interaction (a term that can be traced back to a work by Watson and Potter 1962), which originates in the coordination of movement and interactive synchrony, regulated above all through speaker change (see also Condon and Ogston 1966, 1967).
In a detailed study based on the analysis of film footage of a seated multi-person group, Kendon and his colleagues were able to establish that interacting participants demonstrate responsiveness, or general communicative availability in an impersonal way, to the dominant or current speaker in a group through synchronized movements (even if this is only the rhythm with which particular head, hand, or body movements are performed). Comparable observations of the actions and behavior patterns of interacting participants who are standing, and therefore less restricted in their movement, have indicated that synchronizing sequences of movement is the key communicative function – apparently independent of which form of behavior is affected. According to the classification of functions by Ekman and Friesen (1969), patterns of interactionally synchronized behavior are classified as regulators. In this respect, Kendon (1990a) stresses that body positioning and axial orientation in the formation promote a focused interaction, which both frames and structures the joint action: "there is a systematic relationship between spatial arrangement and mode of interaction" (Kendon 1990a: 251).

2.6. Linguistics

In her search for a linguistic explanation of non-verbal communication, Kühn concludes that "from the correlation of the multi-dimensional nature of physicality and the multimodality of the response to physicality" (Kühn 2002: 208) three basic principles of the fundamental structure and organization of non-verbal communication can be derived: the coordination principle (temporal coordination of body and language), the choreography principle (the relationship of body movements to one another), and the proxemics principle (personal and shared communication spaces). With regard to the coordination principle, Kühn points out the interesting fact that, next to the synchronized interaction of language and body, asynchronous cases can also be observed, for example when gestures precede speech (see also the anticipatory gestures noted by McNeill 1985; Streeck and Knapp 1992). Comparable explanations can be assumed for some of the actions connected with proxemics and axial orientation: approaching or moving away, and forward and backward inclination, can likewise precede and anticipate speech. As far as the choreography principle is concerned, Kühn (2002: 216–231) stresses that attempts at a reproducible analysis of inter-subjective movements have so far been unsuccessful. The spectrum of research here stretches from the meticulous measurement of physical magnitudes to psychological or free artistic interpretations. What can be confirmed is that "choreographic" body movements, coordinated with speech, make the process of reception easier for the communicating partners, since these structural aspects of verbal communication help the interlocutors to clarify the tasks and requirements involved in organizing the discourse. Kühn sees the proxemics principle realized, above all, in the defense of personal space and in the taking up and release of interactional space. In concrete terms, the proxemics principle thus reflects the fundamental assumptions of politeness research, as expressed by Brown and Levinson (1978) for example. The need for proximity and forward inclination, as well as the need for distance and backward inclination, can all be seen as realizations of the ambivalent interests that Brown and Levinson (1978: 66–70) attribute to each interlocutor: on the one hand, the need to be acknowledged and valued by others; on the other, the desire to remain undisturbed and unimpeded in one's own actions. The empirical basis for Kühn's study is a discourse group with a fixed, semi-circular seating arrangement, open towards the camera. In this way she takes into account the penetration of the gesturing space of the other (e.g., to negotiate the right to speak) and changes in axial orientation (for example, to address the group or for thematic positioning).
What cannot be observed in this setting, however, is personal space behavior in interactive situations in which the interlocutors are able to adopt their body positions freely, with jointly agreed sequences of movement and the "dancing" with or against one another. In all, Kühn's observations demonstrate clearly that distance regulation and axial orientation can be used as contextualization cues, along with other indicators.

2.7. Multi-modality

Originating from the idea of interaction as the multi-modal production of an interactive structure by all those involved (see Deppermann and Schmitt 2007: 16–20; Hausendorf, Mondada, and Schmitt 2012), non-verbal elements of communication have become an increasing focal point in recent years – doubtless due to the technical advances made in the collection and processing of empirical data (the focus on so-called context analysis can already be seen in the work, for example, of Kendon 1990b, and of Kress and van Leeuwen 2001 and Norris 2004 among others). In order to do justice to the now complex documentation of interaction processes, it is worth moving beyond the focus placed on the verbal: "prosody, facial expression, eye contact, gestures and body posture should be afforded the same methodological standing in future, that the analysis of verbal expression currently enjoys" (Schmitt 2004: 1). In this connection, some studies of interaction in multi-modal terms have specifically taken as their theme interaction distances, the spatial arrangement of the bodies of the interlocutors, and their axial orientation, also as regards their relevance for the concurrent verbal communication. With respect to establishing an interaction order (in the sense of Goffman 1983), Mondada (2007), comparing several detailed studies, showed that verbal actions are suspended, slowed down, or delayed in order to shape the shared interactive space by reorganizing the physical positioning, such that an appropriate continuation of communication is enabled (for example, producing a shared line of sight when giving directions, including corresponding pointing gestures). The results of the cross-sectional studies show clearly that body positioning in the interaction space is the product of both interpersonal and intrapersonal coordination. The latter is particularly evident in that the formatting of the corresponding utterance shows traces of the demands of the non-verbal actions occurring at the same time. Mondada concludes: "the orientation of participants in the interaction space is not a given, simply existing, already present, but rather, it must be actively created by them" (Mondada 2007: 84). It can be assumed that even these acts of formation can generally be correlated systematically with the (verbal) interaction tasks at hand, i.e., that not only do linguistic expressions show traces of the process of creating an appropriate interaction space, but proxemic and axial behavior are also actions that contribute to the production of expressions related to interaction tasks, and only then can continued communication be facilitated. Tiittula (2007) studied the organization of gaze in a side-by-side position with three interlocutors during a business dialogue and concluded: "different interaction constellations and the status of those involved are initiated and terminated through the organization of looks and body postures" (Tiittula 2007: 248). In this situation, a change in axial orientation also proves to be closely connected with the different task-specific verbal demands (in the above study, products are presented and checked, orders are taken, etc.). Furthermore, the changes in physical orientation correlate with the alternation between phases of business-motivated action and non-business-related types of orientation (for example, the explanation of certain words in the other language), which is accompanied by a change in the mode of interaction.
The focus of the analysis carried out by Tiittula, though primarily concerned with how looks are organized, is nevertheless evidence that the physical orientation of the interlocutors is also systematically linked to verbal interaction tasks and demands. Taking the example of dance teachers, who observe individual pairs during the dance lesson and briefly interrupt and correct them, Müller and Bohle (2007) propose a prototype for the preparatory steps in establishing an interaction space. They make a detailed study of how the pre-conditions for a focused interaction are created together by those involved, through body orientation and positioning within the space. Müller and Bohle identify the positioning and orientation of the pelvis and the feet as the physical actions by which the three interlocutors in the above example established a triangle formation together, to which all have equal access. The task for the third person (dance teacher) is formulated as follows: "How do I gain entry into an existing interaction space for two people and in a face-to-face orientation?" (Müller and Bohle 2007: 133). Opening up focused interaction as a structural principle of social interaction is also valid in everyday situations. If people standing start a discourse in an unstructured open space, they form polygons, depending on their number. A tendency can also be seen to re-shape the established interaction space to facilitate the entry and exit of those involved, i.e., to ensure that accessibility for all is guaranteed by creating the new formation together (Müller and Bohle 2007: 151–160). Müller and Bohle consider the role of the verbal element only in so far as they show that the production of a "fundament of focused interaction" (the title of their text) is the shared, coordinated action of those involved, which precedes and facilitates verbal exchanges. 
The transition between physical action and individual, concrete linguistic utterances in the interaction space is not studied in detail, however.

92. Proxemics and axial orientation

2.8. Conclusion

This overview of the research indicates that results on, or merely incidental comments about, proxemic behavior and axial orientation are spread over various disciplines and embedded in different methods of investigation and key questions. Research addressing the question of what connection there is between non-verbal and verbal actions has not yet been concluded. The trend in current research seems to be to perceive both as elements of the same process (Wallbott 2003). In order to analyze how verbal and non-verbal communication mesh together, however, it is also necessary to understand and classify non-verbal signals specifically. To do this, as Wallbott (2003: 579) highlights, it is essential: "not only to observe nonverbal behavior and language at a macro level, but to attempt to study in 'micro-analyses' the point-by-point relationship between both behaviors over the course of time". The basis for this must be an appropriate transcription of the non-verbal behavior (see, for example, Hausendorf, Mondada, and Schmitt 2012, for more recent analyses).

3. Outlook

In studying personal space or proxemic actions and axial orientation as part of human communication systems, it is evident that related behavior and action are both significant. As behavior, these non-verbal elements of communication are symptoms; as actions they are symbols. Rauscher (1986) argues that only in cases of intentional use can proxemics be part of a semiotic study: "as far as our actions are affected, the conscious application of proxemics relations, as a signal for the receiver to use socially established paradigms with regard to space, or for which the receiver accepts the sender's intention in this situation, on the basis of such paradigms" (Rauscher 1986: 441). If we disregard the underlying problems of what counts as part of semiotics, it becomes clear that in the intentional use of proxemic and axial symptoms, a metamorphosis of signals can be demonstrated, as construed by Keller (1995: 160–173). Particular (physical) discourse positions can be adopted intentionally and consciously in relation to specific communication states or interaction functions. Behavior as a natural, qualifying adaptation to the physical and socio-cultural environment is instrumentalized, that is, used intentionally, in order to be recognized and understood as such. In this context it concerns a symbolizing of symptoms, as a result of which Keller (1995: 165–167) insists that: "the assumption of communicative intentions, which in particular cases do not need to be present, including the development of collective knowledge, enables a symptom to become a symbol" (Keller 1995: 167). Thus, for example, the integration of a person in an already formed circular constellation is a sign of group membership and, in the collective knowledge, a symbol of social success. Should integration in a discourse circle be forced, with the aim of making social success apparent, this can also be termed a staging of symptoms (Keller 1995: 166). 
If someone places himself in a discourse circle in order to be accepted as a potential interlocutor (see Sager, central trunk orientation, 2000: 557), this indicates that he is adopting the role of an interlocutor in that particular group, with which rights, but also duties, are associated (see also Kendon 1973: 37–38). Principles of courtesy are valid here, for example, that a person answers when a question is posed, and also the communicative demands, for example, the principle of conditional relevance, which limits the spectrum of possible replies (see Sacks, Schegloff, and Jefferson 1974). If a person is accepted physically, in this sense, into a discourse circle, however without the intention of taking advantage of the corresponding rights or fulfilling his obligations, he nevertheless benefits from the communicative function of a symptom that has become a symbol: it allows his social success to be recognized, and this still functions even if he disqualifies himself, in whatever way, within the discourse circle. From an intercultural perspective, a further distinct area of connections opens up for consideration. An intercultural comparison of the preferred (physical) discourse positions could provide information on the extent to which personal space behavior and axial orientation cause irritation or misunderstanding in intercultural communication. Hall already remarked that: "One of my earliest discoveries in the field of intercultural communication was that the position of the bodies of people in conversation varies with the culture" (1969: 150). He goes on to describe how an Arabic friend found it impossible to talk to him while they were walking side by side (see also Schmitt 2012 on walking as a cultural practice), as it was deemed impolite in his culture to look at the partner in the conversation out of the corner of the eye. Culturally specific differences of this type are particularly evident in intercultural communication situations in which discourse positions are freely adopted and can be altered, when irritations or misunderstandings arising from the negotiation of "somatic" discourse formation are implicitly evaluated or become an explicit verbal topic (see Kühn 2002: 289 on the status of so-called reception indicators).

Acknowledgements

This paper would not have been possible without Svend F. Sager. It was through his considerations of the transcription of proxemic behavior and axial orientation (Sager 2000, 2001), as well as through personal conversation, that the basis of this joint project was established, in terms of searching for options for presentation, descriptive categories, and methods of analysis for activities of this type at the "display circle".

4. References

Aiello, John R. 1987. Human spatial behavior. In: Daniel Stokols and Irwin Altman (eds.), Handbook of Environmental Psychology, Volume 1, 389–504. New York: John Wiley.
Argyle, Michael 1972. Soziale Interaktion. Köln: Kiepenheuer and Witsch.
Argyle, Michael 1975. Bodily Communication. London: Methuen.
Argyle, Michael and Janet Dean 1965. Eye-contact, distance and affiliation. Sociometry 28(3): 289–304.
Argyle, Michael and Roger Ingham 1972. Gaze, mutual gaze and proximity. Semiotica 6(1): 32–49.
Armstrong, David F., William C. Stokoe and Sherman Wilcox 1995. Gesture and the Nature of Language. Cambridge: Cambridge University Press.
Auer, Peter 1986. Kontextualisierung. Studium Linguistik 19: 22–47.
Auer, Peter and Aldo di Luzio (eds.) 1992. The Contextualization of Language. Amsterdam: John Benjamins.
Birdwhistell, Ray L. 1952. Introduction to Kinesics. Washington, DC: Foreign Service Institute.
Birdwhistell, Ray L. 1967. Some body motion elements accompanying spoken American English. In: Lee Thayer (ed.), Communication: Concepts and Perspectives, 53–76. Washington, DC: Spartan Books.
Birdwhistell, Ray L. 1970. Kinesics and Context. Essays on Body Motion Communication. Philadelphia: University of Pennsylvania Press.
Brown, Penelope and Stephen Levinson 1978. Universals in language usage: Politeness phenomena. In: Esther N. Goody (ed.), Questions and Politeness. Strategies in Social Interaction, 56–289. Cambridge: Cambridge University Press.

Condon, William S. and William D. Ogston 1966. Soundfilm analysis of normal and pathological behaviour patterns. Journal of Nervous and Mental Disease 143(4): 338–347.
Condon, William S. and William D. Ogston 1967. A segmentation of behaviour. Journal of Psychiatric Research 5(3): 221–235.
Cook, Mark 1970. Experiments on orientation and proxemics. Human Relations 23(1): 61–76.
Deppermann, Arnulf and Reinhold Schmitt 2007. Koordination. Zur Begründung eines neuen Forschungsgegenstandes. In: Reinhold Schmitt (ed.), Koordination. Analysen zur multimodalen Interaktion, 15–54. Tübingen: Gunter Narr.
Egidi, Margreth (ed.) 2000. Gestik. Tübingen: Gunter Narr.
Ehlich, Konrad and Jochen Rehbein 1982. Augenkommunikation. Methodenreflexion und Beispielanalyse. Amsterdam: John Benjamins.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behaviour: Categories, origins, usage, and coding. Semiotica 1(1): 47–98.
Ekman, Paul, Wallace V. Friesen and Phoebe C. Ellsworth 1974. Gesichtssprache. Wege zur Objektivierung menschlicher Emotionen. Wien/Köln/Graz: Böhlau.
Ekman, Paul, Wallace V. Friesen and Joseph Hager 2002. Facial Action Coding System (FACS). The Manual on CD-Rom. Salt Lake City: A Human Face.
Feyereisen, Pierre 1997. The competition between gesture and speech production in dual-task paradigms. Journal of Memory and Language 36(1): 13–33.
Feyereisen, Pierre and Jacques-Dominique de Lannoy 1991. Gesture and Speech. Psychological Investigations. Cambridge: Cambridge University Press.
Goffman, Erving 1983. The interaction order. American Sociological Review 48(1): 1–17.
Grammer, Karl 2004. Körpersignale in menschlicher Interaktion. In: Robert Posner, Klaus Robering and Thomas A. Sebeok (eds.), Semiotik. Ein Handbuch zu den zeichentheoretischen Grundlagen von Natur und Kultur, 3448–3487. Berlin/New York: de Gruyter.
Grice, H. Paul 1979. Sprecher-Bedeutung und Intention. In: Georg Meggle (ed.), Handlung, Kommunikation, Bedeutung, 16–51. Frankfurt am Main: Suhrkamp.
Gumperz, John J. 1982. Discourse Strategies. Cambridge: Cambridge University Press.
Gumperz, John J. 1992. Contextualization and understanding. In: Alessandro Duranti and Charles Goodwin (eds.), Rethinking Context: Language as an Interactive Phenomenon, 229–252. Cambridge: Cambridge University Press.
Hall, Edward T. 1969. The Hidden Dimension: Man's Use of Space in Public and Private. London: Bodley Head.
Hall, Edward T. 1974. Handbook for Proxemic Research. Washington, DC: Society for the Anthropology of Visual Communication.
Hall, Edward T. 2003. Proxemics. In: Setha M. Low and Denise Lawrence-Zúñiga (eds.), The Anthropology of Space and Place: Locating Culture, 51–73. Malden, MA: Blackwell Publishers.
Hausendorf, Heiko, Lorenza Mondada and Reinhold Schmitt (eds.) 2012. Raum als interaktive Ressource. Tübingen: Gunter Narr.
Hayduk, Leslie A. 1981a. The shape of personal space: An experimental investigation. Canadian Journal of Behavioural Science 13(1): 87–93.
Hayduk, Leslie A. 1981b. The permeability of personal space. Canadian Journal of Behavioural Science 13(3): 274–287.
Hayduk, Leslie A. 1983. Personal space: Where we now stand. Psychological Bulletin 94(2): 293–335.
Hediger, Heini 1934. Zur Biologie und Psychologie der Flucht bei Tieren. Biologisches Zentralblatt 54: 21–40.
Hermer, Matthias and Hans G. Klinzing (eds.) 2004. Nonverbale Prozesse in der Psychotherapie. Tübingen: DGVT-Verlag.
Huber, Richard 1983. Das kindliche Un-Tier. Vom Affenjungen, das nicht mehr Tier werden wollte. München: Selecta-Verlag Idris.
Hübler, Axel 2001. Das Konzept „Körper“ in den Sprach- und Kommunikationswissenschaften. Tübingen/Basel: Francke.


Kalverkämper, Hartwig 1998. Körpersprache. In: Gert Ueding (ed.), Historisches Wörterbuch der Rhetorik, Band 4, 1339–1371. Tübingen: Niemeyer.
Keller, Rudi 1995. Zeichentheorie. Zu einer Theorie semiotischen Wissens. Tübingen/Basel: Francke.
Kendon, Adam 1972. Some relationships between body motion and speech. In: Aron Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 177–216. Elmsford/New York: Pergamon Press.
Kendon, Adam 1973. The role of visible behaviour in the organization of social interaction. In: Mario von Cranach and Ian Vine (eds.), Social Communication and Movement, 29–74. London/New York: Academic Press.
Kendon, Adam 1990a. Conducting Interaction. Cambridge: Cambridge University Press.
Kendon, Adam 1990b. Some context for context analysis. A view of the origins of structural studies of face-to-face interaction. In: Adam Kendon, Conducting Interaction, 15–49. Cambridge: Cambridge University Press.
Kendon, Adam 2002. Some uses of the head shake. Gesture 2(2): 147–182.
Knapp, Mark L. 1979. Nonverbale Kommunikation im Klassenzimmer. In: Klaus R. Scherer and Harald G. Wallbott (eds.), Nonverbale Kommunikation: Forschungsberichte zum Interaktionsverhalten, 320–329. Weinheim/Basel: Beltz.
Knowles, Eric S. 1980. An affiliative conflict theory of personal and group spatial behavior. In: Paul B. Paulus (ed.), Psychology of Group Influence, 133–138. Hillsdale, NJ: Lawrence Erlbaum.
Kress, Gunther and Theo van Leeuwen 2001. Multimodal Discourse – The Modes and Media of Contemporary Communication. London: Arnold.
Kresse, Dodo and Georg Feldmann 1999. Handbuch der Gesten. Wien/München: Deuticke.
Kühn, Christine 2002. Körper-Sprache. Elemente einer sprachwissenschaftlichen Explikation non-verbaler Kommunikation. Frankfurt am Main: Lang.
Leipold, William D. 1963. Psychological distance in a dyadic interview as a function of introversion-extraversion, anxiety, social desirability, and stress. Ph.D. dissertation, University of North Dakota. 
McNeill, David 1985. So you think gestures are nonverbal? Psychological Review 92(3): 350–371.
McNeill, David 1992. Hand and Mind. What Gestures Reveal About Thought. Chicago/London: University of Chicago Press.
McNeill, David (ed.) 2000. Language and Gesture. Cambridge: Cambridge University Press.
Meggle, Georg 1981. Grundbegriffe der Kommunikation. Berlin/New York: de Gruyter.
Mondada, Lorenza 2007. Interaktionsraum und Koordinierung. In: Reinhold Schmitt (ed.), Koordination. Analysen zur multimodalen Kommunikation, 55–93. Tübingen: Gunter Narr.
Morris, Desmond 1995. Bodytalk. Körpersprache, Gesten und Gebärden. München: Wilhelm Heyne.
Müller, Cornelia 1998. Redebegleitende Gesten. Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia and Ulrike Bohle 2007. Das Fundament fokussierter Interaktion. Zur Vorbereitung und Herstellung von Interaktionsräumen durch körperliche Koordination. In: Reinhold Schmitt (ed.), Koordination. Analysen zur multimodalen Kommunikation, 129–165. Tübingen: Gunter Narr.
Müller, Cornelia and Roland Posner (eds.) 2004. The Semantics and Pragmatics of Everyday Gestures. The Berlin Conference. Berlin: Weidler Verlag.
Norris, Sigrid 2004. Analyzing Multimodal Interaction. A Methodological Framework. New York/London: Routledge.
Patterson, Miles L. 1975. Personal space – time to burst the bubble? Man-Environment Systems 5(2): 67.
Patterson, Miles L. 1976. An arousal model for interpersonal intimacy. Psychological Review 83(3): 235–245.
Rauscher, Josef 1986. Wann und mit welchen Gründen wird Proxemik zum Gegenstand der Semiotik. In: Klaus D. Dutz and Peter Schmitter (eds.), Geschichte und Geschichtsschreibung der Semiotik. Fallstudien, 439–452. Münster: MAkS-Publikationen.


Ravelli, Louise J. and Maree Stenglin 2008. Feeling space. Interpersonal communication and spatial semiotics. In: Gerd Antos and Eija Ventola (eds.), Handbook of Interpersonal Communication, 355–393. Berlin: Mouton de Gruyter.
Rogers, Everett M., William B. Hart and Yoshitaka Miike 2002. Edward T. Hall and the history of intercultural communication. The United States and Japan. Keio Communication Review 24: 3–26.
Rolf, Eckard 1994. Sagen und Meinen. Paul Grices Theorie der Konversations-Implikaturen. Opladen: Westdeutscher Verlag.
Sacks, Harvey, Emanuel A. Schegloff and Gail Jefferson 1974. A simplest systematics for the organization of turn-taking in conversation. Language 50(4): 696–735.
Sager, Svend F. 2000. Kommunikatives Areal, Distanzzonen und Displayzirkel. Zur Beschreibung räumlichen Verhaltens in Gesprächen. In: Gerd Richter, Jörg Riecke and Britt-Marie Schuster (eds.), Raum, Zeit, Medium – Sprache und ihre Determinanten. Festschrift für Hans Ramge zum 60. Geburtstag, 543–570. Darmstadt: Hessische Historische Kommission.
Sager, Svend F. 2001. Probleme der Transkription nonverbalen Verhaltens. In: Klaus Brinker, Gerd Antos, Wolf Heinemann and Sven F. Sager (eds.), Text- und Gesprächslinguistik. Ein internationales Handbuch zeitgenössischer Forschung, 2. Halbband, 1069–1085. Berlin/New York: de Gruyter.
Salewski, Christel 1993. Räumliche Distanzen in Interaktionen. Münster/New York: Waxmann.
Sapir, Edward 1928. The unconscious patterning of behavior in society. In: Ethel S. Dummer (ed.), The Unconscious: A Symposium, 114–142. New York: Alfred A. Knopf.
Scheflen, Albert E. 1964. The significance of posture in communication systems. Psychiatry 27(4): 316–331.
Scheflen, Albert E. 1976. Körpersprache und soziale Ordnung. Kommunikation als Verhaltenskontrolle. Stuttgart: Klett-Cotta.
Scherer, Klaus R. and Harald G. Wallbott (eds.) 1979. Nonverbale Kommunikation: Forschungsberichte zum Interaktionsverhalten. Weinheim/Basel: Beltz. 
Schmauser, Caroline and Thomas Noll (eds.) 1998. Körperbewegungen und ihre Bedeutung. Berlin: Berlin Verlag.
Schmitt, Reinhold 2004. Bericht über das 1. Arbeitstreffen zu Fragen der Multimodalität am Institut für Deutsche Sprache in Mannheim. Gesprächsforschung – Online-Zeitschrift zur verbalen Interaktion 5: 1–5. (http://www.gespraechsforschung-ozs.de)
Schmitt, Reinhold (ed.) 2007. Koordination. Analysen zur multimodalen Kommunikation. Tübingen: Gunter Narr.
Schmitt, Reinhold 2012. Gehen als situierte Praktik: „Gemeinsam gehen“ und „hinter jemandem herlaufen“. Gesprächsforschung – Online-Zeitschrift zur verbalen Interaktion 13. (http://www.gespraechsforschung-ozs.de)
Sommer, Robert and Franklin D. Becker 1969. Territorial defense and the good neighbour. Journal of Personality and Social Psychology 11: 120–122.
Streeck, Jürgen and Mark L. Knapp 1992. The interaction of visual and verbal features in human communication. In: Fernando Poyatos (ed.), Advances in Nonverbal Communication. Sociocultural, Clinical, Esthetic, and Literary Perspectives, 3–24. Amsterdam: John Benjamins.
Tiittula, Liisa 2007. Blickorganisation in der side-by-side-Positionierung am Beispiel eines Geschäftsgesprächs. In: Reinhold Schmitt (ed.), Koordination. Analysen zur multimodalen Kommunikation, 225–261. Tübingen: Gunter Narr.
Wallbott, Harald G. 2003. Nonverbale Komponenten der Sprachproduktion. In: Theo Herrmann and Joachim Grabowski (eds.), Sprachproduktion, 561–581. Göttingen: Hogrefe.
Watson, Jeanne and Robert J. Potter 1962. An analytic unit for the study of interaction. Human Relations 15(3): 243–263.

Jörg Hagemann, Freiburg (Germany)


93. The role of gaze in conversational interaction

1. Introduction
2. Background, methods, and transcription
3. Regulating engagement
4. Managing conversational activities
5. Gaze and very young children
6. Conclusion
7. References

Abstract

Recent research has focused on the role of gaze in interaction with respect to action, sequence, and interactional context. This chapter reports on recent and prior research in both conversational settings, and settings in which conversation (i.e., talk) is not the primary resource for communication. As will be demonstrated, gaze, from early childhood to adulthood, can be deployed to do many different things – notice, search, address talk to another, and show that talk is being attended to – and constitutes a fundamental, as well as quite differentiated, resource through which people organize their conduct in face-to-face interaction.

1. Introduction

Scholars have long noted that the exchange of gaze between two people fosters a moment of "connection" (e.g., Argyle and Cook 1976; Ellsworth, Carlsmith, and Henson 1972; Kendon 1967; Mazur et al. 1980; Simmel [1921] 1969). For Simmel, writing in the early 1920s, this connection provided for potentially profound insights about the other, and represented "the purest form of reciprocity in the entire field of human relationships" (Simmel 1969: 358). Indeed, the belief that the eyes reveal something essential about a person – their desires, personality, motivations, goals, whether or not they are telling the truth, and so on – has roots in antiquity and earlier (Swain and Boys-Stones 2007; Ulmer 2003). The special quality of the eyes to convey something important about a person, captured in our common wisdom (e.g., "the eyes are the windows of the soul"), as well as several strands of social science research, makes it an important topic to consider in light of what, in Emanuel Schegloff's (1999) terms, is the "primordial site" of human communication, namely, interaction. This chapter reports on recent and prior research on the role of gaze in conversational interaction, and then considers its role in young children's interactions before language is a primary resource for communication. While most of this research is based on interaction between native English speakers in Western countries (especially the United States), some (as noted below) is based on interaction between participants from such countries as Japan, Italy, Papua New Guinea, and Mexico.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1324–1333

2. Background, methods, and transcription

Research on gaze in interaction has its roots in the confluence of individuals from several disciplines with an interest in the body in naturally-occurring interaction, starting with early twentieth-century American anthropologists (e.g., Franz Boas and Edward Sapir), and culminating in the 1950s and 60s with a group of scholars that came to be known as the "Palo Alto Group" (see Kidwell volume 1; also Kendon 1990; Leeds-Hurwitz 1987; Rossano 2012; for research on gaze from other perspectives, concerned especially with gaze and interpersonal attitudes, social characteristics, and/or situational contexts, see e.g., Mason, Tatkow, and Macrae 2005; Muirhead and Goldman 1979; Patterson 1977; for a review see Kleinke 1986). This group included, among other well-known figures such as Gregory Bateson and Ray Birdwhistell, Adam Kendon, one of the first to systematically research the role of gaze in conversational interaction. Subsequent research in this area, notably by Charles Goodwin, has also been influenced by conversation analytic work, as well as the work of Erving Goffman. While focused work on eye gaze seems to have enjoyed a peak in the 1970s and 80s, more recent work on embodied action in interaction has consistently considered the role of gaze in the multi-modal accomplishment of conjoined courses of action (for a collection see Streeck, Goodwin, and LeBaron 2011). The development of film, and later video, has been critical to the emergence of gaze research, allowing for a methodology that could capture otherwise fleeting details of interaction for fine-grained analysis. These tools have led to the development of special transcription systems used to study and represent gaze in interaction, particularly in relation to talk. One system is that developed by Goodwin (1980, 1981), which includes dots, dashes, and commas to represent the movement and fixing of gaze in interaction:

    Lee:  Can ya brin[g?- (0.2) Can you bring me here that nylo[n?
    Ray:  . . . .    [X         . . . . .                      [X

(from Goodwin, 1980)
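The alignment logic of such a gaze tier can be made explicit in code. The following is a hypothetical sketch (not from the chapter), assuming Goodwin-style symbol semantics in which dots mark gaze moving toward the other, "X" the point where gaze reaches the other, dashes gaze held there, and commas gaze being withdrawn; the function name and state labels are illustrative inventions:

```python
# Hypothetical sketch: reading a Goodwin-style gaze tier that is aligned
# character-by-character beneath a line of talk. Symbol semantics assumed:
# '.' = gaze moving toward the other, 'X' = gaze reaches the other,
# '-' = gaze held on the other, ',' = gaze being withdrawn.

GAZE_STATES = {
    ".": "approaching",
    "X": "on-recipient",
    "-": "on-recipient",
    ",": "withdrawing",
}

def align_gaze(talk, gaze_tier):
    """Pair each character of the talk line with the gaze state annotated
    at the same horizontal position in the gaze tier."""
    current = "away"  # before any annotation, gaze is elsewhere
    paired = []
    for i, ch in enumerate(talk):
        symbol = gaze_tier[i] if i < len(gaze_tier) else " "
        if symbol in GAZE_STATES:
            current = GAZE_STATES[symbol]
        paired.append((ch, current))
    return paired

talk = "Can you bring"
gaze = ". . .X-------"   # gaze reaches the speaker at the 'o' of "you"
aligned = align_gaze(talk, gaze)
```

Reading the tier this way makes the analytic point of the notation concrete: for every moment of talk one can ask where the recipient's gaze is, which is exactly the kind of point-by-point coordination the transcription system is designed to display.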

More recently, Rossano (2012; see also Rossano, Brown, and Levinson 2009) has introduced a system composed of symbols:

    [symbol chart not reproduced] (from Rossano, Brown, and Levinson 2009)

Finally, researchers may use a combination of video frame grabs and/or other images, often in conjunction with a gaze transcription system, accompanied by narrative descriptions of gazing behavior (see e.g., Heath, Hindmarsh, and Luff 2010; Hepburn and Bolden 2013).


3. Regulating engagement

As Goffman (1963) wrote, participants may transform a situation of their "sheer and mere co-presence" (Goffman 1963: 26) into one of ratified mutual engagement by moving from an initial exchange of glances to more sustained looking. Gaze directed at another, especially in conjunction with an approach (i.e., someone walking or making a movement toward another), is perhaps the most pervasive and straightforward method of opening face-to-face interaction (Kendon and Ferber 1973; Kidwell 2000; Kidwell and Zimmerman 2006; Pillet-Shore 2011). Indeed, people (including infants and young children) will withhold gaze because they do not desire interaction, or because they see that another is pre-occupied and not yet ready for interaction. In public settings, people may also engage in "civil inattention," a quick glance by which they acknowledge an unacquainted other's presence, but convey that they do not intend interaction (Goffman 1963). In short, gaze directed at someone, or away from someone, is a fundamental resource in the initiation, or inhibition, of interaction, and the establishment of mutual gaze is one of the necessary pre-conditions for parties' entry into ratified social interaction (Goffman 1963; Mondada 2009; Pillet-Shore 2011). Gaze also works with body movement and posture to convey degrees of engagement in accord with a differential hierarchy of body segments that includes the head, torso, and legs, particularly when one is involved in multiple simultaneous activities (Goffman 1963; Goodwin 1981; Kendon 1990; Robinson 1998; Schegloff 1998). For example, while gaze (via a head turn) directed toward someone in conjunction with a greeting acknowledges the other, if the lower body (torso and legs) is turned toward a computer screen, that is, in a "torqued" position, a return to that activity (computing) is projected as the long-term dominant involvement, and engagement with the other is projected as a fleeting and transitory one. 
Objects, too, play a role in how gaze and engagement are organized. For example, a speaker shifting her gaze to her hands while she gestures establishes the gesture as a common point of focus (Streeck 1993: 289). A doctor moving his or her gaze to the intake pad during a medical consultation also does this, while also signaling a transition to a new phase of the activity (Heath 1986; Psathas 1990: 216–219; Robinson 1998; Robinson and Stivers 2001; also Goodwin 2000: 1500–1503). Gaze shifts such as these, i.e., to something, typically occasion mutual orientation, and may be joined in by another in a way that "staring into space" may not be (Goodwin 1981: 100–101; Kidwell 2009).

4. Managing conversational activities

One area of study that has received a great deal of attention is that of the management of conversational activities. Once interaction has begun, as a number of researchers have noted, recipients tend to gaze toward speakers as an indication of their attentiveness to talk, and speakers tend to direct their gaze to listeners to show that talk is being addressed to them (Bavelas, Coates, and Johnson 2002; Goodwin 1981; Kendon 1990; Kidwell 1997; Lerner 2003). Recipients may even use gaze toward another to get the other to address talk to them (Heath 1986; Kidwell 1997). A long-standing claim has been that recipients gaze toward speakers more than speakers do toward recipients (but see Rossano 2012, below). In his classic work on gaze, Goodwin (1981) demonstrated that when speakers do not have the gaze of a recipient, they may produce talk that contains cut-offs, restarts, pauses, and other sorts of dysfluencies that serve to recruit gaze. Kidwell (2006, 2013), too, has shown that methods for recruiting recipient gaze, particularly with unwilling or unavailable recipients, either preserve the action that is being implemented with talk as the main activity (embedded "within-talk" methods that include cut-offs and restarts, but also "accompanying-talk" methods such as tapping and touching the other), or make getting the other's gaze the main activity (exposed methods that include mid-encounter summonses and/or commands such as, "Look at me!"). In a related way, Egbert (1996) demonstrated how participants use the German repair initiator bitte ('pardon') when mutual gaze is lacking, and make moves to (re-)establish gaze contact (e.g., when two people in different rooms of a house move to the same room). Hence, the range of gaze recruiting/remedying practices, and their ordering on a continuum of less to more interactionally intrusive, point to gaze as integral not just to speaker-recipient alignment, but, fundamentally, to the accomplishment of conjoined action in face-to-face situations (see below). The claim that recipients gaze more at speakers than vice versa, and that speakers seek to recruit gaze upon finding a non-gazing recipient, has recently been called into question. Rossano (2012) has demonstrated that recipient gaze toward the speaker is affected by the kind of action the speaker is doing. While recipients often gaze toward speakers during multi-unit extended tellings – and will be held accountable for doing so by speakers, especially via the methods described by Goodwin – recipients often do not gaze toward speakers when the action is the first pair part of an adjacency pair (e.g., a question, request, or offer), nor do speakers undertake the kinds of methods to elicit recipient gaze as described above. 
Moreover, Rossano, Brown, and Levinson (2009) showed that culture, in addition to action type, can play a role in how recipients use gaze to display recipiency. In a three-language study, the authors found that Italian and Rossel Island (Papua New Guinea) recipients gazed at the speaker as a display of recipiency, but Tzeltal (Mexico) recipients tended not to gaze at the speaker. They found, however, that speaker gaze was consistent across the three cultures, with speakers tending to gaze at recipients throughout the production of questions. One area of controversy concerning gaze and the management of conversational activities has been that of turn taking. While researchers such as Kendon (1967) and Duncan (1972) found that gaze and turn taking are inter-related, specifically that speakers gaze away from a recipient at turn beginnings and back at turn endings, and in this way gaze works as a “turn-yielding” cue, subsequent research has not reached the same conclusion (Beattie 1978; Rutter et al. 1978; Torres, Cassell, and Prevost 1997). One of the issues confounding this subject is that these prior studies do not differentiate the types of actions that a speaker may be doing with talk (e.g., telling a story, asking a question, making an assessment, and so on). Indeed, Rossano (2012, 2013) has argued that gaze is organized not by reference to turn taking, but rather by reference to sequences and the courses of action they seek to implement. A number of recent studies have been more attentive to sequence and action type and their relationship to participant gaze, not so much with respect to turn taking, but with respect to how speaker action mobilizes recipient response. For example, Kidwell (2006, 2013) demonstrated that recipient gaze is treated by a speaker as an essential component of getting recipient compliance in directive-response sequences. 
Speakers, as part of managing noncompliant and/or distraught recipients, will take increasingly strong measures to elicit and maintain recipient gaze toward them (such as those discussed above). Thus, in example 1, the Caregiver (CG) employs multiple methods to get a misbehaving child to gaze toward her as part of getting him to cease his untoward conduct:

VII. Body movements – Functions, contexts, and interactions

Example 1 "Boxhit"
Eduardo is hitting two children over the head with a box when the Caregiver intervenes.

CG: *That's not okay Eduardo!
    ((*trying to pull E away from children))
    (2.2)
CG: *We're not hitting him on the hea::d (.1) with the box.
    ((*trying to pull E to her))
    *A– are you looking at me? I want you to look at me.
    ((*CG brings face closer to E's))
    (.2)
    I want you to look at me Eduardo. We're not hitting him on the head.

((E does not look at CG, keeps gaze on children))

(from Kidwell 2013)

In addition, Stivers et al. (2009) reported that in the case of assessments, which are weak sequence-initiating actions relative to questions (also requests, invitations, etc.), speaker gaze (along with other turn design features) is an important resource for eliciting recipient response. As for questions, Stivers et al. (2009) found in their multi-language study that response time is faster with questions that are accompanied by gaze (see also Rossano 2010). In the case of narratives, Bavelas, Coates, and Johnson (2002) reported that when speakers gazed toward recipients, recipients responded with continuers (e.g., "mh hm"). Finally, Rossano (2012) showed that while participants tended to withdraw gaze near sequence endpoints, their continued gaze resulted in sequence expansion.

Context, in terms of the number of potential speakers and recipients, also plays a role in the relationship between speaker gaze and recipient response. Lerner (2003) demonstrated that speakers' use of gaze to address talk to a recipient in multiparty talk situations is a resource for recipient and non-recipient alike to determine who is being addressed (and thus who should respond; see also Kalma 1992; Tiitinen and Ruusuvuori 2012). Kidwell (1997) showed that gaze can be used by an unaddressed recipient in a three-party talk situation to shift the participation framework, in particular, to get the speaker to "include" her in the conversation by addressing talk to her.

In sum, the focus on action, sequence, and context in recent research has helped further our understanding of the role of gaze in the management of conversational interaction: from the sorts of resources speakers employ to recruit recipient gaze, and at what points in the course of an interaction, to how gaze is differentially mobilized by speakers to elicit a recipient response, and by recipients to display their recipiency.

93. The role of gaze in conversational interaction

5. Gaze and very young children

Gaze proves to be an important behavior in human interaction from the earliest phases of development. From birth, human infants are interested in, and sensitive to, the eyes and faces of other humans (Bruner 1995; Farroni et al. 2002; Hains and Muir 1996). As early as their first week, they begin to turn toward the sound of their caretaker's voice (Holmlund 1995). Caretakers respond by talking to the infant and then, when the infant looks away, by ceasing talk (Filipi 2009; Stern 1974). A milestone in child development occurs near the end of the first year and early into the second, when children begin to
enter into joint attentional engagements with others: from 9 to 12 months, children check the attention of their adult caregivers to objects and people in the environment; from 11 to 14 months, they follow adult attention, specifically gaze shifts and points; and from 13 to 15 months, they direct adult attention using both imperative and declarative pointing (Carpenter et al. 1998). These developments are thought to be associated with children's early understanding of others' intentionality and their emerging theory of mind, as well as with the development of language (see, e.g., Baron-Cohen 1991; Bruner 1995; Meltzoff and Gopnik 1993). Impairment in these developments is associated with other impairments, such as delays in language development and autism (Baron-Cohen 1997).

Filipi (2009) investigated how adults manage children's gaze conduct in accord with adult (and Western middle-class) gazing norms in conversational interaction. She reported, for example, that an adult may summon a child (e.g., call the child's name) and, upon the child's gaze shift to her, display the adequacy of this as a response by producing a new next action (e.g., a question); the child's failure to produce a gaze shift engenders pursuit by the adult in the form of repeated summonses and/or explicit directives to "look" (Filipi 2009: 66–72). Kidwell (2013), too, has shown that in cases of adult interventions in children's sanctionable activities, adults may go to great lengths to pursue the child's return gaze, and thus a display of recipiency from the child (see example 1).

Studies of very young children also provide insights into gaze and interaction before language has become their primary communicative resource. Kidwell has examined the ways that gaze constitutes social action that is differentially oriented to by very young children, aged 1–2½ years, in their interactions with adults and with other children (Kidwell 2005, 2009).
For example, she demonstrated that children differentiate between gazes by their caregivers as ones that do, or do not, portend an intervention in their sanctionable activities (e.g., biting or hitting another child), and that children may cease, revise, or continue these acts contingent on their discernments (Kidwell 2005). Specifically, these gazes may be characterized as a "mere look," by which the adult makes a quick visual inspection of the child's activities while she is engaged in a concurrent activity (e.g., preparing a meal), in contrast to a gaze that may be characterized as "the look," by which the adult holds her gaze toward the child and, further, suspends her current activities as she gazes:

Example 2 "headpat"
Child is patting another child, H, on the head when H cries out. Caregiver, who is reading a storybook, produces three gazing actions toward the child: a "Mere Look" (.7s), then "The Look" (1.6s), and "The Look" (2.5s).

CG: I wish I could ho[p like that. (4.2)
H:                   [Aihhhhhhhhhhhhhhhhh!

(from Kidwell 2005)

As example 2 shows, such features of a gaze as whether it is made during the course of a current activity that the gazer is involved in (here, reading a line from a storybook), or whether that activity has been halted and the gaze held toward the other as a "new" or independent activity, and typically for a longer duration, are of consequence for how children respond to the gazing action – in this case, the child withdraws his arm.

Similarly, Kidwell (2009) showed that very young children discern that gazing actions made by another child – "noticing," "searching," and "targeting" gaze shifts – are of differential consequence for them. In particular, when such gaze shifts are made by a child who is the recipient of their sanctionable conduct (e.g., they are being bitten or hit), they locate, or fail to locate, the caregiver, even when the children cannot see the caregiver themselves: Is she near the scene and likely to intervene (as when the other child suddenly notices the caregiver)? Or is she nowhere to be found (as when the other child searches her out)?

As the examples in this section demonstrate, gaze is an important interactional resource from a very young age, one that is shaped by the experiences of childhood across a wide array of situations involving talk, but also other sorts of happenings that are of consequence for children and their activities.

6. Conclusion

Building on prior work on eye gaze in interaction, recent work has furthered our understanding of gaze and the organization of interaction in conversational settings, as well as in settings in which conversation – i.e., talk – is not the primary resource for communication. Perhaps the most important contribution of this recent work is the focus on action. This is the case both in terms of how gaze can be produced in quite different ways, and thus constitute quite different types of social action, and in terms of the sorts of action and/or sequences of action of which gaze may be a part, including, of course, when talk is involved. As has been discussed here, gaze can be deployed to "do" many different sorts of things – notice, search, address talk to another, and show that another's talk is being attended to. Gaze constitutes a domain of social orderliness that is utterly pervasive, one that bears continued scrutiny for how participants recurrently organize their conduct when in one another's presence.

7. References

Argyle, Michael and Mark Cook 1976. Gaze and Mutual Gaze. Cambridge: Cambridge University Press.
Baron-Cohen, Simon 1991. Precursors to a theory of mind: Understanding attention in others. In: Andrew Whiten (ed.), Natural Theories of Mind: Evolution, Development and Simulation of Everyday Mindreading, 233–251. Oxford: Basil Blackwell.
Baron-Cohen, Simon 1997. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.
Bavelas, Janet Beavin, Linda Coates and Trudy Johnson 2002. Listener responses as a collaborative process: The role of gaze. Journal of Communication 52(3): 566–580.
Beattie, Geoffrey W. 1978. Floor apportionment and gaze in conversational dyads. British Journal of Social and Clinical Psychology 17(1): 7–15.
Bruner, Jerome 1995. From joint attention to the meeting of minds: An introduction. In: Chris Moore and Philip Dunham (eds.), Joint Attention: Its Origins and Role in Development, 1–14. New Jersey: Lawrence Erlbaum Associates.
Carpenter, Malinda, Katherine Nagell, Michael Tomasello, George Butterworth and Chris Moore 1998. Social cognition, joint attention, and communicative competence from 9 to 15 months of age. Monographs of the Society for Research in Child Development 63(4): i–vi, 1–143.


Duncan, Starkey 1972. Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology 23(2): 283–292.
Egbert, Maria M. 1996. Context-sensitivity in conversation: Eye gaze and the German repair initiator bitte? Language in Society 25(4): 587–612.
Ellsworth, Phoebe C., J. Merrill Carlsmith and Alexander Henson 1972. The stare as a stimulus to flight in human subjects: A series of field experiments. Journal of Personality and Social Psychology 21(3): 302–311.
Farroni, Teresa, Gergely Csibra, Francesca Simion and Mark H. Johnson 2002. Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences 99(14): 9602–9605.
Filipi, Anna 2009. Toddler and Parent Interaction: The Organisation of Gaze, Pointing and Vocalisation, Volume 192. Amsterdam: John Benjamins.
Goffman, Erving 1963. Behavior in Public Places. New York: The Free Press.
Goodwin, Charles 1980. Restarts, pauses, and the achievement of a state of mutual gaze at turn-beginning. Sociological Inquiry 50(3/4): 272–302.
Goodwin, Charles 1981. Conversational Organization: Interaction between Speakers and Hearers. New York: Academic Press.
Goodwin, Charles 2000. Action and embodiment within situated human interaction. Journal of Pragmatics 32(10): 1489–1522.
Hains, Sylvia M. and Darwin W. Muir 1996. Infant sensitivity to adult eye direction. Child Development 67(5): 1940–1951.
Heath, Christian 1986. Body Movement and Speech in Medical Interaction. Cambridge: Cambridge University Press.
Heath, Christian, John Hindmarsh and Paul Luff 2010. Video in Qualitative Research. London: SAGE Publications Limited.
Hepburn, Alexa and Galina B. Bolden 2013. The conversation analytic approach to transcription. In: Tanya Stivers and Jack Sidnell (eds.), The Handbook of Conversation Analysis, 57–76. Oxford: Blackwell.
Holmlund, Christian 1995. Development of turntakings as a sensorimotor process in the first 3 months: A sequential analysis. In: Keith E. Nelson and Zita Réger (eds.), Children's Language, Volume 8, 41–64. Hillsdale, NJ: Erlbaum.
Kalma, Akko 1992. Gazing in triads: A powerful signal in floor apportionment. British Journal of Social Psychology 31(1): 21–39.
Kendon, Adam 1967. Some functions of gaze direction in social interaction. Acta Psychologica 26: 22–63.
Kendon, Adam 1990. Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press.
Kendon, Adam and Andrew Ferber 1973. A description of some human greetings. In: Richard P. Michael and John H. Cook (eds.), Comparative Ecology and Behavior of Primates, 591–668. London: Academic Press.
Kidwell, Mardi 1997. Demonstrating recipiency: Knowledge displays as a resource for the unaddressed participant. Issues in Applied Linguistics 8(2): 85–96.
Kidwell, Mardi 2000. Common ground in cross-cultural communication: Sequential and institutional contexts in front desk service encounters. Issues in Applied Linguistics 11(1): 17–37.
Kidwell, Mardi 2005. Gaze as social control: How very young children differentiate "the look" from a "mere look" by their adult caregivers. Research on Language and Social Interaction 38(4): 417–449.
Kidwell, Mardi 2006. 'Calm down!': The role of gaze in the interactional management of hysteria by the police. Discourse Studies 8(6): 745–770.
Kidwell, Mardi 2009. Gaze shift as an interactional resource for very young children. Discourse Processes 46(2–3): 145–160.
Kidwell, Mardi 2013. Availability as a trouble source in directive-response sequences. In: Makoto Hayashi, Geoffrey Raymond and Jack Sidnell (eds.), Conversational Repair and Human Understanding, 234–260. Cambridge: Cambridge University Press.


Kidwell, Mardi volume 1. Framing, grounding and coordinating conversational interaction. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1), 100–113. Berlin/Boston: De Gruyter Mouton.
Kidwell, Mardi and Don Zimmerman 2006. "Observability" in the interactions of very young children. Communication Monographs 73(1): 1–28.
Kleinke, Chris L. 1986. Gaze and eye contact: A research review. Psychological Bulletin 100(1): 78–100.
Leeds-Hurwitz, Wendy 1987. The social history of the natural history of an interview: A multidisciplinary investigation of social communication. Research on Language and Social Interaction 20(1–4): 1–51.
Lerner, Gene H. 2003. Selecting next speaker: The context free operation of a context sensitive organization. Language in Society 32(2): 177–201.
Mason, Malia F., Elizabeth P. Tatkow and C. Neil Macrae 2005. The look of love: Gaze shifts and person perception. Psychological Science 16(3): 236–239.
Mazur, Allan, Eugene Rosa, Mark Faupel, Joshua Heller, Russell Leen and Blake Thurman 1980. Physiological aspects of communication via mutual gaze. American Journal of Sociology 86(1): 50–74.
Meltzoff, Andrew N. and Alison Gopnik 1993. The Role of Imitation in Understanding Persons and Developing a Theory of Mind. Oxford: Oxford University Press.
Mondada, Lorenza 2009. Emergent focused interactions in public places: A systematic analysis of the multimodal achievement of a common interactional space. Journal of Pragmatics 41(10): 1977–1997.
Muirhead, Rosalind D. and Morton Goldman 1979. Mutual eye contact as affected by seating position, sex, and age. The Journal of Social Psychology 109(2): 201–206.
Patterson, Miles L. 1977. Interpersonal distance, affect, and equilibrium theory. The Journal of Social Psychology 101(2): 205–214.
Pillet-Shore, Danielle 2011. Doing introductions: The work involved in meeting someone new. Communication Monographs 78(1): 73–95.
Psathas, George 1990. The organization of talk, gaze, and activity in a medical interview. Interaction Competence 21(2): 205–243.
Robinson, Jeffrey D. 1998. Getting down to business: Talk, gaze, and body orientation during openings of doctor-patient consultations. Human Communication Research 25(1): 97–123.
Robinson, Jeffrey D. and Tanya Stivers 2001. Achieving activity transitions in physician-patient encounters. Human Communication Research 27(2): 253–298.
Rossano, Federico 2010. Questioning and responding in Italian. Journal of Pragmatics 42(10): 2756–2771.
Rossano, Federico 2012. Gaze behavior in face-to-face interaction. Unpublished Ph.D. dissertation, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
Rossano, Federico 2013. Gaze in conversation. In: Jack Sidnell and Tanya Stivers (eds.), The Handbook of Conversation Analysis, 308–329. Thousand Oaks, CA: Blackwell.
Rossano, Federico, Penelope Brown and Stephen C. Levinson 2009. Gaze, questioning and culture. In: Jack Sidnell (ed.), Comparative Studies in Conversation Analysis, 187–249. Cambridge: Cambridge University Press.
Rutter, D.R., G.M. Stephenson, K. Ayling and P.A. White 1978. The timing of looks in dyadic conversation. British Journal of Social and Clinical Psychology 17(1): 17–21.
Schegloff, Emanuel A. 1998. Body torque. Social Research 65(3): 535–596.
Schegloff, Emanuel A. 1999. What next?: Language and social interaction study at the century's turn. Research on Language & Social Interaction 32(1–2): 141–148.
Simmel, Georg 1969. Sociology of the senses: Visual interaction. In: Robert Park and Ernest Burgess (eds.), Introduction to the Science of Sociology, 356–361. Chicago: Chicago University Press. First published [1921].


Stern, Daniel N. 1974. Mother and infant at play: The dyadic interaction involving facial, vocal, and gaze behaviors. In: Michael Lewis and Leonard Rosenblum (eds.), The Effect of the Infant on its Caregiver, 141–156. New York: John Wiley and Sons.
Stivers, Tanya, N.J. Enfield, Penelope Brown, Christina Englert, Makoto Hayashi, Trine Heinemann, Gertie Hoymann, Federico Rossano, Jan de Ruiter, Kyung-Eun Yoon and Stephen C. Levinson 2009. Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences 106(26): 10587–10592.
Streeck, Jürgen 1993. Gesture as communication I: Its coordination with gaze and speech. Communication Monographs 60(4): 275–299.
Streeck, Jürgen, Charles Goodwin and Curtis LeBaron 2011. Embodied Interaction: Language and Body in the Material World. Cambridge: Cambridge University Press.
Swain, Simon and George Boys-Stones 2007. Seeing the Face, Seeing the Soul: Polemon's Physiognomy from Classical Antiquity to Medieval Islam. Oxford: Oxford University Press.
Tiitinen, Sanni and Johanna Ruusuvuori 2012. Engaging parents through gaze: Speaker selection in three-party interactions in maternity clinics. Patient Education and Counseling 89(1): 38–43.
Torres, Obed, Justine Cassell and Scott Prevost 1997. Modeling gaze behavior as a function of discourse structure. Paper presented at the First International Workshop on Human-Computer Conversation, Bellagio, Italy, 14–16 July.
Ulmer, Rivka B.K. 2003. The divine eye in ancient Egypt and in the Midrashic interpretation of formative Judaism. Journal of Religion and Society 5: 1–17.

Mardi Kidwell, New Hampshire (USA)

94. Categories and functions of posture, gaze, face, and body movements

1. Channel-based vs. functional classification systems of nonverbal behavior
2. Some common functional classification systems
3. Critical comments on functional classification approaches
4. Summary
5. References

Abstract

In the course of the history of nonverbal communication research, several approaches to the classification of nonverbal behavior have been put forth. While the most common models are based on "channels", functional classifications, which are the focus of the present paper, have also proved beneficial. After explaining the difference between channel-based and functional classification systems and briefly outlining their advantages and disadvantages, I will discuss some of the best-known functional models – from Efron's pioneering work to more recent neurologically-based approaches. Taking into account some critical comments, I reach the conclusion that functional classification models represent useful tools for coding and analyzing interactions, provided that the categories are objectively defined and reliably applied.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1333–1341


1. Channel-based vs. functional classification systems of nonverbal behavior

Over the course of the history of research in the area of nonverbal communication, different criteria for organizing the field and constructing models have been put to use. The most common is the orientation towards (behavioral and perceptual) "channels". These channels relate to certain areas of the body in which a particular behavior pattern occurs: movements, especially those of the hands, are described as gestures; in the face, facial expressions can be investigated; eye movement manifests itself as gaze; the distance between persons as proxemics; etc. This leads, on the one hand, to clear definitions but, on the other hand, says very little about the meaning of the separate nonverbal behavior patterns. Here, functional classifications are helpful in describing behavior patterns according to their (supposed) functions in the course of interaction. Although they have been accused of proceeding too quickly from observation to interpretation (see chapter 3), they provide very useful research instruments, because they offer suitable criteria for definition and a framework for interpretation. In the following, some of the best-known functional classification models will be presented.

2. Some common functional classification systems

2.1. The classification of gestures by David Efron ([1941] 1972)

In many respects, the work of Efron ([1941] 1972) can be regarded as pioneering. From the viewpoint of an anthropologist, he wished "(a) to discover whether there are any standardized group differences in the gestural behavior of two different 'racial' groups, and if so, (b) to determine what becomes of these gestural patterns in members and descendants of the same groups under the impact of social assimilation" (Efron 1972: 65). Behind this was a criticism of National Socialist race theories. His materials of investigation were conversations between Eastern Jews and Southern Italians in the USA, in both cases comparing "traditional" with "assimilated" groups. His method was observation, partly without technical assistance ("direct observation of gestural behavior in natural situations", Efron 1972: 66) and partly with the help of films and sketches. In order to classify his manifold observations, which above all concern hand and head gestures, Efron differentiates between several aspects of these gestures. On the one hand, he describes spatio-temporal aspects (1972: 67) (e.g., radius, form, bodily parts involved, tempo) and, on the other hand, interlocutional aspects, under which fall, in particular, bodily contact with the conversational partner and simultaneous gesturing by several participants in the interaction (1972: 89). The third level of classification, described by him as linguistic aspects (1972: 94–98), had the greatest influence on further research. He distinguishes between two large groups of gestures: "A gestural movement may be 'meaningful' by (a) the emphasis it lends to the content of the verbal and vocal behavior it accompanies, (b) the connotation (whether deictic, pictorial, or symbolic) it possesses independently from the speech of which it may, or may not, be an adjunct" (Efron 1972: 96).
The first group comprises two sub-divisions, which Efron (1972: 96) characterizes as follows: "This type of gesture may in turn be (a) simply baton-like, representing a sort of 'timing out' with the hand the successive stages of the referential activity, (b) ideographic, in the sense that it traces or sketches out in the air the 'paths' and 'directions' of the thought-pattern." Within the second group, Efron makes three sub-divisions:

[…] the movement may be (a) deictic, referring by means of a sign to a visually present object (actual pointing), (b) physiographic, depicting either the form of a visual object or a spatial relationship (iconographic gesture), or that of a bodily action (kinetographic gesture), (c) symbolic or emblematic, representing either a visual or a logical object by means of a pictorial or a non-pictorial form which has no morphological relationship to the thing represented. (Efron 1972: 96)

In the course of his investigations, the distinction between ideographic and physiographic/symbolic gestures proved fruitful: an important difference between the traditional Jews and the traditional Italians is that the former primarily use ideographic gestures, while the latter primarily use physiographic and symbolic gestures. Efron's influence upon gesture research can hardly be overestimated. His findings, however, largely became well known through the way in which they were presented by Paul Ekman and Wallace V. Friesen.

2.2. The classification of nonverbal behavior by Paul Ekman and Wallace V. Friesen (1969)

Within the framework of social-psychological research, Ekman and Friesen (1969) suggested a classification scheme for nonverbal behavior that is based on usage, but also takes into account the coding and origin of nonverbal behavior. Some of the five categories thus obtained are based upon Efron (1972). Unlike Efron, however, Ekman and Friesen do not deal primarily with manual gestures, but also with facial expression, the movement of other body parts, and changes in position. The label emblem for the first category is taken from Efron (1972: 96, described there as "symbolic or emblematic [gesture]", see chapter 2.1). Emblems are described as nonverbal signals which have a clearly coded, translatable meaning and which are in general purposely used as a means of communication. Emblems do not convey a certain meaning simply by referring to a parallel verbal happening; they can even replace a spoken utterance. The second group, the illustrators, show the closest connection to speech: they are said to be "directly tied to speech, serving to illustrate what is being said verbally" (Ekman and Friesen 1969: 68). This refers, above all, to (hand) gestures that accompany speech. The further subdivision of the illustrators is also to a great extent taken over from Efron: batons, ideographs, deictic movements, spatial movements, kinetographs, pictographs (Ekman and Friesen 1969: 68). Efron's iconographic gestures are subdivided into the categories spatial movements and pictographs. Though Ekman and Friesen (1969: 68) describe the category "pictograph" as "not described by Efron", the respective definitions are very similar to one another: "which draw a picture of their referent" (Ekman and Friesen 1969: 68) vs. "depicting […] the form of a visual object" (Efron 1972: 96). Hand and arm movements mainly fall into the two categories already mentioned.
This is not true of the third class, the regulators. These operate almost entirely independently of the particular contents of speech, at the level of the organization of conversation, and are concerned, e.g., with the control of turn taking. One example of this is gaze behavior during interaction (Duncan [1974] 1979; Kendon 1967). A hand gesture can also serve this function, as described by Duncan and Fiske (1977: 188–189) as the "speaker gesticulation signal". The relationship of the last two categories to language is less close. Affect displays serve to express feelings, which is above all the function of facial expression. Culturally determined display rules decide whether, and in which intensity or form, an affect can or should be shown. Adaptors are behavior patterns that mostly possess no interactive function but which, by their origin, serve to satisfy individual needs (e.g., head scratching, eye rubbing), even though during a conversation they are often carried out only in hinted-at form. A similarity with illustrators and emblems lies in the fact that adaptors are usually carried out with the hands. The differing relationship of the individual categories to language also corresponds to differing information content and to differing receiver feedback. Assigning nonverbal behavior observed within a concrete interaction to the different categories naturally always requires individual interpretation. Moreover, as Ekman and Friesen (1969: 92) point out, the categories are not exclusive; in certain cases, mixtures and transitions appear. This fact has been criticized by some researchers (see chapter 3). Nevertheless, this classification offers a practicable and commonly used terminology for the description and interpretation of the functions of posture, gaze, face, and body movements.

2.3. The Gießen system for the analysis of gestures (Wallbott 1977)

This system was developed within the framework of empirical psychological research for the notation of hand, head, and body movements (Scherer, Wallbott, and Scherer 1979). It is substantially based upon the categories, or rather upon a subset of the categories, of Ekman and Friesen (1969). The further subdivision of the basic categories (adaptors, illustrators, emblems) differs in part from Ekman and Friesen; for the illustrators, for instance, the simplified sub-divisions have been named abbildend ('picturing'), zeigend ('pointing'), and untermalend ('illustrating') (Wallbott 1977: 13–14). Among the adaptors, a distinction is made according to who or what is being touched (self-adaptor, object-adaptor), and according to whether a movement is made only once (discrete) or repeatedly (repetitive) (Wallbott 1977: 10–12). Why the latter two criteria are not applied to the other basic categories to provide a broader differentiation is not clear. Regulators and affect displays are missing; the latter are excluded because they above all belong to the sphere of facial expression (Scherer, Wallbott, and Scherer 1979: 181). By contrast, a category named "postural shift" (which may overlap with the group of regulators) is introduced, together with a "remainder" category, which covers "playful" actions as well as movements that serve a concrete purpose, e.g., writing or raising a glass to the mouth (Wallbott 1977: 16–17). The inter-rater reliability has been statistically evaluated using Cohen's κ and proved fully adequate (Scherer, Wallbott, and Scherer 1979: 184).

2.4. The functions of nonverbal behavior in conversation according to Klaus R. Scherer (1977)

Scherer's (1977) classification proposal has a different starting point. Following the dimensions of Morris's (1975) sign process, Scherer differentiates between para-semantic, para-syntactic, and para-pragmatic functions, as well as – in addition to Morris's categories – dialogic functions. His division is, even more strictly than that of Ekman and Friesen (1969), not dependent upon single behavior channels. He views the separate functions as different aspects of one and the same behavior (although a special affinity exists between certain channels and certain functions). The deciding factor in this classification is the relationship to the verbal aspect of the interaction, which makes it especially attractive for inclusion in linguistic models.

Die parasemantischen Funktionen der nonverbalen Verhaltensweisen kann man auffassen als Beziehungen spezifischer nonverbaler Verhaltensweisen zu den Bedeutungsinhalten der sie begleitenden verbalen Äußerungen. [The para-semantic functions of nonverbal behavior patterns can be understood as the relationship of specific nonverbal behavior patterns to the meaning of the verbal expressions which accompany them.] (Scherer 1977: 279, translation by author)

Whenever the meaning of the verbal expression is supported, strengthened, illustrated, etc. by nonverbal behavior, Scherer speaks of amplification. According to Scherer, illustrators (Ekman and Friesen 1969) are used with this function. In the case of modification, on the other hand, the verbal expression is weakened or modified through different nonverbal signals (as an example, Scherer names the apologetic smile accompanying a negative reply), but not contradicted; if it were, this would be classified as contradiction (Scherer 1977: 281–282). Due to an ambiguous formulation, it remains unclear in Scherer (1977) whether irony (i.e., the distortion of the interpretation of a statement into its reverse by means of an "unfitting" nonverbal accompaniment) is to be considered a case of contradiction. In Scherer and Wallbott (1985: 200) and Wallbott (1988: 1228), irony is considered to be modification. In psychology, contradiction has been examined as "channel discrepancy" and discussed in connection with a theory about the origin of schizophrenia through "double-bind situations". A special case of the para-semantic function is substitution, in which the nonverbal behavior does not influence the spoken word as far as meaning is concerned, but instead replaces it. This is characteristic of emblems.

Two different functional domains are summarized under the category of para-syntactic function. The first concerns the segmentation of the flow of speech through nonverbal signs. Scherer mentions as an example the marking of the rhythm of speech with batons. (His second example, the segmentation of the flow of speech by means of pauses and speech tempo, would be placed in the category of prosody and should, therefore, rather be regarded as a linguistic resource.) Scherer and Wallbott put it in greater detail:

One of the major syntactic functions is the segmentation of the behavioral stream. This is true for both macroscopic segments of conversations, such as beginnings and endings (e.g., eye contact and smiling as signs to begin a conversation, leaning forward in one's chair to signal readiness to end a meeting) or topic changes (often signaled through gross changes in body posture […]) as well as microscopic segments, such as shifts of attention during a speaker's utterance or signals indicating a paraphrase. (Scherer and Wallbott 1985: 201)

As can be seen in the quote, in English Scherer and Wallbott use the term syntactic instead of para-syntactic, leaving out the somewhat objectionable prefix para- (compare Weinrich 1992: 15).


The same is true for the terms semantic and pragmatic. As a result, however, a problem arises: the semantic dimension of a sign (here, for example, a gesture) as defined by Morris (1975: 24) stands for the relationship between this sign and its own meaning, not for the relationship to the meaning of another sign (in this case, e.g., of a word). To avoid ambiguity, the terms para-semantic etc. are used in this article. As a second aspect of the para-syntactic dimension, Scherer mentions the synchronization of behavior patterns across different communication channels. He presumes "daß Regelhaftigkeiten in Bezug auf die Zulässigkeit und die Wahrscheinlichkeit des gleichzeitigen Auftretens verschiedener Kommunikationsweisen bestehen" [that there are regularities concerning the acceptability and the probability of the simultaneous occurrence of different means of communication] (Scherer 1977: 284, translation by author). The para-syntactic aspect is very often overlooked when the role of nonverbal communication in interaction is under discussion, perhaps because it is especially unspectacular and inconspicuous. In spite of this, its importance for the smooth functioning of interaction should not be underestimated.

The para-pragmatic dimension embraces two very different areas. With the term expression, Scherer points to the fact that a person's nonverbal behavior provides clues about his or her emotions, intentions, and personality structure. Although this is traditionally above all the object of psychological research, Scherer quite rightly points out that both the speaker and the listener, in expressing such features, help to constitute the content of the conversation. The equally para-pragmatic reactive function, by comparison, falls into the subject matter of linguistics. Here, different forms of back-channel behavior are differentiated: Scherer names signals of attention, understanding, and evaluation of the other person's utterances. Two different fields are also condensed under the term dialogic dimension. Scherer refers here to the regulation of the flow of conversation, which is substantially realized by regulators as described by Ekman and Friesen (1969). The expression of the interpersonal relations between those taking part in the conversation (e.g., in respect of status and sympathy) is also included here. Altogether, Scherer's functional categories of nonverbal behavior offer a feasible approach that has been used, or at least quoted, in many works. However, it also shows certain weak points: in particular, the differentiation between the para-pragmatic and the dialogic functions, and their respective subdivisions, is not quite plausible without further elaboration.

2.5. The NEUROGES-ELAN system by Hedda Lausberg and Han Sloetjes (2009)

A more recent suggestion for the classification of gesture comes from Hedda Lausberg in cooperation with Han Sloetjes. What is new here is the combination – intended from the start – with one of the currently most widespread multimodal annotation tools, ELAN, which was developed at the Max Planck Institute for Psycholinguistics in Nijmegen. Furthermore, a consistent attempt has been made to define the separate categories so that they correlate, in a neurological sense, with different states of consciousness (see Lausberg and Sloetjes 2009). Concerning the category on body (corresponding to the body-focused gestures of Freedman 1972, or rather the self-adaptors of Ekman and Friesen 1969), for example, research results are cited according to which this gesture shows a preference for the left hand, indicating right-hemisphere activity. This in turn corresponds to the relationship, demonstrated by several investigations, between this movement and stressful situations (see Lausberg and Sloetjes 2009: 843–846). These neurological connections remain, for the time being, at least partly hypothetical, since it is still difficult to prove them experimentally (the necessary technology is too complicated to be used unobtrusively in natural communicative situations). One difference from most other classification proposals lies in the fact that the "gesture categories are defined by kinetic features only and not by interpretation of the verbal context" (Lausberg and Sloetjes 2008: 176), so that the individual coding steps are generally carried out without the audio. In NEUROGES, the functional classification is merely the last step in a process consisting of several stages, which first describes gesticulation only on the surface (including parameters from Laban 1988), then deals with the relationship of the two hands to each other, and only then relates these to functional categories. The categories are based, amongst others, upon Efron (1972), but also draw on several other research works (e.g., Darwin 1884; Freedman 1972; Kimura 1973a, 1973b; Müller 1998). Compared with the classifications described in the foregoing sections, NEUROGES is minutely differentiated. For example, for the gesture function "iconograph", distinctions are made between length, area, and volume, and between different styles of performance: indicating endpoints, tracing, palpating, holding, and body-part-as-object (Lausberg n.d.).
Although the usage of these categories demands subtle differentiation in each case, previous inter-rater reliability estimates (with Cohen's κ) have produced satisfactory results (see Lausberg and Sloetjes 2008: 176). For the time being, one difficulty lies in the somewhat complicated accessibility of the system: it is conveyed in regularly held workshops but is not generally accessible as a publication. On the other hand, this arrangement naturally facilitates the consistent use of the notation system.
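ELAN stores its multi-tier annotations in an XML-based file format (EAF), in which time-aligned annotations on a tier reference entries in a shared time order. As a rough illustration of how such a file can be read, here is a sketch using only the standard library; the tier name, annotation value, and the miniature EAF fragment are invented for the example and sketch only the parts of the EAF structure relevant here:

```python
import xml.etree.ElementTree as ET

def read_tier(eaf_xml, tier_id):
    """Extract (start_ms, end_ms, value) triples from one tier of an
    ELAN .eaf document (XML structure sketched, not the full schema)."""
    root = ET.fromstring(eaf_xml)
    # Map time-slot IDs to millisecond values.
    slots = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
             for ts in root.iter("TIME_SLOT")}
    out = []
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = slots[ann.get("TIME_SLOT_REF1")]
            end = slots[ann.get("TIME_SLOT_REF2")]
            value = ann.findtext("ANNOTATION_VALUE") or ""
            out.append((start, end, value))
    return out

# A tiny hand-written EAF-like fragment for illustration:
EAF = """<ANNOTATION_DOCUMENT>
  <TIME_ORDER>
    <TIME_SLOT TIME_SLOT_ID="ts1" TIME_VALUE="100"/>
    <TIME_SLOT TIME_SLOT_ID="ts2" TIME_VALUE="850"/>
  </TIME_ORDER>
  <TIER TIER_ID="gesture_unit">
    <ANNOTATION>
      <ALIGNABLE_ANNOTATION TIME_SLOT_REF1="ts1" TIME_SLOT_REF2="ts2">
        <ANNOTATION_VALUE>on body</ANNOTATION_VALUE>
      </ALIGNABLE_ANNOTATION>
    </ANNOTATION>
  </TIER>
</ANNOTATION_DOCUMENT>"""

print(read_tier(EAF, "gesture_unit"))  # [(100, 850, 'on body')]
```

Keeping the time values in a separate TIME_ORDER lets several tiers (e.g., kinetic description, hand relation, functional category) share the same segmentation, which is what makes the staged NEUROGES coding workable in ELAN.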

3. Critical comments on functional classification approaches

In general, functional classification demands a strongly interpretive procedure on the part of the researchers. Frey et al. have vehemently criticized this:

Der Nachweis darüber, welche Funktionen die in einer Interaktion gezeigten Bewegungen nun wirklich erfüllen, kann natürlich nicht dadurch erbracht werden, daß man feststellt, wie oft der Kodierer welches funktionale Etikett verliehen hat. Die direkte funktionale Klassifikation erweist sich denn auch bei genauerem Hinsehen als eine den Forschungsprozeß unzulässig verkürzende Scheinlösung, bei der die gesuchte funktionale Bedeutung vom Experimentator zum Kodierer, und von diesem wieder im Kreis zurück zum Experimentator transportiert wird. [Proof of which functions the movements displayed in an interaction really fulfill can naturally not be obtained by assessing how often the coder has awarded which functional label. On closer inspection, direct functional classification proves to be a bogus solution that shortens the research process in an inadmissible way, in which the sought-after functional meaning is transported from the experimenter to the coder and from the latter in a circle back to the experimenter.] (Frey et al. 1981: 206–207, translation by author)

This criticism, however, does not relate to functional classification as such, but rather to its usage as the basis of a notation system for the respective behavior patterns.

David McNeill, who has himself made suggestions for functional classifications (McNeill and Levy 1982), provides a more moderate criticism of functional classification (see also Kendon 1983):

I wish to claim, however, that none of these 'categories' is truly categorical. We should speak instead of dimensions and say iconicity, metaphoricity, deixis, 'temporal highlighting' (beats), social interactivity, or some other equally unmellifluous (but accurate) terms conveying dimensionality. The essential clue that these semiotic properties are dimensional and not categorial is that we often find iconicity, deixis, and other features mixing in the same gesture. Falling under multiple headings is not impossible in a system of categories, but simultaneous categories imply a hierarchical arrangement. We cannot define such a hierarchy because we cannot say in general which categories are dominant and which are subordinate. (McNeill 2005: 41–42)

Problems such as those addressed here in the use of functional categories point to the importance of following pragmatic category definitions in the coding, e.g., "when there is doubt between category x and category y, category x will always be coded". Obviously, the use of modern multimedia annotation tools such as ELAN, which allow a quick retrieval of the audio-visual material, is helpful. (For an overview of current transcription systems see Bressem volume 1.)
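Such a pragmatic tie-breaking rule amounts to a fixed precedence among categories: whenever a coder hesitates between several labels, the one ranked highest is coded. A minimal sketch (the precedence order and category names are hypothetical, chosen only to illustrate the mechanism):

```python
# Hypothetical precedence: the rule "when in doubt between x and y,
# always code x" generalizes to picking the earliest-ranked candidate.
PRECEDENCE = ["emblem", "illustrator", "self-adaptor", "object-adaptor"]

def resolve_doubt(candidates):
    """Return the label to code for a movement whose category is in doubt.

    candidates: the set of labels the coder considers plausible.
    """
    return min(candidates, key=PRECEDENCE.index)

print(resolve_doubt({"illustrator", "self-adaptor"}))  # illustrator
```

The point of such a convention is not that the precedence order is "correct", but that it is applied identically by every coder, so disagreements reflect genuine perceptual differences rather than inconsistent tie-breaking.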

4. Summary

Functional classifications are widespread and belong to the common inventory in use. They make the abundance of data easier to handle and allow quick communication about the observed phenomena. Their far-reaching independence from channel-based approaches allows a global view of the functioning of posture, gaze, face, and body movements. At the same time, however, the approach has its difficulties. The categories should be defined as precisely as possible and applied in a comprehensible fashion, in order to ensure the greatest possible intersubjectivity. Apart from that, different labels currently exist for (roughly) identical functional categories, which makes the comparison of research results difficult. Unification would be welcome here.

5. References

Bressem, Jana volume 1. Transcription systems for gestures, speech, prosody, postures, gaze. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1033–1055. Berlin/Boston: De Gruyter Mouton.
Darwin, Charles 1884. Der Ausdruck der Gemüthsbewegungen bei dem Menschen und den Thieren. Stuttgart: Schweizerbart.
Duncan, Starkey D. 1979. Interaktionsstrukturen zwischen Sprecher und Hörer. In: Klaus R. Scherer and Harald G. Wallbott (eds.), Nonverbale Kommunikation: Forschungsberichte zum Interaktionsverhalten, 236–255. Weinheim/Basel: Beltz. First published [1974].
Duncan, Starkey and Donald W. Fiske 1977. Face-to-Face Interaction: Research, Methods, and Theory. Hillsdale, NJ: Erlbaum.
Efron, David 1972. Gesture, Race and Culture. Den Haag: Mouton. First published New York: King's Crown Press [1941].


Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1(1): 49–98.
Freedman, Norbert 1972. The analysis of movement behavior during the clinical interview. In: Aron W. Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 153–175. New York/Toronto/Oxford/Sydney/Braunschweig: Pergamon Press.
Frey, Siegfried, Hans-Peter Hirsbrunner, Jonathan Pool and William Daw 1981. Das Berner System zur Untersuchung nonverbaler Interaktion: I. Die Erhebung des Rohdatenprotokolls. In: Peter Winkler (ed.), Methoden der Analyse von Face-to-Face-Situationen, 203–236. Stuttgart: Metzler.
Kendon, Adam 1967. Some functions of gaze-direction in social interaction. Acta Psychologica 26: 22–63.
Kendon, Adam 1983. Gesture and speech: How they interact. In: John M. Wiemann and Randall P. Harrison (eds.), Nonverbal Interaction. (Sage Annual Reviews of Communication Research 11.), 13–45. Beverly Hills/London/New Delhi: Sage Publications.
Kimura, Doreen 1973a. Manual activity during speaking: I. Right-handers. Neuropsychologia 11(1): 45–50.
Kimura, Doreen 1973b. Manual activity during speaking: II. Left-handers. Neuropsychologia 11(1): 51–55.
Laban, Rudolf von 1988. The Mastery of Movement. Plymouth: Northcote House.
Lausberg, Hedda n.d. Module III. Functional gesture coding. Unpublished manuscript.
Lausberg, Hedda and Han Sloetjes 2008. Gesture coding with the NGCS-ELAN system. In: Andrew Spink, Mechteld Ballintijn, Natasja Bogers, Fabrizio Grieco, Leanne Loijens, Lucas Noldus, Gonny Smit and Patrick Zimmerman (eds.), Proceedings of Measuring Behavior 2008, 6th International Conference on Methods and Techniques in Behavioral Research, 176–177. Wageningen: Noldus Information Technology.
Lausberg, Hedda and Han Sloetjes 2009. Coding gestural behavior with the NEUROGES-ELAN system. Behavior Research Methods 41(3): 841–849.
McNeill, David 2005. Gesture and Thought. Chicago/London: University of Chicago Press.
McNeill, David and Elena Levy 1982. Conceptual representations in language activity and gesture. In: Robert J. Jarvella and Wolfgang Klein (eds.), Speech, Place, and Action: Studies in Deixis and Related Topics, 271–295. Chichester/New York/Brisbane/Toronto/Singapore: John Wiley and Sons.
Morris, Charles William 1975. Grundlagen der Zeichentheorie. München: Hanser.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag Arno Spitz.
Scherer, Klaus R. 1977. Die Funktionen des nonverbalen Verhaltens im Gespräch. In: Dirk Wegner (ed.), Gesprächsanalysen: Vorträge, gehalten anläßlich des 5. Kolloquiums des Instituts für Kommunikationsforschung und Phonetik, Bonn, 14.–16. Oktober 1976, 275–297. Hamburg: Buske.
Scherer, Klaus R. and Harald G. Wallbott 1985. Analysis of nonverbal behavior. In: Teun A. van Dijk (ed.), Handbook of Discourse Analysis. Volume 2: Dimensions of Discourse, 199–230. London: Academic Press.
Scherer, Klaus R., Harald G. Wallbott and Ursula Scherer 1979. Methoden zur Klassifikation von Bewegungsverhalten: Ein funktionaler Ansatz. Zeitschrift für Semiotik 1: 177–192.
Wallbott, Harald G. 1977. Analysemethoden nonverbalen Verhaltens I: Giessener System zur Handbewegungsanalyse. Unpublished manuscript.
Wallbott, Harald G. 1988. Nonverbale Phänomene. In: Ulrich Ammon, Norbert Dittmar and Klaus J. Mattheier (eds.), Soziolinguistik: Ein internationales Handbuch zur Wissenschaft von Sprache und Gesellschaft, 1227–1237. Berlin/New York: de Gruyter.
Weinrich, Lotte 1992. Verbale und nonverbale Strategien in Fernsehgesprächen: Eine explorative Studie. Tübingen: Niemeyer.

Beatrix Schönherr, Innsbruck (Austria)


95. Facial expression and social interaction

1. Introduction
2. Interpersonal attitudes
3. Cognitive processes
4. Conversational signals
5. Emblems and adaptors
6. References

Abstract

The face plays an important role in social interaction, both in its static and its dynamic dimension, being a rich source of information and of interactive signals. The face conveys a great deal of information concerning age, gender, and social status, and shapes impressions of personality through the process of interpersonal perception. Facial expression, on the other hand, is an effective signalling system in interpersonal communication. In combination with other nonverbal signals it has a strong and immediate impact in communicating interpersonal attitudes such as cordiality, hostility, dominance, and submission; it also communicates other mental activities such as attention, memory, and thinking. Moreover, the face takes an active part in conversation: the "speaker" accompanies his/her words with facial expressions that emphasize or modulate the meaning of the verbal communication, while the "listener" provides constant feedback through facial expression. Facial movements also take part in regulating interpersonal exchanges and synchronizing turn-taking. Finally, the face may produce mimic movements that play the role of adaptive behavior correlated with the level of arousal experienced by the individual.

1. Introduction

The face plays an important role in social interaction, both in its static dimension (structural features, physiognomy) and in its dynamic dimension (facial expression), being a rich source of information and of interactive signals. The face makes a significant contribution towards defining a person's appearance and identity. It is as such the most important of the so-called "static" non-verbal signals that are part of social interaction. It is therefore an important source of information on the person, and can provide clues as to a variety of characteristics such as race, age, gender, etc. (Ekman 1978). The face does not appear to provide such reliable information on other equally important dimensions and aspects, such as personality or intelligence. As a rule, however, when formulating judgments on the personal characteristics of others, we attribute a great deal of importance to facial configuration. The possibility of reading aspects of a person's character in their facial features has been analyzed in a wide range of studies on physiognomy which, albeit in alternating phases, has been very successful and has significantly influenced "common-sense" theories of personality. The classic social psychology studies on interpersonal perception provide plenty of evidence of the inappropriate generalizations proposed by physiognomy (Cook 1971) and of the poor value of the face as a source of information on personality; this research does highlight, however, that common-sense psychology has incorporated the main points of the physiognomy tradition, favouring the existence of "facial stereotypes", i.e., widely shared rules of identification by means of which the external appearance is placed in relation to personality (Brunswik and Reiter 1937; Secord 1958).

But facial expression is associated with the externalization of emotions and interpersonal attitudes, the generation of conversational signals, the manifestation of cognitive processes, and the production of specific movements, such as adaptors and emblems, that can be considered gesture categories; facial expression is one of the so-called "dynamic" non-verbal signals that are given, and that change rapidly, over the course of an ongoing social interaction. This paper does not address the analysis of the facial expression of emotions; here, we will take into consideration the role of facial expression in relation to the communication of interpersonal attitudes, the externalization of cognitive processes, the sending of signals during conversation, and the production of self-adaptation movements (adaptors) and of symbolic signals (emblems).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1342–1349

2. Interpersonal attitudes

During social interactions individuals constantly "negotiate" (moment by moment) the quality of their social relationship, which is primarily expressed through the mutual manifestation of interpersonal attitudes. The externalization of these attitudes influences subsequent phases of the interaction, mutual perception and, more generally, the process of interpersonal communication. For years, research into the manifestation of interpersonal attitudes has shown that it is above all non-verbal signals that convey the most meaningful information and make the manifestation of interpersonal attitudes possible (Argyle, Alkema, and Gilmour 1973; Mehrabian 1972). Facial expression (together with other non-verbal signals such as gaze direction, body posture, tone of voice, etc.) contributes to the externalization of interpersonal attitudes, among which the following have been considered most frequently: friendliness vs. hostility, dominance/superiority vs. submission/inferiority, cordiality/warmth vs. coldness, willingness/acceptance vs. refusal, liking vs. disliking, and formality vs. informality.

Dominance/superiority, for example, is conveyed through the face via an attentive, serious facial expression, generated mainly by movement of the eyebrows and eyelids, while externalization of submission/inferiority is revealed mainly through other non-verbal signals, such as body posture and head position, accompanied nonetheless by a "meek" expression, somewhat reminiscent of sadness. Friendly and warm attitudes are expressed mainly by a smile generated through two specific facial movements: widening and lifting the corners of the mouth and contracting the orbicularis oculi muscle, respectively Action Unit (AU) 12 and AU 6 as per the Facial Action Coding System (Ekman and Friesen 1978). The presence of only the first of these two movements is not sufficient to express a positive attitude. It indicates rather a formal, or "circumstantial", attitude, which is often used in greetings. While the first type of smile (AUs 12+6) is also described as "genuine", "sincere", and "felt", the second one (AU 12 alone) is seen as "formal", "insincere", or even "false". Frowning, possibly accompanied by a widening of the eyelids, is the most obvious sign of hostility; it can also be accompanied by facial movements involving the mouth and cheeks and partly reminiscent of the expressive signs of anger. The expression of interpersonal refusal is generated through a facial configuration that includes partial reduction of the eyelid opening and, in certain cases, lifting of the upper lip.
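The felt/formal smile distinction described above is a simple combination rule over coded Action Units, and can be sketched as a small lookup (the function and its labels are an illustrative sketch, not part of FACS itself):

```python
def classify_smile(active_aus):
    """Classify a coded smile from a set of FACS Action Unit numbers.

    AU 12 = lip corner puller, AU 6 = cheek raiser (orbicularis oculi).
    Labels follow the felt/formal distinction discussed in the text.
    """
    aus = set(active_aus)
    if {6, 12} <= aus:
        return "felt smile (AUs 12+6): genuine, sincere"
    if 12 in aus:
        return "formal smile (AU 12 only): circumstantial, e.g. in greetings"
    return "no smile coded"

print(classify_smile({6, 12}))
print(classify_smile({12}))
```

The point of coding at the AU level rather than at the label level is exactly this separability: the kinetic description ({6, 12} vs. {12}) is recorded first, and the interpretive label is derived from it afterwards.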


3. Cognitive processes

In reference to the Italian pioneers of facial expression study (De Sanctis 1902; Mantegazza 1879), we will now take into consideration a rather neglected aspect of the literature: the relationship between cognitive processes and facial movements. Mantegazza (1879), following Darwin (1872) and other 19th-century scholars (Bell 1806; Duchenne 1876; Gratiolet 1865; Piderit 1867), dedicated a chapter of his book Fisionomia e Mimica [Physiognomy and Facial Expression] to the facial expressions that accompany each form of "mental process" (sensory attention, reflection, memory, etc.). De Sanctis (1902) went into much more depth, dedicating an entire book to La Mimica del Pensiero [The Facial Expression of Thought]. He poses many questions, such as: do thought processes manifest themselves in facial expressions? Is there a relationship between mental states such as attention and concentration and the concomitant facial expression? What relationship exists between the expression of emotions (or affective states) and the expression of other mental processes, such as attention, concentration, or cognitive engagement? Can the extent and intensity of facial movements act as a measure of the degree of concentration? Of course, De Sanctis' considerations were mainly the result of observation and speculation; they did, however, open a very interesting chapter, which did not subsequently receive due attention from researchers. De Sanctis identified frowning and the movements of the eyebrow region as the most significant indicators of cognitive processes; in particular he maintains, in agreement with Duchenne (1876), that the frontalis muscle can be considered the muscle that chiefly expresses attention towards external objects (also called sensory or external attention). To this he added the movement used for internal attention (or reflection), during which a marked reduction of the eyelid opening is observed.

A particular form of integration between the two previous expressions is represented by the so-called interrogative attention (Cuyer 1902), in which both frowning and a relative tightening of the eyelids are observed. Another interesting facial expression of a specific cognitive process is observed during mnemonic effort: here too, the eyes are narrowed due to the eyelids tightening, and the direction of the gaze can be diverted, upwards for example. Darwin (1872), too, had a certain interest in the so-called "blank" expression of the eyes, which expresses a kind of absorption of the thoughts, an "enchantment" in which the gaze is blank and the eyelids slightly narrowed. Finally, De Sanctis (1902) includes among the possible facial expressions of mental processes the expression of contemplation and spiritual ecstasy, studied chiefly in the figurative art forms (Pasquinelli 2005), in which particular focus is drawn to the partial closure of the eyelids (the ecstatic gaze) in a relatively relaxed face devoid of other specific facial movements. In short, we can say that the expressive structures of the face that manifest the various cognitive processes (external attention, reflection, concentration, mnemonic effort, etc.) are mainly located in the upper part of the face and originate in the forehead, eyelid, and eyebrow muscles. A less significant role is played by the muscles in the lower part of the face that are responsible for movements involving the mouth: in some cases the mouth can be shaped into a movement resembling a kiss (lip pucker); in other cases the lips are pressed together (lip tightener or lip presser) or pulled inside (lip suck); finally, we can observe a stretching of the corners of the mouth to produce a sardonic expression (movements similar to those produced in the facial expression that accompanies physical effort).


De Sanctis concludes his observations by highlighting, at least on a methodological level, the distinction between emotional and "intellectual" expression, recognizing greater complexity and propagation in the former than in the latter, which involves fewer facial movements that are also less intense and evident than emotional expressions. Although no tradition of empirical and experimental research has systematically confirmed De Sanctis' observations, we can conclude, on the basis of a careful analysis of the expressive phenomena accompanying various types of mental processes, that the reflections begun over a century ago still largely hold true.

4. Conversational signals

It is clear to all that the face takes an active part in the communication processes that occur during conversation, through gaze direction and the movements of the forehead, eyebrows, eyes, and the lower part of the face (together with other signals not discussed here, such as the position and movement of the head and shoulders, posture, gestures, etc.). Through these movements, the face constantly accompanies both the speaker and the listener as they take turns in a conversation. Naturally, we are not referring to the movements of the mouth or the other facial movements required for vocal emission in verbal behavior. The speaker constantly accompanies his/her words with facial expressions that emphasize, underline, and modulate the content and meaning of the concomitant verbal language, in the same way as he/she uses an array of hand gestures (Ekman 1976; Ekman and Friesen 1972; Rimé 1983). To this end, the movements involving the muscles of the forehead, eyebrows, and mouth play a significant part. The eyebrows in particular provide a great deal of information about verbal behavior (Costa and Ricci Bitti 2003) by lifting, lowering, or moving together to varying degrees (expressed respectively by AUs 1+2 and by AUs 4+5 as per the Facial Action Coding System by Ekman and Friesen 1978). From a functional point of view these facial movements, which we can call co-verbal facial expressions, have specific characteristics distinct from emotional facial expressions: they are quicker and appear at the same time as the concomitant verbal behavior. The subject who takes on the role of listener can also produce a wide repertoire of facial movements which act simultaneously as constant feedback for the speaker and as comment/reaction (attention, interest, indifference, agreement, disagreement, doubt, perplexity, etc.) to the speaker's verbal behavior.

Research conducted in recent years on the non-manual components of sign languages (Corina, Bellugi, and Reilly 1999), including that on facial expression, has highlighted an extensive series of linguistic functions of facial expression also during conversation and social interaction among hearing persons. We will call these movements "co-verbal facial expressions with linguistic functions". They are part of the grammatical apparatus and coincide with other communicative components required to structure the message. Facial movements act simultaneously with other communicative systems (voice, direction of the gaze, hand gestures, structured use of space, etc.) to get across the message. These facial expressions transmit specific information that helps to identify the communicative components within a phrase, and they are also indicative of a more or less formal style. These expressions, which from a certain point of view may appear to be


functionally comparable with the expressive intonation of vocal behavior in the word or phrase, in fact have an array of functions that are not yet entirely clarified. The facial expressions that accompany verbal behavior and perform a linguistic function (defined by some as “grammatical” facial expressions) are governed by the linguistic system and their activation/deactivation is coordinated by the concomitant phrase. Some of them take on a specific lexical function, accompanying individual phrases or words with the aim of completing/modulating the meaning; they are therefore concomitant (or simultaneous) with the word or phrase they accompany. Despite their apparent resemblance and use of the same underlying muscles, co-verbal and affective (conveying emotions) facial behaviors differ from one another in many ways, such as in their function, form, duration and, probably, in their activation mechanisms. Co-verbal facial expressions are governed by the verbal system; their precise coordination with the word or phrase is crucial in transmitting the message, which is thus completed and/or modulated. They are characterized by rapid and “individual” movements, unlike affective facial expressions which consist most often of combinations of facial actions, which are activated, evolve and end in ways that do not correspond to the clear confines of linguistic units, as is the case with co-verbal expressions with grammatical functions. The difference between co-verbal facial expressions with linguistic functions and affective expressions is also supported by neuropsychological research. It was demonstrated that the two types of expression involve the activation of different neural structures: affective expression are processed mainly in the right hemisphere, while co-verbal expression with linguistic function are chiefly processed in the left hemisphere. 
Further proof of these differences is provided by studying aphasic patients: damage to specific areas of the left hemisphere can cause deterioration of co-verbal facial expressions with linguistic functions without interfering with affective facial expressions, while damage to the right hemisphere, with consequent deterioration of affective expressions, leaves co-verbal facial expressions with linguistic functions intact (Adolphs et al. 1996; Borod et al. 1998; Burt and Perrett 1997; Campbell 1986).

We will give a few examples of the role of co-verbal facial expression in performing an adverbial function, in interrogative phrases (or questions), and in negation. Co-verbal facial expressions with adverbial function can convey adjectival information by accompanying various predicates and modifying their meaning. If we limit ourselves to illustrating the case of a diminutive (little) or an augmentative (very much), in the first case we can notice the verbal behavior being accompanied by an expression that involves the eyebrows lowering and moving closer together (AUs 4+5 as per the Facial Action Coding System) and a tightening of the eyelids (AU 7). In the second case, the words are accompanied by a raising of the eyebrows (AUs 1+2), a raising of the upper lid (AU 5), and, possibly, a raising of the chin (AU 17).

Co-verbal facial expressions with syntactic function may characterize yes/no questions, wh-questions (who, what, where, when), and negation. A verbal negation, for example, may be accompanied by a movement involving both the eyebrows (lowering and moving closer together, AUs 4+5) and the mouth (lowering of the corners of the mouth and raising of the chin, AU 15 and AU 17). A recent study (Ricci Bitti et al. 2012) analyzed the facial expression involved when a person communicates doubt/uncertainty about specific information or knowledge of his/her own ("I am not sure I know it"). The facial expression shows in the lower face the presence of AUs 15+17 (lowering of the corners of the mouth and raising of the chin) and in the upper face the presence of AUs 1+2 (raising of the eyebrows). A yes/no question (i.e., one which calls for either an affirmative or a negative response) may be accompanied by a co-verbal facial expression involving the raising of the eyebrows and the widening of the eyelids (AUs 1+2 and AU 5, respectively). A wh-question may be accompanied by a co-verbal facial expression involving the lowering and moving together of the eyebrows and a reduction of the eyelid area (AUs 4+5 and AU 7, respectively), possibly also accompanied by a raising of the chin and a tilting back of the head.

The fact that facial expressions can modulate the meaning of a message conveyed through another communicative system in social interaction is also demonstrated by studies on the meaning of certain symbolic gestures (or emblems) which, while sharing the same manual component, take on different meanings depending on the concomitant facial expression (see Ricci Bitti 1992). Finally, still in the area of conversation, other non-verbal signals can intervene to govern and regulate (which is why they are called "regulators") the flow of the interaction and turn-taking between speakers, regulating transitions from speaker to listener and vice versa. In addition to certain typical hand gestures, facial expressions can also serve this purpose, for example raising the eyebrows or movements in the mouth area; the listener, for example, can manifest his/her intention to take a turn through a wide range of signals, including some "preparatory" movements mainly involving the mouth (Duncan 1973; Kendon 1973).
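For readers who annotate such material, the AU combinations described above can be collected in a small lookup table. The following Python sketch is illustrative only: the dictionary and helper function are our own naming, not part of any standard FACS tooling, and the optional components (e.g., the chin raise and head tilt for wh-questions) are noted in comments rather than encoded.

```python
# FACS Action Unit (AU) combinations for the co-verbal facial expressions
# described in the text (Ekman and Friesen 1978). Names and structure are
# a hypothetical sketch for illustration, not a standard coding scheme.

COVERBAL_AUS = {
    "diminutive": [4, 5, 7],        # brows lowered/drawn together, lids tightened
    "augmentative": [1, 2, 5, 17],  # brows raised, upper lid raised, chin raised (optional)
    "negation": [4, 5, 15, 17],     # brows lowered, mouth corners down, chin raised
    "doubt/uncertainty": [1, 2, 15, 17],
    "yes/no question": [1, 2, 5],
    "wh-question": [4, 5, 7],       # possibly plus chin raise (AU 17) and head tilt back
}

def shared_aus(expr_a: str, expr_b: str) -> set:
    """Return the AUs two co-verbal expressions have in common."""
    return set(COVERBAL_AUS[expr_a]) & set(COVERBAL_AUS[expr_b])
```

Note that, in this table, the diminutive and the wh-question involve the same AUs (4+5 and 7), which illustrates the point made above: it is the precise temporal coordination with the concurrent word or phrase, not the facial action alone, that disambiguates co-verbal expressions.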

5. Emblems and adaptors

Although the repertoire of symbolic non-verbal signals (also known as conventional signals or "emblems") mainly consists of hand and arm movements, the face can also perform this function. Symbolic gestures are signals given with the intention of conveying a specific meaning, which can be translated directly into words; this meaning is shared among members of a certain social group. A typical example of a symbolic gesture produced with the face is winking to convey complicity and reciprocal agreement, but many other emblems are the result of composite movements that imply the participation of the face and a hand, and consist mostly of a hand moving over a part of the face to convey a specific expression. Examples of this are kissing the tips of the fingers as a sign of appreciation (Morris et al. 1979) or pulling down the lower eyelid of one eye with the index finger to mean "take care/be alert" (Morris et al. 1979). Some symbolic gestures or emblems include a facial expression accompanied by a movement of the head; one interesting example is "tossing the head back", accompanied by a facial expression produced by raising the eyebrows and partially closing the eyelids to signal negation, which is much used in many Mediterranean cultures (Morris et al. 1979).

Finally, there are some unintentional movements used repeatedly by individuals, as they are recognized as being useful in particular situations of internal discomfort or tension (corresponding to a physical arousal of the organism); these become part of the idiosyncratic repertoire of each individual, who uses them to self-regulate in day-to-day situations. These movements originate from the necessity to fulfill certain needs inherent to the particular situations in which the individual finds him/herself. They are generally learned over the course of the individual's experience as part of a global model of adaptive behavior (which is why they are called "adaptors"). They become stable, habitual, and unconscious behaviors (not intended, therefore, to convey a specific message or meaning), which are a sign, or symptom, of a particular internal condition of arousal and whose task is to maintain the organism in a condition of relative equilibrium (self-regulation). These movements mostly consist of self-manipulations (of body parts) or manipulations of objects, but in certain cases expressive facial movements can also be used. Among the facial expressions that can be included in this category of behaviors are lip biting, lip sucking, repeatedly licking the lips, and other movements of the mouth. Since these movements are associated with the level of physiological arousal, they tend to increase gradually as the arousal associated with the individual's discomfort intensifies, even though major differences exist in the quantity and type of movements used by different individuals.

6. References

Adolphs, Ralph, Hanna Damasio, Daniel Tranel and Antonio R. Damasio 1996. Cortical systems for the recognition of emotion in facial expressions. The Journal of Neuroscience 16(23): 7678–7687.
Argyle, Michael, Florisse Alkema and Robin Gilmour 1972. The communication of friendly and hostile attitudes by verbal and nonverbal signals. European Journal of Social Psychology 1(3): 385–402.
Bell, Charles 1806. Essays on the Anatomy and Philosophy of Expression in Paintings. London: Murray.
Borod, Joan C., Loraine K. Obler, Hulya M. Erhan, Ilana S. Grunwald, Barbara A. Cicero, Joan Welkowitz, Cornelia Santschi, Reto M. Agosti and John R. Whalen 1998. Right hemisphere emotional perception: evidence across multiple channels. Neuropsychology 12(3): 446–458.
Brunswik, Egon and Lotte Reiter 1937. Eindrucks-Charaktere schematischer Gesichter. Zeitschrift für Psychologie 142: 67–134.
Burt, Michael D. and David I. Perrett 1997. Perceptual asymmetries in judgments of facial attractiveness, age, gender, speech and expression. Neuropsychologia 35(5): 685–693.
Campbell, Ruth 1986. The lateralization of lip-read sounds: a first look. Brain and Cognition 5(1): 1–21.
Cook, Mark 1971. Interpersonal Perception. Harmondsworth: Penguin.
Corina, David P., Ursula Bellugi and Judy Reilly 1999. Neuropsychological studies of linguistic and affective facial expressions in deaf signers. Language and Speech 42(2/3): 307–331.
Costa, Marco and Pio E. Ricci Bitti 2003. Il chiasso delle sopracciglia. Psicologia Contemporanea 176: 38–47.
Cuyer, Édouard 1902. La Mimique. Paris: Doin.
Darwin, Charles 1872. The Expression of the Emotions in Man and Animals. London: Murray.
De Sanctis, Sante 1902. La Mimica del Pensiero. Palermo: Sandron.
Duchenne, Guillaume-Benjamin 1876. Mécanisme de la Physionomie Humaine, 2nd edition. Paris: Baillière.
Duncan, Starkey D. 1973. Some signals and rules for taking speaking turns in conversation. Journal of Personality and Social Psychology 23(2): 283–292.
Ekman, Paul 1976. Movements with precise meaning. Journal of Communication 26(3): 14–26.
Ekman, Paul 1978. Facial signs: Facts, fantasies, and possibilities. In: Thomas A. Sebeok (ed.), Sight, Sound, and Sense, 124–156. Bloomington: Indiana University Press.
Ekman, Paul and Wallace V. Friesen 1972. Hand movements. Journal of Communication 22(4): 353–374.
Ekman, Paul and Wallace V. Friesen 1978. Manual for the Facial Action Coding System. Palo Alto: Consulting Psychologists Press.
Gratiolet, Pierre L. 1865. De la Physionomie et des Mouvements d'Expression. Paris: Bibliothèque d'Éducation et de Récréation.
Kendon, Adam 1973. The role of visible behavior in the organization of social interaction. In: Mario von Cranach and Ian Vine (eds.), Social Communication and Movement, 29–74. New York: Academic Press.
Mantegazza, Paolo 1879. Fisionomia e Mimica. Milano: Dumolard.
Mehrabian, Albert 1972. Nonverbal communication. In: James K. Cole (ed.), Nebraska Symposium on Motivation, 107–161. Lincoln: Nebraska University Press.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O'Shaughnessy 1979. Gestures. London: J. Cape.
Pasquinelli, Barbara 2005. Gesto ed Espressione. Milano: Electa.
Piderit, Theodor 1867. Wissenschaftliches System der Mimik und Physiognomik. Detmold: Klingenberg.
Ricci Bitti, Pio E. 1992. Facial and manual components of Italian symbolic gestures. In: Fernando Poyatos (ed.), Advances in Nonverbal Communication, 187–196. Amsterdam: John Benjamins.
Ricci Bitti, Pio E., Luisa Bonfiglioli, Paolo Melani, Roberto Caterina and Pier Luigi Garotti 2012. Facial expression in communicating doubt/uncertainty. Paper presented at the International Conference "The Communication of Certainty and Uncertainty", University of Macerata, Macerata, 3rd–5th October.
Secord, Paul F. 1958. The role of facial features in interpersonal perception. In: Renato Tagiuri and Luigi Petrullo (eds.), Person Perception and Interpersonal Behavior, 300–315. Stanford: Stanford University Press.

Pio E. Ricci Bitti, Bologna (Italy)

96. Gestures, postures, gaze, and movement in work and organization

1. Introduction
2. The employment interview
3. Power and social relations
4. Conclusion
5. References

Abstract

The role of nonverbal cues in work and organizations is fundamental in shaping working life as a whole. Behaviors enacted during the employment interview, social interactions, power relationships, and all transactions in work and organizational contexts are related to nonverbal communication. Given the vertical dimension of workplaces, nonverbal cues influence and determine behavior from the first contact between employer and prospective employee, that is, the job interview during the hiring process. Nonverbal cues then shape social and power relations among persons in the organization, in both symmetrical and asymmetrical relations (i.e., peer-to-peer or hierarchical relations). Specifically, in work contexts many nonverbal cues are fundamental for one's full integration within the organization. Moreover, the more power-laden the relation, the more important the nonverbal cues: dress and appearance, posture, gesture, gaze, and voice are all salient nonverbal cues that shape one's daily work life. During each social exchange, the communication process is constantly shaped by nonverbal communication among the actors involved. The main nonverbal cues, features, and processes involved in work and organizations are discussed.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1349–1360

1. Introduction

Many body cues play a central role in influencing expectations and outcomes of social interactions in work organizations, such as in employment interviews or in social interaction and power relationships among different organizational roles. Any organizational context is the unique framework within which each specific organizational culture and social life is played out and by which it is maintained: its routinized organizational interactions have features – status differences, chain of command, division of labour, measurable performance objectives – that are unique to the specific organizational context when compared to other social-physical contexts, as well as to other organizations (according to different cultures, climates, market sectors, etc.). These two types of specificity create special communication challenges and social interaction issues for the members of any organization. Bodily communication is important in all stages of a person's life-cycle within an organization, i.e., all along a person's organizational socialization. Such a track, according to past research (Levine and Moreland 1994; Moreland and Levine 1982), encompasses five main phases: investigation, socialization, maintenance, resocialization, and remembrance. Bodily communication can therefore play a basic role from the first contact between a candidate for a vacant job position and his/her evaluator (e.g., in the job interview) – where little personal information about the candidate is available – across the full track of a person's organizational life, up to his/her change of organizational context or the end of his/her organizational life. There are a number of possible issues for which bodily communication within the organizational context is relevant.
For example, on the one hand, one of the major challenges is how to manage the displays of status among organizational members (and of the connected power relations) that are such an integral part of organizational life (Remland 2006). This issue is strictly linked to social identification (which can be at the organizational and/or team level) and to the related relationship functions and consequences: bodily communication in the organization can therefore be involved in the social identity processes at play within an organization, which in turn affect the full range of organizational behaviors, from leadership to mergers to diversity management, etc. (e.g., Haslam et al. 2003). This is relevant both in terms of vertical (leader–follower) and horizontal (co-workers and colleagues) relationships within the organization. On the other hand, relations within everyday organizational life are, like any other form of group social interaction, a fundamental tool for teamwork and team management, given the impact they have on both the well-being and the efficiency of the people in the team and in the organization: bodily communication can therefore affect both the health and the work of team and organization. Both classical realms of group social psychology within organizations – i.e., task achievement and relationship or mood satisfaction (e.g., from Bales 1950; Wilson et al. 2004) – can be affected by bodily communication. In line with the aim of the present chapter, the following paragraphs review two main issues that are relevant to the stages of organizational socialization: the employment interview, as a first-stage issue, and the issue of power and social relations, which affects most stages of organizational socialization.

2. The employment interview

In general, human relations can be organized along a "vertical" dimension (Burgoon and Hoobler 2002; Fiske 1991; Hall and Friedman 1999) – relating to power, dominance, status, hierarchy, and related concepts – and along an affective or socio-emotional "horizontal" dimension – which describes the emotional closeness of interpersonal relations and the valence of feelings and behaviors (Berger 1994; Osgood, Suci, and Tannenbaum 1957; Wiggins 1979). These two features are also matched by the two main dimensions used for interpersonal judgements, i.e., competence and warmth (e.g., Fiske, Cuddy, and Glick 2007). The vertical dimension is fundamental in organizational contexts, particularly starting with the employment interview, where the asymmetry is primary and paradigmatic (although the horizontal dimension can also be relevant within the same context). Employment interviews are used within companies in order to select and enroll human resources in a work context. Moreover, self-presentation is a matter of regulating one's own behaviors to create a particular impression on others (Jones and Pittman 1982), of communicating a particular image of oneself to others (Baumeister 1982), or of showing oneself to be a particular kind of person (Schlenker and Weigold 1989). Thus, within such a context, people can use their bodily communication to claim a variety of self-relevant (or context-relevant) characteristics (De Paulo 1992). Several studies have examined the effects of the applicant's bodily communication behavior on the interviewer's impressions and hiring decision. Increased bodily behavior by the interviewer – smiles, constant eye-contact, head nods, and gestures – has a positive influence both on the applicant's perception of the interviewer and on the applicant's performance (Hall, Coats, and Smith LeBeau 2005).
Similarly, increased bodily behavior by the applicant has a positive effect on the interviewer's judgment (Gifford, Ng, and Wilkinson 1985). Studies concerning the relevance of bodily communication in employment interviews have compared the quality of the applicant's bodily behavior during the interview with the interviewer's final judgments and hiring decisions. Several controlled studies showed that only applicants who displayed bodily behaviors such as an above-average amount of eye contact, a high energy level, speech fluency, and voice modulation were evaluated as worth seeing for a second interview (McGovern and Tinsley 1978). In Amalfitano and Kalt's (1977) study, applicants who engaged in more eye contact were judged more alert, assertive, dependable, confident, responsible, and as having more initiative. Applicants rated highly on these attributes were also evaluated as most likely to be hired. Several studies have examined the effects of the applicant's bodily behaviors on the interviewer's impression and hiring decision. The pattern of results suggests that increased eye contact, smiling, gestures, and head nods by an applicant produce favorable outcomes (Edinger and Patterson 1983). Forbes and Jackson (1980) observed actual employment interviews in an employment office for internships. They showed that applicants were rated most favorably when they engaged in more eye contact, smiling, and head movement. Much more limited bodily communication – evasive or absent gaze and reduced head movement – was, on the contrary, observed among non-accepted applicants. Body posture during the interview did not mark a considerable difference between accepted and non-accepted applicants; most applicants showed an erect body posture, joined hands resting on the lap, and legs together and not crossed (Gifford, Ng, and Wilkinson 1985). An important factor in a job interview is dress, which determines the first interpersonal impression. Clothing is part of appearance and, together with other bodily communication cues, it helps to define one's personal identity in other people's eyes. The influence of clothing on potential candidates in job interviews has been analyzed (Forsythe 1990), highlighting the relevance of the consistency between the apparel and stereotyped expectations regarding a specific job profile. This study was based on the assumption that similarities between the clothing of the candidate and the characteristics of the observer are closely linked (Byrne 1971). In sum, bodily communication in employment interviews is a significant element in the definition of the interviewer's judgment of applicants, as well as a valid predictor of further success at work.

3. Power and social relations

Once the person has entered the organization, s/he finds her/himself in a new context and (sub-)culture in which s/he needs to orient quickly in order to understand and co-create meaning properly. Such a context, as is already evident during the employment interview, is an asymmetrical one, and one where group and category issues (both within and outside one's own organization and team) are salient. Entering such an ecosystem, the person quickly learns the importance of power in social relations and needs to adapt both to the general social ecology and to the specific social niche s/he finds her/himself in. Power can be defined as the use of one's human, material, psychological, or intellectual resources to influence others either actively or passively. Power has a particular relevance in all types of relations, but especially in professional relations within a work organization. Bodily cues play an important role in social interactions in general, both as markers of the conversational space and as indicators of dominance and status. Within organizational contexts, social power and specifically organizational power – the ability to pursue and attain goals through mastery of one's organizational environment (Mann 1986) – are pivotal. Posture, vocal emissions, gesture, facial expressions, interpersonal distance, and space management are only some of the important bodily cues involved in power relations within organizations: a brief synthesis is given below, going from the more static to the more dynamic – in terms of changes during an episode of social interaction – of the main bodily behaviors.

3.1. Dress and appearance, objects, and accessories

One important element within the range of static bodily communication behaviors is dress and physical appearance. Formal dress conveys power and control for both men and women, affecting stakeholders both inside and outside the organization (e.g., Stuart and Fuller 1991; Temple and Loewen 1993). Maintaining the same style in one's personal dress code is considered a sign of status, coherence, reliability, and reputation. Moreover, physical characteristics and appearance are an important cue for power and status, probably due to the physical advantages linked to these characteristics, specifically in cases of competition. These features can also signal stages of organizational socialization. Another important static feature is the management of space, i.e., the objects and accessories located in the environment. Such spatial features are an important nonverbal indicator of power and dominance. People having more organizational power than others can access many more places than persons having less organizational power (Remland 1981). They usually enter first, and are often followed by others. In hospitals, for example, the head physician is followed by other doctors and Ph.D. students; managers have their own office, while employees share a room with other employees, or sometimes have their desk in an open space, sharing the same environment with many people. Similarly, furniture, accessories, and the amount of space are used to signal organizational status and importance. The seating position at one's desk can communicate specific signals about the worker's psychological situation: a reclined back and downcast eyes in a servile position might signify incipient discomfort and an implicit request for help (Anderson and Bowman 1999; Exline, Ellyson, and Long 1975).

3.2. Posture and distance

Body posture is a strong indicator of power and dominance. In work organizations body posture corresponds to organizational position: an erect, open but relaxed posture generally indicates a higher position (Anderson and Bowman 1999). Mehrabian (1970, 1972) observed the significance of posture along the double axes of dominance–submission and relaxation–tension, according to the following lines. "Postural relaxation" is defined by an asymmetric position of arms and legs, a reclined or oblique trunk inclination, and relaxed hands and neck. These indicators are generally used towards people of inferior social status: this position is described as "dominant". On the contrary, a much more stiff and rigid posture, expressing deference and subordination, is generally observed towards people of superior social status. In different situations a rigid, erect posture, hands on hips, and head reclined backwards are signs of dominance; bows, a downcast gaze, and a reclined head indicate submission or reverence. Interpersonal distance and proxemics are also important in regulating organizational social interactions as a function of power roles. For example, in the study by Fortenberry et al. (1978), conducted within a university organization, two persons conversed with each other in the middle of a corridor, dressed either formally or informally. Results show that people invade the conversing couple's personal space more when the two conversants are dressed informally: that is to say, when persons' dress shows their higher power, other people within the same organization show greater deference to them by avoiding invading their personal space.

3.3. Gestures and movements

People exercising power and influence in work organizations generally show an increased use of gestures, particularly pointing at others (Ekman 1976). Within organizational working teams or persuasive communications, people playing subordinate roles, or who in any case turn out to have less social influence – or who are even perceived as lower on the "composure" and "competence" dimensions (but not necessarily on the "warmth" dimension) – use more self-manipulation gestures and fewer ideational or conversational gestures, when compared to more "powerful" colleagues. This is evident in empirical studies where an intendedly persuasive speaker's hand gestures are manipulated and members of the same organization are asked to evaluate the speaker and her message (Maricchiolo et al. 2009). But the same is also evident when members of an organizational team are asked to discuss in order to reach a decision within a problem-solving setting and their bodily communication is coded: a person's use of ideational, conversational, or object-manipulation gestures, but not of self-manipulation gestures, increases that person's perceived influence within the group, and this effect is observed for those persons who are verbally less dominant (Maricchiolo et al. 2011). Although hand gestures certainly have a privileged role in social interaction, other body movements can be important too: movements and space management can play a role in the management of social relations within work organizations. A hasty walk through the workplace corridors can signify either a strong commitment to fast and effective work, or boldness. On the contrary, an excessively slow gait can show weak motivation at work (Anderson and Bowman 1999).

3.4. Gaze and voice

Power and dominance relations can also be inferred through gaze. Eye-contact is more frequent and intense in cooperative relationships than in competitive ones. Gaze is also a very effective signal for requesting and obtaining approval. In asymmetric interactions the tendency to avoid or sustain the partner's gaze can depend on differences in status roles but also on the interaction situation. For example, eye-contact by the dominant partner is usually sustained while speaking and much reduced while listening to the other partner. Moreover, gaze behavior and emotion have both been linked to approach and avoidance motivational orientations (Adams and Kleck 2005). Direct gaze, anger, and joy share an approach orientation, whereas averted gaze, fear, and sadness share an avoidance orientation. Thus, within organizational contexts, when gaze direction matches the underlying behavioral intent communicated by a specific emotion expression, it enhances the perception of that emotion (Adams and Kleck 2005). Within organizations, it is also important to pay attention to vocal tone in order to express power. People who have more power often speak louder and more quickly when they want more control over the conversation, while they speak more softly and slowly when they want to control and manage the timing of the conversation (Atkinson 1984).

3.5. Process complexities: co-occurrence, reciprocity, automaticity, mediated effects

However, in most situations within an organizational context bodily communication is a more complex phenomenon (for the same issues in general, see chapter 38 on social psychology in this handbook). First of all, this is because people use a combination of bodily communication parameters to manage their social relations. Secondly, people tend to adjust their bodily communication to their partner's bodily communication. Thirdly, complexities arise because such processes tend to occur partly or completely outside the actor's and/or observer's awareness. A good example is the fact that what is subjectively recognized as a good relationship in interaction with colleagues or managers can be inferred:

(i) from the spontaneous coordination of both micro and macro body movements, known as synchrony (Kelly and Barsade 2001);
(ii) from the spontaneous imitation of the partner's posture, known as posture mirroring; and
(iii) from emotional contagion (Tickle-Degnen and Rosenthal 1987).

Specifically, emotional contagion refers to the processes whereby the moods and emotions of one individual are transferred to nearby individuals through a relatively automatic and unconscious tendency to "mimic and synchronize facial expressions, vocalizations, postures, and movements with those of another person and, consequently, to converge emotionally" (Hatfield, Cacioppo, and Rapson 1992: 151). In humans, the processes of emotional contagion, synchrony, and mirroring are critical factors in the evolution of emotional convergence between two people (Arizmendi 2011). These are automatic processes that may be manifested in the social and behavioral environment, as well as in the psychological and physiological realm (Hatfield, Cacioppo, and Rapson 1994). These processes are linked to each other, so that in the behavioral realm mimicry contributes significantly to emotional synchrony. In their studies of social interaction in general, Hatfield, Cacioppo, and Rapson (1994) demonstrated that people in conversation automatically mimic and synchronize their movements with those of others in the conversation, and that this synchrony is based on the facial expressions, voice qualities, body postures, movements, and significant behaviors of the other person, "all of which results in an emotional convergence between them" (Hatfield, Cacioppo, and Rapson 1994: 81). Emotional experiences result from either the mimicry itself or the feedback one receives as a result of it: according to the authors, "people tend to 'catch' others' emotions, moment to moment" (Hatfield, Cacioppo, and Rapson 1994: 11).
Moreover, when interaction synchrony occurs, the outcome is positive affect (Chapple 1970); within the organizational setting too, this positive affect can take the form of satisfaction with the interaction (Bernieri 1988) or greater group rapport (Tickle-Degnen and Rosenthal 1990), while lack of synchrony is unpleasant. Thus, within organizations and work contexts in general, synchrony, mirroring, and automaticity in emotional contagion play a fundamental role in both power relations and social interaction. People in powerful or dominant positions tend to impose their emotional state and their way of behaving and moving, while other people try to synchronize their emotional state (and posture) with them. But it is also possible that one way to start exerting power from a powerless position is to actively use, consciously or not, such process complexities in order to establish common ground with a powerful interlocutor. Finally, bodily features can have interpersonal effects, such as interpersonal power, on a person's interlocutor via the effects they first have on the person him- or herself, for example at the level of social-power self-perceptions and behaviors. For example, recent research (e.g., Carney, Cuddy, and Yap 2010) shows that, controlling for subjects' baseline levels of both testosterone and cortisol, high-power poses decreased cortisol by about 25% and increased testosterone by about 19% for both men and women. In contrast, low-power poses increased cortisol by about 17% and decreased testosterone by about 10%. Moreover, high-power posers of both genders also reported greater feelings of being powerful and in charge. This means that "posing in displays of power caused advantaged and adaptive psychological, physiological, and


VII. Body movements – Functions, contexts, and interactions

behavioral changes, and these findings suggest that embodiment extends beyond mere thinking and feeling, to physiology and subsequent behavioral choices. That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications” (Carney, Cuddy, and Yap 2010: 1363).

3.6. Some content issues

Bodily communication is involved in all aspects of organizational life: not only in those linked with performance and organizational citizenship behaviors (Organ, Podsakoff, and MacKenzie 2006), as in most of the above-mentioned cases, but also with counterproductive organizational behaviors (Sackett and DeVore 2001). Moreover, it can be subtly linked to an organizational culture and climate. A good example is the phenomenon known as mobbing, a form of workplace bullying consisting in the use of persistent aggressive or unreasonable behavior against a co-worker or subordinate (Leymann 1990). In general, bodily cues play a crucial role in face-to-face communication, as they can completely modify the meaning of verbal messages: transferring the communicative level from the content to the relation; moving the interpretation of the semantic meaning from the literal to a non-literal level and vice versa; converting usual power conflicts into personal ones; etc. (i.e., exerting many meta-communication functions; e.g., Bateson 1972; Kendon 2004; Watzlawick, Beavin, and Jackson 1967). That is to say, in workplace bullying and mobbing phenomena the meanings exchanged between the bully and the victim depend very much on the social interaction that occurs between them: meanings are assigned to, and via, both verbal and bodily symbols within the framework of the remote and recent interactional history of that dyad. It is important to note that meanings are assigned both to the objective communication and to the perceptions of communication, and that all this is filtered through the specific culture of the organization (Downs 1988). Thus, shared meanings are assigned to both verbal and bodily communication. Communication in mobbing situations is usually avoided, limited, troubled, unclear, and frequently subject to misunderstanding or incomplete comprehension of messages. 
In such a framework, mobbing can be interpreted as the result of a failure in communication. As mobbing behavior is sneaky and not evident, bodily communication can be a privileged means for the development and further spreading of this form of psychological terror. Observing and analyzing workers' bodily behavior could therefore help in preventing or diagnosing a mobbing situation in a workplace. More generally, according to Aguinis and Henle (2001), employees' effectiveness is based on their perceptions of power, an issue closely interdependent with the broader issue of gender. The authors defined power as "the potential of an agent to alter a target's behaviour, intentions, attitudes, beliefs, emotions, or values" (Aguinis and Henle 2001: 537), and they refer to French and Raven's power taxonomy: reward, coercive, legitimate, referent, and expert power (French and Raven 1959). Aguinis and Henle (2001) regarded bodily communication as "relevant to several interpersonal processes such as deception, impression formation, attraction, social influence, and emotional expression" (Aguinis and Henle 2001: 537). Within their framework of social interaction at work, "on the basis of culturally outlined gender roles, men and women are expected to behave in certain ways: when they violate these expectations, others may evaluate them negatively" (Aguinis and Henle 2001: 538). Consistently, they found differences in gender and power perceptions in relation to bodily


communication at work: for example, bodily communication by women at work – such as long eye contact, expressive facial movements, and prominent posture – often resulted in either a negative evaluation or a lowered perceived power (for further discussion of gender- and workplace-related bodily communication, see in this handbook chapters 170 and 176).

4. Conclusion

A range of important effects has been reviewed, showing that bodily communication can affect all stages of organizational life and the socialization of employees and managers at work. From the initial stage and preliminary settings, such as employment interviews, up to full membership and the active exercise of organizational power, organizational interactions are pervasively affected by all features of bodily communication: dress and appearance; uses of space, objects, and accessories; interpersonal distances and postures; hand gestures and other body movements; gaze and voice parameters. These different bodily communication features work according to complex processes: co-occurrence among a plurality of different synchronous features, and between those and the parallel verbal features of communication; reciprocity of bodily communication features among the interlocutors, which appears to rely on mirroring; automaticity of bodily communication, which may play an important role in phenomena such as emotional contagion, even without full awareness by the people involved; and, finally, mediated effects on social interactions, with bodily communication influencing a person's self-perceptions and actions. Some specific relations are outlined by the scientific literature. However, it must be stressed that the consistent patterns reported above have been obtained within a rather homogeneous cultural background, i.e., western organizational contexts. This means that the specific content of the results could vary cross-culturally, but also from one organizational (sub-)culture to another. A careful contextualization of the established results, and of the relationships of bodily communication features with work and organizational phenomena, is therefore always required. 
Within the presented framework, further research may provide useful contributions toward developing more efficient, healthy, and fair organizational working environments, settings, and cultures. An important implication is also that of developing useful knowledge and tools for improving, at both employee and managerial levels, the personnel's awareness of bodily communication, given its proven social impact on interactions with internal and external organizational stakeholders, as well as on the organizational personnel itself at both the self-perception and behavioral levels.

5. References

Adams, Reginald B. and Robert E. Kleck 2005. Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion 5(1): 3–11.
Aguinis, Herman and Christine A. Henle 2001. Effects of nonverbal behavior on perceptions of a female employee's power bases. Journal of Social Psychology 141(4): 537–549.
Amalfitano, Joseph G. and Neil C. Kalt 1977. Effects of eye contact on the evaluation of job applicants. Journal of Employment Counseling 14(1): 46–48.


Anderson, Peter A. and Linda L. Bowman 1999. Positions of power: Nonverbal influence in organizational communication. In: Laura K. Guerrero, Joseph A. DeVito and Michael L. Hecht (eds.), The Nonverbal Communication Reader: Classic and Contemporary Readings, 317–334. Prospect Heights, IL: Waveland.
Arizmendi, Thomas G. 2011. Linking mechanisms: Emotional contagion, empathy, and imagery. Psychoanalytic Psychology 28(3): 405–419.
Atkinson, J. Max 1984. Our Masters' Voices. London: Routledge.
Bales, Robert F. 1950. Interaction Process Analysis: A Method for the Study of Small Groups. Reading, MA: Addison-Wesley.
Bateson, Gregory 1972. Steps to an Ecology of Mind. Chicago: University of Chicago Press.
Baumeister, Roy F. 1982. A self-presentational view of social phenomena. Psychological Bulletin 91: 3–26.
Berger, Charles R. 1994. Power, dominance, and social interaction. In: Mark L. Knapp and Gerald R. Miller (eds.), Handbook of Interpersonal Communication, second edition, 450–507. Thousand Oaks, CA: Sage.
Bernieri, Frank J. 1988. Coordinated movement and rapport in teacher-student interactions. Journal of Nonverbal Behavior 12(2): 120–138.
Burgoon, Judee K. and Gregory D. Hoobler 2002. Nonverbal signals. In: Mark L. Knapp and John A. Daly (eds.), Handbook of Interpersonal Communication, third edition, 240–299. Thousand Oaks, CA: Sage.
Byrne, Don 1971. The Attraction Paradigm. New York, NY: Academic Press.
Carney, Dana R., Amy J.C. Cuddy and Andy J. Yap 2010. Power posing: Brief nonverbal displays affect neuroendocrine levels and risk tolerance. Psychological Science 21(10): 1363–1368.
Chapple, Eliot D. 1970. Experimental production of transients in human interaction. Nature 228: 630–633.
De Paulo, Bella M. 1992. Nonverbal behavior and self-presentation. Psychological Bulletin 111(2): 203–243.
Downs, Cal W. 1988. Communication Audits. New York, NY: Harper Collins Publishers.
Edinger, Joyce A. and Miles L. Patterson 1983. Nonverbal involvement and social control. 
Psychological Bulletin 93: 30–56.
Ekman, Paul 1976. Movements with precise meanings. Journal of Communication 26(3): 14–26.
Exline, Ralph V., Steve L. Ellyson and Barbara Long 1975. Visual behavior as an aspect of power role relationships. In: Patricia Pliner, Lester Krames and Thomas Alloway (eds.), Nonverbal Communication of Aggression, 21–52. New York: Plenum.
Fiske, Alan P. 1991. Structures of Social Life. The Four Elementary Forms of Human Relations: Communal Sharing, Authority Ranking, Equality Matching, Market Pricing. New York: Free Press.
Fiske, Susan T., Amy J.C. Cuddy and Peter Glick 2007. Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences 11(2): 77–83.
Forbes, Ray J. and Paul R. Jackson 1980. Nonverbal behavior and the outcome of selection interviews. Journal of Occupational Psychology 53: 67–72.
Forsythe, Sandra 1990. Effect of applicant's clothing on interviewer's decision to hire. Journal of Applied Social Psychology 20(19): 1579–1595.
Fortenberry, James H., Joyce MacLean, Priscilla Morris and Michael O'Connell 1978. Mode of dress as a perceptual cue to deference. Journal of Social Psychology 104(1): 139–140.
French, John R.P. and Bertram Raven 1959. The bases of social power. In: Dorwin Cartwright (ed.), Studies in Social Power, 150–167. Ann Arbor, MI: University of Michigan.
Gifford, Robert, Cheuk Fan Ng and Margaret Wilkinson 1985. Nonverbal cues in the employment interview: Applicant qualities and interviewer judgments. Journal of Applied Psychology 70(4): 729–736.
Hall, Judith A., Erik J. Coats and Lavonia Smith LeBeau 2005. Nonverbal behavior and the vertical dimension of social relations: A meta-analysis. Psychological Bulletin 131(6): 898–924.

Hall, Judith A. and Gregory B. Friedman 1999. Status, gender, and nonverbal behavior: A study of structured interactions between employees of a company. Personality and Social Psychology Bulletin 25(9): 1082–1091.
Haslam, S. Alexander, Daan van Knippenberg, Michael J. Platow and Naomi Ellemers (eds.) 2003. Social Identity at Work: Developing Theory for Organizational Practice. New York: Psychology Press.
Hatfield, Elaine, John T. Cacioppo and Richard L. Rapson 1992. Primitive emotional contagion. Emotion and social behavior. Review of Personality and Social Psychology 14: 151–177.
Hatfield, Elaine, John T. Cacioppo and Richard L. Rapson 1994. Emotional Contagion. Studies in Emotion and Social Interaction. Cambridge: Cambridge University Press.
Jones, Edward E. and Thane S. Pittman 1982. Toward a general theory of strategic self-presentation. In: Jerry Suls (ed.), Psychological Perspectives on the Self, Volume 1, 231–262. Hillsdale, NJ: Erlbaum.
Kelly, Janice R. and Sigal G. Barsade 2001. Mood and emotions in small groups and work teams. Organizational Behavior and Human Decision Processes 86(1): 99–130.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Levine, John M. and Richard L. Moreland 1994. Group socialization: Theory and research. European Review of Social Psychology 5(1): 305–336.
Leymann, Heinz 1990. Mobbing and psychological terror at workplaces. Violence and Victims 5(2): 119–126.
Mann, Michael 1986. The Sources of Social Power. Cambridge: Cambridge University Press.
Maricchiolo, Fridanna, Augusto Gnisci, Marino Bonaiuto and Gianluca Ficca 2009. Effects of different types of hand gestures in persuasive speech on receivers' evaluations. Language and Cognitive Processes 24(2): 239–266.
Maricchiolo, Fridanna, Stefano Livi, Marino Bonaiuto and Augusto Gnisci 2011. Hand gestures and perceived influence in small group interaction. 
Spanish Journal of Psychology 14(2): 755–764.
McGovern, Thomas V. and Howard E.A. Tinsley 1978. Interviewer evaluations of interviewee nonverbal behavior. Journal of Vocational Behavior 13(2): 163–171.
Mehrabian, Albert 1970. The development and validation of measures of affiliative tendency and sensitivity to rejection. Educational and Psychological Measurement 30(2): 417–428.
Mehrabian, Albert 1972. A semantic space for nonverbal behavior. Journal of Consulting and Clinical Psychology 35(2): 248–257.
Moreland, Richard L. and John M. Levine 1982. Socialization in small groups: Temporal changes in individual-group relations. Advances in Experimental Social Psychology 15(C): 137–192.
Organ, Dennis W., Philip M. Podsakoff and Scott B. MacKenzie 2006. Organizational Citizenship Behavior: Its Nature, Antecedents and Consequences. Beverly Hills, CA: Sage.
Osgood, Charles E., George J. Suci and Percy H. Tannenbaum 1957. The Measurement of Meaning. Urbana: University of Illinois Press.
Remland, Martin S. 1981. Developing leadership skills in nonverbal communication: A situational perspective. Journal of Business Communication 18(3): 17–29.
Remland, Martin S. 2006. Uses and consequences of nonverbal communication in the context of organizational life. In: Valerie Manusov and Miles L. Patterson (eds.), The Sage Handbook of Nonverbal Communication, 501–519. Thousand Oaks, CA: Sage.
Sackett, Paul R. and Cynthia J. DeVore 2001. Counterproductive behaviors at work. In: Neil Anderson, Deniz Ones, Handan Sinangil and Chockalingam Viswesvaran (eds.), Handbook of Industrial, Work and Organizational Psychology, 145–164. Thousand Oaks, CA: Sage.
Schlenker, Barry R. and Michael F. Weigold 1989. Goals and the self-identification process: Constructing desired identities. In: Lawrence Pervin (ed.), Goal Concepts in Personality and Social Psychology, 243–290. Hillsdale, NJ: Erlbaum.
Stuart, Elnora W. and Barbara K. Fuller 1991. 
Clothing as communication in two business-to-business sales settings. Journal of Business Research 23(4): 269–290.


Temple, Linda E. and Karen R. Loewen 1993. Perceptions of power: First impressions of a woman wearing a jacket. Perceptual and Motor Skills 76: 339–348.
Tickle-Degnen, Linda and Robert Rosenthal 1987. Group rapport and nonverbal behavior. In: Clyde Hendrick (ed.), Review of Personality and Social Psychology, Volume 9, 113–136. Newbury Park, CA: Sage.
Tickle-Degnen, Linda and Robert Rosenthal 1990. The nature of rapport and its nonverbal correlates. Psychological Inquiry 1(4): 285–293.
Watzlawick, Paul, Janet H. Beavin and Don D. Jackson 1967. Pragmatics of Human Communication: A Study of Interactional Patterns, Pathologies, and Paradoxes. New York: W.W. Norton and Co.
Wiggins, Jerry S. 1979. A psychological taxonomy of trait-descriptive terms: The interpersonal domain. Journal of Personality and Social Psychology 37(3): 395–412.
Wilson, Mark G., David M. DeJoy, Robert J. Vandenberg, Hettie A. Richardson and Allison L. McGrath 2004. Work characteristics and employee health and well-being: Test of a model of healthy work organization. Journal of Occupational and Organizational Psychology 77(4): 565–588.

Marino Bonaiuto, Roma (Italy) Stefano De Dominicis, Roma (Italy) Uberta Ganucci Cancellieri, Reggio Calabria (Italy)

97. Gesture and conversational units

1. Introduction
2. Kinesic and temporal features of gesture
3. Some functions of gesture
4. Conclusion
5. References

Abstract

Speech is packaged in prosodic units (called intonation phrases or tone units) in order to facilitate information processing. Gesture phrases can be seen as visible corollaries to intonation phrases. They derive their functions from their specific performance and their particular relation to prosodic and syntactic features of the ongoing or projected utterance. In section 2.1, the kinesic features of gesture phrases and gesture units are described. Temporal relationships between gesture and speech are addressed in section 2.2, showing that gestures' strokes are related to the nucleus of tone units without necessarily being exactly synchronized with it. Functions of gestures as integral parts of utterances are discussed in section 3: in coordination with syntactic and intonation units, gesture phrases and gesture units serve information processing (see section 3.1). Other functions of gesture are turn-construction (see section 3.2) and the organization of turn-taking (see section 3.3). Recognition of gesture leads to a revision of central concepts in interaction analysis, e.g., the notions of speaker and hearer as well as the conversation-analytic conceptualization of turn-constructional units and transition relevance places. These, along with open research questions, are addressed in section 4.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1360–1367


1. Introduction

Discourse analysts, despite their varying theoretical backgrounds, generally agree that speech is packaged in prosodic units (called intonation phrases or tone units) in order to facilitate information processing. Gesture phrases can be seen as visible corollaries to intonation phrases. The kinesic features of gesture phrases and gesture units have been described most prominently by Adam Kendon (see section 2.1; temporal relations between gesture and speech are discussed in section 2.2). In coordination with syntactic and intonation units, gesture phrases and gesture units serve information processing (see section 3.1). Other functions of gesture are turn-construction (see section 3.2) and the organization of turn-taking (see section 3.3). Recognition of gesture leads to a revision of central concepts in interaction analysis. These, along with open research questions, are addressed in section 4.

2. Kinesic and temporal features of gesture

2.1. Kinesic description of gesture phrase and gesture unit

Kendon (2004a: 108) defines gesture phrases as "units of visible bodily action […] which correspond to meaningful units of action such as a pointing, a depiction, a pantomime or the enactment of a conventionalized gesture". Gesture units and gesture phrases are defined by kinesic features: a gesture unit encompasses the whole excursion of the forelimbs, from the movement's beginning until the return to a position of rest. Any gesture unit may contain one or more gesture phrases. Gesture phrases are constituted by a gesture's stroke: the moment at which the movement reaches its apex and is best defined. Typically, speakers prepare for gesticulating by bringing their hand(s) into a "starting position". These preparation phases, along with any phase in which the stroke is held, are equally counted as part of the gesture phrase (see Kendon 2004a: 124).
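For readers who annotate video data computationally, Kendon's kinesic hierarchy can be rendered as a simple nested data structure. The sketch below is illustrative only: the class and field names (GestureUnit, GesturePhrase, Phase) are our own assumptions, not part of any published coding scheme or annotation tool.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Phase:
    kind: str     # hypothetical labels: "preparation", "stroke", or "hold"
    start: float  # seconds from the start of the recording
    end: float

@dataclass
class GesturePhrase:
    # A gesture phrase is constituted by its stroke; preparation and
    # hold phases count as part of the same phrase (Kendon 2004a: 124).
    phases: List[Phase]

    def stroke(self) -> Optional[Phase]:
        return next((p for p in self.phases if p.kind == "stroke"), None)

@dataclass
class GestureUnit:
    # The whole excursion of the forelimbs, from leaving a rest position
    # until returning to one; contains one or more gesture phrases.
    phrases: List[GesturePhrase]

    def span(self) -> Tuple[float, float]:
        return (self.phrases[0].phases[0].start,
                self.phrases[-1].phases[-1].end)

# One gesture unit containing two gesture phrases (invented timings).
unit = GestureUnit(phrases=[
    GesturePhrase([Phase("preparation", 1.0, 1.4),
                   Phase("stroke", 1.4, 1.7),
                   Phase("hold", 1.7, 2.1)]),
    GesturePhrase([Phase("stroke", 2.1, 2.5)]),
])
print(unit.span())  # full excursion, rest to rest: (1.0, 2.5)
```

The nesting mirrors the definition in the text: the unit's span is the full excursion, while each phrase is anchored to an obligatory stroke.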

2.2. Temporal relations between gesture and speech

In experimental settings as well as in naturally occurring conversations, it has often been observed that a gesture's onset and/or stroke coincides with, or slightly precedes, some lexical item the gesture can be related to. In some psycholinguistic analyses, pre-positioned gestures are said to facilitate lexical retrieval (for a critical discussion, see Chui 2005 and the literature cited there; competing explanations for the temporal coordination between gesture and speech in terms of utterance production are discussed in de Ruiter 2000; Krauss, Chen, and Gottesman 2000; McNeill 2000). In conversation-analytic studies, pre-positioned gestures are considered to "render the scene in which the talk arrives a prepared scene" (Schegloff 1984: 291). However, given the variability of temporal relationships between a gesture's stroke and the nucleus of the intonation phrase, Kendon (2004a: 125) considers the assumption held in many papers "that the tonic centre of the tone unit was also the 'high information' word of the phrase, so that the stroke of the gesture phrase, the tonic syllable, and the information centre of the speech phrase, were all co-occurrent" to be an inadequate generalization.


3. Some functions of gesture

3.1. Constitution of functional units

The rendition of turns in units enables moment-by-moment information processing by recipients. The intonation phrase, as well as the gesture phrase, is seen as a manifestation of the underlying ideation process (see, e.g., Chafe 1993: 40; Kendon 1980: 216, 2004a: 126; the interrelationships between intonation and gesture phrases cannot be accounted for here, but see Loehr this volume). Typically, a syntactic phrase is produced under a single coherent intonation phrase and is accompanied by a single gesture phrase. Yet the co-occurrence of syntactic, intonation, and gesture phrases is not an inevitable outcome of the process of utterance: with regard to the beginning and ending of gesture phrases in relation to intonation phrases, Schönherr (1997) showed that, typically, syntactic, prosodic, and gestural boundary signals (Grenzsignale) are congruent, yet gesture may begin earlier and last slightly longer. She concludes that prosody is more strongly tied to syntax than gesture is (Schönherr 1997: 132). Furthermore, no universal one-to-one relation between syntactic, intonation, and gesture phrases has been found to exist: in a comparative study, Kendon (2004b) found cultural differences in the number of gesture phrases per tone unit between Neapolitan and English speakers. In their analysis of public political discourse, Jannedy and Mendoza-Denton (2005) found that "95.7% of all apices were accompanied by a pitch accent whereas only 69.4% of all pitch accents were additionally marked by a gesture apex" (Jannedy and Mendoza-Denton 2005: 233).

3.2. Construction of turns

In one of the most influential papers on turn-taking, Sacks, Schegloff, and Jefferson (1974) conceived of turn-constructional units as mainly syntactically defined, consisting of words, clauses, or phrases, without recognition of prosody and gesture. In recent years, the role of prosody and gesture in turn-construction has been analyzed. It has been shown that gesture may be used to project globally, by projecting meaning aspects that will later be formulated, and locally: the end or the continuation of the turn is projected by returning into a position of rest or by continuing with a new gesture phrase. The return of a gesticulating hand into a position of rest or the relaxation of a tensed hand serves as a turn-ending signal (Duncan and Fiske 1977; Schwitalla 1979), corresponding to ending intonation. Conversely, whenever at some syntactic and/or prosodic possible completion point the hands do not return into a position of rest and/or are not relaxed, corresponding to the use of continuing intonation, continuation of the turn is projected. Streeck (2009) analyzes the scope of gesture's projection depending on its position in the turn (before and at turn-beginning, in multi-unit turns, in mid-turn, and at turn-completion). He shows that "the joined comprehension of gesture and talk is not only a matter of their positioning in relation to one another, but also of the 'position' or 'slot' within the unfolding turn and sequence in which the gesture is inserted" (Streeck 2009: 177). Schönherr (1997) showed that gesture is used in coordination with prosody to signal continuity or discontinuity between syntactic units, to varying degrees, in the cases of sentences, parentheses, cut-offs, and new starts. Although continuity is signalled in both


modes, prosody and gesture, there seems to be a hierarchy between them: whenever continuity is signalled, prosody is engaged, occasionally supported by gesture. Duncan and Fiske (1977), Schwitalla (1979), and Bohle (2007) have shown that participants orient to the absence or presence of bodily tension and/or movement as signals for beginning, continuing, or ending a turn. Next speakers do not take the turn until the current speaker's hands have returned into a position of rest. In cases where a gesture phrase ends after an otherwise possible completion point, minimal responses as well as beginnings of next turns are placed at possible completion points of gesture phrases rather than at those of syntactic and/or intonation phrases (Bohle 2007). Bohle (2007: 287) concludes that the notion of the turn-constructional unit has to be refined in order to cover gesture phrases as an integral part of the utterance. Furthermore, the notions of a possible completion point and of a transition relevance place have to be disentangled, because in multi-unit turns, internal possible completion points in one or two of the modes (syntax, prosody, gesture) may be suspended as transition relevance places by signaling continuation in the same or one of the other modes (for a similar argument based on the analysis of prosody and syntax alone, see Ford and Thompson 1996; Selting 2000). Yet, as de Ruiter, Mitterer, and Enfield (2006: 519, original emphasis) argue, "the observation that certain intonational [and gestural; UB] phenomena cooccur with turn endings does not mean that they are used by the listener as anticipatory cues for projection". In an experimental setting, the authors found that "removing pitch information from the stimuli had no influence on projection accuracy. By contrast, removing lexical content […] did have a strong detrimental effect on projection accuracy" (de Ruiter, Mitterer, and Enfield 2006: 27, original emphasis). 
They conclude that “lexico-syntactic structure is necessary (and possibly sufficient) for accurate end-of-turn projection, while intonational structure, perhaps surprisingly, is neither necessary nor sufficient” (de Ruiter, Mitterer, and Enfield 2006: 531). Comparable studies for gesture have not been conducted.

3.3. Turn-taking

3.3.1. Turn-taking as a function in classifications of gesture

In their functional classification of bodily behavior, Ekman and Friesen (1969) distinguish between illustrators, that is, gestures that are tied to speech and serve to "illustrate" what is being said, and regulators, which "are acts which maintain and regulate the back-and-forth nature of speaking and listening between two or more interactants" (Ekman and Friesen 1969: 82) (see Schönherr this volume). In her revision of this classification, Bavelas and her colleagues (1992, 1995) propose a new division of the illustrator class into topic-related gestures and interactive gestures, with turn gestures as a subcategory of the latter. Among these, they differentiate turn yielding, turn taking, and turn open (i.e., signaling that it is anyone's turn) as turn-taking-related functions (see Knapp and Hall 1997: 270). Ekman and Friesen as well as Bavelas emphasize that any single gesture may serve several functions. Thus, it remains unclear whether regulators vs. illustrators (or turn gestures vs. topic-related gestures) should be considered separate types of gesture, or whether turn-construction and turn-taking are functions of any gesture. Furthermore, how exactly these functions are performed and how they are related to one another remains quite unclear. Without a consistent model for the organization of turn-taking, those specific functions cannot be determined precisely.


3.3.2. Gesture in models of turn-taking

On the basis of empirical observations, Duncan and Fiske (1977) postulate that participants in conversations signal by verbal, prosodic, and bodily cues whether they want to maintain or to change their participant state (being either speaker or listener). Gesture is found to be involved in three turn-taking-related signals: the speaker turn signal, consisting of the termination of any hand gesticulation or the relaxation of a tensed hand (e.g., a fist) (Duncan and Fiske 1977: 185); the speaker gesticulation signal, for which one or both of the speaker's hands are engaged in gesticulation or in a tensed hand position (switching off the gesticulation signal automatically constitutes the display of a turn cue) (Duncan and Fiske 1977: 188); and the speaker state signal, which is constituted by the initiation of a gesticulation (except for self- and object adaptors) (Duncan and Fiske 1977: 216). Yet empirical studies on turn-taking show that speakership transition often occurs far too quickly, and next speakers take the turn too early, to be guided by the current speaker's turn-yielding signals. In an alternative model of turn-taking, developed by the conversation analysts Sacks, Schegloff, and Jefferson (1974), places of possible speakership transition can be anticipated by participants, which allows for smooth speakership transition without gap or overlap. This so-called "simplest systematics" has become the most influential model of turn-taking. However, since it was originally developed on the basis of telephone conversations, bodily behavior was not included. Nevertheless, this model allows for the systematic integration of gesture, as has been shown by Streeck and Hartge (1992), Streeck (2009), and Bohle (2007).
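Duncan and Fiske's three gesture-related signals can be summarized as a small rule set. The sketch below is a paraphrase of their model in code form, not an implementation of any published coding scheme; the predicate and label names are our own assumptions, and the booleans describe the current speaker's hands at a given moment.

```python
def turn_signal(gesticulating: bool, hand_tensed: bool,
                initiating_gesture: bool) -> str:
    """Map a speaker's hand state to one of Duncan and Fiske's (1977)
    three gesture-related signals (a simplified, hypothetical mapping)."""
    if initiating_gesture:
        # Speaker state signal: initiating gesticulation claims the turn
        # (self- and object adaptors excepted in the original model).
        return "speaker-state signal: turn claimed"
    if gesticulating or hand_tensed:
        # Gesticulation signal: while on, turn-yielding cues are suppressed.
        return "gesticulation signal: turn maintained"
    # Turn signal: ending gesticulation or relaxing a tensed hand
    # displays a turn cue to the listener.
    return "turn signal: turn may be yielded"

print(turn_signal(gesticulating=False, hand_tensed=False, initiating_gesture=True))
print(turn_signal(gesticulating=True, hand_tensed=False, initiating_gesture=False))
print(turn_signal(gesticulating=False, hand_tensed=False, initiating_gesture=False))
```

The point of the rule order is the one made in the text: switching the gesticulation signal off is itself what constitutes the display of a turn cue.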

3.3.3. Empirical studies on gesture and turn-taking
The gestural signals observed by Duncan and Fiske (1977) have been confirmed by Schwitalla (1979) for German TV interviews. Furthermore, he observed some gestures specifically designed for turn-allocation and for defending against attempts at interruption. In an analysis of the use of gesture in German TV conversation, Weinrich (1992) established a gesture vocabulary that encompasses several gestures specifically related to turn-allocation. Among others, she found holding out one hand (Halthand) or building a corridor with both hands (Wandhände) used as means of preventing interruption. In a conversation-analytic study of gestures at the transition space, Streeck and Hartge (1992) found that next speakers may self-select by beginning a gesture right before a syntactic/prosodic possible completion point. Such gestures in the transition space not only secure speaking rights but also foreshadow the semantic and/or pragmatic meaning of the turn to come (for further examples, see Streeck 2009). For dyadic ordinary conversation, Bohle (2007) showed that participants exploit gesture’s articulatory independence from speech both for smooth speakership transition and for turn-competitive incomings. In cases of interruption, current speakers may continue, recycle, or hold a gesture just begun in order to maintain speaking rights without actually speaking, keeping aspects of the interrupted turn’s meaning visibly relevant. Likewise, in a study of pointing gestures, Mondada (2007) gives examples of speakers continuing a pointing gesture throughout an adjacency-pair sequence initiated by them, or even throughout larger sequences, showing that speaking rights and obligations do not end with the boundaries of current turns but extend over longer sequences under their control. In the same study, Mondada shows how the beginning of a pointing


gesture before the actual completion of a current speaker’s turn, or at the beginning of the incipient speaker’s turn, manifests the gradual and interactively established change of participant state from current non-speaker via would-be speaker to incipient speaker. It is especially these cases of listeners gesticulating during another’s speech, and of speakers “only” gesticulating without saying a word, which demonstrate that speaking rights and obligations are shared rather than divided. Thus, speaker and hearer are not mutually exclusive roles; interlocutors are better seen as co-present participants who, in varying ways and to varying degrees, contribute to the ongoing conversation (Bohle 2007: 287–288).

4. Conclusion
While current classifications of gesture remain rather vague as to whether turn-taking is the function of a specific type of gesture or a potential function of any gesture, empirical studies show that participants make use of gesture’s articulatory independence from speech for turn-construction and turn-taking/turn-allocation in two different ways: by specific turn gestures and by the temporal relationship of gesture phrases to syntactic and intonation phrases. Regardless of its specific semantic/pragmatic relation to the utterance’s meaning, a gesture of any type may serve to take, to hold, to defend, or to yield the turn. Thus, gesture is better analyzed not only with regard to its relation to some verbal affiliate, but also with regard to its position in the turn, which shapes the gesture’s significance (Streeck 2009: 177). Gesture is not adequately accounted for in common models of turn-taking. In the signal model, the gesture’s contribution to the utterance’s meaning as well as its projective force are disregarded. In the CA model, gesture is not considered at all, but it can be integrated. Recent studies in the multimodal approach to communication rely on this model, yet they reveal its limitations and lead to a revision of some fundamental concepts (see, e.g., Schmitt 2005; for a revision of the notions of turn-constructional unit, possible completion point, and transition relevance place, see Bohle 2007: 282–288). Furthermore, considering the temporal dynamics of syntactic structures leads to a re-conceptualization of syntax as “emergent grammar”, a focus of experimental psycholinguistics as well as conversation analysis (see de Ruiter, Mitterer, and Enfield 2006: 532). How syntax, along with other linguistic structures, is used by participants for interactive purposes is a major object of investigation in interactional linguistics (as programmatically formulated in Selting and Couper-Kuhlen 2000, 2001).
Despite the growing body of research on this topic in recent years, many questions remain open. Firstly, the specific functional impact of syntax, prosody, and gesture, and their interrelation in the structuring of information and in the organization of turn-taking, remains opaque. Secondly, little is known about the interplay of syntax, prosody, and gesture with other modes of communication (see, however, Goodwin 1981 and Schönherr 1997 for gaze). Perhaps even less is known about the use of specific turn gestures, as opposed to gesture phrases in general, across different communicative settings. Most studies rely either on ordinary conversations or on TV conversations. The influence of the number of participants (dyadic vs. multi-party conversation), the setting (ordinary conversation vs. institutional communication), and the level of formality still remains to be investigated. Another topic for research is the acquisition of the mechanisms of turn-taking, including the specific multimodal resources


used. Lastly, comparative analysis could reveal if and how the specific functions of gesture vary with linguistic structure and/or culture (see Kita 2009 for an overview of cross-cultural variation in speech-accompanying gesture).

Acknowledgements
Many thanks to Friederike Kern and Joe Couve de Murville for their careful reading and valuable comments on an earlier version of this paper.

5. References
Bavelas, Janet Beavin, Nicole Chovil, Lina Coates and Lori Roe 1995. Gestures specialized for dialogue. Personality and Social Psychology Bulletin 21(4): 394–405.
Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15(4): 469–489.
Bohle, Ulrike 2007. Das Wort ergreifen – das Wort übergeben. Explorative Studie zur Rolle redebegleitender Gesten in der Organisation des Sprecherwechsels. Berlin: Weidler.
Chafe, Wallace L. 1993. Prosodic and functional units of language. In: Jane A. Edwards and Martin Lampert (eds.), Talking Data: Transcription and Coding in Discourse Research, 33–43. Hillsdale, NJ: Lawrence Erlbaum.
Chui, Kawai 2005. Temporal patterning of speech and iconic gesture in conversational discourse. Journal of Pragmatics 37(6): 871–887.
de Ruiter, Jan Peter 2000. The production of gesture and speech. In: David McNeill (ed.), Language and Gesture, 284–311. Cambridge: Cambridge University Press.
de Ruiter, Jan Peter, Holger Mitterer and N.J. Enfield 2006. Projecting the end of a speaker’s turn: A cognitive cornerstone of conversation. Language 82(3): 515–535.
Duncan, Starkey Jr. and Donald W. Fiske 1977. Face-to-Face Interaction: Research, Methods, and Theory. Hillsdale, NJ: Lawrence Erlbaum.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98.
Ford, Cecilia and Sandra Thompson 1996. Interactional units in conversation: Syntactic, intonational, and pragmatic resources for the management of turns. In: Elinor Ochs, Emanuel Schegloff and Sandra Thompson (eds.), Interaction and Grammar, 134–184. Cambridge: Cambridge University Press.
Goodwin, Charles 1981. Conversational Organization: Interaction between Speakers and Hearers. New York: Academic Press.
Jannedy, Stefanie and Norma Mendoza-Denton 2005. Structuring information through gesture and intonation.
In: Stefanie Dipper (ed.), Interdisciplinary Studies on Information Structure 3: Approaches and Findings in Oral, Written and Gestural Language, 199–244. Potsdam: Universitätsverlag Potsdam.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary R. Key (ed.), Nonverbal Communication and Language, 207–227. The Hague: Mouton.
Kendon, Adam 2004a. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kendon, Adam 2004b. Some contrasts in gesticulation in Neapolitan speakers and speakers in Northamptonshire. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gesture: The Berlin Conference, 173–193. Berlin: Weidler.
Kita, Sotaro 2009. Cross-cultural variation in speech-accompanying gesture: A review. Language and Cognitive Processes 24(2): 145–167. http://dx.doi.org/10.1080/01690960802586188.


Knapp, Mark and Judith Hall 1997. The effects of gesture and posture on human communication. In: Mark Knapp and Judith Hall (eds.), Nonverbal Communication in Human Interaction. Fort Worth, TX: Harcourt Brace College Publishers.
Krauss, Robert, Yihsiu Chen and Rebecca F. Gottesman 2000. Lexical gestures and lexical access: A process model. In: David McNeill (ed.), Language and Gesture, 261–283. Cambridge: Cambridge University Press.
Loehr, Dan this volume. Gesture and prosody. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin/Boston: De Gruyter Mouton.
McNeill, David 2000. Catchments and contexts: Non-modular factors in speech and gesture production. In: David McNeill (ed.), Language and Gesture, 312–328. Cambridge: Cambridge University Press.
Mondada, Lorenza 2007. Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies 9(2): 194–225. http://dis.sagepub.com.content/9/2/194.
Sacks, Harvey, Emanuel Schegloff and Gail Jefferson 1974. A simplest systematics for the organization of turn-taking for conversation. Language 50(4): 696–735.
Schegloff, Emanuel 1984. On some gestures’ relation to talk. In: Maxwell Atkinson and John Heritage (eds.), Structures of Social Action: Studies in Conversation Analysis, 266–296. Cambridge: Cambridge University Press.
Schmitt, Reinhold 2005. Zur multimodalen Struktur von turn-taking. Gesprächsforschung – Online-Zeitschrift zur verbalen Interaktion 6: 17–61. (www.gespraechsforschung-ozs.de)
Schönherr, Beatrix 1997. Syntax – Prosodie – nonverbale Kommunikation: Empirische Untersuchungen zur Interaktion sprachlicher und parasprachlicher Ausdrucksmittel im Gespräch. Tübingen: Max Niemeyer.
Schönherr, Beatrix this volume.
Categories and functions of posture, gaze, face, and body movements. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin/Boston: De Gruyter Mouton.
Schwitalla, Johannes 1979. Dialogsteuerung in Interviews: Ansätze zu einer Theorie der Dialogsteuerung mit empirischen Untersuchungen von Politiker-, Experten- und Starinterviews in Rundfunk und Fernsehen. München: Hueber.
Selting, Margret 2000. The construction of units in conversational talk. Language in Society 29(4): 477–517.
Selting, Margret and Elizabeth Couper-Kuhlen 2000. Argumente für die Entwicklung einer Interaktionalen Linguistik. Gesprächsforschung – Online-Zeitschrift zur verbalen Interaktion 1: 76–95. (www.gespraechsforschung-ozs.de)
Selting, Margret and Elizabeth Couper-Kuhlen 2001. Forschungsprogramm Interaktionale Linguistik. Linguistische Berichte 187: 257–287.
Streeck, Jürgen 2009. Forward-gesturing. Discourse Processes 46(2): 161–179.
Streeck, Jürgen and Ulrike Hartge 1992. Previews: Gestures at the transition place. In: Peter Auer and Aldo di Luzio (eds.), The Contextualization of Language, 135–157. Amsterdam: John Benjamins.
Weinrich, Lotte 1992. Verbale und nonverbale Strategien in Fernsehgesprächen: Eine explorative Studie. Tübingen: Max Niemeyer.

Ulrike Bohle, Hildesheim (Germany)


98. The interactive design of gestures

1. Speaker’s perspective
2. Listener’s perspective
3. Summary
4. References

Abstract
There is accumulating evidence that gesture, rich with semantic and interactive content, is oriented towards the listener, and that the listener in turn retrieves some of its meaning. A natural corollary is the assumption that speakers accommodate their gestures to the immediate needs of the listener for better communication. That is, gesture may be recipient designed when intended as a communicative tool. Each listener comes into the conversational setting with various dispositions, for example, with varying levels of knowledge and attention. In addition, factors associated with the speech environment, such as the listener’s location or the visual availability of the speaker’s gesture (e.g., telephone vs. face-to-face conversation), impose certain constraints on how gesture is most effectively used to get meaning across. In this chapter, we first provide an overview of recent findings showing that speakers indeed modify gesture’s physical form, timing, and relation to speech according to these various factors. This is followed by research on listeners’ gestures and how listeners use them to achieve their interactional goals, such as signaling attention or strong involvement in talk and showing a desire to talk next.

1. Speaker’s perspective

1.1. Gesture rate
Cohen and Harrison (1973) first tested the claim that gestures are performed with communicative intent. To see whether speakers purposefully use their gestures to get meaning across to their listeners, the listener’s visual access to the speaker was experimentally manipulated. Their finding that more gesture was used in the visible condition pointed to communicative intention on the part of the speaker. By comparing two gesture types differing in the amount of semantic information they convey, Alibali, Heath, and Myers (2001) also found that the rate of representational but not beat gestures decreased when the listener could not see them, indicating that visibility has a stronger effect on gesture usage when the gestures have a clear semantic value for the listener. Yet another gesture type, and the one perhaps most prominently affected by visibility, is what Bavelas et al. (1992) call interactive gestures, whose defining characteristic is their role in referencing elements at the level of the interaction, namely the listener or his/her previous utterance in particular. Since these gestures both refer to and are physically oriented towards the listener, Bavelas et al. found that they are used significantly more when they are visible.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1368–1374.

1.2. Listener’s location
While the above research shows that gesture tends to disappear when the listener cannot see it, establishing the overall picture that gestures are oriented towards the listener, other


studies explore how gesture is recipient designed. Özyürek (2002) and Furuyama (2000), for instance, provide examples of ways in which gestures are designed in reference to the listener’s location, more specifically in terms of body orientation and direction of gaze. In expressing certain lexical items with directional meanings, the speaker adjusts the orientation of the gesture so that it conforms to the shared space determined by the relative seating positions of the listener(s) and the speaker (Özyürek 2002). Furuyama (2000) shows that during origami instruction, people in the instructor’s role tend to orient their bodies and gestures towards the listener in such a way that the information can be easily read off.

1.3. Ensuring an addressee
For an utterance to be a valid contribution in a multi-party conversation, it needs to be ratified by at least one listener, who usually receives the speaker’s direct gaze (Goodwin 1979, 1981; Schegloff 1972). If the intended recipient is distracted and not paying enough attention to acknowledge the content, the speaker redesigns the utterance, attempting to draw the attention of the same listener or redirecting it to another listener, so that the turn can be salvaged from becoming a monologue. Goodwin (1986) and Heath (1986) suggest that gesture, as a visual mode of representation, has the power to redirect a recipient’s gaze to the speaker. The speaker thus employs gesture as a resource, a kind of attention-getting device, to secure a recipient for the utterance. Oftentimes, however, not only speech but gesture itself requires a recipient in order to be ratified. Streeck (2009) reports a case in which a gesture, a visual demonstration of what was being described, was abandoned once the speaker realized that no one was looking. Only when the speaker found someone looking at her, someone in a position to acknowledge the gesture, was it resumed. Furthermore, in accordance with this new recipient, the speaker modified the recipient design of her gesture by reformulating her body position and the gesture’s location.

1.4. Common ground
Speakers regulate their speech behaviors by inferring the knowledge state of their listener. When referring to objects or events, speakers design their utterances to be more efficient and effective by selecting the most appropriate referential forms based on what is already known by the listener(s) (Clark and Schaefer 1989; Clark and Wilkes-Gibbs 1986; Schegloff 1972). The body of knowledge shared between speaker and listener is called common ground. Some studies suggest that common ground also influences gesture. For example, Holler and Beattie (2003) focused on the ambiguity associated with certain words and how gesture can help disambiguate it. Their finding that speakers produce more gestures when using words with homonyms, and thus when there is a potential need for disambiguation, points to speakers’ ability to organize gestures differently depending on the listener’s need for more information. There are also studies that try to gain insight into the effect of common ground by manipulating visibility to see whether it changes the way information is distributed across speech and gesture. When gesture is not visible, its meaning can no longer be recognized by the listener and become part of common ground. These studies assume that if gesture is recipient designed to ensure the transfer of meaning, information will be distributed differently depending on visibility. For instance, Bavelas et al. (2008) asked speakers


to describe a complicated dress with or without visibility. In face-to-face conversation, speakers could choose to convey information in either speech or gesture based on its encodability. However, when speaking via telephone or to a tape recorder, information that would otherwise have appeared in gesture had to be conveyed in speech. As a result, they found that speakers in such non-visible conditions showed higher redundancy between speech and gesture (with the same meaning encoded twice), while in face-to-face conversation meaning tended to be assigned to either speech or gesture but not both. A similar finding was made by Emmorey and Casey (2001), who showed in a block puzzle task that the number of verbal references to the blocks’ rotating movement (to fit a puzzle frame) increased when the director and the matcher could not see each other. The idea that speakers are sensitive to the interplay between speech and gesture and package information flexibly can also be generalized to situations where the visibility of gesture is not manipulated. In Melinger and Levelt (2004), speakers were grouped into two types, gesturers and non-gesturers, according to whether they used gesture spontaneously. Their analysis of how the two groups described objects’ locations (the relative position of circles along a path) revealed that information gesturers expressed in gesture was more likely to be omitted from their speech than from the speech of non-gesturers. Regardless of the reason why gesture is not employed, be it experimental manipulation or the speaker’s own decision, common ground seems to affect how information is distributed, and speakers seem to be flexible in assigning meaning depending on the availability of gesture as an expressive medium. Common ground also impacts the physical form of gesture. It was initially pointed out by Levy and McNeill (1992) that the amount of gesture material correlates with communicative dynamism (CD).
New information is high in communicative dynamism in that it carries noteworthy content. Analyzing stretches of narration and how referents were introduced and tracked, they showed that gestures representing information with low communicative dynamism, i.e., information already shared with the listener, tended to take more attenuated forms such as beats, or to be absent altogether, whereas iconic gestures were associated with high communicative dynamism. While Levy and McNeill (1992) used a discourse-based approach, other research used more experimental methods, inducing differences in the listeners’ knowledge about the to-be-talked-about objects, for example, by providing an opportunity for a shared experience with the speaker prior to the task. Using such a method, Gerwing and Bavelas (2004) found that when talking to a listener who, the speaker knew, had played with the same set of toys as they had, the speaker’s gestures became less complex, precise, and informative. Similarly, Holler and Stevens (2007), in a task of spotting a referent in a crowded scene, found that in addition to a decrease in the overall rate of gestures, the size of the gesture reflected the actual size of the landmark entities less closely when speakers were talking to listeners who, they knew, had already seen the scene beforehand than to those who saw the scene for the first time (see also Holler and Wilkin 2009 for different results). It should also be noted that common ground is not a single-faceted notion. Parrill (2010), looking more closely into what constitutes common ground, shows that both the referent’s identifiability by the listener and its discourse salience (whether it has been mentioned before) affect the inclusion of a ground element (a landmark which offers an anchoring point for the trajectory of the moving object) in gestures describing motion events. She found that the ground appeared in gesture more often


when the listener had not watched the stimulus cartoon (i.e., low identifiability) and/or when the ground had already been mentioned in the experimenter’s request to narrate (i.e., given information and thus low discourse salience).

2. Listener’s perspective
So far, we have seen how speakers tailor their gestures to the state of the listener(s). We now shift our attention to research findings on gestural mimicry and on gesture as a turn-entry device as instances of listeners’ gesture. While the research on speakers’ gesture reveals how they change its physical form, timing, and relation to speech to communicate most effectively, the research on listeners’ gesture provides another valuable insight into the interactive design of gesture, not just because it constitutes the other end of the talk exchange but because it reveals gesture’s interactional functions.

2.1. Gestural mimicry
During talk, listeners acknowledge the speaker’s utterance by providing feedback, for example, through head nods or verbal backchannels such as yeah or I see. These remarks indicate that the listener is following what the speaker said. Gestural mimicry has a similar function. Gestural mimicry refers to the recurrence of the same or a similar gesture between speakers through monitoring, not by mere coincidence (Holler and Wilkin 2011; Kimbara 2008; Mol et al. 2012). One prominent function of mimicry is for the listener to demonstrate their understanding of the content more strongly than, for example, with simple acknowledgement terms (de Fornel 1992; Holler and Wilkin 2011; Kimbara 2006; Streeck 1994; Tabensky 2001). According to Holler and Wilkin (2011), who explored the functions gestural mimicry plays in the process of creating shared understanding (speakers rearranged randomly placed pictures of tangram figures into the same order), mimicked gesture was used most frequently for presentation purposes: the same gesture recurred between speakers as they made repeated reference to a figure. The second most frequent function (and the one more relevant to our interest) was acceptance: a speaker accepted the other speaker’s characterization of a figure, for instance as a ghost, with a verbal expression of acceptance such as yeah, accompanied by gestural mimicry of two hands held in a “ghost-like” manner. While both functions showed gestural mimicry’s contribution to a successful negotiation of shared understanding and its particular role as an index of comprehension, the latter type showed it more clearly because, as reported by the authors, the semantic information carried by the mimicked gesture was not represented verbally at all in about half of the cases. Instead, the mimicked gesture served as the primary means of demonstrating understanding.
Kimbara (2006) also points out that gestural mimicry often occurs during word searches and speech co-construction. In word search, the meaning being sought tends to receive gestural representation, thereby inviting the listener to join in the search process (Goodwin and Goodwin 1986; Streeck 1994). In her example, the listener offers a candidate word while repeating the observed gesture. In the example of co-construction, the listener takes over the floor before the speaker completes her utterance and continues it while repeating the same gesture. Continuing not only the speech but also the gesture creates a tighter linkage between the speaker’s original speech-gesture unit and that of the listener. Besides conveying a strong


pragmatic message that the listener is closely attending, gestural mimicry is used here as a pragmatic resource for the listener to engage more fully and actively in the unfolding talk, by turning the conversation into a collaborative act.

2.2. Turn-taking
Speakers take turns in conversation. The position where the current speaker completes a turn and other participants have the opportunity to talk is called the transition relevance place (TRP; Sacks, Schegloff, and Jefferson 1974). The imminent speaker aims the beginning of their talk precisely at the transition relevance place by predicting its timing from the meaning, syntax, and prosody of the ongoing turn (Jefferson 1973; Wilson and Zimmerman 1986). Turn transition therefore requires careful planning, especially where more than two speakers compete for access to the floor. At the transition relevance place, potential next speakers may attempt to take the next turn by starting to speak immediately after the turn completion. However, starting speech at the predicted transition relevance place involves the risk of creating speech overlap if the prediction proves wrong. While speech overlap is a dispreferred event in conversation, gestures have the benefit of being nonverbal and thus less intrusive. As Schegloff (1996) notes, gesture as well as other nonverbal behaviors, such as reorientation of gaze toward a potential recipient, change in facial expression, or lip parting, can project talk onset. Some examples of gesture as a turn-entry device are found in a conversation between Ilokano speakers reported by Streeck and Hartge (1992), where speakers use two kinds of socially and culturally accepted gestures: the [a]-face (silent production of the vowel) and a list gesture (a palm-up open hand as a base for enumerating or counting things). Crucially, the [a]-face can precede an utterance beginning with a consonant, which demonstrates that it is not a preparatory action for articulating an upcoming word. Rather, it constitutes a conventional practice conveying the culturally specific pragmatic message that the speaker is ready to talk next.
The other turn-entry gesture, the list gesture, not only signals the speaker’s intent to talk but also prefigures its content, that is, enumerating things. What type of gesture is used as a turn-entry device is thus partly determined by the culture, as in these examples. However, the spatial and material environment within which the interaction takes place also proves to be key. Mondada (2007) shows that a pointing gesture is deployed to claim speakership at a meeting where participants’ attention is focused on documents, maps, and other visual materials arranged on a table. In such a visually focused environment, pointing at the relevant object provides an effective means of making the intent visible to the other participants.

3. Summary
Gesture is spatial in nature. It is hard to imagine someone describing a highly geometric Mondrian painting without recourse to gesture. Sometimes gesture, as an iconic representation of referents, has expressive power that speech lacks. At the same time, gesture is interactively designed. Recent studies have contributed greatly to our understanding of how various aspects of interaction shape gesture. We have seen in this chapter that speakers modify their gestures according to the specifics of the communicative setting, and that listeners use gesture to send interactional messages, i.e., by signaling understanding, showing intent to collaborate in producing the speech-gesture unit, and making visible that they want to talk next. Oriented towards and expected to be understood by the other participant(s), these gestures derive their significance in communication not only from their semantic value (i.e., creating a referential relationship with an entity) but also from their interactional value in communicating more effectively and successfully.

4. References
Alibali, Martha W., Dana C. Heath and Heather J. Myers 2001. Effects of visibility between speaker and listener on gesture production: Some gestures are meant to be seen. Journal of Memory and Language 44(2): 169–188.
Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15(4): 469–489.
Bavelas, Janet Beavin, Jennifer Gerwing, Chantelle Sutton and Danielle Prevost 2008. Gesturing on the telephone: Independent effects of dialogue and visibility. Journal of Memory and Language 58(2): 495–520.
Clark, Herbert H. and Edward F. Schaefer 1989. Contributing to discourse. Cognitive Science 13(2): 259–294.
Clark, Herbert H. and Deanna Wilkes-Gibbs 1986. Referring as a collaborative process. Cognition 22(1): 1–39.
Cohen, Akiba A. and Randall P. Harrison 1973. Intentionality in the use of hand illustrators in face-to-face communication situations. Journal of Personality and Social Psychology 28(2): 276–279.
de Fornel, Michel 1992. The return gesture. In: Peter Auer and Aldo di Luzio (eds.), The Contextualization of Language, 159–193. Amsterdam: John Benjamins.
Emmorey, Karen and Shannon Casey 2001. Gesture, thought and spatial language. Gesture 1(1): 35–50.
Furuyama, Nobuhiro 2000. Gestural interaction between the instructor and the learner in origami instruction. In: David McNeill (ed.), Language and Gesture, 99–117. Cambridge: Cambridge University Press.
Gerwing, Jennifer and Janet Beavin Bavelas 2004. Linguistic influences on gesture’s form. Gesture 4(2): 157–196.
Goodwin, Charles 1979. The interactive construction of a sentence in natural conversation. In: George Psathas (ed.), Everyday Language: Studies in Ethnomethodology, 97–121. New York: Irvington Publishers.
Goodwin, Charles 1981. Conversational Organization: Interaction between Speakers and Hearers. New York: Academic Press.
Goodwin, Charles 1986. Gestures as a resource for the organization of mutual orientation. Semiotica 62(1–2): 29–49.
Goodwin, Marjorie Harkness and Charles Goodwin 1986. Gesture and coparticipation in the activity of searching for a word. Semiotica 62(1–2): 51–75.
Heath, Christian 1986. Body Movement and Speech in Medical Interaction. Cambridge: Cambridge University Press.
Holler, Judith and Geoffrey Beattie 2003. Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture 3(2): 127–154.
Holler, Judith and Rachel Stevens 2007. The effect of common ground on how speakers use gesture and speech to represent size information. Journal of Language and Social Psychology 26(1): 4–27.
Holler, Judith and Katie Wilkin 2009. Communicating common ground: How mutually shared knowledge influences speech and gesture in a narrative task. Language and Cognitive Processes 24(2): 267–289.


VII. Body movements – Functions, contexts, and interactions

Holler, Judith and Katie Wilkin 2011. Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior 35(2): 133–153.
Jefferson, Gail 1973. A case of precision timing in ordinary conversation. Semiotica 9(1): 47–96.
Kimbara, Irene 2006. On gestural mimicry. Gesture 6(1): 39–61.
Kimbara, Irene 2008. Gesture form convergence in joint description. Journal of Nonverbal Behavior 32(2): 123–131.
Levy, Elena T. and David McNeill 1992. Speech, gesture, and discourse. Discourse Processes 15(3): 277–301.
Melinger, Alissa and Willem J. M. Levelt 2004. Gesture and the communicative intention of the speaker. Gesture 4(2): 119–141.
Mol, Lisette, Emiel Krahmer, Alfons Maes and Marc Swerts 2012. Adaptation in gesture: Converging hands or converging minds? Journal of Memory and Language 66(1): 249–264.
Mondada, Lorenza 2007. Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies 9(2): 194–225.
Özyürek, Asli 2002. Do speakers design their co-speech gestures for their addressees? The effect of addressee location on representational gestures. Journal of Memory and Language 46(4): 688–704.
Parrill, Fey 2010. The hands are part of the package: Gesture, common ground and information packaging. In: Sally Rice and John Newman (eds.), Empirical and Experimental Methods in Cognitive/Functional Research, 285–302. Stanford: CSLI Publications.
Sacks, Harvey, Emanuel A. Schegloff and Gail Jefferson 1974. A simplest systematics for the organization of turn-taking for conversations. Language 50(4): 696–735.
Schegloff, Emanuel A. 1972. Notes on a conversational practice: Formulating place. In: David N. Sudnow (ed.), Studies in Social Interaction, 75–119. New York: MacMillan, The Free Press.
Schegloff, Emanuel A. 1996. Turn organization: One intersection of grammar and interaction. In: Elinor Ochs, Emanuel A. Schegloff and Sandra Thompson (eds.), Interaction and Grammar, 52–133. Cambridge: Cambridge University Press.
Streeck, Jürgen 1994. Gesture as communication II: The audience as co-author. Research on Language and Social Interaction 27(3): 239–267.
Streeck, Jürgen 2009. Gesturecraft: The Manu-Facture of Meaning. Amsterdam: John Benjamins.
Streeck, Jürgen and Ulrike Hartge 1992. Previews: Gestures at the transition place. In: Peter Auer and Aldo di Luzio (eds.), The Contextualisation of Language, 135–157. Amsterdam: John Benjamins.
Tabensky, Alexis 2001. Gesture and speech rephrasings in conversation. Gesture 1(2): 213–235.
Wilson, Thomas P. and Don H. Zimmerman 1986. The structure of silence between turns in two-party conversation. Discourse Processes 9(4): 375–390.

Irene Kimbara, Kushiro (Japan)


99. Gestures and mimicry

1. Repetition in verbal and bodily behaviors
2. Gestural mimicry in face-to-face conversation
3. Gestural mimicry in non-face-to-face settings
4. Summary
5. References

Abstract

This chapter provides an overview of recent studies on gestural mimicry, the repetition of gestures across speakers such that they converge in form. It argues that, unlike other repetitious behaviors in speech or self-regulating behaviors such as foot tapping, its occurrence is mediated by meaning and serves interactional functions. In face-to-face interaction, it displays comprehension and involvement and shows that the listener is closely attending to the speaker’s talk. It is also noted that mimicry can be observed in non-face-to-face settings: perceived gestures recur in the listener’s own speech later, while still keeping the same formal features. This suggests that seeing others’ gestures has a long-term impact on their representation in the listener’s mind.

1. Repetition in verbal and bodily behaviors

People display behaviors they have just observed or heard. In speech, speakers are known to adopt another speaker’s word choice (Brennan and Clark 1996; Clark and Wilkes-Gibbs 1986; Garrod and Anderson 1987) and syntactic structure (Bock and Loebell 1990; Branigan, Pickering, and Cleland 2000; Levelt and Kelter 1982), as well as supra-segmental features such as pause duration, speech rate, and vocal intensity (Cappella and Palmer 1990; Street and Cappella 1989). It has also been amply demonstrated that nonverbal behaviors, which are generally known as social indices of, among other things, personal distance and attitude, show convergence between speakers: gaze duration (Cappella and Palmer 1990; Kleinke, Staneski, and Berger 1975; Street and Buller 1987), touch (Guerrero and Andersen 1994), posture (Dabbs 1969; Kendon [1970] 1990; LaFrance and Broadbent 1976), facial expressions (Bavelas et al. 1986), and self-adaptors such as foot tapping, face scratching, and nose rubbing (Chartrand and Bargh 1999). Although behavioral convergence across individuals is a widespread phenomenon, there are different views as to why it occurs. Research on verbal convergence tends to focus on cognitive benefits and discusses the phenomenon in relation to memory, notably in terms of priming effects: reusing the same linguistic item confers a memory advantage because it requires less effort than formulating the item from scratch and thus lightens the computational load (Bock 1986; Branigan, Pickering, and Cleland 2000; Pickering and Garrod 2004). Research on nonverbal, bodily behaviors, on the other hand, tends to argue that speakers mirror each other’s behaviors because of an underlying unconscious, social mechanism that increases the level of rapport.
For example, lectures in which the instructor’s body position is mirrored more frequently by students tend to receive more positive student evaluations (LaFrance and Broadbent 1976), and people whose posture and body movements are mimicked by a confederate report higher liking (Chartrand and Bargh 1999).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1375–1381


2. Gestural mimicry in face-to-face conversation

Illustrators, or representational gestures (I will refer to them simply as gestures), which accompany speech and represent meaning related to speech, are also mirrored between speakers. In comparison to convergence in verbal and bodily behaviors, research on convergence in gesture is relatively scarce. However, there have been some findings that point to the existence of such convergence in various contexts, such as joint narration, task-oriented interaction involving an artifact, and casual conversation. To distinguish it from convergence in verbal and bodily behaviors, the repetition of the same or a similar gesture by another speaker, simultaneously or in subsequent discourse, is called gestural mimicry. When referring to the same objects, speakers tend to use similar gestures. Nonetheless, there is some evidence for the claim that convergence in the physical form of mimicked gestures is due to monitoring and not mere coincidence. Kimbara (2008), in a triadic setting where two speakers narrated a cartoon animation together to a video camera (or conceivably to the experimenter behind it), examined the frequency with which the co-narrators employed the same hand shape when referring to the same characters or objects. She found that those who could see each other had a higher rate of convergence than those whose mutual view was blocked by an opaque screen. In a more dialogic setup, Holler and Wilkin (2011) had a similar finding. In their study, two speakers, trying to identify the same tangram figure, used the same gesture form (not just hand shape) more often when they could see each other than when they could not. What causes people to repeat each other’s gestures? As noted above, people tend to mimic each other’s bodily behaviors such as foot-tapping and face-scratching.
According to Chartrand and Bargh (1999), such echoic acts, grounded in our desire as social beings to increase rapport, occur automatically as a direct response to perceiving others’ behaviors. Is gestural mimicry a subtype of such automatic reactions? There is evidence that it is not. Mol et al. (2012) show that a gesture is not mimicked if it is semantically incompatible with speech. In their retelling task, speakers were found to repeat observed gestures only if they were congruent with speech so as to constitute a combined meaning. This suggests that gestural mimicry undergoes semantic processing and is thus fundamentally different from automatic behavior; not just the same gesture form, but the same form-meaning pair recurs across speakers as an inseparable unit.

2.1. Aligning cognition

Gestures reflect how objects or events are construed in the speaker’s mind. When asked to describe the same cartoon, people tend to use similar gestures for the obvious reason that they have the same visual experience. At the same time, however, there are also certain individual differences as to which components of the referents are expressed and how they are mapped onto the gesture form. Such idiosyncrasy in gesture form corresponds, in part, to differences in each speaker’s descriptive focus. Whether to trace a path with a pointing finger or to form a fist as if to grab a bird while saying “the cat ran with a canary” depends on where the speaker’s focus lies at the moment of speech. Schwartz (1995) suggests that gestures in face-to-face conversation help speakers keep their descriptive focus aligned and maintain interpersonal coherence over discourse. He asked pairs of children to solve a question about adjoined gears. The hand motion with which each child represented the gears’ rotary motion became less distinguishable as they solved more problems. Based on this observation, Schwartz describes the way the children interacted as “negotiat(ing) a common representation that could serve as a touchstone for coordinating the members’ different perspectives on the problem” (Schwartz 1995: 321). When two or more people are involved, mutual understanding seems to be facilitated if the speakers use unified means of representation. This view of gestural mimicry as a kind of interface between two minds can be called a cognitive view because it stresses the point that speakers achieve common construals of things by converging in their use of gestures.

2.2. Signaling comprehension and involvement

Another important function mimicry serves in face-to-face conversation is to signal comprehension and involvement. Common verbal markers of comprehension are acknowledgement terms such as m-hm and uh-huh. Compared to these semantically bleached terms, however, repetition of the speaker’s speech demonstrates more explicitly that the listener has been attending to the talk (Halliday and Hasan 1976; Weiner and Labov 1983). By the same token, gestural mimicry indicates comprehension of the original gesture and signals close attention to the talk. For example, Holler and Wilkin (2011) list acceptance of another speaker’s contribution to the talk as one of the three major pragmatic functions that gestural mimicry served in their tangram experiment, along with presentation and displaying incremental understanding. They cite an example where the listener repeated the speaker’s gesture while saying yeah. Because of the gesture’s clear referential content, comprehension of the speaker’s previous utterance was assured more evidently than it would have been with the simple agreement term alone. Heath (1992) and Goodwin (1980) provide interesting cases relating to gestural mimicry as a display of comprehension, which suggest that the original and the mimicked gesture can form an adjacency pair, a kind of call-response unit (Schegloff and Sacks 1973). From doctor-patient dialogues, Heath reports cases where a doctor produced speech along with a gestural token of acknowledgment such as a head nod, which was then immediately followed by the patient’s mimicry. He argues that by modeling the acknowledgment first (by prefiguring a preferred response), doctors can shape the framework of participation as one of acknowledgement, to which patients tend to respond by reciprocating the same gesture. The speaker in Goodwin’s (1980) example showed negative evaluation through a head shake, which was then followed by a pause.
It was only after the head shake (and the negative evaluation expressed within it) was reciprocated by the listener that the speaker resumed his talk. These examples show that on occasions where acknowledgment is sought, the speaker’s own gestural display of it can create a certain participation framework, within which mimicry of the display is taken as an appropriate second pair part to the initial display. In their tangram task, in which acceptance proved to be one of the major functions of gestural mimicry, Holler and Wilkin (2011) asked pairs of participants to arrange pictures of different figures in the same order. Since making sure that both partners are following each other is of primary importance for successful performance of this task, one may wonder whether mimicry serves the same purpose in conversations where checking understanding is not made central by the task at hand. In fact, there are studies that provide such examples. De Fornel (1992) and Tabensky (2001), both employing a more qualitative, context-based approach to examine when and how gestural mimicry occurs, look at casual conversations. De Fornel reports a case where a speaker extended her arm toward her listener to solicit agreement and this gesture was then mimicked (a “return gesture” in his terms). By producing the same gesture, the listener displays to the speaker that “he is an active co-participant, both as a listener and as a viewer” (De Fornel 1992: 169). Beyond showing understanding, Tabensky argues that mimicry in natural conversation serves as a resource for showing involvement in the unfolding talk. According to her, the listener sometimes achieves an effect akin to verbal rephrasing by slightly reformulating the speaker’s gesture or its relation to the accompanying speech. In such gestural rephrasing, she notes, the mimicked gesture can not only signal understanding but also highlight strong involvement in the talk, precisely because a meaningful modification of the original speech-gesture unit contributes additional content to the talk. Regarding gestural mimicry as a resource for showing involvement, Kimbara (2006) argues that a speaker’s attempt to highlight his or her involvement in talk is sometimes achieved more effectively by co-constructing gesture, that is, by producing the mimicked gesture in overlap with the original gesture. In speech co-construction, speakers coordinate to produce utterances within syntactic boundaries. For example, when the current speaker is producing an utterance in the format if X then Y, the listener can anticipate what is to come before its completion. Once the format is recognized, the listener can provide a subcomponent (Y, for instance), thereby achieving a choral production of the segment through overlap (Bolden 2003; Goodwin and Goodwin 1987; Lerner 2002). Here, the listener shares the turn with the current speaker and establishes him- or herself as co-speaker. In one of Kimbara’s examples, taken from cartoon co-narrations, gestural mimicry occurs during such speech co-construction.
While overlapping each other to contribute parts of the speech, the speakers also gestured collaboratively by mimicking each other’s gesture in overlap. In other words, they jointly produced a speech-gesture unit by closely monitoring each other’s gesture and speech. In such a case, something more than displaying comprehension is taking place: a shared visual image is unpacked collaboratively into a speech-gesture unit across two speakers. The close temporal synchrony between the original and the mimicked gesture in co-construction also holds special importance, because demonstrating involvement is more effective when one speaker’s contribution (comment, evaluation, acknowledgment, etc.) is expressed without waiting for the other to express it first (Goodwin and Goodwin 1992). Overall, these studies see gestural mimicry, when used in face-to-face conversation, as conveying the interactional message that the listener is attending to the talk and is an active co-participant in it. This interactional view provides an account of why mimicry occurs that differs from the cognitive account. While the cognitive view holds that mimicry serves to coordinate speakers’ construals of meaning and of how things are perceived, the interactional view holds that it serves particular interactional goals. Of course, the two views are not mutually exclusive. In fact, speakers may converge in their use of gesture both to signal comprehension and, at the same time, to align their understanding of how referents are construed.

3. Gestural mimicry in non-face-to-face settings

Thus far, we have discussed gestural mimicry in face-to-face conversation. There is also some research suggesting that mimicry can occur in other situations. These studies tested whether the tendency to repeat gestures between speakers is resilient over time and across media.


Wagner Cook and Tanenhaus (2009) examined mouse trajectories on a computer screen as a kind of gestural movement and provided evidence not only that listeners repeat the speaker’s gestures but that the mimicry can occur over a long time span and across different expressive media (hand gestures and mouse movements). They asked a group of people to solve the Tower of Hanoi puzzle either with a real object or on a computer screen and then to explain the task to a listener. In these explanations, gestures in the real-object condition tended to have a curved trajectory, reflecting the realistic movement of the pegs. Unlike real objects, however, pegs on the computer screen could be moved from bar to bar without being lifted above the bar. As a result, in the computer-screen condition, gestures tended to have a horizontal trajectory. Subsequently, all the listeners solved the puzzle on a computer monitor. Examination of their mouse trajectories indicated the observed gestures’ long-term impact on the listeners’ mental representations: the gesture trajectory observed in the speaker’s explanation recurred in the listener’s mouse trajectories, such that those who saw curved gestures produced curved mouse trajectories while those who saw horizontal gestures produced horizontal ones. Furthermore, Parrill and Kimbara (2006) found that even outside observers (i.e., observers who are not directly involved in the talk as listeners) repeat perceived gestures. In their study, observers were shown a video clip of two speakers conferring on a route through a model town, in which inter-speaker similarity in speech and gesture was varied with respect to motion, location, and hand-shape features. When subsequently describing the content of the video to an experimenter, those who had seen speakers with highly similar gestures reused the same gestures more, resulting in more instances of temporally and spatially dislocated mimicry.

4. Summary

Mimicked gestures do not differ from other gestures, for example iconic gestures, in that both represent the referents being talked about. What distinguishes mimicked gestures is that they make reference to the other speaker’s earlier or currently unfolding gestures, which gives rise to intended interactional messages such as the display of understanding and involvement in the talk. No less important than the functions of gestural mimicry reviewed in this chapter is the fact that it elucidates the interactive nature of gesture production: gesture takes input not only from the referent or the speaker’s construal of it but also from the accumulated and unfolding discourse produced by participants with various interactional goals at hand.

5. References

Bavelas, Janet Beavin, Alex Black, Charles R. Lemery and Jennifer Mullett 1986. I show how you feel: Motor mimicry as a communicative act. Journal of Personality and Social Psychology 50(2): 322–329.
Bock, J. Kathryn 1986. Syntactic persistence in language production. Cognitive Psychology 18: 355–387.
Bock, J. Kathryn and Helga Loebell 1990. Framing sentences. Cognition 35(1): 1–39.
Bolden, Galina B. 2003. Multiple modalities in collaborative turn sequences. Gesture 3(2): 187–212.
Branigan, Holly P., Martin J. Pickering and Alexandra A. Cleland 2000. Syntactic co-ordination in dialogue. Cognition 75(2): B13–B25.


Brennan, Susan and Herbert H. Clark 1996. Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition 22(6): 1482–1493.
Cappella, Joseph N. and Mark T. Palmer 1990. Attitude similarity, relational history and attraction. Communication Monographs 57(3): 161–183.
Chartrand, Tanya L. and John A. Bargh 1999. The chameleon effect: The perception-behavior link and social interaction. Journal of Personality and Social Psychology 76(6): 893–910.
Clark, Herbert H. and Deanna Wilkes-Gibbs 1986. Referring as a collaborative process. Cognition 22(1): 1–39.
Dabbs, James M. 1969. Similarity of gestures and interpersonal influence. Proceedings of the 77th Annual Convention of the American Psychological Association: 337–338.
de Fornel, Michel 1992. The return gesture. In: Peter Auer and Aldo di Luzio (eds.), The Contextualization of Language, 159–193. Amsterdam: John Benjamins.
Garrod, Simon and Anthony Anderson 1987. Saying what you mean in dialogue: A study in conceptual and semantic co-ordination. Cognition 27(2): 181–218.
Goodwin, Charles and Marjorie H. Goodwin 1987. Concurrent operations on talk: Notes on the interactive organization of assessments. IPrA Papers in Pragmatics 1(1): 1–54.
Goodwin, Charles and Marjorie H. Goodwin 1992. Assessments and the construction of context. In: Alessandro Duranti and Charles Goodwin (eds.), Rethinking Context, 147–189. Cambridge: Cambridge University Press.
Goodwin, Marjorie H. 1980. Processes of mutual monitoring implicated in the production of description sequences. Sociological Inquiry 50(3–4): 303–317.
Guerrero, Laura K. and Peter A. Andersen 1994. Patterns of matching and initiation: Touch behavior and touch avoidance across romantic relationship stages. Journal of Nonverbal Behavior 18(2): 137–253.
Halliday, Michael A. K. and Ruqaiya Hasan 1976. Cohesion in English. London: Longman.
Heath, Christian 1992. Gesture’s discreet tasks: Multiple relevancies in visual conduct and in the contextualisation of language. In: Peter Auer and Aldo di Luzio (eds.), The Contextualisation of Language, 101–127. Amsterdam: John Benjamins.
Holler, Judith and Katie Wilkin 2011. Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior 35(2): 133–153.
Kendon, Adam 1970. Movement coordination in social interaction. Acta Psychologica 32: 100–112. [Reprinted in: Adam Kendon 1990. Conducting Interaction: Patterns of Behavior in Focused Encounters, 91–115. Cambridge: Cambridge University Press.]
Kimbara, Irene 2006. On gestural mimicry. Gesture 6(1): 39–61.
Kimbara, Irene 2008. Gesture form convergence in joint description. Journal of Nonverbal Behavior 32(2): 123–131.
Kleinke, Chris L., Richard A. Staneski and Dale E. Berger 1975. Evaluation of an interviewer as a function of interviewer gaze, reinforcement of subject gaze, and interviewer attractiveness. Journal of Personality and Social Psychology 31(1): 115–122.
LaFrance, Marianne and Maida Broadbent 1976. Group rapport: Posture sharing as a nonverbal indicator. Group and Organization Studies 1(3): 328–333.
Lerner, Gene H. 2002. Turn-sharing: The choral co-production of talk-in-interaction. In: Cecilia E. Ford, Barbara A. Fox and Sandra A. Thompson (eds.), The Language of Turn and Sequence, 225–256. Oxford: Oxford University Press.
Levelt, Willem J. M. and Stephanie Kelter 1982. Surface form and memory in question answering. Cognitive Psychology 14(1): 78–106.
Mol, Lisette, Emiel Krahmer, Alfons Maes and Marc Swerts 2012. Adaptation in gesture: Converging hands or converging minds? Journal of Memory and Language 66(1): 249–264.
Parrill, Fey and Irene Kimbara 2006. Seeing and hearing double: The influence of mimicry in speech and gesture on observers. Journal of Nonverbal Behavior 30(4): 141–150.
Pickering, Martin J. and Simon Garrod 2004. Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences 27(2): 169–226.


Schegloff, Emanuel A. and Harvey Sacks 1973. Opening up closings. Semiotica 8(4): 289–327.
Schwartz, Daniel L. 1995. The emergence of abstract representations in dyad problem solving. The Journal of the Learning Sciences 4(3): 321–354.
Street, Richard L., Jr. and David B. Buller 1987. Nonverbal response patterns in physician-patient interactions. Journal of Nonverbal Behavior 11(4): 234–253.
Street, Richard L., Jr. and Joseph N. Cappella 1989. Social linguistic factors influencing adaptation in children’s speech. Journal of Psycholinguistic Research 18(5): 497–519.
Tabensky, Alexis 2001. Gesture and speech rephrasings in conversation. Gesture 1(2): 213–235.
Wagner Cook, Susan and Michael K. Tanenhaus 2009. Embodied communication: Speakers’ gestures affect listeners’ actions. Cognition 113(1): 98–104.
Weiner, E. Judith and William Labov 1983. Constraints on the agentless passive. Journal of Linguistics 19(1): 29–58.

Irene Kimbara, Kushiro (Japan)

100. Gesture and prosody

1. Introduction
2. High-level summary of gesture/prosody relationship
3. Description of gesture/prosody relationship
4. Conclusion
5. References

Abstract

This chapter provides an overview of what is currently known regarding the interrelation and integration of gesture and prosody. Though research in this area is still young, the evidence is clear that gesture and prosody are interrelated. Certain hierarchical units within the two channels are synchronized with each other, as are internal rhythms. The two channels work together to construct and reference discourse, and to regulate conversational interaction. The two channels interact in the diachronic development of languages, in second language acquisition, and during deception. The relationship has been noted in both production and perception studies, and in subjects with neurological disorders. The relationship has also been found in all ages, from infancy to adulthood, and in over a dozen languages, both spoken and signed. Finally, many parts of the body take part in the relationship with prosody, including the hands, arms, head, torso, legs, eyebrows, eyelids, and other facial muscles. Though the exact nature and cause of gesture and prosody’s relationship is still under investigation, many researchers believe their tight connection is a facet of the strong underlying linkage between gesture and speech in general, which in turn is felt to exist because speech is a fundamentally embodied phenomenon.

1. Introduction

This chapter provides an overview of what is currently known regarding the interrelation and integration of gesture and prosody. While there has been much written on the relationship between gesture and speech in general, this chapter will focus only on the prosodic aspects of speech with respect to gesture. The term gesture will follow Kendon’s (2004) broad definition, “visible action as utterance”, meaning any bodily movement which contributes to communication, including sign language. The term prosody in this chapter will refer to the stress, intonation, and rhythm of speech. Not included here is the timing relationship between gestures and words, which is well covered in the literature. Section 2 contains a concise summary of what is known so far, Section 3 provides a longer description of these findings, and Section 4 concludes.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1381–1391

2. High-level summary of gesture/prosody relationship

The evidence is clear that gesture and prosody are interrelated. We also understand something of how the two channels are related, and researchers are actively learning more. Most importantly, the community is actively discussing why the relationship exists. Many scholars believe that the linkage between gesture and prosody is a facet of the connection between gesture and speech in general, which in turn is felt to exist because speech is a fundamentally embodied phenomenon. What is known to date regarding the relationship between gesture and prosody? Cumulative research has shown us the following (see Section 3 for more details and references).

– Certain hierarchical units within both channels are synchronized with each other.
– The two channels work together to construct and reference discourse.
– The two channels work together to regulate conversational interaction.
– Each channel has internal rhythms which are synchronized with those of the other.
– The relationship has been noted in perception studies as well as production studies.
– The relationship has been observed in subjects with neurological disorders.
– The two channels interact in the diachronic development of languages.
– The two channels interact in second language acquisition.
– The two channels interact during deception.
– Many parts of the body take part in the relationship with prosody, including the hands, arms, head, torso, legs, eyebrows, eyelids, and other facial muscles.
– The relationship has been observed in all ages, from infancy to adulthood.
– The relationship has been observed, in some form or other, in well over a dozen languages, both spoken and signed.

Section 3 will describe the above findings more fully. As many researchers have addressed multiple points listed above, the discussion in Section 3 will not be as clearly delineated by topic as are the above bullets.

3. Description of gesture/prosody relationship

3.1. Correlation of gestural and prosodic units

Kendon (1972, 1980) defined hierarchical units of gesture and observed an alignment of this gestural hierarchy with an intonational hierarchy. The gestural stroke typically occurs just prior to or at the onset of a stressed syllable. The gestural phrase boundaries
coincide with the edges of a tone group, a construct from the British School of intonation which is “the smallest grouping of syllables over which a completed intonation tune occurs” (Kendon 1972: 184). The gestural unit coincides with what Kendon termed a locution, corresponding to a complete sentence. Groups of gestural units sharing consistent head movement are time-aligned with locution groups, or locutions sharing a common intonational feature apart from other groupings of locutions. Finally, consistent arm use and body posture are synchronized with a locution cluster, corresponding to a paragraph. McNeill (1992: 6) rephrased Kendon’s first observation above as the Phonological Synchrony Rule: “[…] the stroke of the gesture precedes or ends at, but does not follow, the phonological peak syllable of speech”. Nobe (1996, 2000) refined this phonological synchrony rule to a “gesture and acoustic-peak synchrony rule”, defining an “acoustic peak” as a peak of F0 and/or intensity. Valbonesi et al. (2002) also found that strokes align with stressed syllables. Birdwhistell (1970) noted four levels of kinesic stress which in general corresponded to the then-hypothesized four levels of linguistic stress. Scheflen (1964, 1968) observed that eyeblinks, head nods, and hand movements occur at intonational junctures. Keating et al. (2003) noted a correlation between head movements and stressed syllables. McClave (1991, 1994) found that beats coincide with tone unit nuclei, and supported Kendon’s observation that gestural phrases align with tone unit boundaries. McClave also discovered that strokes and holds are shorter than normal when followed by others within the same tone unit. This “fronting” of gestures suggests that speakers know in advance they will express several concepts gesturally along with a concept lexically, and time the gestures to finish at the same time as their verbal counterpart. 
Loehr (2004, 2012) found that gestural phrases (g-phrases) tended to align with Beckman and Pierrehumbert’s (1986) intermediate phrases. Ferré (2010) found that in French, g-phrases overlap Selkirk’s ([1978] 1981) Intonational Phrases (IPs) (which Ferré equates to Beckman and Pierrehumbert’s intermediate phrases), such that g-phrases start before their co-temporal IPs, and end after their co-temporal IPs. A number of researchers have confirmed a correlation between strokes and pitch accents, in addition to the more general correlation above between strokes and stressed syllables. These include Roth (2002) for German teenagers, and Esteve-Gibert and Prieto (2011) for Catalan young children and adults. The latter found that “at the beginning of the babbling stage, the pitch peak tends to be aligned at the end of the stroke […]. However, at the late babbling stage and the one-word periods, the pitch peak has moved to the left and it is aligned at the beginning of the stroke”, as seen in adults (Esteve-Gibert and Prieto 2011: 3). Even more specifically, a correlation between apices of strokes (not just strokes in general) and pitch accents was found by Loehr (2004, 2012) for apices of the hand, head, leg, and eyelids, by Shattuck-Hufnagel and colleagues (Esposito et al. 2007; Renwick, Shattuck-Hufnagel, and Yasinnik 2004; Shattuck-Hufnagel et al. 2007) in both English and Italian, by Jannedy and Mendoza-Denton (2005), by Flecha-García (2006) for eyebrow raises, and by Leonard and Cummins (2011), using a 3-D motion tracker attached to a subject’s hand. Cavé et al. (1996) also found a correlation between eyebrow movements and F0 rises. However, Rusiewicz (2010), in controlled production experiments using a capacitance sensor for gesture tracking, found less clear results with respect to synchrony between gestures (including apices) and contrastive pitch accents.
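The apex-level alignment findings above amount to measuring, for each apex, its lag relative to the nearest pitch accent. A minimal sketch with invented timestamps (not data from the cited studies):

```python
def apex_accent_offsets(apex_times, accent_times):
    """Signed offset (in seconds) from each gestural apex to its nearest
    pitch accent; negative means the apex precedes the accent."""
    return [round(min((a - t for t in accent_times), key=abs), 3)
            for a in apex_times]

# Illustrative annotation times (seconds), invented for the sketch.
apices = [0.48, 1.22, 2.05]
accents = [0.50, 1.20, 2.10, 3.00]
print(apex_accent_offsets(apices, accents))  # [-0.02, 0.02, -0.05]
```

A distribution of such offsets clustered tightly around zero is what a stroke-apex/pitch-accent synchrony claim predicts.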
Beskow, Granström, and House (2006), using motion-capture technology on Swedish speakers, showed that words with intonational focal accents were accompanied by greater variation of facial movements than words without such accents.


VII. Body movements – Functions, contexts, and interactions

3.2. Cooperation of gesture and prosody in constructing and referencing discourse

McNeill gave the term catchment to a recurrence of gestural features over a length of discourse. McNeill and colleagues (McNeill 2000; McNeill et al. 2001; Quek et al. 2000) found nearly perfect alignment between gestural catchments and discourse structure, where the latter had been independently derived per guidelines published by Grosz and colleagues (Nakatani et al. 1995). Importantly, Hirschberg and Nakatani (1996) had previously demonstrated a strong correlation between this type of discourse structure and intonation, prompting McNeill et al. to point out a close relationship between discourse structure, gesture, and intonation. Jannedy and Mendoza-Denton (2005) also discovered intonation–gesture synchrony in larger domains, describing metaphorical gestural spatialization spanning long stretches of discourse, which provides information about relationships between entities in the world. Benner (2001) found that the timing between the onsets of gestures and corresponding tone units was sensitive to narrative context. More plot focus yielded longer intervals between the onsets of gestures and their counterparts, presumably due to more complex gestures. Ferré (2005) found that the climax of narratives in French produced increased gesturing, slower speech rate, and higher voice intensity, though no appreciable differences in pitch. Ferré (2011), studying the interaction among syntax, prosody, and gesture in highlighting elements of discourse in French, found that these three types of marking are generally used in complementary fashion, though when they are used in conjunction, it is typically prosody and gesture coinciding to reinforce emphasis. Flecha-García (2006) discovered that the alignment of eyebrow raises with pitch accents is sensitive to utterance function, with the alignment becoming more frequent with instructions than non-instructions.
Duncan, Loehr, and Parrill (2005) examined gesture and prosody (specifically stress and intonation) under a condition of discourse (retelling a story seen in logical order) or lack of discourse (retelling a vignette from the story seen randomly among vignettes from other stories). In the latter, both gesture and prosody showed a reduced tendency to highlight contrastive elements, suggesting that both gesture and prosody are creatures of discourse context. Loehr (2004, 2012) described cooperation of gestures and intonational tunes for a variety of pragmatic functions, including focus, contrast, completeness, and information status of entities.

3.3. Cooperation of gesture and prosody in regulating interaction

Duncan (1972; Duncan and Fiske 1977) investigated interactional signals in both channels, noting that, for instance, turn-yielding can be signaled by cessation of gesture, or by a rising or falling final intonational contour. Loehr (2004, 2012) observed similar signals in cooperating gesture and intonation for turn endings. Creider (1978, 1986) described differences in the hand movements among several East African languages (Kipsigis, Luo, and Gusii) which “appear to be conditioned by the nature of the use of stress in the prosodic systems of the languages” (Creider 1986: 156–157). In the language Luo, he further found that beats were timed with the nuclear tone, and that a falling nuclear tone, in conjunction with a beat ending in a lowering hand or head, signaled the end of the speaker’s turn. Al Bayyari and Ward (2007) noted simultaneous use of gesture and intonation in inviting backchannels in Arabic, while Levow, Duncan, and King (2010) are investigating this phenomenon in Arabic, English, and Spanish.
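Duncan's turn-yielding observation can be restated as a toy decision rule combining the two channels. The cue inventory and encoding below are invented for illustration, not Duncan's actual coding scheme:

```python
def turn_yield_signal(gesturing, final_contour):
    """Toy version of Duncan's (1972) observation: turn-yielding can be
    signaled by cessation of gesture, or by a rising or falling
    (i.e., non-level) final intonational contour."""
    return (not gesturing) or final_contour in ("rising", "falling")

print(turn_yield_signal(True, "level"))    # False: still gesturing, level contour
print(turn_yield_signal(False, "level"))   # True: gesture has ceased
print(turn_yield_signal(True, "falling"))  # True: falling final contour
```

In Duncan's model such cues are additive: the more cues displayed, the more likely the listener attempts to take the turn, so a realistic version would count cues rather than return a boolean.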


3.4. Gestural and prosodic rhythm

A number of researchers have looked at the rhythmic relation between gesture and prosody. Condon (1976) noted that hierarchical speech/gesture “waves” occur rhythmically. For example, subtle body movements pattern at the level of the phone, changing direction and quality from one phone to the next. The same is true at the syllable and word level. Cycles of verbal stress and body movement tend to occur also at half-second and full-second intervals. Condon termed this relationship of gesture and speech self-synchrony, and he suspected it was due to a common neurological basis of both. In addition, Condon also made the “surprising and unexpected observation that […] the listener moves in synchrony with the speaker’s speech almost as well as the speaker does” (1976: 305). Condon termed this interactional synchrony, and noticed it in infants as young as 20 minutes old. Erickson (1981; Erickson and Shultz 1982) reported findings similar to Condon’s in terms of rhythm, self-synchrony, and interactional synchrony. Tuite (1993) postulated an underlying rhythmic pulse influencing both speech and gesture. The surface correlates of this rhythm are the gestural stroke and the tonic nucleus, which are correlated such that the stroke occurs just before or at the nucleus. McNeill (1992) measured this rhythm in examples where speakers gestured fairly continuously, finding that the period between strokes was fairly regular, and ranged between one and two seconds. McClave (1991, 1994) discovered that not all beats occur on stressed syllables, but rather seem to be generated rhythmically outward from the tone unit nucleus. That is, a rhythm group of beats exists anchored around the nucleus, such that beats are found at even intervals from the nucleus, even if they fall on unstressed syllables or pauses.
Even more surprisingly, this isochronic pattern of beats begins well before the nucleus, implying that the entire tone unit, along with its rhythm group, is formed in advance, before the first word is uttered. McClave also noticed, as did Condon, an interspeaker rhythm, in which a listener produced beats during the speaker’s utterance. Loehr (2004, 2007) found a rich rhythmic relationship among the hands, legs, head, and voice. Each articulator produced pikes (a general term for short, distinctive kinesic or auditory output, regardless of the source) in complex synchrony with other articulators. Even eyeblinks were synchronized, with eyelids held closed until reopening on the rhythmic beat, akin to a pre-stroke hold before a gestural stroke. Eyeblinks also took part in interactional synchrony, as listeners blinked in rhythm with the speaker’s speech.
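McClave's rhythm-group finding (beats at even intervals anchored on the tone-unit nucleus, radiating outward even onto unstressed syllables and pauses) can be sketched as a generator of expected beat times. The nucleus time and beat period below are invented values:

```python
def rhythm_group(nucleus_time, period, n_before, n_after):
    """Expected beat times at even intervals anchored on the tone-unit
    nucleus (after McClave 1991, 1994): beats radiate outward from the
    nucleus, landing on stressed or unstressed syllables alike."""
    return [round(nucleus_time + k * period, 3)
            for k in range(-n_before, n_after + 1)]

# Illustrative: a nucleus at 2.0 s with a 0.4 s beat period,
# two beats before and two after.
print(rhythm_group(2.0, 0.4, 2, 2))  # [1.2, 1.6, 2.0, 2.4, 2.8]
```

Comparing observed beat times against such a grid is one way to quantify the isochrony claim, including the striking point that the pattern begins well before the nucleus is uttered.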

3.5. Perception of gesture and prosody

While most studies have examined how speakers produce gesture and prosody, researchers are also looking at how interlocutors perceive the two, using controlled audio-visual stimuli produced by human actors or synthesized by animated characters or avatars. Krahmer et al. (2002), using a Dutch “talking head”, found that both pitch accents and rapid eyebrow raises can have an effect on the perception of focus, though the effect of pitch accents is greater. Al Moubayed et al. (2010) used eye-gaze tracking on moderately hearing-impaired subjects viewing a Swedish talking head to show that users attend to the face more similarly to a natural face when pitch movements are coupled with facial gestures than when pitch movements are without gestures. Treffner, Peter, and Kleidon (2008) found that varying the timing of an avatar’s gesture while holding its stress and intonation constant clearly affected the perceived focus word, with gradient shifts in gesture timing leading to categorical perception of the focus word. Leonard and Cummins (2011) found subjects tended to spot altered gesture/intonation timing if the gesture was later than normal, but not if earlier than normal, suggesting an asymmetry in the expectation of the interlocutor about which elements in speech link to which elements in gesture. Morency, de Kok, and Gratch (2008) used probabilistic models (e.g., Hidden Markov Models) trained on the stress, intonation, gestures, and words of human-human interactions to automatically detect appropriate moments for an avatar to provide backchannel signals to a human conversational participant. Sargin et al. (2008) also used probabilistic techniques to extract and align head movements and stress/intonation from human subjects and produce an audio-visual mapping model for improved naturalness in avatar utterances. Kettebekov et al. (2002) used automatic detection of stress and intonation to improve automatic visual detection of gesture strokes by human-computer interfaces. Sondermann (2007) found that the characters in a popular animated movie produced gestures timed faithfully to their speech, including subtle beats synchronized with the vocal rhythm, showing the animators’ intention to align gesture and prosody for realism. Cvejic, Kim, and Davis (2010) showed that head movements (even a video of the outline of the top of the head) can provide a visual cue to prosody to enhance speech perception. A number of studies, beginning with the classic work by Mehrabian and Ferris (1967), have studied the relative contribution of words, facial gestures, and intonation to the interpretation of meaning such as uncertainty, with facial gestures and intonation often overriding lexical choice. See Borràs-Comes and Prieto (2011) for an overview and continuation of this line of research.
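The probabilistic approach of Morency, de Kok, and Gratch relies on standard sequence models. As a rough illustration of the idea only (not their actual model, features, or probabilities; all state names and numbers below are invented), a two-state discrete HMM can be decoded with the Viterbi algorithm to label time frames as backchannel opportunities:

```python
# Toy two-state HMM: is the current frame a backchannel opportunity?
STATES = ("no_bc", "bc_opportunity")
START = {"no_bc": 0.9, "bc_opportunity": 0.1}
TRANS = {"no_bc": {"no_bc": 0.8, "bc_opportunity": 0.2},
         "bc_opportunity": {"no_bc": 0.6, "bc_opportunity": 0.4}}
# Observation per frame: does the speaker show a backchannel-inviting
# cue (e.g., a low flat pitch region or gaze at the listener)?
EMIT = {"no_bc": {"cue": 0.2, "no_cue": 0.8},
        "bc_opportunity": {"cue": 0.7, "no_cue": 0.3}}

def viterbi(obs):
    """Most likely state sequence for a sequence of observations."""
    probs = {s: START[s] * EMIT[s][obs[0]] for s in STATES}
    paths = {s: [s] for s in STATES}
    for o in obs[1:]:
        new_probs, new_paths = {}, {}
        for s in STATES:
            # Best predecessor state for landing in s at this frame.
            prev = max(STATES, key=lambda p: probs[p] * TRANS[p][s])
            new_probs[s] = probs[prev] * TRANS[prev][s] * EMIT[s][o]
            new_paths[s] = paths[prev] + [s]
        probs, paths = new_probs, new_paths
    best = max(STATES, key=lambda s: probs[s])
    return paths[best]

frames = ["no_cue", "no_cue", "cue", "cue"]
print(viterbi(frames))
# ['no_bc', 'no_bc', 'bc_opportunity', 'bc_opportunity']
```

The trained models in the cited work select and encode multimodal features automatically; this sketch hard-codes a single binary cue purely to show the decoding step.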

3.6. Other aspects of the relationship between gesture and prosody

Duncan (2009) found that Parkinson’s disease patients as well as healthy individuals use gesture and prosody to jointly highlight discourse-focal information, though gesture and prosody (specifically examined were stress and intonation) were reduced in the Parkinson’s patients. In addition, Duncan noted that variations in emotion/arousal during discourse affected both gesture and prosody together. Several researchers have noticed reduced gesturing and intonational variability in patients with right-hemisphere damage, though no direct correlation between the two; Hogrefe et al. (2011) provide an overview. In a second language acquisition (SLA) study, McCafferty (2006) examined L2 English learners’ use of beats, proposing that the rhythmic gesturing helped internalize features of the L2 prosody, including syllable structure and intonation patterns: “it is speculated that movement itself might prove to be part of SLA, that it establishes a physicalized (kinesic) sense of prosodic features of the L2, promoting automaticity and fluency” (2006: 197). Ekman, Friesen, and Scherer (1976) looked at the contributions various channels, including pitch and body movement, made during deception. They found fewer illustrators during deception, higher pitch, and a negative correlation between the two (i.e., illustrators do not often co-exist with high pitch). Rhetorically, gesture and prosody have been studied as far back as the ancient Indians, whose treatises on the pronunciation of the Sanskrit Vedas prescribed not only tonal accents (high, low, and rising-falling), but also manual gestures to accompany these accents (near the head, heart, and ear, respectively). Thus, the height of the gesture matched the “height” of the tone (Shukla 1996).
Hübler (2007) hypothesized that in Early Modern English, gestural components of conversation became replaced by prosodic components; this “substitutability of gestural by prosodic behavior is claimed to
rest on an isomorphism between the two modes” (2007: viii). Bloomfield (1933: 114) linked gesture and intonation due to the paralinguistic qualities of each: “we use features of pitch very largely in the manner of gestures, as when we talk harshly, sneeringly, petulantly, caressingly, cheerfully, and so on.” Pike (1967) also hypothesized a relationship between the two. Bolinger (1982, 1983, 1986) argued that “intonation belongs more with gesture than with grammar” (1983: 157), and that “[features of pitch] are gestures […] and that the contribution to discourse is of the same order” (1982: 19). His proposal for this commonality was that both channels are expressing the speaker’s emotional state. He felt that intonation, like gesture, could be an “iconic”, in this case an iconic for the speaker’s emotions, and that pitch and body parts move in parallel, rising together with emotional tension, and lowering together with emotional relaxation. Scheflen (1964) similarly argued that the head and hand rise with rising terminal pitch, and fall with falling terminal pitch. Balog and Brentari (2008) reported this parallelism of gestural and pitch movement in toddlers. McClave (1991) and Loehr (2004) both specifically looked for this parallelism, and found no evidence for it. Sign languages (included within Kendon’s (2004) broad definition of gesture) also contain prosody. Brentari’s (1998) model of sign language phonology included prosodic features in the movements of sign production. Nespor and Sandler (1999) discovered phonological phrases and intonational phrases, each with phonetic correlates, in Israeli Sign Language. They also found, as did Sandler and Lillo-Martin (2006), that facial articulators, which can combine in a variety of ways, are analogous to components of intonational melody. Note that there are several ways to talk about gesture and prosody with respect to sign language. 
The first is to note that sign language is itself gesture, according to Kendon’s definition, and to then discuss internal prosody within sign language. The preceding paragraph is along these lines. Another way is to note that researchers (e.g., Emmorey 1999) are proposing that there is gesture within sign language. Thus, sign languages contain not only internal prosody, but also internal gesture. The relationship between the two has not been explored, though Liddell does argue for the existence of both in ASL, stating that while intonation and gesture have both been traditionally analyzed in spoken languages as gradient and non-linguistic, “this cannot be done with ASL. This is because obligatory, gradient, and gestural phenomena in ASL play such a prominent, meaningful role that they cannot be ignored” (2003:xi). More specifically, Liddell treats “directional uses of signs as gradient and gestural phenomena driven by grammar and by meaning construction” (2003: xi). Wilcox (2004) explored yet another aspect of gesture and prosody with respect to sign language. In this case, gesture refers to the “traditional” kind, accompanying spoken language, while prosody refers to prosody within sign language. Using data from ASL, Catalan Sign Language, French Sign Language, and Italian Sign Language, Wilcox examined the role of gestures in the development of sign languages, specifically as a source of lexical and grammatical morphemes via an intermediate paralinguistic intonation stage. No survey of gesture and prosody would be complete without acknowledging the intellectual debt to Erving Goffman for championing the study of face-to-face interaction as a discipline in its own right. Goffman’s lifelong interest in what people “give off”
when they are co-present with others has given direction to half a century of researchers to date. “A continuum must be considered, from gross changes in stance to the most subtle shifts in tone that can be perceived” (Goffman 1981: 128).
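Bolinger's parallelism proposal discussed above, and the null results of McClave and Loehr, presuppose a measurable association between pitch and body-part height. One straightforward operationalization is a Pearson correlation over paired F0 and hand-height samples. The series below are invented toy values, deliberately constructed to be parallel:

```python
def pearson_r(xs, ys):
    """Pearson correlation: a natural test of Bolinger's claim that
    pitch and body parts rise and fall together."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

f0 = [180, 200, 230, 210, 190]        # illustrative F0 samples (Hz)
hand = [0.9, 1.0, 1.2, 1.1, 1.0]      # illustrative hand heights (m)
print(round(pearson_r(f0, hand), 2))  # 0.98 for this toy parallel data
```

On real data, McClave (1991) and Loehr (2004) found no evidence of such parallelism, so a correlation near zero would be the empirically expected outcome.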

4. Conclusion

As summarized in Section 2 and detailed in Section 3, gesture and prosody are clearly and convincingly inter-related, in numerous and diverse ways. They often perform complementary or cooperating functions, and even seem to be able to substitute for each other in utterance production. Though the exact nature and cause of their relationship is still under investigation, many researchers believe their tight connection is a facet of the strong underlying linkage between gesture and speech in general, which in turn is felt to exist because speech is a fundamentally embodied phenomenon.

5. References

Al Bayyari, Yaffa and Nigel Ward 2007. The role of gesture in inviting back-channels in Arabic. 10th Meeting of the International Pragmatics Association, El Paso, April 13, 2007.
Al Moubayed, Samer, Jonas Beskow, Björn Granström and David House 2010. Audio-visual prosody: Perception, detection, and synthesis of prominence. In: Anna Esposito (ed.), Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues, 55–71. Berlin: Springer.
Balog, Heather and Diane Brentari 2008. The relationship between early gestures and intonation. First Language 28(2): 141–163.
Beckman, Mary and Janet Pierrehumbert 1986. Intonational structure in English and Japanese. Phonology Yearbook 3: 255–310.
Benner, Allison 2001. The onset of gestures: General and contextual effects for different categories of gesture in spontaneous narratives. Orage (Orality and Gesture), Aix-en-Provence, France.
Beskow, Jonas, Björn Granström and David House 2006. Visual correlates to prominence in several expressive modes. Proceedings, Interspeech 2006: 1272–1275.
Birdwhistell, Ray 1970. Kinesics and Context. Philadelphia: University of Pennsylvania Press.
Bloomfield, Leonard 1933. Language. New York: Holt, Rinehart and Winston.
Bolinger, Dwight 1982. Nondeclaratives from an intonational standpoint. In: Robinson Schneider, Kevin Tuite and Robert Chametzky (eds.), Papers from the Parasession on Nondeclaratives, 1–22. Chicago: Chicago Linguistic Society.
Bolinger, Dwight 1983. Intonation and gesture. American Speech 58: 156–174.
Bolinger, Dwight 1986. Intonation and its Parts: Melody in Spoken English. Stanford: Stanford University Press.
Borràs-Comes, Joan and Pilar Prieto 2011. ‘Seeing tunes’. The role of visual gestures in tune interpretation. Journal of Laboratory Phonology 2: 335–380.
Brentari, Diane 1998. A Prosodic Model of Sign Language Phonology. Cambridge, MA: Massachusetts Institute of Technology Press.
Cavé, Christian, Isabelle Guaïtella, Roxanne Bertrand, Serge Santi, Françoise Harlay and Robert Espesser 1996. About the relationship between eyebrow raises and F0 variations. Proceedings of the International Conference on Spoken Language, 2175–2178. Wilmington: University of Delaware.
Condon, William 1976. An analysis of behavioral organization. In: William C. Stokoe and H. Russell Bernard (eds.), Sign Language Studies, Volume 13, 285–318. Silver Spring, MD: Linstok Press.

Creider, Chet 1978. Intonation, tone group and body motion in Luo conversation. Anthropological Linguistics 20: 327–339.
Creider, Chet 1986. Interlanguage comparisons in the study of the interactional use of gesture. Semiotica 62(1/2): 147–163.
Cvejic, Eric, Jeesun Kim and Chris Davis 2010. Prosody off the top of the head: Prosodic contrasts can be discriminated by head motion. Speech Communication 52(6): 555–564.
Duncan, Starkey and Donald Winslow Fiske 1977. Face-to-Face Interaction: Research, Methods, and Theory. Hillsdale, NJ: Lawrence Erlbaum Associates.
Duncan, Susan 2009. Gesture and speech prosody in relation to structural and affective dimensions of natural discourse. Gesture and Speech in Interaction (GESPIN), Poznań, Poland, 24–26 September 2009.
Duncan, Susan, Dan Loehr and Fey Parrill 2005. Discourse factors in gesture and speech prosody. Presented at the 2nd Conference of the International Society for Gesture Studies (ISGS), Lyon, France, June 2005.
Ekman, Paul, Wallace Friesen and Klaus Scherer 1976. Body movement and voice pitch in deceptive interaction. Semiotica 16(1): 23–27.
Emmorey, Karen 1999. Do signers gesture? In: Lynn S. Messing and Ruth Campbell (eds.), Gesture, Speech, and Sign, 133–159. Oxford: Oxford University Press.
Erickson, Frederick 1981. Money tree, lasagna bush, salt and pepper: Social construction of topical cohesion in a conversation among Italian-Americans. In: Deborah Tannen (ed.), Analyzing Discourse: Text and Talk, 43–70. Washington: Georgetown University Press.
Erickson, Frederick and Jeffrey Shultz 1982. The Counselor as Gatekeeper: Social Interaction in Interviews. New York: Academic Press.
Esposito, Anna, Daniela Esposito, Mario Refice, Michelina Savino and Stefanie Shattuck-Hufnagel 2007. A preliminary investigation of the relationship between gestures and prosody in Italian.
In: Anna Esposito, Maja Bratanić, Eric Keller and Maria Marinaro (eds.), Fundamentals of Verbal and Nonverbal Communication and the Biometric Issue, 65–74. Amsterdam: IOS Press.
Esteve-Gibert, Núria and Pilar Prieto 2011. The temporal alignment between prosody and gesture in Catalan-babbling infants. Gesture and Speech in Interaction (GESPIN), Bielefeld, Germany.
Ferré, Gaëlle 2005. Gesture, intonation, and the pragmatic structure of narratives in British English conversation. York Papers in Linguistics 2(3): 3–25.
Ferré, Gaëlle 2010. Timing relationships between speech and co-verbal gestures in spontaneous French. Workshop on Multimodal Corpora, Language Resources and Evaluation Conference (LREC), 86–91.
Ferré, Gaëlle 2011. Thematisation and prosodic emphasis in spoken French: A preliminary analysis. Gesture and Speech in Interaction (GESPIN), Bielefeld, Germany.
Flecha-García, María L. 2006. Eyebrow raising in dialogue: Discourse structure, utterance function, and pitch accents. Ph.D. dissertation, Department of Linguistics, University of Edinburgh.
Goffman, Erving 1981. Forms of Talk. Philadelphia: University of Pennsylvania Press.
Hirschberg, Julia and Christine Nakatani 1996. A prosodic analysis of discourse segments in direction-giving monologues. Proceedings of the Association of Computational Linguistics.
Hogrefe, Katharina, Wolfram Ziegler, Carina Tillmann and Georg Goldenberg 2011. Intonation and hand gestures in narrations of healthy speakers and speakers suffering from right hemisphere damage: A pilot study. Gesture and Speech in Interaction (GESPIN), Bielefeld, Germany.
Hübler, Axel 2007. The Nonverbal Shift in Early Modern English Conversation. Amsterdam: John Benjamins.
Jannedy, Stefanie and Norma Mendoza-Denton 2005. Structuring information through gesture and intonation. Interdisciplinary Studies on Information Structure 3: 199–244.
Keating, Pat, Marco Baroni, Sven Mattys, Rebecca Scarborough and Abeer Alwan 2003.
Optical phonetics and visual perception of kinetic stress in English. Presented at the 15th International Congress of Phonetic Sciences, Barcelona, August 2003.


Kendon, Adam 1972. Some relationships between body motion and speech: An analysis of an example. In: Aaron Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 177–210. New York: Pergamon Press.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary Ritchie Key (ed.), The Relationship of Verbal and Nonverbal Communication, 207–227. The Hague: Mouton.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kettebekov, Sanshzar, Mohammed Yeasin, Nils Krahnstoever and Rajeev Sharma 2002. Prosody based co-analysis of deictic gestures and speech in weather narration broadcast. Language Resources and Evaluation Conference (LREC).
Krahmer, Emiel, Zsofia Ruttkay, Marc Swerts and Wieger Wesselink 2002. Pitch, eyebrows and the perception of focus. Speech Prosody, Aix-en-Provence, France.
Leonard, Thomas and Fred Cummins 2011. The temporal relationship between beat gestures and speech. Language and Cognitive Processes 26(10): 1457–1471.
Levow, Gina-Anne, Susan Duncan and Edward King 2010. Cross-cultural investigation of prosody in verbal feedback in interactional rapport. International Speech Communication Association. Interspeech, Tokyo, Japan, 28 September 2010.
Liddell, Scott 2003. Grammar, Gestures, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Loehr, Dan 2004. Gesture and intonation. Ph.D. dissertation, Faculty of the Graduate School of Arts and Sciences, Georgetown University.
Loehr, Dan 2007. Aspects of rhythm in gesture and speech. Gesture 7(2): 179–214.
Loehr, Dan 2012. Temporal, structural, and pragmatic synchrony between intonation and gesture. Journal of Laboratory Phonology 3(1): 71–89.
McCafferty, Steven 2006. Gesture and the materialization of second language prosody. International Review of Applied Linguistics in Language Teaching 44(2): 197–209.
McClave, Evelyn 1991. Intonation and gesture. Ph.D.
dissertation, Department of Linguistics, Georgetown University.
McClave, Evelyn 1994. Gestural beats: The rhythm hypothesis. Journal of Psycholinguistic Research 23(1): 45–66.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
McNeill, David 2000. Catchments and contexts: Non-modular factors in speech and gesture production. In: David McNeill (ed.), Language and Gesture, 312–328. Cambridge: Cambridge University Press.
McNeill, David, Francis Quek, Karl-Erik McCullough, Susan Duncan, Nobuhiro Furuyama, Robert Bryll, Xin-Feng Ma and Rashid Ansari 2001. Catchments, prosody, and discourse. Gesture 1(1): 9–33.
Mehrabian, Albert and Susan R. Ferris 1967. Inference of attitudes from nonverbal communication in two channels. Journal of Consulting Psychology 31(3): 248–252.
Morency, Louis-Philippe, Iwan de Kok and Jonathon Gratch 2008. Context-based recognition during human interactions: Automatic feature selection and encoding dictionary. 10th International Conference on Multimodal Interfaces (ICMI), Chania, Greece, 20–22 October 2008.
Nakatani, Christine, Barbara Grosz, David Ahn and Julia Hirschberg 1995. Instructions for annotating discourses. (Tech. Rep. No. TR-21-95). Boston, MA: Harvard University, Center for Research in Computer Technology.
Nespor, Marina and Wendy Sandler 1999. Prosodic phonology in Israeli Sign Language. Language and Speech 42(2/3): 143–176.
Nobe, Shuichi 1996. Representational gestures, cognitive rhythms, and acoustic aspects of speech: A network/threshold model of gesture production. Ph.D. dissertation, Department of Psychology, University of Chicago.


Nobe, Shuichi 2000. Where do most spontaneous representational gestures actually occur with respect to speech? In: David McNeill (ed.), Language and Gesture, 186–198. Cambridge: Cambridge University Press.
Pike, Kenneth 1967. Language in Relation to a Unified Theory of the Structure of Human Behavior. The Hague: Mouton.
Quek, Francis, David McNeill, Robert Bryll, Cemil Kirbas, Hasan Arslan, Karl-Erik McCullough, Nobuhiro Furuyama and Rashid Ansari 2000. Gesture, speech, and gaze cues for discourse segmentation. IEEE Conference on Computer Vision and Pattern Recognition, 247–254.
Renwick, Margaret, Stefanie Shattuck-Hufnagel and Yelena Yasinnik 2004. The timing of speech-accompanying gestures with respect to prosody. Journal of the Acoustical Society of America 115(5): 2397.
Roth, Wolff-Michael 2002. From action to discourse: The bridging function of gestures. Cognitive Systems Research 3(3): 535–554.
Rusiewicz, Heather 2010. The role of prosodic stress and speech perturbation on the temporal synchronization of speech and deictic gestures. Ph.D. dissertation, Faculty of the School of Health and Rehabilitation Sciences, University of Pittsburgh.
Sandler, Wendy and Diane Lillo-Martin 2006. Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Sargin, Mehmet, Yucel Yemez, Engin Erzin and Ahmet M. Tekalp 2008. Analysis of head gesture and prosody patterns for prosody-driven head-gesture animation. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(8): 1330–1345.
Scheflen, Albert 1964. The significance of posture in communication systems. Psychiatry 27: 316–331.
Scheflen, Albert 1968. Human communication: Behavioral programs and their integration in interaction. Behavioral Sciences 13(1): 44–55.
Selkirk, Elisabeth 1981. On prosodic structure and its relation to syntactic structure. In: Thorstein Fretheim (ed.), Nordic Prosody II, 111–140. Trondheim: Tapir. First published [1978].
Shattuck-Hufnagel, Stefanie, Yelena Yasinnik, Nanette Veilleux and Margaret Renwick 2007. A method for studying the time-alignment of gestures and prosody in American English: ‘Hits’ and pitch accents in academic-lecture-style speech. In: Anna Esposito, Maja Bratanić, Eric Keller and Maria Marinaro (eds.), Fundamentals of Verbal and Nonverbal Communication and the Biometric Issue, 34–44. Amsterdam: IOS Press.
Shukla, Shaligram 1996. Śikṣās, prātiśākhyas, and the Vedic accent. In: Kurt R. Jankowsky (ed.), Multiple Perspectives on the Historical Dimensions of Language, 269–279. Münster: Nodus Publikationen.
Sondermann, Kerstin 2007. You’re talking to a horse! The interaction of speech and gesture in the animated movie ‘The Road to El Dorado’. Manuscript, Georgetown University.
Treffner, Paul, Mira Peter and Mark Kleidon 2008. Gestures and phases: The dynamics of speech-hand communication. Ecological Psychology 20(1): 32–64.
Tuite, Kevin 1993. The production of gesture. Semiotica 93(1/2): 83–105.
Valbonesi, Lucia, Rashid Ansari, David McNeill, Francis Quek, Susan Duncan, Karl-Erik McCullough and Robert Bryll 2002. Temporal correlation of speech and gestures focal points. Conference on Systemics, Cybernetics and Informatics, Orlando, USA.
Wilcox, Sherman 2004. Gesture and language: Cross-linguistic and historical data from signed languages. Gesture 4(1): 43–73.

Dan Loehr, Washington D.C. (USA)


101. Structuring discourse: Observations on prosody and gesture in Russian TV-discourse

1. Introduction
2. Gestures and speech in discourse
3. Data discussed
4. Analysis
5. Discussion
6. Concluding remarks
7. References

Abstract

This paper focuses on co-speech gestures that are relevant for the structuring of conversations. To this end, some theoretical approaches to conversational and interactional gestures are introduced first (Bavelas et al. 1995; Kendon 2004; McNeill 1992; Müller 2004). The structuring of a conversation is then described with an example from Russian media discourse (TV interviews). The observations and analyses give insight into the dialogue structure, the co-occurring prosodic features, and the gestural structure. Accentuation is described from a perceptual point of view with regard to prosodic characteristics and gestures. The interplay of prosodic features and gestures in the turns of the data, and their temporal coordination, supports the idea of a multimodally created discourse.

1. Introduction

In interaction, speakers use verbal, prosodic, and nonverbal cues to convey their information. They use prosody, gestures, and facial expressions to build and structure their verbal contributions. Gestures do not appear randomly in conversations; they are expressed in coordination with the verbal elements communicated (see Kendon 2004; McNeill 2000; Müller 1998). Therefore, the precise positions at which co-speech gestures occur in spontaneous speech need to be identified. When studying speech production, especially spontaneous speech, it is essential to look at both prosody and gestures in order to figure out whether and how the two are related. Consequently, an audio-visual analysis including micro-observations of gestures and prosodic phenomena is suggested in this article. The analyzed speech data comprise authentic Russian media discourse. For Russian, there is only a limited number of studies on gestures and speech (e.g., Grigor'eva, Grigor'ev, and Krejdlin 2001; Gorelov 1980; Krejdlin 2002a; Krejdlin 2002b). These focus on intercultural aspects of nonverbal communication rather than on linguistic dimensions. A primary concern in these studies is gestures that are produced without speech and have a verbal translation, so-called emblems (see Ekman and Friesen 1969). The dictionary of Russian gestures by Grigor'eva, Grigor'ev, and Krejdlin (2001) mainly presents and explains emblems. In articles following the dictionary part, emblematic gestures and co-speech gestures are discussed for topics such as gender-specific usage or language acquisition. They are mostly oriented toward Western gesture researchers such as McNeill or Kendon, whose accounts of speech-accompanying gestures will be outlined in the following.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1392–1400


2. Gestures and speech in discourse

Gestures occurring simultaneously with speech are commonly called co-speech gestures. They fulfill different functions, one of which is related to discourse and its organization and structuring. In their terminology, Ekman and Friesen (1969) use the term illustrator, while Müller (1998) suggests the term discursive gesture. Bavelas and colleagues (1995) employ the term conversational gestures, which occur while people are talking to each other. Chui, mainly working on Chinese conversation, mentions a subtype of gestures not associated with lexical elements that "convey script evoked information," which can be subsumed here as well (Chui 2009: 672). One main common feature of the gestures mentioned is that they are spontaneously created by the speakers in the speech production process. Moreover, these conversational gestures, when analyzed carefully, share certain characteristics concerning shape, realization, and their relation to the current discourse. Gestures, together with speech and voice, modulate and structure discourse. Through accentuation in particular, speakers direct the hearer's attention in order to organize their contributions. The following sections introduce some significant aspects of the gesture types suggested by different gesture researchers that are necessary for the observations and analyses at hand.

2.1. Kendon's approach: discourse unit marker gestures and the open hand supine

Kendon mentions the interactive functions of gestures. Gestures can be used to signal who is addressed in a current conversation. They also regulate turns, as in pointing to a person to give them a turn (Kendon 2004: 159). Kendon introduces gestures that play a role in discourse and may indicate the status of a certain unit within discourse. These have been called discourse unit marker gestures (Kendon 1995: 274) and fulfill a pragmatic function within discourse. A gesture type that has also been discussed in detail by Kendon (2004) is the open hand supine (OHS) gesture, which can be used for pointing. Another function the open hand supine gesture can have is that of a discourse structure marker (Kendon 2004: 204). The open hand supine gesture will be discussed in the analysis of the Russian TV data below.

2.2. Bavelas' approach: interactive gestures

The classification suggested by Bavelas et al. (1995) shows similarities to the approaches offered by Ekman and Friesen (1969) and McNeill (1992) but differs in some details. The focus is on the structure of dialogues. The gestures accompanying speech are called conversational gestures and comprise topic gestures and interactive gestures, with the latter being subclassified in a very detailed and useful manner. Interactive gestures "address and maintain the interaction required by dialogue rather than conveying meaning within the dialogue" (Bavelas et al. 1995: 394). Since they "serve to […] regulate the process of having a dialogue" (Bavelas et al. 1995: 398), they are reminiscent of Ekman and Friesen's (1969) regulators. According to the authors, this type of gesture can be included without interrupting the interlocutor. The interlocutor will probably notice the insertion of these gestures as well, so whether this counts as an interruption depends on how interruption is defined.


Within interactive gestures, there are delivery gestures, citing and seeking gestures, and turn gestures. In our analysis of an interview, the two subtypes turn and delivery gestures appear to be especially interesting. A speaker may want the turn or may hold it. Turn gestures should, from the interviewer's perspective, be highly conventionalized and role-specific. However, their realization may differ and also depends on the urgency of the turn taking. Turn gestures thus assist the process of turn exchange. "Delivery gestures […] refer to the delivery of information by speaker to addressee" (Bavelas et al. 1995: 397). In the data analysis section, the use of interactive gestures will be described.

2.3. Müller's approach: discursive gestures and the palm up open hand

The multimodal functioning of gestures is studied by Müller for metaphorization but also for communicative processes in general. One important gesture type, examined in Müller (2004), is the palm up open hand gesture, which may serve several functions. According to Müller, a palm up open hand has "two recurrent kinesic features: a more or less loosely extended palm and an upward orientation" (Müller 2004: 241). She argues from her analyses that "the recurrent kinesic features are related to the communicative activities and the less recurrent features to the content of speech" (Müller 2004: 253). The discursive function of gestures, analyzed by Müller, and other communicative activities with co-occurring gestures will be observable in the data in section 4.

2.4. McNeill's approach: illustrators and cohesives

McNeill's classification refers to categories established by Ekman and Friesen in 1969, who differentiated emblems, adaptors, illustrators, regulators, and affect displays. McNeill concentrates on those gestures illustrating verbal speech. For illustrators that co-occur with speech, McNeill assumes the subcategories iconics, metaphorics, deictics, and beats (see McNeill 2000). He earlier also mentions cohesives, which are related to discourse structure and "emphasize continuities" (see McNeill 1992: 16). For McNeill, both gesture and speech are part of language. Especially worth mentioning for the data presented here are deictics, beats, and cohesives. Deictics do not necessarily have to refer to a spatial context; they may as well be used in temporal or other abstract contexts. Beats are coordinated with the verbal elements of a contribution, especially with accented syllables, and are characterized by their rhythmic nature. Gestures may have several functions, so that deictics or even beats may carry important discourse-structuring information. In the following analytical section, gestures playing a decisive role for the organization of discourse structure are discussed.

3. Data discussed

The data chosen to illustrate an audio-visual analysis are authentic TV interviews. These interviews have a permanent interviewer and a guest who is being interviewed. The topic of the selected conversations is predominantly business and finance, with some reference to politics. The political interview programme is Aktual'nyj razgovor ('the latest talk'), broadcast by Odesskoe TV; the year of production is 2009. This type of communication has several advantages for the analysis of gestures: the interviewee is in a position where he has to answer questions or explain complex matters.


Therefore, a fairly high frequency of conversational and interactive gestures can be expected in these interviews on business and politics. Using authentic material has advantages and disadvantages for linguistic analyses. An advantage is that the analyst can in no way influence the interactants. However, existing material, especially media data, also has disadvantages. For gesture analysis, one cannot control the camera angle, which can sometimes be rather inconvenient. Still, it is undeniably authentic spontaneous speech, for which an audio-visual analysis appears promising.

4. Analysis

The analysis is carried out in order to show the interplay of prosodic features and gesture realizations when turn taking is planned or carried out. We will look at prosodic realization, including pitch and temporal features, and at the gestures used to initiate turn taking. The video data were analyzed using the annotation software ELAN (Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands) (see Sloetjes and Wittenburg 2008); the prosodic analysis was done in Praat (see Boersma and Weenink 2004).

4.1. Dialogue structure

The interview in question has two interlocutors: the presenter and the interviewed expert. The roles within this type of dialogue are fixed: the interviewer asks questions and the interviewee is then expected to answer them. However, the interviewee might also alter the distribution of questions and answers. In the following, we will focus on two Russian examples that illustrate common realizations of turn taking phenomena in detail, especially when one interlocutor actively decides on turn taking, i.e., giving the turn (to her interlocutor) in (1) and, in contrast, holding the turn rather than giving it in (2). In example (1) the (female) interviewer directly asks the interlocutor for his opinion and thus invites him to take the turn. Turn taking is accompanied by pitch changes and changes in the temporal structure of utterances, e.g., pauses (see Mondada 2007). The turn may be assigned by the interlocutor directly, i.e., via verbal expression or via gesture.

(1)

a VAše mnenie, kak vy dumaete?
and your.ACC opinion.ACC how you.NOM think.2PL.PRS
'and your opinion, what [lit. how] do you think?'

Prosodic highlighting is associated with the relevant verbal elements, in this case the pronoun VAše ('your'). The personal pronoun in (1) is accentuated and carries the main stress of the utterance. The discourse function for the interviewer is to signal her interest in the opinion of the interviewee and to invite him to start talking. Example (2) is interesting from two perspectives: the interviewed expert wants to point out a detail in his contribution that is content-related, while at the same time he shows that he is not willing to be interrupted, which serves a pragmatic function.

(2)

a kogda oni sobeRUtsja […]
and when they.NOM intend.3PL.PRS
'and when they intend to […]'


It even includes the rejection of the other person's attempt to get the turn. This blocking of the interviewer's attempt to take over the turn is realized in different modes. Auditorily, the speaker increases loudness, and with it pitch, on kogda oni ('when they') while the interviewer is speaking simultaneously, which indicates that she wants the turn. At this point the interviewee inserts a rather long intra-phrasal pause of 618 ms after kogda oni ('when they'), as if to wait for her to stop speaking. After the lengthy pause his turn continues and she remains silent. This strategy to hold the turn has several components: loudness, pitch, pause, and gesture (see 4.3).
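Intra-phrasal pauses like the 618 ms one above are measured here in Praat; the same kind of measurement can also be sketched programmatically. The following minimal Python illustration is not the analysis pipeline actually used in this study; the frame rate, silence threshold, and minimum pause length are all assumed values. It locates silent stretches in an intensity contour sampled at fixed intervals:

```python
def find_pauses(intensity_db, frame_ms=10, silence_db=45.0, min_pause_ms=200):
    """Return (start_ms, duration_ms) pairs for stretches whose intensity
    stays below silence_db for at least min_pause_ms. All thresholds are
    illustrative assumptions, not values from the chapter."""
    pauses = []
    start = None
    for i, db in enumerate(intensity_db):
        if db < silence_db:
            if start is None:
                start = i          # silence begins at frame i
        else:
            if start is not None:  # silence just ended; check its length
                dur = (i - start) * frame_ms
                if dur >= min_pause_ms:
                    pauses.append((start * frame_ms, dur))
                start = None
    if start is not None:          # contour ended during silence
        dur = (len(intensity_db) - start) * frame_ms
        if dur >= min_pause_ms:
            pauses.append((start * frame_ms, dur))
    return pauses

# Toy contour: 300 ms of speech, 620 ms of silence, 300 ms of speech
contour = [60.0] * 30 + [30.0] * 62 + [60.0] * 30
print(find_pauses(contour))  # [(300, 620)]
```

In practice the intensity contour would be exported from Praat; the sketch only shows how a pause such as the 618 ms one can be located and measured once such a contour is available.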

4.2. Prosody

Since accentuation is assumed to be an important means of structuring one's discursive contribution, the realization of accented syllables is the focus of the present analysis. A main stress or accent in a prosodic unit is characterized by increased loudness, increased pitch, and, especially in Russian, more precise articulation (see Cruttenden 1994: 16). The main accent of utterance (1) on the possessive pronoun is performed by rising pitch and increasing loudness on -VA-, while the syllable is lengthened so that it receives special attention. This impression is reinforced by the accelerated tempo of the following question.

Fig. 101.1: Pitch movement on the accented verb “intend” in example (2)

In phrase (2) the speaker accentuates -RU- in the word soberutsja ('intend') with intensified loudness and marked lengthening, which results in an almost bisyllabic realization of the stressed syllable. The pitch increases considerably while it gets louder and shows a high plateau before the fall (see Fig. 101.1). For a male voice, the pitch used is relatively high, which might be caused by the speaker's emotional involvement in trying to stop his interlocutor (see Paeschke 2003; Richter 2003, 2009).
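The prominence cues described in this section (increased loudness, raised pitch, and lengthening) can be combined into a simple per-syllable score. The sketch below merely illustrates that idea; the z-score normalization, the equal weighting of the three cues, and all numeric values are assumptions rather than measurements from the data:

```python
from statistics import mean, stdev

def z(values):
    """Z-normalize a list of measurements."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s if s else 0.0 for v in values]

def most_prominent(f0_hz, intensity_db, dur_ms):
    """Index of the syllable with the highest combined (equally weighted)
    z-score of F0, intensity, and duration."""
    scores = [sum(t) for t in zip(z(f0_hz), z(intensity_db), z(dur_ms))]
    return scores.index(max(scores))

# Invented values for the four syllables of so-be-RU-tsja in (2):
f0   = [110, 115, 180, 120]   # Hz: pitch peak on -RU-
loud = [62, 63, 71, 60]       # dB: loudest on -RU-
dur  = [120, 110, 260, 140]   # ms: -RU- markedly lengthened
print(most_prominent(f0, loud, dur))  # 2, i.e. the third syllable, -RU-
```

A perceptual accent judgment, as used in this chapter, of course involves more than such a score; the sketch only makes explicit how the three acoustic cues converge on the same syllable.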

4.3. Gestures

On the accented syllable VA- ('your') in example (1) there is a slight head nod by the female presenter, accompanying the beat-like palm up open hand gesture directed at the interlocutor (Fig. 101.2a). The accent thus seems to be articulated prosodically and via hand and head movement (for similar observations see Flecha-García 2010). When adding a direct question, her right hand moves back to her left one in rest position (Fig. 101.2b) while her head moves forward.

Fig. 101.2a: Palm up open hand in (1)

Fig. 101.2b: Hands retracted in rest position in (1)

Fig. 101.3a and Fig. 101.3b illustrate the speaker's movement at the same time as the accentuated word sobeRUtsja ('intend') is articulated in (2). At this point, the speaker is leaning forward and bowing, as if to bodily co-produce the accent that is articulated prosodically. The leaning forward additionally foregrounds the hand gesture. The index finger of his right hand is extended and points upwards. This forward movement of the interviewee is a sign of presence for his interlocutor in the conversation.

Fig. 101.3a: Raised index finger in (2)

Fig. 101.3b: Additional bow by the interviewee in (2)

Simultaneously with the leaning forward in example (2), the speaker fixes his gaze on the interlocutor, so that the eye contact seems to intensify. Another synchronized movement can be observed: on the accented syllable -RU-, the right hand moves laterally in a beat-like manner, which underlines the accentuation as well. This movement starts exactly after the inserted pause (see 4.2) and, together with the other multimodal aspects, foregrounds the intention to hold the turn. In the two examples under discussion, accentuation is co-produced using both voice and gesture (see Swerts and Krahmer 2008 for similar findings).
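Temporal coordination of this kind (a beat stroke co-occurring with an accented syllable) can be quantified from two annotation tiers, e.g. a gesture tier and a syllable tier exported from ELAN, as interval overlap and onset offset. The following is a minimal sketch with invented example times; nothing here reproduces the actual annotations of the interview:

```python
def alignment(gesture, syllable):
    """gesture and syllable are (start_ms, end_ms) intervals, e.g. taken
    from two ELAN annotation tiers. Returns (overlap_ms, onset_offset_ms);
    a positive offset means the stroke starts after the syllable onset."""
    g0, g1 = gesture
    s0, s1 = syllable
    overlap = max(0, min(g1, s1) - max(g0, s0))
    return overlap, g0 - s0

# Invented times: a beat stroke beginning 40 ms after the onset of -RU-
print(alignment((1240, 1460), (1200, 1480)))  # (220, 40)
```

Aggregating such offsets over many accent-gesture pairs is one way to test claims about the temporal coordination of prosodic and gestural accentuation.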

5. Discussion

In these micro-observations the combined analysis of prosodic features and gestures has proven to be a useful means of obtaining a comprehensive picture of what is produced by the interactants. A further advantage of a combined prosodic-gestural approach is that it makes one aware of the various functions co-speech gestures can fulfill. These functions are related to the discourse structure, to the cognitive structure of propositions, and to the semantics of words, to name but a few. A gesture in a specific discourse situation may, as was shown, have more than one function (see Kendon 2004), which holds true for the functions of prosody as well. Judging from the data, the interlocutors are able to interpret the signals simultaneously and to react appropriately, as in (2) when the interviewer abandons her attempt to take the turn. She does so because of the intensified prosodic cues, the hand gesture, and the bow, which are performed at the same time. Interpretation is done immediately by addressees, who perceive speech as a complex phenomenon. The several layers unpicked in this paper are not dismantled and interpreted one by one but rather processed jointly: we perceive prosody and gestures as a whole and are able to infer an interpretation. Further analyses focusing on the interplay of prosody and gestures could compare different types of turn taking. As shown, accentuation phenomena are realized in different modalities, which seems to ensure that the intended reading is successfully transferred to the interlocutor. As Goodwin also pointed out: "[…] talk and gesture mutually elaborate each other within (1) a larger sequence of action and (2) an embodied participation framework constituted through mutual orientation between speaker and addressee" (Goodwin 2000: 1499). The descriptions for Russian in this paper thus support these suggestions, originally made for English.

6. Concluding remarks

Having suggested the inclusion of visible and audible cues in speech analysis, we have seen that in the data discussed both gestures and prosody seem to be responsible for the transfer of discourse strategies. It is essential to further investigate phenomena in speech production and perception, and consequently to consider prosodic and gestural elements in speech. The observations in the Russian conversational data presented here tend to correspond with studies on interactive gestures (e.g., Bavelas et al. 1992; Kendon 1995; Levy and McNeill 1992; Quek et al. 2002) and shed new light on the processes going on during conversation, especially turn taking phenomena. This corresponds with Chui's analysis of Chinese discourse situations: "hand movements can contribute to conversational coherence as a collaborative achievement between the participants, just like speech" (Chui 2009: 675). So, both gesture and prosody appear to be influential in the structuring of discourse. However, more conversational data across a broad range of languages need to be considered in future research.

Acknowledgements

The chapter is partly based on a contribution presented at the GESPIN conference 2011, Bielefeld/Germany (see Richter 2011). I thank the anonymous reviewers of the conference for helpful comments that have been incorporated in the article.

7. References

Bavelas, Janet Beavin, Nicole Chovil, Linda Coates and Lori Roe 1995. Gestures specialized for dialogue. Personality and Social Psychology Bulletin 21(4): 394–405.
Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15(4): 469–489.
Boersma, Paul and David Weenink 2004. Praat: Doing phonetics by computer, Version 4.2.19 [Computer programme]. Online: http://www.praat.org, accessed on 12 Sep 2013.
Chui, Kawai 2009. Conversational coherence and gesture. Discourse Studies 11(6): 661–680.
Cruttenden, Alan 1994. Intonation. Cambridge: Cambridge University Press.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1(1): 49–98.
Flecha-García, Maria L. 2010. Eyebrow raises in dialogue and their relation to discourse structure, utterance function and pitch accents in English. Speech Communication 52(6): 542–554.
Goodwin, Charles 2000. Action and embodiment within situated human interaction. Journal of Pragmatics 32(10): 1489–1522.
Gorelov, Ilja Naumovič 1980. Neverbal'nye Komponenty Kommunikacii [Nonverbal components of communication]. Moscow: Izdatel'stvo Nauka.
Grigor'eva, Svetlana A., Nikolaj V. Grigor'ev and Gregorij E. Krejdlin 2001. Slovar' Jazyka Russkich Žestov [Dictionary of Russian Gestures]. Moscow/Vienna: Jaz. russkoj kul'tury.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Krejdlin, Gregorij E. 2002a. Neverbal'naja Semiotika [Nonverbal semiotics]. Moscow: Novoe literaturnoe obozrenie.
Krejdlin, Gregorij E. 2002b. Slovar' jazyka russkich žestov v ego sopostavlenii s drugimi slovarjami [Dictionary of Russian gestures in comparison with other dictionaries]. Semiosis Lexicographica X: 27–45.
Levy, Elena and David McNeill 1992. Speech, gesture, and discourse. Discourse Processes 15(3): 277–301.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press.
McNeill, David (ed.) 2000. Language and Gesture. Cambridge: Cambridge University Press.
Mondada, Lorenza 2007. Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies 9(2): 194–225.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte, Theorie, Sprachvergleich. Berlin: Arno Spitz.
Müller, Cornelia 2004. Forms and uses of the palm up open hand: A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and the Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler.
Paeschke, Astrid 2003. Prosodische Analyse emotionaler Sprechweise. Berlin: Logos.
Quek, Francis, David McNeill, Robert Bryll, Susan Duncan, Xin-Feng Ma, Cemil Kirbas, Karl-Erik McCullough and Rashid Ansari 2002. Multimodal human discourse: Gesture and speech. ACM Transactions on Computer-Human Interaction 9(3): 171–193.
Richter, Nicole 2003. Evaluative utterances and their prosodic realization. In: Renate Blankenhorn, Joanna Błaszak and Robert Marzari (eds.), Beiträge der Europäischen Slavistischen Linguistik (POLYSLAV 6), 150–155. München: Sagner.
Richter, Nicole 2009. Prosodie evaluativer Äußerungen: Experimentelle Untersuchungen zum Russischen. Frankfurt am Main: Peter Lang.
Richter, Nicole 2011. Interactive co-speech gestures and accentuation in Russian discourse: Preliminary findings. GESPIN, Bielefeld. Online: http://gespin.uni-bielefeld.de/?q=node/66, accessed on 12 Sep 2013.
Sloetjes, Han and Peter Wittenburg 2008. Annotation by category – ELAN and ISO DCR. In: Proceedings of the 6th International Conference on Language Resources and Evaluation. Online: http://www.lat-mpi.eu/tools/elan/, accessed on 12 Sep 2013.


Swerts, Marc and Emiel Krahmer 2008. Facial expression and prosodic prominence: Effects of modality and facial area. Journal of Phonetics 36(2): 219–238.

Nicole Richter, Frankfurt (Oder) (Germany)

102. Body movements in political discourse

1. Politics and the body
2. Politics and physical appearance
3. Politics and body movements
4. Politics and hand gestures
5. Conclusion
6. References

Abstract

This contribution provides a brief overview of politicians' bodily communication. Gesture has been considered fundamental to rhetorical communication since classical times. In the contemporary technological age, politicians' bodily communication is still important, especially on TV and in new media, where politicians can build a credible and persuasive image thanks also to gestural, facial, and postural devices. All this is now even emphasized by close-ups and extreme close-ups: new technologies, by permitting the perception of small details of the politicians' bodies, magnify the role and power played by the body in political communication. Studies have tried to explain the role of the body both in the formation of audience impressions of a political candidate and in related voting decisions. Some studies have examined how physical appearance can be a predictor of electoral outcomes. Many studies have shown how body movements and hand gestures can affect receivers' evaluations of a politician's competence and personality (and also his or her state of health) and how these evaluations can influence their vote choices. The chapter stresses that the importance of the body within contemporary political communication is probably based on psychological processes which are mainly, if not exclusively, automatic and outside awareness, from the point of view of both the actors and especially the audiences.

1. Politics and the body

According to Aristotle, humans are political animals (Aristotle, Politics, written between 355 and 323 BC). Contemporary scholars likewise agree that we are social animals driven by interpersonal exchanges. Thus, the mutual relevance of body signals and politics is natural (Montepare 2010), and, in a world full of political competition and conflict, a greater recognition and understanding of this relationship between body and politics is needed. Today, politicians are by and large valued on how they are perceived and on their communication style. So, today, more than of politics, it seems appropriate to speak of "political perception" (De Landtsheer 2010). Political perception is influenced by a combination of factors, often referred to as the modernization or "Americanization" of politics: the emphasis on marketing, globalization, visual culture, and the growth of new technologies such as the World Wide Web. This development of political communication encourages citizens to form intuitive impressions of political candidates mainly based on indices such as linguistic style, physical appearance, and bodily behavior, rather than to form a weighted opinion based on the content of political arguments. The body seems to have replaced ideology. This is not a new trend, since from ancient times and in all ages attention has been given to the bodily and verbal features of political communication (e.g., Billig [1987] 1996). However, in recent times there has been a particular growth of research in this field. A study by Todorov et al. (2005), published in Science, shows that rapid and spontaneous judgments about the competence of political candidates can be based solely on their physical appearance, and that these judgments can predict electoral outcomes. These results, which have opened a scientific debate within the studies on "political perception" and voting behavior (see also the Journal of Nonverbal Behavior, June 2010), are reviewed in this chapter. Traditionally, bodily or nonverbal communication refers to both visual and vocal cues; however, for the sake of brevity, space is given here to the more recent studies (the last 5–10 years) on the visual and bodily aspects of political communication. Furthermore, given the high penetration of TV in politics, visual signals are becoming more prevalent in presidential elections than vocal signals (Bucy and Grabe 2007). Several studies have demonstrated the relationship between various forms of visual bodily cues and political preferences.
Cherulnik and collaborators (2001) found that members of the audience mimicked the bodily behaviors of highly charismatic political candidates (who exhibited more smiles and more visual attention to the public, with their eyes frequently turned to it) more than those of less charismatic politicians. Presumably, this emotional contagion leads to increased consensus towards the charismatic candidates. The idea of bodily-emotional contagion is the basis of other studies that investigated how politicians' body behavior during the communication of information influences the audience's perception both of the politicians and of the information. Bucy (2000) found that politicians' bodily reactions to important news had an effect on how voters perceived the news. Certainly, the most interesting part of research on the body in political communication concerns the direct influence of politicians' bodily signals on the image voters form of them and on electoral outcomes. For example, recent research has shown that judgments about the personality traits of political candidates are often based solely on their image and body signals, and these judgments can predict election outcomes (Mattes et al. 2010). This suggests that voters rely heavily on the appearance of the candidates standing for election. A functioning democracy, which gives citizens the power to choose their representatives and political leaders, requires most voters to be responsible and aware, in order for society to be confident that their judgments are wise. In fact, scholars of political behavior have often assumed that voters are rational actors whose political choices are free from prejudice and resistant to the influence of irrelevant factors, and that the process of vote choice is based on cognitive-psychological relationships in which actors make conscious and rational evaluations. But the process of vote choice is not so straightforward. Today,


political candidates are less ideologized and have complex and nuanced political views. Choosing which candidate to elect requires voters to consider a large number of important dimensions, including international, national, and social issues of economy, morality, security, and religion. The rich flow of information on candidates from the press, radio, television, and internet sources also means that voters are inundated with facts, rumors, quotes, interviews, images, and other variously relevant signals that they should properly filter, encode, organize, store, and later remember and retrieve, but also evaluate and judge in order to make fully informed choices. Cognitive psychology teaches that when a person is dealing with more information than s/he can examine, the mind tends to simplify the decision-making process by relying on simple rules or heuristics (Kahneman, Slovic, and Tversky 1982). Given the complexity of the voting choice, it is not surprising that voters use mental shortcuts to reach their final decision. Research on political choice has indeed identified a number of heuristics that voters use in order to simplify the decision process (Lau and Redlawsk 2006; Riggle 1992). For example, many voters rely on political party affiliation when selecting candidates (Bartels 2000). This is a common strategy, so much so that political affiliation is a good predictor of a candidate's positions on many political issues (Campbell et al. 1960). Other strategies, such as the use of apparently "superficial" information like that relating to the candidate's image, are less normatively defensible and therefore call into question the concept of the rational voter. In general, then – especially in a political landscape like the contemporary one – candidate characteristics may be considered not necessarily superficial but a source of relevant features, and may thus be elaborated in detail (see the unimodel of Kruglanski and Thompson 1999).
In any case, body signals provide a channel, separate from more explicit verbal information, through which voters often form their impressions of a candidate (Noller et al. 1988), whether by heuristic or by more detailed elaboration: a variety of bodily signals indeed correlate with political perception and vote choice. The following sections analyze the role of some components of bodily communication in political perception and the subsequent vote choice.

2. Politics and physical appearance

Much research has been devoted to analyzing the perception of politicians (impression formation) on the basis of the politician's facial appearance. Appearance provides information about the gender, age, race, and physical attractiveness of the candidate – all variables that have proven per se to be predictors of the vote (Banducci et al. 2008) – but it is often used to infer personality traits too (Hall et al. 2009; Hassin and Trope 2000; Langlois et al. 2000; Todorov et al. 2008; Zebrowitz and Montepare 2005). Judging by physical appearance means inferring the personality traits of a candidate, for example from his or her facial appearance, and using this assessment to make a political choice (Mattes et al. 2010). Many recent studies have shown that political decisions are influenced by trait inferences based on appearance. In these studies, participants are shown pictures of people (usually little-known real politicians) and are asked to judge the photos on one or more dimensions (e.g., "How competent does this person seem?"). These judgments are compared with actual election results (for those politicians), or with the hypothetical vote decisions of a separate group of participants (who are only

102. Body movements in political discourse

1403

asked to indicate their willingness to vote for the people shown in the pictures). Such comparisons have shown the predictive power of these judgments for electoral outcomes (both real and hypothetical). In one study, Martin (1978) found that judgments of competence, based on politicians' pictures published in newspapers, predicted the outcomes of hypothetical and real elections. More recently, other scholars (Ballew and Todorov 2007; Hall et al. 2009; Todorov et al. 2005) showed that the competence inferred from the face (facial competence) predicts the vote share and the probability of winning U.S. elections (Senate, House, and Governor). The predictive power of facial competence remains even when controlling for other variables, such as the familiarity of the candidate, gender, race, attractiveness, and age. In general, competence – and traits linked to it, such as intelligence and leadership – emerged as the only clear predictor of the election outcome. This makes sense, because competence is considered one of the most important traits for a political candidate. Indeed, the degree to which the judgment of a specific trait predicts election results is strongly correlated with the importance assigned to that trait (Hall et al. 2009). For example, judgments of traits considered unimportant for a politician (e.g., confidentiality) do not predict election results. By contrast, judgments of traits considered important (e.g., reliability, honesty, organization) do predict election results, and the degree of predictive success tracks the importance assigned to these traits. These results suggest that voters have the "right" notion of the types of politicians who should be elected. However, some voters may rely on "wrong" signals to infer the "right" attributes.
In other words, instead of basing their decision on valid indicators of competence, they prefer, for simplicity, to rely on heuristics such as appearance (see Lenz and Lawson 2008 for a discussion): that is, they rely on surface signals for a deep effect (Hall et al. 2009). Automatic evaluation processes (Strack and Deutsch 2004) are therefore often activated (see also Bonaiuto and Maricchiolo 2013). They can be explained, for example, by Fazio's (1990) MODE model (Motivation and Opportunity as Determinants). According to this model, motivation and opportunity, defined as cognitive resources and time, determine how deliberate or automatic a process of attribution or evaluation is. Psychological processes (cognitive and behavioral) are neither purely spontaneous nor merely intentional: they are mixed processes involving both automatic and controlled components. Any controlled component within a mixed sequence requires that the individual be motivated to engage in cognitive tasks (elaboration, assessment, etc.) and have the opportunity (time and cognitive resources) to do so. Probably – since some voters may have low motivation for political decisions, and/or no way to learn more about the candidates, and/or inadequate skills for recognizing the right indicators of the salient traits, and/or too little time for deliberate processing – they form impressions of candidates instantly and automatically, using surface signals. This hypothesis is consistent with research showing that appearance-based inferences are made even after extremely rapid exposure to faces (Todorov 2008). Ballew and Todorov (2007) observed that judgments of competence produced after a 100-millisecond exposure to the faces of the winners and runners-up in U.S. elections for Governor or Mayor were as accurate in predicting the election results as judgments produced after a 250-millisecond exposure or after unlimited exposure.
In a separate experiment, the authors showed that when participants were forced to issue their judgments within 2 seconds (with no time limit, the average response time was 3.5 seconds), the accuracy of prediction did not decrease. The only manipulation that influenced this accuracy was asking participants to reflect and make accurate judgments. After these instructions, the predictions were, curiously, worse. Further analysis showed that the accuracy resided in the automatic rather than the deliberative components of the judgments. In a sense these results are not surprising, since people can judge personality traits after a single glance at a face. People are generally unaware of the cues they use in their judgments of faces (Rule et al. 2008), and instructions to make accurate judgments cannot help under these conditions (Wilson and Schooler 1991). Such instructions simply introduce noise into the ratings (Levine, Halberstadt, and Goldstone 1996). The experiments of Ballew and Todorov (2007) demonstrate that impressions of competence can be formed quickly and easily without any deliberative process. Once formed, these impressions can influence voting decisions, and this influence cannot always be recognized by voters (Hall et al. 2009). This relationship between appearance and vote has been observed not only in the U.S. but also in other countries, in some cases by having participants assess politicians of a nationality different from their own (Antonakis and Dalgas 2009; Atkinson, Enos, and Hill 2009; Castelli et al. 2009; Poutvaara, Jordahl, and Berggren 2009). Benjamin and Shapiro (2009) have shown that participants were able to predict governor election outcomes from "thin slices" (Ambady and Rosenthal 1992) of bodily behavior, i.e., 10-second video clips, without audio, of an electoral debate. Curiously, when participants could instead listen to the debate, and thus infer the party affiliation, political preferences, and views of the candidates, their predictions were no better than chance.
This finding provides strong evidence that bodily behavioral indicators may carry more weight than verbal ones in determining likely election outcomes. Moreover, these results are consistent with the argument of Ballew and Todorov (2007) that the effects of physical appearance on voting decisions derive from quick, unreflective impressions. Rapid judgments of competence, based solely on facial appearance, therefore predict election outcomes. However, it is not yet clear what factors underlie this relationship and how it interacts with other factors. Olivola and Todorov (2010) have tried to identify the determinants of the appearance-based inferences of competence that predict election outcomes. To do so, the authors sought other trait inferences that co-vary with the inference of competence, noting that judgments of competence are correlated with judgments of reliability, organization (a facet of conscientiousness), emotional stability, and honesty. They are not related to judgments of pleasantness. The authors also investigated the extent to which judgments of facial competence correlate with facial attributes considered important in studies on appearance: attractiveness (Langlois et al. 2000), familiarity (Peskin and Newell 2004; Zajonc 1968; Zebrowitz, White, and Wieneke 2008), and facial maturity and apparent age (referring to the "baby face", that is, a face with infantile features; Keating, Randall, and Kendrick 1999; Montepare and Zebrowitz 1998). Judgments of competence were positively correlated with judgments of facial attractiveness and familiarity (Bailenson et al. 2008) and negatively with judgments of babyfacedness. Among the other traits inferred from facial appearance, attractiveness and perceived age predict electoral outcome and hypothetical vote, while age (in terms of perceived maturity) also predicts the actual vote.
Finally, however, perceived competence was found to be the best predictor of all three criteria used in the research: participants' predictions of election results, hypothetical vote, and actual vote.


Olivola and Todorov (2010) introduced a computer model to examine the facial signals that influence inferences of competence and to test whether other facial signals may mediate this relationship, showing that judgments of facial competence are related to the physical attractiveness of the face and to facial maturity. Verhulst, Lodge, and Lavine (2010) – although agreeing with Olivola and Todorov (2010) that surface signals can drive political outcomes – disagree that perceived competence is the strongest predictor of electoral success. The assessment of competence indeed cancels out the effect of attractiveness and of perceived maturity on the voting-behavior criteria. This means that these two inferences (maturity and attractiveness) precede the inference of competence, which therefore mediates their effect on election outcomes. They suggest that maturity and, above all, facial attractiveness affect the perception of competence, which in turn predicts the election outcome. Other authors debate the weight that the perception of competence has on voting behavior. Riggio and Riggio (2010), for example, focus on the likely evolutionary origins of these rapid and automatic judgments, and on the extent to which other phylogenetically evolved pathways, characterized by conscious control and motivation, could be used to moderate these processes. Lieb and Shah (2010) encourage more contextual analysis of the implications of rapid judgments, considering culture, campaign systems, and the candidate's image management. A task for social-psychological research could also be to understand what "helps" people overcome the influence of first impressions.
However, the speed, automaticity, and implicit nature of appearance-based inferences make this tendency very difficult to correct; in addition, many people do not recognize that they form judgments of others from their appearance; finally, it is difficult to control the way people use television images, or media images in general (Lenz and Lawson 2008), from which they form these rapid judgments. Another task could be to understand how to create media communication situations that are as neutral or impartial as possible for the different candidates, and that reduce the influence of contextual factors through a proper physical and social setting for political communication.

3. Politics and body movements

Some recent studies have examined the bodily movements of politicians to analyze their effect on the perception of politicians and on voting behavior. Body movements indicate physical and psychological traits relevant to social interaction, such as gender (Kozlowski and Cutting 1977; Pollick et al. 2005; Troje 2002), age (Montepare and Zebrowitz 1998), and emotions (Dittrich et al. 1996): in this way they can affect social judgments. Kramer, Arend, and Ward (2010) showed participants excerpts of the candidates' interventions in the 2008 U.S. Presidential Election debates (Obama and McCain). These excerpts were converted into virtual images: arms, shoulders, and torso were represented by lines, and eyes and hands by points. After watching the politicians' movements via the movements of these lines and points, participants had to assess personality and social traits, and then indicate their vote preference. The sound had been removed, so that the two candidates were not recognizable. The results showed that body movements per se affect voting choices; moreover, physical health as perceived from these body movements was the only predictor of voting choice. Although perceived attractiveness and leadership correlated with voting behavior, they did not add predictive power beyond the perception of well-being. Therefore, the health and well-being of a candidate, inferred from the movements of his body (in particular shoulders, arms, and hands), would seem a good predictor of electoral success, probably because they somehow symbolize political strength and vigor. Politicians, accustomed to television visibility, seem to have understood this relationship. They often show themselves in sportswear (for example, U.S. presidents playing golf or playing with their dog in the garden of the presidential residence), to demonstrate their athleticism and therefore good health. Health is clearly a socially relevant trait, and body movements are probably a useful indicator of the level of health (Blake and Shiffrar 2007). This suggests that politicians should pay attention to their movements as well as to their appearance. Recent studies have also shown that even the spatial position of one's own body can affect one's political attitudes. Oppenheimer and Trail (2010) have experimentally shown that the bodily spatial orientation of people toward their right or their left leads them to evaluate their own political attitudes, respectively, as more conservative or more liberal. If, while completing the questionnaire, participants' chairs tilted to the left, they evaluated their own political orientation as more democratic than participants whose chairs tilted to the right; the latter, whose chairs had a defective tilt to the right, tended to assess their own orientation as more conservative than the other group. Thus, the left–right metaphor used in political speech for attitudes has a strong relationship with the physical perception of the orientation of one's own body, to the point that the latter can even determine the former.
These results further corroborate the view that political attitudes are not necessarily derived from rational assessments or deliberative considerations of the voter's objectives and interests; rather, they are often linked to the cognitive embodiment of evaluations of abstract concepts, connected also to the metaphors used in political language to represent those concepts. In addition, the body of the speaker moves in synchrony with his or her speech (self-synchrony; Condon and Ogston 1966). The movements of all parts of the body are closely coordinated with speech, although this does not mean that all body movements are equally related to it. One example of self-synchrony is the relationship between movement and vocal accents, but the aspect of body movement most closely connected to speech is certainly hand gesture (McNeill 1992).

4. Politics and hand gestures

Gesture is often used together with speech and with the same objectives: it is as fundamental as the verbal utterance in the representation of meaning. In sentence organization, speech and gesture are planned together from the start: sentence encoding can occur simultaneously through verbal elements and gesture. Conversely, sentence decoding uses not only verbal elements but also bodily ones. This is the gist of McNeill's (1985) theoretical conception of speech–gesture integration. Regarding hand gestures, only a few experimental studies have been carried out so far. Streeck (2008) argues that in an era of television politics, the study of politics as a cultural practice should include a descriptive analysis of bodily expression. This analysis should serve as a basis for studying audience reactions and media effects. In a recent qualitative analysis of the shape and functions of the hand gestures used by candidates in the U.S. Democratic primary during television debates, Streeck (2008) shows that candidates perform a shared code of pragmatic gesture. While iconic gestures represent and describe real objects introduced in the discourse (e.g., reproducing with hand shape and movement the figure of an object or the form of an action: the shape of a phone made with thumb and little finger extended and the other fingers closed, or index and middle fingers moving to indicate the act of walking), pragmatic gestures refer to the structure of the linguistic expression (e.g., horizontal movements of the hands to indicate the development of the speech; Kendon 2004). These hand gestures mark speech acts and aspects of information structure, providing recipients with a visual structure of the speech that facilitates its analysis and processing. Streeck (2008) also noted the use, by some politicians, of a single repetitive gesture: the raised index finger pointing at the audience. This is an intrusive gesture of involvement that is not always appreciated by television audiences, but it is much used by politicians during televised debates. In general, studies have shown that people, when trying to be persuasive, use significantly more gestures than when they present a message in a neutral manner. Political leaders in particular use and combine verbal rhetorical devices, hand gestures, and vocal intonation to make themselves more persuasive (Bull 1986). Gestures articulate the structure of rhetorical devices. Rhetorical devices – such as the use of three-part lists and contrasts, or a combination of these – work very well in political speech and are associated with collective rather than isolated applause. Gestures that underline these devices, both rhythmically and by spatially illustrating their structure (e.g., the opposite sides of a contrast, the three parts of a list), are very effective in political speech (Bull 2003).
Rhetoric and gesture can therefore make a leader's performance more or less effective and persuasive, and in turn affect the positive or negative impression he or she makes on the audience. Other studies have dealt with politicians' bodily behavior, in particular hand gestures during public speech (Argentin, Ghiglione, and Dorna 1990; Bull 1986), showing that politicians use particular types of hand gesture. In the literature, gestures are classified into different categories (e.g., Ekman and Friesen 1969; McNeill 1992; for a synthesis see Maricchiolo, Gnisci, and Bonaiuto 2012; see also Bohle this volume). These can be summarized as follows: speech-linked gestures – emblems, iconics, metaphorics, deictics, cohesives, and rhythmic gestures – and speech-non-linked gestures, such as object-, other-, and self-adaptors (object-, other-, and self-addressed hand movements). Studies in other social contexts (experimental settings) found that speakers were more persuasive (had the best communicative performance) when they gesticulated more frequently and used more gestures linked to speech content (such as metaphorics) or to speech structure (such as cohesive and rhythmic gestures, which lend the discourse emphasis), than when they used "nervousness" or "discomfort" gestures such as self-manipulation hand movements (Burgoon, Birk, and Pfau 1990; Butterworth and Hadar 1989; Cesario and Higgins 2008; Maricchiolo et al. 2009; Mehrabian and Williams 1969). Observational studies carried out by our research group in Italian political contexts found that the gesture styles of both Presidential candidates resembled the effective styles described in the literature, although they differed markedly from each other. In a study of the 2006 National Campaign (Presidential candidates: Berlusconi, center-right, vs. Prodi, center-left; Maricchiolo, Gnisci, and Bonaiuto 2013), the gestures most used by both politicians were rhythmic (confirming previous studies: Argentin et al. 1990; Bull 1986) and cohesive gestures, whereas the least used were iconics. Hand gesture accompanying political speech thus confirms its emphatic nature, with a persuasive (rhythmic) rather than descriptive (iconic) aim. The differences in body language between the two politicians seem to reflect their different communication styles. The gestures most used by Berlusconi – deictic gestures (pointing the finger at the other) and object-addressed movements (touching surrounding objects, even other people's) – can make the communication style intrusive, impetuous, and exuberant, while rhythmic gestures (also often used by Berlusconi) lend passion to the way the subject of discussion is formulated. The gestures used by Prodi – metaphorics (which emphasize verbal content) and cohesive gestures (which structure the discourse) – characterize his calm and rational communication style. The study also measured how audiences of different political orientations evaluated the politicians and their performance during the Presidential debate. Different voters (of left or right orientation) evaluated the two candidates in different ways, and these evaluations correlated with different communicative performances. In particular, each politician seems to gain an advantage or disadvantage from a specific communicative profile that differs from his opponent's: sometimes this effect cuts across voters (i.e., it is a general trend in the whole audience); at other times it is voter-specific (i.e., it emerges only in a specific part of the audience, whether ingroup or outgroup with respect to the particular political leader).
The politician's construction of a credible and persuasive image would therefore be implemented, in various effective ways, using not only lexical-syntactic and rhetorical devices but also gestural, mimic, and postural ones that complete or reinforce the important features of the speech, in terms of both content and structure. Moreover, the influence of the body in communication takes on greater importance today, since on TV the framing of the speaker directs the viewer's attention to the expression conveyed by face, shoulders, arms, hands, and torso, down to the smallest details afforded by close-ups and extreme close-ups.

5. Conclusion

Overall, the most recent literature, as examined in this chapter, empirically illustrates what has long been known: the importance the body assumes in political communication, through its effects on the perception of the source and of the political content of messages, as well as on voting behavior. Empirical research shows that different aspects of the body are relevant in determining the impressions formed by the audience, as well as voting intentions and final voting decisions. There are many important aspects of bodily communication in political speech: elements of physical appearance (such as facial features), body movements, and hand gestures. These physical aspects of communication are important because audiences and voters use them to form impressions of various characteristics of the politician: relevant impressions concerning competence, health, and confidence. These in turn influence voting intentions and decisions. These processes seem to operate both when bodily communication accompanies verbal communication and when bodily communication is the only channel present, i.e., in the absence of verbal communication. In general, the role of the body in contemporary political communication probably rests on psychological processes that are mainly, if not exclusively, automatic and outside awareness, for both the actors and, especially, the audience. This also raises ethical questions: whether the knowledge being accumulated on these aspects should be used to make citizens more aware of these effects, which have always existed, even when political communication was not mediated by the mass media; or whether it should be used to improve the communication skills of politicians, as has always happened since the classical treatises on oratory and rhetoric (e.g., Billig [1987] 1996); or, furthermore, whether it should inspire journalism in designing different media rules, settings, and scripts for political interviews and communication. It is true, however, that the powerful return of the body and of bodily communication to the fore in the mass media and new technologies can represent a way to counter the gap – produced by the interpersonal dematerialization and abstractness of mediated communication – experienced in mediated forms of interaction compared with face-to-face conditions. Indeed, a defining characteristic of the new communication technologies is that, while on the one hand they physically distance the bodies of the participants involved in political communication processes, on the other they have the paradoxical effect of magnifying the role of the body through close-ups, replays, diffusion, and so on. The new media technologies permit viewers, through close-ups, to perceive details of the body not perceivable in the past and, through the ubiquity of recordings, to enter physical and social contexts, even those that in the past did not fall within public political communication. Far from decreeing the end or the downsizing of bodily features, mediated forms of political communication seem paradoxically even more conditioned by the ancestral elements and processes of our interpersonal communication.

6. References

Ambady, Nalini and Robert Rosenthal 1992. Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. Psychological Bulletin 111(2): 256–274.
Antonakis, John and Olaf Dalgas 2009. Predicting elections: Child's play! Science 323(5918): 1183.
Argentin, Gabriel, Rodolphe Ghiglione and Alexandre Dorna 1990. La gestualité et ses effets dans le discours politique. Psychologie Française 35(2): 153–161.
Aristotle 1977. Politics. Cambridge, MA: Harvard University Press.
Atkinson, Matthew D., Ryan D. Enos and Seth J. Hill 2009. Candidate faces and election outcomes: Is the face-vote correlation caused by candidate selection? Quarterly Journal of Political Science 4(3): 229–249.
Bailenson, Jeremy N., Shanto Iyengar, Nick Yee and Nathan A. Collins 2008. Facial similarity between voters and candidates causes influence. Public Opinion Quarterly 72(5): 935–961.
Ballew, Charles C. and Alexander Todorov 2007. Predicting political elections from rapid and unreflective face judgments. Proceedings of the National Academy of Sciences of the USA 104(46): 17948–17953.
Banducci, Susan A., Jeffrey A. Karp, Michael Thrasher and Colin Rallings 2008. Ballot photographs as cues in low information elections. Political Psychology 29(6): 903–917.
Bartels, Larry M. 2000. Partisanship and voting behavior, 1952–1996. American Journal of Political Science 44(1): 35–50.
Benjamin, Daniel J. and Jesse M. Shapiro 2009. Thin-slice forecasts of gubernatorial elections. Review of Economics and Statistics 91(3): 523–536.
Billig, Michael 1996. Arguing and Thinking: A Rhetorical Approach to Social Psychology. Second revised edition. Cambridge: Cambridge University Press. First published [1987].
Blake, Randolph and Maggie Shiffrar 2007. Perception of human motion. Annual Review of Psychology 58(1): 47–73.


Bohle, Ulrike this volume. Contemporary classification systems. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin/Boston: De Gruyter Mouton.
Bonaiuto, Marino and Fridanna Maricchiolo 2013. Social psychology: Body and language in social interaction. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction, 254–271. (Handbooks of Linguistics and Communication Science 38.1.) Berlin/New York: De Gruyter Mouton.
Bucy, Erik P. 2000. Emotional and evaluative consequences of inappropriate leader displays. Communication Research 27(2): 194–226.
Bucy, Erik P. and Maria E. Grabe 2007. Taking television seriously: A sound and image bite analysis of presidential campaign coverage, 1992–2004. Journal of Communication 57(4): 652–675.
Bull, Peter E. 1986. The use of hand gesture in political speeches: Some case studies. Journal of Language and Social Psychology 5: 102–118.
Burgoon, Judee K., Thomas Birk and Michael Pfau 1990. Nonverbal behaviors, persuasion, and credibility. Human Communication Research 17(1): 140–169.
Butterworth, Brian and Uri Hadar 1989. Gesture, speech and computational stage: A reply to McNeill. Psychological Review 96(1): 168–174.
Campbell, Angus, Philip Converse, Warren Miller and Donald Stokes 1960. The American Voter. New York: John Wiley and Sons.
Castelli, Luigi, Luciana Carraro, Claudia Ghitti and Massimiliano Pastore 2009. The effects of perceived competence and sociability on electoral outcomes. Journal of Experimental Social Psychology 45(4): 1152–1155.
Cesario, Joseph and E. Tory Higgins 2008. Making message recipients "feel right": How nonverbal cues can increase persuasion. Psychological Science 19(5): 415–420.
Cherulnik, Paul D., Kristina A. Donley, Tay Sha R. Wiewel and Susan R. Miller 2001. Charisma is contagious: The effect of leaders' charisma on observers' affect. Journal of Applied Social Psychology 31(10): 2149–2159.
Condon, William S. and William D. Ogston 1966. Sound film analysis of normal and pathological behaviour patterns. Journal of Nervous and Mental Disease 143(4): 338–347.
De Landtsheer, Christ'l 2010. Book review: The Microanalysis of Political Communication: Claptrap and Ambiguity by Peter Bull (2003). Politics, Culture and Socialization 1: 86–89.
Dittrich, Winand H., Tom Troscianko, Stephen E. G. Lea and Dawn Morgan 1996. Perception of emotion from dynamic point-light displays represented in dance. Perception 25(6): 727–738.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior. Semiotica 1(1): 49–98.
Fazio, Russell H. 1990. Multiple processes by which attitudes guide behavior: The MODE model as an integrative framework. In: Mark P. Zanna (ed.), Advances in Experimental Social Psychology, Volume 23, 75–109. New York: Academic Press.
Hall, Crystal C., Amir Goren, Shelly Chaiken and Alexander Todorov 2009. Shallow cues with deep effects: Trait judgments from faces and voting decisions. In: Eugene Borgida, John L. Sullivan and Christopher M. Federico (eds.), The Political Psychology of Democratic Citizenship, 73–99. New York: Oxford University Press.
Hassin, Ran and Yaacov Trope 2000. Facing faces: Studies on the cognitive aspects of physiognomy. Journal of Personality and Social Psychology 78(5): 837–852.
Kahneman, Daniel, Paul Slovic and Amos Tversky (eds.) 1982. Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Keating, Caroline F., David W. Randall and Timothy Kendrick 1999. Presidential physiognomies: Altered images, altered perceptions. Political Psychology 20(3): 593–610.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.

102. Body movements in political discourse

Kozlowski, Lynn T. and James E. Cutting 1977. Recognizing the sex of a walker from a dynamic point-light display. Perception and Psychophysics 21(6): 575⫺580. Kramer, Robin S. S., Isabel Arend and Robert Ward 2010. Perceived health from biological motion predicts voting behaviour. The Quarterly Journal of Experimental Psychology 63(4): 625⫺632. Kruglanski, Arie W. and Erik P. Thompson 1999. Persuasion by a single route: a view from the unimodel. Psychological Inquiry 10(2): 83⫺109. Langlois, Judith H., Lisa Kalakanis, Adam J. Rubenstein, Andrea Larson, Monica Hallam and Monica Smoot 2000. Maxims or myths of beauty? A meta-analytic and theoretical review. Psychological Bulletin 126(3): 390⫺423. Lau, Richard R. and David P. Redlawsk 2006. How Voters Decide: Information Processing during Election Campaigns. New York: Cambridge University Press. Lenz, Gabriel and Chappell Lawson 2008. Looking the part: Television leads less informed citizens to vote based on candidates’ appearance. Paper presented at the annual meeting of the Midwest Political Science Association 67th Annual National Conference, The Palmer House Hilton, Chicago. Levine, Gary M., Jamin B. Halberstadt and Robert L. Goldstone 1996. Reasoning and the weighting of attributes in attitude judgments. Journal of Personality and Social Psychology 70(2): 230⫺240. Lieb, Kristin and Dhavan Shah 2010. Consumer culture theory, nonverbal communication, and contemporary politics: Considering context and embracing complexity. Journal of Nonverbal Behavior 34(2): 127⫺136. Maricchiolo, Fridanna, Augusto Gnisci, Marino Bonaiuto and Gianluca Ficca 2009. Effects of different types of hand gestures in persuasive speech on receivers’ evaluations. Language and Cognitive Processes 24(2): 239⫺266. Maricchiolo, Fridanna, Augusto Gnisci and Marino Bonaiuto 2012. Coding hand gestures: A reliable taxonomy and a multi-media support. In: Anna Esposito, Antonietta M. 
Esposito, Alessandro Vinciarelli, Rüdiger Hoffmann and Vincent C. Müller (eds.), Cognitive Behavioural Systems 2011, LNCS 7403, 405⫺416. Berlin Heidelberg: Springer-Verlag. Maricchiolo, Fridanna, Augusto Gnisci and Marino Bonaiuto 2013. Political leaders’ communicative style and audience evaluation in Italian Presidential debate. In: Isabella Poggi, Francesca D’Errico, Laura Vincze and Alessandro Vinciarelli (eds.), Political Speech 2010, LNAI 7688, 99⫺117. Berlin Heidelberg: Springer-Verlag. Martin, Donald S. 1978. Person perception and real-life electoral behavior. Australian Journal of Psychology 30(3): 255⫺262. Mattes, Kyle, Michael L. Spezio, Hackjin Kim, Alexander Todorov, Ralph Adolphs and R. Michael Alvarez 2010. Predicting election outcomes from positive and negative trait assessments of candidate images. Political Psychology 31(1): 41⫺58. McNeill, David 1985. So you think gestures are nonverbal? Psychological Review 92(3): 350⫺371. McNeill, David 1992. Hand and Mind. Chicago: The University of Chicago Press. Mehrabian, Albert and Martin Williams 1969. Nonverbal concomitants of perceived and intended persuasiveness. Journal of Personality and Social Psychology 13(1): 37⫺58. Montepare, Joann M. 2010. Politics and nonverbal cues: A natural pairing. Introduction to the Special Issue. Journal of Nonverbal Behavior 34(2): 81⫺82. Montepare, Joann M. and Leslie A. Zebrowitz 1998. Person perception comes of age: The salience and significance of age in social judgments. Advances in Experimental Social Psychology 30: 93⫺161. Noller, Patricia, Cynthia Gallois, Alan Hayes and Philip Bohle 1988. Impressions of politicians: The effect of situation and communication channel. Australian Journal of Psychology 40(3): 267⫺280. Olivola, Christopher Y. and Alexander Todorov 2010. Elected in 100 milliseconds: Appearance-based trait inferences and voting. Journal of Nonverbal Behavior 34(2): 83⫺110.


VII. Body movements – Functions, contexts, and interactions

Oppenheimer, Daniel M. and Thomas E. Trail 2010. Why leaning to the left makes you lean to the left: effect of spatial orientation on political attitudes. Social Cognition 28(5): 651⫺661. Peskin, Melissa and Fiona N. Newell 2004. Familiarity breeds attraction: Effects of exposure on the attractiveness of typical and distinctive faces. Perception 33(2): 147⫺157. Pollick, Frank E., Jim W. Kay, Katrin Heim and Rebecca Stringer 2005. Gender recognition from point-light walkers. Journal of Experimental Psychology: Human Perception and Performance 31(6): 1247⫺1265. Poutvaara, Panu, Hendrik Jordahl and Niclas Berggren 2009. Faces of politicians: Babyfacedness predicts inferred competence but not electoral success. Journal of Experimental Social Psychology 45(5): 1132⫺1135. Riggio, Heidi R. and Ronald Riggio 2010. Appearance-based trait inferences and voting: Evolutionary roots and implications for leadership. Journal of Nonverbal Behavior 34(2): 119⫺125. Riggle, Ellen D. 1992. Cognitive strategies and models of voter judgments. American Politics Quarterly 20(2): 227⫺246. Rule, Nicholas O., Nalini Ambady, Reginald B. Jr. Adams and C. Neil Macrae 2008. Accuracy and awareness in the perception and categorization of male sexual orientation. Journal of Personality and Social Psychology 95(5): 1019⫺1028. Strack, Fritz and Roland Deutsch 2004. Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review 8(3): 220⫺247. Streeck, Jürgen 2008. Gesture in political communication: A case study of the Democratic Presidential Candidates during the 2004 Primary Campaign. Research on Language and Social Interaction 41(1): 154⫺186. Todorov, Alexander 2008. Evaluating faces on trustworthiness: An extension of systems for recognition of emotions signaling approach/avoidance behaviors. In: Alan Kingstone and Michael Miller (eds.), The Year in Cognitive Neuroscience 2008: Annals of the New York Academy of Sciences, Volume 1124, 208⫺224. 
Malden, MA: Blackwell. Todorov, Alexander, Anesu N. Mandisodza, Amir Goren and Crystal C. Hall 2005. Inferences of competence from faces predict election outcomes. Science 308(5728): 1623⫺1626. Todorov, Alexander, Chris P. Said, Andrew D. Engell and Nikolaas N. Oosterhof 2008. Understanding evaluation of faces on social dimensions. Trends in Cognitive Sciences 12(12): 455⫺460. Troje, Nikolaus F. 2002. Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of Vision 2(5): 371⫺387. Verhulst, Brad, Milton Lodge and Howard Lavine 2010. The attractiveness halo: Why some candidates are perceived more favorably than others. Journal of Nonverbal Behavior 34(2): 111⫺117. Wilson, Timothy D. and Jonathan W. Schooler 1991. Thinking too much: Introspection can reduce the quality of preferences and decisions. Journal of Personality and Social Psychology 60(2): 181⫺192. Zajonc, Robert B. 1968. Attitudinal effects of mere exposure. Journal of Personality and Social Psychology Monograph Supplement 9(2): 1⫺27. Zebrowitz, Leslie A. and Joann M. Montepare 2005. Appearance DOES matter. Science 308(5728): 1565⫺1566. Zebrowitz, Leslie A., Benjamin White and Kristin Wieneke 2008. Mere exposure and racial prejudice: Exposure to other-race faces increases liking for strangers of that race. Social Cognition 26(3): 259⫺275.

Fridanna Maricchiolo, Rome (Italy) Marino Bonaiuto, Rome (Italy) Augusto Gnisci, Naples (Italy)


103. Gestures in industrial settings

1. Introduction
2. From gestures in the workplace to gesture codes in factories
3. Data collection and analysis at a salmon factory
4. A typology of gestures at the salmon factory
5. An example of the gestures in use
6. Discussion
7. References

Abstract
This entry documents one of the ways that conditions for communication affect the way that people gesture. Using fieldwork data from a salmon factory, we will discuss the way workers in industrial environments rely on gestures to communicate technical messages and present a typology of such gestures. Gestures in this context emerge as an efficient way to communicate messages urgently across space and against noise. Using gesture also helps people to overcome barriers imposed on oral communication, including restrictions from health and safety equipment like earplugs and face masks.

1. Introduction
In industrial environments, people may begin to use gestures for basic technical communication of simple messages like “stop”, “more”, and “again”. If some of these gestures are codified and relate specifically to communication in that context, we may speak of a gesture code. Several factors can lead to the emergence of a gesture code, including noise, distance between workers, and other barriers to efficient oral communication, such as face masks, earplugs, and a workforce that does not necessarily share the same native language. In such an environment, a gesture code may come to function as a rudimentary, limited lingua franca. There are very few studies of gesture codes and how they are used in industrial environments. Industrial sites are usually out of bounds for researchers, being restricted-access areas not only for health and safety reasons, but also to maintain confidentiality about best practice. In this entry, we report on a gesture code used by workers on the shop floor of a salmon factory in France. After a brief review of other descriptions of gesture codes, we will describe the shop floor where we observed the gesture code and present a typology of the gestures that the workers were using. We will then demonstrate how the workers used those gestures by examining a strip of interaction from a video of communication along the production line.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1413–1419

2. From gestures in the workplace to gesture codes in factories
In the fields of conversation analysis, ethnomethodology, and distributed cognition, numerous studies highlight the importance of gestures for efficient communication in environments that could qualify loosely as “industrial”. These studies include airport control rooms (Goodwin and Goodwin 1996), underground transport operation centers (Heath and Luff 1992), and hospital operating rooms (Mondada 2014). But if we define “industrial” narrowly as relating to “economic activity concerned with the processing of raw materials and manufacture of goods in factories” (New Oxford American Dictionary), we find a context where there have been few explicit studies of gestures. Filliettaz (2005) studied the relationship between speech and gestures in meetings between production team managers on the shop floor of a pharmaceutical factory, while Sunaoshi (2000: 78) described the role of gesture as “an indispensable part of interaction” between Japanese engineers and American floor workers in specific areas of a US production plant. While these studies document that gesturing is essential to communication in any workplace, one unique feature of how gestures come to be used in a heavily industrialized environment deserves a discussion of its own in this handbook. Kendon (2004: 291) refers to this feature when he writes that “[k]inesic codes of varying degrees of complexity have developed at various times and places in communities of speaker-hearers for use in circumstances where speech either cannot be used for environmental reasons, or may not be used for ritual reasons”. Meissner and Philpott’s (1975a) study of gestural communication in a sawmill is a rare example of this process. They observed workers in a British Columbian sawmill using over 130 signs to mediate work-related communication. This gesture code included number signs, lexical signs for categories of people and their roles on the shop floor, signs for actions and qualities of material, and signs for sizes and shapes (see also Meissner and Philpott 1975b and Kendon 2004: 294–297). According to Johnson (1977: 353), this type of “[s]awmill sign language is a functional sign-language variety used widely in the northwestern United States and western Canada”. 
Johnson further states that such “industrial sign languages” result from “a combination of extremely noisy working environments and strongly independent work tasks” (Johnson 1977: 353), and he comments on their observation not only in sawmills, but also in steel mills and ships’ engine rooms. By shifting from the oral modality to the gestural modality, communication in heavily industrialized contexts evolves and adapts to overcome language barriers.

3. Data collection and analysis at a salmon factory
The following report of gestures in industrial settings is based on two months of fieldwork at a salmon factory in the Alsace region of France. Situated on a 3-acre site, the factory employed approximately 350 workers from the surrounding villages (with mixed native languages including French, Turkish, Arabic, and Portuguese). The factory was divided into four zones: the filleting zone, the process zone, the conditioning zone, and the packaging and dispatch zone. In the conditioning zone, where we carried out fieldwork, frozen filets of salmon arrived in boxes from the process zone at the beginning of the line. These filets were then sliced, placed onto trays, weighed, and loaded into a machine to be vacuum wrapped at the end of the line. Each of those tasks corresponded to a different workstation, and smooth workflow depended on smooth communication between workers at the different stations. Fig. 103.1 is a line drawing of the stretch of production line that we filmed to establish the typology of gestures presented below.


Fig. 103.1: The stretch of production line filmed at the salmon factory

To document communication along this production line, we proceeded in two steps. First, we spent several hours each day for two weeks working at each of the stations to familiarize ourselves with the communicative demands specific to each one. Then we engaged in a period of observation, made notes, and collected a corpus of video recordings of interactions along the production line. For the video corpus, we made 36 video clips amounting to approximately two and a half hours of footage (individual clips ranged from 15 seconds to over 14 minutes). We used a number of different filming techniques to capture how workers used gestures at different parts of the line. We made some clips by fixing the camera to machines, while for others we filmed from a tripod. For others still, we filmed individual workers from a fixed point (panning), used a mobile camera to follow a specific person around the shop floor, and used a wide-angle lens from a fixed point to capture interactions among the whole team on one production line.

4. A typology of gestures at the salmon factory
Workers used gestures to communicate messages relating to various aspects of work. The specificity with which they performed these gestures depended on several factors, such as urgency, distance between speaker and addressee, and the number of people involved in the communication. To present the gestures below, we use the professional drawings that were made for formalized communication sheets corresponding to each workstation.

4.1. Gestures for numbers
The workers had a set of gestures for different numbers (Fig. 103.2). One of the more specific uses of this number system would occur when a gate on the production line malfunctioned, requiring the slicer at the head of the line to guide a nearby worker to the correct gate. Since the gates each had a number, the slicer would do this with a number gesture.

Fig. 103.2: Gestures for different numbers

4.2. Gestures to refer to the quality of raw material
Workers at the salmon factory used gestures to refer to qualities of the raw material they were using, which in this case was salmon. Using gestures, workers communicated to the people concerned that the filets were too hard or too soft (referring to their temperatures, with implications for how they would be sliced). Other gestures in this category include gestures about the size of the slices of salmon coming down the line; for example, the weigher could communicate whether they were too big (and therefore too heavy) or too small (and therefore too light).

Fig. 103.3: Gestures referring to the size and texture of the salmon

4.3. Gestures as instructions for production processes
The workers used a third set of gestures relating to the flow of raw material and production processes. These gestures essentially communicated instructions for machine operations. The workers could use these gestures to instruct other workers to stop machines, change over raw materials, and increase or decrease the settings of some of the machines (see Fig. 103.4).


Fig. 103.4: Instructions for machine operations

4.4. Gestures to indicate problems on the line
The last set of gestures in the typology was used to indicate problems with the salmon slices along the line. Workers could perform specific gestures to indicate that slices were returning on the line (because they were too heavy), falling off the line (because they were too small), or overflowing from the box positioned to catch the slices that fell off the line (Fig. 103.5).

Fig. 103.5: Gestures relating to problems with the salmon on the line

This collection of gestures provided the workers with a simple communication system that encoded a shared vocabulary of basic meanings. In the next section, we will see how the workers actually used the gestures in a strip of interaction taken from one of the recordings of the shop floor.
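Viewed abstractly, such a gesture code is a small, closed message set that pairs a gesture category with a conventionalized meaning. The following sketch is purely illustrative; the class, category, and message names are ours, not part of the factory's code:

```python
from enum import Enum

# Hypothetical encoding of the four gesture categories described above.
class Category(Enum):
    NUMBER = "number"              # e.g., gate numbers (section 4.1)
    RAW_MATERIAL = "raw material"  # too hard/soft, too big/small (4.2)
    INSTRUCTION = "instruction"    # stop, change over, increase (4.3)
    PROBLEM = "problem"            # returning, falling, overflowing (4.4)

# A gestural message pairs a category with a conventionalized meaning.
def message(category, meaning):
    return {"category": category, "meaning": meaning}

too_big = message(Category.RAW_MATERIAL, "TOO BIG")
print(too_big["category"].value)  # → raw material
```

The closed, pre-agreed set of meanings is part of what makes such a code quickly learnable by a workforce with mixed native languages.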

5. An example of the gestures in use
To give an example of how workers used these different gestures on the shop floor, Fig. 103.6 presents a strip of typical interaction that occurred during one of the shifts. The order of events in this interaction is:
(i) the weigher notices the salmon slices are too big and tells the line pilot, who is near her station, using the TOO BIG gesture;
(ii) based on this information, the line pilot attracts the attention of the slicer at the head of the line by raising his hand;

(iii) when the line pilot has secured the slicer’s attention, he instructs her to increase her machine by two degrees by performing the INCREASE gesture twice.

Fig. 103.6: Gestural communication during a strip of interaction

From this brief description, we see some of the advantages of using gestures for communication along the production line. The workers use gesture to communicate messages across space, keeping them visible for longer and high above the machines; compared to shouting, gesture offers an easily understood and non-interruptive means of communication.

6. Discussion
The case study of communication at the salmon factory suggests that, in environments where oral communication is difficult because of noise or hindrances on speech, workers may rely on gesture as a tool to communicate a simple vocabulary of meanings efficiently across space and against noise. The gestures in the code are similar to everyday gestures, although footage from the factory indicates that they do exhibit some qualitative differences, such as differences in the size and height of gesturing. One line of future research into gestures in industrial settings could compare typologies of such gestures across different industrial sites to examine which specific contextual factors give rise to which types of gestures. A more applied approach could explore whether gesture codes could be extended or adapted to new environments (see Harrison 2011 for an attempt).

7. References
Filliettaz, Laurent 2005. Gestualité et (re)contextualisation de l’interaction dans des réunions de relève de poste en milieu industriel. Proceedings of Interacting Bodies / Le Corps en Interaction, Université Lumière Lyon, France, 15⫺18 June. Harrison, Simon 2011. The creation and implementation of a gesture code for factory communication. Proceedings of Gesture and Speech in Interaction ⫺ GESPIN, Bielefeld, Germany, 5⫺7 September. Heath, Christian and Paul Luff 1992. Collaborative activity and technological design: Task coordination in London Underground control rooms. In: Liam Bannon, Mike Robinson and Kjeld Schmidt (eds.), ECSCW’91. Proceedings of the Second European Conference on Computer-Supported Cooperative Work, 65⫺80. Amsterdam: Kluwer Academic Publishers.


Johnson, Robert 1977. An extension of Oregon sawmill sign language. Current Anthropology 18(2): 353⫺354. Kendon, Adam 2004. Gesture. Visible Action as Utterance. Cambridge: Cambridge University Press. Meissner, Martin and Stuart B. Philpott 1975a. The sign language of sawmill workers in British Columbia. Sign Language Studies 9: 291⫺308. Meissner, Martin and Stuart B. Philpott 1975b. A dictionary of sawmill workers’ signs. Sign Language Studies 9: 309⫺347. Mondada, Lorenza 2014. The organization of concurrent courses of action in surgical demonstrations. In: Jürgen Streeck, Charles Goodwin and Curtis LeBaron (eds.), Embodied Interaction. Language and Body in the Material World, 207⫺226. Cambridge: Cambridge University Press. Sunaoshi, Yukako 2000. Gesture as a situated communicative strategy at a Japanese manufacturing plant in the US. Cognitive Studies 7(1): 78⫺85.

Simon Harrison, Ningbo (China)

104. Identification and interpretation of co-speech gestures in technical systems

1. Co-speech gesture processing as a technical process
2. Recognition of the body’s spatial configuration
3. Detecting gestures
4. Recognizing gestures
5. Interpreting gestures of different types
6. Linking gesture and speech
7. References

Abstract
Co-speech gesture is an attractive input modality for technical systems. The identification and interpretation of these gestures can be considered a technical process consisting of at least five tasks, all of which have to be solved to create a gesture-understanding system. The first three tasks are recognizing the body’s configuration, detecting gestures, and recognizing them. Gesture detection means finding something meaningful in the stream of body movement; gesture recognition assigns the meaningful segment to a class. Each of these tasks has been tackled with various techniques and research approaches. The fourth task, interpreting gestures, is to assign a meaning, and it is determined by the gesture type(s) under focus. Deictic, iconic, and symbolic gestures are completely different in the way they express meaning, and these differences are reflected in the computational approaches and models for interpretation. Finally, co-speech gestures have to be linked to and integrated with co-occurring speech. Formal frameworks for this task have been proposed and implemented in different kinds of multimodal systems.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1419–1425


1. Co-speech gesture processing as a technical process
The expressive power of human gesture in communication is broadly recognized in research on human-computer interaction. When interacting with a machine, co-speech gesture may contribute to the naturalness, simplicity, and efficiency of communication. Co-speech gesture processing can be regarded as a transformation of low-level input signals, possibly collected from independent sensor systems, into a common representation of meaning. Such representations may in turn elicit responses by the system. On the way from low-level signal to representation, the system has to accomplish several tasks. The initial task is typically to recognize where the hands are and how they move. Once the system is able to build a model of the user’s limb movements, it has to find the parts that are meaningful. These meaningful parts then need to be classified to find out what kind of gestures the system is dealing with. A type-dependent analysis of the gesture follows (a pointing gesture was detected, but what is its object?), and finally the gesture has to be integrated with speech to end up in a common representation. This enumeration of tasks does not necessarily imply an order, nor does it imply linear processing, though this is the way systems are often built. In the following sections the requirements and solutions for the sub-tasks are explained in more detail.
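As a rough sketch, the chain of tasks described above can be laid out as a linear pipeline. All function names and data fields below are illustrative placeholders of our own, not the API of any actual system:

```python
# Illustrative sketch of the five processing tasks as a linear pipeline.
# Real systems implement each stage with sensors, statistical models,
# and multimodal fusion; the data here is mocked.

def track_body(frames):
    # Task 1: recognize where the hands are and how they move.
    return {"frames": frames}

def detect_gestures(movement):
    # Task 2: find the meaningful parts in the movement stream.
    return [f for f in movement["frames"] if f["moving"]]

def recognize(segment):
    # Task 3: assign the meaningful segment to a class.
    return "pointing" if segment.get("extended_index") else "unknown"

def interpret(label, segment):
    # Task 4: type-dependent analysis, e.g. resolving the pointing target.
    return {"type": label, "target": segment.get("target")}

def fuse(gesture_meaning, speech):
    # Task 5: integrate gesture with co-occurring speech.
    return {"speech": speech, "gesture": gesture_meaning}

frames = [{"moving": False},
          {"moving": True, "extended_index": True, "target": "lamp"}]
stroke = detect_gestures(track_body(frames))[0]
result = fuse(interpret(recognize(stroke), stroke), "put it there")
print(result["gesture"]["type"])  # → pointing
```

The strictly sequential call chain mirrors the common system design noted above, although, as the text points out, real architectures need not process the tasks linearly.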

2. Recognition of the body’s spatial configuration
Processing of co-speech gestures begins with building a spatial configuration of the relevant body parts. “Spatial configuration” refers to the position and orientation of body parts in space – typically the hands and the arms. Some approaches include hand pose recognition, i.e., they compute the position and orientation of all finger joints and phalanges. In general, static recognition systems can be distinguished from dynamic recognition systems. Static means that the body’s configuration is analyzed at a single point in time; dynamic means that the system keeps a history of the body’s configuration in order to analyze movement in gestures. There are several capturing devices and processing methods, each with certain advantages and disadvantages. One can distinguish between invasive and non-invasive approaches: the former require the user of the system to wear some kind of equipment or marker, while the latter do not. Thus, non-invasive systems tend to be more convenient for the user. The “human-like” way of sensing gestures is vision, using one or two video cameras. Two cameras allow depth information to be computed. An alternative to using stereo cameras for 3D are time-of-flight cameras (Breuer, Eckes, and Müller 2007). These emit invisible pulses of light and compute distance from the time it takes a pulse to be reflected back to the sensor. A time-of-flight camera can be used alone or in combination with a normal video camera to obtain depth plus visual information. To recognize the body’s configuration from video, the limbs have to be separated from the background and other irrelevant body parts. This can be achieved by looking for skin-colored regions in the image. The method is simple, but it becomes complicated if there are other skin-colored objects in the scene. Skin-color detection is applicable to static and dynamic gesture processing.
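To illustrate the idea of skin-color segmentation, the following sketch applies a crude rule on RGB channels. The threshold values are illustrative only; real systems use calibrated color-space models (e.g. in HSV or YCbCr):

```python
# Toy skin-color segmentation: mark pixels whose RGB values fall inside a
# crude "skin" box. The specific thresholds are illustrative assumptions,
# not a validated skin model.

def is_skin(r, g, b):
    # Rule of thumb: red channel dominant over green and blue.
    return (r > 95 and g > 40 and b > 20
            and r > g and r > b and (r - min(g, b)) > 15)

def skin_mask(image):
    # image: list of rows, each a list of (r, g, b) tuples.
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

image = [
    [(210, 150, 120), (30, 30, 30)],    # skin-like pixel, dark background
    [(90, 160, 200), (220, 170, 140)],  # bluish background, skin-like pixel
]
print(skin_mask(image))  # → [[1, 0], [0, 1]]
```

The failure mode noted in the text is visible here: any object whose colors fall inside the box (wood, cardboard) would be marked as skin too.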
For dynamic gestures, limbs can be identified with difference images. The pixel values of subsequent images are subtracted, filtering out static parts of the image and leaving only what is moving. However, this method is sensitive to moving things in the background. The identification of body parts can be simplified if depth (3D) information is available. In that case the system can directly differentiate between the user’s upper body in the foreground and everything else in the background. For dynamic gesture processing, the limbs have to be tracked over time. A major problem with visual tracking is occlusion, yet the limbs’ configuration can be estimated with a body model. This is an internal representation of the user’s joints and limbs which provides constraints as to where a body part can be and how it can move. Body models can be 2D or 3D, though 2D models only work for processing “flat” gestures in the horizontal-vertical plane where depth information is negligible. In particular, models of hand joints and links are used to estimate hand pose from image data. This is very challenging because of the multiple possible occlusions and the limited resolution of images in which the hands only cover a small area (Erol et al. 2007). Invasive sensing techniques are more obtrusive for the user. Some of them use cable-bound devices, which might limit the range of motion. The posture of the hand can be measured quite accurately using a dataglove (Kessler, Hodges, and Walker 1995). Such gloves measure the angles of the hand’s and fingers’ joints (sometimes down to the elbow). Dataglove-based systems approximate the real configurations of the user’s hands much better than vision-based systems. Yet datagloves need an extensive calibration procedure for each user and have problems estimating the thumb’s configuration, due to the complexity of the thumb joints. While the dataglove alone provides information about the joints and thus the hand posture, it does not give us the positions and orientations of the hands and arms in space. This is why datagloves are typically combined with a position/orientation tracking system. 
One type of tracking system employs pulsed electromagnetic fields emitted by a sender unit. The strength of the field in all axes of space is measured by receiver units attached to the user’s body. This has the advantage of eliminating the visual occlusion problem. One disadvantage is its sensitivity to metal parts in the proximity, which leads to errors in the measurement. Other types of tracking systems use colored markers whose position is tracked via cameras. There are also systems with multiple cameras that emit and detect infrared light reflected by markers. The precision reached with that kind of tracking is quite high. Marker-based systems are generally subject to occlusion problems, which can be attenuated, however, by using more cameras with different viewing angles.
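The difference-image method mentioned earlier in this section can be illustrated in a few lines of pure Python, with grayscale frames represented as nested lists; the threshold value is arbitrary:

```python
# Frame differencing: subtract consecutive grayscale frames and threshold,
# leaving a binary mask of what moved. Static background cancels out.

def motion_mask(prev, curr, threshold=25):
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

frame1 = [[10, 10, 200],
          [10, 10, 200]]
frame2 = [[10, 200, 10],   # the bright blob moved left
          [10, 200, 10]]
print(motion_mask(frame1, frame2))  # → [[0, 1, 1], [0, 1, 1]]
```

Note how both the blob's old and new positions light up, and how any background motion would light up equally, which is the sensitivity the text describes.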

3. Detecting gestures
Let’s say that a recognizer for body configurations as described before delivers a continuous stream of movement data to the next processing units. Some parts embedded in this stream will not represent gesturing (scratching, arm movements while walking to another spot, etc.), but “real” gestures are embedded somewhere. Such gestures can be regarded as an excursion of the articulators from a relaxed resting position until they reach the resting position again. Within an excursion there may be one or several gesture phrases (Kendon 1986). The gesture phrase itself can be described as a prototypical sequence of phases: an optional preparation phase in which the articulators are made ready, the stroke or expressive phase in which the meaning is expressed, and an optional retraction phase in which the articulators are retracted and relaxed. The gesture stroke is typically the phase of highest effort and clearest shape. Identifying co-speech gestures thus means finding the strokes or expressive phases in the stream of movement data. In technical systems gestures are sometimes explicitly segmented in a dedicated processing step, and sometimes implicitly in a recognition stage (see next section). Explicit segmentation is facilitated by certain kinematic properties of the movement. Harling and Edwards (1997) use hand tension as a segmentation cue for a dataglove-based system. The idea is that normally the fingers’ joints take on a relaxed angle between their minimum and maximum. During stroke phases, however, the effort, defined as the deviation from the relaxed position, is maximized. The stroke of a pointing gesture with the index finger, for instance, comes with high hand tension: the index is maximally stretched, which means maximal deviation of the joint angles in one direction, while the other fingers are curled, which means high deviation in the other direction. The effort can also be computed for other limbs. For instance, the difficulty of maintaining an arm pose against gravity can be used to detect expressive hold phases; the sum of the torques acting on the arm joints while holding the pose can be employed to measure this effort. Some systems perform gesture segmentation not with model-based calculations but based on training data. Nickel and Stiefelhagen (2007) describe such an approach to detect pointing gestures in human-robot communication using stereo cameras and hand/head tracking. Pointing is modeled by three dedicated statistical models, one for each of the three major gesture phases. The models are built from movement features of the hand and head, and the system tries to find a sequence of three subsequent intervals that fits these models.
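The hand-tension cue can be sketched as follows. The rest angles and the threshold are invented values, and Harling and Edwards' actual formulation differs in detail:

```python
# Toy segmentation by "hand tension": effort is the summed deviation of
# finger-joint angles from a relaxed rest pose; frames whose tension
# exceeds a threshold are candidate stroke phases. All numbers here are
# illustrative assumptions, not measured values.

RELAXED = [30, 30, 30, 30, 30]  # hypothetical rest angle per finger (degrees)

def tension(joint_angles):
    return sum(abs(a - r) for a, r in zip(joint_angles, RELAXED))

def stroke_frames(frames, threshold=100):
    return [i for i, angles in enumerate(frames) if tension(angles) > threshold]

frames = [
    [32, 28, 31, 30, 29],  # relaxed hand
    [85, 5, 4, 5, 6],      # index extended, others curled: pointing stroke
    [33, 30, 29, 31, 30],  # relaxed again
]
print(stroke_frames(frames))  # → [1]
```

The pointing frame scores high because the index deviates far in one direction while the curled fingers deviate far in the other, exactly the intuition described above.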

4. Recognizing gestures

The term gesture recognition, when used in technical contexts, typically means classification: a body configuration is perceived and assigned a label. The number of labels is often limited and pre-defined. The meaning behind these labels, though they make sense to a human reader, is not always in focus. As indicated before, some systems are able to recognize dynamic gestures that include movement, while other recognition approaches only classify a static snapshot of a gesture. Static recognition can be achieved quite easily if the body's configuration is robustly detected, for instance, with datagloves and movement tracking devices. In this case a gesture can simply be defined by its feature values, such as joint angles and positions. This approach is in principle also applicable to dynamic recognition, but determining the relevant feature values of dynamic gestures explicitly would be too complex. Thus, statistical models are preferred, where the decision for a gesture class is based on training data (Mitra and Acharya 2007). A popular statistical model for gesture recognition is the Hidden Markov Model, which represents a continuous series of data as a series of discrete states with state-change probabilities. Hidden Markov Models are well suited to modeling time-varying signals, are robust to variations in the timing and amplitude of a signal, and are the standard approach in speech recognition (Rabiner 1989). Other statistical approaches to gesture recognition include neural networks, Bayesian networks, and particle filtering. Training a system that employs one of these methods means computing a model from recorded data. This is done either offline in a preparatory stage or, in more recent approaches, online by demonstration. The underlying model can be trained by imitating a human demonstrator, as has been shown for robotics (Calinon and Billard 2007).
Training can also take place implicitly, so that the system’s repertoire of known gestures or movements grows during operation. This idea has been applied to hand gesture recognition in social interaction with a virtual agent (Sadeghipour and Kopp 2010).
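To make the statistical approach concrete, the following sketch classifies a discretized movement sequence by choosing the gesture class whose Hidden Markov Model assigns the sequence the highest forward-algorithm likelihood. The two toy models, their parameters, and the two-symbol observation alphabet are invented here; real systems train these values from recorded data:

```python
# Minimal HMM classification: score a symbol sequence against each
# gesture model with the forward algorithm, pick the best-scoring class.

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the forward algorithm.
    pi: initial state probs, A[i][j]: transition probs, B[i][o]: emission probs."""
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return sum(alpha)

# Two toy 2-state models over 2 observation symbols (0 = "still", 1 = "moving").
MODELS = {
    "wave": ([0.9, 0.1], [[0.5, 0.5], [0.5, 0.5]], [[0.1, 0.9], [0.2, 0.8]]),
    "hold": ([0.9, 0.1], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.8, 0.2]]),
}

def classify(obs):
    """Assign the class label of the model with the highest likelihood."""
    return max(MODELS, key=lambda name: forward_likelihood(obs, *MODELS[name]))
```

A sequence dominated by movement symbols is labeled "wave", a mostly still one "hold"; in practice long sequences require log-space or scaled computation to avoid numerical underflow.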

104. Identification and interpretation of co-speech gestures in technical systems


5. Interpreting gestures of different types

Once a meaningful movement has been detected and its form determined, the next logical step is interpretation. What it means to "interpret" a gesture will be briefly discussed here based on the typological approach of Peirce (Robin 1967, MS404 §3), which differentiates indices, icons, and symbols. These classes have been widely applied (though with modifications) in gesture research (McNeill 1992). Indices correspond to deictic gestures. They stand for their objects because they are either compulsively connected to them or force the observer's mind to turn its attention to their objects. Consequently, their (computational) interpretation requires a model of the domain indicated. Note that speakers indicate all kinds of domains by gesture in everyday discourse; for instance, there are gestures toward imaginary time lines. The prototype for a gestured indexical, however, is a pointing gesture toward a visible, physical object or place. In that case one needs a model of the space surrounding the gesturer to know "where things and places are". This type of gesture interpretation has been implemented in a number of technical systems, for instance in Nickel and Stiefelhagen's (2007) system for interaction with a robot. Peirce's icons correspond to iconic gestures. Icons stand for objects because they evoke an idea upon perception, and this idea is connected to the object. The nature of this connection is similarity. Thus, for the interpretation of an iconic gesture, one needs a means to perceive and assess this similarity with the object. Unfortunately, the relation between iconic gestures and their objects (even in the case of physical objects) is not image-like. It could be metaphoric – McNeill (1992: 14) thus discriminated between iconic gestures and metaphoric gestures – or it could be abstract, with the icon omitting many details of the object.
A technical system that formalizes the idea of similarity to interpret shape-related gestures is described by the author (Sowa 2006). Finally, there is the symbol. The symbolic gesture, also called emblem, is characterized by a very strong connection between a mental image and an object. In other words, a symbolic relation between signifier and object exists and persists even when deprived of context. Thus, in a technical system no model of the domain and no model of similarity is necessary for interpretation, just the link between form and meaning, however meaning is represented in the computer. In this sense, the recognition of a symbolic gesture, i.e., the assignment of its class label, could already be called its interpretation.
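For the deictic case discussed above, a minimal spatial model can be sketched as a lookup of known object positions: a pointing gesture is resolved to the object with the smallest angular deviation from the pointing ray. The object names, coordinates, and tolerance angle below are illustrative assumptions, not taken from any cited system:

```python
# Resolve a pointing gesture against a (toy) model of the surrounding space.
import math

OBJECTS = {"cup": (1.0, 0.0, 0.0), "lamp": (0.0, 1.0, 0.0), "door": (-1.0, 0.0, 0.5)}

def angle_to(hand, direction, target):
    """Angle (radians) between the pointing direction and the hand-to-target vector."""
    v = [t - h for t, h in zip(target, hand)]
    dot = sum(d * x for d, x in zip(direction, v))
    norm = math.sqrt(sum(d * d for d in direction)) * math.sqrt(sum(x * x for x in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def resolve_pointing(hand, direction, max_angle=0.35):
    """Return the object closest to the pointing ray, or None if every object
    deviates by more than max_angle radians."""
    best = min(OBJECTS, key=lambda o: angle_to(hand, direction, OBJECTS[o]))
    return best if angle_to(hand, direction, OBJECTS[best]) <= max_angle else None
```

The tolerance threshold reflects the fact that human pointing is imprecise; returning None when nothing lies near the ray lets a dialogue system ask for clarification instead of guessing.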

6. Linking gesture and speech

Co-speech gestures are parts of multimodal utterances, so they typically do not convey meaning on their own. They are linked to speech both in time and in reference. This linkage can be exploited in technical systems to facilitate the processing of either gesture or speech. Gesture analysis can be applied to improve speech recognition and interpretation on different levels. In conversations, for instance, deictic gestures typically increase the salience of objects or places on the listener's side. This priming effect can be used to increase the chances that a word connected with the indicated object will be recognized (Qu and Chai 2006). It is also known that gesture is linked to speech on a prosodic level, which can be utilized to facilitate the detection of syntactic structures in spontaneous speech (Chen and Harper 2011). Conversely, speech can be used to improve gesture recognition. For instance, the accuracy of computational gesture segmentation into phases (preparation,


expressive phase, retraction) benefits from the analysis of speech prosody (Kettebekov, Yeasin, and Sharma 2005). The coherence of speech and gesture is also used on a semantic level to compute a common, integrated meaning of a multimodal utterance. This is useful if gesture contributes an aspect of meaning that is not expressed in speech (or vice versa). The obvious case is pointing gestures, in particular if an utterance contains deictic adverbs or demonstratives. Bolt (1980) describes a system that manipulates geometrical objects according to the user's utterances. If the system encounters a "there" or "this", the pointing direction of the arm is used to determine the respective place or object. A common challenge for such methods of gesture-speech integration is the problem of correspondence between the chunks of information contained in the modalities. Usually, temporal proximity is employed as a cue to determine which chunk of speech belongs to which gesture. Another method is to examine a spoken utterance for candidates that might go with a gesture (demonstratives etc.), trying to find the best fit between them and the gestures while maintaining the order of events. The semantic integration of speech and gesture presupposes a framework for the representation of meaning and a mechanism for merging the representations of gesture and speech. Typically, established approaches from computer science and linguistics are applied to this challenge. One of them uses frame structures of attribute-value pairs (Thórisson 1999; Waibel et al. 1996). Here, the input from each modality is evaluated and produces partially filled frames to be integrated in a fusion stage. Koons, Sparrell, and Thórisson (1993) describe the application of a frame-based approach to the interpretation of co-speech deictic and iconic gestures. The system recognizes hand-shape gestures to indicate objects and two-handed gestures to indicate object relations.
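The temporal-proximity heuristic can be sketched as a simple order-preserving matching of spoken deictics to gesture strokes. This is a greedy illustration, not the method of any cited system; real integrators may use dynamic programming or probabilistic alignment:

```python
# Match spoken deictics ("this", "there", ...) to gesture strokes by
# temporal proximity while preserving the order of events.

def align(deictics, strokes):
    """deictics: list of (word, time); strokes: list of stroke times.
    Both sorted by time. Returns (word, stroke_time) pairs in order."""
    pairs, j = [], 0
    for word, t in deictics:
        # advance to the stroke closest in time that keeps temporal order
        while j + 1 < len(strokes) and abs(strokes[j + 1] - t) < abs(strokes[j] - t):
            j += 1
        pairs.append((word, strokes[j]))
        j += 1                      # each stroke is consumed at most once
        if j >= len(strokes):
            break
    return pairs
```

Because the index only moves forward, a later deictic can never be matched to an earlier stroke, which enforces the order constraint mentioned above.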
Holzapfel, Nickel, and Stiefelhagen (2004) use frame-based parsing of multimodal input to integrate speech and pointing gestures for interaction with a robot. The integration of frame structures can formally be achieved with the rules of unification. The idea is that commands are represented as frames that may contain sub-frames. For command recognition, the system has to "fill" all sub-frames, which may contain sub-frames themselves, and so on. If the user produces a multimodal utterance, speech and gesture produce frames containing pieces of information. These are unified with higher-level frames until a command frame is fully completed, which can then be executed. Finite-state machines have been proposed as an alternative approach to multimodal integration. Input to the network comes from gesture and speech and leads to state transitions; multimodal output is generated during that process. Finite-state machines are less complex than unification and can be integrated with speech recognition more easily (Johnston and Bangalore 2005). Frames, finite-state machines, and further developments of these approaches are generally applicable to all kinds of gestures and spoken utterances, as long as the information from both modalities can be represented in a symbolic, compositional manner.
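A minimal illustration of frame unification, with invented slot names loosely inspired by Bolt's "put-that-there" scenario (not the data structures of any cited system):

```python
# Frame-based multimodal fusion: speech yields a partially filled command
# frame; a gesture frame supplies the missing slot values. Unification
# fails if the two modalities fill a slot with conflicting values.

def unify(frame_a, frame_b):
    """Merge two attribute-value frames; return None on conflict."""
    merged = dict(frame_a)
    for key, value in frame_b.items():
        if key in merged and merged[key] is not None and merged[key] != value:
            return None  # conflicting information: unification fails
        if value is not None:
            merged[key] = value
    return merged

# "Put that there", with a pointing gesture at each deictic:
speech_frame = {"action": "put", "object": None, "location": None}
gesture_frame = {"object": "blue_square", "location": (120, 45)}

command = unify(speech_frame, gesture_frame)
# command now has all slots filled and could be passed on for execution
```

Nested sub-frames would be handled by applying the same operation recursively to slot values that are themselves frames.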

7. References

Bolt, Richard 1980. "Put-that-there": Voice and gesture at the graphics interface. Journal of Computer Graphics 14(3): 262–270.
Breuer, Pia, Christian Eckes and Stefan Müller 2007. Hand gesture recognition with a novel IR time-of-flight range camera – a pilot study. In: André Gagalowicz and Wilfried Philips (eds.), Computer Vision/Computer Graphics Collaboration Techniques, 247–260. Berlin: Springer.


Calinon, Sylvain and Aude Billard 2007. Incremental learning of gestures by imitation in a humanoid robot. In: HRI '07: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, 255–262. New York: Association for Computing Machinery Press.
Chen, Lei and Mary P. Harper 2011. Utilizing gestures to improve sentence boundary detection. Multimedia Tools and Applications 51(3): 1035–1067.
Erol, Ali, George Bebis, Mircea Nicolescu, Richard D. Boyle and Xander Twombly 2007. Vision-based hand pose estimation: A review. Computer Vision and Image Understanding 108(1–2): 52–73.
Harling, Philip and Alistair Edwards 1997. Hand tension as a gesture segmentation cue. In: Philip Harling and Alistair Edwards (eds.), Progress in Gestural Interaction, 75–87. Berlin: Springer.
Holzapfel, Hartwig, Kai Nickel and Rainer Stiefelhagen 2004. Implementation and evaluation of a constraint-based multimodal fusion system for speech and 3D pointing gestures. In: ICMI '04: Proceedings of the Fourth International Conference on Multimodal Interfaces, 175–182. New York: Association for Computing Machinery Press.
Johnston, Michael and Srinivas Bangalore 2005. Finite-state multimodal integration and understanding. Natural Language Engineering 11(2): 159–187.
Kendon, Adam 1986. Current issues in the study of gestures. In: Jean-Luc Nespoulous, Paul Perron and André Roch Lecours (eds.), The Biological Foundations of Gestures: Motor and Semiotic Aspects, 23–47. Hillsdale, NJ: Lawrence Erlbaum Associates.
Kessler, G. Drew, Larry F. Hodges and Neff Walker 1995. Evaluation of the Cyberglove as a whole-hand input device. Transactions on Computer Human Interaction 2(4): 263–283.
Kettebekov, Sanshzar, Mohammed Yeasin and Rajeev Sharma 2005. Prosody based audiovisual co-analysis for coverbal gesture recognition. IEEE Transactions on Multimedia 7(2): 234–242.
Koons, David B., Carlton J. Sparrell and Kristinn R. Thórisson 1993.
Integrating simultaneous input from speech, gaze and hand gestures. In: Mark T. Maybury (ed.), Intelligent Multimedia Interfaces, 257–276. Cambridge: Association for the Advancement of Artificial Intelligence Press/Massachusetts Institute of Technology Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press.
Mitra, Sushmita and Tinku Acharya 2007. Gesture recognition: A survey. IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications and Reviews 37(3): 311–324.
Nickel, Kai and Rainer Stiefelhagen 2007. Visual recognition of pointing gestures for human-robot interaction. Image and Vision Computing 25(12): 1875–1884.
Qu, Shaolin and Joyce Y. Chai 2006. Salience modeling based on non-verbal modalities for spoken language understanding. In: ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, 193–200. New York: Association for Computing Machinery Press.
Rabiner, Lawrence 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2): 257–286.
Robin, Richard S. 1967. Annotated Catalogue of the Papers of Charles S. Peirce. Amherst, MA: University of Massachusetts Press.
Sadeghipour, Amir and Stefan Kopp 2010. Embodied gesture processing: Motor-based integration of perception and action in social artificial agents. Cognitive Computation. Online first, DOI 10.1007/s12559-010-9082-z.
Sowa, Timo 2006. Towards the integration of shape-related information in 3-D gestures and speech. In: ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, 92–99. New York: Association for Computing Machinery Press.
Thórisson, Kristinn R. 1999. A mind model for multimodal communicative creatures & humanoids. International Journal of Applied Artificial Intelligence 13(4–5): 449–486.
Waibel, Alex, Minh Tue Vo, Paul Duchnowski and Stefan Manke 1996. Multimodal interfaces.
Artificial Intelligence Review 10(3–4): 299–319.

Timo Sowa, Nürnberg (Germany)


105. Gestures, postures, gaze, and other body movements in the 2nd language classroom interaction

1. Views from the 2nd language teaching discipline
2. Evidence from the 2nd language classroom interaction
3. Some questions and issues
4. References

Abstract

This article is about the use of gesture and body movement within the context of second language teaching and learning. The first section proposes that several teaching approaches rely on the body as a mediator to support instruction, while others integrate cultural gestures as an object of study in their own right. The second section focuses on observational studies that document the real use of hand gesture, gaze, facial expression, and other nonverbal conduct by teachers and learners in classroom interaction. It is found that teachers effectively use a variety of gestures to elucidate meaning, to assist in class participation and management, and to provide feedback. However, error correction is seldom accompanied by gesture, and some gestural strategies can lead to misunderstanding in negotiation-of-meaning sequences. As for learners, even at novice stages, their gestures reveal skills for manipulating language rather than language deficit; more precisely, they are indicative of development not only in terms of interactive competence (e.g., allocating and taking turns, soliciting assistance), but also in terms of cognitive processes (e.g., self-regulation, self-initiated repair). The questions these findings raise, their implications for second language education, and the need for more classroom observational studies are addressed in the final section.

1. Views from the 2nd language teaching discipline

The idea that the body can, and even should, be used in second language education is not new and has been consistently restated over the years within various methodological and pedagogical frameworks. One of the main foundations for this argument is the concept of communicative competence, which holds that the mastery of a set of linguistic rules is not enough to become a proficient speaker of a language. For the communicative approaches, the skills to interpret people's gestures, facial expressions and body movements, as well as the ability to respond appropriately or to initiate communication, are as necessary as the command of grammar in the traditional sense. Other pedagogical models have shared these views or proclaim the importance of the body in language education on different grounds. For the clarity of this section of the overview, I will address the issue, first, from the perspective of the body as a mediator for teaching and learning and, second, from the perspective of the teaching of gestures themselves. Several approaches and techniques incorporate the body in 2nd language teaching as a mediator, a means or a tool to achieve a goal rather than as the object of study: The Direct Method relies heavily on visual supports and teachers' gestures to explain new vocabulary; the Audio-Visual methods provide cues for the understanding of proxemics

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1426–1432


and body behaviours; and the Structuro-Global Audio-Visual approach exploits the synchronic relation between speech and body movements to enable the learner to gain the rhythmic patterns of the L2. Drama or action techniques also allow learners to use their bodies and voices to act out and even improvise sketches in the target language. Frequently used within the context of communicative approaches, they are also practised in teaching of socio-cultural orientation, where the embodiment of language learning is foundational. Methods usually referred to as "alternative" also rely on the body: students perform physical actions following the teacher's commands to develop listening skills (Total Physical Response), they decode the teacher's gestural, facial, and other visual cues to learn grammar and pronunciation (the Silent Way), and they practice relaxation exercises while listening to classical music to memorise dialogues (Suggestopedia). The Accelerative Integrated Method introduces each vocabulary item with a matching gesture that offers a visual representation of the meaning of the word. In fact, the use of teachers' gestures, including pantomime, to introduce new vocabulary is a long-standing technique. Together with codes of gestural cues or symbols for class management, these pedagogic or instructional gestures may render teachers' talk more effective. They will be discussed in the second section of this review. Gestures typically found in a given linguistic community, cultural gestures, are often described in inventories, repertoires, and dictionaries. The term applies essentially to emblems, but also includes gestures used with discourse markers in conversation and gestures used in social rituals, such as greetings. Emblems seem to be the default type of gesture in 2nd language teaching and are mainly used to promote conversational skills and understanding across cultures, although they may also help recall of language items (Allen 1995).
Interaction patterns of the target language, including bodily and vocal behaviours, are studied with the support of feature films, television programs and dialogues filmed on video, also with a view to developing the learner's cultural awareness (Damnet and Borland 2007). Some textbooks for the teaching of French include video materials to demonstrate the use of cultural gestures in context or, more specifically, of pragmatic gestures frequent in conversation (Calbris and Montredon 2011). Overall, however, despite the renewed interest in nonverbal communication in 2nd language education, teaching resources that integrate the verbal and nonverbal patterns of the target language within a coherent pedagogical program remain scarce.

2. Evidence from the 2nd language classroom interaction

Second language classrooms are socially organized sites, and teachers and learners are agents of change because their words and actions are decisive in the conduct and outcomes of the process of teaching and learning. The ways they use their bodies are part and parcel of this process and a significant indicator of how it unfolds. In this section I present results from observational studies, mainly of a qualitative nature, that document real gesture use in classroom interaction, including hand gesture, gaze, facial expression, and other nonverbal conduct.

2.1. Teachers' gestures

Teachers use a good range of nonverbal means, especially hand gestures and facial mimicry, mostly to elucidate meaning for the learners. More precisely, pedagogic gesture can


communicate linguistic information, assist in class participation and management, and provide positive feedback or signal errors (Tellier 2008). Information about the language contents under study is provided at any time during a lesson, but more often during the input phase, characterized by dense sequences of information structured according to didactic criteria. The teacher's verbal and nonverbal behaviours are contingent on these specific conditions of the 2nd language class and, at the same time, mediators of the process. As such, they may vary according to language level and class focus; for example, in introductory classes centred on dialogue memorisation, iconic gestures were used to refer to the meaning of words and beat-type gestures to stress linguistic forms, whereas in intermediate classes oriented to meaning and thematic content, the teacher favoured metaphoric gestures of a naturalistic rather than pedagogic nature (Griggs 2010). Beats also emphasized a word or a phrase in another study, but emblems and iconics, as well as metaphorics, deictics, miming, and facial expressions, all facilitated comprehension in same-level Spanish classes (Allen 2000). Representational gestures were also used in tight coordination with speech to complete the verbal information the teacher was giving while explaining vocabulary (Lazaraton 2004). Further to facilitating comprehension, iconic and deictic gestures were used to elicit vocabulary from the learners and to provide cues for corrective feedback (Talleghani-Nikazm 2008). In addition, investigation of the students' perceptions of teachers' gestures confirmed that representational and deictic gestures can enhance comprehension (Sime 2008). Teachers' nonverbal behaviours also assist in guiding students' participation in classroom interaction and in facilitating learning.
Beats and emblematic gestures used in beginning Italian classes were spontaneously mirrored by the learners and helped them not only to comprehend the lesson, but also to create an Italian identity (Peltier Nardotto and McCafferty 2010). Pointing gestures drew the students' attention to something relevant to what was going on: a person, an object, or the task itself (Sime 2008), while the teacher's gaze, directed towards one learner in particular or towards the entire group, clarified for the students to whom the learning sequence was directed (Faraco and Kida 2008). However, the teacher may sometimes avoid mutual gaze with a student who is having trouble following the learning sequence. While such a softening strategy seemingly prevents the student's embarrassment, it can lead to misunderstanding. Also, head nods, presumably made by the teacher to encourage students' efforts, may be taken as signs of approval during a faulty sequence and induce error. In other words, negotiation of meaning between teachers and learners is a delicate business, and teachers should not underestimate the impact of their gesturing. This is also highly relevant in error correction. Fanselow (1977) found that teachers addressed incorrect meaning rather than incorrect grammar and that they did not include gesture frequently in their strategies. Similar results emerge from studies focused on gestural correction. Davies (2006) observed that very few occurrences of focus-on-form episodes involved gesture alone, unaccompanied by speech, and that these were used by only two of the four participating teachers. Remarkably, all these episodes, although scarce in number, were followed by uptake, that is, the student noticed the error and replaced it with a corrected form immediately after the teacher's intervention.
Kamiya (2012) confirmed that focus-on-form episodes were mostly not accompanied by any kind of gestural activity and, going further than Davies (2006), computed all types of focus-on-form occurrences, with and without gesture. Proactive focus on form, when the teacher attends to a problematic form without a learner error, was more often


accompanied by gesture than reactive focus on form, when feedback is provided in response to an error. This shows again that error correction proper is seldom performed gesturally. Other studies observed the form of the gesture used for correction and its relationship with the language within a pedagogical sequence. Some of these hand gestures displayed metaphorically the nature of the error or gave a clue for correction: for example, a bi-handed rotation movement in one direction and then the opposite to indicate that word order needs attention (Faraco 2010), or a flat hand held upwards, palm facing the body and moving backward and forward, to signal use of the past tense (Muramoto 1999). According to Faraco (2010), there is no certainty that uptake will follow if gestural error correction is accompanied by other modalities, mainly verbal and prosodic, because information overload may occur, preventing the learner from focusing on the target form. However, uptake may be facilitated by the visual similarity between the gesture form and the content of the correction, as in the examples above, and by a perceptible focus on the problematic utterance, e.g., through repetition and gaze away from the learner, prior to performing gestural error correction. Thus, for gestural correction to be effective, it might need to be isolated from the other means employed by the teacher, a finding in line with those made by Davies (2006). In other words, people's ability to glean information from speakers' gestures in ordinary conversation may be lessened by the context of the 2nd language classroom, where patterns of interaction are structured by the goal of the encounter and by pre-existing institutional constraints, such as duration, frequency, space layout, number of participants, and so on. The implications of these findings for 2nd language pedagogy should not be disregarded.

2.2. Learners' gestures

In peer interaction, that is, in multi-party conversation-style encounters including task-based activities and open discussion classes, learners engage in collaborative talk without the teacher's intervention. There are immediate implications for the proxemic conditions of such encounters: learners are turned towards their interactional partners, mostly at conversational distance, and can pay better attention to their gaze, facial expressions and posture than in teacher-fronted classes. Together with hand and arm gestures, these nonverbal behaviours combine with talk in the pursuit of the goal intended in the activity. The studies reported here show that learners, even at novice stages, are more skillful at manipulating language than analyses based solely on speech transcriptions would suggest. Learners, for example, use gaze (Ikeda 2012), sometimes in conjunction with a pointing palm hand gesture (Greer and Potter 2012), to allocate and take turns at talk in ways that enhance the collaborative aspect of interaction for learning. Talk with phrasal breaks that would be categorized as false starts by analyses of audio data alone turns out, on close inspection of video data, to involve gaze and head shifts from the speaker to solicit and obtain the gaze of a non-gazing recipient (Carroll 2004). Pointing hand gestures, gaze, and pantomime are used instead of talk in turn completion to avoid explicit expression of something considered sensitive in the situation, or to better demonstrate the quality of the object talked about. Far from being cases of language deficit, such strategic use of gesture, gaze, and head movements, and the relevant responses given by the recipients, are indicative of interactive competence in development (Olsher 2004). When language troubles do occur, as for example in cases of lexical error, gaze is used to solicit assistance and pointing gestures are used to indicate relevant


information, so as to achieve mutual understanding and solve the impasse (Mori 2004). Coordinated talk, gaze, posture shifts, and other body conduct, such as facial expression and the manipulation of the textbook, were crucial for the organization of students' interaction in a word search episode (Mori and Hasegawa 2009). Similarly, coordinated talk, gaze, smile, hand gesture, pantomime, and head movements were used by a learner to successfully complete a challenging sequence of storytelling, all in collaborative coordination with the recipients' talk and collective laughter (Tabensky 2012). The teller's gaze was, in addition, a visual display of self-regulation and self-initiated repair. Students' coordinated talk and gestures are therefore indicative not only of abilities to conduct interaction in L2, but also of states of cognition in development. Moreover, the relationship between language proficiency and gesture use appears to be a more complicated matter than high gesture rates produced by low proficiency level learners. The type of task students are engaged in seems to play an important role; for example, in an oral presentation, competent students used more frequent, and more elaborate, hand gestures than their less competent peers (Tabensky 2008). Also, in a conversation task, the gestures and facial expressions of higher-scoring students appeared well synchronized with the flow of speech and turn-taking organisation, whereas those of the lower-scoring students appeared to relate more to language difficulties, tension, and lack of confidence (Gan and Davison 2011). Relational aspects can also be relevant: the nonverbal expression of emotions (mimicry, gaze and head shifts, posture, hand and arm gestures, voice intensity, and laughter) influenced the way students performed self-correction and other-correction and, overall, the way the interaction unfolded (Pépin and Steinbach 2007).
There are also suggestions that body posture, including arms and legs positioning, gaze and hand gestures displayed during peer interaction, change according to group friendships and that these accommodations may have an impact on task engagement and performance quality (Stone 2012).

3. Some questions and issues

While empirical studies offer enlightening insights into gesture use by teachers and learners, we still need to understand better how this really happens in genuine contexts of classroom interaction. More observational studies are needed to investigate further the role of gesture in teacher-learner sequences of interaction and, in particular, the best practices to avoid misunderstanding, and whether gestural input effectively leads to uptake and actual use of new language items, with or without gesture. There are suggestions in the literature that for gesture to be an effective mediator in L2 education it needs to be presented on its own rather than accompanied by other means of clarification. Empirical studies can investigate what types of nonverbal input may lead more frequently to understanding, but they cannot inform us about the moment-by-moment unfolding of teacher-learner interaction, where meaning is manufactured. Another question is how learners' gestures evolve with increasing language proficiency. Indeed, spontaneous discourse involving more and more complex language will occur at some stage of development, and it would be useful to investigate how gesture participates at such a level of complexity. From the instructional point of view, a recurrent question is the teachability/learnability of gesture, and more precisely, whether gestures should be taught for comprehension only or for actual use in oral communication. Anecdotal evidence tells us that pragmatic


gestures can be integrated in teaching and that learners benefit a great deal from such instruction. Truly emblematic gestures, on the other hand, may require strong feelings of identification with the target culture to be internalized and used spontaneously. But again, there is currently insufficient classroom evidence to support or invalidate these views. Developments in second language classroom interaction research will advance our knowledge of gesture use and may suggest better ways to integrate it in L2 education.

4. References

Allen, Linda Quinn 1995. The effects of emblematic gestures on the development and access of mental representations of French expressions. The Modern Language Journal 79(4): 521–529.
Allen, Linda Quinn 2000. Nonverbal accommodations in foreign language teacher talk. Applied Language Learning 11(1): 155–176.
Calbris, Geneviève and Jacques Montredon 2011. Clés pour l'Oral. Paris: Hachette Français Langue Étrangère.
Carroll, Donald 2004. Restarts in novice turn beginnings: Disfluencies or interactional achievements? In: Rod Gardner and Johannes Wagner (eds.), Second Language Conversations, 201–220. London/New York: Continuum.
Damnet, Anamai and Helen Borland 2007. Acquiring nonverbal competence in English language contexts: The case of Thai learners of English viewing American and Australian films. Journal of Asian Pacific Communication 17(1): 127–148.
Davies, Matthew 2006. Paralinguistic focus on form. TESOL Quarterly 40(4): 841–855.
Fanselow, John F. 1977. The treatment of error in oral work. Foreign Language Annals 10(5): 583–593.
Faraco, Martine and Tsuyoshi Kida 2008. Gesture and the negotiation of meaning in a second language classroom. In: Steven G. McCafferty and Gale Stam (eds.), Gesture: Second Language Acquisition and Classroom Research, 280–297. New York/London: Routledge.
Faraco, Martine 2010. Geste et prosodie didactiques dans l'enseignement des structures langagières en FLE. In: Olga Galatanu, Michel Pierrard, Dan Van Raemdonck, Marie-Eve Damar, Nancy Kemps and Ellen Schoonheere (dir.), Enseigner les Structures Langagières en FLE, 203–212. Bruxelles: P.I.E. Peter Lang.
Gan, Zhengdong and Chris Davison 2011. Gestural behavior in group oral assessment: A case study of higher- and lower-scoring students. International Journal of Applied Linguistics 21(1): 94–120.
Greer, Tim and Hitomi Potter 2012. Turn-taking practices in multi-party EFL oral proficiency tests. Journal of Applied Linguistics (JAL) 5(3): 297–320.
Griggs, Peter 2010. La structuration de l'input dans le cadre des interactions multimodales de la classe de langue étrangère. Language, Interaction and Acquisition 1(2): 297–328.
Ikeda, Keiko 2012. L2 'second-order' organization: Novice speakers of Japanese in a multi-party conversation-for-learning. Journal of Applied Linguistics (JAL) 5(3): 245–273.
Kamiya, Nobuhiro 2012. Proactive and reactive focus on form and gestures in EFL classrooms in Japan. System 40(3): 386–397.
Lazaraton, Anne 2004. Gestures and speech in the vocabulary explanations of one ESL teacher: A microanalytic inquiry. Language Learning 54(1): 79–117.
Mori, Junko 2004. Negotiating sequential boundaries and learning opportunities: A case from a Japanese language classroom. The Modern Language Journal 88(4): 536–550.
Mori, Junko and Atsushi Hasegawa 2009. Doing being a foreign language learner in a classroom: Embodiment of cognitive states as social events. International Review of Applied Linguistics (IRAL) 47(1): 65–94.
Muramoto, Naoko 1999. Gesture in Japanese language instruction: The case of error correction. In: L. Kathy Heilenman (ed.), Research Issues and Language Program Direction, 145–175. Boston, MA: Heinle and Heinle.


Olsher, David 2004. Talk and gesture: The embodied completion of sequential actions in spoken interaction. In: Rod Gardner and Johannes Wagner (eds.), Second Language Conversations, 221–245. London/New York: Continuum.
Peltier Nardotto, Ilaria and Steven G. McCafferty 2010. Gesture and identity in the teaching and learning of Italian. Mind, Culture, and Activity 17(4): 331–349.
Pépin, Nicolas and Fee Steinbach 2007. Multimodalité, stabilisation de ressources linguistiques et émotionalité en classes de FLE. Les travaux en groupe: une étude de cas. Bulletin Suisse de Linguistique Appliquée 85: 81–105.
Sime, Daniela 2008. "Because of her gesture, it's very easy to understand" – Learners' perceptions of teachers' gestures in the foreign language class. In: Steven G. McCafferty and Gale Stam (eds.), Gesture: Second Language Acquisition and Classroom Research, 259–279. New York/London: Routledge.
Stone, Paul 2012. Learners performing tasks in a Japanese EFL classroom: A multimodal and interpersonal approach to analysis. RELC Journal 43(3): 313–330.
Taleghani-Nikazm, Carmen 2008. Gestures in foreign language classrooms: An empirical analysis of their organization and function. In: Melissa Bowles, Rebecca Foote, Silvia Perpiñán and Rakesh Bhatt (eds.), Selected Proceedings of the 2007 Second Language Research Forum, 229–238. Somerville, MA: Cascadilla Proceedings Project.
Tabensky, Alexis 2008. Expository discourse in a second language classroom: How learners use gesture. In: Steven G. McCafferty and Gale Stam (eds.), Gesture: Second Language Acquisition and Classroom Research, 298–320. New York/London: Routledge.
Tabensky, Alexis 2012. Non-verbal resources and storytelling in second language classroom interaction. Journal of Applied Linguistics (JAL) 5(3): 321–348.
Tellier, Marion 2008. Dire avec des gestes. Le Français Dans le Monde: Recherche et Application 44: 40–50.

Alexis Tabensky, UNSW Australia (The University of New South Wales)

106. Bodily interaction (of interpreters) in music performance

1. Introduction: Music and movement
2. Musical performance as joint action
3. Categories of movements made by musicians
4. Synchronization between musicians
5. Studies of movements of musicians performing together
6. Studies of movements of musicians rehearsing together
7. Studies on non-Western music
8. References

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1432–1440

Abstract

The connection between music and movement has long been noted by scholars and performers. This chapter focuses on the ways in which musicians move while performing music together, and the functions of these movements. Performing music together is understood


as a kind of joint action, with participants making use of common ground to achieve shared goals. In musical joint action the primary goal is synchronization of behaviors at the level of milliseconds (individual note onsets) and larger units (such as phrases lasting a few seconds or longer). Various taxonomies have been proposed for musicians’ movements, typically divided first into those which are sound-producing, also called “essential” or “instrumental”, and second into those called “expressive”, “ancillary”, or “non-obvious”. Some researchers make use of motion tracking techniques, but many of the studies published to date use video recordings of musicians as the data to be analyzed. Different taxonomies of musicians’ ancillary movements have been used by researchers, most stemming from the work of Ekman and Friesen. A variety of studies are reviewed, demonstrating that musicians’ ancillary movements stem from and reveal their musical intentions; that such movements serve to increase the efficacy of joint actions between musicians; and that these movements alter dynamically with context.

1. Introduction: Music and movement

The connection between music and movement has long been noted by scholars as well as by performers in music and dance. Motion has been deemed the essence of musical structure (Kurth 1917). Music has been understood in terms of physical motion (Zuckerkandl 1956) and as related to embodied movement directly or metaphorically (Larson 2012; Todd 1999). Experimental investigations into the relationship between music and movement began early in the twentieth century, with regard to both listeners (Truslit 1938) and performers (Seashore 1936, 1938). Musicians move when performing alone, but this chapter focuses on the ways in which musicians move while performing music together, and the functions of these movements. This chapter aims to give examples of research methodologies and findings along with citations for further reading.

2. Musical performance as joint action

Music is most often understood in terms of sonic artifacts or objects; here, however, music is considered as a category of human activities which yield sonic results (Small 1998). Music as human activity is anthropologically ubiquitous and species-specific (Blacking 1973; Merriam 1964), and is widely understood as communicative, albeit lacking in the propositional content central to most linguistic communication (Ashley in press b; Bharucha, Curtis, and Paroo 2012; Cross 2005). Musicians' activities involve the creation or interpretation of structured sequences of sounds. This may involve very broad decisions and actions (as in the case of music which involves a large degree of improvisation) or may be more limited in scope, dealing with matters of tempo (the speed of the music), dynamics (the varying loudness of the music), articulation (how connected or separated notes in a musical line are), and balance (the relative prominence of given lines in the overall musical texture). The way in which musicians achieve their goals in dealing with these many factors is best seen as a variety of joint action (Knoblich, Butterfill, and Sebanz 2011), also involving shared knowledge structures or "common ground" (Clark 1996). In musical joint action the primary goal is synchronization of action at the level of milliseconds. This is accomplished through processes related to those seen in imitation and mimicry (Nowicki et al. 2013), which may have their roots in infancy (Phillips-Silver and Keller 2012). Synchronization of musicians' movements is


in part unintentional (Keller and Appel 2010), which lends support to the notion that musical actions share sources common to other human activities, such as conversation (Monson 1997; Sawyer 2005) where precise synchronization has long been recognized as important (Sacks, Schegloff, and Jefferson 1974).

3. Categories of movements made by musicians

Musicians must move in order to produce the sounds of music, but not all movements they make are necessary for sound production. Various taxonomies have been proposed for musicians' movements, typically divided first into those which are sound-producing, also called "essential" or "instrumental", and second into those called "expressive", "ancillary", or "non-obvious". The first category of movements is very interesting in and of itself, especially with regard to bodily interactions with the affordances of musical instruments (Baily 1985). However, the focus here is on "expressive" or "ancillary" movements, which may be considered to have an intended, rather than unintended, communicative purpose (as with "non-natural" vs. "natural" meaning in Grice 1989). It is often difficult to clearly differentiate between expressive and essential gesture, as all sound production involves movement, but it is often possible to connect ancillary movements with musicians' expressive intentions or the expressive possibilities given in a musical score (Davidson 2007; Shoda and Adachi 2012; Wanderley et al. 2005). Ancillary or expressive movements of musicians have long been noted, if only to urge that they be muted or eliminated (Quantz 1752). There is a growing research literature dealing with musicians' movements (for relatively recent overviews, see Davidson 2009, and also Gritten and King 2006, 2011); current research employs a variety of methods. Some researchers make use of motion tracking techniques, but many of the studies published to date use video recordings of musicians as the data to be analyzed. Such analyses typically begin by enumerating movements by category.
The categories most frequently used derive from those of Ekman and Friesen (1969): emblems (movements which have a relatively fixed, lexicalized meaning), regulators (movements which enable the proper timing of musical actions, especially between co-performers), illustrators (movements which depict or highlight aspects of musical structure or performance nuance), adaptors (movements which regulate the state of the performer or her relationship with other persons or objects), and affect displays (movements which reveal emotional states). Additional categories may be added to these, such as gaze, facial expressions, posture (i.e., a held position), and touch. Other systems of categorization, such as those put forward by McNeill (1992), Cassell (1998), or Kendon (2004), are also sometimes used.

4. Synchronization between musicians

Music is at root a temporal art, founded on the organization of musical time through rhythm and how performers engage rhythm. Studies of movement to music often deal with entrainment, the synchronization of bodily movement to some external timing source, such as a recording, a metronome, or a performer. There is a large literature on musical rhythm, most of which is beyond the scope of researchers interested in multimodal communication (important texts include Cooper and Meyer 1960; Hasty 1997; Lerdahl and Jackendoff 1983; London 2004). Much of this literature relates to the notion of a cognitive or perceptual clock mechanism, hierarchically structured, which provides a referent against which


rhythms are perceived (Povel and Essens 1985). This hierarchic beat structure is fundamental not only to the organization of pieces of music but also to beat matching with the body (Large 2000), to how different body parts are moved to music (Toiviainen, Luck, and Thompson 2010), and to how listeners' and performers' attention is allocated dynamically, with more attention being focused toward beats coinciding with relatively higher levels of metric structure (Jones and Boltz 1989; Jones 2009). The need for musicians to synchronize their activities is of primary importance in a wide range of musical behaviors. When musicians play together – whether physically copresent, overdubbing in a studio recording, or in the virtual environment of a video link – adequate and appropriate synchronization is a basic goal. One challenge for musicians performing together is to achieve appropriate levels of synchronization at the level of note onsets. The nature of such alignments varies according to context. For musical lines to sound appropriately synchronous in Western music, the onsets of tones in these lines should align with an inferred reference beat within a narrow range, typically plus or minus 30 milliseconds, although this varies with genre (Keller in press). Expertise developed over thousands of hours of practice results in high levels of motor control, such that performances by a single musician recorded months apart are practically identical to one another (Shaffer 1984), and such that musicians can align to a beat with precision approaching neural limits (Ashley in press a). One challenge for co-performers is to synchronize when the local speed of the music changes, for example when slowing before the end of a section or a composition, a very common phenomenon demonstrating musicians' regard for hierarchic musical structure.
In such situations, musicians might plausibly use visual cues from their collaborators’ ancillary movements as a means of better achieving the dynamic coordination required by changing tempo. This might extend to other aspects of their playing together, including the “shaping” of musical units such as phrases not only in tempo, but also in dynamics (loudness). We now address these matters through findings from the research literature.

5. Studies of movements of musicians performing together

Early studies of musicians' synchronization (Shaffer 1984) suggested that musicians coordinated with one another almost exclusively by auditory feedback and shared motor programs, with visual information being of little importance. However, later studies have demonstrated the significant role which seeing the movements of co-performers may have in ensemble synchronization. One study (Appleton, Windsor, and Clarke 1997) used two pianists in a three-condition design: playing together on one keyboard; playing on two keyboards in the same room at the same time but unable to see one another; and with one "live" pianist playing along with the other's pre-recorded part. One finding of this study was that both copresence and visual access influenced synchronization. Visual feedback increased synchronization, and asynchrony increased notably at the musical level of the measure (as opposed to the beat) in the absence of visual information and when performing with the recording. This suggests that seeing co-performers' movements facilitates coordination at higher levels of musical structure, where much of the work of musical interpretation, such as phrasing, is located. In addition, the visual feedback condition exhibited higher overall timing variability than the nonvisual condition, indicating that players who could see each other could engage in freer expressive variation without sacrificing synchronization.


Other studies have investigated how a performer's role in an ensemble influences the kind and extent of their ancillary movements. This has typically been studied using a leader/subordinate dichotomy, avoiding the much more complicated interplay found in many musical contexts. One might hypothesize that leaders in a musical context would move more often or in a more obvious way, thereby communicating musical intentions to be followed by others in the ensemble; in fact, this has been shown to be the case. In one study of synchronization (Goebl and Palmer 2009), duo pianists' finger and head movements were measured. The two pianists sat side by side at a single keyboard, each playing one line with the right hand. Results demonstrated that the leader produced larger finger movements than the follower, taken to be an indication of visual communication from leader to follower; however, when the follower's passage contained more notes than the leader's, the leader's finger height decreased, showing the influence of rhythmic activity on ensemble role. Interestingly, the overall duration of followers' finger movements was longer than leaders', which the authors took to be an index of hesitancy, waiting for the leader to move first (probably an elongation of the prestroke "hold" phase of the gesture). This study also investigated the pianists' head movements, taking synchronization (cross-correlation) of these movements as the measure of interest. In conditions where auditory feedback was reduced (where the pianists could not hear each other), finger synchronization decreased but head movement synchronization, as well as finger height, increased, presumably as communicative compensation in the visual modality. In musical situations where other kinds of interplay occur, different behaviors have been noted.
A study of the pop band “The Corrs” (Kurosawa and Davidson 2005) found that, compared to the other band members, the drummer used a relatively larger number of regulator-category movements, mostly gaze, presumably to maintain her coordination with the other musicians, but also perhaps to support her as a shy person performing in public. The role of gaze as a means by which performers interact with each other is frequently attested to in musicians’ anecdotes, but has not been studied systematically and is clearly an area ripe for further investigation.

6. Studies of movements of musicians rehearsing together

Rehearsal is a ubiquitous aspect of musicians' work, whether individual (Chaffin, Imreh, and Crawford 2002) or in groups (Blum 1986). Nevertheless, until recently most empirical studies of rehearsal have dealt with efficiency of learning rather than the rehearsal process itself. There are, however, a few studies which have brought forth interesting results. A study of duo pianists (Williamon and Davidson 2002) found a strong connection between the pianists' behaviors (hand lifts, swaying, gaze) and locations in the music deemed important by the performers. The brief quantitative analyses given indicate that the pianists' body sways and hand lifts became better synchronized over the course of rehearsal. Movements at important musical locations, as well as gaze, increased from the penultimate rehearsal to the performance, perhaps reflecting a desire to coordinate better, but also perhaps less need to look continuously at the printed music. Such results contrast with those from a study of the movements of a flutist leading a chamber ensemble (Mader 2002), which found that the beat-like movements needed to facilitate synchrony decreased over rehearsals, whereas more generally expressive movements remained constant or even increased. This was interpreted as an indication of a kind of


musical pragmatics at work, such as Grice's Maxim of Quantity (say enough but not too much); once the performers had learned and internalized how to synchronize important beats, the overt coordinating movements were no longer necessary. A related study (Keller and Appel 2010) investigated duo pianists' asynchrony of finger motions (note onset timing) alongside body sway (measured by motion capture), over repeated playings of the same musical excerpts. Note asynchronies decreased and coordination of body sway increased over repetitions, indicating better entrainment at multiple time scales. Interestingly, when the leader's body sway was followed by his partner's, note asynchronies were lower. Thus, low-level instrumental actions were systematically related to higher-level ancillary movements. These results are particularly interesting in that the pianists could not see each other; their movements, both instrumental and ancillary, were motivated by their knowledge of musical structure (what to play and how, in general, to play it) and by a gradually increasing understanding of their partner's way of timing the music and dynamically coordinating with it. Researchers have also investigated the movements of vocalists in rehearsal. In a study of rehearsals between "classical" singers and their accompanists (King and Ginsborg 2011), duos composed of a vocalist and a pianist were asked to collaborate by preparing and performing unfamiliar songs, first practicing alone, then rehearsing together, then performing the song, and finally participating in a post-performance interview. Each singer collaborated with their usual accompanist in one condition and a new, unfamiliar accompanist in a paired condition. The participants produced more ancillary, expressive gestures when paired with familiar partners or with partners at the same level of expertise, and also produced a wider range of movements.
In all conditions, though, expressive movements were made to coordinate tempo and pulse (through beats or deictics), to coordinate shaping of phrases (through metaphorics and illustrators), to coordinate the beginnings of musical units (through regulators), and to focus on musical details (through emblems and illustrators). Thus, the variety of movements made was informed by many factors, including music-structural, interpretive, and interpersonal ones.

7. Studies on non-Western music

In closing, we should note that one difficulty in the existing literature on musicians' movements lies in its close ties to Western musics. Not all of humanity's music is structured like Western music, with its emphasis on harmony and its beat-oriented hierarchic rhythmic structure. There is a literature on the bodily movements of non-Western musicians, much of it written from the "thick description" ethnological standpoint. Here we can only point to a pair of interesting studies. Clayton (2007) presents a case study in quantitative analysis of video recordings, using Indian musicians as participants. The musical excerpt examined lacks a consistent beat shared by the ensemble, but the performers' movements nevertheless demonstrate a common rhythmic structure which emerges from their interacting dynamics. This is particularly striking in that the musicians claim that their movements are intended to be independent of one another. One of Clayton's students (Moran 2013) has engaged North Indian duo performances in an entirely different manner, linking many aspects of performance, including gaze and movement, to culturally defined communicative intentions.


8. References

Appleton, Lucy J., W. Luke Windsor and Eric F. Clarke 1997. Cooperation in piano duet performance. In: Alf Gabrielsson (ed.), Third Triennial ESCOM Conference: Proceedings, 471–474. Uppsala: Uppsala University.
Ashley, Richard in press a. Expressiveness in funk. In: Dorottya Fabian, Renee Timmers and Emery Schubert (eds.), Expressiveness in Music Performance: Empirical Approaches Across Styles and Cultures. Oxford: Oxford University Press.
Ashley, Richard in press b. Communication. In: William F. Thompson and J. Geoffrey Golson (eds.), Music in the Social and Behavioral Sciences. New York: Sage Publications.
Baily, John 1985. Music structure and human movement. In: Peter Howell, Ian Cross and Robert West (eds.), Musical Structure and Cognition, 287–332. London: Academic Press.
Bharucha, Jamshed, Megan Curtis and Kaivon Paroo 2012. Musical communication as alignment of brain states. In: Patrick Rebuschat, Martin Rohrmeier, John A. Hawkins and Ian Cross (eds.), Music and Language as Cognitive Systems, 139–155. Oxford: Oxford University Press.
Blacking, John 1973. How Musical is Man? Seattle: University of Washington Press.
Blum, David 1986. The Art of Quartet Playing: The Guarneri Quartet in Conversation. New York: Alfred Knopf.
Cassell, Justine 1998. A framework for gesture generation and interpretation. In: Robert Cipolla and Alex Pentland (eds.), Computer Vision in Human-Machine Interaction, 191–215. New York: Cambridge University Press.
Cooper, Grosvenor and Leonard B. Meyer 1960. The Rhythmic Structure of Music. Chicago: University of Chicago Press.
Chaffin, Roger, Gabriella Imreh and Mary Crawford 2002. Practicing Perfection: Memory and Piano Performance. Mahwah, NJ: Lawrence Erlbaum.
Clark, Herbert H. 1996. Using Language. Cambridge: Cambridge University Press.
Clayton, Martin R. L. 2007. Observing entrainment in music performance: Video-based observational analysis of Indian musicians' tanpura playing and beat marking. Musicae Scientiae 11(1): 27–59.
Davidson, Jane W. 2007. Qualitative insights into the use of expressive body movement in solo piano performance: A case study approach. Psychology of Music 35(3): 381–401.
Davidson, Jane 2009. Movement and collaboration in musical performance. In: Susan Hallam, Ian Cross and Michael Thaut (eds.), Oxford Handbook of Music Psychology, 364–376. Oxford: Oxford University Press.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of non-verbal behavioral categories: Origins, usage and coding. Semiotica 1(1): 49–98.
Goebl, Werner and Caroline Palmer 2009. Synchronization of timing and motion among performing musicians. Music Perception 26(5): 427–438.
Grice, H. Paul 1989. Studies in the Way of Words. Cambridge, MA: Harvard University Press.
Gritten, Anthony and Elaine King (eds.) 2006. Music and Gesture. Aldershot: Ashgate Press.
Gritten, Anthony and Elaine King (eds.) 2011. New Perspectives on Music and Gesture. Aldershot: Ashgate Press.
Hasty, Christopher 1997. Meter as Rhythm. New York: Oxford University Press.
Jones, Mari R. 2009. Musical time. In: Susan Hallam, Ian Cross and Michael Thaut (eds.), Oxford Handbook of Music Psychology, 81–92. Oxford: Oxford University Press.
Jones, Mari R. and Marilyn Boltz 1989. Dynamic attending and responses to time. Psychological Review 96(3): 459–491.
Keller, Peter E. and Mirjam Appel 2010. Individual differences, auditory imagery, and the coordination of body movements and sounds in musical ensembles. Music Perception 28(1): 27–46.
Keller, Peter E. in press. Ensemble performance: Interpersonal alignment of musical expression. In: Dorottya Fabian, Renee Timmers and Emery Schubert (eds.), Expressiveness in Music Performance: Empirical Approaches Across Styles and Cultures. Oxford: Oxford University Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.

King, Elaine C. and Jane Ginsborg 2011. Gestures and glances: Interactions in ensemble rehearsal. In: Anthony Gritten and Elaine King (eds.), New Perspectives on Music and Gesture, 177–201. Aldershot: Ashgate Press.
Knoblich, Gunter, Stephen Butterfill and Natalie Sebanz 2011. Psychological research on joint action: Theory and data. In: Brian Ross (ed.), The Psychology of Learning and Motivation, Volume 54, 59–101. Burlington, VT: Academic Press.
Kurosawa, Kaori and Jane W. Davidson 2005. Non-verbal interaction in popular performance: The Corrs. Musicae Scientiae 19(1): 111–136.
Kurth, Ernst 1917. Grundlagen des linearen Kontrapunkts. Bern: Krompholz.
Large, Edward W. 2000. On synchronizing movements to music. Human Movement Science 19(4): 527–566.
Larson, Steve 2012. Musical Forces. Bloomington, IN: Indiana University Press.
Lerdahl, Fred and Ray Jackendoff 1983. A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.
London, Justin 2004. Hearing in Time. New York: Oxford University Press.
Mader, Ronda J. 2002. Expressive movements of flutists: Categories and functions. D.Mus. dissertation, School of Music, Northwestern University.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press.
Merriam, Alan P. 1964. The Anthropology of Music. Evanston: Northwestern University Press.
Monson, Ingrid 1997. Saying Something: Jazz Improvisation and Interaction. Chicago: University of Chicago Press.
Moran, Nikki 2013. Music, bodies and relationships: An ethnographic contribution to embodied cognition studies. Psychology of Music 41(1): 5–17.
Nowicki, Lena, Wolfgang Prinz, Marc Grosjean, Bruno H. Repp and Peter E. Keller 2013. Mutual adaptive timing in interpersonal action coordination. Psychomusicology: Music, Mind, and Brain 23(1): 6–20.
Phillips-Silver, Jessica and Peter E. Keller 2012. Searching for roots of entrainment and joint action in early musical interactions. Frontiers in Human Neuroscience 6: 26.
Povel, Dirk-Jan and Peter Essens 1985. Perception of temporal patterns. Music Perception 2(4): 411–440.
Quantz, Johann J. 1752. Versuch einer Anweisung, die Flöte traversiere zu spielen. Berlin: Johann Friedrich Voß.
Sacks, Harvey, Emanuel Schegloff and Gail Jefferson 1974. A simplest systematics for the organization of turn-taking for conversation. Language 50: 696–735.
Sawyer, Keith 2005. Music and conversation. In: Dorothy Miell, David Hargreaves and Raymond MacDonald (eds.), Musical Communication, 45–60. Oxford: Oxford University Press.
Seashore, Carl 1936. Objective Analysis of Musical Performance. Iowa City: Iowa University Press.
Seashore, Carl 1938. Psychology of Music. New York: McGraw-Hill.
Shaffer, L. Henry 1984. Timing in solo and duet piano performance. Quarterly Journal of Experimental Psychology 36(A): 577–595.
Shoda, Haruka and Mayumi Adachi 2012. The role of a pianist's affective and structural interpretations in his expressive body movement: A single case study. Music Perception 29(3): 237–254.
Small, Christopher 1998. Musicking. Middletown: Wesleyan University Press.
Toiviainen, Petri, Geoff Luck and Marc R. Thompson 2010. Embodied meter: Hierarchical eigenmodes in music-induced movement. Music Perception 28(1): 59–70.
Todd, Neil P. M. 1999. Motion in music: A neurobiological perspective. Music Perception 17(1): 115–126.
Truslit, Alexander 1938. Gestaltung und Bewegung in der Musik. Berlin-Lichterfelde: C. F. Vieweg.
Wanderley, Marcelo M., Bradley W. Vines, Neil Middleton, Cory McKay and Wesley Hatch 2005. The musical significance of clarinetists' ancillary gestures: An exploration of the field. Journal of New Music Research 34(1): 97–113.



VII. Body movements – Functions, contexts, and interactions


Richard Ashley, Evanston (USA)

107. Gestures in the theater

1. The human body in the performing arts
2. The history of the art of acting
3. References

Abstract

In theater, gestures are usually employed intentionally and in accordance with a particular style of acting in order to convey certain meanings and evoke specific responses from the spectators. This leads to two questions, the first relating to the human body and the second to the history and cultural conditions of acting. Since gestures are inseparable from the human body performing them on stage, what is the nature of this "material"? Can it be used and shaped at will, or does it pose a challenge or even offer resistance? The history of acting in Europe, as well as a comparison of acting in different cultures, teaches us that acting styles depend not only on the conditions set by the human body but also on historical and cultural ones. What are these conditions? In what ways do the different acting styles succeed in conveying meaning and in evoking responses, at times even strong emotions, in the spectators? After dealing with these questions, the article concludes by discussing the mixing of acting styles from different performance cultures in so-called intercultural theater since the 1970s.

1. The human body in the performing arts

In the following, the term "theater" is understood and used as a generic term comprising all performing arts.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1440–1452.

1.1. The human body as material

In theater, gestures cannot be conceived independently of the human body performing them. Gestures, defined as all movements of the actor/performer's body, are an artistic means formed out of the material of the human body, which, unlike other materials, cannot be shaped and controlled at will. Bodies constitute living organisms and are thus constantly engaged in the process of becoming, i.e., of permanent transformation. The human body knows no state of being. It exists only in a state of becoming: It recreates itself with every blink of the eye; every breath and gesture brings forth the body anew. For that reason, the body is ultimately elusive. The bodily being-in-the-world, which cannot be but becomes, vehemently refutes all notions of the completed work of art. The human body might turn into an artwork only through its mortification, as a corpse. Only then does the body temporarily achieve a state of actual being, even if this state can be maintained only by a swift mummification. In this state it can be deployed as a material to be prepared, used and decorated in ritualistic or artistic processes. Gunther von Hagens' BODY WORLDS exhibitions offer a vivid example of such bodies. However, in this state the body can no longer perform gestures. The living body, by contrast, refuses to be declared or turned into a work of art. The actor/performer instead undergoes processes of embodiment (Csordas 1994). Through these processes, the body is transformed and recreated – the body happens. This is why, at the beginning of the twentieth century, the director, stage designer and theoretician Edward Gordon Craig wanted to ban the human body from the stage altogether and replace it with an über-marionette. In order to guarantee the status of the artwork in performance, Craig argued that the human body would have to be removed from the stage.

The whole nature of man tends towards freedom; he therefore carries the proof in his own person, that as material for the theatre he is useless. In the modern theatre, owing to the use of the bodies of men and women as their material, all which is presented there is of an accidental nature. … Art … can admit of no accidents. That then which the actor gives us, is not a work of art … (Craig 1908: 3)

Since, in fact, every gesture brings forth a new body, this has to be taken as the conditio sine qua non for any artistic use made of the body. Gestures in a performance do not contribute to creating a work of art, but to allowing an art event to happen.

1.2. Body techniques

In his lecture "Les techniques corporelles", held in 1934 and published one year later, the French sociologist Marcel Mauss explains that the human body is shaped by the repeated use of particular body techniques. While the fact that humans eat, sleep, walk, dance, swim, etc., is biologically determined, the ways in which they do so depend, among other things, on particular cultural traditions. This cultural shaping of the body, on the one hand, enables culturally meaningful gestures and, on the other, functions as a process of disciplining the body. Learning particular body techniques thus results in a physical habitus characteristic of a certain culture (Mauss 1935). However, actors usually have to undergo some training in order to acquire body techniques different from those they employ in everyday life. On the basis of the habitus they acquired within a particular culture, they have to adopt a new one, which depends on specific aesthetic norms, rules, or simply preferences. It is meant to enable the actor to represent different dramatic characters or – as in the Italian commedia dell'arte, the Japanese Noh or the Indian Kathakali – one particular type. Wherever we deal with clearly defined theater forms and acting styles, the bodily habitus as acquired through education and training is defined by their requirements. The more varied the performing arts and the more permeable they are towards each other, the more difficult it is to determine and acquire one particular habitus as a basis for the actor's work. Even today, acting schools differ from one another by relying on different "masters". However, the conviction prevails that even after leaving the school, actors have to continue working on their bodies and continuously acquire a new habitus in order to meet the various demands made by different forms and styles. The growing popularity of workshops plays an even more important role. Workshops offer actors a few weeks of training in particular body techniques, which former generations of actors acquired in a learning process of years, if not decades. They might focus on commedia dell'arte or biomechanics, the body techniques developed by Tadashi Suzuki or Theodoros Terzopoulos, or even on those used in highly codified theater forms, such as Noh and Kabuki, different Chinese opera forms or Indian dance theater. Each of these techniques reshapes the actor's body. The processes of embodiment performed in such workshops contribute to bringing forth another body. How the body will be reshaped in these processes cannot be predicted. Unlike the learning processes in which children acquire body techniques that might be typical of their culture, and also unlike the life-long training processes undergone by actors in particular traditional theater forms, workshop training will not result in the shaping of a particular habitus. It will lead to different results for each participant, depending on their expectations or on the kind and number of other workshops they have attended before. This workshop culture – generally not unproblematic – reveals that processes of shaping and reshaping the body are never complete. Each habitus once acquired can be changed and transformed by learning others. The actor's work on his own body, in this sense, is never finished.

1.3. The phenomenal and the semiotic body

In Helmuth Plessner's anthropology, the peculiar role of the body as aesthetic material has a central place. He emphasizes the tension between the phenomenal body of the actor, or his bodily being-in-the-world, and the semiotic body, or his representation of a dramatic character. For Plessner, this tension marks the ontological distance of human beings from themselves; in other words, the actor in particular symbolizes the conditio humana. Humans have bodies which they can manipulate and instrumentalize just like any other object, and which they can use as signs by which to communicate certain meanings. At the same time, they are their bodies; they are body-subjects. By stepping out of themselves to portray a dramatic character in "the material of one's own existence", actors refer to this doubling and to man's "eccentric position" (Plessner 1970) inherent in the distance from one's self. According to Plessner, the tension between the actors' phenomenal body and their portrayal of a character bestows a deeper anthropological significance and special dignity on the art of acting. The actor/performer communicates with the audience through his phenomenal and his semiotic body. While his phenomenal body, for example, communicates rhythm, energy, or presence, the semiotic body challenges the spectator to generate meaning (Fischer-Lichte 2008). When an actor performs gestures according to a certain rhythm, this rhythm will communicate itself to the bodies of the spectators. Rhythm is a principle based on the human body. The heartbeat, blood circulation, and respiration each follow their own rhythm, as do the movements we carry out when walking, dancing, writing, and performing other gestures. Even the inner movements of our bodies that we are incapable of perceiving are rhythmically organized. The human body is indeed rhythmically tuned. We have a particular capacity for perceiving rhythms and tuning our bodies to them.
The rhythm set and followed by a performer's gestures might collide with the various rhythms of each individual spectator, but it might just as well draw them in, so that they tune their bodies to it. When a group of actors rhythmically stomps or claps, as in choric theater, a rhythmic community of actors and spectators might even come into being – or, quite the contrary, some spectators will refuse to tune in, so that different rhythms clash. In the history of European theater, since the times of the church fathers, there has been an ongoing debate on the harmful or healing consequences of the actors' (phenomenal) bodies for the spectators. The church fathers, as well as those involved in the discussion on the morality of theater led in France in the seventeenth century (Thirouin 1998), acknowledged the actor's ability to exercise an immediate sensual effect on the spectator and to trigger strong, even overwhelming affects based on the presence of the phenomenal body. The atmosphere within a theater has been described and interpreted as highly infectious. The actors perform passionate gestures on stage; the spectators perceive and are infected by them: they begin to feel passionate. Through the act of perception, the infection is transferred from the actor's present body to the spectator's present body. Both theater enthusiasts and theater critics agree that this transmission is possible only through the bodily co-presence of actors and spectators. They differ only in their evaluation of this bodily co-presence. They see the excitement of passion either as a healing catharsis or as a profoundly harmful, destructive, and estranging (from oneself and from God) disturbance, as Rousseau still argued in the second half of the eighteenth century (Rousseau 2004: 10, 251–352). Both parties argue that the bodily co-presence of actors and spectators may lead to a transformation of the spectator: it heals his sickness of passion, results in a loss of self-control, or can change his identity.
The (phenomenal) body of the actor thus communicates directly, via contagion or transfer, with the (phenomenal) body of the spectator. Today, the actor's presence is much discussed as energy that is brought forth by certain body techniques, circulates in the space and is transferred to the spectators, challenging them to conjure up energy in themselves (Giannachi, Kaye, and Shanks 2012). Some of these body techniques consist in rhythmically performing certain gestures such as stomping or clapping. Thus, the communication between the phenomenal bodies of actors and spectators is conceived of as an exchange of energy. When the spectators focus their perception on the actor's body not as a phenomenal but as a semiotic one, they begin to constitute meanings. They perceive the actor's body as that of a particular dramatic character and, accordingly, each gesture he performs as the dramatic character's expression of a certain emotion, representation of a particular action, or depiction of a specific relationship to another character. Thus, focusing the perception on the semiotic body of the actors allows the spectators to construct an action and a character by according meaning to their gestures. Of course, the phenomenal and the semiotic body are indissolubly linked to each other. However, in theater, the perception of the spectator may shift between the phenomenal and the semiotic body, one moment focusing on the energy with which a gesture is performed and, the next, on the action it might signify. It depends on the specific aesthetic of a production whether the spectators' perception is directed more to the phenomenal or to the semiotic body, and whether they feel challenged to shift often between the two or to remain fixed on one of them for quite a while. One might assume that a realistic-psychological production of an Ibsen play would direct the perception of the spectator mostly to the actors' semiotic bodies, and Marina Abramović's performance The Artist is Present mostly to her phenomenal body; yet the perception is guided not only by the aesthetic of the performance but also by the subjective preferences of the spectator. In the first case, a spectator might direct his or her attention to the rhythm of gestures, the intensity with which they are performed, or to the energy they unleash. In the other case, a spectator might ask about the meaning of the artist's body posture and the way she looks at him or her. While the distinction between the phenomenal and the semiotic body might be of no great relevance with regard to gestures in everyday life, it is decisive in the performing arts. Since the perception of the gestures will shift between focusing on their phenomenality and their semioticity, depending on the aesthetic of the performance and the subjective preferences of the spectators at different times, each spectator will have perceived different gestures.

2. The history of the art of acting

Most theories of theater, be they European or Asian, focus on the particular impact acting is meant to have on spectators. Aristotle (384–322 B.C.E.) in his Poetics determines that the aim of performing a tragedy is to arouse ἔλεος and φόβος, pity and fear, in the spectator, which should result in a cleansing of these affects through catharsis. Aristotle is less concerned with the art of acting or, even more specifically, with particular gestures that would promote the intended effect. In contrast, the Indian Natyasastra by the sage Bharata is much more detailed in describing what kinds of gestures are best qualified to express feelings (affects, sentiments, emotions) most adequately and, at the same time, to arouse them in the spectators. Gestures are, on the one hand, conceived as signs of this very feeling and, on the other, as a potentially transformative force. This force can affect the actor as well as the spectator (see 2.2). In medieval Europe the actor appeared as an uncanny person because, by performing certain gestures, he seemed to be able to become someone else. This is why Gerhoh von Reichersberg (1093–1169) did not allow clerics to act in the mystery plays, since he was afraid that those playing the devil or the Antichrist would be transformed into the latter's servants by performing the corresponding gestures. In later periods, the transformative power potentially exerted by the actors' gestures over the spectators is described as contagious (see 1.3).

2.1. Theories of acting in Europe since the seventeenth century

2.1.1. Theories of the seventeenth century

The theories of acting in the seventeenth century proceeded from the assumption that it should arouse strong emotions in the spectators. Emotional responses were stimulated by gestures representing an emotion on stage. In Dissertatio de Actione Scenica, which appeared in Munich in 1727, the Jesuit priest Franciscus Lang summarized the rules which up to that point had generally been upheld in order to guarantee the most effective representation of the canonical affects:

(i) Admiration: Both hands stretched out above the chest with palms towards the audience.
(ii) Shame: Face turned away over the left shoulder and hands calmly joined behind the back …
(iii) Entreaty: Both hands raised with palms turned towards the listener again and again.
(iv) Weeping and Melancholy: Both hands joined in the middle of the chest, either high on the chest or lower about the belt. Also accompanied by extending the right hand gently and motioning towards the chest …
(v) Reproach: Three fingers folded and forefinger extended …
(vi) Imploring: Both hands extended towards the addressee as if about to embrace him …
(vii) Repentance: Pressing hands to the chest.
(viii) Fear: Right hand reaching towards the chest with four digits visible while the rest of the body is bent, relaxed and bowed. (Engle 1968: 107)

Performing such gestures, the actor had to take care not to give up the crux scenica, i.e., an angle of 90 degrees formed by his feet. This stance was perceived as the representation of a firm ego, which may be attacked but never overwhelmed by strong emotions, as in the case of a Christian martyr or the ideal courtier. A breach of these rules, by contrast, was conceived as an appropriate sign that the character represented was so weak that he succumbed to the emotions without resistance. In this case, the actor might run across the stage, beat his head against a wall, raise his arms well above his eyes, fists clenched, rolling his eyes, ranting and raving. Obeying these rules demonstrated the character's strength, while violating them revealed his weakness (Fischer-Lichte 1990). This kind of acting was supposed to transform the spectators into viri perculsi (deeply moved beings). While the stories told in the performance and the gestures presented by the exemplary protagonist demonstrated over and over again that the loss of self-control and the succumbing to passions (rather than stoically and heroically resisting them) resulted in complete disaster, the actor's gestures representing this were meant to excite strong emotions in the spectators. Indeed, the spectators lost self-control to the point of screaming, crying, lamenting, moving in their seats, and revealing all signs of being deeply moved. This is a performative contradiction: the semiotic dimension of the gestures taught the spectators a lesson which their performative dimension dismissed and counteracted.

2.1.2. Theories of the eighteenth century

The formation of middle-class society in the eighteenth century went hand in hand with the formulation of a new concept of art in general and of theater in particular. The new ideal of life and art was "naturalness". Acting should imitate nature and so create the illusion of reality. Its development was closely related to the contemporary discussion of an original language. Since Condillac had stated that the first human language had been a langage d'action consisting of gestures, it was a common and widespread notion that gestural language had been a universal human language. Most theoreticians of the period agreed that acting should be an imitation of this language. Georg Christoph Lichtenberg argued that actors should take as a model the "involuntary language of gesture, which passion in all gradations uses throughout the world. Man learns to understand it completely, usually before he is twenty-five. He is taught to speak it by nature and this so emphatically that it has become an art to make faults" (Lichtenberg 1972: 278). Nature's language had to be transferred to the stage in order to provide actors with the desired patterns of natural behaviour. The question arose of how this could be accomplished. While Rémond de Sainte-Albine suggested in Le Comédien (1747) that the actor himself had to sense the emotion the dramatic character was supposed to feel in order to bring forward, quite automatically, the appropriate gestural signs for this emotion, Lessing and Diderot held the opinion that the actor's empathy with the character could not be taken as a method for creating the natural gestural signs of emotion on stage. For "on stage (…) we want to see sentiments and passions expressed not just in a partial manner, not just in the imperfect way in which an individual would express himself in the same circumstances. We want rather to see them expressed as perfectly as possible, leaving no room for further improvement" (Lessing 1883–1890: 158–159). Nor could the observation of people's gestures in everyday life serve as a method, because education spoils man by teaching him either to hide or to exaggerate his true emotions. Two possible solutions to the problem were offered. The actor was to search for "natural gestures" where their original expressiveness was still preserved – with "savages", children and peasants. The other possibility was to reconstruct the natural language of emotions with reference to the "Law of Analogy" formulated by the physiologists of the time: anything that occurred in the soul or the mind had its analogy in the body. Taking recourse to this "law", the German philosopher and later director of the Berlin Court Theater Johann Jakob Engel, in his Mimik (1785/86), attempted an exhaustive list and detailed description of all possible gestural signs which might represent an emotion. For example, he described anger as "the desire to remove, to destroy an ill", a desire which is "one with the desire to punish and take revenge":

All Nature's energies stream outwards in order to transform the joy of what is Evil into Fear by the terrifying sight of it, into Pain by its destructive effect and, by contrast, to turn our bitter Annoyance into a pleasant feeling of Strength, the Terror we instil in others.
(Engel [1804] 1971, volume 7: 236–237)

The corresponding physical expression followed analogously from this state:

Anger equips … all the external limbs with strength; pre-eminently arming those who are destined to destroy. If the external parts, overfilled with blood and juices, brim over and tremble, and the bloodshot, rolling eyes shoot glances like fiery daggers, then a certain indignation, a certain disquiet is also expressed by the hands and teeth: the former are clenched convulsively, the latter are bared and gnashed … all movements are jerky and of extreme violence; the gait is heavy, forced, shattering. (Engel 1971, volume 7: 238)

Each individual physical change was thus seen to have its cause in a certain emotion and, therefore, pointed back to that cause. Taken together, all these changes formed the gestural sign for anger and so described the expression of the respective emotion perfectly. They were neither a spontaneous expression of the feeling nor an arbitrary, conventional sign thereof, but an adequate representation of the "natural sign" of this emotion. This provides the necessary prerequisite for spectators to feel a certain emotion. Perceiving the modifications in the actor's body, displayed as the representation of an emotion, causes a physical transformation of the perceiving subjects by infecting them with the emotion represented. According to Engel, this happens only if the aesthetic illusion is successful. He therefore chides actors – and, in particular, actresses – for drawing the spectator's attention away from the dramatic character s/he is portraying and towards their own phenomenal body. In the theater, spectators were to exclusively perceive and empathize with the dramatic characters. If their attention was diverted to the actor's phenomenal body, this would "invariably destroy the illusion" (Engel 1971, volume 7: 58). Spectators would be forced to leave the fictive world of the play and enter the world of real physicality. In this case, infection would be impossible. The "contagiousness of another's gesture" (Engel 1971, volume 7: 100) works only when the spectator is immersed in the illusion created by the acting. The gestures on stage served the purpose of developing the spectators' ability to feel empathy. They fulfilled an important function regarding the articulation of the educated middle classes' values. As Lessing put it:

The man of empathy is the most perfect man; among all social virtues, among all kinds of generosity, he is the most outstanding. A person who can make us feel such empathy, therefore, makes us more perfect and more virtuous. (Letter to Nicolai in November 1756; Lessing 1970–1979, volume 4: 163)

Naturalistic theater further developed this kind of acting. At the beginning of the twentieth century, Constantin S. Stanislavski, taking recourse to the psychology of his times, re-theorized it while keeping the two basic assumptions of eighteenth-century theories: (1) the actor's gestures are a perfect "natural" sign for the emotion of the dramatic character, and (2) this is the conditio sine qua non for arousing emotions, identification and empathy in the spectator. This paradigm survives to this day in what we call realistic-psychological acting (Fischer-Lichte 1990).

2.1.3. Theories of the twentieth century

At the beginning of the twentieth century, the representatives of avant-garde theater all over Europe negated the principles underlying the bourgeois illusionistic stage. Although one line of modernist theater, from Stanislavski to Peter Hall, continued to carry forward this realist tradition (even if somewhat ironically), much of modern theater turned away from it. The actor's body was no longer conceived of as a text composed of natural signs for emotions, but as the raw material for sign processing with a wider field of reference than the character and his emotions. Formulating these tendencies in his essay The Actor of the Future (1922), Vsevolod E. Meyerhold wrote: "The actor must train his material [his body] so that it is capable of executing instantaneously those tasks which are dictated externally [by the actor or the director]" (Meyerhold 1969: 198). Although the avant-gardists often differed in their particular aims and styles, sometimes considerably so, they all understood the actor's body to be the raw material that could be reshaped at will according to artistic intention. Proceeding from this assumption – which Craig vehemently opposed (see 1.1) – they elaborated new body techniques, i.e., new styles of acting such as Meyerhold's biomechanics, the visual presentations of the Bauhaus, Brecht's alienation effect or Artaud's theatre of cruelty. When formulating and elaborating his theory of biomechanics, Meyerhold referred to working processes and defined the actor as the equivalent of the engineer:

In art our constant concern is the organization of raw material … The art of the actor consists in organizing his material; that is, in his capacity to utilize correctly his body's means of expression. The actor embodies in himself both the organizer and that which is organized (i.e. the artist and his material). The formula for acting may be expressed as follows: N = A1 + A2 (where N = actor; A1 = the artist who conceives the idea and issues the instructions necessary for its execution; A2 = the executant who executes the conception of A1) (Meyerhold 1969: 198)

Meyerhold – like most of the avant-gardists – saw the human body as an endlessly perfectible machine, optimized through clever calculations by its engineer. Thus, any susceptibility to malfunction was significantly reduced, guaranteeing a seamless progression. The relationship between the materiality and the semioticity of the gestures was reversed. While the theories of the eighteenth century proclaimed that only the perfect representation of an emotion is able to excite a corresponding feeling in the spectator, Meyerhold developed his theory as an explicit antithesis to them. He assumed that the actor's malleable body itself had an immediate effect on the body of the spectator. The actor's gestures served as a stimulus to induce excitement in the spectators. The various exercises of biomechanics focused on and displayed the body's kinaesthetic potential and drew attention to its flexibility – its "innate capacity for reflex excitability", which "grips the spectator" (Meyerhold 1974: 199; italics in the original), inducing a state of excitability. This is the prerequisite for processes of meaning generation. The emphatic accentuation of the actor-body's materiality creates the possibility for the spectators to draw entirely unpredictable meanings from what they perceive; each thus becomes the "creator of a new meaning" (Meyerhold 1974: 2; italics in the original). The actor brings forth his corporeality with the potential to affect the audience directly and, at the same time, allows for the generation of new meaning. It is the focus on the phenomenal body which allows the semiotic body to emerge. Theater and performance art since the 1960s have been experimenting with and developing the use of the body, frequently referring to and drawing on the historical avant-garde's emphasis on the body's materiality.
The artists since the 1960s differ from the avant-garde, however, insofar as they do not take the body for granted as an entirely malleable and controllable material but consistently acknowledge the doubling of "being a body" and "having a body". Jerzy Grotowski fundamentally redefined the relationship between the actor and his role. The body can no longer serve as a sign for a dramatic character. Rather, the actor "must learn to use his role as if it were a surgeon's scalpel, to dissect himself" (Grotowski 1968: 37). For Grotowski, "having a body" cannot be separated from "being a body". The body does not represent a tool – it is neither a means of expression nor the material for the creation of signs. Instead, its "material" is "burned" and converted into energy through acting. The actors do not control their body – neither in Engel's nor in Meyerhold's sense – but rather transform it into an actor itself: the body acts as embodied mind, and each gesture testifies to that. Grotowski's theory demands a very particular, prolonged training of the actor, which, however,

avoids […] teaching him something; we attempt to eliminate his organism's resistance to this psychic process. The result is freedom from the time-lapse between inner impulse and outer reaction in such a way that the impulse is already an outer reaction. Impulse and action are concurrent: the body vanishes, burns, and the spectator sees only a series of visible impulses. Ours then is a via negativa – not a collection of skills but an eradication of blocks. (Grotowski 1968: 16)



In a sense, Grotowski's theory marks one end of the spectrum of today's possible methods of acting. The other end is marked by "non-acting" (Kirby 1987) – not only in performance art but also in theater. In many performances of the last ten to fifteen years, people who obviously did not receive any training in acting at all are put on stage. In many of Rimini Protokoll's productions, for example, "experts of reality" play their own part. At other times, old, fragile and disabled people enter the stage alongside homeless people or prisoners. Although untrained, they have a strong impact on the audience. In such cases, the voiceless and marginalized are given a voice and granted a public appearance. Nonetheless, they are exposed to the gaze of the spectators. Their gestures on stage, be they demanded by a director or their "own" gestures from everyday life, are perceived within the framework of theater. What impact they may have on the spectator and what meanings are attributed to them remains underexplored and undertheorized.

2.2. Theories of acting in Asia

All over the world, different theater and performance traditions encompass a variety of acting styles referencing a particular repertoire of gestures depending on the performance genre. In Asia, as in Europe, we have written sources documenting a long-standing tradition of theorizing acting. In India, the Natyasastra was written in Sanskrit somewhere between the second century B.C. and the second century A.D. It is ascribed to the sage Bharata but was likely compiled by many authors. It remains influential to this day, not only in the live performing arts but also in movies and television plays, and even in sculpture and painting – that is, in all the arts. In Japan, Zeami (1363–1443 A.D.) composed a number of writings on the art of Noh, which were secretly handed down within the Kanze and Komparu families, who guarded the treasure until modern times. In 1909, sixteen of these writings were published. In China, a number of treatises were written from the sixteenth century onwards on the oldest opera form, Kunqu opera.

All these cases describe highly codified forms of performance and are comparable in this respect to European classical ballet or the theater of the seventeenth century. Regarding the gestures they suggest or prescribe, they obviously differ not only from those dealt with in European theories of acting but also from each other. However, all treatises, be they European or Asian, put forward that the gestures performed by actors have to affect the spectators, mainly by exciting their emotions. Particularly telling here is the Natyasastra. Chapters six and seven of its 36 chapters, written in verse, elaborate the teaching of bhava and rasa. Whereas bhava designates a state of mind, being, disposition, or emotion, referring to the actor’s embodiment of a character’s states of being/emotion, rasa points to the specific aesthetic delight which the representation of bhava brings to the spectator.
The nine basic states listed correspond to nine rasas:

(i) the erotic, love or pleasure
(ii) the comic, mirthful or derisive
(iii) pathos, sadness
(iv) fury, anger, wrath
(v) the heroic, vigorous
(vi) fear, the terrible
(vii) the repulsive, disgust
(viii) the wondrous, marvellous
(ix) peace, atonement (Zarrilli 2000: 78)

Each perfect representation of a bhava will excite the corresponding rasa in the spectator. The representation itself is strictly codified. It is performed by a particular facial expression and posture and, in the case of the erotic, pathos, fury, and the heroic, it is also accompanied by special hand gestures. The aim of acting is to excite a specific rasa in the spectators that will evoke the corresponding bhava in them. Quite differently from Aristotle’s Poetics, which highlights the plot and dramatic action, the focus here is on the bhava/rasa relationship. However, both are similar in that they advocate, indeed demand, a transformative aesthetics.

The 24 root gestures, mudras, not only serve as part of the representation of the bhavas. They can also establish a relationship between the characters when signifying a command or the refusal of a request. Moreover, they also signify personal relationships, such as “brother”, “sister”, etc., and even describe qualities of what is seen, such as “mountain”, “brightness”, “black”, “red”, or “clouds”. In the latter case, as for example when signifying a mountain, lively movements through the space accompany them. Some of the hand gestures, however, are performed in a neutral, stationary position, such as “lotus”, “moon”, or “sun”. All facial expressions, hand gestures, movements, and postures have fixed forms through which they convey their particular meanings. Being able to appreciate the way these forms are executed and to understand the meanings they convey is the precondition for spectators to experience rasa and for the corresponding bhava to emerge.

Besides traditional theater, European theater forms and the realistic-psychological or melodramatic styles of acting were introduced and developed by the colonizers in India as early as the 19th century, and in Japan and China in the early 20th century.
In the latter cases, this resulted in the establishment of a new theater form, spoken theater, called shingeki in Japan and huaju in China.

2.3. Mixing European and Asian acting styles

The European avant-garde movements at the beginning of the twentieth century turned to the art of acting as developed and employed in Asian theaters, e.g., in India, China, Japan, or Bali. Negating the creation of an illusion of reality as the aim of theater, they intended to lay open the conventional nature of the theatrical process and thereby to foreground and theatricalize it. Guest performances of Asian troupes in Europe, as well as travellers’ reports on the theater of these countries, were perceived and read in light of this goal. This is why Meyerhold applauded the acting style of the Japanese Kabuki as well as the appearance in it of kurogos, the stagehands dressed in black. The kurogos help actors change their costumes on stage, bring the props needed for the following scene and take them away later, cover the fallen hero with a black cloth, which allows the actor to exit the stage, or, in a rather dark scene, squat down at the hero’s feet and illuminate his face with a candle attached to the end of a long stick (Meyerhold 1969: 99–100). After a guest performance of Tokujiro Tsutsui and his troupe in Berlin (1930), Brecht praised their acting techniques as transportable devices capable of accomplishing the
new tasks of the European theater and, in fact, employed some of them in his own production of Man is Man. In these and similar cases, European theater practitioners were not interested in the purposes the corresponding acting devices fulfilled in their original theater forms but in how they could serve their own avant-garde project of re-theatricalizing European theater.

The introduction of European spoken theater in Japan and China, and the European avant-gardists’ use of artistic means lifted from traditional Japanese, Chinese, Indian, or Balinese theater, triggered a development at the beginning of the twentieth century that continues until today, albeit in very different modes and for very different ends. In Europe since the 1970s, directors such as Peter Brook, Ariane Mnouchkine, or Eugenio Barba have created forms of so-called intercultural theater by using the acting techniques of different Asian performance traditions, mostly transforming them to a great extent. At the same time in Japan, Tadashi Suzuki and Yukio Ninagawa created new forms of theater by drawing on the acting styles of Noh and Kabuki as well as on gestural patterns from Shintoistic rituals. They performed Western plays – preferably Greek tragedies or plays by Shakespeare or Chekhov – no longer in shingeki but in their new style. After the Cultural Revolution, and in particular since the 1980s, Western dramas have been performed in China in the style of traditional operas, such as the Kunqu, Szechuan, Peking, Hebei Bangzi, or Yue opera. This resulted not only in a rewriting of the texts but also in changes to the acting styles and, with them, the transformation of traditional gestures.
The way the above-mentioned gestures of traditional Asian theater forms were used by the European directors emptied them of their “set” forms and of the meaning and transformative potential that went with them, particularly since European, or more generally Western, audiences are not familiar with their origin and what it entails. They could therefore be applied as “new” devices, even while exhibiting their “foreignness” as Orientalizing or exoticizing devices. In contrast, the Japanese and Chinese directors, drawing on their own tradition and transforming particular acting devices constitutive of it, could regard and declare this process as building bridges between the past and the present of their own theater and performance cultures. In all cases of so-called intercultural theater, whether in Europe, Asia, or elsewhere, the use of acting devices from other cultures entails not only an aesthetic but also, if not foremost, an ethical and a political dimension that cannot be ignored. This also holds true for the guest tours criss-crossing the world as well as for the flourishing workshop culture (see 1.2). To label these phenomena an element of intercultural communication seems highly misleading: they require a new kind of research (Fischer-Lichte, Jain, and Jost 2014).

3. References

Craig, Edward Gordon 1908. The actor and the Über-Marionette. The Mask 1(2): 3–15.
Csordas, Thomas 1994. Embodiment and Experience: The Existential Ground of Culture and Self. Cambridge: Cambridge University Press.
Engel, Johann Jakob 1971. Mimik [1785/86]. Schriften, Volumes 7 and 8. Frankfurt am Main: Athenäum. First published [1804].
Fischer-Lichte, Erika 1990. The Semiotics of Theatre. Bloomington, IN: Indiana University Press.
Fischer-Lichte, Erika 2008. The Transformative Power of Performance: A New Aesthetics. London/New York: Routledge.
Fischer-Lichte, Erika, Saskya Jain and Torsten Jost (eds.) 2014. Beyond Postcolonialism: The Politics of Interweaving Performance Cultures. London/New York: Routledge.
Giannachi, Gabriella, Nick Kaye and Michael Shanks (eds.) 2012. Archaeologies of Presence: Art, Performance and the Persistence of Being. London/New York: Routledge.
Grotowski, Jerzy 1968. Towards a Poor Theatre. New York: Simon and Schuster.
Kirby, Michael 1987. Acting and not-acting. In: Michael Kirby (ed.), A Formalist Theatre, 3–20. Philadelphia: University of Pennsylvania Press.
Lang, Franciscus 1968. Dissertatio de actione scenica, Munich 1727. In: Ronald Gene Engle, Franz Lang and the Jesuit Stage. Unpublished thesis (University of Illinois). Ann Arbor: University Microfilms.
Lessing, Gotthold Ephraim 1970–1979. Brief an Nicolai. In: Herbert G. Goepfert (ed.), Lessings Werke, Volume 4, 159–165. München: Carl Hanser.
Lessing, Gotthold Ephraim 1883–1890. Lessings Werke. Edited by Robert Boxberger. Berlin/Stuttgart: Spemann.
Lichtenberg, Georg Christoph 1972. Schriften und Briefe. Edited by Wolfgang Promies. München: Carl Hanser.
Mauss, Marcel 1935. Les techniques corporelles. Journal de Psychologie Normale et Pathologique 32(3–4): 271–293.
Meyerhold, Vsevolod E. 1969. Meyerhold on Theatre. Edited by Edward Braun. New York: Hill and Wang.
Meyerhold, Vsevolod E. 1974. Theaterarbeit 1917–1930. Edited by Rosemarie Tietze. München: Carl Hanser.
Plessner, Helmuth 1970. Laughing and Crying: A Study of the Limits of Human Behaviour. Translated by James Spencer Churchill and Marjorie Grene. Evanston: Northwestern University Press.
Rousseau, Jean-Jacques 2004. Letter to M. d’Alembert. In: Jean-Jacques Rousseau, Letter to d’Alembert and Writings for the Theatre (The Collected Writings of Rousseau). Translated by Allan Bloom. Sudbury: Dartmouth.
Sainte Albine, Rémond 1747. Le Comédien. Paris: Desaint and Saillant et Vincent fils.
Thirouin, L. (ed.) 1998. Pierre Nicole, Traité de la Comédie et autres Pièces d’un Procès du Théâtre. Paris: Champion.
Zarrilli, Phillip B. 2000. Kathakali Dance-Drama: Where Gods and Demons Come to Play. London/New York: Routledge.

Erika Fischer-Lichte, Berlin (Germany)


108. Contemporary classification systems

1. Introduction
2. Gesture as culturally determined behavior – Efron’s classification of gesture in cross-cultural perspective
3. Gesture as part of the bodily behavior – Ekman and Friesen’s classification of gesture in socio-psychological and cross-cultural perspective
4. Gesture as a window to thought – McNeill’s classification of gesture in psycholinguistic perspective
5. Gesture as sign – Fricke’s classification of gesture in semiotic perspective
6. Gesture as means of addressing an interlocutor – Bavelas’ classification of gesture in dialogic perspective
7. Problems of gesture classification
8. Conclusion
9. References

Abstract

Gesture is studied in various disciplines, among them psychoanalysis, social psychology, psycholinguistics, semiotics, linguistics, and interaction analysis. Especially for quantitative approaches, a comprehensive classification system is indispensable in order to define the object of investigation as well as to avoid misinterpretations of the data due to a missing differentiation between gesture types. Regardless of whether a quantitative or qualitative approach is chosen, a unified classification of gesture is desirable in order to enable researchers to draw on other research results and to avoid unnecessary controversies due to discrepant classification schemes. Despite the fact that – due to the very nature of gesture – it is nearly impossible to set up mutually exclusive categories for gesture, there exist a variety of classification systems, which will be presented here. This paper will outline three aspects: First, each classification is located in its research context; second, the criteria underlying the classification are explicated; and third, the classification is critically evaluated.

1. Introduction

Gesture is studied in various disciplines, among them psychoanalysis, social psychology, psycholinguistics, semiotics, linguistics, and interaction analysis. One of the first attempts to classify gesture was undertaken by Ekman and Friesen (1969). Their classification, which encompasses any bodily behavior, has been the most influential for subsequent gesture studies. Classifications since then have focused on gesture only. Especially for quantitative approaches, a comprehensive classification system is indispensable in order to define the object of investigation (Müller 1998: 101) as well as to avoid misinterpretations of the data due to a missing differentiation between gesture types (Chieffi and Ricci 2005; Mizuguchi 2006). Regardless of whether a quantitative or qualitative approach is chosen, a unified classification of gesture is desirable in order to enable researchers to draw on other research results and to avoid unnecessary controversies due to discrepant classification schemes (such as, e.g., the dispute between McNeill and Feyereisen; Feyereisen 1987; McNeill 1985, 1987).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1453–1461


Whereas historical classifications of gestures were highly descriptive, “[…] as we approach the modern era, the schemes […] seem to become more explicitly categorical. This is no doubt a reflection of the desire on the part of modern investigators to apply quantitative statistical methods in their research” (Kendon 2004: 103, original emphasis). Despite the fact that – due to the very nature of gesture – it is nearly impossible to set up mutually exclusive categories for gesture (Kendon 2004: 107), there exist a variety of classification systems, which will be presented here. This paper will outline three aspects: First, each classification is located in its research context; second, the criteria underlying the classification are explicated; and third, the classification is critically evaluated. The following survey (parts 2 to 4) is mainly based on the comprehensive account in Müller (1998: 91–103), which has been complemented and updated (especially in parts 5 and 6).

2. Gesture as culturally determined behavior – Efron’s classification of gesture in cross-cultural perspective

Modern gesture study began with David Efron’s famous Gesture and Environment, published in 1941, in which he argues against the then predominant Nazi view of racially inherited differences in nonverbal behavior. By comparing gestures of first- and second-generation immigrants to the US with Italian and Jewish backgrounds, respectively, he proved gesture to be a cultural phenomenon. Efron was not interested in creating a systematic overview or categorization of gesture; therefore, his classification remains rather implicit. Yet in 1972, Ekman, in the preface to the re-edition of Efron’s book, explicated the underlying criteria and gesture categories, which have since become the point of reference for most contemporary gesture studies and most attempts at classification.

In his pioneering work, Efron focused on movements of the hands and arms, only partially taking into account movements of the head as well. He distinguished and measured spatio-temporal, interlocutional, and semiotic aspects of hand-head movements. This latter aspect, the referential meaning of the gesture, is the one that ultimately guided Efron’s categorization. Efron ([1941] 1972: 96) distinguished two broad categories of gesture, depending on whether the gesture has meaning independent of or only in conjunction with speech: logical-discursive gestures, including baton-like and ideographic gestures, do not refer to an object or thought but to the course of the ideational process. Objective gestures, on the other hand, have meaning independent of speech, to which they may or may not be an adjunct. These encompass deictic gestures, physiographic (iconographic as well as kinetographic) gestures, and emblematic or symbolic gestures, which represent either a visual or a logical object by means of a pictorial or nonpictorial form that has no morphological relationship to the thing represented.
With the establishment of emblems as a third category within the objective gestures, Efron introduces the degree of conventionalization as a second guiding criterion along with the gesture’s referential meaning. Given Efron’s argumentative goal of deconstructing racial theories of human conduct, his cross-cultural comparison is inductively based on – and thus restricted to – the gestural repertoire of two ethnic groups. By basing his classification on the distinction between reference to the discourse vs. reference to the conversation’s topic, he covers only a small range of the representational potential of gestures (Müller 1998: 93). In particular, he does not systematically distinguish between the gesture’s para- vs. meta-discursive relation to
speech (Fricke 2007: 178–179). Moreover, as Efron based his analysis on silent motion pictures and on observation of natural conversations, he could only sketch the topic of the observed conversations. A more detailed analysis of the interrelationship between the gestures’ and the speech’s meaning was not possible (Müller 1998: 93).

3. Gesture as part of the bodily behavior – Ekman and Friesen’s classification of gesture in socio-psychological and cross-cultural perspective

In 1969, as a result of empirical studies on cross-cultural differences and nonverbal leakage in deceptive settings, Ekman and Friesen stated a need for classifying nonverbal behavior (Ekman and Friesen 1969: 49) and published one of the most influential classification schemes. They explicitly drew on Efron’s classification, modifying and refining it on the basis of socio-psychological and cross-cultural studies on bodily behavior. The authors based their classification on three fundamental considerations: “how that behavior became part of the person’s repertoire, the circumstances of its use, and the rules which explain how the behavior contains or conveys information. We will call these three considerations origin, usage, and coding” (Ekman and Friesen 1969: 49, original emphasis).

The authors categorize nonverbal behavior into five categories: emblems (gestures with standardized form and conventionalized meaning), illustrators (speech-accompanying gestures), regulators (behaviors that serve to regulate the back and forth of interaction), affect displays (which consist mainly of facial expressions), and adaptors (touching of self and others as well as manipulations of objects, which are supposed to originate in some instrumental task). Of these behaviors, especially adaptors, illustrators, and emblems, as well as some of the regulators, are hand movements. Nevertheless, adaptors typically are not used for communicative purposes. Here, this classification system will be presented with respect to gesture only (for a full representation and discussion, see Schönherr this volume). Emblems are conventionalized in form; they have a direct verbal translation, usually consisting of a word or phrase, that is known to all members of a social or cultural group (Ekman and Friesen 1969: 63).
Ekman and Friesen subsume the two other gesture types established by Efron under the heading of illustrators, that is, gestures that are tied to speech and serve to “illustrate” what is being said. On the basis of their content, the authors further distinguish between the following types of illustrators:

batons, movements, which time out, accent or emphasize a particular word or phrase, ‘beat the tempo of mental locomotion’; ideographs, movements which sketch a path or direction of thought; deictic movements, pointing to a present object; spatial movements, depicting a spatial relationship; and kinetographs, movements which depict a bodily action. The sixth type of illustrator, not described by Efron, is pictographs, which draw a picture of their referents. (Ekman and Friesen 1969: 68, original emphasis)

Regulators “are acts which maintain and regulate the back-and-forth nature of speaking and listening between two or more interactants” (Ekman and Friesen 1969: 82). While any gesture may serve this regulative function, Ekman and Friesen reserve the term regulator for only those gestures that do not fit into any other category, thus making it a residual category.


This classification scheme has become the most widespread one in the study of bodily communication. The authors purposely draw on the terms coined by Efron (1972) – and thus made this early study widely known. Yet they define the categories differently (see Ekman and Friesen 1969: 63) and dismiss some of Efron’s basic insights. As Ekman and Friesen point out, the categories they set up are not mutually exclusive (Ekman and Friesen 1969: 68). Without making it explicit, “[t]heir divisions are based partly on motivation (‘adaptors’), partly on function (‘regulators’), partly on the type of information conveyed (‘affect displays’), and partly on relationship with speech and social conventions (‘emblems’ and ‘illustrators’)” (Kendon 2004: 102). For each category there seems to be one defining criterion; all the other criteria are merely descriptive characteristics. The criteria chosen by Ekman and Friesen (1969) clearly reveal the cross-cultural and socio-psychological perspectives of the authors. Meanwhile, linguistic questions of the meaning potential of gestures are treated rather superficially. Ekman and Friesen differentiate neither between semantic, social, and psychological “meaning” nor between reference to signs vs. reference to nonsigns. Furthermore, they blur Efron’s distinction between discourse and object reference by subsuming both types of gestures under the category of illustrators (Fricke 2007: 165–166; Müller 1998: 95–97).

4. Gesture as a window to thought – McNeill’s classification of gesture in psycholinguistic perspective

One of the most influential classifications is that of speech-accompanying gesture developed by McNeill. As a psycholinguist, he investigates how human thought is disclosed in gestures. Consequently, he analyzes how gestures represent meaning. McNeill bases his classification on diverse yet interrelated criteria: a gesture’s form, its meaning, and its communicative function. Iconic gestures are semantically and pragmatically coexpressive with speech, the gesture being not just redundant but complementary to speech (McNeill 1992: 12–13). Metaphorics “are like iconic gestures in that they are pictorial, but the pictorial content presents an abstract idea rather than a concrete object or event” (McNeill 1992: 14). “The semiotic value of the beat lies in the fact that it indexes the word or phrase it accompanies as being significant, not for its own semantic content, but for its discourse-pragmatic content” (McNeill 1992: 15, emphasis mine). These semantic differences result from differences in form: Whereas iconic and metaphoric gestures have three phases – preparation, stroke, and retraction – beats consist of two-phased (e.g., in/out or up/down) movements. Deictic gestures (or points) resemble iconic and metaphoric gestures in that they refer either to concrete objects and events in the world or, as abstract pointing gestures, “imply a metaphorical picture of their own in which abstract ideas have a physical locus” (McNeill 1992: 18). As a fifth category, cohesive gestures serve “to tie together thematically related but temporally separated parts of the discourse” (McNeill 1992: 16). They do so by repeating a formerly produced gesture that may be iconic, metaphoric, or deictic, or even a beat.
Without making this explicit, McNeill takes up the distinction between referential and discursive gestures introduced by Efron (1972) (see McNeill 1985: 350): Iconic and metaphoric gestures are said to have a referential function, while beats and cohesives have a discursive function. Deictics may serve either function. This more or less parallels another difference that is fundamental to McNeill’s argument: Whereas iconics and metaphorics are imagistic, beats and deictics are not. Cohesives serve a discursive function, no matter whether they take up a formerly produced imagistic or non-imagistic gesture. The specific function of a gesture is due to its semantic content, which in turn results from formal characteristics. Based on an analysis of the function of gestures in narratives, McNeill (1992: 183–217) later associates these gesture types with narratological structure: Iconics appear at the narrative level, metaphorics at the metanarrative level. Pointings appear at all levels, and beats signal shifts between levels.

The main contribution of McNeill’s classification is its differentiation of gestures’ meaning potential. While he is the first to identify the potential of gesture to refer metaphorically to abstract ideas and concepts, and the first to focus on the multi-functionality of pointing gestures (see Müller 1998: 101), his notion of abstract pointing remains weak due to his misconception of metaphoric gesture (see Fricke 2007: 172 and 180–181). His classification is based on gestures produced in retellings of a comic strip, which limits the types of gesture to be found in the data. Furthermore, since his database consists of rather monologic narratives, and since he is interested in what gestures reveal about thought, he disregards those gestures primarily serving an interactive function.

5. Gesture as sign – Fricke’s classification of gesture in semiotic perspective

In her study on origo, gesture, and space, Fricke (2007) revises the aforementioned classifications (along with those of Freedman 1972; Müller 1998; Wundt [1900] 1904, and others) from a semiotic perspective. As the main weakness of current classification systems, she diagnoses the missing or unsystematic differentiation of meaning and reference, of reference to signs vs. reference to nonsigns, and of gestures’ para- vs. meta-discursive relation to speech. On the basis of Peirce’s semiotic theory, she proposes a restructured classification of speech-accompanying gesture (Fricke 2007; see Fig. 108.1).

Fig. 108.1: Classification of gesture according to Fricke (2007: 222)

Fricke’s comprehensive account of the meaning potential of speech-accompanying gestures surmounts most weaknesses of previous classifications. All gesture types defined in previous classifications find their place. Even though Fricke does not include emblems in her classification, they are easily integrated. Yet, due to her genuinely semiotic perspective, Fricke neglects the interaction-regulating function of gesture.

6. Gesture as means of addressing an interlocutor – Bavelas’ classification of gesture in dialogic perspective

Bavelas and her colleagues take up the traditional distinction between emblems and conversational gestures (which correspond to Ekman and Friesen’s illustrators). Yet they propose a new division of the illustrator class on the basis of reference to some aspect of the semantic content vs. reference to the addressee: They distinguish between topic-related and interactive gestures (Bavelas et al. 1992: 473). Interactive gestures are rarely produced in the absence of an interlocutor, and they are deictically directed towards him (see Bavelas et al. 1992: 473). The group of interactive gestures mainly consists of, but is not limited to, those gestures called beats/batons. In contrast to the traditional view of beats/batons as having discursive functions, Bavelas interprets them as being directed to an interlocutor. She proposes a functional subdivision of the interactive gestures into delivery gestures, which refer to the delivery of information by speaker to addressee; seeking gestures, which aim to elicit a specific response from the addressee; citing gestures, which refer to a previous contribution by the addressee; and turn gestures, which refer to issues around the speaking turn (Bavelas 1994: 213; Bavelas et al. 1995: 395–397).

Bavelas argues the case for a functional approach instead of taxonomic approaches in gesture studies (Bavelas 1994: 202). She emphasizes that “not only can gestures serve many different functions, but a gesture can have more than one function at once […]” (Bavelas 1994: 204). Consequently, her distinction of gesture types is exclusively based on communicative functions. Yet her terminology is quite misleading, as it suggests a functional classification of gestures rather than a classification of gestures’ functions.
Bavelas’ main contribution consists of introducing the dialogic perspective and consequently differentiating the class of illustrators into gestures serving several interactive functions, while the gestures’ referential potential remains rather opaque in this perspective.

7. Problems of gesture classification

Despite the many differences in the range of movements included in the classification systems and in the details of subcategories, most researchers agree on some fundamental assumptions: Firstly, gesture is to be seen as part of communication in coordination with speech; secondly, there are gestures that depict some aspect of the topic (illustrators, topic-related gestures), gestures that point to places in the real world as well as to abstract ideas (deictic
or pointing gestures), gestures that serve rather to mark the phrasal or logical structure of the discourse (beats, batons), and, last but not least, gestures that serve to maintain and organize the interaction itself (regulators, interactive gestures) (see Kendon 2004: 103). Yet, until now, there exists no comprehensive classification for gesture, even less so for all communicative bodily behavior. Each classification system reflects the researcher’s discipline, which in turn influences the range of gestures/gesture types as well as the (set of) functions to be investigated, be they psychological, social, interactive, discursive, or referential. Thus, classification systems display researchers’ conceptualizations of gesture as well as their research interests.

In any given study, criteria such as a gesture’s communicative functions (referential, discursive, or interactive), its meaning potential due to its semiotic properties, its autonomy from or relation to speech, and its degree of conventionalization are in focus. Yet the criteria chosen for classification are rarely explicated. Moreover, some of these criteria are context-bound; e.g., a gesture’s meaning potential can only be established with respect to the concurrent speech. Still other criteria are not independent of each other. For example, most researchers consider “emblems” a gesture type in its own right. Yet their (potential) independence from speech simply results from the gestures’ standardization of form and conventionalization of meaning, which implies neither that emblems are solely used in the absence of, and as a substitute for, speech nor that they form a semiotically homogeneous class (Kendon 1992, 1995; Ricci-Bitti and Poggi 1991). Comparison between the classifications is further complicated by the fact that each relies on a different set and ranking of criteria. As a consequence, any single gesture may be categorized differently depending on the aspect deemed the most relevant.
Moreover, several gesture types are interpreted and categorized quite differently (compare, e.g., the treatments of "beats" or "batons" by Ekman and Friesen and by McNeill vs. Bavelas). Last but not least, since most classification systems are based on a range of diverse criteria and tend to name gesture types according to the feature deemed the most relevant, there is much inconsistency in terminology.

8. Conclusion

The categories of the gesture classifications presented here do not meet essential exigencies for any classification scheme in quantitative research: except for that of Fricke (2007), they are not one-dimensional but constitute a mix of varying criteria; the categories are not mutually exclusive, with no clear-cut dividing line between them; and none of the schemes is comprehensive, each focusing on some range of gestures at the expense of others. In fact, as Kendon (2004: 107) points out, a single unified classification scheme of gesture is simply impossible given the multitude of dimensions gestures can depend on. He therefore suggests refraining from attempting to classify gestures in favor of comparing them along the various dimensions. As early as 1988, Kendon published a paper in which he reconstructed the various processes through which idiosyncratic gestures become conventionalized and through which sign languages evolve. McNeill (1992: 37–40) analyzed what, in honor of this work, he calls Kendon's continuum and showed that "[t]he changes that take place along Kendon's continuum have widespread consequences for the structure of the gestures themselves, considered both individually and in their collective relationship to each other" (McNeill 1992: 38–39). Gestures develop language-like structures in terms of segmentation, compositionality, a lexicon, a syntax, paradigmatic opposition, distinctiveness, arbitrariness, utterance-like timing, standards of form, and a community of users. McNeill (2000: 2–6) further elaborates this idea, stating that the different types of signs (gesticulation, pantomime, emblems, and sign languages) may be arranged differently on the continuum depending on the dimension focused on, such as "relationship to speech", "relationship to linguistic properties", "relationship to conventions", or "character of semiosis". Another solution to the multi-functionality of gesture has been proposed by Bavelas, who argues the case for giving up classification in favor of a functional analysis: "Taxonomic categories imply mutually exclusive classifications, whereas functions need not be exclusive. […] [T]he goal of analysis should not be to decide in which category we should put a gesture (or all gestures) but rather to discover at least some of the things a gesture is doing at its particular moment in the conversation." (Bavelas 1994: 203–204) This implies following a strictly qualitative approach. Since communicative functions, as well as the specific semantic meaning of a gesture, can only be reconstructed context-sensitively, gesture study should focus on the gesture's form, its actual meaning potential in the given verbal context, and the sequential context of the verbal-gestural utterance, including the recipient's reaction.

Acknowledgements

Many thanks to Friederike Kern and Joe Couve de Murville for their careful reading and valuable comments on an earlier version of this paper.

9. References

Bavelas, Janet Beavin 1994. Gestures as part of speech: Methodological implications. Research on Language and Social Interaction 27(3): 201–221.
Bavelas, Janet Beavin, Nicole Chovil, Linda Coates and Lori Roe 1995. Gestures specialized for dialogue. Personality and Social Psychology Bulletin 21(4): 394–405.
Bavelas, Janet Beavin, Nicole Chovil, Douglas Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15(4): 469–489.
Chieffi, Sergio and Mariateresa Ricci 2005. Gesture production and text structure. Perceptual and Motor Skills 101(2): 435–439.
Efron, David 1972. Gesture, Race and Culture. The Hague/Paris: Mouton. First published [1941].
Ekman, Paul and Wallace Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98.
Feyereisen, Pierre 1987. Gestures and speech, interactions and separations: A reply to McNeill (1985). Psychological Review 94(4): 493–498.
Fricke, Ellen 2007. Origo, Geste und Raum. Lokaldeixis im Deutschen. Berlin/New York: Mouton de Gruyter.
Kendon, Adam 1988. How gesture can become like words. In: Fernando Poyatos (ed.), Cross-Cultural Perspectives in Nonverbal Communication, 131–141. Toronto: C. J. Hogrefe.
Kendon, Adam 1992. Some recent work from Italy on quotable gestures (emblems). Journal of Linguistic Anthropology 2(1): 77–93.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
McNeill, David 1985. So you think gestures are nonverbal? Psychological Review 92(3): 350–371.


McNeill, David 1987. So you do think gestures are nonverbal: Reply to Feyereisen (1987). Psychological Review 94(4): 499–504.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
McNeill, David 2000. Introduction. In: David McNeill (ed.), Language and Gesture, 1–10. Cambridge: Cambridge University Press.
Mizuguchi, Takashi 2006. Significance of definition and classification of gesture for study of spontaneous gesture: Comment on Chieffi and Ricci's study. Perceptual and Motor Skills 103(2): 461–462.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag Arno Spitz.
Ricci-Bitti, Pio Enrico and Isabella Poggi 1991. Symbolic nonverbal behavior: Talking through gestures. In: Robert Feldman and Bernard Rimé (eds.), Fundamentals of Nonverbal Behavior, 433–457. Cambridge: Cambridge University Press.
Schönherr, Beatrix this volume. Categories and functions of posture, gaze, face, and body movements. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 1333–1341. Berlin/Boston: De Gruyter Mouton.
Wundt, Wilhelm 1904. Völkerpsychologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythos und Sitte. Volume 1: Die Sprache. Leipzig: Engelmann. First published [1900].

Ulrike Bohle, Hildesheim (Germany)

VII. Body movements – Functions, contexts, and interactions

109. Co-speech gestures: Structures and functions

1. Co-speech gestures: definitions
2. Structures
3. Functions
4. Conclusions
5. References

Abstract

Hand gestures have enjoyed increasing attention in recent decades, thanks to their fine-grained coordination with speech. Such coordination has been observed with respect either to word content or to other verbal features. The study of co-speech gestures has focused in some cases on the gestures' forms, i.e., their structure, and in other cases on the gestures' consequences, i.e., their function. In the former, authors have struggled to find a way to reduce the infinite variety of single gesture performances to a finite range of gesture categories on the basis of the gesture's movement shape, via several taxonomies used to classify hand gestures. Different taxonomies converge on some main categories and can be integrated, to some extent. In the latter, authors have striven to show associations between a gesture or gesture category and its consequences for the performer/speaker (facilitation function), or for the receiver and the social interaction (conversational functions). Within the conversational functions, the rhetorical one enriches the content, the discursive one marks the syntactic structure of the discourse, the interactive one allows conversation management, and the persuasive one convinces and influences the interlocutors. Furthermore, specific communication functions have been identified.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1461–1473

1. Co-speech gestures: definitions

Gestures are recognized as fundamental elements of communication in general, but they have a special status because they are fundamental elements of language too: linked to speech, hand gestures in particular favor a deeper understanding and a greater coherence of the discourse. Sometimes, when hand movements are not linked to speech, they can provide information on the speaker's condition and status. When talking of "gestures", we generally refer to any body movement, but this contribution focuses particularly on hand gestures, that is, movements performed by the hands (but also arms and shoulders) with a conscious or unconscious intent of communicating (Kendon 2004; Poggi 2008) and whose meaning is, at least partly, provided by their connection with linguistic content. Thus, the term "co-speech gestures" indicates the use of hand and arm movements as a means to improve the quality of communication by promoting the semantic comprehension of the speech. These gestures do not depend on a specific behavioral model, nor do they vary according to a specific observer: they seem to be central to the entire speaking process (Iverson and Goldin-Meadow 1998). Gestures that complement speech by accompanying it reflect and facilitate the cognitive process that triggers speaking (Iverson and Goldin-Meadow 1998; McNeill 1992; Rauscher, Krauss, and Chen 1996). Accordingly, speech and gesture are considered to stand in either a complementary or a redundant relation. In both cases, they are useful for improving the quality of conversation by conveying additional semantic information (Alibali, Flevares, and Goldin-Meadow 1997; Kelly and Church 1998; Kendon 1995, 2004) and by aligning the interacting people in order to create shared meaning (Clark 1996; Goldin-Meadow 2005, 2006; Goodwin 2000; Holler and Wilkin 2011; Kendon 1994; LeBaron and Streeck 2000).
On this issue, several authors currently agree that different kinds of hand gestures accomplish different functions. The following sections briefly summarize the main gesture types and their classification (section 2, "Structures") and then discuss gestures' main functions within conversational interaction (section 3, "Functions").

2. Structures

2.1. Gesture classifications: A synthesis

Studying the structure of gestures means analyzing and formalizing the main types and shapes of gestures in order to identify micro- and macro-categories, with the aim of providing a classification of them. In line with this objective, several taxonomies of hand gestures have been proposed, in ancient times as well as in contemporary science. In one of the first recent proposals, Ekman and Friesen (1969) outlined several distinctive features of hand gestures' functions and use by analyzing (i) the circumstances in which they appear, (ii) how they are linked to phylogenetic, ontogenetic, and cultural origins, (iii) how they become part of the individual repertoire, (iv) the correspondence between nonverbal acts, and (v) the meaning they convey, distinguishing between extrinsic (the act and its meaning are arbitrarily linked) and intrinsic (the act corresponds to or "depicts" its meaning) signification. On these bases, Ekman and Friesen (1969) identified five main categories of gestures: emblems or symbolic gestures, illustrators, regulatory signs, emotional expressions, and adaptors. Argyle (1975) likewise proposed a five-category classification of gestures, including: illustrators (associated with speech, improving the conversational rhythm and comprehension, as well as the actors' synchronization); conventional gestures (assuming a peculiar meaning shared within a specific culture); movements expressing the speaker's emotional states; gestures employed during rituals and celebrations (which can evolve into conventional ones); and gestures indicating the speaker's personality and character. In 1983, Kendon put the emphasis on the connection between speech and gestures and delineated a continuum ranging from the perfect integration of hand gestures and language (spontaneous gesticulation accompanying the discourse, with movements of hands and arms integrated into the content of the utterance) to the presence of hand gestures in the absence of language (pantomime and emblems). Subsequently, McNeill (1992) proposed a specific classification of hand gestures based on their location within discourse. He identified two macro-categories of gestures: propositional (linked to the ideational process) and non-propositional (gestures characterizing the discursive activity). Propositional gestures include iconic, metaphoric, and deictic or pointing gestures, whereas the non-propositional ones include beats and cohesive gestures.
Iconic gestures correspond to a body movement concretely recalling the idea expressed verbally; metaphoric gestures resemble the abstract feature linked to the content of the speech; deictic gestures serve to indicate objects present in the space around the speaker or abstract concepts generally associated with spatial information. The non-propositional gestures are not linked to the semantic content of the discourse: they include rhythmic and cohesive movements. Their use can give rhythm to the discourse and underline its consequentiality and coherence, and it can emphasize parts of the speech by highlighting relevant words, phrases, or periods. Some authors have focused on just a part of the classification, i.e., on special kinds of gestures. For example, a more complex and detailed taxonomy of micro-categories for cohesive hand gestures has been provided by Contento (1999). Moreover, Bavelas et al. (1992) analyzed a particular type of hand gesture, termed interactive, whose function is to facilitate the dialogic process as a means to include the addressees in the conversation. However, Poggi (2008) acknowledges that several classifications have been proposed using different criteria: thus, all gestures can be classified differently according to different parameters. These parameters are: semantic content (speaker's mind, identity, or the world); goal source (individual, biological, or social); level of awareness; relation to other signals (autonomous or co-verbal); cognitive construction (codified or creative gestures); and gesture-meaning relationship (motivated, i.e., natural or iconic, vs. arbitrary). Poggi (2008) focused on the iconicity of gestures and their relationship to communicative goals. An integrative proposal has been attempted by Maricchiolo, Gnisci, and Bonaiuto (2012). It is mainly based on the distinction between gestures linked versus not linked to speech, each comprising two macro-categories (cohesive and ideational versus hetero-adaptors and self-adaptors), which in turn include several specific micro-categories. This taxonomy includes several kinds of hand gestures, classified on the basis of the most important characteristics generally assigned to a class of gestures: the concrete manifestation as movement during the conversation and the function served. The main distinction is between gestures linked to speech and gestures not linked to speech. The first macro-category includes cohesive, beat, and ideational gestures: the cohesive gestures are nippers, hank, weaving, star, whirlpool, brush, pincers, and tray, while the ideational gestures are divided into emblematic and illustrative gestures, the latter including iconic, metaphorical, and deictic categories. The second macro-category includes manipulative hand movements, with the two main categories of hetero-adaptors (with two sub-categories: object- and person-addressed adaptors) and self-adaptors.
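A hierarchical macro/micro taxonomy of this kind can be made concrete as a simple tree data structure, which is how such schemes are often operationalized in annotation tools. The sketch below is purely illustrative: the category labels follow the Maricchiolo, Gnisci, and Bonaiuto (2012) taxonomy as summarized here, while the dictionary layout and the helper function (`find_path`) are hypothetical conveniences, not part of any cited coding system.

```python
# Illustrative sketch (not from the cited authors): the two-level
# macro/micro taxonomy as a nested dictionary. Inner dicts are
# intermediate categories; lists hold the micro-category leaves.

TAXONOMY = {
    "linked_to_speech": {
        "cohesive": ["nippers", "hank", "weaving", "star",
                     "whirlpool", "brush", "pincers", "tray"],
        "beats": [],
        "ideational": {
            "emblematic": [],
            "illustrative": ["iconic", "metaphorical", "deictic"],
        },
    },
    "non_linked_to_speech": {
        "hetero_adaptors": ["object_addressed", "person_addressed"],
        "self_adaptors": [],
    },
}

def find_path(tree, label, path=()):
    """Return the macro-to-micro path for a category label, or None."""
    for key, sub in tree.items():
        if key == label:
            return path + (key,)
        if isinstance(sub, dict):
            found = find_path(sub, label, path + (key,))
            if found:
                return found
        elif label in sub:
            return path + (key, label)
    return None

print(find_path(TAXONOMY, "deictic"))
# -> ('linked_to_speech', 'ideational', 'illustrative', 'deictic')
```

Representing the scheme as a tree makes the hierarchical claim of the taxonomy explicit: every micro-category has exactly one path up to a macro-category, which is one way of enforcing mutual exclusivity at each level.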

2.2. Classifications of gestures: A comparison and integration

Several classifications of hand gestures have thus been proposed, focusing on different features: (i) the functions they accomplish, (ii) the modality of their expression as body movement, and (iii) the presence or absence of a semantic association with language. From the brief review sketched above, it is possible to outline some resemblances as well as dissimilarities among the main hand gesture classifications in the literature (see Tab. 109.1). From such a comparison, it is clear that not all the described classifications are exhaustive. Moreover, each taxonomy adopts a different taxonomic criterion: in some cases it is clear, unambiguous, and exhaustive (Krauss, Chen, and Chawla 1996; Maricchiolo, Gnisci, and Bonaiuto 2012), but in other cases it is complex and not operationally unambiguous, nor exhaustive or mutually exclusive (Ekman and Friesen 1969). In other cases, although the criterion is single and clear, it is not exhaustive (Bavelas et al. 1992; Kendon 1995; McNeill 1995), or the categories are not articulated into specific sub-categories (e.g., Krauss, Chen, and Chawla 1996), or they are simplifications and/or integrations of categories described by previous authors (i.e., Maricchiolo, Gnisci, and Bonaiuto 2012; McNeill 1985; for a historical, critical, and comparative survey of classification systems, see Kendon 2004; Maricchiolo, Gnisci, and Bonaiuto 2012; Rimé and Schiaratura 1991). All the cited classification elements have been considered relevant by different authors, and none of them can be ignored when co-speech gestures are studied as part of human communicative exchanges.
However, more effort should be devoted to producing a solid classification, including macro- and micro-categories – one that can be applied in different ways according to researchers' specific interests – in order to facilitate comparison among different studies. The issue of greater agreement among scholars on gesture structure – i.e., categorization on the basis of different gestural forms – is crucial for scientific progress in the field. The taxonomies and classifications presented above, though starting from different criteria, indeed suggest the possibility of a gesture differentiation that can be shared among many researchers. This has been made possible also thanks to classification tests in different social contexts and cultures, though


Tab. 109.1: Comparison of the main different gesture classifications with reference to the labels used for gestures' general categories

| Taxonomy's authors and taxonomic criterion/a | Cultural gestures | Gestures with referent in discourse content | Gestures co-occurring with discourse structure | Discourse-independent gestures |
| --- | --- | --- | --- | --- |
| Ekman, Friesen (1969): use, origin, coding | emblems | illustrators | – | adaptors |
| Argyle (1975) | conventional and ritual gestures | illustrators | illustrators | gestures indicating speaker's personality |
| McNeill (1985, 1992): discourse collocation | – | propositional (iconic, metaphoric, deictic) | non-propositional (beats, cohesive) | – |
| Kendon (1995): discourse link type | conventional | substantive | pragmatic | – |
| Bavelas et al. (1992): verbal gesture destination | – | topic and interactive | – | – |
| Krauss, Chen, and Chawla (1996): level of lexicalization | symbolic | lexical movements | motor movements | adaptors |
| Maricchiolo, Gnisci, and Bonaiuto (2012): linkage to speech | emblematic | ideational (iconic, metaphoric, deictic) | cohesive and rhythmic (with many subcategories) | adaptors (person-, object-, self-adaptors) |
a proper cross-cultural comparison is presently lacking (for an example, see Bonaiuto and Bonaiuto this volume). However, a full generalizability test still remains to be accomplished (the taxonomy by Maricchiolo, Gnisci, and Bonaiuto [2012] represents a step forward in this direction, attempting to build a system that is at once exhaustive and mutually exclusive across different interaction contexts). Another point that should be clarified by future research is whether, and to what extent, a structural classification is possible without any functional consideration within the taxonomic criteria. Once a good level of agreement has been reached in terms of structural description of the phenomenon – through taxonomic systems based on exhaustive and mutually exclusive categories, as well as on clear and easily operationalized taxonomic criteria – it


becomes crucial to deepen the investigation of the functional side of the phenomenon. With attention to social interaction, this can be achieved, for instance, primarily through the analysis of speech-gesture coordination as well as through the impact of co-speech gestures on both the speaker and the receiver.
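The two formal requirements invoked throughout this section – exhaustiveness and mutual exclusivity of categories – can be stated operationally for an annotation scheme: every coded gesture must receive exactly one label, and that label must belong to the scheme. The following is a minimal sketch of such a check; all category names, data, and the function name are hypothetical illustrations, not drawn from any cited coding system.

```python
# Hypothetical validation of an annotation set against the two formal
# requirements discussed above: mutual exclusivity (exactly one
# category per gesture) and exhaustiveness (every label used is a
# known category of the scheme). All names and data are illustrative.

CATEGORIES = {"emblematic", "ideational", "cohesive", "beat", "adaptor"}

annotations = {
    "gesture_01": ["ideational"],
    "gesture_02": ["cohesive"],
    "gesture_03": ["beat", "cohesive"],   # violates mutual exclusivity
    "gesture_04": ["shrug"],              # label not in the scheme
}

def validate(annotations, categories):
    """Return the gestures violating each requirement."""
    not_exclusive = [g for g, labels in annotations.items()
                     if len(labels) != 1]
    not_exhaustive = [g for g, labels in annotations.items()
                      if any(l not in categories for l in labels)]
    return not_exclusive, not_exhaustive

excl, exh = validate(annotations, CATEGORIES)
print(excl)  # ['gesture_03']
print(exh)   # ['gesture_04']
```

A check of this kind makes the methodological point concrete: a scheme that frequently forces coders into multiple labels per gesture, or into labels outside the scheme, fails the taxonomic requirements discussed above and needs either sharper criteria or additional categories.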

3. Functions

3.1. General functions of gestures with speech: Referential, discursive, conversational, and interactional

Discussions about the functions accomplished by co-speech gestures have mainly involved two points of view, long opposed to each other: the conversational versus the facilitation function. Supporters of the conversational function consider hand gestures as means to improve the quality of the conversation by improving comprehension and reciprocity between the interactants (Clark 1996; Goodwin 2000; Kendon 1994; LeBaron and Streeck 2000). Holler and Wilkin (2011), for example, state that co-speech gestures take part in the collaborative process of creating a mutually shared understanding of referring expressions; they thereby serve the conversational function by creating a shared ground for reciprocal understanding among the interactants of a given conversation. On the contrary, the facilitation hypothesis maintains that the peculiarity of hand gestures is to simplify the production of speech for the speaker, that is, to support the conversion from what s/he is going to say to what s/he actually says (Alibali, Kita, and Young 2000; Krauss, Chen, and Chawla 1996; Krauss, Chen, and Gottesman 2000; Rimé and Schiaratura 1991). Indeed, particular types of co-speech gestures facilitate linguistic and syntactic speech production (see Krauss, Chen, and Chawla 1996; Rimé and Schiaratura 1991). Gesturing can also influence speech at an early stage of utterance production, by helping to package information in a way suitable for linguistic expression. On the one hand, speakers tend to use more representational gestures when they have greater difficulties in language production (Alibali, Kita, and Young 2000), and whenever their gesturing is inhibited, their language is poorer (Hostetter, Alibali, and Kita 2007), less rapid (Allen 2003), and less fluent (Rauscher, Krauss, and Chen 1996).
On the other hand, when the use of speech is restricted or lexical access is difficult, the occurrence of gestures increases (Morsella and Krauss 2004; Rauscher, Krauss, and Chen 1996). Of course, the conversational and facilitation functions should not be considered mutually exclusive. Indeed, as highlighted by several authors, hand gestures can accomplish several diverse functions, associated with rhetorical power (referential function), with discourse, with interaction, and with persuasion. The rhetorical function concerns the use of gestures as rhetorical devices enriching the emphasis and power of communicative contents (Atkinson 1984; Edwards and Potter 1992). For example, the use of hand gestures has been studied in an experimental study on telling the truth versus deception under a high or low level of suspicion (Caso et al. 2006). The findings show that lying was associated with a decrease in deictic gestures and adaptors and with an increase in metaphoric gestures; strong suspicion led to an increase in metaphoric, rhythmic, and deictic gestures and to a decrease in self-adaptors as well as emblematic and cohesive gestures. Therefore, different kinds of hand gestures proved to be associated with particular characteristics of the situation and the speaker, but also with the rhetorical goal of the conversation itself.


The discursive function serves to mark the syntactic structure of the discourse (Fraser 1999; Schiffrin 1987). For example, Kendon (1995) has shown that gestures can be used as discourse markers. Within a more general approach encompassing different gestures and functions, and with a fine-grained level of analysis, Maricchiolo, Bonaiuto, and Gnisci (2007) identified specific differential functions for particular hand gestures, studying their co-occurrence with specific verbal features to clarify their discursive functions: cohesive gestures are usually employed to improve the discourse structure and coherence, while rhythmic gestures serve to emphasize focal points of the speech. In the same data set, it emerged that ideational gestures provide concrete illustrations of conversational contents (referential function), and that adaptors support the effort of managing the speaker's emotional states and the contextual discursive and social interactions (interactive function). The interactive function concerns all the features of conversation management (such as turn-taking and synchronization) and of the social relationship between the interactants. For example, gestures serve as signals for turn-taking: the termination of a co-speech gesture such as an arm or hand movement may signal the desire to yield the turn, while continued gesticulation by the speaker acts as a signal to suppress turn-taking by the hearer (Beattie 1981; Duncan 1972; Taboada 2006). Some authors have also analyzed the role of gestures in dyadic and small group settings: Maricchiolo et al. (2011) found that hand gestures improve the perceived influence of a group member. Likewise, Bavelas et al. (1995) identified a small number of gestures supporting the interaction process rather than conveying additional information: such gestures are more frequent in dyadic dialogues than in monologues and serve to include the addressees in the conversation.
Finally, the persuasion function concerns the power of co-speech gestures to convince and influence the interlocutors. Indeed, gestures can be used alone or in association with other nonverbal signals to improve the persuasiveness of a discourse or to obtain greater consent (see Argentin, Ghiglione, and Dorna 1990; Burgoon, Birk, and Pfau 1990; Carli, LaFleur, and Loeber 1995; Henley 1977; Maricchiolo et al. 2009; Poggi and Vincze 2009). The persuasion function of co-speech gestures has been recognized as a very useful tool in politics, specifically in political communication. Recent studies indicate that political judgments and preferences are often formed by observing bodily cues and signals (Cherulnik et al. 2001; Mattes et al. 2010), which have become even more influential than voice signals in presidential elections (Bucy and Grabe 2007). Moreover, in a study of the 2006 political campaign for the forthcoming presidential elections, Maricchiolo, Gnisci, and Bonaiuto (2013) observed that politicians tended to use more rhythmic and cohesive gestures than iconic ones, indicating that the nature of political speech is mainly persuasive rather than descriptive. Notwithstanding the abovementioned functions, a general distinction can be delineated, largely corresponding to the basic distinction between propositional and non-propositional gestures (McNeill 1992). It implies a differentiation in their functions: the former aim to improve the understanding of the discourse contents, whereas the latter serve to improve the development of the speech (fluidity or cohesion; McClave 1994; McNeill 1992), to manage the conversation from an interactional point of view (Bavelas et al. 1995), or to manage particular emotional states, such as tension or anxiety (Ekman and Friesen 1969).
However, besides the general, inclusive conversational and facilitation functions of co-speech gestures, a series of studies reveals that these cues may play several specific roles according to specific contexts and interactants.


3.2. Specific functions of co-speech gestures

Despite many differences, it is clear that co-speech gestures have a general communicative function. Past research has provided evidence that co-speech gestures contribute a significant amount of information to a speaker's message (e.g., Graham and Argyle 1975; Holler, Shovelton, and Beattie 2009; Kelly and Church 1998), implying communicative advantages for both the speaker and the listener. This does not mean, however, that speakers always produce gestures with a communicative intent (Holler and Wilkin 2011); rather, such findings support the notion that co-speech gestures are an integral part of the dialogue (Bavelas et al. 2008). Moreover, a recent meta-analysis (Hostetter 2011) shows that co-speech gestures provide a significant benefit to the whole communication process. However, this effect is moderated (i) by the topic: gestures depicting motor actions are more communicative than those depicting abstract topics; (ii) by redundancy: gestures are more effective when they add some information and are therefore not completely redundant to the accompanying speech; and (iii) by the age of the listener: children benefit more from gestures than do adults. Therefore, it is evident that, besides the general and quite universal functions of co-speech gestures, several other functions can be highlighted according to specific settings, contexts, and interactants. Research on this topic has been gaining increasing attention; however, the evidence is still sparse, and understanding of the specific functions of co-speech gestures remains limited. For example, Kendon (1985, 2004) provides a range of descriptions illustrating gestures' specific functions, mostly related to the speaker-related benefits of gesturing while speaking, for example the disambiguation of speech, the substitution of speech, and the emphasis and telescoping of information.
However, Clark and Wilkes-Gibbs (1986) argue that the interactants of a conversation create meaning jointly, establishing mutual understanding through the collaborative process of "grounding" (Clark and Brennan 1991). In line with this research, Holler and Wilkin (2011) found that co-speech gesture mimicry does occur in face-to-face dialogue, demonstrating that mimicked gestures play an important role in creating mutually shared understanding and in the grounding process. It is important to note, however, that co-speech gestures persist even when the listener is out of sight. Indeed, co-speech gestures serve multiple intrapersonal functions, one of which may be to facilitate access to words in the mental lexicon (Pine, Gurney, and Fletcher 2010). Therefore, although gestures have undeniable communicative functions, these functions appear to be only a part of the whole spectrum. Several studies have focused in detail on specific functions of co-speech gestures related to specific contexts and interactants. For example, Clark and Estigarribia (2011) found that parents use co-speech gestures to provide their children with pertinent information on the meanings of new words or objects, using gestures to maintain attention on an object, to add further information about it, or to demonstrate and depict its actions and functions. Moreover, co-speech gestures have been found to function as a source of input for language learners, allowing the receiver of the message to interpret unfamiliar parts of the speech, in both adults and children (Goodrich and Hudson Kam 2009). In line with this research, a recent study on neural

109. Co-speech gestures: Structures and functions


activity during conversation shows that perceiving hand movements during speech modulates neural activation in listeners, specifically involving both biological motion perception and discourse comprehension (Dick et al. 2009). These findings suggest that listeners attempt to find meaning not only in the words that speakers produce, but also in the hand movements that accompany speech. Cooperrider and Núñez (2009) have investigated the specific function of co-speech gestures in displaying time. According to them, American English speakers show a specific pattern of transversal temporal gestures (as distinct from sagittal temporal gestures; for details, see Cooperrider and Núñez 2009), in which five types of gestures, each with a specific function, can be recognized: placing, pointing, duration-marking, bridging, and animating. Other studies have shown that patients with apraxia (an inability to produce actions, gestures, or movements from instructions or to imitate them) or with aphasia (an impairment of verbal language) make extensive use of spontaneous gesturing with a compensatory function (Ahlsén 1985; Macauley and Handley 2005). Still other studies have shown that persons with aphasia increase their communicative ability in activities in which action for communication is allowed, compared to purely verbal conversation activities (Ahlsén 2002; Allwood 2000). In conclusion, hand gestures seem to accomplish the relevant function of improving the quality of communication in many ways: facilitating the production and comprehension of discourse content, enhancing the structural characteristics of speech, favoring the management of the speaker's distress, and supporting the interactive, relational, and persuasive features of communication.

4. Conclusions

The present review of the structure and functions of hand gestures shows that different scholars have used several criteria for classification, but it also underlines the efforts made to propose an inclusive, exhaustive, mutually exclusive, and shared category system that answers several descriptive needs and permits comparison across different studies and research fields. Nowadays, a basic criterion for the classification of hand gestures is their relation to speech (present vs. absent), from which a complex hierarchical system of categories and sub-categories can be developed according to specific research needs. Moreover, increasing attention is being paid to the specific functions accomplished by the different macro-categories, categories, and even sub- or micro-categories of gestures. Furthermore, a brief review of a number of studies conducted with different methodological approaches and research aims has been provided, aiming to give an idea of the relevance of studying hand gestures in association with several features of communicative style, in many areas of interest covering very different real-life settings: from simple ordinary dyadic conversation, to small group interaction, to patients with disabilities, to learning settings, up to political communication. All the described studies and results support the importance of considering hand gestures in coordination with the verbal characteristics of speech, in order to improve our understanding of human social interactions. All in all, the quoted studies highlight a close structural and functional link between speech and gesturing, underlining the pervasiveness of the integration of verbal and bodily components in human communication. Moreover, given the interaction between



speakers and listeners, it is important to note the relevance of co-speech gestures in serving different as well as shared functions for both of them. Co-speech gestures take different forms, varying across contexts and interactants as well as within a specific conversation, and those forms are tied to specific functions. When gesture acts alone as a communication tool, it takes on a language-like form (Goldin-Meadow 2003, 2005, 2006). However, when gesture interplays with verbal communication, sharing the burden of communication with speech, it takes on a holistic form, thereby losing its language-like structure (Goldin-Meadow 2003, 2005, 2006). In this form, co-speech gesture is a communicative pattern that evolved together with human language; it conveys information in an image-like way, and it adds information both qualitatively and quantitatively to the current communication process. Co-speech gestures play an important role in speakers' cognitive processes, conveying thoughts they do not have words for and lending linearity and a sense of sequence to the speech itself. Co-speech gestures are also fundamental for listeners' understanding: they convey information additional to speech, provide openings for new information to be added to one's previous knowledge, and offer shared ground for a joint communication process with the speaker, affecting the persuasiveness of speech. Finally, the study of co-speech gestures is particularly enriched when approached from different disciplines and from different perspectives within one discipline. In the case of psychology, the reviewed studies show the fruitfulness of integrating notions and evidence coming from different levels of analysis: psychoneurophysiology, general psychology, developmental psychology, social psychology, psycholinguistics, etc.
By merging the results coming from these different perspectives, it is possible to significantly improve the description of the characteristics of speech-gesture coordination in the ordinary activities of interpersonal communication, as well as to more fully understand and explain the complex array of their diverse functions.
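The basic classification criterion discussed in the conclusions, a split on the gesture's relation to speech (present vs. absent) refined into categories and sub-categories, can be pictured as a small hierarchical data structure. The following Python sketch is purely illustrative: the two macro-category keys follow the present/absent criterion above, while the sub-category names and glosses are a simplified, hypothetical selection loosely echoing the taxonomies reviewed (e.g., Ekman and Friesen 1969; McNeill 1992), not an implementation of any one of them.

```python
# Illustrative only: a hierarchical gesture taxonomy keyed on the basic
# criterion discussed above, the gesture's relation to speech (present vs.
# absent). Sub-category names and glosses are hypothetical simplifications.

TAXONOMY = {
    "speech_present": {          # co-speech gestures
        "iconic": "depicts concrete content of speech",
        "metaphoric": "depicts abstract content",
        "deictic": "points to referents",
        "beat": "marks rhythm and emphasis",
    },
    "speech_absent": {           # autonomous gestures
        "emblem": "conventional, quotable, translatable into words",
        "adaptor": "self- or object-directed, not deliberately communicative",
    },
}

def describe(macro: str, category: str) -> str:
    """Look up the gloss stored for a (macro-category, category) pair."""
    return TAXONOMY[macro][category]

print(describe("speech_absent", "emblem"))
```

A nested mapping like this mirrors the point made above: further sub- or micro-categories can be hung off any node according to specific research needs.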

5. References

Ahlsén, Elisabeth 1985. Discourse Patterns in Aphasia. (Gothenburg Monographs in Linguistics 5.) Gothenburg: Department of Linguistics, University of Göteborg.
Ahlsén, Elisabeth 2002. Speech, vision and aphasic communication. In: Paul Mc Kevitt, Seán Ó Nualláin and Conn Mulvihill (eds.), Language, Vision, and Music, 137–148. Amsterdam: John Benjamins.
Alibali, Martha W., Kristina L. Flevares and Susan Goldin-Meadow 1997. Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology 89(1): 183–193.
Alibali, Martha W., Sotaro Kita and Amanda J. Young 2000. Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes 15(6): 593–613.
Allen, Gary L. 2003. Gestures accompanying verbal route directions: Do they point to a new avenue for examining spatial representations? Spatial Cognition and Computation 3(4): 259–268.
Allwood, Jens 2000. An activity based approach to pragmatics. In: Harry Bunt and Bill Black (eds.), Abduction, Belief and Context in Dialogue: Studies in Computational Pragmatics, 47–80. Amsterdam: John Benjamins.
Argentin, Gabriel, Rodolphe Ghiglione and Alexandre Dorna 1990. La gestualité et ses effets dans le discours politique. Psychologie Française 35(2): 153–161.
Argyle, Michael 1975. Bodily Communication. London: Methuen.

Atkinson, John Max 1984. Our Masters' Voices: The Language and Body Language of Politics. London: Routledge.
Bavelas, Janet Beavin, Nicole Chovil, Linda Coates and Lori Roe 1995. Gestures specialized for dialogue. Personality and Social Psychology Bulletin 21(4): 394–405.
Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15(4): 469–489.
Bavelas, Janet Beavin, Jennifer Gerwing, Chantelle Sutton and Danielle Prevost 2008. Gesturing on the telephone: Independent effects of dialogue and visibility. Journal of Memory and Language 58(2): 495–520.
Beattie, Geoffrey W. 1981. A further investigation of the cognitive interference hypothesis of gaze patterns in conversation. British Journal of Social Psychology 20(4): 243–248.
Bonaiuto, Marino and Tancredi Bonaiuto this volume. Gestures and body language in Southern Europe: Italy. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2), 1240–1253. Berlin/Boston: De Gruyter Mouton.
Bucy, Erik P. and Maria Elizabeth Grabe 2007. Taking television seriously: A sound and image bite analysis of presidential campaign coverage, 1992–2004. Journal of Communication 57(4): 652–675.
Burgoon, Judee K., Thomas Birk and Michael Pfau 1990. Nonverbal behaviors, persuasion, and credibility. Human Communication Research 17(1): 140–169.
Carli, Linda L., Suzanne J. LaFleur and Christopher C. Loeber 1995. Nonverbal behavior, gender, and influence. Journal of Personality and Social Psychology 68(6): 1030–1041.
Caso, Letizia, Fridanna Maricchiolo, Marino Bonaiuto, Aldert Vrij and Samantha Mann 2006. The impact of deception and suspicion on different hand movements. Journal of Nonverbal Behavior 30(1): 1–19.
Cherulnik, Paul D., Kristina A. Donley, Tay-Sha R. Wiewel and Susan R. Miller 2001. Charisma is contagious: The effect of leaders' charisma on observers' affect. Journal of Applied Social Psychology 31(10): 2149–2159.
Clark, Eve V. and Bruno Estigarribia 2011. Using speech and gesture to introduce new objects to young children. Gesture 11(1): 1–23.
Clark, Herbert H. 1996. Using Language. Cambridge: Cambridge University Press.
Clark, Herbert H. and Susan E. Brennan 1991. Grounding in communication. In: Lauren B. Resnick, John M. Levine and Stephanie D. Teasley (eds.), Perspectives on Socially Shared Cognition. Washington, D.C.: American Psychological Association.
Clark, Herbert H. and Deanna Wilkes-Gibbs 1986. Referring as a collaborative process. Cognition 22(1): 1–39.
Contento, Silvana 1999. Gestural cohesion in discourse. In: Maria da Graça Pinto, João Veloso and Belinda Maia (eds.), Proceedings of the 5th Congress of the International Society of Applied Psycholinguistics, 201–205. Porto: Faculdade de Letras da Universidade do Porto.
Cooperrider, Kensy and Rafael Núñez 2009. Across time, across the body: Transversal temporal gestures. Gesture 9(2): 181–206.
Dick, Anthony S., Susan Goldin-Meadow, Uri Hasson, Jeremy I. Skipper and Steven L. Small 2009. Co-speech gestures influence neural activity in brain regions associated with processing semantic information. Human Brain Mapping 30(11): 3509–3526.
Duncan, Starkey 1972. Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology 23(2): 283–292.
Edwards, Derek and Jonathan Potter 1992. Discursive Psychology. London: Sage.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1(1): 49–98.
Fraser, Bruce 1999. What are discourse markers? Journal of Pragmatics 31: 931–952.




Goldin-Meadow, Susan 2003. Hearing Gesture: How Our Hands Help Us Think. Cambridge, MA: Harvard University Press.
Goldin-Meadow, Susan 2005. The two faces of gesture: Language and thought. Gesture 5(1/2): 241–257.
Goldin-Meadow, Susan 2006. Talking and thinking with our hands. Current Directions in Psychological Science 15(1): 34–39.
Goodrich, Whitney and Carla L. Hudson Kam 2009. Co-speech gesture as input in verb learning. Developmental Science 12(1): 81–87.
Goodwin, Charles 2000. Gesture, aphasia and interaction. In: David McNeill (ed.), Language and Gesture, 84–98. Cambridge: Cambridge University Press.
Graham, Jean Ann and Michael Argyle 1975. A cross-cultural study of the communication of extraverbal meaning by gestures. International Journal of Psychology 10(1): 57–67.
Henley, Nancy M. 1977. Body Politics: Power, Sex, and Nonverbal Communication. Englewood Cliffs: Prentice-Hall.
Holler, Judith, Heather Shovelton and Geoffrey Beattie 2009. Do iconic gestures really contribute to the semantic information communicated in face-to-face interaction? Journal of Nonverbal Behavior 33(2): 73–88.
Holler, Judith and Katie Wilkin 2011. Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior 35(2): 133–153.
Hostetter, Autumn B. 2011. When do gestures communicate? A meta-analysis. Psychological Bulletin 137(2): 297–315.
Hostetter, Autumn B., Martha W. Alibali and Sotaro Kita 2007. Does sitting on your hands make you bite your tongue? The effects of gesture inhibition on speech during motor descriptions. In: Danielle S. McNamara and J. Gregory Trafton (eds.), Proceedings of the 26th Annual Meeting of the Cognitive Science Society, 1097–1102. Mahwah, NJ: Lawrence Erlbaum Associates.
Iverson, Jana M. and Susan Goldin-Meadow 1998. Why people gesture when they speak. Nature 396: 228.
Kelly, Spencer D. and R. Breckinridge Church 1998. A comparison between children's and adults' ability to detect conceptual information conveyed through representational gestures. Child Development 69(1): 85–93.
Kendon, Adam 1983. Gesture and speech: How they interact. In: John M. Wiemann and Randall P. Harrison (eds.), Nonverbal Interaction, 13–46. Beverly Hills: Sage.
Kendon, Adam 1985. Some uses of gesture. In: Deborah Tannen and Muriel Saville-Troike (eds.), Perspectives on Silence, 215–234. Norwood, NJ: Ablex.
Kendon, Adam 1994. Do gestures communicate? A review. Research on Language and Social Interaction 27(3): 175–200.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Krauss, Robert M., Yihsiu Chen and Purnima Chawla 1996. Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us? Advances in Experimental Social Psychology 28: 389–450.
Krauss, Robert M., Yihsiu Chen and Rebecca F. Gottesman 2000. Lexical gestures and lexical access: A process model. In: David McNeill (ed.), Language and Gesture: Window into Thought and Action, 261–284. Cambridge: Cambridge University Press.
LeBaron, Curtis and Jürgen Streeck 2000. Gesture, knowledge, and the world. In: David McNeill (ed.), Language and Gesture: Window into Thought and Action, 118–138. Cambridge: Cambridge University Press.
Macauley, Beth L. and Candace L. Handley 2005. Conversational gesture production by aphasic patients with ideomotor apraxia. Contemporary Issues in Communication Sciences and Disorders 32: 30–37.



Maricchiolo, Fridanna, Marino Bonaiuto and Augusto Gnisci 2007. Hand gestures in speech: Studies on their roles in social interaction. In: Lorenza Mondada (ed.), Proceedings of the 2nd ISGS Conference, Interacting Bodies, Lyon, France. Lyon: École Normale Supérieure Lettres et Sciences humaines.
Maricchiolo, Fridanna, Augusto Gnisci and Marino Bonaiuto 2012. Coding hand gestures: A reliable taxonomy and a multi-media support. In: Anna Esposito, Antonietta M. Esposito, Alessandro Vinciarelli, Rüdiger Hoffman and Vincent C. Müller (eds.), Cognitive Behavioural Systems 2011 (Lecture Notes in Computer Science 7403), 405–416. Berlin/Heidelberg: Springer-Verlag.
Maricchiolo, Fridanna, Augusto Gnisci and Marino Bonaiuto 2013. Political leaders' communicative style and audience evaluation in Italian general election debate. In: Isabella Poggi, Francesca D'Errico, Laura Vincze and Alessandro Vinciarelli (eds.), Political Speech 2010 (Lecture Notes in Artificial Intelligence 7688), 99–117. Berlin/Heidelberg: Springer-Verlag.
Maricchiolo, Fridanna, Augusto Gnisci, Marino Bonaiuto and Gianluca Ficca 2009. Effects of different types of hand gestures in persuasive speech on receivers' evaluations. Language and Cognitive Processes 24(2): 239–266.
Maricchiolo, Fridanna, Stefano Livi, Marino Bonaiuto and Augusto Gnisci 2011. Hand gestures and perceived influence in small group interaction. The Spanish Journal of Psychology 14(2): 755–764.
Mattes, Kyle, Michael Spezio, Hackjin Kim, Alexander Todorov, Ralph Adolphs and R. Michael Alvarez 2010. Predicting election outcomes from positive and negative trait assessments of candidate images. Political Psychology 31(1): 41–58.
McClave, Evelyn M. 1994. Gestural beats: The rhythm hypothesis. Journal of Psycholinguistic Research 23(1): 45–66.
McNeill, David 1985. So you think gestures are nonverbal? Psychological Review 92(3): 350–371.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press.
Morsella, Ezequiel and Robert M. Krauss 2004. The role of gestures in spatial working memory and speech. The American Journal of Psychology 117(3): 411–424.
Pine, Karen J., Daniel J. Gurney and Ben Fletcher 2010. The semantic specificity hypothesis: When gestures do not depend upon the presence of a listener. Journal of Nonverbal Behavior 34(3): 169–178.
Poggi, Isabella 2008. Iconicity in different types of gestures. In: Adam Kendon and Tommaso Russo Cardona (eds.), Dimensions of Gesture. Special issue of Gesture 8, 45–61. Amsterdam: John Benjamins.
Poggi, Isabella and Laura Vincze 2009. Gesture, gaze and persuasive strategies in political discourse. In: Michael Kipp, Jean-Claude Martin, Patrizia Paggio and Dirk Heylen (eds.), Multimodal Corpora, 73–92. Berlin/Heidelberg: Springer-Verlag.
Rauscher, Frances H., Robert M. Krauss and Yihsiu Chen 1996. Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science 7(4): 226–231.
Rimé, Bernard and Loris Schiaratura 1991. Gesture and speech. In: Robert S. Feldman and Bernard Rimé (eds.), Fundamentals of Nonverbal Behavior, 239–281. Cambridge: Cambridge University Press.
Schiffrin, Deborah 1987. Discourse Markers. Cambridge: Cambridge University Press.
Taboada, Maite 2006. Spontaneous and non-spontaneous turn-taking. Pragmatics 16(2–3): 329–360.

Fridanna Maricchiolo, Rome (Italy)
Stefano De Dominicis, Rome (Italy)
Uberta Ganucci Cancellieri, Reggio Calabria (Italy)
Angiola Di Conza, Napoli (Italy)
Augusto Gnisci, Napoli (Italy)
Marino Bonaiuto, Rome (Italy)



110. Emblems or quotable gestures: Structures, categories, and functions

1. Definition and delimitation of the category
2. Structures, categories, and functions
3. Research methodologies: Repertoires and cross-cultural variation
4. New trends and long-standing gaps in the study of emblems
5. References

Abstract

Emblems or quotable gestures can be defined by their autonomy from speech, their communicative goals, their illocutionary force, their semantic core, and their social nature. Located somewhere between gesticulation and sign language, they are synthetic and partly conventionalized, and have certain linguistic qualities. Traditionally they are said to be non-verbal acts performed with full awareness, which are used as deliberate tools of communication, are recognized immediately inside their particular social community, and can easily be translated into verbal language (and quoted). All these properties suggest that emblematic gestures should be characterized as a gradual, prototypical category, with clear features of salience and relevance in communication. Emblems can be studied diachronically or synchronically, from their genesis and spread to the interrelationship between their semantic values and pragmatic functions in social interaction. Despite the diversity of research methodologies, repertoires of emblems reveal many cross-cultural similarities and differences between items. Regional and cultural variants are also associated with different combinations of metonymic and metaphoric processes in the creation of gestures.

1. Definition and delimitation of the category

Although some indirect references to symbolic gestures can be traced in authors writing before the 20th century, the first explicit mention of the notion of emblems or quotable gestures is in the work of Efron ([1941] 1972). His symbolic or emblematic gesture represents "either a visual or a logical object by means of a pictorial or non-pictorial form which has no morphological relationship to the thing represented" (Efron 1972: 96). Years later, Ekman and Friesen (1969) reformulated Efron's classification of gesture and broadened his category of emblematic gestures, now labeled simply emblems and defined as "those non-verbal acts which have a direct verbal translation, or dictionary definition, usually consisting of a word or two, or perhaps a phrase" (Ekman and Friesen 1969: 63). In fact, the concept of emblem substitutes for or partially overlaps with that of symbolic gesture (traceable to Wundt [1900] 1973) and admits a large number of variants in different studies: formal pantomimic gestures, semiotic gestures, quasi-linguistiques, folkloric gestures, autonomous gestures, quotable gestures, … (see Kendon 2004: 335; Payrató 1993: 195). Moreover, a great deal of information about emblems has been included under the simple label of gesture (i.e., French gestures, Arabic gestures, Mediterranean gestures, …), and indeed most dictionaries "of gestures" are collections of emblems, precisely because they are the gestures that have the most standardized forms and maintain a stable core of meaning outside the real contexts of production and interpretation.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1474–1481



Hanna (1996) devoted her study to the task of finding a coherent, semiotically founded definition of the emblem, emphasizing the aspect of convention and the fact that the foundation of the category must not rely on its condition of (non-)dependency on verbal language. Payrató (1993, 2003, 2008) has summarized previous contributions and redefined the category with a list of features: autonomy from speech, communicative goal, illocutionary force, semantic core, and social nature. According to this characterization, emblems are gestures which can – and usually do – take place independently of verbal language, and which are made deliberately, with a full communicative intention. They have an illocutionary value analogous to that of pragmatic speech acts, are typical of sociocultural groups or communities, and maintain a semantic core or nucleus which is specified in every concrete context, depending on situational factors and functional values. For his part, Kendon has proposed the notions of autonomous gestures (Kendon 1983) and quotable gestures (Kendon 1984, 1990) "to refer to any gesture that makes its way into an explicit list or vocabulary" (Kendon 2004: 335), and McNeill (2000a: 2–6) has situated them along four continua: (i) Regarding their relationship to speech, emblems (performed with the optional presence of speech) are placed between gesticulation (obligatory presence of speech) and pantomime and sign language (obligatory absence of speech). (ii) As for their relationship to linguistic properties, emblems (with some linguistic properties) lie between gesticulation and pantomime (without linguistic properties) and sign language (with linguistic properties). (iii) Regarding their relationship to conventions, emblems are partly conventionalized, compared with gesticulation and pantomime (not conventionalized) and sign language (fully conventionalized).
(iv) Finally, as for the character of the semiosis, gesticulation is global and synthetic, pantomime is global and analytic, sign language is segmented and analytic, and emblems are segmented and synthetic. The properties involved, as well as the location of the category along the continua proposed by McNeill, suggest that the emblem should be characterized as a gradual, prototypical category, that is to say, one in which the performance of many of the defining traits is not dichotomous and the items do not necessarily have to comply with all the features. However, at least from a pragmatic perspective, there is one trait which stands out and from which the other features can be derived: the illocutionary force, i.e., the ability to perform an act (a speech or communicative act: an assertion, a promise, a request, etc.) via the performance of the gesture. The fact that a gesture has reached this stage explains the other features (and not vice versa): it is conventional and therefore social, typical of a group in which the process of conventionalization has been (gradually) carried out; it is autonomous and quotable (it does not need verbal language); it involves a core of basic meaning (like any other illocutionary act); and it can be translated (relatively) easily into verbal language (with a statement, precisely the one that identifies it as a speech act). Traditionally, it has also been said that emblems are non-verbal acts performed with full awareness, which are used as deliberate tools of communication and which are recognized immediately. These features show, on the one hand, the high degree of salience or prominence of these units and, on the other, their relevance (in the sense of Sperber and Wilson 1986): emblems greatly increase the information about our mental, cognitive context and at the same time are processed at a very low cost; they are therefore diametrically opposed to movements that a speaker would interpret as totally irrelevant.



On the other hand, the possibility of breaking emblems into minimal units and identifying their morphological variants are more complex and controversial issues. While the standards of "good formation" for emblems are much more rigid than those for co-verbal gestures, the argumentation in favor of their articulation into minimal components seems questionable and detached from theories of production and interpretation (in fact, McNeill [2000b] considers them synthetic). The formal morphological variation of many emblems, their (at least partial) synonymy and polysemy, and the modalization they can receive (especially from facial expression) make it advisable to treat these units and their repertoires not as closed categories and simple lexicographic lists, but in terms of family resemblance networks with radial frames (Payrató 2003). In these networks, related items, with varying degrees of prototypicality, link with each other through metaphorical or metonymic extensions (or both simultaneously), sometimes also through the more or less literal or figurative meaning of verbal expressions. Similarly, as the degree of conventionalization of each unit is different, clear boundaries between emblems and co-speech gestures (illustrators, to use Ekman and Friesen's term) often cannot be established, so repertoires of emblems should also be conceived in this sense as open sets with fuzzy boundaries. In fact, many emblems arise from processes of ritualization of actions which initially did not have a communicative purpose (see especially Posner 2003), or they rely on other gestures such as affect displays or illustrators (Ekman and Friesen 1969), or they emerge from particular behaviors concerning extra-linguistic or linguistic phenomena (see Brookes 2001, 2004; Morris et al. 1979; Payrató 2008). The combination of this set of processes with social, gender, regional, and generational variables makes the repertoires even more open and changeable.

2. Structures, categories, and functions

Within non-verbal communication, most studies coincide in seeing emblems as the manifestations that are closest to verbality, considering them almost as linguistic elements (hence the French term quasi-linguistique, used by Dahan and Cosnier 1977). Indeed, the emblem allows us to "speak" in a way that no other gesture makes possible, combining an effect very similar to that of verbal interjections (which syntactically have the value of an independent sentence) with that of the items of sign language (which have a lexical value). Regarding structure, Kendon (1981: 152) proposed to distinguish a base (apart from the referent), defined as "the object, action, or (in some cases) abstract entity that the gestural form may be regarded as being modeled upon". From the collection of samples analyzed in Morris et al. (1979), Kendon mentions six types of base: (i) specific interpersonal actions (for instance, in the case of the Fingertips Kiss or the Teeth Flick, following Morris et al.'s labels), (ii) certain "intention movements" (e.g., in the Head Toss), (iii) action patterns that can be observed in others (e.g., in the Flat Hand Flick), (iv) concrete objects (e.g., in the Fig Hand), (v) symbolic objects (e.g., in the V-Sign or the Finger Cross), and (vi) abstract entities (e.g., in the Hand Purse with the meaning of 'many'). On the pragmatic side, if we understand emblems as paralinguistic devices which fulfill communicative functions and manifest illocutionary force, their values can be analyzed



following traditional classifications of speech acts, for instance as assertive, directive, expressive, commissive, and declarative (Payrató 1993). In the repertoires compared by Kendon (1981), most of the messages referred to three categories: interpersonal control, announcement of one's current state, and evaluative response. Regarding usage, emblems appear associated mainly with colloquial speech (spontaneous and informal, see Payrató 2004), and they are used especially when the verbal channel is inoperative or ineffective: in cases of great distance, when silence or a mechanism faster than speech is required, or to send messages such as greetings or insults or others which need no response, etc. (Kendon 1981, 2004). Contrasting with and complementing the typical functions of oral and verbal elements on the one hand, and those of co-verbal (or co-speech, co-expressive) gesture on the other, emblems occupy an intermediate position and are clearly multifunctional. In many cases, the performance of the emblem occurs without speech, and in this sense it is perfectly autonomous: for example, a greeting from a (relatively) long distance, raising the arm and waving the hand, turned towards another person, who acts in a similar way. The same can occur in greetings from short distances (raising the eyebrows) or with gestures of complicity (winking, or touching the nose or the ear, depending on the culture). In other cases, the emblem occurs between fragments of speech (intonation groups) or fills empty slots marked by intonation (in sequences that Slama-Cazacu [1976] called mixed syntax); for instance, That guy … [plus the emblem "to be crazy"], That is a … [plus the emblem "thief"]. In fact, unlike the previous case, here the verbal part is the topic (and subject) of the utterance, and the non-verbal part is the comment; the gesture acts only as a noun or noun phrase.
In both examples, the illocutionary force comes from the symbiosis (as a perfectly coordinated sum) of verbal and non-verbal constructions. Poggi (see Poggi and Magno Caldognetto 1997) refers to distinctions of this kind with the concepts of holophrastic (when the gesture is equivalent to a phrase) and lexical (when it is equivalent to a noun); see Brookes 2004 and Kendon 2004, also with the class of concept, which is catalogued as the most common form in the former study. Regarding syntactic ability, emblems can be combined, but only by simple juxtaposition ("food" – "request"; "(I) call" – "later" – "you"; etc.) and without constituting complex syntactic constructions (in the sense of verbal utterances or of the combinations characteristic of sign languages, including a predication).
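Slama-Cazacu's "mixed syntax", in which emblems fill slots in an otherwise verbal utterance, can be pictured as a sequence of typed segments. The following Python sketch is illustrative only: the `Segment` type and its `kind` tags are hypothetical conveniences for showing the topic-comment division, not part of any of the cited analyses.

```python
# Illustrative only: an utterance in "mixed syntax", i.e. speech fragments
# interleaved with emblems that fill syntactic slots. The Segment type and
# its "kind" tags are hypothetical conveniences.

from dataclasses import dataclass

@dataclass
class Segment:
    kind: str   # "speech" or "emblem"
    value: str  # the words uttered, or a gloss of the emblem's meaning

utterance = [
    Segment("speech", "That guy ..."),   # verbal topic (and subject)
    Segment("emblem", "to be crazy"),    # gestural comment
]

def gloss(segments: list) -> str:
    """Render the multimodal utterance as a verbal paraphrase,
    bracketing the parts contributed by gesture."""
    return " ".join(s.value if s.kind == "speech" else "[" + s.value + "]"
                    for s in segments)

print(gloss(utterance))  # That guy ... [to be crazy]
```

A flat list of segments, rather than a parse tree, reflects the point made above: emblems combine with speech and with each other by juxtaposition, not by complex syntactic construction.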

3. Research methodologies: Repertoires and cross-cultural variation

Poyatos (1975) designed the first fieldwork methodology for compiling inventories of emblems. In producing a first repertoire of American emblems, Johnson, Ekman, and Friesen (1975) applied encoding and decoding tests (with percentage rates for decoding, awareness of the naturalness of the usages, and the degree of certainty about message and usage) which have been repeated and modified in subsequent work (see Brookes 2004; Payrató 1993; Sparhawk 1978). For French, Calbris (1990) carried out intra-cultural and intercultural experimental studies, and for Italian, Poggi and Magno Caldognetto (1997) established explicit criteria for an analytical and lexicographic treatment of emblems (in the "gestionario", see Poggi 2002). These rigorous, explicit methods contrast with those of other lexicographic works, valuable as regards the materials collected and the information provided, but relatively inexplicit about the empirical basis on which they are grounded (for instance Green 1968; Meo-Zilio and Mejía 1980–1983). The establishment of repertoires of quotable gestures for countries, communities, or cultural groups has increased since the work of Efron, and it has even allowed some comparisons between inventories and processes of conventionalization (Kendon 1981, 2004; Saitz and Cervenka 1972); but because of methodological differences in the preparation of the studies, such comparisons and contrasts are not entirely reliable (Payrató 2001, 2008). Very few data have been compiled on how children acquire these units and repertoires (but see Guidetti 2003a, 2003b). The equivalents of current, casual, or colloquial emblems in technical or specialized domains are systems that act as authentic, though limited, languages for special purposes (in the context of monasteries, professional activities, sports, etc.). Applying an approach similar to that used in many works in dialectology or linguistic geography, Morris et al. (1979) carried out a study of twenty emblems across forty European cities, with a sample of twenty people per city (chosen at random in public places) and very explicit mapping of the results.
Leaving aside some methodological problems, particularly with regard to the questionnaire (which sometimes caused confusion) and the choice of cities (often selected according to spatial criteria rather than cultural representativeness), the study provides many valuable data on the knowledge and usage of these units, on their variants, on the traces of the historical developments they have followed, and even on the borders separating two contiguous areas (the case of the Head Toss, for example, characteristic of several parts of Europe and of southern Italy, is like that of linguistic isoglosses: it is used twenty kilometers south of Rome, owing to the ancient Greek dominance in the area, but not north of this point). The study highlights the interest of thorough historical analysis (there are emblems that survive after twenty centuries without formal changes) and the pertinence of many subjects of cross-cultural comparison (e.g., head movements to affirm or deny all over Europe). Some emblems have been studied in depth, some in already classic works (see, inter alia, Leite de Vasconcellos 1925; Taylor 1956), others in recent works which have even revised their graphic signs (e.g., Serenari 2003). Meanwhile, qualitative ethnographic research has made possible the study of emblems in their real contexts, highlighting their semantic and pragmatic differences (Sherzer [1991] in the case of the Brazilian Thumbs-Up gesture) and their processes of birth and spread (Brookes [2001] in the case of the clever gesture in South Africa). Especially valuable is the analysis of the interrelationship between the semantic values of an emblem and its pragmatic functions in communicative interaction (Brookes 2004; Kendon 2004; Poggi and Magno Caldognetto 1997), which further demonstrates that clear boundaries between quotable gestures and co-verbal gestures or illustrators often cannot be established.

110. Emblems or quotable gestures: Structures, categories, and functions

4. New trends and long-standing gaps in the study of emblems

A number of methodological problems must be solved to give impetus to these studies: researchers should have common corpora at their disposal, a homogeneous, standard transcription system, and cross-culturally comparable data. Repertoires or dictionaries of emblems must be established with explicit criteria and must contain appropriate information on the informants' characteristics and the data elicitation process. Ethnographic research is also essential in this regard, to obtain reliable data on the usage of emblems (depending on variables such as gender, age, and contextual factors), and must include an adequate description of the semantic values of each unit, with their precise roles in social interaction. Alongside the intercultural research that has always been present in the collection and analysis of quotable gestures, the application of the new cognitive paradigm should also bear fruit in the form of innovative studies: on the one hand, in domains such as determining the brain areas responsible for the ability to manage these units (and their close relationship to verbal language); on the other, in improved theories that explain the coordination between verbal and non-verbal resources in discourse production: the creation of meaning in context, the concrete relationship between emblems, global salience, and relevant information, and the combination of old and new information, thematic development, and communicative dynamism. Finally, new cognitive theories also seem well equipped to explain the combination of metaphoric and metonymic mechanisms in the creation of emblems, another topic still to be explored. Here there are obvious similarities with the processes that lead to the formation of signs in sign languages, with which emblems share a degree of iconicity much higher than that found in the verbal lexicon, except in onomatopoeia and words where sound symbolism is present. Research into emblems still faces two main challenges, which correspond to two dimensions of study: diachronic and synchronic. In the diachronic dimension, we must be able to describe the process of the birth of each emblem within a particular social group, to explain it as the result of conventionalization, and to reconstruct the process of its historical and geographical spread, with the possible emergence of regional or cultural variants (through various formal modifications, or via metaphor and metonymy).
In the synchronic dimension, we must describe how emblems are integrated into the communicative repertoire of each individual and each community or cultural group. There the meanings (and interactive functions) of basic items and their variants can be framed in radial networks defined by family resemblance, with varying degrees of prominence and of feature compliance. After all, the categories which analysts project onto the real world (now under labels such as emblems, quotable gestures, or similar terms) should not hide the fact that our ultimate goal is not a gestural typology per se, in a purely taxonomic sense, but to describe and explain the human communicative behavior manifested in the regular usage of conventional items with significant values and complex pragmatic functions.

5. References

Brookes, Heather J. 2001. O clever 'He's streetwise'. When gestures become quotable: The case of the clever gesture. Gesture 1(2): 167–184.
Brookes, Heather J. 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Dahan, Gelis and Jacques Cosnier 1977. Sémiologie des quasi-linguistiques français. Psychologie Médicale 9(11): 2053–2072.
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton. First published [1941].
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1: 49–97.
Green, Jerald R. 1968. A Gesture Inventory for the Teaching of Spanish. Philadelphia: Chilton Books.


Guidetti, Michèle 2003a. Pragmatic aspects of conventional gestures in young French children. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures: Meaning and Use, 39–44. Porto: Edições Universidade Fernando Pessoa.
Guidetti, Michèle 2003b. Pragmatique et psychologie du développement. Comment communiquent les jeunes enfants. Paris: Belin.
Hanna, Barbara E. 1996. Defining the emblem. Semiotica 112(3/4): 289–358.
Johnson, Harold G., Paul Ekman and Wallace V. Friesen 1975. Communicative body movements: American emblems. Semiotica 15(4): 335–353.
Kendon, Adam 1981. Geography of gesture. Semiotica 37(1/2): 129–163.
Kendon, Adam 1983. Gesture and speech: How they interact. In: John M. Wiemann and Randall P. Harrison (eds.), Non-verbal Interaction, 13–45. Beverly Hills: Sage.
Kendon, Adam 1984. Did gesture have the happiness to escape the curse at the confusion of Babel? In: Aaron Wolfgang (ed.), Non-verbal Behavior: Perspectives, Applications, Intercultural Insights, 75–114. Lewiston/New York/Toronto: C.J. Hogrefe, Inc.
Kendon, Adam 1990. Gesticulation, quotable gestures, and signs. In: Michael Moerman and Masaichi Nomura (eds.), Culture Embodied, 53–77. Osaka: National Museum of Ethnology.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Leite de Vasconcellos, José 1925. A figa. Estudo de etnografia comparativa, precedido de algumas palavras a respeito do 'sobrenatural' na medicina popular portuguesa. Porto: Araujo and Sobrinho.
McNeill, David 2000a. Introduction. In: David McNeill (ed.), Language and Gesture, 1–10. Cambridge: Cambridge University Press.
McNeill, David (ed.) 2000b. Language and Gesture. Cambridge: Cambridge University Press.
Meo-Zilio, Giovanni and Silvia Mejía 1980–1983. Diccionario de gestos. España e Hispanoamérica, Volume 1 (1980) and Volume 2 (1983). Bogotá: Instituto Caro y Cuervo.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O'Shaughnessy 1979. Gestures, Their Origins and Distribution. London: Jonathan Cape.
Payrató, Lluís 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20(3): 193–216.
Payrató, Lluís 2001. Methodological remarks on the study of emblems: The need for common elicitation procedures. In: Christian Cavé, Isabelle Guaïtella and Serge Santi (eds.), Oralité et gestualité. Interactions et comportements multimodaux dans la communication, 262–265. Paris: L'Harmattan.
Payrató, Lluís 2003. What does 'the same gesture' mean? Emblematic gestures from some cognitive-linguistic theories. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures: Meaning and Use, 73–81. Porto: Edições Universidade Fernando Pessoa.
Payrató, Lluís 2004. Notes on pragmatic and social aspects of everyday gestures. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 103–113. Berlin: Weidler.
Payrató, Lluís 2008. Past, present, and future research on emblems in the Hispanic tradition: Preliminary and methodological considerations. Gesture 8(1): 5–21.
Poggi, Isabella 2002. Symbolic gestures: The case of the Italian gestionary. Gesture 2(1): 71–98.
Poggi, Isabella and Emanuela Magno Caldognetto 1997. Mani che parlano. Gesti e psicologia della comunicazione. Padova: Unipress.
Posner, Roland 2003. Everyday gestures as a result of ritualization. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures: Meaning and Use, 217–229. Porto: Edições Universidade Fernando Pessoa.
Poyatos, Fernando 1975. Gesture inventories: Fieldwork methodology and problems. Semiotica 13(2): 199–227.
Saitz, Robert L. and Edward J. Cervenka 1972. Handbook of Gestures: Colombia and the United States. The Hague: Mouton.
Serenari, Massimo 2003. Examples from the Berlin dictionary of everyday gestures. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures: Meaning and Use, 111–117. Porto: Edições Universidade Fernando Pessoa.


Sherzer, Joel 1991. The Brazilian thumbs-up gesture. Journal of Linguistic Anthropology 1(2): 189–197.
Slama-Cazacu, Tatiana 1976. Non-verbal components in message sequence: The "mixed syntax". In: William C. McCormack and Stephen A. Wurm (eds.), Language and Man, 127–148. The Hague: Mouton.
Sparhawk, Carol M. 1978. Contrastive-identificational features of Persian gesture. Semiotica 24(1/2): 49–86.
Sperber, Dan and Deirdre Wilson 1986. Relevance: Communication and Cognition. Oxford: Blackwell.
Taylor, Archer 1956. The Shanghai gesture. Folklore Fellows Communications 166: 1–76.
Wundt, Wilhelm 1973. The Language of Gestures. The Hague: Mouton. First published [1900].

Lluís Payrató, Barcelona (Spain)

111. Semantics and pragmatics of symbolic gestures

1. Symbolic gestures. A definition
2. Iconicity in symbolic gestures
3. Words and sentences in gestures
4. The "tulip hand": An Italian holophrastic gesture
5. Rhetorical figures in gestures
6. The gestionary
7. References

Abstract
The chapter defines symbolic gestures as autonomous, culturally codified gestures to which a canonical verbal phrasing corresponds in a given culture. Examples of iconic and arbitrary gestures are given, and a way to measure a gesture's iconicity is proposed, based on how far its parameters still imitate the referent's shape, location, orientation, or movement. A proto-grammatical distinction is made between holophrastic gestures, which convey the meaning of a whole sentence, and articulated ones, corresponding to single words, and the Italian holophrastic gesture of the "tulip hand", with its meanings as a true and a rhetorical question, is analyzed. Several cases of rhetorical figures in gestures are then illustrated: metaphor, metonymy and synecdoche, hyperbole, irony, allusion, dysphemism. Finally, the chapter presents a protocol for the construction of a gestionary, a dictionary of symbolic gestures, encompassing information on a gesture's verbal formulation, contexts, synonyms, semantic content, semantic, pragmatic, and grammatical classification, and rhetorical figures.

1. Symbolic gestures. A definition

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1481–1496

Symbolic gestures, also called "emblematic gestures", "emblems" (Efron 1941; Ekman and Friesen 1969), or "quotable gestures" (Kendon 2004), are communicative movements made by the hands, arms, and/or shoulders that in a given culture quite systematically correspond to a specific verbal phrasing, a sort of translation into words or sentences shared by the speakers of that culture. To define them in cognitive terms, we may characterize symbolic gestures in terms of the criteria for gesture classification proposed by Poggi (volume 1). Considering the relation of the gesture to signals in other modalities, symbolic gestures are autonomous in that, unlike beats (McNeill 1992), they can completely replace words, even if they do not necessarily do so. As to the criterion of cognitive construction, a symbolic gesture is a codified one: in the sender's long-term memory a signal-meaning link is stably stored – something comparable to the lexical entry of a word in the mental lexicon – in which a canonical hand shape and hand movement are attached to a particular meaning, to which, moreover, a shared verbal paraphrase corresponds. Second, a symbolic gesture is culturally codified. While other gestures are codified biologically (for example, raising both arms up, a typical gesture of pride and elation made by athletes when they win a race) and hence are widely shared across cultures, symbolic gestures are codified on a cultural basis: a gesture that has a specific meaning in one culture may have a different meaning, or no meaning at all, in another. Further, just as with words, children learn their canonical form and their meaning by seeing them performed by adults; and blind children cannot learn them unless they are explicitly taught to perform them, by modelling the shape and movement of their hands. A last feature characterizes symbolic gestures among all others. Having a codified meaning does not differentiate them from beats (a beat means "I am stressing this word/syllable/sentence"), nor from other body signals (like gaze items: Poggi 2007, volume 1) that may have very precise meanings, too.
The meaning of a symbolic gesture, unlike that of a beat or a glance, has a canonical verbal phrasing in a given culture: something like a codified verbal translation. This has been shown by studies on symbolic gestures in several cultures, such as Catalan emblems (Payrató 1993) or South African quotable gestures (Brookes 2004a). For Italian symbolic gestures, by submitting gestures to Italian speakers, Poggi and Magno Caldognetto (1997) found that for each gesture the verbal paraphrases proposed by participants, albeit literally different from one another, all shared the same meaning: for example, the "thumb up" gesture was paraphrased as "ok" by 10 subjects out of 18, 6 paraphrased it as "va bene" ('all right'), or "ok va bene", or "sì va tutto bene ok" ('yes, it's all right, ok'), and 1 subject as "ok siamo i primi" ('ok, we are the first'). Further evidence of the status of symbolic gestures as different from other types of gestures and body signals is that, like the signs of the Sign Languages of the deaf, they are processed by the same neural system as spoken language (Xu et al. 2009). To sum up, according to these criteria symbolic gestures are autonomous, culturally codified gestures corresponding to well-established paraphrases in words or sentences.

2. Iconicity in symbolic gestures

If we consider symbolic gestures in terms of the signal-meaning relationship, we find among them both arbitrary gestures, where no semantic relationship can be found between gesture and meaning, and motivated ones, in which the hand shape or movement allows one to infer what they mean. Actually, among systems of symbolic gestures it is not easy to find completely arbitrary ones; one (at least apparently) arbitrary Italian gesture is that of sliding the open hand under the chin (Fig. 111.1), which means "I couldn't care less".


Fig. 111.1: “I couldn’t care less”

Within motivated symbolic gestures, the form-meaning relationship is in some cases one of biological determinism, as for the gesture of raising both arms up, where the physiological arousal of the emotion of elation gives rise to arm movements characterized by amplitude and energy. In the majority of cases, though, the relationship between form and meaning is one of similarity: many symbolic gestures are iconic. The origin of iconic gestures among symbolic ones can be traced back to previous pantomimes (Poggi 2008), that is, gestures that were initially creative, non-codified, invented on the spot, and then became codified through a process of stylization and schematization comparable to that found by Frishberg (1975) and Radutzky (1981, 1987) for signs of American Sign Language and Italian Sign Language. Actually, when people create a gesture completely anew, they must necessarily produce an iconic gesture (or else the addressee could not understand it), and to do so they single out the most characterizing physical features of the referent to be conveyed – its typical shape, movement, location – and reproduce them with the hands. These features of the pantomimic gesture crystallize, giving rise to the parameters of codified gestures found by Stokoe (1978) in American Sign Language, by Radutzky (1992) in Italian Sign Language, and by Poggi (2007) in symbolic gestures: handshape, movement, orientation, and location. So the meaning "to walk" might first have been represented by the pantomime of index and middle finger performing a forward movement, and this pantomime might have given rise to the codified gesture "walk" (Fig. 111.2), with the handshape of index and middle finger in a V shape oriented downward (evoking the shape of two legs) and the movement of alternately moving the two fingers forward (like walking legs).
Thus, the parameters of handshape and movement maintain iconicity in the symbolic gesture, since they recall corresponding features of the referent.

Fig. 111.2: “walk”


Distinguishing which parameters of a gesture recall its meaning may give us a measure of iconicity. For example, in the Italian gesture "two" (Fig. 111.3) only the handshape is iconic: the two fingers, index and middle finger extended upward, represent the quantity two.

Fig. 111.3: “two”

Hitting one's temple with a curved index finger, which means "mad", is iconic only in its location (pointing at the head). "Indian", index and middle finger extended open on top of the head, is iconic in two parameters: shape of fingers (= two feathers) and location (= head) (Fig. 111.4). "Walk", on the contrary, is iconic in handshape (the two fingers as two legs), orientation (forward orientation of the knuckles as forward orientation of the knees in walking), and movement (fingers moving like legs) (Fig. 111.2).

Fig. 111.4: “Indian”

3. Words and sentences in gestures

What is the relationship between signal and communicative act in symbolic gestures? Since, as seen above, symbolic gestures are much more similar to words than other body signals are, we might wonder whether there are grammatical categories within symbolic gestures. Actually, we cannot find a distinction like that of verbal languages, i.e., words classified as adjectives, nouns, or verbs: often the same gesture may correspond both to a noun and to a verb (e.g., Fig. 111.5 may mean both "smoke" and "cigarette"). Yet we can draw a "proto-grammatical" distinction between articulated and holophrastic gestures: those paraphrasable with single words and those having the meaning of whole sentences.


Fig. 111.5: “smoke”, “cigarette”

A holophrastic signal is a unitary signal that conveys a whole communicative act, with both performative and propositional content, while an articulated signal conveys only part of it. So the gesture in Fig. 111.6 is holophrastic, since it corresponds to a whole sentence, "come here", and conveys not only an action (addressee coming to sender) but also a performative of request (sender requests addressee to come to sender); the gesture in Fig. 111.2, "walk", is articulated, since it conveys the meaning of an action but does not tell whether it is asserted, requested, wished, forbidden, or other.

Fig. 111.6: “Come here”

To tell whether a gesture is holophrastic, one must test whether a given performative is incorporated in its meaning: make the gesture while displaying a facial expression that conveys a contrasting performative; if the resulting face-hand match is awkward to observe or difficult to produce, the gesture has that performative incorporated and hence is holophrastic. In fact, matching the gesture in Fig. 111.6 with an interrogative expression is unacceptable or awkward, whereas different performative expressions can be matched to the gesture in Fig. 111.5: depending on the facial expression conveying the performative, the gesture may mean "He is smoking now" or "He is a smoker", if produced with an informative expression; "Let us have a cigarette", with the facial expression of a proposal; "Is he a smoker?", with an interrogative expression. An articulated gesture, incorporating no specific performative, can stand for different communicative acts, while a holophrastic one always conveys the same communicative act (Poggi 2007). We may classify all gestures as articulated or holophrastic, and each holophrastic gesture as to its performative. Raising a fist with the index finger extended, to ask for the speaking turn, has a performative of request; the Italian gesture of the "tulip hand" (Fig. 111.7a) is a question; shaking the flat hand obliquely is a threat.


4. The "tulip hand": An Italian holophrastic gesture

A gesture in some way prototypical of Italian communication is the "tulip hand" (Fig. 111.7): hand palm up, with the fingers joined together like the petals of a tulip, moving up and down. This gesture has two different meanings, both holophrastic, one stemming from the other. In the former, it has the performative of a wh-question, and it may mean "what is this?", "what do you want?", "so what…?"; in the latter, the question is used as an indirect speech act – a pseudo-question – that indirectly conveys a negative assertion, a criticism, or an expression of perplexity, disagreement, or disapproval, paraphrasable as "but what the hell are you doing/saying?", "not at all", "I don't agree", "what you are saying is not true", "I don't approve of what you are doing". The two readings (Fig. 111.7a, Fig. 111.7b) are distinguished by the parameters of gesture expressivity (Hartmann, Mancini, and Pelachaud 2002; Poggi and Pelachaud 2008) – amplitude, fluidity, velocity, repetition, muscular tension – and by the facial expression accompanying the gesture.

Fig. 111.7a: “What do you want?”

Fig. 111.7b: “What the hell do you want?”

In the meaning of a true question (Fig. 111.7a), the hand moves up and down fast (high velocity), covering quite a short path (low amplitude) with high muscular tension, and stops abruptly (low fluidity) after no more than two or three repetitions (low repetition); the gesture is accompanied by a frown and a fast head shake: generally, a face and head expression of curiosity. In the meaning of criticism (Fig. 111.7b), the hand moves up and down slowly (low velocity), two times or more (high repetition), over a long run, possibly up to a complete arm bending and extension (high amplitude). The mouth displays a retraction and an asymmetrical lip-corner lift, similar to an ironic smile, and may produce a dental click (a sign of skepticism); finally, the head is bent aside, as if ironically imploring the addressee not to do or say what s/he is about to: all in all, an expression of irony or skepticism, and no curiosity. Let us have a look at a case of the "true question" reading (see Fig. 111.7a): (1)

Student S, at the classroom door, hands teacher T a strange leaflet. Before taking it, the teacher wonders what the leaflet may be about and, with an interrogative face, makes the "tulip hand" gesture.

Here, the gesture could be verbally paraphrased as: “What leaflet is it? What is it about?”


It conveys a true question, a request for information that T does not really have. In the following example, instead, T's "tulip hand" (Fig. 111.7b) has a "pseudo-question" reading. (2)

Rome, Italy, during the eighties: a time of student rebellion. T occasionally teaches at the university but, being simply a doctoral student and not a proper teacher, her role is not that prestigious. One day in class a student cannot counter T's arguments, while T seems instead to convince all the other students, and the student says: "Well, but you have the role of the one who teaches, so whatever you say is taken as pure gold…" In answer, T simply makes the "tulip hand" gesture.

Here, a verbal paraphrase of the gesture could be: "But what the hell are you saying?" The real meaning of the gesture is: "I do not think anybody in this cultural milieu could take what a doctoral student says as pure gold. What you say is not true at all." This is not a true question – there is no information that T lacks – but a sceptical comment on the student's remark, finally resulting in an expression of disagreement. Sometimes, if only the bare parameters of handshape and movement type are taken into account, and the context does not contribute to disambiguation, the "tulip hand" might at first sight look ambiguous between the two readings – true question and pseudo-question. Yet, once facial expression and the parameters of gesture expressivity (amplitude, fluidity, velocity, tension, repetition) are considered, the right interpretation becomes immediately clear. (3)

One night, A meets B. B is usually dressed quite casually, but tonight he is very smart and elegant. A, while looking at B with a light smile, makes the “tulip hand” gesture.

In this situation, at least as described in a written text, the gesture is ambiguous: A might intend either its literal reading or its indirect reading. Suppose A is really curious about B's unusual attire and wants to ask him where he is going: in this case, the gesture is used with its literal reading and can be paraphrased as:

(3a) A: Where are you going with that nice suit?

But suppose that B looks irresistibly ridiculous in his elegant suit: in this case, the gesture could have the meaning of an ironic comment or an act of teasing, indirectly conveyed by a pseudo-question. In other words, here the meaning of the "tulip hand" would be:

(3b) A: But where the hell are you going with that suit?!

In this sentence, behind the interrogative literal goal one clearly senses an indirect meaning of friendly ridicule, more or less like saying:

(3c) A: You're really funny with that suit.


In (3), B has no real contextual elements to tell whether A is genuinely curious about where B is going or is just ridiculing B's suit. And yet, in everyday real communication the "tulip hand" gesture will be perfectly understood in one or the other reading with no uncertainty, thanks to the cues of facial expression and of movement expressivity, which with gesture play the same role that intonation plays in unmasking the ironic reading of a statement. As shown by Ricci Bitti and Poggi (1991), compared to other gestures that can be disambiguated only through differences in facial expression and expressivity parameters, these cues seem even more important for interpreting the "tulip hand" than for other ambiguous gestures. Finally, as to the frequency of use of the "tulip hand", its indirect meaning as a pseudo-question looks more frequent in everyday life than the true question; with its ironic, sceptical, and provocative nuances, it works as a reminder to the addressee not to take him- or herself too seriously.

5. Rhetorical figures in gestures

Suppose you clumsily spill tomato sauce on your friend's white shirt, and he starts clapping his hands. He is not praising but blaming or teasing you: his gesture is ironic. Within symbolic gestures, rhetorical figures are often at work and play an important role at both the diachronic and the synchronic level. A rhetorical meaning sometimes causes historical change by giving rise to a new meaning of a gesture that can subsequently replace and obscure the previous one. For example, the gesture in Fig. 111.8, "I can't bear him/her", originally means "I have it on my stomach", "I can't digest it", but metaphorically comes to refer to rejection not of food but of a person: the literal original meaning of concrete digestion is now obscured, while the rhetorical meaning of an unbearable person is the only one now valid. In other cases, the operation of a rhetorical figure simply adds a meaning to a pre-existing one, and the two quite different meanings, one deriving from the other, coexist, causing a polysemy of the gesture: the gesture of clapping hands (Fig. 17 in Tab 111.1) still has both its literal sense of praise and an ironic meaning of blame. Let us have a look at some rhetorical figures in Italian symbolic gestures.

5.1. Metaphor

Metaphor is a rhetorical figure quite widespread in gestures, as shown by various authors (Calbris 1990, 2003; Cienki and Müller 2008; De Jorio 2000 [1832]; Kendon 1992, 2000; Poggi 2002; Poggi and Magno Caldognetto 1997), and some symbolic gestures have a metaphorical meaning, too. We have a metaphorical use of a signal when a signal generally used to refer to X is used to refer to Y, thus extending its meaning to a different semantic field. Among Italian symbolic gestures, number 8 means mi sta qua (‘s/he’s on my stomach’, ‘I can’t digest him/her’ = ‘I can’t bear him/her’). Both the gesture and its verbal formulation use a metaphor implying a transfer from the field of food to an affective-interactional field. Another metaphorical gesture is number 9, extended index and middle fingers alternately closing and opening, as if cutting something with scissors, which means taglia (‘cut’)

111. Semantics and pragmatics of symbolic gestures


Fig. 111.8: “I can’t bear him/her”

in the sense of “be concise”, “cut your discourse”: again, a transfer from a concrete material that can be cut to an abstract, mental communicative material, a discourse. A further metaphor for the same meaning, “be concise”, is in number 10, closing the fist or the fingertips together twice, as if squeezing something. This means stringi (‘squeeze it’), but what is supposed to be squeezed is not a concrete thing but, again, a discourse. In gesture number 11, which means duro (‘hard’) in the two senses of “stubborn” and “stupid”, the property of hardness is transferred from a perceptual to a mental field. The transferred meaning is “something difficult to penetrate”, and in this metaphor it is applied, respectively, to the fields of goals and beliefs: a stubborn person is one who is difficult to influence, who finds it hard to accept new goals, whereas a stupid person is one who finds it hard to acquire new knowledge. Gesture number 12, drawing a cross in the air, means morto (‘dead’), but metaphorically it can also mean ‘finished, over, gone off’. A girl might use it to answer the question “How about your boyfriend John?”, meaning not that John died, but that their flirt is over. The gesture refers to something metaphorically dead, finished, gone off.

Fig. 111.9: “cut”

Fig. 111.10: “squeeze”

Fig. 111.11: “hard”

Fig. 111.12: “dead”

5.2. Metonymy and synecdoche

In the rhetorical figure of metonymy, one mentions some meaning X to refer to another meaning Y that is semantically linked to X, for example because it is a part of its description or definition. Gesture number 12 for “dead”, which as just seen may be used metaphorically to mean “finished”, might in its turn derive from a metonymy: since it


VII. Body movements – Functions, contexts, and interactions

is performed by drawing a cross in the air in the same way as priests do when blessing, it might have initially meant “someone like those that priests bless with a cross”. Synecdoche may be seen as a case of metonymy: a gesture using a synecdoche mentions the referent or concept X to refer to Y, where Y is linked to X through a specific relation such as part-whole, object-function, or another. Gesture number 13, in which the open hand with spread fingers, palm toward the Speaker’s face, imitates the bars of a jail, means prigione (‘jail’): a part X (bars) of an object Y (jail) is mentioned to refer to the whole object. The relation between X and Y is a part-whole relation (see Fig. 111.13). A double synecdoche underlies gesture number 14, index tapping the wrist, which means “What’s the time?” or “C’mon, it’s late”: first, you refer to a watch by indicating where a watch usually is (from place X, the wrist, to object Y, the watch); second, by referring to the watch you remind the addressee that time is passing (from object Y, the watch, to function Z, knowing the time) (see Fig. 111.14).

Fig. 111.13: “jail”

Fig. 111.14: “It’s late”

5.3. Hyperbole

Some gestures use a hyperbolic, that is, an exaggerated, version of their intended meaning: the object or action X you mention is much larger, greater, longer, more intense, or more numerous than the Y you really mean. For instance, to show that you are sorry about something, in Italy you can touch your cheek from the eye-socket down, as if indicating a tear sliding down, while displaying an expression of sadness: conveying “I cry” is hyperbolic with respect to simply showing sorrow. Hyperbole is frequent in obscene, threatening, and insulting gestures, where the action threatened is much harder or more conspicuous than the one that would really be performed.

5.4. Irony

In the rhetorical figure of irony, the sender S communicates X while thinking NON-X, but also meta-communicates that s/he is not communicating what s/he actually thinks (Anolli, Ciceri, and Riva 2002; Attardo 2000; Attardo et al. 2003; Castelfranchi and Poggi 1998). Clapping hands (Fig. 111.17) has a rhetorical reading besides the literal meaning: it literally means “I praise you”, but may be used ironically, meaning “I blame you” or “I tease you”.

5.5. Allusion

Some gestures use the figure of allusion. Allusion consists in referring to someone or something without explicitly mentioning it, while letting the other infer what you refer to and



why you are not doing so explicitly (Castelfranchi and Poggi 1998). An allusive gesture is number 15, which like number 6 means “come here” but contains a nuance of allusion: it lets you infer that “there is something strange, curious, or threatening” waiting for you here. If you are summoned by this gesture, you may not be completely calm about “coming here”, but you also know you’d better come, or else…

Fig. 111.15: “Come here”

5.6. Dysphemism

While euphemism, the figure in which terms loaded with negative evaluation are replaced by less negative or even positive terms, seems hardly represented among gestures, several gestures exploit the opposite rhetorical figure, dysphemism (or cacophemism), which consists (Allan and Burridge 1991; Castelfranchi and Parisi 1980) in deliberately using a word particularly insulting or unpleasant in form or meaning, with a provocative intent. A fairly high percentage of Italian symbolic gestures are insulting or obscene, and their negative or threatening impact is often enhanced by hyperbole. A case of enhancement of a gesture that is quite dysphemistic on its own is the evolution of the “forearm jerk” (Morris et al. 1979), the phallic gesture for “fuck off”, from the middle finger up, the obscene ancient Roman “digitus impudicus” (Krüger 1999). Here, a previously small gesture performed on a single-hand scale developed into one on a whole-body scale, with both arms involved. The reason for the frequency of dysphemism is, quite likely, that gestures work as a substitute language particularly in tabooed semantic areas (Galli de’ Paratesi 1969): where some word is forbidden or sanctioned, a gesture may be used; thus the area of gestures conveying negative evaluation, obscene components, and insults is proportionally wide within the overall gestural lexicon. This particular load of dysphemistic content in gestures is partly due to the very essence of visual communication: a visual signal, unlike an acoustic one, cannot be perceived beyond an obstacle, so its delivery allows the sender to select the addressee, and this makes it particularly fit for intimate, secret, furtive communication (Kendon 1981; Poggi 2007).

6. The gestionary

The form and meaning of symbolic gestures have been represented in various gesture dictionaries: see Cocchiara (1977); De Jorio (2000); Diadori (1990); Munari (1963),



(1994); Poggi and Magno Caldognetto (1997) for Italian symbolic gestures; Mallery (1972 [1881]) for American Indians; Efron (1941) for Jewish gestures; Morris et al. (1979) for those spread in the Mediterranean cultures; Meo-Zilio and Mejía (1983) for South American gestures; Payrató (1993) for Catalan; Kreidlin (2004) for Russian; Brookes (2004a, 2004b) for South African gestures. A possible way to record symbolic gestures is a gestionary (Poggi 2002): a list of gesture-meaning pairs representing the lexical competence of people who, in a certain culture, use a system of symbolic gestures. Such competence entails, for each gesture, its physical as well as its semantic and pragmatic aspects: on the one hand, how it is motorically performed and how it appears perceptually; on the other, what it means and what its pragmatic function is, i.e., its literal and possibly rhetorical meanings, its gestural synonyms, its contexts of use. The semantic and pragmatic side of an entry in a gestionary includes the information below, exemplified in Tab. 111.1 by cases of Italian symbolic gestures.

(i) Verbal formulation: the gesture is glossed with its most frequent verbal paraphrase(s). For example, gesture 16 in Tab. 111.1, approaching fists with palms down and extended fingers parallel, can be paraphrased as “they have an understanding with each other”, “they are lovers”, or simply “there is a link”;

(ii) Context: some contexts are provided in which the gesture can be used: gesture 16 can be used while speaking of two persons or two events;

(iii) Synonyms: other gestures are possibly mentioned that have (about) the same meaning as the analyzed one; a synonym of number 16 is the hand oscillating on the wrist with thumb and index open in a curve;

(iv) Semantic content: a definition is provided of the meaning of the gesture, similar to those of word dictionaries.
Gesture number 16 means “link between two persons or events”;

(v) Grammatical classification: gestures are classified according to the proto-grammatical distinction between holophrastic and articulated gestures (Poggi 1983), depending on whether they have the meaning of a whole sentence with its built-in performative, or of a single word or semantic predicate. Number 16 is articulated because the mentioned link can be either asserted in an act of information or asked in a question. On the contrary, gesture number 17, clapping hands, is holophrastic because it has the performative of praise (or ironic praise, i.e., blame) incorporated in it, so much so that it cannot be used as simple (non-evaluative) information nor as a question or a command;

(vi) Pragmatic classification: holophrastic gestures are classified as to their specific performative as, say, questioning, requesting, or threatening gestures, and so on. While number 17 has a performative of praise, number 8, “I can’t bear him/her”, has a performative of information;

(vii) Semantic classification: the semantic content of the gesture is classified according to the typology presented in Poggi (volume 1), as providing information about the world, the sender’s mind, or the sender’s identity. Here, gesture 16, “link between the two”, bears information on the world, whereas gesture 17, “praise”, gives information on the sender’s mind. The gesture of putting one’s hand on one’s



heart, which means “I/we, the noble one/s”, conveys information on the sender’s identity in that it aims at providing a positive image of the sender.

Finally, two lines of the gestural entry may represent the work of rhetorical devices:

(viii) Source rhetorical meaning of the gesture: the previous meaning from which the present meaning of a gesture derives. Gesture 8 derives its present meaning “I can’t bear him/her” through the rhetorical device of metaphor from the previous meaning “I can’t digest this”;

(ix) Coexisting rhetorical meaning: in gesture 17, the ironic meaning of blame presently coexists with the literal meaning of praise.
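The nine fields of a gestionary entry described above amount to a small lexical record. Purely as an illustrative sketch (the class and field names below are hypothetical renderings of fields (i) to (ix), not a format proposed by Poggi), such an entry could be modeled as follows, using gesture 16 as populated in the text:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GestionaryEntry:
    """One gestionary entry: a gesture-meaning pair with its semantic
    and pragmatic description, fields (i)-(ix) in the text."""
    gesture_id: int
    verbal_formulation: List[str]              # (i) most frequent verbal paraphrases
    context: List[str]                         # (ii) contexts of use
    synonyms: List[str]                        # (iii) gestures with (about) the same meaning
    meaning: str                               # (iv) dictionary-like definition
    grammatical_class: str                     # (v) "holophrastic" or "articulated"
    pragmatic_class: Optional[str] = None      # (vi) performative, for holophrastic gestures
    semantic_class: Optional[str] = None       # (vii) world / sender's mind / sender's identity
    source_rhetorical: Optional[str] = None    # (viii) previous meaning the present one derives from
    coexisting_rhetorical: Optional[str] = None  # (ix) rhetorical meaning coexisting with the literal one

# Gesture 16, "link between two persons or events", as described in the text
gesture_16 = GestionaryEntry(
    gesture_id=16,
    verbal_formulation=["se l'intendono = 'they have an understanding'",
                        "c'e' del tenero = 'they are lovers'",
                        "connessione = 'link'"],
    context=["speaking of two persons", "speaking of two facts"],
    synonyms=["hand with thumb and index open oscillates on wrist"],
    meaning="link between persons or events",
    grammatical_class="articulated",
    semantic_class="world",
)

# Articulated gestures have no built-in performative, so field (vi) stays empty
assert gesture_16.pragmatic_class is None
```

The optional fields mirror the fact that, as the text notes, only holophrastic gestures carry a performative and only some gestures have a source or coexisting rhetorical meaning.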

Tab. 111.1: The semantic and pragmatic analysis of gestures in the Italian Gestionary (Fig. 1, 3, 5, 6, 7, 15 are drawn from Poggi [1987]; Fig. 2, 4, 8–14, 16, 17 are drawn from Poggi [2006]).

Gesture 16
1. Verbal formulation: se l’intendono = ‘they have an understanding’ with each other; c’è del tenero = ‘they are lovers’; connessione = ‘link’
2. Context: speaking of two persons; speaking of two facts
3. Synonyms: hand with thumb and index open oscillates on wrist
4. Meaning: link between persons or events
5. Grammatical classification: articulated
6. Pragmatic classification: –
7. Semantic classification: world
8. Source rhetorical meaning: –
9. Coexisting rhetorical meaning: –

Gesture 17
1. Verbal formulation: bravo! = ‘very good!’
2. Context: commenting on something done by B
3. Synonyms: “ring” with thumb and index
4. Meaning: sender praises Addressee
5. Grammatical classification: holophrastic
6. Pragmatic classification: praise
7. Semantic classification: mind: performative
8. Source rhetorical meaning: –
9. Coexisting rhetorical meaning: irony: blame

Gesture 18
1. Verbal formulation: mi sta qua = ‘he’s on my stomach’
2. Context: commenting on some person
3. Synonyms: –
4. Meaning: sender can’t bear some person
5. Grammatical classification: holophrastic
6. Pragmatic classification: evaluative information
7. Semantic classification: mind: social emotion
8. Source rhetorical meaning: metaphor: I can’t digest this
9. Coexisting rhetorical meaning: –



Acknowledgements

This work is partially supported by the Seventh Framework Program, SSPNet European Network of Excellence (Social Signal Processing Network), Grant Agreement N. 231287.

7. References

Allan, Keith and Kate Burridge 1991. Euphemism and Dysphemism: Language Used As Shield and Weapon. Oxford: Oxford University Press.
Anolli, Luigi, Rita Ciceri and Giuseppe Riva (eds.) 2002. Say Not To Say. Amsterdam/Washington, DC: IOS Press.
Attardo, Salvatore 2000. Irony markers and functions: Towards a goal-oriented theory of irony and its processing. Rask. International Tidsskrift for Sprog og Kommunication 12: 3–20.
Attardo, Salvatore, Jodi Eisterhold, Jennifer Hay and Isabella Poggi 2003. Multimodal markers of irony and sarcasm. Humor. International Journal of Humor Research 16(2): 243–260.
Brookes, Heather 2004a. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Brookes, Heather 2004b. What gestures do: Some communicative functions of quotable gestures in conversation among black urban South Africans. Journal of Pragmatics 37(12): 2044–2085.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington, IN: Indiana University Press.
Calbris, Geneviève 2003. L’Expression Gestuelle de la Pensée d’un Homme Politique. Paris: Editions du CNRS.
Castelfranchi, Cristiano and Domenico Parisi 1980. Linguaggio, Conoscenze e Scopi. Bologna: Il Mulino.
Castelfranchi, Cristiano and Isabella Poggi 1998. Bugie, Finzioni, Sotterfugi. Per una Scienza dell’Inganno. Roma: Carocci.
Cienki, Alan and Cornelia Müller 2008. Metaphor, gesture and thought. In: Raymond W. Gibbs (ed.), Cambridge Handbook of Metaphor and Thought, 483–501. Cambridge: Cambridge University Press.
Cocchiara, Giuseppe 1977. Il Linguaggio del Gesto. Palermo: Sellerio.
De Jorio, Andrea 2000. La Mimica degli Antichi Investigata nel Gestire Napoletano. Napoli. English translation, introduction and notes by Adam Kendon, Gesture in Naples and Gesture in Classical Antiquity. Bloomington: Indiana University Press. First published [1832].
Diadori, Pierangela 1990. Senza Parole: 100 Gesti degli Italiani. Roma: Bonacci.
Efron, David 1941. Gesture and Environment. New York: King’s Crown Press.
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98.
Frishberg, Nancy 1975. Arbitrariness and iconicity: Historical change in American Sign Language. Language 51(3): 696–719.
Galli de’ Paratesi, Nora 1969. Le Brutte Parole. Semantica dell’Eufemismo. Milano: Mondadori.
Hartmann, Björn, Maurizio Mancini and Catherine Pelachaud 2002. Formational parameters and adaptive prototype instantiation for MPEG-4 compliant gesture synthesis. Computer Animation 2002: 111–119.
Kendon, Adam 1981. Geography of gesture. Semiotica 37(1/2): 129–163.
Kendon, Adam 1992. Abstraction in gesture. Semiotica 90(3–4): 225–250.
Kendon, Adam 2000. Introduction and notes to A. De Jorio, Gesture in Naples and Gesture in Classical Antiquity. A translation of La mimica degli antichi investigata nel gestire napoletano. Bloomington: Indiana University Press. First published [1832].
Kendon, Adam 2004. Gesture: Visible Action As Utterance. Cambridge: Cambridge University Press.
Kreidlin, Grigorii E. 2004. The dictionary of Russian gestures. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 59–76. Berlin: Berlin Verlag Arno Spitz.
Krüger, Reinhard 1999. Mit Händen und Füßen. Literarische Inszenierungen gestischer Kommunikation. Berlin: Berlin Verlag Arno Spitz.
Mallery, Garrick 1972. Sign Language among North American Indians Compared with that among Other Peoples and Deaf-Mutes. Annual Reports of the Bureau of American Ethnology. The Hague: Mouton. First published [1881].
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
Meo-Zilio, Giovanni and Silvia M. Mejía 1983. Diccionario de Gestos. España e Hispanoamérica. Bogotá: Instituto Caro y Cuervo.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O’Shaughnessy 1979. Gestures: Their Origins and Distribution. London: Jonathan Cape.
Munari, Bruno 1963. Supplemento al Dizionario Italiano. Supplement to the Italian Dictionary. Mantova: Corriani.
Munari, Bruno 1994. Il Dizionario dei Gesti Italiani. Roma: AdnKronos.
Payrató, Lluís 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20: 193–216.
Poggi, Isabella 1983. Le analogie fra gesti e interiezioni. Alcune osservazioni preliminari. In: Franca Orletti (ed.), Comunicare nella vita quotidiana, 117–133. Bologna: Il Mulino.
Poggi, Isabella (ed.) 1987. Le Parole nella Testa. Guida a un’Educazione Linguistica Cognitivista. Bologna: Il Mulino.
Poggi, Isabella 2002. Symbolic gestures: The case of the Italian gestionary. Gesture 2(1): 71–98.
Poggi, Isabella 2006. Le Parole del Corpo. Roma: Carocci.
Poggi, Isabella 2007. Mind, Hands, Face and Body: A Goal and Belief View of Multimodal Communication. Berlin: Weidler.
Poggi, Isabella 2008. Iconicity in different types of gestures. Dimensions of Gesture. Special Issue of Gesture 8(1): 45–61.
Poggi, Isabella volume 1. Mind, hands, face, and body: A sketch of a goal and belief view of multimodal communication. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 622–642. Berlin/Boston: De Gruyter Mouton.
Poggi, Isabella and Emanuela Magno Caldognetto 1997. Il gestionario: Un dizionario dei gesti simbolici italiani. In: Isabella Poggi and Emanuela Magno Caldognetto (eds.), Mani Che Parlano. Gesti e Psicologia della Comunicazione, 258–313. Padova: Unipress.
Poggi, Isabella and Catherine Pelachaud 2008. Persuasion and the expressivity of gestures in humans and machines. In: Ipke Wachsmuth, Manuela Lenzen and Günther Knoblich (eds.), Embodied Communication in Humans and Machines, 391–424. Oxford: Oxford University Press.
Radutzky, Elena 1981. Iconicità e arbitrarietà. In: Virginia Volterra (ed.), I Segni Come Parole, 39–48. Torino: Boringhieri.
Radutzky, Elena 1987. La lingua dei segni dei sordi e la comunicazione “non verbale”. In: Grazia Attili and Pio E. Ricci Bitti (eds.), Comunicazione e Gestualità, 86–107. Milano: Franco Angeli.
Radutzky, Elena 1992. Dizionario Bilingue Elementare della Lingua Italiana dei Segni. Roma: Kappa.
Ricci Bitti, Pio E. and Isabella Poggi 1991. Symbolic nonverbal behavior: Talking through gestures. In: Robert Feldman and Bernard Rimé (eds.), Fundamentals of Nonverbal Behavior, 433–457. New York: Cambridge University Press.
Stokoe, William C. 1978. Sign Language Structure: An Outline of the Communicative Systems of the American Deaf. Silver Spring: Linstock Press.




Xu, Jiang, Patrick J. Gannon, Karen Emmorey, Jason F. Smith and Allan R. Braun 2009. Symbolic gestures and spoken language are processed by a common neural system. Proceedings of the National Academy of Sciences 106(49): 20664–20669.

Isabella Poggi, Rome (Italy)

112. Head shakes: Variation in form, function, and cultural distribution of a head movement related to “no”

1. Introduction
2. Cultural distribution
3. Variations in form and function
4. Organization of the head shake in relation to verbal negation
5. Conclusion
6. References

Abstract

This chapter documents a gesture that appears to be universal yet culturally marked, integrated with language yet also independent: the head shake. After summarizing the debate about the cultural distribution of head shakes, I focus on the variations in form and function that head shaking exhibits, and then I use English as a case study to describe how the head shake relates to verbal negation (primarily in spoken language but also in signed language).

1. Introduction

Although gesture studies often “pay dues to the extraordinary status of the human hand” (Streeck 2009: 4) and thus define “gesture” as “communicative movements of the hands and arms” (Müller 1998: 13), several head gestures also play a crucial role in face-to-face communication. By far the most famous of these is the head shake, which according to Kendon’s (2002) definition occurs “whenever the actor rotates the head horizontally, either to the left or the right, and back again, one or more times, the head always returning finally to the position it was in at the start of the movement” (Kendon 2002: 149). In this chapter, I will document the cultural distribution of the head shake, and then, focusing on its use within communities where it has a primarily negative meaning, I will examine how the head shake varies in form, in function, and in relation to verbal negation when produced as part of a negative utterance.

2. Cultural distribution

In Morris’ (1994) World Guide to Gestures, the head shake appears as a widespread gesture or emblem that means “No!” (Morris 1994: 144). But although widespread, the negative meaning of the head shake is not universal, as evidenced initially by Darwin’s (1872) casual observations of variation in gestures for affirmation and negation on the Malay Peninsula, the Guinea coast, and in China, South Africa, Australia, Greece, and Turkey (Darwin 1872: 164–166). In a more systematic observation of “motor signs for yes and no”, Jakobson (1972) circumscribes three main systems of head gestures associated with affirmation and negation and locates them geographically. A pair of binary systems emerges, in which the head shake and the head nod are opposites for negative and positive response signals, while in the third, less common system a form of head nod expresses both affirmation and negation, the distinguishing feature being facial expression (see entries in Morris [1977: 142–146] for the “head nod”, “head shake”, “head side-turn”, “head toss 1”, and “head toss 2”). According to Jakobson, systems where the head shake means “no” are in use among “the vast majority of European peoples, including, among others, the Germanic peoples, the East and West Slavs (in particular, the Russians, Poles, and Czechs), the French and most of the Romance peoples, etc.” (1972: 92). The system where the head shake means “yes” is referred to as the “Bulgarian code” (Jakobson 1972: 93) but also has “parallels among a few ethnic groups in the Balkan Peninsula and the Near East” (Jakobson 1972: 93). Vávra (1976: 95) summarizes the two systems as follows:

System A (Czechs, Russians, English, French, Germans)
ASSENT – nodding the head vertically
DISSENT – turning the head horizontally

System B (Bulgarians)
ASSENT – turning the head horizontally
DISSENT – nodding the head vertically

[Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 1496–1501]

Although the two systems could seem like opposites, several researchers have pointed out qualitative differences in the movements involved (see the collection of head gestures discussed by Morris 1977: 142–146). Jakobson (1972) observed that the initial movement of the head shake was different: to the left for negation in System A, to the right for affirmation in System B (1972: 93–94). Evoking the origins of the head shake gesture, Jakobson (1972) related this difference in directionality to vision and audition. On the one hand, turning the head to the side may mean “no” because it is primarily an action that breaks eye contact, thereby symbolizing “alienation, refusal, the termination of direct face-to-face contact” (Jakobson 1972: 93); on the other hand, a sideways turn of the head may also mean “yes” because “the addressor of the affirmative cue offers his ear to the addressee, displaying in this way heightened attention well-disposed to his words” (Jakobson 1972: 94). To test these hypotheses experimentally, Vávra (1976) elicited head shakes from students in Czechoslovakia (System A) and from students in Bulgaria (System B). She found that head shakes began consistently to the left in A and consistently to the right in B, and she related the movements to either breaking contact with the dominant eye (A) or bringing the dominant ear closer (B), thus claiming to confirm Jakobson’s hypotheses. Following this line of studies, Collett and Chilton (1981) dismissed Vávra’s findings on methodological and experimental grounds. First of all, they pointed out that Vávra’s reference frame for the direction of the head shake was different from Jakobson’s. While Vávra was using an observer viewpoint (i.e., “left” in relation to the person coding the videos), Jakobson was apparently using an actor viewpoint (i.e., to the speaker’s left). Second, Collett and Chilton criticized Vávra’s method of eliciting head shakes by asking subjects “to express dissent in a mimic manner” (Vávra 1976: 97), arguing that such a method yields unnatural results because “the active effort required to consciously mime a gesture may produce a stylized form of response which bears little or no relation to the ways in which people perform the gesture” (Collett and Chilton 1981: 68). Third, they argued that the initial movement of the head should not be used as a reliable measure of the head gesture’s meaning. For them, “slight, preparatory movements are quite different from the major swings of the head in terms of speed, amplitude, and function, and should therefore be considered separately” (1981: 69). In their own study, Collett and Chilton (1981) used regression analysis of independent variables onto head shakes and found that “laterality, whether it be with respect to the eyes, the ears, the head, or these modalities in combination, is unrelated to the ways in which people signal negation with the head” (1981: 65–66). Their results showed that “head laterality is not a stable feature of the individual, but varies across the occasions on which negation is expressed” (Collett and Chilton 1981: 66). Disagreeing with both Jakobson and Vávra, they argued that breaking eye gaze cannot be the motivation for a negative head shake, since “even the most cursory of observations of headshaking reveals that this is not what happens”; instead, “while the head is rotating the eyes invariably remain fixed on the addressee” (Collett and Chilton 1981: 67; see McClave et al. [2007: 345–347] for a complementary summary of the Jakobson–Vávra–Collett and Chilton debate).
There is nonetheless still general consensus that the head shake for negation derives semiotically from the behavior observed among children of turning one’s head (and thus the senses) away from unpleasant or unwanted sights, smells, and tastes (see Calbris 2011; Morris 1977). In Calbris’ (2011: 211–212) terminology, the head shake in this sense can be considered a “reflex gesture”.

3. Variations in form and function

Empirical research focusing on the negative head shake has emphasized a plurality of forms, functions, and meanings. In terms of methods for documenting head gestures in general, Birdwhistell’s (1970) annotation scheme for the body section “head” accounts for his observation that head gestures vary greatly in articulation. He includes kinegraphs for “full”, “half”, and “small” head movements, with diacritics to account for movements that were “normal”, “stressed”, or “oversoft”, while a further catalogue of arrows, circles, slashes, and dashes codes variations in direction (1970: 259). Unlike hand gestures, which can exploit several parameters of gestural action, head gestures make use primarily of the movement parameter, especially: direction, degree or amplitude, quality, quantity, velocity, and duration of movement. Tab. 112.1 summarizes vocabulary for coding these features, as derived from the literature on head gestures.

Tab. 112.1: Different features for head action with descriptors

Direction of movement: vertical, horizontal, diagonal, side-to-side, back-and-forth, etc.
Degree or amplitude of movement/rotation: large, deep, shallow, etc.
Quality of movement: abrupt, extended, exaggerated, relaxed, contained, staccato, etc.
Quantity of movements: one, two, single, repeated, etc.
Velocity: slow, moderate, rapid, etc.
Duration: fleeting, sustained, etc.
Articulation of the neck: flexion, extension, rotation, etc.

McClave (2000) has highlighted the multifunctionality of head shakes by identifying semantic, cognitive, and interactive functions for the gesture in American English. Semantically, head shakes may express intensification when occurring with words like “very”, “a lot”, and “great” (2000: 861), while they may mark uncertainty with words like “I think”, “whatever”, and “whoever” (2000: 862). In the region of lexical repairs, McClave interpreted head shakes as the speaker’s outward expression of an inner cognitive process of “erasing” or “wiping” the error away (2000: 869; on the function of head movements in the speech production process, see Hadar 1989). In a follow-up comparative study, only the head shake associated with intensification was found to occur cross-culturally among speakers of Arabic, Bulgarian, Korean, and African-American Vernacular English (McClave et al. 2007). Studying the semiotics of French gestures, Calbris (2011) emphasizes her concept of gestural polysemy and claims that “the head shake is simultaneously an emblem of negation and one of the co-speech signs of totality” (2011: 175). As a substitute for speech, the head shake expresses “no”, but when combined with speech, “it can accompany verbal utterances of positive assessment, of certainty, and of agreement” (Calbris 2011: 173). According to Calbris, the “positive meanings” of the head shake in a community where it primarily relates to negation nonetheless derive from an underlying negative implication: a positive assessment implies a rejection of actual or imagined objections, while certainty implies the negative “no doubt” (Calbris 2011: 174–175; Calbris 2005). In this line of reasoning, it is always possible to derive an implied negative from an otherwise positive statement, and the head shake is “equivalent to the antithetical paraphrase” that captures the implied negation (Calbris 2011: 174). Corroborating Calbris’ theory, Kendon (2002) has used a study of a collection of head shakes among Neapolitan Italians to argue that, at least within communities where the head shake is also used as an emblem for “no”, “the best understanding of the significance of the head shake is arrived at if it is assumed that the head shake may be understood as an expression of negation” (Kendon 2002: 150). Findings from studies of multimodality in language acquisition would appear to be consistent with the idea that head shakes are primarily negative.
In Andrén’s (2010) study of Swedish children’s gestures from 18 to 30 months, the head shake was first of all tightly connected to negative response signals and negative words, only later being used more “flexibly” or with various other expressions.
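The movement features summarized in Tab. 112.1 amount to a small coding vocabulary for head gestures. Purely as a hypothetical illustration (the descriptor sets come from the table, but the `annotate` helper and its validation logic are not an established coding tool), an annotation could be checked against that vocabulary like this:

```python
# Descriptor vocabulary for head action, taken from Tab. 112.1
# (the "etc." in each table row is dropped; the sets are open-ended in principle).
FEATURES = {
    "direction": {"vertical", "horizontal", "diagonal", "side-to-side", "back-and-forth"},
    "amplitude": {"large", "deep", "shallow"},
    "quality": {"abrupt", "extended", "exaggerated", "relaxed", "contained", "staccato"},
    "quantity": {"one", "two", "single", "repeated"},
    "velocity": {"slow", "moderate", "rapid"},
    "duration": {"fleeting", "sustained"},
    "neck_articulation": {"flexion", "extension", "rotation"},
}

def annotate(**coding):
    """Validate a head-gesture annotation against the feature vocabulary."""
    for feature, value in coding.items():
        if feature not in FEATURES:
            raise ValueError(f"unknown feature: {feature}")
        if value not in FEATURES[feature]:
            raise ValueError(f"{value!r} is not a listed descriptor for {feature}")
    return coding

# A canonical negative head shake per Kendon's (2002) definition:
# repeated horizontal rotation of the head.
head_shake = annotate(direction="horizontal", quantity="repeated",
                      neck_articulation="rotation", velocity="moderate")
```

The point of the sketch is only that the table's rows behave like independent annotation dimensions, so one head movement is described by choosing at most one descriptor per feature.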

4. Organization of the head shake in relation to verbal negation The examples in Kendon's (2002) paper on the head shake show that speakers of Neapolitan Italian organize the gesture in relation to speech in several ways. They may synchronize the gesture with negative words like "no", perform it in the absence of speech as a negative response signal on its own, synchronize the gesture with larger stretches of discourse, and place it either before or after a negation. When speakers perform the


VII. Body movements – Functions, contexts, and interactions

gesture after the verbal utterance, the head shake may serve as a kind of "negative tag". While examples in other papers suggest similar variations, there are few studies that explicitly address the organization of head shakes in relation to speech. Meanwhile, several authors have observed that their co-occurrence over a stretch of utterance may be grouped into phases of action not unlike (and often corresponding to) those of manual gestures (e.g., Kendon 2004: 121; McClave et al. 2007: 357–358). For manual gestures associated with negative utterances, research shows that speakers produce and organize certain gesture forms in relation to verbal negation in highly specific ways. With sentential negatives in English, for example, Harrison (2010) studied the "palm down" gesture associated with negation and observed that speakers prepared the gesture in advance of the verbal negative particle, coordinated the stroke of the gesture with the particle, and then maintained the gesture in a hold as they uttered the words to which the negation particle applied (i.e., node and scope of negation). A follow-up study showed that gesture organization is also sensitive to the presence of negative elements in the utterance other than particles, such as discourse elements being rejected and so-called Negative Polarity Items (Harrison 2013). Some preliminary research indicates that these findings extend to the head shake. In Harrison (2009), I analyzed how speakers combine verbal and gestural resources when they negate, looking at the temporal relations between negative particles, manual gestures associated with negation, and head shakes (Harrison 2009: 201–206). The analysis of several examples indicates that English speakers also organize the head shake in relation to the negative constructs in speech. Research currently being reviewed for publication documents speakers aligning both manual gestures and head shake activity with node, scope, and focus of negation.
Systematicity of head shake use and organization in relation to verbal negation is also a characteristic of signed languages. Similarities are unsurprising if we agree with Quer (2012) that "[t]he main manual and non-manual ingredients of linguistic negation can be traced back to affective and conventionalized gestures of the hearing community the languages are embedded in" (2012: 316). The head shake is the most widely used non-manual marker for negation in signed languages (see Pfau and Quer 2010), where it can have syntactic properties similar to spoken language particles like "not" (Pfau [2008], for example, describes the "grammar of the head shake" in German Sign Language). However, while a head shake can reverse the polarity of an otherwise positive utterance in signed language, functions or operations with such a "linguistic" extent have yet to be documented for the head shake in coordination with utterances in spoken languages.

5. Conclusion In conclusion, the head shake is a common gesture with diverse functions that vary across communities around the world. Where the gesture is associated with negation, it may function as a negative response signal and express concepts related to negation, as well as be involved in positive evaluations via negative implicature. How speakers organize the head shake in relation to their speech will depend on the role it plays in the interaction as well as the co-presence of other forms of negation in the utterance (both verbal and gestural). Studying the head shake gesture requires attention to form characteristics and different phases of activity, as well as a clear indication of the reference point taken for directionality (i.e., observer's vs. actor's viewpoint).

112. Head shakes


Finally, paying more attention to the head shake gesture may help shed light upon the combination of verbal and gestural resources in multimodal communication, the unique contribution of different articulators in that process, and the relation between gesture and linguistic universals like negation.

6. References

Andrén, Mats 2010. Children's Gestures from 18 to 30 Months. Ph.D. dissertation, Centre for Languages and Literature, Lund University.
Birdwhistell, Ray L. 1970. Kinesics and Context: Essays on Body Motion Communication. Philadelphia: University of Pennsylvania Press.
Calbris, Geneviève 2011. Elements of Meaning in Gesture. Amsterdam: John Benjamins.
Collett, Peter and Josephine Chilton 1981. Laterality in negation: Are Jakobson and Vávra right? Semiotica 35(1–2): 57–70.
Darwin, Charles 1872. The Expression of the Emotions in Man and Animals. London: John Murray.
Hadar, Uri 1989. Two types of gesture and their role in speech production. Journal of Language and Social Psychology 8(3–4): 221–228.
Harrison, Simon 2009. Grammar, gesture, and cognition: The case of negation in English. Ph.D. dissertation, Université Michel de Montaigne – Bordeaux 3.
Harrison, Simon 2010. Evidence for node and scope of negation in coverbal gesture. Gesture 10(1): 29–51.
Harrison, Simon 2013. The temporal coordination of negation gestures with speech. Proceedings of TiGeR 2013, http://tiger.uvt.nl/pdf/papers/harrison.pdf.
Jakobson, Roman 1972. Motor signs for 'Yes' and 'No'. Language in Society 1(1): 91–96.
Kendon, Adam 2002. Some uses of the head shake. Gesture 2(2): 147–182.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
McClave, Evelyn Z. 2000. Linguistic functions of head movements in the context of speech. Journal of Pragmatics 32(7): 855–878.
McClave, Evelyn Z., Helen Kim, Rita Tamer and Milo Mileff 2007. Head movements in the context of speech in Arabic, Bulgarian, Korean, and African-American Vernacular English. Gesture 7(3): 343–390.
Morris, Desmond 1977. Manwatching: A Field Guide to Human Behaviour. New York: Harry N. Abrams.
Morris, Desmond 1994. Bodytalk: A World Guide to Gestures. London: Jonathan Cape.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Arno Spitz Verlag.
Pfau, Roland 2008. The grammar of headshake: A typological perspective on German Sign Language negation. Linguistics in Amsterdam 1(1): 37–74.
Pfau, Roland and Josep Quer 2010. Nonmanuals: Their grammatical and prosodic roles. In: Diane Brentari (ed.), Sign Languages (Cambridge Language Surveys), 381–402. Cambridge: Cambridge University Press.
Quer, Josep 2012. Negation. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language (Handbooks of Linguistics and Communication Science 37), 316–339. Berlin: Mouton de Gruyter.
Streeck, Jürgen 2009. Gesturecraft: The Manufacture of Meaning. Amsterdam: John Benjamins.
Vávra, Vlastimil 1976. Is Jakobson right? Semiotica 17(2): 95–110.

Simon Harrison, Ningbo (China)



113. Gestures in dictionaries: Physical contact gestures

1. Introduction
2. The history of gesture dictionaries
3. A "Dictionary of Physical Contact Gestures"
4. Conclusion
5. References

Abstract This chapter examines the challenges of collecting gestures into dictionary form and attempts to justify the study of gestures involving physical contact within the broader field of gesture research. A brief look at the history of gesture dictionaries introduces some of the recurring problems in gesture lexicography through the development of classification schemes and the attempt to achieve comprehensive yet manageable descriptions of gestures. Using the author's "Dictionary of Contemporary Physical Contact Gestures in the Mid-Atlantic Region of the United States" as an example, we present a potential framework for describing, organizing, and indexing gestures that could be applied to a variety of gesture collections, and argue that a certain level of standardization can help facilitate cross-cultural and comparative analysis. In investigating the state of gesture dictionaries, it becomes apparent that until now, gesture collections have mostly focused on speakers' solo gestures in primarily dyadic communication relationships. While there has been some research into so-called "physical contact gestures" in other fields, gestures with physical contact as a defining feature have been largely neglected from the perspective of linguistics/semiotics.

1. Introduction Until the end of the nineteenth century, gesture studies in the West predominantly focused on the gestures of speakers, actors, and political and religious leaders (Müller 1998: 55). It is not until the twentieth century that we see a concern with gestures in everyday use. This shift in attention brings with it additional challenges in attempting to catalog and document gestures. The volume of gestures in everyday use and the variety of their meanings and interpretations make it difficult to adequately capture the full range of potential expressions in a form that remains easy enough to navigate that it can serve as a reference. This is in addition to the traditional problem of describing fluid movement in words that are descriptive enough to be meaningful without burying the spontaneous aspect of gesture. While working on the "Dictionary of Contemporary Physical Contact Gestures in the Mid-Atlantic Region of the United States", some of the solutions to these problems presented themselves as a possible foundation for gesture dictionaries in general, regardless of their specific focus. To put the current dictionary in context, we will take a short look at the development of gesture documentation in specific cultural contexts and the resulting creation of gesture dictionaries. Then we will more closely examine the structure of the dictionary to assess the features of each entry that aid investigation and analysis.

Müller, Cienki, Fricke, Ladewig, McNeill and Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 1502–1511



2. The history of gesture dictionaries It is in the ethnographic description of everyday Neapolitan gestures, La Mimica degli Antichi Investigata nel Gestire Napoletano ('The mime of the Ancients investigated through Neapolitan gesture') by Andrea de Jorio ([1832] 2000), that we first see the defining elements of a gesture dictionary. The gestures are placed within the context of their use with descriptions of the communication situation, participants, body posture, verbal expression, and accompanying facial expression. This dictionary can also be read as a guide to everyday gestures in use in nineteenth-century Naples; it encompasses nearly all aspects of research into emblematic gestures and provides a template for the organization and content of gesture dictionaries to come. The first systematic investigation into everyday gestures from a cross-cultural perspective comes in the 1930s from David Efron ([1941] 1972). In "Gesture, Race and Culture", Efron compares the gestures of two cultures and two generations in everyday situations in relation to performance, coding, and meaning. The classifications used in this work can be seen as a starting point for all classification schemes to follow, and the work in general opened a new phase of scientific interest in gestures (Müller 1998; Kendon 1995). Two other noteworthy dictionaries attempt a comprehensive and systematic description of gestures throughout the world: "Bodytalk" by Morris (1995) and the "Dictionary of Worldwide Gestures" by Bäuml and Bäuml (1997). Poggi (2001) concentrates on the structure of the entries to create a clear framework of parameters. Gestures are divided into multiple parameters on the morphological and semantic levels so that each gesture can be presented as a combination of these fundamental elements.
Pertinent to the present essay, it is important to also mention the collection of everyday gestures in Berlin, the "Berlin Dictionary of Everyday Gestures" (BLAG), a work which has been led by Roland Posner and which has its roots in the interdisciplinary research project 'Gesture recognition with sensor gloves' (Gebärdenerkennung mit Sensorhandschuhen) at the Technical University of Berlin. This dictionary has the goal of documenting and describing all speech-replacing gestures in everyday use in the Mid-European cultural sphere that have conventional meanings and standardized movement patterns.

3. A Dictionary of Physical Contact Gestures The "Dictionary of Physical Contact Gestures" offers a systematic presentation of the gesture repertoire similar to the "Berlin Dictionary of Everyday Gestures", but of the Mid-Atlantic area of North America instead of the Middle European culture. The subject of the dictionary is a collection of speech-replacing gestures observed and analyzed during field research between 2007 and 2009 in a specific geographical region defined by the borders of a group of U.S. states. Belonging to this group are: New York, New Jersey, Pennsylvania, Delaware, Maryland, Washington D.C., and occasionally Virginia and West Virginia. The empirical studies resulted in a collection of 156 movement forms of physical contact, that is, gestures that occur through body contact between two or more people. Such gestures have been designated by Posner (2007) in the German language as fremdberührende Gesten, which we translate into English as 'Physical Contact Gestures' (or PCG). They are unique in that they are defined not just by movement but explicitly by touch. They can include all parts of the body



and can be speech-accompanying or speech-replacing. In contrast to use movements that only serve a physical function (Gebrauchsbewegungen), these gestures have a meaning that is constituted by the communicative content of the gesture. Like the "Berlin Dictionary of Everyday Gestures", the "Dictionary of Physical Contact Gestures" offers a systematic presentation of the gesture repertoire as well as an uncomplicated layout of the individual entries. The dictionary is separated into two sections: Part 1 is a 57-page introduction that explains the data collection, research methods, and structure of the individual entries, situates the work in the context of current research, and investigates communication constellations that are unique to Physical Contact Gestures as well as the potential problems that may be encountered when using standard sender-receiver models. Part 2 consists of 503 pages of individual dictionary entries. To explain the composition of the dictionary entries, we will look at the first gesture, "Handshake", as an example. For each entry it was important to establish the gesture as a formal entity, as a sign, in order to analyze the meaning level. To this end, each entry has a unique title as its headline, which then serves as a navigation reference for the table of contents and the indices. Each entry title can be viewed as a compact description of the body movement involved in the gesture. This description is strictly on the physical level and does not draw upon the meaning of the gesture. The entry title is followed by a photo that shows the stroke (Kendon 1980) of the gesture.

Fig. 113.1: Stroke of the entry “Handshake”

3.1. The Expression Level Under the subtitle "expression" follows a detailed written description of the movements and hand and body configurations used while performing the gesture. Here again, an attempt is made to avoid descriptive elements that come from the interpretation or meaning of the gesture. In order to make the description of specific hand configurations more easily understood, the finger alphabet of American Sign Language is used.

While standing face to face with P2, P1 lifts the right hand toward the front of his or her body in a slightly left diagonal with the right arm either straight or bent at the elbow. The hand is held in a horizontal B-shape with the palm facing inward and the thumb extended vertically. The hand is held out to meet the extended right hand of P2. Both hands meet, palms touching, and the fingers wrap around the bottom of the other's hand in a slightly C-shaped grip with the thumb on top pointing toward the fingertips of the same hand. P1 and P2 then either squeeze the hand or briefly pump their hands once or several times in the same movement. This pumping movement involves lifting the joined hands slightly above the horizontal plane of the starting position, then dropping slightly below this plane and returning to the original position. (Lynn 2011: 1)

In order to make the descriptive texts easier to understand, it is advantageous to use variables consistently across all entries. For non-reciprocal gestures, the sender, that is, the participant who initiates the movement pattern, is called P1 and the receiver P2. Additional participants in the gesture are simply numbered consecutively P3, P4, etc. People outside of the sphere of body contact, who by their observation of the contact gesture are potential or actual addressees of the communication, are called Px. After the comprehensive verbal description comes, where relevant to the gesture, the sub-line "expression variants", which precedes a listing of possible variations in executing the gesture. These expression variants, together with the prototypical and most conventionalized expression of the body movement, form a "gesture family" (Calbris 1990; Fricke, Bressem, and Müller this volume; Kendon 1996, 2004; Müller 2004; see also Bressem and Müller this volume). Consider, for example, the variations of "Handshake":

Fig. 113.2: "Handshake" with both hands

Fig. 113.3: "Handshake" with elbow

Fig. 113.4: "Handshake" with shoulder

Fig. 113.5: "Handshake" with bow



Fig. 113.6: “Handshake” with left hands

3.2. The Meaning Level In addition to the level of expression, each gesture entry is analyzed on the level of meaning. Once the physical form of the gesture has been described in terms of body configuration and movement, the semantic structure is analyzed in the form of communicative meaning. This analysis of the communication acts comes under the heading "meaning and use variants". The gestures are investigated using ideas and terminology from the speech act theories developed by John Austin (1962) and John Searle (1969). In this dictionary, we investigate Physical Contact Gestures using the concepts developed by Austin and expanded by Searle, with the difference that we are not dealing with conventionalized noise production but rather with conventionalized body movements. This comparison becomes clearer when we consider the following chart:

Tab. 113.1: Speech Acts and Touch Acts

                     Speech Act                               Touch Act
Locutionary Act      Utterance: "The dog will bite you."      Body Movement: P1 raises P2's chin with a balled fist.
Illocutionary Act    Action: threatening                      Action: threatening
Perlocutionary Act   Causal Effect: attacker flees            Causal Effect: P2 considers the effect of a punch in the face.

We can see here that gestures have speech-like properties and communicate in a similar fashion. Speech-replacing body movements in the form of emblematic Physical Contact Gestures can be interchanged with speech utterances and can form, like speech, a rule-based, learnable, and culturally determined symbol system.

3.2.1. Meaning and Use Variants Consistent with applying speech act theory to gestures, many dictionary entries also have interjections and/or colloquial renderings added. We have used these terms to categorize sounds, words, or sentences that might be uttered during the performance of the gesture. These can be automatically added to certain conventional gestures during everyday use but are not necessary in order to complete the communication act. The interjections are a list of single words or combinations of words that function at the level of a sentence as well as all sounds uttered as a by-product of performing the gesture. Examples include: lip smacking while kissing, a spontaneous grunt while lifting someone up, moaning, screaming, etc. Colloquial renderings are shown as examples of sentences that could accompany the gesture. It must be specifically noted here, however, that although both interjections and colloquial renderings are associated with specific gestures, they are not seen as "belonging" to the gesture. No gesture in this collection is absolutely speech-accompanying, which means that they all retain their meaning whether or not there is an accompanying verbal utterance. On occasion, there is an extra row added to the meaning and use variants labeled "applies to variants". This is only the case when a specific meaning variant applies only to certain expression variants rather than all of them. To illustrate this with an example, we see under entry "1 – Handshake" the meaning and use variant III – confirming an agreement. This meaning and use variant applies only to expression variants b) and e) and not to the other expression variants of the gesture. When there is no such indication, the meaning and use variant applies to all forms, that is, all expression variants of the gesture. In order to situate the use of a gesture in a societal context and to help clarify certain meaning variants, we occasionally make use of the register labels used in the Oxford Dictionary. Within linguistics, the term "register" denotes a manner of speaking or writing that is characteristic of a certain social circumstance. Here, social relations are mirrored in language use. We see this when an employee uses a certain manner of speaking when communicating with his or her superiors and quite another when speaking with friends. Likewise with gestures, we see that there are certain contexts that affect the form of a gesture or even whether or not it can be performed.
The chart following the meaning and use variants for each entry helps to communicate an overview of the characteristics of the gesture. This chart provides a quick analysis using the ten binary characteristics of physical contact as formulated by Posner (2007). Of these ten, six are relevant to Physical Contact Gestures and are here presented in a table that allows for quick classification of the gesture. All binary pairs are listed in each chart with a checkbox that is darkened next to the characteristic appropriate to the gesture. This is helpful when the gesture is being analyzed not only on the level of expression or meaning but also on the level of the tactile relationship of sender and receiver or, if appropriate, between participants. Each gesture entry is emblematic and stands on its own. With certain Physical Contact Gestures, however, we see the communicated message being underscored, strengthened, or expanded by the simultaneous performance of other individual gestures. Whenever there appears in a dictionary entry a row labeled "potentially combinable with", we find a listing of other gestures from the dictionary that can frequently be performed simultaneously with that gesture but whose presence or absence does not affect its performance. The entry for "1 – Handshake" lists seven gestures with which the handshake might potentially be combined: "3 – Separating a handshake", "5 – Enclosing someone's hand with both hands", "91 – Hugging", "103 – Kissing someone on the cheek", "104 – Simultaneously kissing cheeks", "105 – Simultaneously kissing lips", "113 – Kissing someone's hand" (Lynn 2011: 6). This does not mean that all seven possibly accompanying gestures would be performed in addition to the main gesture, but rather that they might be added individually. We may also occasionally find two or three simultaneous accompanying gestures used in common practice, but that is exceptional.
An example would be if P1 shakes P2's hand while simultaneously kissing and hugging P2. In this case, we see "1 – Handshake" combined with "104" and "91". This is a gesture combined with two simultaneous accompanying gestures, each of which is also an individual Physical Contact Gesture that is common in everyday use.

3.2.2. Related Gestures and Context Under the subheading "related gestures", a cross-reference to other gestures that are similar at the level of expression can be found, that is, Physical Contact Gestures from the collection that have a similar series of movements or hand or body configuration. Here again, we use "1 – Handshake" as an example to find that it has similarities with the following entries: "2 – Handshake with bent middle finger, 3 – Separating a handshake, 4 – Holding hands, 5 – Enclosing someone's hand with both hands, 7 – Clapping palms" (Lynn 2011: 6). One last categorization, "context", appears with some dictionary entries. This helps readers to understand the conditions necessary for a gesture to be performed. We have here the opportunity to distinguish certain gestures collected in the dictionary that are known but not actively performed in the region of study. We have chosen to categorize such gestures as belonging to the passive gesture repertoire of the region. Leaving such gestures out of the collection completely would not have been accurate, as they have a recognized meaning even if not actively performed by people within the cultural area studied here. As an example, "139 – Drinking with linked arms" or "137 – Placing a sword on someone's shoulder" are readily interpreted but are not part of the active gesture repertoire in the Mid-Atlantic region of the U.S. This inclusion in the passive repertoire is noted under the "context" subheading. Also included here is additional information that helps the reader better understand the use of a gesture when it is most commonly performed between two types of people and seldom or never in other constellations. For example, "48 – Pulling on someone's earlobe" would most often be performed with an adult as sender (P1) and a child as receiver (P2). It would rarely be executed in the opposite direction. Seldom would we see this gesture being performed between two adults, but it is not impossible.
Gesture "132 – Touching feet" is most frequently performed with a woman as P1 and a man as P2, though other constellations occur. It is important here not to place too much analytic weight on these distinctions. They are intended only as a help to the reader, who may not be familiar with the gesture in question and the variable but nevertheless present norms that regulate its occurrence in the cultural space. Another example of the type of information to be found under the "context" subheading can be seen in the entry for "12 – Joining in matrimony". Here, we read that the gesture, in order to have its intended meaning, must be performed in front of a person invested with appropriate authority and at least one witness. Again, it is important not to interpret this extra information as being on par with an analysis of the communication act. A deeper analysis of such a gesture could surely be made following Searle's lead in investigating institutional facts and collective intentionality (see, for example, Searle 1969), but such analysis is outside the scope of this work. It bears repeating that the "context" information should only be viewed as an aid to the reader.

3.3. Indices The order of the gestures in the dictionary is loosely structured according to the body part making primary contact during the gesture. Since the hands are very frequently involved in Physical Contact Gestures, we have chosen to start there, working then towards the body proper by way of the arms and shoulders, then from top to bottom: head, upper torso, lower torso, leg, and foot. A more specific ordering of the gestures is available in two indices that allow quick reference for users of the dictionary. Index 1 orders the gestures according to the point on the body where the contact that defines the gesture takes place. We have divided the body into seven regions: head, upper torso, shoulder/arm, hand, lower torso, leg, and foot. Each Physical Contact Gesture is then categorized in Index 1 based on the body parts of sender and receiver that come into contact during the gesture. This allows a quick and easily understood method of searching for a specific gesture entry based on the level of expression. This categorization is always sender-based: the body part of the sender is listed first, followed by the body region of the other participant(s). Such a categorization, based solely on visually identifiable features of a Physical Contact Gesture, is intended to serve as an aid to the gesture observer, who is perhaps unfamiliar with the culture in which the gestures are used. Such a user of the dictionary would then be able to find the entry for a gesture simply by searching Index 1 based on their observation, without needing any knowledge of meaning or use. In contrast to Index 1, in which the gestures are grouped according to the visually observable features of the body contact, Index 2 seeks to provide a reference based on the meaning of the Physical Contact Gesture.
In order to achieve this, we have developed the following categories of gestures: "Greeting Gestures", "Attention Gestures", "Confirmation Gestures", "Institution Gestures", "Consolation Gestures", "Encouragement Gestures", "Affection Gestures", "Attraction Gestures", "Sexual Relation Gestures", "Assistance Gestures", "Aggression Gestures", "Playful Power Gestures", and "Indirect Contact Gestures". Each of the 156 Physical Contact Gestures can be included in at least one of these meaning-based categories. Many gestures are included in several different categories at the same time as a result of having multiple possible meanings. Again, we look to our example of "1 – Handshake". This can serve not only as a greeting gesture but also as a confirmation gesture. So, too, is "91 – Hugging" not exclusively a greeting but can also effectively be used as a consolation gesture, affection gesture, or sexual relation gesture. Index 2 therefore suggests a solution to the problem of how to categorize gestures that have the same expression but different meanings, that is, gestures that have multiple meanings but always look the same. In Index 2 we simply find the gesture, listed by its title (which, as we have seen, is a short description on the expression level), in multiple meaning categories. This categorization based on meaning and use gives the reader an idea of which gestures can be used to express these mental states and provides an easy cross-reference for finding the entry for the corresponding gesture.
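The two-index scheme described above can be summarized in a small sketch. The following Python fragment is purely illustrative and not part of the published dictionary; the field names, the sample entries, and the region labels are our own shorthand for the structures the text describes (a sender-based pairing of body regions for Index 1, and one listing per meaning category for Index 2, so that a polysemous gesture appears under several categories).

```python
# Illustrative sketch of the dictionary's two indices (assumed field names).
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Entry:
    number: int                  # entry number, e.g., 1
    title: str                   # expression-level title, e.g., "Handshake"
    contact: tuple               # (sender body region, receiver body region)
    meaning_categories: list = field(default_factory=list)

entries = [
    Entry(1, "Handshake", ("hand", "hand"),
          ["Greeting Gestures", "Confirmation Gestures"]),
    Entry(91, "Hugging", ("upper torso", "upper torso"),
          ["Greeting Gestures", "Consolation Gestures", "Affection Gestures"]),
]

# Index 1: sender-based grouping by the body regions that come into contact.
index1 = defaultdict(list)
for e in entries:
    index1[e.contact].append(e.title)

# Index 2: one listing per meaning category; an entry with several
# meanings appears under several categories.
index2 = defaultdict(list)
for e in entries:
    for cat in e.meaning_categories:
        index2[cat].append(e.title)

print(index1[("hand", "hand")])     # ['Handshake']
print(index2["Greeting Gestures"])  # ['Handshake', 'Hugging']
```

The sketch makes the key design point concrete: Index 1 is a pure expression-level lookup requiring no knowledge of meaning, while Index 2 deliberately duplicates polysemous entries across categories.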

3.4. Indirect Contact Gestures As we have seen above, both the index based on expression and that based on meaning have a last category which we have not yet explained. This category, "Indirect Contact Gestures", has not been used in this way in the gesture research literature to date and therefore deserves a more detailed explanation. The creation of this category was driven by a recognized need to include as contact gestures those communication acts in which physical contact takes place not directly from body to body but rather through a medium. This is best illustrated by the gestures "133 – Pouring water over someone's head", "134 – Touching someone through glass", "135 – Awarding a medal", "138 – Clinking glassware", "140 – Feeding someone", "144 – Tying someone's shoelace", "149 – Combing someone's hair", "152 – Pillow fight", and "155 – Kissing an adornment", for instance.

VII. Body movements – Functions, contexts, and interactions

We see in these examples that a type of contact between P1 and P2 takes place but is mediated by some physical object or artifact. What is important here is that the sender and the receiver must both touch the object simultaneously at some point in the performance of the gesture in order for the movement to be considered an indirect contact gesture.

4. Conclusion

Future documentation of gestures in dictionary form can benefit from a standardization of techniques of categorization and indexing. Such a standardization can provide a solid basis for analysis within a given cultural space as well as simplify cross-cultural comparison. The “Dictionary of Contemporary Physical Contact Gestures in the Mid-Atlantic Region of the United States”, as well as the “Berlin Dictionary of Everyday Gestures” by which it was inspired and to which it serves as an extension, both offer easily navigable reference works that can suggest possible solutions to the difficulties encountered in the lexicography of gestures. In developing the form of the entries, emphasis has been placed on creating a format that provides a simple overview to assist in navigation while also allowing for deeper analysis of the communication acts described. Additionally, in the context of analysis, the application of general principles of Austin and Searle’s speech act theory serves to justify and situate the study of Physical Contact Gestures within a broader and more mature discipline, helping to compensate for a lack of literature specific to the field.

Acknowledgements

I am indebted to Veronika and Jarmila Opletalová for providing the illustrations based on the photographic documentation from the “Dictionary of Contemporary Physical Contact Gestures in the Mid-Atlantic Region of the United States”.

5. References

Austin, John L. 1962. How to Do Things With Words. Oxford: Oxford University Press.
Bäuml, Betty J. and Franz H. Bäuml 1997. Dictionary of Worldwide Gestures. Lanham, MD: Scarecrow Press.
Bressem, Jana and Cornelia Müller this volume. The family of AWAY gestures: Negation, refusal, and negative assessment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1592–1604. Berlin/Boston: De Gruyter Mouton.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington, IN: Indiana University Press.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A translation of La mimica degli antichi investigata nel gestire napoletano (Fibreno, Naples 1832), with an introduction and notes by Adam Kendon. Bloomington/Indianapolis: Indiana University Press. First published [1832].
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton. First published [1941].
Fricke, Ellen, Jana Bressem and Cornelia Müller this volume. Gesture families and gestural fields. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1630–1640. Berlin/Boston: De Gruyter Mouton.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary R. Key (ed.), The Relationship of Verbal and Nonverbal Communication, 207–227. The Hague: Mouton.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279.
Kendon, Adam 1996. An agenda for gesture studies. The Semiotic Review of Books 7(3): 7–12.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Lynn, Ulrike 2011. Keep in Touch – A Dictionary of Contemporary Physical Contact Gestures in the Mid-Atlantic Region of the United States. PhD dissertation, Technische Universität Berlin: Digital Repository of Technische Universität Berlin. http://opus4.kobv.de/opus4-tuberlin/frontdoor/index/index/docId/3484.
Morris, Desmond 1995. Bodytalk. Körpersprache, Gesten und Gebärden. München: Wilhelm Heyne Verlag.
Müller, Cornelia 1998. Redebegleitende Gesten. Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2004. Forms and uses of the Palm Up Open Hand. A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), Semantics and Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler Verlag.
Poggi, Isabella 2001. The lexicon and the alphabet of gesture, gaze, and touch. Lecture Notes in Computer Science 2190: 235–236.
Posner, Roland 2007. Gestures with and without touching the addressee. Lecture at the Berliner Arbeitskreis für Kultursemiotik BAKS (Berlin Circle of Cultural Semiotics). Technische Universität Berlin.
Searle, John R. 1969. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press.

Ulrike Lynn, Chemnitz (Germany)

114. Ring-gestures across cultures and times: Dimensions of variation

1. Introduction
2. Ring-gestures across time – from Antiquity to the present: Stability in the motivation of meanings
3. Ring-gestures within cultures: Variations of forms and meanings
4. Ring-gestures across cultures: Stability and variation
5. A comparative analysis of gestures concerns various dimensions of variation
6. References

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 1511–1522

Abstract

This chapter presents an overview of an extremely widespread gesture: the ring-gesture. The focus of this chapter is on the dimensions of variation in the forms and meanings of this particular gestural form, which is characterized by a particular hand shape. In the ring-gesture, the fingertips of index finger and thumb or of middle finger and thumb touch each other, thereby creating a more or less round shape. The remaining fingers may be spread or curled. We will show that the varying forms go along with differences in meaning, sometimes marking cross-cultural differences, sometimes intra-cultural ones. Differing as well as stable form-meaning pairings of ring-gestures will be traced from classical times to the present, and similar as well as differing iconic motivations of ring-gestures will be presented to account for the variations and similarities in meaning. To sum up, we depart from the assumption that all formal features of a gesture may contribute to its meaning. The dimensions of variation in meaning that are pertinent in a cross-cultural, intra-cultural, and historical analysis of gestures go along with variations in form. They concern the motivation of a gestural form and the hand shape, orientation, position, and movement patterns of a given gesture.

1. Introduction

The Ring is one of the most widespread and oldest gestures we know of. It has been documented in European culture for about 2,500 years, and it is found in cultures all over the world. The ring-gesture is typically characterized by a specific hand shape, in which thumb and index finger touch each other at the fingertips so that the fingers form a (more or less) round shape: a ring. Sometimes it is also performed with the middle finger touching the thumb. The position of the remaining fingers can vary: either they are spread apart or bent (Fig. 114.1). Movement pattern, orientation, and location in the gesture space may vary too.

Fig. 114.1: Two common hand shapes of ring-gestures (Austin, 19th-century England) (Austin [1806] 1966, plates 5 and 6).

Ring-gestures have been characterized as conventionalized gestures, as emblems (Ekman 1977), or as quotable gestures (Kendon 1995b). They have stable form-meaning relations, can replace speech (Ekman and Friesen 1969), can be quoted (Kendon 1990), and tend to replace full speech acts (Müller 2010). Conventionalized gestures like the ring-gesture may vary or remain stable across cultures and times. They may survive periods of extended linguistic change, as reported for the head-tilt with chin-flip of the hand as a gestural expression for “no” in the southern parts of Italy, a reminiscence of Greek settlements in classical times dating back more than 2,000 years (Morris 1978: 77). It is noteworthy that the head-tilt remained unaffected by the linguistic changes that led to modern Italian. This emblem remained stable in a particular area over a very long time and across many different cultures, and it did so unaffected by language change and moving linguistic borders.

On the other hand, we know that the meaning of conventionalized gestures may vary across cultures and linguistic borders as well as within cultures. Thus, while the ring-gesture is a sign for “okay” in northern European countries, in the south of Europe it is recognized as a severe sexual insult. Apparently, these emblems live a life relatively independent of linguistic borders. This fact runs counter to the intuitions of the average speaker, who believes that, when language does not work for communication, we can use our hands to talk. Such implicit assumptions produce quite a lot of misunderstandings for travelers who believe that “body language is universal” (Müller 2008). The opposite is true: the bodily resources for communication are universal, but the ways in which they are exploited are a matter of culture.

In this chapter, we will be looking at the meanings and the subtle variations in form of the various ring-gestures reported in the literature from classical times to the present. We find documentation of a range of very different meanings, ranging from love and friendship to perfection, justice, sexual insult, zero, okay, and the marking of discourse moves (Calbris 1990, 2011; Kendon 1995a, 2004; Morris 1978; Neumann 2004; Seyfeddinipur 2004). We will suggest that these different meanings each have different motivations, the iconic potential of the Ring being exploited in different ways (see also Calbris 1990, 2011). Moreover, we will argue that other form features play an important role in creating variation in meanings, within cultures as well as across cultures. Subtle variations in hand shape, orientation, position, and movement of the ring-gesture are used to create different meanings.

2. Ring-gestures across time – from Antiquity to the present: Stability in the motivation of meanings

In this section, we will focus on the historical stability of ring-gestures. It is astounding how stable certain meanings of ring-gestures have been over large time spans, and how much they have resisted major cultural and linguistic changes within a particular local area. We assume that, despite a similar and clear-cut hand shape (the name giver of the gesture), we are actually facing different ring-gestures (see also Calbris’ [1990: 116–154, 2011: 23–34] notion of the polysemy of gestures). This is the case because the ring-shape can be motivated by many different semantic or semiotic bases: on the one hand, it can be used to depict all kinds of round things in the world; on the other hand, the ring-shape can also be a secondary effect of a finger pinch. So, while presenting different historical venues of ring-gestures, we will elaborate on the varying underlying iconic motivations for the Ring hand shape (see also Calbris’ [1990: 116–154, 2011: 23–34, 118–124] notion of analogical links).

2.1. Touching finger-tips as kissing lips: The Ring as an expression of love and affection from Ancient Greece to the 19th century

The ring-gesture as a sign for love, affection, and matrimony is already part of the iconography of Greek vase painters in the 5th century BC. In Fig. 114.2 we see such a ring-gesture on a vase painting. The painting shows a gestural dialog between a man and a woman. On the left-hand side we see a man holding a ring-gesture high up in the gesture space between him and the woman. The dialog depicted on the vase is described as a conversation between lovers, the loving man expressing his love with the ring-gesture (Neumann 1965: 13).

Fig. 114.2: The ring-gesture as an expression of love on a Greek vase painting (5th century BC) (Neumann 1965).

Neumann (1965) reports that in the iconography of Greek vase paintings the ring-gesture was an established sign to express love: he suggests that the touching fingertips depict the lips of a loving couple in a kiss. Note that in this case it is not the circle that is the crucial semantic element of this hand shape, but the fact that the fingertips “kiss” each other. Thus, the motivation of form, the iconic relationship motivating the meaning of the form, is the analogy between the touching lips and the touching fingertips (Calbris [1990: 116–154, 2011: 23–34, 118–124] mentions this type of motivation as an “analogical link”). Moreover, this variant of the ring-gesture appears not to be merely iconographic. We have good reason to believe that it has been in use with the same form and meaning as an everyday gesture, at least in one cultural area, for about 2,400 years. Already in the early days of the 19th century, “the first ethnographer of gesture” (Kendon 1995a), Andrea de Jorio, provided detailed accounts of the ring-gesture and its varying usages in the Neapolitan area, documenting forms and meanings still pertinent today. De Jorio conducted his research in the streets of Naples and published it in 1832 as La mimica degli antichi investigata nel gestire napoletano (de Jorio [1832] 2000). De Jorio was a canon of the cathedral of Naples as well as an archeologist and curator at the Royal Borbonic Museum of Naples, and, being an expert on Greek vases, he came to recognize that the Greek vase painters depicted people waving their hands about in much the same way as his fellow Neapolitans did. De Jorio thought that the ordinary people in the streets of Naples had preserved the gestures of their Greek forebears, who once founded the city of Naples. Therefore, he decided to study the way in which his Neapolitan fellows made use of gestures in their everyday lives, hoping that this would lead to a better understanding of the gestures depicted on the Greek vases in the museum where he worked (see Kendon 2000).
In his “Mimica”, de Jorio describes in great detail the many different uses of the ring-gesture. Among these is the Ring as an expression of love or affection: “Another gesture which indicates affection, which is in common use, is one in which index finger and thumb are brought together so that the papillae of their final joints make contact as if they are kissing each other” (de Jorio 2000: 83). In fact, de Jorio mentions another, related meaning of the Ring: the “kissing” finger-tips may indeed depict kissing and may even be used to throw a kiss at somebody:

Drawing together the tips of the fingers into a point (see Plate XX, No. 6), one gives this kiss, pretending that the kiss is held tight and firm by the tips of the fingers. Then, moving the hand from the mouth, one turns it energetically toward the person to whom one wishes to direct the kiss, and opening the hand all of a sudden, one makes as if to throw something. With these actions one is understood to throw the kiss to someone who is at a distance. (de Jorio 2000: 107)

The Ring used as a kiss hand is one of the gestures that is included in the annotated plates of his book (Fig. 114.3).

Fig. 114.3: The Ring used as an expression of love, affection, and as a way of throwing a kiss at somebody among Neapolitans in early 1800 (de Jorio 2000: 475; Plate XX, No. 6).

De Jorio’s observations show that the ring-gesture as a sign for love, affection, and matrimony was a conventionalized gesture among his fellow Neapolitans, one which had ‘survived’ major cultural and linguistic changes in the Neapolitan area, as witnessed by the Greek vase paintings of their ancestors. Naples was founded by Greek settlers as nea polis, Greek for ‘new city’, around 500 BC. It is commonly assumed, however, that the first Greek settlements in the area date much further back, to the 8th or 9th century BC (Matthews and Taylor 1994). We do not know exactly when Greek settlers invented the Ring, or whether they brought it along from Greece, but at least by the time of the vase painting we can assume that the ring-gesture was an established gestural expression of love, affection, and matrimony. De Jorio writes: “It has been widely noted that, among the ancients, this gesture was the emblem of matrimony.” (de Jorio 2000: 84) And, indeed, de Jorio derives the meaning of love from the meaning of matrimony:

Now having established that this gesture indicates matrimony, who may doubt that, equally, indeed even more appropriately, it may be used to express love, the joint feeling which is the true reason for marriage; and then, by an extension of meaning, it would come to be used to indicate other feelings of affectionate relationship, including simple friendship. (de Jorio 2000: 84)

De Jorio interprets the Greek vase paintings on the basis of historical accounts of the gesture and of his observations of the gesture use of his contemporaries.


Notably, for him it was a given fact that the meaning of the gestures used by the people of Naples is reminiscent of their Greek ancestors: “In accord with our principle that almost all of our gestural expression can be acknowledged to have an origin in antiquity […]” (de Jorio 2000: 83). It thus appears that the Ring was used by speakers of Ancient Greek in the 5th century BC, and de Jorio’s report shows that it was in common use among his Neapolitan compatriots in the 19th century. This means that the ring-gesture for love and affection was in use in the Neapolitan area from Greek Antiquity to 19th-century Italy. It survived major cultural and linguistic changes, from the Greeks to the Romans and from there to modern Italy. As a conventionalized gesture it was maintained with the same form and meaning in the Neapolitan area across a period of roughly 2,400 years.

2.2. Grasping with precision as arguing with precision: The Ring as discourse gesture from Quintilian’s rhetoric to present-day gesturing

Quintilian, the Roman teacher of rhetoric (1st century AD), included a detailed account of gestures in his Institutio Oratoria. In the actio part he develops an elaborate rhetoric of the body (1975: Inst.Orat. XI 3, 1–184). Notably, his prescriptive account is based on an elaborate knowledge of what contemporary rhetoricians would conceive of as appropriate forms of employing the hands to support the delivery: “All this – the restrictions and the selection of gestures – points in the same direction: rhetorical gestures are highly conventional, they are a selection and adjustment of gestures from daily conversation to the purpose of public speaking.” (Graf 1994: 47) Apparently, Roman and Greek orators used the ring-gesture frequently to support their declamation. Relating the Ring to different discursive contexts, or to different parts of the canonical structure of the delivery (inventio, dispositio, elocutio, memoria, pronuntiatio, or actio), Quintilian describes several different meanings of this gesture. In a proper Roman delivery, the ring-gesture was regarded as an appropriate accompaniment for the beginning of the declamation and for narrative parts in general. Furthermore, it was recommended for expressing certainty or accusation, but could also be used for warnings and praises. Finally, Quintilian suggests it as a means of clarifying a distinction, of pointing out agreement, and of marking the succession of points in an argument. The discourse variant of the ring-gesture is in common use to this day, and Quintilian’s prescriptive rhetoric most likely just provides a canonization of an extremely widespread and mundane usage of this gesture already present in ancient Rome (Graf 1994).
Quintilian’s account reveals an interesting functional distinction between the different common ring-gestures. Roman and Greek orators used the Ring as an accompaniment to an oratorical declamation: it was intertwined with speech and had a discursive function, qualifying parts of the delivery and performing communicative actions. In contrast, the ring-gestures for love, insult, okay, or zero are used to replace full-fledged utterances and are often employed when people cannot hear each other: a ring-gesture may express that somebody is in love with somebody else, it may qualify a meal as absolutely perfect, it may be used as an insult, for the number zero, or for the expression “It is okay”. While in these contexts the performance of a ring-gesture may function as a complete utterance on its own, in the discursive usages the Ring functions as part of a multimodal utterance.


To this day the discursive Ring is widely used all over Europe. Kendon (1995b) reports that one way in which Neapolitans use the Ring nowadays (fingers spread apart) involves a downward movement pattern and a specific sequential position in ongoing discourse. The instances of the Ring Kendon has analyzed reveal a variety of usages: “[…] the Ring occurs in association with a segment of speech that provides precise information, makes a specific reference to something, makes something specific in contrast to other possibilities or in contrast to something more general, or which gives a specific example of something” (Kendon 1995b: 268). More recently, Kendon has investigated the ring-gesture as part of a family of gestures which use the Ring hand shape (Kendon 2004: 238–247). He distinguishes three variants of the Ring: (i) “R-to-open”: the Ring as an initial hand shape in a ‘closed-open’ hand shape sequence (Kendon 2004: 241–242); (ii) “R-display”: the hand is raised and closed to the Ring and then held up and sustained in position, as if to display it to the interlocutor (Kendon 2004: 242–245); and (iii) “R-vertical”: the hand, posed in the ring-shape, held so that the palm is vertical (the rotation of the forearm is neutral), is moved downward and forward in one or more well-defined baton-like movements (Kendon 2004: 245–247). The three form variants all go along with subtle meaning variations, while at the same time sharing a common semantic theme of precision. Morris (1978: 80) offers an excellent analysis of a possible iconic motivation of this type of ring-gesture. Terming it the “precision grip”, Morris suggests that its meaning is derived from the pinch grip, the action of grasping tiny objects: grasping with precision is transposed into arguing with precision.
We suggest that the different meanings Quintilian, Kendon, and Morris describe can all be considered variants of this discursive meaning, namely the qualification of the concurrent part of speech as a precise argument. It is quite astounding that this meaning of the ring-gesture, too, appears to have survived many cultural changes and a very long time span in Europe, ranging from classical rhetorical declamation to present-day conversation.

3. Ring-gestures within cultures: Variations of forms and meanings

In this section, we will give examples of intra-cultural variation of ring-gestures. We know that since antiquity different meanings of the Ring have co-existed within one culture, and we will now discuss in more detail the different formational grounds of this variation. These include different iconic motivations but also differences in hand shape, orientation, position in gesture space, and movement pattern, i.e., differences in form that go along with differences in meaning.

3.1. The ring-gesture as an expression of love, of justice, of perfection, as OK-sign, and as discourse gesture in Italy: Different iconic motivations of the ring-shape

We have already mentioned that Andrea de Jorio documents that at the beginning of the 19th century the ring-gesture was widely used among Neapolitans (de Jorio 2000). Among the various cases of the Ring that de Jorio documents, we would now like to focus on two: the Ring as an expression of love and the Ring as an expression of justice. Notably, the Ring for love and the Ring for justice differ in one important formal respect: while the Ring for love is performed with an upward orientation, the Ring for justice shows a downward one. It is this form variation that accounts for the difference in meaning and that indicates the different iconic motivations of the two gestures: the Ring in a downward orientation is derived from holding a pair of scales, whereas in the other gesture the finger tips represent kissing lips. Notably, in both gestures the ring-shape is secondary for the iconic motivation of their meaning. Kendon (1995b) and Morris et al. (1979) report that the Ring continues to be in use in Italy as a discourse gesture and as a question marker (upward orientation) (for more detail, see Kendon 2004: 238–247). Diadori also describes different contemporary usages of ring-gestures in Italy. She suggests that they are used as informal signs of approval, expressing “OK, perfect, everything is all right, or everything is set” (Diadori 1990: 37). However, under the heading of “approval”, Diadori (1990) groups together two different ring-gestures: the Ring as a sign for okay and the Ring as an expression of perfection and excellence. It is rather obvious that in the okay version the ring-shape is the meaningful aspect of form. The iconic motivation is straightforward: the fingers form a circle to depict the round letter ‘O’. When used as an expression of perfection and excellence, on the other hand, the ring-shape is of secondary importance, because here the motivation of the meaning is derived from the precision grip. This also indicates that the ring-gesture for perfection shares its iconic motivation with the discourse Ring.

3.2. The Ring as discourse gesture in Quintilian’s rhetoric: Variations of hand shape discriminate different contexts of use

We have mentioned above that Quintilian assigns various discourse-related functions to the ring-gesture. The idea of expressing love is not mentioned as an aspect of the gesture’s meaning, probably because Quintilian’s goal was to teach orators to present and defend their cause in front of the court, not to educate people in all mundane situations of communication. Quintilian treats the Ring as part of the art of delivery, and he distinguishes three subtle variations of the ring-shape that go along with variations in appropriate contexts of use. (i) Version one, the index-finger Ring with fingers bent, is characterized as the most general gesture, recommended for the beginning of the delivery, for any parts of a narration where certainty is expressed, and also when a severe accusation is put forward (Quintilian 1975: Inst.Orat. XI 3, 92). (ii) Version two, the index-finger Ring with fingers spread apart, is regarded as appropriate when a clarifying distinction is needed, but also for the expression of agreement or accusation (Quintilian 1975: Inst.Orat. XI 3, 101). (iii) The third version of the discourse Ring, the middle-finger Ring with fingers spread apart, is a form variant used by the Greek orators in a similar context of use: the succession of points in an argumentation (Quintilian 1975: Inst.Orat. XI 3, 102).

3.3. The Ring as an expression of perfection and the discursive precision grip in Germany: Orientation, position, and movement as difference markers

In a detailed linguistic study of different forms, functions, and usages of ring-gestures, Neumann (2004) found that they are used in a variety of ways in Germany, too. As in Italy, these different versions of ring-gestures co-exist side by side. Ring-gestures are used to express the perfection and excellence of something, or they may express the precision of a point made in a discourse. Neumann reports a subtle but important difference in form between these two gestures. The ring-gesture expressing “perfection” is oriented vertically, is performed at head or upper chest level, and is held for a moment (Neumann 2004). In contrast, the discursive ring-gesture expressing the precision of an argument is oriented horizontally, is performed at chest level (or lower), and shows a rhythmical, typically downward, movement pattern (Neumann 2004). While the perfection Ring and the discursive precision grip appear to share their iconic motivation (grasping with precision as arguing with precision) and accordingly also have related meanings, their difference is marked by formational features that are not primary for their motivation: namely, the orientation of the gesture, the position in gesture space, and the movement pattern.

4. Ring-gestures across cultures: Stability and variation

In this last section we will give examples of ring-gestures across cultures, and we will see that there are a great number of cross-cultural convergences but also subtle cross-cultural differences. A form-meaning analysis shows cross-cultural similarities regarding the iconic motivations and contexts of use, but also that variations in secondary kinesic features of the ring-shape (the fingers spread or bent) may discriminate cross-cultural differences.

4.1. The Ring as insult, as perfection, as OK-sign, and as sign for the number zero: Similar iconic motivations across different cultures

The Ring as “sexual insult” continues to be in use to this day in many different places. According to Morris (1995: 86), it is used in Germany, Sardinia, Malta, Tunisia, Greece, Turkey, and Russia, but also outside of Europe in the Middle East and in parts of South America. Morris (1995: 86–87) reports that the Ring as a sign for “okay” is a common gesture all over Europe and America, and that as a gesture for “This is zero, this is nothing” it is known in France, Tunisia, and Belgium. Notably, the iconic motivations for those ring-gestures are similar across different cultures: the ring-shape for “orifice”, the ring-shape for the letter “O”, and the ring-shape for the number “0”. The Ring as an expression of perfection appears to be fairly widespread: Diadori (1990: 37), Kendon (1995b, 2004), and Morris (1978, 1995) report it for Italy, Neumann (2004) and Weinrich (1992) for Germany, Calbris (1990, 2011) for France, and Meo-Zilio and Mejía (1980) for South America. We know that it is widely used in the United States as well as in the Arab world. We assume that all those usages of the Ring are based on the same iconic motivation: grasping with precision as an expression of precision.

4.2. The Ring as discourse gesture in Roman and Greek delivery: Hand shape variation discriminates cross-cultural variation

Since the very beginning of historically documented gesture studies, scholars have been aware of cultural differences with regard to co-verbal gesturing. Thus, Quintilian (1975: Inst.Orat. XI 3) mentions intra-cultural variation and cross-cultural differences in the performance of the discursive ring-gesture. He describes two form variants used by Roman orators and a third one used by Greek orators. While the common form for Roman orators was the index-finger Ring, for Greek orators the middle-finger Ring was the ubiquitous form. This case of cross-cultural variation is a very interesting one, because it shows that distinctions can be established by using aspects of a gestural form that are not primary for its iconic motivation. Whether the Ring is formed with the index or the middle finger does not change the ring-shape, nor does it affect the grasping allusion of this configuration: we can use the index as well as the middle finger for a precision grip. Thus, the difference in form between the Greek and the Roman ring-gesture affects neither its primary motivation nor its meaning, but it does indicate cultural difference without changing the semantics of the gesture.

4.3. The ring-gesture as discourse gesture in Iran, Ancient Greece, and Italy: Same hand shape and similar contexts of use across different cultures

To this day, the ring-gesture is a common discourse gesture in Iran (Persia) (Seyfeddinipur 2004). However, this discursive index-finger Ring is performed with the fingers bent. Seyfeddinipur found that it is also used as a topic-comment marker, showing the same function as the finger bunch or the grappolo gesture in Italy (2004: 229–238; see also Kendon 1995b). Interestingly, in Iran this gesture has been documented in miniatures since the 15th century. Now we know from Quintilian that the Greeks and the Romans used a similar discourse Ring, and we might therefore speculate that we find this gesture in Persia as well as in Ancient Greece because of the intense cultural contact these cultures had in those days. But it may also be the case that both exploited the iconic potential of the Ring in a parallel and similar fashion.

5. A comparative analysis of gestures concerns various dimensions of variation

Quintilian (1975: Inst. Orat. XI 3, 92, 101–102) already describes form variants of a ring-gesture with differing discourse functions that relate to minor changes in the hand shape. These formal variants represent, on the one hand, intra-cultural differences:

(i) the index-finger Ring with fingers bent,
(ii) the index-finger Ring with fingers spread,

and, on the other hand, cross-cultural differences:

(iii) the middle-finger Ring.

Unfortunately, Quintilian gives no account of the orientation of the hand, but he does mention a further formational feature of the gesture – the movement of the hand. He notes that these gestures may be performed at varying paces: whereas a slower movement of the Ring is useful for promises and agreements, faster movements are used when warnings or praises are uttered. For an illustration of these three gestures, see the reproduction from the 19th century taken from Austin's (1966) treatise on rhetorical delivery (Fig. 114.1).


Altogether, Quintilian's differentiated account of the different uses of the Ring makes a case for the importance of form variation to the meaning of a gesture in the widest sense of the word "meaning". It shows that differences in the meaning of gestures may be accounted for by differences in hand shape and movement dynamics of a gesture. Moreover, it indicates that these differences may occur cross-culturally as well as intra-culturally.

Quintilian's analysis shows that a comparative analysis of gestures must consider different dimensions of variation – even within one seemingly similar gesture. One hand shape may participate in different kinds of gestures (all the gestures discussed above used the Ring hand shape), and different meanings of the same form may be derived from different semantic bases (kissing lips, holding a pair of scales, holding a fine object, representing circles, the orifice, the number 0, the letter O). In some cases, the semantic bases have proven useful for distinguishing different meanings of the Ring: for instance, the downward orientation in the "justice" gesture and the upward position in the "love" gesture. But secondary aspects of the hand shape may also be used to create variation, as in the ring-gesture with fingers bent versus the Ring with fingers spread, or in the index-finger versus the middle-finger Ring. Sometimes a ring-gesture – as in the case of the German "perfection" Ring – is further specified by a specific location (at the upper part of the body) where it is performed, and a specific movement pattern (a significant hold). So, for instance, in Germany this "perfection" Ring stands in contrast with the discourse-related use of the Ring with regard to the position in gesture space and the movement pattern employed. Whereas the "perfection" Ring is static and held high up in gesture space, the "precision" Ring is typically performed with a downward movement, mostly a horizontal orientation, and lower in the gesture space.

In short, all dimensions of form that we have introduced above may be used to create variation: motivation, hand shape, orientation, position, and movement. A comparative analysis of a gesture must therefore reckon with all these dimensions of variation. It must be prepared to look at gestures independently of cultural and linguistic boundaries, and it must also include the possibility that similar forms do not necessarily come with a similar meaning. The case of ring-gestures indicates that linguistic and national boundaries do not necessarily go hand in hand with a difference in gesture, and that some gestures remain stable within a particular area in form and meaning for more than two thousand years – leaving traces of long gone cultures.

6. References

Austin, Gilbert 1966. Chironomia or a Treatise on Rhetorical Delivery. Carbondale/Edwardsville: Southern Illinois University Press. First published [1806].
Calbris, Geneviève 1990. The Semiotics of French Gesture. Bloomington: Indiana University Press.
Calbris, Geneviève 2011. Elements of Meaning. Amsterdam/Philadelphia: John Benjamins.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity: A Translation of 'La Mimica Degli Antichi Investigata nel Gestire Napoletano' and With an Introduction and Notes by Adam Kendon. Bloomington/Indianapolis: Indiana University Press. First published [1832].
Diadori, Pierangela 1990. Senza Parole: 100 Gesti degli Italiani. Roma: Bonacci.
Ekman, Paul 1977. Bewegungen mit kodierter Bedeutung: Gestische Embleme. In: Roland Posner and Hans-Peter Reinecke (eds.), Zeichenprozesse. Semiotische Forschung in den Einzelwissenschaften, 180–198. Wiesbaden: Athenaion.


Ekman, Paul and Wallace Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1: 49–98.
Graf, Fritz 1994. Gestures and conventions: The gestures of Roman actors and orators. In: Jan Bremmer and Herman Roodenburg (eds.), A Cultural History of Gesture, 36–58. Cambridge: Polity Press.
Kendon, Adam 1990. Gesticulation, quotable gestures and signs. In: Michael Moerman and Masaichi Nomura (eds.), Culture Embodied (Senri Ethnological Studies 27), 53–77. Osaka: National Museum of Ethnography.
Kendon, Adam 1995a. Andrea de Jorio – The first ethnographer of gesture? Visual Anthropology 7(4): 375–394.
Kendon, Adam 1995b. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279.
Kendon, Adam 2000. Introduction to Andrea de Jorio, Gesture in Naples and Gesture in Classical Antiquity. Bloomington/Indianapolis: Indiana University Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Matthews, Jeff and David Taylor 1994. A Brief History of Naples and Other Tales. Naples: Fotoprogetti Press.
Meo-Zilio, Giovanni and Silvia Mejía 1980. Diccionario de Gestos: España e Hispanoamérica. Bogotá: Instituto Caro y Cuervo.
Morris, Desmond 1978. Der Mensch mit dem wir leben. Ein Handbuch unseres Verhaltens. München: Droemer Knaur.
Morris, Desmond 1995. Bodytalk. Körpersprache, Gesten und Gebärden. München: Heyne.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O'Shaughnessy 1979. Gestures: Their Origins and Distribution. London: Jonathan Cape.
Müller, Cornelia 2008. Wie man aneinander vorbei gestikulieren kann … Gesten als Quelle intra- und interkultureller Missverständnisse. In: Veit Didczuneit, Anja Eichler and Lieselotte Kugler (eds.), Missverständnisse – Stolpersteine der Kommunikation, 102–109. Berlin: Edition Braus.
Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Neumann, Gerhard 1965. Gesten und Gebärden in der Griechischen Kunst. Berlin: De Gruyter.
Neumann, Ranghild 2004. The conventionalization of the 'Ring' in German discourse. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gesture, 217–224. Berlin: Weidler.
Quintilianus, Marcus F. 1975. Institutionis Oratoriae Libri XII, Pars Posterior, Libros VII–XII Continens. Darmstadt: Wissenschaftliche Buchgesellschaft.
Seyfeddinipur, Mandana 2004. Case study on a conventionalized Persian gesture. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gesture, 205–216. Berlin: Weidler.
Weinrich, Lotte 1992. Verbale und Nonverbale Strategien in Fernsehgesprächen. Eine Explorative Studie. Tübingen: Niemeyer.

Cornelia Müller, Frankfurt (Oder) (Germany)


115. Gesture and taboo: A cross-cultural perspective

1. Taboo
2. Communication and taboo
3. Taboo gestures
4. When speech offends
5. When speech is taboo
6. Gestural behavior and taboo
7. References

Abstract

When communicating about socially awkward, sensitive, and taboo topics, speakers may avoid speech and resort to the use of gesture. In the case of a widespread socio-cultural taboo, speakers may consistently use a gesture until it becomes systematically recognized and understood to represent the taboo topic. A significant proportion of gestures in the conventionalized or quotable repertoires of most cultures are obscene, insulting, or represent socially sensitive subjects. Gesture is particularly suited to managing communicative taboos as it is easily disguised and carries less value than speech in terms of the speaker's responsibility for a communicative act. In the case of taboos against speaking for extended periods, gestural/sign systems or alternate sign languages may develop. The more extensive the taboo against speaking, the more complex the kinesic code. Gestural behavior is also subject to social regulation and rules of appropriacy based on the nature of the interaction, the identity of the participants, and the social context. Use of space, frequency of gesture, and the types of gesture used appear to be the main features constrained by social convention.

1. Taboo

Taboos are prohibitions on behaviors, both acts and utterances, that a particular society forbids or encourages its members to avoid. Many taboos occur in relation to sensitive topics or events that are emotive and potentially destabilizing or risky. Most often, these taboos relate to life-stage rituals such as birth, initiation, marriage, and death and aspects such as the body, bodily excretions, sex, disease, religion, and death (Burridge 2006a; Napoli and Hoeksema 2009). There are also taboos against political incorrectness and distasteful or impolite behaviors that may be offensive under all or some social conditions (Burridge 2006b). Levels of offensiveness vary and depend on the nature of the taboo, speaker-listener relationship, pragmatic and paralinguistic features, as well as social context (Jay 2009).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1523–1530

2. Communication and taboo

Communication is an integral part of enacting and managing taboos. In every society, there are taboo words and utterances (see Burridge [2006a, b] and Jay [2009] for the psychosocial aspects of swearing; Napoli and Hoeksema [2009] for pragmatic and grammatical aspects), taboo topics (see Agyekum [2002] on menstruation; Burridge [2006a]
on political correctness), and rules that govern linguistic politeness so that what is said does not threaten the public self-image or "face" of either addressor or addressee (Brown and Levinson 1987). Among some groups, there are systematic speech taboos. (See Treis [2005] on avoidance languages or languages of respect towards affinal relatives in Kambaata and several other Ethiopian languages; Herbert [1990a, b, c] and Kunene [1958] on similar practices among Xhosa and South Sotho speakers in South Africa; Haviland [1979] and Merlan [1997, 2006] on different speech taboos in Australian Aboriginal societies.) Managing verbal taboos involves systematic suppression and replacement of linguistic forms and includes euphemism, circumlocution, semantic changes such as metaphor and metonymy, phonological distortion, blending and substitution, and lexical substitution (Agyekum 2002; Merlan 2006; Napoli and Hoeksema 2009). Merlan (2006) points out that verbal taboos usually work in conjunction with nonverbal markers such as touching, personal space, posture, orientation, gaze, and gesture. There are also many prohibitions on nonverbal behaviors relating to appropriate conduct in different social settings. Although the focus here is on taboo and gesture (defined by Kendon [2004a] as deliberate, conscious, and intentional bodily expressions that are part of what a person intends to say), taboos relating to taboo words, speech taboos, and taboos against nonverbal behaviors also shape the nature of gestures and gestural behavior (Kita 2009; Kita and Essegbey 2001). There are four ways in which gesture functions in relation to taboo. Firstly, there are conventionalized gestures that are offensive and therefore taboo. Secondly, speakers may use improvisatory gesturing to replace speech when speech could cause offence. In some cases, gestures become conventionalized to replace speech concerning taboo topics.
Thirdly, kinesic codes have developed to replace speech when the act of speaking is taboo. Finally, there are prohibitions relating to the pragmatics of gestural use/behavior.

3. Taboo gestures

Most societies appear to have gestures equivalent to swear words and insults. These taboo gestures are stable in form and meaning and therefore belong to the category of emblems (Ekman and Friesen 1969) or quotable gestures (Kendon 1992). Although there have been no systematic studies focusing specifically on taboo gestures within a social group and only three cross-cultural comparisons of quotable gesture repertoires (Creider 1977; Kendon 1981; Morris et al. 1979), most of the quotable gestural repertoires published include gestures of this type (see Brookes [2004] or Payrató [1993] for lists of published repertoires, and Payrató [this volume]). Kendon (1981) compared lists of quotable gestures from six countries (Colombia and the USA, Southern Italy, France, Iran, and Kenya) and found that 80 per cent or more (except in Iran, where they accounted for 66 per cent) were gestures of interpersonal control (including insults), gestures expressing one's current state or condition, and evaluative comments about others. Taboo gestures, which would include insults and evaluative comments, appear to be a notable feature of most quotable gestural repertoires. Poggi (2002) suggests that a large proportion of Italian quotable gestures are insulting or obscene. Even among societies that appear to have fewer emblematic/quotable gestures, gestural insults are usually present. In most of the repertoires, there is usually at least one gesture for sexual intercourse and often more, a gesture for masturbation, and frequently more than one gesture for the
spoken insult "fuck you". Bodily parts related to sex and excretion are also frequently depicted in gesture as obscene insults (Morris et al. 1979). Repertoires often have gestures related to sexual activities such as pregnancy, being cuckolded, relationships, and homosexuality. There are also many gestures expressing evaluative comments related to bodily aesthetics such as fatness, thinness, baldness, and smell among others, a person's mental state, for example, craziness, stupidity, and drunkenness, and comments about negative actions such as gossiping and lying.

Why are insults and evaluative comments a common feature of quotable gestural repertoires? Several arguments have been put forward as to why gesture may be more appropriate for certain communicative functions (Kendon 1995). In the case of taboo gestures, gesture may substitute for speech because speech attributes more responsibility to the speaker than gesture (McNeill 1992). A person can insult or negatively evaluate another in gesture without taking responsibility for having said anything. On the other hand, the graphically visual nature of a gestural insult and its manner of performance may intensify the illocutionary force of a direct insult, a form of cacophemism (Poggi 2002). Context and manner of performance would determine the illocutionary force of the gesture. It is possible that a gesture's visual nature may enhance the force of the insult while at the same time lessening the gesturer's responsibility for the message, making gesture an ideal medium for this kind of communicative act. Using gesture also allows a person to safely insult another from a distance, avoiding physical retaliation. On the other hand, the fleeting and ambiguous nature of gesture means speakers can disguise a taboo gesture by making it look like a body movement. De Jorio ([1832] 2000) and Morris et al. (1979) give examples of how gestures, in particular insults, can be made to look like actions rather than gestures. Gesture is therefore also an ideal medium for expressing taboo messages by obfuscating their communicative intent as actions or incidental movements.

Although the majority of taboo gestures appear to be insults and evaluative comments, there are studies that document other kinds of gestures as taboo. Pointing with the left hand to indicate a location or path is taboo in Ghana (Kita and Essegbey 2001). Although Ghanaians consider a left hand point "highly provocative and culturally disrespectful", it may still occur in certain situations (Kita and Essegbey 2001: 74). Age and familiarity play a role in its use, with Ghanaians reporting that it is more inappropriate to point with the left hand when the interlocutor is older or a stranger than when the person is younger or a peer. Taboos relating to left hand pointing gestures among Ghanaians come about as a result of taboos relating to the body. Use of the left hand for giving/receiving and eating/drinking is taboo in Ghana. Kita and Essegbey (2001) show that the left hand taboo shapes gestural practice more widely. The left hand is generally less conspicuous when gesturing, being placed in a "respect position" on the buttocks (Kita 2009). The right hand is overused, taking up more physically difficult positions in order to avoid left hand gestures, and bi-manual gesturing occurs, as using the right hand with the left hand neutralizes the taboo. The left hand may still be used when pointing, but it appears to be a "semi-point" in that it is held below the waist and the movement is minimal. Kita and Essegbey (2001) suggest that the process of thinking about the verbal concept "left" elicits a movement of the left hand. Pointing with the left hand is also taboo among the Yoruba, the Igbo, the Iyala, and the Hausa (Nigeria), the Gikuyu and the Luya (Kenya), and among the Chichewa (Malawi) (Orie 2009), possibly because of negative values associated with the left hand.

Do cultures vary in the number of taboo gestures they have? Are there more taboo gestures in cultures that have more emblematic gestures and/or where speakers appear to utilize gesture more extensively in terms of gestural space and rate/frequency? Do similar taboo concepts and meanings become expressed in gestural form across most societies? Are there common taboo terms that do not have gestural equivalents? Do similar meanings find their expression in similar gestural forms in different societies? Several cross-cultural studies of gesture and other nonverbal behaviors describe cross-cultural misinterpretations of gestures considered to be insults in one culture but not in another (Axtell 1991; Chu 2009), confirming that, as with other gestures, the same meanings do not necessarily result in similar forms across cultures (Kita 2009).

4. When speech offends

Kendon (1995) suggests that speakers use gesture as a substitute for speech when they wish to lessen their responsibility for, or appear less committed to, the message they convey. He observes that a speaker may employ improvisatory gestures for part of an utterance that is socially awkward or could cause offence. In these instances, a switch to the gestural medium also allows the speaker to display awareness of the sensitivity of the message, thereby ameliorating its impact and avoiding social sanction (Brookes 2011). Speakers may replace a taboo topic or concept with a gesture that then becomes conventionalized in form and meaning and established among a social group, i.e., quotable. Unlike swear words and insults, where both a word or phrase and its equivalent gesture may be equally taboo, here the spoken utterance is the more offensive. Speakers often handle socio-cultural taboos such as illness, sex, or death using various verbal strategies. However, it appears that under certain conditions, where a topic is so sensitive that even obscure spoken references become too indelicate, a gesture may develop. One documented example is the emergence of a quotable gesture for HIV (Human Immunodeficiency Virus) in South Africa (Brookes 2011). The gesture involves extending the last three fingers slightly apart, palm towards the gesturer, while the forefinger is bent over and the tip covered by the thumb. Speakers began using the spoken phrase amangama amathathu 'the three letters' to refer to the acronym HIV from the late 1990s. As deaths from HIV began to increase significantly in the early 2000s, speakers were sometimes observed using a manual expression for "three" by showing three extended fingers when uttering the phrase amangama amathathu or other spoken references to HIV that used the metonym of "three".
Initially, speakers glossed the gesture as "three", but eventually the manual expression by itself came to be established as a quotable gesture for HIV. Suggesting that a person has HIV, a highly stigmatized disease, or questioning a person's status is taboo. Speakers were observed using the gesture as a substitute for speech to avoid any spoken references to HIV. The replacement of speech with gesture allowed speakers to talk about HIV without negatively affecting their self-image or slandering another. Transferring the message from speech to gesture made the speaker less committed to, or responsible for, what is "said" (Kendon 1995). In the case of HIV and other taboo topics, gestures and the need to conventionalize them will be "most strongly felt for those speech acts where the cultural sanctions are most severe" (McNeill 1992: 65).


5. When speech is taboo

Several scholars have documented kinesic codes of varying complexity that exist as a consequence of speech taboos. Rules of silence in several religious communities resulted in the emergence of gesture/sign systems (Kendon 2004a). Rijnberk (1954), Umiker-Sebeok and Sebeok (1987), and Kendon (1990) provide lists and some analysis of these sign systems based on written records. For contemporary monastic sign language, see Barakat's (1975) study of a Cistercian monastery, St. Joseph's Abbey in Spencer, Massachusetts. Kinesic codes have also developed among some cultural groups for periods of mourning, during initiation rituals, and when speech is prohibited according to familial bonds, rather like an avoidance or respect language. Karbelashvili (1935) describes an alternate sign language used by married women in the Baraninskiy region of Armenia who could not speak in the presence of affinal relatives. Kendon (1988) provides detailed descriptions and analysis of alternate sign languages among women in seven Aboriginal groups in the North Central Desert of Australia, in particular the Warlpiri, Warumungu, and Warlmanpa, who may not speak for an extended period after the death of a male relative. Alternate sign languages also occur in western and northwest central Queensland, where they developed because of speech taboos imposed on male initiates. The more extensive and generalized the speech taboo, the more complex the kinesic code. The relationship between spoken language and alternate sign languages depends on the complexity of the system, the purposes for which it is used, and on the structure of spoken language (Kendon 2004a).

6. Gestural behavior and taboo

Since classical antiquity, the Western tradition has expressed a variety of views on the value of bodily comportment and gestural behavior. The notion that gesture was critical to effective public oratory and social conduct gained prominence during the Middle Ages and in subsequent centuries, leading to a number of treatises on the rules for gestures and other bodily behaviors appropriate to performance as a public speaker and to the expression of values such as grace and decorum (for example, Bonifacio and Bulwer [[1644] 1974] and Le Faucheur and Austin [[1806] 1966] in Kendon [2004a]). Historians of gesture also note that various writers from the Middle Ages to the Renaissance commented on different gestural styles and practices among different cultures and classes across Europe, noting, for example, the prominence of Italian gesture and its influence among the French nobility (Burke 1992; Knox 1990). With the rise of Protestantism and the moral code of the Counter-Reformation, especially in northern Europe, gesticulation and extensive bodily expression were considered inappropriate, while bodily restraint signified reason and self-control (Burke 1992). Efron (1972) reports that gestural styles changed again in mid-eighteenth-century France, when the display of emotion through more prominent use of gesture was encouraged as it signified a "sensitive soul". De Jorio's (2000) ethnography of gesture in everyday life in Naples demonstrates the aesthetic value of gestural use in conversational performance among Neapolitans. These studies show that gesture is subject to social regulation, with cultures placing different values on gesture as a communicative instrument (Kendon 2004a).

Although more recent research has focused on the formal/structural aspects of gesture and its role in cognition, learning, and language development, a number of contemporary studies examine gestural pragmatics and how underlying cultural values and notions of politeness and appropriacy shape the nature of gesture (Brookes 2005; Efron 1972; Kendon 2004a; Kita 2009; Kita and Essegbey 2001). Work by Kendon (2004a, b) has developed de Jorio's initial observations to explain the prominent role of gesturing among Neapolitans. He points to the role of gesture in the ecology of communication, where theatrical and aesthetic values underlying "conduct in co-presence" require speakers to draw attention to self and assert identity in the Neapolitan social environment. How values related to the aesthetic and performative aspects of interaction shape gestural behavior has also emerged in ethnographic work among male youth in the townships of South Africa. Here, skillful use of gesture and speech in maximally entertaining ways is vital to membership and status in male street corner groups, where certain styles of gestural performance are part of indexing a streetwise male township identity (Brookes 2001, 2004, 2005). The social meanings of different styles of gestural behavior influence the use of gesture space, the types of gesture used, and their frequency. Gestural behavior is part of "giving off" identity (Goffman 1963), with certain kinds of gestural behavior indexing a person as disrespectable or rough within township communities (Brookes 2004). Moreover, only certain kinds of gestures and gestural behavior are appropriate for females to use in these communities (Brookes 2004; Kunene 2010). Similarly, a study of pubs in rural Andalusia, Spain, shows how gestures among male patrons are part of indexing masculinity and negotiating identity and status.
This kind of gesturing is only appropriate in these social spaces (Driessen 1992). Clearly, there are culture-specific norms regarding gestural politeness that explain cultural variation in gestural behavior (Kita 2009). Kita and Essegbey's (2001) study on the left hand taboo in Ghana demonstrates how cultural conventions shape gestural behavior, resulting in a distinctive gestural style. Cultures ascribe both positive and negative values to the types of gesture that can be used, the kinds of gestures that are appropriate in conversational interactions, the use of gesture space to foreground gestural aspects of communication (Efron 1972; Müller 1998), as well as gestural rate (Kita 2009). These features of gestural behavior also play a role in enacting multiple identities within a culture, creating a complex system of communicative practices related to degrees of taboo and appropriacy.

Acknowledgements

This work is based on research supported by the National Research Foundation, South Africa, under Grants 77955 and 75318. Any opinions and conclusions are those of the author and not of the University of Cape Town or the National Research Foundation.

7. References

Agyekum, Kofi 2002. Menstruation as a verbal taboo among the Akan of Ghana. Journal of Anthropological Research 58(3): 367–387.
Axtell, Robert E. 1991. Gestures: The Do's and Taboos of Body Language Around the World. New York: John Wiley and Sons.
Barakat, Robert A. 1975. Cistercian Sign Language. A Study in Nonverbal Communication. Kalamazoo, MI: Cistercian Publications.

Brookes, Heather J. 2001. O clever "He's streetwise". When gestures become quotable: The case of the clever gesture. Gesture 1(2): 167–184.
Brookes, Heather J. 2004. A first repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Brookes, Heather J. 2005. What gestures do: Some communicative functions of quotable gestures in conversations among black urban South Africans. Journal of Pragmatics 37(12): 2044–2085.
Brookes, Heather J. 2011. Amangama amathathu "The three letters". The emergence of a quotable gesture (emblem). Gesture 11(2): 194–218.
Brown, Penelope and Stephen Levinson 1987. Politeness: Some Universals in Language Usage. Cambridge, UK: Cambridge University Press.
Burke, Peter 1992. The language of gesture in early modern Italy. In: Jan Bremmer and Herman Roodenburg (eds.), A Cultural History of Gesture, 71–83. Ithaca, NY: Cornell University Press.
Burridge, Kate 2006a. Taboo, euphemism, and political correctness. In: Keith Brown (ed.), Encyclopedia of Language and Linguistics, Second Edition, Volume 12, 455–462. Oxford: Elsevier.
Burridge, Kate 2006b. Taboo words. In: Keith Brown (ed.), Encyclopedia of Language and Linguistics, Second Edition, Volume 12, 452–455. Oxford: Elsevier.
Chu, Man-ping 2009. Chinese cultural taboos that affect their language and behaviour choices. Asian Culture and History 1(2): 122–139.
Creider, Chet A. 1977. Towards a description of East African gestures. Sign Language Studies 14: 1–20.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A Translation of 'La Mimica Degli Antichi Investigata nel Gestire Napoletano' With an Introduction and Notes by Adam Kendon. Bloomington: Indiana University Press. First published [1832].
Driessen, Henk 1992. Gestured masculinity: body and sociability in rural Andalusia. In: Jan Bremmer and Herman Roodenburg (eds.), A Cultural History of Gesture, 237–249. Ithaca, NY: Cornell University Press.
Efron, David 1972. Gesture, Race, and Culture. The Hague: Mouton and Co.
Ekman, Paul and Wallace Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98.
Goffman, Erving 1963. Behavior in Public Places: Notes on the Social Organization of Gatherings. New York: Free Press.
Haviland, John B. 1979. Guugu-Yimidhirr brother-in-law language. Language in Society 8(3): 365–393.
Herbert, Robert K. 1990a. Hlonipha and the ambiguous woman. Anthropos 85: 455–473.
Herbert, Robert K. 1990b. The relative markedness of click sounds: Change, acquisition, and avoidance. Anthropological Linguistics 32(1/2): 120–138.
Herbert, Robert K. 1990c. The sociohistory of clicks in southern Bantu. Anthropological Linguistics 32(3/4): 295–315.
Jay, Timothy 2009. The utility and ubiquity of taboo words. Perspectives on Psychological Science 4(2): 153–161.
Karbelashvili, D.P. 1935. Ruchnaia-rech na Kavkaze. Tiflis: Izdanie Nauchno-issledovatel'skogo instituta kavkazovedenija.
Kendon, Adam 1981. Geography of gesture. Semiotica 37(1/2): 129–163.
Kendon, Adam 1988. Sign Languages of Aboriginal Australia: Cultural, Semiotic and Communicative Perspectives. Cambridge: Cambridge University Press.
Kendon, Adam 1990. Signs in the cloister and elsewhere. Semiotica 79(3/4): 307–329.
Kendon, Adam 1992. Some recent work from Italy on quotable gestures (emblems). Journal of Linguistic Anthropology 2(2): 92–108.
Kendon, Adam 1995. Some uses of gesture. In: Deborah Tannen and Muriel Saville-Troike (eds.), Perspectives on Silence, 215–234. Norwood, NJ: Ablex.
Kendon, Adam 2004a. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.


Kendon, Adam 2004b. Contrasts in gesticulation. A British and a Neapolitan speaker compared. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gesture, 173–193. Berlin: Weidler Buchverlag. Kita, Sotaro 2009. Cross-cultural variation of speech-accompanying gesture: A review. Language and Cognitive Processes 24(2): 145–167. Kita, Sotaro and James Essegbey 2001. Pointing left in Ghana: How a taboo on the use of the left hand influences gestural practice. Gesture 1(1): 73–95. Knox, Dilwyn 1990. Late medieval and renaissance ideas on gesture. In: Volker Kapp (ed.), Die Sprache der Zeichen und Bilder. Rhetorik und nonverbale Kommunikation in der frühen Neuzeit, 11–39. Marburg: Hitzeroth. Kunene, Daniel P. 1958. Notes on Hlonepha among the Southern Sotho. African Studies 17(3): 159–182. Kunene, Ramona 2010. A comparative study of the development of multimodal narratives in French and Zulu children and adults. Ph.D. dissertation, University of Grenoble 3. McNeill, David 1992. Hand and Mind. What Gestures Reveal about Thought. Chicago: University of Chicago Press. Merlan, Francesca 1997. The mother-in-law taboo: avoidance and obligation in Aboriginal Australian society. In: Francesca Merlan, John Morton and Alan Rumsey (eds.), Scholar and Sceptic: Australian Aboriginal Studies in Honour of L.R. Hiatt, 95–122. Canberra: Aboriginal Studies Press. Merlan, Francesca 2006. Taboo: Verbal practices. In: Keith Brown (ed.), Encyclopaedia of Language and Linguistics, Second Edition, Volume 12, 462–466. Oxford: Elsevier. Morris, Desmond, Peter Collett, Peter Marsh and Marie O’Shaughnessy 1979. Gestures: Their Origins and Distribution. A New Look at the Human Animal. London: Jonathan Cape. Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag. Napoli, Donna Jo and Jack Hoeksema 2009. The grammatical versatility of taboo terms. Studies in Language 33(3): 612–643. Orie, Olanike O. 2009. 
Pointing the Yoruba way. Gesture 9(2): 237–261. Payrató, Lluís 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20(3): 193–216. Payrató, Lluís this volume. Emblems or quotable gestures: Structures, categories, and functions. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1474–1481. Berlin/Boston: De Gruyter Mouton. Poggi, Isabella 2002. Symbolic gestures: The case of the Italian gestionary. Gesture 2(1): 71–98. Rijnberk, Gérard Van 1954. Le Langage Par Signes Chez les Moines. Amsterdam: North Holland Publishing Company. Treis, Yvonne 2005. Avoiding their names, avoiding their eyes: How Kambaata women respect their in-laws. Anthropological Linguistics 47(3): 292–320. Umiker-Sebeok, Donna-Jean and Thomas A. Sebeok (eds.) 1987. Monastic Sign Languages. Berlin: Mouton de Gruyter.

Heather Brookes, Cape Town (South Africa)

VIII. Gesture and language

116. Pragmatic gestures

1. Introduction
2. The notion of pragmatic gestures: Features and functions
3. Families of pragmatic gestures
4. Gaps, trends, and new issues in the study of pragmatic gestures
5. References

Abstract

This article summarizes research on gestures that are mainly used for pragmatic purposes and that reveal a tendency toward codification or conventionalization. They have been described as interactive, speech-handling, performative and recurrent, pragmatic and metapragmatic gestures, as pragmatic markers, and as gesture families. Their functions are manifold: they frame the conversational situation, provide interpretation clues, help maintain the conversation as a social system, and refer to and include the listener. They function as performatives, as parsers, and with modal functions. Following Streeck (2005: 73), they are those gestures that display and highlight “aspects of the communicative interaction” itself. As an illustration, we present examples of so-called gesture families (Kendon 2004; Müller 2004), such as the Grappolo family, the Ring family, the Palm Up Open Hand, and the Brushing Aside Gesture, showing how their members differ from one another, what function(s) they take up, and how they interact in coordination with the spoken part of the utterance. Their clustering reveals certain systematizations that require further elaboration, and we conclude that at this point an exchange between detailed empirical in-depth studies and pragmatic linguistic theorization is needed.

1. Introduction

Gesture studies, as a linguistic research area, are usually embedded within the wider area of pragmatics, although pragmatic perspectives on gestures are rather scarce within the field (see the Cooperrider–Wharton debate in Gesture 2011a, b). Exceptions are, inter alia, the studies from an interactional perspective by Bavelas et al. (1992, 1995), Goodwin and Goodwin (1986), and Goodwin (1986, inter alia), the conversation analytic studies by Heath (1992) and Mondada (2006, volume 1 for a summary), Bohle (2007), Schmitt (2005), Streeck (1995), and Müller (e.g., with Paul 1999), and analyses of the pragmatic functions of emblems (Brookes 2004, 2005; Kendon 1981; Payrató 1993, 2003, 2004, this volume; Poggi 2004; Poggi and Zomparelli 1987; Sherzer 1991). See also the recent special issue of the Journal of Pragmatics (46, 2013), edited by Deppermann, devoted to conversation analytic studies of multimodal interaction. This scarcity must partly be due to the enormous impact that psychological and psycholinguistic research traditions have had within the field. Another explanation is that gestures are multidimensional and multifunctional and therefore hard to get hold of in their day-to-day usage (see Müller 1998, volume 1). Similar to words in an utterance, they fulfill several functions at once, and they are multidimensional because each dimension of their performance (size of the gesture, gesture dynamics, their local position in gesture space, and their temporal position within the verbal utterance) brings in certain semantic and pragmatic properties that all contribute to the final meaning and function of the gesture. This implies that each gesture performance has a pragmatic aspect, and the attempt to categorize all these aspects has hitherto not been undertaken. Lato sensu, any gesture is a pragmatic gesture, i.e., it can be analyzed through its semiosis as a (pragmatic) sign.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1531–1539

2. The notion of pragmatic gestures: Features and functions

Stricto sensu, when, in the following, we write about pragmatic gestures, we mean gestures that seem to be most suitable for pragmatic purposes. Although emblems are thoroughly pragmatic, we will not discuss them here for reasons of space and redundancy (see Payrató this volume; Teßendorf volume 1). Here, we concentrate on the so-called gesture families (Fricke, Bressem, and Müller this volume; Kendon 1995, 2004; Müller 2004), recurrent gestures (Ladewig 2010, 2011, this volume; Müller 2010), and speech-handling gestures (Streeck 2005, 2009).

2.1. Interactive gestures

Coming from an interactional background, Bavelas et al. (1992) investigated the so-called illustrators (Ekman and Friesen 1969) on the basis of face-to-face dialog data. They found functional differences within this class and divided it into gestures concerned with the topic of the conversation (topic gestures) and interactive gestures. While a distinction between topic and non-topic gestures had already been drawn (see Kendon 1985), the latter were usually restricted to batons (Efron [1941] 1972; Ekman and Friesen 1969), beats (McNeill and Levy 1982), or speech primacy movements (Freedman 1972), that is, quick flicks of the hand without depictive potential. The new category of interactive gestures includes beats and batons, but emphasizes non-topic gestures that convey pictorial meaning by their form (Palm Up Open Hand gestures, Palm Outwards Open Hand gestures, McNeill’s metaphoric conduit gestures, see McNeill and Levy 1982) and “refer […] to some aspect of the process of conversing with another person” (Bavelas et al. 1992: 473). Interactive gestures help maintain the conversation as a social system and make reference to the interlocutor. Their four basic functions are: marking the delivery of information (new versus shared information); citing the other’s contribution (e.g., acknowledging the addressee’s contribution, indicating following); seeking a response (e.g., agreement, understanding, help); and coordinating turns (e.g., taking or forestalling the turn) (see Bavelas et al. 1995: 397, Tab. 1). Interactive gestures have functions similar to discourse markers such as you know?, eh?, rising intonation on a declarative sentence, and framing statements (well, this aside; so, anyway). They include the listener in the dialog, who becomes active through back channels, listener responses, and interactive facial displays.

2.2. Pragmatic markers

Kendon emphasizes that gestures that take up pragmatic functions are exactly this: gestures that take up pragmatic functions and not “pragmatic gestures”. These gestures then serve in “any of the ways in which gestures may relate to features of an utterance’s meaning that are not a part of its referential meaning or propositional content” (Kendon 2004: 158).


He summarizes that the “so-called pragmatic gestures […] serve in a variety of ways as markers of the illocutionary force of an utterance, as grammatical and semantic operators or as punctuators or parsers of the spoken discourse” (Kendon 2004: 5). This yields four pragmatic functions: gestures may serve as performatives, with a parsing function, with modal functions (see Müller and Speckmann 2002), and with interactional functions. One gesture can take up several of these functions, depending upon its context of use.

2.3. Speech-handling or pragmatic and meta-pragmatic gestures

Streeck (2005: 73) states that pragmatic gesture “encompasses all actions of the hands (and a variety of other body parts, notably the face, head, and shoulders) by which aspects of the communicative interaction are displayed”. Included are recipient gestures (affirmation, negation, rejection, etc.), beats that mark speech units, pronominal referential gestures (mostly with pointing motions that mark acts of reference), pointing-like movements, gestures that express the stance/attitude of the speaker, speech act or pragmatic gestures that act upon the utterance, and finally meta-pragmatic gestures that order or enable transactions and are used to regulate the actions of the interaction participants. Pragmatic and meta-pragmatic gestures often overlap, and one gesture can function in either way. Pragmatic gestures such as the Palm Up Open Hand gesture, other Open Hand gestures, shrugs, gestures that “push” something back, etc. possess metaphorical qualities that figure aspects of the processes of communicating as handlings of physical actions or as conduit (see Streeck 2009: 182). They are relicts of “ceivings”, a term Streeck (2005: 75) proposes for gestures in which the hands help the gesturer think and grasp for concepts, and in which their world-knowledge helps constitute the meaning of the utterance. As “practical metaphorizations” (Streeck 2009: 201) their function is twofold: they provide an interpretational frame for the interlocutor and at the same time an experiential frame for the speaker, “in terms of which our own communicative actions are tacitly made meaningful to us” (Streeck 2009: 202). Meta-pragmatic gestures include pointing and touching and other, rather conventional, gestures that attempt to regulate the behavior of others. According to Streeck, it is this function that has led to a codification of gestures for the sign systems of traffic cops, musical conductors, etc.

2.4. Dominant pragmatic functions: Performative and recurrent gestures

In her functional classification of gestures, Müller distinguishes between gestures that are used primarily referentially, performatively, modally, and discursively (Müller 1998; Müller and Speckmann 2002). In coining the term performative, Müller (1998) emphasizes that, in contrast to other gestures, the underlying action is not displayed or referred to but accomplished. The underlying action of the gesture is transferred functionally, through processes of metonymy and metaphor, into the realm of speech (see Müller 2004, 2010; Müller and Cienki 2009 for details). More recently, Müller and colleagues (Ladewig 2010, 2011, this volume; Müller 2010; Müller and Cienki 2009) have proposed the term recurrent gestures. A gesture is recurrent when “it is used repeatedly in different contexts and its formational and semantic core remains stable across different contexts and speakers” (Ladewig 2010), and, according to Müller (2010), these gestures, like the Palm Up Open Hand, are linked to a set of modal and performative functions. The function of the gesture depends highly on its exact positioning in its interplay with the concurrent


speech. This implies the need to study in detail the interaction between speech and (pragmatic) gestures in the communicative uses typical of each community.

3. Families of pragmatic gestures

We summarize findings from several micro-analyses of gesture families: the families of precision grip gestures (following Kendon 1995, 2004), the Palm Up Open Hand (Kendon 2004; Müller 2004; Streeck 2009), and the Brushing Aside Gesture (Müller and Speckmann 2002; Teßendorf 2005, 2008, this volume). As for the different family members, the studies do not claim to be exhaustive. Other members may well be found and become subject to future investigation, but certain features seem to be more relevant than others (i.e., motion pattern, not motion). The question of how these gesture families rely on metaphorizations or ritualizations of instrumental actions cannot be treated here. Examples are examined in situations of speech, not as speech-replacing gestures, nor as emblems (an approach to categorizing emblematic gestures using a refined family resemblance model has been undertaken by Payrató 2003).

3.1. Gestures of the precision grip: The G-family (Grappolo) and the R-family (Ring gestures)

Following Morris et al.’s (1979) observations that gestures of the “finger bunch” or grappolo and the so-called Ring gestures both root in actions of grabbing or holding something (small) in a precise fashion, Kendon (2004: 225–247) presents context-of-use analyses for these two gesture families. The grappolo family unites gestures in which all fingers are drawn together, the knuckles are flexed, the fingertips touch, and the palm faces upwards, including the Italian quotable gesture of the “mano a borsa” (variant (b) below; see also Kendon 1995; Poggi 1983). The general semantic theme might be described as extracting and seizing the essence of something (“topic seizing”), and the action motif “is that of holding onto something and making it prominent for the attention of the other” (Kendon 2004: 236). Kendon reports that the closing of the hand into the grappolo reminds Neapolitans of pulling something out and seizing it. Within the G-family, Kendon examines four variants that differ in the transformation of the hand and their movement patterns: (a) the hand is closed into the grappolo and drawn towards the speaker; (b) the grappolo is oscillated several times; (c) the grappolo opens into a Palm Up Open Hand; (d) the grappolo is held in a vertical position and moved downwards vertically. While the fourth variant seems to be used at the propositional level, expressing the ideas of essence, substance, or core, the other three function pragmatically. Variant (a) is used to establish a topic which needs close attention, qualifying it as a clarification or specification in response to someone’s puzzlement. Because it marks the topic of the conversation, the gesture has a parsing function. Variant (b) is the (quotable) gesture of the mano a borsa, a grappolo that is held and oscillated several times. 
The performances of this gesture can vary on a formal (body parts included in the movement, direction of the oscillation, etc.) and semantic level, leading to possible sub-categories. The common feature is the oscillation of the grappolo. In dictionaries of Italian gestures, the mano a borsa has often been described as marking or performing a question. In the context-of-use analyses of Kendon (2004: 232), the gesture is used as a display of puzzlement, of something that contradicts the expectations


of the speaker and demands a justification or explanation. The third variant, (c), constitutes a sequence by itself: the grappolo is sustained, moved outwards, and the fingers open into a palm open hand with a forward or downward thrust. The thrust movement marks or expresses the comment on this topic, or a qualification or modification of it. The family of the Ring gestures (or R-family) is analyzed in a similar vein. In these gestures, the tips of the index finger and the thumb touch, forming a ring. Gestures of this family have been described and usually qualified as different emblems, whereby their meanings and usages root in different derivations (a context-of-use analysis of the Ring gesture in German is presented by Neumann 2004). Kendon concentrates his analyses on gestures of the R-family that root in precision grip actions: holding something between the tips of the index finger and thumb. The shared semantic theme is related to ideas of preciseness and exactness. Just as in the G-family above, the specific fact or idea is made prominent. What differs from the G-family is the act of grabbing and therefore the object implied: here, one specific item is picked out from a variety of others. Kendon distinguishes three gesture variants: in variant (a), the hand starts in the Ring shape and is then opened; in variant (b) the hand is lifted, forming the Ring and then presenting it; in variant (c) the hand starts in Ring shape, the palm facing the speaker’s midline, and is moved in the vertical plane, the strokes of the gesture corresponding to the stresses of the verbal utterance. With the Ring gesture, the topic is nominated and at the same time marked as something precise, often in the context of an already established topic and within an expository discourse. 
With variant (b), also called the R-display, the gesturer offers precise information or gives precise instructions, when the information that is specified stands in contrast to what has been assumed or talked about before. Variant (c) is used when the speaker is making clear a specific point, when insisting, the baton-like movements being orchestrated with the verbal utterance.
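The variant-by-variant descriptions above lend themselves to a small data model. The sketch below is our own toy illustration, not part of Kendon's analyses: the class, the field names, and the form/function strings are invented paraphrases of the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GestureVariant:
    family: str    # "G" (grappolo) or "R" (ring)
    form: str      # schematic hand shape / movement pattern
    function: str  # function reported in the context-of-use analyses

# Kendon's four G-family variants, paraphrased from the text above.
G_FAMILY = {
    "a": GestureVariant("G", "hand closed into grappolo, drawn towards speaker",
                        "pragmatic: topic establishment (parsing)"),
    "b": GestureVariant("G", "grappolo held and oscillated (mano a borsa)",
                        "pragmatic: display of puzzlement, demand for explanation"),
    "c": GestureVariant("G", "grappolo opens into Palm Up Open Hand with thrust",
                        "pragmatic: comment or qualification of the topic"),
    "d": GestureVariant("G", "grappolo held vertical, moved downwards",
                        "propositional: essence, substance, core"),
}

def pragmatic_variants(family):
    """Keep only the variants functioning pragmatically (here: all but (d))."""
    return {key: v for key, v in family.items()
            if v.function.startswith("pragmatic")}
```

On this toy model, `pragmatic_variants(G_FAMILY)` keeps variants (a) to (c), mirroring Kendon's observation that only variant (d) operates at the propositional level.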

3.2. Palm Up Open Hand

The Palm Up Open Hand seems to be one of the most widespread gestures in everyday conversation and has been described by various authors (for a review, see Müller 2004: 233–235). Many gesture sequences end in an open hand, quite often with the palm facing upwards, which in these sequences functions in a comment-like manner. The two core kinesic features of the Palm Up Open Hand are the hand shape (fingers are more or less extended) and the orientation of the palm (facing up). Müller (2004) shows that two basic actions seem to serve as the derivational bases for all Palm Up Open Hand gestures described in the literature: “giving, showing, offering an object by presenting it on the open hand” (Müller 2004: 236), and the readiness to receive an object by the display of an empty open hand. The various gestures of the family studied by Müller are based on the instrumental action of presenting something on the palm of the open hand and sharing it for joint inspection; they share kinesic features and a common meaning “as a result of the functional extension from the instrumental action” (Müller 2004: 240). The Palm Up Open Hand in its “default” variant is used “in contexts of presenting some kind of discursive object to an interlocutor, where this object may be offered for inspection or suggested as candidate for agreement, hence where the gesturer invites the listener to share a proposed perspective on something” (Müller 2004: 241). Her conclusions fit Kendon’s (1995) suggestion that pragmatic aspects “tend to show a higher degree of consistency” (Müller 2004: 252). Apart from the formational variants, the sequential position within the conversation is of crucial importance.


When the gesture is used around a transition relevance point, it functions not only performatively but also discursively, regulating the conversation. When it is used within a current turn, the gesture presents the developed arguments as obvious (Müller 2010: 56).
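A minimal sketch of how the two core kinesic features and the position-dependent functions just described could be operationalized, say in an annotation pipeline. This is our own assumption-laden illustration; the feature labels and position names are invented, and only the mappings paraphrase Müller (2004, 2010).

```python
def is_palm_up_open_hand(hand_shape, palm_orientation):
    """Check the two core kinesic features: fingers more or less
    extended, palm facing up. Labels ('open', 'lax', 'up') are invented."""
    return hand_shape in {"open", "lax"} and palm_orientation == "up"

def puoh_function(sequential_position):
    """Map the gesture's sequential position in the conversation to the
    functions reported in Müller (2010: 56)."""
    if sequential_position == "transition_relevance_point":
        return "performative and discursive: regulates the conversation"
    if sequential_position == "within_turn":
        return "presents the developed arguments as obvious"
    raise ValueError(f"unknown position: {sequential_position!r}")
```

The point of the sketch is the division of labor: form features identify the family member, while the sequential position, not the form, selects the function.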

3.3. The Brushing Aside Gesture

The Brushing Aside Gesture is a very common and frequently used gesture in Spanish everyday conversation. It has been described by Montes (2003) for Mexican Spanish, by Müller and Speckmann (2002) for Cuban Spanish, and by Müller (1998) and Teßendorf (2005, 2008, this volume) for Iberian Spanish speakers. The gesture is based on the action of brushing something aside, usually small, annoying objects, and it is claimed that the physical action it derives from persists in the semantic core of the gesture and thus determines its use. The Brushing Aside Gesture is most often used to ‘brush aside’ discursive objects or the behavior of others. Using the back or the side of the hand, instead of the sensitive palm, to remove things supports the assumption that the objects are indeed conceived of as annoying. Thus, a modal aspect – expressing a negative stance towards the objects in question – is always implied (see Müller and Speckmann 2002). Certain features of the action (e.g., the reason for the action: a state of annoyance; the effect of the action: a neutral or relieved situation; Teßendorf 2008, this volume) are metonymically foregrounded in each use of the gesture. While the modal use foregrounds the cause of the action, in its performative use the effect is foregrounded. Two places of execution can be differentiated functionally. When used at the midline level of the speaker (variant (a)), the gesture usually serves a modal and discursive function: qualifying something as negative and marking the end of a certain discursive activity (a turn, a listing of arguments, a narration). When the Brushing Aside Gesture is used at shoulder level (variant (b)), it is primarily used performatively, to express a communicative move. In contrast to variant (a), this gesture is most often used without accompanying speech, either as an utterance on its own or at the end of a completed verbal utterance. 
As for direction, the marked variant is the one in which the movement is directed towards somebody else (variant (c)). The addressed person becomes the object to be brushed aside; the gesture thus turns into an insult and a request to go away. Here, the gesture is used performatively as well, but it is meta-pragmatic (Streeck 2009) or addressee-oriented (in contrast to speech-oriented, see Teßendorf 2008, this volume), since its aim is to influence the behavior of others.
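The three Brushing Aside variants are distinguished by place of execution and direction, so the mapping to their dominant functions can be sketched as a small decision function. This is our own hypothetical encoding, not the authors' method; parameter names and return strings are invented paraphrases of the descriptions above.

```python
def brushing_aside_function(place, towards_addressee=False):
    """Toy mapping of the three variants to their dominant functions.
    place: height of execution, 'midline' or 'shoulder' (invented labels).
    towards_addressee: True for the marked, addressee-directed variant (c)."""
    if towards_addressee:       # variant (c): direction overrides place
        return "performative, meta-pragmatic (addressee-oriented insult/request)"
    if place == "midline":      # variant (a)
        return "modal and discursive (negative stance, closes an activity)"
    if place == "shoulder":     # variant (b)
        return "performative (a communicative move, often speech-replacing)"
    raise ValueError(f"unknown place of execution: {place!r}")
```

Note that direction is checked first: as the text says, variant (c) is the marked case, and its addressee orientation takes precedence over the height of execution.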

4. Gaps, trends, and new issues in the study of pragmatic gestures

It was said in the introduction that – at least in a general sense – any gesture is a pragmatic gesture. Many categories of gestures could receive this label: for instance, gestures accompanying (verbal) gestural deictics (Levinson 1983), as (nonverbal) units needed to identify referents and to understand utterances; coverbal gestures as discourse markers (Kendon 1995); or emblematic gestures as units defined by their illocutionary force (Payrató 1993, 2003). In fact, at present, no clear notion of pragmatic gesture is available, either in the area of (linguistic) pragmatics or in gesture studies. What we would expect instead is, as Wharton (2011: 384) puts it, that “[r]esearchers into gesture should no more ignore pragmatics than those working in pragmatics should ignore the study of gesture”. Archer, Aijmer, and Wichmann (2012) present the first current handbook on pragmatics (not an encyclopaedia) with several chapters on prosody, gesture, and non-verbal communication, thereby recognizing the relevance of these topics for pragmatic theory. Empirical research on gesture can supply much evidence for pragmatic theorizing, and conversely, pragmatic theories of interpretation can illuminate and contextually frame many gestural phenomena. Further studies will allow for a better understanding of the several types of interaction between verbal and non-verbal items in communicative contexts, and thereby of the pragmatic functions of gesture, cross-cultural differences in the use of so-called pragmatic gestures, and the cognitive congruence of verbal and non-verbal frames of reference (as in spatial cognition). If the aim of pragmatics is to explain the production and understanding of utterances, pragmatic gestures undoubtedly play an important role in it, and their contribution to notions such as relevance, salience, and informativeness should be elucidated. At the same time, as multifunctional and multidimensional items, their interactive role in the regulation of speech turns and in the structure of natural conversation must also be further analyzed.

5. References

Archer, Dawn, Karin Aijmer and Anne Wichmann 2012. Pragmatics. An Advanced Resource Book for Students. London/New York: Routledge. Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15(4): 469–489. Bavelas, Janet Beavin, Nicole Chovil, Linda Coates and Lori Roe 1995. Gestures specialized for dialogues. Personality and Social Psychology Bulletin 21(4): 394–405. Bohle, Ulrike 2007. Das Wort ergreifen – Das Wort übergeben. Explorative Studie zur Rolle redebegleitender Gesten in der Organisation des Sprecherwechsels. Berlin: Weidler. Brookes, Heather J. 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224. Brookes, Heather J. 2005. What gestures do: Some communicative functions of quotable gestures in conversations among Black urban South Africans. Journal of Pragmatics 37(12): 2044–2085. Cooperrider, Kensy 2011a. Book review: Tim Wharton (2009). Pragmatics and non-verbal communication. Gesture 11(1): 81–89. Cooperrider, Kensy 2011b. Pragmatics and nonverbal communication. An exchange. Further comment. Gesture 11(3): 388–393. Deppermann, Arnulf (ed.) 2013. Conversation Analytic Studies of Multimodal Interaction. Journal of Pragmatics 46(1), Special Issue. Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton. First published [1941]. Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98. Freedman, Norbert 1972. The analysis of movement behavior during the clinical interview. In: Aron Wolfe Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 153–175. New York: Pergamon Press. Fricke, Ellen, Jana Bressem and Cornelia Müller this volume. Gesture families and gesture fields. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. 
An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1630–1641. Berlin/Boston: De Gruyter Mouton. Goodwin, Charles 1986. Gesture as a resource for the organization of mutual orientation. Semiotica 62(1/2): 29–49. Goodwin, Charles and Marjorie Goodwin 1986. Gesture and coparticipation in the activity of searching for a word. Semiotica 62(1/2): 51–75.


Heath, Christian 1992. Gesture’s discreet tasks: Multiple relevancies in visual conduct and in the contextualisation of language. In: Peter Auer and Aldo Di Luzio (eds.), The Contextualization of Language, 101–128. Amsterdam: John Benjamins. Kendon, Adam 1981. Geography of gesture. Semiotica 37(1/2): 129–163. Kendon, Adam 1985. Some uses of gesture. In: Deborah Tannen and Muriel Saville-Troike (eds.), Perspectives on Silence, 215–234. Norwood, NJ: Ablex Publishing Corporation. Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279. Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press. Ladewig, Silva H. 2010. Beschreiben, auffordern und suchen – Varianten einer rekurrenten Geste. Sprache und Literatur 41(1): 89–111. Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6. Ladewig, Silva H. this volume. Recurrent gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1558–1575. Berlin/Boston: De Gruyter Mouton. Levinson, Stephen C. 1983. Pragmatics. Cambridge: Cambridge University Press. McNeill, David and Elena Levy 1982. Conceptual representations in language activity and gesture. In: Robert J. Jarvella and Wolfgang Klein (eds.), Speech, Place and Action: Studies in Deixis and Related Topics, 271–295. Chichester: Wiley. Mondada, Lorenza 2006. Participants’ online analysis and multimodal practices: projecting the end of the turn and the closing of the sequence. Discourse Studies 8(1): 117–129. Mondada, Lorenza volume 1. Conversation analysis: Talk and bodily resources for the organization of social interaction. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. 
Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 218–226. Berlin/Boston: De Gruyter Mouton. Montes, Rosa 2003. “Haciendo a un lado”: gestos de desconfirmación en el habla mexicano. IZTAPALAPA 53: 248–267. Morris, Desmond, Peter Collett, Peter Marsh and Marie O’Shaughnessy 1979. Gestures. Their Origins and Distribution. London: Jonathan Cape. Müller, Cornelia 1998. Redebegleitende Gesten. Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag. Müller, Cornelia 2004. The Palm-Up-Open-Hand. A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler. Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68. Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 202–217. Berlin/Boston: De Gruyter Mouton. Müller, Cornelia and Alan Cienki 2009. Words, gestures, and beyond. Forms of multimodal metaphor in the use of spoken language. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 297–328. Berlin/New York: De Gruyter. Müller, Cornelia and Ingwer Paul 1999. Gestikulieren in Sprechpausen. Eine konversations-syntaktische Fallstudie. In: Hartmut Eggert and Janusz Golec (eds.), „… wortlos der Sprache mächtig“. Schweigen und Sprechen in der Literatur und sprachlicher Kommunikation, 265–281. Stuttgart: Metzler. Müller, Cornelia and Gerald Speckmann 2002. 
Gestos con una Valoracı´on Negativa en la Conversacio´n Cubana. DeSignis 3. Buenos Aires: Gedisa.


Neumann, Ranghild 2004. The conventionalization of the ring gesture in German discourse. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 217–223. Berlin: Weidler.
Payrató, Lluís 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20(3): 193–216.
Payrató, Lluís 2003. What does 'the same gesture' mean? A reflection on emblems, their organization and their interpretation. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures, Meaning and Use, 73–81. Porto: Fernando Pessoa University Press.
Payrató, Lluís 2004. Notes on pragmatic and social aspects of everyday gestures. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 103–113. Berlin: Weidler.
Payrató, Lluís this volume. Emblems or quotable gestures: Structures, categories, and functions. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1474–1481. Berlin/Boston: De Gruyter Mouton.
Poggi, Isabella 1983. La mano a borsa: Analisi semantica di un gesto emblematico olofrastico. In: Grazia Attili and Pio Enrico Ricci-Bitti (eds.), Comunicare Senza Parole, 219–238. Rome: Bulzoni.
Poggi, Isabella 2004. The Italian gestionary: Meaning representation, ambiguity, and context. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 73–88. Berlin: Weidler.
Poggi, Isabella and Marina Zomparelli 1987. Lessico e grammatica nei gesti e nelle parole. In: Isabella Poggi (ed.), Le Parole nella Testa: Guida a un'Educazione Cognitivista, 291–328. Bologna: Il Mulino.
Schmitt, Reinhold 2005. Zur multimodalen Struktur von turn-taking. Gesprächsforschung – Online-Zeitschrift zur verbalen Interaktion 6: 17–61.
Sherzer, Joel 1991. The Brazilian thumbs-up gesture. Journal of Linguistic Anthropology 1(2): 189–197.
Streeck, Jürgen 1995. On projection. In: Esther N. Goody (ed.), Interaction and Social Intelligence, 84–110. Cambridge: Cambridge University Press.
Streeck, Jürgen 2005. Pragmatic aspects of gesture. In: Jacob Mey (ed.), Encyclopedia of Language and Linguistics, Volume 5: Pragmatics, 71–76. Oxford: Elsevier.
Streeck, Jürgen 2009. Gesturecraft: The Manu-Facture of Meaning. Amsterdam: John Benjamins.
Teßendorf, Sedinha 2005. Pragmatische Funktionen spanischer Gesten am Beispiel des 'gesto de barrer'. Unpublished M.A. thesis, Freie Universität Berlin.
Teßendorf, Sedinha 2008. Pragmatic and metaphoric gestures – combining functional with cognitive approaches. Unpublished manuscript, European University Viadrina, Frankfurt (Oder).
Teßendorf, Sedinha volume 1. Emblems, quotable gestures, or conventionalized body movements. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 82–100. Berlin/Boston: De Gruyter Mouton.
Teßendorf, Sedinha this volume. Pragmatic and metaphoric – combining functional with cognitive approaches in the analyses of the Brushing Aside Gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1540–1558. Berlin/Boston: De Gruyter Mouton.
Wharton, Tim 2011. Pragmatics and nonverbal communication: An exchange. Reply to the review of Kensy Cooperrider. Gesture 11(3): 383–393.

Lluís Payrató, Barcelona (Spain)
Sedinha Teßendorf, Berlin (Germany)


VIII. Gesture and language

117. Pragmatic and metaphoric – combining functional with cognitive approaches in the analysis of the brushing aside gesture

1. Introduction
2. Pragmatics and performativity in recurrent gestures
3. Metaphor in (recurrent) gestures
4. The "brushing aside gesture" in Iberian Spanish
5. Conclusion
6. References

Abstract
The goal of this article is twofold: first, to present aspects of the "brushing aside gesture", a recurrent gesture in Iberian Spanish everyday conversation in which the hand acts as if it were brushing something aside. The gesture is mostly used pragmatically to "brush aside" discursive objects, acting upon the concurrent speech as a speech-performative gesture, or as a performative gesture acting upon the behavior of somebody else. The article's second goal is to build a bridge between pragmatic (functional) and cognitive (metaphoric) approaches to gestures, since the "brushing aside gesture" combines pragmatic functions with metaphoric extensions. The interplay between pragmatic function and metaphoric and metonymic processes in the "brushing aside gesture" is the focus of this chapter.

1. Introduction

Pragmatic functions of gestures have been described as early as the first century A.D. The rhetorician Quintilian states that "with our hands we ask, promise, call persons to us and send them away, threaten, supplicate, intimate dislike or fear; with our hands we signify joy, grief, doubt, acknowledgment, penitence, and indicate measure, quantity, number, and time. Have not our hands the power of inciting, of restraining, of beseeching, of testifying approbation, admiration, and shame?" (Quintilian, Book 11, chapter 3, 86–87). Quintilian puts a strong focus on the illocutionary force of gestures and on the possibility that gestures act on their own, with or without the help of the concurrent speech. The "brushing aside gesture" (see also Montes 1994, 2003; Müller and Speckmann 2002; Teßendorf 2005) fits well into Quintilian's list: It ends arguments, qualifies parts of the speech as negative or irrelevant, and is used to get someone else to stop his or her behavior. Within Spanish everyday communication the "brushing aside gesture" is most often used to "brush aside" discursive objects or the behavior of others, thus realizing a pragmatic and meta-communicative function. It is a recurrent gesture in Spanish everyday conversation, based on the action of brushing something aside, usually small, annoying objects. It is claimed that the actual physical action it is derived from persists in the semantic core of the gesture and thus determines its use. It is a recurrent gesture because "it is used repeatedly and its formal and semantic core remains stable across different contexts and speakers" (Ladewig 2011; for an overview and a demarcation of "recurrent gestures", see Ladewig this volume a).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1540–1558


We also suggest that the gesture relies on a metaphoric extension of the real action in the real world into the realm of communication. Thus, the gesture can also be called "metaphoric", since its ability to be used pragmatically, as a communicative action, seems to rely on a conceptual metaphor like ideas (or meanings) are objects (Lakoff and Johnson [1980] 2003: 10). In this chapter, we will show how these dimensions are intertwined in this recurrent gesture. In order to do so, we will first shed some light on the discussion of recurrent gestures from a pragmatic perspective (Brookes 2004, 2005; Kendon 1995, 2004; Ladewig 2006, this volume b; Müller 1998, 2004; Payrató 1993, 2004; Payrató and Teßendorf this volume; Seyfeddinipur 2004; Streeck 1994, 2005, 2011; Teßendorf 2005) and from a cognitive perspective (Calbris 2003; Cienki and Müller 2008; McNeill 1992, 2005, 2008; Müller and Cienki 2009; Parrill 2008; Webb 1996, inter alia), before turning to the study of the "brushing aside gesture" in Spanish everyday conversation. The examples present a variety of functions the "brushing aside gesture" can take up, as well as some metaphoric variations, and they show how pragmatic function, metaphor, and metonymy work together.

2. Pragmatics and performativity in recurrent gestures

Inspired by Adam Kendon's (1995) article about gestures as 'illocutionary and/or discourse markers' in Southern Italy, many studies of recurrent gestures appeared in the following years. In this study, Kendon compared the use of emblematic gestures, for example the mano a borsa (a 'purse hand' which is moved upwards and toward the speaker), with the uses of formally similar recurrent gestures, here the "finger bunch", whose status of conventionalization was undefined. Using a functional context-of-use analysis, Kendon found that emblems as well as the other recurrent gestures were mostly used together with speech – whereby only the emblems could act as a replacement for it – and that both types acted upon discourse, conveying pragmatic rather than substantial information. Another finding was that the emblems were used primarily with a performative function (see Kendon 1981; Müller 1998, 2010; Payrató 1993, 2003); that is, the mano a borsa was used to indicate a request or a negative comment on somebody else's speech, and with this displayed "the speech act or illocutionary act that the speaker intends or has intended with a given utterance" (Kendon 1995: 258). In contrast, the "finger bunch" was used to mark the topic, or to label the essence or theme of the discourse (Kendon 1995: 266). The configuration of the "finger bunch", meaning hand shape and orientation of the palm, is very similar to that of the mano a borsa. What differs is the movement pattern: The latter gesture seems more dynamic, starting out as a grasping movement, then moving downwards and towards the interlocutor, commonly ending up in a flat, open hand, the palm facing up. These differences in form lead to semantic differences: The movement of the mano a borsa seems to draw the wanted and expected information towards the speaker, whereas the movement of the "finger bunch" resembles holding onto something or presenting it.
Regarding the status of these gesture types, Kendon concludes that the mano a borsa is an emblem or a "quotable gesture" (see Kendon 1992), because it can substitute a complete speech act and "is 'detachable' from any particular spoken structure" (Kendon 1995: 267). The "finger bunch", on the other hand, needs the spoken utterance to act upon. Its use as a means for presenting the essence of the utterance can be regarded as performative but is tied to the structure of the conversation. Although the illocutionary force does not seem as strong as in the mano a borsa, the gesture accomplishes the communicative action or move instead of depicting it (see Müller 1998, 2010; Müller and Cienki 2009; Müller and Haferland 1997). Both gestures can be regarded as stylized and conventional, belonging to a repertoire of "gestural forms" of a certain community. Kendon assumes conventionalization for two reasons: metaphor and pragmatic function. He assumes that both gestures, the mano a borsa and the "finger bunch", draw upon "the metaphor of bringing a set of things together, uniting them into a common object, or grasping on to something that is small and light, something that can be held with the tips of the fingers, as might befit the notion of the 'essence' of that something" (Kendon 1995: 267). And, since most metaphors are socially shared, as shown by Lakoff and Johnson (2003) and others, it is most likely that the gestural forms created upon them will be "fairly consistent" (Kendon 1995: 275) within a given community, too (see also McNeill 1992, 2005, 2008). The other reason for stylization seems to be strongly connected to the pragmatic function of these gestures. The difference in a possible process of conventionalization between pragmatic and referential gestures lies in the information they convey: Whereas the referential content of utterances and gestures seems infinite, the number of speech acts or, more generally, interactional moves is limited, which means that "a vocabulary of gestures marking these can be more readily established" (Kendon 1995: 275). The properties of all gestures – that they are silent, quickly produced and perceived, reminiscent of actions, and medially independent of speech – make these gestures most suitable to convey information about the intentional frame of an utterance or parts of the discourse without "interrupting the verbal flow of discourse" (Kendon 1981; see also Bavelas et al.
1992: 469; Payrató 2003: 79; Payrató and Teßendorf this volume). In this tradition, further studies of recurrent gestures appeared, taking two possible routes: One was to establish repertoires of emblematic gestures, working on a systematic view of the functions or speech acts they perform (Brookes 2004, 2005; Kendon 1981; Payrató 1993; Posner 2002; Saitz and Cervenka 1972, inter alia); the other was to take recurrent co-speech gestures as a starting point for micro-analytic case studies (Bressem and Müller this volume a, b; Brookes 2001; Calbris 2003; Kendon 1995, 2004; Ladewig 2006, 2011, this volume a; Müller 2004; Müller and Speckmann 2002; Neumann 2004; Seyfeddinipur 2004; Teßendorf 2005), investigating their functions, their relation to speech, their dependence on speech, and thus their ability to become "detachable" from any spoken utterance. The question of the status of this class of gestures, and how it relates to a supposed opposition between emblems and idiosyncratic gestures (Efron [1941] 1972; Ekman and Friesen 1969; McNeill 1992, 2000, 2005), was always implicitly or explicitly addressed when studying recurrent gestures (see Ladewig [this volume a] for a thorough description of the class of recurrent gestures).

2.1. Case studies of recurrent gestures

In Seyfeddinipur's (2004) study of the "pistol hand", a widespread gesture in Iran in which the hand is held flat, palm oriented upwards or slightly diagonally, with thumb and index finger extended so as to iconically embody the form of a pistol, she found that the gesture was used for two distinct pragmatic functions. The first was to mark the comment, the crucial or central part for the understanding of a story, thus contrasting it with the topic of the utterance, which itself was often marked by the use of the "ring gesture". The second function was to perform an indirect or direct directive speech act. When the


gesture was used to perform a directive speech act, i.e., ordering or forcing somebody to do something, it could stand on its own, without accompanying speech. The two functions could be distinguished by their contexts-of-use and by differences in their form, the performative function being executed with a more defined and crisper form, with a fully extended index finger and an upward-oriented palm. Seyfeddinipur concludes that the "pistol hand", at least in its performative function, with its clear-cut form, is a conventionalized Persian gesture, although a verbal gloss is still lacking. In Ladewig's study of the "cyclic gesture" (2006, 2011, this volume b), in which the hand is moved in situ with at least two circular movements to the front, she found the gesture to enact different functions. It supplied referential information when describing the action of scumming, and it was used pragmatically when marking a concept or word search, or a request – here also without speech. Using a context-of-use and a sequential analysis, Ladewig found that certain formational patterns, the size of the gesture and its position in gesture space, corresponded to different functions. Although the semiotic construction of the gesture and its underlying semantic core is highly complex, involving the creation of an ICM (Idealized Cognitive Model) and the use of at least two different metaphors (Ladewig 2011), one fundamental underlying metaphor for this gesture seems to be mind is machine (Ladewig 2006, 2011), which enables the gesture to illustrate the ongoing thinking process. When the gesture is used as a request for someone else to react more quickly or to keep on talking, it "cranks up" the speaking activity during discourse, thereby working on the metaphor communication is a process. Müller's (2004) fundamental study of the "palm up open hand" as a gesture family brings similar findings, systematizing the formal and functional approach.
Her analysis suggests that the "palm up open hand", with its fixed features of palm orientation (up) and hand shape (flat, open), combines with other kinesic features (i.e., rotation, alternation, repetition, handedness, lateral movement, etc.), which iconically add diverse meanings (emphasis, continuity, etc.) to the main semantic core and function of the gesture: presenting arguments and offering them for inspection to the interlocutor. The base of the gesture, the actual physical action of presenting something on the open hand, is converted into a gesture, now presenting arguments, ideas, etc. This semiotic process is explained through modulation in Goffman's sense (see Müller and Haferland [1997] for further details) and by metaphoric mappings (Lakoff and Johnson 2003). Together with Kendon (1995) and Streeck (1994, 2009), who call these recurrent gestures "speech-handlings", Müller argues that especially gestures which are based on everyday actions are most likely to be used for performative functions: Whereas in the real world these actions are used with objects, in conversation they are used to act upon speech or for "performing manual actions upon virtual objects" (Müller 2004: 236; see also Müller and Cienki 2009). As we have seen, all of the recurrent gestures introduced above can be used to act upon speech, or to display the communicative activity the speaker-gesturer is involved in. They can therefore act as performative gestures. The "finger bunch" presents information in a precise way, thereby qualifying the objects presented as something precise and central; the "pistol hand" combines a deictic movement toward somebody with a defined hand shape in order to make somebody do something; and the "palm up open hand" is used to present rather unspecified objects on the open hand for inspection.
The term performative for the function that these gestures share has been introduced by Müller (1998; see also Müller and Haferland 1997) in her functional gesture classification, where she draws on speech act theory as delineated by Austin (1962). Gestures taking up this function actually accomplish the action underlying the gesture, such as presenting in the case of the "palm up open hand"; they do not depict it. Similar to performing a verbal speech act – for example, an oath by using the "I hereby swear" formula – the gesture of an open hand with index and middle finger stretched, the thumb and ring finger touching, palm oriented away from the body, can fulfill the same function, performing the speech act of an oath. The observation that some of these gestures, like the oath gesture, are rather formal and their use is linked to a certain setting and personnel, whereas others, like the "palm up open hand" or the "finger bunch", are used freely in everyday conversation, leads Müller and Haferland (1997) to the conclusion that there are – at least – two different kinds of performative gestures. The first class is tied to ritualized contexts, such as the swearing of an oath in court or blessing and baptizing in church. Through their constant use in strictly organized, ritualized contexts, these gestures may then be performed independently of speech within this surrounding, taking up characteristics of traditional emblems. The second class, however, acts upon and within the realm of speech. As we have seen in the example of the "palm up open hand", these gestures perform interactional moves on the discourse structure of everyday conversations and are thus closely tied to it. If we now reconsider Kendon's differentiation of the mano a borsa and the "finger bunch", we might find yet another important difference regarding these types of performative gestures. Whereas the mano a borsa appeals to the behavior of someone else, the "finger bunch" conveys information about the communicative task of the speaker.
Nevertheless, the core performative function of both gesture types – that they accomplish an action instead of referring to it – seems to be the same. Taking up a similar perspective, Streeck (1994, 2005, 2009) places the difference between these two types of performative gestures, which he distinguishes as pragmatic and metapragmatic gestures (Streeck 2005, 2009), on a continuum. For Streeck, the difference between gestures that display communicative action and those that are used to regulate the behavior of others seems to be of limited analytic value only. Streeck states that "at one end of the polarity, gestures are aligned with what the speaker is presently doing, and convey something about it, at the other end they are performed in attempts to structure the actions of other participants" (Streeck 2005: 74). And one and the same gesture can be aligned in both ways at the same time. Although this is a very good argument, we believe that this functional difference should nevertheless not be disregarded. Taking the observations of the other authors of this section into account, we support the view that there are at least two different types of performatives within everyday communication, and that it seems useful to differentiate between these two functions analytically and also terminologically. We would therefore propose to consider gestures that display the communicative act of the speaker and act upon speech as "speech-performatives" (close to Streeck's metapragmatics), and those that aim at a regulation of the behavior of others as "performatives" (close to Streeck's pragmatic gestures, which comprise more than performative gestures; see Streeck 2011: chapter 8; Payrató and Teßendorf this volume). What we have seen in this section is that most recurrent gestures are based on instrumental actions or manipulations.
This seems to be important for a gesture to be used performatively, which – in the end – is not too surprising when we take seriously Austin's (1962) considerations about how to do things with words, or here, with gestures. We


have seen that the basic actions of these gestures are metaphorically extended, in that they then “figure aspects of the processes of speaking and communicating as handlings of physical objects” (Streeck 2005: 74), e.g., when the “palm up open hand” is used to present an idea instead of a tactual, real-world object (see also Cienki and Müller 2008).

3. Metaphor in (recurrent) gestures

Within gesture research, there has always been a strong interest in investigating the significance of metaphors in gesture creation and use, especially from a psychological or cognitive perspective (see de Jorio [1832] 2000, Kendon 2004, and Wundt 1900 for a historical overview; Calbris 1990; Cienki 1998; Cienki and Müller 2008; McNeill 1992, 2000, 2005, 2008; Mittelberg 2006; Mittelberg and Waugh 2009; Müller 2004, 2008, 2010; Müller and Cienki 2009; Sweetser 1998; Webb 1996, inter alia). I will therefore concentrate on aspects of gestural metaphoricity that can be combined with the findings of the functional analyses of the previous section. When Wilhelm Wundt (1900: 182–199) developed his theory about the evolution of language out of expressive movements, assuming gestures to be the first step towards any symbolization process, he implicitly took the notions of metaphor and metonymy into account. Wundt proposed the term "symbolic" gestures for those that are involved in metaphoric processes; they represent the most complex and culturally anchored gesture type in his classification. Within this class, he further distinguishes between "primary" and "secondary" symbolic gestures. For primary symbolic gestures, meaning construction is symbolic and arbitrary right from the start: They are created intentionally as signs for abstract concepts which have no directly representable referent. By contrast, secondary symbolic gestures are based on iconic gestures and obtain their symbolic load through a shift from (rather direct) iconicity to associations and indirect reference. Whereas non-symbolic gestures are associated rather directly with their referent – e.g., a hand embodying the head of a donkey refers directly to the referent "donkey" – secondary symbolic gestures need at least one intermediate step (or connecting link) between iconicity, reference, and their interpretation.
When the above-sketched gesture is used to refer to a person's foolishness and becomes a symbolic gesture, several intermediate steps are needed which link the gesture to the directly associated referent (through processes of abstraction and metonymy): the donkey. Then there is a link from the donkey to its conventionally – and in German proverbially – ascribed characteristic of "foolishness", and finally a link from the foolishness of the animal to the foolishness of the person referred to (Wundt 1900: 185). The symbolic "donkey gesture" still relies on iconicity, but iconicity alone does not suffice to associate the gesture with its meaning. In symbolic gestures in general, the associations or links between the base and its referent (see Kendon 1980; Müller 2010) can be backgrounded to the point where their initial iconic relation is almost lost and the gestures become arbitrary and fully conventional. Sticking to the donkey example, this would be the case if donkeys became extinct but the gesture remained as a predication of someone's foolishness, without a recognizable trace between the base and the referent. The gesture would then have to be used by convention, and its form might be judged arbitrary. The loss of transparency, though, is not a definitional criterion for symbolic gestures for Wundt. In his view, the possibility of consolidation between form and meaning in the case of secondary symbolic gestures is enabled through constant use and is not a matter of transparency or arbitrariness. A


common gesture for dismissal and refusal, for example, where the hands are moved to the front, the palms away from the body, is considered a symbolic gesture, being based on the – in newer terms – "metaphorical extension" of the physical action of pushing something away toward virtually pushing something or someone away. It is interesting to note that, although Wundt does not provide a systematic account of the gestures' usage, many of his examples of secondary symbolic gestures are used pragmatically in the sense that they perform speech acts and thus constitute performative gestures. To conclude, Wundt states two possibilities for gestures presenting abstract concepts or metaphors: One is the evolution of a gesture from an iconic gesture; the other is to choose a somewhat relatable form for an abstract concept. Both gesture types, however, are closely tied to a certain culture and cultural norms. In his 1992 book, David McNeill introduced the category of metaphoric gestures, which together with iconics constitute the class of imagistic and representational gestures, contrasting them with beats, deictics, and emblems, and excluding the last from further consideration. Since McNeill considers gestures an important "window onto thought", he focuses on gestures with which "people unwittingly display their inner thoughts and ways of understanding events of the world […]" (McNeill 1992: 12). In his classification, he defines metaphoric gestures as "iconic in that they are pictorial, but the pictorial content presents an abstract idea rather than a concrete object or event" (McNeill 1992: 14). In recent publications (2000, 2005, 2008), however, McNeill leaves the path of a seemingly rigid classification and prefers to refer to dimensions of iconicity, metaphoricity, and so forth (McNeill 2005), emphasizing the multifunctional character of co-speech gestures.
Yet, that does not challenge one of his main assertions about metaphors in gesture and how they relate to a presumed taxonomy of the gestures expressing them. McNeill claims that, because metaphors, be they "expected" or "unexpected" (McNeill 2008), are so pervasive, gestures expressing them tend to appear systematic as well. "Expected" metaphors draw on culturally shared images or conceptual metaphors, such as the conduit metaphor. The metaphoric gesture expressing it seems to be conventional for just this reason: "It is 'expected' in the sense that, given a repertoire of metaphors embodied in a culture, form and content are more or less predictable" (McNeill 2008: 185). The visibility of "unexpected" metaphors is based on the context; they appear in combination with catchments, with which they build a contrast. They are produced ad hoc in the on-line processing of thinking and speaking, when there is no cultural image available. The "unexpected" metaphors have a "discourse and an utterance creation function. They form a bridge between the core idea unit or growth point of the utterance at the moment of speaking, and the larger discourse framework" (McNeill 2008: 187, italics in the original) – for example, when they iconically display a bowling ball being thrust down and metaphorically express the idea of antagonistic forces (McNeill 2008: 188). The function of "unexpected" metaphors is to differentiate within a field of oppositions and to bridge thought to context. They are pragmatic because they focus on the discourse structure, the unfolding and differentiation of the growth point, and its relation to the coherence structure of the catchments, showing a different pragmatic function than the recurrent ones: They alter or qualify the illocutionary force, acting in a meaningful way upon the referential side of the utterance.
They relate to the utterance structure by commenting on it or by qualifying a part of the utterance as the topic or the comment, regardless of how the content of the story is structured or how it is seen.


There are strong reasons for seeing similarities between gestures that express metaphors of the expected kind and recurrent gestures, since both consist of stable form-meaning pairings which possess at least some degree of conventionality – be it through metaphor only, or through the combination of metaphor, constant use, and pragmatic functions – and thus can be used to act independently upon the referential side of an utterance. But it might not be the case that any (expected) metaphoric gesture can adopt any pragmatic function. Taking McNeill's metaphoric gestures as a starting point, Rebecca Webb (1996) investigated their form and use in three different naturally occurring communicative situations. A rather impressive finding of her study was the high number of "expected" metaphoric gestures in natural conversations, most of which were used by different speakers in different contexts. Although her formal and functional analysis is rather generic, her results show that there is a culturally shared repertoire of metaphoric gestures which consists of stable form-meaning pairings, is used across speakers in different contexts, and can be formally described and identified. Webb reinterprets Kendon's (1995) "pragmatic gestures" as metaphorics but does not focus on the fact that some of her gestures are actually used pragmatically and some are not (e.g., the together gesture, Webb 1996: 96). The example of a metaphoric gesture from which she departs in her study is McNeill's famous "cartoon as a genre as an object" gesture (McNeill 1992: 148), based on the conduit metaphor, where the cartoon is presented as if it were a manipulable object. The pragmatic function of this gesture is that it presents information on the meta-linguistic or illocutionary level and thus acts upon speech. But it has this feature because "a treatable object which stands for the cartoon" is presented, and not because the cartoon is seen as an object.
Using a gesture based on the conduit metaphor is thus not necessarily pragmatic, but it is certainly metaphoric. In an experimental study of the differences between recurrent, “expected” metaphoric gestures and emblems, Parrill (2008) compares the conventional status of the “palm up open hand”, as described by Müller (2004) and Kendon (2004), with that of the emblem for okay (thumb and index finger touch, while the remaining fingers are loosely extended), being the first to examine this relationship empirically. Claiming that the “palm up open hand” gesture (or the presenting gesture, as she calls it) belongs to the class of representational gestures and thus differs cognitively from an emblem, she bases her argumentation on a revised version of “Kendon’s continuum” (McNeill 2000, 2005), stating that “representational gestures occur with speech, are non-linguistic, not conventionalized, and are global and synthetic. Emblems differ according to all four dimensions” (Parrill 2008: 228). It is interesting to note that Parrill describes the conventionalization of emblems, in opposition to representational gestures, as “a collective agreement that motivates the use of a certain form” (Parrill 2008: 228), seemingly reserving the notion of visuo-spatial thinking for the motivation of representational gestures. Her argumentation implicitly hints at an understanding of emblems that resembles Wundt’s “primary” symbolic gestures. For Parrill as well as for McNeill, metaphoric gestures belong to the category of representational gestures and as such are “gestures, which represent something in the accompanying speech” (Parrill 2008: 228). Within the borders of this definition, it is difficult to account for metaphoric gestures that do not represent anything of the accompanying speech but act upon it in a meta-communicative manner. In their performative use, they stop being representational.
VIII. Gesture and language

The differentiation between representational or propositional and illocutionary aspects of speech, as proposed by Austin (1962) and Searle (1969) for verbal speech, and by Müller (1998) and Payrató (1993) for gestures, is very useful for the analysis of gestures, especially when comparing recurrent gestures with emblems. By excluding the topic of the difference between referential and pragmatic functions of gestures, “because it is a functional and not a psychological division, thus its consequences for production or cognition are not spelled out” (Parrill 2008: 233), Parrill also excludes the functional similarities of recurrent gestures and emblems that might lead to a similar process of conventionalization. But she emphasizes an interesting point on which referential metaphoric gestures and pragmatic gestures might differ: their target domains. The target domain of pragmatic gestures seems to be discourse itself (Parrill 2008: 234). We will take this assumption into consideration when we turn to the analysis of the “brushing aside gesture”. To conclude, metaphor, as a dynamic process of seeing things in terms of something else, plays an important role in gesture creation and use. For Wundt, the development of fully conventional symbolic gestures of the secondary type is a question of their frequent use in a variety of contexts. Transparency or arbitrariness is not considered a criterion of standardization or conventionality. McNeill’s “expected” metaphoric gestures, which are based on conceptual or at least conventional metaphors, and which are also the object of Webb’s investigation, seem to partly overlap with what we call recurrent gestures. But, as we have noted above, not all gestures that are based on conventional metaphors can take on pragmatic functions. The distinction between pragmatic gestures and representational gestures, which, according to Parrill, lies in the difference of target domains, seems worth further inspection.
We have already observed that gestures that are based on the conduit metaphor and have discourse or communication as target domains can still be representational.

4. The brushing aside gesture in Iberian Spanish

The study of the “brushing aside gesture” was undertaken within Iberian Spanish (Teßendorf 2005), on the assumption that when a gesture is used frequently, it might have a stable form and meaning and be used systematically within one culture. The data used for the analysis consisted of eight hours of videotaped everyday conversation (corpus Müller [1998] and Teßendorf [2005]), from which 64 instances of the “brushing aside gesture” were extracted and analyzed, taking micro-analytic procedures, as outlined in Müller (2004), as a starting point for the functional analysis. One of the starting points was the assumption that the “brushing aside gesture” was based on the action of brushing something aside – usually small, annoying objects like crumbs, mosquitoes, dust, etc. – and that the characteristics of the action were transferred metaphorically to the realm of communication by the use of the gesture (for the semiosis of gestures being based on actual physical actions, see Kendon 1980, 1981; Müller 2010; Müller and Haferland 1997; and Posner [2002] on ritualization). Accordingly, the “brushing aside gesture” was defined as an enactment of the action scheme of brushing something aside (see Tab. 117.1), performed with a quick flick of the wrist, the most prominent feature of the actual physical action. The use of the back or the side of the hand, instead of the sensitive palm, to remove things from one’s immediate surroundings supports the assumption that the objects are indeed conceived of as annoying.


Tab. 117.1: Action scheme of “brushing something aside” (Teßendorf 2007)

ACTION SCHEME OF BRUSHING SOMETHING ASIDE
Point of departure: unpleasant situation
Cause: annoying objects are in the immediate surrounding
Action/Process: the back of the hand brushes these objects aside
Endpoint/Goal: objects are removed; end of unpleasant and recovery of a neutral situation

At the initial point of the action, as in the gesture, the fingers are slightly bent, opening up dynamically into a relaxed open hand, thereby moving away from the body: to the side, over the shoulder, over the head, and sometimes toward the front. The gesture is tied to the underlying action through iconicity, abstraction, and metonymy, since, firstly, the objects involved in the action are not part of the gesture but have to be metonymically inferred (Mittelberg 2006; Mittelberg and Waugh this volume), and, secondly, the different pragmatic functions metonymically highlight different aspects of the underlying action scheme (see Bressem and Müller [this volume b] for a new approach to gesture families based on the selection of parts of an action scheme).

4.1. Referential and metaphoric: they brushed them aside

The following excerpt is taken from a dyadic conversation about the appearance and disappearance of Romani people in certain areas of Berlin and London. Speaker A says that – in contrast to the past – there aren’t as many Romanies in Berlin anymore, “they brushed them aside”, thereby performing a “brushing aside gesture” with the left hand to the left side.

(1)

Example “los barrieron”

       [lh if beat | lh if beat | lh if beat ] rp
1 A:   pero [sAbes que | yA no hay | tantos rumanos] en berlín
       ‘but you know there aren’t as many rumanians in berlin anymore,
       [prep. lh BAS t.l. ] rp
2 A:   [los barRIEron ]
       they brushed them aside’

Note that in the first part of the utterance, A draws the attention of the interlocutor to the upcoming part by two means: first, with a beat-like gesture, a stretched index finger moved toward the interlocutor three times, in this case operating as an attention getter, and second, with her gaze, which she directs toward her gesture. When she utters los barrieron (‘they brushed them aside’), the stroke of the gesture is synchronized with the term barrieron, and we can assume that both expressions, the verbal and the gestural, are semantically co-expressive, deploying the same meaning. Gesture and verbal utterance work together in the metaphorical extension of the treatment of Romanies in Berlin, since they were not brushed aside literally. The source domain – the action of brushing small, annoying objects aside in order to get them away from one’s immediate surroundings – is mapped onto the target domain: the chasing away of real people, which is part of the proposition of the utterance. In drawing upon the concept of brushing something aside, embodied by the gesture and the verbal utterance, the negative and disrespectful attitude toward the Romanies – that they were treated as small, annoying objects – is expressed through metonymic operations or pragmatic inference (see Panther and Thornburg 2004; Teßendorf 2007). If we consider the action scheme as the underlying blueprint, the characterization of the objects (cause) and the cleaning up of one’s immediate surroundings (endpoint, goal) are equally highlighted in this example. The metaphor, as a means of seeing something as something else, operates on both modalities, the verbal and the gestural, showing the same source and target domains, and can thus be called verbo-gestural, following the argumentation of Müller (2008), or multimodal (Müller and Cienki 2009). The metaphor here is not used to express an abstract notion in terms of a concrete one: The action of brushing something aside and the action of chasing somebody away are both concrete actions (see also Fricke [2007] and Müller and Cienki [2009] for a discussion of metaphors within concrete domains). The metaphoric use of the “brushing aside gesture” in this example clearly belongs to the proposition of the utterance; it is part of the story being told. Speaker A reports verbally and gesturally what had happened to the Romanies in Berlin. This “brushing aside gesture” functions as a referential metaphoric gesture that is co-expressive with the verbal utterance.

4.2. Pragmatic and metaphoric: I brush it aside

In contrast to the referential or substantial use of the “brushing aside gesture”, which was found in the data only three times, its use with pragmatic functions was strikingly more frequent. Leaving aside the discursive and modal functions (see Montes [2003] for a detailed analysis of the modal function in Mexican Spanish), in which the gesture is used to mark the structure of the utterance or displays a negative attitude toward the objects in question – a function that strongly highlights what we have called the “cause” of the action scheme – we will now present two examples in which the “brushing aside gesture” serves the performative function of ending something or bringing something to an end. As we have seen before, the performative functions of the “brushing aside gesture” operate on two different levels: In the first example, the gesture acts upon the concurrent speech (see also Müller and Cienki [2009] for a similar example from Cuba) and thus constitutes a speech-performative function, whereas in the second example, it is completely detached from a spoken utterance, aiming at the behavior of someone else, and thus constitutes a performative function.

4.2.1. Speech-performative

The first example is taken from a dyadic conversation about the difficulties of finding an affordable apartment for a certain time. Speaker B reports his experience with a landlord who demanded a deposit of two monthly rents for a stay of one month. He says that he understands that providing expensive furniture might justify a deposit, but the apartment in question was empty. While saying fue como un estudio que lo utilizaba sólo allí para (‘it was a studio, which he only used for’) he performs a “brushing aside gesture” over his shoulder, whose stroke co-occurs with para (‘for’). After the gesture, the sentence remains unfinished. The speaker then restarts with the concluding remark no tenía absolutamente nada (‘there was absolutely nothing in it’).

(2)


Example “un estudio”

       [ bh PUOH | lh PBOH | lh PUOH |
1 B:   [por estar un mes | tienes que pagar dos de caucION/ | dices (.) |
       for staying there a month you would have to pay two as a deposit, you see,
       | lh PUOH to left |
2 B:   | y además ni hay nada importante en el PISo me entiendes/
       | and above all, there is nothing important in the apartment, you understand,
       | lh PUOH to left to right | lh PVOH | lh PUOH to left |
3 B:   | que decir un mogollón de cosas | por ahí/ (.) | pues yo que sé (.)
       | which means a lot of things there, well I don’t know,
       | lh PVOH ] rp
4 B:   | puedes romper] algo por accidente y:: (.) y tAL pero
       you might destroy something by accident and and so, but
       [lh PUOH to left | lh PUOH to left | BAS ]
5 B:   [fue como un estudio | que lo utilizABa sólo allí | para ]
       it was like a studio, which he used only for
6 B:   (-) no tenía absolutamente NAda (-)
       there was absolutely nothing in it.

In this example, the “brushing aside gesture” is again employed at the end of an utterance, at a possible transition point and, what is more interesting, at the end of a gestural phrase, right after two palm up open hands that slightly rotate toward the left. Let us take a look at the argumentation structure of this extract. In line one, the speaker introduces the fact that the owner wants a high deposit, followed by an objection (line two) that there was nothing important in the apartment that would justify it. What follows (lines three and four) is an elucidation of possible reasons for such a deposit, namely a fully equipped apartment, where a deposit would cover possible damages. But again, the apartment was only a studio, an argument presented by the gestures in line five. The speaker continues his argumentation with a conjecture about the use of this studio, which is cut off. The speaker may not know what the owner of the studio uses it for, or it might just not be of any interest. The “brushing aside gesture” ends the parenthesis that began in line three and brushes the excursion aside. The main argument against the deposit, the emptiness of the apartment, is presented in line two and, after the “brushing aside gesture” and the end of the excursion, is reformulated in line six: ‘there was absolutely nothing in it’. As we can see, there is no word or phrase in the utterance that can be connected in a meaningful way with the action of brushing something aside, so where does the metaphor come into play? In contrast to the previous example, the “brushing aside gesture” does not depict the action of brushing something aside but enacts it. The speaker “brushes aside” the excursion about possible reasons for the deposit and thus acts upon speech, in this case on the argumentation structure.
To be able to do so, the target domain of the “brushing aside gesture” – its metaphoric extension – is communication itself, the arguments that were presented beforehand. Communication here is conceived of as consisting of concrete and manipulable objects. Both gestures, the “palm up open hand” and the “brushing aside gesture”, act upon this premise by presenting these items or brushing them aside. It might be that the “palm up open hand” in this example sets the stage for the metaphoric interpretation of the “brushing aside gesture”. Sequences of speech-performative gestures appear quite frequently, as they create and visualize the structure of an utterance and show what kind of communicative activity the speaker/gesturer is engaged in. What is important here, in contrast to the first example, is, firstly, that the target domain is indeed communication or discourse, just as Parrill (2008) proposed. But secondly, the difference between these two examples lies in the speaker’s use of the gesture. In the first example, the gesture belongs to the reported speech; it illustrates the chasing away of the Romanies. Here, by contrast, the speaker uses it as a communicative action: He literally brushes the excursion aside. This can also be seen when we consider the underlying action scheme. While, again, we have a characterization of the objects involved (the arguments of the parenthesis) as small and therefore unimportant, the focus is on the ending of this excursion. What is highlighted in this example of a speech-performative gesture is the goal or endpoint of the action scheme. From a first-person perspective, parts of the discourse are brushed aside.

4.2.2. Performative

In the second example of a pragmatic use, the gesture is used without speech. This excerpt is taken from a dyadic conversation in which two friends discuss the negative working attitude of a common colleague. J complains that the colleague’s behavior leads to constant complications and more work for everyone else, which he is no longer willing to accept. Speaker S, on the other hand, defends the colleague and explains that he is having a hard time in his private life and that one needs to be patient with him. After she says ‘it was hard, in the meantime it is hard and it will be hard now’ (Era dura, durANte es dura y xxx ser dUra aHOra) and a short pause, J responds with a “brushing aside gesture” over his right shoulder, which is then followed by a gestural response by S: a “palm up open hand” and a shrug, performed at the same time.

(3)

Example “dura”

1 S:   [rh PUOH from r zigzag to left | rh PUOH to r ]
       [era dura durANte es dura y | xxx ser dUra aHOra]
       ‘it was hard, it is hard in the meantime, and xxx to be hard now’

2 J:   [BAS]

3 S:   [rh PUOH to right and shrug]

Both gestures are performed without speech; they take up the whole communicative load: We have a gestural mini-conversation. By using the “brushing aside gesture”, J not only rejects S’s plea for sympathy, but finishes the whole controversy, indicating that they will not reach an agreement on this topic. Although we find ourselves in a situation similar to the deposit example, where the objects brushed aside are part of the communication, the use of the “brushing aside gesture” in this case exceeds the level of the verbal utterance(s). Being detached from speech and therefore acting in its own right, the “brushing aside gesture” is not only used to express J’s eagerness to finish the topic, but is also a request addressed to S to quit the topic, to put it aside, to stop arguing. This interpretation is ratified by S’s gestural response, a “palm up open hand” combined with a shrug. This indicates that everything she has to say about the issue has been said and presented, and it is also a confirmation of his request to stop the discussion: S will not take it up again. Here, J’s “brushing aside gesture” aims at the behavior of S, in this case the activity of arguing. The target domain therefore is the activity of somebody else, and if this activity is arguing, as in the example just presented, the gesture requests that this communicative activity be “brushed aside”. As in the previous example, the gesturer uses the gesture in order to perform a communicative action. He does not use the gesture as a means of depicting the action of “brushing something aside” but enacts it. What is highlighted from the action scheme is again the goal or endpoint of the action: a situation where the annoying objects, here the quarrel, are brushed aside and stay removed from the immediate surroundings. A shift in the characterization of the objects can also be observed. In the previous example, the objects were qualified as irrelevant; here they are clearly seen as annoying. The main difference, though, between the speech-performative and the performative use of the “brushing aside gesture” lies in its direction and therefore its target: Whereas in the deposit example the gesture was directed toward the speaker’s own communication, acting upon speech, in the performative use the “brushing aside gesture” affects the behavior of someone else and thus becomes relevant in an interactive manner.

4.3. Summary

The examples of the “brushing aside gesture” discussed above show that one and the same gesture can take on different functions. The first example showed that it can refer metaphorically to the substance of an utterance, in this case working as a “narrow gloss gesture” (Kendon 2004) together with its verbal gloss (barrieron). In the second example, the “brushing aside gesture” functions as a speech-performative gesture. Neither the metaphor nor the function can easily be related to the concurrent speech. Only by expanding the perspective can one see that the metaphor, which enables the gesture to take on the ascribed function, works on the argumentation structure. One way of accounting for it could be the conceptual metaphor that treats ideas – or in this case arguments – as if they were objects (Lakoff and Johnson 2003: 10). According to this assumption, the action of brushing aside objects is metaphorically extended to brushing aside arguments or parts of speech. The target domain of the gesture is discourse. The “palm up open hand” and the “brushing aside gesture” act upon the utterance, thereby displaying the communicative activities the speaker is engaged in, and function as metaphoric and pragmatic gestures. This also holds for the third example. One of its major differences lies in its dissociation from speech. It is sequentially connected to speech, as it follows a verbal utterance by the interlocutor, but there is no concurrent verbal utterance upon which the gesture might act. The “brushing aside gesture” stands on its own as a purely gestural turn, working as a complete speech act. The metaphoric extension of this gesture transcends the realm of an utterance and operates on the level of communication as an activity. The “brushing aside gesture” expresses the interactional move of the gesturer toward his interlocutor in that he wishes the argumentation to be brushed aside, to be finished.
In contrast to the speech-performative function, the target domain of the performative gesture is not necessarily discourse or communication but the behavior of someone else. Here, the “brushing aside gesture” functions as a performative gesture and as such resembles the use of most emblems (see Kendon 1981; Payrató 1993, 2003), as it addresses the behavior of someone else. It is, following our argumentation, metaphoric and pragmatic, operating as an entire speech act.

5. Conclusion

At the beginning of the paper we introduced functional approaches to gesture use, focusing on the peculiarities of recurrent gestures, whereas in the second section we sketched cognitive approaches to gesture classification (or to gestures as cognitive phenomena). These traditions should not be seen as competing, since they approach the phenomenon from different angles and with different foci. Instead, as has been proposed here, the findings of both traditions can be combined to reach a better understanding of how gestures come to carry meaning and how this meaning is used in conversation. In this chapter, we limited our perspective to gestures that are to some extent iconic and transparent with regard to their base, and that present form-meaning pairings which remain stable throughout their use in different contexts. In contrast to beats they are semantically loaded, and in contrast to idiosyncratic gestures they are formally and semantically stable, which enables them to fulfill a range of pragmatic functions. All gestures presented in the first and last sections are dynamic in their movement, which means that a certain movement pattern is connected to their hand configuration. Besides the Iranian “pistol hand” and the “cyclic gesture”, both of which present special cases (see Ladewig this volume b), all these gestures are based on manipulations, which enables them to act upon objects, be they virtual, abstract, or concrete. This seems to be one of the prerequisites for an iconic gesture to function performatively, since in its speech-performative or performative use the action is performed; it is what the speaker/gesturer is doing in a conversation. What this implies is that the speaker/gesturer is gesturing and acting from a first-person perspective. What holds for Austin’s performative words holds for performative gestures as well: They need the first-person point of view to be performed successfully.
Just as it does not count as a speech act to say “he swears that […]”, it does not count as a performative gesture to express “he stopped me talking” with a “brushing aside gesture”. The difference between representational or referential and pragmatic gestures is not that the target domain of pragmatic gestures is discourse or communication, as Parrill proposed, although this holds for speech-performatives. As we have seen in the discussion of McNeill’s example of the conduit metaphor, metaphoric referential gestures can also have communication as their target domain. This may be a logical prerequisite for speech-performative gestures, but it does not suffice as an explanation for the differences between referential and pragmatic gestures, nor does it cover all pragmatic functions. One possibility would be to widen the scope of the supposedly underlying conceptual metaphor ideas or meanings are objects (Lakoff and Johnson 2003: 10) into a more general one, which treats behavior, ideas, arguments, etc. as concrete and treatable objects. This might be the basis on which speech-performative and performative gestures work. A different perspective, the one adopted in this chapter, starts from the actual actions and not from the conceptual metaphors. This seems to be in line with the idea of (pragmatic) gestures as “modulated action” (Müller and Haferland 1997) and, in a similar vein, with what Streeck (2009: 195) has called “embodied practices” or “practical metaphorizations” (Streeck 2009: 201) of actions and motor patterns. It is the assumption that there is a “fluid repertoire of abstract, schematic actions of the hands, actions that are ‘uncoupled from real-world consequences’ and thus available for symbolic use” (Streeck 2009: 201). To conclude, the combination of a cognitive viewpoint with a functional approach has certainly led us to a better understanding of the characteristics of recurrent gestures, but some questions regarding recurrent gestures, pragmatic functions, and cognitive activity remain unanswered. It is unclear how cognitive processes differ according to the functional use of a gesture. And last but not least, this chapter has been written with the implicit assumption that there is something like a class of recurrent gestures. The scope and limits of the phenomenon of recurrent gestures need further theoretical consideration and more empirical material.

Acknowledgements

I would like to thank Silva H. Ladewig, Jana Bressem, and Cornelia Müller for their support and commentaries on earlier drafts of this paper. The conventions of transcription follow Teßendorf (2005). For the visual channel: BAS = brushing aside gesture; PUOH = palm up open hand; PVOH = palm vertical open hand; rp = rest position; lh, rh, bh = left, right, both hands; t.l., t.r. = to left, to right (direction of the gesture). For the vocal channel: aHORa = emphasis; (.) = small pause; (-) = slightly longer pause; [si todavía] = beginning and end of a gesture phrase; un mes | tienes = beginning of a new gesture phase (stroke); para = marks the stroke; nada = marks the stroke of the “brushing aside gesture”.

6. References

Austin, John L. 1962. How to Do Things With Words. Oxford: Oxford University Press.
Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allen Wade 1992. Interactive gestures. Discourse Processes 15: 469–489.
Bressem, Jana and Cornelia Müller this volume a. The family of AWAY gestures: Negation, refusal, and negative assessment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1592–1605. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume b. A repertoire of recurrent gestures of German. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1575–1592. Berlin/Boston: De Gruyter Mouton.
Brookes, Heather 2001. The case of the clever gesture. Gesture 1(2): 167–184.
Brookes, Heather 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Brookes, Heather 2005. What gestures do: Some communicative functions of quotable gestures in conversations among Black urban South Africans. Journal of Pragmatics 37(12): 2044–2085.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Calbris, Geneviève 2003. From cutting an object to a clear cut analysis: Gesture as the representation of a pre-conceptual schema linking concrete actions to abstract notions. Gesture 3(1): 19–46.


Cienki, Alan 1998. Metaphoric gestures and some of their relations to verbal metaphoric expressions. In: Jean-Pierre Koenig (ed.), Discourse and Cognition: Bridging the Gap, 189–204. Stanford: CSLI Publications.
Cienki, Alan and Cornelia Müller 2008. Metaphor, gesture, and thought. In: Raymond W. Gibbs, Jr. (ed.), The Cambridge Handbook of Metaphor and Thought, 483–501. Cambridge, NY: Cambridge University Press.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A translation of ‘La mimica degli antichi investigata nel gestire napoletano’ with an introduction and notes by Adam Kendon. Bloomington: Indiana University Press. First published [1832].
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton. First published [1941].
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1: 49–98.
Fricke, Ellen 2007. Origo, Geste und Raum. Lokaldeixis im Deutschen. Berlin/New York: De Gruyter.
Kendon, Adam 1981. Geography of gesture. Semiotica 37(1/2): 129–163.
Kendon, Adam 1992. Some recent work from Italy on quotable gestures (emblems). Journal of Linguistic Anthropology 2(1): 92–108.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23: 247–279.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge, UK: Cambridge University Press.
Ladewig, Silva H. 2006. Die Kurbelgeste – konventionalisierte Markierung einer kommunikativen Aktivität. Unpublished M.A. thesis, Freie Universität Berlin.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6.
Ladewig, Silva H. this volume a. Recurrent gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1558–1574. Berlin/Boston: De Gruyter Mouton.
Ladewig, Silva H. this volume b. The cyclic gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), xx–xx. Berlin/Boston: De Gruyter Mouton.
Lakoff, George and Mark Johnson 2003. Metaphors We Live By. With a new afterword. Chicago, IL: University of Chicago Press. First published [1980].
McNeill, David 1992. Hand and Mind. What Gestures Reveal About Thought. Chicago, IL: University of Chicago Press.
McNeill, David 2000. Introduction. In: David McNeill (ed.), Language and Gesture, 1–10. Cambridge, UK: Cambridge University Press.
McNeill, David 2005. Gesture and Thought. Chicago, IL: University of Chicago Press.
McNeill, David 2008. Unexpected metaphors. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 185–199. Amsterdam/Philadelphia: John Benjamins.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. Ph.D. dissertation, Cornell University. Ann Arbor, MI: UMI.
Mittelberg, Irene and Linda Waugh 2009. Multimodal figures of thought: A cognitive-semiotic approach to metaphor and metonymy in co-speech gesture. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 329–356. Berlin/New York: Mouton de Gruyter.
Mittelberg, Irene and Linda Waugh this volume. Gestures and metonymy. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1747–1766. Berlin/Boston: De Gruyter Mouton.
Montes Miró, Rosa Graciela 1994. Relaciones entre expresiones verbales y no verbales en la organización del discurso. Estudios de Lingüística Aplicada 19/20: 251–272.

117. Pragmatic and metaphoric Montes Miro´, Rosa Graciela 2003. “Haciendo a un lado”: gestos de desconfirmacio´n en el habla mexicano. IZTAPALAPA 53: 248⫺267. Müller, Cornelia 1998. Redebegleitende Gesten. Kulturgeschichte ⫺ Theorie ⫺ Sprachvergleich. Berlin: Berlin Verlag. Müller, Cornelia 2004. The Palm-Up-Open-Hand. A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The semantics and pragmatics of everyday gestures, 233⫺256. Berlin: Weidler. Müller, Cornelia 2008. What gestures reveal about the nature of metaphor. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 249⫺275. Amsterdam: John Benjamins. Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37⫺68. Müller, Cornelia and Alan Cienki 2009. Words, gestures, and beyond: Forms of multimodal metaphor in the use of spoken language. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 297⫺328. Berlin/New York: Mouton de Gruyter. Müller, Cornelia and Harald Haferland 1997. Gefesselte Hände. Zur Semiose performativer Gesten. Mitteilungen des Deutschen Germanistenverbandes 44 (3): 29⫺53. Müller, Cornelia and Gerald Speckmann 2002. Gestos con una valoracı´on negativa en la conversacio´n cubana. DeSignis 3: 91⫺103. Neumann, Ranghild 2004. The conventionalization of the Ring Gesture in German discourse. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, April 1998, 217⫺224. Berlin: Weidler. Panther, Klaus-Uwe and Linda L. Thornburg 2004. The Role of Conceptual Metonymy in Meaning Construction. metaphorik.de 06. Parrill, Fey 2008. Form, meaning and convention: An experimental examination of metaphoric gestures. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 225⫺247. Amsterdam: John Benjamins. Payrato´, Lluı´s 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. 
Journal of Pragmatics 20(3): 193⫺216. Payrato´, Lluı´s 2003. What does ‘the same gesture’ mean? A reflection on emblems, their organization and their interpretation. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures, meaning and use, 73⫺81. Porto: Fernando Pessoa University Press. Payrato´, Lluı´s 2004. Notes on pragmatic and social aspects of everyday gestures. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 103⫺ 113. Berlin: Weidler. Payrato´, Lluı´s and Sedinha Teßendorf this volume. Pragmatic gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1531⫺1540. Berlin/Boston: De Gruyter Mouton. Posner, Roland 2002. Alltagsgesten als Ergebnis von Ritualisierung. In: Matthias Rothe and Hartmut Schröder (eds.), Ritualisierte Tabuverletzung, Lachkultur und das Karnavelske. Beiträge des Finnisch-Ungarischen Kultursemiotischen Symposiums 9. bis 11. November 2000, Berlin-Frankfurt (Oder), 395⫺421. Frankfurt a.M.: Peter Lang. Reddy, Michael 1979. The conduit metaphor. In: Andrew Ortony (ed.), Metaphor and Thought, 284⫺297. Cambridge, UK: Cambridge University Press. Saitz, Robert L. and Edward D. Cervenka 1972. Handbook of Gestures. The Hague/Paris: Mouton. Searle, John R. 1969. Speech Acts: An Essay in the Philosophy of Language. Cambridge, UK: Cambridge University Press. Seyfeddinipur, Mandana 2004. Meta-discursive gestures from Iran: Some uses of the ‘Pistol Hand’. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures., 205⫺216. Berlin: Weidler.
Sparhawk, Carol M. 1978. Contrastive-identificational features of Persian gesture. Semiotica 24(1/2): 49–85.
Streeck, Jürgen 1994. “Speech-handling”: The metaphorical representation of speech in gesture. A cross-cultural study. Unpublished manuscript.
Streeck, Jürgen 2005. Pragmatic aspects of gesture. In: Jacob Mey (ed.), Encyclopedia of Language and Linguistics, Volume 5: Pragmatics, 71–76. Oxford: Elsevier.
Streeck, Jürgen 2009. Gesturecraft. The Manu-Facture of Meaning. Amsterdam: John Benjamins.
Sweetser, Eve E. 1998. Regular metaphoricity in gesture: Bodily-based models of speech interaction. Actes du 16e Congrès International des Linguistes. Oxford: Elsevier.
Teßendorf, Sedinha 2005. Pragmatische Funktionen spanischer Gesten am Beispiel des “gesto de barrer”. Unpublished M.A. thesis, Freie Universität Berlin.
Teßendorf, Sedinha 2007. From everyday action to gestural performance: Metonymic motivations of a pragmatic gesture. Talk presented at the Second AFLiCo, 10–11 May 2007, Lille, France.
Teßendorf, Sedinha volume 1. Emblems, quotable gestures, or conventionalized body movements. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 82–100. Berlin/Boston: De Gruyter Mouton.
Webb, Rebecca 1996. Linguistic Features of Metaphoric Gestures. Unpublished Ph.D. dissertation, University of Rochester, New York.
Wundt, Wilhelm 1900. Völkerpsychologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte. Volume 1, Die Sprache. Leipzig: Engelmann.

Sedinha Teßendorf, Berlin (Germany)

VIII. Gesture and language

118. Recurrent gestures

1. What are recurrent gestures?
2. Why introduce the notion of “recurrent gestures”?
3. Methodological and theoretical aspects
4. Properties of recurrent gestures
5. Recurrent gestures on their way to language
6. The question of a demarcation between recurrent gestures and other gesture types
7. References

Abstract

Recurrent gestures have often been investigated under the label of pragmatic or interactive gestures (Bavelas et al. 1992; Kendon 1995). However, a closer look at the gestures allocated to this group reveals that they seem to have been identified on the basis of their conventional character rather than their pragmatic functions. Given these observations, the following chapter argues for introducing the term “recurrent gestures” into the field of gesture studies. In motivating this term, the chapter focuses on methodological and theoretical aspects, on properties of recurrent gestures such as their relation to speech as well as their semiotic and linguistic characteristics, and on their “potential for language” (Müller 2009). The chapter closes by approaching the question of demarcating recurrent gestures from other gesture types.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1558–1574

1. What are recurrent gestures?

Recurrent gestures aroused the interest of gesture scholars very early, although they were first addressed from the point of view of rhetoric (e.g., Quintilian 1969) or of the education of actors (e.g., Mosher 1916; Ott 1902). One reason for the engagement with this gesture type is probably the fact that recurrent gestures are conventionalized to a certain degree and culturally shared, and can thus be identified clearly within the stream of manual movements. Another important reason may be the observation that recurrent gestures often work on the level of speech, fulfilling pragmatic functions. Therefore, they have often been referred to as “pragmatic gestures” (Kendon 1995; Streeck 2005, 2009; Teßendorf this volume) (for an overview see Payrató and Teßendorf this volume), “gestures with pragmatic function” (Kendon 1995, 2004b), “interactive gestures” (Bavelas et al. 1992, 1995), or “speech handling” gestures (Streeck 2009), to name but the most widespread terms. Like emblems, recurrent gestures show a stable form–meaning relation and can be distinguished from “singular gestures” (Müller 2010b) or “iconic” and “metaphoric gestures” (McNeill 1992) due to their conventional character. Singular gestures have been described as spontaneous creations, which are used co-expressively with a certain speech segment and, as such, are part of the propositional content of an utterance. Recurrent gestures, by contrast, often fulfill performative functions, act upon speech, and form a repertoire of gestures that is shared within a culture. The following gives an overview of what is known about recurrent gestures to date. First, the introduction of the term “recurrent gestures” will be explained. Then the chapter turns to the methodological and theoretical issues raised by an analysis of this gesture type. Afterwards, properties of recurrent gestures will be elucidated, such as their relation to speech and their semiotic and linguistic characteristics.
The chapter closes with some considerations of the linguistic potential of recurrent gestures and the question of demarcating them from other gesture types.

2. Why introduce the notion of “recurrent gestures”?

As mentioned above, recurrent gestures have been referred to as pragmatic or interactive gestures, terms highlighting the pragmatic function this gesture type often fulfills. Why then introduce a new term? This is done for three reasons: First, referring only to the pragmatic function of this gesture type does not provide the full picture of the phenomenon and even reduces the functional range of these gestures. Second, particular characteristics such as their conventional character and their linguistic properties are captured by this term. In doing so, aspects other than their pragmatic function are brought to the fore, which, third, better reflect the semiotic nature of this gesture type. The conventional character of recurrent gestures is the main argument for introducing a new term to the field of gesture studies. Recurrence here refers to the building of a formational core that correlates with a semantic core. This stable form–meaning unit recurs in different contexts of use across different speakers in a particular speech community. However, although recurrent gestures have undergone processes of conventionalization, they cannot be considered emblems, since their meaning is schematic rather than word-like (see also Kendon 2004a; Ladewig 2010; Müller 2010b). Furthermore, the relation between the formational and the semantic core can be conceived of as motivated; that is, the meaning of a recurrent gesture is derived from its form. This motivational link of form and meaning is still transparent, which means that the semiotic base from which a gestural form is derived, for instance an instrumental action, contributes to the meaning of the gesture (Ladewig 2010, 2011, this volume b; Müller 2010b; Müller, Ladewig, and Bressem volume 1). This aspect differentiates recurrent gestures from emblems, among other things, as the link between form and meaning in emblems can in many cases no longer be reconstructed and is very often considered opaque. The characteristic of conventionality leads to the second reason for arguing in terms of recurrence rather than in terms of pragmatic functions. A closer look at interactive or pragmatic gestures reveals that these gestures were not primarily identified on the basis of their pragmatic function, but rather on the stable unit of form and meaning they build, which comes with particular pragmatic functions – all aspects that recur. Bavelas et al. (1992, 1995), for instance, identified the “palm up open hand” (Müller 2004; see also Kendon 2004b), the “holding away gesture” (Bressem and Müller this volume a; Müller, Bressem, and Ladewig volume 1), and the “cyclic gesture” (Ladewig 2010, 2011, this volume b) as interactive gestures, all of which clearly show a stable form–meaning relation. Kendon characterizes the mano a borsa ‘purse hand’ (Kendon 1995) as a pragmatic gesture, which he later determines to be a variant of the “G-family” (Kendon 2004b).
Likewise, in his data Streeck (2009) identifies gestures that show a stable form–meaning relation, such as the palm up open hand, or negation gestures like “moving things aside” and “throwing back”. Although referring to this gesture type as pragmatic gestures, Streeck (1993) also observed the conventional character of these movements, stating that “certain recurrent functions of gesture are fulfilled by different conventional forms in different communities” (Streeck 1993: 281, emphasis in the original). On another line of thought, applying the pragmatic function of gestures as a criterion for establishing a certain gesture type is somewhat misleading, as it implies that only a particular type of gesture can fulfill certain, in this case pragmatic, functions. However, many gesture scholars have observed that gestures are multifunctional. Iconic or metaphoric gestures, for instance, can also be used with a referential and a pragmatic function simultaneously, but they were not allocated to the group of interactive or pragmatic gestures. An iconic gesture can, for instance, convey complementary meaning to the proposition of an utterance and mark particular ideas as prominent, directing attention to them, as was shown for repetitive gestures (Bressem 2012, this volume). When used in syntactic gaps of interrupted utterances, an iconic gesture can complete the spoken utterance by adding semantic content to it while, at the same time, working as a turn-holding device regulating the interaction between the participants of a conversation (Ladewig 2012, this volume a). Last but not least, a further observation warrants the introduction of the term “recurrent gestures”: Gestures that have been allocated to the group of pragmatic or interactive gestures show gestural variants serving a referential function.
The brushing aside gesture, for instance, which is “most often used to ‘brush aside’ discursive objects, or the behavior of others” and to express “a negative stance towards the objects in question” (Payrató and Teßendorf this volume: 1536), can also be deployed to depict how concrete objects are brushed away to empty the speaker’s personal space. Likewise, the sweeping away gesture, used to reject topics or objects of talk “by (rapidly) moving the palm away from the center to the periphery” (Müller, Bressem, and Ladewig volume 1: 720), can illustrate a period of time or the action of smoothing a plane (Bressem and Müller this volume a). The cyclic gesture used with referential function can depict an ongoing action or mental process such as scooping or thinking. Excluding variants with a referential function from a particular group or class of gestures does not seem reasonable, since they show a) the same form and b) the same meaning as the gestural variants serving pragmatic functions in the first place. They are variants of the same gesture. Furthermore, gestural variants with a referential function often reflect the semiotic origin of a recurrent gesture, which becomes more and more abstract as a gesture undergoes processes of conventionalization (see sections 5 and 6). In fact, studies have demonstrated that the investigation of variants with a referential function gives insights into the emergence and development of a particular recurrent gesture (Brookes 2001, 2004; Ladewig 2010, 2011; Teßendorf this volume). These arguments warrant extending the notion of interactive or pragmatic gestures to include other gestural variants sharing the same semiotic base, form, and meaning (as is the case with variants serving a referential function). Even more so, they suggest applying the term “recurrent gestures” to this group of gestures, as it reflects the nature of this gesture type more clearly than a term encompassing only functional properties.


3. Methodological and theoretical aspects

The analysis of recurrent gestures has been refined over the last 20 years. Methodological steps in the analysis of emblematic as well as recurrent gestures have been brought together and systematized in order to offer an analytical grid for a detailed investigation of a recurrent gestural form and its occurrences. The analytical approach presented in the following was influenced mainly by work conducted by Bressem (volume 1), Brookes (2001, 2004, 2005), Kendon (1995, 2004b), Ladewig and Bressem (2013), Müller (2004, 2010b), Sherzer (1991), and Sparhawk (1978). In order to give a comprehensive account of a recurrent gesture, three different aspects are investigated in detail, namely its form, meaning, and function. These aspects are examined on a qualitative as well as a quantitative level. The determination of the formational (kinesic) core of a recurrent gesture is central to the whole analysis. Not only are occurrences of a gesture identified in the data on the basis of their form, but the formational core also builds the foundation for the reconstruction of the meaning and function of a gesture (see section 2). It is usually restricted to one or two form parameters, such as the movement or the configuration of the hand. Based on the gestural form, the action or movement pattern from which a gesture is derived – its semiotic base – can be reconstructed. Most of the recurrent gestures identified so far build on mundane actions, such as the “palm up open hand” gesture (Müller 2004) or the “brushing aside gesture” (Payrató and Teßendorf this volume; Teßendorf this volume). Only a small group of gestures is based on the representation of movements, as in the case of the “cyclic gesture” (Ladewig 2010, 2011, this volume b).
Since the gestural form is the key to analyzing the meaning of a gesture, this analytical step should ideally be conducted without paying attention to the concomitant speech, i.e., with the video sound turned off. This procedure helps to avoid a possible influence from the spoken utterance.


Based on the analysis of a gestural form, its meaning, i.e. the semantic core (the “semantic theme”, Kendon 2004b), can be reconstructed. The “gestural modes of representation” (Müller 1998, 2010a), the underlying actions, as well as “image schemas” (Johnson 1987) and motor patterns contribute to the meaning that is inherent to a gestural form. This basic meaning is reflected in all instances of a recurrent gesture but also varies according to its usage, that is, the local context and the context of use in which a gesture is placed (Ladewig 2007, 2010; Müller 2010b). Local context refers to the interactive environment of a recurrent gesture in a particular video example. It is informed by sequential, syntactic, semantic, and pragmatic information given by speech, but also by semantic and pragmatic information conveyed by adjacent gestures. It contributes to the local meaning and the local function of a particular instance of a recurrent gesture. Context of use (Kendon 1995; Ladewig 2007, 2010; Müller 2004, 2010b; Scheflen 1973; Sherzer 1991) is understood as the broader discursive situation in which a recurrent gesture occurs. Basically, the speech activity conducted by the speaker while s/he is using a recurrent gesture is identified, such as an enumeration, a description, or a request. The determination of the contexts of use builds the basis for the distributional analysis of a gesture and the identification of gestural variants. In this analytical step, the form-based analysis and the context-of-use analysis are combined in order to determine whether context of use and form vary systematically (e.g., Ladewig 2007, 2010; Müller 2010b). Studies have demonstrated that the different context variants often correlate with variation of form, meaning, and function.
The cyclic gesture used in the context of a word/concept search, for instance, is positioned in most cases in the speaker’s central gesture space and represents the ongoing searching process, thereby fulfilling the function of a turn-holding device (Ladewig 2010, 2011). The palm up open hand performed with a wide movement on the horizontal plane is used to offer a “wide range of entities” (Müller 2004: 252). With these different concepts and analytical steps, the forms, meanings, and functions as well as the distribution of a recurrent gesture can be determined (cf. Ladewig 2007, 2010, 2011; Müller 2010b). Moreover, questions of semantization and grammaticalization can be approached (see sections 5 and 6).

4. Properties of recurrent gestures

In what follows, some characteristics of recurrent gestures will be spelled out. The list of properties is inspired by McNeill’s reflections on the “Gesture Continuum” (formerly known as “Kendon’s Continuum”, see McNeill volume 1) and takes the relation of speech and gesture as well as linguistic and semiotic properties into account.

4.1. Relation of speech and recurrent gestures

The relation between speech and gesture concerns the co-occurrence of both modalities and their temporal relation. The distribution of semantic and pragmatic information across a multimodal utterance is also taken into account. Recurrent gestures, or particular variants of them, were subsumed under the notions of “conversational gestures” (see Bavelas et al. 1995) or “gesticulatory forms” (Kendon 1995), demonstrating that speech and gestures are tightly linked. Recurrent gestures, like other co-verbal gestures, interact with speech. However, the strength of the link between speech and gesture varies across the different variants.


When recurrent gestures are used with referential function, that is, when they depict objects, actions, and events, they give redundant or, most often, complementary information to the propositional content of an utterance. In these cases, recurrent gestures are co-expressed with a verbal unit, be it a word, a phrase, or a sentence. The object or action depicted is referred to in speech. Examples are rarely given in the literature, as often only gestural variants with a pragmatic function are taken into account (but see Bressem and Müller this volume a, b; Ladewig 2007, 2010; Ladewig, Müller, and Teßendorf 2010; Teßendorf this volume). Recurrent gestures that act as “speech performatives” (Teßendorf this volume) are deployed meta-communicatively and operate upon a speaker’s utterance. In doing so, they serve discursive and modal functions. When adopting discursive functions, these gestures often operate on the structure of an utterance, marking its topic or comment and bringing specific aspects to the receiver’s attention (e.g., the “ring gesture” [Kendon 2004b; Neumann 2004], the “pistol hand” [Seyfeddinipur 2004], or the mano a borsa [Kendon 1995, 2004b; Poggi 1983]). When taking up a modal function, they often display an attitude or stance towards something being said or done, as in the case of several negation gestures (Bressem and Müller this volume a; Harrison 2009a, 2010; Teßendorf this volume). In the majority of cases, speech performatives co-occur with speech and are tightly connected with the verbal unit they act upon. They might even be constrained by the syntax of a spoken utterance, as Harrison (2009b, 2010) has shown. He found that the stroke of a gesture may coincide with the “node” of a negation, i.e. the “location of a negation”, and that the subsequent “post-stroke hold” may co-extend with the “scope”, i.e. “the stretch of language to which the negation applies” (Harrison 2010: 29).
However, in some cases these variants of recurrent gestures can also be used without speech, in pauses. Some occurrences of the cyclic gesture used in word/concept searches, for instance, were observed in silent pauses. By representing the ongoing searching process, these gestures fulfill the same function as verbal disfluency markers, namely indicating that the speaker is engaged in a searching process. As such, when used without speech, these variants of the cyclic gesture replace verbal markers of hesitation and work as a turn-holding device (Ladewig 2010, 2011, this volume b). Recurrent gestures that work as “performatives” “aim at a regulation of the behavior of others” (Teßendorf this volume: 1544) and ‘perform’ the illocutionary force of an utterance. These gestural variants are not directed at the speaker but at the interlocutor and act as a type of speech act or “interactional move” (Kendon 1995: 274). They are used co-expressively with a verbal unit, but quite often they are also detached from speech. These variants can stand alone and substitute for directive speech acts. The brushing aside gesture used as a performative, for instance, might adopt the function of a whole turn and brush aside the communicative activity of the interlocutor, requesting him or her to finish the turn or action (Teßendorf this volume). It can thus be concluded that in most usages recurrent gestures co-occur with speech, conveying semantic or pragmatic information. Gestures with referential function are closely related to the proposition of the spoken utterance and interact with a verbal lexical unit. Pragmatic variants work on a meta-communicative level of speech. Some variants of recurrent gestures may be used in speech pauses or may be fully detached from speech, working as gestural performative acts, which is why they have sometimes also been considered emblems or “quotable forms” (Kendon 1995; see also section 6).


4.2. Semiotic and linguistic properties

As outlined above, the notion of recurrent gestures is based on the observation that gestures can build stable units of form and meaning. It is not grounded in functional properties of gestures, as is the case in many other classifications. Accordingly, when examining the semiotic characteristics of recurrent gestures, one has to be aware of their conventional character. For the analytical process of reconstructing their production and understanding, this means that a producer or recipient does not pass through all the semiotic processes and stages that were involved in the genesis of a recurrent gesture, at least in the case of gestural variants that show a higher degree of conventionalization (see section 6). Accordingly, the semiotic paths taken when producing or receiving a recurrent gesture have a different point of departure than the paths to be considered when, as an analyst, one examines the origin of a gesture or the motivation of a gestural form (see also Müller 2010b). These different aspects will be elucidated briefly in the following.

4.2.1. Motivation of form

In most cases of recurrent gestures, the motivation of the gestural sign concerns the path taken from a mundane (instrumental) action to a stable form–meaning unit that no longer involves the manipulation of concrete objects. Aspects of an action scheme are thereby mapped onto the structure of a communicative action. “However, the motion patterns of these everyday actions are modulated significantly, they are abstracted from the actions in the real world […]” (Müller Ms). Similar to the differences observable in the relation of gesture and speech, different levels of abstraction can be traced for the different gestural variants of a recurrent gesture. Gestural variants serving a referential function are the ones closest to the everyday action they originate from. They may depict actions manipulating concrete or abstract entities. The action and/or object depicted are referred to in the direct speech environment of the gesture. In the case of the brushing aside gesture, for instance, a speaker acts as if brushing small annoying objects aside but may refer to brushing an ethnic group out of a city (example taken from Teßendorf this volume). In variants with pragmatic function, the action scheme is transferred to the domain of communication or interaction. As described above, these variants operate on the speaker’s own speech or on the communicative behavior of others. The gesture does not depict an action being referred to in speech but conducts an action on a meta-communicative level. As such, these variants are more abstract than their referential counterparts. The brushing aside gesture with pragmatic functions, for instance, may brush aside arguments or other discursive objects, or even the interlocutor (Payrató and Teßendorf this volume; Teßendorf this volume).
In all the aforementioned examples of the brushing aside gesture, the action scheme of removing annoying objects from the speaker’s personal space underlies the different gestural variants. However, different aspects of the action scheme are highlighted in these variants, which may be a) the cause of an action, i.e. the annoying objects, or b) the effect of an action, i.e. the removal of annoying objects. Metonymic processes are involved in inferring the whole underlying action of a recurrent gesture (“internal and external metonymy”, see Mittelberg 2006, 2010; Mittelberg and Waugh 2009). Metaphoric processes are responsible for extending the action scheme to the abstract domain of communication or interaction. In cases in which gestural variants contribute to the proposition of an utterance, these cognitive-semiotic processes are still active. In more conventionalized variants (see below), these processes have become entrenched to a certain degree.

4.2.2. Linguistic properties

In the following section, principles of meaning creation in recurrent gestures are discussed. The focus is on the “simultaneous (variation of formational features and gesture families) and linear structures (combinations within gesture units) of gesture forms”, which have been defined as aspects of a “grammar of gesture” (Müller, Bressem, and Ladewig volume 1: 707).

4.2.2.1. Simultaneous structures

In the field of gesture studies, two different views on how meaning is created in gestures have been put forward. One approach favors the idea that gestures are holistic or “global” in nature, i.e., the features of a gesture are determined by the meaning of the whole (e.g., McNeill 1992, 2005; McNeill and Duncan 2000; Parrill 2008; Parrill and Sweetser 2002, 2004). A second approach introduced the notion of compositionality of gestural forms, proposing that gestural meanings are composed of isolated features (Calbris 2003, 2011; Kendon 2004b; Webb 1996). (For a discussion see Kendon 2008.) The idea of decomposing gestures into their meaningful segments was particularly advanced by studies on recurrent gestures. By contrasting gestural forms, relevant form features and their meanings, as well as their distribution over particular contexts of use, were determined. Selected studies incorporated the four parameters “hand shape”, “orientation of the palm”, “movement”, and “position in gesture space”, introduced for the notation of sign language (Battison 1974; Stokoe 1960, 1972), as a notational grid for discovering structures in gestures. (For a discussion see Ladewig and Bressem 2013.) Variation of form and meaning has been investigated since Quintilian. Many authors dealing with the education of the orator were exact observers of gestures in general and of recurrent gestures in particular, and offered lists of gestural movements, hand shapes, and their positions in gesture space correlating with different meanings (e.g., Bacon 1884; Mosher 1916; Ott 1902; Potter 1871; Quintilian 1969), lists which are continuously being extended. For many recurrent gestures, the configuration of the hand was identified as the formational core. The orientation of the palm, the movement of the hand, or the position in gesture space were often observed to form variants of a recurrent gesture.
The open hand prone gesture with a downward orientation is, for instance, used by the speaker to indicate the interruption of some line of action of which the speaker is not the author (Kendon 2004b; see also Calbris 2003; Harrison 2009a, 2009b). The grappolo or “finger bunch”, held upwards and oscillating on a vertical or horizontal plane towards the speaker, is deployed to ask a question or demand an explanation (Kendon 2004b). The “palm up open hand” gesture used with a downward movement serves the function of listing offered arguments (Müller 2004: 252). In the analysis of the cyclic gesture, the position in gesture space was systematically taken into account. This gesture was found to be used in the central gesture space when it depicts the speaker’s communicative activity of searching for a word or concept. In the right peripheral gesture space it often refers to abstract ongoing processes or, combined with a large movement size, it serves


VIII. Gesture and language

the function of a request (Ladewig 2010, 2011, this volume b). For some variants of recurrent gestures, the direction and the size of movement were found to add a deictic dimension. These formational features were characteristic of cyclic gestures used as “interactive gestures” (e.g., Bavelas et al. 1992). Recurrent gestures showing these variations in form are addressee-oriented and fulfill a performative function, attempting to regulate the behavior of others (Ladewig 2010; Teßendorf this volume).

The systematic distribution of form parameters over different contexts of use, correlating with different meanings and functions, gave rise to the idea of “gesture families”, i.e., the emergence of “structural islands” within gestures (Müller, Bressem, and Ladewig volume 1: 727). Each family shares a “distinct set of kinesic features but each is also distinct in its semantic themes. The forms within these families, distinguished as they are kinesically, also tend to differ semantically although, within a given family, all forms share in a common semantic theme” (Kendon 2004b: 227). Accordingly, what seems to be crucial for considering a recurrent gesture a gesture family is variation: recurrent gestures can only be referred to as a family if they show variation of form, meaning, and context. The variants of a recurrent gesture are defined as members of a gesture family. Gesture families that have been identified include the “G-family” (grappolo), the “R-family” (“ring gesture”), the family of the “Open Hand Prone” gesture (Kendon 2004b), the family of the “Palm Up Open Hand” gesture (Müller 2004), the family of the “cyclic gesture” (Ladewig 2007, 2011), and the family of “Away gestures” (Bressem and Müller this volume a). The notion of a gesture family is useful for grouping the different variants of a recurrent gesture and revealing that they belong to the same core.
The notion of a gesture family was also linked to the cognitive-linguistic concept of an “idealized cognitive model” (ICM) (Lakoff 1987) in order to account for cognitive processes that might be involved in the meaning-making of recurrent gestures. It was argued that the variants of a recurrent gesture, or the members of a gesture family, are related to each other not only by way of their form, their meaning, and their underlying action or motion, but also on a conceptual level. Cognitive models account for cognitive processes in meaning creation and understanding. As they are shared cultural models providing the basis for mutual understanding, this notion also accounts for the fact that gesture families are known in a cultural community and are, as such, used recurrently by its members (Ladewig 2011, this volume b).

4.2.2.2 Linear structures

When taking linear structures of gestures into account, one can (a) refer to the internal structure of gestures, that is, gesture phases, and to the formation of higher-level units, that is, “gesture phrases” and “gesture units” (Kendon 1980, 2004b), or one can (b) relate to the combination of gestures and the formation of gesture sequences (Müller, Bressem, and Ladewig volume 1). In the following section, the combination of recurrent gestures will be in focus.

The linear combination of gestures has been observed particularly for recurrent gestures, but it has not yet evolved into a distinct research topic. Most examples have been given for recurrent gestures that operate on the thematic structure of an utterance, marking its topic and comment. Gesture combinations serving these functions often show an “open-to” structure. Seyfeddinipur (2004), for instance, documented the use of the “ring gesture” combined with the “pistol hand”: the ring, used to mark the topic of an utterance, is opened into the pistol hand, which then marks the comment of the utterance.

118. Recurrent gestures


Kendon (1995, 2004b) observed another interesting sequence of topic-comment marking gestures. He showed that the closing of the finger bunch hand shape, i.e., the grappolo, marks the topic of an utterance; the hand opens as the speaker gives a comment. Another combination of recurrent gestures, acting on the argumentative structure of an utterance, was documented by Teßendorf (this volume). She gives an example in which a series of palm up open hand gestures is concluded by a brushing-aside gesture: the palm up open hands present a range of reasons supporting a particular argument, which is then brushed aside by the speaker. Similarly, the cyclic gesture (Ladewig 2010, 2011) can be combined with the palm up open hand. When deployed in word or concept searches, the communicative activity of searching is embodied by the cyclic gesture, followed by a palm up open hand on which the word or concept searched for is presented. The reverse pattern is found in cases in which the cyclic gesture is used in requests: an idea or a notion is first presented on the open palm and is then supposed to be elaborated by an interlocutor. The cyclic gesture is used here to perform a demand.

The examples given so far involve single utterances or short sequences of utterances. However, it was also found that recurrent gestures combine in longer series, spanning larger verbal units. Bressem (2012) documented the repetitive use of recurrent gestures with a “prosodic function”, embodied by an up-and-down movement or an enlarged movement size. In her examples, recurrent gestures such as the ring or the index finger are combined and encompass long series of utterances, marking focal points of the speaker’s argumentation.

The identification of simultaneous structures and distinctive form features accords with notions of compositionality as well as of contrastivity of gestural meaning (Calbris 1990, 2011; Sparhawk 1978).
These notions run contrary to the idea that gestures are holistic or global in nature, i.e., that the features of a gesture are determined by the meaning of the whole. The findings of systematic correlations of gestural form and meaning in different contexts of use allow for the conclusion of a “rudimentary morphology” (Müller 2004: 254), observable at least in recurrent gestures. Likewise, the linear combination of gestures argues against the assumption of the nonlinearity of gestures (McNeill 1992, 2005). As was found for recurrent but also for iconic or metaphoric gestures, gestures can occur in sequences in which they are related not only to their concomitant speech but also to their gestural neighbors. Thus, the findings show that gestures in general and recurrent gestures in particular are closer to language than has been assumed so far.

These systematic documentations of the nature of gesture forms, their motivation, their simultaneous and linear structures is what we term a ‘grammar’ of gestures. […] Analyzing both the simultaneous and the linear forms of gestures reveals embodied forms and principles of emerging structures that may shed light on processes of grammaticalization in signed languages […]. (Müller, Bressem, and Ladewig volume 1: 727)

5. Recurrent gestures on their way to language

Scholars of gesture and sign language have observed that gestures may enter a fully-fledged linguistic system as discourse markers or even as lexical or grammatical morphemes. Single form parameters can be isolated, semanticized, and even take over grammatical functions. Such form parameters often constitute the core of recurrent gestures, like, for


instance, the hand configuration or the movement of the hand (Bressem 2012; Fricke 2012; Ladewig 2010; Ladewig and Bressem 2013; Pfau and Steinbach 2006; Wilcox 2004, 2005). The palm up open hand, for instance, used in spoken discourse to express agreement or to seek agreement for the discursive objects presented on the open hand, has been observed in many sign languages, such as American Sign Language (Conlin, Hagstrom, and Neidle 2003), Danish Sign Language (Engberg-Pedersen 2002), and Turkish Sign Language (Zeshan 2006). (For an overview see van Loon, Pfau, and Steinbach this volume; van Loon 2012.) Since functional similarities between gestures and signs were observed in this case, it seems plausible that the palm up open hand used as a grammatical marker in sign languages has developed from the recurrent palm up open hand gesture deployed in spoken discourse (see, e.g., Pfau and Steinbach 2006).

In order to explain such grammaticalization processes from gesture to sign, Wilcox (2004, 2007) suggested two paths: (i) from gesture to a grammatical morpheme via a lexical morpheme, and (ii) from gesture to a grammatical morpheme via a marker of intonation/prosody. Regarding the first route, several examples were given for modal verbs in American Sign Language (ASL). As sign linguists have shown (Janzen and Shaffer 2002; Wilcox 2004; Wilcox and Shaffer 2006; Wilcox and Wilcox 1995), the ASL marker of possibility ‘can’ has developed from a lexical sign meaning ‘strong’, which has itself emerged from a gesture. In this case, the two fists are moved downwards in the central signing space. Wilcox (2004) concludes that the old ASL sign for ‘strong’ may have developed from an “improvised gesture” enacting upper body strength.
Interestingly, he notes that these gestures might have been recurrent gestures: “Although I am calling these ‘improvised’ gestures, I do not mean to suggest that they do not also become standardized, although apparently not to the extent that they become quotable gestures” (Wilcox 2004: 70). In fact, the fist being rapidly moved downwards or vertically away from the speaker’s body has been documented as a recurrent gesture in the repertoire of German speakers. Among other functions, it is used “to put emphasis on the parts of the utterance by directing the listener’s attention and signals emotional involvement and insistence” (Müller, Bressem, and Ladewig volume 1: 721). The formational core of the fist being moved in different directions comes with the semantic core of strength, which has been reconstructed for contexts of descriptions, requests, and emotional involvement (Arnecke 2011).

Likewise, sources in recurrent gestures can be identified for the second route of grammaticalization. The cyclic gesture, for instance, which is characterized by a continuous clockwise rotational movement, comes with the semantic core of cyclic continuity. Three different context variants could be identified, namely descriptions, word/concept searches, and requests, which vary in their positions in gesture space and in movement size. What is of particular interest with respect to these different variants is that a similar phenomenon has been described for Italian Sign Language (LIS) (Wilcox 2004, 2005, 2007; Wilcox, Rossini, and Pizzuto 2010). The sign IMPOSSIBLE in LIS, which is also performed with a circular movement, varies according to size of motion and position in gesture space, thereby “indicating various degrees of impossibility” (Wilcox 2004: 60). Wilcox, Rossini, and Pizzuto (2010: 353) argue that both form variations are “analogous to


prosodic stress”. In the case of modal verbs in Italian Sign Language, however, both form features have achieved a grammatical status, marking morphological alternations of strong and weak forms. Accordingly, different grammaticalization stages can be observed for movement size and position in gesture space, which might have their origin in gestural expressions of these parameters. Moreover, Klima and Bellugi (1979) argue that manner of movement is used to mark verb aspect in American Sign Language or Italian Sign Language. A continuous rotational movement can, among other things, mark durativity or continuation of events (Klima and Bellugi 1979: 293; see also Wilcox 2004: 63). In view of these findings, it can be argued that the core of the cyclic gesture has developed into a marker of aspect in sign languages. Which of the two routes this particular gesture may have followed, however, awaits further detailed investigation.

By and large, it becomes clear that recurrent gestures can be the point of departure in grammaticalization processes. Their formational and semantic cores can develop into grammatical markers in sign languages. However, as was shown for the cyclic gesture, it might also be the case that form parameters constituting gestural variants of a recurrent gesture enter a linguistic system. This is possible because such additional parameters become free to take over additional functions once a gestural form-meaning unit has emerged. What can be observed in general for gestures on their way to becoming language is that “[s]emantic information inherent in the gestural modality can be isolated and become entrenched under certain communicative circumstances” (Ladewig and Bressem Ms).

6. The question of a demarcation between recurrent gestures and other gesture types

On a continuum capturing processes of conventionalization in manual semiotic signs, iconic and metaphoric (singular) gestures mark the starting point and sign language marks the endpoint. Recurrent and emblematic gestures occupy the space between these points (see Fig. 118.1). However, assigning recurrent gestures a place on the continuum is no easy task, since recurrent gestures themselves show variants exhibiting different degrees of conventionalization (Kendon 1995; Ladewig 2010, 2011; Neumann 2004; Seyfeddinipur 2004). Variants of recurrent gestures depicting aspects of concrete or abstract entities or events show a referential function and are least conventionalized (Brookes 2001, 2005; Ladewig 2010, 2011; Teßendorf this volume). Variants used with a speech-replacing function are considered most conventionalized and have been categorized as emblems (Kendon 1995; Neumann 2004; Seyfeddinipur 2004). Conventionalization here refers both to the formation of a stable form-meaning unit and to the usage of the gesture in a particular communicative context. Very often it can be observed that the more restricted a gesture is in its fields of use, the more conventionalized it is in its form.

The cyclic gesture, for instance, used with a referential function can depict all kinds of continuous events or actions, although it is mostly used to refer to abstract things. It can span single words but also phrases and even sentences. For the variant used in a word/concept search, only four possible sequential positions with respect to disfluency markers and stages of a word/concept search were identified. Both gestural forms were found to be used in particular positions in gesture space, namely in the right periphery in the former and in the central gesture space in the latter variant.
The third variant, used with a performative function (the cyclic gesture in requests), is most restricted in its possibilities of application and most conventionalized in form. This variant can only be used to ask the interlocutor to continue a (communicative) activity, and it is performed in the peripheral gesture space with a larger movement size. Furthermore, it is the only variant that can substitute for speech (cf. Kendon 1988, 1995). (For further detail see Ladewig this volume b.)

From a diachronic perspective, the different degrees of conventionalization observable in one recurrent gesture or one gesture family reflect different stages of conventionalization, from non-conventionalized to conventionalized.

Variation in the range of meanings and speech act functions these gestures fulfil suggests that quotable gestures may begin as spontaneous depictions that are used to fulfil immediate communicative needs. As they are found to fulfil important practical and then social functions offering opportunities to express important conditions and social relations, the meanings and functions of these gestures expand. (Brookes 2001: 182)

In view of these facts, it is proposed to regard a taxonomy of gestures in terms of dimensions rather than in terms of categories (see also McNeill 2005; see Fig. 118.1).

Fig. 118.1: Dimension of gesture types

Accordingly, a more flexible transition from singular gestures to recurrent gestures and from recurrent gestures to emblems should be considered, as recurrent gestures show variants exhibiting properties of their adjacent gesture types. By and large, recurrent gestures provide an interesting field for the investigation of semantization and grammaticalization processes in gestures and thus give insights into the emergence of signed languages. Moreover, identifying and setting up culturally shared repertoires of recurrent gestures, as has been done for the German speech community (Bressem and Müller this volume b), offers the chance of opening up a multimodal perspective on cross-cultural and cross-linguistic studies.

7. References

Arnecke, Melissa 2011. Die Faust. Untersuchung einer rekurrenten Geste. Unpublished manuscript.
Bacon, Albert M. 1884. A Manual of Gesture: Embracing a Complete System of Notation, Together with the Principles of Interpretation and Selections for Practice. Chicago: S. C. Griggs.
Battison, Robin 1974. Phonological deletion in American Sign Language. Sign Language Studies 5(1): 1–19.
Bavelas, Janet Beavin, Nicole Chovil, Linda Coates and Lori Roe 1995. Gestures specialized for dialogue. Personality and Social Psychology Bulletin 21(4): 394–405.


Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15(4): 469–489.
Bressem, Jana 2012. Repetitions in gestures: Structures and cognitive aspects. Ph.D. dissertation, European University Viadrina, Frankfurt (Oder).
Bressem, Jana volume 1. A linguistic perspective on the notation of form features in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1079–1098. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana this volume. Repetitions in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1641–1650. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume a. The family of AWAY gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1592–1605. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume b. A repertoire of recurrent gestures of German with pragmatic functions. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1575–1592. Berlin/Boston: De Gruyter Mouton.
Brookes, Heather 2001. O clever ‘He’s streetwise’. When gestures become quotable. Gesture 1(2): 167–184.
Brookes, Heather 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Brookes, Heather 2005. What gestures do: Some communicative functions of quotable gestures in conversations among Black urban South Africans. Journal of Pragmatics 37: 2044–2085.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Calbris, Geneviève 2003. From cutting an object to a clear cut analysis. Gesture as the representation of a preconceptual schema linking concrete actions to abstract notions. Gesture 3(1): 19–46.
Calbris, Geneviève 2011. Elements of Meaning in Gesture. Amsterdam: John Benjamins.
Conlin, Frances, Paul Hagstrom and Carol Neidle 2003. A particle of indefiniteness in American Sign Language. Linguistic Discovery 2(1): 1–21.
Engberg-Pedersen, Elisabeth 2002. Gestures in signing: The presentation gesture in Danish Sign Language. In: Rolf Schulmeister and Heimo Reinitzer (eds.), Progress in Sign Language Research: In Honor of Siegmund Prillwitz, 143–162. Washington, DC: Gallaudet University Press.
Fricke, Ellen 2012. Grammatik Multimodal: Wie Wörter und Gesten zusammenwirken. Berlin: Mouton de Gruyter.
Harrison, Simon 2009a. The expression of negation through grammar and gesture. In: Jordan Zlatev, Mats Andrén, Marlene Johansson Falck and Carita Lundmark (eds.), Studies in Language and Cognition, 405–409. Cambridge: Cambridge Scholars Publishing.
Harrison, Simon 2009b. Grammar, gesture, and cognition: The case of negation in English. Ph.D. dissertation, Université Michel de Montaigne, Bordeaux 3.
Harrison, Simon 2010. Evidence for node and scope of negation in coverbal gesture. Gesture 10(1): 29–51.
Janzen, Terry and Barbara Shaffer 2002. Gesture as the substrate in the process of ASL grammaticization. In: Richard Meier, David Quinto-Pozos and Kearsy Cormier (eds.), Modality and Structure in Signed and Spoken Languages, 199–223. Cambridge: Cambridge University Press.
Johnson, Mark 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. Chicago, IL: University of Chicago Press.


Kendon, Adam 1980. Gesticulation, speech, and the gesture theory of language origins. Sign Language Studies 9: 349–373.
Kendon, Adam 1988. How gestures can become like words. In: Fernando Poyatos (ed.), Crosscultural Perspectives in Nonverbal Communication, 131–141. Toronto: C. J. Hogrefe.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279.
Kendon, Adam 2004a. Contrasts in gesticulation: A Neapolitan and a British speaker compared. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 173–193. Berlin: Weidler.
Kendon, Adam 2004b. Gesture. Visible Action as Utterance. Cambridge: Cambridge University Press.
Kendon, Adam 2008. Language’s matrix. Gesture 9(3): 355–372.
Klima, Edward S. and Ursula Bellugi 1979. The Signs of Language. Cambridge, MA: Harvard University Press.
Ladewig, Silva H. 2007. The family of the cyclic gesture and its variants – systematic variation of form and contexts. Unpublished manuscript, European University Frankfurt (Oder). http://www.silvaladewig.de/publications/papers/Ladewig-cyclic_gesture.pdf, accessed June 2013.
Ladewig, Silva H. 2010. Beschreiben, suchen und auffordern – Varianten einer rekurrenten Geste. Sprache und Literatur 41(1): 89–111.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6. http://cognitextes.revues.org/406.
Ladewig, Silva H. 2012. Syntactic and semantic integration of gestures into speech: Structural, cognitive, and conceptual aspects. Ph.D. dissertation, European University Viadrina, Frankfurt (Oder).
Ladewig, Silva H. this volume a. Creating multimodal utterances: The linear integration of gestures into speech. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1622–1677. Berlin/Boston: De Gruyter Mouton.
Ladewig, Silva H. this volume b. The cyclic gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1605–1618. Berlin/Boston: De Gruyter Mouton.
Ladewig, Silva H. and Jana Bressem 2013. New insights into the medium hand – discovering structures in gestures based on the four parameters of sign language. Semiotica 197: 203–231.
Ladewig, Silva H. and Jana Bressem Ms. Looking for nouns and verbs in gestures – empirical grounding of a theoretical question.
Ladewig, Silva H., Cornelia Müller and Sedinha Teßendorf 2010. Singular gestures: Forms, meanings and conceptualizations. 4th conference of the International Society of Gesture Studies, Frankfurt (Oder), Germany.
Lakoff, George 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press.
McNeill, David 1992. Hand and Mind. What Gestures Reveal About Thought. Chicago: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
McNeill, David volume 1. The co-evolution of gesture and speech, and downstream consequences. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 480–512. Berlin/Boston: De Gruyter Mouton.
McNeill, David and Susan D. Duncan 2000. Growth points in thinking-for-speaking. In: David McNeill (ed.), Language and Gesture, 141–161. Cambridge: Cambridge University Press.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. Cornell University, Ann Arbor, MI: UMI.


Mittelberg, Irene 2010. Interne und externe Metonymie: Jakobsonsche Kontiguitätsbeziehungen in redebegleitenden Gesten. Sprache und Literatur 41(1): 112–143.
Mittelberg, Irene and Linda R. Waugh 2009. Metonymy first, metaphor second: A cognitive-semiotic approach to multimodal figures of thought in co-speech gesture. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 329–356. Berlin: Mouton de Gruyter.
Mosher, Joseph A. 1916. The Essentials of Effective Gesture for Students of Public Speaking. New York: The Macmillan Company.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2004. Forms and uses of the Palm Up Open Hand. A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 234–256. Berlin: Weidler.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Routledge’s Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Müller, Cornelia 2010a. Mimesis und Gestik. In: Gertrud Koch, Martin Vöhler and Christiane Voss (eds.), Die Mimesis und ihre Künste, 149–187. Paderborn/München: Fink.
Müller, Cornelia 2010b. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia Ms. How gestures mean. The construal of meaning in gestures with speech.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gesture: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 707–733. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Silva H. Ladewig and Jana Bressem volume 1. Gestures and speech from a linguistic perspective: A new field and its history. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 55–81. Berlin/Boston: De Gruyter Mouton.
Neumann, Ragnhild 2004. The conventionalization of the ring gesture in German discourse. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 217–223. Berlin: Weidler.
Ott, Edward Amherst 1902. How to Gesture. New York: Hinds and Noble.
Parrill, Fey 2008. Form, meaning, and convention: A comparison of a metaphoric gesture with an emblem. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 195–217. Amsterdam: John Benjamins.
Parrill, Fey and Eve Sweetser 2002. Representing meaning: Morphemic level analysis with a holistic approach to gesture transcription. Paper presented at the First Congress of the International Society of Gesture Studies, The University of Texas, Austin.
Parrill, Fey and Eve Sweetser 2004. What we mean by meaning: Conceptual integration in gesture analysis and transcription. Gesture 4(2): 197–219.
Payrató, Lluís and Sedinha Teßendorf this volume. Pragmatic gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1531–1540. Berlin/Boston: De Gruyter Mouton.
Pfau, Roland and Markus Steinbach 2006. Modality-Independent and Modality-Specific Aspects of Grammaticalization in Sign Language. Potsdam: Universitätsverlag.
Poggi, Isabella 1983. La mano a borsa: analisi semantica di un gesto emblematico olofrastico. In: Grazia Attili and Pio E. Ricci Bitti (eds.), Comunicare Senza Parole, 219–238. Roma: Bulzoni.
Potter, H.L.D. 1871. Manual of Reading, in 4 Pts: Orthophony, Class Methods, Gesture, and Elocution. New York: Harper and Brothers Publishers.


Quintilian, Marcus Fabius 1969. The Institutio Oratoria of Quintilian. Translated by Harold E. Butler. The Loeb Classical Library. New York: G. P. Putnam and Sons.
Scheflen, Albert E. 1973. How Behavior Means. New York: Gordon and Breach.
Seyfeddinipur, Mandana 2004. Meta-discursive gestures from Iran: Some uses of the ‘Pistol Hand’. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 205–216. Berlin: Weidler.
Sherzer, Joel 1991. The Brazilian thumbs-up gesture. Journal of Linguistic Anthropology 1(2): 189–197.
Sparhawk, Carol 1978. Contrastive-identificational features of Persian gesture. Semiotica 24(1/2): 49–86.
Stokoe, William C. 1960. Sign Language Structure. Buffalo, NY: Buffalo University Press.
Stokoe, William C. 1972. Classification and description of sign languages. Current Trends in Linguistics 12(1): 345–371.
Streeck, Jürgen 1993. Gesture as communication I: Its coordination with gaze and speech. Communication Monographs 60(4): 275–299.
Streeck, Jürgen 2005. Pragmatic aspects of gesture. In: Jacob Mey (ed.), International Encyclopedia of Languages and Linguistics, 71–76. Oxford: Elsevier.
Streeck, Jürgen 2009. Gesturecraft. The Manu-facture of Meaning. Amsterdam/Philadelphia: John Benjamins.
Teßendorf, Sedinha this volume. Pragmatic and metaphoric – combining functional with cognitive approaches in the analysis of the “brushing aside gesture”. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1540–1558. Berlin/Boston: De Gruyter Mouton.
van Loon, Esther 2012. What’s in the palm of your hands? Discourse functions of PALM-UP in Sign Language of the Netherlands. Unpublished MA thesis, University of Amsterdam, Amsterdam.
van Loon, Esther, Roland Pfau and Markus Steinbach this volume. The Grammaticalization of Gestures in Sign Languages. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2131–2147. Berlin/Boston: De Gruyter Mouton.
Webb, Rebecca 1996. Linguistic features of metaphoric gestures. Unpublished PhD thesis, University of Rochester, New York.
Wilcox, Sherman 2004. Gesture and language. Gesture 4(1): 43–73.
Wilcox, Sherman 2005. Routes from gesture to language. Revista da ABRALIN – Associação Brasileira de Lingüística 4(1–2): 11–45.
Wilcox, Sherman 2007. Routes from gesture to language. In: Elena Pizzuto, Paola Pietrandrea and Raffaele Simone (eds.), Verbal and Signed Languages: Comparing Structures, Constructs and Methodologies, 107–131. Berlin/New York: Walter de Gruyter.
Wilcox, Sherman E., Paolo Rossini and Elena Antinoro Pizzuto 2010. Grammaticalization in sign languages. In: Diane Brentari (ed.), Sign Languages, 332–354. Cambridge: Cambridge University Press.
Wilcox, Sherman E. and Barbara Shaffer 2006. Modality in American Sign Language. In: William Frawley, Erin Eschenroeder, Sarah Mills and Thao Nguyen (eds.), The Expression of Modality, 207–237. Berlin/New York: Mouton De Gruyter.
Wilcox, Sherman and Phyllis Wilcox 1995. The gestural expression of modality in ASL. Modality in Grammar and Discourse: 135–162.
Zeshan, Ulrike 2006. Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press.

Silva H. Ladewig, Frankfurt (Oder) (Germany)

119. A repertoire of German recurrent gestures with pragmatic functions


1. Introduction
2. Setting up a repertoire
3. The repertoire: Sixteen recurrent gestures with pragmatic functions
4. Discussion
5. References

Abstract

The chapter presents a repertoire of recurrent gestures of German. It briefly discusses the data on which the repertoire is based and the steps in identifying and analyzing members of the repertoire, before presenting the gestures in detail. Particular focus is put on the semantic themes of the gestures as well as on possible semantic relations between members of the repertoire. Moreover, the chapter discusses the illocutionary acts and pragmatic functions identified for the gestures. It is shown that the same gesture can have several different illocutionary values simultaneously and that the respective gestures highlight the impact and consequences of the illocutionary acts in different ways. In conclusion, the chapter presents structural relations between members of the repertoire.

1. Introduction

Emblems or quotable gestures (Kendon 1983, 1992) are conventional gestures that have a stable form-meaning relationship and can be translated into spoken words, phrases, or full sentences (mostly constituting a full speech act on their own, Müller 2010). They can be used as a substitute for speech and are easily understood by speakers of particular cultural or social groups. Over the past years, a range of repertoires of emblems in various languages has been published. With a focus on Europe and a few studies on Asia and America, mono-cultural and cross-cultural repertoires were set up, providing fundamental insights into the range, use, and distribution of emblems (for an overview of existing studies, see Brookes 2004; Kendon 1992; Payrató 1993, this volume; Payrató and Teßendorf this volume; Teßendorf volume 1). A classical reference work on the cross-cultural comparison of emblems remains the work of Desmond Morris and his colleagues (Morris 1977; Morris et al. 1979). Cross-cultural studies indicate that emblems appear to center on particular semantic domains and that they tend to cluster around specific contexts-of-use. Kendon thus points out that emblems are used for "interpersonal control (gestures with meanings such as 'stop!', 'be quiet!', 'I'm warning you!'), announcement of one's current state or condition ('I'm amazed!', 'I'm broke!', 'I'm hungry'), and evaluative descriptions of the actions or appearances of another ('He's crazy', 'pretty girl!', 'He's dangerous')" (Kendon 2004b: 339).

Furthermore, emblems show different degrees of conventionalization (see, e.g., Brookes 2004; Payrató 1993; Poggi 2002), indicating that "emblems are a category of gestures
with fuzzy edges and conventionality is not an exclusive characteristic of them" (Teßendorf volume 1: 93). Recent investigations have shown that a type of conventional co-speech gesture can be identified which shows structural and functional similarities with emblems. We have characterized this type of gesture as "recurrent, since it is used repeatedly in different contexts and its formational and semantic core remains stable across different contexts and speakers" (Ladewig 2011; see also Müller 2010). Depending on their context-of-use, recurrent gestures show differences in form, which correlate with variants of meaning and function (Ladewig 2010, 2011, this volume a, b; Müller 2004, 2010; Müller and Speckmann 2002; Neumann 2004; Seyfeddinipur 2004; Teßendorf this volume). (For further work along similar lines, see Bavelas et al. 1992; Brookes 2004, 2005; Calbris 1990, 2003; Fricke 2010, this volume; Harrison 2009; Kendon 1995, 2004b.) By clustering around a shared and "distinct set of kinesic features" that goes along with a "common semantic theme", recurrent gestures may build so-called gesture families. Gesture families are "groupings of gestural expressions that have in common one or more kinesic or formational characteristics" and "share in a common semantic theme" (Kendon 2004b: 227) (see also Bressem and Müller this volume; Fricke, Bressem, and Müller this volume). Recurrent gestures often take over pragmatic functions, perform communicative actions, and fulfill meta-communicative functions. While recurrent gestures may also take over referential functions (see Ladewig this volume b), the repertoire to be discussed in the present paper consists of recurrent gestures with pragmatic functions. As such they either "display the communicative act of the speaker and act upon speech as 'speech-performatives'" or "aim at a regulation of the behavior of others" and thus act as 'performatives' (Teßendorf this volume).
(See Ladewig this volume b for a detailed discussion of recurrent gestures.) In any case, recurrent gestures either operate upon or are part of the verbal utterance. While emblems are apt to replace speech completely, recurrent gestures form part of a multimodal utterance meaning; they are conventionalized co-speech gestures. Obviously, the boundaries between the two categories are fluid (the ring gesture, for instance, can be an emblem or a recurrent gesture, depending on its contexts-of-use and on its specific formational characteristics). The fixed form-meaning relation that holds stable across a wide range of communicative contexts, along with their primarily pragmatic function, makes it plausible to assume that recurrent gestures undergo processes of conventionalization. They appear to form a relatively closed group of gestures (Müller 2010) and thus form repertoires characteristic of a particular socio-cultural community. As Kendon has pointed out, it seems likely that pragmatic gestures appear in a limited set, mirroring a limited number of pragmatic functions (Kendon 1995). This chapter presents a data-driven account of such a repertoire of recurrent gestures with pragmatic functions for speakers of German. The repertoire has been identified by applying a form-based perspective, which regards gestures as motivated signs and considers a close analysis of their form as the point of departure for reconstructing their meaning (for more detail, see Müller, Bressem, and Ladewig volume 1). The chapter presents the individual gestures included therein; it describes their forms, their semantic themes and functions, as well as structural and functional overlaps between gestures of the repertoire. In doing so, recurrent gestures are not treated as part of a repertoire of emblematic gestures (see, e.g., Brookes 2004) but are made the sole topic of a repertoire of conventionalized co-speech gestures.


2. Setting up a repertoire

2.1. Data basis

The repertoire of recurrent gestures of German is based on the analysis of a set of 24 hours of video data including a variety of discourse types and different conversational settings. The corpus consists of face-to-face interactions (Müller 1998), discussions about political as well as non-political topics, academic lectures, parliamentary debates, data from a German TV game show ("Genial Daneben"), as well as some experimental data (Müller et al. 2009). The corpus was deliberately designed to include a wide spectrum of different discourse types in order to identify recurrent gestures in various contexts-of-use and to prevent possible misrepresentations of recurrent gestures caused by a narrow set of data.

2.2. Identifying recurrent gestures with pragmatic functions

In order to identify possible candidates for a repertoire of recurrent gestures with pragmatic functions, the video data set was approached from a bottom-up, inductive perspective. Recurrent gestures were identified in a twofold selection process. First, a trained gesture researcher sifted through half of the video data. Based on the researcher's own communicative competence in German, all recurring gestural forms were noted, and first hypotheses concerning the meaning of the forms (the motivation of the form), their contexts-of-use, and their possible functions were formulated. Since we concentrated on a repertoire of gestures with pragmatic functions, gestural forms with deictic function as well as gestural forms with concrete referential function were excluded. This first step in the identification of recurrent gestures was used to gain a rather general impression of the types of recurrent gestures used and their frequency and distribution. Based on this pre-identification, a list of recurrent gestures was put together, setting up the grounds for the second step in the selection. In this second step, a second gesture researcher annotated all tokens of recurrent gestures based on the list arrived at in the first step. The pre-defined list was not exclusive: it could be expanded by the second gesture researcher in order to include further frequent gestural forms that had been overlooked in the first round of analysis. At the end of the second step, all recurring gestural forms had been identified in the whole data set. The final list comprised 16 recurring gestural forms, which were assumed to be candidates for recurrent gestures (see Tab. 119.1) and which were analyzed in detail in a third step of the analysis (see section 2.3).

2.3. Methods of gesture analysis

The detailed analysis of the recurring gestural forms was approached from a linguistic perspective, in which gestural forms are regarded as motivated meaningful wholes: every aspect of a gesture's form is treated as potentially meaningful and, accordingly, changes in form features are regarded as potentially meaningful as well. In recurrent gestures, form features are not random but, by definition, recur across speakers and contexts while sharing stable meanings. A gesture is "termed recurrent, since it is used repeatedly in different contexts and its formational and semantic core remains stable across different contexts and speakers" (Ladewig 2011; see also Bressem and Müller this volume; Müller 2004, 2010).


Methodologically, the analysis of the recurrent gestures was based on the Methods of Gesture Analysis, a form-based method to systematically reconstruct the meaning of gestures (Bressem, Ladewig, and Müller volume 1; Müller 2010; Müller, Bressem, and Ladewig volume 1; Müller, Ladewig, and Bressem volume 1). The method addresses fundamental properties of gestural meaning creation and basic principles of gestural meaning construction by distinguishing four main building blocks: form; sequential structure of gestures in relation to speech and other gestures; local context-of-use, i.e., gestures' relation to syntactic, semantic, and pragmatic aspects of speech; and distribution of gestures over different contexts-of-use. By assuming that the meaning of a gesture emerges out of a fine-grained interaction of a gesture's form, its sequential position, and its embedding within a context-of-use (local and distributed), a gesture's meaning is determined in a (widely) context-free analysis of its form, which grounds the later context-sensitive analysis of gestures. Based on the Methods of Gesture Analysis and the Linguistic Annotation System for Gestures (see Bressem, Ladewig, and Müller volume 1), first a detailed description and motivation of the recurrent gestural forms (modes of representation, image schemas, motor patterns, and actions) was carried out. Afterwards, the gestures were analyzed in relation to speech on a range of levels of linguistic description (prosody, syntax, semantics, and pragmatics). As a final step, the detailed analysis of the gestural forms was brought together with the analysis of the contexts-of-use in order to analyze the distribution of the recurrent gestural forms across various contexts and to account for possible characteristic form aspects in particular contexts.
In doing so, the semantic core of recurrent gestural forms was distinguished from the local meanings of the recurrent form and the meaning of its context-variants (Bressem and Müller this volume; Ladewig 2010, this volume a, b).

3. The repertoire: Sixteen recurrent gestures with pragmatic functions

In the corpus of 24 hours of video data, we found a set of 16 gestures with a recurring form and meaning and a pragmatic function (see Tab. 119.1), produced by 60 speakers (both female and male). The repertoire does not claim to be complete; however, we do assume that the gestures included in the repertoire represent the most frequent and common recurrent gestures used among speakers of German. Tab. 119.1 presents a short description of the repertoire, which is structured as follows:

(i) name of the gesture (based on its kinesic form),
(ii) description of its prototypical form (formational features, form Gestalt),
(iii) three short example utterances,
(iv) semantic core,
(v) illocutionary force (and pragmatic function).

A more detailed discussion of the members of the repertoire, which is not possible within the present chapter, would also have to include the following aspects:

(i) form variants (including further articulators, e.g., body shifts, facial expressions, gaze),


(ii) a detailed account of the range of pragmatic functions,
(iii) the number of occurrences in the data,
(iv) distribution across discourse types.

The arrangement of the gestures in the table reflects their degree of conventionalization. The respective degrees of conventionalization are determined based on the gestures' relation with speech (can they substitute for speech or not) and on the nature of their form variants (does a variant show a limited set of forms, functions, and contexts-of-use or not). The basic rationale is: the more a gesture can substitute for speech and the less it varies in form-function and in contexts-of-use, the more conventionalized it is considered to be (see Ladewig this volume b for further discussion). Section A of the table (Tab. 119.1) presents recurrent gestures which are primarily used in conjunction with speech and for which we found several form variants. Section B contains all recurrent gestures which were predominantly used in conjunction with speech but for which a speech-replacing use was also documented. Furthermore, members of section B also showed several form variants. Gestures from sections A and B build so-called gesture families, that is, groupings of gestures with shared sets of kinesic features and a common semantic theme (see Bressem and Müller this volume; Fricke, Bressem, and Müller this volume; Kendon 2004b). Section C comprises all those recurrent gestures which appear to have undergone a process of "emblematization" (Payrató 1993: 206). These gestures were used in conjunction with speech or in the absence of speech. Yet, in contrast to the gestures in section B, hardly any form variants were found, and those that were found were restricted to handedness and the concomitant use of other articulators (facial expression, gaze, body shifts). Note that for the analyses summarized in the table, we also drew upon existing research on particular gestural forms.
Thus, for the Palm Up Open Hand (PUOH) or Open Hand Supine, Palm Presentation (OHS, PP), we build upon Kendon's and Müller's research (Kendon 2004b: 264–281; Müller 2004; for more detail, see Bressem and Müller this volume). For the cyclic gesture we rely on Ladewig (2010, 2011, this volume a); for the ring gesture we refer to Morris, Kendon, and others (Kendon 2004b: 238–247; Morris 1977; Neumann 2004; Weinrich 1992; for a historical and cross-cultural survey of the ring, see Müller 1998: 36–42, this volume a). Concerning the shaking off gesture we build upon Posner's semiotic analysis (Posner 2003). Regarding the gestures which share a movement away from the body and which are associated with the notion of exclusion, we rely on a detailed discussion of them in the context of the Away family of gestures (Bressem and Müller this volume). Analyses of members of the repertoire in other languages can be found in the work of Calbris on French gestures (Calbris 1990, 2003, 2008, 2011), of Harrison on British gesturing (Harrison 2009), of Kendon on the gesticulation of Italian and British speakers (Kendon 2004a, b), of Payrató and Teßendorf on Catalan and Spanish gestures (Payrató and Teßendorf this volume; Teßendorf this volume), of Streeck on gestures of German, Japanese, Ilokano, and American-Arabic speakers (Streeck 2009), and of Webb on recurrent gestures of American speakers (Webb 1996). Our repertoire of recurrent gestures with pragmatic functions thus not only documents gestures that are frequent for German speakers but also hints at possible cross-cultural gesture forms and thus at potential overlaps of the German repertoire with those identified for other cultural and linguistic communities.

Tab. 119.1: Repertoire of German recurrent gestures

Section A

Recurrent gesture: Back and forth, loose hands
Description of prototypical form: Loose hands alternate away and towards the speaker's body.
Example utterances: (1) that we always fall back on the youth a little; (2) well there were so many lucky moments that went along with the team
Semantic core: Change, Uncertainty, Ambivalence
Illocutionary force and pragmatic function: Assertive. Used to mark several arguments and points of view on the same topic, in particular when referring to changing situations and events. (meta-communicative function)

Recurrent gesture: Back and forth, index-thumb from wrist
Description of prototypical form: Index finger and thumb are bent, palm is held laterally; tensed index and thumb – as if measuring something – are turned back and forth from the wrist.
Example utterances: (1) different pattern, but still; (2) did you win, did you lose, how do you feel
Semantic core: Change, Process, Opposition, Contrast
Illocutionary force and pragmatic function: Assertive. Used to exemplify changing events and processes, to mark oppositions of arguments, events, etc. (meta-communicative function)

Recurrent gesture: PDOH with clockwise rotation
Description of prototypical form: Lax open hand, palm downwards, repeatedly moved left and right by a clockwise rotation of the wrist.
Example utterances: (1) well I think it was 10 years ago; (2) were you feeling a bit uneasy; (3) rather like popular classic
Semantic core: Vague, Uncertain
Illocutionary force and pragmatic function: Assertive. Used to mark events, states, as well as ideas as uncertain and indeterminate. (meta- and communicative function)

Recurrent gesture: Cyclic gesture
Description of prototypical form: Continuous rotational movement, performed away from the body – as if the hand was a turning crank.
Example utterances: (1) started at a time at which you can take this step; (2) I realized (-) how tough I was
Semantic core: Cyclic continuity, Process, Duration
Illocutionary force and pragmatic function: Assertive, directive. Used in the context of word/concept searches and requests. Marks in general processes, duration, continuity, and the procedural structure of conversations. (meta- and communicative function)

Tab. 119.1: Continued

Section B

Recurrent gesture: Swaying
Description of prototypical form: The (lax) flat hands, palms facing towards the center, are alternated by rotations of the wrist.
Example utterances: (1) well a half standardized guide line; (2) she started to sway; (3) because ehm this is not easy constructionally
Semantic core: More or less, Roughly, Approximation
Illocutionary force and pragmatic function: Assertive. Used to mark events, states, as well as ideas as uncertain and indeterminate. (meta- and communicative function)

Recurrent gesture: Brushing away
Description of prototypical form: Loose hand, palm oriented towards the body, is moved away from the body with a (rapid) twist of the wrist – as if brushing away annoying crumbs.
Example utterances: (1) you worked for it a long time; (2) because what you say is not the truth; (3) the gulf war, although it was over quite quickly
Semantic core: Excluding, Negative assessment
Illocutionary force and pragmatic function: Assertive, directive, expressive. Getting rid of, removing, and dismissing annoying topics of talk by rapidly brushing them away from the speaker's body. Clearing off body space goes along with a qualification of the rejected objects as annoying, e.g., a topic of talk is being negatively assessed. (meta- and communicative function)

Recurrent gesture: Throwing away
Description of prototypical form: Cupped hand oriented vertically, palm facing away from the speaker's body; the hand flaps downward from the wrist – as if throwing an annoying object away.
Example utterances: (1) well; (2) it was interesting because they were gone for 1 or 2 years; (3) alright, leave it
Semantic core: Excluding, Negative assessment
Illocutionary force and pragmatic function: Assertive, directive, expressive. Getting rid of, removing, and dismissing an annoying topic of talk by throwing it away from the speaker's body. Clearing off body space goes along with a qualification of the rejected objects as annoying, e.g., a topic of talk is being negatively assessed. (meta- and communicative function)

Tab. 119.1: Continued

Recurrent gesture: Holding away
Description of prototypical form: Flat open hand(s), palm vertical and facing away from the speaker's body, moved or held outwards – as if holding or pushing away an object, or stopping an object from falling over.
Example utterances: (1) there are things I don't want to hear; (2) but hold on; (3) -----
Semantic core: Excluding, Refusing, Stopping, Rejecting
Illocutionary force and pragmatic function: Assertive, directive, commissive, expressive. Refusal, stopping something from intrusion, stopping from continuation, rejecting a speaker's or hearer's topic of talk, and a qualification of the rejected topic as an unwanted one. (meta- and communicative function)

Recurrent gesture: Palm Up Open Hand (PUOH)
Description of prototypical form: Palm open, turned upwards, often with a downward movement or turn of the wrist and a hold at the end – as if showing, offering, presenting, or receiving an object.
Example utterances: (1) if this isn't second class, what else?; (2) right?; (3) because I am a philosopher originally
Semantic core: Presenting, Giving, Offering, Showing
Illocutionary force and pragmatic function: Assertive, directive. Presenting an abstract discursive object as a manipulable and visible one, inviting participants to take on a shared perspective on this object. (meta- and communicative function)

Recurrent gesture: Sweeping away
Description of prototypical form: Flat open hand(s), palm facing downward, move laterally and horizontally outwards – as if sweeping something away from a flat surface (a liquid or bread crumbs) so that absolutely nothing is left.
Example utterances: (1) there were no problems; (2) solely; (3) alright, let's leave the topic aside
Semantic core: Excluding, Negating
Illocutionary force and pragmatic function: Assertive, directive. Negation, e.g., completely rejecting topics of talk by (energetically) sweeping them away from the center to the periphery, so that they are excluded from the conversation and negated. (meta- and communicative function)

Tab. 119.1: Continued

Section C

Recurrent gesture: Ring
Description of prototypical form: Index finger(s) and thumb(s) form a circle; index and thumb touch each other – as if grasping a small object. The hand(s) are held or moved up and down repeatedly.
Example utterances: (1) in particular the little pensioner; (2) in a great condition, after he did a wonderful job
Semantic core: Precision
Illocutionary force and pragmatic function: Assertive. Used for marking the precision of arguments. The precision grip is used for specification, clarification, and emphasis of the speaker's utterance. (meta-communicative function)

Recurrent gesture: Stretched index finger – held
Description of prototypical form: Stretched index finger is raised and held.
Example utterances: (1) the so called (-) attention Mario; (2) on the one hand (-) will give two examples; (3) but it is like this
Semantic core: Attention
Illocutionary force and pragmatic function: Assertive, directive. Used with cataphoric function by drawing the attention of other participants to new and particularly important topics of talk as well as to signal thematic shifts, such as when dismissing the statement of others. (meta- and communicative function)

Recurrent gesture: Stretched index finger – moved horizontally
Description of prototypical form: Stretched index finger, palm facing away from the speaker, is moved upwards and rapidly moved horizontally by turning the wrist.
Example utterances: (1) this is not true; (2) who you do not obey; (3) headshake -----
Semantic core: Denial
Illocutionary force and pragmatic function: Assertive, directive. The gesture is used to negate and express denial, often going along with verbal negation. (meta- and communicative function)

Tab. 119.1: Continued

Recurrent gesture: Dropping of hand
Description of prototypical form: Lax flat hand moves upwards, then drops on the lap or the table etc.; the dropping usually results in an acoustic signal of the hand.
Example utterances: (1) oh well you can forget about it; (2) I absolutely don't know, I have no clue; (3) well the team did not exist yet (-) well (-) I would say
Semantic core: Dismissing
Illocutionary force and pragmatic function: Assertive, expressive. Used to dismiss topics of talk by marking parts of the utterance as less important and interesting. (meta- and communicative function)

Recurrent gesture: Fist
Description of prototypical form: Clenched fist moves (rapidly) downwards – as if hitting hard.
Example utterances: (1) everybody ran; (2) that rather relies on totalitarian mechanisms like pressure and control
Semantic core: Strength, Force, Power
Illocutionary force and pragmatic function: Assertive, expressive. Used to put emphasis on parts of the utterance by directing the listener's attention; expresses emotional involvement and insistence. (meta-communicative function)

Recurrent gesture: Shaking off
Description of prototypical form: Rapidly shaking lax open hand, oriented towards the body – as if shaking off hot water to avoid scalding the hand. Often comes with a recurrent facial expression.
Example utterances: (1) bilharziosis for instance (-) a terrible disease; (2) puh:::
Semantic core: Dangerous, Delicate, Appalling
Illocutionary force and pragmatic function: Assertive, expressive. Used to mark an object or situation as potentially dangerous, delicate, or appalling. (meta- and communicative function)


3.1. Semantic cores

Based on the local meaning of the recurrent form and the meaning of its context-variants, the semantic core of the recurrent gestures was identified. The meaning was reconstructed based on the assumption that image-schematic structures and everyday actions constitute the derivational basis for a gesture's form, meaning, and function. For the group of recurrent gestures with the semantic core of "excluding" (see Bressem and Müller this volume), for instance, two shared image-schematic structures underlying all Away gestures are assumed: CENTER-PERIPHERY and SOURCE-PATH-GOAL. For the holding away member of the Away group we suggested a further image-schematic structure as motivation: BLOCKAGE. For all members of the family we found that different mundane actions work as derivational bases for these gestures. Thus, everyday actions of sweeping, brushing, holding, and throwing away (annoying and unpleasant) things in the surroundings of the speaker's body all result in a common effect: clearing the space surrounding the body of something by moving or keeping things away. This effect of action is semanticized in the gesture family "away", leading to a) shared structural and functional characteristics but also to b) particular kinesic qualities as well as to differences between the recurrent gestures. On the basis of these commonalities and differences, the semantic core "excluding" was identified for these recurrent gestures (see Bressem and Müller this volume for a detailed discussion of the "away" gestures; Ladewig 2010; Müller 2004; Teßendorf 2009 for a discussion of the methodological and analytical procedure in analyzing the image-schematic and action base of recurrent gestures).

3.2. Illocutionary acts and pragmatic functions

Based on Searle's classification (1979), the recurrent gestures included in the repertoire were assigned to the five categories of illocutionary acts:

1) assertives, by which the sender specifies the truth of the proposition by expressing acts about the sender, the receiver, or the state of things,
2) directives, by which the sender expresses how the receiver should act,
3) commissives, by which the sender commits him- or herself to doing something,
4) expressives, by which the sender expresses aspects of his or her state of mind, and
5) declaratives, by which the sender alters the state of things in the real world.

Depending on the context-of-use, the recurrent gestures included in the repertoire carry different illocutionary values. All of the gestures can be said to carry assertive value, as they constitute acts about the state of things. Directive as well as expressive acts are also very frequent, by which either acts for the receiver are signaled or the state of mind of the sender is expressed. Rather uncommon for the recurrent gestures included in the repertoire are commissives, acts by which the sender commits him- or herself to doing something. These can only be assumed to play a role for the gesture "holding away": when executed with both hands and an averted gaze or body, the sender commits him- or herself to making no further statements on the topic that is held away by the gesture. Apart from expressing different illocutionary values, the gestures included in the repertoire highlight the impact and consequences of the illocutionary acts expressed. With their performative or pragmatic function, recurrent gestures, "rather than contributing to the propositional content of the utterance, […] embody the illocutionary force or the communicative action which often remains verbally implicit" (Müller 2008: 225). In so doing, the gestures either fulfill a meta-communicative function and "display the communicative act of the speaker by acting upon speech as 'speech-performatives'" or they fulfill communicative actions and "aim at a regulation of the behavior of others as 'performatives'" (Teßendorf this volume: 1544) (see also Brookes 2004; Kendon 1995, 2004b; Payrató and Teßendorf this volume; Streeck 2006). Among the gestures included in the repertoire, members can be identified which solely seem to embody the illocutionary force of the proposition ("back and forth", "back and forth, index-thumb from wrist", "ring"). By highlighting and marking various aspects of the discourse and the discourse structure, these gestures specify a piece of discourse as having particular relevance and status with respect to other pieces, or distinguish topic from comment (Kendon 1995: 164, 2004b: 225–247). In so doing, they act upon speech and take over a meta-communicative function. Most members of the repertoire, however, cannot be said to have either meta-communicative or communicative function alone. Rather, the majority of gestures carry more than one pragmatic function and may do so even simultaneously. Depending on the context-of-use, the gestures show different dominance effects of these functions. A range of gestures primarily embodies the illocutionary force of the speaker's own utterance. Examples are the gestures "PDOH with clockwise rotation" and "swaying", which are used to mark events, states, as well as ideas as uncertain and indeterminate, the gesture "shaking off", used to mark an object or situation as potentially dangerous, delicate, or appalling, as well as the family of the "away" gestures, by which topics of talk are rejected by holding or moving them away. All of these gestures may primarily act upon the speaker's own utterance and, in these cases, can be understood to function as modal particles (Müller and Speckmann 2002).
Yet, apart from signaling the uncertainty or denial of the speaker, the gestures also provide instructions for the hearer on how to act, namely to take the uncertainty and denial into account in subsequent communicative actions. By implying instructions for subsequent communicative actions, the gestures act upon the behavior of the other. For the family of the “away” gestures, for instance, it is implied that the hearer should not put forward counterarguments that might relativize the speaker’s position. Sometimes, however, the gestures may express the perlocutionary value of an utterance. This is particularly prominent in speech-replacing uses, as documented for the gestures “throwing away” and “holding away” and for the “Palm Up Open Hand”, for instance. In all of these cases, the gestures act upon the behavior of the other by expressing instructions on how to act. In the case of “throwing away”, the speaker gesturally utters the instruction to forget about what has been uttered. With the “holding away” gesture, executed with both hands and with an averted gaze or body, the gesturer requests the speaker to stop addressing him or her on that topic. Similarly, a two-handed “Palm Up Open Hand”, often executed with raised shoulders, signals ignorance and, in doing so, requests the other to stop any further inquiries (cf. Kendon 2004b: 275–281 on the Open Hand Supine PL gestures). Apart from the fact that the individual members of the repertoire carry different illocutionary values, our results underline existing observations made for emblematic gestures: “The same body action can (occasionally) have several different illocutionary values” (Payrató 1993: 202). Moreover, German recurrent gestures highlight the impact and consequences of the illocutionary acts in different and quite specific ways, thus showing a spectrum of pragmatic functions.

119. A repertoire of German recurrent gestures with pragmatic functions


3.3. Structural and functional relations between gestures

In discussing the semantic cores of the recurrent gestures, we have highlighted particular semantic relations between specific members of the repertoire (e.g., the “away” gestures) and argued that semantic differentiations and specifications of particular types of gestures need to be seen in relation to other members of the repertoire (see section 3.2). By discussing these semantic overlaps, we have shown that our focus in analyzing the repertoire of recurrent gestures has not only been on a close description and discussion of single members but also on the relations between the members of the repertoire. By pursuing this perspective, different relations between members of the repertoire were uncovered. Moreover, it became apparent that, in addition to examining groupings of gestures from the perspective of gesture families, a perspective is also needed which takes into account the possibility of so-called gestural fields (Fricke, Bressem, and Müller this volume). In doing so, a complex network of relations between the members of the repertoire may be uncovered. We will illustrate this aspect by briefly discussing the “away” gestures and the gestures “back and forth, index-thumb from wrist”, “swaying”, and “PDOH with clockwise rotation” (see Tab. 119.2).

Tab. 119.2: Structural and functional relations between gestures

Recurrent gesture | Shared aspect of form | Shared aspect of meaning
sweeping away, holding away | horizontal and vertical movements away from the body | away
throwing away, brushing away | rapid (downwards) twists of the wrist | away
swaying, PDOH with clockwise rotation | wrist movement (clockwise) | uncertainty, vague, approximation
back and forth, index-thumb from wrist | wrist movement (clockwise) | change, process, opposition, contrast

All “away” gestures share the semantic theme of excluding. This theme is motivated by actions that serve to remove or hold away things, resulting in a shared effect, namely the clearing of the body space by moving or keeping things away (Bressem and Müller this volume). This semantic core goes along with shared aspects of form, as all gestures exhibit movements away from the center of the speaker. However, the “away” gestures can furthermore be split into two groups, based on differences in their movements: horizontal and vertical movements (sweeping and holding away) vs. movements of the wrist (brushing and throwing away). While sharing the semantic core “away”, differences in form lead to a further internal structuring and grouping of the “away” gestures. A similar pattern can be observed for the gestures “PDOH with clockwise rotation”, “swaying”, and “back and forth, index-thumb from wrist”. For all of these gestures, the shared aspect of form lies in clockwise twists of the wrist. Despite form differences between the gestures (handedness, lax flat hand, and bent fingers), a common formational core of all gestures can be found on the level of the movement type. This shared aspect of form, however, does not go along with a shared semantic core. Whereas the gestures

“PDOH with clockwise rotation” and “swaying” express the notion of uncertainty, vagueness, and approximation and are used to mark events, states, and ideas as uncertain and indeterminate, the gesture “back and forth, index-thumb from wrist” carries the notion of change, process, opposition, and contrast and is used to exemplify changing events and processes as well as to mark the opposition of arguments, events, and the like. Here, a shared aspect of form results in the differentiation of two specific types of meaning and thus exhibits a different pattern than that observed for the “away” gestures, which are primarily held together by a shared semantic theme. These two groups of gestures thus illustrate that recurrent gestures, as members of a repertoire, may be investigated from two different perspectives: either from common formational features (e.g., effect of action, types of movement) or from a common meaning (e.g., away, uncertainty, vague, approximation, etc.).

Semasiology starts from the form of individual signs and considers the way in which their meaning(s) are manifested, whereas onomasiology starts from the meaning or concept of a sign and investigates the different forms by which the concept or meaning can be designated or named (Baldinger 1980: 278; Geeraerts 2010: 23; Schmidt-Wiegand 2002: 738). The distinction between semasiology and onomasiology is equivalent to the distinction between family-oriented and field-oriented thinking. (Fricke, Bressem, and Müller this volume: 1632)

Accordingly, the recurrent gestures identified in the repertoire may be investigated by pursuing a perspective of gesture families or one of gestural fields (for more details, see also Fricke 2012). Without going into further detail on the distinction between gesture families and gestural fields, it needs to be pointed out that by pursuing a perspective on recurrent gestures which takes into account their relations with other gestures in the repertoire, whether sharing common formational aspects or common aspects of meaning, a complex internal structuring of groupings of gestures may be identified. In so doing, a new perspective on the nature of recurrent gestures as well as on their relations with other recurrent gestures is offered.
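The family/field distinction can be thought of as two ways of indexing one and the same set of gestures. The following Python sketch is purely illustrative and not part of the original study: it encodes a few repertoire members loosely after Tab. 119.2 and groups them once by shared form features (the family-oriented, semasiological perspective) and once by shared semantic themes (the field-oriented, onomasiological perspective). The feature labels and the data structure are our own simplification.

```python
# Illustrative sketch only: indexing a gesture repertoire by form vs. by meaning.
# Gesture names follow Tab. 119.2; feature labels are simplified assumptions.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class RecurrentGesture:
    name: str
    form_features: frozenset    # shared aspects of form
    semantic_themes: frozenset  # shared aspects of meaning


REPERTOIRE = [
    RecurrentGesture("sweeping away", frozenset({"movement away from body"}), frozenset({"away"})),
    RecurrentGesture("holding away", frozenset({"movement away from body"}), frozenset({"away"})),
    RecurrentGesture("throwing away", frozenset({"movement away from body", "wrist twist"}), frozenset({"away"})),
    RecurrentGesture("brushing away", frozenset({"movement away from body", "wrist twist"}), frozenset({"away"})),
    RecurrentGesture("swaying", frozenset({"clockwise wrist movement"}), frozenset({"uncertainty", "vagueness"})),
    RecurrentGesture("PDOH with clockwise rotation", frozenset({"clockwise wrist movement"}), frozenset({"uncertainty", "vagueness"})),
    RecurrentGesture("back and forth, index-thumb from wrist", frozenset({"clockwise wrist movement"}), frozenset({"change", "opposition"})),
]


def group_by(repertoire, attribute):
    """Group gesture names by each value of the given attribute.

    attribute="form_features"   -> family-oriented (semasiological) grouping
    attribute="semantic_themes" -> field-oriented (onomasiological) grouping
    """
    groups = defaultdict(list)
    for gesture in repertoire:
        for key in getattr(gesture, attribute):
            groups[key].append(gesture.name)
    return dict(groups)


families = group_by(REPERTOIRE, "form_features")    # e.g. all gestures sharing "wrist twist"
fields = group_by(REPERTOIRE, "semantic_themes")    # e.g. all gestures sharing the theme "away"
```

Note that the same gesture may appear in several groups under either perspective, which mirrors the observation above that a shared aspect of form need not go along with a shared semantic core.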

4. Discussion

The chapter has presented a first repertoire of recurrent gestures for speakers of German. By discussing the gestures’ illocutionary values and pragmatic functions, the repertoire ties in with existing repertoires for emblematic gestures (e.g., Brookes 2004; Payrató 1993) and thus offers the grounds for comparative analyses. Unlike existing accounts, however, which include this type of gesture within a repertoire of emblematic gestures, the present repertoire has made recurrent gestures the sole focus and, in doing so, has revealed characteristics of recurrent gestures not discussed so far. By arranging them based on the gestures’ degree of conventionalization, and by discussing the semantic themes expressed as well as the semantic and structural relations existing between the members of the repertoire, the chapter has furthered an understanding of the linguistic potential of gestures and shown that, in a similar way as for the examination of the spoken lexicon, relations between exemplars are also of utmost relevance for a repertoire or lexicon of conventionalized gestures. In drawing the focus away from discussing specific gestures of the repertoire as isolated types towards the gestures as members exhibiting relations with each other, the chapter has shown that a repertoire-based perspective allows for the delineation of gestures from each other, for the identification of their singularity, and for the explanation of specific gestures.

Acknowledgements

We thank Mathias Roloff for providing the drawings (www.mathiasroloff.de) and the Volkswagen Foundation for supporting this work with a grant for the interdisciplinary project “Towards a grammar of gesture: Evolution, brain and linguistic structures” (www.togog.org).

5. References

Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15: 469–489.
Bressem, Jana, Silva H. Ladewig and Cornelia Müller volume 1. Linguistic Annotation System for Gestures (LASG). In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1098–1125. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume. The family of Away gestures: Negation, refusal, and negative assessment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1592–1604. Berlin/Boston: De Gruyter Mouton.
Brookes, Heather 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Brookes, Heather 2005. What gestures do: Some communicative functions of quotable gestures in conversations among Black urban South Africans. Journal of Pragmatics 32: 2044–2085.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Calbris, Geneviève 2003. From cutting an object to a clear cut analysis. Gesture as the representation of a preconceptual schema linking concrete actions to abstract notions. Gesture 3(1): 19–46.
Calbris, Geneviève 2008. From left to right…: Coverbal gestures and their symbolic use of space. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 27–53. Amsterdam: John Benjamins.
Calbris, Geneviève 2011. Elements of Meaning in Gesture. Amsterdam: John Benjamins.
Fricke, Ellen 2010. Phonaestheme, Kinaestheme und multimodale Grammatik: Wie Artikulationen zu Typen werden, die bedeuten können. Sprache und Literatur 41(1): 70–88.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin: Mouton de Gruyter.
Fricke, Ellen this volume. Kinesthemes: Morphological complexity in co-speech gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1618–1630. Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen, Jana Bressem and Cornelia Müller this volume. Gesture families and gestural fields. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1630–1640. Berlin/Boston: De Gruyter Mouton.

Harrison, Simon 2009. Grammar, gesture, and cognition: The case of negation in English. PhD dissertation, Université Bordeaux 3.
Kendon, Adam 1983. Gesture and speech: How they interact. In: John M. Wiemann (ed.), Nonverbal Interaction, 13–46. Beverly Hills, CA: Sage Publications.
Kendon, Adam 1992. Some recent work from Italy on quotable gestures (emblems). Journal of Linguistic Anthropology 2(1): 92–108.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23: 247–279.
Kendon, Adam 2004a. Contrasts in gesticulation. A British and a Neapolitan speaker compared. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 173–193. Berlin: Weidler.
Kendon, Adam 2004b. Gesture. Visible Action as Utterance. Cambridge: Cambridge University Press.
Ladewig, Silva H. 2010. Beschreiben, suchen und auffordern – Varianten einer rekurrenten Geste. Sprache und Literatur 41(1): 89–111.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6.
Ladewig, Silva H. this volume a. The cyclic gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1605–1618. Berlin/Boston: De Gruyter Mouton.
Ladewig, Silva H. this volume b. Recurrent gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1558–1574. Berlin/Boston: De Gruyter Mouton.
Morris, Desmond 1977. Manwatching. A Field Guide to Human Behavior. London: Jonathan Cape; New York: Harry Abrams.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O’Shaughnessy 1979. Gestures: Their Origins and Distribution. London: Jonathan Cape.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2004. Forms and uses of the Palm Up Open Hand. A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler.
Müller, Cornelia 2008. What gestures reveal about the nature of metaphor. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 249–275. Amsterdam: John Benjamins.
Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia this volume. The Ring across space and time: Variation and stability of forms and meanings. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1511–1522. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gesture: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 707–733. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Jana Bressem, Silva H. Ladewig and Susanne Tag 2009. Introduction to special session “Towards a grammar of gesture”. Paper presented at the conference Gesture and Speech in Interaction (GESPIN), Adam Mickiewicz University, Poznań, Poland.
Müller, Cornelia, Silva H. Ladewig and Jana Bressem volume 1. Gesture and speech from a linguistic point of view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David
McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 55–81. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia and Gerald Speckmann 2002. Gestos con una valoración negativa en la conversación cubana. DeSignis 3: 91–103.
Neumann, Ragnhild 2004. The conventionalization of the ring gesture in German discourse. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 217–223. Berlin: Weidler.
Payrató, Lluís 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20: 193–216.
Payrató, Lluís this volume. Emblems or quotable gestures: Structures, categories, and functions. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1474–1481. Berlin/Boston: De Gruyter Mouton.
Payrató, Lluís and Sedinha Teßendorf this volume. Pragmatic gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1531–1539. Berlin/Boston: De Gruyter Mouton.
Poggi, Isabella 2002. Symbolic gestures: The case of the Italian gestionary. Gesture 2(1): 71–98.
Posner, Roland 2003. Everyday gestures as a result of ritualization. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Everyday Gestures: Meaning and Use, 217–230. Porto: Fernando Pessoa.
Searle, John R. 1979. Expression and Meaning. Studies in the Theory of Speech Acts. Cambridge: Cambridge University Press.
Seyfeddinipur, Mandana 2004. Meta-discursive gestures from Iran: Some uses of the “Pistol Hand”. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 205–216. Berlin: Weidler.
Streeck, Jürgen 2006. Gestures: Pragmatic aspects. In: Keith Brown (ed.), Encyclopedia of Language and Linguistics, 71–76. Oxford: Elsevier.
Streeck, Jürgen 2009. Gesturecraft. The Manu-facture of Meaning. Amsterdam/Philadelphia: John Benjamins.
Teßendorf, Sedinha 2009. From everyday action to gestural performance: Metonymic motivations of a pragmatic gesture. Paper presented at the conference AFLiCo, Lille, France.
Teßendorf, Sedinha volume 1. Emblems, quotable gestures, or conventionalized body movements. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 82–100. Berlin/Boston: De Gruyter Mouton.
Teßendorf, Sedinha this volume. Pragmatic and metaphoric gestures – combining functional with cognitive approaches in the analysis of the “brushing aside gesture”. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1540–1558. Berlin/Boston: De Gruyter Mouton.
Webb, Rebecca 1996. Linguistic features of metaphoric gestures. Unpublished PhD dissertation, University of Rochester, New York.
Weinrich, Lotte 1992. Verbale und nonverbale Strategien in Fernsehgesprächen: Eine explorative Studie. Tübingen: Niemeyer.

Jana Bressem, Chemnitz (Germany) Cornelia Müller, Frankfurt (Oder) (Germany)

120. The family of Away gestures: Negation, refusal, and negative assessment

1. Gesture families: A concept and its research
2. The family of Away gestures: How action schemes motivate semantic structures
3. A shared motivation of the Away family: Semanticization of an action scheme
4. The Away family with pragmatic functions
5. Conclusion
6. References

Abstract

Departing from an overview of research on gesture families and, in particular, on “gestures of negation” (Kendon 2004), the chapter describes “the family of Away gestures” along with their structural motivations: shared formational features, shared motivations, and shared semantic themes. Building upon Kendon’s analysis of two gesture families, the Open Hand Supine (OHS) family and the Open Hand Prone (OHP) family, we present a systematized reconstruction of a structural island of interrelated gestures: the Away family. The family consists of four recurrent gestures (including Kendon’s Open Hand Prone family), which share one formational or kinesic feature, “a (mostly straight) movement away from the body”, and a motivation: the family is semantically based on the similar effect of different kinds of manual actions, which serve to clear the body space of unwanted objects. The chapter presents an account of how an action scheme may selectively be used to motivate gestural meaning. It also shows how such an action scheme may provide a semantic motivation for a structural island within the gestural mode of expression that is visible in both the forms and the functions of the gestures. In doing so, suggestions for the embodied roots of negation, refusal, and negative assessment are made, and a further pathway to the study of how gestures may evolve into signs within signed languages is outlined.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1592–1604

1. Gesture families: A concept and its research

In her study of French gestures, Calbris (1990) suggests that investigating form variants of a gesture may not only provide new insights into the relationship between a gesture’s form, meaning, and motivation but, more importantly, may “uncover a network of physico-semic components whose points of intersection seem to determine the [semantic] nuances” (Calbris 1990: 134). Over the past years, a range of studies has addressed the question of whether variants of form go along with differences in meaning and motivation. Those studies have shown, in fact, that gestures may constitute larger coherent groups, structural islands, which are based on common aspects of form and meaning. In the context of his research on Neapolitan and British gestures, Adam Kendon has put forward the concept of families of gestures, or the “gesture family”:

When we refer to families of gestures we refer to groupings of gestural expressions that have in common one or more kinesic or formational characteristics. […] Within each family, the different forms that may be recognized, in most cases are distinguished in terms of the different movement patterns that are employed. Each family not only shares in a distinct set of kinesic features but each is also distinct in its semantic themes. The forms within these families, distinguished as they are kinesically, also tend to differ semantically although, within a given family, all forms share in a common semantic theme. (Kendon 2004: 227)

Such groupings of gestures have been identified for a range of gestural forms. An almost “classic” example of a gesture family is the family of the open hand. “Open Hand Supine” (OHS) (Kendon 2004: 264–281) or “Palm Up Open Hand” gestures (Müller 2004) come with the shared semantic themes of “offering” and “receiving” (Kendon 2004; Müller 2004). They are based on basic actions of “giving, showing, offering an object by presenting it on the open hand”, which serve as the derivational basis for all members of the family and are used to present an “abstract, discursive object as a concrete, manipulable entity” (Müller 2004: 233, 236). By varying the form of the kinesic core through various movement patterns (rotation, lateral movement, up and down movement), the semantic core of offering, giving, and receiving objects is extended to mean the continuation and listing of ideas, “a sequential order of offered arguments”, or the presentation of “a wide range of discursive objects” (Müller 2004: 254). Kendon distinguishes three members of the Open Hand Supine family: the Palm Presentation (PP) gesture, the Palm Addressed (PA) gesture, and the Palm Lateral (PL) gesture (Kendon 2004: 265), and assumes different motivations for their respective meanings. For Palm Presentation gestures (Open Hand Supine with simple wrist turn) these are the actions of presenting, requesting, or offering; for Palm Addressed gestures (Open Hand Supine moved towards an interlocutor) these are actions of presenting or displaying for inspection; for Palm Lateral gestures (Open Hand Supine with a lateral, sometimes backwards movement) this is the action of withdrawal. The shared semantic theme uniting the Open Hand Supine (Palm Up) family of gestures is that of offering, giving, and presenting topics of talk. (See Tab. 120.1 for an overview of the Open Hand Supine family and its family members, along with their shared formational features, shared motivations, and shared semantic themes.)
In recent studies, however, another group of gestures has received considerable attention among scholars of gesture. Kendon (2004: 248–264) has described them as members of the family of the Open Hand Prone (OHP) (palm down), which are used “in contexts where something is being denied, negated, interrupted, or stopped, whether explicitly or by implication” (Kendon 2004: 248) and which “share the semantic theme of stopping or interrupting a line of action that is in progress” (Kendon 2004: 249) or express “active physical refusal” (Calbris 2011) (see also de Jorio [1832] 2000). Depending on the palm’s orientation, Kendon identifies two members of the family. In cases in which the palm is oriented downwards horizontally and the hand(s) are moved laterally (Open Hand Prone ZP), the gestures are assumed to be based on actions “of cutting something through, knocking something away or sweeping away irregularities on a surface” (Kendon 2004: 263). Kendon points out that these gestures are not derived from actions authored by the speaker but “describe something that has happened, is happening or could happen” (Kendon 2004: 263, emphasis in original) (see also Calbris 1990, 2003). The Open Hand Prone ZP gestures share in the semantic theme of “interrupting, suspending or stopping a line of action” (Kendon 2004: 262). They may serve various functions, among them negation, by presupposing something “in relation to which they act” (Kendon 2004: 263). If the palm is oriented vertically (Open Hand Prone VP), the speaker uses the gesture to establish a barrier, push back, or hold back things moving towards him or her.

Tab. 120.1: The Open Hand Supine (Palm Up) family of gestures (Kendon 2004: 264–281; Müller 2004)

Open Hand Supine (OHS) (Palm Up) Family
Shared formational features: open hand supine, palm up
Shared motivation: offering, giving, showing, presenting objects in the hands
Shared semantic theme: offering, giving, presenting topics of talk

Family members:

PP (Palm Presentation)
Shared formational features: open hand supine, palm up
Shared motivation: offering, giving, showing, receiving objects
Shared semantic theme: offering, receiving, giving topics of talk

PA (Palm Addressed)
Shared formational features: open hand supine, palm up, movement towards interlocutor
Shared motivation: offering, giving, handing over of objects, requesting something
Shared semantic theme: to present for inspection, display the object pointed at

PL (Palm Lateral)
Shared formational features: open hand supine, palm up, lateral (or backwards) movement (sometimes combined with shoulder shrug)
Shared motivation: action of withdrawal from what has been presented
Shared semantic theme: withdrawal, unwillingness or inability

The gesture indicates “the actor’s intent to stop a line of action, whether this be the actor’s own, the line jointly engaged in with others, or that of the interlocutor” (Kendon 2004: 262). Depending on the position of the hands, the gesture specifies the kind of action to be stopped (close to the body: stopping one’s own action; in front of the body: stopping the joint action of speaker and interlocutor; movement towards the interlocutor: stopping the interlocutor). Although the two members of the Open Hand Prone family share in a common semantic theme (stopping or interrupting a line of action that is in progress), Kendon does not offer a shared motivation for them. On the contrary, he assumes them to be “quite different semiotically” (Kendon 2004: 263). By depicting a schematic act of pushing or holding something away, “Vertical Palm gestures constitute actions that the actor willfully performs. Horizontal Palm gestures are actions that describe something that has happened, is happening or could happen”, because they “represent some event or circumstance of which [the speaker] is not the author” (Kendon 2004: 263, emphasis in original). (Tab. 120.2 presents an overview of Kendon’s Open Hand Prone family.)

Tab. 120.2: The Open Hand Prone (Palm Down) family of gestures (Kendon 2004: 248–264)

Open Hand Prone (palm down) Family
Shared formational features: open hand prone, palm down or away
Shared motivation: none
Shared semantic theme: stopping or interrupting a line of action that is in progress

Family members:

VP (Vertical Palm)
Shared formational features: open hand prone, palm in vertical orientation, or palm away
Shared motivation: barrier
Shared semantic theme: halt a current line of action, to stop

ZP (Horizontal Palm)
Shared formational features: open hand prone, palm down, rapid, horizontal, lateral (decisive) movement away from midline of speaker’s body
Shared motivation: cutting, knocking or sweeping away
Shared semantic theme: some line of action is being suspended, interrupted or cut off; negation (of implied assumptions)

In accordance with Calbris (2003), Kendon assumes that one function of the Open Hand Prone ZP gestures is negation, a “kinesic parallel to the denial, interruption or negation expressed verbally” (Kendon 2004: 255). Similar to negation in speech, the gestures act in relation to possible counter-responses that might be implied by what is said. For the Open Hand Prone family as a whole, Kendon suggests that its members may in principle all serve as forms of negation, “if there is something presupposed in relation to which they act” (Kendon 2004: 263). Notably, as Kendon points out in a historical survey, gestures that serve the function of negation have attracted the interest of various scholars for quite a long time (Kendon 2004: 249–251).

Picking up on Kendon’s account of the Open Hand Prone family, Harrison (2009, 2010) presents an analysis of the two members of the family and offers a systematization of their occurrence with negation in speech. By taking into account further variations of form (hand shape, handedness, type, and direction of movement), Harrison not only documents correlations of form and meaning in different contexts-of-use but also shows that members of the family of the Open Hand Prone or Palm Down gestures correlate with particular verbal expressions, such as superlatives (e.g., best, most amazing, sweetest) and maximum-degree-marking adverbs (e.g., totally, absolutely, completely). More importantly, however, for particular variants of the Palm Down gestures, Harrison identifies a correlation with the node and scope of negation expressed in speech. Whereas the stroke of the gestures co-occurs with the negative node (i.e., the negating part of speech), the following post-stroke holds coextend with the scope of the negation and thus gesturally highlight what is being negated. Based on these results, Harrison suggests that “a multimodal principle appears to determine the syntax of negative sentences and the kinesics of negation gestures, while regulating how speakers integrate the two modalities during negative speech acts” (Harrison 2010: 45).

In addition to Palm Down gestures, the Brushing Aside gesture has been described as a further example of gestural negation (Müller and Speckmann 2002; Teßendorf this volume). Based on the action of brushing something aside, the gesture is most often used to “brush aside” discursive objects.
Depending on the place of execution (midline level or shoulder level), the gesture either takes over a modal and discursive function by "qualifying something as negative and marking the end of a certain discursive activity", or, by expressing a communicative move, it may function as a performative (Payrató and Teßendorf this volume; Teßendorf this volume). In the following, we will present a revised and extended analysis of the family of Open Hand Prone or Palm Down gestures.


VIII. Gesture and language

2. The family of Away gestures: How action schemes motivate semantic structures

Based on results from a data-driven account of a repertoire of recurrent gestures of German, the present chapter reconstructs a structural island within a manual mode of expression: the family of Away gestures. This family is semantically motivated by the effect of actions of removing or keeping away things. The effect that all these actions have in common is that the body space is cleared of annoying or otherwise unwanted objects. Members of the family do not share a particular hand shape and/or orientation, as in Kendon's four families, but a particular motion: all members of the Away family show a movement "away from body", which is mostly performed in a straight manner. Semantically, the family is bound together by the themes of rejection, refusal, negative assessment, and negation, which are directly derived from the semantics of the underlying action scheme, in particular from the effect that actions involving the clearing of the body space have in common: Something that was present has been moved away, or something threatening to intrude has been, or is being, kept away. In any case, the effect of the action is that the space around the body is empty. The members of the family share this effect: Sweeping Away, Holding Away, Brushing Away, and Throwing Away. The four members of the Away family are recurrent gestures (Ladewig 2010, this volume; Müller 2010). Recurrent gestures show a stable form–meaning relation, which "recurs in different contexts-of-use over different speakers in a particular speech community" (Ladewig this volume: 1559). Depending on their context-of-use, recurrent gestures show differences in form, which often correlate with variants of meaning and function. Characteristics of form are based on instrumental actions, from which particular aspects are mapped onto the structure of communicative actions.
Accordingly, recurrent gestures often take over pragmatic functions and either "display the communicative act of the speaker and act upon speech as 'speech-performatives'" or they may "aim at a regulation of the behavior of others as 'performatives'" (Teßendorf this volume: 1544). In addition, recurrent gestures may also serve a referential function in depicting concrete or abstract aspects of the topic being addressed in speech. Although recurrent gestures, unlike emblems or quotable gestures (Kendon 2004), are not translatable into words or phrases, the fixed form–meaning relation that holds stable across a wide range of communicative contexts, along with their mostly pragmatic functions, makes it plausible to assume that recurrent gestures undergo processes of conventionalization. It is assumed that only a limited number of conventionalized gestures with pragmatic function exists (see, e.g., Kendon 1995), which can be said to make up a possible repertoire of recurrent gestures widely shared by speakers in a particular cultural or social group (see Ladewig this volume and Müller 2010 for a detailed discussion of the notion "recurrent gesture"). The family of Away gestures was discovered in the context of an investigation of a repertoire of recurrent gestures of German. The repertoire consists of sixteen recurrent gestural forms altogether. It was identified by applying a linguistic analysis to the motivation of recurrent gestures' forms (their kinesic features, but also their movement gestalts) and their distribution across contexts-of-use (Bressem and Müller this volume).

2.1. Sweeping away

The sweeping away gesture, in other studies referred to as "finished" (Brookes 2004), "cutting" (Calbris 2003), "Open Hand Prone ZP" (Kendon 2004: 255–264), and "PD across" (Harrison 2010), is a recurrent gesture in which the (lax) flat hand(s), with the palm facing downwards, are laterally and horizontally moved outwards, mostly with a decisive movement quality. The hand(s) are typically positioned in the central gesture space. Sweeping away gestures are used only in relation with speech and may serve either referential or pragmatic functions. When used with referential function, sweeping away gestures illustrate, for instance, a period of time, the action of smoothing a plane, or wiping off elements on a plane. When used pragmatically, they serve as manual forms of negation. Given the space restrictions of this chapter, a detailed reconstruction can be provided only for the pragmatic meaning of sweeping away gestures. When used as gestures of negation, the meaning of sweeping away gestures is based on the effect of the underlying action. The shared motivation of sweeping away gestures is a completely cleared-off body space. This clearing off is achieved by energetically and efficiently sweeping away something from a flat surface (e.g., a liquid, bread crumbs, or wrinkles in a tablecloth), so that absolutely nothing is left. Sweeping away gestures create an empty plane around the speaker's body; formerly existing objects or obstacles are completely swept away or excluded from the body space. With this gesture, topics of talk (e.g., arguments, beliefs, or ideas) are energetically and completely rejected; they are (metaphorically) swept away from the center to the periphery, so that those objects or topics of talk are excluded from the conversation and thus are manually negated (see Fig. 120.1).

Fig. 120.1: Semanticization of an action scheme: The sweeping away gesture as negation

2.2. Holding away

Holding away gestures, also referred to as "wait" (Brookes 2004), "Open Hand Prone VP" (Kendon 2004: 251–255), and "palm vertical" (Harrison 2009) (see also Calbris 1990, 2011), are recurrent gestures in which the flat hand(s), with the palm vertically facing away, are held in front of the speaker's body. The hand(s) may be positioned in the center of the gesture space or in the upper periphery. Holding away gestures are used in relation with speech but also occur in contexts without accompanying speech. They may serve referential as well as pragmatic functions. When used with referential functions, they illustrate the pushing or holding away of objects or persons. When used pragmatically, they serve as a refusal or as an indication to stop, and they qualify the refused or stopped objects as unwanted ones. The meaning of the pragmatic holding away gestures is grounded in a shared motivation, namely the effect of actions that serve to maintain a cleared body space and to keep unwanted objects away from the body. This clearing of the body space is achieved by holding or pushing away an object, stopping an object from falling over, a door from smashing into the face, or an unwanted person from intruding into the personal space. The vertically oriented hands create a blockage, which either keeps objects from moving closer or pushes them away. Holding away gestures are based on a different manual action than the other members of the Away family, but the effect is similar: An empty space around the speaker's body is created. In contrast to the other Away gestures, holding away gestures may either create an empty space around the speaker's body or maintain such an empty surrounding. Based on these different types of away actions, shared semantic themes have emerged: Pragmatically used holding away gestures are used to reject topics of talk, to stop arguments, beliefs, and ideas from intruding into the realm of shared conversation, to stop the continuation of unwanted topics, and to qualify rejected topics as unwanted ones; in short, holding away gestures refuse and stop unwanted topics of talk (see Fig. 120.2).

Fig. 120.2: Semanticization of an action scheme: Holding away gestures as refusal and stopping of unwanted topics of talk

2.3. Brushing away

Brushing away gestures, in other studies referred to as "Brushing Aside" (Payrató and Teßendorf this volume; Teßendorf this volume) or "wiping off" (Müller 1998; Müller and Speckmann 2002), are recurrent gestures in which the lax flat hand, with the palm oriented towards the speaker's body, is moved outwards in a rapid twist of the wrist. They are used only in relation with speech; speech-replacing functions were not found in the data. Brushing away gestures may serve deictic as well as pragmatic functions. When used with a deictic function, they illustrate paths and directions; the formational feature that is semanticized in these variants is the direction of the movement. The gesture is then performed in front of the speaker's body and in the center of the gesture space. When used pragmatically, the hands are positioned at the side of the speaker's body and in the periphery of the gesture space. In these cases, brushing away gestures are used as a negative assessment (and with a modal function, cf. Müller and Speckmann 2002; Payrató and Teßendorf this volume). The meaning of pragmatic brushing away gestures is based on a shared motivation, namely the semanticization of the goal of an action scheme that results in a cleared body space and that involves the removal of unwanted and annoying objects. This common effect is achieved by rapidly brushing away small, annoying objects: crumbs from a sweater, a mosquito sitting on the arm, or sand from a towel (cf. also Teßendorf this volume). By brushing these metaphorical objects aside, the body space is cleared of unwanted, annoying arguments, beliefs, or ideas. Brushing away gestures share the semantic theme of getting rid of, removing, or dismissing annoying topics of talk by rapidly brushing them away from the body. The clearing of the body space goes along with a qualification of the rejected objects as annoying, so that with this gesture a topic of talk is being negatively assessed (see Fig. 120.3).

Fig. 120.3: Semanticization of an action scheme: The brushing away gesture as negative assessment

2.4. Throwing away

Throwing away gestures are recurrent gestures in which the lax flat hand, with the palm facing away from the speaker's body, is moved downwards by bending the wrist. The hand is positioned in a space around the body ranging from the center to the upper periphery. The gesture is used in relation with speech but also replaces speech. Throwing away gestures have a pragmatic (modal) function and either act upon speech or upon the behavior of others. They resemble brushing away gestures functionally, in that both gestures are used as negative assessments. (It is likely that the two variants are distributed differently across cultures: While throwing away gestures are very common in Germany, brushing away gestures appear to be more widely used in Spain and in Cuba.) Throwing away gestures co-occur quite often with the German adjective egal ('never mind') as well as with interjections such as ach ('alas'). Brushing away and throwing away gestures have a similar action base, that is, a similar shared motivation: a cleared body space and the removal of unwanted and annoying objects. The difference between the two is apparent in the hand shapes and, accordingly, in the removed objects. While brushing away actions are used to remove very small objects, throwing away actions are used to get rid of middle-sized roundish objects: a rotten fruit, the core of an apple, or a crumpled piece of paper to be thrown into the wastebasket. The goal in both cases is to clear the immediate surroundings of disturbing and useless objects. These instrumental actions serve to create an empty space around the speaker's body, which is used in discourse to mark arguments, ideas, and actions as uninteresting and void. Again, the effect of the manual action is what motivates the meaning. The shared semantic theme of throwing away gestures can be characterized as follows: getting rid of, removing, and dismissing annoying topics of talk by metaphorically throwing them away from the body.
The clearing of the body space goes along with a qualification of the rejected objects as annoying, that is, a topic of talk is being negatively assessed (see Fig. 120.4).

Fig. 120.4: Semanticization of an action scheme: Throwing away gestures as negative assessments


3. A shared motivation of the Away family: Semanticization of an action scheme

In this section, we offer a more detailed cognitive-semantic account of how an action scheme can motivate the meaning of gestures and the shared meaning within a gesture family. So far, we have reconstructed the motivation of gestural forms by applying a semiotic analysis that takes into account the gestures' derivation from instrumental and mundane actions. In so doing, we "depart from the assumption that the meaning of gestures is motivated (see also Calbris 1990, 2011; Mittelberg 2006; Mittelberg and Waugh this volume), that their forms embody meaning in a dynamic and mostly ad hoc manner, and that manual actions are a core basis of gestural meaning creation (see also Streeck 2009, volume 1)" (Müller, Bressem, and Ladewig volume 1: 711). The discovery of the four Away gestures is the result of a methodological process of going back and forth between determining the motivation of recurrent forms (Modes of gestural Representation and the manual actions involved) and different contexts-of-use (Müller 1998, 2004, 2009, 2010, this volume). This linguistic analysis of gestural forms and functions revealed that Away gestures do not share a particular hand shape and orientation, as Kendon's Open Hand Prone gestures do, but a particular formational feature: a movement away from the body, mostly performed in a straight manner. Moreover, it was found that Away gestures are motivated by different types of reenacted actions which have one effect in common: keeping things away from the body by brushing, sweeping, throwing, or holding them away with the hand(s). This effect constitutes the shared motivation for the family of Away gestures: a cleared body space, i.e., the effect or goal of actions of removing or keeping away things from the body space. We suggest that a particular action scheme may serve as a systematic basis for the development of semantic and pragmatic meaning (see Teßendorf this volume).
Gestures may reproduce perceptually salient aspects of instrumental actions and extract distinctive elements of the action by comparing, selecting, and recombining physically pertinent elements (see Calbris 1990, 2003; Müller 1998, 2010; Teßendorf this volume). By reproducing aspects of the action, gestures may evoke a particular element from the chain of action, namely either "the actor, the action, the instrument used or its result" (Calbris 2003: 26). As a consequence, gestures are linked to a motivating action via metonymy (Mittelberg 2010; Mittelberg and Waugh this volume; Müller 1998), so that parts of the action stand for the action as a whole. Teßendorf (this volume) breaks down the action scheme for brushing away gestures into four main steps:

(i) Point of departure: unpleasant situation
(ii) Cause: annoying objects in the immediate surrounding
(iii) Action: the back of the hand brushes these objects away
(iv) Endpoint/goal: objects are removed; end of the unpleasant situation and recovery of a neutral situation

Teßendorf shows that, because different aspects of the underlying action scheme are highlighted metonymically, brushing away gestures may meet different communicative aims. When taking over a modal function, for instance by expressing the speaker's attitude towards the content expressed in speech, they may highlight the objects involved in the action: Via metonymic relation, the action stands for the objects involved in the action. When used performatively, for instance to end an unpleasant situation, the gesture highlights the goal of the action: The action stands metonymically for its result. For the analysis of the family of Away gestures, this meant that even though each Away gesture is based on a particular action (brushing, sweeping, throwing, and holding away), the goal or effect of all these actions is the same: the removal of annoying things. Accordingly, we have assumed that a common action scheme motivates the pragmatic meaning of all four Away gestures: All four are based on actions by which things are removed from, or held away from, the body, resulting in an empty space, plane, or surface around the body (see Fig. 120.5).

Fig. 120.5: A shared motivation for the Away family: The semanticization of the effect or goal of an action scheme
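The mapping just described, from the metonymically highlighted step of the action scheme to the resulting gestural function, can be restated as data. The following Python fragment is purely an illustrative sketch under our own naming (nothing in it belongs to Teßendorf's or the authors' formal apparatus): it encodes the four steps of the brushing-away action scheme and looks up the pragmatic function associated with the step that a given use of the gesture highlights.

```python
# Illustrative sketch only: names and mappings are hypothetical,
# loosely paraphrasing Teßendorf's four-step action scheme.
from enum import Enum

class Step(Enum):
    POINT_OF_DEPARTURE = "unpleasant situation"
    CAUSE = "annoying objects in the immediate surrounding"
    ACTION = "the back of the hand brushes these objects away"
    ENDPOINT = "objects are removed; neutral situation recovered"

# Which step a use of the gesture metonymically highlights, and the
# pragmatic function that this highlighting yields in discourse.
FUNCTION_BY_HIGHLIGHT = {
    Step.CAUSE: "modal (speaker's negative stance toward the objects/topic)",
    Step.ENDPOINT: "performative (ending an unpleasant situation)",
}

def function_of(highlighted: Step) -> str:
    """Return the discourse function for a metonymically highlighted step."""
    return FUNCTION_BY_HIGHLIGHT.get(highlighted, "referential/other")

print(function_of(Step.ENDPOINT))
```

The point of the sketch is only that one underlying scheme yields several functions depending on which element is foregrounded, mirroring the metonymic analysis above.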

4. The Away family with pragmatic functions

Departing from a shared movement pattern (away) rather than from shared hand shapes, we found that Kendon's Open Hand Prone (ZP and VP) gestures form part of the group of Away gestures: the Away family. We have argued that this family consists of four members and expresses negation (sweeping away), refusal (holding away), and negative assessment (brushing and throwing away). The family shares a particular kinesic or formational feature, "movement away", and the semanticization of the same aspect of an underlying action scheme, that is, the effect or goal of an action: "keeping the body space clear of objects". This removal or holding away of objects is what motivates the shared theme of the Away family: rejection, refusal, negative assessment, and negation. Tab. 120.3 shows an overview of the Away family of gestures with pragmatic functions.

Tab. 120.3: The family of Away gestures with pragmatic function

Away family: Gestures of negation, refusal, and negative assessments
Shared formational features: away from body, mostly straight movement
Shared motivation: a cleared body space, i.e., the effect or goal of actions of removing or keeping away things from body space
Shared semantic theme: excluding

Family members:

Sweeping Away (Kendon's OHP, ZP)
Formational features: flat hand(s), palm facing downward, moved laterally and horizontally outwards
Motivation: a completely cleared-off body space. This is achieved by energetically and efficiently sweeping away something from a flat surface (a liquid, bread crumbs, or wrinkles in a tablecloth) so that absolutely nothing is left.
Semantic theme: negation, i.e., completely rejecting topics of talk by (energetically) sweeping them away from the center to the periphery, so that they are excluded from the conversation and negated.

Holding Away (Kendon's OHP, VP)
Formational features: flat hand(s), palm vertically away from speaker's body, moved or held outwards
Motivation: maintaining a cleared body space and keeping unwanted objects away. This is achieved by holding or pushing away an object, stopping an object from falling over, a door from smashing into the face, or an unwanted person from intruding into the personal space.
Semantic theme: refusal, i.e., stopping something from intruding, stopping continuation, rejecting a speaker's or hearer's topic of talk, and qualifying the rejected topic as an unwanted one.

Brushing Away
Formational features: lax hand, palm oriented towards speaker's body, moved outwards in a rapid twist of the wrist
Motivation: a cleared body space and the removal of unwanted and annoying objects. This is achieved by rapidly brushing away small, annoying objects: crumbs from a sweater, a mosquito sitting on the arm, sand from a towel.
Semantic theme: negative assessment, i.e., getting rid of, removing, and dismissing annoying topics of talk by rapidly brushing them away from the speaker's body. Clearing the body space goes along with a qualification of the rejected objects as annoying, i.e., a topic of talk is being negatively assessed.

Throwing Away
Formational features: cupped hand oriented vertically, palm facing away from the speaker's body, hand flaps downward from the wrist
Motivation: a cleared body space and the removal of unwanted and annoying objects. This is achieved by throwing away middle-sized roundish objects that one wants to get rid of: a rotten fruit, the core of an apple, a crumpled piece of paper for the wastebasket.
Semantic theme: negative assessment, i.e., getting rid of, removing, and dismissing an annoying topic of talk by throwing it away from the speaker's body. Clearing the body space goes along with a qualification of the rejected objects as annoying, i.e., a topic of talk is being negatively assessed.
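The layered structure of Tab. 120.3, with family-level features shared by all members and member-specific formational features and themes, can be restated as a small data model. The sketch below is purely illustrative: the class and field names are ours, and the feature strings loosely paraphrase the table rather than reproduce the authors' coding scheme.

```python
# Illustrative sketch only: a toy data model of the Away family,
# paraphrasing Tab. 120.3; names and strings are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class AwayGesture:
    name: str
    formational: str      # member-specific hand shape/orientation
    semantic_theme: str   # member-specific pragmatic theme

# Family-level features shared by all members (top rows of the table).
SHARED = {
    "formational": "away from body, mostly straight movement",
    "motivation": "a cleared body space (effect/goal of removing or keeping things away)",
    "semantic_theme": "excluding",
}

MEMBERS = [
    AwayGesture("Sweeping Away", "flat hand(s), palm down, lateral outward movement", "negation"),
    AwayGesture("Holding Away", "flat hand(s), palm vertical, moved or held outwards", "refusal"),
    AwayGesture("Brushing Away", "lax hand, palm toward body, rapid twist of the wrist", "negative assessment"),
    AwayGesture("Throwing Away", "cupped hand, palm away, flaps downward from the wrist", "negative assessment"),
]

# The family is held together by a shared movement feature, not a
# shared hand shape; the member themes specialize the shared theme.
themes = sorted({m.semantic_theme for m in MEMBERS})
print(themes)
```

The design point mirrors the argument of the section: what unifies the family sits at the `SHARED` level (the away movement and its cleared-body-space effect), while hand shapes vary freely across members.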

5. Conclusion

This chapter has dealt with the motivation of a gesture family by one aspect of an underlying action scheme. A linguistic analysis of the forms and meanings of gestures has documented processes of semanticization that lead to the emergence of a semantic field in the gestural modality (see also Fricke, Bressem, and Müller this volume). By following an embodied concept of gestural meaning construction, the chapter has also shed some light on what might be considered the embodied grounds of negation. The particular linguistic and semiotic focus of the analysis has furthermore served to uncover what could be considered proto-morpho-semantic structures in a manual mode of communication. With its focus on the systematic relations between groups of gestures, the chapter contributes to a systematic documentation of the nature of gesture forms and their motivations; it thus contributes to what we term a "grammar" of gestures (Müller, Bressem, and Ladewig volume 1). By describing such a structural island in a gestural mode of communication, it may also offer valuable insights into the emergence of signs from gestures.

Acknowledgements

We are grateful to the Volkswagen Foundation for supporting this work with a grant for the interdisciplinary project "Towards a grammar of gesture: Evolution, brain and linguistic structures" (www.togog.org).

6. References

Bressem, Jana and Cornelia Müller this volume. A repertoire of recurrent gestures of German. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1575–1591. Berlin/Boston: De Gruyter Mouton.
Brookes, Heather 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Calbris, Geneviève 2003. From cutting an object to a clear cut analysis. Gesture as the representation of a preconceptual schema linking concrete actions to abstract notions. Gesture 3(1): 19–46.
Calbris, Geneviève 2011. Elements of Meaning in Gesture. Amsterdam: John Benjamins.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A translation of La mimica degli antichi investigata nel gestire napoletano. With an introduction and notes by Adam Kendon. Bloomington: Indiana University Press. First published [1832].
Fricke, Ellen, Jana Bressem and Cornelia Müller this volume. Gesture families and gestural fields. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1630–1640. Berlin/Boston: De Gruyter Mouton.
Harrison, Simon 2009. Grammar, Gesture, and Cognition: The Case of Negation in English. PhD thesis. Université Michel de Montaigne, Bordeaux 3.
Harrison, Simon 2010. Evidence for node and scope of negation in coverbal gesture. Gesture 10(1): 29–51.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23: 247–279.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Ladewig, Silva H. 2010. Beschreiben, suchen und auffordern – Varianten einer rekurrenten Geste. Sprache und Literatur 41(1): 89–111.
Ladewig, Silva H. this volume. Recurrent gestures. In: Cornelia Müller, Ellen Fricke, Alan Cienki, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1558–1574. Berlin/Boston: De Gruyter Mouton.


Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. PhD dissertation, Cornell University. Ann Arbor, MI: UMI.
Mittelberg, Irene 2010. Interne und externe Metonymie: Jakobsonsche Kontiguitätsbeziehungen in redebegleitenden Gesten. Sprache und Literatur 41(1): 112–143.
Mittelberg, Irene and Linda Waugh this volume. Gestures and metonymy. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1747–1766. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2004. Forms and uses of the Palm Up Open Hand. A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), Semantics and Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler Verlag.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Routledge's Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia this volume. Gestural Modes of Representation as techniques of depiction. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1687–1702. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gesture: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 707–733. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia and Gerald Speckmann 2002. Gestos con una valoración negativa en la conversación cubana. DeSignis 3: 91–103.
Payrató, Lluís and Sedinha Teßendorf this volume. Pragmatic gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1531–1539. Berlin/Boston: De Gruyter Mouton.
Streeck, Jürgen 2009. Gesturecraft. The Manu-facture of Meaning. Amsterdam/Philadelphia: John Benjamins.
Streeck, Jürgen volume 1. Praxeology of gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 674–688. Berlin/Boston: De Gruyter Mouton.
Teßendorf, Sedinha this volume. Pragmatic and metaphoric gestures – combining functional with cognitive approaches. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1540–1558. Berlin/Boston: De Gruyter Mouton.

Jana Bressem, Chemnitz (Germany) Cornelia Müller, Frankfurt (Oder) (Germany)


121. The cyclic gesture

1. Introduction
2. The cyclic gesture and its variants
3. Creating meaning – the conceptual basis of the cyclic gesture
4. Questions of conventionalization and grammaticalization
5. References

Abstract

This chapter introduces the forms and functions of the cyclic gesture. It focuses on the distribution of this recurrent gesture over different contexts-of-use and shows how the different context variants correlate with variation in form and meaning. Furthermore, the cognitive-semiotic processes driving the use of this recurrent gesture are elucidated, arguing for an image-schematic idealized cognitive model that motivates the meaning of this gesture (Lakoff 1987). In doing so, questions of conventionalization and grammaticalization are also approached.

1. Introduction

The gesture under investigation in this chapter shows the formational (or kinesic) core of a continuous rotational movement, performed away from the body, which correlates with the semantic core of cyclic continuity. The hand remains in situ, i.e., it is not moved forwards or sideways. This gestural movement pattern (see Fig. 121.1) is perceived as a holistic gestalt, as the individual circles are not interrupted or accentuated at the lower trunk and are therefore not observed as discrete movements. It can be designated as recurrent, since it is used repeatedly in different contexts, and its formational and semantic core remains stable across different contexts and speakers. (For an overview and discussion of the characteristics of recurrent gestures, see Ladewig this volume.) This formational and semantic core has been termed 'cyclic gesture', as this term best captures the form and the meaning of the gesture. The study of the cyclic gesture used by German speakers, conducted in 2006, aimed at giving an encompassing account of its usage, that is, its distribution over different contexts-of-use, its form and meaning variants, and the different functions it fulfills.

Fig. 121.1: The cyclic gesture

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1605–1618


Furthermore, the cognitive-semiotic processes involved in the creation of gestural meaning were examined. In the following sections, these aspects will be summarized. Due to space restrictions, not all dimensions can be elucidated in detail; the reader is therefore referred to publications focusing on single aspects of the research conducted on the cyclic gesture (Ladewig 2007, 2010, 2011; Ladewig and Bressem 2013).

2. The cyclic gesture and its variants

The determination of the form of a recurrent gesture is central to the whole analysis, as the form of a gesture builds the foundation for the reconstruction of its meaning and function in a particular context. For this reason, the research on the cyclic gesture summarized here has put particular emphasis on this aspect, meaning that the description of form has been assigned a major role in the whole coding and analytical process (see, e.g., Bressem volume 1; Bressem, Ladewig, and Müller volume 1; Ladewig and Bressem 2013; Müller 1998, 2010; Müller, Bressem, and Ladewig volume 1). Thus, the analytic step of coding gestures is not only regarded as a means for giving a thorough description of gestural forms but is primarily conceived as a "discovery procedure" to identify patterns, structures, and regularities in gestures (Ladewig and Bressem 2013). Accordingly, the description of gestural forms without speech is given priority; meanings and functions are investigated only in a second step. Therefore, all instances of the recurrent movement pattern, showing referential but also pragmatic functions, were included in the analysis. The analysis of the cyclic gesture is grounded in detailed micro-analyses of instances of this gestural form, taken from a set of seven hours of video-recorded conversations (dyadic, triadic, as well as smaller group constellations) in naturalistic settings. Ninety minutes were recorded during a parlor game; the remaining 330 minutes of the data consist of conversations that were not confined to a certain topic. Gestural variants of the cyclic gesture were reconstructed in the following steps:

(i) Identification of a recurrent movement pattern and set-up of a corpus: determination of context variants
(ii) Annotation of gestures: determination of form variants
(iii) Analysis of form and speech: determination of the semantic core, meaning variants, and their functions
According to this procedure, three different context variants of the cyclic gesture – descriptions, word or concept searches, and requests – correlating with different forms and meanings, could be determined. These different variants will be introduced and exemplified in the following sections.

2.1. The cyclic gesture in the context of descriptions

The cyclic gesture in descriptions depicts continuous actions and events; in most cases, it is used with an abstract meaning. In the first instance (Fig. 121.2a), however, the cyclic gesture is used with concrete referential meaning. The woman explains the concept of "skimming", using her right, flat hand, positioned in the right periphery of her gesture space. The stroke of the gesture is

121. The cyclic gesture


Fig. 121.2: The cyclic gesture in the context of a description

performed on the phrase als wenn du wat wenn du wat abschöpfst im prinzip (literally: 'as if you as if you were skimming in principle'), depicting the action of skimming with an open hand. The other two examples (Fig. 121.2b, c) show the cyclic gesture used with abstract meaning. In the example illustrated in Fig. 121.2b, the woman is talking about a TV show in which scientists dealt with a particular historical event. While saying wo sie das mal so offarbeiten ('where they deal with it' [literally: 'work it up']), she deploys the cyclic gesture, miming how something is being worked up. Fig. 121.2c shows a woman depicting a mental process with the cyclic gesture. The stroke of the gesture parallels the phrase selber kombiniern kannst ('can combine yourself'). By accompanying the verb "combine", the cyclic gesture represents the combining of details as an activity that is in progress. This is also reflected in her hand configuration: by pointing at her head, the speaker shows that something located in her head is in progress. The cyclic gesture in the context of descriptions represents ongoing actions or events. Note that the continuity represented by the movement type is not always referred to in the verbal utterance, meaning that this gesture very often adds information not present in speech. The gesture is used with various hand shapes and orientations. What is of interest, however, is that the cyclic gesture used in descriptions is most often positioned in the right periphery of the speaker's gesture space, as the first and second examples show (Fig. 121.2a, b).


VIII. Gesture and language

2.2. The cyclic gesture in the context of word or concept searches

The cyclic gesture was primarily identified in contexts in which the speaker was searching for a word or concept. In this context, it fulfills a meta-communicative function, as it operates upon the utterance of the speaker ("speech performative", Teßendorf this volume). By determining the different stages of a word/concept search (see also Müller 1994) as well as the occurrence of hesitation phenomena, three different sequential positions were identified: (i) during a phase of non-fluent speech, while searching for the word or concept; (ii) during a phase of fluent speech (or in the transition from non-fluent to fluent speech), when finding the word or concept; and (iii) during a phase of fluent speech, after the search.

Fig. 121.3: Cyclic gesture in the context of a word or concept search (sequential position 1 and 2)



These three different usages will be exemplified in the following. The example illustrated in Fig. 121.3 shows the use of the cyclic gesture in the first and second sequential positions. In this instance, three women are talking about a movie speaker Me watched recently. The acting qualities of a famous Hollywood actor are brought up in the conversation. As the transcript shows, speaker Me agrees with her interlocutor concerning the performance of the actor by using the affirmative particle ja ('yes'). In order to account for her affirmation, she starts searching for the concept ART UND WEISE (WAY OF DOING, Fig. 121.3, lines 2 and 2a). The searching process becomes manifest in a reformulation announced by the reformulation indicator also ('that means', Fig. 121.3, lines 2 and 2a). The stroke of the cyclic gesture is executed during a pause of 0.5 seconds (Fig. 121.3, lines 2–3). The second instance of the cyclic gesture can be observed when the speaker finds the concept she is looking for and starts formulating the first vowel of the concept ART UND WEISE (WAY OF DOING, Fig. 121.3, lines 5–6). Thus, the first instance of the cyclic gesture is used while the speaker is engaged in the activity of searching for a concept; the second one is deployed when the speaker finds and formulates the concept she was searching for. The second example shows an instance of the cyclic gesture deployed in the third sequential position, that is, after the search, when fluent speech is resumed. Speaker M is talking about a person who is supposed to have paranormal strength, which is why he is able to suffer extreme pain. She has problems finding the right formulation for the concept of paranormal strength (see Fig. 121.4, lines 2 and 2a), which she

Fig. 121.4: Cyclic gesture in the context of a word or concept search (sequential position 3)



actually is not able to find (see her comment in Fig. 121.4, lines 4 and 4a). The cyclic gesture is used when the speaker comes back to her initial argument that "there is something about" the person being talked about. At this point of her explanation, she completes her word search and resumes the narrative track she was following before the search. In all three instances, the cyclic gesture does not describe continuous actions or events that are mentioned in speech, as is the case in descriptions (see section 2.1), but marks the communicative activities of a) searching for a word or concept, which is the primary use in this context, b) finding and formulating a word or concept searched for, or c) resuming speech. Accordingly, this context variant acts upon the speaker's own speech ("speech performative", Teßendorf this volume) and represents stages of the ongoing search activity. This function becomes manifest in additional parameters: in most cases, this variant is used in the central gesture space, reflecting its direction towards the speaker him/herself.

2.3. The cyclic gesture in the context of requests

The third context the cyclic gesture was observed in is that of requests. This variant is used to encourage an interlocutor to continue an ongoing (speech) activity.

Fig. 121.5: Cyclic gesture in the context of a request

Fig. 121.5 gives three instances of this variant. All are taken from a parlor game in which the speakers have to explain a word their game partners have to guess. In the first example, speaker Su explains the word Warnblinkanlage ('warning lights', Fig. 121.5a). After she describes a situation in which warning lights are deployed, her teammates name various objects. When speaker Cl starts formulating her lexical choice, speaker Su affirms her guess by using the affirmative particle ja ('yes'). She then starts using the cyclic gesture, spanning her affirmation, the following pause, and the beginning of the question Ja (0,5sec) wie heißt dieses ding? ('Yes, (0.5 sec) what is this thing called?'). Interestingly, in this example, the gestural request performed by the cyclic gesture is used before the verbal request. This is different in the other two examples, in which the cyclic gesture co-occurs with verbal requests, as in anderes wort ('different word', Fig. 121.5b) and Was ist das 'n andres wort ('What is it, another word', Fig. 121.5c). In both examples, the speaker encourages the interlocutor to look for and formulate another word. These observations show that, although it is often used concomitantly with speech, the cyclic gesture can perform a request all by itself and does not necessarily need a verbal counterpart. The cyclic gesture used in requests performs a speech act and "aim[s] at a regulation of the behavior of others" ("performative", Teßendorf this volume: 1544). This function is reflected in additional parameters: in most cases, this gesture is performed with a large movement size and is positioned in the right peripheral gesture space. Both form features add a deictic component to this context variant, meaning that it is directed towards the speaker's interlocutor.
Furthermore, unlike the other two variants, this variant is detachable from speech in that it can perform a speech act by itself.

2.4. Systematic variation of form and context

Many studies on recurrent gestures have demonstrated that variants of a recurrent gesture are distinguished by additional parameters reflecting the meaning variations of the semantic core (see, e.g., Bressem and Müller this volume a, this volume b; Kendon 2004; Müller 2004). In the case of the cyclic gesture, the position in gesture space, a parameter often left out of investigations, as well as the movement size contribute to the gestural variants (see Fig. 121.6). All variants of the cyclic gesture show a characteristic position in gesture space. Whereas the cyclic gesture used in the context of descriptions is preferably positioned in the right peripheral gesture space, the cyclic gesture in word/concept searches is performed most often in the central gesture space. When used as a request, the cyclic gesture is positioned in the same gesture space as in descriptions; however, it is combined with a large movement size, i.e., the movement is anchored at the elbow. Both parameters add a deictic dimension to this variant, meaning that it is directed towards and addressed to the speaker's interlocutor (Bavelas et al. 1992). The cyclic gesture in word or concept searches is directed toward the speaker her/himself, as it is positioned in the "personal gesture space" (Sweetser and Sizemore 2008). In the case of the cyclic gesture used in the context of descriptions, the gesture space is used iconically ("referent space", Sowa 2006) or as a means to direct the gesture towards the interlocutor. No systematic distribution in the use of the other form parameters (hand shape and orientation) was found. The findings therefore support the argument of a systematic




Fig. 121.6: Variation of form and context in the cyclic gesture (the description of the gesture space is adapted from McNeill [1992: 89])

variation of form and context, reflected in the three identified contexts-of-use and the form parameters “position in gesture space” and “movement size”.
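The reported correlation of form and context can be summarized as a simple decision rule. The following sketch encodes one reading of Fig. 121.6; the function name and string labels are hypothetical, not the authors' notation:

```python
# A minimal sketch of the form-context correlation as a decision rule
# (an illustration of the reported findings, not the authors' formalization):
# position in gesture space and movement size jointly distinguish the three
# context variants; hand shape and orientation were found not to be distinctive.

def classify_variant(position: str, movement_size: str) -> str:
    """Map the two distinctive form parameters onto a context variant."""
    if position == "center":
        # Word/concept searches are directed toward the speaker
        # ("personal gesture space").
        return "word/concept search"
    if position == "right periphery" and movement_size == "large":
        # The large, elbow-anchored movement adds a deictic,
        # addressee-directed component: the request variant.
        return "request"
    if position == "right periphery":
        return "description"
    return "unclassified"

assert classify_variant("center", "small") == "word/concept search"
assert classify_variant("right periphery", "large") == "request"
assert classify_variant("right periphery", "small") == "description"
```

The rule makes explicit that the same position (right periphery) serves two variants and that movement size is what disambiguates them.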

3. Creating meaning – the conceptual basis of the cyclic gesture

The different context variants of the cyclic gesture are held together by one semantic core, namely cyclic continuity, embodied in the formational core, that is, the continuous clockwise rotation. This basic form and meaning is reminiscent of the image schema cycle (Johnson 1987; for fundamental work on gestures and image schemas, see, inter alia, Cienki 1998, 2005 and Mittelberg 2006, 2010). Image schemas are "pervasive organizing structures in human cognition which emerge from our bodily and social interaction with the environment at a preconceptual level" (Santibanez 2002: 187). They are tied to our perception and to our motoric capabilities. The cycle schema (Johnson 1987) has emerged from the abstraction of perceived and experienced cyclic events and actions such as breathing, the four seasons, circling objects, or actions such as cranking or pedaling. Image schemas can be projected onto abstract domains and thus serve as source domains in the metaphoric mapping process. In this way, the cycle can constitute the basis for the conceptualization of cyclic time. It can furthermore structure our understanding of mental or bodily processes. These different metaphoric extensions of the



Fig. 121.7: Metaphoric cycle icm underlying the use of the cyclic gesture (adapted from Baldauf 1997)

cycle are also found in the use of the cyclic gesture. They are part of a cognitive model underlying the use of the cyclic gesture and forming its conceptual basis. The cycle icm ("idealized cognitive model", Lakoff 1987), presented above in Fig. 121.7, integrates the different context variants of the cyclic gesture and shows the interrelation of its possible bases and abstract meanings. The idealized cognitive model (icm) that underlies the use of the cyclic gesture entails the image schema cycle and its metaphoric projections brought forth in the different uses of this gesture. The image schema cycle has emerged from the experience, abstraction, and generalization of cyclic events or actions. Basic knowledge about circles, cyclic temporal events, or recurrence combines with the cycle and gives rise to an image-schematic idealized cognitive model (Lakoff 1987; see also "image schematic domain", Clausner and Croft 1999), which is projected onto abstract domains. These abstract domains are construed by way of metaphors. The different variants of the cyclic gesture serve as single components of the icm. In the examples presented above (see section 2), the metaphors mind is a machine, body is a machine, and time is motion through space were found to be expressed in the variants of the cyclic gesture. These metaphors can be differentiated, however, with respect to whether the processes they construe are depicted as being in progress or are being encouraged: whereas the cyclic gesture used in descriptions as well as in word or concept searches depicts processes that are in progress, the cyclic gesture used in requests is deployed to encourage processes. To give an example, when the cyclic gesture depicts ongoing thinking processes, as in the example illustrated in Fig. 121.2c, the mind-is-a-machine metaphor, or, more specifically, the metaphor thinking is a process in a machine, underlies the conceptualization and presentation of mental operations as circuits. Furthermore, not only is the mind conceptualized as a machine but the whole body is conceived as a machine, since thinking processes are part of the body. As circuits work continuously, the metaphor time is motion through space also plays a role in this example. (For a more thorough description, see Ladewig 2011.)

4. Questions of conventionalization and grammaticalization

The results of the form and context analysis buttress Bressem's findings of a standardized clustering of certain form features (Bressem 2007, volume 1). She proposes that parameters do not cluster accidentally and are not evoked by the speaker's individual preferences in gesturing, but are combined systematically. A so-called standard of form is of interest for scholars of gesture, since it is an index of a gesture's degree of conventionalization (Kendon 1988, 2004). Hence, the finding of a systematic variation of form and context for the cyclic gesture allows for reflections on the conventionalization processes at work in this gesture (see Fig. 121.8). Whereas the cyclic gesture in the context of a description is used to represent all kinds of continuous events or actions (though mostly to refer to abstract things), the cyclic gesture used in a word or concept search has only three possible usages, reflected in three sequential positions (see section 2.2). The cyclic gesture used in requests

Fig. 121.8: Degree of conventionalization of the cyclic gesture’s variants



is restricted to only one possible usage. Seen from this angle, the cyclic gesture used in requests appears to be the most conventionalized variant. The variation in the gestural form supports this argument. Whereas the cyclic gestures used in the contexts of descriptions and word/concept searches are distinguished by only one parameter, i.e., the position in gesture space, the cyclic gesture used in the context of requests shows two parameters contributing to the formation of this variant, namely the position in gesture space and the movement size. Furthermore, it is the only variant that can substitute for speech, as it performs a pragmatic function: it performs a speech act, aiming "at the regulation of the behavior of others" (Teßendorf this volume: 1544). Variants of recurrent gestures (or pragmatic gestures) showing these characteristics have often been defined as emblems or as "quotable forms" (Kendon 1995: 272; see also the discussion in Ladewig this volume). Taking these observations into account, it can be argued that the more restricted a gesture is in its usage, the more conventionalized it is in its form. The characteristic of detachability from speech provides further evidence of a higher degree of conventionalization (cf. Kendon 1988, 1995). With respect to the question of the "linguistic potential of gestures" (Armstrong and Wilcox 2007; Müller 2009, volume 1), these findings are of particular interest. The observed variation in the position and movement size of the gesture shows interesting analogies to sign language. As Wilcox and colleagues (Wilcox 2004, 2005, 2007; Wilcox, Rossini, and Pizzuto 2010) have shown, the sign impossible in Italian Sign Language (LIS), which is also characterized by a rotating movement, exhibits different "pronunciations", that is, "modifications to the dynamic movement contour and location of the sign […]" (Wilcox 2005: 30).
It is argued that the identified variations in the location and movement size of this sign are "analogous to prosodic stress" (Wilcox, Rossini, and Pizzuto 2010: 353). In the case of modal verbs in Italian Sign Language, however, both form parameters have achieved grammatical status and mark morphological alternations of strong and weak forms. Tying these findings back to the analysis of the cyclic gesture, it can be argued that prosodic stress in signs has a possible origin in gestural movements, thereby supporting the second route of grammaticalization (from gesture to a grammatical morpheme via a marker of intonation/prosody; see Wilcox 2004, 2007). A second observation with respect to grammaticalization should also be mentioned, however. The movement type of a continuous rotation has been observed to mark verb aspect in American Sign Language and Italian Sign Language, as stated by Klima and Bellugi (1979: 293). This movement type can mark durativity or continuation of events. In view of the findings presented above, it can be argued that the core of the cyclic gesture has developed into a marker of aspect in sign languages, as it is used to mark continuous events (see also Wilcox 2004; Wilcox, Rossini, and Pizzuto 2010). In general, the study of recurrent gestures sheds light on the principles at work within a "grammar of gesture" (Müller, Bressem, and Ladewig volume 1). Moreover, it offers the opportunity to set up culturally shared repertoires that can be investigated from a cross-cultural perspective. The study of the cyclic gesture, in particular, gives insights into embodied concepts of time and shows how these are 'ex-bodied' (Mittelberg 2006, volume 1) in a gesture.

Acknowledgements

I am grateful to Mathias Roloff for providing the drawings (www.mathiasroloff.de).



5. References

Armstrong, David F. and Sherman Wilcox 2007. The Gestural Origin of Language. Oxford/New York: Oxford University Press.
Baldauf, Christa 1997. Metapher und Kognition: Grundlagen einer neuen Theorie der Alltagsmetapher. Frankfurt am Main: Peter Lang.
Bavelas, Janet Beavin, Nicole Chovil, Douglas A. Lawrie and Allan Wade 1992. Interactive gestures. Discourse Processes 15(4): 469–489.
Bressem, Jana 2007. Recurrent form features in coverbal gestures. Unpublished manuscript, European University, Frankfurt (Oder). http://www.janabressem.de/Downloads/Bressem-recurrent form features.pdf.
Bressem, Jana volume 1. A linguistic perspective on the notation of form features in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1079–1098. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana, Silva H. Ladewig and Cornelia Müller volume 1. Linguistic annotation system for gestures (LASG). In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1094–1124. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume a. The family of AWAY gestures: Negation, refusal, and negative assessment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1592–1605. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume b. A repertoire of recurrent gestures of German with pragmatic functions.
In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1575–1592. Berlin/Boston: De Gruyter Mouton.
Cienki, Alan 1998. Straight: An image schema and its metaphorical extensions. Cognitive Linguistics 9(2): 107–149.
Cienki, Alan 2005. Image schemas and gesture. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 421–442. Berlin/New York: Mouton de Gruyter.
Clausner, Timothy C. and William Croft 1999. Domains and image schemas. Cognitive Linguistics 10(1): 1–31.
Johnson, Mark 1987. The Body in the Mind. The Bodily Basis of Meaning, Imagination, and Reason. Chicago, IL: University of Chicago Press.
Kendon, Adam 1988. How gestures can become like words. In: Fernando Poyatos (ed.), Cross-cultural Perspectives in Nonverbal Communication, 131–141. Toronto: C. J. Hogrefe.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23: 247–279.
Kendon, Adam 2004. Gesture. Visible Action as Utterance. Cambridge: Cambridge University Press.
Klima, Edward S. and Ursula Bellugi 1979. The Signs of Language. Cambridge, MA: Harvard University Press.
Ladewig, Silva H. 2007. The family of the cyclic gesture and its variants – systematic variation of form and contexts. Unpublished manuscript, European University, Frankfurt (Oder). http://www.silvaladewig.de/publications/papers/Ladewig-cyclic_gesture.pdf.
Ladewig, Silva H. 2010. Beschreiben, suchen und auffordern – Varianten einer rekurrenten Geste. Sprache und Literatur 41(1): 89–111.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6. http://cognitextes.revues.org/406.

Ladewig, Silva H. this volume. Recurrent gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1558–1575. Berlin/Boston: De Gruyter Mouton.
Ladewig, Silva H. and Jana Bressem 2013. New insights into the medium hand – Discovering structures in gestures based on the four parameters of sign language. Semiotica 197: 203–231.
Lakoff, George 1987. Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. Chicago: University of Chicago Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. PhD Dissertation, Cornell University. Ann Arbor, MI: UMI.
Mittelberg, Irene 2010. Geometric and image-schematic patterns in gesture space. In: Vyvyan Evans and Paul Chilton (eds.), Language, Cognition, and Space: The State of the Art and New Directions, 351–385. London: Equinox.
Mittelberg, Irene volume 1. The exbodied mind: Cognitive-semiotic principles as motivating forces in gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 755–784. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia 1994. Cómo se llama …? Kommunikative Funktionen des Gestikulierens in Wortsuchen. In: Peter Paul König and Helmut Wiegers (eds.), Satz-Text-Diskurs, 71–80. Tübingen: Niemeyer.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2004.
Forms and uses of the Palm Up Open Hand. A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), Semantics and Pragmatics of Everyday Gestures, 234–256. Berlin: Weidler.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Routledge's Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 202–217. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gesture: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 707–733. Berlin/Boston: De Gruyter Mouton.
Santibanez, Francisco 2002. The object image-schema and other dependent schemas. Atlantis 24(2): 16–49.
Sowa, Timo 2006. Understanding Coverbal Iconic Gestures in Object Shape Descriptions. Berlin: Akademische Verlagsgesellschaft Aka GmbH.
Sweetser, Eve and Marisa Sizemore 2008. Personal and interpersonal gesture spaces: Functional contrasts in language and gesture. In: Andrea Tyler, Yiyoung Kim and Mari Takada (eds.), Language in the Context of Use: Cognitive and Discourse Approaches to Language and Language Learning, 25–52. Berlin: Mouton de Gruyter.




Teßendorf, Sedinha this volume. Pragmatic and metaphoric – combining functional with cognitive approaches in the analysis of the "brushing aside gesture". In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1540–1558. Berlin/Boston: De Gruyter Mouton.
Wilcox, Sherman 2004. Gesture and language. Gesture 4(1): 43–73.
Wilcox, Sherman 2005. Routes from gesture to language. Revista da ABRALIN – Associação Brasileira de Lingüística 4(1–2): 11–45.
Wilcox, Sherman 2007. Routes from gesture to language. In: Elena Pizzuto, Paola Pietrandrea and Raffaele Simone (eds.), Verbal and Signed Languages: Comparing Structures, Constructs and Methodologies, 107–131. Berlin/New York: Mouton de Gruyter.
Wilcox, Sherman, Paolo Rossini and Elena Antinoro Pizzuto 2010. Grammaticalization in sign languages. In: Diane Brentari (ed.), Sign Languages, 332–354. Cambridge: Cambridge University Press.

Silva H. Ladewig, Frankfurt (Oder) (Germany)

122. Kinesthemes: Morphological complexity in co-speech gestures

1. Introduction: Co-speech gestures between idiosyncrasy and language
2. Beyond the morpheme boundary: Phonesthemes and kinesthemes as products of typification and semantization processes
3. Kinesthemes and morphological complexity: The example of gestural blending in pointing
4. Conclusion
5. References

Abstract

Targeting a multimodal approach to grammar and grammaticalization processes raises the crucial questions of how body movements become types that may be meaningful and to what extent typified gestures may combine to form complex structures. Providing and applying a concept of gestural kinesthemes analogous to vocal phonesthemes allows for the conclusion that the same linguistic processes of typification and semantization become manifest both in spoken language and in the gestural modality. Kinesthemes can be simple or complex. Simple kinesthemes can be characterized as intersubjectively typified and semanticized gestural tokens whose similarity on the level of form correlates with a similarity on the level of meaning. This conceptual framework is primarily based on the Peircean principle of diagrammatic iconicity combined with the Wittgensteinian concept of family resemblance and Stetter's concept of typification following Goodman. Complex kinesthemes turn out to be comparable to morphological blending processes in word formation and therefore possess at least rudimentary morphological and semantic compositionality.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1618–1630



1. Introduction: Co-speech gestures between idiosyncrasy and language

Taking McNeill's classic version of Kendon's Continuum (McNeill 1992) as a starting point, co-speech gestures are idiosyncratic by definition and – although strongly intertwined with the process of verbal utterance production – lack properties of linguistic coding in the narrow sense. According to McNeill (1992, 2005), they offer direct access to cognitive processes and image-like mental representations during verbal utterance production. Wundt's ideas (Wundt [1900] 1904, 1973) have led other scholars to focus on semiotic aspects of co-speech gestures and to emphasize their potential for unfolding language-like properties due to their pragmatic, semantic, and grammatical context, and to their communicative intention in language use (e.g., Birdwhistell 1970; Bressem 2012; Bressem and Müller this volume; Brookes 2004; Calbris 1990, 2011; Efron [1941] 1972; Enfield 2009, volume 1; Fricke 2007, 2012, volume 1, this volume; Fricke, Bressem, and Müller this volume; Harrison 2008, 2009; Kendon 1980, 2002, 2004; Ladewig 2010, 2011a, b, this volume; Müller 1998, 2004, 2010, volume 1; Müller, Bressem, and Ladewig volume 1; Payrató this volume; Pike 1967; Poggi 2002, 2007; Posner et al. in preparation; Teßendorf volume 1). An important objection against co-speech gestures as potential units of the language system (Saussurean langue or Chomskyan competence) from a grammatical point of view is their supposed lack of conventionalization (see Fricke 2010, 2012). According to this view, no well-defined types of stable form-meaning relations, like morphemes and lexemes, are possible without conventionalization. This view supposes that only conventionalized types called "morphemes" and "words" may combine to form higher-level, complex units of syntax. At first sight, it seems impossible to counter this objection.
In contrast to emblematic gestures, like the victory sign or the OK gesture, co-speech gestures are not lexicalized and thus cannot be found as entries in gesture lexicons (e.g., Lynn 2012; Posner et al. in preparation). Moreover, co-speech gestures are not at all covered by linguistic definitions of morphemes as minimal form-meaning pairs that remain stable across all contexts and occurrences (e.g., Matthews 1974). The assumption that co-speech gestures are therefore idiosyncratic seems to be unavoidable. The crucial question from a linguistic point of view is whether form-meaning correlations in utterances stop in every case at the classic morpheme boundary. As it turns out, there are linguistic approaches that question such rigid concepts of morphemes and assume that systematic typification and semantization processes beneath the morphological level but above the phonological level may apply to single sounds and sound combinations. These units are called “submorphemic differentials” or “phonesthemes” (Firth [1935] 1957; Bolinger [1968] 1975; Zelinsky-Wibbelt 1983). They are characterized as usage-based, intersubjective sound-meaning correlations, as found in the correlation between rhyme and meaning in the English monosyllabics bump, chump, clump, crump, flump, glump, grump, hump, etc., which all have the semantic feature ‘heavy’ in common (Bolinger 1975: 219; Zelinsky-Wibbelt 1983: 22). Providing and applying a concept of gestural kinesthemes analogous to vocal phonesthemes supports and further elaborates the hypothesis of a “rudimentary morphology” in co-speech gestures (Müller 2004) as part of the concept of gesture families and, moreover, allows for the conclusion that the same processes of typification and semantization become manifest both in spoken language and in the gestural modality (Fricke 2008, 2010, 2012).


VIII. Gesture and language

2. Beyond the morpheme boundary: Phonesthemes and kinesthemes as products of typification and semantization processes

2.1. Phonesthemes in vocal languages

According to Fricke (2010, 2012), a phonestheme is defined as a set of semanticized submorphemic tokens whose similarity on the level of form correlates with a similarity on the level of meaning. Semantizations occur either by assigning the same semantic features to all word forms that contain the corresponding submorphemic item (see Tab. 122.1), or the assignment is based on the Wittgensteinian principle of family resemblance (see Tab. 122.2). This definition broadens prior concepts of the phonestheme by taking into account three additional aspects: firstly, Stetter’s concept of typification following Nelson Goodman (Stetter 2005); secondly, the concept of diagrammatic iconicity following Peirce (1931–1958, 2000); and thirdly, the Wittgensteinian concept of family resemblance (Wittgenstein [1952] 1953, [1952] 1989), which allows for a prototypically structured net of semantic similarities (Rosch and Mervis 1975). The integration of concepts like that of the phonestheme into mainstream linguistics has been hindered especially by the strict language-theoretic division between langue and parole or, alternatively, competence and performance, which holds true for structuralist linguistics in the tradition of Saussure as well as for generative linguistics in the tradition of Chomsky. If, however, one adopts a concept of typification following Goodman ([1968] 1981), this gap is bridged: a linguistic type is conceived of as a set of copies that are not identical but merely similar, and that are not founded on a mutual original (Stetter 2005). Linguistic items, then, are no longer considered items of either the language system or language use alone, thus allowing the inclusion of “intermediates” like phonesthemes. The same goes for the semantization of co-speech gestures.
Because, according to Stetter, a token is not an identical but only a similar replica of another token, a type can only be found within usage (Stetter 2005). Therefore, processes of language change and code development can be described as the result of intersubjective language use. For the conception of phonesthemes and kinesthemes, this means that the starting point for a reconstruction of the system in usage is the utterance, covering the entire range of its medial manifestation, i.e., all aspects of vocal articulation as well as manual gestures and other speech-accompanying body movements. Why is the Peircean term ‘diagrammatic iconicity’ crucial for this kind of usage-based approach? And to what extent are phonesthemes and kinesthemes iconic? In contrast to onomatopoeia, like the classic birdcall example cuckoo, diagrammatic iconicity does not necessarily involve a direct similarity relation between form and meaning. The similarity holds rather between the relations of forms and the relations of meanings. By way of example, Fig. 122.1 depicts traffic signs that indicate a railway crossing without barriers.

Fig. 122.1: Diagrammatic iconicity in traffic signs (cf. Plank 1978: 253)

The distance to the crossing, indicated by the signs, decreases from left to right (Plank 1978). On the form level and considered in isolation, the single signs do not appear in any way motivated by their respective content. Furthermore, there is no similarity with regard to the signified object. Put into the context of the other signs, however, a similarity relation between the single signs can be detected: all three traffic signs are alike apart from the number of diagonal bars, which diminishes from left to right. This similarity relation on the form level corresponds to the relation between the individual signs’ meanings: the diminishing number of bars correlates with the respective reduction in distance. According to Peirce, phenomena of this kind are termed ‘diagrammatic iconicity’ (Peirce 2000: 98, vol. 2). Saussure calls form-meaning relations like these “relatively motivated” (Saussure [1916] 1966: 133). He distinguishes grammatical from lexicological types of sign systems: the more a sign system is relatively motivated, the more grammatical it is (Saussure 1966: 133). Relative motivation or diagrammatic iconicity, however, is not limited to the lexicon and the grammar of spoken language but can be shown to hold for phonesthemes and co-speech gestures (kinesthemes) in limited contexts as well. What does the Wittgensteinian principle of family resemblance have to offer for modifying the concept of the phonestheme introduced so far? For the purpose of illustration, let us consider the English example smog, following Zelinsky-Wibbelt’s analysis: the phonestheme -og constitutes the rhyme of the word forms fog, bog, clog, hog, jog, log, and slog. The corresponding phonological unit /-ɔg/ is semanticized with the meaning ‘heavy’ (Zelinsky-Wibbelt 1983). Similarly, the phonestheme sm- at the onset of the word forms smoke, smear, smirch, smirk, smudge, smut, and smutch is associated with the meaning ‘dirty’.
Combining both phonesthemes results in the phonological form smog (represented by the written word form) on the form level and in its twofold semantization of ‘heavy’ and ‘dirty’ on the meaning level. Here we have a case of semantic compositionality: comparing the meaning of smog with the meaning of fog, the meaning of smog may be paraphrased as ‘heavy, burdensome fog that is dirty’ (cf. Zelinsky-Wibbelt 1983). Following Zelinsky-Wibbelt’s concept of semantic loading, all submorphemic tokens of different word forms are semanticized by a shared meaning or rather, in our terminology, by a shared semantic feature (cf. Tab. 122.1). Zelinsky-Wibbelt thereby adheres to a type-token concept founded on the traditional assumption of a replica relation between the different tokens (Fricke 2010, 2012).

Tab. 122.1: Semantization by a shared feature following Zelinsky-Wibbelt (Fricke 2010)

                         Word 1     Word 2     Word 3     Word 4
Submorphemic structure   /-x-/      /-x-/      /-x-/      /-x-/
Semantic features        +a +y +z   +a +u +v   +a +m +n   +a +j +k

(The shared feature +a occurs in every word form containing /-x-/.)
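The compositional step from sm- and -og to smog can be rendered as a small toy sketch. This is purely illustrative and not part of the original analysis: the dictionary `phonesthemes` and the function `semanticize` are invented names, and the feature labels simply restate the paraphrases given in the text.

```python
# Toy model of phonesthematic compositionality (hypothetical names).
# Feature assignments follow Zelinsky-Wibbelt's analysis as cited above.
phonesthemes = {
    "sm-": {"dirty"},   # smoke, smear, smirch, smirk, smudge, smut, smutch
    "-og": {"heavy"},   # fog, bog, clog, hog, jog, log, slog
}

def semanticize(onset, rhyme):
    """Combine the semantic loads carried by onset and rhyme phonesthemes."""
    return phonesthemes[onset] | phonesthemes[rhyme]

# smog = sm- + -og: 'heavy, burdensome fog that is dirty'
assert semanticize("sm-", "-og") == {"dirty", "heavy"}
```

The set union mirrors the twofold semantization described above: each phonestheme contributes its semantic load, and the blend inherits both.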

If, as suggested above, we follow Stetter’s concept of typification and thus assume similarity rather than identity between tokens, Wittgenstein’s concept of family resemblance can be combined with it, so that the structure of semantization can be depicted along the lines of Tab. 122.2:


Tab. 122.2: Semantization by family resemblance features (Fricke 2010)

                         Word 1        Word 2        Word 3        Word 4
Submorphemic structure   /-x-/         /-x-/         /-x-/         /-x-/
Semantic features        +a +b +c +w   +b +c +d +x   +a +b +d +y   +a +c +d +z

The submorphemic structure /-x-/ is semanticized by the set of semantic features [+a, +b, +c, +d]. However, not all features of this set occur in every word containing this submorphemic structure. Rather, the semantic features constitute a network of semantic similarities on the meaning level, analogous to Wittgenstein’s characterization of family resemblance using games as an example (Wittgenstein 1953, 1989; Rosch and Mervis 1975). Combined with the Peircean concept of diagrammatic iconicity and Stetter’s concept of typification introduced above, the concept of family resemblance broadens the technical apparatus for analyzing verbal phonesthemes as well as gestural kinesthemes.
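The family-resemblance structure of Tab. 122.2 can be checked with a brief toy sketch. This is purely illustrative and not from the source: the word names and single-letter feature labels are the schematic stand-ins used in the table, and `family_pool` is an invented name for the feature set [+a, +b, +c, +d].

```python
# Toy model of semantization by family resemblance (Tab. 122.2).
from itertools import combinations

words = {
    "word1": {"a", "b", "c", "w"},
    "word2": {"b", "c", "d", "x"},
    "word3": {"a", "b", "d", "y"},
    "word4": {"a", "c", "d", "z"},
}

# Feature pool of the submorphemic structure /-x-/, excluding the
# idiosyncratic features w, x, y, z of the individual words.
family_pool = set.union(*words.values()) - {"w", "x", "y", "z"}
assert family_pool == {"a", "b", "c", "d"}

# Unlike Tab. 122.1, no single feature is shared by ALL word forms ...
assert set.intersection(*words.values()) == set()

# ... yet every pair of word forms overlaps in two family features,
# yielding a network of similarities rather than one common feature.
for w1, w2 in combinations(words.values(), 2):
    assert len((w1 & w2) & family_pool) == 2
```

The assertions make the contrast between the two tables explicit: semantization by a shared feature requires a non-empty intersection across all words, whereas family resemblance only requires pairwise overlap.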

2.2. Kinesthemes in co-speech gestures

How can one define kinesthemes analogously to phonesthemes? According to Fricke (2010, 2012), a kinestheme is a set of intersubjectively semanticized movement tokens whose similarity on the form level correlates with a similarity on the meaning level. The similarity relation on the meaning level corresponds to the relation of family resemblance following Wittgenstein. An important difference between vocal phonesthemes and gestural kinesthemes is the different degree of simple iconicity due to their different modalities and their belonging to different codes. In contrast to words as items of the language system, the form-meaning relations of co-speech gestures are primarily non-arbitrary and thus, in Peircean terms, are to be classified as non-symbolic. Moreover, co-speech gestures that directly display simple iconicity are far from being marginal occurrences, as onomatopoeia are in vocal languages. Consequently, with regard to Peircean diagrammatic iconicity, we are faced with the following question: how can we exclude direct iconic motivation as a cause of gestural semantization in order to prove that gestural kinestheme formation is based solely on diagrammatic iconicity? In German, six hand shapes (see Fig. 122.2) are recurrent across speakers and contexts according to Bressem (Bressem 2006; Ladewig and Bressem 2013).

Fig. 122.2: Cross-speaker hand shapes following Bressem

Two of these hand shapes, namely the open hand (PLOH) and the extended index finger (G-Form), also occur in non-iconic pointing gestures that refer deictically to the reference object intended by the speaker without depicting it at the same time. By choosing non-iconic deictic gestures, we can exclude direct iconic motivation as the cause of gestural semantization for the examples under consideration. For single occurrences in Italian at least (Kendon and Versante 2003; see also Haviland 2003 on pointing in Zinacantán), the form differentiation between the palm-lateral-open-hand gesture (PLOH) and the G-Form in deictic gestures is connected with a semantic differentiation. A quantitative study has shown this to be true of German as well (Fricke 2007, 2010). In German, we can observe two typified forms of pointing gestures (see Fig. 122.3): firstly, the so-called G-Form with an extended index finger and the palm oriented downwards; and secondly, the palm-lateral-open-hand gesture (PLOH) (Fricke 2007). The G-Form is semanticized with a meaning that can be paraphrased as ‘pointing to an object’, whereas the meaning of the PLOH gesture is directive (‘pointing in a direction’).

Fig. 122.3: Two types of pointing gestures in German: G-Form and PLOH (Fricke 2007: 109)

Are we concerned here with kinesthematic co-speech gestures or emblems with lexicalized form-meaning relations? The following considerations support a kinesthematic interpretation: On the one hand, both deictic functions may be instantiated through arbitrary idiosyncratic forms, i.e., it is possible to point with the elbow or the foot at a particular spatial point or in a particular direction (Fricke 2007: 279). On the other hand, the interpretation of both recurrent hand shapes – index finger gesture and open hand – as ‘pointing to an object’ and ‘pointing in a direction’ is contextually limited to the domain of deixis, though in other contexts the form of the index finger gesture may be interpreted as an iconic representation of a similarly shaped object, for example a road (Fricke 2007: 279, this volume). Lexicalized verbal deictics like here and there or now and then, however, although referentially variable like pointing gestures, at the same time possess a context-independent symbolic function across all contexts, as Bühler ([1934] 1982, [1934] 1990) emphasizes. In general, the same holds for emblematic gestures: the form-meaning relation of emblems is, in principle, lexicalized and not contextually limited.


3. Kinesthemes and morphological complexity: The example of gestural blending in pointing

Analyses of empirical examples from route descriptions at Potsdamer Platz in Berlin show that kinesthemes can be simple or complex. Complex kinesthemes can be compared to the products of word-formation processes such as morphological contaminations or blendings that occur during speech production (e.g., Denglish is a contamination of the words Deutsch and English) (Fricke 2010, 2012). The following pointing gestures illustrate an analogous process of gesture formation:

Fig. 122.4: Blending of G-Form and palm-lateral-open-hand (PLOH) (Fricke 2012: 112)

Fig. 122.5: Palm-lateral-open-hand (gesture 3)

Fig. 122.6: Blending of G-Form and palm-lateral-open-hand (gesture 4)

Fig. 122.7: Palm-lateral-open-hand (gesture 5)


The blending of the G-Form and the palm-lateral-open-hand as shown in Fig. 122.4 is part of the following sequence of gestures accompanying a verbal route description:

(1) A: [du kommst hier vorne raus an dieser Straße (.)]1 [und gehst hier geradeaus (.)]2 3[nich 4[da durch/]4]3 [sondern hier geradeaus immer geradeaus]5
    ‘[you come here in front out on this road] [and go here straight ahead] [not there through] [but here straight ahead always straight ahead]’

Gesture 3 in Fig. 122.5 is a directional gesture executed with the open lateral hand. In gesture 4 (see Fig. 122.6), the index finger then branches off this open hand and the other fingers are bent slightly, without further changing the hand position. Accompanying the verbal deictic da [there], the index finger points at a spatial point instantiated by an object while maintaining the direction “straight ahead”. In gesture 5 (palm-lateral-open-hand, see Fig. 122.7), finally, the fingers are straightened in the same position, and a new stroke is executed with the open hand accompanying the verbal deictic hier [here] in the utterance sondern hier geradeaus, immer geradeaus [but here straight ahead, always straight ahead], thus providing directional information. Why does the palm in gesture 4 not face downwards in this utterance? This can be explained by assuming a morphological blending of the G-Form (‘pointing to an object’) and the palm-lateral-open-hand (‘pointing in a direction’), which can be paraphrased as ‘pointing to an object in a particular direction’ (Fricke 2007: 113–114). This blending arises out of the combination of both types of deictic kinesthemes, as depicted in Fig. 122.8.

Fig. 122.8: Complex kinestheme formation: Blending of directional deixis and spatial point deixis (Fricke 2012: 113)
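The formation of the complex kinestheme can likewise be sketched as a feature union. This is a purely illustrative toy model, not part of the original analysis; the dictionary keys and feature labels are my own shorthand for the two paraphrases given in the text.

```python
# Toy model of complex kinestheme formation by blending (cf. Fig. 122.8).
# Hypothetical feature labels paraphrasing the two simple kinesthemes.
kinesthemes = {
    "G-Form": {"pointing", "object deixis"},      # 'pointing to an object'
    "PLOH":   {"pointing", "directional deixis"}, # 'pointing in a direction'
}

# The blend unites both feature sets, paraphrasable as
# 'pointing to an object in a particular direction'.
blend = kinesthemes["G-Form"] | kinesthemes["PLOH"]
assert blend == {"pointing", "object deixis", "directional deixis"}
```

As with the smog example above, compositionality amounts to each simple kinestheme contributing its semantic features to the complex form.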

Analogous to morphological word formation in blendings like Denglish, phonesthematic compositionality is restricted to a relatively small part of the vocabularies of languages (Lyons 1977; Zelinsky-Wibbelt 1983; Bußmann 1990). Examples of kinesthematic compositionality can be found in so-called gesture families (e.g., Calbris 1990, 2011; Fricke, Bressem, and Müller this volume; Kendon 1995, 2004; Ladewig 2010, 2011b; Müller 2004). However, the main point is that processes of formal typification and semantic loading on the verbal and gestural level are both guided by the same principles (Fricke 2012, volume 1). As previously mentioned, Saussure (1966) distinguishes grammatical from lexicological types of sign systems: the more a sign system is relatively motivated, the more grammatical it is. Kinesthematic processes of typification and semantization and their potential for morphological and semantic compositionality are therefore prerequisites for a multimodal approach to grammar (Fricke 2008, 2012, volume 1).

4. Conclusion

Targeting a multimodal approach to grammar and grammaticalization processes raises the crucial questions of how body movements become types that may be meaningful and to what extent typified gestures may function as components of complex entities. Providing and applying a concept of gestural kinesthemes analogous to vocal phonesthemes allows for the conclusion that the same linguistic processes of typification and semantization become manifest both in spoken language and in the gestural modality. Kinesthemes can be simple or complex. Simple kinesthemes can be characterized as intersubjectively typified and semanticized gestural tokens whose similarity on the level of form correlates with a similarity on the level of meaning. This conceptual framework is primarily based on the Peircean principle of diagrammatic iconicity combined with the Wittgensteinian concept of family resemblance and Stetter’s concept of typification following Goodman. Complex kinesthemes turn out to be comparable to morphological blending processes in verbal word formation and therefore possess at least rudimentary morphological and semantic compositionality (Fricke 2010, 2012, volume 1). The concept of kinesthemes not only supports and further elaborates the hypothesis of a “rudimentary morphology” in co-speech gestures (Müller 2004: 3), especially in so-called gesture families, but also substantiates the category of “recurrent gestures” located between idiosyncratic and emblematic gestures in Kendon’s Continuum (e.g., Bressem and Müller this volume; Ladewig this volume; Teßendorf volume 1). Moreover, it complements other types of non-typified meaning construction, for example, processes of abstraction from everyday gestures in gestural etymology (e.g., Posner 1993, 2004; Posner et al.
in preparation; Müller 2004; Lynn 2012; Streeck 2009, volume 1) or idiosyncratic metaphor and metonymy in co-speech gestures (Cienki 2008; Cienki and Müller 2008; Mittelberg 2006, 2008; Mittelberg and Waugh 2009, this volume; Müller 2008, 2010).

5. References

Birdwhistell, Ray 1970. Kinesics and Context. Essays on Body Motion Communication. Philadelphia: University of Pennsylvania Press.
Bolinger, Dwight L. 1975. Aspects of Language. New York: Harcourt Brace Jovanovich. First published [1968].
Bressem, Jana 2006. Formen redebegleitender Gesten – Verteilung und Kombinatorik formbezogener Parameter. MA thesis, Freie Universität Berlin.
Bressem, Jana 2012. Repetitions in gesture: Structures, functions, and cognitive aspects. PhD Dissertation, European University Viadrina, Frankfurt (Oder).

Bressem, Jana and Cornelia Müller this volume. A repertoire of recurrent gestures in German. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1575–1591. Berlin/Boston: De Gruyter Mouton.
Brookes, Heather 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224.
Bühler, Karl 1982. Sprachtheorie. Die Darstellungsfunktion der Sprache. Stuttgart/New York: Fischer. First published [1934].
Bühler, Karl 1990. Theory of Language. The Representational Function of Language. Amsterdam/Philadelphia: John Benjamins. First published [1934].
Bußmann, Hadumod 1990. Lexikon der Sprachwissenschaft. Stuttgart: Kröner.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Calbris, Geneviève 2011. Elements of Meaning in Gesture. Amsterdam/Philadelphia: John Benjamins.
Cienki, Alan 2008. Why study metaphor and gesture? In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 5–24. Amsterdam/New York: John Benjamins.
Cienki, Alan and Cornelia Müller (eds.) 2008. Metaphor and Gesture. Amsterdam/New York: John Benjamins.
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton. First published [1941].
Enfield, N.J. 2009. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press.
Enfield, N.J. volume 1. A “composite utterances” approach to meaning. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 689–707. Berlin/Boston: De Gruyter Mouton.
Firth, John Rupert 1957. The use and distribution of certain English sounds. In: John Rupert Firth, Papers in Linguistics 1934–1951, 34–46. London: Oxford University Press. First published [1935].
Fricke, Ellen 2007. Origo, Geste und Raum: Lokaldeixis im Deutschen. Berlin/New York: De Gruyter.
Fricke, Ellen 2008. Grundlagen einer multimodalen Grammatik. Syntaktische Strukturen und Funktionen. Habilitation thesis, European University Viadrina, Frankfurt (Oder).
Fricke, Ellen 2010. Phonaestheme, Kinaestheme und multimodale Grammatik. Sprache und Literatur 41(1): 69–88.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin/Boston: De Gruyter.
Fricke, Ellen volume 1. Towards a unified grammar of gesture and speech: A multimodal approach. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 733–754. Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen this volume. Deixis, gesture, and embodiment from a linguistic point of view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1803–1823. Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen, Jana Bressem and Cornelia Müller this volume. Gesture families and gestural fields. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1630–1640. Berlin/Boston: De Gruyter Mouton.


Goodman, Nelson 1981. Languages of Art. An Approach to a Theory of Symbols. London: Oxford University Press. First published [1968].
Harrison, Simon 2008. The expression of negation through grammar and gesture. In: Jordan Zlatev, Mats Andrén, Marlene Johansson Falck and Carita Lundmark (eds.), Studies in Language and Cognition, 405–409. Newcastle upon Tyne: Cambridge Scholars Publishing.
Harrison, Simon 2009. Grammar, gesture, and cognition: The case of negation in English. PhD Dissertation, Université Bordeaux 3.
Haviland, John B. 2003. How to point in Zinacantán. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 139–169. Mahwah, NJ: Erlbaum.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary R. Key (ed.), The Relationship of Verbal and Nonverbal Communication, 207–227. The Hague: Mouton.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23: 247–279.
Kendon, Adam 2002. Some uses of the head shake. Gesture 2(2): 147–182.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kendon, Adam and Laura Versante 2003. Pointing by hand in “Neapolitan”. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 109–137. Mahwah, NJ: Erlbaum.
Ladewig, Silva H. 2010. Beschreiben, suchen und auffordern. Varianten einer rekurrenten Geste. Sprache und Literatur 41(1): 89–111.
Ladewig, Silva H. 2011a. Syntactic and semantic integration of gestures into speech: Structural, cognitive, and conceptual aspects. PhD Dissertation, European University Viadrina, Frankfurt (Oder).
Ladewig, Silva H. 2011b. Putting a recurrent gesture on a cognitive basis. CogniTextes 6. http://cognitextes.revues.org/406.
Ladewig, Silva H. this volume. The cyclic gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1605–1618. Berlin/Boston: De Gruyter Mouton.
Ladewig, Silva H. and Jana Bressem 2013. New insights into the medium hand – Discovering recurrent gestures. Semiotica 197: 203–213.
Lynn, Ulrike 2012. Keep in Touch – A Dictionary of Contemporary Physical Contact Gestures in the Mid-Atlantic Region of the United States. OPUS, Digital Repository of Technische Universität Berlin. http://opus4.kobv.de/opus4-tuberlin/frontdoor/index/index/docld/3484.
Lyons, John 1977. Semantics. Volume 1. Cambridge: Cambridge University Press.
Matthews, Peter H. 1974. Morphology. An Introduction to the Theory of Word Structure. Cambridge: Cambridge University Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. PhD Dissertation, Cornell University. Ann Arbor, MI: UMI.
Mittelberg, Irene 2008. Peircean semiotics meets conceptual metaphor: Iconic modes in gestural representations of grammar. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 115–154. Amsterdam/Philadelphia: John Benjamins.
Mittelberg, Irene and Linda R. Waugh 2009. Metonymy first, metaphor second: A cognitive-semiotic approach to multimodal figures of speech in co-speech gesture. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 329–356. Berlin/New York: Mouton de Gruyter.
Mittelberg, Irene and Linda R. Waugh this volume. Gestures and metonymy. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction.

(Handbooks of Linguistics and Communication Science 38.2.), 1747–1766. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2004. Forms and uses of the palm up open hand: A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler.
Müller, Cornelia 2008. Metaphors Dead and Alive, Sleeping and Waking: A Dynamic View. Chicago: University of Chicago Press.
Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1), 202–217. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gesture: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 707–733. Berlin/Boston: De Gruyter Mouton.
Payrató, Lluís this volume. Emblems or quotable gestures: Structures, categories, and functions. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1474–1481. Berlin/Boston: De Gruyter Mouton.
Peirce, Charles Sanders 1931–58. Collected Papers. Charles Hartshorne and Paul Weiss (eds.), Volumes 1–6; Arthur W. Burks (ed.), Volumes 7–8. Cambridge: Harvard University Press.
Peirce, Charles Sanders 2000. Semiotische Schriften. Volumes 1–3. Frankfurt am Main: Suhrkamp.
Pike, Kenneth L. 1967. Language in Relation to a Unified Theory of the Structure of Human Behavior. The Hague/Paris: Mouton.
Plank, Frans 1978. Über Asymbolie und Ikonizität. In: Günter Peuser (ed.), Brennpunkte der Patholinguistik, 243–273. München: Fink.
Poggi, Isabella 2002. Symbolic gestures: The case of the Italian gestionary. Gesture 2(1): 71–98.
Poggi, Isabella 2007. Mind, Hands, Face, and Body – A Goal and Belief View of Multimodal Communication. Berlin: Weidler.
Posner, Roland 1993. Believing, causing, intending: The basis for a hierarchy of sign concepts in the reconstruction of communication. In: René J. Jorna, Barend van Heusden and Roland Posner (eds.), Signs, Search, and Communication: Semiotic Aspects of Artificial Intelligence, 215–270. Berlin/New York: De Gruyter.
Posner, Roland 2004. Everyday gestures as a process of ritualization. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures. Meaning and Use, 217–230. Porto: Edições Universidade Fernando Pessoa.
Posner, Roland, Reinhard Krüger, Thomas Noll and Massimo Serenari in preparation. The Berlin Dictionary of Everyday Gestures. Berlin: Weidler.
Rosch, Eleanor and Carolyn B. Mervis 1975. Family resemblances: Studies in the internal structure of categories. Cognitive Psychology 7: 573–605.
Saussure, Ferdinand de 1966. Course in General Linguistics. New York/Toronto: McGraw-Hill. First published [1916].
Stetter, Christian 2005. System und Performanz. Symboltheoretische Grundlagen von Medientheorie und Sprachwissenschaft. Weilerswist: Velbrück Wissenschaft.


Streeck, Jürgen 2009. Gesturecraft. The Manu-facture of Meaning. Amsterdam/Philadelphia: John Benjamins.
Streeck, Jürgen volume 1. Praxeology of gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1), 674–688. Berlin/Boston: De Gruyter Mouton.
Teßendorf, Sedinha volume 1. Emblems, quotable gestures, or conventionalized body movements. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1), 82–100. Berlin/Boston: De Gruyter Mouton.
Wittgenstein, Ludwig 1989. Werkausgabe in acht Bänden. Vol. 1: Tractatus logico-philosophicus, Tagebücher 1914–1916, Philosophische Untersuchungen. 6th edition. Frankfurt am Main: Suhrkamp. First published [1952].
Wittgenstein, Ludwig 1953. Philosophical Investigations. Oxford: Blackwell. First published [1952].
Wundt, Wilhelm 1904. Völkerpsychologie. Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte. Volume 1: Die Sprache. Leipzig: Engelmann. First published [1900].
Wundt, Wilhelm 1973. The Language of Gestures. Den Haag: Mouton.
Zelinsky-Wibbelt, Cornelia 1983. Die semantische Belastung von submorphematischen Einheiten im Englischen: Eine empirisch-strukturelle Untersuchung. Frankfurt am Main: Peter Lang.

Ellen Fricke, Chemnitz (Germany)

123. Gesture families and gestural fields

1. Introduction: Gesture studies beyond the atomistic view
2. Word families and semantic fields in linguistics
3. Semasiology in gesture studies: Gesture families
4. From semasiology to onomasiology: Gestures in gestural fields – a multimodal perspective
5. Conclusion
6. References

Abstract

Beyond an atomistic view, research on semasiologically oriented gesture families and onomasiologically oriented gestural fields offers two different approaches to the analysis of gestural groupings. This chapter, firstly, introduces the linguistic concepts of word family and field theory, secondly, gives an overview of current research on gesture families within gesture studies, and thirdly, outlines a multimodal perspective on gestures in gestural fields. Both approaches to grouping gestures rely on the basic tenet of Saussurean structuralism that every sign has a unique relational structure, which turns out to be crucial for understanding why typified and semanticized gestures, like recurrent gestures and emblems, look the way they do. It also throws light on processes of grammaticalization by which verbal as well as gestural expressions are recruited as stable items and become integrated into grammar, the lexicon, and multimodal utterance constructions.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1630–1640.


1. Introduction: Gesture studies beyond the atomistic view

The basic idea of gesture families and gestural fields is grounded in the central thesis of Saussurean structuralism that every sign system has a unique relational structure (Saussure [1916] 1966). According to Saussure, the units that we identify when analyzing signs or combinations of signs derive their semiotic validity only from the place they occupy in a network of functional interrelations and have no prior or independent existence of their own (Saussure 1966). This means that “we cannot first identify the units and then, at a subsequent stage of the analysis, enquire what combinational or other relations hold between them: we simultaneously identify both the units and their interrelations” (Lyons 1977: 232). As far as phonology is concerned, a phoneme “is an abstract theoretical construct which is postulated as the locus of functional contrasts and equivalences holding among sets of forms” (Lyons 1977: 233). As with the phonological system, so with grammatical and semantic structures. Analogous to Saussurean structuralism, the concept of gesture families focuses on groupings of gestures instead of single isolated gestures alone. According to Kendon (2004), a gesture family is a set of co-speech gestures sharing certain gestural form characteristics:

When we refer to families of gestures we refer to groupings of gestural expressions that have in common one or more kinesic or formational characteristics. We shall show that, within each family, the different forms that may be recognized in most cases are distinguished in terms of the different movement patterns that are employed. As we shall see, each family not only shares in a distinct set of kinesic features but each is also distinct in its semantic themes. The forms within these families, distinguished as they are kinesically, also tend to differ semantically although, within a given family, all forms share in a common semantic theme.
(Kendon 2004: 227)

Kendon’s concept of gesture families conflates the linguistic concepts of phonological distinctive function, morphological “word family”, and semantic “lexical field” (Fricke 2010, 2012). A first step towards identifying gesture families requires these concepts to be analytically separated from each other. Despite the criticism that can be directed against Kendon’s concept (Fricke 2010, 2012), it should be acknowledged that the analyses of gesture families carried out by Calbris (1990), Kendon (2004), and Müller (2004) are the first ones to leave behind the atomistic perspective on single co-speech gestures by adding the dimension of interrelations between gestural items and their parameter instantiations (see the overview of gesture families in section 3). With respect to emblematic gestures as part of the gestural lexicons employed in face-to-face interaction (e.g., Efron [1941] 1972; Lynn 2012, this volume a; Meo Zilio 1990; Poggi 2002, 2004; Posner et al. in preparation), the concept of gesture families parallels certain principles for structuring gestural entries which, in contrast to words, cannot be organized in alphabetic order. Such principles are, for example, shared gestural parameter instantiations, such as hand shape or movement patterns (e.g., Bressem and Müller this volume a; Calbris 1990, 2011; Ladewig 2011; Lynn 2012, this volume a; Kendon 1995, 2004; Morris et al. 1979; Müller 2004; Müller, Bressem, and Ladewig volume 1; Posner et al. in preparation), derivability from non-gestural actions (e.g., Bressem and Müller this volume a; Calbris 1990, 2011; Kendon 2004; Müller 2004; Morris et al. 1979; Posner et al. in preparation; Lynn this volume b; Streeck 2009, volume 1; Teßendorf volume 1), common meaning (e.g., Bressem and Müller this volume a; Calbris 1990, 2011; Ladewig 2011;


Kendon 2004; Müller 2004; Poggi 2002, 2004; Posner et al. in preparation; Lynn 2012, this volume b), and common areas of reference (Poggi 2002, 2004; Posner et al. in preparation; Lynn 2012, this volume a). The basic idea that lexical entries are not isolated entities is also shared by cognitive linguistics. As Geeraerts (2010: 242) points out, in both structuralist and cognitive linguistic semantics “lexical items are seen in the context of lexical fields, relational networks, input spaces in blends, frames, or other ‘chunks of knowledge’, as the case may be”. In the following sections we shall be concerned primarily with a structuralist and functionalist approach to lexical structures. This sidesteps issues pertaining to the psychology of language and gesture production to provide a perspective on gesture as a system of signs. This shift from concentrating on individual gestures also involves a change of focus from semasiology to onomasiology.

2. Word families and semantic fields in linguistics

Semasiology starts from the form of individual signs and considers the way in which their meanings are manifested, whereas onomasiology starts from the meaning or concept of a sign and investigates the different forms by which the concept or meaning can be designated or named (Baldinger 1980: 278; Geeraerts 2010: 23; Schmidt-Wiegand 2002: 738). The distinction between semasiology and onomasiology is equivalent to the distinction between family-oriented and field-oriented thinking. The term “gesture family” is adopted from the linguistic term “word family” (also “lexical family”), which is usually applied to morphological interrelations between lexical items in vocal languages. Word families are groups of lexemes that share identical or similar morphological stems resulting from the same root, e.g., in English general, generalize, generalization, generalship, generalist, generic or in German fahren, Fahrt, Fuhre, Führer, Gefährt (Bußmann 1983: 588). For word families, the starting point of analysis is the linguistic form (or signifiant in Saussurean terminology). However, if a set of words covers a shared area of content – without similarities on the level of form necessarily having to be present – it is said to constitute a “semantic field” or a “lexical field” (cf. Trier [1931] 1973; Coseriu 1967, 1970, 1973). In this case, contrary to word families or gesture families, the starting point is the sign’s content or meaning (signifié) instead of its form (signifiant). The semantic features shared by all lexical items define the semantic field and delineate it from other fields. When these features are instantiated by one single lexical item, this item is called an “archilexeme” (Coseriu 1967: 294), e.g., flower, as a generic term or hyperonym, is the archilexeme of the items rose, tulip, geranium, etc., in the semantic field of flowers.
The basic ideas of semantic field theory can be traced back to German and Swiss scholars in the 1920s and 1930s (Ipsen 1924; Jolles 1934; Porzig 1934; Trier 1973; Weisgerber 1963; cf. Lyons 1977). According to Lyons (1977: 250), Trier’s version of field theory, as outlined in his monograph Der deutsche Wortschatz im Sinnbezirk des Verstandes (‘The German Vocabulary in the Semantic Field of the Intellect’), is “widely and rightly judged to have opened a new phase in the history of semantics […]” and marks the beginning of the structuralist era in this field. Trier looks upon vocabulary as a set of semantically interrelated lexemes that, like a mosaic, cover a certain domain of reality: The position of each lexeme, like each small stone, is determined by its surroundings (Trier 1973; cf. Lyons 1977; see also Geckeler 2002). The relations that hold between the items of a semantic field change through time (cf. Lyons 1977: 252): With respect to the field metaphor, any broadening of the meaning of


one lexical item causes the narrowing of the respective neighboring ones. During periods of language change, new items may enter the field while previously existing items disappear. By separately analyzing different synchronic stages of the semantic field of intellectual properties in German vocabulary from Old High German to Middle High German, Trier is able to show that diachronic linguistics presupposes synchronic linguistics (Lyons 1977: 252): Both diachronic and synchronic linguistics must deal with systems or subsystems of interrelated items instead of investigating the semantic shifts of isolated words. Later developments of semantic field theory (cf. Gloning 2002; Geeraerts 2010) integrate the perspectives of componential semantics and syntagmatic relations (e.g., Coseriu 1967, 1970, 1973; Fricke 1996; Greimas [1966] 1983; Pottier [1964] 1978), relational semantics according to Lyons (e.g., Coseriu 1967, 1970, 1973; Lutzeier 1981), and cognitive semantics (e.g., Lutzeier 1981; Johnson-Laird and Oatley 1989; and as an important predecessor Weinrich [1956] 1976; see also Liebert 2002; Peil 2002). In cognitive semantics, the idea of “fields” leaves the language-specific level of lexical semantics behind and applies to culture-specific groups of concepts, e.g., concepts of anger (Kövecses 1986, 2000). We will not go deeper into the commonalities and differences between the various versions and approaches of semantic field theory but concentrate instead upon its relevance to gesture studies and its relation to semasiological concepts of word family.

3. Semasiology in gesture studies: Gesture families

The notion of lexical family or word family, as introduced in the previous section, is based on morphological relatedness. Several lexemes share the same base and are linked by morphological processes, such as derivation or composition (Matthews 1974). An example of a lexical family is the set word, word family, wordy, and word order. How far do exactly the same principles apply to co-speech gestures? In her article “Forms and uses of the Palm Up Open Hand: A case of a gesture family?” Müller (2004) examines the gesture family of the open palm and develops the basic idea of a rudimentary morphology. Her starting point is the observation that the open hand orientated upwards, “Palm Up Open Hand (PUOH)” in Müller’s terminology and “Open Hand Supine (OHS)” in Kendon’s terminology, is combined with additional semanticized form features, e.g., certain movement patterns that are structured around a formational core: “All members of the family share hand shape and orientation; members vary regarding the movement pattern and the use of one or two hands. These variations in form and function point to a rudimentary gesture morphology that structure this small scale gesture family” (Müller 2004: 254). The phenomenon of gesture families, first outlined in the work of Calbris (1990), has been addressed in a few recent studies. As one of the first gesture researchers, Kendon (1995, 2004) presents four gesture families observed in English and Italian speakers: (i) the Open Hand Prone (OHP) family carrying the semantic theme of “halting, interrupting or indicating the interruption of a line of action”, (ii) the Open Hand Supine (OHS) family carrying the semantic theme of “presentation” or “offering” and “reception”, (iii) the G-family carrying the semantic theme of “topic seizing”, and (iv) the R-family carrying the semantic theme of singling something out, making it precise and specific.

Following Morris et al. (1979), Kendon (2004) considers that both the G-family and the R-family are derived from “precision grip” actions of the hand, in which either all fingers are brought together (G-family) or only the thumb and the index finger are connected at their tips (R-family). In these open-hand gesture families, the open hand is either “held with the palm facing away from the speaker, or downwards” (the Palm Down family) or the “open hand is always held so that the palm faces upwards (or obliquely upwards)” (the Palm Up family) (Kendon 2004: 227). Each of the four families is a “group of gestures that have in common certain kinesic features”, and the individual members of a given family differ in gestural form “according to the movement pattern employed in performing them” (Kendon 2004: 281) (see also Bressem and Müller this volume a; Payrató and Teßendorf this volume). In Müller’s (2004) discussion of gestures of the Palm Up Open Hand (PUOH) family observed in German speakers, by definition, all family members share two kinesic features (hand shape and orientation), and the family comprises an open set of members that differ with respect to movement (Palm Up Open Hand combined with various movement patterns). Palm Up Open Hand gestures are based on basic actions that serve as the derivational basis for all members of the family: “giving, showing, offering an object by presenting it on the open hand” (Müller 2004: 236) as well as the readiness to receive an object in the open hand. Palm Up Open Hand gestures are used to present an “abstract, discursive object as a concrete, manipulable entity” (Müller 2004: 233) and invite the interlocutor to take a joint perspective on the object offered on the open hand.
Based on formal and functional variations of the kinesic core through various movement patterns (rotation, lateral movement, up and down movement), the semantic core of offering, giving, and receiving objects is extended to mean “continuation”, “listing ideas”, and “a sequential order of offered arguments or presenting a wide range of discursive objects” (Müller 2004: 254) (see also Kendon and Müller above for further characteristics of the Palm Up Open Hand family). Both Kendon’s (2004) and Müller’s (2004) “context-of-use” studies of gestures identify gesture families in which the hand shape, or hand shape and orientation, constitute the formational core of the family. Ladewig (2010, 2011) presents a gesture family that has a movement pattern as its formational core. The family of the cyclic gestures is “characterized by a continuous circular movement of the hand, performed away from the body” (Ladewig 2011), in which the hand remains in situ. Based on a continuous circular movement outwards, the family of the cyclic gestures carries the semantic theme of “cyclic continuity”. Differentiation within the family of cyclic gestures is achieved by different positions of the hands in the surrounding gesture space as well as by changes in the size of the movement. As a result, the semantic core of cyclic continuity takes on different functions, such as indicating the speaker’s mental activity of searching for a word or concept, “cranking up the interlocutor’s ongoing searching activity”, or visualizing “semantic aspects of circular movements such as scooping, or of ongoing events such as thinking processes” (Ladewig 2011). Furthermore, Ladewig proposes that variants of the family are not only related to each other based on a shared formational and semantic core but also cognitively. 
By proposing an underlying cognitive model, which rests upon image schemas and metaphoric processes, Ladewig accounts for possible cognitive relations within the gesture family of the cyclic gestures and, more importantly, for interrelations of two or more formational and semantic cores of gesture families. Based on the observation that the core of the cyclic gesture may be combined with the core of the
Palm Up Open Hand, resulting in a gesture that presents a continuation and a listing of ideas (Müller 2004), Ladewig assumes that the cores of both families are “fused”. Such a “marriage of gesture families” (Becker and Müller 2005; see also Becker 2004; Ladewig 2006; Calbris 2011) or a “complex kinestheme” (Fricke 2008, 2010, this volume) must thereby be assumed to be grounded in and made possible by underlying cognitive models that are related to each other (Ladewig 2011). A further perspective on the nature of gesture families, the (inter)relation of gestures as well as their functional characteristics, is put forward in a study by Bressem and Müller (this volume a) on the gestures of German speakers that express negation, refusal, and negative assessment. Taking as their starting point Kendon’s (2004) proposal that Open Hand Prone gestures that carry the semantic theme of “halting, interrupting or indicating the interruption of a line of action” may “come to serve as negations if there is something presupposed in relation to which they act” (Kendon 2004: 263), they examine the scope of “gestures of negation”. Unlike Kendon, who limits the expression of negation to members of the Open Hand Prone palm down, Bressem and Müller broaden the scope to include four gesture families (sweeping away, holding away, throwing away, brushing away) that share a common formational and semantic core and thus constitute the family of the Away gestures. Movements away from the center of the speaker to the periphery, resulting in a particular directionality, constitute the formational core of the family. The semantic theme of ‘away’ is found on the level of everyday actions and, in particular, on actions that remove or hold things away from the speaker, resulting in a particular effect, that is, clearing the space around the body or keeping things away from the body.
This shared underlying effect of action is semanticized in the Away gesture family, leading to shared structural and functional characteristics, and also to differences in the four gesture families, as each gesture family is characterized by particular kinesic qualities. With this study, Bressem and Müller not only show that the formational core of a gesture family may be found on the level of a common action scheme, and in particular an effect of action, but also, more importantly, that different complexities and nesting hierarchies of gesture families can be identified: Recurrent gestures constitute gesture families that may themselves be members of gesture families. The members of a gesture family stand in particular relations to each other, and these interrelations are not only constitutive for the gesture family as a whole, but also for the particular members of the family. Relations between members of a gesture family can thus show varying degrees of complexity. Hence, structural and functional aspects of gestures can only be seen in relation to other gestures (see also Bressem and Müller this volume b; Fricke 2010, this volume; Ladewig 2011).

4. From semasiology to onomasiology: Gestures in gestural fields – a multimodal perspective

The semasiological concept of gesture families, as introduced in the previous sections, has been an important contribution to gesture studies from a linguistic point of view and provided important insights into underlying cognitive processes as well. However, this concept is limited to the level of individual gestures. In adopting a multimodal perspective on groupings of verbal and gestural expressions on the level of sign systems (in contrast to their actual use in concrete utterances), we are in need of a tertium comparationis that form-based concepts of gesture families and word families alone do not provide. The visual and the auditory modalities differ with regard to the material nature
of their semiotic output and the articulators involved. Moreover, they do not have any significant potential for sharing a common formational core or morphological stem. This can be stated not only for multimodal research within single languages but also for cross-linguistic and cross-cultural studies from a comparative point of view (Fricke 2012, in preparation). However, applications of the onomasiological perspective in linguistics and gesture studies can already be found in dictionaries and linguistic maps. In current gestural lexicons, for example, semasiological and onomasiological perspectives are combined (Lynn 2012, this volume a; Poggi 2002, 2004). Lynn’s dictionary of physical contact gestures (Lynn 2012, this volume a), for example, offers two indices: The semasiological index provides groupings of gestures according to their visible form, e.g., different executions of handshakes, whereas the onomasiological index uses meanings and pragmatic functions as a starting point, e.g., greetings. Another field of application is constituted by atlases in which onomasiological maps show verbal and gestural expressions used for a given meaning, reference, or specific pragmatic function, such as greetings, in a given geographical area (e.g., Wrede, Mitzka, and Martin 1927–1956; Morris et al. 1979; Schmidt and Herrgen 2001–2009). It should be noted that family and field structures are not necessarily congruent: A particular semantic field can be covered by several families that differ with regard to their formational core or, in the reverse case, a particular gesture family can cover different semantic fields. Referring to previous studies of head shakes, Kendon (2002: 149–150) states that these studies “take it for granted that the head shake is a gesture of negation and they do not include specific observations on the head shake in use”.
By detaching his context-of-use study from an overly narrow onomasiological view, he strengthens the semasiological perspective in gesture studies. This has to be seen as an important step with respect to further investigations of gesture families. But, conversely, an overly narrow semasiological view might conceal what onomasiology and the concept of semantic fields have to offer. Certain semantic areas lend themselves to offering a tertium comparationis as they are covered both by verbal and gestural units that constitute a mutual multimodal semantic field (Fricke in preparation). In the case of negation (Harrison 2008, 2009; Kendon 2002; see also Bressem and Müller this volume a), a shared semantic field can explain not only occurrences of multimodal substitution like the head shake as a way of saying “no” but also the other relationships between gesture and speech mentioned by Kendon (2002: 148): “additive, complementary or supplementary”. Applying Saussure’s basic structuralist idea that “it is from the interdependent whole that one must start and through analysis obtain its elements” (Saussure 1966), this means, firstly, that verbal and gestural signs, as members of a particular semantic field like negation, get their semiotic value negatively by their relations with the other units in the field, and secondly, that this is also true for their assumed additive, complementary, or supplementary functions in discourse. Considering the combinatoric dimension of multimodal negators, every expression “acquires its value only because it stands in opposition to everything that precedes or follows it, or both” (Saussure 1966).
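The two access routes discussed above amount, in effect, to two indices over one and the same multimodal inventory: from form to meaning (semasiological) and from meaning to form (onomasiological). As a rough sketch only (the entries and meaning labels below are invented for illustration and are not drawn from Lynn’s dictionary or any other actual gesture lexicon), the contrast can be modeled like this:

```python
# A toy multimodal lexicon illustrating semasiological (form -> meanings)
# and onomasiological (meaning -> forms) access. All entries are invented
# for illustration; they do not reproduce any actual gesture dictionary.
from collections import defaultdict

entries = [
    {"form": "headshake", "meanings": {"negation"}},
    {"form": "open hand prone", "meanings": {"negation", "interruption"}},
    {"form": "palm up open hand", "meanings": {"presentation", "offering"}},
    {"form": "no (spoken)", "meanings": {"negation"}},
]

# Semasiological index: start from a form and list its meanings.
form_index = {e["form"]: e["meanings"] for e in entries}

# Onomasiological index: start from a meaning and list all forms,
# verbal and gestural alike -- a multimodal field over that meaning.
meaning_index = defaultdict(set)
for e in entries:
    for meaning in e["meanings"]:
        meaning_index[meaning].add(e["form"])

# All members of the (invented) field of negation:
print(sorted(meaning_index["negation"]))
# -> ['headshake', 'no (spoken)', 'open hand prone']
```

Querying `meaning_index` gathers verbal and gestural expressions under a shared meaning, which is what a gestural field does; querying `form_index` starts from a single form, which is the gesture-family perspective.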

5. Conclusion

With respect to semasiologically oriented gesture families and onomasiologically oriented gestural fields, we have substantiated an approach to gesture studies beyond an
atomistic view, firstly, by summarizing the basic concepts of word family and semantic field theory in the framework of linguistic structuralism, secondly, by giving an overview of current research on gesture families within gesture studies, and thirdly, by offering a multimodal perspective on gestures in gestural fields. Both approaches to grouping gestures rely on the basic tenet of Saussurean structuralism that every sign has a unique relational structure, which turns out to be crucial for understanding why typified and semanticized gestures, like recurrent gestures (see Bressem and Müller this volume b) and emblems (e.g., Ekman and Friesen 1969; for an overview, see Teßendorf volume 1), look the way they do. It also throws light on processes of grammaticalization by which verbal and gestural expressions are recruited as stable items and become integrated into grammar, the lexicon, and multimodal utterance constructions (Fricke volume 1, this volume, in preparation). It is worth pointing out that the future collaboration of onomasiological and semasiological approaches might offer new perspectives for cross-linguistic and cross-cultural studies, and also strengthen fields of application, such as gesture dictionaries or gesture atlases.

6. References

Baldinger, Kurt 1980. Semantic Theory. Towards a Modern Semantics. Oxford: Blackwell.
Becker, Karin 2004. Zur Morphologie redebegleitender Gesten. MA thesis, Freie Universität Berlin.
Becker, Karin and Cornelia Müller 2005. Cross-classification of gestural features – the marriage of two gesture families. Paper presented at the 2nd conference of the International Society of Gesture Studies (ISGS). Lyon, France.
Bressem, Jana and Cornelia Müller this volume a. The family of Away gestures: Negation, refusal, and negative assessment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1592–1604. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume b. A repertoire of recurrent gestures in German. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1575–1591. Berlin/Boston: De Gruyter Mouton.
Bußmann, Hadumod 1983. Lexikon der Sprachwissenschaft. Stuttgart: Kröner.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Calbris, Geneviève 2011. Elements of Meaning in Gesture. Amsterdam/Philadelphia: John Benjamins.
Coseriu, Eugenio 1967. Lexikalische Solidaritäten. Poetica 1: 293–303.
Coseriu, Eugenio 1970. Einführung in die strukturelle Betrachtung des Wortschatzes. Tübingen: Narr.
Coseriu, Eugenio 1973. Probleme der strukturellen Semantik. Tübingen: Narr.
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton. First published [1941].
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1: 49–98.
Fricke, Ellen 1996. Die Verben des Riechens im Deutschen und Englischen. Eine kontrastive semantische Analyse [Verbs of olfaction in German and English: A comparative semantic field analysis]. MA thesis (KIT-Report 136). Berlin: Technische Universität Berlin.
Fricke, Ellen 2008. Grundlagen einer multimodalen Grammatik: syntaktische Strukturen und Funktionen [Foundations of a multimodal approach to grammar: Syntactic structures and functions]. Habilitation thesis, European University Viadrina, Frankfurt (Oder).
Fricke, Ellen 2010. Phonaestheme, Kinaestheme und multimodale Grammatik. Sprache und Literatur 41(1): 69–88.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin/Boston: De Gruyter.
Fricke, Ellen volume 1. Towards a unified grammar of gesture and speech: A multimodal approach. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 733–754. Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen this volume. Kinesthemes: Morphological complexity in co-speech gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1618–1630. Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen in preparation. Multimodal semantics beyond an atomistic view: Gestural fields.
Geckeler, Horst 2002. Anfänge und Ausbau des Wortfeldgedankens. In: David A. Cruse, Franz Hundsnurscher, Michael Job and Peter Rolf Lutzeier (eds.), Lexicology. An International Handbook on the Nature and Structure of Words and Vocabularies. (Handbooks of Linguistics and Communication Science 21.), 713–728. Berlin/New York: De Gruyter.
Geeraerts, Dirk 2010. Theories of Lexical Semantics. Oxford: Oxford University Press.
Gloning, Thomas 2002. Ausprägungen der Wortfeldtheorie. In: David A. Cruse, Franz Hundsnurscher, Michael Job and Peter Rolf Lutzeier (eds.), Lexicology. An International Handbook on the Nature and Structure of Words and Vocabularies. (Handbooks of Linguistics and Communication Science 21.), 728–737. Berlin/New York: De Gruyter.
Greimas, Algirdas Julien 1983. Structural Semantics: An Attempt at a Method. Lincoln, NE: University of Nebraska Press. First published [1966].
Harrison, Simon 2008. The expression of negation through grammar and gesture. In: Jordan Zlatev, Mats Andrén, Marlene Johansson Falck and Carita Lundmark (eds.), Studies in Language and Cognition, 405–409. Newcastle upon Tyne: Cambridge Scholars Publishing.
Harrison, Simon 2009. Grammar, gesture, and cognition: The case of negation in English. PhD dissertation, Université Bordeaux 3.
Ipsen, Gunther 1924. Der alte Orient und die Indogermanen. In: Johannes Friedrich (ed.), Stand und Aufgaben der Sprachwissenschaft. Festschrift für Wilhelm Streitberg, 200–237. Heidelberg: Winter.
Johnson-Laird, Philip and Keith Oatley 1989. The Language of Emotions: An Analysis of a Semantic Field. Cognition and Emotion 3: 81–123.
Jolles, André 1934. Antike Bedeutungsfelder. Beiträge zur Deutschen Sprache und Literatur 58: 97–109.
Kendon, Adam 1978. Differential perception and attentional frame in face-to-face interaction: Two problems for investigation. Semiotica 24(3–4): 305–315.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23: 247–279.
Kendon, Adam 2002. Some uses of the head shake. Gesture 2(2): 147–182.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kövecses, Zoltán 1986. Metaphors of Anger, Pride and Love: A Lexical Approach to the Structure of Concepts. Amsterdam: John Benjamins.
Kövecses, Zoltán 2000. The Concept of Anger: Universal or Culture Specific? Psychopathology 33: 159–170.
Ladewig, Silva H. 2006. Die Kurbelgeste – konventionalisierte Markierung einer kommunikativen Aktivität. MA thesis, Freie Universität Berlin.
Ladewig, Silva H. 2010. Beschreiben, suchen und auffordern. Varianten einer rekurrenten Geste. Sprache und Literatur 41(1): 89–111.


VIII. Gesture and language


Ellen Fricke & Jana Bressem, Chemnitz (Germany) Cornelia Müller, Frankfurt (Oder) (Germany)

124. Repetitions in gesture

1. Introduction
2. Gestural iterations: Repetitions as a means of unit formation
3. Gestural reduplications: Repetitions as a means of word formation
4. Gestural repetitions and their relevance for the creation of multimodal utterance meaning
5. Discussion
6. References

Abstract

Developed against the framework of concepts from spoken and signed languages (Hurch 2005; Wilbur 2005), the chapter presents an empirically based twofold semantic classification of gestural repetitions within gesture phrases (Bressem 2012). By addressing structural and functional characteristics, the chapter discusses gestural repetitions from a gesture-intrinsic perspective and a multimodal perspective. In doing so, the chapter unravels the complex internal structuring of gestural repetitions, structural and functional overlaps and variance between types of repetitions, and the particular relevance of gestural repetitions for the creation of multimodal utterances.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1641–1649

1. Introduction

In his treatise on gestures in Naples, de Jorio ([1832] 2000), as one of the first gesture researchers, discusses possible meanings and functions of gestural repetitions. Stating that “gestures are not only adopted to express isolated ideas, but also ideas connected together” (de Jorio 2000: 398), he identifies three different ways in which gestural repetitions are used: (i) Gestures can be repeated because they are parts of a single action, such as in swearing or praying. (ii) They may be used in order to deliberately connect one idea with another and to alter the verbal meaning, either through the context in which they are performed or through a modification of their execution. (iii) Gestural repetitions may be used to express grammatical notions. Modifying the gestures through enlargement, increase, or amplification of their qualities expresses the superlative, whereas reducing the movement conveys the diminutive.

In a similar vein, modern gesture research mentions the potential of gestural repetitions. McNeill (1992), for instance, states that enhancing the gestures’ quality marks contrast between gestures or may function as diminution. Furthermore, gestural repetitions are often used to tie together thematically related parts of discourse. Through the recurrence of gestural form features, so-called catchments, a “visuospatial imagery” is created that “runs through a discourse” (McNeill 2005: 115) and connects immediately following but also separated parts of discourse. Apart from the ability to signal thematic changes and discourse structure, gestural repetitions indicate durativity and iterativity of enacted actions (see, e.g., Brookes 2005; Ladewig 2011; Müller 1998, 2000) and are used for the expression of aspect or Aktionsart (see, e.g., Becker et al. 2011; Ladewig 2011; McNeill 2000; Müller 2000).

Although gestural repetitions have thus been addressed, existing accounts usually give only rough descriptions of possible types and treat their forms and functions selectively. Detailed studies addressing these aspects are still missing (see Fricke 2012 for a first account). This research gap is surprising because the phenomenon of repetitions poses a range of methodological and theoretical questions for the medium of gesture: Which aspects of form make successions of gestures appear as connected or separated? What kinds of meanings are depicted by gestural repetitions? And what kinds of structural and functional roles can repetitions fulfill?

This chapter presents an empirically based twofold classification of gestural repetitions, addressing repetitions on the levels of form, meaning, and function alone and in relation to speech (Bressem 2012). Developed against the framework of concepts from spoken and signed languages (Hurch 2005; Wilbur 2005), a semantic classification of gestural repetitions within gesture phrases is presented: (i) Iterations, in which the repetition of the gestures is used for the repetition of the same meaning, and (ii) Reduplications, in which the repetition of the gestures is used for the creation of a complex meaning.

2. Gestural iterations: Repetitions as a means of unit formation

Iterations are sequences of at least two preparation-stroke or stroke phases, in which either no form parameter or only the realization of the parameters “movement” (direction and quality) or “position” changes (see Bressem volume 1 and Bressem, Ladewig, and Müller volume 1 for details on gesture annotation). Iterations take over concrete referential functions when depicting actions and objects. In doing so, iterations emphasize the semantics of speech by underlining the meaning expressed verbally, or they modify the verbal semantics by adding complementary semantic information (Bressem, Ladewig, and Müller volume 1). When referring to abstract events and facts, iterations take over abstract referential and, in particular, meta-communicative and prosodic functions.

Example 1 (see Fig. 124.1) is a prototypical instance in which gestural iterations serve a meta-communicative and at the same time a prosodic function. While articulating his position against Germany’s nuclear sharing and uttering wenn sie das ernst meinen (‘if you take this seriously’), the speaker produces a series of four recurrent ring gestures, of which the first is articulated with an enlarged and accented movement, while the following three strokes are reduced in size. This series of ring gestures can be interpreted as conveying the semantic theme of “exactness, making something precise, or making prominent some specific fact or idea” (Kendon 2004: 240) and as such acts upon speech by metaphorically grasping and holding discursive objects (Kendon 1995; Streeck 2005). Due to their meta-communicative function in the verbal utterance, the gestural iteration marks focal aspects of the speaker’s utterance and underlines the preciseness and correctness of his arguments. In addition, changes in the movements (reducing, enlarging, accentuated ending) function as prosodic marking.
Similar to accents in speech, the movement accentuates and places emphasis on the meaning of the gesture. In cooperation with the verbal utterance, the gestural repetition contributes to the creation of a multimodal prosodic structure through which specific parts of the verbo-gestural utterance are highlighted (see Bressem 2012).

In example 2 (see Fig. 124.1), a woman tells a story about a particular behavior of the family dog. While saying rennt er in Flur, kratzt (‘runs into the hallway, scrapes’), the woman produces a gestural iteration consisting of three strokes. Through the repetitive movement sequence, in which the hand acts as if it were performing an actual action, the woman offers a bodily depiction of how dogs scrape. In temporal overlap with the predicate of the sentence (“scrapes”), the gestural iteration fulfills an emphasizing function by gesturally underlining a particular action and manner of action already specified by the verbal utterance. In doing so, speech and gesture together create a multimodal impression of the scraping dog. The repeated gestural execution is thereby an integral part of the imitated action because the action of scraping is itself repetitive.

In example 3 (see Fig. 124.1), the iteration does not result from the depiction of an action scheme but is rather a necessary means by which outlines and extensions of objects can be modeled (see also Ladewig 2012). While explaining a particular type of bottle holder often used in Italian restaurants, the speaker utters wo die Flasche Wein da in som Metallding drinne is (‘where the bottle of wine is in such a metal thing’) and produces a gestural iteration. Through the threefold execution of strokes with arced movements going inwards and outwards, along with the bent hands facing downwards, the gesturally mimed object, namely a holder for wine bottles, emerges. The gestural repetition expresses complementary semantic information about the size of the object and, as a result, modifies the “metal thing” to mean “bent metal thing”.
By providing a qualitative description of the object specified by the noun, the gestural repetition takes over the function of a gestural attribute (Fricke 2012).

Example 1: weapons. Utterance: wenn sie dAs - ERNst meinen (‘if you take this seriously’). Gesture: the hand forming a ring is moved up and down four times with enlarged and accentuated movements.

Example 2: Arko. Utterance: rennt er in flur, kratzt (‘runs into the hallway, scrapes’). Gesture: the flat hand with a palm oriented downwards performs a bent movement downwards and towards the body three times.

Example 3: metal thing. Utterance: wo die flasche wEin da in som metAllding drinne is (‘where the bottle of wine is in such a metal thing’). Gesture: the bent hands facing downwards perform three bent lateral movements outwards and inwards.

Fig. 124.1: Examples of gestural iterations fulfilling prosodic, emphasizing, and modifying functions

Although the examples here differ on the levels of form, meaning, and function, they share a fundamental characteristic: In all iterations, the successive execution of strokes does not result in a new and complex gestural meaning. Regardless of the number of strokes strung together, the individual strokes repeat the same gestural meaning. Accordingly, in cases of iterations, the seamless aligning of strokes along with the maintenance of at least two form parameters is used as a means for connecting similar gestural units. In cases of concrete referential function, iterations arise as a necessary means for the depiction of actions and objects. The repeated gestural execution is either an integral part of the imitated action scheme or it is a necessary prerequisite for the depiction of objects and thus solely a means to an end. In cases of abstract referential function, the repetition of strokes achieves particular effects (prominence) and takes over a discursive and meta-communicative function. With these structural and functional characteristics, gestural iterations show analogies to the repetition of utterances or words in speech, which is a means for achieving particular effects (e.g., emphasis, surprise, conflict), for causing change on the connotative level, and for stylistic, textual, or pragmatic purposes (De Beaugrande and Dressler 1981; Kotschi 2001; Stolz 2007).

3. Gestural reduplications: Repetitions as a means of word formation

Reduplications are sequences of at least two stroke-stroke phases, in which not more than two form parameters change, namely “direction of movement” and “position”. Gestural reduplications are thereby made up of two subtypes: (i) Reduplications in which simultaneous changes in the parameters “direction of movement” and “position” occur. (ii) Reduplications in which only the parameter “position” changes (see also Fricke 2012). Reduplications take over abstract referential function and depict abstract events and states. By carrying redundant semantic features, reduplications underline the meaning expressed verbally and thus emphasize the semantics of speech. As in spoken utterances, reduplications express lexical or grammatical meaning and depict the Aktionsart “iterativity” or the notion of plurality.

Example 4 (see Fig. 124.2) illustrates gestural reduplications expressing the Aktionsart “iterativity”. While explaining the notion of internal mail, the speaker produces a series of three strokes co-occurring with the prepositional phrase zwischen zwei Ämtern hin und herschickt (‘send back and forth between two offices’). Using a stretched index finger and arced movements away from the body and towards the body, the gestural reduplication represents the iterativity of the movement event expressed in the verbs “send back and forth” through the repeated execution of strokes. As the beginning and endpoint of the represented movement event become visible in clear endpoints of the individual strokes, the movement sequences are articulatorily marked as individual and separate phases, indicating that the movement event “send back and forth” unfolds between two points. In combination with the parameter change (movement direction and position), the single strokes become visible as individual and separate phases. Thus, “repetition as a temporal
process is verbally and gesturally conceptualized as a repeated movement sequence” (Müller 2000: 221, translation J.B.). However, the representation of iterativity is not only based on the depiction of the concrete movement event visible in the individual movement sequences. Rather, the meaning of the reduplication refers to the abstract notion of movement and, in particular, to iterativity, expressed gesturally in the movement unfolding between two endpoints (Müller 2000).

Example 5 (see Fig. 124.2) shows an instance of gestural reduplications expressing the notion of plurality. In this example, the speaker talks about a seminar for hairdressers she has recently attended and explains to her interlocutor that haircuts and their compositions are also explained in textbooks. While saying kannste dir ja immer die einzelnen Schritte durchlesen (‘well you can read through the single steps’), she produces a series of three strokes co-occurring with einzel (‘single’), nen schritte (‘steps’), and durch (‘through’). Using a hand shape with the fingers flapped down and a palm down orientation, the speaker executes three strokes with an arced movement away from the body. The hands thereby successively move from a higher position to lower positions in front of the speaker’s body. Through the arced movements executed in different positions of the gesture space, the abstract concept of “single steps” is gesturally represented as different regions in front of the speaker’s body. Yet, the positions in gesture space are not used for the representation of perceived spatial relations between objects in the world. Rather, the gesture space is used for creating structural relations between gestures (Müller and Tag 2010). The single strokes mark individual spaces around the speaker’s body, which are used to represent the single steps. As the strokes are produced in spatial and temporal proximity and furthermore are marked as belonging together through constant form features, the impression of a sequence of similar yet different points in space arises (one space vs. several spaces). In combination with the co-expressive verbal utterance, the meaning of the gestural form is enriched (Enfield 2009) such that the notion of plurality emerges.

Example 4: back and forth. Utterance: dInge immer zwischen zwei ÄMtern hin und hErschickt (‘always send things back and forth between two offices’). Gesture: the stretched index finger is moved away from the body and towards the body with arced movements three times.

Example 5: single steps. Utterance: kannste dir ja immer die einzelnen schritte durchlesen (‘well you can read through the single steps’). Gesture: the flat hand with a palm oriented downwards performs a bent movement downwards three times.

Fig. 124.2: Examples of gestural reduplications fulfilling emphasizing function and expressing iterativity and plurality

Although the sub-types differ in aspects of form, the repetition of the individual strokes in both types of reduplications has the same effect: In reduplications, contrary to gestural iterations, the coordination of the single strokes (Fricke 2012) does not lead to a mere repetition of the meaning of the individual sub-strokes. Rather, based on the meaning of the parts, the whole sequence of strokes creates a complex gestural meaning. The complex gestural meaning of the reduplicative construction is thereby “an entity in its own right, usually with emergent properties not inherited or strictly predictable from the components and the correspondences between them” (Langacker 2008: 164). Due to the semantic change resulting from the repetition of the individual strokes, and based on an understanding of reduplication as the “systematic repetition of phonological material within a word for semantic or grammatical purposes” (Rubino 2005: 11), it is assumed that repetition in cases of gestural reduplications is not only a means to create connected gestural units. More importantly, it is understood as a means of word formation, which may be used either for the expression of the Aktionsart “iterativity” or for the notion of plurality. In doing so, gestural reduplications show analogies to reduplications in sign languages, both for the expression of Aktionsart and for the notion of plural. In sign languages, aspect or Aktionsart is expressed by modulating the movement. Plural marking is achieved by repeating movements along the horizontal, vertical, or sagittal axis as well as by positioning the hands in different places in gesture space (Klima and Bellugi 1979; Pfau and Steinbach 2005).
Gestural reduplications therefore seem to use a similar structural principle (reduplication of movement, change of position in gesture space) for a similar function (indication of Aktionsart and plural).
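Read purely on the level of form, the classification developed in sections 2 and 3 amounts to a small decision procedure over the parameters "direction of movement" and "position". The following sketch makes that reading explicit; the `Stroke` fields and label strings are invented for illustration, and the sketch deliberately ignores the chapter's decisive semantic criterion (whether the stroke sequence yields a complex meaning), which cannot be read off form alone.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Stroke:
    # Hypothetical minimal annotation of a single stroke phase
    direction: str  # direction of movement, e.g. "arc-away", "arc-toward"
    position: str   # region in gesture space, e.g. "upper", "lower"

def classify_repetition(strokes: List[Stroke]) -> str:
    """Form-based candidate classification of a stroke sequence:
    reduplication subtype (i) if direction of movement and position both
    change, subtype (ii) if only position changes, iteration otherwise."""
    if len(strokes) < 2:
        return "no repetition"
    direction_varies = len({s.direction for s in strokes}) > 1
    position_varies = len({s.position for s in strokes}) > 1
    if direction_varies and position_varies:
        return "reduplication (subtype i)"
    if position_varies:
        return "reduplication (subtype ii)"
    return "iteration"

# Example 2 (scraping): all form parameters constant across strokes
scraping = [Stroke("down-toward", "center")] * 3

# Example 4 (back and forth): direction and endpoint position alternate
back_and_forth = [Stroke("arc-away", "far"), Stroke("arc-toward", "near"),
                  Stroke("arc-away", "far")]

# Example 5 (single steps): same movement in successively lower positions
single_steps = [Stroke("arc-away", "upper"), Stroke("arc-away", "middle"),
                Stroke("arc-away", "lower")]

print(classify_repetition(scraping))        # iteration
print(classify_repetition(back_and_forth))  # reduplication (subtype i)
print(classify_repetition(single_steps))    # reduplication (subtype ii)
```

The three toy sequences mirror the chapter's examples only schematically; in actual annotation practice the parameter values would come from a notation system such as LASG (Bressem, Ladewig, and Müller volume 1).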

4. Gestural repetitions and their relevance for the creation of multimodal utterance meaning

Gestural iterations and reduplications are integrated into the verbal utterance through positional integration via temporal overlap (Fricke 2012), that is, iterations and reduplications most often temporally overlap with the co-expressive speech segment. Yet in doing so, they show differences regarding the depth of integration, both semantically and structurally. In cases of concrete and abstract use, gestural iterations may match the semantic features of speech by carrying redundant semantic features (see example 1 above). In doing so, the meaning of speech [s] and gesture [g] may be “identical, i.e. meaning [s] = meaning [g]” (Gut et al. 2002: 8), or the semantic features of the gesture may be included among the set of semantic features expressed in speech. In these cases, gestural iterations take over a prosodic and/or pragmatic function when depicting abstract meaning (see example 1) and an emphasizing function in cases of depicting concrete meaning (see example 2). Yet, in cases of depicting actions and objects, iterations also have the capability of carrying complementary semantic features and as such either specify objects in their size and shape (example 3) or the manner of an action. In these cases, the gestural repetition carries at least one semantic feature that is not expressed in the co-expressive speech segment. The meaning of the gesture contributes to the verbal meaning, “thus forming a subset of the meaning of the superordinate modality, namely speech” (Gut et al. 2002: 8). Moreover, through the temporal overlap with nouns and nominal phrases in cases of depicting objects, gestural iterations take over an attributive function as they specify and modify the nucleus noun of the nominal phrase (Fricke 2012). In cases of depicting actions, iterations specify the manner of the action and, through the correlation with verbs and verb phrases, take over the function of an adverbial determination because they qualify the verb meaning (Bressem 2012). Particular cases of gestural iterations thus have the capability of influencing and modifying the propositional content of the verbal utterance, a functional relevance missing in gestural reduplications. As reduplications only carry redundant semantic features and as such do not add semantic information not expressed by the spoken utterance (see section 3), they do not affect the propositional content of the utterance and can only be regarded as having an emphasizing function. Like the spoken utterance, reduplications express either lexical (example 4: iterativity) or grammatical meaning (example 5: plural) and as such depict the semantic nucleus of the spoken utterance in another modality. Gestural iterations and reduplications thus have particular relevance for the creation of multimodal utterances. Reasons for this different significance, and in particular for the missing modifying function of reduplications, lie in the abstract meaning of reduplications and their detachment from concrete aspects of the actual world. Iterations are more directly connected to bodily or visual experiences. Used for the representation of concrete actions and objects and providing complementary semantic information, they are tightly linked with the semantics of the verbal utterance and therefore provide necessary information for understanding the multimodal utterance.
Reduplications, on the other hand, trace a successive process of abstraction from visual or bodily experiences. Due to the abstract meaning arising from this process and their detachment from concrete entities, reduplications do not affect the propositional content of the verbal utterance but rather embody particular aspects of the meaning expressed verbally. Moreover, based on the fact that the repetition creates a complex gestural meaning in reduplications, they seem to be less strongly connected with the semantics of the spoken utterance.
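The semantic-feature comparison that underlies this section (following Gut et al. 2002) can likewise be made explicit in a toy sketch. The feature sets below are invented illustrations, not annotations from the chapter's corpus.

```python
def semantic_relation(speech_features: set, gesture_features: set) -> str:
    """Relate gesture meaning [g] to speech meaning [s] (after Gut et al.
    2002): identical feature sets, a redundant subset (emphasizing), or
    features beyond those of speech (complementary, potentially modifying)."""
    if gesture_features == speech_features:
        return "identical"
    if gesture_features <= speech_features:  # subset of speech features
        return "redundant (emphasizing)"
    return "complementary (modifying)"

# Example 2 (scraping): gesture repeats features already expressed in speech
print(semantic_relation({"scrape", "repeated"}, {"scrape", "repeated"}))
# -> identical

# Example 3 (metal thing): gesture adds the feature "bent" to the noun
print(semantic_relation({"metal", "thing"}, {"metal", "thing", "bent"}))
# -> complementary (modifying)
```

On this sketch, iterations may fall into any of the three classes, whereas reduplications, carrying only redundant features, never reach the complementary case; this is exactly why their modifying function is missing.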

5. Discussion

The twofold classification of gestural repetition presented in this chapter indicates that the phenomenon of repetitions in gesture is structurally and functionally complex. This complex internal structuring shows that repetition is not only an elementary means of expression for spoken or sign languages but also for gestures, and even seems to indicate analogies with repetitions in sign languages (Bressem 2012). Moreover, the particular relevance of gestural iterations and reduplications for the creation of multimodal utterance meaning indicates that rough descriptions of particular types without reference to structural and functional overlaps and variance cannot unravel the diversity within gestural repetitions. By adopting concepts of repetitions in spoken and signed languages, the chapter has presented a description of gestural repetition within a common methodological and theoretical frame of reference, aiming at the identification of structures and patterns in gestures that are a) comparable to the ones found in repetitions of spoken or signed languages and b) specific to the gestural modality (Bressem 2012). In doing so, the perspective contributes to a “grammar” of gesture (Müller 1998, 2010; Müller, Bressem, and Ladewig volume 1) and a multimodal theory of grammar (Fricke 2007, 2012, volume 1).

Acknowledgements

I thank Silva H. Ladewig for allowing me to use parts of her video data (see Ladewig 2012 for further information on the corpus) and Mathias Roloff ([email protected]) for the drawings.

6. References

Becker, Raymond, Alan Cienki, Austin Bennett, Christina Cudina, Camile Debras, Zuzanna Fleischer, Michael Haaheim, Torsten Müller, Kashmiri Stec and Alessandra Zarcone 2011. Aktionsarten, speech and gesture. In: Carolin Kirchhof (ed.), Proceedings of GESPIN 2011: Gesture and Speech in Interaction. http://gespin.amu.edu.pl/?q=node/66.
Bressem, Jana 2012. Repetitions in gesture: Structures, functions, and cognitive aspects. Ph.D. dissertation, Faculty of Social and Cultural Sciences, European University Viadrina, Frankfurt (Oder).
Bressem, Jana volume 1. A linguistic perspective on the notation of form features in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1079–1098. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana, Silva H. Ladewig and Cornelia Müller volume 1. Linguistic Annotation System for Gestures (LASG). In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1098–1124. Berlin/Boston: De Gruyter Mouton.
Brookes, Heather 2005. What gestures do: Some communicative functions of quotable gestures in conversations among Black urban South Africans. Journal of Pragmatics 37(12): 2044–2085.
De Beaugrande, Robert-Alain and Wolfgang Dressler 1981. Einführung in die Textlinguistik. Tübingen: Niemeyer.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A Translation of “La Mimica Degli Antichi Investigata Nel Gestire Napoletano” (Fibreno, Naples 1832) and With an Introduction and Notes by Adam Kendon. Bloomington/Indianapolis, IN: Indiana University Press. First published [1832].
Enfield, N. J. 2009. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press.
Fricke, Ellen 2007. Origo, Geste und Raum: Lokaldeixis im Deutschen. Berlin: Walter de Gruyter.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin: Mouton de Gruyter.
Fricke, Ellen volume 1. Towards a unified grammar of gesture and speech: A multimodal approach. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 733–754. Berlin/Boston: De Gruyter Mouton.
Gut, Ulrike, Karin Looks, Alexandra Thies and Dafydd Gibbon 2002. Cogest: Conversational gesture transcription system version 1.0. Fakultät für Linguistik und Literaturwissenschaft, Universität Bielefeld, ModeLex Tech. Rep 1.

Hurch, Bernhard (ed.) 2005. Studies on Reduplication. Berlin: Mouton de Gruyter.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23(3): 247–279.
Kendon, Adam 2004. Gesture. Visible Action as Utterance. Cambridge, UK: Cambridge University Press.
Klima, Edward S. and Ursula Bellugi 1979. The Signs of Language. Cambridge, MA: Harvard University Press.
Kotschi, Thomas 2001. Formulierungspraxis als Mittel der Gesprächsaufrechterhaltung. In: Klaus Brinker (ed.), Text- und Gesprächslinguistik: Ein internationales Handbuch zeitgenössischer Forschung, 1340–1348. Berlin: Walter de Gruyter.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6.
Ladewig, Silva H. 2012. Syntactic and semantic integration of gestures into speech: Structural, cognitive, and conceptual aspects. Ph.D. dissertation, Faculty of Social and Cultural Sciences, European University Viadrina, Frankfurt (Oder).
Langacker, Ronald 2008. Cognitive Grammar: A Basic Introduction. Oxford: Oxford University Press.
McNeill, David 1992. Hand and Mind. What Gestures Reveal About Thought. Chicago, IL: University of Chicago Press.
McNeill, David 2000. Catchments and contexts: Non-modular factors in speech and gesture production. In: David McNeill (ed.), Language and Gesture, 312–328. Cambridge, UK: Cambridge University Press.
McNeill, David 2005. Gesture and Thought. Chicago, IL: University of Chicago Press.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte, Theorie, Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2000. Zeit als Raum. Eine kognitiv-semantische Mikroanalyse des sprachlichen und gestischen Ausdrucks von Aktionsarten. In: Ernest W. B. Hess-Lüttich and Walter Schmitz (eds.), Botschaften verstehen. Kommunikationstheorie und Zeichenpraxis. Festschrift für Helmut Richter, 211–228. Frankfurt a. M.: Peter Lang.
Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gesture: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 707–733. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia and Susanne Tag 2010. Combining gestures: Mimetic and non-mimetic use of gesture space. Paper presented at the 4th conference of the International Society of Gesture Studies, Frankfurt (Oder), Germany.
Pfau, Roland and Markus Steinbach 2005. Backward and sideward reduplication in German Sign Language. In: Bernhard Hurch (ed.), Studies on Reduplication, 568–593. Berlin: de Gruyter.
Rubino, Carl 2005. Reduplication: Form, function and distribution. In: Bernhard Hurch (ed.), Studies on Reduplication, 11–29. Berlin: de Gruyter.
Stolz, Thomas 2007. Das ist doch keine Reduplikation! Über falsche Freunde bei der Suche nach richtigen Beispielen. In: Andreas Ammann and Aina Urdze (eds.), Wiederholung, Parallelismus, Reduplikation. Strategien der multiplen Strukturanwendung, 47–80. Bochum: Brockmeyer.
Streeck, Jürgen 2005. Pragmatic aspects of gesture. In: Jacob Mey (ed.), International Encyclopedia of Languages and Linguistics, 71–76. Oxford: Elsevier.
Wilbur, Ronnie B. 2005. A reanalysis of reduplication in American Sign Language. In: Bernhard Hurch (ed.), Studies on Reduplication, 594–623. Berlin: de Gruyter.

Jana Bressem, Chemnitz (Germany)


VIII. Gesture and language

125. Syntactic complexity in co-speech gestures: Constituency and recursion

1. Introduction
2. Simultaneity and linearity
3. Constituency and recursion: Self-embedding in co-speech gestures
4. Implications for language theory and gesture studies
5. Conclusion
6. References

Abstract

This chapter presents evidence for recursion in co-speech gesture from the perspective of a multimodal approach to grammar (Fricke 2012). It will be shown that constituency and recursion may be manifested by co-speech gestures alone. If we consider the current debate about recursion and language complexity prompted by Hauser, Chomsky, and Fitch (2002), then finding recursion in co-speech gestures has the language-theoretic implication that natural spoken languages have to be conceived of as inherently multimodal – also from the perspective of generative grammar.

1. Introduction

From the perspective of a multimodal approach to grammar, syntax and syntagmatic relations in general are considered to be the core area under investigation. The debate about whether gestures have a grammar at all generally hinges on the question of syntax. In this vein, Isabella Poggi notes on the subject of emblematic gestures (“symbolic gestures” in her terminology):

[…] a language can be defined as a communication system comprising not only a lexicon but also a syntax. This is the case only for Sign Languages, whilst the hearings’ systems of symbolic gestures do not include syntax rules, in that hearing people do not combine gestures to make gestural sentences […]. If symbolic gestures are not a language and do not combine to make sentences, it should not make sense to speak of “grammatical” distinctions among them. (Poggi 2007: 169; italics added by E.F.)

With regard to syntax, the following fundamental questions have to be answered: Firstly, can co-speech gestures be typified and semanticized independently of verbal utterances? Since the concept of kinesthemes allows submorphemic units to be identified, it enables semiotic processes of typification and semantization to be modeled; it thus provides terminal constituents for gestural constituent structures, which enables this prerequisite to be fulfilled (Fricke 2012, this volume). Secondly, do simple gestural units combine to form complex gestural units? Thirdly, if this is the case, how do gestural syntagmas interact with the syntagmas into which spoken language can be segmented? And, fourthly, to what extent do the same syntactic principles apply to gesture and speech production with regard to how their respective units may be combined? This chapter will – for methodological reasons – focus on the second and the fourth question. It will be argued that sign languages and multimodal spoken languages share the same syntactic principles of constituency and recursion (section 3). Moreover, it will be shown that constituency and recursion are manifested by co-speech gestures alone (Fricke 2007a, 2008, 2012, volume 1). If we consider the current debate about recursion and language complexity prompted by Hauser, Chomsky, and Fitch (2002), then validating the claim that co-speech gestures are recursive has the following far-reaching implication: If recursion is specific to the language faculty in the narrow sense (FLN) and co-speech gestures can be proved to be recursive, then they must be considered as an integral part of language – also from the perspective of generative grammar (section 4).

2. Simultaneity and linearity

A multimodal perspective on syntactic description requires clarification of the media-specific conditions that influence the rules for combining syntactic elements on the verbal and gestural levels respectively. When comparing spoken languages with sign languages, researchers often invoke a contrast between two principles: the temporal linearity of spoken languages due to the solitary activity of one articulator, on the one hand, and the simultaneity of sign languages due to the joint activity of several articulators in space, on the other hand: “With just one set of articulators, spoken languages have linearity; with multiple articulators (two hands, face, mouth, etc.) signed languages have simultaneity” (Woll 2007: 340). Researchers of sign languages have always been particularly critical of the principle of linearity postulated by Saussure ([1916] 1966) (cf. Woll 2007: 340; for an overview of sign languages and modality, see Meier 2012). But, in fact, Saussure himself had already realized that temporal linearity is a result of the auditory modality of spoken language. First, let us consider how he states the principle of linearity, which he introduces as the second fundamental principle of language alongside the principle of the arbitrariness of the linguistic sign: “The signifier, being auditory, is unfolded solely in time from which it gets the following characteristics: (a) it represents a span, and (b) the span is measurable in a single dimension; it is a line” (Saussure 1966: 70). Saussure then opposes the auditory domain to the visual domain and goes on to contrast rules of combination based on the principle of simultaneity with rules of combination based on the principle of temporal succession: “In contrast to visual signifiers (nautical signals, etc.), which can offer simultaneous groupings in several dimensions, auditory signifiers have at their command only the dimension of time.
Their elements are presented in succession; they form a chain” (Saussure 1966: 70). If we take into consideration the fact that the principle of linearity marks Saussure’s “oral turn”, i.e., his rejection of the primacy of written language, then the principle of linearity can be seen as introducing diversity into linguistic articulation relative to the media-specific properties of the medium involved (cf. Stetter 2005: 220). Stetter (2005: 221) proposes to reformulate Saussure’s principle of linearity in the following way: “The signifiant is articulated in a linear manner depending on the properties of the respective media.” Despite the differences between spoken and signed languages with respect to how their respective components are ordered, Liddell and other linguists analyzing sign languages have emphasized that verbal and gestural articulation are, in many respects, comparable: “Just as the hand must be correctly positioned to articulate signs, the tongue must be correctly positioned to produce spoken words. […] The need to correctly place an articulator demonstrates a clear parallel between spoken and signed languages” (Liddell 2003: 11). Fig. 125.1 uses a Cartesian coordinate system to illustrate the linear and simultaneous dimensions of utterance production with respect to different articulators.

Fig. 125.1: Three-dimensional model for describing the gestural components of utterances (cf. Fricke 2012)

The z-axis lists articulators that are potentially relevant and capable of moving simultaneously. The temporal dimension is represented on the x-axis. A set of gestural form parameters is assigned to each articulator; these parameters may differ depending on the articulator. The parameters used to describe the articulation of the hands are modeled on the so-called “phonological” parameters used to analyze sign languages (see Liddell 2003: 6). Stokoe (1960) was the first to describe gestural forms, by analogy with spoken language, as bundles of distinctive features. He distinguishes three parameters: handshape, movement, and location of the movement (cf. Liddell 2003: 6–7; for an overview, see Crasborn 2012; Garcia and Sallandre volume 1). These parameters stemming from sign language research have been taken up by a number of gesture researchers and adapted to the description of co-speech gestures (e.g., McNeill 1992; Becker 2004: 60; for an overview, see Bressem volume 1a, b; Bressem, Ladewig, and Müller volume 1; Duncan volume 1; Ladewig and Bressem volume 1). Further parameters have been added, e.g., the orientation of the palm, handedness, and gravity (for an overview, see Bressem volume 1b; Bressem, Ladewig, and Müller volume 1). With respect to handedness, the three-dimensional model of gestural analysis presented in Fig. 125.1 includes each hand in the list of articulators on the z-axis. This enables a distinction to be made between the left and the right hand instead of subsuming them under a “handedness” parameter, i.e., the left and the right hand are analyzed as separate items on the z-axis. The parameter “gravity” (see Lausberg volume 1), for example, allows one to determine exactly where rest positions occur in gestural sequences: Absolute rest positions are characterized by the articulator offering no resistance to the effects of gravity; hence, they are identified by the absence of any antagonistic muscular activity. In the next section on constituency and recursion in co-speech gestures, only hand movements on the linear x-axis are treated.
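The three-dimensional model can be rendered as a simple data structure: one track per articulator on the z-axis, each carrying time-stamped segments whose form-parameter sets may differ by articulator. This is a minimal sketch; the class names and parameter values below are illustrative assumptions, not part of the model itself:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One stretch on the temporal x-axis with its form-parameter values."""
    start: float   # seconds
    end: float
    params: dict   # e.g. handshape, movement, position (values invented here)

@dataclass
class ArticulatorTrack:
    """One entry on the z-axis; the two hands are separate tracks."""
    name: str
    segments: list = field(default_factory=list)

def simultaneous(a: Segment, b: Segment) -> bool:
    """Two segments on different tracks overlap in time (simultaneity)."""
    return a.start < b.end and b.start < a.end

right_hand = ArticulatorTrack("right hand")
left_hand = ArticulatorTrack("left hand")   # not collapsed into one "handedness" parameter
right_hand.segments.append(
    Segment(0.0, 0.8, {"handshape": "flat", "movement": "arc", "position": "center"}))
left_hand.segments.append(
    Segment(0.2, 0.9, {"handshape": "fist", "movement": "hold", "position": "lap"}))

# Linearity holds within a track (ordered segments); simultaneity across tracks:
assert simultaneous(right_hand.segments[0], left_hand.segments[0])
```

The design mirrors the two points made above: segments within one track are strictly ordered in time, while segments on different tracks may overlap.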

3. Constituency and recursion: Self-embedding in co-speech gestures

Co-speech gestures exhibit constituent structures which can, furthermore, be characterized by recursion and potentially infinite iteration (Fricke 2007a, 2008, 2012). Both procedures, recursion and iteration, will produce strings of signs of any length on the basis of a finite inventory of elements and a finite inventory of rules, which explains “how a language can (in Humboldt’s words) ‘make infinite use of finite means’ […]” (Chomsky 1965: 8). Both recursion and iteration contribute to the production of gestural complexity. It should be noted that structures are not identical with the processes that produce them: “flat” iterative structures can result from recursion; for example, a sequence of natural numbers can be obtained by “adding 1” (rule: n → n + 1 or n → 0) (see Pinker 1994: 86). Conversely, iteration can produce self-embedding recursive structures (Lobina 2011; Fitch 2010). In the following, we will present gesture units (GUs) as self-embedding structures by highlighting the gestural phenomena in which they have been observed, using technical terms only if necessary (Fricke 2007a, 2008; for definitions of technical terms and a technical discussion of the rules underlying phrase structure, see Fricke 2012, volume 1, in preparation). Fricke’s approach is based on Kendon’s work on gestural constituency (1972, 1980, 2004) and merely broadens his perspective by indicating further structural properties and language-theoretic implications.

How can a “gesture unit” be defined? According to Kendon (1972, 1980, 2004), gesture units are delimited by positions of relaxation and – in contrast to gesture phrases (GPs) – obligatorily contain a phase of retraction: “[…] This entire excursion, from the moment the articulators begin to depart from a position of relaxation until the moment when they finally return to one, will be referred to as a gesture unit” (Kendon 2004: 111). However, if we try to apply Kendon’s (2004) method of identifying gesture units to the following example (Fig. 125.2), taken from a corpus of video recordings of route descriptions (Fricke 2007b), the results are ambiguous.

Fig. 125.2: Series of video stills showing rest positions of the speaker on the left (cf. Fricke 2012)

Considering the series of stills illustrated in Fig. 125.2 as a video sequence, how many gesture units should be counted for the speaker on the left on the basis of the rest positions (R1–R9) shown? Three, or more than three gesture units? The answer depends on how one defines a rest position. By examining this series of stills, we can establish that the speaker on the left exhibits two different types of rest positions:

(i) Rest position, type 1: Her forearms and hands are resting on her lap. The effects of gravitational forces are at a maximum; her arms and hands are in a state of muscular relaxation.

Fig. 125.3: Rest position, type 1 (cf. Fricke 2012)

(ii) Rest position, type 2: Her hands are positioned in front of her lower abdomen. The effects of gravitational forces are not at a maximum due to antagonistic muscular activity. Her arms and hands are only partially relaxed, as her forearms are slightly raised and not resting on her thighs (for the term “partial retraction”, see Kendon 1980: 212).

In order to segment this video sequence, we can indisputably isolate one gesture unit at the beginning and one at the end of the series of video stills: both units are delimited by type 1 rest positions (position: lap; gravity: maximum effects).

Fig. 125.4: Rest position, type 2 (cf. Fricke 2012)


Fig. 125.5: Simple gesture unit at the beginning of the video sequence, between rest positions 1 and 2 (cf. Fricke 2012)

Fig. 125.6: Simple gesture unit at the end of the video sequence, between rest positions 8 and 9 (cf. Fricke 2012)

Fig. 125.7: One or six simple gesture units between rest positions 2 and 8? (cf. Fricke 2012)

However, it is far from clear how we should analyze the sequence between the second and the eighth rest positions (R2 and R8). Should this sequence count as one single simple gesture unit, as a string of six simple gesture units, or as a complex gesture unit consisting of six simple gesture units? This question relates to the status of the two different types of rest positions: (i) Should one consider both types of rest positions, which bring their corresponding retraction phases to an end, to be on the same level of hierarchy in the gestural constituent structure? (ii) Do different positions in gesture space and different degrees of gravitational effect indicate differences in the depth of embedding within the constituent structure? When we try to establish which gestural form parameters these two types of rest positions instantiate, we realize that only two locations can be correlated with the parameter “position in gesture space”: the speaker’s hands are either on her lap or held in front of her abdomen, at almost exactly the same height and at the center of the gesture space. Assuming that identical formal characteristics indicate structural “cohesion” (or “belonging together”) (see also McNeill’s [2005: 116–117] concept of “catchments”), we can assume gestural cohesion: (i) firstly, between units delimited by type 1 rest positions (position: lap; gravity: maximum effects); and (ii) secondly, between units delimited by type 2 rest positions (position: in front of the speaker’s abdomen at the center of the gesture space; gravity: partial effects).

What do these correlations imply for determining the constituent structures of gestures? Gestural cohesion is expressed by recurring instantiations of gestural form parameters in the marker structure (McNeill 2005; Fricke 2007a, 2008, 2012). In constituent structure trees, gestural cohesion is revealed by the fact that constituents that belong closely together share the same node, i.e., they are parts of the same superordinate constituent (Fricke 2007a, 2008, 2012). Fig. 125.8 shows that the topmost node in a gestural constituency hierarchy is always a primary gesture unit delimited by rest positions where gravitational forces are at a maximum.

Fig. 125.8: Gestural constituent structure: Primary and secondary gesture units (cf. Fricke 2012)
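The primary/secondary segmentation just described can be sketched as a toy procedure. The string notation below (1 = type 1 rest position, 2 = type 2 rest position, S = intervening gestural movement) is invented for this sketch and is not an established transcription scheme:

```python
def segment(seq: str):
    """Segment a gestural sequence at its rest positions.

    Type 1 rests ("1": lap, maximal gravity effects) delimit primary gesture
    units; type 2 rests ("2": partial relaxation) delimit secondary units
    inside a primary unit. Rest positions act as boundary signals and are
    therefore not themselves constituents of the result."""
    primaries = [p for p in seq.split("1") if p]
    return [[s for s in p.split("2") if s] for p in primaries]

# The example of Fig. 125.2: simple units at the start and end (R1-R2, R8-R9),
# and a complex stretch with five type 2 rests between R2 and R8.
units = segment("1S1S2S2S2S2S2S1S1")
assert units == [["S"], ["S", "S", "S", "S", "S", "S"], ["S"]]
```

On this reading, the sequence yields three primary gesture units, the middle one of which decomposes into six secondary units — one possible answer to the question of how many units Fig. 125.2 contains.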

Rest positions are not part of the constituency hierarchy; rather, they can serve as “boundary signals” (cf. Trubetzkoy [1939] 1989: 242) marking the beginning and the end of a gesture unit. Rest positions are preceded by phases of retraction that culminate in the instantiation of the formal parameters of the rest positions. In phonology, boundary signals are sounds that only occur at the beginning or at the end of a linguistic unit (morpheme, syllable, word) (cf. Trubetzkoy 1989). To what extent can gesture sequences be segmented by applying Trubetzkoy’s concept of boundary signals? Just as the glottal stop serves as a boundary signal in spoken language, so can rest positions and retraction phases that end in a rest position serve as boundary signals on the gestural level. As Fig. 125.8 demonstrates, we hold that a gesture unit is delimited by a rest position that brings a retraction phase to an end, and, moreover, that every gesture unit requires a retraction phase (Retr.) as an immediate constituent. The retraction phase is thus a predictable kinesic feature of gesture units and fulfills a delimitative function in the Trubetzkoyan sense. The term “immediate constituent” can be traced back to Bloomfield ([1933] 1964). The method of analysis in which it is used reveals hierarchical relationships within verbal syntagmas: “Sentences are not just linear sequences of elements, but are made up of ‘layers’ of immediate constituents, each lower-level constituent being part of a higher-level constituent” (Lyons [1968] 1975: 210–211). When we consider constituent structures of the kind shown in Fig. 125.8, it is obvious that we have to distinguish between two different types of gesture units in order to ascertain their respective status in the hierarchy. The gestural marker structure reveals that it is their respective retraction phases that indicate a difference in their depth of embedding. Both types of gesture units require a retraction phase that is an immediate constituent of the higher-level gesture unit. The respective retraction phases differ with respect to their gestural markers: Type 1 retraction phases (position: target = lap; gravity: target = max. effects) show no muscular movement of the hand or arm offering resistance to gravity. This type of boundary signal is used to determine the primary segmentation of a gestural sequence into two or more gestures or, alternatively, to distinguish between “gesture” and “non-gesture”. Retraction phases of this type provide gestural cohesion on the topmost level of segmentation of the gestural sequence, which is thus analyzed into primary gesture units.
Type 2 retraction phases, by contrast, serve as boundary signals for subordinate gesture units within a primary gesture unit. Secondary gesture units have in common a similar position in gesture space, which provides gestural cohesion at this level of embedding. If we compare the two types of retraction phases with regard to their constituency, we realize that gesture units, viewed as a category, belong to the type of constructions Chomsky calls “self-embedded”. Chomsky (1965) offers the following definition of “self-embedding” in the narrow sense of “center-embedding”:

The phrases A and B form a nested construction if A falls totally within B, with some nonnull element to its left within B and some nonnull element to its right within B. Thus the phrase “the man who wrote the book that you told me about” is nested in the phrase “called the man who wrote the book that you told me about up” […]. The phrase A is self-embedded in B if A is nested in B and, furthermore, A is a phrase of the same type as B. Thus “who the students recognized” is self-embedded in “who the boy who the students recognized pointed out”, […] since both are relative clauses. Thus nesting has to do with bracketing, and self-embedding with labeling of brackets as well. (Chomsky 1965: 12)

By applying these criteria for self-embedding to the gestural syntactic category “gesture unit” (GU), one sees that the constituent structure of our example exhibits self-embedding (a) in a broad sense: A secondary gesture unit, e.g., the unit delimited by the fifth and sixth rest positions, is inserted into the primary gesture unit, at the topmost node, alongside other immediate constituents of the primary gesture unit; and (b), in a narrow sense: The primary and the secondary gesture units are gestural constructions of the same type of GU, defined by an obligatory retraction phase (Fricke 2012).
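The self-embedding criteria can be illustrated with a toy constituency check. The simplified rule shape — a GU is a sequence of strokes and/or embedded GUs closed by an obligatory retraction phase — is a deliberate reduction for this sketch, not Fricke’s actual rule set:

```python
def is_gu(tree):
    """A gesture unit (GU) here is a list whose final element is a retraction
    phase and whose preceding elements are strokes or embedded GUs."""
    if not isinstance(tree, list) or len(tree) < 2 or tree[-1] != "retr":
        return False
    return all(p == "stroke" or is_gu(p) for p in tree[:-1])

# A primary GU with a secondary GU center-embedded in it: the embedded unit
# has nonnull material to its left and right, and it is of the same type (GU)
# as the containing unit, satisfying Chomsky's criteria for self-embedding.
primary = ["stroke", ["stroke", "retr"], "stroke", "retr"]
assert is_gu(primary)
assert is_gu(primary[1])      # the embedded constituent is itself a GU
assert not is_gu(["stroke"])  # no retraction phase, hence no GU
```

The last assertion reflects the delimitative role of the retraction phase: a string of strokes without a closing retraction does not qualify as a gesture unit at any level of the hierarchy.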

1658

VIII. Gesture and language

4. Implications for language theory and gesture studies

What implications for language theory and gesture studies can we draw from the above example of analysis? In the current debates about recursion and linguistic complexity, various positions have been adopted (see, e.g., van der Hulst 2010; Zwart 2011; Sauerland and Trotzke 2011). Hauser, Chomsky, and Fitch (2002) assume that recursion is specific to the human faculty of language and is to be found neither in animals nor in human cognitive abilities other than the faculty of language:

We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human element in the faculty of language. (Hauser, Chomsky, and Fitch 2002: 1569)

By contrast, Everett (2005) claims that there is no evidence for recursive syntactic structures in the Amazonian language Pirahã: “It is the only language known without embedding (putting one phrase inside another of the same type or lower level, e.g., noun phrases in noun phrases, sentences in sentences, etc.)” (Everett 2005: 622). Everett’s claim has not remained unchallenged (see, e.g., Fitch 2010; Zwart 2011; Sauerland and Trotzke 2011). For others, like Corballis (2007), recursion is not specific to the human faculty of language but a general characteristic of human cognition. With regard to the position taken by Hauser, Chomsky, and Fitch (2002), the proof that co-speech gestures can be recursive has the following consequences: Assuming that recursion is specific to the language faculty in the narrow sense (FLN), co-speech gestures displaying recursivity must be considered an integral element of language. But if one does not accept that multimodality is a core feature of language, then one is obliged to refute their hypothesis that recursion is the defining criterion for the human language faculty (FLN).

5. Conclusion

Syntax is still an understudied area in gesture studies and research on linguistic multimodality. Recent studies on this topic that take into account the media-specific properties of articulators (for multimodal integration in noun phrases, see Fricke 2008, 2012, volume 1; Ladewig 2011; for syntactic complexity in gestural stroke sequences, see Fricke 2008, 2012; Bressem 2012; for representations of co-speech gestures in Head-Driven Phrase Structure Grammar (HPSG), see, for example, Lücking 2013) indicate that further research is needed in order to gain a deeper understanding of the syntactic structures that characterize each modality and of how these may be related across modalities. In this chapter, both linear and simultaneous relations of co-speech gestures have been discussed, and a three-dimensional model for describing the gestural components of utterances and illustrating their relations has been presented. It has been shown that constituency and recursion can be manifested by co-speech gestures alone. Gestural constituent trees based on the analysis of empirical examples reveal the structural property of self-embedding, in that gestural constituents can contain other gestural constituents of the same type. Within the framework of generative grammar, and admitting Hauser, Chomsky, and Fitch’s (2002) hypothesis that recursion is the only defining criterion for the human faculty of language, finding recursion in co-speech gestures has the language-theoretic implication that natural spoken languages have to be conceived of as inherently multimodal. Conversely, rejecting the claim that language is fundamentally multimodal implies that recursivity cannot be taken to be the defining criterion of the language faculty in the narrow sense, as the Chomskyan model proposes.

6. References

Becker, Karin 2004. Zur Morphologie redebegleitender Gesten. MA thesis, Freie Universität Berlin.
Bloomfield, Leonard 1964. Language. New York: Holt, Rinehart and Winston. First published [1933].
Bressem, Jana 2012. Repetitions in Gesture: Structures, Functions, and Cognitive Aspects. PhD dissertation, European University Viadrina, Frankfurt (Oder).
Bressem, Jana volume 1a. Transcription systems for gestures, speech, prosody, postures, and gaze. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1037–1059. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana volume 1b. A linguistic perspective on the notation of form features in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1079–1097. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana, Silva H. Ladewig and Cornelia Müller volume 1. Linguistic Annotation System for Gestures (LASG). In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1098–1124. Berlin/Boston: De Gruyter Mouton.
Chomsky, Noam 1965. Aspects of the Theory of Syntax. Cambridge, MA: The MIT Press.
Corballis, Michael C. 2007. The uniqueness of human recursive thinking. American Scientist 95: 240–248.
Crasborn, Onno 2012. Phonetics. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language: An International Handbook. (Handbooks of Linguistics and Communication Science 37.), 4–20. Berlin/Boston: De Gruyter Mouton.
Duncan, Susan volume 1. Transcribing gestures with speech. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1007–1014. Berlin/Boston: De Gruyter Mouton.
Everett, Daniel L. 2005. Cultural constraints on grammar and cognition in Pirahã: Another look at the design features of human language. Current Anthropology 46: 621–646.
Fitch, W. Tecumseh 2010. Three meanings of “recursion”: Key distinctions for biolinguistics. In: Richard K. Larson, Viviane Déprez and Hiroko Yamakido (eds.), The Evolution of Human Language, 73–90. Cambridge, UK: Cambridge University Press.
Fricke, Ellen 2007a. Linear structures of gestures: Co-speech gestures as self-embedding constructions. Paper presented at the Third International Conference “Integrating Gestures” of the International Society for Gesture Studies (ISGS), June 18–21. Chicago, USA.


Fricke, Ellen 2007b. Origo, Geste und Raum: Lokaldeixis im Deutschen. Berlin/New York: De Gruyter.
Fricke, Ellen 2008. Grundlagen einer multimodalen Grammatik: Syntaktische Strukturen und Funktionen. Habilitation thesis, European University Viadrina, Frankfurt (Oder).
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin/Boston: De Gruyter.
Fricke, Ellen volume 1. Towards a unified grammar of gesture and speech: A multimodal approach. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 733–754. Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen this volume. Kinesthemes: Morphological complexity in co-speech gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen in prep. Gestures and structural complexity: Iteration and recursion.
Garcia, Brigitte and Marie-Anne Sallandre volume 1. Transcription systems for sign languages: A sketch of the different graphical representations of sign language and their characteristics. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1125–1138. Berlin/Boston: De Gruyter Mouton.
Hauser, Marc D., Noam Chomsky and W. Tecumseh Fitch 2002. The faculty of language: What is it, who has it, and how did it evolve? Science 298(4): 1569–1579.
Hulst, Harry van der (ed.) 2010. Recursion and Human Language. Berlin/New York: De Gruyter Mouton.
Kendon, Adam 1972. Some relationships between body motion and speech: An analysis of an example. In: Aron W. Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 177–210. New York: Pergamon Press.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary R. Key (ed.), The Relationship of Verbal and Nonverbal Communication, 207–227. The Hague: Mouton.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge, UK: Cambridge University Press.
Ladewig, Silva H. 2011. Syntactic and Semantic Integration of Gestures into Speech: Structural, Cognitive, and Conceptual Aspects. PhD dissertation, European University Viadrina, Frankfurt (Oder).
Ladewig, Silva H. and Jana Bressem volume 1. A linguistic perspective on the notation of gesture phases. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1060–1078. Berlin/Boston: De Gruyter Mouton.
Lausberg, Hedda volume 1. NEUROGES – A coding system for the empirical analysis of hand movement behaviour as a reflection of cognitive, emotional, and interactive processes. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1022–1036. Berlin/Boston: De Gruyter Mouton.
Liddell, Scott K. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge, UK: Cambridge University Press.

125. Syntactic complexity in co-speech gestures: Constituency and recursion

1661

Lobina, David J. 2011. “A running back” and forth: A review of recursion and human language. Biolinguistics 5(1⫺2): 151⫺169. Lücking, Andy 2013. Ikonische Gesten. Grundzüge einer linguistischen Theorie. Berlin/Boston: De Gruyter. Lyons, John 1975. Introduction to Theoretical Linguistics. Cambridge, NY: Cambridge University Press. First published [1968]. McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press. McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press. Meier, Richard P. 2012. Language and modality. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook. (Handbooks of Linguistics and Communication Science 37.), 574⫺600. Berlin/Boston: De Gruyter Mouton. Pinker, Steven 1994. The Language Instinct. New York: William Morrow. Poggi, Isabella 2007. Mind, Hands, Face, and Body. A Goal and Belief View of Multimodal Communication. Berlin: Weidler. Sauerland, Uli and Andreas Trotzke 2011. Biolinguistic perspectives on recursion: Introduction to the special issue. Biolinguistics 5(1⫺2): 1⫺9. Saussure, Ferdinand de 1966. Course in General Linguistics. New York: McGraw-Hill. First published [1916]. Stetter, Christian 2005. System und Performanz. Symboltheoretische Grundlagen von Medientheorie und Sprachwissenschaft. Weilerswist: Velbrück Wissenschaft. Stokoe, William C. 1960. Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. Buffalo: University of Buffalo. Trubetzkoy, Nikolaus S. 1989. Grundzüge der Phonologie. Göttingen: Vandenhoek and Ruprecht. First published [1939]. Woll, Bencie 2007. Perspectives on linearity and simultaneity. In: Myriam Vermeerbergen, Lorraine Leeson and Onno Crasborn (eds.), Simultaneity in Signed Languages. Form and Function, 337⫺ 344. Amsterdam/Philadelphia: John Benjamins. Zwart, Jan-Wouter 2011. Recursion in language: A layered-derivation approach. 
Biolinguistics 5(1⫺ 2): 43⫺56.

Ellen Fricke, Chemnitz (Germany)


VIII. Gesture and language

126. Creating multimodal utterances: The linear integration of gestures into speech

1. Introduction
2. The phenomenon defined
3. Data and method
4. Syntactic integration
5. Syntax-semantics interface – noun- or verb-specific gestures?
6. Creating multimodal meaning
7. Summary
8. Discussion
9. References

Abstract

Studies on gestures most often focus on multimodal utterances in which gesture and speech are used in temporal overlap. This chapter investigates a different phenomenon, namely gestures that are integrated linearly into a spoken utterance by occupying syntactic gaps. Based on syntactic and semantic analyses of speech and gestures, it will be shown that a) gestures are not integrated in all kinds of syntactic gaps but preferably occupy the positions of nouns and verbs, b) non-conventional gestures are most often used with a speech-replacing function, and c) the syntactic position foregrounds semantic aspects of gestures. Based on these findings, the relations between speech and gesture types are discussed and a continuum of different degrees of integrability is proposed. Furthermore, the possibility of noun- and verb-like gestures is elucidated (Ladewig and Bressem Ms.) and implications for the notion of a multimodal language are drawn (Fricke 2012; Müller 2007).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1662–1677

1. Introduction

Scholars of gestures have observed that gestures can be integrated into the syntactic structure of a spoken utterance (e.g., Clark 1996; Enfield 2009; Fricke 2012; Harrison 2009; McNeill 2005; Slama-Cazacu 1976). Recent empirical studies have documented that gestures can take over functions of syntactic constituents, either by accompanying speech (Bressem 2012; Fricke 2012; Ladewig 2012; Streeck 2002a) or by replacing verbal units (Bohle 2007; Ladewig 2012; Müller and Tag 2010). But gestures are also integrated into the semantics of a spoken utterance. It was found that gestures can, for instance, modify the reference object expressed in speech (Bressem 2012; Fricke 2012) or that they can provide the semantic center of an utterance (Ladewig 2012). This paper presents an overview of the different forms and functions of gestures that substitute for speech. It discusses a specific phenomenon of gesture–speech integration, namely gestures that are inserted in syntactic gaps of interrupted spoken utterances. It focuses on the interaction between the two modalities speech and gesture, in particular on the interaction of their structural and functional properties. Furthermore, as the syntactic positions are the only information a recipient can rely on when interpreting the gestures under investigation, the syntactic gaps are examined as possible anchor points for gestures to join in interrupted spoken utterances. In doing so, the chapter elucidates how the syntactic position influences the understanding of a gesture and how aspects of form and meaning are made salient for a recipient.

The chapter starts by giving a short introduction to the phenomenon under investigation, followed by a presentation of the data and the method. Afterwards, the different analytical steps and their outcomes will be introduced, that is, (i) the analysis of the syntactic integration of gestures, (ii) their semantic integration, and (iii) the creation of multimodal utterance meaning. The chapter closes with a short summary and a discussion of the results.

2. The phenomenon defined

The phenomenon under investigation has been referred to as “mixed syntax” (Slama-Cazacu 1976), “composite signal” (Clark 1996), or “language-slotted gestures”/“speech-linked gestures” (McNeill 2005, 2007) – terms which document the phenomenon but lack a larger empirical and systematic investigation. The research subject falls within the realms of different research phenomena, namely “interrupted speech”, “substitution by gestures”, and “syntactic integration of gestures in speech”, which will be briefly introduced in the following.

Discontinuity or interruption is identified by way of syntactic as well as prosodic devices involved in the production of the investigated utterances (see Couper-Kuhlen and Selting 2001; Selting 1995, 1998, inter alia). Discontinuous utterances reflect lexicalization or verbalization problems in many cases (see, e.g., Gülich 1994; Gülich and Kotschi 1995, 1996). As such, gestures used during non-fluent speech have been studied to a large part with respect to word or concept searches or planning problems (e.g., Alibali, Kita, and Young 2000; Kita 2000, 2003; Krauss, Yihsiu, and Gottesman 2000; Krauss, Yihsiu, and Purnima 1996; Rauscher, Krauss, and Yihsiu 1996). Furthermore, gestures in interrupted utterances and pauses have been examined from a conversation-analytical point of view. Gestures were, for instance, studied with respect to turn taking, showing that they are used systematically at transition relevance places to hold or yield the floor, among other functions (see, inter alia, Bohle 2007, this volume; Schegloff 1984; Schmitt 2004; Schönherr 1997; Streeck and Hartge 1992).

Substitution by gesture is understood as the replacement of a verbal unit through a gestural unit. Information is not supplemented by a gesture but replaced by it. This means that in the majority of the cases investigated, speech is absent while the stroke is being performed.
The substitution of speech by gestures has so far been investigated primarily with respect to particular gesture types. Accordingly, “symbolic gestures” (Efron [1941] 1972), better known as “emblematic gestures” (Ekman and Friesen 1969), are primarily considered to be substitutes for spoken words. They are conventionalized, lexicalized, and perform pragmatic functions. Pantomimic gestures and referential gestures resemble each other in their semiotic structure, but their relation to speech is assumed to be of a different kind. Both gesture types iconically depict the object they refer to, but more body parts are involved in the execution of pantomimic gestures (Ladewig, Müller, and Teßendorf 2010; Ladewig, Teßendorf, and Müller in preparation). Speech is considered to be obligatorily present during the use of referential gestures, but it is regarded as obligatorily absent in pantomime. These aspects have been summarized under the term “Kendon’s continuum” (McNeill 1992, 2005), now termed “Gesture Continuum” (McNeill volume 1).

First mentions of a syntactic integration of gestures into spoken language can be found in Slama-Cazacu’s (1976) reflections on a “mixed syntax”. She suggests that gestures can replace nouns, verbs, adjectives, and adverbs, and she argues that gestures preferably serve as subjects, predicates, or as a predicate’s complements (Slama-Cazacu 1976: 222). A few examples are given to underpin her assumptions, but no empirical basis for her observations is provided. The integration of gestures into the underlying syntactic structure of a spoken utterance has been documented empirically for cataphoric expressions such as “like this” or “such” (e.g., Fricke 2007, 2012; Goodwin 1986; Müller 2007; Streeck 1988, 2002b). In these cases attributive or modifying information is conveyed gesturally. Fricke (2012) showed that referential gestures can expand verbal noun phrases by way of the deictics son or solch (‘such a’). These structurally integrated gestures instantiate the function of an attribute which modifies the nucleus of a noun phrase and reduces the extension of its reference object. The phenomenon was termed “multimodal attribution” and was investigated within the framework of a “multimodal grammar” (Fricke 2012), which also laid the theoretical ground for the study presented below. Bressem (2012, this volume) also showed that gestures are syntactically integrated in a spoken utterance by modifying it and by taking over the functions of syntactic constituents serving as attributes and adverbial determinations. The following gives an overview of gestures integrated in syntactic gaps. We will start with a short introduction of the data and method.

3. Data and method

The study is based on 20 hours of data from different discourse types, i.e., naturally-occurring conversations, TV shows, experimental data, and parlour games, which were collected during the years 2004–2010. The phenomenon of interrupted utterances completed by gestures was identified by applying the following criteria:

– An interrupted utterance is identified by means of syntactic as well as prosodic devices involved in the production of utterances (e.g., Couper-Kuhlen and Selting 2001; Selting 1995, 1998).
– Gestures join in interrupted utterances. The spoken utterances are not continued after the deployment of a gesture/gestures. As such, gestures in final sentence position are investigated.
– No dysfluency markers which document lexicalization or verbalization problems are produced by the speakers.
– The utterances are interrupted by the speaker, not by a co-participant (see, e.g., Schwitalla 1997).

Altogether 66 instances of the phenomenon, distributed over 22 speakers, were identified. They were analyzed in the annotation software ELAN (Wittenburg et al. 2006), covering the determination of intonation units (Chafe 1994), the transcription of speech (Selting et al. 1998), syntactic analyses (Eisenberg [1998] 2001), and the description of gestures (e.g., Ladewig and Bressem 2013; Müller 2010b). (For more information see Bressem, Ladewig, and Müller volume 1.) In a second step, experiments were conducted in order to test the comprehension of these multimodal utterances and to conduct the analyses on objective grounds. Three conditions were set up. In the first condition, the reading condition, the 66 multimodal utterances of the corpus were written down on sheets of paper and handed out to the subjects. They were asked to read the sentences and fill in words, phrases, or clauses they considered best suited for the syntactic gaps. The list of utterances encompassed only the sentences in which the gestures joined in; no further information on the context was supplied. Altogether 15 subjects participated (10 female, 5 male). In the second condition, video condition I, subjects watched video clips of all 66 instances. As in the reading condition, the video clips encompassed only the sentences, excluding any contextual information. The same question as in the reading condition was posed to the subjects; no hint to pay attention to the gestures was given. 15 subjects participated (8 female, 7 male). In the third condition, video condition II, the subjects watched video clips and had to answer the same question as in the other two conditions. This time the clips included the utterances under investigation as well as their broader contexts. 15 subjects participated (8 female, 7 male). Altogether, 2970 lexical choices elicited in the three experimental conditions were analyzed. In the following section the results of the syntactic analyses will be presented.
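The design yields a fixed number of elicited responses: 66 utterances, each judged by 15 subjects in each of 3 conditions. A minimal bookkeeping sketch of this design (the condition labels and flags are illustrative; only the counts are from the text):

```python
# Sketch of the elicitation design described above. The counts
# (66 utterances, 15 subjects per condition, 3 conditions) come from
# the study; the dictionary keys and flags are illustrative only.
UTTERANCES = 66
CONDITIONS = {
    "reading":  {"subjects": 15, "video": False, "context": False},
    "video_I":  {"subjects": 15, "video": True,  "context": False},
    "video_II": {"subjects": 15, "video": True,  "context": True},
}

def total_lexical_choices() -> int:
    # one lexical choice per subject per utterance, in every condition
    return sum(UTTERANCES * c["subjects"] for c in CONDITIONS.values())

print(total_lexical_choices())  # 2970, the number of choices analyzed
```

The product 66 × 15 × 3 reproduces the 2970 lexical choices reported above.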

4. Syntactic integration

This paper focuses on the interaction between the two modalities speech and gesture, and in particular on the interaction of the structural and functional properties of both. However, as the gestures are used in a syntactic gap, meaning they are not accompanied by speech, the only verbal information one can rely on is the syntactic position exposed by the interruption. Thus, the exposed syntactic gaps are examined as possible anchor points for gestures to join in interrupted spoken utterances. The syntactic analysis revealed that gestures are preferably inserted in noun and verb positions: Out of the 66 examples identified in the data, 47% of the gestures adopted the position of a noun and 45% replaced a verb. Both preferred syntactic positions will be exemplified by two examples in the following.

4.1. Gestures in noun position

The first example is taken from a conversation in which the participants, who know each other well, are talking about trains and a handcar park they have visited before. Some of the speakers say that they have already been to the handcar park and have tried to drive the handcar there. At one point in the conversation, one speaker asks for clarification, as he does not know what a draisine is: Was is Draisine (‘What’s a handcar?’). Right after asking the question he comes up with an answer, saying Ach hier mit diesen (‘Well here with these’), interrupts his utterance, recognizable by the constant intonation, and inserts an up-and-down movement with both arms which resembles the action of pressing or pushing something down (see Fig. 129.1). The analysis of the syntactic structure underlying the spoken utterance reveals that this multi-stroke sequence occupies a syntactic gap. More specifically, the position of a noun is adopted, since the demonstrative pronoun diesen (‘these’) usually combines with a noun (see Fig. 129.2). These results are substantiated by the outcome of the experiments: in 93% of the cases the constituent of a noun could be determined. The preposition (Pr), the demonstrative pronoun (N), and the gesture create a complex multimodal prepositional phrase (PPmumod) that functions as an attribute to the noun “handcar”, mentioned in the utterance before.

Fig. 129.1: Transcript of example ‘with these’/mit diesen

Fig. 129.2: Syntactic analysis of the example ‘with these’/mit diesen
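The multimodal constituent analysis just described (preposition + demonstrative pronoun + gesture forming a PPmumod) can be made concrete as a small tree structure. This is an illustrative sketch, not an implementation from the chapter; the class and its methods are invented here, while the node labels follow the analysis above:

```python
from dataclasses import dataclass, field

# Minimal constituency tree for the multimodal prepositional phrase
# "mit diesen [gesture]"; labels (Pr, N, G, PPmumod) follow the analysis above.
@dataclass
class Node:
    label: str                    # constituent category
    form: str = ""                # spoken form, or a gloss for the gesture
    children: list = field(default_factory=list)

    def leaves(self):
        """Return the (category, form) pairs of the terminal nodes."""
        if not self.children:
            return [(self.label, self.form)]
        return [leaf for child in self.children for leaf in child.leaves()]

pp_mumod = Node("PPmumod", children=[
    Node("Pr", "mit"),                    # preposition
    Node("N", "diesen"),                  # demonstrative pronoun
    Node("G", "[up-and-down movement]"),  # gesture filling the noun gap
])

print(pp_mumod.leaves())
```

The gesture is simply one more terminal in the phrase, which is the point of the analysis: the gap position, not the gesture type, determines its constituent status.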


4.2. Gestures in verb position

The second example is taken from a conversation between four women talking about a wedding. Speaker Mo recounts an incident in which she, her sister, and her grandmother are coming home from a wedding party. Having arrived at the apartment house, her sister notices that she has forgotten the key to her apartment. In order to get into it she starts climbing up to the window, but her grandmother, standing next to her, tries to stop her undertaking. The speaker is saying und die hat immer geschubst (‘and she was pushing all the time’). While saying this she executes two two-handed gestures, i.e., two pushing movements forward and away from her body. Subsequently, she says und wir hinten (‘and we from behind’), interrupts her verbal utterance, recognizable by the constant intonation, and performs one pushing movement upward and away from her body (see Fig. 129.3). The stroke begins on the second syllable of this adverb and reaches into the subsequent pause. It is followed by a retraction and a hold.

Fig. 129.3: Transcript of example ‘And we from behind’/Und wir hinten

As in the previous example, speech and gesture are intertwined on a syntactic level. After uttering the beginning of the second main clause und wir hinten (‘and we from behind’) a gesture joins in. Following the SVO structure of this main clause, the gesture occupies the position of the finite verb which, in second position, follows the subject wir (‘we’, see Fig. 129.3). The findings yielded by the syntactic analyses of the answers given in the experiments substantiate this outcome. Accordingly, 87% of all lexical choices show a verb. These results suggest assigning the constituent category verb (V) to the gesture.


The examples represent the outcome of the quantitative analysis, meaning that, in the majority of cases, it is nouns and verbs that are replaced by gestures. Nouns, either alone or as part of a multimodal noun phrase, serve the function of an object. Verbs, either alone or as part of a multimodal verb phrase or verb form, serve the function of a predicate.

4.3. Gesture types

Following the observation of a syntactic integration of gestures into speech, it is worth examining which gesture types are deployed with a speech-replacing function. According to the “Gesture Continuum” (formerly known as “Kendon’s Continuum”, see McNeill volume 1), emblematic gestures and pantomimic gestures should be deployed most often (McNeill 1992, 2005). However, the opposite was found in the data: gestures which are not regarded as fulfilling a speech-replacing function were identified in the majority of cases, namely “referential gestures” (Müller 1998), also referred to as “iconic or metaphoric gestures” (McNeill 1992) or “representational gestures” (Kendon 1980; Kita 2000). This gesture type is characterized by the obligatory presence of speech.

Tab. 126.1: Distribution of gesture types over preferred syntactic positions

                                    referential gestures         recurrent gestures       emblematic
syntactic position                  non-pantomimic  pantomimic   others    pointing       gestures
noun position: 47% (31 gestures)         59%            6%         16%       13%              6%
verb position: 45% (29 gestures)         50%           17%         23%        /              10%

Tab. 126.1 shows that referential gestures were inserted into the preferred syntactic gaps most often. In noun position they account for 65% of the cases, of which 59% were used in a non-pantomimic way and 6% in a pantomimic way, meaning that more body parts are involved in the gestural depiction. In verb position they account for 67% of the cases, of which 50% were used in a non-pantomimic and 17% in a pantomimic way. The gesture type used second most often is that of recurrent gestures (noun: 29%, verb: 23%), of which 13% comprise pointing gestures, used in noun positions only. (For an overview of recurrent gestures see Ladewig this volume; Bressem and Müller this volume.) Emblematic gestures make up the smallest number of inserted gestures: 6% in noun positions and 10% in verb positions. Accordingly, against the assumption advocated in the literature, it is referential gestures (or iconic or metaphoric gestures, McNeill 1992) that are used most often in syntactic gaps – not emblematic gestures. Referential gestures together with recurrent gestures, which have only been selectively characterized as speech-replacing, amount to about 90% of all gestures used.
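Aggregating the percentages of Tab. 126.1 per syntactic position makes the closing claim checkable: referential and recurrent gestures together cover 94% of the noun-position and 90% of the verb-position cases, i.e., about 90% overall. A small tally sketch (the dictionary layout is illustrative; the numbers are from the table):

```python
# Percentages from Tab. 126.1, summed over the subcolumns per gesture type.
noun_position = {"referential": 59 + 6, "recurrent": 16 + 13, "emblematic": 6}
verb_position = {"referential": 50 + 17, "recurrent": 23 + 0, "emblematic": 10}

for name, dist in [("noun", noun_position), ("verb", verb_position)]:
    non_emblematic = dist["referential"] + dist["recurrent"]
    # prints "noun 94" and "verb 90": roughly 90% in both positions
    print(name, non_emblematic)
```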

5. Syntax-semantics interface – noun- or verb-specific gestures?

The next analytical step approaches the question of whether gestures with particular semiotic characteristics are integrated in the identified syntactic gaps. More precisely, the question is raised whether noun- or verb-like gestures can be distinguished.


To tackle this issue, a syntax-semantics interface is established by investigating the distribution of the “gestural modes of representation” over the identified syntactic positions. This aspect concerns the motivation of the gestural form, meaning the mimetic techniques used to transform movements of the hands and arms into gestures. Two semiotic strategies are differentiated, namely “acting gestures” and “representing gestures”. Acting gestures reenact an action with an object such as in turning the car key or opening the window. In representing gestures, the hands are transformed into objects which can be in motion such as depicting a moving snake with the index finger. Drawing and molding gestures are subsumed under the category of acting gestures (Müller 1998, 2010a; Müller, Bressem, and Ladewig volume 1). With regard to the subject under investigation, it is assumed that representing gestures are used preferably in noun positions, as they depict an object in motion, and acting gestures are preferably inserted in verb positions, as they mime actions with objects. However, the analysis of the gestural forms revealed that a clear allocation of the gestural modes of representation to one or the other preferred syntactic position could not be found.

Tab. 126.2: Distribution of gestural modes of representation over preferred syntactic positions

                       acting                                                                           representing
syntactic position     total        moulding  drawing  acting only  acting with       acting with       object in motion
                                                                    specified object  unspecified object
noun (31 gestures)     61% (19 g.)    13%       6%        6%            13%               23%            39% (12 gestures)
verb (29 gestures)     77% (22 g.)     3%       3%       10%            48%               13%            23% (7 gestures)

Tab. 126.2 illustrates that acting gestures were used most often in the preferred syntactic gaps: 61% of the gestures in noun position and 77% of the gestures in verb position were deployed in the acting mode. Most often, acting with an object was used. Drawing and molding gestures make up only a small number of gestures. The remaining 39% of gestures in noun positions and 23% in verb positions were used in the representing mode; in these cases, only gestures depicting an object in motion were used. The examples presented above represent the outcome of this quantitative analysis, meaning that in both cases the gestures are used in the acting mode: The first example shows an object being pushed downwards and the second one depicts an object being pushed upwards. According to these results, a preference of one of the gestural modes for a particular syntactic position cannot be detected and, as such, noun- or verb-specific gestures could not be identified. (For a discussion of these results see section 8.)
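The subcategory percentages of the acting mode in Tab. 126.2 sum exactly to the reported acting totals (61% for noun, 77% for verb position), which serves as a quick consistency check on the table. A sketch of that check (the dictionary layout is illustrative; the numbers are from the table):

```python
# Subcategory shares of the acting mode from Tab. 126.2 (in percent).
acting = {
    "noun": {"moulding": 13, "drawing": 6, "acting_only": 6,
             "specified_object": 13, "unspecified_object": 23},
    "verb": {"moulding": 3, "drawing": 3, "acting_only": 10,
             "specified_object": 48, "unspecified_object": 13},
}
totals = {"noun": 61, "verb": 77}  # acting totals reported in the table

for position, shares in acting.items():
    # each subcategory column sums to the acting total of its row
    assert sum(shares.values()) == totals[position]
    print(position, sum(shares.values()))
```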


6. Creating multimodal meaning

At this point of the investigation, the question of how speech and gesture interact in creating multimodal meaning is addressed. For this reason, the analysis of the syntactic structure as well as of the gestural modes of representation was complemented by semantic aspects of gestures and speech. Moreover, the lexical choices elicited in the three experimental conditions were incorporated in the analysis, meaning that the interrupted utterances as well as the lexical choices were investigated with respect to semantic roles (Jackendoff 1972), image schemas (Johnson 1987), and conceptual referents (Langacker 1987). Gestures were analyzed (by the author) with respect to the depicted objects (“objects of mimesis”, Müller 2010a) and the “inherent meaning” (Ladewig and Bressem 2013) that comes with the gestural form, which builds upon the gestural modes of representation but also on the image schemas, motor patterns, and geometric patterns a gesture is reminiscent of (e.g., Cienki 1998; Mittelberg 2006, 2010). In the following, the interaction between spoken and gestural information is traced using the examples introduced in section 4.

When the participants of the experiments were asked to watch the video clips and to write down their lexical choice, they named objects such as “handles”, “grips”, “pumps”, or “air pumps” in the first example and the action of pushing in the second example. Thus, although the gestures are quite similar, it appeared that the recipients relied on different information when interpreting the spoken utterance and the inserted gesture. That is to say, when naming the words “handles” or “grips”, people focused on the semantic information of the object depicted by the hand shape of the gestures. In terms of image schemas, object and container are incorporated by the gestures. These are complemented by the image schemas of motion, force, path, or verticality in the cases in which “(air) pumps” were written down.
Thus, when people watched video clips of this example and had to interpret the inserted gestures, they focused on both object and action information. They took the whole gestural gestalt, and thus both hand shape and movement, into account. Yet, in all lexical choices elicited for this example, the gestures were interpreted as referring to an object. As such, in terms of the semantic correlate or conceptual referent of the syntactic category “noun”, the gesture is treated as depicting a thing. In the second example, people only named the action of pushing, which is why they seem to have focused on the movement rather than on the hand shape of the gesture. According to their interpretations and according to the analysis of the gestural form, the gesture is reminiscent of the image schemas resistance, blockage, (counter)force, and contact. The object information (object) expressed by the hand shape remained rather in the background. The gesture was interpreted as depicting an action rather than an object. (For more information on the depiction of objects and actions by particular form parameters see Armstrong and Wilcox 2007; Mittelberg 2006; Stokoe [1991] 2001.) From these analyses the question arises as to why gestures which are similar in their movement pattern are interpreted in such different ways. Researchers have convincingly pointed out that gestures convey complex images of contents. What is transmitted linearly in spoken language can be covered by only one gesture (see, e.g., McNeill 1992, 2005). When people have to interpret the meaning of a gesture, they may focus on different semantic aspects conveyed by one gesture (see, e.g., Cienki 2005). Which meaning aspect they focus on in their interpretation is highly influenced by the accompanying


speech. In the phenomenon under investigation, the interaction between the syntactic and semantic information of the spoken utterance and the gestural objects of mimesis, meaning what is depicted by the hands, is responsible for the different interpretations. This means that the underlying syntactic and also semantic structure of an utterance triggers particular semantic aspects transmitted by the gestures: If an interruption by the speaker exposes the syntactic gap of a noun, the aspect of object, transmitted by the hand shape, is foregrounded or triggered. In a few cases both the information of object and action are made salient by the syntactic gap, but still the gesture is regarded as depicting and referring to an object rather than an action (see Fig. 126.4).

Fig. 126.4: Foregrounding of gestural information through syntactic gaps

If a verb is exposed by a syntactic gap, then the semantic aspect of action is “foregrounded” or triggered, which is depicted by the movement of the hand in the first place. The gesture is conceived as depicting an action. (For an illustration of how gestures foreground verbal meaning, see e.g. Müller 2007, 2008; Müller and Tag 2010.) Thus what becomes visible is that there is an interaction between the structural and the semantic information provided by a syntactic gap and the structural and semantic information conveyed by the gestures.

7. Summary

This study investigated the relation of speech and gestures in cases in which gestures substitute for speech. It was found that syntactic gaps serve as anchor points for gestures to join in interrupted utterances. Referential gestures, more specifically referential gestures re-enacting an action, were used most often. The gestures are not inserted in all kinds of syntactic positions but are integrated in the syntactic gaps of nouns and verbs. These gestures express the information of objects and actions – the same basic information conveyed by nouns and verbs in speech.


When addressing the question of an interaction between speech and gesture, the influence of the syntactic position on the meaning and understanding of the inserted gestures was taken into account. The analysis revealed that the syntactic position foregrounds semantic aspects of the gesture. Noun positions foreground either the information of an object or of an object involved in an action; verb positions foreground the semantic information of action. These different types of information are reflected in different aspects of the gestural form, namely in the hand shape and the movement of the hand. This brief summary allows for a number of insights beyond the scope of the original investigation. Three of them will be discussed in the following section: (i) the relation of gesture and speech, more specifically the presence and absence of speech during gesture use, (ii) different degrees of integrability captured in terms of a continuum, and (iii) implications for the “linguistic potential of gestures” (Müller 2009, volume 1).

8. Discussion

The results of the study suggest a reconsideration of the relationship between speech and gesture types. As described above, some gesture types are characterized by the obligatory presence of speech, such as iconic or metaphoric gestures. Others have been described by an optional presence of speech, as in the case of emblematic gestures. Pantomime is distinguished by the obligatory absence of speech (see McNeill 2005: 7). Against the background of the results presented above, these assumptions become debatable. As was shown, all gesture types investigated in the study were found to replace components of spoken utterances. However, regarding the relationship of gestures to speech, the opposite of the above-mentioned argument was documented. Gestures that have been characterized by the obligatory presence of speech turned out to be used most often in syntactic gaps. Gestures that have been described as optionally occurring with or without speech were deployed in syntactic gaps the least often. To consider these gestures a separate gesture type, as proposed by McNeill (2005, volume 1), does not solve the problem, since all types of gestures, ranging from non-conventionalized iconic gestures to conventionalized emblematic gestures, can be integrated in syntactic gaps. These findings question whether substitutability of speech is a property of certain gesture types and suggest regarding it as a general capacity of gestures.

Furthermore, the findings on the relation of gesture and speech suggest a continuum of integrability. Based on the type of integration, the information distributed across the modalities, and the occurrence as well as the temporal alignment of the modalities, different “types of integration” (Fricke 2012) are put forward (see Fig. 126.5). Accordingly, gestures that are positioned between two spoken utterances show a low degree of integrability, as no structural or functional points of integration can be identified.
Gestures that are cataphorically integrated into a spoken utterance (e.g., Fricke 2012) or gestures that take over the functions of attributes or verbal determinations (Bressem 2012) show a higher degree, as syntactic as well as semantic points of integration can be identified. The highest degree of integration is shown by gestures that are inserted into syntactic gaps. As these gestures constitute the semantic centers of complex constituents, they provide information that is necessary to interpret and make sense of an utterance; in other words, these utterances would not be comprehensible without the integrated gesture(s).

126. Creating multimodal utterances: The linear integration of gestures into speech

Fig. 126.5: Continuum of integrability

A third aspect that should briefly be taken into account is gestures' semantic potential to replace nouns and verbs of spoken utterances. A closer look at the semiotic structure of gestures shows why gestures can be integrated into these syntactic positions: gestures can express the same basic information that is expressed by nouns and verbs of spoken languages, namely object and action information. The object information resides, in most cases, in the hand shape; the action information is reflected in the movement of the hand (e.g., Stokoe 2001). Both types of information are conflated in one gesture. How these types of information are understood depends on the structural and functional information provided by speech, even in cases in which the only information we have 'at hand' is a syntactic position. These findings support the recently discussed assumption that we do not find verb-like or noun-like gestures that might develop into the grammatical categories of verbs and nouns when entering a linguistic system such as a sign language. On the contrary, which gestural aspect is isolated to become linguistic is a matter of convention and is not tied to a particular type of gesture with particular semiotic properties (for a discussion see Ladewig and Bressem Ms.). By and large, the study reveals "gestures' potential for language" (Müller 2009, volume 1): through their semantic potential, gestures are capable of replacing verbal constituents, fulfilling the functions of spoken units, and conveying meaning on their own. Moreover, as gestures can be integrated into spoken utterances, they can be considered parts of language, leading to a conception of language as multimodal (Bressem 2012; Fricke 2012; Ladewig 2012; Müller 2007; Müller et al. 2005).

Acknowledgements

I am grateful to Mathias Roloff for providing the drawings (www.mathiasroloff.de).

9. References

Alibali, Martha W., Sotaro Kita and Amanda J. Young 2000. Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes 15(6): 593–613.
Armstrong, David F. and Sherman Wilcox 2007. The Gestural Origin of Language. Oxford/New York: Oxford University Press.


Bohle, Ulrike 2007. Das Wort ergreifen – das Wort übergeben: Explorative Studie zur Rolle redebegleitender Gesten in der Organisation des Sprecherwechsels. Berlin: Weidler.
Bohle, Ulrike this volume. Gesture and conversational units. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 1360–1368. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana 2012. Repetitions in gestures: Structures and cognitive aspects. PhD dissertation, European University Viadrina, Frankfurt (Oder).
Bressem, Jana this volume. Repetitions in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 1641–1650. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana, Silva H. Ladewig and Cornelia Müller volume 1. Linguistic Annotation System for Gestures (LASG). In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1), 1098–1125. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume. A repertoire of German recurrent gestures with pragmatic function. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 1575–1592. Berlin/Boston: De Gruyter Mouton.
Chafe, Wallace L. 1994. Discourse, Consciousness, and Time: The Flow and Displacement of Conscious Experience in Speaking and Writing. Chicago: University of Chicago Press.
Cienki, Alan 1998. Straight: An image schema and its metaphorical extensions. Cognitive Linguistics 9(2): 107–149.
Clark, Herbert H. 1996. Using Language. Cambridge, UK: Cambridge University Press.
Couper-Kuhlen, Elizabeth and Margret Selting 2001. Introducing interactional linguistics. In: Elizabeth Couper-Kuhlen and Margret Selting (eds.), Studies in Interactional Linguistics, 1–22. Amsterdam/Philadelphia: John Benjamins.
Efron, David 1972. Gesture, Race and Culture. Paris/The Hague: Mouton. First published [1941].
Eisenberg, Peter 2001. Grundriß der deutschen Grammatik: Der Satz. Weimar: Metzler. First published [1998].
Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98.
Enfield, N. J. 2009. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge, UK: Cambridge University Press.
Fricke, Ellen 2007. Origo, Geste und Raum: Lokaldeixis im Deutschen. Berlin: Walter de Gruyter.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin: Mouton de Gruyter.
Goodwin, Charles 1986. Gesture as a resource for the organization of mutual orientation. Semiotica 62(1–2): 29–49.
Gülich, Elisabeth 1994. Formulierungsarbeit im Gespräch. In: Světla Čmejrková, František Daneš and Eva Havlová (eds.), Writing vs. Speaking, 77–91. Tübingen: Narr.
Gülich, Elisabeth and Thomas Kotschi 1995. Discourse production in oral communication: A study based on French. In: Uta M. Quasthoff (ed.), Aspects of Oral Communication, 30–66. Berlin/New York: de Gruyter.
Gülich, Elisabeth and Thomas Kotschi 1996. Textherstellung in mündlicher Kommunikation: Ein Beitrag am Beispiel des Französischen. In: Wolfgang Motsch (ed.), Ebenen der Textstruktur, 37–80. Tübingen: Niemeyer.
Harrison, Simon 2009. Grammar, gesture, and cognition: The case of negation in English. PhD dissertation, Université Michel de Montaigne, Bordeaux 3.


Jackendoff, Ray 1972. Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT Press.
Johnson, Mark 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. Chicago, IL: University of Chicago Press.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary R. Key (ed.), Nonverbal Communication and Language, 207–227. The Hague: Mouton.
Kita, Sotaro 2000. How representational gestures help speaking. In: David McNeill (ed.), Language and Gesture, 162–185. Cambridge: Cambridge University Press.
Kita, Sotaro 2003. Pointing: Where Language, Culture, and Cognition Meet. Mahwah, NJ: Lawrence Erlbaum Associates.
Krauss, Robert M., Yihsiu Chen and Rebecca F. Gottesman 2000. Lexical gestures and lexical access: A process model. In: David McNeill (ed.), Language and Gesture, 261–283. Cambridge: Cambridge University Press.
Krauss, Robert M., Yihsiu Chen and Purnima Chawla 1996. Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us? In: Mark P. Zanna (ed.), Advances in Experimental Social Psychology, 389–450. San Diego, CA: Academic Press.
Ladewig, Silva H. 2012. Syntactic and semantic integration of gestures into speech: Structural, cognitive, and conceptual aspects. PhD dissertation, European University Viadrina, Frankfurt (Oder).
Ladewig, Silva H. this volume. Recurrent gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 1558–1575. Berlin/Boston: De Gruyter Mouton.
Ladewig, Silva H. and Jana Bressem 2013. New insights into the medium hand: Discovering structures in gestures based on the four parameters of sign language. Semiotica 197: 203–231.
Ladewig, Silva H. and Jana Bressem Ms. Looking for nouns and verbs in gestures: Empirical grounding of a theoretical question.
Ladewig, Silva H., Cornelia Müller and Sedinha Teßendorf 2010. Singular gestures: Forms, meanings and conceptualizations. Paper presented at the 4th conference of the International Society for Gesture Studies, Frankfurt (Oder), Germany.
Ladewig, Silva H., Sedinha Teßendorf and Cornelia Müller in preparation. Gesture semantics: Forms, meanings and conceptualizations of spontaneous gestures.
Langacker, Ronald W. 1987. Foundations of Cognitive Grammar: Theoretical Prerequisites. Stanford, CA: Stanford University Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago, IL: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago, IL: University of Chicago Press.
McNeill, David 2007. Gesture and thought. In: Anna Esposito, Maja Bratanić, Eric Keller and Maria Marinaro (eds.), Fundamentals of Verbal and Nonverbal Communication and the Biometric Issue, 20–33. Amsterdam: IOS Press.
McNeill, David volume 1. The co-evolution of gesture and speech, and downstream consequences. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1), 480–512. Berlin/Boston: Mouton de Gruyter.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. PhD dissertation, Cornell University. Ann Arbor, MI: UMI.
Mittelberg, Irene 2010. Geometric and image-schematic patterns in gesture space. In: Vyvyan Evans and Paul Chilton (eds.), Language, Cognition, and Space: The State of the Art and New Directions, 351–385. London: Equinox.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.


Müller, Cornelia 2007. A dynamic view on gesture, language and thought. In: Susan D. Duncan, Justine Cassell and Elena T. Levy (eds.), Gesture and the Dynamic Dimension of Language, 109–116. Amsterdam/Philadelphia: John Benjamins.
Müller, Cornelia 2008. What gestures reveal about the nature of metaphor. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 249–275. Amsterdam: John Benjamins.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Routledge's Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Müller, Cornelia 2010a. Mimesis und Gestik. In: Gertrud Koch, Martin Vöhler and Christiane Voss (eds.), Die Mimesis und ihre Künste, 149–187. Paderborn/München: Fink.
Müller, Cornelia 2010b. Wie Gesten bedeuten: Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1), 202–217. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gesture: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1), 707–733. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Hedda Lausberg, Ellen Fricke and Katja Liebal 2005. Towards a grammar of gesture: Evolution, brain, and linguistic structures. Berlin: Antrag im Rahmen der Förderinitiative „Schlüsselthemen der Geisteswissenschaften – Programm zur Förderung fachübergreifender und internationaler Zusammenarbeit“.
Müller, Cornelia and Susanne Tag 2010. The dynamics of metaphor: Foregrounding and activating metaphoricity in conversational interaction. Cognitive Semiotics 6: 85–120.
Rauscher, Frances H., Robert M. Krauss and Yihsiu Chen 1996. Gesture, speech, and lexical access. Psychological Science 7(4): 226–231.
Schegloff, Emanuel A. 1984. On some gestures' relation to talk. In: J. Maxwell Atkinson and John Heritage (eds.), Structures of Social Action, 266–296. Cambridge: Cambridge University Press.
Schmitt, Reinhold 2004. Die Gesprächspause: Verbale Auszeiten aus multimodaler Perspektive. Deutsche Sprache 32(1): 56–84.
Schönherr, Beatrix 1997. Syntax – Prosodie – nonverbale Kommunikation: Empirische Untersuchungen zur Interaktion sprachlicher und parasprachlicher Ausdrucksmittel im Gespräch. Tübingen: Niemeyer.
Schwitalla, Johannes 1997. Gesprochenes Deutsch: Eine Einführung. Berlin: Erich Schmidt Verlag.
Selting, Margret 1995. Der „mögliche Satz“ als interaktiv relevante syntaktische Kategorie. Linguistische Berichte 158: 298–325.
Selting, Margret 1998. Fragments of TCUs as deviant cases of TCU-production in conversational talk. InLiSt 9, University of Konstanz. http://kops.ub.uni-konstanz.de/volltexte/2000/2467/pdf/2467_2001.pdf, accessed June 2006.
Selting, Margret, Peter Auer, Birgit Barden, Jörg R. Bergmann, Elizabeth Couper-Kuhlen, Susanne Günthner, Christoph Meier, Uta M. Quasthoff, Peter Schlobinski and Susanne Uhmann 1998. Gesprächsanalytisches Transkriptionssystem (GAT). Linguistische Berichte 173: 91–122.
Slama-Cazacu, Tatiana 1976. Nonverbal components in message sequence: "Mixed syntax". In: William Charles McCormack and Stephen A. Wurm (eds.), Language and Man: Anthropological Issues, 217–227. The Hague: Mouton.
Stokoe, William C. 2001. Semantic phonology. Sign Language Studies 71: 107–114. First published [1991].
Streeck, Jürgen 1988. The significance of gesture: How it is established. IPrA Papers in Pragmatics 2(1/2): 60–83.
Streeck, Jürgen 2002a. A body and its gestures. Gesture 2(1): 19–44.


Streeck, Jürgen 2002b. Grammars, words, and embodied meanings: On the uses and evolution of so and like. Journal of Communication 52(3): 581–596.
Streeck, Jürgen and Ulrike Hartge 1992. Previews: Gestures at the transition place. In: Peter Auer and Aldo di Luzio (eds.), The Contextualization of Language, 135–157. Amsterdam: John Benjamins.
Wittenburg, Peter, Hennie Brugman, Albert Russel, Alex Klassmann and Han Sloetjes 2006. ELAN: A professional framework for multimodality research. In: Proceedings of LREC 2006. Genoa, Italy.

Silva H. Ladewig, Frankfurt (Oder) (Germany)

127. Gestures and location in English

1. Introduction
2. Core concepts in this paper
3. The frontal axis
4. The present study
5. Using the process of translation to attribute frontal properties to Ground objects
6. Different perspectives in speech and gesture
7. Conclusion
8. References

Abstract

This paper examines two ways in which speakers express location along the frontal (front/back) axis. It is already known that English speakers can attribute a frontal property to an object by mentally projecting, in mirror-like fashion, the frontal surface of an intrinsically oriented person or thing onto it: this is known as a "relative" frame of reference (Levinson 1996, 2003). Here, we show that such mirror-like rotation need not occur and that speakers can instead use the process of "translation", a process typically associated with languages like Hausa (Hill 1982), but not English. Furthermore, when this occurs, speakers in our study performed gestures that clarified this unusual attribution of the frontal property. Hence, speech and gesture worked hand in hand to express an ambiguous and unusual spatial operation. Secondly, we show that speakers can conceptualize location along the frontal axis using two different perspectives, one of which is expressed in speech, the other in gesture. This distribution of perspectives across the two communicative modalities reflects the complex pattern of spatial conceptualization that can underlie how speakers consider locative relationships along the frontal axis.

[Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1677–1686]

1. Introduction

In this paper we explore two ways in which speakers use speech and gesture to express object location. This is a new approach to the study of locative expressions: while previous studies have looked at how speakers gesture when talking about spatial topics such
as object motion and placement events (see, for example, Emmorey and Casey 2001; Gullberg 2011; Kita and Özyürek 2003; McNeill 2000), few (for example, Arik 2009) have considered how speakers gesture when encoding location. We therefore seek to develop this underexplored aspect of the literature by reporting on two ways in which speakers use speech and gesture to express location along the frontal (front/back) axis. Firstly, we show that English speakers can attribute a frontal property to an object using a process known as "translation" (see below), as opposed to the 180-degree rotation normally associated with English (Levinson 1996, 2003). Furthermore, in the examples that we analyze, speakers accompany this use of translation with co-speech gestures: these gestures clarify which side of the object(s) is attributed the frontal surface. Secondly, we examine how speakers can express locative relationships using two different "perspectives" (see below) simultaneously: only one of these perspectives is expressed in speech; the other emerges in gesture. Gestures play a key role in the phenomena under analysis here, because they carry salient spatial information that is not available to the addressee in the accompanying lexical locative expression. This highlights the importance of attending to speakers' gestures in descriptions of object location.

2. Core concepts in this paper

Before undertaking a closer inspection of the frontal axis itself, we will introduce some core concepts and terminology used in this paper. We begin by identifying the key semantic roles in static locative expressions, before moving on to the concepts of frames of reference and perspective.

2.1. Figure, Path, and Ground

Static locative expressions establish the location of one object (or a group of objects) in relation to one or several reference objects. In what follows, we adopt Talmy's (2000) terminology and call the core semantic roles of such expressions Figure, Path, and Ground.

(1) The boy [Figure] is in front of [Path] the tree [Ground]

The Figure is the object(s) being located; in example (1) this is the boy. The Ground is the object(s) in relation to which the Figure is located; here, this is encoded by the tree. The Path is the nature of the locative relationship that exists between the Figure and the Ground; in the example above it is encoded by the spatial unit in front of.
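As a reading aid only, the three semantic roles above can be represented as a small data structure. This sketch is not part of the chapter's method; the class and field names are my own.

```python
# Illustrative sketch (not from the chapter): Talmy's (2000) semantic
# roles for a static locative expression as a simple data structure.
from dataclasses import dataclass

@dataclass
class LocativeExpression:
    figure: str   # the object(s) being located
    path: str     # the nature of the locative relationship
    ground: str   # the reference object(s)

    def render(self) -> str:
        """Recompose the expression from its three semantic roles."""
        return f"{self.figure} is {self.path} {self.ground}"

# Example (1) from the text, decomposed into its roles:
ex1 = LocativeExpression(figure="the boy", path="in front of", ground="the tree")
print(ex1.render())  # the boy is in front of the tree
```

The decomposition makes explicit that the analysis tracks roles, not grammatical categories: any utterance locating a Figure relative to a Ground fills these three slots, whether or not it contains a verb or a spatial preposition.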

2.2. Frames of reference

Expressing direction along a spatial axis (such as the frontal axis) requires the use of a system that provides spatial cues for navigation. Levinson (1996, 2003) calls these systems frames of reference and divides them into three categories: "absolute", "intrinsic", and "relative". (Equivalents of these three frames of reference are found elsewhere in the literature; for instance, they appear as "environment-centred", "object-centred", and "viewer-centred" reference frames in Carlson-Radvansky and Irwin [1993]. For a detailed discussion of frames of reference in the literature, see Watson [2006].) We will not be concerned with the "absolute" category in what follows. When a speaker uses an intrinsic frame of reference, "the figure object is located with respect to what are often called intrinsic or inherent features of the ground object" (Levinson 1996: 366; original emphasis). Imagine that the boy in example (1) were located in front of a house instead of a tree. A house has an intrinsic front: this is typically the side that we use to enter it. If we were referring to the boy's location in front of this side, we would therefore be using an intrinsic frame of reference. In contrast, a relative frame of reference requires the speaker to graft a front onto the Ground. Consider example (1) again. The Ground object the tree does not possess an intrinsic front; rather, the speaker must mentally project their own front (or someone/something else's) onto the tree. As far as English is concerned, this requires the 180-degree rotation of the intrinsic front onto the Ground entity (Levinson 1996: 369–371). Hence, in example (1), the boy may be facing the tree: this would allow the speaker to conceptually rotate the boy's front onto the tree opposite, thereby explaining the use of in front of. This process of rotation is represented in the diagram on the right half of the image below.

Fig. 127.1: Assigning front/back spatial properties under translation and 180° rotation. Thank you to Simon Trevaks for creating this image following my specifications. It is based on those in Levinson (2003).

Not all languages behave in this way, however. For example, Hausa (Hill 1982) uses a system in which speakers graft a frontal axis directly onto the Ground object without any rotation. This means that the “front” of the tree in the example above would be facing in the same direction as the speaker (see the left diagram in Fig. 127.1, above). Levinson (2003b) refers to this process as the “translation” of spatial properties.
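The contrast between the two operations can be made concrete with 2-D direction vectors. This is my own minimal formalization, not the chapter's; the function names are assumptions.

```python
# Sketch (my notation, not the chapter's): assigning a "front" direction
# to a Ground object that lacks one, given the viewer's front vector.
# Under "translation" (the Hausa-style operation) the viewer's front is
# copied onto the Ground unchanged; under 180-degree rotation (the
# canonical English relative frame) it is reversed, so the Ground ends
# up "facing" the viewer.

def translate_front(viewer_front):
    """Translation: the Ground's front points the same way as the viewer's."""
    return viewer_front

def rotate_front(viewer_front):
    """180-degree rotation: the Ground's front points back at the viewer."""
    x, y = viewer_front
    return (-x, -y)

viewer = (0, 1)                 # viewer faces "into" the scene (+y)
print(translate_front(viewer))  # (0, 1)  -> Ground faces away from the viewer
print(rotate_front(viewer))     # (0, -1) -> Ground faces the viewer
```

The point of the contrast: under rotation, "in front of the tree" picks out the region between viewer and tree; under translation, it picks out the region on the far side of the tree, which is exactly the ambiguity the gestures in section 5 are shown to resolve.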

2.3. Perspective

When speakers view a visual image and are asked to describe the locative relationships in the represented scene, they have two possibilities: to encode location from an imagined point within the scene itself, or from one external to it. In her work on German Sign Language (DGS), Perniss (2007) calls the scene-internal option character perspective and the scene-external one observer perspective. I will retain this terminology in what follows.

3. The frontal axis

The frontal axis corresponds to the imaginary lines that extend forward and backward from the region understood as a reference object's "front" and "back". It subsumes the
componential “front” and “back” half-line axes described by Herskovits (1986): it therefore incorporates location that is both in front of and behind an object. As explained above, an object may possess intrinsic front and back spatial properties, or these may be attributed to them through the use of a relative frame of reference.
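The two half-line axes can be given a toy formalization: with a front vector assigned to the Ground, the sign of a dot product decides which half-line a Figure falls on. This is my own sketch under assumed names, not an analysis from the paper.

```python
# Sketch (assumption: my own formalization, not from the chapter): the
# frontal axis as two half-lines. Given a Ground position and its front
# vector, the sign of a dot product says whether a Figure lies in the
# "front" or "back" region of the axis.

def frontal_region(figure, ground, ground_front):
    gx, gy = ground
    fx, fy = figure
    dx, dy = fx - gx, fy - gy              # Ground -> Figure displacement
    dot = dx * ground_front[0] + dy * ground_front[1]
    if dot > 0:
        return "in front of"
    if dot < 0:
        return "behind"
    return "neither"                       # Figure lies on the lateral axis

print(frontal_region((0, 2), (0, 0), (0, 1)))   # in front of
print(frontal_region((0, -2), (0, 0), (0, 1)))  # behind
```

Note that the predicate only becomes decidable once a front vector exists, which is the crux of the chapter's question: where that vector comes from when the Ground has no intrinsic front.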

4. The present study

The data in this paper are taken from a study that investigates how native speakers of English and French use speech and gesture to express static locative relationships. The 20 English-speaking participants were native speakers of Australian English, and all were students at the University of New South Wales, Sydney, at the time of recording. Participants were divided into 10 male/female pairs, with one person assigned the role of "describer" and the other that of "receiver". The describer was shown two pictures, one at a time, and asked to describe each one to the receiver, who was seated opposite them. Describers had five minutes to describe each picture, and were told to focus on the location of 14 listed items. After this time, the receiver was allowed to ask questions for up to five minutes, before having to choose the picture described from four different versions of it. The data reported on here are drawn uniquely from the monological descriptions of the lounge room scene (see appendix).

4.1. Coding of speech and gesture

Data were coded using ELAN. Speech was transcribed verbatim and gesture strokes were annotated. A stroke is "the meaningful part of the gestural movement where the spatial excursion of the limb reaches its apex" (Gullberg, Hendriks, and Hickmann 2008: 213). Given the substantial amount of data collected, we only analyzed locative expressions in which each of the 14 specified objects occurred as a Figure for the first time in a speaker's description. Locative expressions are defined in this study as utterances that express the location of one object or a group of objects; this is a functional definition that does not require any particular category of grammatical item to be present (for example, a verb or a spatial preposition) (see Tutton [2013] for a justification of this approach). We adopt the transcription code below when presenting examples from the data.

underline   underlining indicates occurrence with a gesture stroke
..          two dots represent pauses of three seconds or less
( )         speech between brackets is given to provide context: it is not part of the locative expression being analyzed

5. Using the process of translation to attribute frontal properties to Ground objects

Certain uses of in front of revealed that speakers used the process of translation, as opposed to 180-degree rotation, to attribute frontal properties to Ground objects.

(2) just a bit in front of those items there is a bone (EngDesc10)


In example (2), those items are the previously lexicalized sofa, book, and dog. The Figure, a bone, is described as being in front of this group of objects. Yet the three objects that comprise this group do not possess a collective front, nor are their individual intrinsic fronts facing in the same direction (see picture extract above). Specifically, the dog is facing the viewer, the sofa is facing the back of the room (from the speaker's viewpoint as an external observer), and the book is lying open and does not have an intrinsic "front" in this context. Only the front of the sofa is correctly oriented to establish a frontal "search domain" (Levinson 2003) within which the bone may be located. How, then, is this front attributed to the book and the dog, such that the bone is in front of these items? Not only are the three Ground objects close to each other, but they are also located along a common lateral (left/right) axis: this means that they are readily conceptualized as a linear group. The use of in front of appears to be licensed by this linear arrangement, relying on a conceptualized extension of the sofa's frontal surface to include its two laterally aligned neighbors. The speaker therefore maps the sofa's front onto the neighboring objects, using the process of translation normally associated with languages like Hausa (see Hill 1982). That is, there is no 180-degree rotation of the sofa's frontal surface such that the "front" attributed to the group of objects would face in the opposite direction (i.e., towards the speaker). This example clearly shows that the speaker has attributed a "front" in a manner which is not conventionally associated with English (cf. Levinson 1996, 2003). While we can deduce this explanation from having visual access to the picture, the receiver in the experiment has no such visual privilege.
Hence, they cannot know that the front of this group of objects results from an extension of the sofa’s frontal surface to incorporate its laterally aligned neighbors. Furthermore, they cannot know whether this encoded front faces the back of the room, or the viewer instead. The describer needs to resolve this directional ambiguity so that the addressee can correctly understand the Figure’s location in the picture. This directional explication occurs in the gesture shown in the stills above. The speaker’s left hand, fingers bunched together as though holding the conceptualized bone, moves forward and down: this reveals that the bone is located further back into the picture, away from the viewer. The gesture shows that the front which the speaker has assigned to the group of Ground objects faces the back of the room from the speaker’s viewpoint. It therefore resolves directional ambiguity and works alongside the lexical expression to create an expression of location.


The process of attributing a front to a neighboring, laterally aligned object is also noted in another speaker's discourse.

(3) in front of her to the right like in front of the book which is next to the sofa there's a bone (EngDesc2)

Books that are lying open, such as the one in the picture, do not have intrinsic "frontal" surfaces in relation to which location may be established. In example (3), the region referenced by in front of lies away from the book, back into the picture (see picture extract above). As in the preceding example, the front of the book borrows from the intrinsic front of another, laterally aligned object: this is the preceding Ground, her. In the previous expression in her discourse, the speaker establishes the lady's location on the sofa. It is therefore possible that the speaker's attention is also drawn to the frontal orientation of the sofa, which may help to trigger the use of in front of in example (3). The lady, the sofa on which she is sitting, and the book are all aligned along a common lateral axis. The speaker acknowledges this when she states that the book is next to the sofa: as she does so, she gestures to the right (this gesture is not shown here). Just as in example (2), this alignment along the lateral axis triggers the mapping of the lady's front onto the book. Once again, gesture expresses directional information that establishes the location of the Figure further back into the picture: this gesture is shown in the stills above. As in example (2), speech and gesture collaborate to express location and direction along the frontal axis. In both instances, this involves a spatial context in which the frontal surface of one object is mapped onto a neighboring, laterally aligned object. In neither example does this involve the 180-degree rotation of front/back spatial properties. Gesture clarifies the location of the attributed "front" by expressing movement forward and away from the speaker: this shows that the front faces the back of the room from the speaker's viewpoint. The mapping of a frontal property onto laterally aligned Ground objects highlights the interconnectivity of the frontal and lateral axes. A certain amount is already known
about how this axial interconnectivity affects the attribution of spatial properties. We know, for example, that lateral properties are dependent on frontal ones. As pointed out by Lyons (1977: 694): "Recognition of the difference between right and left is dependent upon the prior establishment of directionality in the front-back dimension." However, examples (2) and (3) above show that frontal properties can also be determined by the alignment of objects along the lateral axis: this means that the lateral axis can play a role in the attribution of frontal properties. Nevertheless, examples (2) and (3) both attribute fronts using intrinsically oriented objects (i.e., the "sofa" and the "lady"): this means that a "front" has to exist in the first place. The way in which the speakers assign these fronts in the two examples above is important. That is, they do not use the 180-degree rotation of the intrinsic front of either the sofa or the lady. Rather, the mapping is achieved by a process of translation onto the laterally aligned objects. As far as we are aware, this finding has not previously been reported for static locative expressions in the literature. However, Hill (1982: 23) has reported that speakers can attribute frontal properties that "align" with their own when motion is involved:

When people are, say, riding in a vehicle, they are more likely to describe a further object as in front of a nearer one (e.g., Oh, look at that cemetery up there in front of those trees). When in front of is used with an aligned field in their way, we often find such elements as out, out there, or up there in the utterance. It is as though a need is felt to signal that the constructed field is not, as it usually is, a facing one. (Hill 1982: 23)

Hence there is a precedent of sorts for this discovery, although Hill’s observation works within the context of a motion event. Miller and Johnson-Laird (1976: 396) also discuss an example that has a certain degree of similarity to what we have noted in our data. (4)

Put it in front of the rock

One interpretation of this phrase is that the object “it” should be placed in front of the surface of the rock which “ego” is facing (Miller and Johnson-Laird 1976: 396). However, Miller and Johnson-Laird also suggest that “if ego is thinking of himself as in a row of objects behind the rock, ‘in front of the rock’ can mean on the far side of ego” (Miller and Johnson-Laird 1976: 396). This means that the front of the rock would be facing in the same direction as ego. On one level, this is similar to our finding because it suggests that English speakers can use the process of translation to apply front and back properties to Ground objects. On another level, there are important differences from what we have discovered. Firstly, in Miller and Johnson-Laird’s example, it is ego which is in a row of objects, whereas in our data it is the Ground objects, not the speaker, that are part of a row (see examples 2 and 3). Secondly, the “front” attributed to the Ground in our examples borrows from the intrinsic frontal property of a neighboring object in this row; Miller and Johnson-Laird state no such condition in their explanation of example (4). The examples from our data therefore bring to light a new way in which speakers attribute frontal properties when using a relative frame of reference in static locative expressions. Furthermore, in each of the cases analyzed, the speaker uses a gesture that clarifies the attribution of this frontal property. This is of value for the addressee, who cannot know, from the lexical locative expression alone, that this process of translation has taken place.


VIII. Gesture and language

6. Different perspectives in speech and gesture

The following example shows that different perspectives can occur in speech and gesture (I would like to thank Asli Özyürek for suggesting the idea of a dual perspective in this example), and that both perspectives may potentially be present within a single gesture. The speaker imagines herself near the door in the picture (“you’ve just walked in the door”). This reveals the use of a character perspective in speech.

(5)

(if you’ve just walked in the door the rug’s on the floor) and then.. next to it is the television so like in front as you’re facing it

Just before uttering next to, the speaker moves her hand forward and down, in keeping with the idea of frontal direction encoded by in front as you’re facing it. This gesture, which concerns the location of the television, is shown in the stills above. The speaker’s body is turned slightly to the right, such that we understand her hand placement to indicate forward direction. While the gesture’s trajectory is consistent with the use of character perspective, its hand shape and orientation are not. The rectangular form of the hand iconically suggests the flat screen of the television, and its orientation in relation to the speaker suggests that she is viewing this screen. However, the speaker would see the side of the television from her imagined location near the door in the picture (see the picture extract above), not its front. Instead, her hand’s depiction of shape, as well as its orientation, is consistent with the television as seen by an external observer of the picture. Furthermore, the speaker executes this gesture with her right hand, although the television would be on her left-hand side when viewed from her imagined location in the picture. However, the television is on the right-hand side when viewed by an external viewer: this provides further evidence for the use of an observer perspective in gesture. Hence, it would seem that the speaker is using a character perspective in speech, and a dual observer/character perspective in gesture. Let us recapitulate the argument for a dual perspective in gesture. The speaker’s hand moves forward along the frontal axis: this is consistent with the location of the television when the scene-internal character perspective is used. In contrast, the shape and orientation of the hand suggest that the speaker is facing the television screen: this indicates the use of a scene-external observer perspective. This interpretation is bolstered by the speaker’s use of her right hand, which seemingly reflects the television’s location on the right-hand side of the picture as we view it. If our interpretation of this example is correct, then this dual perspective in gesture clearly brings to light the multi-layered nature of spatial conceptualization: that is, speakers can simultaneously conceptualize location using both observer and character perspectives. It is also important to realize that the observer perspective occurs in gesture alone, while speech focuses solely on character perspective. This indicates that a choice of perspective is not binding for both modalities, and that variation across speech and gesture can exist. A slightly different interpretation of (5) is that the gesture uses an observer perspective only, as opposed to a conflated character/observer one. In our explanation above, we proposed that the hand’s forward trajectory suggests the distance of the television in relation to the speaker’s imagined location within the picture. However, it may also represent the television’s location in the background of the picture as viewed by the scene-external observer: hence the gesture may represent location exclusively from an observer perspective. Ultimately, whether the speaker conflates both perspectives in gesture or simply uses observer perspective alone is not a critical distinction. The fact remains that speech encodes location from a purely character perspective, while gesture does so using an observer one.

7. Conclusion

This paper has identified two unusual ways in which speakers express location along the frontal axis. In both cases, the role of gesture is crucial. In the first case, the speaker attributes a frontal surface to an object or group of objects using the process of translation. However, the speaker’s addressee cannot know that this process has taken place simply by hearing the lexical locative expression: it is the speaker’s gesture that can provide them with the necessary clarification. In the second case, the speaker uses a different perspective in gesture to that encoded in speech. This, arguably, may not be beneficial for the addressee, as the perspective in gesture conflicts with the one in speech. However, on a cognitive level, the gesture shows that speakers are capable of considering location from multiple perspectives simultaneously. This suggests an important sophistication in our capacity to conceptualize static locative relationships. It remains to be established in a larger study exactly why such multiple perspectives are adopted, and when they are likely to occur. This, in turn, will enable us to better understand how speakers process and express static locative relationships.

8. References

Arik, Engin 2009. Spatial language: Insights from sign and spoken languages. Unpublished PhD dissertation, Purdue University, West Lafayette, Indiana.
Carlson-Radvansky, Laura A. and David E. Irwin 1993. Frames of reference in vision and language: Where is above? Cognition 46(3): 223–244.
Emmorey, Karen and Shannon Casey 2001. Gesture, thought and spatial language. Gesture 1(1): 35–50.
Gullberg, Marianne 2011. Language-specific encoding of placement events in gestures. In: Jürgen Bohnemeyer and Eric Pederson (eds.), Event Representations in Language and Cognition, 166–188. Cambridge: Cambridge University Press.
Gullberg, Marianne, Henriette Hendriks and Maya Hickmann 2008. Learning to talk and gesture about motion in French. First Language 28(2): 200–236.
Herskovits, Annette 1986. Language and Spatial Cognition: An Interdisciplinary Study of the Prepositions in English. Cambridge: Cambridge University Press.
Hill, Clifford 1982. Up/down, front/back, left/right: A contrastive study of Hausa and English. In: Jürgen Weissenborn and Wolfgang Klein (eds.), Here and There: Cross-linguistic Studies on Deixis and Demonstration, 13–42. Amsterdam: John Benjamins.
Kita, Sotaro and Asli Özyürek 2003. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48(1): 16–32.
Levinson, Stephen C. 1996. Language and space. Annual Review of Anthropology 25: 353–382.
Levinson, Stephen C. 2003. Space in Language and Cognition: Explorations in Cognitive Diversity. Cambridge: Cambridge University Press.
Lyons, John 1977. Semantics, Volume 2. Cambridge: Cambridge University Press.
McNeill, David 2000. Catchments and contexts: Non-modular factors in speech and gesture production. In: David McNeill (ed.), Language and Gesture, 312–328. Cambridge: Cambridge University Press.
Miller, George A. and Philip N. Johnson-Laird 1976. Language and Perception. Cambridge: Cambridge University Press.
Perniss, Pamela M. 2007. Space and iconicity in German Sign Language (DGS). PhD dissertation, MPI Series in Psycholinguistics 45, University of Nijmegen.
Talmy, Leonard 2000. Toward a Cognitive Semantics. Cambridge, MA: MIT Press.
Tutton, Mark 2013. A new approach to analysing locative expressions. Language and Cognition 5(1): 25–60.
Watson, Matthew E. 2006. Reference frame selection and representation in dialogue and monologue. Unpublished PhD dissertation, University of Edinburgh.

Mark Tutton, Nantes (France)

Appendix A: Lounge-room scene


128. Gestural modes of representation as techniques of depiction

1. Gestures as forms of visual and manual thinking
2. Four basic modes of representation
3. Abstraction and schematization: Cognitive-semiotic processes motivating the meaning of form
4. Iconicity and motivation of meaning in gestures and signs
5. Acting and representing: A cognitive-semantic systematics of meaning construal in gestures
6. Conclusion
7. References

Abstract

This chapter addresses the creation of gestures from hand movements. It draws upon Arnheim’s and Gombrich’s Gestalt-psychological accounts of different modes of representation in the visual arts and on Bühler’s linguistic and psychological theory of the representational function of language (Arnheim [1954] 1969; Bühler [1934] 1982; Gombrich 1960). Gestures are considered forms of visual and manual thinking, shaped by the particular techniques employed in making gestures. It is suggested that these techniques are fundamental to the motivation of gestural meaning. They go along with abstraction and generalization of meaning and imply cognitive-semiotic processes of metonymy and metaphor. The chapter is a revised version of Müller’s four-fold proposal (acting, molding, drawing, representing; Müller 1998a, b, 2009) and includes an overview of research into the iconic motivation of gestures and signs. It concludes with a brief presentation of a cognitive-semantic systematics of gestural depiction which departs from Talmy’s (1975, 1985, 1987) notion of the motion event as a conceptual structure. As a result, the four modes of representation are broken down into two: acting and representing.

1. Gestures as forms of visual and manual thinking

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1687–1702.

Imagine an early 19th-century painting of an English park. What we see is a sunny, late-summer day in a peaceful atmosphere, with green meadows, cows, a lake, an array of woods, a large sky and, hidden in one of the woods, a mansion. To us, the painting appears to be a quite natural and realistic representation of an English park and landscape. The art theorist Ernst Gombrich, however, raises the question of what is “more” realistic: John Constable’s painting of Wivenhoe Park, Essex, which he painted in 1816, or the black and white photographs of the park made roughly 140 years later (Gombrich 1960)? Moreover, Gombrich gives us two photographs of Wivenhoe Park taken from exactly the same point of view. One shows the park and the mansion in strong contrasts, probably in the early evening: the meadows and woods are very dark, and only the mansion is light and clearly visible. The other renders the meadows and woods in different shades of grey; here the mansion does not contrast much with the park, although it shows the lightest grey in the picture apart from the sky. This photograph displays the mansion in what could be a light, sunny midday moment. To the naive observer, all three artful representations of the park and the mansion feel rather “realistic”; put differently, they seem to represent, in a rather straightforward and “realistic” fashion, Wivenhoe Park at different moments in time. Gombrich (1960), however, argues that they are not realistic at all: what we see in the pictures are conceptualizations of a perceived reality, a reality which was perceived by a specific person at a given moment in time with a specific goal, namely painting a picture or taking a black and white photograph. Each specific goal implies a specific psychological perspective on what we conceive of as the outside world or “reality”. What we actually see in the pictures is what Gombrich terms the “illusion of reality”. The two photographs illustrate this point quite clearly, because they were not taken at two moments in time but are two prints of one negative, exposed to light for different amounts of time. Thus the evening versus midday atmosphere is an illusion created by the artist; it is not a “reflection” of a real evening at Wivenhoe Park. Similarly, Constable’s painting presents just another way of looking at Wivenhoe Park, not the park “as it really looks”. Hence, what we see in the picture is not a representation of reality but an artfully created illusion of reality. The illusion of the evening atmosphere is a creation of the artist: a representation of the artist’s conceptualization of what he perceived at a certain moment in time, not a simple projection of reality. In the arts, reality is illusion, and the images we see are products of conceptualizations of perceived (or imagined) objects by a certain artist. Gombrich’s analysis is in harmony with Rudolf Arnheim’s extensive work on the psychology of the creative eye and on art and visual perception (Arnheim 1969), and it is Arnheim who expresses this line of argument in a nutshell when he says “[…] that the creation of images in the arts and elsewhere, is not triggered by an optical projection of the object to be represented, but is a correspondence of what has been observed of an object” (Arnheim 1978: 133).
Images in the arts show how artists see a landscape, an object, a person; they are products of visual thinking. Visual thinking, however, is more than the “pure” perception of the artist: it is also shaped by the medium in which images are rendered. Is it an oil painting, a sketch, or a photograph? Gombrich (1960) discusses three pictures of Dedham Vale, again by John Constable, to illustrate this point. Each image uses a different medium: two pencil sketches (one with a soft pencil, another with a hard one) and one oil sketch. Again we see pictures of a landscape from one and the same perspective, but of course the products are very different from one another. The sketches work with gradations of grey ranging from white to black and use lines and hatchings, whereas the oil sketch uses the available spectrum of oil colors and works with colored areas, fields, dimensions, “lumps” of color, which are orchestrated in relation to one another. The artist thus perceives and conceives the “world” through and with these depictive techniques:

It goes without saying that the artist can only render what his tools are suited for. The technology limits his freedom of choice. One can, and must, make completely different strokes, note different characteristic forms and relations of the motif using a pencil rather than a paintbrush. One who holds a drawing pad on his knees will be on the lookout for every stroke which can be rendered in lines, or – as one might say in abbreviated fashion – he will think in lines. The one who stands before a canvas with a paintbrush and a palette, on the other hand, will pay attention to the relations between dimensions, will think in dimensions. (Gombrich 1978: 85, translation Alan Cienki)

Thinking in lines or thinking in dimensions: this is visual thinking. Artists think “through” and “with” the ‘modes of representation’ (Darstellungsweisen) that they work with. Now, we would like to suggest that similar processes are at stake when people create gestures from hand movements. Gestures are made from a small set of modes of representation, and when we gesture, we think through and within this frame. Gestures are forms of visual thinking in a manual modality. They come with specific perspectives on the world they depict, perspectives that are individual and subjective views of the world. Gestures are conceptualizations of perceived and conceived experiences that merge visual and manual ways of thinking through and in movement. This perspective alludes to Dan Slobin’s concept of “thinking for speaking”, in which thinking during language use is oriented towards language-specific concepts, as well as to Cienki and Müller’s adaptation of it to gesture: “thinking for speaking and gesturing” (Cienki and Müller 2008; Slobin 1991). Cienki and Müller point out that speakers economically make use of the advantages of the visual and audible modalities at hand. In doing so, they orient their thinking towards these manual expressive forms. Thus, when a speaker tells a story about a picture of the Spanish king Juan Carlos and displays the main object of the story with several gestures, based on different modes of representation (“molding”, “drawing”, “acting”, and “representing”), he thinks of the memorized perceptual object in several different manners. In fact, by using different representational modes, he constructs a subjective “illusion of reality”. He does not depict reality in any “objective” manner but composes an audio-visual story. When he first depicts the picture of the king, he uses his hands as if molding or shaping a three-dimensional object in space. The second time he manually refers to the picture, he acts as if his extended fingers served as a pencil and left visible traces in the air, acting as if he could actually draw or outline the two-dimensional shape of the picture.
He then acts as if holding a small object in his hands (depicting a little crown), which he places on top of the picture frame. In the fourth and last case, the speaker’s hands become the object itself: they represent the picture as a whole or, as one could say, are used as a kind of manual sculpture of the royal portrait. By employing four different modes of representation, four different gestures are created, depicting different facets of one and the same perceived and memorized object. Each gesture offers a different construction, a different conceptualization, and a different way of thinking visually and manually about the portrait. The different gestures foreground different aspects of the gestured object: (i) in the molding case, it is the three-dimensional quality of the object; (ii) in the drawing case, the object is reduced to a geometrical, two-dimensional line drawing; (iii) in the holding case, an ornament of the object is depicted; and (iv) in the representing case, the spatial-material quality of the object (a flat object) is highlighted.

Now, how are these gestures actually composed into the development of the story? The young man describes a peculiar childhood event with an interesting, surprising, and entertaining plot. When living in exile in Venezuela, his parents had a picture of the Spanish king Juan Carlos hanging on the wall of their living room. Juan Carlos was an icon of the democratic opposition during Francoism in Spain; in an attempt to challenge the young Spanish democracy in the late seventies, the military entered the parliament, trying to bring the Francoists back to power through a putsch. A few days before this happened, the picture of Juan Carlos fell down from the wall all by itself while the family was having lunch. The speaker’s family regarded this as a prophecy of those disturbing events. And for the storyteller, this was beautiful material for an amusing story in a conversation.

The speaker begins his story with a molding gesture that gives a fairly floppy account of the shape of the picture (see also Fig. 128.1). Here the gesture functions as a kind of “placeholder” in an unsuccessful word search: the speaker cannot think of an appropriate Spanish expression for the type of picture he is describing and, moreover, his addressee has problems understanding what he is actually talking about. Once this issue is solved and the addressee has displayed understanding, the storyteller moves on to giving a precise shape description of the object in word and in gesture. While saying ‘It was a round picture frame, with a small crown on top of it’ (tenía un marco redondo y una pequeña corona encima), he performs two highly articulated gestures: he draws or outlines an oval shape, immediately followed by a holding gesture which acts as if placing a crown on top of the outlined shape (for the outlining gesture see Fig. 128.1). The two gestures are produced in close spatial and temporal proximity, thus forming a kind of gesture compound (Müller, Bressem, and Ladewig volume 1). When reaching the climax of the story, the hand turns into a sculpture of the royal portrait, first representing the picture and locating it at the outer edge of the gesture space (in another room): ‘We had it there […]’ (lo teníamos allí), and then representing it and moving it downwards quickly to depict the fast downward motion of the picture: ‘when all of a sudden it fell down from the wall all by itself’ (de pronto él solo se cayó al suelo).

Fig. 128.1: A picture of the king from five different perspectives: molding the shape, outlining the shape, holding and placing the crown, representing and placing the picture, representing and moving the picture down.

In short, the five gestures all refer to the picture of the king, the topic of the story, but depict different aspects of it. Moreover, they are based on four different modes of gestural representation: molding, drawing, acting, and representing. Each of these creates a different “illusion of reality” of the memorized object, and each displays a different form of seeing and conceiving the actual royal portrait. These different gestural depictions of the picture play a significant role in this little narration because they embody different perspectives: the floppy molding gesture represents a roundish, vertically oriented object (serving as a placeholder for a lacking verbal expression); the drawing gesture gives a precise shape description, followed by a holding and placing gesture which is precisely located on top of the just-outlined oval object (giving a shape description that helps with the identification of the object); the first of the two representing gestures locates a vertically oriented flat object at the outer edge of the gesture space (locating a specific object in another room), and the second depicts the falling of a flat object (locating the object and moving it in a specific direction). From the point of view of a narratological analysis, the gestures foreground rheme information, information that drives the story and creates what Firbas calls communicative dynamism (for Firbas see McNeill 1992; Müller 2003). In short, the gestures are different constructions of the narrative object. This little sequence illustrates that there is not one and only one gestural way to depict a perceived object in the world; on the contrary, gestures are products of particular modes of gestural representation, which imply an orientation towards specific facets of perceived and conceived objects in the world. With these depictive techniques, speakers create their subjective views of the world, tailored to the temporal and sequential affordances of conversations. Gestures are visible and manual forms of thinking in communicative interaction.

2. Four basic modes of representation

Müller (1998a, b, 2009) has distinguished four modes of representation: acting, molding, drawing, and representing. In the acting mode, the hands are used to mime or re-enact actual manual activities, such as grasping, holding, giving, receiving, opening a window, turning off a radiator, or pulling an old-fashioned gear shift. In the molding mode, the hands mold or shape a transient sculpture, such as a picture frame or a bowl. In the drawing mode, the hand(s) outline(s) the contour or form of objects or the path of movements in space. And in the representing mode, the hand embodies an object as a whole, a kind of manual “sculpture”, as when a flat open hand represents a piece of paper and the extended index finger represents the pen used to make notes on that paper. Fig. 128.2 provides examples of the four modes of representation: pulling a gear shift in an old-fashioned car, molding the shape of a picture (the royal portrait in 3-D), drawing (outlining) the shape of the portrait (in 2-D), and representing a piece of paper and a pencil (as two moving “sculptures”).

Fig. 128.2: Four gestural modes of representation as basic techniques of depiction: hand enacts pulling a gear-shift, hands mold the shape and draw (trace) the contour of an oval object, hand represents a piece of paper, index represents a pen.

The four modes of representation address the iconic motivations of gestures. But this does not mean that they account only for iconic gestures in the McNeillian sense (McNeill 1992) or for depictive gestures in Kendon’s and Streeck’s sense (Kendon 2004; Streeck 2008). On the contrary, we assume that the modes of representation account for the motivations of gestures more generally (including iconic, metaphoric, and depictive as well as pragmatic gestures; see also Bressem and Müller this volume a, b; Müller 2004). It is astounding that Kendon ties these techniques of representation only to the creation of depictive gestures, because his analysis of pragmatic gestures and gesture families relies vitally on the idea that pragmatic gestures are derivations from manual actions, such as acting as if seizing (Grappolo), grasping (Ring), holding, or presenting (Open Hand Supine, Palm Presenting) (Kendon 2004). We believe that there is reason enough to argue that referential (or depictive) as well as pragmatic gestures make use of one of the four modes of gestural representation (see Bressem and Müller this volume a, b; Ladewig 2011; Müller 2004; Payrató and Teßendorf this volume; Teßendorf this volume). The term representation is a translation of the German Darstellung, which Gombrich (1978) employs in the German edition of his book for the artistic modes of representation. It also appeals to Bühler’s theory of the representational function of language (Bühler 2011; Müller 1998, 2009, volume 1). More appropriately, the German Darstellung would be translated as ‘depiction’; however, this is not how Bühler has been translated into English (see Bühler 2011; Müller 2009, for more detail). It is noteworthy, furthermore, that the German term Darstellung does not imply an idea of re-presentation.
The same holds for the systematics of the four basic modes of gestural representation: it does not presuppose any pre-existing reality that is being represented. Rather, it seeks to answer the question of how the hands are used in the creation of gestures: what they are doing when depicting actions, objects, properties, or spatial and temporal relations, or when enacting speech acts and/or expressing modal meanings. Moreover, as the example discussed above shows, taking different representational modes into account opens up a methodological path to seeing and accounting for the individual, subjective construction of manual and visual illusions in the manual mode of expression.

3. Abstraction and schematization: Cognitive-semiotic processes motivating the meaning o orm Creating and understanding gestures implies cognitive-semiotic process of metonymy (and metaphor) (Fig. 128.3). Or, put differently, metonymy and metaphor motivate the meaning of gesture forms. When reenacting instrumental actions for gestural depictions or for pragmatic gesturing, the underlying practical action is modulated (see Müller and Haferland 1997). Goffman (1980) talks about modulation of actions in the context of making visible the distinction between fight and play in chimpanzees. Playful bites are only acting-as-if biting they are no real bites. We suggest that what happens from a cognitive-semiotic point of view, when actions are marked as play, is a fundamental way of turning practical action into gestures. Action patterns are significantly reduced and meaningful aspects of the action become abstracted and schematized. This also applies to molding and outlining shapes, since they too are re-enacted mundane actions. Molding is derived from touching or smoothing surfaces, the term alludes to molding of clay, and the drawing mode exploits the action of using the extended index to draw lines into a soft surface, such as, sand, snow, or any kind of powdery surface. Those re-enacted actions, how different they all may be, all involve a schematization and an abstraction

128. Gestural modes of representation as techniques of depiction

1693

Fig. 128.3: The gestural modes of representation operate upon the cognitive-semiotic principle of metonymy.

of movement which involve cognitive-semiotic processes of metonymy in the first place (and metaphor in some cases). All the modes of representation operate upon in a general sense on the pars pro toto idea of metonymic relations. However, they differ with respect to “what part stands for which whole”. In the acting mode, a modulated action stands for an instrumental action. It is abstracted from the underlying action and renders a schematized version of it. In the molding mode, meaningful elements from a surface Gestalt are abstracted from an object and a schematized version is being “sculpted” and shaped. In the drawing mode, the objects are reduced to their shape (contour lines) or to a path (object lines, depicting, for instance, roads, rivers, or snakes). The cognitive-semiotic processes, characterizing the different modes of representation are vital for both the creation and the perception of gesture (see also Mittelberg 2006, volume 1). They spell out, how a schematized meaning of gesture form is motivated, a meaning which remains rather vague, when considered out of context, but is meaningful: We can recognize without problem that somebody is molding a shape, completely out of context. We need context to re-construct the local, the indexicalized meaning of the gesture and to establish a particular kind of reference for the gesture. But the form of the gesture is meaningful in itself. It is noteworthy to mention that the processes of schematization also involve generalization of meaning, and that these processes characterize lexicalization and grammaticalization in language (Sweetser 1990). It is, furthermore, interesting to add here, that Kendon has suggested that classifiers in sign languages use the same set of techniques of representation (Kendon 2004; Müller 2009). In American Sign Language there is a high degree of consistency in how the various hand shapes for the different classifiers are used and how the movement patterns are


VIII. Gesture and language

carried out when they are employed. However, this seems to be a regularization of techniques that are widely used by speakers when using gesture for depictive purposes (Kendon 2004: 318–319). In conclusion, we suggest that the gestural modes of representation spell out the basic iconic and indexical motivations for the creation of gestures as well as of signs in a manual modality (at least when we consider the etymology of signs).

4. Iconicity and motivation of meaning in gestures and signs

In this section, we take a closer look at the role of iconicity and motivation of meaning in sign linguistics and in gesture studies respectively. Having conducted studies on alternate sign languages and on gestures, Kendon suggests fairly explicitly that classifiers in sign languages are based on the same set of representational techniques as gestures (Kendon 2004: 318–319; see also Müller 2009):

[In chapter 9], we showed how a speaker, when using gesture to indicate the size and shape of an object, to show how that object is positioned, to trace the shape of something, to show how an object is handled as a way of referring to that object, and so forth, makes use of a restricted range of hand shapes and movement patterns that constitutes a repertoire of representation techniques. These techniques have much in common with what has been described for classifiers and their associated “movement morphemes”. (Kendon 2004: 318, italics in the original)

In the early days of sign linguistics, these representation techniques were discussed under the label of iconicity and motivation of signs (see Cohen, Namir, and Schlesinger 1977; Kendon 1980b, 1988; Mandel 1977). Cohen and colleagues (1977) described such imitation as an inherent characteristic of an iconic sign in a signed language: “To indicate an action the signer generally performs an abbreviated imitation of a characteristic part of it: write, for example, is signed by making such an imitative movement” (Cohen, Namir, and Schlesinger 1977: 17, emphasis in original). However, whereas in a signed language the kind of imitation is limited by convention, in co-speech gesturing we find different forms of imitating one and the same action. Writing, for instance, might be “gestured” by imitating the holding of a pen and moving it in a horizontal plane from left to right, or by representing the pen with an extended index finger and moving it rightwards (in Western European cultures). Moreover, the base of such a “spontaneous” gesture may not be a specific action, i.e., the letter Paul wrote this morning, but a generalized motion pattern associated with writing – namely, Paul’s prototypical form of writing (see Fricke 2008). Wilhelm Wundt advanced one of the earliest reflections upon the semiotic processes motivating gestures as well as signs. In the first volume of his opus Völkerpsychologie. Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte (‘Ethnopsychology. An investigation of the laws of evolution of language, myth, manner and custom’) (Wundt 1921), he discusses the evolution of language from expressive movements, and in particular from those of the hands. He discusses in great depth the motivation of signs as used among deaf people, Native Americans, and Cistercian monks, as well as of conventionalized gestures used along with speech (for instance among the Neapolitans) (Wundt 1921).
It is noteworthy that Wundt uses the German word

128. Gestural modes of representation as techniques of depiction


Gebärde uniformly both for ‘signs’ in signed languages and for ‘gestures’. Since the word Geste had already been introduced into German at that time (see Müller 1998), Wundt’s terminological choice indicates that he regarded co-speech gestures and the signs of signed languages as belonging to one “family” of phenomena. Wundt’s work remained, however, a fairly isolated attempt to approach the motivation of gestures and signs systematically. In the 1970s, with the emergence of the field of sign linguistics, the motivation of signs and specifically their iconic nature became quite pertinent for a short period of time (Battison 1971; Cohen, Namir, and Schlesinger 1977; Kendon 1980a, b, c, 1981, 1986; Mandel 1977). In the seventies, structuralism and generativism reigned over linguistic theory, and in neither was iconicity regarded as a substantial feature of language. Thus, the iconic nature of many of the signs in signed languages became a “no go” in the attempt of sign linguistics to make the case that signed languages possess all the features of a full-fledged language. Because of these historical circumstances, important early work on the iconic nature and the motivation of signs was largely neglected for almost three decades. Among this work is, for instance, that of Cohen and colleagues, who included an account of iconic structures (in the form of the “base-referent” distinction) in their analysis of Israeli Sign Language (Cohen, Namir, and Schlesinger 1977). Mark Mandel’s important paper on “Iconic devices in American Sign Language” (inspired by Battison 1971 and Cohen, Namir, and Schlesinger 1977) had a similar fate. Mandel proposed a differentiated taxonomy of iconic devices in American Sign Language (ASL), which received only scarce and scattered attention at the time. A small article by Bergman (1978) documents that Mandel’s principles appear to play a fundamental role in Swedish Sign Language as well.
Expanding these lines of thought, Kendon investigated iconic relationships between the forms of signs and their meanings in several alternate sign languages, i.e., sign languages which are used in alternation with vocal languages (see Kendon 1980a, b, c, 1986, 1988). Kendon offers a detailed analysis of an alternate sign language used in the Enga Province of Papua New Guinea, which also includes a comparison with reports on signs used by the sawmill workers of British Columbia and those used by the Pitta Pitta tribe in North Central Queensland (Kendon 1980a, b, c; Meissner, Philpott, and Philpott 1975; Roth 1897). Kendon furthermore published a monographic documentation of seven different sign languages of the Northern Central Desert (“NCD”) used among Australian Aboriginals. However, none of these contributions to the iconic nature of gestures or of signs in signed languages and to their motivation inspired a larger discussion, neither in sign linguistics nor in the field of gesture studies. It is only with the rise of cognitive linguistics in the eighties and nineties that iconicity was eventually rehabilitated as a ubiquitous and fundamental property of spoken languages. These paradigmatic changes in linguistic theory laid the theoretical grounds for re-addressing the iconicity of signs in signed languages. It was Sarah Taub who bridged this long silence with a book addressing iconicity and metaphor in American Sign Language from a cognitive linguistic point of view (Taub 2001). A similarly important contribution to the iconic motivation of signs in signed languages is Danielle Bouvet’s book on modes of production and metaphor in French Sign Language (Le corps et la métaphore dans les langues gestuelles. À la recherche des modes de production des signes, ‘Body and metaphor in gestured languages. Looking for the modes of sign production’, 1997), but – presumably because it has never been translated into English – it has hardly been recognized internationally.
Notably,


Geneviève Calbris’ book on the semiotics of French gestures had a similar fate – although it was translated into English (Calbris 1990). Only with the cognitive turn did questions of the motivation and iconicity of gestures and signs come to receive increasing interest (see also Armstrong, Stokoe, and Wilcox 1995). Of vital importance for this development was David McNeill’s monograph “Hand and Mind: What Gestures Reveal about Thought” (1992). Because the study of gestures promised a “window” onto imagistic forms of thinking, gesture studies suddenly became a promising topic for psychologists and the cognitive sciences more generally (notably including anthropology and linguistics). But also in gesture studies, the iconic motivations of gestures received only scarce interest – and this despite the fact that one of the core and highly influential gesture categories proposed by McNeill in 1992 had been termed “iconics”. More recently, this picture has begun to change; research into the iconic motivations of gestures has been receiving increasing interest. We have already mentioned that Kendon devotes some attention to the techniques of gesture creation in his latest book (Kendon 2004). Müller proposed a systematics of gestural modes of representation and discussed them as mimetic devices in an Aristotelian sense (Müller 1998a, b, 2009, 2010). Sowa (2006) proposes a systematics as a basis for computer recognition; and Streeck (2008, 2009) discusses them as forms of depiction and evoking techniques. By bringing together Jakobsonian, Peircean, and cognitive linguistic takes on iconicity, indexicality, metaphor, and metonymy, Mittelberg offers a new account of the cognitive-semiotic principles motivating co-speech gestures (Mittelberg 2006, this volume; Mittelberg and Waugh this volume).
Drawing on Peirce and Pike, Fricke lays out the theoretical grounds for the semiotic processes driving gesture creation and for an inclusion of gestures in the study of linguistics proper (Fricke 2008, this volume). Given that sign languages are now fully recognized as languages, it appears no longer necessary to build up strict divisions between gestures and signs, nor to background the iconic and indexical motivations of signs. Taub’s book is a vivid testimony of this, but sign linguistic work on classifiers and on the relations between gestures and signs also indicates this shift in perspective (Armstrong, Stokoe, and Wilcox 1995; Wilcox 2009).

5. Acting and representing: A cognitive-semantic systematics of meaning construal in gestures

In order to at least indicate how the modes of representation contribute to the meaning of gestures (and probably also of iconic signs), we conclude this chapter with a sketch of a cognitive-semantic systematics of meaning construal in gestures. This systematics attempts to concretize the specific conceptualizations that go along with employing the different modes of representation. It reduces the modes to two fundamental ones: acting and representing. In the acting mode, the hand(s) re-enact(s) any kind of action or movement of the hands. In the representing mode, the hand(s) turn(s) into a manual sculpture of an object. Put differently, in the acting mode the hands mime themselves, while in the representing mode they mime other entities (Fig. 128.4). The acting mode conceptualizes actions of the hands and arms in two different ways: En-acting Actions and En-acting


Fig. 128.4: Modes of representation grounding a cognitive-semantic systematics of meaning construal in gestures.

Movement. The representing mode also comes with two different kinds of conceptualizing entities manually: Representing Objects and Representing Objects in Motion. When considering the en-acting of actions as a base for meaning construal in gestures, three different types of manual actions are to be distinguished: the hands may enact actions such as waving or drawing, they may enact actions in which the hand shape specifies a particular object (holding a knife, turning a car key), or they may enact actions which do not involve a specific hand shape, such as showing, giving, or receiving objects on the open hand:

(i) action only
(ii) action with specified object
(iii) action with unspecified object

In such a systematics, it makes sense to subsume the molding and the drawing modes of representation under a more general category of acting. Molding and drawing are both derived from specific types of manual actions: molding is based on touching and moving across surfaces, and drawing is derived from the action of using the extended index finger for tracing in soft surfaces. The cognitive-semantic systematics considers them simply as specific types of actions, but they are vital in the formation of so-called size and shape classifiers in sign languages (Perniss, Thompson, and Vigliocco 2010). Drawing would be a case of action only, and molding a case of action with specified object. When considering the en-acting of movement as a base for meaning construal in gestures, Talmy’s systematics of motion events as conceptual structure (Talmy 1983) accounts for


the different types of movements of the hands and arms as bases for gestural meaning. The hands may depict motion only (“a trip to New York”), motion and path (“somebody giving the speaker an insight”), motion and manner of motion (“somebody talking fast”, “walking back and forth”), or motion, manner of motion, and path (“rolling down”, “running up”):

(i) motion only
(ii) motion and path
(iii) motion and manner of motion
(iv) motion, manner of motion, and path

Within the group of gestures based on the representing mode we distinguish objects and objects in motion. When representing objects manually, the hands may be used simply to depict an object (“a portrait of the king”), but very often representing gestures are used to locate (“the portrait in another room”) or direct objects in space (“watching somebody”), or to put them in a particular spatial relation (“two intersecting paths”):

(i) object only
(ii) located object
(iii) directed object
(iv) objects in spatial relation

On the other hand, representing hand shapes are widely used to depict objects in motion, and thus Talmy’s systematics of motion events as conceptual structures comes into play. In this case the hand movement depicts the movement of the figure (against a ground):

(i) object and motion
(ii) object, motion, and manner of motion
(iii) object, motion, manner of motion, and location or path
(iv) object, motion, and path

The conceptual structures involved in both types of the representing mode strongly appeal to what sign linguists have characterized as entity classifiers in sign languages. The modes of representation involve a complex and differentiated system of conceptualizations. They also allude to McNeill’s distinction between observer viewpoint and character viewpoint gestures. But although it may seem as if the representing mode were just another way of talking about observer viewpoint gestures and the acting mode another term for character viewpoint gestures, the distinctions are in fact not identical: the conceptual semantic structures involved in the two modes of representation are neither co-extensive with nor reducible to McNeill’s distinction (McNeill 1992). Thus, for instance, a gesture using the representing mode can very well imply a character viewpoint, if what is being represented is, for instance, the blade of a knife used to depict the cutting of bread during a speaker’s dinner preparations. Due to restrictions of space, it is not possible to discuss these issues in more detail or to illustrate the different types of gestural conceptualizations with a proper account of the individual gesture forms. The above-mentioned examples can only allude to them.


What we have aimed at presenting, though, is a cognitive-semantic systematics of meaning construal in gestures, which is based on the two fundamental modes of representation – acting and representing – and which opens up a path towards an analysis of the conceptual semantics of gestures and signs.

6. Conclusion

We may conclude that gestures can be regarded as mundane forms of creating illusions of reality, illusions which are manifestations of visual and manual forms of thinking. For Constable’s two Dedham Vale sketches described above, this means that even the quality of the pencil is part of this process of thinking for sketching, since a soft pencil allows for a much broader range of black, grey, and white shades than a very hard pencil. The instrument and technique thus bear directly on the process of conceptualizing a perceived world. The Gestalt psychology of art teaches us that images are conceptualizations and that they are shaped by different modes or techniques of representation. They are artfully created illusions of reality, creative abstractions, schematizations, and yet at the same time highly subjective, embodied forms of visual and manual thinking. Gestures, in this view, are “natural” and “artful” illusions of reality, created by speakers in the flow of discourse and interaction, and they are probably the first mimetic devices to have appeared on the stage of human evolution.

Acknowledgements

Over the past two decades many people have been vital in discussing the gestural modes of representation. Jürgen Streeck and Adam Kendon encouraged me in the early stages of my career to pursue this line of thought, and I am very grateful for their interest and support. In the context of the research project “Towards a grammar of gesture: evolution, brain and linguistic structures”, Ellen Fricke, Irene Mittelberg, and Jana Bressem, but especially Silva H. Ladewig and Sedinha Teßendorf, were excellent collaborators on this issue, driving the empirical research and constantly and most productively challenging my theoretical framing. Hedda Lausberg and Katja Liebal were extremely important in providing neuro-psychological and primatological support for the assumption of a twofold distinction of gestural modes of representation. And last but not least, I wish to thank Karin Becker and Mathias Roloff for the drawings – and Lena Hotze for bearing with me and keeping my morale high while finalizing this chapter.

7. References

Armstrong, David F., William Stokoe and Sherman Wilcox 1995. Gesture and the Nature of Language. Cambridge, NY: Cambridge University Press.
Arnheim, Rudolf 1965. Kunst und Sehen: Eine Psychologie des Schöpferischen Auges. Berlin: De Gruyter.
Arnheim, Rudolf 1969. Visual Thinking. Berkeley/Los Angeles: University of California Press.
Arnheim, Rudolf 1978. Art and Visual Perception: A Psychology of the Creative Eye. Berkeley: University of California Press. First published [1954].
Bergman, Brita 1978. On motivated signs in the Swedish Sign Language. Studia Linguistica 32(1/2): 9–17.


Bouvet, Danielle 1997. Le Corps et la Métaphore dans les Langues Gestuelles: À la Recherche des Modes de Production des Signes. Paris: L’Harmattan.
Bressem, Jana and Cornelia Müller this volume a. A repertoire of recurrent gestures of German. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1575–1592. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume b. The family of away gestures: Negation, refusal, and negative assessment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1592–1604. Berlin/Boston: De Gruyter Mouton.
Bühler, Karl 1982. Sprachtheorie: Die Darstellungsfunktion der Sprache. Stuttgart: Fischer. First published [1934].
Bühler, Karl 2011. Theory of Language: The Representational Function of Language. Amsterdam/Philadelphia: John Benjamins.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Cienki, Alan and Cornelia Müller 2008. Metaphor, gesture and thought. In: Raymond W. Gibbs (ed.), Cambridge Handbook of Metaphor and Thought, 483–501. Cambridge, NY: Cambridge University Press.
Cohen, Enya, Lila Namir and Izchak M. Schlesinger 1977. A New Dictionary of Sign Language: Employing the Eshkol-Wachman Movement Notation System. The Hague: Mouton.
Fricke, Ellen 2008. Grundlagen einer Multimodalen Grammatik des Deutschen: Syntaktische Strukturen und Funktionen. Habilitation thesis: European University Viadrina, Frankfurt (Oder).
Fricke, Ellen this volume. Between reference and meaning: Object-related and interpretant-related gestures in face-to-face interaction.
In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1788–1802. Berlin/Boston: De Gruyter Mouton.
Goffman, Erving 1980. Rahmen-Analyse: Ein Versuch über die Organisation von Alltagserfahrungen. Frankfurt am Main: Suhrkamp.
Gombrich, Ernst H. 1960. Art and Illusion: A Study in the Psychology of Pictorial Representation. New York: Pantheon Books.
Gombrich, Ernst H. 1978. Kunst und Illusion: Eine Studie über die Psychologie von Abbild und Wirklichkeit in der Kunst. Stuttgart: Belser.
Kendon, Adam 1980a. A description of a deaf-mute sign language from the Enga Province of Papua New Guinea with some comparative discussion: Part II. Semiotica 32(1/2): 81–117.
Kendon, Adam 1980b. A description of a deaf-mute sign language from the Enga Province of Papua New Guinea with some comparative discussion: Part I. Semiotica 31(1): 1–34.
Kendon, Adam 1980c. A description of a deaf-mute sign language from the Enga Province of Papua New Guinea with some comparative discussion: Part III. Semiotica 32(3/4): 245–313.
Kendon, Adam (ed.) 1981. Nonverbal Communication, Interaction, and Gesture. The Hague/Paris: De Gruyter Mouton.
Kendon, Adam 1986. Some reasons for studying gesture. Semiotica 62(1/2): 3–28.
Kendon, Adam 1988. Sign Languages of Aboriginal Australia: Cultural, Semiotic and Communicative Perspectives. Cambridge, NY: Cambridge University Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge, UK: Cambridge University Press.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6.
Mandel, Mark 1977. Iconic devices in American Sign Language. In: Lynn A. Friedman (ed.), On the Other Hand: New Perspectives on American Sign Language, 57–108. New York: Academic Press.

McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago, IL: University of Chicago Press.
Meissner, Martin, Stuart B. Philpott and Diana Philpott 1975. The sign language of sawmill workers in British Columbia. Sign Language Studies 9: 291–308.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. Ph.D. dissertation, Cornell University. Ann Arbor, MI: UMI.
Mittelberg, Irene volume 1. The exbodied mind: Cognitive-semiotic principles as motivating forces in gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 755–784. Berlin/Boston: De Gruyter Mouton.
Mittelberg, Irene this volume. Gesture and iconicity. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1712–1732. Berlin/Boston: De Gruyter Mouton.
Mittelberg, Irene and Linda Waugh this volume. Gesture and metonymy. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1747–1766. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia 1998a. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 1998b. Beredte Hände: Theorie und Sprachvergleich redebegleitender Gesten. In: Thomas Noll and Caroline Schmauser (eds.), Körperbewegungen und ihre Bedeutungen, 21–44. Berlin: Weidler.
Müller, Cornelia 2003. On the gestural creation of narrative structure: A case study of a story told in a conversation. In: Isabella Poggi, Monica Rector and Nadine Trigo (eds.), Gestures: Meaning and Use, 259–265. Porto: Universidade Fernando Pessoa.
Müller, Cornelia 2004. Forms and uses of the Palm Up Open Hand: A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), Semantics and Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler Verlag.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), The Routledge Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Müller, Cornelia 2010. Mimesis und Gestik. In: Gertrud Koch, Christiane Voss and Martin Vöhler (eds.), Die Mimesis und ihre Künste, 149–187. München: Fink.
Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 202–217. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gesture: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 707–733. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia and Harald Haferland 1997. Gefesselte Hände: Zur Semiose performativer Gesten. Mitteilungen des Germanistenverbandes 3: 29–53.
Payrató, Lluís and Sedinha Teßendorf this volume. Pragmatic gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication:
An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1531–1540. Berlin/Boston: De Gruyter Mouton.


Perniss, Pamela, Robin Thompson and Gabriella Vigliocco 2010. Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology 1: 1–15.
Roth, Walter E. 1897. Ethnological Studies Among the North-West-Central Queensland Aborigines. Brisbane: E. Gregory, Government Printer.
Slobin, Dan I. 1991. Learning to think for speaking: Native language, cognition, and rhetorical style. Pragmatics 1: 7–26.
Sowa, Timo 2006. Understanding Coverbal Iconic Gestures in Object Shape Description. Berlin: Akademische Verlagsgesellschaft Aka GmbH.
Streeck, Jürgen 2008. Depicting by gesture. Gesture 8(3): 285–301.
Streeck, Jürgen 2009. Gesturecraft: The Manu-facture of Meaning. Amsterdam: John Benjamins.
Sweetser, Eve 1990. From Etymology to Pragmatics. Cambridge, UK: Cambridge University Press.
Talmy, Leonard 1975. Semantics and syntax of motion. In: John P. Kimball (ed.), Syntax and Semantics, 181–238. New York: Academic Press.
Talmy, Leonard 1983. How language structures space. In: Herbert L. Pick and Linda P. Acredolo (eds.), Spatial Orientation: Theory, Research, and Application, 225–282. New York: Plenum Press.
Talmy, Leonard 1985. Lexicalization patterns: Semantic structure in lexical forms. In: Timothy Shopen (ed.), Language Typology and Syntactic Description, Vol. III: Grammatical Categories and the Lexicon, 57–149. Cambridge, UK: Cambridge University Press.
Talmy, Leonard 1987. Lexicalization patterns: Typologies and universals. Cognitive Science Program Report 47: 1–9.
Taub, Sarah 2001. Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge, NY: Cambridge University Press.
Teßendorf, Sedinha this volume. Pragmatic and metaphoric gestures – combining functional with cognitive approaches. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction.
(Handbooks of Linguistics and Communication Science 38.2.), 1540–1558. Berlin/Boston: De Gruyter Mouton.
Wilcox, Sherman 2009. Symbol and symptom: Routes from gesture to signed language. Annual Review of Cognitive Linguistics 7: 89–110.
Wundt, Wilhelm 1921. Völkerpsychologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythos und Sitte, Vol. 1. Stuttgart: Kröner Verlag.

Cornelia Müller, Frankfurt (Oder) (Germany)

129. Levels of abstraction

1. Introduction
2. A hierarchy of sign types
3. Communicating
4. Conclusion
5. References

Abstract

Roland Posner introduced a hierarchy of sign types in his 1993 essay “Believing, causing, intending: The basis for a hierarchy of sign concepts in the reconstruction of communication”. In so doing, his goal was to find a way to replace language-oriented metaphorical descriptions of human interactions with a more general semiotic model, one which provides nonmetaphorical descriptions for all sign types, including verbal and nonverbal interaction. John Searle’s five categories of speech acts – declarations, directives, assertives, expressives, and commissives – have become a fundamental part of communication theory and tend to be applied not only to verbal communication but also to other types of human interaction which do not involve language or speech, such as facial expression, bodily gestures, clothing, and design, as well as to simulated communication, self-delusion, and manipulation. Posner considers sign processes as a special case of causal processes and combines semiotic terms with terms from intensional logic to create a general conceptual framework for their analysis. In this chapter, we will see how he defines basic sign types and postulates levels of reflection to form a two-dimensional classification of sign types, one which can account for Searle’s categories of speech acts as well as for sign processes that deviate from the conditions of speech acts.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 1702–1712

1. Introduction

Roland Posner’s essay “Believing, causing, intending: The basis for a hierarchy of sign concepts in the reconstruction of communication” (1993) introduces a two-dimensional classification of sign types as a building block toward understanding the processes of human and animal communication in the context of Artificial Intelligence research. In so doing, he manages to unite two streams of communication analysis that had until then existed in isolation. Over the years, semioticians have developed typologies for categorizing the “extraorganismic” objects involved in sign processes, such as icon, index, symbol, text, score, etc., while cognitive psychologists and intensional logicians have cultivated the rich vocabulary that exists, at least in all European languages, for describing the corresponding “intraorganismic” processes, such as assuming, thinking, concluding, interpreting, believing, wanting, and intending. Posner examines the interdefinability of the terms used and proposes to combine the two approaches. A sign thus appears as an event that makes someone do or believe something. Believing something involves developing and accepting an internal representation. A request turns out to be an event produced with the intention that it cause the addressee to do something one wants to be done. In order to construct a unified conceptual framework for the analysis of sign processes, Posner defines basic sign types and shows how their level of reflection can vary. This creates a system of concepts which has the potential to account for and connect all sign processes up to the degree of complexity reached in communication. This hierarchy of signs provides a powerful analytical language for examining the processes underlying non-verbal communication, especially gesture, and provides a conceptual bridge to insights provided by speech act theory.

2. A hierarchy of sign types

2.1. Basic sign types: signal, indicator, expression, gesture

Posner’s theory starts with the assumption that “sign processes are a special case of causal processes” (Posner 1993: 220). He characterizes them as “process(es) connecting


VIII. Gesture and language

the occurrence of an event f with the occurrence of an event e, where (the occurrence of) f is called a cause and (the occurrence of) e is called its effect” (Posner 1993: 220). Using the terminology of intensional logic, a causal process can be described as E(f) J E(e), where f and e function as terms representing events, E is a one-place predicator, and J is a two-place sentence operator which denotes the relation of causation. “Think of yourself sitting in a room where a sudden loud noise makes the window rattle: we say that the occurrence of the noise f is a cause for the occurrence of the rattling e in the windowpanes, the latter being its effect.” (Posner 1993: 220)

2.1.1. Signal

This form of a simple causal process is not yet considered a sign process until there is a behavioral system that acts as intermediary. The simplest level of sign is a signal, which can be described as follows: E(f) J T(a, r), where a stands for the behavioral system, r for a reaction, and T for a two-place predicator that indicates the relation between the two. To put the formula into words, one could say: an event f causes a behavioral system a to perform a behavior r. Posner offers the following as an example of a signal process: “A bird sits in a tree until a loud noise causes it to fly off.” (Posner 1993: 220) According to Posner, this case differs from a causal process on account of the bird being a behavioral system that enters the relation between the occurrence of the cause E(f) and the occurrence of the effect E(e). As we will see later, signals belong to the simplest sign type, found at the lowest level of reflection for sign meanings. They are understood as recipient signs since they do not depend on intentional production by a sender.

2.1.2. Indicator

Unlike a signal, an indicator, the next type of basic sign, requires that the responding behavioral system a is capable of having internal representations. In this case, the event f causes a to believe some proposition p: E(f) J G(a, p), where p is a sentence that sets forth a proposition and G is a two-place operator that denotes the relation of belief holding between the behavioral system a and the proposition. As an example, take a person on a walk in the woods who sees the shadows on the ground becoming darker and therefore believes that the sun is coming out from behind the clouds.

2.1.3. Expression

An expression is similar to an indicator in that an event f causes some belief in a behavioral system a. Here, however, the effect is a belief of a specific type: the belief that there exists someone b who produced the event f while in a certain state Z: E(f) J G(a, Z(b)). For Posner, Z is a one-place predicator that presents its argument as being in a certain state, so that f can be said to express that state. He offers the following example: “A door is banged (f), which makes the neighbor (a) believe that the tenant (b) is angry (Z)” (Posner 1993: 221). An expression process takes place in this case because the reacting system a not only perceives the event f but also assumes the existence of an acting system b being in a special state.

2.1.4. Gesture

The last basic sign type, gesture, is characterized by the fact that the circumstances already introduced for expressions (especially the presence of a reacting system assuming

129. Levels of abstraction


an additional behavioral system b to have produced the sign event f) are extended by an additional element: intention. “A gesture is an expression where the expressed state of the sign producer is an intention to produce another event.” (Posner 1993: 222) As an example, take two men standing close to each other, involved in an agitated conversation in an English pub, when one of them, a, sees the other one, b, reach out with his arm (f) and believes that b is intending to punch. Here we say that a takes f to be an involuntary gesture betraying b’s intention to do g: E(f) J G(a, I(b, T(b, g))). This classification is independent of whether b really wants to hit a or anyone else. The basic sign types “signal”, “indicator”, “expression”, and “gesture” are all defined as events that cause some behavioral system to respond in a certain way. They also admit beliefs about possible sign producers and their states and intentions but do not require the involvement of a sign producer or any other behavioral system in addition to the recipient. That is why signals, indicators, expressions, and gestures are called recipient signs. The basic sign processes can be placed along a continuum whose complexity increases as particular factors (causality, presence of a reacting system, internal representation, assumed acting systems, beliefs, and intentions) are added. It follows that “each following concept is a specialization of the preceding one” (Posner 1993: 223). Or more specifically: Every gesture is an expression (of its producer’s intention), every expression is an indicator (of its producer’s state), every indicator is a signal (for the recipient to believe something), and every signal is a cause (of some response in a behavioral system). But of course, the reverse is not true: only certain causes are signals, only certain signals are indicators, only certain indicators are expressions, and only certain expressions are gestures.
(Posner 1993: 223)
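The specialization chain can be made concrete in code. The following Python sketch is an illustrative encoding of my own (the nested-tuple notation and the helper function are not part of Posner's apparatus): each basic sign type is rendered as a causal formula whose effect term grows more specific from signal to gesture.

```python
# Illustrative encoding of Posner's basic sign types as nested tuples:
#   ("causes", x, y)  stands for  x J y   (x causes y)
#   ("E", "f")        stands for  E(f)    (event f occurs)
#   ("T", a, r)       stands for  T(a, r) (system a performs behavior r)
#   ("G", a, p)       stands for  G(a, p) (a believes p)
#   ("I", b, p)       stands for  I(b, p) (b intends p)
#   ("Z", b)          stands for  Z(b)    (b is in state Z)

signal     = ("causes", ("E", "f"), ("T", "a", "r"))
indicator  = ("causes", ("E", "f"), ("G", "a", "p"))
expression = ("causes", ("E", "f"), ("G", "a", ("Z", "b")))
gesture    = ("causes", ("E", "f"), ("G", "a", ("I", "b", ("T", "b", "g"))))

def effect(sign):
    """Return the effect term of a causal formula ("causes", cause, effect)."""
    head, _cause, eff = sign
    assert head == "causes"
    return eff

# From indicator onward the effect is always a belief (G); expression and
# gesture merely specialize what is believed: a producer's state Z(b),
# respectively a producer's intention I(b, ...).
for sign in (indicator, expression, gesture):
    assert effect(sign)[0] == "G"
```

Read top-down, the four definitions mirror the quoted specialization claim: each formula is obtained from the previous one by refining the believed content, never by changing the overall causal shape E(f) J ….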

2.2. Recipient signs and sender signs

In Posner’s conceptual system, the complexity of sign processes expands not only “horizontally”, along the continuum from simple causality to gesture, but also “vertically”, where additional complexity is attained by taking interaction and role-change between senders and recipients into account. In order to generate sender signs, Posner refers to an observation that can be made in the everyday life not only of humans but also of other higher animals. Having noticed that a given type of recipient sign has evoked desirable responses, one is motivated to instrumentalize this sign type: one produces a similar event in order to achieve a similar response. As an example, Posner (1993: 231) describes a teacher in a kindergarten who has had the experience that a sudden loud noise tends to make the youngest children interrupt their play, look for the origin of the noise, and be silent for a moment. By suddenly clapping her hands loudly on another occasion, the teacher then utilizes the noise for her own purposes and thereby produces a sender sign. Sender signs can have recipients, and when these are intended by the sender, they are called addressees. Producing an event with the intention that it causes some other event is called an action:


T(b, f) ∧ I(b, E(f) J E(e)). Producing an event with the intention that it causes a behavioral system to respond in a certain way is called signaling: T(b, f) ∧ I(b, E(f) J T(a, r)). Signaling thus is an action where the intended configuration of events has the structure of a signal.

It follows that in describing signaling one must embed the structure of a signal, E(f) J T(a, r), in the structure of an action: T(b, f) ∧ I(b, E(f) J T(a, r)).

The same is true for the actions of indicating, expressing, and gesturing. They are all sender signs that can be described by embedding the structure of the corresponding recipient sign in the structure of an action. In this way an indicator can be transformed into indicating: T(b, f) ∧ I(b, E(f) J G(a, p)),

an expression can be transformed into expressing: T(b, f) ∧ I(b, E(f) J G(a, Z(b))),

a gesture can be transformed into gesturing: T(b, f) ∧ I(b, E(f) J G(a, I(b, T(b, g)))).

In Tab. 129.1 the simple recipient sign types (i.e., the basic sign types) appear on the bottom line 1a in columns II, III, IV and V. The corresponding simple sender sign types appear on line 1b. It is now interesting to see what happens when a behavioral system receives a sender sign. Posner claims that this amounts to transforming the sender sign into an indicator (as defined on line 1a in column III). This means that in the process of reception all sender signs take on the form E (f) J G(a, …). The result is spelled out in Tab. 129.1 on line 2a where all formulas start with this structure and the structure of the sender sign in question is embedded in it.
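The two moves just described, instrumentalizing a recipient sign into a sender sign and receiving a sender sign as an indicator, are purely structural and can be sketched as term constructors. This is an illustrative Python rendering of my own (the tuple notation and function names are assumptions, not Posner's):

```python
# Formulas as nested tuples: ("causes", x, y) for "x J y", ("and", x, y)
# for conjunction, and ("T", ...), ("I", ...), ("G", ...), ("E", ...) for
# the predicators used in Posner's formulas.

def as_sender_sign(structure):
    """Sender sign rule: embed a structure in T(b, f) ∧ I(b, ...)."""
    return ("and", ("T", "b", "f"), ("I", "b", structure))

def as_received_sign(structure):
    """Reception rule: embed a structure in E(f) J G(a, ...)."""
    return ("causes", ("E", "f"), ("G", "a", structure))

signal = ("causes", ("E", "f"), ("T", "a", "r"))

signaling = as_sender_sign(signal)                               # line 1b
indicator_of_signaling = as_received_sign(signaling)             # line 2a
indicating_a_signaling = as_sender_sign(indicator_of_signaling)  # line 2b
```

Because each rule simply wraps its argument, the constructions can be iterated indefinitely, which is exactly why the vertical levels 1a, 1b, 2a, 2b, … can in principle grow without bound.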

Tab. 129.1: Levels of reflection up to communicating (taken from Posner 1993: 227)


In this way, a signaling can be transformed into an indicator of signaling: E(f) J G(a, T(b, f) ∧ I(b, E(f) J T(a, r))),

indicating can be transformed into an indicator of indicating: E(f) J G(a, T(b, f) ∧ I(b, E(f) J G(a, p))),

expressing can be transformed into an indicator of expressing: E(f) J G(a, T(b, f) ∧ I(b, E(f) J G(a, Z(b)))),

gesturing can be transformed into an indicator of gesturing: E(f) J G(a, T(b, f) ∧ I(b, E(f) J G(a, I(b, T(b, g))))).

Even an action, which is not a sign process in itself, can enter this transformation and become a sign process when it is received as an indicator of action (see Tab. 129.1, line 2a, column I): E(f) J G(a, T(b, f) ∧ I(b, E(f) J E(e))).


Of course, the result of an indicator transformation can in turn be the intended effect of an action, for instance when someone who has just uttered a sender sign has the impression of not having been adequately understood and draws attention to that sender sign by uttering a noticeable cough f. Summarizing this sketch of the vertical dimension of Posner’s hierarchy of sign concepts, it can be said that its lowest level 1a contains simple recipient signs, and the higher levels are reached by applying either (i) the sender sign transformation rule, which embeds recipient sign structures in the expression T(b, f) ∧ I(b, …), or (ii) the recipient sign transformation rule, which embeds sender sign structures in the expression E(f) J G(a, …). These transformation rules are also applicable to their mutual results, with the consequence that the levels in the vertical dimension come in pairs (1a, 1b; 2a, 2b; etc.) and can reach indefinitely high depending on the context and the intellectual equipment of the interaction partners (see Tab. 129.1). This raises the question: What are the consequences of increasing complexity for the quality of a sign process? Posner (2000) answers this question by examining the amount of reflection taking place in the interaction partners. As a measure of that he uses the number of embedded occurrences of the operators G(…, ---) “believe” and I(…, ---) “intend” found in a formula. He distinguishes the following levels of reflection in sign behavior: formulas which do not contain any occurrence of such an operator are located on level RS 0; formulas which contain one or more operators embedded in each other belong to the reflection levels RS 1, RS 2, RS 3, etc. Note the following examples:

RS 0: E(f) J T(a, r) [signal]
RS 1: E(f) J G(a, p) [indicator]; T(b, f) ∧ I(b, E(f) J E(e)) [action]
RS 2: E(f) J G(a, I(b, T(b, g))) [gesture]; T(b, f) ∧ I(b, E(f) J G(a, Z(b))) [expressing]
RS 3: E(f) J G(a, T(b, f) ∧ I(b, E(f) J G(a, Z(b)))) [indicator of expressing]; T(b, f) ∧ I(b, E(f) J G(a, T(b, f) ∧ I(b, E(f) J E(e)))) [indicating an action]
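Posner's measure, the number of believe and intend operators embedded in each other, can be computed mechanically over such formulas. The following Python sketch uses an illustrative nested-tuple encoding of my own (the encoding and function name are not Posner's notation):

```python
# ("G", a, p) encodes "a believes p", ("I", b, p) "b intends p";
# ("causes", x, y) encodes "x J y", ("and", x, y) conjunction; atomic
# predications such as ("E", "f") or ("T", "a", "r") contain no operators.

def reflection_level(term):
    """RS level: maximal chain of G/I operators embedded in each other."""
    if not isinstance(term, tuple):
        return 0
    head, *args = term
    deepest = max((reflection_level(arg) for arg in args), default=0)
    return deepest + 1 if head in ("G", "I") else deepest

signal    = ("causes", ("E", "f"), ("T", "a", "r"))
indicator = ("causes", ("E", "f"), ("G", "a", "p"))
gesture   = ("causes", ("E", "f"), ("G", "a", ("I", "b", ("T", "b", "g"))))
expressing = ("and", ("T", "b", "f"),
              ("I", "b", ("causes", ("E", "f"), ("G", "a", ("Z", "b")))))
indicator_of_expressing = ("causes", ("E", "f"), ("G", "a", expressing))

levels = [reflection_level(t) for t in
          (signal, indicator, gesture, expressing, indicator_of_expressing)]
print(levels)  # [0, 1, 2, 2, 3]
```

The computed levels reproduce the assignments in the text: signals sit at RS 0, indicators and actions at RS 1, gestures and expressings at RS 2, and an indicator of expressing at RS 3.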

Posner has successfully applied his measurement procedure in the analysis of the sign processes occurring in the self-presentation of humans and robots. For the rest of the present article, however, we refer to the levels of reflection up to communicating as they are laid out in Tab. 129.1 according to the transformational approach.

3. Communicating

The importance of language as a means of communication and the lack of adequate terms for categorizing sign processes belonging to levels 1a–2b (see Tab. 129.1) have paved the way for much misleading jargon in everyday discourse: pictures, faces, bodily expressions, as well as architecture are supposed to “speak”, and telegraphs as well as roads and pipes are said to “communicate”. Posner favors a more restricted


usage of these words. For him, speaking always presupposes language, and communicating involves a sender openly intending an addressee to do or believe something. This is not the case in sign processes belonging to levels 1a–2b (see Tab. 129.1), as can be seen in the kindergarten examples: when a sudden loud noise occurs in a room, the people present need not know where it comes from and cannot avoid hearing it and responding to it. Acoustic signals (1a II) and signalings (1b II) are often nothing more than manipulation: the youngest children in the kindergarten have no choice; the sudden noise startles them away from their play. The teacher, however, can mitigate this effect by indicating the noise before it comes: she can clap her hands silently before doing so loudly, or before having the bell ring. Indicating a signaling has different conditions of success from the signaling itself: successful indicating causes the recipients to believe something, while successful signaling causes them to behave in a certain way. Nevertheless, combining a low-level sender sign with indicating its occurrence is a step in the right direction. In order to show this, Posner (1993: 231–234) asks the reader to imagine the effects of varying loudness in the teacher’s handclapping: produced loudly, the handclapping in the kindergarten functions as a signaling which is fulfilled immediately, but the less loudly it is performed by the teacher, the more it is reduced to a message which informs the children of the teacher’s wishes but leaves them free to decide for themselves whether they want to fulfill them or not. Of course, the teacher only uses the strategy of silent handclapping if she is convinced that the message about her wishes will cause the children to fulfill them. As this analysis shows, communicating a request requires more than indicating the sender’s wishes (as presented in the upper half of the formula 2bcom II in Tab. 129.1).
It also requires the sender’s belief that through indicating them he can cause the wishes to be fulfilled by the addressee (as presented in the lower half of the formula 2bcom II in Tab. 129.1). In other words: when someone wants to communicate a request, two conditions must be satisfied: (i) indicating condition: a sender b produces an event f with the intention that f causes the recipient a to believe that b intends a to behave in a certain way r, and (ii) communication condition: b believes that this belief will cause a to behave in that way r. Communicating a request is called directive communicating. Its structure contains the structure of a simple signal, which is one of the basic sign types: E(f) J T(a, r); but it has undergone four transformations:

– by the sender sign transformation rule it has been converted into the structure of signaling (1b II): T(b, f) ∧ I(b, E(f) J T(a, r));

– by the recipient sign transformation rule this has been converted into the structure of an indicator of signaling (2a II): E(f) J G(a, T(b, f) ∧ I(b, E(f) J T(a, r)));

– by the sender sign transformation rule this has been converted into the structure of indicating a signaling (2b II): T(b, f) ∧ I(b, E(f) J G(a, T(b, f) ∧ I(b, E(f) J T(a, r))));

– by adding the communication condition, this has been converted into the structure of directive communicating (2bcom II).

What effect do these transformations have? They convert the simple causal process of a sign f making its recipient a perform a behavior r into a highly reflected configuration of beliefs, causes, and intentions, in which the sign f makes the recipient’s belief in the sender’s intention (that f cause r) cause the recipient’s fulfillment of that intention. In semiotic terms: instead of a signal, one produces an indicator of that signal in the belief that this indicator will itself function as that signal. Communicating a request thus becomes signaling by indicating that signaling. Instead of manipulating the recipient into performing the behavior r, the sender informs the recipient of that goal and leaves the rest to him. Brute force in the realization of an interactive goal is replaced by articulating that goal in some appropriate code.

4. Conclusion

Directive communicating is one of the five types of illocutionary acts which John Searle (1979) postulated in his analysis of verbal communication. He distinguishes declarations from directives, assertives, expressives, and commissives and claims that in all languages every utterance belongs to one of these types. Taking into account the present state of comparative linguistics, one can say that this claim has proved to be very useful even if it is both under- and overgeneralizing. By designating illocutionary acts as speech acts, Searle implies that illocutionary acts are a linguistic phenomenon. There are, however, acts of communication in all cultures which do not use language as a means of communication, such as emblematic gestures, music, picture presentation, etc. There are also language uses, such as children’s rhymes, text recitals, and speaking in one’s dreams, which are not illocutionary acts in the sense of Searle.


Posner’s classification of sign processes avoids problems such as these; it characterizes all processes that contain a basic sign structure on a certain level of reflection. As Tab. 129.1 shows on line 2bcom in columns I–V, Posner’s classification even takes account of Searle’s illocutionary act types. Due to the hierarchical relation between causal processes and the four basic sign types signal, indicator, expression, and gesture, Posner (1993: 238) can derive two general claims on the structure of his taxonomy: “(1) All commissives, assertives, expressives and directives are declarations, and (2) all communicating consists in an action performed by indicating that action.”

5. References

Posner, Roland 1993. Believing, causing, intending: The basis for a hierarchy of sign concepts in the reconstruction of communication. In: René J. Jorna, Barend van Heusden and Roland Posner (eds.), Signs, Search and Communication: Semiotic Aspects of Artificial Intelligence, 215–270. Berlin/New York: de Gruyter.

Posner, Roland 2000. The Reagan effect: Self-presentation in humans and computers. Semiotica 128(3–4): 445–486.

Searle, John R. 1979. A taxonomy of illocutionary acts. In: John R. Searle, Expression and Meaning: Studies in the Theory of Speech Acts, 1–29. Cambridge, UK: Cambridge University Press.

Ulrike Lynn, Chemnitz (Germany)

130. Gestures and iconicity

1. Introduction: The human body as icon and icon creator
2. Semiotic foundations of iconicity in gesture
3. Different kinds and degrees of iconicity in gesture
4. Concluding remarks
5. References

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1712–1732

Abstract

Iconicity has been shown to play a central role in both gestural sign formation and the interpretation of multimodal communicative acts. In bodily signs, iconic structures, no matter how abstract, partial, and sketchy they might be, may constitute the semiotic material on the basis of which rich inferences are drawn and meaning is made, and often co-constructed, in multimodal interaction. Focusing on communicative kinetic action spontaneously performed with speech, this chapter presents various embodied semiotic practices in which the speaker’s body serves as an icon of someone or something else, or the speaker’s hands create furtive gestural gestalts in the air. After laying out the relevant premises of Peirce’s theory, special attention is paid to the kinds of semiotic objects gestures may be iconic of, as well as to the pragmatic forces and conceptual structures that may jointly motivate gestural forms and functions. Issues of how iconicity and abstraction may be brought about and interact with indexicality and conventionality are also addressed. Along


the way, gestural examples exhibiting different kinds and degrees of iconicity (e.g., image, diagrammatic, and metaphor iconicity) are discussed.

1. Introduction: The human body as icon and icon creator

Iconic aspects of communicative body postures and movements are a central issue in gesture studies. The human body clearly has a natural potential for pictorial portrayal, that is, to be a living image, or icon (Peirce 1955), of someone or something else, or to manually trace or sculpt fictive images in the air. Yet questions of what exactly manual gestures may be iconic of and how iconicity is brought about in such ad hoc produced bodily signs are not trivial (e.g., Müller 1998a, b; Sweetser 2009). Mediating between our imagination and the physical and social world, gestures may generally fulfill a broad range of cognitive, communicative, and interactive functions. In most cases, they assume several functions at the same time. What have come to be commonly known as iconic gestures are bodily postures and movements representing concrete objects and actions (McNeill 1992). When considering a broad range of discourse genres, however, their semiotic functions seem to cover a wider spectrum of kinds and degrees of iconicity. It is true that pantomiming the use of a familiar tool, imitating the physical actions we remember a character performing in a film, or portraying the size of a fish we watched someone catch on the seashore are intuitive iconic practices (Arnheim 1969: 117). Nonetheless, hand shapes and movements may also readily create from scratch images of things, spaces, scenarios, ideas, and connections that we imagine for the first time and that may come into tangible existence by being further developed through the gestures we make while speaking. By sketching the floor plan of our dream house into the space in front of us, for instance, virtual spatial structures attain a certain semiotic reality sharable with interlocutors. The question here is whether what they denote is understood as something concrete or abstract.
If the main criterion is the content of the associated spoken discourse, a gesture accompanying a verbal description of a house, i.e. a concrete object, usually counts as iconic. A dream house and other products of one’s imagination seem not to fit the understanding of iconic gestures in the narrow sense. And although they are mental objects, they may feel real for the individuals designing them in their mind. If the gesture were laying out a theoretical framework, i.e. something abstract denoted in speech, it would be read as a metaphoric expression. In any event, for a short moment, the gesture constitutes a furtive and minimal material sign carrier on the basis of which the interpreter can imagine certain aspects of what is being talked about, thus making meaning of a multimodally achieved semiotic act. These initial observations have taken us from practices of gesture production to the interpreting mind, which is the perspective of semiotic analysis par excellence (e.g., Peirce 1960). As will be shown in this chapter, each perspective implies differently anchored facets of complex semiotic processes and contributes valuable insights into the forms and functions of iconicity in gesture. Semiotic and conceptual aspects of iconicity in gesture will be the focus of this chapter. (For an overview of research on the production and comprehension of iconic, representational and referential gestures see Mittelberg and Evola this volume.)

1.1. Points of departure and scope of article

Gesture researchers from different disciplines, including semiotics, linguistics, psychology, biology, and anthropology, have presented theoretical accounts and detailed descriptions


of the ways in which gestures such as the ones described above share iconic structure and other properties with the objects, events, or human actions they depict, and how they semantically relate to the propositional content of an unfolding utterance (e.g., Kendon 2004; McNeill 2000, 2005). Some approaches focus on the individual speaker (e.g., Calbris 1990); others make a point of showing how such gestures contribute meaningful components to dynamically evolving “contextures of action” often co-constructed by co-participants (Goodwin 2011: 182; see also e.g., Clark 1996; Enfield 2009; Kendon 2004; McNeill 2005; Murphy 2005). These and various other accounts have illuminated the semiotic and polyfunctional nature of what are typically rather schematic and evanescent gestural gestalts. Although many approaches seem to be based on Peircean (1955, 1960) semiotics, the terms iconicity and iconic gestures are not used uniformly (see Mittelberg and Evola this volume). According to Peirce (1960: 157; 2.276), “icons have qualities which resemble those of the objects they represent, and they excite analogous sensations in the mind”; they rely on a perceived similarity between the sign carrier and what it represents. Attempts to establish broader theoretical foundations of iconicity in gesture have only recently been undertaken (e.g., Andrén 2010; Enfield 2009; Fricke 2012; Mittelberg 2006, volume 1; Mittelberg and Waugh this volume). Related issues of similarity, analogy, reference, and representation are still a matter of debate (e.g., Fricke 2007, 2012; Lücking 2013); some gesture scholars are hesitant to apply the notion of iconicity to gesture at all (e.g., Streeck 2009). As for spoken language, the arbitrary versus motivated nature of linguistic signs has been an issue of continued controversy.
Drawing on Peirce’s notions of image and diagrammatic iconicity, Jakobson (1966) provided cross-linguistic evidence that iconicity operates at all levels of linguistic structure, not only in phonology (such as in onomatopoetic expressions) but also in the lexicon, morphology, and syntax (Jakobson and Waugh [1979] 2002). Indeed, a large body of research has confirmed that languages as well as discourses may be motivated in more direct or more abstract ways, e.g., by isomorphism (e.g., Givón 1985; Haiman 1980; Hiraga 1994; Simone 1995; Waugh 1992, 1993). Compared to fully coded sign systems such as spoken and signed languages, spontaneous gestures do not constitute an independent symbolic sign system. When producing gestures, speaker-gesturers do not select from a given form inventory of a system, in which some forms are more iconic than others, but create semiotic material each time anew. Questions at the heart of the matter concern, for instance, the conceptual, physical, material, and social principles that motivate gestural sign formation and use. One might further ask in what ways co-speech gestures exploit and create iconic structures differently from language, and how iconic modes interact with processes of conventionalization and grammaticalization (e.g., Calbris 1990; Kendon 2004; Müller 1998a; Sweetser 2009). Since co-speech gestures and signed languages share the same articulators and the same articulatory space, considering the semiotic work done by iconic principles in signed language is extremely insightful for students of gesture.
In sign language research, the treatment of iconicity has gone through several stages (e.g., Mandel 1977; Wilcox in press), and after an early emphasis on the conventional and symbolic nature of signs (e.g., Frishberg 1975), the role of iconic properties in sign formation, semantic structure, and discourse pragmatics has been attested across diverse signed languages (e.g., Bouvet 1997; Grote and Linz 2003; Kendon 1986; Perniss, Thompson, and Vigliocco 2010; Pizzuto and Volterra 2000; Taub 2001; Wilcox 2004). Focusing on spontaneous speech-accompanying gestures, the aim of this chapter is to present some of the theoretical concepts central to understanding and analyzing iconic


modes in gesture. The interaction of iconicity with metonymy, metaphor, and conventionality will also be addressed. In what follows, Peirce’s (1960) view of semiotic relations as well as the notions of object, representamen, ground, and interpretant are laid out in section 2; they will serve as the theoretical backbone against which the discussion of iconicity in gesture will evolve. Different kinds and degrees of iconicity in bodily signs and their relation to the concurrent speech are discussed and exemplified in section 3. Throughout the chapter, special attention will be paid to the specific affordances of coverbal gestures as a dynamic visuo-spatial medium. Concepts from cognitive semantics are also brought in to highlight the role of embodied schemata in gestural abstraction and predominantly iconic expression. Section 4 provides a summary and sketches possible avenues for further research.

1.2. A first example of iconicity in gesture

For a first example of iconicity in gesture, consider Fig. 130.1 below (taken from ArchRecord TV, January 5, 2011; Mittelberg 2012). In the sequence of interest here, the British architect Norman Foster describes an art gallery he recently designed: the Sperone Westwater Gallery in Manhattan. Its characteristic feature is a comparatively narrow and high, that is, vertical gestalt. While describing its spatial dimensions, Foster brings this enormous building down to human scale by employing his hands to demarcate three differently oriented chunks of space: a) on “it’s a twenty-five foot wide slot” (Fig. 130.1a), the hands are held a little more than shoulder-wide apart, with the open vertical palms not completely facing each other but slightly opening up, and the finger tips turned towards the interviewer and slightly outward, indicating the width of the building slot. Then, b) on “so it’s very tight” (Fig. 130.1b), the fingertips point upward and the hands are brought in a little. Finally, c) on “that means that it’s a vertical gallery” (Fig. 130.1c), the hands change into a new configuration: the gallery’s height is conveyed by the distance spanning between the right hand, with its open palm turned downward at hip level, thus representing the foundation of the gallery, and the left hand, also with the open palm facing down but located a little above head level, imitating the top of the building. In each panel below, the hands not only iconically represent the borders of this particular architectural space, but each gesture also exhibits incorporated indices evoking different orientations (e.g., Haviland 1993; Mittelberg and Waugh this volume).

Fig. 130.1a “It’s a twenty-five foot wide slot…

Fig. 130.1b … so it’s very tight …

Fig. 130.1c … that means that it’s a vertical gallery.”


Two aspects of these measurement gestures are striking. First, regarding the distance between the hands, the second gesture (Fig. 130.1b) seems incongruent with the attribute very tight, probably in part because the speaker’s body could still fit in between the hands representing the outer limits of the building. Second, compared to the first two gestures indicating the gallery’s width (Fig. 130.1a, b), the distance between the hands is not much bigger in the last gesture evoking its height (Fig. 130.1c). This sequence thus illustrates the subjective, approximate, and relative nature of gestural portrayals of this sort: relative not only with respect to the speaker’s body and personal gesture space, but in this case also with respect to the much wider museum buildings Foster has previously experienced as well as designed. In view of both the stimulus and the accompanying discourse, these gestures do not simply “represent” the geometry of the tower-like gallery (Fig. 130.2), but are the result of a subjective “take” on it, reflecting pragmatic forces of experience and sign formation. While gesture form annotation is not a central issue in this chapter, it is important to note that, to be able to analyze the functions of iconic gestures, their form features first need to be described in a systematic fashion, e.g., accounting for the involved postures, hand shapes, kinetic action, movement trajectories, the alignment with the synchronously produced speech segments, and, if possible, also with their prosodic contours (see e.g., Bressem volume 1; Calbris 1990; Hassemer et al. 2011; Kendon 2004; McNeill 2005; Mittelberg 2010; Müller 1998a).

Fig. 130.2: Sperone Westwater Gallery, Manhattan, NYC (‘stimulus’ of gestures shown in Fig. 130.1)

2. Semiotic foundations of iconicity in gesture

Revisiting Peirce’s notions of similarity and iconicity, this section aims to highlight the facets of gestural sign formation and interpretation they may account for. Adopting a wider semiotic perspective allows us to account for the interaction of a highly symbolic sign system, such as language, and visuo-spatial modalities, such as body posture and kinetic action. Gestures are differently iconic than spoken language, and in other ways also differently iconic than signed languages (e.g., Kendon 1988; Sweetser 2009; Wilcox in press). One needs to bear in mind that spontaneous co-speech gestures do not need to adhere to well-formedness conditions or to a symbolic code with given form-meaning mappings (e.g., McNeill 1992: 38). When produced with speech, kinetic actions do not need to be fully transparent and self-explanatory: schematic, polyvalent gestural forms usually do not carry the full load of communication, but receive
parts of their meaning from the concurrent speech content. By recreating something existing outside a given discourse context, or by creating something new within it, gestures contribute dynamic physical qualities to complex unfolding multimodal sign processes (see also Andrén 2010; Enfield 2009; Fricke 2007, volume 1; Mittelberg 2006, volume 1; Mittelberg and Waugh this volume; Müller 2010; inter alia). It is precisely gestures’ lack of the status of an independent symbolic sign system that has prompted gesture researchers to search for patterns of communicative behavior recurring within and across speakers, discourses, and communities, thus striving to find underlying core elements in terms of similar form features as well as shared cognitive and semantic structures and pragmatic functions (e.g., Kendon 2004 on gesture families; Müller 2004, 2010 and Ladewig this volume on recurrent gestures; Streeck 2009 on gesture ecologies; and Cienki 2005; Ladewig 2011; Mittelberg 2006, 2013, volume 1 on image and force schemata). This section is primarily concerned with Peirce’s sign model; different kinds of iconicity are discussed in section 3.

2.1. Similarity and other semiotic relations interacting in gestural sign processes

Peirce (1955, 1960) proposed a triad of semiotic relations between a sign carrier and the object it represents: similarity (iconicity), contiguity (indexicality), and conventionality or habit (symbolicity). While in principle all of these modes interact in a given gestural sign, we will focus here on similarity and iconicity. Peirce’s definition of the dynamic triadic sign process involves three elements:

A sign [in the form of a representamen] is something which stands to somebody for something in some respect or capacity. It addresses somebody, that is, creates in the mind of that person an equivalent sign, or perhaps a more developed sign. That sign which it creates I call the interpretant of the first sign. The sign stands for something, its object. (Peirce 1960: 135, 2.228; italics in the original)

A central premise of this model is that meaning does not reside in a given representamen, such as a word or a gesture, but arises in the form of interpretants, that is, cognitive representations evoked in the mind of the sign receiver. Interpretants link the representamen with an Object in the moment of perception and interpretation (when understood in the Peircean sense, the term object will henceforth be capitalized). So without an embodied interpreting mind there is no similarity (or any of the other fundamental relations), no semiosis, and no meaning (see Danaher 1998 and Mittelberg 2008, volume 1 on parallels with cognitive semantics). For example, when observing and listening to Norman Foster describing the dimensions of the Sperone Westwater Gallery (Fig. 130.1) in words and gestures (i.e. representamina), the addressee creates a composite mental image (i.e. interpretants) of the building (i.e., the Object) based on this multimodal semiotic material. The cognitive representations and sensorial associations triggered in the process will differ depending on whether or not s/he has actually seen and experienced this particular space. As mentioned in the introduction, according to Peirce (1960: 157; 2.276), “icons have qualities which resemble those of the objects they represent, and they excite analogous sensations in the mind”. The term icon rests on a multimodal understanding of similarity, including sensations in the interpreter’s mind that to her/him make something look, feel, taste, smell, sound, or move like something else. Recognizing qualities, shapes, rhythmic
patterns, and larger structures is assumed to be driven by our quest for familiarity and meaning when exposed to both habitual and new perceptual data and social experiences (Arnheim 1969; Johnson 2007). Detecting sensory similarities and cross-sensory correspondences is a physical and cognitive process at the root of embodied categories, one that has also been shown to motivate gestural expression (e.g., Cienki and Müller 2008; Mittelberg volume 1). In gesture, as in any other kind of sign, the understanding that the interpretation of a sign carrier as being iconic of something else relies on a perceived similarity with its Object may play out differently depending on whether it pertains to processes of sign production or sign interpretation. First, when giving a multimodal description of one’s dream house, the gestural sketch one draws in the air can be assumed to exhibit some perceived, and felt, qualities in common with the multisensorial imagery in one’s mind (see Gibbs 2006; Johnson 2007). Gestures take part in the cross-modal encoding of structure and meaning, thus driving associations and reflecting the speaker-gesturer’s subjective conceptualization of, e.g., previous experiences or new visions (e.g., Cienki 2012; Sweetser 2012). So while entrenched categorical and image-schematic structures may motivate such construal operations to some degree, in imaginative processes they may also trigger inferences allowing the speaker to move in unexpected directions and see new connections (e.g., Cienki and Mittelberg 2013). Second, on the basis of the multimodal description a mental representation arises in the mind of the listener-observer, invoking, among other dimensions, similarity relations with respect to floor plans and houses s/he previously encountered. Prototypical members of the category house may serve as cultural models against which a particular (sketchy) gestural image may be matched.
We can say then that similarity may be perceived and construed on both sides of the sign process. Although the similarity between the bodily actions we observe in others and our own perceptual and physical habits may influence how we cognitively, physically, and emotionally align with our interlocutors, similarity is only one way to understand the intentions and meaning behind the communicative behavior of others. Contiguity relations between the communicating body and its material and social habitat also play an important role in sensing and interpreting the meaning of bodily signs (see Mittelberg and Waugh this volume on contiguity and metonymy in gesture). And while gestures are not fully coded signs, conventionality and habit come into play in the form of movement patterns, action schemas, conceptual categories, and socio-culturally shaped behaviors. These may account for certain iconic patterns in gesture and the ways in which they are recognized, imitated, and learned during development (for work on language acquisition see, e.g., Zlatev 2005 on mimetic schemas and Andrén 2010 on action gestalts). A number of gesture researchers have suggested extensions or alternative accounts of the Peircean notion of similarity, drawing on, for instance, Goodman’s ([1968] 1976) notion of exemplification and Wittgenstein’s (1953) notion of family resemblance (see Fricke 2007, 2012; Jäger, Fehrmann, and Adam 2012; Lücking 2013; Streeck 2008, 2009).

2.2. Semiotic Objects: Concrete entities, physical actions, and beyond

Before teasing apart the distinct ways in which gestures may be said to be iconic, we will first look more closely at what they might be iconic of. Peirce’s understanding of what a semiotic Object can be is extremely wide and ranges from existing to non-existing things: it encompasses both concrete and abstract entities, including possibilities, goals, qualities, feelings, relations, concepts, mental states, or ideas (Kockelman 2005). Essentially, anything can be an Object, as long as it is represented by a sign (Shapiro 1983: 25). That is, a gesture may also function as the Object in a subsequent (gestural) sign process. The nature and properties of the Object further determine, according to Peirce, the sign, which may account for the fact that certain kinds of gestures can be expected to occur more frequently in certain discourses about certain topics and of a certain genre (e.g., iconic gestures representing motion events in retellings of animated cartoons; see, e.g., McNeill 2005; Mittelberg and Evola this volume). To account for the multifaceted meaning-making processes in gesture production and interpretation, Peirce’s distinction between the dynamic Object and the immediate Object is particularly insightful: “the dynamic object is the object that determines the existence of the sign; and the immediate object is the object represented by the sign. Immediate objects only exist by virtue of the signs that represent them; whereas dynamic objects exist independently of the signs that stand for them” (Kockelman 2005: 246). Coming back to the art gallery example, we can say that the dynamic Object, i.e. the particular building talked about, exists in New York City irrespective of any act of semiotic representation, be it a multimodal description (as in Fig. 130.1) or a photograph (as in Fig. 130.2). In the addressee’s mind the interpretant links the gestural and linguistic sign carriers to the immediate Object, which only resides inside the sign relation (Peirce 1960, 8.314; see also Sonesson 2007, 2008). Even if the addressee has experienced the Object, i.e. the gallery, previously, this particular description may evoke first of all those aspects of it that are made salient by this measurement gesture. In any event, the dynamic Object remains unattainable for the interpreter.
In each multimodal communicative process, the original dynamic Object thus differs from the immediate Object established by the interpreter (see also Fricke 2007). Misunderstandings may arise when the gap between the two becomes too large. This leads us to conclude that in interpretative processes, perceived similarity can only pertain to the relation between the representamen and its immediate Object, not between the representamen and the dynamic Object. It seems obvious that a gestural sign does not necessarily seize an Object that exists in the real world, or the way an Object exists in the real world. A gesture might evoke certain aspects of, for example, the speaker’s furtive memory of a room, person, or color, or her understanding of an abstract category. Gesture research done within the framework of cognitive linguistics has evidenced ways in which gestures seem to be motivated by embodied conceptual structures, such as prototypes (Rosch 1977), image schemas (Johnson 1987), frames (Fillmore 1982), mental simulation (Gibbs and Matlock 2008), and metaphors (Lakoff and Johnson 1980). Indeed, some of the nonphysical Objects listed above remind us of common target domains of conceptual metaphors (e.g., Cienki 2012; Cienki and Müller 2008; Mittelberg volume 1; Parrill and Sweetser 2004; Sweetser 1998). Some gesture scholars distinguish gestures that carefully describe a specific, existing space or object, such as the house one lives in or a tool one has used many times, from those gestures that seem to reflect a thought process or an understanding evolving as one speaks (see Fricke 2007; McNeill 1992; Müller 2010). Streeck (2009: 151), for instance, introduced two gestural modes: depicting (e.g., via an iconic gesture portraying a physical object) and ceiving (i.e. via a gesture conceptualizing a thematic object).
He attributes the latter mode to a more self-absorbed way of finding a gestural image for an emerging idea (e.g., on the basis of an image schema): “When ‘they think with their hands’, speakers rely on their bodies to provide conceptual structure” (Streeck 2009: 152). We will now home in on Peirce’s concept of the ground of a sign carrier.

2.3. Grounded abstraction and mediality effects in co-speech gestures

Peirce’s concept of the ground of a sign carrier, i.e. of the representamen, accounts for the fact that sign vehicles do not represent Objects with respect to all of their properties, but only with regard to some salient qualities. These foregrounded, signifying features function as the ground of the representamen. Principles of abstraction already operate at this level of the semiotic process. In Peirce’s own words (1960: 135, 2.228; italics in the original):

The sign stands for something, its object. It stands for that object, not in all respects, but in reference to some sort of idea, which I sometimes called the ground of the representamen. “Idea” is here to be understood in a sort of Platonic sense, very familiar in everyday talk; I mean in that sense in which we say that one man catches another man’s idea.

The ground may thus be understood as a metonymically profiled quality of an Object (e.g., the width of a building, see Fig. 130.2) portrayed by a representamen (e.g., open hands facing each other, see Fig. 130.1b). As Sonesson (2007: 47) notes, “Peirce himself identifies ‘ground’ with ‘abstraction’ exemplifying it with the blackness of two things.” While the partiality of representation is commonly assumed, distinct semiotic grounding mechanisms may elucidate different ways in which abstraction may be brought about in co-speech gestures (Sonesson 2007: 40; see also Ahlner and Zlatev 2010). We will focus here on gestures with a predominantly iconic ground (see Mittelberg and Waugh this volume for gestures with either predominantly iconic or indexical ground as well as transient cases). A sign with a highly iconic ground gives a partial, that is, a metonymically abstracted image of its Object based on a perceived or construed similarity. For an illustration, consider the following multimodal description of a childhood memory (adapted from Mittelberg, Schmitz, and Groninger in press). The speaker describes, in German, how every morning on her way to kindergarten she would run down the endless winding stairs in her house. In her gestural portrayal, her left index finger pointing downward draws a spiral-like gestural trace starting at eye level and winding down and around six times until reaching hip level (Fig. 130.3). While she seems to be watching the event from the top flight of the house, her index finger becomes an abstract image icon of her body imitating the action of walking down the stairs in circles. She then uses both her hands to draw two vertical lines from the imagined ground level to the top, indicating the shape of what to her felt like a tower-shaped building. To create visible and lasting gestalts of gestural motion, the normally invisible traces created by the movement of the speaker’s left hand were tracked with the help of an optical tracking system and then processed and plotted, which resulted in the motion event sculpture shown in Fig. 130.4. Hence, an immaterial memory became a visible and tangible object (produced by a 3-D printer). From the perspective of the sign producer, the dynamic Object is a frequently undergone motion event consisting of running down a spiral staircase. It manifests itself in the form of a dynamic, evanescent representamen (i.e. a spiral-like gestural gestalt) distilling schematic core features of a rich experience, bringing to light both the path and manner of this particular kinetic action routine. The spiral foregrounds those qualities of the Object that function as the ground of the representamen, abstracting away a host of contextual aspects.

Fig. 130.3: Index finger reenacts speaker’s walking down a spiral staircase

Fig. 130.4: Motion event sculpture: Childhood memory of running down the stairs, © Natural Media Lab & Dept. of Visual Design, RWTH Aachen University 2013

In this gesture the ground can also be qualified as iconic in that it evokes the idealized image-schematic structure underlying the motion event as a whole (see also, e.g., Cienki 2005 and Mittelberg 2010, 2013 on image schemas in gesture and Freyd and Jones 1994 on the spiral image schema). As addressees who have neither seen the speaker’s action of rushing down the stairs nor the staircase in question, we can imagine, based on the schematic gestural depiction, the essential traits of the spiral motion event and the type of architecture lending the material structure along which it unfolded many times. In this subjective, multimodal performance act, the message or idea in the Peircean (1960: 135) sense, that to the speaker the stairs seemed endless and the house enormously high, comes across quite effectively (in fact, this gestural gestalt expresses the idea of a tower-like building more vividly than the architect’s gestures shown in Fig. 130.1). In light of the assumed partiality of perception and depiction, perspective appears to be a decisive factor in these processes. Whether a sign producer adopts, for instance, character or observer viewpoint will influence which aspects of the Object get profiled metonymically and thus constitute the ground of the representamen. In the moment captured in Fig. 130.3, the portrayal simultaneously reflects observer viewpoint (the speaker seems to be looking down the staircase) and character viewpoint (the index
finger represents her walking down the stairs); it is thus an example of dual viewpoint (see McNeill 1992; Parrill 2009; Sweetser 2012). Iconicity and salience in gesture may further be brought about by different kinds of gestural practices. Drawing on the tools, media, and mimetic techniques visual artists employ, Müller (1998a: 114–126; 1998b: 323–327) introduced four modes of representation in gesture: drawing (e.g., tracing the outlines of a picture frame); molding (e.g., sculpting the form of a crown); acting (e.g., pretending to open a window); and representing (e.g., a flat open hand stands for a piece of paper). If one applied these modes to the same Object, each of them would establish a different kind of iconic ground and hence highlight different features of the Object. Put differently, each portrayal would convey a different idea of the Object (Peirce 1960: 135; 2.228). These observations suggest that mediality effects may be rooted in the fact that linguistic and gestural modalities have different potentials to create certain types of grounds; they may portray, or abstract, certain kinds of qualities or ideas more readily and effectively than others.

2.4. The Interpretant in gesture interpretation and production

According to Peirce, the perspective of the sign interpreter weighs more than the sign producer’s: “In terms of the dynamics of signification, the concept of the ‘interpretant’ remains uppermost” (Corrington 1993: 159). Peirce distinguishes between the immediate, the dynamic, and the final interpretant (for details see Enfield 2009; Fricke 2007; Mittelberg 2006). His conception of the interpretant may account for the idiosyncrasy of individual minds with different semiotic histories, e.g., for specific stages in life or areas of expertise, which influence the recognition and interpretation of signs. Importantly in view of co-speech gestures, interpretants have a tendency for semiotic augmentation, that is, to develop into a more developed sign, for instance through multimodal integration (e.g., Fricke 2012; Mittelberg 2006).

We must remember that the interpretant is the mature sign that has already been augmented (and hence has greater semiotic density than the representamen). The interpretant is always underway toward further interpretants and seems to ‘hunger’ to link up with larger units of meaning. (Corrington 1993: 159)

Fricke (2007: 193–195, 2012) calls interpretant gestures those kinds of gestural signs that, for instance, based on a previous act of interpretation in an ongoing discourse, reflect prototypical members of a given category rather than specific exemplars. Whereas the discussion of iconicity in gesture often focuses on sign-object relations, the concept of the interpretant may illuminate mechanisms of both gesture reception and production. This insight further reinforces the importance of embodied conceptual structures for bodily communication (see also Cienki and Müller 2008; Evola 2010; Gibbs 2006; Mittelberg volume 1).

3. Different kinds and degrees of iconicity in gesture

Peirce distinguishes three subtypes of icons, which may interact to varying degrees in dynamic semiotic gestalts: images, diagrams, and metaphors:

Those [icons] which partake of simple qualities […] are images; those which represent the relations, mainly dyadic, or so regarded, of the parts of one thing by analogous relations in
their parts, are diagrams; those which represent the representative character of a representamen by representing a parallelism in something else, are metaphors. (Peirce 1960: 157; 2.277)

Regardless of which of these iconic modes may be predominant, iconic gestural portrayals tend to be inherently metonymic (Bouvet 2001; Mittelberg 2006; Mittelberg and Waugh this volume; Müller 1998a). Gestural imagery often consists only of schematic figures, minimal motion onsets, or sketchy articulations with a short temporal permanence. In unfolding discourses there is just enough time to offer quick gestural glimpses at essential aspects and qualities of what is being talked about and perhaps not easily conveyable in speech. As is generally the case with icons, to fulfill their functions, bodily icons need to be anchored in a semiotic and physical context through various kinds of indices (e.g., Haviland 1993; Mittelberg and Waugh this volume; Sweetser 2012) and tend to rely on different kinds of conventionality. In his work on children’s gestures, Andrén (2010: 219) makes a distinction between “natural transparency and convention-based transparency in iconic gestures,” thus accounting for socio-cultural practices and processes of conventionalization (see also Sonesson 2008 on primary and secondary iconicity and Zlatev 2005 on mimetic schemas). Bodily icons further exhibit different degrees of iconicity and semiotic substance. As a first approximation, we can broadly distinguish between several kinds of physical representamina with predominantly iconic ground: first, those in which the speaker’s entire body functions as an icon by imitating a particular posture or kinetic action (of her/himself or someone else); second, those in which body parts, such as the speaker’s arms and hands, iconically represent an object or action; third, invisible figurations such as lines and volumes taking shape in gesture space as a result of manual actions. The latter are icons in their own right and may, as such, represent a person, object, concept, and so forth.
As soon as gesturing hands seem to be manipulating contiguous objects, tools, or surfaces (not iconically represented), contiguity relations, and thus indices, come to the fore (see Mittelberg and Waugh this volume for sub-types of icons and indices correlating with distinct contiguity relations and metonymic principles). In principle, these different kinds of physical semiotic material may partake in the three sub-types of iconicity devised by Peirce, of which gestural examples will be discussed next.

3.1. Image iconicity

As a large body of research has shown, image iconicity may take shape in various forms and degrees in bodily signs (e.g., Duncan, Cassell, and Levy 2007; Kendon 2004; McNeill 1992, 2000, 2005; Mittelberg and Evola this volume). Bouvet (1997: 17), for instance, describes a full-body image icon produced by a little boy who pretends to be a helicopter in action by rotating his arms around the axis of his body. Müller (1998a: 123) describes a flat palm-up open hand representing a piece of paper; and in Fig. 130.3 we saw an index finger reenacting the speaker’s action of walking down the stairs. In Fig. 130.5a (adapted from Mittelberg, Schmitz, and Groninger in press), an architecture student employs his arms and hands (i.e. “body segments” according to Calbris 1990: 44) to evoke the lighting in a hallway of the Cultural Institute of Stockholm (designed by Giò Ponti; see Fig. 130.5b). Exploring the epistemic and creative potential of gestures, the student decided not to use speech in his description. When comparing the gestural portrayal to the scene captured in the photograph, we see that the student’s hands become the window openings in the ceiling of the hallway and his arms (the reflections of) the light beams falling in from above. Through the physical presence of the arms and hands, this gesture has an increased degree of iconicity, reflecting the tranquil quality of the light entering the building. Due to the way in which Norman Foster uses his hands to demarcate the spatial dimensions of the Sperone Westwater Gallery, the three gestures shown in Fig. 130.1a–c also qualify as image icons, though compared to the gestures shown in Fig. 130.5a, Foster’s portrayal exhibits a lesser degree of iconicity. Gestural representamina consisting of manually traced virtual lines, figurations, or otherwise created planes or volumes that emerge from the gesturing hands obviously do not have much material substance. Once they are completed, they constitute, no matter how sketchy and evanescent they might be, independent iconic signs.

Fig. 130.5a: Image iconicity (light beams)

Fig. 130.5b: Hallway in the Cultural Institute of Stockholm
Hands or fingertips are often observed to draw an entity’s shape in the air, for instance the panels of a rectangular picture frame as described by Müller (1998a: 119). As discussed earlier, in the gesture portraying a childhood memory (Fig. 130.4), the speaker’s hand traces the structural core of a motion event. It is thus an example of kinetic action evoking an underlying abstract event, image, or force schema (e.g., Cienki 2005; Mittelberg 2006, 2010, 2013, volume 1; Sweetser 1998).

3.2. Diagrammatic iconicity

Body diagrams may also manifest themselves in various forms. To begin with, the body in and of itself may be regarded as a diagrammatic structure consisting of parts that may get profiled against the whole. Gestural graphs and diagrams drawn into the air are schematic representations that bring out the internal structure of a gestalt by highlighting the junctures between its parts or how the elements are related to one another. In Peirce’s own words, icons “which represent the relations, mainly dyadic, […] of the parts of one thing by analogous relations in their own parts, are diagrams” (Peirce 1960: 157; 2.277; see also Stjernfelt 2007). For a relatively solid example of a gesture manifesting a dyadic relation, consider Fig. 130.6 (adapted from Mittelberg and Waugh 2009). In this multimodal teaching performance, a linguistics professor explains the basics of noun morphology. He complements the verbal part of his utterance, as speakers of English you know that … teacher consists of teach- and -er, with a composite gesture, whose internal structure is of particular relevance. Both of his hands show the palms turned upwards and the fingertips curled in. On the mention of teach- he brings up his left hand and immediately thereafter, on the mention of -er, his right hand (see Fig. 130.6). This cross-modally achieved process of meaning construction is complex in that it not only involves a diagrammatic structure, but also a metaphorical projection. It thus qualifies as a diagrammatic metaphor icon. If we take the left hand to represent the morpheme teach- and the right hand the morpheme -er, each sign itself involves a reification of an abstract linguistic unit, or a speech sound, which through a metaphorical projection gets construed as a physical object (ideas are objects; Lakoff and Johnson 1980). We could also assume the hands to be enclosing small imaginary items, in which case the invisible items would need to be metonymically inferred from the perceptible containers. In both interpretations, this bimanually evoked diagram puts into relief the boundary between the two components, while also accentuating the fact that the linguistic units mentioned in speech are connected on a conceptual level. Neither is the idea of a diagram mentioned in speech, nor is the speech figurative.

Fig. 130.6: Diagrammatic iconicity (teach-er)
Yet, the body evidences conceptual structures and processes (for diagrammatic iconicity in gesture see also Enfield 2003, 2009; Fricke 2012; Mittelberg 2006, 2008, volume 1). This gestural diagram is also an instantiation of isomorphism (e.g., Calbris 1990; Fricke 2012; Givón 1985; Lücking 2013; Mittelberg 2006; Waugh 1992).

3.3. Metaphor iconicity

With the help of Peirce’s iconic modes, one may differentiate gestural image icons of metaphoric linguistic expressions from gestural metaphor icons which manifest a metaphoric construal not expressed in speech. The former case is also referred to as a multimodal metaphor and the latter as a monomodal metaphor (e.g., Cienki and Müller 2008; Müller and Cienki 2009). Mittelberg and Waugh (this volume) describe the following example of a gestural icon cued by a linguistic metaphoric expression (i.e., a multimodal metaphor). A linguistics professor, lecturing about sentence structure, refers to a sentence as a string of words. On the mention of a string of words, she traces an invisible horizontal line in the air that evokes the idea of a string. In cognitive linguistic terms, the sentence is the target domain and the string the source domain of the underlying metaphorical mapping; the source meaning is taken literally in the gesture modality. Hence, conceptual structure is mediated in the form of a sketchy physical structure (e.g., McNeill 1992; Sweetser 1998). We will now consider a metaphor icon that corresponds to a monomodal metaphor. In the following sequence (Fig. 130.7), the linguistics professor explains the difference between main verbs and auxiliaries. When saying there is … what’s called the main verb, he points with his right hand to the verb form “taught” written on the blackboard behind him, thus contextualizing the deictic expression there is. While holding the deictic gesture and saying the main verb, he forms with his left hand a cupped palm-up open hand imitating the form of a small round container. The strongly iconic ground of this representamen portrays some of the prototypical structural characteristics of a small bowl-like container. This iconic form does not directly represent the idea of a main verb mentioned in speech. Following Peirce, the cupped hand represents “a parallelism” (1960: 157; 2.277) between a category and a cup-like container; for a moment the hand actually becomes a container. The point here is that the container-like gesture adds a metaphoric dimension to this multimodally performed explanation, thus manifesting the speaker’s understanding of the main verb as a physical entity. The manual container serves as the source domain of the conceptual metaphor categories are containers (Lakoff and Johnson 1980). In such metaphor icons, metaphoric understandings of basic linguistic units and categories are expressed monomodally: whereas the speech is technical and non-metaphorical, the gesture modality evidences a metaphorical construal. Speech-independent metaphor icons like this one and the diagram of morphological structure discussed above (Fig. 130.6) thus reveal, or “exbody” (Mittelberg volume 1: 750), the speakers’ embodied conceptualization of abstracta in physical terms (see also Cienki and Müller 2008; Evola 2010; McNeill 1992; Müller and Cienki 2009; Parrill and Sweetser 2004). Iconic and metaphoric modes further tend to interact with indexical and metonymic principles. In the interpretation of metaphoric gestures, metonymy may, according to Mittelberg and Waugh (2009, this volume), lead the way into metaphor (see also Taub 2001; Wilcox in press).

Fig. 130.7: Metaphor iconicity (the main verb; left hand)

4. Concluding remarks

The observations made throughout this chapter have confirmed that getting at the meaning of predominantly iconic gestures is not simply a matter of reference. To express their ideas and inclinations or solve communicative challenges, speakers may through their gestures relive experiences or create new meaningful semiotic material which in an evolving discourse may take on a life of its own. Instead of referring to something in the outside world, gestures may in certain moments actually be the world:

It is through my body that I understand other people, just as it is through my body that I perceive 'things'. The meaning of a gesture thus 'understood' is not behind it, it is intermingled with the structure of the world outlined by the gesture. (Merleau-Ponty 1962: 216)

It seems that the structure of the world metonymically outlined, or profiled, by a given gesture may be shaped by at least these different, interrelated kinds of iconic structure:


physical, semiotic, and conceptual. All of these exhibit conventionality to lesser or greater degrees and rely on indices to unfold their meaning. Kinetic posture and action may put into relief the morphology of the human body and its movements and/or the spatial structures and physical objects humans interact with in their daily lives. Gestures and full-body enactments may also evoke essential qualities of experience brought to bear via, for instance, event and image-schematic structures and metaphor. Embodied action and image schemata have been shown to feed into gestural conceptualization on the side of the gesturer and to also guide the recognition and interpretation of furtive and schematic iconicity in gestures on the side of the addressee. They may also be the basis of metonymic inferences and metaphoric projections necessary to make meaning out of dynamically emerging semiotic gestalts (e.g., Cienki 2012; Mittelberg and Waugh this volume; Müller and Tag 2010).

Given the multifaceted forms and functions that gestures may assume in various communicative situations and socio-cultural contexts, there remains a lot to be said about the different kinds of cognitive-semiotic and pragmatic principles that drive crossmodal processes of meaning-making in the here and now of the speech event as well as in gradual processes of conventionalization and grammaticalization. It seems worthwhile to bring into the picture additional theoretical approaches that might account for certain properties and functions of gestures differently than Peircean accounts of similarity and iconicity can (see Fricke 2012; Lücking 2013; Streeck 2009). It also seems crucial to further examine how gestural icons are indexically grounded in their semiotic, material and social environment (e.g., Streeck, Goodwin, and LeBaron 2011; Sweetser 2012).
Gestures, like any other medium, do not simply imitate or reproduce the speaker's inner or outer world; they participate in the encoding and structuring of experience as well as in associative and creative processes. One particularly promising avenue for future research is to further investigate what kinds of mediality effects result from what kinds of motivating and constraining forces in both co-speech gestures and signed languages.

Acknowledgements

The author wishes to thank Jacques Coursil, Vito Evola and Daniel Schüller for valuable input, as well as Viktor Gatys, Hannah Groninger, Anna Kielbassa, Marlon Meuters, Patrick Pack, and Thomas Schmitz (RWTH Aachen Department of Visual Design) for their collaboration on the motion capture study in the Natural Media Lab. Special thanks to Yoriko Dixon for providing the gesture drawings. The preparation of the article was supported by the Excellence Initiative of the German State and Federal Governments during a research stay at the Excellence Cluster TOPOI (Humboldt-Universität zu Berlin).

5. References

Ahlner, Felix and Jordan Zlatev 2010. Cross-modal iconicity: A cognitive semiotic approach to sound symbolism. Sign Systems Studies 38(1/4): 298–348.
Andrén, Mats 2010. Children's Gestures from 18 to 30 Months. Lund: Centre for Languages and Literatures, Lund University.
Arnheim, Rudolf 1969. Visual Thinking. Berkeley: University of California Press.
Bouvet, Danielle 1997. Le Corps et la Métaphore dans les Langues Gestuelles: A la Recherche des Modes de Production des Signes. Paris: L'Harmattan.


Bouvet, Danielle 2001. La Dimension Corporelle de la Parole. Les Marques Posturo-Mimo-Gestuelles de la Parole, leurs Aspects Métonymiques et Métaphoriques, et leur Rôle au Cours d'un Récit. Paris: Peeters.
Bressem, Jana volume 1. A linguistic perspective on the notation of form features in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1079–1098. Berlin/Boston: De Gruyter Mouton.
Calbris, Geneviève 1990. The Semiotics of French Gesture. (Advances in Semiotics.) Bloomington: Indiana University Press.
Cienki, Alan 2005. Image schemas and gesture. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 421–442. Berlin: Mouton de Gruyter.
Cienki, Alan 2012. Gesture and (cognitive) linguistic theory. In: Rosario Caballero Rodríguez and M. J. P. Sanz (eds.), Ways and Forms of Human Communication, 45–56. Cuenca: Ediciones de la Universidad de Castilla-La Mancha.
Cienki, Alan and Irene Mittelberg 2013. Creativity in the forms and functions of gestures with speech. In: Tony Veale, Kurt Feyaerts and Charles Forceville (eds.), Creativity and the Agile Mind: A Multi-Disciplinary Study of a Multi-Faceted Phenomenon, 231–252. Berlin: Mouton de Gruyter.
Cienki, Alan and Cornelia Müller (eds.) 2008. Metaphor and Gesture. Amsterdam/Philadelphia: John Benjamins.
Clark, Herbert H. 1996. Using Language. Cambridge: Cambridge University Press.
Corrington, Robert S. 1993. An Introduction to C.S. Peirce: Philosopher, Semiotician, and Ecstatic Naturalist. Lanham, MD: Rowman and Littlefield.
Danaher, David S. 1998. Peirce's semiotic and cognitive metaphor theory. Semiotica 119(1/2): 171–207.
Duncan, Susan, Justine Cassell and Elena T. Levy (eds.) 2007. Gesture and the Dynamic Dimension of Language. Amsterdam: John Benjamins.
Enfield, N. J. 2003. Producing and editing diagrams using co-speech gesture: Spatializing non-spatial relations in explanations of kinship in Laos. Journal of Linguistic Anthropology 13: 7–50.
Enfield, N. J. 2009. The Anatomy of Meaning: Speech, Gestures, and Composite Utterances. Cambridge: Cambridge University Press.
Evola, Vito 2010. Multimodal cognitive semiotics of spiritual experiences: Beliefs and metaphors in words, gestures, and drawings. In: Fey Parrill, Vera Tobin and Mark Turner (eds.), Form, Meaning, and Body, 41–60. Stanford: CSLI Publications.
Fillmore, Charles J. 1982. Frame semantics. In: Linguistic Society of Korea (ed.), Linguistics in the Morning Calm, 111–137. Seoul: Hanshin.
Freyd, Jennifer and Kristine Jones 1994. Representational momentum for a spiral path. Journal of Experimental Psychology: Learning, Memory, and Cognition 16: 1107–1117.
Fricke, Ellen 2007. Origo, Geste und Raum – Lokaldeixis im Deutschen. Berlin/New York: De Gruyter Mouton.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin: Mouton de Gruyter.
Fricke, Ellen volume 1. Towards a unified grammar of gesture and speech: A multimodal approach. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 733–754. Berlin/Boston: De Gruyter Mouton.
Frishberg, Nancy 1975. Arbitrariness and iconicity: Historical change in American Sign Language. Language 5(1): 676–710.
Gibbs, Raymond W., Jr. 2006. Embodiment and Cognitive Science. New York: Cambridge University Press.


Gibbs, Raymond W., Jr. and Teenie Matlock 2008. Metaphor, imagination, and simulation: Psycholinguistic evidence. In: Raymond W. Gibbs (ed.), The Cambridge Handbook of Metaphor and Thought, 161–176. Cambridge: Cambridge University Press.
Givón, Talmy 1985. Iconicity, isomorphism, and non-arbitrary coding in syntax. In: John Haiman (ed.), Iconicity in Syntax, 187–219. Amsterdam: John Benjamins.
Goodman, Nelson 1976. Languages of Art: An Approach to a Theory of Symbols. 2nd ed. Indianapolis: Hackett. First published [1968].
Goodwin, Charles 2011. Contextures of action. In: Jürgen Streeck, Charles Goodwin and Curtis LeBaron (eds.), Embodied Interaction: Language and the Body in the Material World, 182–193. Cambridge: Cambridge University Press.
Grote, Klaudia and Erika Linz 2003. The influence of sign language iconicity on semantic conceptualization. In: Wolfgang G. Müller and Olga Fischer (eds.), From Sign to Signing: Iconicity in Language and Literature 3, 23–40. Amsterdam: John Benjamins.
Haiman, John (ed.) 1985. Iconicity in Syntax. Amsterdam: John Benjamins.
Hassemer, Julius, Gina Joue, Klaus Willmes and Irene Mittelberg 2011. Dimensions and mechanisms of form constitution: Towards a formal description of gestures. Proceedings of the GESPIN 2011 Gesture in Interaction Conference. Bielefeld: ZiF.
Haviland, John 1993. Anchoring, iconicity and orientation in Guugu Yimithirr pointing gestures. Journal of Linguistic Anthropology 3(1): 3–45.
Hiraga, Masako 1994. Diagrams and metaphors: Iconic aspects in language. Journal of Pragmatics 22(1): 5–21.
Jäger, Ludwig, Gisela Fehrmann and Meike Adam (eds.) 2012. Medienbewegungen: Praktiken der Bezugnahme. München: Fink.
Jakobson, Roman 1966. Quest for the essence of language. In: Linda R. Waugh and Monique Monville-Burston (eds.), Roman Jakobson: On Language, 407–421. Cambridge, MA: Harvard University Press.
Jakobson, Roman and Linda R. Waugh 2002. The Sound Shape of Language. 3rd ed. Berlin/New York: Mouton de Gruyter. First published [1979].
Johnson, Mark 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. Chicago: Chicago University Press.
Johnson, Mark 2007. The Meaning of the Body: Aesthetics of Human Understanding. Chicago: University of Chicago Press.
Kendon, Adam 1986. Iconicity in Warlpiri sign language. In: Paul Bouissac, Michael Herzfeld and Roland Posner (eds.), Iconicity: Essays on the Nature of Culture. Festschrift für Thomas Sebeok zu seinem fünfundsechzigsten Geburtstag, 437–446. Tübingen: Stauffenburg Verlag.
Kendon, Adam 1988. Sign Languages of Aboriginal Australia: Cultural, Semiotic, and Communicative Perspectives. Cambridge: Cambridge University Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kockelman, Paul 2005. The semiotic stance. Semiotica 157(1/4): 233–304.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6.
Ladewig, Silva H. this volume. Recurrent gestures. In: Cornelia Müller, Ellen Fricke, Alan Cienki, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1558–1574. Berlin/Boston: De Gruyter Mouton.
Lakoff, George and Mark Johnson 1980. Metaphors We Live By. Chicago: Chicago University Press.
Lücking, Andy 2013. Ikonische Gesten: Grundzüge einer linguistischen Theorie. Berlin: Mouton de Gruyter.
Mandel, Mark 1977. Iconic devices in American Sign Language. In: Lynn Friedman (ed.), On the Other Hand: New Perspectives on American Sign Language, 57–107. New York: Academic Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: Chicago University Press.
McNeill, David 2005. Gesture and Thought. Chicago: Chicago University Press.


McNeill, David (ed.) 2000. Language and Gesture. Cambridge: Cambridge University Press.
Merleau-Ponty, Maurice 1962. Phenomenology of Perception. New York: Humanities Press.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar (Ph.D. dissertation, Cornell University). Ann Arbor, MI: UMI.
Mittelberg, Irene 2008. Peircean semiotics meets conceptual metaphor: Iconic modes in gestural representations of grammar. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 115–154. Amsterdam: John Benjamins.
Mittelberg, Irene 2010. Geometric and image-schematic patterns in gesture space. In: Vyvyan Evans and Paul Chilton (eds.), Language, Cognition, and Space: The State of the Art and New Directions, 351–385. London: Equinox.
Mittelberg, Irene 2012. Ars memorativa, Architektur und Grammatik: Denkfiguren und Raumstrukturen in Merkbildern und spontanen Gesten. In: Thomas Schmitz and Hannah Groninger (eds.), Werkzeug/Denkzeug: Manuelle Intelligenz und Transmedialität kreativer Prozesse, 191–221. Bielefeld: Transcript Verlag.
Mittelberg, Irene 2013. Balancing acts: Image schemas and force dynamics as experiential essence in pictures by Paul Klee and their gestural enactments. In: Mike Borkent, Barbara Dancygier and Jennifer Hinnell (eds.), Language and the Creative Mind, 325–346. Stanford: Center for the Study of Language and Information.
Mittelberg, Irene volume 1. The exbodied mind: Cognitive-semiotic principles as motivating forces in gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 755–784. Berlin/Boston: De Gruyter Mouton.
Mittelberg, Irene and Vito Evola this volume. Iconic and representational gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1732–1746. Berlin/Boston: De Gruyter Mouton.
Mittelberg, Irene, Thomas H. Schmitz and Hannah Groninger in press. Operative Manufakte: Gesten als unmittelbare Skizzen in frühen Stadien des Entwurfsprozesses. In: Inge Hinterwaldner and Sabine Ammon (eds.), Bildlichkeit im Zeitalter der Modellierung: Operative Artefakte in Entwurfsprozessen der Architektur und des Ingenieurwesens. München: Fink.
Mittelberg, Irene and Linda R. Waugh 2009. Metonymy first, metaphor second: A cognitive-semiotic approach to multimodal figures of thought in co-speech gesture. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 329–356. Berlin: Mouton de Gruyter.
Mittelberg, Irene and Linda R. Waugh this volume. Gestures and metonymy. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1747–1766. Berlin/Boston: De Gruyter Mouton.
Murphy, Keith M. 2005. Collaborative imagining: The interactive use of gestures, talk, and graphic representation in architectural practice. Semiotica 156: 113–145.
Müller, Cornelia 1998a. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 1998b. Iconicity and gesture. In: Serge Santi et al. (eds.), Oralité et gestualité: Communication multimodale et interaction, 321–328. Montréal/Paris: L'Harmattan.
Müller, Cornelia 2004. Forms and uses of the palm up open hand: A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gesture, 233–256. Berlin: Weidler Verlag.
Müller, Cornelia 2010. Wie Gesten bedeuten: Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.

Müller, Cornelia and Alan Cienki 2009. Words, gestures and beyond: Forms of multimodal metaphor in the use of spoken language. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 297–328. Berlin: Mouton de Gruyter.
Müller, Cornelia and Susanne Tag 2012. The dynamics of metaphor: Foregrounding and activating metaphoricity in conversational interaction. Cognitive Semiotics 10(6): 85–120.
Parrill, Fey 2009. Dual viewpoint in gesture. Gesture 9(3): 271–289.
Parrill, Fey and Eve E. Sweetser 2004. What we mean by meaning: Conceptual integration in gesture analysis and transcription. Gesture 4(2): 197–219.
Peirce, Charles Sanders 1955. Logic as semiotic: The theory of signs (1893–1920). In: Justus Buchler (ed.), Philosophical Writings of Peirce, 98–119. New York: Dover.
Peirce, Charles Sanders 1960. Collected Papers of Charles Sanders Peirce (1931–1958). Vol. I: Principles of Philosophy; Vol. II: Elements of Logic. Charles Hartshorne and Paul Weiss (eds.). Cambridge: The Belknap Press of Harvard University Press.
Perniss, Pamela, Robin Thompson and Gabriella Vigliocco 2010. Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology 1: 1–15.
Pizzuto, Elena and Virginia Volterra 2000. Iconicity and transparency in sign language: A crosslinguistic cross-cultural view. In: Karen Emmorey and Harlan Lane (eds.), Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima, 261–286. Mahwah, NJ: Lawrence Erlbaum Associates.
Rosch, Eleanor 1977. Human categorization. In: Neil Warren (ed.), Advances in Cross-Cultural Psychology. London: Academic Press.
Shapiro, Michael 1983. The Sense of Grammar: Language as Semeiotic. Bloomington: Indiana University Press.
Simone, Raffaele (ed.) 1995. Iconicity in Language. Amsterdam: John Benjamins.
Sonesson, Göran 2007. The extensions of man revisited: From primary to tertiary embodiment. In: John M. Krois, Mats Rosengren, Angela Steidele and Dirk Westerkamp (eds.), Embodiment in Cognition and Culture, 27–53. Amsterdam: John Benjamins.
Sonesson, Göran 2008. Prolegomena to a general theory of iconicity: Considerations of language, gesture, and pictures. In: Klaas Willems and Ludovic De Cuypere (eds.), Naturalness and Iconicity in Language, 47–72. Amsterdam: John Benjamins.
Stjernfelt, Frederik 2007. Diagrammatology: An Investigation on the Borderlines of Phenomenology, Ontology and Semiotics. Dordrecht: Springer.
Streeck, Jürgen 2008. Depicting by gesture. Gesture 8(3): 285–301.
Streeck, Jürgen 2009. Gesturecraft: The Manu-Facture of Meaning. Amsterdam: John Benjamins.
Streeck, Jürgen, Charles Goodwin and Curtis D. LeBaron (eds.) 2011. Embodied Interaction: Language and the Body in the Material World. New York: Cambridge University Press.
Sweetser, Eve E. 1998. Regular metaphoricity in gesture: Bodily-based models of speech interaction. Actes du 16e Congrès International des Linguistes (CD-ROM). Elsevier.
Sweetser, Eve E. 2009. What does it mean to compare language and gesture? Modalities and contrasts. In: Jiansheng Guo, Elena Lieven, Nancy Budwig, Susan Ervin-Tripp, Keiko Nakamura and Seyda Özcaliskan (eds.), Crosslinguistic Approaches to the Psychology of Language: Studies in the Tradition of Dan Isaac Slobin, 357–366. New York: Psychology Press.
Sweetser, Eve E. 2012. Viewpoint and perspective in language and gesture. Introduction to Barbara Dancygier and Eve Sweetser (eds.), Viewpoint in Language: A Multimodal Perspective, 1–22. Cambridge: Cambridge University Press.
Taub, Sarah 2001. Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Waugh, Linda R. 1992. Presidential address: Let's take the con out of iconicity: Constraints on iconicity in the lexicon. American Journal of Semiotics 9: 7–48.
Waugh, Linda R. 1993. Against arbitrariness: Imitation and motivation revived, with consequences for textual meaning. Diacritics 23(2): 71–87.


Wilcox, Sherman 2004. Conceptual spaces and embodied actions: Cognitive iconicity and signed languages. Cognitive Linguistics 15(2): 119–147.
Wilcox, Sherman in press. Signed languages. In: Dagmar Divjak and Ewa Dąbrowska (eds.), Handbook of Cognitive Linguistics. Berlin: Mouton de Gruyter.
Wittgenstein, Ludwig 1953. Philosophical Investigations. Oxford: Blackwell.
Zlatev, Jordan 2005. What's in a schema? Bodily mimesis and the grounding of language. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 313–342. Berlin: Mouton de Gruyter.

Irene Mittelberg, Aachen (Germany)

131. Iconic and representational gestures

1. Introduction
2. Iconic gestures, dimensions, and patterns
3. Mimicry: Intersubjective alignment and understanding
4. Representational and referential gestures
5. Concluding remarks
6. References

Abstract

The construct of iconic gestures, those gestures understood as sharing certain form features with the object, action, or scene they represent, has traditionally proven to be a useful tool for scholars to classify this subset of gestures, distinguishing them from other types such as indexical or emblematic gestures. More recent approaches prefer to avoid discrete categories and rather speak in terms of dimensions or principles, such as iconicity or indexicality, in order to highlight the fact that gestures tend to perform multiple functions at once. Iconic co-speech gestures are semiotically conditioned not only by the particular language spoken, but also by the pragmatics of situated, multimodal language use, thus being cognitively, intersubjectively, and socio-culturally motivated. Iconic patterns of gesture production identified within individual languages as well as across various languages and language families have provided valuable insights into the intimate interrelation of thought, gesture, and speech in face-to-face interaction as well as in other kinds of multimodal communication. This chapter reviews both production- and comprehension-oriented research on iconic gestures, including examples from cross-cultural, clinical, and forensic studies. Ways in which iconic gestures pertain to related terms, such as representational and referential gestures, are also addressed.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 1732–1746

1. Introduction

Iconicity, in broader terms, is understood as the relationship between a sign and an object, in which the form the sign takes is perceived and interpreted to be similar in


some way to the object it is representing (Peirce 1960). Because representation tends to be partial, iconicity interacts with the principles of metonymy (e.g., a part stands for a whole; see Mittelberg and Waugh this volume). Since gesture is characterized by the extraordinary affordance of spatially and dynamically encoding visual information and kinetic action, gesture researchers have asked whether people produce iconic gestures (i) based on the visual information they have available, (ii) motivated by the particular language they co-gesture with, or (iii) through a combination of the two. Moreover, because the driving principle of iconicity is generally assumed to be based on similarity (and not conventionality), the role that social and individual practices play in the creation and use of these semiotic forms can be easily misunderstood. Semiotic foundations of iconicity are discussed in detail in Mittelberg this volume (see also, e.g., Andrén 2010; Fricke 2012; Lücking 2013; Sonesson 2008). This chapter will focus on (predominantly) iconic gestures, both from a production and a comprehension perspective, as they are linked to speech and to social practices, and on how they pertain to related terms, i.e., representational and referential gestures, in the literature.

2. Iconic gestures, dimensions, and patterns

Among the established gesture typologies, the one most strongly associated with the notion of iconic gestures is the one proposed by McNeill and Levy (1982) and then extended by McNeill (1992: 12). According to this Peirce-inspired taxonomy, iconics encompass gestures illustrating aspects of what is conveyed in speech through actional and visuo-spatial imagery primarily based on memories and other kinds of mental representations. Iconic gestures imply a correspondence between the form a gesture takes, e.g., a body posture, hand shape, and/or the trajectory and manner of a hand movement, and the person, concrete object, action, or motion event it depicts. Put differently, "in an iconic gesture there is a certain degree of isomorphism between the shape of the gesture and the entity that is expressed by the gesture" (Kita 2000: 162). Iconics also reflect the viewpoint from which the speaker portrays a scene, e.g., character or observer viewpoint.

Metaphorics are related to iconics in that they are "pictorial, but the pictorial content presents an abstract idea rather than a concrete object or event" (McNeill 1992: 14). Both of these types of representational gestures tend to be produced in a more central gesture space, which arguably accentuates their relatedness, as opposed to others, for example, indexical gestures, which are produced more peripherally (McNeill 1992: 89–94). Examples of the kinds of abstract ideas referred to here are "knowledge, language itself, the genre of the narrative, etc." (McNeill 1992: 80).

In a well-known example of an iconic gesture, a speaker describes a scene from the cartoon "The Canary Row" (McNeill 1992: 12). When saying he grabs a big oak tree and he bends it way back, the speaker simultaneously performs with his right arm and hand a grabbing and pulling action backward.
According to McNeill (2005: 6⫺7), “the gesture has clear iconicity ⫺ the movement and the handgrip; also a locus (starting high and ending low) ⫺ all creating imagery that is analogous to the event being described in speech at the same time”. As becomes apparent in this quote, in his more recent work McNeill (2005: 41⫺43) has


moved away from the original concept of categories (i.e. iconics, deictics, metaphorics, beats, and cohesives) now preferring to reason in terms of dimensions such as iconicity, indexicality, and metaphoricity (see also Duncan, McNeill, and McCullough 1995).

2.1. Production of iconic gestures

Exploring patterns in the production of iconic gestures has allowed valuable insights into the intimate interrelations of thought, gesture, and speech, positing "gesture and the spoken utterance as different sides of a single underlying mental process" (McNeill 1992: 1). Iconic gestures have been shown to enhance both speaking and thinking, in particular analytical and spatio-motoric thinking (e.g., Kita 2000). In the Vygotsky-inspired concept of the growth point, gestural imagery also plays an important role, "since it grounds sequential linguistic categories in an instantaneous visuospatial context" (McNeill 2005: 115).

Fig. 131.1: Enactment of figure’s posture

Fig. 131.2: Paul Klee, Dance of a Mourning Child (1922)

Not unlike pointing and other indexical gestures, iconic gestures can be produced to fill a semantic gap in speech, especially when representing spatial imagery like size, shape, motion, or other schematic, partial images which take advantage of the affordances of gestures versus speech. For example, the participant shown in Fig. 131.1 ekphrastically describes a painting by Paul Klee (Dance of a Mourning Child, 1922, Fig. 131.2; adapted from Mittelberg 2013) through a full-body enactment of the figure's stance, including the tilted head and arm configuration, as well as the position of the legs and the eye gaze directed downward. She also evokes the flowing skirt by repeated manual up-and-down movements around her hips and upper legs. Keeping character viewpoint all through the sequence, she extends her arms to the side while saying: her head was turned to this side … if I were mirroring what she was doing … and her arms were like this …. Then, on and her mouth was almost in the shape of a heart, she draws an icon of the figure's heart-shaped mouth onto her own lips and two lines imitating its eye slits onto her own eyes (I kept trying to see if her eyes were open or closed, and it looked like they were just slits). Although in the speech modality she is using a deictic in "like this", bringing the listener's attention to her body and gestures, the full meaning of the speech-gesture utterance is


understood via her iconic bodily gestures, an image clearly easier to produce via gesture than via speech (see Fricke 2007 and Streeck 2009: 108–118 on gestures performing attributive and adverbial functions).

A host of studies involving typologically different languages have provided ample insights into how processes of thinking for speaking (Slobin 1996) may on a moment-to-moment basis shape not only the linguistic, i.e. lexical and grammatical, encoding of motion events, but also the tightly interwoven gestural imagery, especially exhibiting manner and/or path information. Initial observations put into relief a "high degree of cross-linguistic similarity" in gestures about the same content and "accompanying linguistic segments of an equivalent type, in spite of major lexical and grammatical differences between the languages" (McNeill 1992: 212). Subsequent research has produced converging "evidence for language specificity of representational gestures" (Kita 2000: 167; see, e.g., Kita and Özyürek 2007; McNeill and Duncan 2000; Müller 1998a; Özyürek et al. 2005). For example, Kita and Özyürek (2007) found that speakers of satellite-framed languages (e.g., Germanic and Slavic languages, which encode path separately from the main motion verb) were more likely to iconically represent path and manner of motion actions conflated in the same gestures, as opposed to speakers of verb-framed languages (e.g., Romance and Semitic languages, where path is expressed in the main verb and manner of motion is conveyed by other means), who tend to portray manner and path in separate gestures. Following this line of research, these gestures appear to be motivated by the iconicity of the linguistic-conceptual representation and not of the visual-spatial imagery. However, as Duncan (2002: 204) points out regarding gestural imagery and verb aspect, this claim "is not the same as saying that gestures merely mirror linguistically codified aspect contrasts.
Rather, different verb aspects appear expressive of fundamental distinctions in the ways we can 'cognize' an event during acts of speaking" (see also Cienki in press on representational gestures' connection to grammar).

Besides single gestures exhibiting a structural resemblance with the entities or actions they portray, and supporting the argument that gestures are semiotically linked not only to the language used but to language use, there are also discourse-internal iconic patterns, e.g., so-called catchments: "[a] catchment is recognized from a recurrence of gesture features over a stretch of discourse. It is a kind of thread of consistent visuo-spatial imagery running through a discourse segment that provides a gesture-based window into discourse cohesion" (McNeill 2000: 316; the notion is inspired by Kendon's (1972) idea of locution clusters). Hence, iconicity here pertains to how gestures resemble (in part) other, preceding gestures in the semiotic neighborhood (see also Jakobson 1960 on the principle of equivalence and the poetic function in language).

A common denominator for this research strand, which has resulted in a wealth of iconic gestures, is the employed semi-experimental method of data elicitation: Participants are asked to retell the aforementioned animated cartoon "The Canary Row", in which the protagonists Tweety Bird and Sylvester undergo all kinds of adventures while chasing each other around town. This particular kind of stimulus, consisting of two-dimensional cartoon action movies with numerous motion events unfolding up, down, and along various kinds of spatial structures, is reflected in the iconic gestures produced by a large and diverse group of study participants. While this approach limits the range of gestures as well as the kind of iconic gestures (i.e., based on the cartoon medium) that might occur, it has the advantage, as opposed to more free-form and naturalistic

1736

VIII. Gesture and language

conversations for instance, that based on the stimulus material the gesture analyst is able to reconstruct scene by scene what the participants’ gestures are iconic of. This also allows researchers to compare gesture production patterns not only across speakers of a single language or across different languages, but also across different age and clinical groups. Investigations into language acquisition have revealed particular stages in cognitive and language development, including transition points and gesture-speech mismatches (e.g., Goldin-Meadow 2003; McNeill 2005; McNeill and Duncan 2000). Generally, work on aphasia and other communication disorders evidences their impact on forms and functions of iconic gestures and also provide a window onto the workings of the non-disturbed multimodal language system (e.g., Caldognetto, Magno, and Poggi 1995; Cocks et al. 2011; Cocks et al. 2013; Duncan and Pedelty 2007; Goodwin 2011; Hogrefe et al. 2012; McNeill 2005). The large body of work reviewed above has presented ample evidence that iconic gestures are cognitively and communicatively extremely versatile, fulfilling a broad range of functions that go well beyond facilitating lexical retrieval during word-searching processes (e.g., Hadar and Butterworth 1997; see Krauss, Chen, and Gottesman 2000: 263 on the category of lexical gestures).

2.2. Comprehension of iconic gestures

Taking the perspective of gesture comprehension, a body of research has shown the communicative significance of iconic gestures, that is, their contribution to the addressee’s understanding of what the speaker is conveying multimodally. In an intercultural study, Calbris (1990) explores how the iconic and cultural facets of a set of French gestures, ranging from highly motivated examples to others implying a cultural cliché, were interpreted by a group of Hungarian and a group of Japanese speakers respectively. Some of what is called “cliché” here compares to the kinds of culturally defined gestures now known as emblems (McNeill 1992). In reference to Saussure’s (1986) notion of arbitrariness, the author stresses the point that “gestures are not arbitrary signs, but conventional and motivated (Fónagy 1956, 1961–1962)” (Calbris 1990: 38). The more conventional gestures, such as the cliché Ceinture, evoked by a transverse line drawn at waist level to indicate privation, were not understood equally well by the two groups: The Hungarians were better at guessing and reconstructing their meaning than the Japanese. More universal motivations appear to facilitate intercultural comprehension, as in a gesture consisting of a hand placed on the belly expressing, in conjunction with a corresponding facial expression, disgust or nausea (Calbris 1990: 39). It is concluded that “[l]ess linked to a cliché, less symbolic, less polyvalent, motivation seems to be all the more natural and transparent as it approaches depiction, or simple reproduction of movement. It seems all the more direct as it is narrowly linked with what is concrete” (Calbris 1990: 40; see also Andrén (2010) and Bouvet (1997, 2001) on the transparency of iconic gestures and signs). 
In a series of experimental studies investigating the communicative functions of co-speech gestures, Beattie and Shovelton (1999) found, for instance, that participants who had listened to retellings of a cartoon story gave a summary that was about ten percent more accurate if they had also seen the iconic gestures accompanying the verbal retellings. In a study focusing on gestures presented without speech (Beattie and Shovelton 2002), a correlation was found between the viewpoint with which a scene was portrayed multimodally and the communicative effectiveness of the gestures. Gestures produced from character viewpoint were more informative than those embodying observer viewpoint (see, e.g., Dancygier and Sweetser 2012; McNeill 1992). Looking at the interaction of speech and gesture in the communication of specific semantic features, it was further demonstrated that character viewpoint gestures were more communicative when conveying features pertaining to relative position, and character viewpoint gestures were more effective in conveying speed and shape features (Beattie and Shovelton 2001). Moreover, it was suggested that the effectiveness of TV advertisements may be increased by integrating spontaneous gestures, considering their temporal and semantic properties (Beattie and Shovelton 2007; see also Beattie 2003). Studies on iconic gestures and speech integration in aphasics have shown that if comprehension is impaired on the verbal level, gestures are more heavily relied upon to decode messages. In addition, aphasia may have a disturbing effect on the multimodal integration of information presented in speech and iconic gestures (Cocks et al. 2009). Eye-movement studies are a way to find out what kinds of gestures addressees tend to notice more than others, and what they note about them. Gullberg (2003) found that listener-observers pay particular attention to gestures representing objects or actions and that the attentive direction of the participants’ eye-gaze on the gestures had both cognitive and social motivations (see also Gullberg and Holmqvist 2006; Gullberg and Kita 2009). With regard to gestures in the field of forensics, Evola and Casonato (2012) have suggested that legal transcripts of interviews and interrogations can be compromised by not taking into account the gestures produced (both by the interviewer and the interviewee) in the interrogation setting. Indeed, gestures are not usually transcribed in deposition transcripts. 
Iconic gestures in particular (for example, ones produced during statements of physical description), if properly interpreted, are useful in forensic and psychological evaluations, in that they may reveal extra information not encoded in speech; however, ultimately this information often goes unnoticed or unrecorded in the legal deposition. Moreover, children being interviewed may tend to prefer gesturing over verbalizing, especially with regard to taboo topics. In one instance, for example, a pre-teen girl being interviewed in an alleged child molestation case is asked by the adult interviewer to describe “what she felt”. Upon insistent questioning, the girl evades the question and repeatedly touches her forehead with her straight index finger for almost four minutes before verbally admitting she felt “a big finger” against her head. By paying more attention to the interviewee’s gestures, especially iconic ones, the authors suggest, “hidden” information is revealed and the child’s own way of communicating is respected.

3. Mimicry: Intersubjective alignment and understanding

A kind of socially oriented, intersubjective iconicity in co-speech gestures may reside in the ways in which speakers interpret and partly imitate the gestural behavior of their interlocutors (e.g., McNeill 2005: 160–162; see also Calbris 1990: 104–153 on the motivated, conventional, and cultural aspects of mimetic gestures and Müller 2010a on the notion of mimesis as applied to gesture). Kimbara defines gestural mimicry as the “recurrence of the same or similar gestures across speakers” and “as an instance of jointly constructed meaning” (2006: 42). Gesture, like speech, contributes both form and meaning as shared cognitive and semiotic resources on the basis of which co-participants build up common ground and unify cultural patterns (Clark 1996). Gestural mimicry is not an automatic or exact duplication of an interlocutor’s behavior, but a collaboratively achieved “representational action mediated by meaning” (Kimbara 2006: 58), reinforcing one’s identity of inclusion or exclusion within a social and cultural setting. In the process, the recurrence of particular gesture form features may “make salient […] those aspects of what is being talked about, and […] influence the way in which the interlocutor comes to represent and so to conceive of the same referent” (Kimbara 2006: 58; see also Evola 2010; Parrill and Kimbara 2006). A study by Mol, Krahmer, and Swerts (2009: 4) investigates whether speakers mimic gestures of their interlocutors that are inconsistent with the accompanying speech (evidence for a “perception-behavior link”) or those consistent with the representations in speech (evidence for “linguistic alignment”). Results show that almost exclusively gestures that matched the concurrent speech were repeated. Moreover, participants who had seen inconsistent gestures performed fewer gestures overall. This indicates that “the copying of a gesture’s form is more likely a case of convergence in linguistic behavior (alignment) than a general instance of physical mimicry” (Mol, Krahmer, and Swerts 2009: 7). Comparing gestural mimicry in face-to-face situations to situations with an invisible interlocutor, Holler and Wilkin (2011) not only consider shared formal and semantic features as criteria for gestural mimicry, but also the use of the same mode of representation. The authors further posit three functions of mimicked gestures: “presentation, acceptance, and displaying incremental understanding” (Holler and Wilkin 2011: 141). 
They conclude that mimicked gestures assume crucial functions in the incremental creation of mutually shared understanding and “are both part of the common ground interactants accrue, as well as part of the very process by which they do so” (Holler and Wilkin 2011: 148; see also Bergmann and Kopp 2010 and Kopp, Bergmann, and Wachsmuth 2008 on questions of alignment and iconic gestures from the perspective of computer modeling).

4. Representational and referential gestures

Gesture scholars have proposed various other terms to capture as well as highlight certain nuances of the kinds of semiotic processes referred to as iconic gestures above. Here, a selective overview of some of the prominent accounts will be provided in chronological order, not laying out the complete taxonomies, but focusing on underlying questions of iconicity and representation instead. Early on, Wundt (1921) divided referential gestures into two different kinds: a) gestures imitating an object or concept, or gestures mimicking an action, for instance by drawing with the index finger its contours in the air or by evoking through a specific hand configuration the plasticity of its characteristic shape (e.g., a cupped hand imitating a small bowl); and b) connotative gestures that pick out a characteristic feature to refer to the object or action in its entirety. While both of these processes imply partial representation and thus metonymy (Mittelberg and Waugh this volume), it is Wundt’s category of symbolic gestures in which figurative aspects and especially metaphor come to the fore (see also Wundt 1973). Efron ([1941] 1972: 96) distinguished between several types of object-related gestural behaviors, only some of which have the capacity for pictorial, physiographic representation: “depicting either the form of a visual object or a spatial relationship (iconographic gesture), or that of a bodily action (kinetographic gesture)”. Hence, a difference is made between what could also be called object images and bodily motor images; Efron assigns the function of a true icon only to the iconographic type. Building on Efron (1972), Ekman and Friesen (1969) attribute considerable importance to the idea of representation. In their classification of nonverbal behaviors, they distinguish, inter alia, gestural acts that stand, either iconically or arbitrarily, for something else (i.e. extrinsically coded acts) from those being significant in and of themselves (i.e. intrinsically coded acts). Among the various subtypes of speech-accompanying illustrators, three may fulfill iconic functions (i.e. are extrinsically coded): pictographs, spatial illustrators, and kinetographs; however, only the first type is always iconic: “Pictographs (…) are iconic because by definition a picture must resemble but cannot be its significant.” (Ekman and Friesen 1969: 77; see also Fricke 2007; Kendon 2004; Müller 1998a for overviews). In her work on mimetic representations, Calbris (1990: 104–115) demonstrates that regardless of the motivated, i.e. iconic, nature of mimetic gestures, they are always also conventional in the sense that they portray cognitive schemata or cultural practices. That is, there are cultural differences in which features of a reference object or scene are selected and encoded for gestural representation, and in how exactly one imitates an everyday action involving objects, such as picking up the phone or raising a glass. The author also draws attention to the schematicity of such gestures afforded by the “powers of abstraction. […] Even in evoking a concrete situation, a gesture does not reproduce the concrete action, but the idea abstracted from the concrete reality” (Calbris 1990: 115). Motivation may also manifest itself in the form of analogous relationships between the meaning (the signified) and the gesture (the signifier) through isomorphism (see also Fricke 2012; Lücking 2013; Mittelberg 2006, this volume). 
In her functional classification system, Müller (1998a: 89–90) draws on Bühler’s ([1934] 1982) model of communication with its three functions: expressive, referential, and appellative. Müller (1998a: 110–113) accounts for predominantly referential gestures by making a distinction between those that refer to concrete reference objects, such as physical entities, behaviors, and events, and those that refer to abstracta such as timelines or financial transactions. She further stresses the fact that the same kind of gesture, e.g., a tracing gesture outlining a rectangular-shaped structure, may, depending on the concurrent speech content, refer to a physical picture frame or a theoretical framework (see also Müller 2010a; Müller and Cienki 2009). Whether relating to concrete or abstract reference objects and actions, referential gestures involve abstraction of relevant aspects or a general idea. In addition, hand movements may shape and create referential gestures in different ways, thus bringing about iconicity in gestural gestalts. To account for this, Müller (1998a: 114–126; 1998b: 323–327) introduced four modes of representation in gesture: drawing (e.g., tracing the outlines of a picture frame), molding (e.g., sculpting the form of a crown), acting (e.g., pretending to open a window), and representing (e.g., a flat open hand with the palm turned up stands for a piece of paper). According to Kendon (2004: 160), referential gestures may point to what the utterance is about or represent certain aspects of the propositional content of an utterance. In the group of manual actions that serve purposes of representation, Kendon (2009: 8) distinguishes between two distinct uses. First, there are uses of manual action that “provide dynamic movement information about the properties of objects or actions the speaker is talking about”. 
These may fulfill an adverbial or adjectival function, communicating aspects of the manner of an action or the shape or relative dimensions of a given object (see Fig. 131.1). Second, manual actions may suggest the presence of concrete objects, e.g., by placing the items being talked about in space or highlighting aspects of their relationships (comparable to diagrams or drawings); these uses do not add anything to the propositional content of the utterance. In addition, Kendon (2004) distinguishes different techniques of representation, namely modeling a body part to stand for something else, enacting certain features of an action pattern, or depicting objects in the air through movements recognized as sketching or sculpting the shape of something (see also Streeck 2008, 2009 on depicting by gesture; see also Mittelberg this volume). While responding to the need to categorize gestures for the purpose of analysis, many gesture scholars have come to realize that working with categories, even if they are not seen as absolute, poses problems in light of the dynamic, polysemous, and multifunctional nature of gestural forms (cf. Kendon 2004; McNeill 2005; Müller 2010b). This implies, in principle, that there are no iconic or metaphoric gestures as such, but that these semiotic principles interact to a certain degree in a given gestural sign. One needs to establish, in conjunction with the concurrent speech and other contextual factors, which one actually determines its local function. The latter understanding expresses, in alignment with Peirce (1960) and Jakobson (1987), a hierarchical view on processes of association and signification (e.g., Mittelberg 2008, volume 1; Mittelberg and Waugh 2009, this volume). One should also keep in mind that motivated iconic signs tend to involve habit and conventionality and could not unfold their meaning if not understood as indexically embedded in utterance formation, performances, and intricate structures of embodied interaction with the material and cultural world (e.g., Calbris 1990, 2011; Streeck, Goodwin, and LeBaron 2011; Sweetser 2012).

5. Concluding remarks

In light of the research reviewed in this chapter, it is useful to note that the terms iconic gestures and iconics come from traditional semiotics and as such have their own specific, and at times complex, meaning in the history of ideas (for details see Mittelberg this volume; see also Jakobson 1966; Jakobson and Waugh 2002; Sonesson 2008). The subjective and interpretative nature of iconic gestures, as with icons in other modalities, is important to keep in mind in order to understand that, when attempting to classify gestures as such, there can be ambiguity. As to their interpretative aspect, gestures can be classified as iconics, or as predominantly iconic bodily signs, when there is some form of similarity between the sign-vehicle (the gesture) and the object, a similarity which can be seized by and is salient to an interpretant. The examples of cultural iconics provided in this chapter, although socially motivated, raise the question of the extent to which such gestures are habitual, conventional, or conventionalized, albeit based on similarity. The way people use their language(s) in their diverse settings motivates their thinking for speaking and for gesturing (e.g., Cienki and Müller 2008; McNeill 2005; Slobin 1996). As evidenced by the work discussed above, bodily iconic signs metonymically foreground certain aspects of an object, an idea, or another gesture and translate them cognitively and gesturally in a way that may enhance both the speaker’s own understanding of what s/he is trying to convey as well as the interlocutor’s interpretation. Iconic kinetic action features, often consisting only of minimal motion onsets or schematic images furtively traced in the air, thus help co-participants to arrive at shared understandings in dynamically evolving “contextures of action” (Goodwin 2011: 182; see also Enfield 2009, 2011). In this fashion, different meanings expressed in diverse sign systems (e.g. 
speech, art, a scene on the street) become multimodally comprehensible and mutually interpreting, also allowing people to refer to something outside their own gestural system (cf. Mannheim 1999).


Although iconic gestures occupy a contested place in the literature, the following intramodal, intermodal, cross-modal, interpersonal, and intertextual iconic relations and patterns can be distinguished: iconic relations between (i) an individual gestural sign carrier and what it evokes or represents (e.g., iconic gestures, representational gestures); (ii) gestures and the concurrent speech content as well as prosodic contours; (iii) the gestural behavior of interlocutors (e.g., mimicry [Kimbara 2006]); as well as iconic patterns emerging from gestural forms recurring (iv) within the same discourse (e.g., catchments [McNeill and Duncan 2000] or locution clusters [Kendon 1972]); (v) across discourses and speakers (e.g., recurrent gestures [Bressem volume 1; Ladewig 2011, this volume; Müller 2010b] and geometric and image-schematic patterns in gesture space [Cienki 2005; Mittelberg 2010, 2013]); (vi) across different languages (see section 2); and (vii) across different age groups, clinical groups, social groups, or cultures (see section 2). It seems worthwhile to bring into the picture additional theoretical approaches that might account for certain properties and functions of gestures more effectively than similarity and iconicity can (see Fricke 2012; Lücking 2013; Streeck 2009). Contiguity relations between the communicating body and its material and social habitat also play an important role in sensing and interpreting the meaning of bodily signs, which most of the time subsume iconic and indexical (and symbolic) functions (Peirce 1960; see Mittelberg and Waugh this volume). It seems crucial to further examine how gestural icons are indexically conditioned by their semiotic, material, and social environment, thus revealing their subjective and intersubjective dimensions (e.g., Haviland 1993; Streeck, Goodwin, and LeBaron 2011; Sweetser 2012). 
One way to do this is to continue comparative semiotic studies investigating the interplay of iconic and other semiotic principles interacting in both co-speech gesture and signed languages, to arrive at a fuller understanding of the cognitive, physical, pragmatic, and socio-cultural forces that drive processes of conventionalization and grammaticalization in bodily signs and their grounded, richly contextualized usage (e.g., Andrén 2012; Goldin-Meadow 2003; Grote and Linz 2003; Kendon 2004; Perniss, Thompson, and Vigliocco 2010; Sweetser 2009; Wilcox in press; Zlatev 2005). Considering the interdependent factors of communicative human action and interaction addressed throughout this chapter, the following observations Lévi-Strauss made decades ago may serve as an interim conclusion, since they not only provide historical anchorage, but may also inspire future work:

[B]oth the natural and the human sciences concur to dismiss an out-moded philosophical dualism. Ideal and real, abstract and concrete, ‘emic’ and ‘etic’ can no longer be opposed to each other. What is immediately ‘given’ to us is neither the one nor the other, but something which is betwixt and between, that is, already encoded by the sense organs as by the brain. (Lévi-Strauss [1972] cited in Jakobson and Waugh 2002: 51)

Acknowledgements

The preparation of this article was supported by the Excellence Initiative of the German State and Federal Governments and the Bonn-Aachen International Center for Information Technology (B-IT).


6. References

Andrén, Mats 2010. Children’s Gestures from 18 to 30 Months. Lund: Centre for Languages and Literatures, Lund University.
Beattie, Geoffrey 2003. Visible Thought: The New Psychology of Body Language. London: Routledge.
Beattie, Geoffrey and Heather Shovelton 1999. Mapping the range of information contained in the iconic hand gestures that accompany spontaneous speech. Journal of Language and Social Psychology 18(4): 438–462.
Beattie, Geoffrey and Heather Shovelton 2001. An experimental investigation of the role of different types of iconic gesture in communication: A semantic feature approach. Gesture 1(2): 129–149.
Beattie, Geoffrey and Heather Shovelton 2002. An experimental investigation of some properties of individual iconic gestures that affect their communicative power. British Journal of Psychology 93(2): 473–492.
Beattie, Geoffrey and Heather Shovelton 2007. The role of iconic gesture in semantic communication and its theoretical and practical implications. In: Susan D. Duncan, Justine Cassell and Elena T. Levy (eds.), Gesture and the Dynamic Dimension of Language, 221–241. Amsterdam: John Benjamins.
Bergmann, Kirsten and Stefan Kopp 2010. Systematicity and idiosyncrasy in iconic gesture use: Empirical analysis and computational modeling. In: Stefan Kopp and Ipke Wachsmuth (eds.), Gesture in Embodied Communication and Human-Computer Interaction, 182–194. Berlin: Springer.
Bouvet, Danielle 1997. Le Corps et la Métaphore dans les Langues Gestuelles: A la Recherche des Modes de Production des Signes. Paris: L’Harmattan.
Bouvet, Danielle 2001. La Dimension Corporelle de la Parole. Les Marques Posturo-Mimo-Gestuelles de la Parole, leurs Aspects Métonymiques et Métaphoriques, et leur Rôle au Cours d’un Récit. Paris: Peeters.
Bressem, Jana volume 1. A linguistic perspective on the notation of form features in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. 
Ladewig, David McNeill and Sedinha Tessendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1079–1098. Berlin/Boston: De Gruyter Mouton.
Bühler, Karl 1982. Sprachtheorie. Die Darstellungsfunktion der Sprache. Stuttgart/New York: Fischer. First published [1934].
Calbris, Geneviève 1990. The Semiotics of French Gesture: Advances in Semiotics. Bloomington: Indiana University Press.
Calbris, Geneviève 2011. Elements of Meaning in Gesture. Amsterdam: John Benjamins.
Caldognetto, Emanuela Magno and Isabella Poggi 1995. Creative iconic gestures: Some evidence from aphasics. In: Rafaele Simone (ed.), Iconicity in Language, 257–276. Amsterdam: John Benjamins.
Cienki, Alan 2005. Image schemas and gesture. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 421–442. Berlin: Mouton de Gruyter.
Cienki, Alan 2013. Gesture, space, grammar, and cognition. In: Peter Auer, Martin Hilpert, Anja Stukenbrock and Benedikt Szmrecsanyi (eds.), Space in Language and Linguistics: Geographical, Interactional, and Cognitive Perspectives, 667–686. Berlin: Mouton de Gruyter.
Cienki, Alan and Cornelia Müller 2008. Metaphor, gesture, and thought. In: Raymond W. Gibbs, Jr. (ed.), The Cambridge Handbook of Metaphor and Thought, 483–501. Cambridge: Cambridge University Press.
Clark, Herbert H. 1996. Using Language. Cambridge: Cambridge University Press.
Cocks, Naomi, Lucy Dipper, Ruth Middleton and Gary Morgan 2011. What can iconic gestures tell us about the language system? A case of conduction aphasia. International Journal of Language and Communication Disorders 46(4): 423–436.


Cocks, Naomi, Lucy Dipper, Madeleine Pritchard and Gary Morgan 2013. The impact of impaired semantic knowledge on spontaneous iconic gesture production. Aphasiology 27(9): 1050–1069.
Cocks, Naomi, Laetitia Sautin, Sotaro Kita, Gary Morgan and Sally Zlotowitz 2009. Gesture and speech integration: An exploratory study of a man with aphasia. International Journal of Language and Communication Disorders 44(5): 795–804.
Dancygier, Barbara and Eve E. Sweetser (eds.) 2012. Viewpoint in Language: A Multimodal Perspective. Cambridge: Cambridge University Press.
Duncan, Susan D. 2002. Gesture, verb aspect, and the nature of iconic imagery in natural discourse. Gesture 2(2): 183–206.
Duncan, Susan D., David McNeill and Karl-Erik McCullough 1995. How to transcribe the invisible – and what we see. In: Daniel O’Connell, Sabine Kowal and Roland Posner (eds.), Zeichen für Zeit: Zur Notation und Transkription von Bewegungsabläufen (special issue of KODIKAS/CODE) 18, 75–94. Tübingen: Günter Narr.
Duncan, Susan D. and Laura Pedelty 2007. Discourse focus, gesture, and disfluent aphasia. In: Susan D. Duncan, Justine Cassell and Elena T. Levy (eds.), Gesture and the Dynamic Dimension of Language, 269–283. Amsterdam: John Benjamins.
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton and Co. First published [1941].
Ekman, Paul and Wallace Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98.
Enfield, N.J. 2009. The Anatomy of Meaning. Speech, Gestures, and Composite Utterances. Cambridge: Cambridge University Press.
Enfield, N.J. 2011. Elements of formulation. In: Jürgen Streeck, Charles Goodwin and Curtis LeBaron (eds.), Embodied Interaction: Language and the Body in the Material World, 59–66. Cambridge: Cambridge University Press.
Evola, Vito 2010. Multimodal cognitive semiotics of spiritual experiences: Beliefs and metaphors in words, gestures, and drawings. 
In: Fey Parrill, Vera Tobin and Mark Turner (eds.), Form, Meaning, and Body, 41–60. Stanford: CSLI Publications.
Evola, Vito and Marco Casonato 2012. Gesture studies on trial: Applying gesture studies to forensic interrogations and interviews. Presentation at the International Society for Gesture Studies (ISGS) 2012, Lund University, Sweden.
Fónagy, Iván 1956. Über die Eigenart des sprachlichen Zeichens. Lingua 6: 67–88.
Fónagy, Iván 1971. Le signe conventionnel motivé: Un débat millénaire. La Linguistique 7(2): 55–80.
Fricke, Ellen 2007. Origo, Geste und Raum – Lokaldeixis im Deutschen. Berlin/New York: de Gruyter.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin: Mouton de Gruyter.
Goldin-Meadow, Susan 2003. Hearing Gesture: How our Hands Help us Think. Cambridge, MA: Harvard University Press.
Goodwin, Charles 2011. Contextures of action. In: Jürgen Streeck, Charles Goodwin and Curtis LeBaron (eds.), Embodied Interaction: Language and the Body in the Material World, 182–193. Cambridge: Cambridge University Press.
Grote, Klaudia and Erika Linz 2003. The influence of sign language iconicity on semantic conceptualization. In: Wolfgang G. Müller and Olga Fischer (eds.), From Sign to Signing: Iconicity in Language and Literature 3, 23–40. Amsterdam: John Benjamins.
Gullberg, Marianne 2003. Eye movements and gestures in human interaction. In: Jukka Hyönä, Ralf Radach and Heiner Deubel (eds.), The Mind’s Eye: Cognitive and Applied Aspects of Eye Movements, 685–703. Oxford: Elsevier.
Gullberg, Marianne and Kenneth Holmqvist 2006. What speakers do and what listeners look at. Visual attention to gestures in human interaction live and on video. Pragmatics and Cognition 14(1): 53–82.


Gullberg, Marianne and Sotaro Kita 2009. Attention to speech-accompanying gestures: Eye movements and information uptake. Journal of Nonverbal Behavior 33(4): 251–277.
Hadar, Uri and Brian Butterworth 1997. Iconic gestures, imagery and word retrieval in speech. Semiotica 115(1/2): 147–172.
Haviland, John 1993. Anchoring, iconicity and orientation in Guugu Yimithirr pointing gestures. Journal of Linguistic Anthropology 3(1): 3–45.
Hogrefe, Katharina, Wolfram Ziegler, Nicole Weidinger and Georg Goldenberg 2012. Non-verbal communication in severe aphasia: Influence of aphasia, apraxia, or semantic processing? Cortex 48(8): 952–962.
Holler, Judith and Katie Wilkin 2011. Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior 35(2): 133–153.
Jakobson, Roman 1960. Linguistics and poetics. In: Krystyna Pomorska and Stephen Rudy (eds.), Roman Jakobson – Language in Literature, 62–94. Cambridge, MA: Harvard University Press.
Jakobson, Roman 1966. Quest for the essence of language. In: Linda R. Waugh and Monique Monville-Burston (eds.), Roman Jakobson: On Language, 407–421. Cambridge, MA: Harvard University Press.
Jakobson, Roman 1987. On the relation between auditory and visual signs. In: Krystyna Pomorska and Stephen Rudy (eds.), Roman Jakobson, Language in Literature, 467–473. Cambridge, MA: Harvard University Press.
Jakobson, Roman and Linda R. Waugh 2002. The Sound Shape of Language. Berlin/New York: Mouton de Gruyter, 3rd ed. First published [1979].
Kendon, Adam 1972. Some relationships between body motion and speech. An analysis of an example. In: Aaron Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 177–210. Elmsford, NY: Pergamon Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kendon, Adam 2009. Kinesic components of multimodal utterances. Berkeley Linguistics Society Proceedings. 
Berkeley, CA: Berkeley Linguistics Society.
Kimbara, Irene 2006. On gestural mimicry. Gesture 6(1): 39–61.
Kita, Sotaro 2000. How representational gestures help speaking. In: David McNeill (ed.), Language and Gesture, 162–185. Cambridge: Cambridge University Press.
Kita, Sotaro and Asli Özyürek 2003. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48(1): 16–32.
Kita, Sotaro and Asli Özyürek 2007. How does spoken language shape iconic gestures? In: Susan D. Duncan, Justine Cassell and Elena T. Levy (eds.), Gesture and the Dynamic Dimension of Language, 67–81. Amsterdam: John Benjamins.
Kopp, Stefan, Kirsten Bergmann and Ipke Wachsmuth 2008. Multimodal communication from multimodal thinking – towards an integrated model of speech and gesture production. International Journal of Computing 2(1): 115–136.
Krauss, Robert M., Yihsiu Chen and Rebecca Gottesman 2000. Lexical gestures and lexical access: A process model. In: David McNeill (ed.), Language and Gesture, 261–283. Cambridge: Cambridge University Press.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6.
Ladewig, Silva H. this volume. Recurrent gestures. In: Cornelia Müller, Ellen Fricke, Alan Cienki, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1558–1574. Berlin/Boston: De Gruyter Mouton.
Lücking, Andy 2013. Ikonische Gesten: Grundzüge einer linguistischen Theorie. Berlin: Mouton de Gruyter.
Mannheim, Bruce 1999. Iconicity. Journal of Linguistic Anthropology 9(1–2): 107–110.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: Chicago University Press.

McNeill, David (ed.) 2000. Language and Gesture. Cambridge: Cambridge University Press.
McNeill, David 2005. Gesture and Thought. Chicago: Chicago University Press.
McNeill, David and Susan D. Duncan 2000. Growth points in thinking-for-speaking. In: David McNeill (ed.), Language and Gesture, 141–161. Cambridge: Cambridge University Press.
McNeill, David and Elena Levy 1982. Conceptual representations in language activity and gesture. In: Robert J. Jarvella and Wolfgang Klein (eds.), Speech, Place, and Action, 271–296. Chichester: John Wiley and Sons.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. (Ph.D. dissertation, Cornell University). Ann Arbor, MI: UMI.
Mittelberg, Irene 2008. Peircean semiotics meets conceptual metaphor: Iconic modes in gestural representations of grammar. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 115–154. Amsterdam: John Benjamins.
Mittelberg, Irene 2010. Geometric and image-schematic patterns in gesture space. In: Vyvyan Evans and Paul Chilton (eds.), Language, Cognition, and Space: The State of the Art and New Directions, 351–385. London: Equinox.
Mittelberg, Irene 2013. Balancing acts: Image schemas and force dynamics as experiential essence in pictures by Paul Klee and their gestural enactments. In: Barbara Dancygier, Mike Borkent and Jennifer Hinnell (eds.), Language and the Creative Mind, 325–346. Stanford: Center for the Study of Language and Information.
Mittelberg, Irene volume 1. The exbodied mind: Cognitive-semiotic principles as motivating forces in gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 755–784. Berlin/Boston: De Gruyter Mouton.
Mittelberg, Irene this volume. Gestures and iconicity. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1712–1732. Berlin/Boston: De Gruyter Mouton.
Mittelberg, Irene and Linda R. Waugh 2009. Metonymy first, metaphor second: A cognitive-semiotic approach to multimodal figures of thought in co-speech gesture. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 329–356. Berlin: Mouton de Gruyter.
Mittelberg, Irene and Linda R. Waugh this volume. Gestures and metonymy. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1747–1766. Berlin/Boston: De Gruyter Mouton.
Mol, Lisette, Emiel Kramer and Marc Swerts 2009. Alignment in iconic gestures, does it make sense? Proceedings of the 2009 International Conference on Audio-Visual Speech Processing, University of East Anglia, Norwich, UK, September 2009.
Müller, Cornelia 1998a. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 1998b. Iconicity and gesture. In: Isabelle Guaïtella, Serge Santi, Christian Cavé and Gabrielle Konopczynski (eds.), Oralité et gestualité: Communication multimodale et interaction, 321–328. Montréal/Paris: L'Harmattan.
Müller, Cornelia 2010a. Mimesis und Gestik. In: Gertrud Koch, Christiane Voss and Martin Vöhler (eds.), Die Mimesis und ihre Künste, 149–187. München: Fink.
Müller, Cornelia 2010b. Wie Gesten bedeuten: Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia and Alan Cienki 2009.
Words, gestures and beyond: Forms of multimodal metaphor in the use of spoken language. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 297–328. Berlin: Mouton de Gruyter.


Özyürek, Asli, Sotaro Kita, Shanley Allen, Reyhan Furman and Amanda Brown 2005. How does linguistic framing of events influence co-speech gestures? Insights from cross-linguistic variations and similarities. Gesture 5(1): 251–237.
Parrill, Fey and Irene Kimbara 2006. Seeing and hearing double: The influence of mimicry in speech and gesture on observers. Journal of Nonverbal Behavior 30(4): 157–166.
Peirce, Charles Sanders 1960. Collected Papers of Charles Sanders Peirce (1931–1958). Vol. I: Principles of Philosophy, Vol. II: Elements of Logic. Cambridge: The Belknap Press of Harvard University Press.
Perniss, Pamela, Robin Thompson and Gabriella Vigliocco 2010. Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology 1: 1–15.
Saussure, Ferdinand de 1986. Course in General Linguistics. 3rd edition. Translated by Roy Harris. Chicago: Open Court.
Slobin, Dan J. 1996. From 'thought and language' to thinking for speaking. In: John J. Gumperz and Stephen Levinson (eds.), Rethinking Linguistic Relativity, 70–96. Cambridge: Cambridge University Press.
Sonesson, Göran 2008. Prolegomena to a general theory of iconicity: Considerations of language, gesture, and pictures. In: Klaas Willems and Ludovic De Cuypere (eds.), Naturalness and Iconicity in Language, 47–72. Amsterdam: John Benjamins.
Streeck, Jürgen 2008. Depicting by gesture. Gesture 8(3): 285–301.
Streeck, Jürgen 2009. Gesturecraft: The Manu-Facture of Meaning. Amsterdam: John Benjamins.
Streeck, Jürgen, Charles Goodwin and Curtis D. LeBaron 2011. Embodied Interaction: Language and Body in the Material World. (Learning in Doing: Social, Cognitive and Computational Perspectives.) New York: Cambridge University Press.
Sweetser, Eve E. 2009. What does it mean to compare language and gesture? Modalities and contrasts.
In: Jiansheng Guo, Elena Lieven, Nancy Budwig, Susan Ervin-Tripp, Keiko Nakamura and Seyda Özcaliskan (eds.), Crosslinguistic Approaches to the Psychology of Language: Studies in the Tradition of Dan Isaac Slobin, 357–366. New York: Psychology Press.
Sweetser, Eve E. 2012. Viewpoint and perspective in language and gesture. In: Barbara Dancygier and Eve Sweetser (eds.), Viewpoint in Language: A Multimodal Perspective, 1–22. Cambridge: Cambridge University Press.
Wilcox, Sherman in press. Signed languages. In: Ewa Dabrowska and Dagmar Divjak (eds.), Handbook of Cognitive Linguistics. Berlin: Mouton de Gruyter.
Wundt, Wilhelm 1921. Völkerpsychologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythos und Sitte, Vol. 1, 4th edition. Stuttgart: Kröner Verlag.
Wundt, Wilhelm 1973. The Language of Gestures. The Hague: Mouton.
Zlatev, Jordan 2005. What's in a schema? Bodily mimesis and the grounding of language. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 313–342. Berlin: Mouton de Gruyter.

Irene Mittelberg, Aachen (Germany)
Vito Evola, Aachen (Germany)


132. Gestures and metonymy

1. Metonymic moments: Ad hoc abstraction in gesture
2. Experiential anchorage: Material, physical, and conceptual contiguity relations
3. Internal and external metonymy in manual gestures and full-body (re-)enactments
4. Cross-modal patterns of meaning construction: Metonymic shifts and chains
5. Concluding remarks: Metonymic "slices of life"
6. References

Abstract

Gestures are inherently metonymic: they may profile salient features of objects, actions, events, concepts, or ideas that are particularly relevant to the speaker in a given moment of multimodal interaction. This chapter aims to account for distinct cognitive-semiotic mechanisms that seem to motivate instances of ad hoc abstraction in spontaneous gestural sign formation, as well as to guide processes of cross-modal inferencing during interpretation. It explores a range of semiotic practices that bring about the metonymic spareness and furtiveness characteristic of manual gestures and full-body reenactments. First, Peircean, Jakobsonian, and recent embodied views on contiguity are discussed, laying out various contiguity relations and degrees of metonymic proximity between the communicating human body, its material and social habitat, and the virtual entities gesturing hands may seem to manipulate or the invisible traces their movements may leave in the air. Then a taxonomy of metonymic principles engendering predominantly indexical or iconic bodily signs, as well as transient cases, will be presented. Finally, a set of underlying metonymic shifts and chains and their interaction with metaphor will be discussed.

1. Metonymic moments: Ad hoc abstraction in gesture

Gestures, like most other signs, tend to be partial representations and thus metonymic in one way or another. Accordingly, gestural sign formation implies, as most processes of perception and expression do, abstraction, that is, the singling out of salient features or decisive moments of entities, ideas, actions, or events – whether experienced many times before or imagined for the first time. Arnheim (1969: 117) describes the schematic nature of gestural gestalts as follows:

Actually, the portrayal of an object by gesture rarely involves more than some one isolated quality or dimension, the large or small size of the thing, the hourglass shape of a woman, the sharpness or indefiniteness of an outline. By the very nature of the medium of gesture, the representation is highly abstract. What matters for our purpose is how common, how satisfying and useful this sort of visual description is nevertheless. In fact, it is useful not in spite of its spareness but because of it.

The aim of this chapter is to lay out a set of semiotic practices assumed to engender the metonymic "spareness" and furtiveness that is characteristic of spontaneous coverbal gestures and full-body enactments. Gestures often consist only of schematic figurations traced in the air, or minimal motion onsets metonymically alluding to, or abstracting over, entire physical shapes, movement excursions, action routines, or the objects/tools

1748

VIII. Gesture and language

they involve. For instance, to convey to a friend sitting across the lecture hall that one will email her later, one might first enact a typing action by holding both hands next to each other, palms facing downward, furtively moving some fingers up and down, and then point an index finger slightly in her direction. Based on her embodied experience and world knowledge, as well as the shared context, the addressee can easily infer a more precisely performed typing action actually producing text, the implied keyboard, and other contextual elements, such as the material setting and the contiguous steps involved in using a computer and sending/receiving electronic messages. While this gesture sequence can be understood without additional speech information, most of the gestures discussed in this chapter are temporally, syntactically, and semantically integrated with the concurrent speech, which serves to disambiguate the meaning of typically polyfunctional gestural forms and actions.

Contrasting with fully coded signs and constructions in spoken and signed languages, which must adhere to certain well-formedness conditions to be correctly understood, coverbal gestures are created less consciously and most often "from scratch" (e.g., Mittelberg this volume; Müller 1998). This ad hoc potential for idiosyncratic expression contributes to the broad range of individual gesture styles. As we will see below, it also reveals a certain systematicity afforded through specific metonymic principles acting as constitutive driving forces in each instance of bodily sign formation. Motivated and subjective, the use of natural semiotic resources allows the gesturer to convey – more or less creatively – information from her own or other points of view (Cienki and Mittelberg 2013; Müller 2010; Sweetser 2012). Metonymy in gesture has so far received much less attention than metaphor (e.g., Cienki 2012; Cienki and Müller 2008).
Throughout this chapter, various metonymic principles will be examined in light of the specific mediality and affordances of gestures. Metonymy is here understood as involving two entities pragmatically linked within the same experiential domain (e.g., Barcelona 2009) or frame (Fillmore 1982), one of which is profiled, allowing inference of the other element(s); e.g., a typing action may evoke a virtual contiguous keyboard as well as the ensuing email exchange. Metaphor, by contrast, involves a mapping between two different experiential domains (e.g., in "she easily got a grasp of the concept", the mental process of understanding is construed in terms of a physical action; Lakoff and Johnson 1980).

Drawing on previous work on metonymy in co-speech gestures and bringing in insights from signed languages, this chapter will give an overview of how, in multimodal communicative acts, the human body and its visual action (Kendon 2004) provide not only dynamic iconic structure, but also different kinds of indices that may function as physical cues for metonymic inferences, thus putting speakers in touch with their imagination and the world around them.

First, section 2 introduces various types of contiguity relations. In section 3, two distinct but interlaced routes of metonymically motivated gestural abstraction will be laid out, one primarily based on iconicity, the other primarily rooted in contiguity relations. Modes of interaction between metonymy and metaphor are also briefly addressed. Section 4 presents an overview of metonymic shifts and chains, and section 5 sketches possible avenues for further research.

2. Experiential anchorage: Material, physical, and conceptual contiguity relations

When examining how metonymy may manifest itself in co-speech gestures, combining semiotic frameworks not exclusively based on language (e.g., Jakobson 1956; Peirce


1960) with embodied approaches to language, cognition, and social interaction allows us to account for the specific nature of both verbal and bodily signs. These perspectives agree that meaning does not reside in the material form that a sign, such as a word or a gesture, takes, but arises in the dynamic multidimensional gestalt of a mental representation or some other kind of cognitive or physical response to a perceived sound, image, or human behavior. Metonymies are pervasive embodied processes of association and signification, rooted in entrenched multi-sensory experiences of perceiving, interpreting, and communicating (e.g., Gibbs 1994). Contiguity, a general relational concept introduced in this section, underpins most metonymic principles and has proven constitutive of gesture form (Hassemer et al. 2011) as well as of cross-modal processes of meaning construction (Mittelberg 2006, 2010b, volume 1).

According to Peirce (1960), contiguity encompasses different kinds of factual connections: e.g., physical impact, contact, and adjacency, as well as temporal and spatial closeness or distance. All of these may underpin indexical sign processes, in which the sign carrier, e.g., fingerprints left at a crime scene, points the interpreting mind to the "object", e.g., the person whose fingers caused the imprints through physical impact. Grammatical function words such as personal pronouns and demonstratives are indexical linguistic signs – or shifters (Jakobson [1957] 1971) – whose highly context-dependent meanings include factors of the speech event, shifting in each instance of use. As we will see below, not only highly indexical gestures (such as deictics or beats; see McNeill 1992), but also gestures (re)presenting some content are polysemous, multifunctional signs that share this property in striking ways.
Within cognitive linguistics, contiguity relations feeding into metonymic expressions are thought of as either objectively given or cognitively construed (e.g., Barcelona 2009; Benczes et al. 2011; Dirven and Pörings 2002; Peirsman and Geeraerts 2006); they are further assumed to be contingent (Panther and Thornburg 2003). As for bodily semiotics, the latter aspect seems pertinent, for a gesture is, in most cases, just a gesture: gesticulating hands do not manipulate physical objects or surfaces but only pretend to do so (as in the email-typing gesture described above). Hence, the original contiguity relation within the functional domain of hands typing on a keyboard is cancelled. This letting go of the material world turns a transitive manual action into a more abstract communicative hand movement, from which virtual objects, tools, surfaces, and their affordances often may still be inferred (Grandhi, Joue, and Mittelberg 2012). Crucially, gesturing hands may not only reflect their groundedness in everyday interaction with the material world, but also (re-)establish indexical anchorage of the body and the mind in the here and now by seeking tactile contact with the environment and integrating artifacts and surface structures into meaningful actions, often collaboratively constructed with interlocutors (e.g., Enfield 2011; Goodwin 2007; Haviland 2000; Streeck 2009).

Jakobson (1956) distinguished between contiguity relations in the physical world, e.g., between a fork and knife, and those combining items in a semiotic contexture, e.g., linguistic units forming syntagms or entire discourses (Waugh and Monville-Burston 1990). In the emailing gesture, the iconic typing action and the indexical gesture hinting at the receiver jointly constitute a gestural syntagm (Mittelberg 2006). In addition, the two modalities, and others such as eye gaze and head movements, contextualize one another (Jakobson 1963).
Of central importance in the present context is Jakobson’s distinction between “inner contiguity”, i.e. synecdoche, and “outer contiguity”, i.e. what he called “metonymy proper”:


One must – and this is most important – delimit and carefully consider the essential difference between the two aspects of contiguity: the exterior aspect (metonymy proper), and the interior aspect (synecdoche, which is close to metonymy). To show the hands of a shepherd in poetry or the cinema is not the same as showing his hut or his herd, a fact that is often insufficiently taken into account. The operation of synecdoche, with the part for the whole or the whole for the part, should be clearly distinguished from metonymic proximity. […] the difference between inner and outer contiguity […] marks the boundary between synecdoche and metonymy proper. (Jakobson and Pomorska 1983: 134)

From these observations one can recognize two fundamental contiguity relations as the basis of distinct metonymic operations: (a) inner contiguity underlies inherent part-whole relationships, i.e. synecdoche (e.g., in "thirty sails appeared on the horizon", sails evokes vessels); (b) outer contiguity underpins metonymic expressions in which the profiled element is not part of, but externally contiguous and pragmatically related to, the element it causes us to infer (e.g., a vessel shown in a film scene may evoke the crew inside it). In the section below, we will review work exemplifying how these different relations may motivate cross-modally achieved metonymic expressions in co-speech gesture.

3. Internal and external metonymy in manual gestures and full-body (re-)enactments

Given their dynamic visuo-spatial mediality, gestures can be expected to be metonymic and iconic in different ways than (spoken) language (Müller 1998; Sonesson 1992; Waugh 1993). As work on metonymy in gesture has shown, synecdoche plays a central role in gestural sign formation (Bouvet 2001; Ishino 2007; Müller 1998; Taub 2001; Wilcox 2004). The goal of this section is to demonstrate that while the principle of partial representation is an essential driving force, communicating hands also exploit various forms of "metonymic proximity" between the body and the contiguous outer world (Jakobson and Pomorska 1983: 134). Building on a recent Jakobsonian account of distinct metonymic operations in gesture, i.e. internal and external metonymy (Mittelberg 2006, 2010b, volume 1; Mittelberg and Waugh 2009), the underlying fundamental distinction between inner and outer contiguity will serve as a blueprint for the following discussion of metonymy in manual gestures and full-body enactments. Where pertinent, connections to cognitive-linguistic accounts will be drawn. It is important to note that these principles mix to various degrees in dynamic multifunctional gestural signs and that the gesture analyst always needs to identify, in light of the speech content, the dominant force determining a gesture's primary focus and function.

3.1. Internal metonymy: Wholes, parts, and essence

Internal metonymy relies on the kinds of inner contiguity relations that underpin the pars-pro-toto principle: a part stands for another part, a part for the whole, or a whole for the part. For example, in the expression "everyone lives under one roof", roof stands for the entire house of which it is a physical fragment. Thus, what is generally known as synecdoche is subsumed under internal metonymy. "Internal" suggests that the inner structure of an entity, body, or event is broken down into its component parts or phases, and that one of them (e.g., the roof, or the shepherd's hand in Jakobson's example above) is taken to imply the entire gestalt structure (e.g., the house, or the shepherd). Such


relations also link parts and wholes in abstract structures such as schemas, frames, or constructions (e.g., Mittelberg 2006).

In bodily semiotics, internal metonymy may motivate processes of profiling and highlighting prototypical or locally salient aspects of, e.g., a given concept, object, action, or event. Gesturers may evoke parts, contours, geometric shapes, spatial dimensions, the manner of motion, and other qualities of what they are talking about and wish to accentuate. Just as visual perception is an active selective process, gestures may assist speakers in "grasping the essentials" (Arnheim 1974: 42) of, for instance, a witnessed scene, a cognitive percept (Bouvet 1997), or abstract thought processes. According to Shapiro (1983: 201), this metonymic singling out or individuation of recognizable features and patterns "is perceptually and/or cognitively well-motivated (natural)". Johnson (2007: 92) reminds us that "[i]t is our ability to abstract a quality or structure from the continuous flow of our experience and then to discern its relations to other concepts and its implications for action that makes possible the highest forms of inquiry of which humans are uniquely capable." As we will see below, gestures are a means to draw on both conceptual relations and their implications for action.

The following typology aims to encompass a range of gestures in which metonymy interacts with iconicity to various degrees; hence the choice of icon as a base term. Using "metonym" instead would not work as well, as most gestures are metonymic in some way. Another concern is to mark the semiotic difference vis-à-vis gestures more strongly based on outer contiguity (see the indices listed in section 3.2 and Mittelberg [this volume] on Peirce's notions of iconicity and ground). Since gestures may depict, or create, all kinds of "objects" in the Peircean sense, i.e. physical and non-physical entities, the labels primarily reflect bodily characteristics and actions.
The discussion proceeds according to Peirce's (1960: 135) subtypes of icons: image icons, diagrammatic icons, and metaphor icons. Importantly, these cognitive-semiotic modes do not represent absolute categories, but interacting dynamic processes of profiling relevant features by making them stand out within complex semiotic gestalts.

Body posture/body action image icon. In these gestures, a body (part) stands for a body (part), and (re-)enacted bodily action stands for bodily action of the same kind: body postures may mimic body postures (e.g., standing or leaning forward), bodily actions imitate bodily actions (e.g., grasping, running, or dancing), head movements imitate head movements (e.g., nodding), and hands represent hands (e.g., waving). Gesturers may imitate their own (performed or imagined) actions or those of others (see McNeill 1992; Sweetser 2012 on viewpoint). Such gestural portrayals tend to be inherently metonymic in both their reduced articulation and their temporal impermanence, since in ongoing conversations there is just enough time to share quick gestural glimpses of crucial aspects of what is being conveyed or is not readily expressible through speech.

For an example of a body action image icon, consider Fig. 132.1 below (taken from an interview on ArchRecordTV, January 5, 2011; see Mittelberg 2012). Here the British architect Norman Foster enacts a fictive scene based on his own experience with architectural space he himself designed (i.e. the Sperone Westwater Gallery, Manhattan). In this multimodal performance, Foster imitates someone entering the building and being taken by surprise. Assuming character viewpoint, he mimics looking and pointing up at the bottom of the elevator installed just above the museum's lobby. He is visibly amused by the thought of this architectural effect: …because the last place you think you'd ever really want to be in any building is underneath the elevator. Via internal metonymy, this image


Fig. 132.1: underneath the elevator (body action image icon)

icon portrays only a few essential aspects of the full actions it alludes to, and several incorporated indices guide viewers' attention upward to the imagined elevator.

In the second example (taken from Mittelberg's [2006] multimodal corpus of linguistics lectures in American English), a professor introduces the concept of semantic roles (Fig. 132.2): To account for this… we use names of semantic roles that bounce around in linguistics… agent, patient, recipient, goal, experiencer… those are semantic roles. On the mention of recipient she produces a palm-up open hand (Müller 2004) with slightly bent fingers held near her right hip. Recipient designates a particular semantic role, i.e. a grammatical function, which the teacher personifies with her entire body by becoming a body posture image icon, slightly abstracted and idealized, of a person who could be holding something she received (via a person-role metonymy; Panther and Thornburg 2004: 94). Whether her open hand is meant to be supporting an object or to signal readiness to receive something is left unspecified. Although not operationalized here, we can assume a latent outer contiguity relation between the palm and a potential object. Internal metonymy further interacts here with metaphor, that is, with personification.

A gesture that can be classified as a body action image icon is described by Bouvet (2001: 89): retelling La Fontaine's fable of "the crow and the fox" in French, the speaker performs a stylized bimanual grasping action, with thumbs and fingers snapping several times at shoulder height, when saying that 'the two men catch the fox' (les deux hommes attrapent le renard). Hence, the focus is on the men and their physical actions and not on the fox implied in their actions.

Fig. 132.2: recipient (body posture image icon)


Body-/hand-as-object image icon. This metonymic process draws attention to the speaker's entire body or to a body part used to stand for something other than itself: e.g., a concept or an object as such, an entity or person undergoing a motion event (i.e. an intransitive action), or a tool performing an object-oriented (i.e. transitive) action. Hands, when becoming objects, persons, or tools, are inseparable from the action they are involved in; very abstract instances are better described as abstract action image icons (see below). More prototypical instances of hand-as-object image icons, however, involve the gesturer's hands profiled against the entire body based on a pars-pro-toto relation (see Calbris 1990 on body segments). A cupped hand, for instance, may imitate a sort of receptacle (Mittelberg 2008), an index finger held horizontally and close to the mouth may stand for a toothbrush (Lausberg et al. 2003), or a flat palm-up open hand may represent a piece of paper (Müller 1998: 123). A flat palm-vertical open hand may further become a blade, i.e. a hand-as-tool icon, performing virtual cutting actions on an imagined fruit (Grandhi, Joue, and Mittelberg 2011). In American Sign Language (ASL), the lexical sign for tree is a bimanually achieved image icon encoding the salient elements of a tree: its trunk, branches, and the supporting ground (Taub 2001: 29). Bouvet (1997: 17) describes how a little boy uses his entire body to imitate a helicopter, thus bringing out its prototypical form features and movements. His torso represents the helicopter's core and his arms the two opposite rotors circling around their axis. In this body-as-object image icon, the boy becomes a helicopter in action.

Abstract action/process image icon. In this kind of gestural abstraction the action itself is foregrounded, and the things or persons possibly involved in it are backgrounded: "The abstractness of gestures is even more evident when they portray action.
One describes a head-on crash of cars by presenting the disembodied crash as such, without any representation of what is crashing […] and a clash of opinions is depicted in the same way as a crash of cars" (Arnheim 1969: 117). Abstract process icons subsume gestures metonymically distilling the essence out of cognitive or physical processes such as iteration, continuation, correlation, or merging, by abstracting away from the elements or ideas undergoing the process or performing the action (see Bressem [volume 1] for additional motion patterns; Ladewig [2011] on the cyclic gesture; and Mittelberg [2010a, 2013] on image schemas and force dynamics in gesture).

Line/figure/plane/volume image icon. Gestures of this type produce lines, figurations, planes, or volumes that are, no matter how abstract and evanescent they might be, iconic signs in their own right. Hands may trace a belt in the form of a transverse line at waist level (Calbris 1990: 39), draw an entity's shape such as a rectangular picture frame (Müller 1998: 119), or evoke the width of a building (Mittelberg this volume). They may also depict the pertinent qualities of the path and/or manner of a motion event.

For an example of a line image icon, consider Fig. 132.3 (taken from Mittelberg 2006): the linguist produces a tracing gesture by moving both her hands laterally outward from the center until her arms are fully extended. The concurrent utterance – we think of a sentence as a string of words – not only determines that this polysemous gestural line-drawing depicts a sentence, it also shifts the focus from the bodily action of tracing to the contiguously emerging virtual line. While this shift is triggered through an index leading from the tips of her hands to the trace they produce, it is via internal metonymy that this sketchy imagined line is an image icon of a string standing for a complete (written) sentence (see Taub 2001: 77 for path iconicity in ASL).
VIII. Gesture and language

Fig. 132.3: a string of words (line image icon)

Virtual three-dimensional gestalts may also emerge from underneath sculpting hands: there also is immediate outer contiguity between the hands and the material they mold (Müller 1998) into a volume image icon (see also hand-surface index below).

Diagrammatic icon. Gestural graphs and diagrams are abstract schematic representations that bring out the internal structure of a gestalt by highlighting the boundaries between its parts or how the elements are connected. Such highly metonymic “icons of relation” (Peirce 1960: 135) combine, like many conceptual image schemas (Johnson 2007), inner and outer contiguity relations in various ways (Mittelberg 2008, 2010a, volume 1).

Metaphor icon. All the modes of internal metonymy presented above may interact with metaphoric processes. In fact, from the perspective of the interpreter, metonymy has been argued to lead the way into metaphor (Mittelberg and Waugh 2009). Note the crucial difference between gestural image icons of metaphoric linguistic expressions, such as the recipient (Fig. 132.2) and the string of words (Fig. 132.3), and speech-independent metaphor icons manifesting a metaphorical understanding in their own right. Mittelberg (volume 1: 764) describes a metaphor icon in the form of a cupped palm-up open hand produced by a linguist when explaining the grammatical category the main verb. Through internal metonymy the hand shape iconically portrays essential form features of a small container, which builds the basis for the metaphorical mapping categories are containers (Lakoff and Johnson 1980). Hence, whereas the speech is technical and non-metaphorical, the gesture modality evidences a metaphoric construal (see also Cienki and Müller 2008; Evola 2010).

132. Gestures and metonymy

3.2. External metonymy: Contact, containment, manipulation, and exploration

External metonymy involves various kinds of outer contiguity relations, e.g., contact, adjacency, impact, and cause/effect (Jakobson and Pomorska 1983). For instance, in The White House remained silent, the White House refers to the U.S. President or his spokesperson. The relevant contiguity relations, i.e. between the building or institution and its inhabitants or members, are spatial and pragmatic in nature; the people in the building are obviously not part of its architectural structure (like the roof in the example for internal metonymy). House and people belong to the same frame (Fillmore 1982). Or, if the question would you like another cup? is used to ask the addressee if she cares for more tea, the container cup stands for its contents, i.e. tea, which is not part of the material structure of the container cup. This container-for-contained metonymy evokes the tea-drinking frame with all its pragmatic implications and socio-cultural conventions.

In multimodal interaction, the speakers’ hands may create containers, surfaces, as well as chunks of or points in space for imagined entities which in turn may stand for the concepts or things talked about and “shown” to interlocutors (Kendon 2004; Müller 2004). Outer contiguity relations not only condition the body’s tactile, sensory-motor interaction with the physical and social world (section 2); they further hold between the outer shell of a person’s body and the inner self, e.g., the organs, the brain, and the mind. In particular, contact, adjacency, and impact are external relations between hands and the objects, tools, and surfaces they are in touch with that may be highlighted, established, or deleted through metonymic modes operating on them. The following typology spans different types and degrees of body-centered “metonymic proximity” (Jakobson and Pomorska 1983: 134). Since in these cases indexicality dominates over iconicity, the base term for the cognitive-semiotic principles is index (see also Mittelberg volume 1).

Away from body index (pointing). Pointing gestures are included in the taxonomy as examples of prototypical or highly indexical signs based on an outer contiguity relation between the tip of the pointing finger or hand and the more or less distant target (concrete or abstract) of the pointing action (e.g., Fricke 2007; Kita 2003; Talmy 2013).

Placing index. This gestural practice is used to literally place things or people referred to in speech in gesture space, thus creating placeholders that either underpin the introduction of a new discourse element or facilitate anaphoric reference. Placing may be performed with one hand or both hands, but typically with the palm facing down (PDOH) or away from the body. A speaker might also simply point with his index finger into the space in front of him, thus setting up a point or location that metonymically stands for something else (Clark 2003; Cooperrider and Núñez 2009; McNeill 2005; Mittelberg 2006).

Interactive discourse index (pointing at interlocutors; discourse contents; common ground).
These indices underpin various interactive practices of pointing towards conversational partners, audiences, or discourse contents, e.g., citing, seeking, delivery, or turn coordination (Bavelas et al. 1995). For example, Bavelas et al. (1995: 396) describe “general citing” gestures as typically involving a loose palm-up open hand directed towards an interlocutor “to cite the addressee – that is, to acknowledge an earlier contribution the addressee made”. In processes of metonymic inferencing (e.g., Langacker 1993; Panther and Thornburg 2003), interactive discourse indices may lead the interpreting mind to an intended target meaning, e.g., something an interlocutor did or said before, or an emotional response he or she displayed while listening. Common ground (Clark 1996), i.e. shared knowledge and experiences, may also be pointed at in this fashion. How exactly metonymic principles guide the interpretation of these interactive practices, or multimodal pragmatic acts, still needs to be investigated.

Body part index. This kind of external metonymy comes to bear in gestures that derive their meaning partly from their contact with, or proximity to, particular body parts or regions (e.g., the lower back) or a particular organ (e.g., the heart). The gesture depicted in Fig. 132.4 is a bimanual body part index directed at the speaker’s temples; it co-occurs with knowledge in the verbal utterance Grammar emerges from language use, not from knowledge becoming automatized. While pointing, the two cupped hands constitute a container attached to the head, i.e. the site of knowledge, thus mirroring the fact that the head is metaphorically construed as a container that stands metonymically for its contents. Getting to the latter takes two steps along an inferential pathway (e.g., Barcelona 2009; Panther and Thornburg 2003), guided by external metonymy through first drawing on the outer contiguity between the hands and the head and then between the head and its insides. Body-part-centered metonymic processes are also productive in signed languages (see Mandel 1977: 63; Wilcox [2004: 213] for the sign think; Dudis [2004] on body partitioning).

Fig. 132.4: knowledge (body part index)

Fig. 132.5: noun (hand-object index; support)

Hand-object index (support; container). In gestures involving open, cupped, or closed hands, palms may appear to be literally “in touch” with the imagined “objects” they seem to be supporting, holding, or otherwise manipulating. The palm-up open hand gesture (Müller 2004) in Fig. 132.5 is a good example of how the principle of external metonymy is instantiated through an interaction with the concurrent speech content. In fact, it often is the speech content that justifies assuming something like an “object”. Explaining the framework of emergent grammar, the speaker maintains that a priori […] you cannot define a noun from a verb. On noun, this palm-up open hand constitutes a perceivable surface, i.e. a material support structure, for the abstract category noun, metaphorically reified as a graspable object (Mittelberg 2008). Importantly, iconicity and metaphor alone cannot account for this gesture’s meaning. While the person may serve as a body action image icon similar to the one in Fig. 132.2, there is no similarity relationship between the person and the grammatical category mentioned in speech. Rather, an imputed immediate contiguity relation (contact/adjacency) between the open palm and the implied element becomes significant: the word noun draws attention from the action to the entity to be inferred metonymically. The indexicality residing inside such manual signs propels, together with other discourse-pragmatic factors, a sort of reduced indicating function, as if the speaker was pointing to the existence of otherwise intangible ideas or entities (see Liddell [2003] on surrogates in ASL).

Hand-tool index (with/out implied object). This indexical principle allows differentiating gestures involving an object from those involving a handheld tool with which an action is, or may potentially be, performed.
In their study on transitive action gestures, Grandhi, Joue, and Mittelberg (2011) found that participants describing everyday actions tend to produce gestures in which the dominant hand seems to be handling (i.e. not iconically representing, as in the hand-as-tool icon) the tool required for a particular action. Slicing an apple, for instance, necessitates both an object (apple) and a tool (knife). While explaining, you need to slice the apple by holding it down and cutting it there, one participant pretends to be holding a knife in her right hand (hand-tool index), as shown in Fig. 132.6, while pantomiming a cutting action on a virtual apple she is seemingly holding down with her left hand (hand-object index). The speech content draws attention to both the action and the object, but not to the tool. All three elements belong to the same experiential domain or frame. Taken as a whole they represent a body action image icon that can be broken down in the sense that two indices point to external elements, one of which (the object) is profiled by the speech content and the other (the tool) can be easily inferred from the context (see section 3.1).

Fig. 132.6: slicing an apple (left hand: hand-object index; right hand: hand-tool index)

Double hand-object index (enclosing; grouping; sculpting). These gestures exhibit similarly muted indexical functions as the ones we just saw. Yet, by employing two articulators, e.g., two fingers or two hands, they provide more iconic information regarding the geometry and size of the “object” they seem to be holding or the chunk of space they enclose. The person in Fig. 132.7 is explaining the short sentence Diana fell. Upon mentioning the verb fell, the thumb and index finger of his right hand seem to be holding it up in the air, conceptualized as a tangible object or as space extending between the articulators. Again, if we only considered the visible gestural articulators as the semiotic material of this gesture, we could not establish a meaningful relationship with the verb fell (no falling event is depicted, either). But this body action image icon provides indexical cues drawing on the immediate outer contiguity (contact) between the observable gestural components and the imagined element thus seized. Through the linguistic cue fell, this tight contiguity relation is operationalized via external metonymy. The bimanual gesture depicted in Fig. 132.8 combines iconic and indexical modes in a rather balanced fashion. Here the speaker talks about main verbs and auxiliaries, explaining that verbs like have, will, being, and been […] must all belong to some subcategory. Upon some subcategory he makes this gesture, consisting of two hands seemingly holding a virtual three-dimensional object. While there is an iconic relationship (via internal metonymy) between the physical action of holding something and this gestural imitation (body action image icon), the speaker is not referring to his action but to the object involved in it. So the linguistic cue triggers the activation of the outer contiguity relation between the hands and the adjacent virtual object, which results in a cross-modal metonymic expression. This association works effortlessly, also on the side of the interpreter, because action and object belong to the same basic experiential domain or frame. Moreover, the gesture’s comparatively low location in gesture space reinforces the idea of subcategory. Since it receives some of its meaning from its marked position, this also is a gestural instance of metonymy of place (Mittelberg and Waugh 2009). Variants of such bimanual indices may also function as the visible starting points for creating (not holding) imaginary three-dimensional objects, e.g., volume image icons (section 3.1). Double hand-object indices may also be instantiated by hands seemingly involved in enclosing or grouping virtual items in gesture space.

Fig. 132.7: fell (double hand-object index)

Fig. 132.8: subcategory (double hand-object index)

Hand-surface index (touch; exploration). Immediate outer contiguity relations between open hands and the surfaces they pretend to touch or run across may come into focus in gestures seemingly exploring the texture of fabrics, the surface of a piece of furniture, or some ground. Note the difference between this tactile gestural practice of interacting with the material world, e.g., by sensing or pointing at some of its prominent attributes, such as smoothness or bumpiness, as opposed to hand-object indices alluding to hand-held objects or tools, or image icons created by hands and their movements (Müller 1998; Streeck 2009).

Hand-trace index (impact; effect; with/out resulting icon).
If virtual movement traces do not constitute iconic signs, that is, if they do not create or represent something other than themselves, then the focus may be on their “impact”, e.g., their leaving some sort of mark. What is profiled in these cases is the outer contiguity relation between perceivable gestural articulators, e.g., the index finger or the entire hand, and the resulting inscriptions in the air or on surfaces (Goodwin 2007). In fact, this is the first step leading into the creation of line image icons (section 3.1). If indexicality is the dominant function, however, these marks compare to animal footprints in the snow, a classic example of indices also involving some iconic features.

Modal index (epistemic stance; attitude). Muted degrees of indexicality may also reside in palm-up open hand variants implying empty hands or no object-oriented aspects at all (as opposed to hand-object indices). Different kinds of expressive movements and facial expressions may reveal the speaker’s attitude, or epistemic stance, towards what she (or an interlocutor) is saying: e.g., doubt, uncertainty, obviousness, or cluelessness. Interlocutors pick up on such indices, which may add modal, i.e. pragmatic, functions to a gestural portrayal also representing some content (Cienki and Mittelberg 2013; Kendon 2004; Müller 1998).

Emotional/mental state index. Expressive movements may also be motivated by – or simply be – psychological or emotional states. Although they are comparable to vocalizations signaling, e.g., surprise or impatience, in gesture these indices are incorporated into bodily (iconic) structure. The pathos formula described by art historians stands for states of strong affect manifesting themselves, e.g., in statues of Laocoön struggling with a sea snake sent by Neptune (e.g., Gombrich 1960). Such central figures of Western iconography seem still to resonate in today’s gestural practices. Captured outbursts of emotion like these are body posture image icons with incorporated emotional state indices physically displaying an inner disposition. Similarly, iconic and indexical modes may jointly produce a unified corporeal portrayal, signaling, e.g., a speaker’s surprise or agitation about something s/he is talking about, as in Fig. 132.1 (Müller 1998). Listeners may also respond in physical ways, thereby displaying, e.g., empathy. These behaviors add a sense of drama to everyday performances in conversations but are more prevalent on stage (Brandt 2004). Crucially, indices of this kind come from within the body icons and seem to simultaneously point back inward, thus offering cues about the speaker’s inner state. Hand-object indices, on the contrary, focus on the speaker’s acting hands, thus leading the interpreter’s mind into contiguous material or imagined worlds (for metonymic relations between specific movement qualities and emotions in ASL, see Wilcox, Wilcox, and Jarque [2003]).

4. Cross-modal patterns of meaning construction: Metonymic shifts and chains

Coming back to the fundamental distinction between inner and outer contiguity relations stressed by Jakobson (Jakobson and Pomorska 1983), one can now step back and look at the bigger picture, which reveals gesture-specific tendencies in exploiting them for communicative purposes. From the range of gestural actions and postures presented above, a set of underlying metonymic patterns seems to emerge. Each pattern involves one or several stages evolving along an axis originating from the speaker’s body engaged in metonymically reduced communicative postures, movements, or actions. Interlocutors may first recognize a certain kind of action based on similarity relations with action schemas or typical postures, but may also witness creative extensions or new forms and behaviors that do not fit into conventional patterns. From the visible body as physical anchor point, the axis extends on the one side to the speaker’s inner world and on the other to the speaker’s outer world, passing through two body-centered outer contiguity relations: (interior world) inner body ← BODY → outer body (exterior world). Breaking these relations down further, we arrive at the following spectrum, ranging from inner states, body parts, and physical actions to body contact (i.e., immediate contiguity; Jakobson and Pomorska 1983: 134) and varying degrees of increasing metonymic distance. In Tab. 132.1 below, the zone of physical inner contiguity relations is shaded in a darker grey than the zones of outer contiguity relations. The two opposing arrows to each side of the body are meant to highlight the fact that some indices may point from the inner body outward and others from the outside towards the inside:

Tab. 132.1: Axis of body-centered inner and outer contiguity relations exploited for co-speech gestures

inner contiguity relations: Interior ↔ BODY (part/zone)
outer contiguity relations (with increasing distance from body): in contact – adjacent – close – in reach – further away
These ordered relations are assumed to provide the structural backbone for the set of distinct patterns of metonymic principles and chains presented in Tab. 132.2 below. Situated at a higher level of abstraction, they may motivate the various kinds of icons and indices (and their interaction) discussed in detail in section 3.

Tab. 132.2: Metonymic principles and chains in manual gestures and full-body enactments

(A) Internal metonymy (icon) → metaphor
Attention stays within the body icon: focus on salient features of a body part, zone, shape, movement, or action.
Example: Fig. 132.2

(B) Internal metonymy (icon) → inherent index → metaphor
Attention stays within the body icon: focus on salient features of a body part, zone, shape, movement, or action, plus movement qualities or mimics manifesting an inner disposition (e.g., emotion; attitude; stance).
Example: Fig. 132.1

(C) Internal metonymy (icon) → external metonymy (index) → external metonymy (index) → metaphor
Attention shifts inward: hand → body part or location on body → adjacent inner area, organ, sensation, or process (e.g., head → thought process).
Example: Fig. 132.4

(D) Internal metonymy (icon) → external metonymy (index) → metaphor
Attention shifts outward: hand → (real/virtual) space/object/tool/person/surface/entity in metonymic proximity to the body (in immediate contact; adjacent; close; distant; etc.), including interlocutors, discourse contents, and common ground (e.g., interactive/pragmatic functions).
Examples: Fig. 132.5, Fig. 132.6, Fig. 132.7, Fig. 132.8

(E) Internal metonymy (icon) → external metonymy (index) → internal metonymy (icon) → metaphor
Attention shifts onto an emergent icon: hand → parts of an emerging virtual trace/plane/volume; infer from the parts the whole gestalt created contiguously to the body (icons in their own right).
Example: Fig. 132.3

Note: All patterns are cross-modal (speech cues are not listed here). All metonymic processes and chains may be extended by additional metonymic modes; they may also lead into metaphorical extensions. The icons produced by pattern (E) may manifest as image, diagrammatic, or metaphor icons.

As the patterns presented in Tab. 132.2 suggest, internal metonymy, i.e. an icon of a human body, is always the point of departure for an ensuing metonymic operation or chain of operations. Pattern (A) accounts for, e.g., motion onsets or schematic movements alluding to full action routines (body action image icons), or hands standing for objects or tools (e.g., hand-as-object/tool icons). Pattern (B) involves bodily actions with noticeable expressive qualities, thus pointing to the inner disposition, attitude, or epistemic stance of the speaker or the person s/he mimics (e.g., body action image icons incorporating modal indices or emotional state indices). Pattern (C) involves two metonymic steps: from the speaker’s pointing hand to the location pointed at on her body, and from that location to some invisible inner organ, process, or sensation interlocutors cannot perceive but imagine or “feel for” the speaker (e.g., body part index). Pattern (D) is exemplified here by revisiting McNeill’s (2005: 114) well-known “bowling ball” example, in which a speaker retells a sequence of the Canary Row cartoon story by saying that Tweety Bird runs and gets a bowling ball and drops it down the drainpipe. Through his speech and bodily portrayal the speaker draws the listener’s attention to both the physical action and the implied object. It is first via iconicity and internal metonymy that we recognize a person performing a bimanual downward dropping action; second, via external metonymy we can pragmatically infer, i.e. imagine, the ball contiguous to the hands as well as the ball’s ensuing trajectory and possible effect on Sylvester. This gesture thus is a body action image icon with an implied double hand-object index that sets off a metonymic chain (see Brdar-Szabó and Brdar [2011] on metonymic chains in language). Finally, pattern (E) may be observed if the gesturer’s body and action are not in focus, but – cued by the concurrent speech – attention is drawn to a fictive schematic iconic figuration resulting from the manual tracing or sculpting movements (e.g., volume image icon or diagrammatic icon). Note that all of the metonymic principles and chains given in Tab. 132.2 depend on where attention is drawn to by the concurrent speech; they all may also lead into metaphor (Mittelberg and Waugh 2009). What is common to these cross-modal patterns is that, for the interpreting interlocutor, they provide metonymic moments that mediate meaning depending not only on what the speaker is trying to convey and emphasize, but also on the interpreter’s own modes of attending to aspects relevant to her/him.
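For annotation or modeling purposes, the chains of Tab. 132.2 can likewise be written down as ordered sequences of operations. The Python sketch below is a hypothetical illustration (the dictionary layout and operation labels are our own, not the handbook’s notation); it encodes patterns (A)–(E), each chain optionally ending in metaphor, and checks the generalization that internal metonymy is always the point of departure.

```python
# Hypothetical encoding of the metonymic chains in Tab. 132.2.
# Operation labels are illustrative shorthand for the table's cells.
PATTERNS = {
    "A": ("internal_metonymy", "metaphor"),
    "B": ("internal_metonymy", "inherent_index", "metaphor"),
    "C": ("internal_metonymy", "external_metonymy", "external_metonymy", "metaphor"),
    "D": ("internal_metonymy", "external_metonymy", "metaphor"),
    "E": ("internal_metonymy", "external_metonymy", "internal_metonymy", "metaphor"),
}

def point_of_departure(chain: tuple) -> str:
    """First operation in a metonymic chain."""
    return chain[0]

# Every pattern departs from an icon of the human body (internal metonymy),
# as the chapter states for Tab. 132.2.
assert all(point_of_departure(c) == "internal_metonymy" for c in PATTERNS.values())
```

Pattern (C), the only chain with two external-metonymic steps, then falls out of the encoding directly (`PATTERNS["C"].count("external_metonymy") == 2`), matching the two-step inferential pathway described for the body part index.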
Interpreting bits and pieces of salient (abstracted) information in a dynamically evolving semiotic contexture, and incorporating them into a (metonymically) structured whole, requires a combination of several embodied cognitive processes, particularly those pertaining to focusing and shifting attention: zooming in on partial aspects of the communicating body, e.g., by focusing on a hand shape or following its movements and the figurations emerging from it; zooming out to understand a gestural diagram or a given performance act in its entirety; as well as shifting focus from the visible communicating body to entities and spaces it is interacting with, combining different perspectives and modes of inference (see Coulson [2001] on semantic leaps). Empirical results from gesture production studies have shown that people more readily exploit external metonymy (i.e. pantomimed action with the virtual object in hand) than internal metonymy (i.e. body-part-as-object). One reason might be that the two imply different modes of abstraction, and that abstracting features from an object involves more cognitive effort than pretending to hold an object in the hand and performing the essential features of the corresponding prototypical action (see Grandhi, Joue, and Mittelberg [2011] for a user study and Lausberg et al. [2003] for neuroscientific insights). It should be stressed that the different iconic and indexical modes presented above are obviously not exhaustive. They need to be tested and modified in light of the specific kind of data and research questions at hand. In each multimodal sign process, their varying interaction as well as their correlation with the concurrent speech content needs to be accounted for very carefully. If possible, conventional, habitual, as well as individual gestural practices should also be considered.

5. Concluding remarks: Metonymic slices of life

Being existentially tied to the human body and its material and socio-cultural habitat, gestures are, regardless of their predominant function, inherently indexical. Given the body’s shifting anchorage in different physical and mental spaces (Sweetser 2012), it is not surprising that quite a range of indices and their interaction with iconic modes could be shown to play a constitutive role in gestural sign creation and interpretation. These observations attest to the tight link between the communicating body and the mind; they also demonstrate that studying metonymy in gestural abstraction and inferencing allows for new insights into human perception, online conceptualization, and meaning-making processes. There still is much research to be done on gestures and full-body enactments to better understand how exactly ad hoc metonymies (Koch 2004) interact with other central semiotic practices and cognitive principles. It would be worthwhile to establish, for instance, how the distinct metonymic modes discussed in this chapter pattern with particular viewpoint strategies (e.g., Dancygier and Sweetser 2012), gestural modes of representation (Müller 1998), and varying degrees of metaphoricity (Müller and Tag 2012). Another possibility is to explore similar processes in static visuo-spatial modalities such as painting and sculpture. Cubist pictures, for instance, share with gestures that they present what Lodge (1977: 109) called “slice[s] of life”: fragments of objects humans interact with on a daily basis, such as chairs, cups, bottles, tables, and newspapers (Mittelberg 2006). Human figures, musical instruments, and newspaper headlines typically appear in abstracted forms, e.g., contours, characteristic features (e.g., eyes, guitar strings, truncated words), or basic geometric shapes (e.g., cubes, squares, triangles), standing in for the entire gestalts (through internal metonymy). A table can further be suggested by a piece of the tablecloth covering it (via external metonymy). While Cubists were striving to “discover less unstable elements in the objects to be represented, i.e., the constant and essential elements” (Wertenbaker 1967: 86), co-speech gestures have the propensity to pick out or create – so to speak, on the fly – both globally essential, e.g., prototypical, and momentarily salient attributes.
Invoking felt qualities of meaning and of understanding (Johnson 2007), gestures are spontaneous communicative actions producing – for conversational partners or audiences – metonymic “slices of life”: not only of speakers’ outer material living context, but also of their inner life, e.g., their reasoning, imagination, and emotions. In the flow of observing a painting or listening to a person, the interpreter draws on multiple senses in synthesizing the manifold fragments, allusions, and perspectives – through active “simultaneous vision” (Zeki 1999: 52) and an array of metonymic inferences – into a unified whole, that is, an insightful and meaningful semiotic experience. The following observation by Arnheim succinctly encapsulates the main interest of this chapter; it also inspires us to further investigate, both theoretically and empirically, the intelligent actions of the human body:

Often a gesture is so striking because it singles out one feature relevant to the discourse. It leaves to the context the task of identifying the referent: the bigness portrayed by the gesture can be that of a huge Christmas parcel received from a wealthy uncle or that of a fish caught last Sunday. The gesture limits itself intelligently to emphasizing what matters. (Arnheim 1969: 117)

Acknowledgements

The authors wish to thank Jacques Coursil, Vito Evola, Gina Joue, Matthias Priesters, Linn Rekittke, Daniel Schüller, and Dhana Wolf for valuable input and Yoriko Dixon for the gesture drawings. The preparation of the chapter was supported by the Excellence Initiative of the German State and Federal Governments.


6. Reerences Arnheim, Rudolf 1969. Visual Thinking. Berkeley: University of California Press. Arnheim, Rudolf 1974. Art and Visual Perception: A Psychology of the Creative Eye. Berkeley: University of California Press. Barcelona, Antonio 2009. Motivation of construction meaning and form: The roles of metonymy and inference. In: Klaus-Uwe Panther, Linda Thornburg and Antonio Barcelona (eds.), Metonymy and Metaphor in Grammar, 363⫺401. Amsterdam: John Benjamins. Bavelas, Janet, Nicole Chovil, Linda Coates and Lori Roe 1995. Gestures specialized for dialogue. Personality and Social Psychology Bulletin 21(4): 394⫺405. Benczes, Re´ka, Antonio Barcelona and Jose´ Francisco Ruiz de Mendoza Iban˜ez (eds.) 2011. Defining Metonymy in Cognitive Linguistics. Amsterdam: John Benjamins. Bouvet, Danielle 1997. Le Corps et la Me´taphore dans les Langues Gestuelles: A la Recherche des Modes de Production des Signes. Paris: L’Harmattan. Bouvet, Danielle 2001. La Dimension Corporelle de la Parole. Les Marques Posturo-Mimo-Gestuelles de la Parole, leurs Aspects Me´tonymiques et Me´taphoriques, et leur Roˆle au Cours d’un Re´cit. Paris: Peeters. Brdar-Szabo´, Ria and Mario Brdar 2011. What do metonymic chains reveal about the nature of metonymy? In: Re´ka Benczes, Antonia Barcelona and Jose´ Francisco Ruiz de Mendoza Iban˜ez (eds.), Defining Metonymy in Cognitive Linguistics, 217⫺248. Amsterdam: John Benjamins. Bressem, Jana volume 1. A linguistic perspective on the notation of form features in gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1079⫺1098. Berlin: Mouton de Gruyter. Calbris, Genevie`ve 1990. The Semiotics of French Gestures. Bloomington: The University of Indiana Press. Cienki, Alan 2012. Gesture and (cognitive) linguistic theory. 
In: Rosario Cabellero Rodriguez and M. J. P. Sanz (eds.), Ways and Forms of Human Communication, 45⫺56. Cuenca: Ediciones de la Universidad de Castilla-La Mancha. Cienki, Alan and Irene Mittelberg 2013. Creativity in the forms and functions of gestures with speech. In: Tony Veale, Kurt Feyaerts and Charles Forceville (eds.), The Agile Mind: Creativity in Discourse and Art, 231⫺252. Berlin: Mouton de Gruyter. Cienki, Alan and Cornelia Müller (eds.) 2008. Metaphor and Gesture. Amsterdam: John Benjamins. Clark, Herbert H. 1996. Using Language. Cambridge: Cambridge University Press. Clark, Herbert H. 2003. Pointing and placing. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 243⫺268. Mahwah, NJ: Lawrence Erlbaum Associates. Cooperrider, Kensy and Rafael Nu´n˜ez 2009. Across time, across the body: Transversal temporal gestures. Gesture 9(2): 181⫺206. Coulson, Seana 2001. Semantic Leaps: Frame-Shifting and Conceptual Blending in Meaning Construction. Cambridge: Cambridge University Press. Dancygier, Barbara and Eve E. Sweetser 2012. Viewpoint in Language: A Multimodal Perspective. Cambridge: Cambridge University Press. Dirven, Rene´ and Ralf Pörings (eds.) 2002. Metaphor and Metonymy in Comparison and Contrast. Berlin: Mouton de Gruyter. Dudis, Paul 2004. Body partitioning and real-space blends. Cognitive Linguistics 15(2): 223⫺238. Enfield, N.J. 2011. Elements of formulation. In: Jürgen Streeck, Charles Goodwin and Curtis LeBaron (eds.), Embodied Interaction: Language and the Body in the Material World, 59⫺66. Cambridge: Cambridge University Press. Evola, Vito 2010. Multimodal cognitive semiotics of spiritual experiences: Beliefs and metaphors in words, gestures, and drawings. In: Fey Parrill, Vera Tobin and Mark Turner (eds.), Form, Meaning, and Body, 41⫺60. Stanford: CSLI Publications.


VIII. Gesture and language

Fillmore, Charles J. 1982. Frame semantics. In: Linguistic Society of Korea (ed.), Linguistics in the Morning Calm, 111–137. Seoul: Hanshin.
Fricke, Ellen 2007. Origo, Geste und Raum – Lokaldeixis im Deutschen. Berlin: Mouton de Gruyter.
Gibbs, Raymond W., Jr. 1994. The Poetics of Mind: Figurative Thought, Language, and Understanding. Cambridge: Cambridge University Press.
Gombrich, Ernst 1960. Art and Illusion: A Study in the Psychology of Art. London: Phaidon.
Goodwin, Charles 2007. Environmentally coupled gestures. In: Susan Duncan, Justine Cassell and Elena T. Levy (eds.), Gesture and the Dynamic Dimensions of Language, 195–212. Amsterdam: John Benjamins.
Grandhi, Sukeshini A., Gina Joue and Irene Mittelberg 2011. Understanding naturalness and intuitiveness in gesture production: Insights for touchless gestural interfaces. Proceedings of the ACM 2011 Conference on Human Factors in Computing Systems (CHI), Vancouver, B.C.
Grandhi, Sukeshini A., Gina Joue and Irene Mittelberg 2012. To move or to remove? A human-centric approach to understanding of gesture interpretation. Proceedings of the 10th ACM Conference on Designing Interactive Systems. Newcastle: ACM Press.
Hassemer, Julius, Gina Joue, Klaus Willmes and Irene Mittelberg 2011. Dimensions and mechanisms of form constitution: Towards a formal description of gestures. Proceedings of the GESPIN 2011 Gesture in Interaction Conference. Bielefeld: ZiF.
Haviland, John B. 2000. Pointing, gesture spaces, and mental maps. In: David McNeill (ed.), Language and Gesture, 13–46. Cambridge: Cambridge University Press.
Ishino, Mika 2007. Metaphor and metonymy in gesture and discourse. Ph.D. dissertation, University of Chicago.
Jakobson, Roman 1956. Two aspects of language and two types of aphasic disturbances. In: Linda R. Waugh and Monique Monville-Burston (eds.), Roman Jakobson – On Language, 115–133. Cambridge, MA: Harvard University Press.
Jakobson, Roman 1963. Parts and wholes in language. In: Linda R. Waugh and Monique Monville-Burston (eds.), Roman Jakobson – On Language. Cambridge, MA: Harvard University Press.
Jakobson, Roman 1971. Shifters, verbal categories, and the Russian verb. In: Roman Jakobson (ed.), Selected Writings, Volume II: Words and Language, 130–147. The Hague: Mouton. First published [1957].
Jakobson, Roman and Krystyna Pomorska 1983. Dialogues. Cambridge, MA: Massachusetts Institute of Technology Press.
Johnson, Mark 2007. The Meaning of the Body: Aesthetics of Human Understanding. Chicago: University of Chicago Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kita, Sotaro (ed.) 2003. Pointing: Where Language, Culture and Cognition Meet. Mahwah, NJ: Lawrence Erlbaum.
Koch, Peter 2004. Metonymy between pragmatics, reference, and diachrony. Metaphorik.de 7: 6–26.
Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6.
Lakoff, George and Mark Johnson 1980. Metaphors We Live By. Chicago: Chicago University Press.
Langacker, Ronald W. 1993. Reference-point constructions. Cognitive Linguistics 4(1): 1–38.
Lausberg, Hedda, Robyn F. Cruz, Sotaro Kita, Eran Zaidel and Alain Ptito 2003. Pantomime to visual presentation of objects: Left hand dyspraxia in patients with complete callosotomy. Brain 126(2): 343–360.
Liddell, Scott K. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Lodge, David 1977. The Modes of Modern Writing: Metaphor, Metonymy, and the Typology of Modern Literature. Ithaca, NY: Cornell University Press.
Mandel, Mark 1977. Iconic devices in American Sign Language. In: Lynn Friedman (ed.), On the Other Hand: New Perspectives on American Sign Language, 57–107. New York: Academic Press.

132. Gestures and metonymy

McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: Chicago University Press.
McNeill, David 2005. Gesture and Thought. Chicago: Chicago University Press.
Mittelberg, Irene 2006. Metaphor and Metonymy in Language and Gesture: Discourse Evidence for Multimodal Models of Grammar. Ph.D. dissertation, Cornell University. Ann Arbor, MI: UMI.
Mittelberg, Irene 2008. Peircean semiotics meets conceptual metaphor: Iconic modes in gestural representations of grammar. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 115–154. Amsterdam: John Benjamins.
Mittelberg, Irene 2010a. Geometric and image-schematic patterns in gesture space. In: Vyvyan Evans and Paul Chilton (eds.), Language, Cognition, and Space: The State of the Art and New Directions, 351–385. London: Equinox.
Mittelberg, Irene 2010b. Interne und externe Metonymie: Jakobsonsche Kontiguitätsbeziehungen in redebegleitenden Gesten. Sprache und Literatur 41(1): 112–143.
Mittelberg, Irene 2012. Ars memorativa, Architektur und Grammatik: Denkfiguren und Raumstrukturen in Merkbildern und spontanen Gesten. In: Thomas Schmitz and Hannah Groninger (eds.), Werkzeug/Denkzeug: Manuelle Intelligenz und Transmedialität kreativer Prozesse, 191–221. Bielefeld: Transcript.
Mittelberg, Irene 2013. Balancing acts: Image schemas and force dynamics as experiential essence in pictures by Paul Klee and their gestural enactments. In: Michael Borkent, Barbara Dancygier and Jennifer Hinnell (eds.), Language and the Creative Mind, 325–346. Stanford, CA: CSLI Publications.
Mittelberg, Irene volume 1. The exbodied mind: Cognitive-semiotic principles as motivating forces in gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1.), 755–784. Berlin: Mouton de Gruyter.
Mittelberg, Irene and Linda R. Waugh 2009. Metonymy first, metaphor second: A cognitive-semiotic approach to multimodal figures of thought in co-speech gesture. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 329–356. Berlin: Mouton de Gruyter.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2004. Forms and uses of the palm up open hand: A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gesture: The Berlin Conference, 233–256. Berlin: Weidler Verlag.
Müller, Cornelia 2010. Wie Gesten bedeuten: Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia and Susanne Tag 2012. The dynamics of metaphor: Foregrounding and activating metaphoricity in conversational interaction. Cognitive Semiotics 10(6): 85–120.
Panther, Klaus-Uwe and Linda L. Thornburg (eds.) 2003. Metonymy and Pragmatic Inferencing. Amsterdam: John Benjamins.
Panther, Klaus-Uwe and Linda L. Thornburg 2004. The role of conceptual metonymy in meaning construction. Metaphorik.de 06: 91–113.
Peirce, Charles Sanders 1960. Collected Papers of Charles Sanders Peirce (1931–1958), Volume I: Principles of Philosophy, Volume II: Elements of Logic. Edited by Charles Hartshorne and Paul Weiss. Cambridge: Harvard University Press.
Peirsman, Yves and Dirk Geeraerts 2006. Metonymy as a prototypical category. Cognitive Linguistics 17(3): 269–316.
Shapiro, Michael 1983. The Sense of Grammar: Language as Semeiotic. Bloomington: Indiana University Press.
Sonesson, Göran 1992. Bodily semiotics and the extension of man. In: Eero Tarasti (ed.), Center, Periphery in Representations and Institutions. Proceedings from the 3rd Annual Congress of The
International Semiotics Institute. Imatra, Finland, July 1990, 185–210. Imatra: International Semiotics Institute.
Streeck, Jürgen 2009. Gesturecraft: The Manu-facture of Meaning. Amsterdam: John Benjamins.
Sweetser, Eve 2012. Viewpoint and perspective in language and gesture. In: Barbara Dancygier and Eve Sweetser (eds.), Viewpoint in Language: A Multimodal Perspective, 1–22. Cambridge: Cambridge University Press.
Talmy, Leonard 2013. Gestures as cues to a target. Paper given at ICLC 12, University of Alberta, Edmonton, June 2013.
Taub, Sarah 2001. Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Waugh, Linda R. 1993. Against arbitrariness: Imitation and motivation revived, with consequences for textual meaning. Diacritics 23(2): 71–87.
Waugh, Linda R. and Monique Monville-Burston 1990. Roman Jakobson: His life, work and influence. In: Linda R. Waugh and Monique Monville-Burston (eds.), Roman Jakobson – On Language, 1–45. Cambridge, MA: Harvard University Press.
Wertenbaker, Lael 1967. The World of Picasso, 1881–1973. New York: Time-Life Books.
Wilcox, Phyllis P. 2004. A cognitive key: Metonymic and metaphorical mappings in ASL. Cognitive Linguistics 15(2): 197–222.
Wilcox, Sherman, Phyllis P. Wilcox and Maria Josep Jarque 2003. Mappings in conceptual space: Metonymy, metaphor, and iconicity in two signed languages. Jezikoslovlje 4(1): 139–156.
Zeki, Semir 1999. Inner Vision: An Exploration of Art and the Brain. Oxford: Oxford University Press.

Irene Mittelberg, Aachen (Germany)
Linda R. Waugh, Tucson (USA)

133. Ways of viewing metaphor in gesture

1. Introduction: How metaphor has been applied to gesture analysis
2. Metaphor as a semiotic process: Motivating gestures as signs
3. Metaphor as conceptualization: Thinking for speaking and gesturing
4. Metaphor and gesture functions: Referential, discourse-related, pragmatic
5. Dynamicity of metaphor: Foregrounding, attending to, and activating metaphoric meaning
6. Metaphor as temporal orchestration: Dynamics of multimodal metaphors
7. Systematic metaphor and gesture
8. Conclusion
9. References

Abstract

This chapter outlines different understandings of what metaphor is and how those different accounts have been applied to the study of gesture. In so doing, it shows how the study of gesture has contributed to current research in cognitive linguistics, conceptual and applied metaphor theory, conversation and discourse analysis, cognitive psychology more generally, sign language linguistics, as well as embodiment and multimodal communication research. Metaphor has been described as a major cognitive-semiotic process which motivates (together with metonymy) the meaning of gestures. Metaphoric processes are crucial for the constitution of gestural signs, but they are also of core importance for the motivation of semantic and pragmatic meanings of gestures. Metaphoric gestures are embodied conceptualizations of a particular kind: They show how people conceive of various ideas, and at the same time the gestural mode of expression involves a particular way of thinking for speaking. Metaphoric gestures are a vital form of embodied, lived experience. The study of metaphor and gesture therefore shows that metaphor must be thought of as a matter of use and is dynamic with respect to the gradient nature of metaphoric meaning. Taking a dynamic view on metaphor to the level of discourse, we will outline how multimodal metaphors are temporally orchestrated, i.e., how they evolve over time in discourse and may systematically structure entire discourses. The chapter underlines that the study of gesture and metaphor has important implications for a wide array of current research and theory building.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1766–1781

1. Introduction: How metaphor has been applied to gesture analysis

Like metaphor research in general, the study of metaphor in gesture has been enormously influenced by Lakoff and Johnson's 1980 publication Metaphors We Live By. That work drew on Reddy's ([1979] 1993) analysis of metaphoric expressions for communication in English, expressions such as "putting one's ideas into words" and "getting one's thoughts across" that, Reddy (1979: 10) and subsequently Lakoff and Johnson claimed, reflect patterns of talking and thinking about ideas as objects, linguistic expressions as containers, and communication as sending – referred to collectively as the conduit metaphor for communication. McNeill and Levy (1982), and subsequently McNeill (1985, 1992), found, in their research on narratives about a cartoon, gestures in which participants held their hand(s) as if holding a manipulable object when referring to the cartoon genre or to ideas in the cartoon. Applying Reddy's and Lakoff and Johnson's approach to metaphor in the domain of communication, they labeled such gestures metaphorics. Little did they know that this would lead to a dominant tendency in subsequent research on gesture, particularly in cognitive psychology, to consider as metaphoric only or primarily gestures reflecting aspects of the conduit metaphorical model of communication (e.g., gestures with an open hand, palm up, used when referring to ideas as if holding them out for acceptance by the listener). However, as other research (e.g., Calbris 1990; Cienki 1998a, 2013; McNeill and Levy 1982; Müller 2008a) has shown, there is a wide variety of gestures which can be construed as metaphoric in nature.

2. Metaphor as a semiotic process: Motivating gestures as signs

Metaphor has been described as a major cognitive-semiotic process which motivates (together with metonymy) the meaning of gestures (see Mittelberg this volume; Mittelberg and Waugh this volume; Müller this volume). Metaphoric processes are crucial for the constitution of gestural signs and the meaning of gestures. This involves (i) the motivation of the meaning of the gestural form and (ii) the motivation of the contextualized meaning, i.e., the semantics and pragmatics of a particular form used in a particular context.


Gestures as signs are creations of speakers that use techniques of the body which can mime actions, objects, and events in the world. Speakers may employ different modes of gestural representation (or depiction) to create gestures: The hands may be used to as-if enact an action (with or without objects), they may act as if molding or shaping an object, as if drawing the contour of objects, or they may act as if they were a sculpture or a model of an object (for further detail, see Müller 1998, 2009, this volume). Gestures based on acting, molding, or drawing involve metonymic processes in the most general sense of metonymic relations, namely that a part stands for a whole: A part of the action stands for the enactment of opening a window, or for molding or outlining the round shape of a picture. In gestural depictions based on representation, however, metaphor is the primary motivational cognitive-semiotic process. When the hand becomes an imaginary object or moves in order to characterize an abstract process, the semiotic relation between sign and base characterizing the abstraction is similarity, not contiguity as in the case of metonymic relations. On the other hand, in the case of metonymic motivation of gestural meaning (including internal and external metonymy, see Mittelberg and Waugh this volume), metaphor comes into play, too. Namely, when enacting the opening of a window or the throwing away of a piece of crumpled paper, metonymy explains the motivation of the re-enacted action scheme (internal metonymy) and the contiguity relation to an inferred object (the window handle), but the object itself is only inferred. This means that different forms of metaphtonymy (Goossens 1990) would be the motivating principles for many gestures and also for classifier constructions in signs within sign languages (Müller 2009).
They probably also play an important role in sign formation within sign languages (see Kendon 1989) and in processes of semanticization and grammaticalization that lead from gesture to signs (Wilcox 2009). With these distinctions in place, we may now reconsider the role of metaphor in the above-mentioned conduit gesture, which, a bit unfortunately, has become a prototype of metaphoric gesture. Metaphoricity plays out as a principal process of sign formation in this gesture in the first place: The conduit gesture presents discursive objects on the open palm of the hand. It is based on the manual action of presenting, showing, giving, and taking objects on the open hand. There is a metonymic contiguity relation between the as-if action that we see in the holding out of the palm up open hand and the underlying manual action, and a metaphorical relation to the contextualized pragmatic meaning of the gesture: When presenting an abstract discourse topic on the open hand, this discourse topic is only metaphorically sitting on the open palm. However, this alone does not make the conduit gesture a metaphoric gesture. Metaphoricity serves here as a semiotic means to perform a pragmatic function. Palm up open hands (Müller 2004) or Open Hands Supine (Kendon 2004) have been intensely studied as one type of pragmatic gesture (see also Bressem and Müller this volume a, b; Kendon 2004; Streeck 2009). Thus conduit gestures – or palm up open hands – operate by "giving, showing, offering an object by presenting it on the open hand" and are used to present an "abstract, discursive object as a concrete, manipulable entity" (Müller 2004: 233, 236). It appears that this type of motivation is characteristic of most pragmatic gestures (see also Teßendorf this volume) and may even be vital in motivating the meaning of families of pragmatic gestures (Bressem and Müller this volume a, b).
Let us consider two further examples of pragmatic gestures, both of which are members of the family of Away gestures (Bressem and Müller this volume a, b): the throwing away and the brushing away gesture (Fig. 133.1). All members of the family are motivated by
a semanticization of the effect or goal of an underlying action scheme: a body space cleared of annoying or disturbing objects. In the case of the throwing away gesture, the hand acts as if throwing away a larger object. The brushing away gesture (or brushing aside, see Teßendorf this volume), on the other hand, is based on the removal of tiny annoying objects like crumbs on a sweater, or a mosquito sitting on one's arm, and is used to brush away annoying arguments in a conversation. In both cases, the gestures are used to qualify topics of talk as unwanted, annoying ones. Thus the pragmatic meaning is directly derived from those manual actions. Note, however, that although the topics of talk are of course being thrown away metaphorically, not literally, the meaning of the gesture is pragmatic, i.e., it is a communicative action.

Fig. 133.1: Throwing away and brushing away gestures performing a dismissive speech act.

When performing a throwing away or brushing away gesture, a speaker is performing an assertive and expressive speech act, not an act of metaphorical reference – as when talking and gesturing about the iron curtain between the Eastern and Western blocs in Cold War times (the gesture is used as if it were the iron curtain separating East from West). Distinguishing between metaphor as a cognitive-semiotic process driving the motivation of gestures and signs, and metaphor as a type of reference to the world, is of crucial importance for gesture studies; it may also throw light on the cognitive-semiotic processes motivating meaning in language more generally and thus be of particular interest for cognitive linguistics and cognitive semiotics.

3. Metaphor as conceptualization: Thinking for speaking and gesturing

The approach to metaphor that became best known through Lakoff and Johnson (1980, 1999, and elsewhere) and work following in that tradition is often labelled "Conceptual Metaphor Theory" (CMT), though see, for example, Jäkel (1999) and Müller (2008a) for discussion of research by earlier scholars on metaphor as conceptual in nature. A basic premise of this approach is that metaphor is not just or primarily a phenomenon of word use, but is fundamentally a matter of conceptual correspondences between different domains of experience. Such metaphoric mappings on the conceptual level can receive various forms of expression, not just verbally, but also in the creative arts, architecture, images in our dreams, and spontaneous gestures with speech. However, from
the perspective of Conceptual Metaphor Theory, the metaphors expressed in gesture are not mere reflections of verbal metaphors; rather, any type of metaphoric expression arises from an underlying mapping of some source domain concept onto a conceptual target domain. On this view, metaphor is an understanding of some target concept in terms of a source (e.g., thinking in terms of movement along a path), and it is that source that constitutes the basis for relevant metaphoric expressions (such as the verbal expression “arriving at a conclusion”) (see Emanatian 1997 for other examples related to this domain). Therefore, metaphor in gesture can take forms which sometimes are not even articulated verbally, as discussed below.

4. Metaphor and gesture functions: Referential, discourse-related, pragmatic

Just as previous research on gesture (e.g., Kendon 2004; McNeill 1992; Müller 1998) has discussed a range of functions that gesture can serve, so can one consider metaphor as playing out in these different functions of gesture, but to differing degrees and in different forms. Metaphor in gesture has primarily been researched in relation to referential gestures (see, e.g., Cienki and Müller 2008). The focus is then usually on gestures which, as determined by the context and by the content of the speech around the gesture, relate to a (usually abstract) referent that is the topic of the speech. The relation is constituted through a gestural form or movement that can be construed as representing some physical entity, relation, or movement that maps onto the abstract topic via a comparison with it. In Conceptual Metaphor Theory terms, the Target Domain idea is represented by a gestural expression of the Source Domain idea for understanding it. To use terminology with a longer tradition in metaphor research (from Richards 1965), the Tenor or Topic (Leech's 1969 adaptation of the term) is represented by a Vehicle in gesture. To date, various topics have been researched in terms of their gestural expression (what we might call gesture Vehicles), to name a few examples: notions of time (Calbris 1985; Cooperrider and Núñez 2009; Núñez and Sweetser 2006), political concepts (Calbris 2003; Cienki 2004), philosophical ideas (Montredon et al. 2008), and concepts concerning discourse itself (Sweetser 1998). The latter takes us to the point that metaphor in gesture can also relate to the discourse level – to ideas as such, as they are being employed in the ongoing development of discourse, and to the parsing of the discourse (Kendon 2004).
See, for example, McNeill, Cassell, and Levy (1993) on pointing to different spaces in the course of a narrative as a way of indicating (and marking reference to) different topics (a phenomenon noted by Bühler as early as 1934 in terms of Deixis am Phantasma). Recall the connection McNeill (1992) and McNeill and Levy (1982) made between the conduit metaphorical model of communication and the way in which people talking about communicating ideas reify them in space by holding two slightly open hands facing each other, as if holding a manipulable object in the air. Other research discusses how speakers set up ideas, and two-part arguments in particular, as different spaces on a horizontal plane before them, e.g., by moving their open hands palm down to spaces on the left and right in front of them (Calbris 2008). This type of gesture use involves a kind of metaphor that is usually not found, or not even possible, on the verbal level – a kind of ontological metaphor that objectifies discourse structure (idea development through
time) through physical movement and particular articulations of the hands in space. (Contrast sign languages, where such physical representation of discourse properties with the hands is the norm and has a grammatical status.) A further use of gesture with spoken language relates to attitudes speakers have towards the discourse topic, or ways in which speakers show what they are trying to accomplish in the interaction by using gestures in certain ways, and metaphor can be interpreted as playing a role here as well. For example, such pragmatic use of gesture (Kendon 2004) can involve the use of a palm up open hand when asking a question or introducing new information. Given that this same hand shape, palm orientation, and positioning of the hand in a low or central gesture space is often used to present something small to an interlocutor on one's open hand, the gesture can be interpreted as serving a similar function in the discourse contexts mentioned, namely to metaphorically present a question or idea to an interlocutor (Müller 2004), as discussed above. Other examples include gestures with the open hand, palm facing outward, metaphorically stopping the line of action between speaker and interlocutor (Kendon 2004: Chapter 13). Or consider the gesture found among Spanish speakers involving a rotating of the wrist down and outward, resulting in a movement similar to that used to brush small, annoying objects (like crumbs or lint) off of one's clothing; this gesto de barrer has been interpreted (Teßendorf 2005) as expressing dismissal of an idea, whereby the dismissed idea is metaphorically treated like a small, annoying physical object (see Bressem and Müller this volume a, b; Teßendorf this volume); compare also the throwing away and brushing away gestures discussed in section 2 above. Metaphor can thus be involved with various types of communicative functions in gestures, such as referential functions, functions concerning the discourse itself, and pragmatic functions.
However, the relation of the metaphor in gesture to the speech itself is different in each case. Reference to the topic of talk via gesture usually involves imagistically highlighting particular aspects of that topic through depiction of some part(s) of the Source Domain. Metaphor in gesture that relates to the level of discourse structure is often schematic in nature, as it simply involves the reification of (elements of) the discourse itself without the Source Domain having further specification than that of being one space distinct from another, or an object that fits in the palm of one's hand. The pragmatic level of analysis overlaps with the discourse level in many cases and, from existing research, appears to involve a range of types of specificity in gestural forms (from a simple open hand shape to forms articulated in more detail). Finally, metaphor can be found in gestures which have conventionalized symbolic functions. Known as emblems (Efron [1941] 1972), such gestures are used by speakers as signs on a par with the spoken words of a language, sometimes substituting for words. Though any iconicity motivating their form may not be obvious, metaphoric relations of the form of the gesture to the idea that the gesture stands for symbolically can sometimes be found. Examples include the mapping good is up, found in cultures which use the thumbs-up gesture to show a positive evaluation, or metaphors concerning logical thinking as rectilinear and disturbed or crazy thinking as twisted or convoluted (Cienki 1998b), as in the American gesture for craziness of pointing the index finger towards the side of one's head and rotating the finger several times, tracing a repeating circular path.

5. Dynamicity of metaphor: Foregrounding, attending to, and activating metaphoric meaning

The dynamic nature of meaning, when considered as conceptualization, is one of the basic tenets of cognitive linguistics. Thus Langacker (2001: 8) writes: "I will argue, however, that dynamicity is essential to linguistic semantics. How a conceptualization develops and unfolds through processing time is often (if not always) a pivotal factor in the meanings of expressions." In metaphor theory, dynamic perspectives on metaphor have lately gained much currency. As in linguistics generally, static views on meaning have traditionally remained unquestioned, including in common ways of thinking about metaphoric meaning. With a focus on metaphors as lexical or conceptual units, metaphoricity was thought of as a property that was present or not in words or concepts. When such a static view is applied to lexical units, they may be identified on a word-by-word basis (see MIP, the Metaphor Identification Procedure, Pragglejaz Group 2007). When connected to Conceptual Metaphor Theory, this view frames conceptual mappings between source and target domains as products, not as processes (for this model, Lakoff and Johnson 1980 provides the paradigmatic frame). Gibbs, however, has been pivotal in "dynamizing" both takes on metaphor. Not only has he underlined the importance of looking at metaphor as product and as process (Gibbs 1993; Gibbs and Steen 1999), in many psychological experiments he has shown that conceptual metaphors are cognitively and experientially activated, that they are processed and comprehended as metaphors, and that metaphoric meaning may prime actual behavior or that a particular behavior may prime activation of metaphors (Gibbs 1994, 1998, 2006, 2011).
Using an experimental setting, Wilson and Gibbs (2007) found that for the comprehension of metaphoric usages of the word “grasp” such as in “grasping a concept”, the sensorimotor experience of a grasping movement facilitates understanding: “For example, making a grasping movement before seeing grasp the concept facilitates people’s access to their embodied, metaphorical understanding of concept, even if concepts are not things that people can physically grasp” (Wilson and Gibbs 2007: 723, emphasis in original). These psychological and neurological findings provide strong support for an intimate and dynamic connection between bodily experiences and word meaning. In particular, they point to an active and dynamic relation between sensorimotor experiences and metaphoric meaning. In such a view, word meanings are not retrieved from a mental lexicon as fixed disembodied units of meaning, but are the products of ad hoc processes of meaning construction, which are highly subjective and bound to individual and local experiences of particular forms of language use. Research on metaphor and gesture in language use has provided ample support for such a dynamic and embodied view on metaphoric meaning (Cienki and Müller 2009; Müller 2008a, b, 2011; Müller and Ladewig in press). In particular, an analysis of speech, gesture, and body movement may offer further insights into the dynamic nature of metaphoric meaning activation. Obviously, it cannot give access to a neurological level of generalized sensorimotor experiences. What it can do, however, is provide insights into the individual, subjective, and dynamic level of experience that characterizes any ad hoc uses of metaphors. Gestures are sensorimotor experiences. They show that sensorimotor programs for grasping are active, when, for instance, someone is talking about grasping a concept and is performing a grasping movement at the same time (see Müller 2008a, b; Müller and Tag 2010). 
Those gestures reveal that metaphoricity is activated, that it is actively processed cognitively and experienced bodily. Put differently, gestures are a way of foregrounding metaphoricity that would otherwise be a "sleeping" potential of a metaphor (see Müller 2008a for an extended version of this argument). Moreover, foregrounding of metaphoricity is an interactive achievement, and it is empirically accountable as such through foregrounding activities: The more instantiations of an experiential source domain of a metaphor there are (verbally and gesturally)
133. Ways of viewing metaphor in gesture


in a given stretch of discourse, the more metaphoricity is foregrounded and the higher the degree of activated metaphoricity. If, in addition, further interactive, semantic, and syntactic cues foreground metaphoric meaning in speech and gesture, metaphoricity can be considered highly activated for a given speaker and a given co-participant at this very moment of their conversation (for more detail see Müller 2008a, 2011; Müller and Tag 2010). Thus, depending on the foregrounding devices that operate upon a particular metaphoric expression, metaphoric meaning may be more or less foregrounded, and metaphors may be more or less sleeping or waking. This empirical observation is in line with Langacker's account of the dynamicity of conceptualization. At the very least, metaphoric meaning is clearly not a fixed, steady property of lexical items, but gradable and variable. Studying metaphors from the point of view of gesture usage thus contributes to linguistic semantics in that it provides further evidence for a dynamic view on meaning. It shows that metaphoric meaning must, like any other type of meaning, be considered the dynamic product of a process that involves a constant flow of attention. The analysis of metaphor and gesture thus adds a new perspective to issues discussed in cognitive linguistics and in cognitive psychology. Findings from gesture studies challenge atomistic views of metaphor widespread within cognitive linguistics as well as in applied linguistics, but they also contribute to psychological research on metaphor, for instance, when it comes to the "old" issue of whether conventionalized metaphors are comprehended as metaphors or not (see Gibbs 1994, 1998; Müller 1998). Giora's (1997, 2002, 2003) psycholinguistic research indicates that it is the salience of a lexical item that determines how readily it is processed as literal or metaphoric.
If a lexical item has a metaphoric and a non-metaphoric reading, then the more salient one (i.e., in terms of frequency, conventionality, etc.) will be comprehended faster, no matter whether it is a metaphoric or a literal meaning of a word. However, while experimental settings tend to address generalized experiences, gesture studies can address subjective experiences and the ad hoc construction of metaphoric meaning in the flow of an ongoing stretch of discourse. Qualitative analyses of naturalistic data also shed light on the interactive and temporal orchestration of metaphors in language use.

6. Metaphor as temporal orchestration: Dynamics of multimodal metaphors

So far, we have focused on the dynamics of metaphor as an aspect of conceptualizing metaphoric meaning, the gradability of metaphor, and the activation of the metaphoricity of transparent lexical metaphors, which turns sleeping metaphors into waking ones. In this section, we turn to the temporal and sequential dimension of the dynamics of metaphor; that is, we expand the focus from gestures and lexical units to the level of discourse. This implies a shift to the temporal evolution of metaphoric meaning throughout a conversation. Applied linguists in particular have drawn attention to the fact that metaphors used in discourse hardly ever come alone and may actually structure and organize political, social, economic, and cultural discourses over very long time spans. Cameron's work has been pivotal here, providing many landmark studies of this phenomenon, and we will return to it in some detail in the next section (Cameron 2009, 2010a, b). For now, we briefly illustrate how metaphor is intertwined in speech and gesture, how it moves back and forth between the two modalities, and how it is communicated and shared in an


VIII. Gesture and language

Fig. 133.2: finding balance is feeling a silk thread pulling the navel towards the spine: Metaphors intertwining across speech, full body movement, and gesture.

interaction, and thus how it moves along in time. When tracing the course of a metaphoric meaning over the whole time span of a communicative event, such as a ballet lesson, one finds that it is difficult to think of metaphoric meaning as a clearly delineated entity, as, for instance, a concept that manifests itself in speech and/or in gesture. Fig. 133.2 shows the first movements of the emergence of a metaphor for balance: finding balance is feeling a silk thread pulling the navel towards the spine. In the first moments of the dance lesson, the teacher finds an appropriate metaphor that expresses her subjective experience of an upright thigh and back. Such a stable upright posture is crucial to achieving and maintaining balance for a ballet dancer – but it is very hard to find and to maintain. This is why she makes it the topic of an entire lesson. Over the course of this lesson, she repeatedly articulates the metaphor verbally: "the feeling is a silk thread from the navel to the spine". She uses the metaphor to express and to communicate a tiny movement of the hips that leads to an upright posture. An imagined silk thread pulling the navel towards the spine creates the basis for a balanced and upright upper body. Fig. 133.2 provides a selected transcript of the ways in which the metaphor moves back and forth between full body gestures, speech, and hand gestures, and between the teacher and her students. In fact, the teacher uses the metaphor to inscribe the feeling actively into the bodies of her students. The metaphor thus becomes an interactive object of a shared embodied experience, manifesting itself in a web of metaphoric meaning. It evolves as an expression of a subjective experience and is negotiated between the co-participants in speech, full body gestures, and hand gestures (for more detail, see Horst et al. this volume; Müller and Ladewig in press).
As the ballet example indicates, the study of metaphor in gesture, speech, and body movement may also contribute to current debates around the role of embodiment for cognition, language, and communication (Bakels this volume; Cuffari 2012; Cuffari and Wiben Jensen this volume; Gibbs 2006; Streeck 2009). Metaphoric gestures might be


regarded as felt experiences in a phenomenological sense (Kappelhoff and Müller 2011; Kolter et al. 2012; Müller and Ladewig in press; Sheets-Johnstone 1999). Such a perspective paves the way to a concept of gestures as expressive movements (Horst et al. this volume; Kappelhoff and Müller 2011). Framed in such a way, the performance of metaphoric gestures comes along with a sensation of movement and involves an immediate perception of meaning on the side of the co-participant. Analyzing metaphor in its multimodal and temporal unfolding may therefore provide strong support for philosophical claims concerning the immediate and social nature of cognition, affect, feeling, and understanding. As Gallagher puts it:

On the embodied view of social cognition, the mind of the other person is not something that is hidden away and inaccessible. In perceiving the actions and expressive movements of the other person in the interactive contexts of the surrounding world, one already grasps their meaning; no inference to a hidden set of mental states (beliefs, desires, etc.) is necessary. When I see the other's action or gesture, I see (I immediately perceive) the meaning in the action or gesture; and when I am in a process of interacting with the other, my own actions and reactions help to constitute that meaning. (Gallagher 2008: 449)

This immediate and interactive process of understanding and co-constructing metaphoric meaning is what we see in the ballet lesson. Fig. 133.3 illustrates the interactive trajectory of metaphoric meaning from the first moments of finding and describing the silk thread

Fig. 133.3: Multimodal and interactive trajectories of metaphoric meaning


metaphor, to teaching it (by jointly working out the feeling associated with the metaphor), to talking and gesturing about the experience at the end of the class. Notably, when the student reflects upon the class, she spontaneously produces a gestural depiction of her subjective bodily experience by acting as if molding or pulling a horizontal thread in front of her body. Thus, by the end of the class the metaphor had undergone an almost linear trajectory from a body feeling, to a verbal and bodily metaphor, to a verbal and gestural metaphor. These metaphors appear to structure and organize the discourse about balance, and they operate on a level of discourse which is comparable to Cameron's systematic metaphors discussed below (Cameron 2009, 2010a, b). Taking into account metaphoric gestures, metaphoric body movements, and metaphoric verbalizations as they evolve through a stretch of discourse makes it very hard to conceive of metaphor as a property of a lexical item. It provides strong evidence for a theory of metaphor that includes the procedural nature of metaphors and regards metaphoric meaning making as an interactive and embodied process. The analysis of metaphor and gestures as they are used in different contexts, interactions, and discourses supports assumptions of a procedural nature of metaphor (see Cameron 1999, 2009; Corradi Fiumara 1995; Gibbs 1993; Müller 2008a).

7. Systematic metaphor and gesture

Cameron (2007a, b and elsewhere) and Cameron and Maslen (2010a) introduce the notion of "systematic metaphor" as a way of talking about metaphoric expressions which fall into semantic groupings and which recur throughout some discourse. An advantage of this approach is that the resultant systematic metaphors, though described in the format of Target Domain is Source Domain like conceptual metaphors, are not tied to any cognitive claims about conceptual mappings on the part of those using the metaphors. Systematic metaphors are neutral in this regard and – in the analysis of talk produced by groups of people – explicitly concern the supraindividual level. As an example, in their focus group discussions with Britons about terrorism, Cameron, Maslen, and Low (2010) found a systematic metaphor of being affected by terrorism is participating in a game of chance. This was a higher-level mapping which drew upon sub-groups of Vehicle terms that had to do with terrorism understood in terms of violent physical action, movement, and relation to a social landscape. The idea that systematic metaphors could extend beyond the words uttered to speakers' use of gestural behavior is a logical extension of the approach, particularly given that metaphor can play out on the discourse and pragmatic levels in ways which are not (necessarily, or sometimes even possibly) expressed on the verbal level. For example, Cornelissen, Clarke, and Cienki (2012) show how an experienced entrepreneur and one with less experience both refer to their new ventures with systematic metaphors as objects involved in movement, but for the experienced entrepreneur the movement is characterized by gestures showing particular kinds of directionality (forward along a linear path for product development, and circular for the repeatable business cycle leading to income which can be reinvested in the business).
The notion of systematic metaphors allows one to approach the more thematic ways in which speakers discuss and work with abstract ideas over stretches of discourse. It offers a fruitful direction for future research to handle a higher level of abstraction in metaphoric framing beyond the level of word or gesture use.


8. Conclusion

The study of metaphor in gesture provides insights into semiotic processes of meaning creation in this form of embodied behavior. These can be interpreted from the point of view of what they might reveal about how speakers conceptualize one domain in terms of another (normally the abstract in terms of the physical), if one takes the approach of Conceptual Metaphor Theory. However, analysis of patterns of verbal and gestural metaphoric expression can also help draw conclusions about the use of metaphoric framing on the level of the communicative system(s) being employed, without extrapolating to claims about conceptualization. In turn, extending metaphor studies to include data from gesture can take the claims of Conceptual Metaphor Theory at their word, showing that metaphor is not just a matter of word use but is essentially a conceptual phenomenon which can receive expression in various forms of human behavior. On the other hand, without recourse to assertions about metaphor as conceptual, one can gain new insights into the nature of metaphoricity by examining gesture, including seeing it as a dynamic process which can be more or less foregrounded at different moments in discourse (Müller 2008a, b).

Acknowledgements We thank Mathias Roloff for providing the drawings (www.mathiasroloff.de).

9. References

Bakels, Jan-Hendrik this volume. Embodying audio-visual media: Concepts and transdisciplinary perspectives. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2048–2061. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume a. A repertoire of recurrent gestures of German. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1575–1591. Berlin/Boston: De Gruyter Mouton.
Bressem, Jana and Cornelia Müller this volume b. The family of away gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1592–1604. Berlin/Boston: De Gruyter Mouton.
Bühler, Karl 1982. Sprachtheorie: Die Darstellungsfunktion der Sprache. Stuttgart: Fischer. First published [1934].
Calbris, Geneviève 1985. Espace-Temps: Expression Gestuelle de Temps. Semiotica 55(1–2): 43–73.
Calbris, Geneviève 1990. The Semiotics of French Gesture. (Advances in Semiotics.) Bloomington: Indiana University Press.
Calbris, Geneviève 2003. L'expression Gestuelle de la Pensée d'un Homme Politique. Paris: CNRS Editions.
Calbris, Geneviève 2008. From left to right…: Coverbal gestures and their symbolic use of space. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 27–53. Amsterdam: John Benjamins.


Cameron, Lynne 2007a. Confrontation or complementarity: Metaphor in language use and cognitive metaphor theory. Annual Review of Cognitive Linguistics 5: 107–135.
Cameron, Lynne 2007b. Patterns of metaphor use in reconciliation talk. Discourse and Society 18(2): 197–222.
Cameron, Lynne 2009. The discourse dynamics approach to metaphor and metaphor-led discourse analysis. Metaphor and Symbol 24(2): 63–89.
Cameron, Lynne and Robert Maslen (eds.) 2010a. Metaphor Analysis: Research Practice in Applied Linguistics, Social Sciences and the Humanities. London: Equinox.
Cameron, Lynne and Robert Maslen 2010b. Identifying metaphor in discourse data. In: Lynne Cameron and Robert Maslen (eds.), Metaphor Analysis: Research Practice in Applied Linguistics, Social Sciences and the Humanities, 97–115. London: Equinox.
Cameron, Lynne, Robert Maslen and Graham Low 2010. Finding systematicity in metaphor use. In: Lynne Cameron and Robert Maslen (eds.), Metaphor Analysis: Research Practice in Applied Linguistics, Social Sciences and the Humanities, 116–146. London: Equinox.
Cienki, Alan 1998a. Metaphoric gestures and some of their relations to verbal metaphoric expressions. In: Jean-Pierre Koenig (ed.), Discourse and Cognition: Bridging the Gap, 189–204. Stanford, CA: Center for the Study of Language and Information.
Cienki, Alan 1998b. Straight: An image schema and its metaphorical extensions. Cognitive Linguistics 9(2): 107–149.
Cienki, Alan 2004. Bush's and Gore's language and gestures in the 2000 US presidential debates: A test case for two models of metaphors. Journal of Language and Politics 3: 409–440.
Cienki, Alan 2013. Conceptual metaphor theory in light of research on speakers' gestures. Journal of Cognitive Semiotics 5(1–2): 349–366.
Cienki, Alan and Cornelia Müller (eds.) 2008. Metaphor and Gesture. Amsterdam: John Benjamins.
Cooperrider, Kensy and Rafael Núñez 2009. Across time, across the body: Transversal temporal gestures. Gesture 9(2): 181–206.
Cornelissen, Joep, Jean Clarke and Alan Cienki 2012. Sensegiving in entrepreneurial contexts: The use of metaphors in speech and gesture to gain and sustain support for novel business ventures. International Small Business Journal 30(3): 213–241.
Corradi Fiumara, Gemma 1995. The Metaphoric Process: Connections between Language and Life. London/New York: Routledge.
Cuffari, Elena 2012. Gestural sense-making: Hand gestures as intersubjective linguistic enactments. Phenomenology and the Cognitive Sciences 11(4): 599–622.
Cuffari, Elena and Thomas Wiben Jensen this volume. Living bodies: Co-enacting experience. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2016–2026. Berlin/Boston: De Gruyter Mouton.
Efron, David 1972. Gesture, Race, and Culture. The Hague: Mouton. First published [1941].
Emanatian, Michele 1997. The spatialization of judgment. In: Wolfgang-Andreas Liebert, Gisela Redeker and Linda Waugh (eds.), Discourse and Perspective in Cognitive Linguistics, 131–147. Amsterdam: John Benjamins.
Gallagher, Shaun 2008. Understanding others: Embodied social cognition. In: Paco Calvo and Antoni Gomila (eds.), Handbook of Cognitive Science: An Embodied Approach, 439–452. Amsterdam: Elsevier.
Gibbs, Raymond W. 1993. Process and products in making sense of tropes. In: Andrew Ortony (ed.), Metaphor and Thought, 252–276. Cambridge: Cambridge University Press.
Gibbs, Raymond W. 1994. The Poetics of Mind: Figurative Thought, Language, and Understanding. Cambridge: Cambridge University Press.
Gibbs, Raymond W. 1998. The fight over metaphor in thought and language. In: Albert N. Katz, Cristina Cacciari, Raymond W. Gibbs and Mark Turner (eds.), Figurative Language and Thought, 119–157. New York/Oxford: Oxford University Press.


Gibbs, Raymond W. 2006. Embodiment and Cognitive Science. New York: Cambridge University Press.
Gibbs, Raymond W. 2011. Are 'deliberate' metaphors really deliberate? A question of human consciousness and action. Metaphor and the Social World 1(1): 26–52.
Gibbs, Raymond W. and Gerard J. Steen 1999. Introduction. In: Raymond W. Gibbs and Gerard J. Steen (eds.), Metaphor in Cognitive Linguistics, 1–8. Amsterdam: John Benjamins.
Giora, Rachel 1997. Understanding figurative and literal language: The graded salience hypothesis. Cognitive Linguistics 8(3): 183–206.
Giora, Rachel 2002. Literal vs. figurative language: Different or equal? Journal of Pragmatics 34: 487–506.
Giora, Rachel 2003. On Our Mind: Salience, Context, and Figurative Language. New York: Oxford University Press.
Goossens, Louis 1990. Metaphtonymy: The interaction of metaphor and metonymy in expressions for linguistic action. Cognitive Linguistics 1(3): 323–340.
Horst, Dorothea, Franziska Boll, Christina Schmitt and Cornelia Müller this volume. Gesture as interactive expressive movement: Inter-affectivity in face-to-face communication. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2112–2124. Berlin/Boston: De Gruyter Mouton.
Jäkel, Olaf 1999. Kant, Blumenberg, Weinrich: Some forgotten contributions to the cognitive theory of metaphor. In: Raymond W. Gibbs and Gerard J. Steen (eds.), Metaphor in Cognitive Linguistics, 9–27. Amsterdam: John Benjamins.
Kappelhoff, Hermann and Cornelia Müller 2011. Embodied meaning construction: Multimodal metaphor and expressive movement in speech, gesture, and in feature film. Metaphor and the Social World 1(2): 121–153.
Kendon, Adam 1989. Sign Languages of Aboriginal Australia: Cultural, Semiotic and Communicative Perspectives. Cambridge: Cambridge University Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kolter, Astrid, Silva H. Ladewig, Michaela Summa, Sabine Koch, Cornelia Müller and Thomas Fuchs 2012. Body memory and the emergence of metaphor in movement and speech: An interdisciplinary case study. In: Sabine Koch, Thomas Fuchs, Michaela Summa and Cornelia Müller (eds.), Body Memory, Metaphor and Movement, 201–226. Amsterdam/Philadelphia: John Benjamins.
Lakoff, George and Mark Johnson 1980. Metaphors We Live By. Chicago: University of Chicago Press.
Lakoff, George and Mark Johnson 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.
Langacker, Ronald 2001. Dynamicity in grammar. Axiomathes 12: 7–33.
Leech, Geoffrey 1969. A Linguistic Guide to English Poetry. Harlow: Longman.
McNeill, David 1985. So you think gestures are nonverbal? Psychological Review 92(3): 350–371.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
McNeill, David, Justine Cassell and Elena Levy 1993. Abstract deixis. Semiotica 95(1–2): 5–19.
McNeill, David and Elena Levy 1982. Conceptual representations in language activity and gesture. In: Robert J. Jarvella and Wolfgang Klein (eds.), Speech, Place, and Action, 271–295. Chichester: Wiley and Sons.
Mittelberg, Irene this volume. Gesture and iconicity. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1712–1732. Berlin/Boston: De Gruyter Mouton.
Mittelberg, Irene and Linda Waugh this volume. Gesture and metonymy. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1732–1746. Berlin/Boston: De Gruyter Mouton.
Montredon, Jacques, Abderrahim Amrani, Marie-Paule Benoit-Barnet, Emmanuelle Chan You, Régine Llorca and Nancy Peuteuil 2008. Catchment, growth point, and spatial metaphor: Analyzing Derrida's oral discourse on deconstruction. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 171–194. Amsterdam: John Benjamins.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2004. Forms and uses of the palm up open hand: A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gesture: The Berlin Conference, 233–256. Berlin: Weidler Verlag.
Müller, Cornelia 2008a. Metaphors Dead and Alive, Sleeping and Waking: A Dynamic View. Chicago: University of Chicago Press.
Müller, Cornelia 2008b. What gestures reveal about the nature of metaphor. In: Alan Cienki and Cornelia Müller (eds.), Metaphor and Gesture, 219–245. Amsterdam: John Benjamins.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Routledge's Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Müller, Cornelia 2011. Reaction paper: Are 'deliberate' metaphors really deliberate? A question of human consciousness and action. Metaphor and the Social World 1(1): 61–66.
Müller, Cornelia this volume. Gestural modes of representation as techniques of depiction. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1687–1702. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia and Silva H. Ladewig in press. Metaphors for sensorimotor experiences: Gestures as embodied and dynamic conceptualizations of balance in dance lessons. In: Mike Borkent, Barbara Dancygier and Jennifer Hinnell (eds.), Language and the Creative Mind. Stanford: CSLI Publications.
Müller, Cornelia and Susanne Tag 2010. The dynamics of metaphor: Foregrounding and activating metaphoricity in conversational interaction. Cognitive Semiotics 10(6): 85–120.
Núñez, Rafael and Eve Sweetser 2006. With the future behind them: Convergent evidence from language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science 30: 1–49.
Pragglejaz Group 2007. MIP: A method for identifying metaphorically used words in discourse. Metaphor and Symbol 22(1): 1–39.
Reddy, Michael J. 1993. The conduit metaphor: A case of frame conflict in our language about language. In: Andrew Ortony (ed.), Metaphor and Thought, 164–201. Cambridge: Cambridge University Press. First published [1979].
Richards, Ivor A. 1965. The Philosophy of Rhetoric. Oxford: Oxford University Press. First published [1936].
Sheets-Johnstone, Maxine 1999. The Primacy of Movement. Amsterdam/Philadelphia: John Benjamins.
Streeck, Jürgen 2009. Gesturecraft: The Manu-facture of Meaning. Amsterdam/Philadelphia: John Benjamins.
Sweetser, Eve 1998. Regular metaphoricity in gesture: Bodily-based models of speech interaction. Actes du 16e Congrès International des Linguistes (CD-ROM). Elsevier.
Teßendorf, Sedinha 2005. Pragmatische Funktionen Spanischer Gesten am Beispiel des 'Gesto de Barrer'. Unpublished MA thesis, Freie Universität Berlin.
Teßendorf, Sedinha this volume. Pragmatic and metaphoric gestures – combining functional with cognitive approaches. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1540–1550. Berlin/Boston: De Gruyter Mouton.
Wilcox, Sherman 2009. Symbol and symptom: Routes from gesture to signed language. Annual Review of Cognitive Linguistics 7: 89–110.
Wilson, Nicole L. and Raymond W. Gibbs 2007. Real and imagined body movement primes metaphor comprehension. Cognitive Science 31: 721–731.

Alan Cienki, Amsterdam (Netherlands) and Moscow (Russia) Cornelia Müller, Frankfurt (Oder) (Germany)

134. The conceptualization of time in gesture

1. Temporal gestures
2. Early observations and emerging theoretical perspectives
3. Temporal gestures across cultures
4. New directions
5. References

Abstract

Humans around the world conceptualize time as space. Such spatial construals surface systematically in co-speech gesture, providing a more vivid picture of time conceptualization than offered by speech alone. Temporal gestures have recently come into sharp empirical focus because of their position at the intersection of questions about the psychological reality of conceptual metaphor, about the embodied nature of abstract reasoning, and about linguistic and cognitive diversity. Across cultures temporal gestures show some widespread patterns – such as the anchoring of "now" to the speaker's location with a point downward from the speaker's gestural space – and some striking particulars – such as locating the past and future, respectively, downhill and uphill or in front and behind. Provocative recent findings notwithstanding, much remains to be learned about temporal gestures, including their variation within and across cultures and their precise relationships to language and cultural practices.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1781–1788

1. Temporal gestures

Humans gesture abundantly when talking about space (Alibali 2005). Whether describing where two grocery stores lie in relation to each other, illustrating the size of an unseen fish, or indicating the direction of a landmark in the distance, such communicative acts are commonly – even characteristically – accompanied by gestures of the hands. Curiously, similar movements can be observed whenever humans talk about time. Though time is famously ineffable and abstract, its intangibility hardly seems to get in the way of its gestural expression. Whether describing when one event happened in relation to another, commenting on the duration of a tedious meeting, or referring to an upcoming


event, such communicative acts, too, are commonly accompanied by hand movements. Such movements are often called temporal gestures, and they are paradigmatic examples of conceptual metaphor in gesture: they enact a construal of the domain of time as though it had properties of the domain of space. Spatial construals of time are the stuff of everyday language and thought and are strikingly widespread around the world, if not universal (Núñez and Cooperrider 2013). In language after language the passage of time is talked about in terms of motion, duration is talked about in terms of length or amount, and the concepts of past, present, and future and of earlier and later are talked about in terms of relative location (Alverson 1994; Haspelmath 1997). Gestures that embody such construals may prove just as everyday and just as widespread, though empirical work in this area is still in its early stages (for an earlier review see Kendon 1993). What forms do temporal gestures take? What relation do they bear to the spatial metaphors found in spoken language? In what ways do temporal gestures differ from one culture to the next, and in what ways are they the same? Such questions are certainly of descriptive interest to students of human communicative behavior. But they also have broad ramifications for our understanding of how body, language, and culture together shape one of the most abstract – but also most fundamental – dimensions of human experience.

2. Early observations and emerging theoretical perspectives

Observers of human communication have perhaps noticed from the start that people gesture when talking about time. In a well-known passage in which he marvels that gestures are "almost as expressive as words", Quintilian writes: "Do we not employ them to indicate joy, sorrow, hesitation, confession, penitence, measure, quantity, number, and time?" (Quintilianus 1922, Volume IV, Book 9, Chapter 3, line 86). Among the first scholars to turn concerted attention to the gestural expression of time was Andrea de Jorio ([1832] 2000) in his study of gesture in Naples. He describes how gestural reference is made to past time by iterated thrusts of the hand over a shoulder, to the present moment by directing an extended index finger to the ground, and to future time by extending the hand forward in a semi-circular leap. (The semi-circular "topology" noted here by de Jorio, which he relates to the sun's arc as a source for thinking about temporal progression, is also noted by later authors, and we will return to it below.) De Jorio does not dwell on the fact that these three gestures together form a contrast along the front-back axis, reflecting a systematic spatial construal of time. Outside these early observations, the phenomenon of time-related gestures does not appear to have attracted much further analytic attention until the work of Geneviève Calbris. Across several publications (1985, 1990: 84–93), she has provided a rich semiotic analysis of the temporal gestures produced by French speakers. In addition to the past-behind/future-front pattern noted by de Jorio, Calbris describes another pattern in which earlier events are located to the left and later events are located to the right. Many of the features of temporal gestures Calbris notes among French speakers – such as the use of both the sagittal front-back and lateral left-right axes – may end up generalizing to speakers of other Western global languages.
Others appear to be more restricted in their distribution. Calbris writes, for example, that when producing temporal gestures along the front-back axis, French speakers recruit relative height to express relative distance from the present moment. Interest in temporal gestures since Calbris’s first writings has largely shared a cognitive orientation to gesture that emerged in the 1980s. It was around this time that the


psychologist David McNeill influentially suggested – with the support of his new experimental methods – that co-speech gestures provide a kind of back-door access to the imagistic dimension of a speaker's thought processes. Viewed in this way, gesture becomes more than just an interesting behavior to describe: it presents a brave new kind of evidence that cognitive scientists can bring to bear on questions about the nature of conceptualization. The emergence of McNeill's cognitive view of gesture coincided with a swell of interest in conceptual metaphors – that is, cognitive mappings from one domain to another that were hypothesized to underlie the myriad metaphorical expressions seen in language. From the very beginning a paradigmatic example for conceptual metaphor theorists was the time is space metaphor (Lakoff 1992) and it remains so today. The time is space metaphor is often characterized as a "primary metaphor" because it is motivated by an experiential correlation between two domains (Grady 1997; Johnson 1999). When walking on a path, for example, the experience of forward motion is coupled to the experience of temporal progression – and indeed the basic experience of walking may give rise to the future-in-front mapping widely seen in both temporal language and temporal gesture. It was not until the late 1990s that these two areas of inquiry – conceptual metaphor theory and cognitive approaches to gesture – coalesced, motivating systematic empirical work on gestures accompanying metaphorical language in general and, in particular, accompanying metaphorical language about time. The turn to gesture was motivated in part by a pointed criticism of conceptual metaphor theory lodged by Murphy (1996) and others.
Murphy objected that if one wants to prove that conceptual metaphors are really about underlying thought and are not just linguistic decoration, proliferating additional linguistic examples is not enough: alternative evidence for their psychological reality is necessary. Co-speech gestures provide spontaneous and thus ecologically valid four-dimensional insights into the imagistic side of language, metaphorical or otherwise. Several studies have now demonstrated that temporal gestures provide information about time-related imagery that is at once richer and more dynamic, and which in some cases departs from the representations suggested by spoken language. In a ground-breaking early study in this vein, Cienki (1998) filmed informal interviews with college students in the U.S. and analyzed them for their metaphorical speech and gesture. He made two important observations about how metaphorical gestures depart from speech. The first was that gestures sometimes offered evidence of metaphorical processes at work where the immediately accompanying speech did not. Overtly metaphorical speech, it turned out, was not a necessary precondition for metaphorical gestures. This observation has since been corroborated in a number of studies on temporal gestures. Cienki’s second observation was that speakers of English often gestured in a way that was consistent with a left-to-right timeline. Such a mapping does not show up explicitly in time expressions in the English language, where instead expressions involving front-back contrasts (The weeks ahead look good; They left back in January) are pervasive. This observation has been extended more recently to show that English speakers sometimes produce gestures along the left-to-right axis even when overtly using front-back metaphors in concurrent speech (Casasanto and Jasmin 2012). 
Metaphorical language may be the source of temporal gestures along the front-back axis, but – as suggested by Calbris (1990), Cienki (1998), and others since – cultural practices of literacy and explicit temporal representation (as found in timelines and graphs, for example) likely motivate gestures consistent with the direction of writing. Writing directions exhibit an inherent "forward" direction, be it rightwards or leftwards. It is perhaps only by virtue of this directedness that writing direction is recruited for side-to-side temporal gestures even when co-occurring speech suggests a front-back construal.

3. Temporal gestures across cultures

The fact that temporal gestures in post-industrial cultures are profoundly shaped by literacy and associated cultural practices makes temporal gestures among more traditional, pre-industrial groups a topic of special interest. Around the time of Cienki's study, Núñez and colleagues (Núñez, Neumann, and Mamani 1997) began to study the spatial construal of time among the Aymara, an indigenous group of the Andes who lack a writing system and do not have entrenched conventions for representing time graphically. Close examination of Aymara expressions about deictic aspects of time (concerning past, present, and future) suggested a striking time is space metaphor that – in contrast to the pattern found in English and many other languages studied – mapped the past to the front and the future to the back. On the basis of linguistic evidence alone, however, it was not possible to rule out an alternative, less exotic explanation of this apparently "reversed" mapping: the fronts and backs invoked in such expressions may not belong to the ego but to the fronts and backs of another temporal event. (Expressions about sequential aspects of time, which concern only earlier-than, later-than relationships, commonly make metaphorical use of front-back orientation, e.g., February follows January.) Núñez and Sweetser (2006) turned to gesture to distinguish between these possibilities. They confirmed that, first, the Aymara past-front/future-behind linguistic metaphor is cognitively real and, second, it is centered on the ego rather than on some other temporal anchor point. Gestures produced along the front-back axis appear to be inherently deictic – that is, they include the ego's position in a way that gestures produced from side-to-side do not.
Additionally, the authors reported that while past-front/future-behind temporal gestures were widely used by elderly Aymara speakers, they were on the decline among younger speakers with Spanish proficiency, who tended to favor more Spanish-like past-behind/future-front temporal gestures.

Since Núñez and Sweetser (2006), several papers have sought to explore the range of cross-cultural diversity in time conceptualization by using temporal gestures as a window. Núñez et al. (2012) studied the spatial construal of time in the Yupno, an indigenous group of the Finisterre Range in Papua New Guinea. Like the Aymara, the Yupno lack a writing system or cultural practices for representing time. Building on the methodology used by Núñez and Sweetser, the researchers used semi-structured field interviews in which participants were asked to explain commonplace temporal expressions. Though not asked to gesture, Yupno participants spontaneously did so – abundantly and systematically – during their explanations. Their gestures reflected an allocentric topographic construal of time in which – regardless of which way the speaker was facing – the past was construed as downhill, the present as co-located with the speaker, and the future as uphill. Perhaps even more remarkably, the construal did not fit the familiar linear "arrow" of time. Instead it exhibited a three-dimensional geometry apparently grounded in the particulars of the local terrain. An interesting point of contrast between the Aymara and Yupno cases is how strongly and regularly the two languages employ metaphorical language to talk about time. In Aymara the front/back terms pervade linguistic expressions about time, whereas in Yupno use of the uphill/downhill contrast for time reference appears to be much more restricted.
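The difference between an allocentric (world-anchored) construal like the Yupno one and an egocentric (body-anchored) construal can be made concrete with a small, purely illustrative calculation. Nothing here is from the studies cited; the function, its name, and the uphill bearing are invented for the sketch:

```python
# Illustrative sketch only: a direction fixed in the landscape (allocentric,
# e.g., "uphill = future") yields different body-relative gesture directions
# (egocentric) depending on which way the speaker happens to be facing.

def egocentric_angle(allocentric_bearing: float, speaker_heading: float) -> float:
    """Gesture direction relative to the speaker's front, in degrees
    (0 = straight ahead, 90 = to the right, 180 = behind)."""
    return (allocentric_bearing - speaker_heading) % 360.0

UPHILL = 45.0  # hypothetical compass bearing of "uphill" at an imagined field site

# Facing uphill, a world-anchored "future" gesture points straight ahead...
print(egocentric_angle(UPHILL, 45.0))   # 0.0
# ...facing downhill, the very same "future" direction lies behind the speaker.
print(egocentric_angle(UPHILL, 225.0))  # 180.0
```

An egocentric mapping (e.g., future-in-front) would instead stay constant relative to the body no matter which way the speaker turns, which is why gesture direction under rotation is diagnostic of the underlying frame of reference.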


Another recent study presents further evidence that time's "arrow" is not universal. Le Guen and Balam (2012) studied the spatial construal of time among the Yucatec Maya, using a combination of linguistic analysis, card arrangement tasks, and analysis of co-speech gesture. While the authors do find evidence for spatial construals of time in gesture, these construals appear to be at once more diverse and perhaps less systematic than has been described in previous studies. For instance, it is reported that gestures for past and future contrast with gestures for present – a point downwards from the speaker's gestural space as observed in Naples, France, the United States, and among speakers of Aymara and Yupno – but do not contrast with each other. Le Guen and Balam (2012) also briefly note another temporal gesture practice in which speakers refer to times of day by indicating locations along the sun's imagined arc through the sky. The arc is absolutely oriented from east to west and is thus anchored to the world rather than to the speaker's body. This practice is apparently widespread among small-scale groups (see, e.g., Haviland 2004: 207; also Kendon 1980; and especially De Vos 2012 for apparently similar practices employed in small-scale sign languages), though it has only very recently attracted systematic attention. Floyd (2008) describes in detail such a celestial gesture system in use by speakers of Nheengatú, an indigenous language of the Brazilian Amazon. Reference to punctate times of day or extended swaths of time can be made by pointing or sweeping gestures, respectively. Floyd argues that such gestures fulfill a role comparable to spoken words insofar as they provide on-record referential information not found anywhere in speech.
Note that, in contrast to the spatial construals of time described earlier, celestial pointing gestures such as those described by Floyd are not grounded in a conceptual metaphor but rather in a conceptual metonymy by which spatial locations along the east-west arc provide metonymic access to times of day. Little is known about whether such models of the sun's daily course are recruited for understanding time at other scales (weeks, months, years, cultural history) or for construing deictic and sequential aspects of time generally.

Returning to the case of English, recent studies have begun to use more controlled methods to elicit temporal gestures in the laboratory. Cooperrider and Núñez (2009), for instance, sought to delve more deeply into the varieties of temporal gestures produced by English speakers. They used a narrative retelling task in which participants studied from either a graphical or auditory stimulus a brief history of the universe and then recounted it for a naïve participant. The authors described five types of temporal gesture in which time was conceptualized as having spatial properties, each of which reflected a recurring cluster of formal features. Participants produced gestures describing the duration of an event; pointed to or placed events as though they had spatial location; produced gestures highlighting a transition in time or the "spatial" relation between two events; and occasionally produced gestures "personifying" time as an agent with motion of its own. Cooperrider and Núñez noted that an interesting frontier of research on temporal gestures – and indeed on metaphorical gestures more generally – is the granularity at which differences in temporal gestures reflect subtle characteristics of the underlying representations that motivate them.
Casasanto and Jasmin (2012) addressed a puzzling discrepancy between previous linguistic analyses of time as space metaphors in English, which suggest the primacy of the front-back axis, and previous analyses of how English speakers gesture about time, which have noted a predominant left-right pattern. In a first study they explicitly elicited temporal gestures about past and future and about sequences by asking people how they


would gesture about such notions. They then compared the observed patterns to those seen in spontaneous temporal gestures from a second study and uncovered some differences. English speakers were more likely to use the front-back axis for time in elicited gestures than in spontaneous gestures. Another interesting finding to come out of this study, lending support to speculation voiced elsewhere, is that use of the front-back axis or the left-right axis depends in part on whether sequential or deictic temporal relationships are being conceptualized. Specifically, the front-back axis was more strongly associated with deictic than with sequential relationships.
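The kind of association reported here – axis choice conditioned on whether the temporal relationship is deictic or sequential – rests on cross-tabulating coded gestures. A toy sketch of such a tabulation follows; the annotation records are invented for illustration and are not data from any of the studies cited:

```python
from collections import Counter

# Purely illustrative: each co-speech gesture is coded for the spatial axis it
# uses and for the kind of temporal relationship in the concurrent speech
# (deictic past/present/future vs. sequential earlier/later).
# These records are made up for the sketch, not real study data.
annotations = [
    {"axis": "front-back", "reference": "deictic"},
    {"axis": "front-back", "reference": "deictic"},
    {"axis": "left-right", "reference": "deictic"},
    {"axis": "left-right", "reference": "sequential"},
    {"axis": "left-right", "reference": "sequential"},
    {"axis": "left-right", "reference": "sequential"},
]

# Cross-tabulate axis choice by reference type.
table = Counter((a["axis"], a["reference"]) for a in annotations)
for (axis, ref), n in sorted(table.items()):
    print(f"{axis:11s} {ref:11s} {n}")
```

In an actual study the cell counts would then be submitted to a statistical test of association; the point of the sketch is only the shape of the coding scheme, not any particular numbers.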

4. New directions

The discussion above has suggested a number of fruitful avenues for future inquiry on temporal gestures. For one, there is much potential for further instructive comparison between co-speech temporal gestures and linguistic signs produced to refer to time. Both sign and co-speech gesture exploit the analog richness of the manual-visual modality for communicating subtleties of temporal concepts. A key difference is that the overt spatialization of time is obligatory in signers, while it is only optional in speakers. Where thoroughgoing descriptions of signed temporal reference are available (see Engberg-Pedersen 1999 for a review), interesting parallels are evident. In American Sign Language the left-right and front-back axes are specialized for different kinds of temporal reference, with the former recruited for sequential relationships and the latter for deictic relationships (Emmorey 2002). This pattern is echoed, albeit more faintly, in co-speech gesture, as described above. Further study of both established sign systems and emerging sign systems, particularly in relation to the co-speech temporal gestures used in surrounding communities, could clarify the origins and transmission of spatial construals of time.

Another fruitful avenue for further work concerns the mapping between temporal construal and gestural form. How fine-grained are the correspondences? Studies to date have focused on gross patterns – such as the orientation of the axis used – rather than subtleties. In our studies we have occasionally encountered what look to be morphological features expressing nuances of construal. For example, when producing downward "now" gestures, English speakers often do so with an index finger extended handshape. Yupno speakers, by contrast, often do so with the palm open and flat, oriented parallel with the ground.
There is a possibility that these are "frozen" conventions, but they could plausibly reflect a difference between thinking of events as made up of points or slices in a line (in the English case) and thinking of time as positions on a wider field (in the Yupno case). Several authors, including Calbris and de Jorio, have noted temporal gestures that exhibit a (semi-)circular topology. Do these properties reflect underlying construals or, again, "frozen" gestural conventions? Given that gestural form is shaped by a host of factors other than mental imagery, large corpora will likely be needed to discern one-off idiosyncrasies from broader patterns.

Temporal gestures, like co-speech gesture generally, are of two-fold interest. On the one hand, they constitute a systematic everyday behavior, one that can be seen in human group after human group, exhibiting in each case a blend of universal and culture-specific features. On the other hand, temporal gestures are a cutting-edge tool of contemporary cognitive science. They provide fleeting but vivid glimpses into how the human mind construes experience.


5. References

Alibali, Martha W. 2005. Gesture in spatial cognition: Expressing, communicating, and thinking about spatial information. Spatial Cognition and Computation 5(4): 307–331.
Alverson, Hoyt 1994. Semantics and Experience: Universal Metaphors of Time in English, Mandarin, Hindi, and Sesotho. Baltimore: Johns Hopkins University Press.
Calbris, Geneviève 1985. Espace-temps: Expression gestuelle du temps. Semiotica 55(1–2): 43–74.
Calbris, Geneviève 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press.
Casasanto, Daniel and Kyle Jasmin 2012. The hands of time: Temporal gesture in English speakers. Cognitive Linguistics 23(4): 643–674.
Cienki, Alan 1998. Metaphoric gestures and some of their relations to verbal metaphorical expressions. In: Jean-Pierre Koenig (ed.), Discourse and Cognition: Bridging the Gap, 189–204. Stanford, California: Center for the Study of Language and Information.
Cooperrider, Kensy and Rafael E. Núñez 2009. Across time, across the body: Transversal temporal gestures. Gesture 9(2): 181–206.
De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. Translated and edited by Adam Kendon. Bloomington: Indiana University Press. First published [1832].
De Vos, Connie 2012. Sign-spatiality in Kata Kolok: How a village sign language in Bali inscribes its signing space. Ph.D. dissertation, Radboud University, Nijmegen.
Emmorey, Karen 2002. Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah, NJ: Lawrence Erlbaum.
Engberg-Pedersen, Elisabeth 1999. Space and time. In: Jens Allwood and Peter Gärdenfors (eds.), Cognitive Semantics, 131–152. Amsterdam: John Benjamins.
Floyd, Simeon 2008. Solar iconicity, conventionalized gesture, and multimodal meaning in Nheengatú. Paper prepared for Arizona Linguistics and Anthropology Symposium.
Grady, Joseph 1997. Theories are Buildings revisited. Cognitive Linguistics 8(4): 267–290.
Haspelmath, Martin 1997. From Space to Time: Temporal Adverbials in the World's Languages. Munich and Newcastle: Lincom Europa.
Haviland, John B. 2004. Gesture. In: Alessandro Duranti (ed.), A Companion to Linguistic Anthropology, 197–221. Malden, MA: Blackwell.
Johnson, Christopher 1999. Metaphor vs. conflation in the acquisition of polysemy: The case of see. In: Masako K. Hiraga, Chris Sinha and Sherman Wilcox (eds.), Cultural, Psychological and Typological Issues in Cognitive Linguistics, 155–169. Amsterdam: John Benjamins.
Kendon, Adam 1980. A description of a deaf-mute sign language from the Enga Province of Papua New Guinea with some comparative discussion. Part III: Aspects of utterance construction. Semiotica 32(3/4): 245–313.
Kendon, Adam 1993. Space, time and gesture. Degrès 7(4): 3A–16.
Lakoff, George 1992. The contemporary theory of metaphor. In: Andrew Ortony (ed.), Metaphor and Thought, 202–250. Cambridge: Cambridge University Press.
Le Guen, Olivier and Lorena I.P. Balam 2012. No metaphorical timeline in gesture and cognition among Yucatec Mayas. Frontiers in Psychology 3(August): 1–15.
Murphy, Gregory L. 1996. On metaphoric representation. Cognition 60(2): 173–204.
Núñez, Rafael and Kensy Cooperrider 2013. The tangle of space and time in human cognition. Trends in Cognitive Sciences 17(5): 220–229.
Núñez, Rafael, Kensy Cooperrider, D. Doan and Jürg Wassmann 2012. Contours of time: Topographic construals of past, present, and future in the Yupno valley of Papua New Guinea. Cognition 124(1): 25–35.
Núñez, Rafael, Vicente Neumann and Manuel Mamani 1997. Los mapeos conceptuales de la concepción del tiempo en la lengua Aymara del Norte de Chile [Conceptual mappings in the conceptualization of time in northern Chile's Aymara]. Boletín de Educación de la Universidad Católica del Norte 28: 47–55.


Núñez, Rafael and Eve Sweetser 2006. With the future behind them: Convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science 30(3): 401–450.
Quintilianus, Marcus Fabius 1922. The Institutio Oratoria of Quintilian. Translated by Harold Edgeworth Butler, Volume IV, The Loeb Classical Library. New York: G. P. Putnam and Sons.

Kensy Cooperrider, Chicago (USA)
Rafael Núñez, San Diego (USA)
Eve Sweetser, Berkeley (USA)

135. Between reference and meaning: Object-related and interpretant-related gestures in face-to-face interaction

1. Introduction
2. Previous studies: Co-speech gestures and their verbal affiliates
3. Reference, meaning, and denotation
4. Co-speech gestures between reference and meaning
5. From meaning to reference: Gestural turning points in face-to-face interaction
6. Conclusion
7. References

Abstract

This chapter presents object-related and interpretant-related gestures as manifesting the grammatical distinction between extensional and intensional determination in noun phrases from the perspective of a multimodal approach to grammar. The distinction between "object-related" and "interpretant-related" is based on the distinction made between reference and meaning in linguistics and semiotics: Object-related gestures are related to the reference object intended by the speaker, whereas interpretant-related gestures are related to the meaning or concept attached to a spoken word form. Consistent with McNeill's growth point hypothesis and Wundt's concept of "Gesamtvorstellung", it is proposed that object-related and interpretant-related gestures in noun phrases allow for observing the "distinctive separation of the characteristic from the object" (Wundt [1900] 1904) in different stages. In manifesting a turning point between extensional and intensional determination in Seiler's continuum, the distinction between object-related and interpretant-related gestures bridges the gap between McNeill's growth point hypothesis and Fricke's multimodal approach to grammar.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1788–1802

1. Introduction

Targeting a multimodal approach to grammar, Fricke claims that co-speech gestures manifest a principle that also operates on the verbal level of language: the differentiation


between intensional and extensional determination (Seiler 1978; Fricke 2008, 2012, volume 1). With regard to their attributive function in noun phrases, adjectives can be divided into two groups: one that primarily limits the extension of the nuclear noun, regardless of its meaning (extensional determination), and one that primarily modifies its meaning (intensional determination) (Seiler 1978). In the following sections, it will be illustrated how co-speech gestures can be considered as constituent parts of noun phrases and analogously divided into two groups: gestures that are primarily related to the reference object intended by the speaker (object-related) and gestures that are primarily related to a meaning or concept that is attached to a spoken word form (interpretant-related). The terms "object-related" and "interpretant-related" introduced by Fricke (2008, 2009, 2012) are based on the Peircean concept of sign, which is conceived of as a triadic relation between the representamen or sign vehicle (R), its object (O), and its interpretant (I) (cf. Peirce 1931–58, 2000). The latter can be understood, for the time being pretheoretically, as the sign's "meaning" in a broad sense. The distinction that Fricke makes between "object-related" and "interpretant-related" is based on the distinction made between reference and meaning in linguistics (see section 2): Object-related gestures are primarily related to the reference object intended by the speaker, whereas interpretant-related gestures are primarily related to the meaning or concept attached to a spoken word form. Why is it necessary to make this distinction? Fricke (2006, 2008, 2009, 2012) presents examples of co-speech gestures that accompany noun phrases and whose form characteristics are obviously incongruent with those of the reference object intended by the speaker, although the speaker knows exactly what the respective reference object looks like.
For example, various speakers refer to a rectangular entrance with arced and circular gestures. Fricke’s explanation for this is that speakers are able to switch between gestures that depict the intended reference object and gestures that depict mental images associated with the word form of the nuclear noun, e.g., mental images that serve as representations of prototypes or stereotypes. This hypothesis is substantiated by the drawings of informants who were instructed to draw typical objects of a particular kind, e.g., a bridge (Brücke), an entrance to a building (Tor), or a hole (Loch) (see Fig. 135.2). According to Lyons (1977), a reference is conceived of as an act bound to the respective speaker and his or her utterance. Hence, speakers should be able to successfully use interpretant-related gestures to refer to objects although there is a mismatch between their respective form characteristics (section 4). Fricke (2006, 2008, 2009, 2012) presents two further interesting observations: Firstly, within the same turn, speakers elaborate their interpretant-related gestures in ways that tend to make them resemble object-related gestures, e.g., they switch from one type to the other; and secondly, a speaker’s interpretant-related gestures may be interpreted by addressees as object-related when they are relaying the received information in a subsequent turn. This means that an addressee may assign non-existent properties to the reference object intended by the speaker (section 5).

2. Previous studies: Co-speech gestures and their verbal affiliates

The fact that co-speech gestures are influenced by grammatical and semantic differences inherent to their verbal affiliates is substantiated by various comparative studies (e.g., Müller 1998; McNeill 1992, volume 1; McNeill and Duncan 2000; Kita and Özyürek 2003;


for an overview of models of gesture-speech production see Feyereisen volume 1). Kendon (2004) assumes at least four different types of influence. For the purpose of this chapter, the second type is the most relevant, which is the influence on a gesture exercised by the semantic features of its verbal affiliate: The semantic features of something that a lexical expression such as a verb may encode may influence what features are brought out in a description of it. If gesture is a part of that description, gesture will be influenced accordingly (Kita and Özyürek: Turkish, Japanese and English comparisons). (Kendon 2004: 348)

In their study, Kita and Özyürek (2003) compared descriptions of an animated film sequence given by English, Japanese, and Turkish speakers. In this film sequence, the comic figure Sylvester moves from one side of the street to the other by swinging on the end of a rope. The authors present the following hypothesis: The Interface Hypothesis predicts that gestural expressions are simultaneously shaped by linguistic formulation possibilities and by the spatial properties of the events that may not be linguistically encoded in the accompanying speech. Specifically, the Interface Hypothesis predicts that the gestural expression of the events varies across languages in ways similar to the linguistic packaging of information about the events in respective languages. (Kita and Özyürek 2003: 18)

While the English verb swing accounts for the semantic feature 'arced' in the represented movement, there is no comparable configuration of semantic features in Turkish or Japanese (Kita and Özyürek 2003: 18). This has the effect that all the American speakers who use the verb swing in their descriptions perform an arc-shaped gesture, while the Turkish and Japanese speakers, who lack a comparable lexeme in their vocabulary, mostly perform a horizontal gesture that demonstrates the direction but not the form of the movement. Likewise, the semantic feature 'path', but not the form of the movement, is encoded in the movement verbs used by these speakers. Kita and Özyürek come to this conclusion in their discussion: "The cross linguistic variation in the gestural representation of the Swing Event has the same pattern as the variation in the linguistic packaging of information about the event" (Kita and Özyürek 2003: 21). These results are challenged by lexicalist approaches to gesture analysis. For example, the "Lexical Semantics Hypothesis" (Butterworth and Hadar 1989; Schegloff 1984) assumes that co-speech gestures are solely determined by the semantics of their verbal affiliates: "gestures do not encode what is not encoded in the concurrent speech" (Kita and Özyürek 2003: 17). In contrast, the so-called "Free Imagery Hypothesis" (Krauss, Chen, and Chawla 1996; Krauss, Chen, and Gottesmann 2000; de Ruiter 1998, 2000) claims that gestures are solely determined by prelinguistic image-like representations in working memory without the influence of speech processing and, therefore, without the influence of grammatical or lexical structures (cf. Kita and Özyürek 2003: 17). As an alternative view, Kita and Özyürek (2003: 28) propose the "Interface Hypothesis" with a model of speech and gesture production, based on Levelt's (1989) model of speech production, in which diverse modules can interact online during the process of formulating an expression.
This means, on the one hand, that the execution of gestures can be influenced by lexical items retrieved by and grammatical encoding specified by the Formulator, and, on the other hand, that gestures can also be determined by the motor


and spatial components of working memory, and by the situative context or environment in which the utterance takes place (for an overview of current models of gesture-speech production, see Feyereisen volume 1). In contrast to these hypotheses based on a modular architecture, McNeill presents a holistic approach to language and gesture "as a dynamic and integrated system" (McNeill volume 1: 135). The "growth point hypothesis" is grounded on the idea of a growth point "as a cognitive package that combines semiotically opposite linguistic categorial and imagistic components" (McNeill volume 1: 135). McNeill gives the following characterization:

A growth point is a nexus where the static and dynamic intersect. Thus both dimensions must be considered. In combining them, the growth point becomes the minimal unit of the dynamic dimension itself. It is called a growth point because it is meant to be the initial pulse of thinking-for-(and while)-speaking, out of which a dynamic process of organization emerges. The linguistic component categorizes the visual-actional imagery component. The linguistic component is important since, by categorizing the imagery, it brings the gesture into the system of language. Imagery is equally important, since it grounds sequential linguistic categories in an instantaneous visual-spatial frame. It provides the growth point with the property of "chunking" […], whereby a chunk of linguistic output is organized around the presentation of an image. Synchronized speech and gesture are the key to this theoretical growth point unit. (McNeill volume 1: 135–136)

In his final note on consciousness, McNeill (volume 1: 153) parallels his concept of growth point and its "unpacking" to Wundt's (1904) concept of Gesamtvorstellung ('holistic mental representation'). The parallel is indeed striking. The quotation given by McNeill primarily focuses on Wundt's psychological view of sentences as both simultaneous and sequential structures. However, Wundt's concept of syntax goes one step further, firstly, by focusing on the psychological origin of syntactic structures in language history and, secondly, by including the aspect of reference: He describes sentences and their syntactic units, e.g., attributes and noun phrases, as historically derived from the psychological elaboration, or "unpacking" in McNeill's terms, of a holistic Gesamtvorstellung that is originally used to refer to entities and events conceived of as a totality (Fricke 2008, 2012; see the quotation given in section 4.2). According to Wundt, the grammatical differentiation between intensional and extensional determination found in noun phrases of single languages, such as German, has to be seen as the historical outcome of individual psychological processes of "unpacking" the Gesamtvorstellung while speaking. In Humboldt's terms: Seiler's (1978) continuum of determination is manifested on the verbal level as ergon, and on the gestural level as energeia (cf. Fricke 2012). McNeill's psychological growth-point hypothesis is supported by the linguistic findings, gathered from the viewpoint of multimodal grammar, that are presented in the following sections (Fricke 2008, 2012).
All the above hypotheses provide theoretical background for Fricke’s (2008, 2012) distinction between object-related and interpretant-related gestures, which differentiates her approach with regard to the following aspects: Firstly, using a multimodal approach to grammar as a starting point, a parallel differentiation between intensional and extensional determination on the verbal and gestural levels reveals that the same grammatical principle is manifested multimodally (Fricke 2008, 2012; for distinguishing between multimodal code manifestation and code integration, see Fricke volume 1). Secondly,



both types of gestures are investigated as forming an integral part of noun phrases. Noun phrases with the German deictic son ‘such a’ are of particular interest. Thirdly, the execution of co-speech gestures is assumed to be influenced not only by semantic features of lexical entries, but also by cognitive prototypes (Rosch and Mervis 1975; Rosch 1977) or stereotypes (Putnam 1975), considered as typified mental images associated with particular verbal expressions (section 4.1). Fourthly, this approach takes the perspective that a gesture’s relation either to the object of reference or to the concept conveyed by a particular verbal expression is actively created during conversation; thus, a speaker can use both types of gestures within one and the same turn (see section 4.2). Fifthly, processes of re-interpreting mismatches between the reference object and interpretant-related gestures observed in face-to-face interaction (section 5) are described and explained within a semiotic framework presented in section 4.3.

3. Reference, meaning, and denotation

The distinction between object-related and interpretant-related gestures introduced above requires an excursus on the notions of reference, meaning, and denotation. According to Lyons (1977: 174), “the term ‘reference’ has to do with the relationship which holds between an expression and what that expression stands for on particular occasions of its utterance”. Linguistic expressions may differ in meaning but have the same reference, e.g., the noun phrases the victor at Jena and the loser at Waterloo (Edmund Husserl); both may be used to refer to Napoleon (cf. Lyons 1977: 199). On the other hand, expressions may be partially synonymous and have nearly the same meaning, e.g., the expressions United States and America, but may be used differently with respect to the reference object intended by the speaker (Lyons 1977: 209). For example, “the latter may also be used for the whole continent consisting of North, Central and South America” (Löbner 2002: 46). Moreover, as Lyons points out (1977: 177), “it is the speaker who refers (by using some appropriate expression): he invests the expression with reference by the act of referring”. This allows for the possibility that the speaker can refer successfully to the non-linguistic entity “dog” with the noun phrase this cat as part of the utterance see this cat over there. This utterance may be sufficient for the addressee to identify the animal in question, although dogs do not belong to the extension (also denotatum) of the word cat. To put it in a nutshell: “successful reference does not depend upon the truth of the description contained in the referring expression” (Lyons 1977: 181). It is important to bear this fact in mind when studying the analyses of object-related and interpretant-related gestures in section 4.
In the following, the term “denotatum” and its equivalent “extension” are used for classes of potential reference objects to which a linguistic expression correctly applies (Lyons 1977: 207). For example, the denotatum of the noun cat is a particular class of animals with the individual animals as its denotata. The denotata of the adjective circular, for example, are entities with the property of being circular. The denotatum (also “extension”) as a class of potential reference objects is provided by the meaning (also “intension”) of the expression. For the purpose of this chapter, the meaning of a word is conceived of as “a concept that provides a mental description of a certain kind of entity” (Löbner 2002: 22). According to Löbner (2002: 20), “a concept for a kind, or category, of entities is information in the mind that allows us to discriminate entities of that kind from entities of other kinds”. Although, according to Löbner, the mental description of expressions like cat is “by no means exhausted by a specification of their visual appearance”, and therefore should not be equated with a “visual image”, mental images connected to a word form may at least form a part of such mental descriptions, as cognitive prototype theory suggests (e.g., Rosch and Mervis 1975; Rosch 1977). It is beyond the scope of this chapter to discuss the particulars of the challenges to this approach (for a detailed discussion, see Löbner 2002). The only assumption made in the following argumentation is that, on the semantic level, some lexical expressions in a single language Sn may be connected with intersubjectively typified mental images; these may correspond either to prototypes, according to Rosch (Rosch and Mervis 1975), or to stereotypes, according to Putnam (1975). Such typified and conventionalized mental images associated with an expression can, in turn, be represented by iconic co-speech gestures. In Peircean terms, we are dealing with a gestural representamen (R1) (also ‘sign vehicle’) whose object (O1) is a prototype, considered as a mental image that is associated with a word form, and which belongs to the interpretant (I2) of that word form (R2).

4. Co-speech gestures between reference and meaning

4.1. Prerequisites: Noun phrases, prototypes, and reference objects

The examples in this section are taken from video recordings of communication partners giving descriptions of routes in the vicinity of Potsdamer Platz, Berlin, in December 2000. The data were collected from 33 informants, who were divided into three groups. Each member of group A followed a predetermined route at Potsdamer Platz on his or her own. Each informant was instructed to describe this route to an informant in group B, who was unfamiliar with the location. This description was to be so precise that the latter would, in turn, be able to describe the route to an informant in group C, who would then be able to follow the route independently. The informants in group A were also instructed to take two photographs: one of the rectangular entrance porch of the Stella Musical Theater, taken from inside the porch and facing Marlene-Dietrich-Platz (Fig. 135.1; the dark contour of the porch borders the edge of the picture), and one of where the route ended. Although every speaker in informant group A clearly knew that the entrance porch of the Stella Musical Theater has a rectangular shape, some speakers refer to it by using gestures with both rectangular and circular or arc-shaped trace forms within one and the same utterance turn (see Fig. 135.1). In these cases, all the speakers mainly use noun phrases with the qualitative deictic son ‘such a’. In the following, examples of the utterances containing son Loch ‘such a hole’ and son Tor ‘such an archway’ are presented. Noun phrases introduced by the determiner son are a particularly interesting kind of noun-phrase construction in German. According to Hole and Klumpp (2000), son has to be considered an article that is governed by the nuclear noun of the respective noun phrase. As a qualitative deictic denoting a quality, son obligatorily requires a qualitative description, which can be instantiated either verbally or gesturally (Fricke 2008, 2012). The crucial point is that in our examples a verbal attribute offering the required qualitative description does not occur as part of the noun phrase under consideration. Moreover, there are occurrences of noun phrases introduced by son that lack both a verbal qualitative description and a gestural one. These findings can be explained by assuming the participation of conventionalized prototypes or stereotypes attached to the respective word form of the nuclear noun during utterance formulation. Since such prototypes and stereotypes are conventionalized, and therefore intersubjectively accessible, let us assume that they may instantiate the obligatory description of a quality and, furthermore, that if they are attached to the nuclear noun, then they may be depicted by an iconic co-speech gesture. The drawings in Fig. 135.2 are examples of typical archways (Tor), bridges (Brücke), and holes (Loch) produced by 22 native German speakers. It should be noted that all the drawings of a hole (Loch), without exception, have a circular shape, and all the drawings of an archway (Tor), in the sense of ‘entrance’, have an arced shape. In contrast to the English compound archway, the arced shape is not encoded in the German uncompounded word Tor.

Fig. 135.1: Rectangular entrance porch of the Stella Musical Theater facing Marlene-Dietrich-Platz (Fricke 2007: 271)

Fig. 135.2: Drawings of a typical Tor, Brücke, and Loch by native German speakers (Fricke 2006)

135. Between reference and meaning


As will be demonstrated in the next section, the speakers in this study use object-related gestures while referring directly to the rectangular entrance porch of the Stella Musical Theater, and they also use interpretant-related gestures while referring indirectly to the word-form of the nuclear noun by depicting the prototypical concept attached to it.

4.2. Examples of object-related and interpretant-related gestures in noun phrases

Particularly interesting is the following utterance taken from German route descriptions, in which an arc-shaped gesture “resolves” into the form of a rectangle within the same turn. The speaker is searching for an adequate description on the verbal level by using different noun phrases (Fricke 2008, 2012). The noun phrases son Loch im Haus (‘such an opening in the building’) and son Tor (‘such an entrance’) both refer to the rectangular entrance porch of the Stella Musical Theater at Potsdamer Platz in Berlin shown above (Fig. 135.1).

(1)

A: [da iss einfach nur son Loch im Haus | sozusagen …]1 [son Tor ]2 [xxx]3 ‘there is just such an opening in the building’| ‘so to speak’

Fig. 135.3: Arc-shaped stroke in gesture 1

Fig. 135.4: Straight-line stroke in gesture 1

The speaker accompanies the verbal expression da ist einfach nur son Loch im Haus (‘there is just such an opening in the building’) with gesture 1: an arc-shaped stroke that traces the outline of part of a circle from right to left (Fig. 135.3). This gesture ends with a straight-line stroke downwards from the upper-right edge to the lower-right edge of the periphery of the gesture space (Fig. 135.4). While maintaining the same hand shape (G-Form) and mode (“the hand draws”, Müller 1998), the speaker accompanies the verbal expression son Tor (‘such an entrance’) with gesture 2: the outline of a right angle traced in the opposite direction, i.e., upwards (Fig. 135.5), from left to right (Fig. 135.6), and then downwards (Fig. 135.7).

(2)

A: [son Tor ]2 [xxx]3 ‘such an entrance’

The illustration of a rectangle is continued by an additional stroke sequence that begins on the left side of the gesture space rather than the right. Essentially, the stroke sequence of gesture 3 repeats that of gesture 2 in the opposite direction (Fig. 135.8). In this example, we see that the same speaker uses two different types of gestures to refer to the same intended reference object within the same turn: A gesture, or gesture fragment, whose



Fig. 135.5: Straight-line stroke in gesture 2

Fig. 135.6: Straight-line stroke in gesture 2

Fig. 135.7: Straight-line stroke in gesture 2

Fig. 135.8: The same stroke sequence from left to right in gesture 3

trace form does not correspond to the contours of the reference object, is followed by one whose trace form does indeed correspond. How can we explain this observation? What does this speaker-based “resolution” from an interpretant-related to an object-related gesture mean? In this example, we can witness the quasi real-time detachment of an attributive characteristic from a holistic mental representation – “Gesamtvorstellung” (Wundt 1904) – connected to the nuclear nouns of the respective noun phrases. According to Wundt (1904), sentences are elaborations of an underlying holistic complex that he calls a “Gesamtvorstellung”, which in some respects parallels McNeill’s concept of holistic “growth points” (McNeill 1992, 2005; for an overview, see McNeill volume 1). Wundt presents his conception of a Gesamtvorstellung and its relation to sentence building as follows:

Das einfachste Hilfsmittel, die Gegenstände zu nennen, besteht in der Hervorhebung irgendeiner Eigenschaft derselben. Der Name für das Ding selbst, das Substantivum, und der Name für eine seiner Eigenschaften fließen daher ursprünglich zusammen; und nur dadurch, daß sich eine einzelne Eigenschaftsbezeichnung inniger mit der Vorstellung eines Gegenstandes assoziiert und so den ursprünglichen Eigenschaftsbegriff hinter dem Gegenstande zurücktreten läßt, sondern sich allmählich Substantivum und Adjektivum. […] Diese Scheidung beider Wortformen ist aber wiederum eine Wirkung der Satzbildung. Denn der Satz ist es ja erst, der eine Gesamtvorstellung in einen Gegenstand und in eine an diesem besonders apperzipierte Eigenschaft zerlegt.

[The simplest means to naming objects consists in highlighting one of its characteristics.
The name for the thing itself, the noun, and the name for one of its characteristics therefore originally merge together; and only in this way, by deeply associating one single denotation for a characteristic with the idea of an object and thus causing the original concept of the characteristic to retreat behind the object, do the noun and the adjective gradually become differentiated. […] This separation of the two word forms is, however,



in turn, an effect of sentence building. For it is primarily the sentence that fragments a holistic mental representation into an object and a particular, consciously perceived characteristic.] (Wundt 1904: 286–287, translated by Mary M. Copple; italics added by E.F.)

It should be noted that, in contrast to synthetic views of syntax that conceive of sentences as consisting of smaller units called “words” (bottom-up synthesis), Wundt’s view is analytic: Sentences and their syntactic units are derived from the elaboration of a holistic “Gesamtvorstellung” originally used in order to refer to entities and events as a whole ensemble (top-down analysis). Examples (1) and (2) above illustrate Wundt’s analytic view. With respect to the German noun phrases son Loch (‘such an opening’) and son Tor (‘such an entrance’), the German qualitative deictic son ‘such a’ obligatorily requires a qualitative description (Fricke 2008, 2010; for further details, see Fricke volume 1), which can be instantiated verbally, e.g., by an adjective, or gesturally, e.g., by an iconic gesture. According to Fricke (2008, 2012, volume 1), son is the syntactic integration point for co-speech gestures in noun phrases on the level of the linguistic system. Son as a deictic determiner is governed by the nuclear noun with respect to its gender. Gestures that are structurally integrated to a comparable extent can also be integrated functionally as attributes in verbal noun phrases (for details of multimodal integrability, see Fricke 2008, 2012, volume 1). Moreover, son as an article instantiates exactly the turning point from “determination of reference” to “determination of concept” in Seiler’s (1978) continuum of determination (Fig. 135.9). Moving from the left side towards the right, the determination of reference decreases and the determination of concept increases; moving from right to left, the determination of concept decreases and the determination of reference increases.

Fig. 135.9: Seiler’s (1978) continuum of determination in noun phrases

This turning point is also evident on the gestural level: The first gesture in our example is a circular interpretant-related gesture that depicts a mental prototype of the circular shape associated with the word form Loch. Thus, it merely illustrates the concept of the nuclear noun (determination of concept); it does not limit the extension of what it denotes. The subsequent object-related gesture accompanying the word form Tor, however, does exactly that: The rectangular trace form of the gesture limits the extension of the possible architectural forms of entrances and excludes arc-shaped entrances from the denotatum (determination of reference). It should be noted that the mental prototype of the German word form Tor in the sense of ‘entrance to a building’ is arc-shaped and not rectangular, as has been substantiated by drawings of typical Tor-shapes by native speakers of German (Fricke 2008, 2012).



4.3. Object-related and interpretant-related gestures as Peircean sign configurations

The distinction between object-related and interpretant-related gestures in Seiler’s continuum of noun phrases can be illustrated by examining different sign configurations from a Peircean perspective. Let us consider example (1). The expression ‘there is just such an opening in the building’ is initially accompanied by a circular gesture (Fig. 135.3), and the subsequent gesture accompanying the expression ‘such an entrance’, which relates to the same object of reference, is “transformed” into the outline of a rectangle (Figs. 135.4–135.8).

Fig. 135.10: Sign configuration of an interpretant-related gesture

The expression son Loch (‘such a hole/such an opening’) is interpreted as a representation of a rectangular opening in a wall. The relationship between the representamen R1, son Loch, and the object O1, a rectangular opening in a wall, is established by the interpretant I1, which conveys the image of a prototypical opening as an aspect of the meaning of the word Loch (Fig. 135.10). The circular gesture R2, as a representamen of a second sign configuration, has the same mental image as its object. Hence, the interpretant I1 of the first sign configuration becomes the object O2 of the second sign configuration, in which representamen R2, the circular-shaped gesture, is interpreted as a representation of the mental image of a prototypical hole (I1 = O2). Here, the speaker’s circular-shaped gesture is not directly referring to the intended reference object; rather, this reference is indirectly established by the interpretant of the nuclear noun Loch (‘hole’) in the noun phrase son Loch (‘such a hole/such an opening’). A different sign configuration, however, occurs in the case of the rectangular gesture that accompanies the noun phrase son Tor (‘such an entrance’) (Fig. 135.11). The reference object intended by the speaker is still the rectangular entrance porch of the Stella Musical Theater at Marlene-Dietrich-Platz. In this case, the rectangular gesture directly references this specific entrance without the detour of an additional sign. In this example, gesture and speech share the same intended reference object (Od), which



Fig. 135.11: The sign configuration of an object-related gesture

is thematized and encoded differently in the verbal and gestural modalities. As the spoken and gestural signs record different aspects of the remembered or imagined entrance, the spoken sign and the gestural sign have their own distinct objects (O1 and O2).

5. From meaning to reference: Gestural turning points in face-to-face interaction

How can one decide whether a given gesture is interpretant-related or object-related? The Peircean concept of the dynamic interpretant, i.e., the actual effect that a sign has on its interpreter, allows for both possibilities to be available to the speaker: A gesture can represent the pictorial imagining of a conventionalized prototype, or the memory of an object of perception, or an object of the individual imagination. The speaker can decide for him- or herself which case applies. The addressee, however, as long as the reference object intended by the speaker is unknown, remains unsure. Since there seem to be no differences on the level of form, a gesture intended by the speaker as interpretant-related may be perceived and “reinterpreted” as object-related by the addressee. In example (3), speaker B uses the verbal expressions son Tor (‘such an entrance’) and dieses Tor (‘this entrance’) in order to describe the rectangular entrance porch of the Stella Musical Theater at Potsdamer Platz introduced in section 4.1 to her addressee C (Fig. 135.12). In the subsequent conversation, addressee C takes the role of the speaker and describes the same entrance to addressee D by using solely the German expression Torbogen (‘archway’) without a co-speech gesture (Fig. 135.13).

(3)

C: sie haben halt die Straßenseite gewechselt und sind dann an irgend so einem Torbogen entlanggegangen [...] weißt du welchen ich meine/ ‘they have just crossed over the street and then walked alongside such an archway […] do you know which one I mean/’

The German expression Torbogen is a regular compound of the nouns Tor and Bogen. Its meaning is composed of the meaning of the head noun (also determinatum) plus a specification added by the modifier (also determinans). The use of the head noun Bogen ‘arc’ indicates that speaker C, who does not know Potsdamer Platz, re-interprets B’s arc-shaped interpretant-related gesture in the initial conversation as being object-related in the subsequent conversation. During this second conversation, she refers to the same rectangular entrance as a specific kind of arc. That it is necessary to differentiate between


Fig. 135.12: Interpretant-related gesture in conversation 1


Fig. 135.13: Utterance of Torbogen in conversation 2

encoding by the speaker and decoding by the addressee has already been noted by the German linguist Hermann Paul ([1880] 1968: 122). He uses this differentiation to defend his synthetic method against Wundt’s analytic method. He argues that addressees need to re-combine the parts of an utterance that result from a top-down analysis of the Gesamtvorstellung (Fricke 2008, 2012).

6. Conclusion

The differentiation between object-related and interpretant-related gestures can explain how the form characteristics of co-speech gestures can be incongruent with those of the reference object intended by the speaker: Object-related gestures are bodily movements that are related to the reference object intended by the speaker, whereas interpretant-related gestures are bodily movements that are primarily related to a meaning or concept attached to a spoken word form. These concepts can be mental images of prototypes. The argument for assuming a mental, image-like prototype is not psychological but grammatical: The German deictic article son shows a very high degree of multimodal integrability, which can be described in three consecutive steps (Fricke 2008, 2012, volume 1). The first step consists in the nuclear noun of the noun phrase governing the use of son. The second step consists in the cataphoric integration of a qualitative determination required by son, which may occur by means of gesture. In the case of an iconic gesture providing the qualitative determination, the third step accomplishes a categorical selection with respect to the four gestural modes of representation: “the hand models”, “the hand draws”, “the hand acts”, and “the hand represents” (Müller 1998). Taking this view of multimodal grammar raises the crucial question of whether or not co-speech gestures are capable of instantiating independent syntactic constituents that are detached from the nuclear noun in a noun phrase. Only if this requirement is met can co-speech gestures be expected to adopt an attributive function within verbal noun phrases on the syntactic level.
In manifesting the differentiation between extensional and intensional determination in Seiler’s continuum, object-related and interpretant-related gestures, considered as parts of noun phrases, can be divided into two groups that are equivalent to verbal adjectives with attributive function: one that limits the extension of the nuclear noun, regardless of its meaning (object-related gestures), and one that merely modifies or illustrates its meaning (interpretant-related gestures). Consistent with McNeill’s growth point hypothesis and Wundt’s concept of Gesamtvorstellung, it is proposed that co-speech gestures in noun phrases allow for observing the “distinctive separation of the characteristic from the object” (Wundt 1904: 286–287) in a pre-differentiated state. Three stages can be observed: Firstly, the noun phrase is introduced by the article son without verbal or gestural extension; the qualitative description required by son is achieved solely through the image-like prototype attached to the word form of the nuclear noun. Secondly, an iconic gesture delivers this intersubjectively perceivable “image” of the prototype (interpretant-related gesture), which can be interpreted by the addressee as a “true” qualitative description of a characteristic of the reference object intended by the speaker (object-related gesture). Thirdly, the form of an iconic gesture is based on form characteristics of the intended reference object (object-related) and is also interpreted by the addressee as being object-related. In this case, the gesture limits the extension of the nuclear noun and has to be considered a constituent of the noun phrase with an attributive function that is detached from the nuclear noun. In manifesting the turning point between extensional and intensional determination in Seiler’s continuum, the distinction between object-related and interpretant-related gestures bridges the gap between McNeill’s growth point hypothesis and Fricke’s multimodal approach to grammar.

7. References

Butterworth, Brian and Uri Hadar 1989. Gesture, speech and computational stages: A reply to McNeill. Psychological Review 96(1): 168–174.
de Ruiter, Jan Peter 1998. Gesture and speech production. PhD dissertation, Nijmegen University.
de Ruiter, Jan Peter 2000. The production of gesture and speech. In: David McNeill (ed.), Language and Gesture, 284–311. Cambridge: Cambridge University Press.
Feyereisen, Pierre volume 1. Psycholinguistics of speech and gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistic and Communication Science 38.1.), 156–168. Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen 2006. Was bilden Gesten ab? Zum Objekt- und Interpretantenbezug redebegleitender Gesten. Lecture series “Bedeutung in Geste, Bild und Text”, Technische Universität Berlin.
Fricke, Ellen 2007. Origo, Geste und Raum: Lokaldeixis im Deutschen. Berlin/New York: De Gruyter.
Fricke, Ellen 2008. Grundlagen einer multimodalen Grammatik: Syntaktische Strukturen und Funktionen. Habilitation thesis, European University Viadrina, Frankfurt (Oder).
Fricke, Ellen 2009. Attribution and multimodal grammar: How gestures are functionally and structurally integrated into spoken language. Paper presented at the 3rd International Conference of the French Cognitive Linguistics Association (AFLiCo 3), “Grammars in Construction(s)”, Université Paris 10-Nanterre, France.
Fricke, Ellen 2010. Phonaestheme, Kinaestheme und multimodale Grammatik. Sprache und Literatur 41(1): 69–88.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin/Boston: De Gruyter.
Fricke, Ellen volume 1. Towards a unified grammar of gesture and speech: A multimodal approach. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistic and Communication Science 38.1.), 733–754. Berlin/Boston: De Gruyter Mouton.
Hole, Daniel and Gerson Klumpp 2000. Definite type and indefinite token: The article son in colloquial German. Linguistische Berichte 182: 231–244.




Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kita, Sotaro and Aslı Özyürek 2003. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48: 16–32.
Krauss, Robert M., Yihsiu Chen and Purnima Chawla 1996. Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us? In: Mark P. Zanna (ed.), Advances in Experimental Social Psychology, 389–450. San Diego: Academic Press.
Krauss, Robert M., Yihsiu Chen and Rebecca F. Gottesman 2000. Lexical gestures and lexical access: A process model. In: David McNeill (ed.), Language and Gesture, 261–283. Cambridge: Cambridge University Press.
Levelt, Willem J. M. 1989. Speaking: From Intention to Articulation. Cambridge, MA: The MIT Press.
Löbner, Sebastian 2002. Understanding Semantics. London: Arnold.
Lyons, John 1977. Semantics. Volume 1. Cambridge: Cambridge University Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago, IL: Chicago University Press.
McNeill, David 2005. Gesture and Thought. Chicago, IL: Chicago University Press.
McNeill, David volume 1. The growth point hypothesis of language and gesture as a dynamic and integrated system. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistic and Communication Science 38.1.), 135–155. Berlin/Boston: De Gruyter Mouton.
McNeill, David and Susan Duncan 2000. Growth points in thinking-for-speaking. In: David McNeill (ed.), Language and Gesture, 141–161. Cambridge: Cambridge University Press.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Paul, Hermann 1968. Prinzipien der Sprachgeschichte. Tübingen: Max Niemeyer. First published [1880].
Peirce, Charles S. 1931–58. Collected Papers. Cambridge, MA: Harvard University Press.
Peirce, Charles S. 2000. Semiotische Schriften. Volumes 1–3. Frankfurt a. M.: Suhrkamp.
Putnam, Hilary 1975. The meaning of “meaning”. In: Keith Gunderson (ed.), Language, Mind, and Knowledge, 131–193. Minneapolis: University of Minnesota Press.
Rosch, Eleanor 1977. Human categorization. In: Neil Warren (ed.), Studies in Cross-cultural Psychology, 1–49. London: Academic Press.
Rosch, Eleanor and Carolyn B. Mervis 1975. Family resemblances: Studies in the internal structure of categories. Cognitive Psychology 7: 573–605.
Schegloff, Emanuel A. 1984. On some gestures’ relation to speech. In: J. Maxwell Atkinson and John Heritage (eds.), Structures of Social Action: Studies in Conversational Analysis, 266–298. Cambridge: Cambridge University Press.
Seiler, Hansjakob 1978. Determination: A functional dimension for interlanguage comparison. In: Hansjakob Seiler (ed.), Language Universals, 301–328. Tübingen: Narr.
Wundt, Wilhelm 1904. Völkerpsychologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte. Volume 1: Die Sprache. Leipzig: Engelmann. First published [1900].

Ellen Fricke, Chemnitz (Germany)

136. Deixis, gesture, and embodiment from a linguistic point of view


1. Introduction
2. Deixis and indexicality: The Bühlerian and the Anglo-American tradition
3. Deixis, pointing, and naming: Commonalities and differences
4. The deictic relation I: The origo
5. The deictic relation II: The deictic object
6. Conclusion and outlook: The embodied deictic relation
7. References

Abstract

This chapter gives an overview of the relation between deixis, gesture, and embodiment from a linguistic point of view. Differences between the Bühlerian and the Anglo-American traditions of deixis theory are treated with respect to four main points: 1. the scope of the notion of deixis, 2. the concept of origo, 3. the concepts of deictic reference and deictic object, and 4. the role of the human body, including gestures. Although the term “deixis” is originally based on the idea of drawing attention to something by means of pointing, it is shown that linguistic deixis is not limited to pointing, nor can verbal deixis be derived from pointing gestures alone. Moreover, iconic gestures produced with the deictic utterance are revealed to be an indispensable part of multimodal deixis. In the following, theoretical contradictions inherent in both traditions of deixis theory are discussed, and Fricke’s proposal of origo-allocating acts and her distinction between deixis at signs vs. non-signs are presented.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1803–1823

1. Introduction

The term “deixis”, which comes from a Greek word meaning ‘pointing’ or ‘indicating’, is “based upon the idea of identification, or drawing attention to, by pointing” (Lyons 1977: 636). In linguistics, the term is now mainly used to refer to verbal expressions with a deictic function (e.g., Bühler [1934] 1982a, 1982b, [1934] 1990; Fillmore 1982, 1997; Lyons 1977; Levinson [1983] 1992, 2004). For mutual understanding in face-to-face interaction, speakers and their addressees need to be simultaneously engaged in perception, imagination, and other cognitive processes. Deixis assumes a particular function in the coordination of mental representations as well as social interaction: It can be understood as a communicative and cognitive procedure in which the speaker focuses the attention of the addressee by means of words, gestures, and other directive clues; these diverse means of expression co-produce context as a common ground (e.g., Bühler 1982a, 1982b, 1990; Clark 1996, 2003; Clark, Schreuder, and Buttrick 1983; Diessel 2006; Ehlich 1985, 2007; Enfield 2003, 2009, volume 1; Fricke 2002, 2003, 2007, in preparation a, b; Goodwin 1986, 2000a, 2000b, 2003; Hanks 1990, 1992, 1993, 2005, 2009; Haviland 1993, 2003; Hausendorf 2003; Kendon and Versante 2003; Kendon 2004; Tomasello 1995, 2008, 2009; Streeck 1993; Stukenbrock 2014). According to Tomasello, joint attention implies viewing the behavior of others as intentionally driven: “Thus, to interpret a pointing gesture one must be able to determine: what is the intention in directing my


VIII. Gesture and language

attention in this way?” (Tomasello 2008: 4). He claims that the abilities of human and non-human primates differ in this respect: Whereas humans are normally adept at “intention-reading”, apes appear to be capable of only “imperative pointing”, which does not necessarily involve joint attention (cf. Tomasello 2008). Despite many publications on deixis and pointing in recent years, the multimodal collaboration of gesture and deixis in language and multimodal utterances is still an understudied area in linguistic pragmatics and semantics, as well as in gesture studies. Since gesture is one important way to focus the addressee’s attention on the visible context of the utterance, it has a central role in deixis (Bühler 1990; Levinson 2004). Some occurrences of verbal deictics, e.g., the demonstrative this in the utterance I mean this book, not that one!, obligatorily require a directive pointing gesture to accompany them (Bühler 1990: 107). Fillmore (1997: 62–63) termed such multimodal occurrences (both optional and obligatory ones) the “gestural use” of verbal deictics, in contrast to symbolic and anaphoric use. Within gesture studies, there have been several investigations into pointing that focus on the different articulators that might instantiate this directive function, e.g., pointing with the lips (Sherzer 1973; Enfield 2001; Wilkins 2003), with gaze (e.g., Goodwin 1980; Heath 1986; Kendon 1990; Kita 2003a; Streeck 1993, 1994, 2002; Stukenbrock 2013), with the nose (Cooperrider and Núñez 2012), and of course with different kinds of hand movements (e.g., Haviland 1993, 2003; Kendon 2004; Kendon and Versante 2003; Fricke 2007, 2010; Jarmołowicz-Nowikow this volume; Stukenbrock 2013; Wilkins 2003).
Further relevant perspectives are provided by other fields of research, for example, child development studies (e.g., Butterworth 2003; Butterworth and Morissette 1996; Clark 1978; Goldin-Meadow and Butcher 2003; Liszkowski 2005; Pizzuto and Capobianco 2005), psychology (e.g., McNeill 1992, 2003, 2005; Levy and Fowler 2000; Kita 2003a; for an overview see Kita 2003b), primatology (e.g., Povinelli, Bering, and Giambrone 2003; Tomasello 2008; for an overview see Pika et al. 2005), conversation analysis (e.g., Goodwin 2000b, 2003; Mondada 2002, 2007; Schmitt and Deppermann 2010; Streeck 1993, 1994, 2002; Stukenbrock 2013; for an overview see Mondada volume 1), anthropology (e.g., Enfield 2001, 2009; Hanks 1990, 2005, 2009; Haviland 1993, 2003; Senft 2004), linguistics and semiotics (e.g., Enfield 2001, 2003, 2009; Fricke 2002, 2003, 2007, in preparation a, b; Harweg 1976; Hausendorf 2003; Müller 1996; Schmauks 1991), and sign language linguistics (e.g., Cormier 2012; Engberg-Pedersen 2003; Liddell 2000; Pizzuto and Capobianco 2008; for an overview of speech, sign, and gesture see Wilcox volume 1). For an introductory overview covering a wide range of fields, see the collected volume Pointing: Where Language, Culture, and Cognition Meet edited by Sotaro Kita (2003c). However, deixis is not limited to pointing, nor can verbal deixis be derived from gestural deixis alone, as Bühler (1990) emphasizes. Although Bühler uses the analogy to pointing as a starting point for analyzing verbal deictics, he makes it unmistakably clear that he considers “the deictic origin of language”, which means the temporal priority of pointing without naming, a “myth” (Bühler 1990: 100–101). “Pointing is pointing”, Bühler states, “and never anything more, whether I do it mutely with my finger or doubly with finger and a sound to accompany the gesture” (Bühler 1990: 102) (for details, see section 3).
It should be noted that, according to Bühler (1990: 127), the pointing gesture and its function can also be replaced “by indirect situational evidence or conventional interpretation clues”, e.g., the origo and what he calls the “tactile body image”, whose participation is indispensable for any deictic function (Bühler 1990: 146) to be fulfilled. “Without such guides”, Bühler claims, “every deictic word would in a sense be sent off in random; it would indicate nothing more than a sphere, a ‘geometrical place’ to us, but that is not enough for us to find something there” (Bühler 1990: 127–128).

2. Deixis and indexicality: The Bühlerian and the Anglo-American traditions

Deixis has been generally characterized as introducing context-dependent properties into language (for an overview see Levinson 2004). With respect to different lines of deixis theory, there are mainly two senses in which deixis is discussed: The Anglo-American tradition considers indexicality (context dependency in the broader sense) the defining criterion for deixis, whereas the European tradition, in the line of Bühler, defines deixis as origo-dependent (context dependency in a narrower sense) and considers deictic expressions as obligatorily standing in relation to the origo (cf. Fricke 2002, 2003, 2007). According to Bühler, the origo is the indispensable deictic clue mentioned above and constitutes the deictic center of the utterance, which in the default case is instantiated by the temporal and local coordinates of the actual speaker as well as his actual communicative role. In Anglo-American deixis theory, deictic expressions are conceived of as indexical expressions that depend on context elements in the utterance situation but are not necessarily relative to the origo (e.g., Fillmore 1982, 1997; Levinson 1992, 2004; Lyons 1977). Levinson (2004: 97) characterizes the terms “deixis” and “indexicality” as “coextensive – they reflect different traditions […] and have become associated with linguistic and philosophical approaches respectively”. However, in the Anglo-American tradition, if Bühler’s term “origo” is used at all, it is conceived of as “the indexical ground of reference” (Hanks 1992; see also Fillmore 1982: 45). According to Hanks, “deictic reference organizes the field of interaction into a foreground upon a background, as Figure and Ground organize the visual field” (Hanks 1990: 40–41).
The distinction between Figure and Ground (see also Talmy 1978) within Anglo-American deixis theory can be traced back to Fillmore (1982: 45), who gives the example of verbal local deictics that use the speaker’s body (or in some cases the addressee’s body) as perceivable Ground. In contrast to Bühler, deixis here is entirely limited to perceptual deixis, or demonstratio ad oculos in Bühlerian terms. Imagination-oriented deixis (e.g., displacement), as conceived by Bühler, is classified as non-deictic: “What justifies me in describing it as non-deictic is its not being anchored in the current speech event in which the utterance is produced” (Fillmore 1982: 38). Following Fillmore, nearly the entire Anglo-American line of deixis theory limits the general term “deixis” to perceptual deixis and, in addition, to the actual speaker and his spatio-temporal coordinates (Fricke 2002, 2003, 2007; see also Hanks 2005: 196 on the “spatialist” and “interactive” points of view in Anglo-American deixis theory). This limitation can also be observed with respect to the common distinction between the terms “deictic” and “intrinsic” in local deixis (cf. Fricke 2002, 2003). Miller and Johnson-Laird give the following characterization: We will call the linguistic system for talking about space relative to a speaker’s egocentric origin and coordinate axes the deictic system. We will contrast the deictic system with the intrinsic system, where spatial terms are interpreted relative to coordinate axes derived from


intrinsic parts of the referent itself. Another way to phrase this distinction is to say that in the deictic system spatial terms are interpreted relative to intrinsic parts of ego, whereas in the intrinsic system they are interpreted relative to intrinsic parts of something else. (1976: 396)

In this quotation, the concept of deixis is not clearly distinguished from the concept of an intrinsic frame of reference. Rather, the intrinsic system of the speaker is opposed to that of a non-speaker: If a deictic system is present, then the speaker makes himself and his intrinsic coordinates a reference point for a linguistic localization. However, if an intrinsic system is present, then the reference point lies with an entity that is not the speaker and is derived from the inherent properties of that entity. The object referred to must have a clear front and back so that a spatial coordinate system can be constructed and fixed, e.g., vehicles like cars. Examples of non-intrinsic entities are balls, bushes, and columns. The concept of intrinsic can also be traced back to Bühler, who, without actually using the term, deals with the intrinsic as part of deixis: […] we may view the important case, for example, of a vehicle (carriage, ship, locomotive, car) where one’s orientation immediately and not only conceptually, but of necessity perceptually, follows the conventional direction of movement of the object. Just as naturally as with animals and other humans. When a teacher of gymnastics facing a dressed line of gymnasts gives commands, the orders left and right are conventionally given and understood according to the gymnasts’ orientation. That is the paradigmatic case for whose explanation one must note the astonishingly easy translatability of all field values of the visual system and the verbal deictic system from someone in another plane of orientation. (Bühler 1982b: 36–37)

In his approach, Bühler argues for a movable origo conceived of as an abstract mathematical point in a Cartesian coordinate system, whereas Miller and Johnson-Laird assume an “origo” fixed to the speaker and his actual spatio-temporal coordinates in perceptual space. Therefore, they exclude phenomena that they consider to be non-deictic but that fall under Bühler’s wider concept of deixis. It is worth pointing out that the direction of transfer of the respective coordinate systems is exactly the opposite one: Miller and Johnson-Laird use the intrinsic coordinate system of an object as the starting point and conceptualize the deictic system as an intrinsic coordinate system connected to the actual speaker. Consequently, the intrinsic coordinates become, so to say, “deictic”. This means that, in their approach, speaker deixis is derived from object intrinsics, whereas Bühler derives object intrinsics from speaker deixis. In contrast to Miller and Johnson-Laird, Bühler allows for moving the egocentric origo to other people, creatures, and objects. As long as verbal expressions and gestures are relative to the origo, they are covered by his notion of deixis (for further details, see Fricke 2002, 2003, and 2007: 17–53). It is beyond the scope of this section to discuss this issue more deeply, but it should be noted that the Anglo-American and the Bühlerian traditions differ with respect to the following main points:

(i) the scope of the notion of deixis,
(ii) the concept of origo or deictic center (concrete and fixed vs. abstract and movable),
(iii) the concepts of deictic reference and deictic space (perceivable entities vs. perceivable as well as imaginary entities), and
(iv) the role of the human body (marginalized vs. crucial).
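The contrast between a deictic and an intrinsic interpretation of a term like left can be made computationally concrete. The following Python fragment is a hypothetical formalization for illustration only (entity names, the 2D setup, and the function are invented here, not part of either theory): the same spatial term is evaluated relative to the frame of a chosen "bearer" — the speaker in the deictic system, the referent object itself in the intrinsic system.

```python
from dataclasses import dataclass
import math

@dataclass
class Entity:
    name: str
    x: float
    y: float
    heading: float  # facing direction in radians

def is_left_of(target: Entity, bearer: Entity) -> bool:
    """True if `target` lies to the left in `bearer`'s frame.

    In the deictic system the bearer is the speaker; in the intrinsic
    system it is the referent object itself (e.g., a car with a front).
    """
    dx, dy = target.x - bearer.x, target.y - bearer.y
    # Rotate the offset into the bearer's frame; positive = on the left.
    left = -dx * math.sin(bearer.heading) + dy * math.cos(bearer.heading)
    return left > 0

speaker = Entity("speaker", 0, 0, heading=0.0)   # facing +x
car = Entity("car", 5, 0, heading=math.pi)       # facing -x, toward speaker
ball = Entity("ball", 5, 2, heading=0.0)

print(is_left_of(ball, speaker))  # deictic reading: True
print(is_left_of(ball, car))      # intrinsic reading: False
```

The two readings diverge exactly as in the gymnastics example quoted above: because the car faces the speaker, what is on the speaker's left is on the car's right.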


The following sections are based on the Bühlerian tradition of deixis theory. Bühler is considered to be the most influential founder of modern deixis theory, and most basic categories can be traced back to him (Klein 1978). Nevertheless, the crucial role of the human body in Bühlerian concepts, e.g., the “tactile body image”, which anticipates the concepts of “image schema” and “embodiment” in current cognitive linguistics (e.g., Johnson 1987; Lakoff 1987; Geeraerts 2010; Hampe 2005; for an overview of image schemas and gesture see Cienki 2005, volume 1; on embodiment Ziemke, Zlatev, and Frank 2007; Sonesson 2007; Zlatev volume 1), and the investigation of pointing gestures in particular have been neglected in recent research, which would no doubt benefit from a thorough reflection on his theoretical approach. Moreover, considering that no video recordings were available in Bühler’s time, his Sprachtheorie (‘Theory of Language’, 1934) offers surprising insights into deixis based on his precise observations of everyday communication.

3. Deixis, pointing, and naming: Commonalities and differences

3.1. Pointing in Bühlerian deixis theory

In contrast to “symbols” or “naming words”, according to Bühler (1982a, 1982b, 1990), “deictic words” or “pointing words” are characterized by the fact that they are only interpretable by recourse to an origo, which by default is connected with the speaker. Deictic words belong to the “deictic field of language”, whereas naming words or symbolic words belong to its “symbolic field”. Bühler introduces his chapter on the deictic field of language with the description of a signpost imitating an outstretched arm: “The arm and finger gesture of man, to which the index finger owes its name, recurs when the signpost imitates the outstretched ‘arm’; in addition to the arrow symbol, this gesture is a widespread sign to point the way or direction” (Bühler 1990: 93). He continues: “If all goes well it does good service to the traveler; and the first requirement is that it must be correctly positioned in its deictic field” (Bühler 1990: 93). To paraphrase the essence of the first paragraphs of his introduction, imagine your arm performing a pointing gesture. It is as if a straight line were drawn between two points, i.e., the tip of your extended index finger and the point where your body is located, the origo or origin. Depending on who performs the pointing gesture, you or another person, and depending on where in the room the speaker is, the extension of the straight line would lead to different target points in the room. Bühler assumes similar properties for certain verbal expressions like I, you, here, there, now, or then. These verbal expressions, called “deictics”, refer to different situational context elements, depending on when, where, and by whom they are uttered. According to Bühler, verbal deictics like here and co-speech pointing gestures can only be interpreted in relation to an origo: They are origo-relative expressions (cf. Fricke 2002, 2003).
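The straight-line paraphrase of pointing can be illustrated geometrically: extending the same arm direction from different origos singles out different target points. The sketch below is a toy illustration under invented assumptions (2D coordinates, a fixed extension length); it is not drawn from Bühler or Fricke.

```python
def pointing_target(origo, fingertip, reach):
    """Extend the origo-to-fingertip ray by `reach` units beyond the fingertip."""
    dx, dy = fingertip[0] - origo[0], fingertip[1] - origo[1]
    norm = (dx ** 2 + dy ** 2) ** 0.5
    return (fingertip[0] + reach * dx / norm, fingertip[1] + reach * dy / norm)

# The same arm direction (fingertip offset (1, 0)) performed from two
# different origos leads to two different target points in the room:
print(pointing_target((0, 0), (1, 0), reach=3))  # (4.0, 0.0)
print(pointing_target((2, 2), (3, 2), reach=3))  # (6.0, 2.0)
```

The function's output depends on the origo argument in just the way the text describes: the gesture form is constant, but the target it picks out is origo-relative.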
It should be mentioned that Bühler emphasizes an aspect called “sociocentricity” (Hanks 1990: 7) in later approaches. He considers deixis to be a “complex human act” and a social undertaking within which “the sender does not just have a certain position in the countryside as does the sign post; he also plays a role; the role of the sender as distinct from the role of the receiver” (Bühler 1990: 93). Bühler further points out that “it takes two to tango, two are needed for every social undertaking, and the concrete speech event must first be described in terms of the full model of verbal communication.


If a speaker ‘wishes to indicate’ the sender of the present word, he says I, and if he wishes to indicate the receiver he says thou” (Bühler 1990: 93). Bühler’s concept of linguistic deixis is derived from the analogy to pointing but differs from it with regard to two main aspects: Firstly, as mentioned above, linguistic deixis cannot be fully derived from pointing, and secondly, linguistic deixis requires the presence of at least a minimal ingredient of naming or, in other words, deictics are signals as well as symbols and manifest the functions of both appeal and representation in Bühler’s organon model of language (Bühler 1990). Bühler cannot be misunderstood on this matter, as he makes the following statement about verbal deictics: They, too, are symbols (and not only signals); da and dort (there) symbolize, they name an area, they name the geometrical location, so to speak, that is, an area around the person now speaking within which what is pointed at can be found; just as the word heute (today) in fact names the totality of all days on which it can be spoken, and the word I all possible senders of human messages, and the word thou the class of all receivers as such. (Bühler 1990: 104)

According to Bühler, “the simple reference to something to be found here or there, at a certain place in the sphere of actual perception, must clearly be distinguished from the quite different information that it is of such and such character” (Bühler 1990: 102). From this, Bühler concludes that “pointing is pointing and never anything more”, but it may complement the naming function of the sound. This means that naming and pointing cannot be derived from each other although, according to Bühler, “they are able to complement each other” as parts of utterance formation, later called “gesture–speech ensembles” by Kendon (Kendon 2004: 127). Bühler also clearly recognizes the parallel between symbolic naming and iconic imitation: “A mute gesture, too, can characterize what is meant by imitating it; the sound symbolizes it” (Bühler 1990: 102). With respect to the complementary function of co-speech pointing, Bühler states the necessity of deictic clues in general, not only simple pointing but also substitutes like situational evidence or particular conventionalized clues. Concerning the necessity of co-speech pointing in utterances containing particular verbal deictics, the case is relatively clear. In order to focus the addressee’s attention on the reference object intended by the speaker, the latter is obliged to use directive body movements accompanying the utterance I mean this book, not that one!, already introduced above. It should be noted that obligatory pointing “might hold true for certain deictics but not for all cases to be included” (Hausendorf 2003: 262). An example of situational evidence is using the noun phrase this book here in order to refer to the only book present in the utterance situation. It is obvious that in this case no complementary directive information is necessary. But what is meant by “conventionalized clues”?
Bühler illustrates this case by analyzing the anaphoric use of this and that: Where is there supposed to be such a sensual guide when in German I use the words dieser and jener [respectively this = the latter and that = the former] to refer to what has just been spoken of in the utterance? In this case there is admittedly no sensible guide. But to replace it, a convention takes effect that the hearer should look back at what was last named as the nearer thing when he hears dieser [this, the latter] and at what was first named as the more remote thing when he hears jener [that, the former] and that he should resume thinking about them. (Bühler 1990: 128)


It should be mentioned that German differs from English in this respect. In English former and latter use the beginning of the text as the origin of reference, whereas in German the origin is instantiated by the point at which the verbal deictics actually occur (Bühler 1990: 128). These contrary conventions in anaphoric use might also shed light on the contrary directionality of the Bühlerian and Anglo-American conceptions of deictic and intrinsic in deixis theory already mentioned above. So far we have distinguished between pointing, deixis, and naming. Although Bühler himself only deals with co-speech pointing gestures as complements to verbal deictics, we are allowed to ask: How deictic are pointing gestures in themselves? Bühler leaves no doubt as to how the question should be answered: To be considered as fully deictic, pointing gestures require a complementary naming function characterizing the reference object intended by the speaker.

3.2. Examples of pointing and naming in gesture

Considering co-speech gestures alone, particular forms of conventionalized pointing gestures can be conceived of as integrating the Bühlerian naming function and, therefore, as fully deictic. Examples are the G-form, with an extended index finger and the palm oriented downwards, and the palm-lateral-open-hand gesture (PLOH). Analogous to verbal local deictics, which differ with respect to their denotatum, e.g., space (here) vs. object (this), these two typified forms of pointing are at the same time semanticized with different meanings. The G-form is semanticized with a meaning which can be paraphrased as ‘pointing to an object’, whereas the meaning of the palm-lateral-open-hand gesture is directive (‘pointing in a direction’).

Fig. 136.1: Two types of pointing gestures in German: G-Form and palm-lateral-open-hand (Fricke 2007, this volume)

Differentiation between the palm-lateral-open-hand gesture and the G-form has been observed with respect to both form and meaning – at least for single occurrences – in Italian (Kendon and Versante 2003; Kendon 2004) and, as a quantitative study has shown, also in German (Fricke 2007, 2010; for this differentiation in other languages, see also Haviland 1993, 2003; Jarmołowicz-Nowikow this volume; Wilkins 2003). Fricke classifies such partially conventionalized form-meaning relations – in contrast to fully lexicalized ones – as “kinesthematic” (Fricke 2010, 2012, this volume). Since other non-conventionalized body movements, for example, pointing with the elbow, provide purely directional information without characterizing the target to which the speaker is drawing


the addressee’s attention, from a linguistic point of view, they are to be classified as only proto-deictic. Proto-deictic instances of pointing serve as optional or obligatory complements in concert with verbal deictics to achieve the full deictic function required by the type of utterance as well as the communicative goal of the speaker. Since both co-speech gestures alone and verbal deictics alone are capable of instantiating the full deictic function by integrating pointing and naming, this implies that gesture as a medium shows the potential for unfolding language-like properties in the Bühlerian “deictic field” (cf. Wundt [1900] 1904, [1900] 1973).
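The classification developed in this section — fully deictic when directive and naming functions combine, proto-deictic when only direction is given — can be summarized in a small decision sketch. This is an invented illustration of the Bühlerian criterion as presented above, not a claim about Fricke's formal apparatus.

```python
def deictic_status(directive: bool, naming: bool) -> str:
    """Classify a sign by the Bühlerian criterion discussed above."""
    if directive and naming:
        return "fully deictic"    # e.g., G-form, PLOH, verbal deictics
    if directive:
        return "proto-deictic"    # e.g., pointing with the elbow
    if naming:
        return "symbolic"         # naming without pointing
    return "non-deictic"

print(deictic_status(directive=True, naming=True))   # fully deictic
print(deictic_status(directive=True, naming=False))  # proto-deictic
```

On this view, both a semanticized pointing gesture and a verbal deictic reach the "fully deictic" branch on their own, which is the sense in which gesture can unfold language-like properties in the deictic field.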

4. The deictic relation I: The origo

In the simplest case, the deictic relation is conceived of as a two-place relation between the origo and the deictic object. More elaborated concepts provide a three-place relation consisting of the origo, the deictic object, and an optional relatum object (Herrmann and Schweizer 1998: 51). Let us consider the following situation in Fig. 136.2 and assume that the speaker wants to inform the addressee about where the pliers are located.

Fig. 136.2: The deictic relation according to Herrmann and Schweizer (1998)

Given the conditions shown in the illustration, the speaker can produce the following three equally appropriate utterances:

(1) The pliers are in front of me (Origo: speaker, relatum: speaker, intended object: pliers).
(2) The pliers are behind the car (Origo: car, relatum: car, intended object: pliers).
(3) The pliers are to the left of the car (Origo: speaker, relatum: car, intended object: pliers).

The object to be located, the pliers, is called the intended object and is the same in all three utterances. This intended object is located in relation to another object, the relatum, and to the origo. The relatum and the origo can be instantiated by different entities, in this case, either by the speaker or the car. The utterances can be divided into two groups, namely three-point localizations and two-point localizations, depending on whether there are three different entities or only two that instantiate the position of the origo, the relatum, and the intended object. Utterance (3) is an example of three-point localization, whereas utterances (1) and (2) are examples of two-point localization. Herrmann’s 6H model consists of altogether six main variants, which result from assigning the origo either to the speaker, to the addressee, or to a third party.
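Herrmann and Schweizer's three-place relation lends itself to a simple data-structure sketch. The snippet below is an illustrative formalization (the class and its slots merely mirror the role labels used above; it is not a published implementation): it classifies utterances (1)–(3) as two- or three-point localizations by counting the distinct entities filling the origo, relatum, and intended-object slots.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Localization:
    utterance: str
    origo: str
    relatum: str
    intended: str

    def kind(self) -> str:
        # Three distinct fillers -> three-point, otherwise two-point.
        n = len({self.origo, self.relatum, self.intended})
        return "three-point" if n == 3 else "two-point"

examples = [
    Localization("The pliers are in front of me", "speaker", "speaker", "pliers"),
    Localization("The pliers are behind the car", "car", "car", "pliers"),
    Localization("The pliers are to the left of the car", "speaker", "car", "pliers"),
]

for loc in examples:
    print(f"{loc.utterance!r}: {loc.kind()}")
# (1) and (2) come out as two-point, (3) as three-point localizations.
```

Counting distinct slot fillers reproduces the grouping stated in the text without any further machinery, which is why the two-/three-point distinction is often presented as purely configurational.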


4.1. The Bühlerian origo and tactile body image as predecessors of image schemas

Bühler’s term “origo”, which in a first step is derived from the analogy to a pointing gesture (see section 3.1), is conceptualized in a second step as the origin of a Cartesian coordinate system, which is used to organize the personal, temporal, and spatial structure of utterances. Let two perpendicularly intersecting lines on the paper suggest a coordinate system to us, O for the origin, the coordinate source: […] My claim is that if this arrangement is to represent the deictic field of human language, three deictic words must be placed where the O is, namely the deictic words here, now and I. (Bühler 1990: 117)

As will be demonstrated in section 4.2, Bühler’s definition of origo is not completely adequate to specify the phenomena associated with deixis, but it takes an important aspect into account, namely, that the origo is under no circumstances to be identified with a concrete component of the situation. The Cartesian coordinate system, adopted from the sphere of mathematics, implies that Bühler thinks of origos in terms of abstract mathematical points. The direction of his thinking progresses from the concrete pointing gesture as a starting point, via abstract geometrical vectors, to abstract single points expressed in terms of algebra. Such origos are included in the meaning of the respective deictic expressions, e.g., the deictic here is considered to be origo-inclusive. This means that the respective relation to the origo has to be anchored in the utterance situation by instantiating it with perceptible or imaginary entities. Bühler’s second way of detaching the origo from concrete perceptible entities is to assume the existence of a tactile body image, which resembles modern concepts of image schema as embodied experiential gestalts. Image schemas are defined as “a recurring dynamic pattern of our motor programs that gives coherence and structure to our experience” (Johnson 1987: xiv). The core list of image schemas, taken from Johnson (1987) and Lakoff (1987), includes the body axes, e.g., FRONT-BACK and UP-DOWN. Bühler’s tactile body image is similarly grounded in sensory experience and connected with the speaker’s body axes: When the same person uses words like in front – behind, right – left, above – below, another fact becomes apparent, namely the fact that he senses his body, too, in relation to his optical orientation, and employs it to point. His (conscious, experienced) tactile body image has a position in relation to visual space. (Bühler 1990: 129)

The tactile body image plays a crucial role in processes of deictic displacement, which is the so-called second main case of Bühler’s deixis ad phantasma (imagination-oriented deixis). The origo is displaced within perceptual space or imagined space. Thus the verbal deictics used by the speaker are not interpreted in relation to his current orientation but rather in relation to another grounding system, a virtual image that he creates of himself. The speaker’s tactile body image wanders with the origo as it is described by Bühler in the following quotation: When Mohammed feels displaced to the mountain, his present tactile body image is connected with an imagined optical scene. For this reason he is able to use the local deictic


words here and there (hier, da, dort) and the directional words forwards, back; right left on the phantasy product or imagined object just as well as in the primary situation of actual perception. And the same holds for the hearer. (Bühler 1990: 153)

The explicit link between the concept of origo and the concept of tactile body image reveals Bühler’s conception to be an early predecessor of crucial concepts of embodiment in cognitive linguistics. Concerning modern deixis theory, one important implication of his thoughts is that by abstracting the origo from pointing and the whole body from mere sensory orientation, Bühler makes way for leaving behind the limitations of the so-called “canonical utterance situation” (Lyons 1977: 638). One argument against a concrete, physically defined origo fixed to the speaker is the phenomenon of movable origos in deictic displacements: A physical point in space and time cannot be mentally shifted (for a detailed discussion, see Fricke 2002, 2003, 2007). The assumption of a movable origo is not limited to Bühler alone and became widely accepted in later approaches. However, Bühler (1990: 117) seems to assume one single origo for all dimensions: a mutual starting point for personal, local, and temporal deixis. This raises the question of whether the assumption of one origo is sufficient. Sennholz (1985: 24) notes that there cannot be a single origo for all dimensions since, in some circumstances, several deictics used in one and the same speech sequence can each have their own origo. Fricke (2002, 2003) gives examples of co-speech gestures that can only be analyzed if basic concepts of deixis theory are changed: firstly, the concept of origo (section 4.2), and secondly, the concept of the deictic object (section 5).

4.2. Fricke's model of origo-allocating acts

Fricke's empirical analyses of multimodal deixis show that, with respect to the local dimension, not only several inter-dimensional but also intra-dimensional origos are present (see example 4 and Fig. 136.4 in section 4.3). Her concept of origo-allocating acts therefore posits a hierarchical structure of origos, beginning with a primary origo connected to the role of the speaker.

Fig. 136.3: The origo-allocating act according to Fricke (2002, 2007)

136. Deixis, gesture, and embodiment from a linguistic point of view

As turn-taking results in changes of communicative role, whoever is the speaker attains a primary origo and, with it, the possibility of intentionally creating secondary origos by means of origo allocation. In multimodal communication, these secondary origos can be instantiated by perceptible and imaginary entities, which can be interpreted either as signs or as non-signs (Fricke 2002, 2003, 2007, in preparation b). If we assume that an origo is not necessarily fixed to the speaker but can be transferred to other people and objects, then origo allocation is not simply a result of acquiring the speaker's role during turn-taking: when origos are allocated, an intentionally driven agent can be presumed to carry out origo allocation and instantiation (Fricke 2002, 2003). By talking to somebody, a person acquires the speaker's role and, with it, the right to allocate local origos or to provide the local origo with intrinsically oriented entities. Such an entity can also be the speaker himself (Fricke 2002, 2003). It is therefore important to distinguish two different things: firstly, the speaker who, in his role as speaker and holder of a primary origo, intentionally allocates the secondary origos; and secondly, the speaker who, as an intrinsically oriented entity, instantiates a secondary origo. If the function of origo allocation is connected with the role of the speaker, then the personal dimension is the highest dimension in the hierarchy: the right to allocate origos changes with the communicative role.
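This hierarchy of origo-allocating acts can be pictured as a small data structure. The following Python sketch is purely illustrative: the classes `Origo` and `Speaker` and all labels are invented here and are not part of Fricke's formalism. It only models the structural claim that a primary origo is tied to the speaker role and that secondary origos are created subordinate to it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Origo:
    # The entity instantiating the origo: the speaker's own body,
    # the addressee, an imagined wanderer, etc.
    bearer: str
    parent: Optional["Origo"] = None

class Speaker:
    """Holder of the primary origo by virtue of the speaker role."""
    def __init__(self, name: str):
        self.name = name
        self.primary = Origo(bearer=name)

    def allocate(self, bearer: str) -> Origo:
        # Only the current speaker may allocate secondary origos;
        # each one remains subordinate to the primary origo.
        return Origo(bearer=bearer, parent=self.primary)

# Schematically, as in example (4) below: verbal and gestural origos
# diverge, but both are secondary to speaker A's single primary origo.
a = Speaker("A")
verbal_origo = a.allocate("addressee B (imaginary wanderer)")
gestural_origo = a.allocate("A's own body")
assert verbal_origo.parent is gestural_origo.parent is a.primary
```

In this toy model, turn-taking would correspond to constructing a new `Speaker` and thereby a new primary origo, mirroring the claim that the right to allocate origos changes with the communicative role.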

4.3. Examples of verbal and gestural displacement as origo allocation

The following examples are taken from video recordings of route descriptions at Potsdamer Platz in Berlin. The speaker in example 4 has not physically followed the route at Potsdamer Platz herself; she tries to relay to the addressee, as precisely as possible, the route description given to her by a third person. We can observe that while the intended deictic object at the verbal and gestural level is the same, namely the Stella Musical Theater, the gestural and verbal origos differ: the speaker allocates the verbal local origo to the addressee and localizes the theater with the verbal expression rechts von dir ('on your right') in relation to the body axes of the addressee standing opposite her, whereas she allocates the gestural local origo to her own body and its actual orientation.

(4) Und dann soll dort ein großer Platz sein und [rechts von dir] ist dies Stella-Musicaltheater […]
'And then there should be a large square and [on your right] is the Stella Musical Theater […]'

The Peircean concept of sign (Peirce 1931–58) can be applied to explain the disparity between the gestural and verbal origos in example 4: By using the verbal utterance rechts von dir ('on your right'), the speaker, as holder of the primary origo, allocates a secondary local origo on the verbal level to the addressee B as an imaginary wanderer. At the same time, on the gestural level, she allocates a secondary origo to her own body and its front-back and left-right axes. Since the body of the speaker, as a human being, is analogous to that of the addressee B projected into the future, the speaker allows herself to be understood as a model that represents the addressee B.

Fig. 136.4: Pointing gesture accompanying rechts von dir ('on your right')

Thus, on the gestural level, the speaker does not shift her perspective so that it correlates with that of the addressee, but rather instantiates the origo with her own body, which functions as an iconic sign of the imaginary wanderer (for a detailed discussion, see Fricke 2002, 2003). Since the origin of the typical pointing gesture is the body of the person performing it, one would tend to think that no gestural displacements are possible. But this is not true, as the example of the palm-lateral-open-hand gesture in Fig. 136.1 demonstrates. The direction of the speaker's palm correlates with the front-back axis of the addressee: while the speaker is uttering und gehst hier geradeaus ('and go straight ahead'), the gestural local origo and the verbal origo are both instantiated, not by the speaker, but by the body of the addressee. Why is this? If the speaker had allocated the gestural origo to her own body, then the palm-lateral-open-hand gesture would have been parallel with her own front-back axis. In this example, the speaker does not place herself in the shoes of the future addressee but uses the body of the actual addressee B as an iconic sign for the addressee B projected into the future.
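The divergence at work in example 4 — one and the same target counts as "right" relative to one origo but not relative to another with a different orientation — can be made concrete with a toy computation. The sketch below is a hypothetical illustration (the function name, coordinates, and headings are all invented, not drawn from Fricke's data): it classifies a target relative to an origo's position and facing direction in a 2D plane.

```python
import math

def relative_direction(origo_pos, heading_deg, target_pos):
    """Classify a target as 'front', 'back', 'left', or 'right'
    relative to an origo with a position and a facing direction."""
    dx = target_pos[0] - origo_pos[0]
    dy = target_pos[1] - origo_pos[1]
    # Bearing of the target relative to the heading, normalized to [-180, 180).
    angle = math.degrees(math.atan2(dy, dx)) - heading_deg
    angle = (angle + 180) % 360 - 180
    if -45 <= angle <= 45:
        return "front"
    if angle > 135 or angle < -135:
        return "back"
    return "left" if angle > 0 else "right"

# A target due east is "right" for an origo facing north (90 degrees) ...
print(relative_direction((0, 0), 90, (5, 0)))   # right
# ... but "left" once the origo is allocated to someone facing south.
print(relative_direction((0, 0), 270, (5, 0)))  # left
```

The point of the toy model is only that deictic terms like rechts ('right') are functions of the origo's orientation: allocating the origo to a differently oriented entity changes the value for the same target.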

5. The deictic relation II: The deictic object

5.1. Bühler's distinction between perceptual deixis and imagination-oriented deixis

With regard to the deictic object, Bühler (1990) distinguishes between perceptual deixis, or demonstratio ad oculos, and imagination-oriented deixis, or deixis ad phantasma. The perceptibility of an entity is the criterion for classification as demonstratio ad oculos; the imaginary presence of an entity is the criterion for classification as deixis ad phantasma. Within imagination-oriented deixis, he further distinguishes three main cases. The first main case is characterized as a kind of theater stage on which the speaker performs like an actor. The second main case corresponds to the processes of displacement introduced above, in which the "mental" origo is not instantiated by the actual speaker but displaced to new positions or transferred to other entities. The third main case is characterized by the fact that "the person who is having the experience is able to indicate with his finger the direction in which the absent thing is seen with the mind's eye" (Bühler 1990: 152). Here the speaker does not shift a secondary origo (as in the second main case), nor is the intended object localized as an imaginary object within the actual perceptual space of speaker and addressee (as in the first main case). The only difference to demonstratio ad oculos is that the intended object – in Bühler's example, the object pointed at – is hidden and perceptually inaccessible in the actual utterance situation. This classification has been challenged by the impact of gesture studies on deixis theory. Based on her analyses of deixis in multimodal utterances, Fricke claims that the distinction between perceptual deixis and imagination-oriented deixis rests on the more fundamental distinction between deixis at non-signs vs. signs (Fricke 2002, 2003, 2007, 2008).

5.2. Beyond perception and imagination: Deixis at signs vs. non-signs

The distinction between perception and imagination, as introduced by Bühler for his deictic modes, is not a genuinely linguistic one. The scope of linguistics covers conventionalized signs that stand for other things; what these other things are has long been a matter of controversial discussion (e.g., Lyons 1977: 95–114). Accepting for the moment the view that all deictic communication between humans takes place by means of verbal or gestural signs, it would be preferable to have a concept of the deictic object based on the sign relation itself, instead of relying on non-linguistic ontological differences. In the following, we will focus on Bühler's first main case of deixis ad phantasma, which is characterized as a kind of theater stage on which the speaker performs like an actor:

'Here I was – he was there – the brook is there': the narrator begins thus with indicative gestures, and the stage is ready, the present space is transformed into a stage. We paperbound people will take a pencil in hand on such occasions and sketch the situation with a few lines. […] If there is no surface to draw a sketch on, then an animated speaker can temporarily 'transform' his own body with two outstretched arms into the pattern of the battle line. (Bühler 1990: 156; italics added by E.F.)

Considering the last sentence of this quotation, we can observe that the battle line embodied by the speaker's outstretched arms is perceptible, not imaginary. Although perceptibility is the distinguishing criterion for demonstratio ad oculos, this example is classified as deixis ad phantasma. What is Bühler's motivation for this? The answer lies in the alternative interpretation: if classified as demonstratio ad oculos, pointing at a "real" battle line in a battle could not be differentiated from pointing at an embodied battle line in a narration. The precise nature of the distinction that Bühler wishes to draw with regard to deictic modes and objects is thus in certain respects unclear. This is the reason why Fricke claims that the distinction between perceptual deixis and imagination-oriented deixis is based on the more fundamental distinction between deixis at non-signs vs. signs (Fricke 2002, 2003, 2007; see also Goodwin's concept of a "semiotic field" in Goodwin 2000b). Considering Bühler's characterization of imagination-oriented deixis, what all the examples he gives in the above quotation have in common is that the deictic object the speaker refers to, regardless of whether it is imaginary or not, is interpreted as a sign standing for something else. The embodied battle line in Bühler's example is not just a perceptible object but rather a perceptible sign that depicts an absent battle line (Fricke 2007, 2009).

5.3. Examples of deixis at signs vs. non-signs in multimodal interaction

5.3.1. Deixis at non-signs

The term "deixis at non-signs" refers to the default case of deixis, in which both communication partners have perceptual access to the reference object intended by the speaker. However, the respective concepts are essentially different: based on the Peircean concept of sign, deixis at non-signs is characterized by the fact that the entity that the pointing gesture or the verbal deictic refers to is not interpreted as a sign. This is illustrated by the following example, which is equivalent to the "canonical situation of utterance" introduced by Lyons (1977: 637):

(5) A: [du kommst hier vorne raus an dieser Straße (.)]
'you come out here right in front at this street (.)'

Fig. 136.5: Deixis at non-signs in example (5) (Fricke 2007)

Fig. 136.6: Deixis at non-signs as a Peircean sign configuration (Fricke 2007)

The pointing gesture in example (5) is directed at a target point instantiated by the entity “street” to which both the speaker and the addressee have perceptual access while communicating. The target object, or demonstratum, does not stand for something else. Therefore, it is not interpreted as a sign according to Peirce but is identical to the reference object intended by the speaker. In other words: With respect to both the speaker and addressee, the street is the street and nothing else; the demonstratum pointed at and the deictic reference object intended by the speaker do not differ at this point of their conversation.

5.3.2. Deixis at signs

The term "deixis at signs" is used when the deictic object (demonstratum) is an entity that is interpreted as standing for something else, as illustrated by the following example:

(6) A: [das iss die Arkaden/]
'that is the Arkaden'

While giving directions in the absence of the route described, the speaker (A) is pointing at the flat left hand of the addressee (B). This flat hand represents a certain building at Potsdamer Platz in Berlin, namely the Arkaden, a glass-covered shopping mall. In contrast to deixis at non-signs, the demonstratum and the reference object intended by the speaker are not identical: the flat hand that the speaker is pointing at is interpreted as a sign for the intended reference object, the Arkaden.

Fig. 136.7: Deixis at signs in example (6) (Fricke 2007)

Fig. 136.8: Deixis at signs as a Peircean sign configuration (Fricke 2007)

This relation is illustrated by the Peircean configuration of the sign processes in Fig. 136.8: the demonstratum of the pointing gesture R1 is the flat hand of the addressee, which is the object O1 of the first sign relation. At the same time, however, the flat hand functions as the sign vehicle, or representamen R2, in a second sign relation, standing for the intended reference object, the Arkaden (O2), which is not present in the actual utterance situation. This example is part of a longer sequence of interaction during which both communication partners collaboratively build up a shared map-like model of Potsdamer Platz by verbal and gestural means (Fricke 2007: 208; for collaborative use of gesture space, see also Furuyama 2000 and McNeill 2005: 161). Other sequences in this data collection show that speakers can produce the deictic object and the respective pointing gesture simultaneously with their right and left hands, as illustrated by the following example (Fig. 136.9–12): the right hand is an iconic sign for a particular street at Potsdamer Platz and functions at the same time as the demonstratum that the speaker is pointing at with her left hand. The other pointing gestures refer to "imaginary" target points standing as signs for paths and buildings at Potsdamer Platz (for phenomena of so-called "abstract pointing", see McNeill 2003, 2005; McNeill, Cassell, and Levy 1993).

(7) A (rh = right hand): 1[{ja} also wenn hier so die Straße iss (.) von da Fußgängerweg und von da auch Fußgängerweg (.) und da iss McDonalds/ (xxx)]1
'so if the street is here (.) from there footpath and from there also footpath (.) and there is McDonalds' (Fricke 2007: 128)

Fig. 136.9: Deixis at signs: pointing gesture (lh) 1
Fig. 136.10: Deixis at signs: pointing gesture (lh) 2
Fig. 136.11: Deixis at signs: pointing gesture (lh) 3
Fig. 136.12: Deixis at signs: pointing gesture (lh) 4

Examples like this show that deictic objects are not necessarily given prior to the utterance in question but can also be produced by speaker and addressee as part of their face-to-face interaction and of the respective utterance itself. Consequently, the contribution of co-speech gestures to linguistic deixis is twofold: firstly, pointing gestures serve as proto-deictics and complements to verbal deictics; secondly, iconic gestures serve as potential deictic objects.
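The contrast between deixis at non-signs (example 5) and deixis at signs (example 6) can be rendered as a chain of Peircean sign relations. The following Python sketch is a deliberately simplified model, not part of Fricke's or Peirce's apparatus: the `Sign` class, the lookup table, and `resolve` are invented for illustration, and the Peircean interpretant is omitted. Resolving a demonstratum follows sign relations until an entity is reached that stands only for itself.

```python
from dataclasses import dataclass

@dataclass
class Sign:
    representamen: str  # the perceptible sign vehicle (e.g. R2: the flat hand)
    obj: str            # the object it stands for (e.g. O2: the Arkaden)

def resolve(demonstratum: str, signs: dict) -> str:
    """Follow sign relations from the demonstratum pointed at
    to the reference object intended by the speaker."""
    while demonstratum in signs:
        demonstratum = signs[demonstratum].obj
    return demonstratum

# Deixis at non-signs, as in example (5): the street stands for itself,
# so demonstratum and intended referent coincide.
print(resolve("street", {}))  # street

# Deixis at signs, as in example (6): B's flat hand is the demonstratum (O1)
# and at the same time a representamen (R2) standing for the Arkaden (O2).
signs = {"B's flat hand": Sign("B's flat hand", "the Arkaden")}
print(resolve("B's flat hand", signs))  # the Arkaden
```

The while-loop reflects that such chains can in principle be iterated: a demonstratum may itself be a sign whose object is again a sign, whereas the empty table reproduces the default case in which nothing stands for anything else.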

6. Conclusion and outlook: The embodied deictic relation

Although the term "deixis" is originally based on the idea of drawing attention to something by means of pointing, linguistic deixis is not limited to pointing, nor can verbal deixis be derived from pointing gestures alone. Moreover, pointing gestures are not the only type of co-speech gesture that contributes to deixis: iconic gestures that form part of the multimodal utterance may, for example, instantiate the deictic object of the deictic relation. In the Bühlerian tradition of linguistic deixis theory, iconic gestures of this kind are classified as part of imagination-oriented deixis (deixis ad phantasma), in contrast to perceptual deixis (demonstratio ad oculos). The inherent contradiction of Bühler's classification substantiates Fricke's distinction between deixis at signs vs. non-signs as the more fundamental one. Based on Herrmann and Schweizer's (1998) model, the deictic relation has been defined above as a three-place relation consisting of the origo, an optional relatum object, and the deictic object. With regard to the origo, the term and concept have been traced back to Bühler, who defines the origo as the zero-point of a Cartesian coordinate system that serves as the mutual starting point for all deictic dimensions (personal, local, and temporal deixis). In contrast to the Anglo-American tradition, Bühler conceptualizes the origo as a mathematical point with no volume, which allows it to move and thus allows for deictic displacement. At the same time, the Bühlerian origo is anchored in the "tactile body image" of the speaker, a notion which strongly resembles modern concepts of image schemas as experiential gestalts. As pointed out, this explicit link reveals Bühler's conception to be an important predecessor of embodiment theory in cognitive linguistics. With her concept of origo-allocating acts, Fricke broadens Bühler's original concept, which assumes a mutual origo for all deictic dimensions.
Her concept of origo is based on the assumption of an intentionally driven agent who allocates and instantiates the origos provided by the deictic utterance. Origo-allocating acts are hierarchically structured: the primary origo is connected to the role of the speaker who, as the current holder of the primary origo, intentionally allocates secondary origos to his own body or to other perceptible or imaginary entities. Analogous to deictic objects, these instantiations of secondary origos can also be interpreted as signs or as non-signs. With respect to embodiment in deixis, it turns out that the complete set of deictic relations may be instantiated either by the speaker's body or by his gestures: the secondary origo – by the speaker's torso (or other body parts) and his tactile body image (section 4); the deictic expression – by pointing gestures (including "naming") (section 3.2); and the deictic object – either by the speaker's body or by iconic gestures produced during the utterance (section 5).

7. References

Bühler, Karl 1982a. Sprachtheorie. Die Darstellungsfunktion der Sprache. Stuttgart/New York: Fischer. First published [1934].
Bühler, Karl 1982b. The deictic field of language and deictic words. In: Robert J. Jarvella and Wolfgang Klein (eds.), Speech, Place, and Action: Studies in Deixis and Related Topics, 9–30. New York: Wiley.
Bühler, Karl 1990. Theory of Language. The Representational Function of Language. Amsterdam/Philadelphia: John Benjamins. First published [1934].
Butterworth, George 2003. Pointing is the royal road to language for babies. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 9–34. Mahwah, NJ: Erlbaum.
Butterworth, George and Paul Morissette 1996. Onset of pointing and the acquisition of language in infancy. Journal of Reproductive and Infant Psychology 14: 219–231.
Cienki, Alan 2005. Image schemas and gesture. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics. (Cognitive Linguistics Research 29.), 421–441. Berlin: Mouton de Gruyter.
Cienki, Alan volume 1. Cognitive Linguistics: Spoken language and gesture as expressions of conceptualization. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 182–201. Berlin/Boston: De Gruyter Mouton.
Clark, Eve V. 1978. From gesture to word: On the natural history of deixis in language acquisition. In: Jerome S. Bruner and Alison Garton (eds.), Human Growth and Development, 85–120. Oxford: Oxford University Press.
Clark, Herbert H. 1996. Using Language. Cambridge, UK: Cambridge University Press.
Clark, Herbert H. 2003. Pointing and placing. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 243–268. Mahwah, NJ: Erlbaum.
Clark, Herbert H., Robert Schreuder and Samuel Buttrick 1983. Common ground and the understanding of demonstrative reference. Journal of Verbal Learning and Verbal Behavior 22: 245–258.
Cooperrider, Kensy and Rafael Núñez 2012. Nose-pointing. Notes on a facial gesture of Papua New Guinea. Gesture 12(2): 103–129.
Cormier, Kearsy 2012. Pronouns. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook. (Handbooks of Linguistics and Communication Science 37.), 227–244. Berlin/Boston: De Gruyter Mouton.
Diessel, Holger 2006. Demonstratives, joint attention, and the emergence of grammar. Cognitive Linguistics 17(4): 463–489.
Ehlich, Konrad 1985. Literarische Landschaft und deiktische Prozedur: Eichendorff. In: Harro Schweizer (ed.), Sprache und Raum. Psychologische und linguistische Aspekte der Aneignung und Verarbeitung von Räumlichkeit. Ein Arbeitsbuch für das Lehren von Forschung, 246–261. Stuttgart: Metzler.
Ehlich, Konrad 2007. Kooperation und sprachliches Handeln. In: Konrad Ehlich (ed.), Sprache und sprachliches Handeln, Vol. 1, 125–137. Berlin: De Gruyter.


Enfield, N. J. 2001. 'Lip-pointing'. A discussion of form and function with reference to data from Laos. Gesture 1(2): 185–212.
Enfield, N. J. 2003. Demonstratives in space and interaction: Data from Lao speakers and implications for semantic analysis. Language 79(1): 82–117.
Enfield, N. J. 2009. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge, NY: Cambridge University Press.
Enfield, N. J. volume 1. A "composite utterances" approach to meaning. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 689–707. Berlin/Boston: De Gruyter Mouton.
Engberg-Pedersen, Elisabeth 2003. From pointing to reference and predication: Pointing signs, eye gaze, and head and body orientation in Danish Sign Language. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 269–292. Mahwah, NJ: Erlbaum.
Fillmore, Charles J. 1982. Towards a descriptive framework for spatial deixis. In: Robert J. Jarvella and Wolfgang Klein (eds.), Speech, Place, and Action: Studies in Deixis and Related Topics, 31–59. New York: Wiley.
Fillmore, Charles J. 1997. Lectures on Deixis. Stanford: CSLI Publications.
Fricke, Ellen 2002. Origo, pointing, and speech. The impact of co-speech gestures on linguistic deixis theory. Gesture 2(2): 207–226.
Fricke, Ellen 2003. Origo, pointing, and conceptualization. What gestures reveal about the nature of the origo in face-to-face interaction. In: Friedrich Lenz (ed.), Deictic Conceptualisation of Space, Time, and Person, 69–94. Amsterdam/Philadelphia: John Benjamins.
Fricke, Ellen 2007. Origo, Geste und Raum: Lokaldeixis im Deutschen. Berlin/New York: De Gruyter.
Fricke, Ellen 2008. Powerpoint und Overhead. Mediale und kontextuelle Bedingungen des mündlichen Vortrags aus deixistheoretischer Perspektive. Zeitschrift für Semiotik 30(1/2): 151–174.
Fricke, Ellen 2009. Deixis, Geste und Raum: Das Bühlersche Zeigfeld als Bühne. In: Mareike Buss, Sabine Jautz, Frank Liedtke and Jan Schneider (eds.), Theatralität sprachlichen Handelns. Eine Metaphorik zwischen Linguistik und Kulturwissenschaften, 165–188. München: Fink.
Fricke, Ellen 2010. Phonaestheme, Kinaestheme und multimodale Grammatik. Sprache und Literatur 41(1): 69–88.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin/Boston: De Gruyter.
Fricke, Ellen this volume. Kinesthemes: Morphological complexity in co-speech gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen in preparation a. Deixis and gesture. In: Konstanze Jungbluth and Federica da Milano (eds.), Manual of Deixis in Romance Languages. Berlin/Boston: De Gruyter Mouton.
Fricke, Ellen in preparation b. Deixis, gesture, and space from a semiotic point of view. In: Federica da Milano (ed.), Space and Language: On Deixis. Amsterdam/Philadelphia: John Benjamins.
Furuyama, Nobuhiro 2000. Gestural interaction between the instructor and the learner in origami instruction. In: David McNeill (ed.), Language and Gesture, 99–117. Cambridge, UK: Cambridge University Press.
Geeraerts, Dirk 2010. Theories of Lexical Semantics. Oxford: Oxford University Press.
Goldin-Meadow, Susan and Cynthia Butcher 2003. Pointing toward two-word speech in young children. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 85–109. Mahwah, NJ: Erlbaum.
Goodwin, Charles 1980. Restarts, pauses, and the achievement of a state of mutual gaze at turn-beginning. Sociological Inquiry 50(3–4): 272–302.


Goodwin, Charles 1986. Gesture as a resource for the organization of mutual orientation. Semiotica 62(1/2): 29–49.
Goodwin, Charles 2000a. Pointing and the collaborative construction of meaning in aphasia. Texas Linguistic Forum 43: 67–76.
Goodwin, Charles 2000b. Action and embodiment within situated human interaction. Journal of Pragmatics 32: 1489–1522.
Goodwin, Charles 2003. Pointing as situated practice. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 217–241. Mahwah, NJ: Erlbaum.
Hampe, Beate (ed.) 2005. From Perception to Meaning: Image Schemas in Cognitive Linguistics. (Cognitive Linguistics Research 29.) Berlin: Mouton de Gruyter.
Hanks, William F. 1990. Referential Practice. Language and Lived Space among the Maya. Chicago, IL: The University of Chicago Press.
Hanks, William F. 1992. The indexical ground of deictic reference. In: Alessandro Duranti and Charles Goodwin (eds.), Rethinking Context: Language as an Interactive Phenomenon, 43–76. Cambridge, NY: Cambridge University Press.
Hanks, William F. 1993. Metalanguage and pragmatics of deixis. In: John A. Lucy (ed.), Reflexive Language: Reported Speech and Metapragmatics, 127–157. Cambridge, NY: Cambridge University Press.
Hanks, William F. 2005. Explorations in the deictic field. Current Anthropology 46(2): 191–220.
Hanks, William F. 2009. Fieldwork on deixis. Journal of Pragmatics 41(1): 10–24.
Harweg, Roland 1976. Formen des Zeigens und ihr Verhältnis zur Deixis. Zeitschrift für Dialektologie und Linguistik 43: 317–337.
Hausendorf, Heiko 2003. Deixis and speech situation revisited: The mechanism of perceived perception. In: Friedrich Lenz (ed.), Deictic Conceptualisation of Space, Time, and Person, 249–269. Amsterdam/Philadelphia: John Benjamins.
Haviland, John B. 1993. Anchoring, iconicity and orientation in Guugu Yimithirr pointing gestures. Journal of Linguistic Anthropology 3(1): 3–45.
Haviland, John B. 2003. How to point in Zinacantán. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 139–169. Mahwah, NJ: Erlbaum.
Heath, Christian 1986. Body Movement and Speech in Medical Interaction. Cambridge, NY: Cambridge University Press.
Herrmann, Theo and Karin Schweizer 1998. Sprechen über Raum. Sprachliches Lokalisieren und seine kognitiven Grundlagen. Bern: Huber.
Jarmołowicz-Nowikow, Ewa this volume. Index finger extended or open palm. Is the type of referent (person or object) a determinant of pointing gesture form? In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1824–1831. Berlin/Boston: De Gruyter Mouton.
Johnson, Mark 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. Chicago, IL: University of Chicago Press.
Kendon, Adam 1990. Conducting Interaction. Patterns of Behavior in Focused Encounters. Cambridge, NY: Cambridge University Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge, UK: Cambridge University Press.
Kendon, Adam and Laura Versante 2003. Pointing by hand in "Neapolitan". In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 109–137. Mahwah, NJ: Erlbaum.
Kita, Sotaro 2003a. Interplay of gaze, hand, torso orientation, and language in pointing. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 307–328. Mahwah, NJ: Erlbaum.
Kita, Sotaro 2003b. Pointing: A foundational building block of human communication. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 1–8. Mahwah, NJ: Erlbaum.


Kita, Sotaro (ed.) 2003c. Pointing: Where Language, Culture, and Cognition Meet. Mahwah, NJ: Erlbaum.
Klein, Wolfgang 1978. Wo ist hier? Präliminarien zu einer Untersuchung der lokalen Deixis. Linguistische Berichte 58: 18–40.
Lakoff, George 1987. Women, Fire and Dangerous Things: What Categories Reveal about the Mind. Chicago, IL: University of Chicago Press.
Levinson, Stephen C. 1992. Pragmatics. Cambridge, UK: Cambridge University Press. First published [1983].
Levinson, Stephen C. 2004. Deixis. In: Laurence R. Horn and Gregory Ward (eds.), The Handbook of Pragmatics, 97–121. Oxford: Blackwell.
Levy, Elena T. and Carol A. Fowler 2000. The role of gestures and other graded language forms in the grounding of reference in perception. In: David McNeill (ed.), Language and Gesture, 215–234. Cambridge, UK: Cambridge University Press.
Liddell, Scott K. 2000. Blended spaces and deixis in sign language discourse. In: David McNeill (ed.), Language and Gesture, 331–357. Cambridge, UK: Cambridge University Press.
Liszkowski, Ulf 2005. Human twelve-month-olds point cooperatively to share interest with and helpfully provide information for a communicative partner. Gesture 5(1/2): 135–154.
Lyons, John 1977. Semantics. Vol. 2. Cambridge, NY: Cambridge University Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago, IL: Chicago University Press.
McNeill, David 2003. Pointing and morality in Chicago. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 293–306. Mahwah, NJ: Erlbaum.
McNeill, David 2005. Gesture and Thought. Chicago, IL: Chicago University Press.
McNeill, David, Justine Cassell and Elena T. Levy 1993. Abstract deixis. Semiotica 95(1/2): 5–19.
Miller, George A. and Philip N. Johnson-Laird 1976. Language and Perception. Cambridge, MA: Harvard University Press.
Mondada, Lorenza 2002. Die Indexikalität der Referenz in der sozialen Interaktion: diskursive Konstruktionen von ,ich' und ,hier'. Zeitschrift für Literaturwissenschaft und Linguistik 125: 79–113.
Mondada, Lorenza 2007. Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies 9(2): 195–226.
Mondada, Lorenza volume 1. Conversation analysis: Talk and bodily resources for the organization of social interaction. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 218–227. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia 1996. Zur Unhöflichkeit von Zeigegesten. Osnabrücker Beiträge zur Sprachtheorie 52: 197–222.
Peirce, Charles S. 1931–58. Collected Papers. Charles Hartshorne and Paul Weiss (eds.), Volumes 1–6; Arthur W. Burks (ed.), Volumes 7–8. Cambridge: Harvard University Press.
Pika, Simone, Katja Liebal, Josep Call and Michael Tomasello 2005. Gestural communication of apes. Gesture 5(1/2): 41–56.
Pizzuto, Elena and Micaela Capobianco 2005. The link (and differences) between deixis and symbols in children's early gestural-vocal system. Gesture 5(1/2): 179–200.
Pizzuto, Elena and Micaela Capobianco 2008. Is pointing "just" pointing? Unraveling the complexity of indexes in spoken and signed discourse. Gesture 8(1): 82–103.
Povinelli, Daniel J., Jesse M. Bering and Steve Giambrone 2003. Chimpanzees' "pointing": Another error of the argument by analogy? In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 35–68. Mahwah, NJ: Erlbaum.
Schmauks, Dagmar 1991. Deixis in der Mensch-Maschine-Interaktion. Multimediale Referentenidentifikation durch natürliche und simulierte Zeigegesten. Tübingen: Niemeyer.

136. Deixis, gesture, and embodiment from a linguistic point of view


Schmitt, Reinhold and Arnulf Deppermann (eds.) 2010. Sprache intermedial: Stimme und Schrift, Bild und Ton. Berlin/New York: De Gruyter.
Sennholz, Klaus 1985. Grundzüge der Deixis. Bochum: Brockmeyer.
Senft, Gunter (ed.) 2004. Deixis and Demonstratives in Oceanic Languages. Canberra: Pacific Linguistics.
Sherzer, Joel 1973. Verbal and nonverbal deixis: The pointed lip gesture among the San Blas Cuna. Language in Society 2: 117⫺131.
Sonesson, Göran 2007. From the meaning of embodiment to the embodiment of meaning. In: Tom Ziemke, Jordan Zlatev and Roslyn M. Frank (eds.), Body, Language and Mind. Vol. 1: Embodiment. (Cognitive Linguistics Research 35.1.), 85⫺127. Berlin: Mouton de Gruyter.
Streeck, Jürgen 1993. Gesture as communication I: Its coordination with gaze and speech. Communication Monographs 60(4): 275⫺299.
Streeck, Jürgen 1994. Gesture as communication II: The audience as co-author. Research on Language and Social Interaction 27(3): 239⫺267.
Streeck, Jürgen 2002. Grammars, words, and embodied meanings: On the uses and evolution of ‘so’ and ‘like’. Journal of Communication 52(3): 581⫺596.
Stukenbrock, Anja 2013. Deixis in der face-to-face-Interaktion. Habilitation thesis, Freiburg Institute for Advanced Studies (FRIAS), Albert-Ludwigs-Universität Freiburg. Unpublished manuscript.
Talmy, Leonard 1978. Figure and ground in complex sentences. In: Joseph Greenberg, Charles Ferguson and Edith Moravcsik (eds.), Universals of Human Language, Vol. 4, 625⫺649. Stanford: Stanford University Press.
Tomasello, Michael 1995. Joint attention as social cognition. In: Chris Moore and Philip J. Dunham (eds.), Joint Attention: Its Origins and Role in Development, 103⫺130. Hillsdale, NJ: Erlbaum.
Tomasello, Michael 2008. Origins of Human Communication. Cambridge, MA: The MIT Press.
Tomasello, Michael 2009. The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press.
Wilcox, Sherman volume 1. Speech, sign and gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body ⫺ Language ⫺ Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 125⫺133. Berlin/Boston: De Gruyter Mouton.
Wilkins, David 2003. Why pointing with the index finger is not a universal (in sociocultural and semiotic terms). In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 171⫺215. Mahwah, NJ: Erlbaum.
Wundt, Wilhelm 1904. Völkerpsychologie. Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte. Volume 1: Die Sprache. Leipzig: Engelmann. First published [1900].
Wundt, Wilhelm 1973. The Language of Gestures. The Hague: Mouton. First published [1900].
Ziemke, Tom, Jordan Zlatev and Roslyn M. Frank (eds.) 2007. Body, Language and Mind. Vol. 1: Embodiment. (Cognitive Linguistics Research 35.1.) Berlin: Mouton de Gruyter.
Zlatev, Jordan volume 1. Levels of embodiment and communication. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body ⫺ Language ⫺ Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 533⫺550. Berlin/Boston: De Gruyter Mouton.

Ellen Fricke, Chemnitz (Germany)


VIII. Gesture and language

137. Pointing by hand: Types of reference and their influence on gestural form

1. Introduction
2. The form of pointing gesture and its cultural determinants
3. The aim of the study
4. Experimental procedure and material studied
5. Forms of pointing gestures present in both experiments
6. The form of pointing gestures depending on the referent
7. A survey on pointing gestures
8. Discussion
9. References

Abstract

The number of studies focused on the form of pointing gestures is relatively limited; those that exist concern hand shape in small children as well as cultural conventions influencing the form of pointing gestures. The aim of this article is to show that the type of referent of a pointing gesture ⫺ whether it is a person or an object ⫺ is a determinant of the form of pointing gestures realized by adult native speakers of Polish. The research involves two experimental tasks aimed at eliciting pointing gestures indicating people and objects. The results show that Poles use different forms of pointing gestures depending on what they are pointing at: almost all pointing gestures made to indicate objects (paper figures in this case) took the form of an extended index finger, while the majority of gestures indicating people took a different form, being realized with an open palm or gaze. A survey supports the assumption that the results of both experiments were strongly influenced by Polish cultural norms concerning pointing gestures.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body ⫺ Language ⫺ Communication (HSK 38.2), de Gruyter, 1824⫺1831

1. Introduction

Gestures of indicating are recognized as a separate class or subclass in all 20th-century classifications of gestures; however, they are labeled differently, and their definitions are not fully equivalent. The names most often used for this category of gestures are deictic gestures (Efron 1941; Ekman and Friesen 1969; McNeill 2005) and pointing gestures (Efron 1941; Kendon 2004); the labels demonstrative (Wundt 1973) and simply indicative (Butterworth 2003) also occur. Some researchers use these labels as synonyms, while others define precise differences and similarities between them. The problem with defining deictic gestures is the same as with defining deictic verbal expressions: it is very difficult to determine the boundaries of “deixis” (Levinson 2004). For this reason some researchers assume that, for example, most iconic gestures are deictic to a certain extent (Kendon 2004; McNeill 2005).

A good solution to the terminological problem concerning indicating gestures is proposed by Kendon (2004), who states that two subclasses can be distinguished among them: deictic and pointing gestures. The division into two subclasses is not sharp, as the whole class of indicating gestures may vary in the degree to which they are deictic ⫺ the deictic component may be more or less strong (Kendon 2004). “Gestures that are said to be pointing gestures are dominated by the deictic component almost to the exclusion of everything else. We may say of such gestures that they are specialized as pointing gestures” (Kendon 2004). Although Kendon’s proposal seems to be widely accepted by researchers, other conceptions can be found in the literature in which pointing gestures are not interpreted as one of the deictic extremes, but are considered a broad category of gestures that combine indexical as well as iconic aspects (Goodwin 2003; Haviland 2003).

2. The form of pointing gesture and its cultural determinants

Most researchers agree that prototypical pointing is realized with the index finger extended and the remaining fingers curled under the palm (Butterworth 2003; Kendon 2004; McNeill 2005). Kendon (2004) describes a pointing gesture as characterized by movement toward the target along a well-defined path, where at least the final part of the path is linear. When the hand reaches the last stage of this movement, it may be held for a while; however, this is not an obligatory element of a pointing gesture.

Studies focused on the form of gestures show that the form of a pointing gesture is strongly influenced by cultural conventions. Authors of intercultural studies assume that the pointing gesture probably exists in all cultures, and only its form differs (Müller 1996). The pointing gestures described in publications to date constitute a diverse set of gestures realized with hand, lips, head, and gaze. The most widely described group is pointing gestures realized with the hand. Kendon and Versante (2003; see also Kendon 2004) describe different forms of pointing gestures used by Neapolitans in everyday conversations. They observed that the forms of pointing gestures adopted by speakers in the analyzed material (extended index finger, thumb with other fingers flexed to the palm, or open palm) are strongly determined by the discourse context, especially the way the speaker refers to the object. For example, pointing with the thumb is used when the object or its location is not in the center of the discourse focus.

The prototypical pointing gesture (with the index finger extended and the other fingers curled under the palm) may, depending on the culture, be regarded as culturally inappropriate. Kita and Essegbey (2001) report on specific features of the production of pointing gestures in Ghana.
Recordings of Ghanaians giving route directions show that pointing with the left hand is considered a taboo in that country. This results in some notable practices, such as reducing the size of left-handed pointing if produced, or even suppression of left-handed pointing gestures. This perception of pointing with the left hand also causes overuse of right-hand pointing even when it is not comfortable for the speaker, as when the indicated target is situated to the left. Interviews carried out among the Ghanaians featured in the recordings showed that left-hand pointing is considered highly inappropriate and insulting. Haviland (2003) describes two standardized hand shapes for pointing used by Tzotzil speakers in Zinacantan (Chiapas, Mexico): an extended index finger used to indicate a concrete object, and a flat hand (with the palm held vertically, thumb side up, fingers grouped, and extended outwards) to show directions. In this case an extended index finger may be interpreted as “this one”, and a flat hand as “that direction”. The form of the pointing gesture can also differ in the degree of arm extension. An empirical study carried out by Enfield, Kita, and de Ruiter (2007) among Lao people
distinguished two forms of pointing gesture with distinct pragmatic functions: large-form pointing gestures, with the arm fully extended and eye gaze often aligned, and small-form pointing gestures, characterized by small, often casual movement realized only by the hand. Multimodal analysis of speech and gestures led to the conclusion that large-form pointing gestures provide important semantic information not contained in speech, while small-form gestures carry secondary information and rather support speech.

Galhano-Rodrigues (2012) conducted research in Portugal (Alto Minho) and distinguished three categories of pointing gestures realized with the hand (with index finger extended, thumb, or open palm), as well as movements of the head, torso, and gaze that have a pointing function.

The cultural specificity of gestures has also become a source of interest for designers of virtual agents. Endrass et al. (2010) tried to adjust the nonverbal behavior of agents to the cultural specificity of their potential users. A corpus of Japanese and German speakers was recorded to analyze their culture-specific gestures and then integrate those gestures into the nonverbal repertoire of the agents. One of the gesture categories analyzed was pointing gestures. The analysis showed that the forms of pointing realized by German and Japanese subjects differed: most of the Germans performed pointing gestures with the index finger extended, while the most typical form for the Japanese was an open hand.

Another example of a culturally specific form of pointing gesture is “lip pointing”, a conventionalized and systematic behavior in some parts of Southeast Asia, the Americas, Africa, Oceania, Australia, and Papua New Guinea (Enfield 2001; Wilkins 2003). According to Wilkins (2003), lip pointing is far more widespread than researchers suggest, and the frequency of its use in comparison with index finger pointing differs in different parts of the world.
In some communities lip pointing is the predominant form of indication. Wilkins (2003) describes puzzlement among the Barai of Papua New Guinea when they saw a person use an index finger pointing gesture to indicate objects. The reason for the Barai’s reaction was not that they regarded pointing with the index finger as impolite, but that they did not understand the communicative intention behind this particular form of gesture. In all of the descriptions of lip pointing available in the literature, this form of indication is not limited to a special arrangement of the lips. According to Sherzer (1973), who investigated the meaning of facial expressions used by the Cuna Indians (San Blas, Panama), it is a combination of looking in a particular direction, raising the head, and opening and closing the lips that gives the impression of pointing lips. “Lip pointing” also seems to be produced by the Lao people: Enfield (2001) describes Lao speakers producing a characteristic protrusion of the lips accompanied by movements of the head and certain parts of the face (chin raise/head lift, gaze direction, eyebrow raise).

3. The aim of the study

Little is known about the factors that influence the physical form of gestures (Gerwing and Bavelas 2004). The aim of the experiments was to examine whether the referent is one of the potential determinants of pointing gesture form. It was assumed that the form of pointing gesture realized by the Polish participants in the experiments might differ depending on what is being indicated: a person or an object. An extended index finger is generally regarded in Poland as rude, especially when indicating people; hence it was hypothesized that Polish participants would use forms of pointing gesture other than the extended index finger (e.g., open palm, gaze) more often when indicating people than when pointing at objects. The author is aware that the form of pointing gestures may be determined by other aspects of communication (Jarmołowicz-Nowikow 2012; Jarmołowicz-Nowikow and Karpiński 2011) ⫺ this paper presents just one of a whole range of pointing gesture determinants.

4. Experimental procedure and material studied

Two experiments were carried out to elicit as many pointing gestures indicating objects and people as possible. In this study pointing gestures are defined according to Kendon’s definition (Kendon 2004).

In one of the experiments, the arrangement of the recording studio and the experimental task were designed to provoke situations in which participants would point at objects. Two participants facing each other (participant A and participant B) took part in each recording session. Each participant stood behind a desk, on which were eleven identical paper figures marked with numbers. On participant A’s desk there were additionally small pieces of paper with the numbers of all the figures. Participant A’s task was to take a number at random, find the corresponding figure, and instruct participant B without saying the number of the figure. Participant B was asked to identify the figure in such a way that participant A could be sure that the identified figure was the right one. Both participants were told that they could communicate in any way, except that they could not use the numbers of the figures.

The apparent purpose of the other experimental task was to arrange a group of 10 people into a certain configuration and to photograph them; the real purpose was to elicit pointing gestures used for indicating people. One subject and a group of 10 students, facing each other, took part in the experiment. The subject was given a picture of 10 people and was asked to position the people in the studio in the same way that the people were positioned in the picture, and then take a photograph of them. To elicit pointing gestures, the subject was asked, while putting the people into position, not to address them by name and not to use any descriptions facilitating their identification (e.g., “the tall girl on the left”). The participants in both experiments were university students.
The data under study consist of 25 sessions recorded as part of Experiment I (about 106 min.) and 12 sessions recorded as part of Experiment II (about 80 min.). A total of 1026 pointing gestures were distinguished in the two experiments; however, not all of them were analyzed. Only pointings indicating people and objects were taken into consideration, and so 305 pointing gestures indicating place or direction were excluded from the analysis. All transcriptions of gestures were done in ELAN. The boundaries of all pointing gestures were marked using Kendon’s model of gesture structure (Kendon 2004).

Tab. 137.1: Numbers of gestures recorded in both experiments.

Pointings indicating:    people    objects    place and direction
Experiment I:                 0        361                      0
Experiment II:              360          0                    305
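The bookkeeping behind Tab. 137.1 can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual workflow: the tuple labels and the `annotations` list are hypothetical stand-ins for the ELAN annotation tiers, with multiplicities chosen to reproduce the counts reported in the table.

```python
from collections import Counter

# Hypothetical, simplified records: each pointing gesture is tagged with
# the experiment it comes from and the type of referent it indicates.
# The multiplicities reproduce Tab. 137.1.
annotations = (
    [("Experiment I", "objects")] * 361
    + [("Experiment II", "people")] * 360
    + [("Experiment II", "place/direction")] * 305
)

counts = Counter(annotations)
total = sum(counts.values())

# Only gestures indicating people or objects entered the analysis;
# the 305 place/direction pointings were excluded.
analyzed = sum(n for (_, referent), n in counts.items()
               if referent in ("people", "objects"))

print(total)     # 1026
print(analyzed)  # 721
```

Tallying per (experiment, referent) pair and then filtering by referent mirrors the exclusion step described in the text: 1026 gestures overall, of which 721 (360 indicating people plus 361 indicating objects) were analyzed.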


5. Forms of pointing gestures present in both experiments

Three main categories of pointing gestures were distinguished in the material under study on the basis of the analysis of their forms:

⫺ open palm;
⫺ extended finger;
⫺ gaze as pointing.

Each category consists of subcategories comprised of recurrent specific realizations of the category’s basic form, e.g., open palm.

(i) Open Palm category:
⫺ open palm (one hand)
⫺ open palm (both hands together)
⫺ open palm (hand partially open)

(ii) Extended Finger category:
⫺ extended finger (one hand with index finger extended)
⫺ extended finger (both hands together with index fingers extended)
⫺ extended finger (one hand with index and middle finger extended)
⫺ extended finger (one hand with middle finger extended)

(iii) Gaze as Pointing Gesture category:
⫺ gaze (gaze)
⫺ gaze (gaze and head nod)

6. The form of pointing gestures depending on the referent

The results of the analysis show a tendency among participants to realize different forms of pointing gestures depending on the referent. In Experiment I, which elicited pointing gestures indicating paper figures, the most frequently made pointing gestures belonged to the Extended Finger category (86%). The remaining pointing gestures (14%) in Experiment I had a form typical of the Open Palm category. No evident examples of gaze having the function of pointing were noted in the material recorded during Experiment I. The reason for this may be the size and setting of the paper figures on both desks: it is supposed that using gaze as a pointing gesture in this particular experimental situation might not have been precise enough for the other participant to identify the figure.

Fig. 137.1: Form of pointing gestures used to indicate objects and people


In Experiment II the proportions between pointing gestures realized with an extended finger and other forms of pointing gestures are different. Most pointing gestures (66%) were realized without an extended finger, that is, with open palm (34%) or gaze (32%). About one third of all pointing gestures (34%) indicating people in Experiment II had a form assigned to the Extended Finger category. Taking the norms of Polish politeness into consideration, the Extended Finger category on the one hand, and the Open Palm and Gaze as Pointing Gesture categories on the other (called the No Extended Finger category in the remainder of this paper), may be regarded as opposing categories, since pointing with an open palm or by means of gaze is not regarded as rude in Poland.

It was also noticed that participants are quite consistent in their “style of gesturing”: the predominant “style” of pointing gestures is easily distinguishable in the case of every participant. This observation concerns the recordings from both experiments. Most participants consistently used either gestures from the Extended Finger category or forms of pointing without an extended finger, that is, gestures belonging to the Open Palm and Gaze categories (the No Extended Finger category). The occurrence of similar proportions of gestures from the Extended Finger category and those of other categories in a single participant is very rare. The majority of participants in Experiment I (83%) and all participants in Experiment II (100%) had a gestural style dominated by the Extended Finger or No Extended Finger category. This means that more than 75% of the pointing gestures realized by each such participant belonged either to the Extended Finger category or to the No Extended Finger category ⫺ it does not mean that the dominant group of gestures in a single participant was always Extended Finger in Experiment I and No Extended Finger in Experiment II.
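The per-participant dominance criterion described above amounts to a simple threshold rule, which can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, the category labels, and the example participants are hypothetical, not the study's actual coding scheme; only the 75% threshold and the pooling of Open Palm and Gaze into No Extended Finger come from the text.

```python
def dominant_style(gestures, threshold=0.75):
    """Classify one participant's pointing style by a dominance threshold.

    `gestures` is a list of category labels for a single participant.
    Following the pooling described in the text, Open Palm and Gaze
    gestures count together as "No Extended Finger". Returns the pooled
    category whose share exceeds the threshold, or "mixed" if neither does.
    """
    pooled = ["Extended Finger" if g == "Extended Finger" else "No Extended Finger"
              for g in gestures]
    for style in ("Extended Finger", "No Extended Finger"):
        if pooled.count(style) / len(pooled) > threshold:
            return style
    return "mixed"

# Made-up example participants:
print(dominant_style(["Extended Finger"] * 9 + ["Open Palm"]))
# -> Extended Finger (9/10 = 90% > 75%)
print(dominant_style(["Open Palm"] * 5 + ["Gaze"] * 3 + ["Extended Finger"] * 2))
# -> No Extended Finger (8/10 = 80% > 75%)
print(dominant_style(["Extended Finger"] * 5 + ["Gaze"] * 5))
# -> mixed (50% each)
```

On this rule, a participant counts as having a dominant style whenever one pooled category exceeds 75% of their pointings, which is how the 83% (Experiment I) and 100% (Experiment II) figures are to be read.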

7. A survey on pointing gestures

To interpret the results of the experiments, a survey concerning pointing gestures was conducted among 82 Polish undergraduate students (of the same age as the subjects of the experiments). All of the respondents stated that pointing at people is regarded as inappropriate in Polish culture, while only one third of them claimed that pointing at objects is perceived in Poland as rude. Interestingly, fewer respondents declared pointing with an extended index finger to be rude when asked for their personal opinion (about 7% in the case of pointing at objects and 75% in the case of pointing at people). All of the students remembered being reprimanded by a parent or teacher not to point with the index finger extended.

8. Discussion

In both experiments, a particular kind of communicative situation was arranged. The limited time of the experimental task and the restrictions on certain aspects of communication (the prohibition on saying the numbers of the figures in Experiment I, and on some forms of addressing the students in Experiment II) were purposefully introduced into the procedure to increase the frequency of use of pointing gestures. The specific nature of the experimental tasks therefore does not permit conclusions about the frequency of pointing gesture realization, nor general conclusions about the forms of pointing gestures used by Polish people depending on referent. However, as the results of the two experiments show that the type of referent may strongly influence the form of pointing gestures, some assumptions about the referent as a determinant of pointing gesture form may be made.

The results of the survey showed that Poles regard indicating people with the index finger as highly inappropriate, whereas indicating objects with an extended finger is not perceived as rude by most of the respondents (for German speakers see Müller 1996). Cultural norms of politeness thus provide an explanation for the tendency to realize different forms of pointing gestures when indicating people and objects.

9. References

Butterworth, George 2003. Pointing is the royal road to language for babies. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 9⫺33. Hillsdale, NJ: Erlbaum.
Efron, David 1941. Gesture and Environment. New York: King’s Crown Press.
Ekman, Paul and Wallace Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1: 49⫺98.
Endrass, Birgit, Ionut Damian, Peter Huber, Matthias Rehm and Elisabeth André 2010. Generating culture-specific gestures for virtual agent dialogs. In: Jan Allbeck, Norman Badler, Timothy Bickmore, Catherine Pelachaud and Alla Safonova (eds.), IVA 2010 (LNAI 6356), 329⫺335. Berlin: Springer.
Enfield, N. J. 2001. ‘Lip-pointing’: A discussion of form and function with reference to data from Laos. Gesture 1(2): 185⫺211.
Enfield, N. J., Sotaro Kita and Jan Peter de Ruiter 2007. Primary and secondary pragmatic functions of pointing gestures. Journal of Pragmatics 39(10): 1722⫺1741.
Galhano-Rodrigues, Isabel 2012. “Vou buscar ali, ali acima!” A multimodalidade da deixis no português europeu. Revista de Estudos Linguísticos da Universidade do Porto 7: 129⫺164.
Gerwing, Jennifer and Janet Beavin Bavelas 2004. Linguistic influences on gesture’s form. Gesture 4(2): 157⫺195.
Goodwin, Charles 2003. Pointing as situated practice. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 217⫺242. Hillsdale, NJ: Erlbaum.
Haviland, John 2003. How to point in Zinacantán. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 139⫺169. Hillsdale, NJ: Erlbaum.
Jarmołowicz-Nowikow, Ewa 2012. Are pointing gestures induced by communicative intention? In: Anna Esposito, Antonietta M. Esposito, Alessandro Vinciarelli, Rüdiger Hoffmann and Vincent C. Müller (eds.), Cognitive Behavioural Systems (Lecture Notes in Computer Science 7403), 377⫺389. Berlin: Springer.
Jarmołowicz-Nowikow, Ewa and Maciej Karpiński 2011.
Communicative intentions behind pointing gestures in task-oriented dialogues. In: Proceedings of GESPIN, 5⫺7 September, Bielefeld, Germany.
Kendon, Adam 2004. Gesture. Visible Action as Utterance. Cambridge, UK: Cambridge University Press.
Kendon, Adam and Laura Versante 2003. Pointing by hand in Neapolitan. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 109⫺137. Hillsdale, NJ: Erlbaum.
Kita, Sotaro and James Essegbey 2001. Pointing left in Ghana: How a taboo on the use of the left hand influences gestural practices. Gesture 1(1): 73⫺95.
Levinson, Stephen C. 2004. Deixis. In: Laurence R. Horn and Gregory Ward (eds.), The Handbook of Pragmatics, 97⫺121. Oxford: Blackwell.
McNeill, David 2005. Gesture and Thought. Chicago, IL: University of Chicago Press.
Müller, Cornelia 1996. Zur Unhöflichkeit von Zeigegesten. Osnabrücker Beiträge zur Sprachtheorie 52: 197⫺222.
Sherzer, Joel 1973. Verbal and non-verbal deixis: The pointed lip gesture among the San Blas Cuna. Language in Society 2(1): 117⫺131.


Wilkins, David 2003. Why pointing with the index finger is not a universal (in sociocultural and semiotic terms). In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 171⫺215. Mahwah, NJ: Erlbaum.
Wundt, Wilhelm 1973. The language of gestures. In: Thomas A. Sebeok (ed.), Approaches to Semiotics, 55⫺152. The Hague: Mouton.

Ewa Jarmołowicz-Nowikow, Poznan´ (Poland)

IX. Embodiment ⫺ The body and its role for cognition, emotion, and communication

138. Gestures and cognitive development

1. Introduction
2. Age-related changes in gesture production
3. Gesture provides a window on children’s knowledge
4. Gesture can play a role in cognitive change
5. Conclusion
6. References

Abstract

Gestures are a special form of action, one that bridges activity in the physical world and abstract, symbolic representations. As such, gestures may play a key role in cognitive development. Children typically begin to produce gestures during their first year, with gestures that indicate objects (deictic gestures) typically emerging before gestures that represent objects or actions (representational gestures). Once children begin to produce gestures, those gestures can reveal aspects of children’s thinking and cognition about a wide range of topics. In some cases, gestures reveal information that children do not express in speech, including information about the stability and content of children’s thoughts. However, gesture is more than a simple “window” on cognition. Recent studies suggest that producing gestures actually changes the course of children’s thinking. Gesture production highlights perceptual and action information, helps children to manage working memory demands, and communicates to listeners that children are ready for certain types of instructional input. A complete understanding of children’s cognitive development will require a deeper understanding of how and why children produce gestures, and how gestures manifest and affect thought.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body ⫺ Language ⫺ Communication (HSK 38.2), de Gruyter, 1833⫺1840

1. Introduction

Action plays a central role in both classic and contemporary theories of cognitive development. For example, in Piaget’s ([1936] 1952) theory, internalized action is the foundation of logical thought. Piaget viewed cognitive development as “a process of extending and reorganizing the internalized action system constructed during infancy into systems of mental representation and abstract thought” (Bidell and Fischer 1992: 107). Contemporary research in cognitive development also emphasizes the importance of action. Empirical studies reveal the integral role of action in shaping a wide range of cognitive achievements, including object perception (Soska, Adolph, and Johnson 2010), spatial perception (Bertenthal, Campos, and Kermoian 1994; Campos et al. 2000), categorization (Smith 2005), and social understanding (Sommerville, Woodward, and Needham 2005).


In this article, I focus on a specific form of action that may play a key role in cognitive development: spontaneous gestures. Gestures are motor actions, but they differ in important ways from functional actions such as locomotion or actions on objects. One crucial difference is that gestures are closely tied to language (Kita and Özyürek 2003; McNeill 1992). Another crucial difference is that gestures often represent or iconically depict aspects of other motor actions. Indeed, some theorists have argued that gestures derive from the simulations of actions and perceptual states that speakers generate in the effort of speaking (Hostetter and Alibali 2008). By virtue of their close ties to both language and action, gestures form a bridge between concrete, embodied activity in the physical world and abstract, symbolic representations. Deictic gestures point to objects and locations in the world; when produced with speech, deictic gestures connect words and phrases to the physical world. Representational gestures iconically depict aspects of objects, actions, events, or ideas; when produced with speech, representational gestures disambiguate, enrich, and specify the meaning of the accompanying speech. Thus, both deictic and representational gestures serve to ground communication ⫺ and thinking ⫺ in the physical world (Alibali and Nathan 2012). In his classic work on learning and instruction, Jerome Bruner (1966) described the systematic progression from enactive to iconic to abstract representations (see also Werner and Kaplan 1964). Spontaneous gestures can simulate actions (i.e., they can be enactive), and they can also represent actions and perceptual states (i.e., they can be iconic). At the same time, gestures are closely tied to the abstract symbols of language. From this perspective, then, it seems possible that gestures may play a special role in revealing or promoting progress in cognitive development, and in learning more generally. 
This article reviews evidence that gestures both reveal and have a hand in children’s cognitive development.

2. Age-related changes in gesture production

Children typically begin to produce gestures between 8 and 12 months of age (Bates 1976; Bates et al. 1979). The first gestures to emerge are deictic gestures ⫺ such as points to objects or locations, or holding up objects to indicate them. For example, a child may hold up a toy to draw his mother’s attention to it, or he may point to a desired piece of food. At around this time, children also begin to produce emblems, which are socially conventional gestures, such as waving an open hand to say “bye-bye”. “Baby signs” (Acredolo and Goodwyn 1996) capitalize on children’s ability to produce and interpret such conventional gestures.

At around the age of one year, children begin to produce representational gestures, which are gestures that depict objects, actions, or events by virtue of their handshape or motion (Acredolo and Goodwyn 1988). For example, a child might produce a cradling gesture to refer to her doll, or might flap her arms to refer to a bird (Iverson, Capirci, and Caselli 1994). Young children often produce representational gestures using their entire bodies, including their trunks, legs, and feet, to represent others’ bodies; over developmental time, children shift to using primarily their hands and arms (McNeill 1992). Like adults, children produce representational gestures at a higher rate when their listeners can see those gestures than when they cannot; however, also like adults, children continue to produce a substantial number of representational gestures even when visibility between speaker and listener is blocked (Alibali and Don 2001).

138. Gestures and cognitive development


The consistent order of acquisition (i.e., deictic before representational gestures) across children suggests that the emergence of different types of gestures may depend on children’s level of motor or cognitive development. Indeed, representational gestures are both motorically and cognitively more complex than deictic gestures, so it is not surprising that they emerge later. Children tend to produce points and conventional symbolic gestures (i.e., emblems) before producing their first words (Goodwyn and Acredolo 1993; Iverson and Goldin-Meadow 2005); however, relatively little is known about how gesture production patterns with other cognitive achievements during early development. Once children begin to produce deictic and representational gestures, those gestures can reveal aspects of children’s thinking and cognition about a wide range of topics. Researchers in cognitive development have utilized children’s gestures as a window onto the nature of their knowledge and how it changes over time.

3. Gesture provides a window on children's knowledge

Researchers have used gestures as a tool for studying people's thinking in a wide range of domains, including mathematical reasoning (e.g., Wagner, Nusbaum, and Goldin-Meadow 2004), scientific reasoning (e.g., Crowder 1996), and problem solving (e.g., Garber and Goldin-Meadow 2002). Across all of these domains, speakers' gestures have proven to be a valuable tool for studying thinking, for one key reason: speakers' gestures often reveal information that they do not express in speech. Gestures encode meaning differently from speech – they express meaning idiosyncratically, using motion and space. In contrast, speech conveys meaning using socially codified words, syntactic structures, and other grammatical devices. Because speech and gesture encode meaning in distinct ways, they sometimes express distinct aspects of speakers' thoughts. Consider a child solving a Piagetian conservation of number task involving two identical rows of six checkers each. The experimenter spreads one row of checkers apart so that it is longer than the other row, and asks the child to judge whether the rows have the same or a different number of checkers. Young children often believe that the longer row has more checkers. When asked to explain this judgment, children often express information about the length of the transformed row in both gesture and speech, for example, by saying, "This row is long," while demarcating the ends of the row in gestures. Responses of this sort, in which speech and gesture convey information that is largely redundant, were termed gesture-speech matches by Church and Goldin-Meadow (1986). Other children express information in gestures that differs from the information they express in the accompanying speech. Consider a child who offers a similar justification for the number conservation task in speech, but expresses different information in gesture.
For example, the child might say, "This row is long" while producing a gesture that mimes spreading the row apart. In this case, the child expresses one piece of information in speech (i.e., the row is long) and another piece of information in gesture (i.e., the checkers were spread out). Responses of this sort, in which speech and gesture convey different information, were termed gesture-speech mismatches by Church and Goldin-Meadow (1986). Note that the pieces of information expressed in gesture and speech in this response do not actually conflict – they mismatch in the sense that they express different, though related, aspects of the task at hand. Since Church and Goldin-Meadow's pioneering work, gesture-speech mismatches have been documented in children's explanations of a wide array of cognitive tasks,


including mathematical equations (Perry, Church, and Goldin-Meadow 1988) and problems about balance (Pine, Lufkin, and Messer 2004), among others. More compellingly, it has been shown across tasks that children who frequently produce gesture-speech mismatches are particularly likely to profit from instruction about those tasks (Goldin-Meadow and Singer 2003; Perry et al. 1988; Pine et al. 2004). Thus, children who produce many gesture-speech mismatches in their explanations of conservation tasks are particularly likely to benefit from instruction about conservation. Why are gesture-speech mismatches associated with receptiveness to instruction? In producing mismatches, children simultaneously activate and express multiple ideas about the task at hand – ideas that, over developmental time or with appropriate instruction, may become integrated into a more stable, more advanced knowledge state. Once this integration occurs, children tend to express only a single idea about the task at hand, and consequently produce gesture-speech matches (Alibali and Goldin-Meadow 1993). From this perspective, the information children express in gesture in gesture-speech mismatches reflects real knowledge that they possess about the task at hand (Garber, Alibali, and Goldin-Meadow 1998) – knowledge that is ripe for integration into a more advanced knowledge state. This knowledge appears to be implicit, as it tends not to be expressed in speech at that point in time, even across the entire set of responses that a child provides (Goldin-Meadow, Alibali, and Church 1993). Thus, children produce gestures when they speak about their knowledge, and these gestures reveal important information about the content and stability of their knowledge. But are gestures simply a window on children's thinking, or do they play a more integral role? A large body of research has shown that action matters for thinking and learning.
Thus, it seems likely that, as a form of action, gestures also affect thinking and learning, and in so doing, gestures may influence the course of cognitive development.

4. Gesture can play a role in cognitive change

Recent studies have addressed a number of possible mechanisms by which gesture could influence learners' knowledge. One possibility is that producing gestures may influence the information that children encode and represent when solving problems. In some cases, children appear to use gestures to "explore" the task at hand, and the information they "discover" using gestures may be incorporated into their reasoning about the tasks (Alibali et al. 2014). For example, in formulating an explanation for a Piagetian conservation of liquid quantity task, a child may produce gestures that represent several of the perceptual features of the task (e.g., the height, width, and water level of a glass of water), before settling on which of these features to verbalize. From this perspective, the child's gestures are a form of action that may serve to increase activation of certain pieces of information, or may even bring new information into the cognitive system. In support of this view, people reason differently about tasks when gesture is allowed and when it is prohibited. For example, adults use a different mix of strategies to solve problems about gear movements when allowed to gesture and when prohibited from gesturing (Alibali et al. 2011). With gesture allowed, adults often simulated the actions of the gears; with gesture prohibited, they more often focused on the number of gears. As a second example, children express different sorts of information in their explanations of conservation tasks when allowed to gesture and when prohibited from gesturing (Alibali and Kita 2010). Compared to children who were prohibited from gesturing,


children who were allowed to gesture were more likely to express information about the immediate perceptual state of the task objects, and less likely to express information that was not perceptually present at the moment of explanation. These studies suggest that producing gestures highlights or lends salience to simulated actions and perceptual states. Another possibility is that learners' gestures may be a means by which they manage the working memory demands of cognitive tasks. Some evidence suggests that producing gestures may actually lighten the cognitive load involved in explanation. Goldin-Meadow et al. (2001) investigated this issue by examining how gesture on a primary task (explaining a math problem) affected performance on a secondary task (remembering a list of words or letters) performed at the same time. Children (and adults as well) remembered more words or letters when they gestured during their math problem explanations than when they did not (see also Wagner et al. 2004). Gestures appeared to reduce the cognitive demand of the explanation task, so participants could allocate more resources to the memory task when they produced gestures. Other studies have yielded findings compatible with the claim that people use gesture to manage working memory demands. For example, children perform better on counting tasks when they are allowed to use gesture than when they are not (Alibali and DiRusso 1999). It seems likely that keeping track of counted objects demands fewer cognitive resources when it is accomplished with external actions (such as gesture) than when it must be done internally. More generally, it may be the case that externalizing information in gesture requires fewer resources than maintaining such information internally, and this may be the reason why producing gestures allows children to conserve cognitive resources. If children gesture when performing a task, they may have more resources available for learning about the task.
A third possibility is that learners’ gestures may signal to other people (such as parents or teachers) that the learners are ready for certain types of input. If others can detect and interpret learners’ gestures, they may then offer input that is tailored to the learners’ needs. In this way, gestures can engage social mechanisms of change. There is growing evidence for each of the steps in this hypothesized social pathway of change. Many studies have shown that listeners glean information from gestures (see Hostetter 2011). Matching gestures improve listeners’ uptake of the accompanying speech, relative to no gestures (e.g., McNeil, Alibali, and Evans 2000). More crucially, people also detect and interpret children’s mismatching gestures, and they sometimes incorporate the information children express uniquely in gestures into their interpretations of the children’s speech (Alibali, Flevares, and Goldin-Meadow 1997; Church, Kelly, and Lynch 2000; Goldin-Meadow, Wein, and Chang 1992). If observers detect information that children express in gestures, they may adjust their interactions with children on the basis of this information. They may even provide critical information that will help children progress to a more advanced understanding. Relatively little research has addressed this question; however, one study has shown that adults do alter their instructional input to children on the basis of the children’s gestures. Goldin-Meadow and Singer (2003) asked adults to tutor children about mathematical equations, and compared the instruction they provided to children who frequently produced mismatching gestures at pretest, and children who seldom did so. The adults provided a wider range of strategies and more instructions that contained two strategies (one in gesture and one in speech) to children who produced mismatching gestures at

1838

pretest. Thus, by communicating information in their gestures, children contributed to shaping their own learning environments. Given that gesture plays an integral role in communication, it seems likely that the gestures of parents and teachers may also be important for promoting (or possibly hindering) children's learning. A growing body of research has yielded compelling evidence that teachers' gestures make a difference for students' learning (e.g., Church, Ayman-Nolley, and Mahootian 2004; Singer and Goldin-Meadow 2005; Valenzeno, Alibali, and Klatzky 2003). Instructional gestures promote children's comprehension of instructional speech, guide children's encoding of visual and spatial material, and help students make connections between related ideas (Alibali and Nathan 2007; Alibali, Nathan, and Fujimori 2011; Richland and McDonough 2010).

5. Conclusion

In sum, children's gestures provide a window on their cognition, and they also have a hand in shaping the path of cognitive development. Gestures can reveal information that children do not express in speech – important information about the stability and content of their thoughts. However, gesture is more than a simple window – producing gestures actually changes the course of children's thinking. Gesture production highlights perceptual and action information, helps children to manage working memory demands, and communicates to listeners that children are ready for certain types of instructional input. Thus, a complete understanding of children's cognitive development will require a deeper understanding of children's gestures – how and why they are produced, and how they manifest and affect thought. More broadly, a deeper understanding of children's gestures will also contribute to understanding of the role of action in cognition.

6. References

Acredolo, Linda P. and Susan W. Goodwyn 1988. Symbolic gesturing in normal infants. Child Development 59(2): 450–466.
Acredolo, Linda P. and Susan W. Goodwyn 1996. Baby Signs: How to Talk with Your Baby before Your Baby Can Talk. Chicago, IL: Contemporary Books, Inc.
Alibali, Martha W., Ruth Breckinridge Church, Sotaro Kita and Autumn B. Hostetter 2014. Embodied knowledge in the development of conservation of quantity: Evidence from gesture. In: Laurie Edwards, Francesca Ferrara and Deborah Moore-Russo (eds.), Emerging Perspectives on Gesture and Embodiment in Mathematics. Charlotte, NC: Information Age Press.
Alibali, Martha W. and Alyssa A. DiRusso 1999. The function of gesture in learning to count: More than keeping track. Cognitive Development 14(1): 37–56.
Alibali, Martha W. and Lisa S. Don 2001. Children's gestures are meant to be seen. Gesture 1(2): 113–127.
Alibali, Martha W., Lucia Flevares and Susan Goldin-Meadow 1997. Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology 89(1): 183–193.
Alibali, Martha W. and Susan Goldin-Meadow 1993. Transitions in learning: What the hands reveal about a child's state of mind. Cognitive Psychology 25(4): 468–523.
Alibali, Martha W. and Sotaro Kita 2010. Gesture highlights perceptually present information for speakers. Gesture 10(1): 3–28.
Alibali, Martha W. and Mitchell J. Nathan 2007. Teachers' gestures as a means of scaffolding students' understanding: Evidence from an early algebra lesson. In: Ricki Goldman, Roy Pea,

Brigid Barron and Sharon J. Derry (eds.), Video Research in the Learning Sciences, 349–365. Mahwah, NJ: Erlbaum.
Alibali, Martha W. and Mitchell J. Nathan 2012. Embodiment in mathematics teaching and learning: Evidence from students' and teachers' gestures. Journal of the Learning Sciences 21(2): 247–286.
Alibali, Martha W., Mitchell J. Nathan and Yuka Fujimori 2011. Gesture in the mathematics classroom: What's the point? In: Nancy Stein and Stephen Raudenbush (eds.), Developmental and Learning Sciences Go To School, 219–234. New York: Taylor and Francis.
Alibali, Martha W., Robert C. Spencer, Lucy Knox and Sotaro Kita 2011. Spontaneous gestures influence strategy choices in problem solving. Psychological Science 22(9): 1138–1144.
Bates, Elizabeth 1976. Language and Context. New York: Academic Press.
Bates, Elizabeth, Laura Benigni, Inge Bretherton, Luigia Camaioni and Virginia Volterra 1979. The Emergence of Symbols: Cognition and Communication in Infancy. New York: Academic Press.
Bertenthal, Bennett I., Joseph J. Campos and Rosanne Kermoian 1994. An epigenetic perspective on the development of self-produced locomotion and its consequences. Current Directions in Psychological Science 3(5): 140–145.
Bidell, Thomas R. and Kurt W. Fischer 1992. Beyond the stage debate: Action, structure, and variability in Piagetian theory and research. In: Robert J. Sternberg and Cynthia A. Berg (eds.), Intellectual Development, 100–140. Cambridge, NY: Cambridge University Press.
Bruner, Jerome S. 1966. Toward a Theory of Instruction. Cambridge, MA: Belknap Press of Harvard University Press.
Campos, Joseph J., David I. Anderson, Marianne A. Barbu-Roth, Edward M. Hubbard, Matthew J. Hertenstein and David Witherington 2000. Travel broadens the mind. Infancy 1(2): 149–219.
Church, Ruth Breckinridge, Saba Ayman-Nolley and Shahrzad Mahootian 2004. The role of gesture in bilingual education: Does gesture enhance learning?
International Journal of Bilingual Education and Bilingualism 7(4): 303–319.
Church, Ruth Breckinridge and Susan Goldin-Meadow 1986. The mismatch between gesture and speech as an index of transitional knowledge. Cognition 23(1): 43–71.
Church, Ruth Breckinridge, Spencer D. Kelly and Katherine Lynch 2000. Immediate memory for mismatched speech and representational gesture across development. Journal of Nonverbal Behavior 24(2): 151–174.
Crowder, Elaine 1996. Gestures at work in sense-making science talk. Journal of the Learning Sciences 5(3): 173–208.
Garber, Philip, Martha W. Alibali and Susan Goldin-Meadow 1998. Knowledge conveyed in gesture is not tied to the hands. Child Development 69(1): 75–84.
Garber, Philip and Susan Goldin-Meadow 2002. Gesture offers insight into problem solving in adults and children. Cognitive Science 26(6): 817–831.
Goldin-Meadow, Susan, Martha W. Alibali and Ruth Breckinridge Church 1993. Transitions in concept acquisition: Using the hand to read the mind. Psychological Review 100(2): 279–297.
Goldin-Meadow, Susan, Howard Nusbaum, Spencer D. Kelly and Susan M. Wagner 2001. Explaining math: Gesturing lightens the load. Psychological Science 12(6): 516–522.
Goldin-Meadow, Susan and Melissa A. Singer 2003. From children's hands to adults' ears: Gesture's role in the learning process. Developmental Psychology 39(3): 509–520.
Goldin-Meadow, Susan, Debra Wein and Cecilia Chang 1992. Assessing knowledge through gesture: Using children's hands to read their minds. Cognition and Instruction 9(3): 201–219.
Goodwyn, Susan W. and Linda P. Acredolo 1993. Symbolic gesture versus word: Is there a modality advantage for onset of symbol use? Child Development 64(3): 688–701.
Hostetter, Autumn B. 2011. When do gestures communicate? A meta-analysis. Psychological Bulletin 137(2): 297–315.
Hostetter, Autumn B. and Martha W. Alibali 2008. Visible embodiment: Gestures as simulated action. Psychonomic Bulletin and Review 15(3): 495–514.


Iverson, Jana M., Olga Capirci and M. Cristina Caselli 1994. From communication to language in two modalities. Cognitive Development 9: 23–43.
Iverson, Jana M. and Susan Goldin-Meadow 2005. Gesture paves the way for language development. Psychological Science 16(5): 367–371.
Kita, Sotaro and Aslı Özyürek 2003. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48(1): 16–32.
McNeil, Nicole M., Martha W. Alibali and Julia L. Evans 2000. The role of gesture in children's comprehension of spoken language: Now they need it, now they don't. Journal of Nonverbal Behavior 24: 131–150.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago, IL: University of Chicago Press.
Perry, Michelle, Ruth Breckinridge Church and Susan Goldin-Meadow 1988. Transitional knowledge in the acquisition of concepts. Cognitive Development 3(4): 359–400.
Piaget, Jean 1952. The Origins of Intelligence in Children. New York: International University Press. First published [1936].
Pine, Karen J., Nicola Lufkin and David Messer 2004. More gestures than answers: Children learning about balance. Developmental Psychology 40(6): 1059–1067.
Richland, Lindsey E. and Ian M. McDonough 2010. Learning by analogy: Discriminating between potential analogs. Contemporary Educational Psychology 35(1): 28–43.
Singer, Melissa A. and Susan Goldin-Meadow 2005. Children learn when their teacher's gestures and speech differ. Psychological Science 16(2): 85–89.
Smith, Linda B. 2005. Action alters shape categories. Cognitive Science 29(4): 665–679.
Sommerville, Jessica A., Amanda L. Woodward and Amy Needham 2005. Action experience alters 3-month-old infants' perception of others' actions. Cognition 96(1): B1–B11.
Soska, Kasey C., Karen E. Adolph and Scott P. Johnson 2010.
Systems in development: Motor skill acquisition facilitates three-dimensional object completion. Developmental Psychology 46(1): 129–138.
Valenzeno, Laura, Martha W. Alibali and Roberta L. Klatzky 2003. Teachers' gestures facilitate students' learning: A lesson in symmetry. Contemporary Educational Psychology 28(2): 187–204.
Wagner, Susan M., Howard Nusbaum and Susan Goldin-Meadow 2004. Probing the mental representation of gesture: Is handwaving spatial? Journal of Memory and Language 50(4): 395–407.
Werner, Heinz and Bernard Kaplan 1964. Symbol Formation: An Organismic-Developmental Approach to Language and Expression of Thought. London: John Wiley and Sons.

Martha W. Alibali, Madison (USA)


139. Embodied cognition and word acquisition: The challenge of abstract words

1. Introduction
2. Embodiment and grounding
3. Acquisition
4. Representation
5. Linguistic diversity
6. Conclusion
7. References

Abstract

The chapter outlines a theoretical proposal on abstract concepts and words, called WAT: Words As social Tools. The proposal has four central principles: 1) both concrete and abstract concepts are embodied and grounded; 2) linguistic mediation and social influence are more crucial for acquiring abstract than concrete words; 3) abstract concepts activate linguistic brain areas more than concrete concepts do; 4) linguistic variability affects abstract concepts more than concrete ones. The proposal is presented in light of recent supporting evidence.

1. Introduction

According to embodied and grounded theories, the bodily control systems constrain cognitive processes; hence, cognitive processes cannot be explained without considering the body's contribution. In keeping with the idea that some systems, such as the action system, are re-used at a higher hierarchical level (Anderson 2010), embodied and grounded views hold that language is "grounded" in the sensorimotor system. While it is not difficult for such a theory to account for the representation of concrete concepts and words, such as "bottle", many problems arise when considering abstract concepts and words such as "fantasy" or "truth", which do not have a single, concrete object as referent. As recognized by both proponents and opponents of the embodied and grounded views, explaining how abstract concepts and words are represented constitutes a real challenge for embodied and grounded cognition (Arbib 2008). The aim of this chapter is to outline and defend a theoretical proposal on abstract concepts and words, called WAT: Words As social Tools. The proposal has four central tenets: both concrete and abstract concepts and words are grounded in perception, action, and emotional systems (embodiment and grounding principle); linguistic mediation and social influence are more crucial for the acquisition of abstract concepts and words than of concrete ones (acquisition principle); the way in which abstract and concrete concepts and words are represented in the brain reflects their different modalities of acquisition, so that both are grounded, but the former activate the linguistic system more strongly (representation principle); and, due to the importance of language for the acquisition of abstract concepts and words, linguistic differences affect abstract concepts and words more than concrete ones (linguistic diversity principle).
In presenting the proposal, I will discuss supporting evidence obtained in our lab and in others.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1841–1848


IX. Embodiment

2. Embodiment and grounding

Concrete concepts and words are grounded in perception, action, and emotional processes. This assumption is shared by all embodied and grounded theories, independently of whether they promote forms of weak or strong embodiment. Whether this is true also for abstract concepts and words, however, is debated. Within embodied and grounded views, different positions can be identified (for reviews see Borghi and Pecher 2012; Pecher, Boot, and van Dantzig 2011). According to some views, there is no principled difference between concrete and abstract concepts. Other views posit that abstract concepts and words are less semantically rich than concrete concepts and words. For example, according to Paivio (1986) they derive their meanings from associations with concrete concepts and words, since, differently from concrete concepts and words, they are represented only propositionally and not analogically. The influential Conceptual Metaphor Theory (Lakoff and Johnson 1980) states that abstract concepts and words derive their meanings metaphorically from their concrete counterparts – for example, the abstract concept and word "category" is comprehended by reference to the concrete concept and word "container" (Boot and Pecher 2011). Unlike Paivio's view, however, this view does not relate strongly to brain processes, since its supporting evidence has mainly been obtained in psychological and linguistic studies. Further embodied and grounded views posit a strong involvement of the motor system, and of its predictive role, for both concrete and abstract concepts and words (Glenberg and Gallese 2012; Guan et al. 2013). Other theories stress the fact that for abstract concepts and words certain kinds of content play a major role: emotions (Kousta et al.
2011), introspective features and situations (Barsalou and Wiemer-Hastings 2005), exemplifications (Borghi, Caramelli, and Setti 2005), and different dimensions of perceptual strength (Connell and Lynott 2012). All these theories share the problem of generalization: They might be effective for a subset of abstract concepts and words, but not for all. It is quite possible that subdomains of abstract concepts and words differ in the content they elicit, and that fine-grained analyses of the contents of different abstract concepts and words are required. As an example, Roversi, Borghi, and Tummolini (2013) found with a feature production task that institutional artifacts, be they concrete or abstract (e.g., check; ownership), elicited exemplifications, probably necessary to ground them. Abstract social entities (e.g., friendship), instead, elicited situations and mental associations. At the same time, it would be important to develop a theory of abstract concepts and words that is sufficiently general. In order to do so, I believe two aspects that current theories have at least partially neglected should be considered: the role of conceptual acquisition and the role of the social dimension.

3. Acquisition

I propose that concrete concepts and words and abstract concepts and words are acquired in different ways (for details, see Borghi and Binkofski 2014; Borghi and Cimatti 2009, 2012). Learning, and language learning in particular, are social phenomena. However, the contribution of others – that is, the role of the social dimension – is more crucial for abstract concepts and words than for concrete concepts and words, since with the former it is


necessary to rely more on other people's knowledge. Consider for example the abstract concept and word "democracy": To represent it, we may access a series of visual scenes, but also recall and rely on the opinions of authoritative people (Prinz 2002). Abstract concepts and words do not have a single referent, but activate a sparse variety of situations, mental states, and events – and language can be crucial for holding them together. The literature on Mode of Acquisition (Wauters et al. 2003) demonstrates this: Learning of concrete concepts and words such as "book" is mainly perceptual, since children hear the word in the presence of its referent. Abstract concepts such as "grammar", instead, are acquired mainly through linguistic explanations. In a recent study with adult subjects we mimicked the acquisition of concrete and abstract concepts and words (Borghi et al. 2011). Participants either manipulated novel objects (concrete concepts and words) or observed groups of objects moving and interacting in novel ways (abstract concepts and words). The underlying idea was that, differently from concrete concepts and words, abstract concepts and words do not refer to single objects but to complex interactions; they are not grouped into categories on the basis of perceptual similarities, and they are not manipulated during acquisition. Because their referents are diverse, having a unifying label might be crucial. We tested whether participants were able to form categories independently of language; we then verified whether being told the category name and having its meaning explained modified the learned categories. Abstract concepts and words were more difficult to learn, as happens in experiments with real concepts. Moreover, participants produced more perceptual properties with concrete concepts and words, as in real feature listing tasks.
The most important result was obtained in a property verification task in which participants could respond either with the hand, by pressing a key on the keyboard, or with the mouth, by saying "yes" into a microphone. Responses to abstract concepts and words were faster with the microphone, responses to concrete concepts and words with the keyboard. Importantly, the difference was more marked when the meaning of the words was explained. These results support the Words As social Tools proposal. They reveal that the acquisition modality of abstract and concrete concepts and words influences their representation, and they indicate that linguistic information is more important for abstract concepts and words than for concrete concepts and words. At the same time, they suggest that linguistic information does not suffice to represent abstract concepts and words: other sensorimotor information is crucial as well. Indeed, a control experiment revealed that the effect was not present when linguistic and perceptual information conflicted. Hence, the results disconfirm non-embodied multiple representation theories according to which, differently from concrete concepts and words, abstract concepts and words are not grounded (Dove 2011).

4. Representation

I propose that, due to their different modalities of acquisition, concrete and abstract concepts and words are represented differently in the brain: Both activate sensorimotor and linguistic networks, but the latter play a major role for abstract concepts and words. Behavioral and neural (Transcranial Magnetic Stimulation and fMRI) evidence collected in our lab supports this hypothesis (Sakreida et al. 2013; Scorolli et al. 2011, 2012). In three studies we used the same material, consisting of four kinds of phrases: two


compatible combinations (abstract verb and noun, concrete verb and noun), and two mixed combinations (abstract verb + concrete noun, concrete verb + abstract noun): for example, to describe an idea/a flower, to grasp a flower/an idea. Concrete verbs were action related and abstract verbs were not; concrete nouns referred to graspable objects, abstract nouns to non-graspable entities. In the behavioral and Transcranial Magnetic Stimulation studies, participants read phrases composed of a verb followed by a noun and evaluated whether they made sense by producing a motor response. Transcranial Magnetic Stimulation single pulses were delivered 250 ms after each word presentation. In both studies, response time analyses showed an advantage of compatible over incompatible combinations, in line with the idea that abstract and concrete concepts and words are processed in parallel systems, the motor and the linguistic one (see also Barsalou et al. 2008). Transcranial Magnetic Stimulation results showed an early activation of the motor system in phrases with concrete verbs. Analysis of motor evoked potentials of the hand muscles revealed that, in contrast with phrases containing concrete verbs, those with abstract verbs elicited larger peak-to-peak motor evoked potential amplitudes with a late pulse than with an early one. This result suggests that abstract concepts and words are also grounded. Moreover, it invites the speculation that the effect is due to an early activation of mouth-related motor areas with abstract concepts and words, which has a delayed effect on the nearby hand-related motor areas. The results of the fMRI study further support this interpretation. Both concrete and abstract phrases activated the core areas of the sensorimotor neural network, thus confirming that both concrete and abstract concepts and words are grounded.
In addition, concrete phrases activated the left frontopolar/orbitofrontal cortex and the right frontal operculum, whereas abstract phrases activated areas within the language processing system, such as the anterior middle temporal gyrus bilaterally and the left posterior supramarginal gyrus. Overall, the behavioral data, the Transcranial Magnetic Stimulation study, and the fMRI study confirm that, while both concrete and abstract concepts and words are grounded in the sensorimotor system, linguistic areas are recruited more for the processing of abstract concepts and words than of concrete ones. Further behavioral support for the Words As Social Tools proposal, according to which linguistic information is more crucial for the processing of abstract concepts and words, is provided by Recchia and Jones (2012). The authors analyzed the effects of three different measures of semantic richness on lexical decision and naming tasks. They found that a rich linguistic context, indexed by a high number of semantic neighbors, facilitated the processing of abstract concepts and words, while the number of features, but not the number of semantic neighbors, facilitated the processing of concrete concepts and words.
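The notion of a "semantic neighborhood" invoked here can be made concrete with a toy sketch: count the words whose distributional vectors are sufficiently similar to a target word's vector. Everything below (the vectors, the threshold, the function names) is invented for illustration and is not Recchia and Jones's actual measure:

```python
# Toy illustration of a "semantic neighborhood" count, one of the
# semantic-richness measures discussed above: the number of words whose
# distributional vector is sufficiently similar to the target's.
# Vectors and threshold are invented for illustration.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def neighbor_count(word, vectors, threshold=0.5):
    """Count words whose similarity to `word` exceeds `threshold`."""
    target = vectors[word]
    return sum(1 for other, vec in vectors.items()
               if other != word and cosine(target, vec) > threshold)

# Tiny made-up distributional space.
vectors = {
    "idea":    [0.90, 0.10, 0.20],
    "thought": [0.80, 0.20, 0.30],
    "notion":  [0.85, 0.15, 0.25],
    "flower":  [0.10, 0.90, 0.10],
}
print(neighbor_count("idea", vectors))    # -> 2 ("thought" and "notion")
print(neighbor_count("flower", vectors))  # -> 0
```

On this toy measure, the abstract word has a denser linguistic neighborhood than the concrete one, mirroring the kind of contrast the authors exploit.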

5. Linguistic diversity

A natural consequence of the fact that linguistic information plays a major role in the representation of abstract concepts and words is that their meaning should be more variable across cultures and languages than that of concrete concepts and words. In keeping with this idea, Gentner and Boroditsky (2001) distinguished between cognitive and linguistic dominance. Cognitive dominance refers to words in which the sensorimotor basis prevails, such as concrete nouns; linguistic dominance instead concerns words, such as determiners and conjunctions, for the formation of which language plays a major role. Below I will discuss recent evidence showing that, while the meaning of concrete concepts and words varies little with the spoken language, for the meaning of abstract concepts and words the story is completely different. For reasons of space, I will limit the analysis to a few examples. In a seminal study on categorization, Malt et al. (1999) asked Chinese, Spanish, and English speakers to label containers and to sort them. Despite the great variety in the naming patterns, similarity judgments were consistent across groups and not strongly influenced by linguistic variation. Evidence of a dissociation between experience and naming patterns is not limited to the domain of concrete objects. Further examples are given by motion and locomotion verbs (for a review, see Malt, Gennari, and Imai 2010). Even if English and Spanish motion verbs encode the manner and the path of motion differently, this does not influence memory differently (Gennari et al. 2002). A similar case is given by locomotion verbs. An analysis of different languages (English, Japanese, Spanish, and Dutch) found that beyond two broad categories formed on the basis of biomechanical constraints, to walk and to run, there is room for variation, since every language partitions locomotion events into different sub-categories (e.g., the English words “jog”, “run”, and “sprint” correspond to a single Japanese word). Overall, these data suggest that, when the stimulus space has a precise structure, there is less room for influence due to language diversity. This is often not the case for abstract concepts and words, where different languages partition the stimulus space in different ways. One example concerns the abstract concept and word of time, and its relation to space.
Boroditsky (2001) hypothesized that the spatial metaphors for time characterizing languages such as Chinese and English influence time representation: She demonstrated that Chinese and English speakers organize the timeline along a vertical vs. a horizontal dimension, respectively. Further recent evidence shows that the spatial organization linked to different writing directions influences the way in which time is organized: past on the left and future on the right in Western cultures, but not in Eastern ones. Overall, the idea that the abstract domain of time can be conceptualized in terms of the more concrete domain of space has received a lot of experimental support (for a review, see Bonato, Zorzi, and Umiltà 2012). Whether this is due to the different metaphors characterizing each culture or to the influence of writing directions, what counts is that abstract concepts and words such as time are highly sensitive to the cultural and linguistic milieu. A further example is given by mental state concepts: Goddard (2010) showed that, beyond a limited number of meanings which are common across languages, i.e., think, feel, want, and know, the majority of words concerning emotion and language are language specific: For example, the English words “sad” and “unhappy” have no corresponding concept in Chinese, which distinguishes between “fatalistic sadness”, “confused sadness/melancholy”, and “ethical and altruistic grief”. In reporting these examples I do not intend to deny that the diversity of meanings across languages is pervasive, characterizing different domains: Language diversity affects concrete concepts and words as well as abstract ones. I simply intend to suggest that the influence of linguistic variability is stronger for abstract concepts and words than for concrete concepts and words.

6. Conclusion

Explaining abstract concepts and words represents a real challenge for embodied and grounded views. In order to account for them, in all their variety, I have tried to demonstrate that it is important to consider the developmental dimension, i.e., their acquisition modality, which is mainly linguistic and social. As to their representation in the brain, while both abstract and concrete concepts and words are embodied and grounded in the perception and action systems, the former activate linguistic brain areas more. Further research is needed to explore the hypothesis that their brain representation is due to their peculiar acquisition modality. Given that language counts more in the acquisition of abstract concepts and words, I have reported evidence consistent with the view that they are more affected by linguistic diversity. Further developmental, neural, and cross-cultural evidence is needed to better explore this fascinating area of human cognition.

Acknowledgements

Thanks to the emco-group (www.emco.unibo.it). Special thanks to Felice Cimatti, with whom we first sketched the WAT proposal, to Claudia Scorolli and Ferdinand Binkofski, with whom we refined it in light of experimental evidence, and to Luca Tummolini, with whom we further elaborated it. Thanks for discussions to Fabian Chersi, Cristiano Castelfranchi, Davide Marocco, Domenico Parisi, Lucia Riggio, and Corrado Roversi. Funding: FP7 project ROSSI: Emergence of communication in Robots through Sensorimotor and Social Interaction (Grant agreement n. 216125).

7. References

Anderson, Michael L. 2010. Neural reuse as a fundamental organizational principle of the brain. Behavioral and Brain Sciences 33(4): 245–266.
Arbib, Michael A. 2008. From grasp to language: Embodied concepts and the challenge of abstraction. Journal of Physiology 102(1): 4–20.
Barsalou, Lawrence W., Awa Santos, Kyle W. Simmons and Christine D. Wilson 2008. Language and simulations in conceptual processing. In: Manuel De Vega, Arthur M. Glenberg and Arthur C. Graesser (eds.), Symbols, Embodiment and Meaning, 245–283. Oxford: Oxford University Press.
Barsalou, Lawrence W. and Katja Wiemer-Hastings 2005. Situating abstract concepts. In: Diane Pecher and Rolf Zwaan (eds.), Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thought, 129–164. Cambridge, NY: Cambridge University Press.
Bonato, Mario, Marco Zorzi and Carlo Umiltà 2012. When time is space: evidence for a mental time line. Neuroscience Biobehavioral Review 36(10): 2257–2273.
Boot, Inge and Diane Pecher 2011. Representation of categories. Experimental Psychology 58(2): 162–170.
Borghi, Anna M. and Ferdinand Binkofski 2014. Words as Social Tools: An Embodied View on Abstract Words. Berlin/New York: Springer.
Borghi, Anna M., Nicoletta Caramelli and Annalisa Setti 2005. Conceptual information on objects’ locations. Brain and Language 93(2): 140–151.
Borghi, Anna M. and Felice Cimatti 2009. Words as tools and the problem of abstract words meanings. In: Niels Taatgen and Hedderik van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society, 2304–2309. Amsterdam: Cognitive Science Society.
Borghi, Anna M. and Felice Cimatti 2012. Words are not just words: the social acquisition of abstract words. RIFL 5: 22–37.
Borghi, Anna M., Andrea Flumini, Felice Cimatti, Davide Marocco and Claudia Scorolli 2011. Manipulating objects and telling words: A study on concrete and abstract words acquisition. Frontiers in Psychology 2: 15.


Borghi, Anna M. and Diane Pecher 2012. Special Topic on Embodied and Grounded Cognition. Lausanne: Frontiers.
Boroditsky, Lera 2001. Does language shape thought? English and Mandarin speakers’ conceptions of time. Cognitive Psychology 43(1): 1–22.
Connell, Louise and Dermot Lynott 2012. Strength of perceptual experience predicts word processing performance better than concreteness or imageability. Cognition 125(3): 452–465.
Dove, Guy 2011. On the need for embodied and disembodied cognition. Frontiers in Psychology 1: 242.
Gennari, Silvia, Steven Sloman, Barbara C. Malt and W. Tecumseh Fitch 2002. Motion events in language and cognition. Cognition 83(1): 49–79.
Gentner, Dedre and Lera Boroditsky 2001. Individuation, relativity and early word learning. In: Melissa Bowerman and Steven Levinson (eds.), Language Acquisition and Conceptual Development, 215–256. Cambridge, UK: Cambridge University Press.
Glenberg, Arthur M. and Vittorio Gallese 2012. Action-based language: A theory of language acquisition, comprehension, and production. Cortex 48(7): 905–922.
Goddard, Cliff 2010. Universals and variation in the lexicon of the mental state concepts. In: Barbara C. Malt and Phillip Wolff (eds.), Words and the Mind: How Words Capture Human Experience, 72–92. Oxford: Oxford University Press.
Guan, Qun Connie, Wanjin Meng, Ru Yao and Arthur M. Glenberg 2013. Motor system contribution to the comprehension of abstract language. PLoS ONE 8(9): e75183.
Kousta, Stavroula T., Gabriella Vigliocco, David P. Vinson, Mark Andrews and Elena Del Campo 2011. The representation of abstract words: Why emotion matters. Journal of Experimental Psychology: General 140(1): 14–34.
Lakoff, George and Mark Johnson 1980. Metaphors We Live By. Chicago, IL: University of Chicago Press.
Malt, Barbara C., Steven A. Sloman, Silvia Gennari, Meiyi Shi and Yuan Wang 1999. Knowing versus naming: Similarity and the linguistic categorization of artifacts. Journal of Memory and Language 40(2): 230–262.
Malt, Barbara C., Silvia Gennari and Mutsumi Imai 2010. Lexicalization patterns and the world to words mapping. In: Barbara C. Malt and Phillip Wolff (eds.), Words and the Mind: How Words Capture Human Experience, 29–57. New York: Oxford University Press.
Paivio, Allan 1986. Mental Representations: A Dual Coding Approach. Oxford University Press.
Pecher, Diane, Inge Boot and Saskia van Dantzig 2011. Abstract concepts: sensory-motor grounding, metaphors, and beyond. In: Brian H. Ross (ed.), The Psychology of Learning and Motivation, Volume 54, 217–248. Burlington: Academic Press.
Prinz, Jesse 2002. Furnishing the Mind: Concepts and Their Perceptual Basis. Cambridge, MA: MIT Press.
Recchia, Gabriel and Michael N. Jones 2012. The semantic richness of abstract concepts. Frontiers in Human Neuroscience 6: 315.
Roversi, Corrado, Anna M. Borghi and Luca Tummolini 2013. A poem is a hammer and a blank cheque: An experimental study on the categorization of artefacts. Review of Philosophy and Psychology 4(3): 527–542.
Sakreida, Katrin, Claudia Scorolli, Mareike M. Menz, Stefan Heim, Anna M. Borghi and Ferdinand Binkofski 2013. Are abstract action words embodied? An fMRI investigation at the interface between language and motor cognition. Frontiers in Human Neuroscience 7: 125.
Scorolli, Claudia, Ferdinand Binkofski, Giovanni Buccino, Roberto Nicoletti, Lucia Riggio and Anna M. Borghi 2011. Abstract and concrete sentences, embodiment, and languages. Frontiers in Psychology 2: 227.
Scorolli, Claudia, Pierre Jacquet, Ferdinand Binkofski, Roberto Nicoletti, Alessia Tessari and Anna M. Borghi 2012. Abstract and concrete phrases processing differently modulates cortico-spinal excitability. Brain Research 1488: 60–71.


Wauters, Loes N., Agnes E.J.M. Tellings, Wim H.J. Van Bon and A. Wouter Van Haaften 2003. Mode of acquisition of word meanings: The viability of a theoretical construct. Applied Psycholinguistics 24(3): 385–406.

Anna M. Borghi, Bologna and Rome (Italy)

140. The blossoming of children’s multimodal skills from 1 to 4 years old

1. Introduction
2. Background
3. Multimodal skills from 1 to 4
4. Conclusion
5. References

Abstract

Language acquisition is one of the first fields in which the multimodal aspects of language were illustrated. Spontaneous longitudinal interactional data were required to study productions in context, and the obvious role of action, gaze, gesture, facial expressions, and prosody in children’s first productions was underlined by a wealth of studies. However, most research focuses on children’s first symbolic gestures and on word-gesture combinations in the first stages of development. Yet children continue to use their bodies, and especially manual gestures, head shakes, and facial expressions, throughout the language acquisition process and become expert multimodal language users in face-to-face conversations. This short paper explores the blossoming of children’s multimodal skills through two exploratory studies, on children’s expression of negation and on pointing gestures, from one to four years old. Research based on video data of children’s daily interactions can demonstrate the progressive mastery of coordination between bodily action, gestures, and talk in conversation. Children can rely on the simultaneous use of the vocal and visual modalities to gradually become competent multimodal conversationalists.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1848–1857

1. Introduction

Gestures, verbal productions, signs, gaze, facial expressions, and postures are all part of our socially learned, inter-subjective communicative system. Human beings, with all their representational skills, combine modalities in order to share meaning, to refer to present and absent entities and events, and to express their projects, their desires, and their inner feelings. As McNeill pointed out, we might need to “broaden our concept of language” (1992: 2). Research on sign languages has helped to show how the visual modality can be used symbolically. Thanks to combinations of experimental and ecological studies, to video recordings, to specialized software, international databases, theoretical approaches


that include multimodality and multiple levels of analyses, and to rich collaborations among experts from several scientific fields, we now have the tools that help us create new methods to illustrate the fact that “vocal language is inherently multimodal” (Müller 2009: 216). “Utterance uses of visible bodily action” (Kendon 2004: 1–2), integrated with spoken expressions, form a tight partnership in adult interaction, in which they can either alternate or be complementary. The roots of this partnership have been illustrated in a wealth of research in language acquisition, one of the first fields in which the role of gestures has been analyzed in depth. However, most studies focus on the first stages of language development and on how gestures facilitate children’s entry into (i) the symbolic power of language, because they are enactive and iconic (Werner and Kaplan 1963), and (ii) syntax, through first word-gesture combinations. In this short essay, I would like to argue that gestures remain functional as hearing children progressively master speech. I will first give a brief overview of the background of my research. I will then use two studies to illustrate the blossoming of children’s multimodal skills after the first stages of language development: children’s multimodal expression of negation and a longitudinal case-study of a child’s pointing gestures, from one to four years old.

2. Background

2.1. Action, context, experience and the body in language acquisition

2.1.1. Language in action

Language – a social phenomenon – is captured, internalized, and reconstructed again and again by each individual child thanks to its transmission by care-givers in their daily interactions with their offspring. “Meaning comes about through praxis – in the everyday interactions between the child and significant others” (Budwig 2003: 108). Joint parent-child action/interaction provides the scaffold for children’s growing ability to grasp both what is happening around them and what is being said in the situation. They learn to understand language and action together, each providing support for the other. Following Vygotsky (1978), Duranti explains that language is “a mediating activity that organizes experience” (1984: 36), but of course experience is conversely a mediating activity that organizes language. To examine how children come to use language in general, one must examine the broader context in which the child experiences events and interaction: “Experience, even the drama of pain and suffering, lies outside, inside, and alongside enacted language as its indexical and phenomenological resource” (Ochs 2012: 156).

2.1.2. Language in context

The starting point of language acquisition scholars’ interest in gesture or visible bodily action could be summarized in de Laguna’s famous assertion that “in order to understand what the baby is saying you must see what the baby is doing” (1927: 91). Children’s productions are like evanescent sketches of adult language and can only be analyzed in


their interactional context, by taking into account shared knowledge, actions, manual gestures, facial expressions, body posture, head movements, and all types of vocal productions, along with the recognizable words used by children (Morgenstern and Parisse 2007; Parisse and Morgenstern 2010). Research in language acquisition has therefore developed the tools, methods, and theoretical approaches to analyze children’s multimodal productions in context since as early as the second half of the 19th century, through scientists’ observations of their own children. The detailed follow-ups of children’s language anchored in their daily lives are a source of fascinating links between motor and psychological development, cognition, affectivity, and language. The “founding fathers” of the study of child development and language had great intuitions about the importance of gestures and their relation to language. In his notes on his son’s development, Darwin (1877) stresses the importance of observing the transition from uncontrolled body movements to intentional gestures. Romanes (1889) compares human and animal gestures. He makes new observations on qualitative differences and mentions the “gestural language of deaf people” as a sign of the universality of symbolic gestures. Since the end of the 20th century, thanks to video data linked to transcripts with specialized software (CLAN, ELAN, PHON), detailed codings and analyses of multimodality have been possible and have opened whole new fields of research. This is especially the case for researchers who study language from a usage-based perspective in its natural habitat – discourse, daily conversations – “the prototypical kind of language usage, the form in which we are all first exposed to language – the matrix for language acquisition” (Levinson 1983: 284).
We are now able to document in detail how the visual and vocal modalities come together in a constant stream in daily interactions and progressively shape children’s language.

2.1.3. From sensorimotor schemas and motor representations to language

Zlatev (1997) suggests that sensorimotor schemas provide the “grounding” of language in experience and will then lead to children’s access to the symbolic function. Infants’ imitation and general production of gestures have indeed been studied as a prerequisite for constructing “pre-linguistic” concepts, as a pathway into the symbolic function of language, or as a bridge between language and embodiment. Gestures are viewed as representational structures that are constructed through imitation, that are enacted overtly, and that can be shared with others. Mimetic schemas for imitable actions, shared representations of objects that can be manipulated, ground the acquisition of children’s first gestures and first words or signs (Zlatev, Persson, and Gärdenfors 2005). Besides, evidence accumulated in neuroscience shows that language use engages motor representations (Glenberg and Kaschak 2003) and that, through complex imitation, manual-gestural communication in social interaction leads to language (Arbib 2012).

2.2. The facilitating role of gestures in language acquisition

Children’s neurological maturation enables them to control their bodily movements and transform them into gestures thanks to gradually finer motor skills. Some of these gestures are assigned meaning by their interlocutors. First gestures, just before the first birthday, are usually deictic: pointing at an object or waving an object to show it to the parent and attract joint attention. Pointing gestures in particular combine motor and cognitive prerequisites with the capacity to symbolize and to take up forms used by adults


in dialogue. As Tomasello and his colleagues underline, “pointing may thus represent a key transition, both phylogenetically and ontogenetically, from nonlinguistic to linguistic forms of human communication” (Tomasello, Carpenter, and Liszkowski 2007: 720). At around one year old, children produce representational gestures, using their entire body to imitate an animal for example. Children also start using gestures that reflect those in their input around the same period (Estigarribia and Clark 2007). They develop cognitive prerequisites that allow them to take up symbolic gestures from the environment, such as the “bye bye” gesture or the “itsy bitsy spider” routine. Gestures have been studied mostly either in the stage called “pre-linguistic”, when they are used in isolation, or when they are combined with words and are described as facilitating children’s access to first combinations. Synchrony and asynchrony have been presented as important features in multimodal multi-element communication. Kelly (2011) has observed in her data how children’s interaction skills unfold from communication in a single modality to multimodal synchronized communication. Goldin-Meadow and her colleagues have thoroughly investigated productions of gesture-speech combinations and their comprehension at the one-word stage and beyond (Goldin-Meadow 1999; Morford and Goldin-Meadow 1992; Özçalışkan and Goldin-Meadow 2005). They observe that children first use the two modalities to communicate about the same element, like holding up a cookie and saying “cookie” (Butcher and Goldin-Meadow 2000). Later on, speech and gesture together form an integrated system (Goldin-Meadow and Butcher 2003). Using two modalities for two different elements is described as preceding the onset of two-word speech.
The skills needed to express more than one element or aspect of an event in the same turn, as opposed to what Scollon (1976) calls “vertical constructions” (different elements expressed in two successive turns that are often united in parents’ reformulations), are necessary for children to be able to combine two words. The multifaceted character of an event is first expressed through two complementary modalities, with a gesture and a word referring to two different elements. There is little research on children’s gestures once they become more expert speakers, apart from very interesting studies on co-verbal gestures used, for example, to solve problems when they are quite a bit older (Church and Goldin-Meadow 1986; Goldin-Meadow et al. 2001). However, child language does not cease to be multimodal between two and seven years old. Children’s multimodal skills keep blossoming, and the conjoined use of the visual-manual and auditory-vocal channels is mastered progressively, with great individual differences between children.

3. Multimodal skills from 1 to 4

3.1. Data

In order to study the use of multimodal skills, video data is necessary. The camera should not only focus on the child but also capture the interlocutors, as is of course absolutely necessary in sign language interactions. Such video data is extremely rare because the focus on the multimodal aspects of spontaneous conversation is quite recent. The CoLaJE team therefore filmed a set of French children from 0 to 7 years old (the Paris Corpus), in one case with two cameras, in order to make fine-grained analyses of the gestural aspects


of the interaction. The Paris Corpus was financed by the French Agence Nationale de la Recherche, in the context of a research program titled ‘Communication Langagière Chez le Jeune Enfant’ (CoLaJE, 2009–2012, http://colaje.risc.cnrs.fr, where videos and transcriptions can be downloaded) and directed by the author. All the children live in Paris or in the surrounding suburbs. They have middle-class, college-educated parents, and were filmed at home about once a month for an hour in daily life situations (playing, taking a bath, having dinner). ELAN was the main software used to code gestures. Multimodal analyses require attention to so many details and expertise in so many fields that collective research is necessary to capture all the features of the integrative nature of language, with all its linguistic levels and its multichannel specificity.
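The rate measures reported later in the chapter (e.g., pointing gestures over the number of utterances per session) reduce to simple counts over coded annotations. The sketch below is hypothetical: the tuple format (tier, start ms, end ms, value) and the tier/value names are invented for illustration and do not reflect the actual CoLaJE/ELAN coding scheme:

```python
# Hypothetical sketch: a per-session gesture rate computed from
# ELAN-style annotation tuples (tier, start_ms, end_ms, value).
# Tier and value names are invented for illustration only.

def gesture_rate(annotations, gesture_tier="gesture",
                 utterance_tier="utterance", gesture_value="point"):
    """Return pointing gestures per 100 utterances for one session."""
    gestures = sum(1 for tier, _start, _end, value in annotations
                   if tier == gesture_tier and value == gesture_value)
    utterances = sum(1 for tier, _start, _end, _value in annotations
                     if tier == utterance_tier)
    if utterances == 0:
        return 0.0
    return 100.0 * gestures / utterances

# Toy session: three utterances, two accompanied by a pointing gesture.
session = [
    ("utterance", 0, 800, "ça"),
    ("gesture", 100, 600, "point"),
    ("utterance", 1000, 1900, "papa"),
    ("gesture", 1100, 1700, "point"),
    ("utterance", 2000, 2600, "non"),
]
print(round(gesture_rate(session), 1))  # -> 66.7
```

Plotting such per-session rates over age is the kind of longitudinal view shown in the figures that follow.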

3.2. Individual differences: the expression of negation

The CoLaJE team, the team that collected and analyzed the Paris Corpus, has constantly pointed out great disparities between the children’s language development (see for example Morgenstern 2009; Morgenstern and Parisse 2012a, b; Morgenstern et al. 2010). The study of the expression of negation is a privileged locus in which to combine multimodal analyses of gesture with prosody, syntax, semantics, and pragmatics. In our preliminary research on five CoLaJE children’s multimodal expression of negation (Blondel et al. 2012), we explored the status and evolution of gestures of negation produced with and without words, as well as the role of co-verbal gestures used in combination with negative verbal productions. Our main findings were the following. Interestingly enough, some children, but not all of them, followed a prototypical pathway:

(i) actions of avoidance and rejection mobilizing the whole body;
(ii) symbolic gestures of negation (mostly head-shakes);
(iii) gestures of negation used in combination with words;
(iv) negative utterances;
(v) negative utterances sometimes complemented by co-verbal gestures (negations but mostly other types of gestures).

The child who used gestures of negation in isolation most clearly, for the longest period, and who maintained them in combination with non (French)/no (Italian) for the longest time, was the bilingual French/Italian child, who took some time to master speech. The necessity of entering two languages at once might have an influence on the management of the visual-gestural and the auditory modalities. In his bilingual environment, gestures of negation were culturally the same in French and in Italian and were used both by his mother and his father. Besides, the headshake is one of the only frequent and clear gestures expressed with the head, the part of the body interlocutors constantly gaze at when they communicate (Zlatev and Andrén 2009). It might be a stable element to seize in his input and put to use efficiently in all circumstances. Besides, his 4-year-old brother is a very talkative little boy who invades the whole sound environment. Antoine might resort to gesture in order to communicate without interference. Gesture might therefore have a compensatory function for that little boy. It is a wonderful resource for communicating efficiently in his specific environment during his multimodal, multilingual entry into language.


Madeleine, the most linguistically precocious child in our dataset, did not use any gesture of negation at the beginning of the data. Her body movements did express avoidance, rejection, and refusal, but she started using the word non at one year old. Her first headshakes were co-verbal gestures, and we only found them in the data at four years old. All the hearing children we studied who had used symbolic gestures of negation at first stopped using them during the period when they entered speech, and until their speech was quite continuous, as if they needed to concentrate on one modality at a time in the acquisition process. However, the visual-gestural modality made a spectacular comeback in all five children’s data, with the use of co-verbal gestures of negation once speech seemed to be already quite elaborate. But do gestures in general disappear from children’s interactive communication between the first stages of language development and the mastery of speech? Madeleine’s data might give us some insight into her use of the visual-gestural modality. We will focus on the use and functions of pointing gestures in her data.

3.3. Madeleine’s use of pointing gestures

Our analyses of the data (Morgenstern et al. 2010) show that the vocal and gestural modalities are associated and complement each other from the very onset of pointing. We categorized all of Madeleine’s pointing gestures and the adults’ in order to analyze their quantity and functions from their “pre-linguistic” to their co-verbal uses.

Fig. 140.1: Rate of Madeleine and her mother’s pointing gestures over the number of utterances

As shown in Figure 140.1, the increase in Madeleine’s use of speech over pointing gestures is spectacular: the rate of her pointing gestures over the number of utterances is much higher at the beginning of the data, until she is about 2;0 (up to 93% at 1;02), and then stabilizes around 5 to 10% as of 1;06, which is quite close to her mother’s use. In a previous study, we showed that Madeleine’s uses of deictics are complemented by pointing gestures 100% of the time at the beginning of the data, and only 5% of the time at 2;0 (Mathiot et al. 2009). But the gross number of pointing gestures used in an hour is in fact still quite high at the end of the data: she produces 95 pointing gestures in one hour at 4;01.27, for example (Fig. 140.2). The variation is of course very


Fig. 140.2: Number of pointing gestures per hour in Madeleine’s data

much linked to situational factors (reading with her mother elicits a lot of pointing gestures). The functions of Madeleine’s pointing gestures diversify greatly over the course of the data. A more detailed analysis of Madeleine’s pointing gestures is presently being conducted by Dominique Boutet and the author. At first, pointing gestures are produced in isolation, with either a proto-declarative or a proto-imperative function. At around one year old, they begin to be complemented with vocal productions with the same overall functions. Around 1;06, pointing gestures are produced with deictics or nouns and clearly localize the objects shown or requested. The verbal productions simultaneous to pointing then become more and more complex: first with predicates, then with whole utterances. At 2;0, we find the first use of a pointing gesture with a totally different symbolic meaning, which can be glossed as “beware”: the index is held vertically in front of her chin, the tip at the height of her mouth, as she speaks to her doll, telling her faut pas attraper froid (‘you mustn’t catch cold’). She also starts pointing to absent entities. At 2;06, she points to several locations during her fictive narratives. She also starts using more diversified co-verbal gestures. At 3;0, her speech becomes extremely complex, with embedded clauses and a diversification of her tense system, and in parallel she goes through what McNeill (2005) calls “the gesture explosion”, with more and more co-verbal gestures. Interestingly enough, Madeleine enters a different stage around 3;06–4;0, when the functions of her pointing gestures become more and more diverse. For example, she points up her fingers to count the dolls she is talking about, but she also then uses her pointed fingers to embody/stand for the dolls themselves, as if they were classifiers in sign language. By the age of 4;0, her pointing gestures are integrated in fluid co-verbal gesturing.
Pointing can follow the rhythmic variation of her prosody: gestures and vocal productions are linked with great subtlety. She demonstrates excellent mastery of the location, orientation, and motion of her pointing gestures, which enables her to mark subtle differentiations of their functions. She uses pointing to refer to time-spans or to attenuate or suspend the predication she is making in speech. For example, as she wants to go get

140. The blossoming of children’s multimodal skills from 1 to 4 years old


a costume that is in her room and disguise herself, she forbids the observer who is filming her to come with her. She lifts up her left index finger near her chin as she says je dois chercher mon déguisement ('I must go get my costume'). She starts to walk towards her bedroom stealthily and her index finger continues to move upward, almost as if she were going to go shhh. We interpreted that co-verbal gesture as an attenuation of the assertive prohibition she targeted at the observer Martine, with whom she might not want to be that directive. It is a kind of modalization of the prohibition. There is a message in her whole behavior that seems to mean "beware", but she wants to be gentle about it. And she ends this scene by saying tu me suis pas hein? ('you're not following me, OK?'). Her very sophisticated gesturing therefore illustrates, specifies, reinforces, or modalizes the meanings of her vocal productions.

4. Conclusion

Gestures continue to enhance the blossoming of children's communication skills after the "pre-linguistic" period and the first gesture-word combinations. They are part of an intersubjective multimodal communicative system in which it becomes more and more difficult to tease gestures apart from speech. The performative, interactional, and sociocultural nature of language involves the cooperation of both modalities, with one constantly supporting, extending, or modifying the other. We need to understand not only how the vocal and visual modalities are each used more and more skillfully by children, thanks to adults' scaffolding in everyday-life interactions, but how the different channels and modalities work together. This perspective will give us better insight into how children become experts in face-to-face social interaction, which is necessarily multimodal in nature.

5. References

Arbib, Michael A. 2012. How the Brain Got Language: The Mirror System Hypothesis. Oxford/New York: Oxford University Press.
Blondel, Marion, Aliyah Morgenstern, Pauline Beaupoil, Sandra Benazzo, Dominique Boutet, Stéphanie Caët and Fanny Limousin 2011. The blossoming of negation in gesture, sign and vocal productions. Colloque international sur le langage de l'enfant. ADYLOC. Paris, June 2011.
Darwin, Charles 1877. A biographical sketch of an infant. Mind 2(7): 285–294.
Glenberg, Arthur M. and Michael P. Kaschak 2003. The body's contribution to language. In: Brian H. Ross (ed.), The Psychology of Learning and Motivation, Volume 43, 93–126. San Diego, CA: Academic Press.
Budwig, Nancy 2003. Context and the dynamic construal of meaning in early childhood. In: Catherine Raeff and Janette B. Benson (eds.), Social and Cognitive Development in the Context of Individual, Social, and Cultural Processes, 103–130. London/New York: Routledge.
Butcher, Cynthia and Susan Goldin-Meadow 2000. Gesture and the transition from one- to two-word speech: When hand and mouth come together. In: David McNeill (ed.), Language and Gesture, 235–257. Cambridge: Cambridge University Press.
Church, Ruth Breckinridge and Susan Goldin-Meadow 1986. The mismatch between gesture and speech as an index of transitional knowledge. Cognition 23(1): 43–71.
de Laguna, Grace Mead Andrus 1927. Speech: Its Function and Development. New Haven: Yale University Press.
Duranti, Alessandro 1984. Intentions, self and local theories of meaning: Words and social action in a Samoan context. Center for Human Information Processing, Report No. 122, La Jolla.


IX. Embodiment

Estigarribia, Bruno and Eve V. Clark 2007. Getting and maintaining attention in talk to young children. Journal of Child Language 34(4): 799–814.
Goldin-Meadow, Susan 1999. The role of gesture in communication and thinking. Trends in Cognitive Sciences 3(11): 419–429.
Goldin-Meadow, Susan and Cynthia Butcher 2003. Pointing toward two-word speech in young children. In: Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 85–106. Mahwah, NJ: Erlbaum.
Goldin-Meadow, Susan, Howard Nusbaum, Spencer D. Kelly and Susan M. Wagner 2001. Explaining math: Gesturing lightens the load. Psychological Science 12(6): 516–522.
Levinson, Stephen C. 1983. Pragmatics. Cambridge, UK: Cambridge University Press.
Mathiot, Emmanuelle, Marie Leroy, Fanny Limousin and Aliyah Morgenstern 2009. Premiers pointages chez l'enfant sourd-signeur et l'enfant entendant: deux suivis longitudinaux entre 7 mois et 1 an 7 mois. Aile-Lia 1: 141–168.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago, IL: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago, IL: University of Chicago Press.
Morford, Marolyn and Susan Goldin-Meadow 1992. Comprehension and production of gesture in combination with speech in one-word speakers. Journal of Child Language 19(3): 559–580.
Morgenstern, Aliyah 2009. L'Enfant dans la Langue. In collaboration with Sandra Benazzo, Marie Leroy, Emmanuelle Mathiot, Christophe Parisse, Anne Salazar Orvig and Martine Sekali. Paris: Presses de la Sorbonne Nouvelle.
Morgenstern, Aliyah, Stéphanie Caët, Marion Blondel, Fanny Limousin and Marie Leroy-Collombel 2010. From gesture to sign and from gesture to word: Pointing in deaf and hearing children. Gesture 10(2/3): 172–202.
Morgenstern, Aliyah and Christophe Parisse 2007. Codage et interprétation du langage spontané d'enfants de 1 à 3 ans. Corpus 6, Interprétation, contextes, codage: 55–78.
Morgenstern, Aliyah and Christophe Parisse 2012a. The Paris corpus. Journal of French Language Studies 22(1): 7–12.
Morgenstern, Aliyah and Christophe Parisse 2012b. Constructing "basic" verbal constructions: A longitudinal study of the blossoming of constructions with six frequent verbs. In: M. Bouveret and D. Legallois (eds.), Constructions in French, 127–154. Amsterdam: Benjamins.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), The Routledge Linguistics Encyclopedia, 214–217. London: Routledge.
Ochs, Elinor 2012. Experiencing language. Anthropological Theory 12(2): 142–160.
Özçalışkan, Şeyda and Susan Goldin-Meadow 2005. Gesture is at the cutting edge of early language development. Cognition 96(3): B101–B113.
Parisse, Christophe and Aliyah Morgenstern 2010. A multi-software integration platform and support for multimedia transcripts of language. LREC 2010, Proceedings of the Workshop on Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality, 106–110.
Romanes, Georges 1889. L'Évolution Mentale chez l'Homme: Origine des Facultés Humaines. Paris: Alcan. French translation [1891].
Scollon, Ron 1976. Conversations with a One-Year-Old: A Case Study of the Developmental Foundations of Syntax. Honolulu: University Press of Hawaii.
Tomasello, Michael, Malinda Carpenter and Ulf Liszkowski 2007. A new look at infant pointing. Child Development 78(3): 705–722.
Vygotsky, Lev S. 1978. Mind in Society. Cambridge, MA: Harvard University Press.
Werner, Heinz and Bernard Kaplan 1963. Symbol Formation: An Organismic-Developmental Approach to Language and the Expression of Thought. New York: John Wiley.
Zlatev, Jordan 1997. Situated Embodiment: Studies in the Emergence of Spatial Meaning. Stockholm: Gotab Press.


Zlatev, Jordan and Mats Andrén 2009. Stages and transitions in children's semiotic development. In: Jordan Zlatev, Mats Andrén, Marlene Johansson-Falck and Carita Lundmark (eds.), Studies in Language and Cognition, 380–401. Newcastle: Cambridge Scholars.
Zlatev, Jordan, Tomas Persson and Peter Gärdenfors 2005. Bodily Mimesis as "the Missing Link" in Human Cognitive Evolution. (LUCS 121.) Lund: Lund University Cognitive Studies.

Aliyah Morgenstern, Paris (France)

141. Gestures before language: The use of baby signs

1. Introduction
2. Gestures and their role in the stages of language acquisition
3. Investigating gestures and baby signs empirically: Methodological and theoretical approach
4. Combining gestures and baby signs: Results
5. Examples of gesture-baby sign combinations
6. Conclusion
7. References

Abstract

In the past 20 years, baby signing has become established as a new way of communicating between hearing parents and hearing toddlers, because baby signs allow toddlers to communicate well before acquiring a vocal language. Studies have shown that baby signs may have a positive influence on speech comprehension (e.g., Doherty-Sneddon 2008; Goodwyn, Acredolo, and Brown 2000). To date, however, baby signs have been studied on their own, independently of their communicative context; that is, the co-occurrence of baby signs and gestures has been neglected. This article investigates the combination of baby signs and gestures and gives a first description of this phenomenon. Based on five hours of video data, in which four families using baby signing in their everyday interaction were filmed in different situations (e.g., playing and eating), a recurring structure of combining baby signs with gestures, in particular with deictic gestures, was identified. Besides the fact that toddlers are able to utter multi-expressions under the age of 18 months, the study also revealed that toddlers' utterances show a more advanced semantic content. The chapter closes with some concluding remarks on possible lines of further research.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1857–1868

1. Introduction

Since the 1980s, the idea of supporting the language acquisition of hearing children by combining gestures and signs has been pursued. Researchers like Goodwyn and Acredolo (1985) realized that symbolic gestures facilitate the communication between caregivers and infants. Similarly, sign language interpreter Joseph Garcia (1999) created a procedure to teach hearing parents and their children how to use American Sign Language (ASL) to communicate in an early stage of language acquisition by using so-called


baby signs, and established the notion of baby signing, that is, the use of signs as a means of communication between hearing parents and their hearing toddlers. The idea of an early exchange between parents and their children through the use of baby signs soon enjoyed growing popularity, leading to a rapid growth in the development and teaching of baby signing in different countries over the last 30 years (Gericke 2009; Kiegelmann 2009; König 2010; Malottke 2011). Baby signs are signs that are predominantly based on country-specific signs of the respective sign language (e.g., American Sign Language, German Sign Language, or Dutch Sign Language), yet vary in their articulatory specificity, as they are tailored to infantile motor skills (see König 2010: 16; Malottke 2011: 13). Furthermore, in contrast to sign languages, baby signing is not a language but a conventionalized semiotic system understood as a means to facilitate pre-verbal communication (see Malottke 2011: 13). Baby signs are used simultaneously with spoken language and accompany the expression of crucial terms in order to reduce potential communicative difficulties. Since the 1980s, interest and studies referring to baby signs have increased steadily in various research fields, such as (psycho-)linguistics, pedagogy, and speech-language pathology. Until today, however, those studies have dealt either with hindering or with facilitating effects on language acquisition in the first years of life. Research in psychology and psycholinguistics (e.g., by Susan Goldin-Meadow) focuses on the use of gestures in the process of language acquisition, showing that gestures play a prominent role in language development. Based on these results, interest in studying manual utterances in children has increased steadily over the last few decades.
In so doing, studies have focused on the use of gestures especially in connection with spoken language (see for example Morgenstern this volume), but not with regard to (baby) signs. Potential combinations of gestures and baby signs have been mentioned (see Goodwyn and Acredolo 1998; Goodwyn, Acredolo, and Brown 2000), yet linguistic studies have not examined this phenomenon in detail. Until today, systematic linguistic research investigating the use of gestures and baby signing has been missing. Because baby signing has been regarded only as a means of facilitating communication between children and caregivers, the structural properties of the linear combination of gestures and baby signs have not yet been described. This is where the present chapter ties in, by presenting first results of a study on the use of gestures and baby signs, investigating the structural and functional characteristics of such linear combinations. Before presenting the study in detail, the chapter briefly discusses the role of gestures in language acquisition and then addresses the analytical and methodological approach. Based on the discussion of two examples, the chapter presents the functional characteristics of the use of gestures and baby signs in toddlers.

2. Gestures and their role in the stages of language acquisition

During speech development, children pass through various stages of multimodal language acquisition, such as the use of deictic gestures or one-word utterances. Within these stages, toddlers learn to communicate not only with speech but also with gestures. The use of gestures does not precede the acquisition of language; rather, it is tightly related to spoken language acquisition (see Goldin-Meadow 2009). Before children are able to express themselves verbally, the use of gestures allows them to refer to objects as well as persons (see Capirci et al. 2002). As a consequence of


their growing motor and cognitive skills around the first year of life, toddlers make use of gestural communication more frequently. At this point in their development, children perceive the realization of thoughts and needs through spoken language as inconvenient and complicated (see Iverson and Thelen 1999). Accordingly, children prefer to use gestures instead of words. Already before and around their first birthday they are able to use different kinds of gestures:

(i) Deictic gestures: flat hand or index finger; referring to objects or persons (see Capirci et al. 1996; Iverson, Capirci, and Caselli 1994; Iverson et al. 1999; Stefanini et al. 2009).
(ii) Performative gestures: flat hand; giving, showing, or requesting objects (see further Capirci et al. 1996; Iverson, Capirci, and Caselli 1994; Iverson et al. 1999; Stefanini et al. 2009).
(iii) Referential gestures: reference to objects and actions in the world (see Müller 1998, 2010), i.e., to aspects of objects or actions.
(iv) Conventional or emblematic gestures: defined by tradition and culture; arbitrarily selected; e.g., waving.

Gestures lead infants through several stages of first language acquisition and adopt a supporting function. They guide the child from the gestural expression to the one-word utterance and on to two-word sentences. Through the constant feedback of their caregivers, toddlers steadily receive verbal input and get the opportunity to acquire spoken language. By the age of three, children substitute a word for the gesture and use two-word utterances (see Goldin-Meadow 2009; Iverson and Goldin-Meadow 2005; Iverson, Capirci, and Caselli 1994; Rowe, Özçalışkan, and Goldin-Meadow 2008). The development of gestural communication and of gesture-word combinations is closely related to the formation of complex utterances. It represents the transition to an intensive phase of increasing speech comprehension and practicing spoken language.
At the age of two, infants improve their ability to combine various words, using combinations of two words such as Cat there!. From 18 months until the age of three, infants apply two-word utterances to express requests or rejections, to describe places, or to name actions. Furthermore, they combine interrogatives with yes/no-particles in order to receive feedback on their questions (see Andresen 2010: 790; Clark 2003; Klann-Delius 1999: 25; Szagun 2008: 66; Wode 1988: 227). In acquiring sign language, deaf children pass through the same stages as hearing toddlers do in acquiring spoken language. Initial studies gave the impression that deaf children are able to communicate earlier than hearing children, who acquire the verbal equivalent a few months later. Nevertheless, studies have shown that hearing as well as deaf children produce their first words or signs around the first year of life (see Boyes Braem 1990; Klann-Delius 1999; Lillo-Martin 1999; Pruss Romagosa 2002). Accordingly, both groups, after primarily using gestures, start to produce words, signs, or combinations of words or signs with gestures around the same time. Deaf children use gestures at nine months and two-sign utterances at the age of one and a half years. Researchers agree on the observation that the beginning of two-word/two-sign utterances simultaneously represents a productive activation of syntax (see Clark 2003; Gerdes 2008; Klann-Delius 1999; Owens 1996; Szagun 2008; Tomasello 2003; Wode 1988). Infants learn to segment, to classify, and to arrange utterances. Thus, they realize that


some words or signs belong together more than others. During their third year, infants stop expressing themselves in a telegraphic style and learn to utter complex and extended sentences (see Gerdes 2008: 25; Owens 1996: 261). As mentioned above, baby signs have come to be known and used as a new way of communication for hearing infants and their parents. Considering the wealth of research on the use of gestures during language acquisition, which shows the tight interrelation of gestures and speech in the process of acquiring speech, the question arises as to whether and how baby signs are combined with speech and gestures. On the basis of past studies, it should be of interest to investigate whether children who are taught using gestures, words, and baby signs are able to express themselves more explicitly.

3. Investigating gestures and baby signs empirically: Methodological and theoretical approach

In this section we would like to give a brief overview of the data as well as the methodological and theoretical approach that provides the basis for the investigation of gesture-baby sign combinations. The study is based on five hours of audiovisual data collected in 2012. Altogether, four families were visited and five children (see Tab. 141.1) were filmed. The data were gathered in familiar surroundings while the children were playing, eating, or reading. During the recordings, the toddlers were always communicating with their caregivers (mother, father, or siblings). All subjects were native speakers of German. The family members communicated using gestures, baby signs, and spoken language. As in the studies on language acquisition cited above, the toddlers in our study are between the ages of one and two and thus in the first stages of language acquisition. They are situated in a kind of intermediate stage, in which the absence of spoken language is experienced as stressful.

Tab. 141.1: Subjects investigated in the study

Child   Age         Recording time   Gender
1       10 months   51 minutes       male
2       13 months   45 minutes       female
3       19 months   77 minutes       female
4       19 months   77 minutes       female
5       21 months   37 minutes       male

The present study is grounded in a cognitive-linguistic and form-based approach to gestures, which "focuses on an understanding of the 'medium hand' in the first place and relates discovered structures and patterns in gestures to those found in speech […]" (Ladewig 2012: 27) (see also Bressem and Ladewig 2011; Müller 1998, 2010, volume 1; Müller, Ladewig, and Bressem volume 1). Such a linguistic approach assumes that gestures can be segmented and classified, that they show regularities and structures on the level of form and meaning, and that they have the potential for combinatorics and hierarchical structures. Gestures are seen as core partners in the creation of utterance meaning


by taking over "functions of linguistic units either in collaboration or in exchange with vocal linguistic units" (Müller, Bressem and Ladewig volume 1: 709). We not only go along with the assumption that gestures are dynamic and motivated, but we also assume that gestures can be segmented and classified and thus allow for the investigation of linear patterns and structures of the gestural movement. Accordingly, we assume that gestures and baby signs may be investigated in their temporal structure and in their relations with other gestures and baby signs. Moreover, we assume that baby signs in combination with gestures may be capable of expressing the same or different meanings as combinations of speech and gesture. In our study, we investigated the relation of gestures and baby signs. The videos were divided into separate clips and transferred into the annotation tool ELAN (see Wittenburg et al. 2006). The annotation of gestures and baby signs was based on the Linguistic Annotation System for Gestures (Bressem, Ladewig, and Müller volume 1), and speech was transcribed by applying the GAT conventions (see Selting et al. 2009). The analyses conducted in ELAN enabled us to identify all instances of gesture-baby sign combinations and to determine structural and functional characteristics of the combinations. All instances of gesture-baby sign combinations used by the toddlers were counted. Hand movements manipulating objects in play situations or gestures that were part of a game or song were excluded from the data corpus. Gestures counted as part of a gesture-sign combination were, for instance, indicating and pointing with the straight index finger or with the flat hand to a person, object, event, or place. With respect to baby signs, we coded only signs that the toddlers had learned in a baby sign course or from a specific baby sign textbook; these were identified on the basis of the teaching books by König (2010) and Malottke (2011). A sequence was counted as a gesture-sign combination only if both manual expressions were uttered together and met the criteria given above.
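The identification procedure described above can be sketched in code: time-aligned manual events are grouped into candidate combinations and the combinations are classified into the types reported below. This is only an illustrative sketch under simplifying assumptions — the event tuples, the adjacency threshold, and the function names are ours, not part of the ELAN annotation scheme or the Linguistic Annotation System for Gestures actually used in the study.

```python
# Hypothetical sketch of the counting step: each manual event is an
# (onset-in-seconds, kind) tuple, with kind either "gesture" or "sign".
# Real ELAN tiers carry far richer information; the 2-second adjacency
# threshold below is an invented stand-in for the coders' judgment that
# two manual expressions were "uttered together".

def find_combinations(events, max_gap=2.0):
    """Group consecutive manual events whose onsets lie within max_gap
    seconds of each other into candidate combinations."""
    events = sorted(events)            # order by onset time
    combos, current = [], []
    for onset, kind in events:
        if current and onset - current[-1][0] > max_gap:
            if len(current) > 1:       # single events are not combinations
                combos.append([k for _, k in current])
            current = []
        current.append((onset, kind))
    if len(current) > 1:
        combos.append([k for _, k in current])
    return combos

def classify(combo):
    """Map a combination to one of the four types found in the study."""
    if len(combo) > 2:
        return "comprehensive unit"
    first, second = combo
    if first == "gesture" and second == "sign":
        return "gesture plus baby sign"
    if first == "sign" and second == "gesture":
        return "baby sign plus gesture"
    if first == second == "sign":
        return "baby sign plus baby sign"
    return "other"

# Invented example: a two-element combination, then (after a long pause)
# a three-element comprehensive unit.
events = [(0.0, "gesture"), (0.8, "sign"),
          (10.0, "sign"), (10.9, "gesture"), (11.5, "sign")]
for combo in find_combinations(events):
    print(combo, "->", classify(combo))
```

The design point the sketch makes explicit is that "combination" is defined purely over the linear order and temporal adjacency of manual events, independently of any co-occurring speech.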

4. Combining gestures and baby signs: Results

After examining the whole data set, we identified 56 gesture-baby sign combinations (see Tab. 141.2). The following types of combinations were detected: gesture plus baby sign; baby sign plus gesture; baby sign plus baby sign; and comprehensive units, i.e., units of more than two gestures and/or baby signs (see section 5 for a detailed description). As can be seen in Tab. 141.2, the preferred combinations are gesture plus baby sign and baby sign plus gesture. These combinations were used frequently to name and to inquire about objects as well as persons. Furthermore, we could determine that children between the ages of 13 and 19 months are in an intermediate stage: they have the motor and cognitive ability to combine two or more ideas, but at the same time they perceive the realization of thoughts through spoken language alone as stressful and difficult. We assume that this is why children 2, 3, and 4 in particular produced the highest numbers of gesture-baby sign combinations.

Tab. 141.2: Number of gesture-baby sign combinations

                          Child 1       Child 2       Child 3       Child 4       Child 5
                          (10 months)   (13 months)   (19 months)   (19 months)   (21 months)
Gesture and baby sign     1             6             7             10            3
Baby sign and gesture     1             4             0             7             3
Baby sign and baby sign   0             1             1             1             1
Comprehensive units       0             4             2             3             1
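The per-child counts reported in Tab. 141.2 can be cross-checked against the stated total of 56 combinations with a small arithmetic sketch (the dictionary layout and variable names are ours; the figures are those of the table):

```python
# Counts from Tab. 141.2: rows are combination types, columns children 1-5.
counts = {
    "gesture + baby sign":   [1, 6, 7, 10, 3],
    "baby sign + gesture":   [1, 4, 0, 7, 3],
    "baby sign + baby sign": [0, 1, 1, 1, 1],
    "comprehensive units":   [0, 4, 2, 3, 1],
}

# Sum each row (totals per combination type across all five children)
per_type = {t: sum(row) for t, row in counts.items()}
# Sum each column (totals per child across all four combination types)
per_child = [sum(col) for col in zip(*counts.values())]
total = sum(per_child)

print(per_type)   # gesture plus baby sign is the most frequent type
print(per_child)  # children 2, 3, and 4 produce the most combinations
print(total)      # matches the 56 combinations reported in the text
```

The column sums make the observation in the text concrete: the 13- and 19-month-olds (children 2, 3, and 4) clearly outproduce the youngest and oldest child.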

5. Examples of gesture-baby sign combinations

In the following section, we would like to illustrate the identified combinations of gestures and baby signs as well as variations in their information content, demonstrating that children are able to express more information when using manual utterances than when using speech. With one example, we hope to show that infants under the age of 18 months are able to utter a multi-expression of three components, namely baby sign, gesture, baby sign.

5.1. Variations of combinations and their information content

Within the study, four kinds of combinations were found:

(i) Gesture and baby sign: first a gesture, followed by a baby sign.
(ii) Baby sign and gesture: first a baby sign, followed by a gesture.
(iii) Baby sign and baby sign: two consecutively expressed baby signs.
(iv) Comprehensive units: combinations of more than two gestures and/or baby signs.

The labels reflect the order of the realized utterances. Fig. 141.1 shows clearly that toddlers prefer the combination of gesture plus baby sign. It should also be mentioned that deictic gestures were predominantly used in these utterances.

Fig. 141.1: Variations of combinations


This order is consistent with the preferred order of gesture and word. As mentioned in section 2, children start around the first year of life to combine their first learned words mainly with deictic gestures. Our results repeat this order precisely. It was confirmed that toddlers primarily localize objects and persons and subsequently associate them with a name or with similar or additional information. These results seem to support Boyes Braem (1990), who suggested that initial pointing is a signal for guiding attention to the immediate vicinity and therefore to the artifact or person that follows. But what kind of meaning is transmitted? Children naturally run through a certain order of combinations: at first, they prefer to combine a gesture with a specification like "there", "it", or "that". Subsequently, toddlers meld gesture and reinforcement, i.e., they point at somebody or something and additionally name it. As a last step, they integrate gesture and supplement: besides pointing at a person or object, the child gives additional information, for instance, pointing to the chocolate and saying mama, because it is the mother's candy (see Capirci et al. 1996; Goldin-Meadow 2009; Iverson and Goldin-Meadow 2005; Iverson, Capirci, and Caselli 1994; Rowe, Özçalışkan and Goldin-Meadow 2008). Nearly the same pattern as in combinations of gesture and word has been found for gesture and baby sign: more precisely, toddlers make use of a deictic gesture plus reinforcement or supplement. This is not at all surprising, but it bears mentioning that the age of use differs considerably from the one found for the combination of gesture and word.
With respect to age, children usually show a constant rate of combinations, including reinforcements, at the age of 16 months (see Capirci et al. 1996). With the support of a deictic gesture plus a baby sign, children gain the opportunity to express a reinforcing or supplementary utterance much earlier. As an illustration of this finding, let us take a look at one example (see Fig. 141.2). Child 2, who is 13 months old, is capable of uttering, particularizing, and reinforcing messages. Within the study, we noticed that with the help of gesture and baby sign she already used supplementary utterances. In the present example, child 2 pointed with her right index finger to a room into which, 20 minutes before, noisy wind chimes had been

Fig. 141.2: Combination of deictic gesture and German baby sign for done!

1864

put. She combines the deictic gesture, which localized the wind chimes, with the German baby sign for done!: her right arm moved with a flat spread hand diagonally from the upper left to the lower right. In a nutshell, child 2 remembered that the object is behind the door and does not make any sounds, literally "The tinkling of the wind chime is over!". To recap, with the aid of gesture and baby sign, child 2 is able to communicate supplementary thoughts three months earlier than with the combination of gesture and word. Always one step ahead, she combined two pieces of information that are usually not formulated verbally before the age of 16 months. The language competence is indeed in place, but for lack of spoken vocabulary it cannot yet be realized vocally. To put it in other words, what is impossible at the level of phonetics can be uttered with the aid of the hands.

5.2. Comprehensive units before the age of 18 months

In the following, we would like to show one particular phenomenon that is very interesting with respect to combinations of more than two monomodal units within one utterance. It occurs in diverse kinds of combinations; the following were found:

(i) Baby sign, baby sign, and gesture
(ii) Baby sign, gesture, and baby sign
(iii) Gesture, baby sign, and gesture
(iv) Gesture, baby sign, and baby sign
(v) Baby sign, gesture, baby sign, baby sign, and gesture
(vi) Gesture, baby sign, gesture, and baby sign

Children combine at least three, sometimes four or more, manual units. According to the language acquisition literature, two-word utterances are found at the earliest at the age of 18 months. Furthermore, multi-expressions rarely emerge before the second birthday. But it should be pointed out again that already at the beginning of the second year of life, toddlers possess the cognitive ability to combine two or more ideas. Nevertheless, they do not have the necessary repertoire of spoken language to express themselves accordingly (see Capirci et al. 1996; König 2010; Malottke 2011). By using baby signs and gestures, they gain the ability to demonstrate this competence. We would like to illustrate this aspect by again using an example of child 2, who is able to produce longer expressions thanks to her motor and cognitive skills but is still young enough to combine mainly gestures and baby signs into comprehensive information. During the whole utterance, child 2 sits outside in the garden on her father's lap when the mother comes out of the living room and asks her daughter whether she can hear the music. In response, the child points with her right index finger to her right ear and signs i hear. Both father and mother think that the girl wants to communicate that she hears the music on the radio. But immediately after her first signing, she turns her head to the left, points with her left index finger to the garden, and afterwards signs with thumb and index finger of the left hand the sign for bird. Her utterance can be read as follows: I hear a bird over there (sign hear, pointing gesture, sign bird) (see Fig. 141.3).


Fig. 141.3: Comprehensive unit ⫺ hear (sign), over there (pointing gesture), bird (sign)

This example shows that the absence of spoken language allows for an alternative communication channel: gesture and baby sign. Children bring single parts together and compose one string of utterance. In addition, in the comprehensive units we could observe the initial use of syntactic properties. Similar to the structure of two- and multi-word expressions, the gestures follow a certain order and are arranged into larger units. Toddlers learn to segment, to classify, and to arrange linguistic elements; the child perceives that some elements belong together more than others (see Szagun 2008: 31). It became apparent that the child first of all indicates where he or she perceived something: the place of the sound is localized with the help of a deictic gesture, predominantly followed by a subject or object expressed with a baby sign. Besides the fact that children are able to produce comprehensive units under the age of 18 months, this shows that the deictic gesture cannot be seen as a simple addition but rather as a supplementary part of a manual utterance.

6. Conclusion

In this chapter, we have shown that, with the help of gesture-baby sign combinations, children under the age of 18 months are able to convey a higher degree of information, namely a supplementary utterance. Furthermore, they connect more than two linguistic units in one utterance. According to the literature, toddlers have the ability to combine a gesture and a word or sign at the earliest at the age of 16 months. Capirci et al. (1996: 668) explain it in the following way:

In line with previous studies, we found that at 1;4, when two-word utterances were essentially absent from the children's repertoires, crossmodal two-element combinations were already very frequent and included supplementary utterances referring to two distinct elements. In other words, children demonstrated the cognitive capacity to combine two ideas (using a single word and a single gesture) at this young age, even though they did not yet produce utterances of two spoken items.

This pilot study does not provide clear proof that children are always able to express a comprehensive utterance with the aid of gestures and baby signs. But it supports the assumption that they can, if they are given the right communicative tool. The hands allow


an early form of interaction with their parents. As mentioned in section 5, what cannot be uttered via spoken language is possible with the aid of gestures and baby signs. Researchers emphasize that the acquisition and use of baby signs around the first year of life are easier to handle than the acquisition of spoken language (see Gericke 2011; Goodwyn, Acredolo, and Brown 2000; König 2010; Wilken 2006). Especially at this age, the speech organs are constantly changing as they are trained and tested (see Hagemann 2012). The hands and fingers, by contrast, can already be used in a coordinated and purposeful way. For future investigations, it would be interesting to clarify why the children in our study are able to express such utterances. As mentioned, according to the research literature, children, regardless of whether deaf or hearing, do not combine two items before the age of 16 months. But Tab. 141.2 in section 4 shows that it is not only possible but frequent. It would therefore be advisable to examine whether the tertiary communication channel, that is, the combination of spoken language, gesture, and baby sign, is one possible explanation: besides the verbal expression, children also receive a visual description. How far the children's utterances are affected by this factor still has to be examined.

Acknowledgements

We would like to thank Mathias Roloff for providing the drawings (www.mathiasroloff.de). Moreover, we are grateful to Jana Bressem and Silva H. Ladewig for their input and support in finalizing this article.

7. References

Andresen, Helga 2010. Zweiwortphase. In: Helmut Glück (ed.), Metzler Lexikon Sprache, 790. Weimar/Stuttgart: J.B. Metzler.
Boyes Braem, Penny 1990. Einführung in die Gebärdensprache und ihre Erforschung. Hamburg: Signum Verlag.
Bressem, Jana and Silva H. Ladewig 2011. Rethinking gesture phases – Articulatory features of gestural movement? Semiotica 184(1/4): 53–91.
Bressem, Jana, Silva H. Ladewig and Cornelia Müller volume 1. Linguistic Annotation System for Gestures (LASG). In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1098–1124. Berlin/Boston: De Gruyter Mouton.
Capirci, Olga, Jana Iverson, Elena Pizzuto and Virginia Volterra 1996. Gestures and words during the transition to two-word speech. Journal of Child Language 23(3): 645–673.
Capirci, Olga, Jana Iverson, Elena Pizzuto and Virginia Volterra 2002. Gesture and the nature of language in infancy: The role of gesture as a transitional device en route to two-word speech. In: David F. Armstrong, Michael A. Karchmer and John Vickrey Van Cleve (eds.), The Study of Signed Languages: Essays in Honor of William C. Stokoe, 213–246. Washington, D.C.: Gallaudet University Press.
Clark, Eve V. 2003. First Language Acquisition. Cambridge: Cambridge University Press.
Doherty-Sneddon, Gwyneth 2008. The great baby signing debate. The Psychologist 21(4): 300–303.
Garcia, Joseph 1999. Sign With Your Baby: How to Communicate With Infants Before They Can Speak. Seattle: Northlight Communications.
Gerdes, Adele 2008. Spracherwerb und neuronale Netze. Die konnektionistische Wende. Marburg: Tectum Verlag.
Gericke, Wiebke 2009. babySignal. Mit den Händen sprechen. München: Tectum Verlag.
Goldin-Meadow, Susan 2009. How gesture promotes learning throughout childhood. Child Development Perspectives 3(2): 106–111.
Goodwyn, Susan W. and Linda P. Acredolo 1985. Symbolic gesturing and language development. Human Development 28(1): 40–49.
Goodwyn, Susan W. and Linda P. Acredolo 1998. Encouraging symbolic gestures: A new perspective on the relationship between gesture and speech. In: Jana Iverson and Susan Goldin-Meadow (eds.), The Nature and Functions of Gesture in Children's Communication, 61–73. San Francisco: Jossey-Bass Publishers.
Goodwyn, Susan W., Linda P. Acredolo and Catherine A. Brown 2000. Impact of symbolic gesturing on early language development. Journal of Nonverbal Behavior 24(2): 81–103.
Hagemann, Katrin 2012. Pädagogische Grundlage und Leitbild. URL: http://www.babyzeichen.info/.
Iverson, Jana, Olga Capirci and Cristina Caselli 1994. From communication to language in two modalities. Cognitive Development 9: 23–43.
Iverson, Jana, Olga Capirci, Emiddia Longobardi and Cristina Caselli 1999. Gesturing in mother-child interactions. Cognitive Development 14: 57–75.
Iverson, Jana and Susan Goldin-Meadow 2005. Gesture paves the way for language development. Psychological Science 16(5): 367–371.
Iverson, Jana and Ester Thelen 1999. Hand, mouth and brain: The dynamic emergence of speech and gesture. Journal of Consciousness Studies 6(11/12): 19–40.
Kiegelmann, Mechthild 2009. Baby Signing. Eine Einschätzung aus entwicklungspsychologischer Perspektive. DAS ZEICHEN 82: 262–272.
Klann-Delius, Gisela 1999. Spracherwerb. Stuttgart/Weimar: Metzler.
König, Vivian 2010. Das große Buch der Babyzeichen. Guxhagen: Karin Kestner.
Ladewig, Silva H. 2012. Syntactic and Semantic Integration of Gestures into Speech: Structural, Cognitive, and Conceptual Aspects. Ph.D. dissertation, European University Viadrina, Frankfurt (Oder).
Lillo-Martin, Diane 1999. Modality effects and modularity in language acquisition: The acquisition of American Sign Language. In: William C. Ritchie and Tej K. Bhatia (eds.), Handbook of Child Language Acquisition, 531–567. San Diego: Academic Press.
Malottke, Kelly 2011. Zauberhafte Babyhände. Norderstedt: Books on Demand GmbH.
Morgenstern, Aliyah this volume. The blossoming of children's multimodal skills from 1 to 4 years old. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1848–1857. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia 1998. Redebegleitende Gesten. Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Berlin Verlag.
Müller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur 41(1): 37–68.
Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 202–217. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Jana Bressem and Silva H. Ladewig volume 1. Towards a grammar of gestures: A form-based view. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 707–733. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Silva H. Ladewig and Jana Bressem volume 1. Gestures and speech from a linguistic perspective: A new field and its history. In: Cornelia Müller, Alan Cienki, Ellen Fricke,


Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 55–81. Berlin/Boston: De Gruyter Mouton.
Owens, Robert E. 1996. Language Development: An Introduction. Boston: Allyn and Bacon.
Pruss Romagosa, Eva 2002. Gebärdensprache als Erstsprache. In: Anne Beecken, Jörg Keller, Siegmund Prillwitz and Heiko Zienert (eds.), Grundkurs Deutsche Gebärdensprache Stufe II, 35–37. Hamburg: Signum Verlag.
Rowe, Meredith, Şeyda Özçalışkan and Susan Goldin-Meadow 2008. Learning words by hand: Gesture's role in predicting vocabulary development. First Language 28(2): 182–199.
Selting, Margret, Peter Auer, Dagmar Barth-Weingarten, Jörg Bergmann, Pia Bergmann, Karin Birkner, Elizabeth Couper-Kuhlen, Arnulf Deppermann, Peter Gilles, Susanne Günthner, Martin Hartung, Friederike Kern, Christine Mertzlufft, Christian Meyer, Miriam Morek, Frank Oberzaucher, Jörg Peters, Uta Quasthoff, Wilfried Schütte, Anja Stukenbrock and Susanne Uhmann 2009. Gesprächsanalytisches Transkriptionssystem 2 (GAT 2). Gesprächsforschung – Online-Zeitschrift zur verbalen Interaktion 10: 353–402.
Stefanini, Silvia, Arianna Bello, Cristina Caselli, Jana Iverson and Virginia Volterra 2009. Co-speech gestures in a naming task: Developmental data. Language and Cognitive Processes 24(2): 168–189.
Szagun, Gisela 2008. Sprachentwicklung beim Kind. Weinheim/Basel: Beltz Verlag.
Tomasello, Michael 2003. Constructing a Language: A Usage-Based Theory of Language Acquisition. Cambridge, MA/London: Harvard University Press.
Wilken, Etta 2006. Unterstützte Kommunikation: Eine Einführung in Theorie und Praxis. Stuttgart: Kohlhammer Verlag.
Wittenburg, Peter, Hennie Brugman, Albert Russel, Alex Klassmann and Han Sloetjes 2006. ELAN: A professional framework for multimodality research. In: Proceedings of LREC 2006, Fifth International Conference on Language Resources and Evaluation, 1556–1559.
Wode, Henning 1988. Einführung in die Psycholinguistik. Ismaning: Max Hueber.

Lena Hotze, Frankfurt (Oder) (Germany)

142. Gestures and second language acquisition

1. Gestures and second language acquisition
2. The L2 acquisition of gesture – learning to gesture like a native speaker
3. Gestures in L2 acquisition – a window on language development
4. Gestures as a medium of acquisition – the effect of seeing and producing gestures on L2 learning
5. Conclusion
6. References

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1868–1875

Abstract

Most people in the world speak more than one language and many learn it as adolescents or adults. The study of second language acquisition (meaning any language learnt after the first language) is concerned with how a new language develops in the presence of an existing one. Since gestures are an integral part of communication, subject to crosslinguistic, socio- and psycholinguistic variation, they become a natural extension of second language (L2), foreign language (FL), and bilingualism studies. Gestures can be examined as a system to be acquired in its own right (the acquisition of gestures), as a window on language development (gestures in acquisition), and as a medium of development (the effect of gestures on acquisition).

1. Gestures and second language acquisition

Most people in the world speak more than one language and many learn one as adolescents or adults, both in classroom settings and "in the wild". The study of second language acquisition (SLA; meaning any language learnt after the first language, L1) is concerned with how a new language develops in the presence of an existing one. Studies typically examine how the outcome of learning is affected by factors such as the nature of the languages that come into contact (e.g., English-Chinese vs. English-German), learners' age (child vs. adult), general cognitive skills such as working memory, the learning situation (classroom vs. naturalistic), the type of instruction (form vs. meaning based), and usage patterns in conversation and interaction (papers in Doughty and Long 2003). Studies of second language acquisition are traditionally separate from the related field of bilingualism studies, but both domains are inherently crosslinguistic in that both consider the effects of language contact in one mind, interaction, or community. Both fields therefore rely and draw on detailed studies of monolingual or native speaker practices. Since gestures are an integral part of communication in the same arena as speech and language, subject to crosslinguistic, socio- and psycholinguistic variation, they become a natural extension of second language (L2), foreign language (FL), and bilingualism studies. In that context, gestures can be examined as a system to be acquired in its own right (the acquisition of gestures), as a window on language development (gestures in acquisition), and as a medium of development (the effect of gestures on acquisition).
Cross-cutting these broad areas, gesture analysis can shed new light on issues such as the effect of the other language (transfer or crosslinguistic influence), general effects found in all learners (learner varieties), communication strategies, the role of collaborative processes, classroom practices and assessment, the role of vision, and motor actions for acquisition (e.g., Gullberg 2006b, 2010; McCafferty and Gullberg 2008; McCafferty and Stam 2008). This entry briefly exemplifies some of these research domains and their theoretical underpinnings.

2. The L2 acquisition of gesture – learning to gesture like a native speaker

Learning a new language means learning new words, grammar, and appropriate usage. It also potentially means learning to gesture in a new way. Studies now examine how children come to gesture in adult-like and culture-specific ways, but we know little about whether L2 speakers ever learn to gesture in target-like fashion. Although a few studies investigate L2 users' comprehension of conventional or quotable gestures (emblems) (e.g., Jungheim 1991; Wolfgang and Wolofsky 1991), little is known about whether L2 learners themselves produce such culture-specific gestures. Emblems may show the same well-documented acquisition difficulties as spoken idiomatic expressions (e.g., Irujo


1993) given their similar function. It remains an empirical question whether L2 speakers learn to produce appropriate forms of gestural back channelling (e.g., head toss vs. headshake, Morris et al. 1979), or to respect handedness taboos (e.g., Kita and Essegbey 2001). Even less is known about whether L2 learners acquire and produce language-specific but non-conventionalized gesture patterns of shape and form, frequency, spatial expanse, etc. The role of attention to and noticing of features in the input – familiar in the field of spoken second language acquisition – has not been examined for gesture. Since gestures, as visual phenomena, are often assumed to be inherently "salient", with an attention-directing effect, it would be particularly interesting to consider their development. Gesture patterns could even be easier to acquire than speech patterns. Yet, nothing is known about this question.

3. Gestures in L2 acquisition – a window on language development

3.1. The role of the other language(s): crosslinguistic influence

Much research in second language acquisition targets so-called transfer or crosslinguistic influence (Jarvis and Pavlenko 2008), that is, the impact of existing languages on the acquisition and use of new ones. Traditionally, the L1 is assumed to "leak" into the L2 in the form of foreign accent in pronunciation, lexical choice, grammar use, etc., and to be the main reason why L2 learners differ from target language speakers. A growing body of work suggests that native speakers of typologically different languages gesture differently as a reflection of how their languages encode and express meaning elements such as path and manner of motion (Kita 2009 for an overview). Further studies have also shown that L2 learners of these languages do not necessarily gesture like target native speakers but display traces of their L1s in their gesture production. Traces can be found in gestural timing: learners may temporally align their gestures with different elements in speech than native speakers do (e.g., Choi and Lantolf 2008; Stam 2006). Traces can also be found in gestural forms, reflecting the fact that learners express different semantic content in gestures than native speakers (e.g., Gullberg 2009; Özyürek 2002). Findings are often discussed in terms of Slobin's notion of "thinking for speaking" (e.g., Slobin 1996), that is to say, the ways in which linguistic categories influence what information speakers select for expression. In second language acquisition the argument is that L1-like gesture patterns may reveal whether L2 speakers continue to think for speaking in L1-like rather than L2-like ways. Important to this argument is the assumption that gestures reflect conceptual-semantic elements (e.g., path and manner of motion) as well as their morphosyntactic organization (word order, number of clauses).
Recent studies also examine how the L2 may affect the L1 in speech and gesture (e.g., Brown and Gullberg 2008) investigating new theoretical issues such as the stability of the native speaker norm. A related question is how native speakers perceive non-target-like L2 gestures. Although a number of studies show that learners’ gesture production affects assessments of learners positively (Gullberg 1998; Jenkins and Parra 2003), no studies so far have directly examined native perception of “foreign gesture” or its potential interactional consequences.

142. Gestures and second language acquisition


3.2. General learner effects – learner varieties and interlanguage

Second language acquisition studies also examine learners' language as a systematic variety in its own right (Perdue 2000; Selinker 1972), with properties determined both by general learning mechanisms and by the specific languages involved. In this perspective, gestures can highlight how language learners manage lexical, grammatical, and discursive difficulties at a given proficiency level in real time. For instance, one line of work examines how learners from different language backgrounds achieve discourse coherence using gestures when pronouns or word order patterns are not yet mastered (Gullberg 2006a; Yoshioka 2008). Learners often use chains of full lexical noun phrases in speech to refer to the same entity (the woman – the woman) instead of an alternation of nouns and pronouns (the woman – she). At that point, they also consistently anchor and trace entities in space using gestures, creating coherent maps of discourse even if speech is not very clear. Another line of work targets gesture rates, showing that learners and bilinguals typically produce more gestures than native speakers and monolinguals (Gullberg 2012; Nicoladis 2007), although this may depend on the languages involved (So 2010) and on individual communicative style (Gullberg 1998; Nagpal, Nicoladis, and Marentette 2011). Different types of gestures may also be differentially affected. For example, studies of Japanese learners of French show that learners move from producing mainly representational gestures, complementing the content of speech, towards more emphatic or rhythmic gestures related to discourse (Kida 2005). Other studies have found that learners produce more representational gestures with increasing proficiency (Gregersen, Olivares-Cuhat, and Storm 2009).
These heterogeneous results indicate the need for more careful charting of what gestures are produced by learners with particular proficiency profiles in different tasks to improve our understanding of learners’ bi-modal behavior. Such descriptions also have potential pedagogical and diagnostic applications.

3.3. Practices in L2 interaction and classrooms

Second language acquisition studies also investigate learner practices in interaction from various theoretical perspectives. One line of work examines how learners use their expressive resources to overcome difficulties using communication strategies. For example, spoken strategies include circumlocution (medicine paper for prescription), avoidance, and foreignizing (recipi for prescription from Swedish recept). Gestures are also strategically recruited. Learners deploy representational gestures to elicit lexical help from interlocutors, often in lengthy negotiation sequences. They also produce deictic gestures to handle grammatical difficulties such as tense by mapping time onto space. Finally, learners produce many pragmatic gestures (often wrist-circling gestures) to manage interactive difficulties arising from non-fluent speech (Gullberg 1998, 2011). A particular kind of interaction arises when learners talk to themselves, for example, when trying to solve a problem or rehearse new knowledge (Lee 2008; McCafferty 1998). Such talk is often accompanied by beat-like gestures whose function is under debate. Another line of work examines interaction in foreign language classrooms, for example, the way in which teachers provide corrective feedback through recasts (student: he go yesterday. Teacher: yes, he went yesterday). Studies have shown that teachers deploy gestures to clarify and disambiguate meaning, and to regulate interaction (Lazaraton 2004; Smotrova and Lantolf 2013; Tabensky 2008), although students' reliance on and appreciation of teachers' gestures vary (Sime 2006). L2 students themselves use gestures


to complete each other's utterances (Olsher 2004). It has been suggested that such multimodal utterances provide opportunities for learning by allowing for recasts and expansions under non-stressful conditions (Mori and Hayashi 2006). Overall, the classroom setting remains underexplored, in particular with regard to different kinds of instruction (e.g., focus on form vs. focus on meaning), materials, and domains of language (e.g., lexicon, grammar, pronunciation).

4. Gestures as a medium of acquisition – the effect of seeing and producing gestures on L2 learning

All forms of didactic talk or "instructional communication" – whether by adults to children ("motherese") or by adult native speakers to adult L2 users ("foreigner/teacher talk", Ferguson 1971) – display an increased use of representational and rhythmic gestures (e.g., Allen 2000; Iverson et al. 1999; Lazaraton 2004). Teachers, instructors, and parents clearly think that seeing gestures facilitates comprehension – a view with some empirical support (e.g., Sueyoshi and Hardison 2005) – and possibly also learning. However, to date the empirical evidence for gestural facilitation of L2 learning remains scant. There is some evidence that words and idiomatic expressions are better remembered when introduced with gestures whose meaning matches the word or expression (Allen 1995; Kelly, McDevitt, and Esch 2009). Non-matching gestures do not help, suggesting that it is not movement per se that is crucial but rather the semantic integration between a particular gesture and a given word. This finding is further corroborated by neurocognitive evidence suggesting that brain regions implicated in semantic processing are relevant (Macedonia, Müller, and Friederici 2011). However, the beneficial effects of seeing gestures may depend on the linguistic units tested. For example, the acquisition of the sound system seems to be facilitated by seeing lip movements but not manual gestures (Hirata and Kelly 2010). Moreover, linguistic levels interact in complex ways. For example, adult learners taught new L2 words containing phonetically difficult or easy sounds were only helped by seeing matching gestures for words with easy sound combinations. Gestures did not help the learning of words with difficult sounds (Kelly and Lee 2012). Turning to gesture production, there is evidence that producing gestures promotes learning.
Adults and children who gesture while learning about math and science do better than those who do not, for instance (Alibali and DiRusso 1999). However, little is known about the effect of gesturing on the acquisition of language. It has been suggested that gesturing might help L2 learners internalize new knowledge on theoretical grounds (Lee 2008), and some teaching methods rely on embodiment (e.g., total physical response, Asher 1977). Yet empirical research actually testing learning is rare. One interesting study shows that French children learn more words in L2 English if they repeat word and gesture after instruction than children who only repeat the new word in speech (Tellier 2008). It remains an empirical question whether any long-term learning effects can be demonstrated for gesture production in L2.

5. Conclusion

Under a view of speech and gesture as an interconnected system, the study of gestures becomes a natural extension of studies of L2 acquisition, opening new vistas on the full



range of L2 speakers’ communicative and linguistic resources, and on the processes of language acquisition in which the learner’s individual cognition is situated in a social, interactive context. Much remains to be done in this exciting field of inquiry which poses new challenges both to the field of second language acquisition research and to gesture studies.

6. References

Alibali, Martha W. and Alyssa A. DiRusso 1999. The function of gestures in learning to count: More than keeping track. Cognitive Development 14(1): 37–56.
Allen, Linda Q. 1995. The effect of emblematic gestures on the development and access of mental representations of French expressions. Modern Language Journal 79(4): 521–529.
Allen, Linda Q. 2000. Nonverbal accommodations in foreign language teacher talk. Applied Language Learning 11(1): 155–176.
Asher, James J. 1977. Learning Another Language Through Actions. Los Gatos: Sky Oaks Productions.
Brown, Amanda and Marianne Gullberg 2008. Bidirectional crosslinguistic influence in L1–L2 encoding of Manner in speech and gesture: A study of Japanese speakers of English. Studies in Second Language Acquisition 30(2): 225–251.
Choi, Soojung and James P. Lantolf 2008. Representation and embodiment of meaning in L2 communication. Motion events in the speech and gesture of advanced L2 Korean and L2 English speakers. Studies in Second Language Acquisition 30(2): 191–224.
Doughty, Catherine J. and Michael H. Long (eds.) 2003. The Handbook of Second Language Acquisition. Oxford: Blackwell.
Ferguson, Charles A. 1971. Absence of copula and the notion of simplicity: A study of normal speech, baby talk, foreigner talk and pidgins. In: Dell Hymes (ed.), Pidginization and Creolization of Languages, 141–150. Cambridge: Cambridge University Press.
Gregersen, Tammy, Gabriela Olivares-Cuhat and John Storm 2009. An examination of L1 and L2 gesture use: What role does proficiency play? The Modern Language Journal 93(2): 195–208.
Gullberg, Marianne 1998. Gesture as a Communication Strategy in Second Language Discourse: A Study of Learners of French and Swedish. Lund: Lund University Press.
Gullberg, Marianne 2006a. Handling discourse: Gestures, reference tracking, and communication strategies in early L2. Language Learning 56(1): 155–196.
Gullberg, Marianne (ed.) 2006b. Special issue on gestures and second language acquisition. International Review of Applied Linguistics 44(2).
Gullberg, Marianne 2009. Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics 7: 222–245.
Gullberg, Marianne 2010. Methodological reflections on gesture analysis in SLA and bilingualism research. Second Language Research 26(1): 75–102.
Gullberg, Marianne 2011. Multilingual multimodality: Communicative difficulties and their solutions in second language use. In: Jürgen Streeck, Charles Goodwin and Curtis LeBaron (eds.), Embodied Interaction: Language and Body in the Material World, 137–151. Cambridge: Cambridge University Press.
Gullberg, Marianne 2012. Bilingualism and gesture. In: Tej K. Bhatia and William C. Ritchie (eds.), The Handbook of Bilingualism and Multilingualism, 2nd edition, 417–437. Malden, MA: Wiley-Blackwell.
Hirata, Yukari and Spencer D. Kelly 2010. Effects of lips and hands on auditory learning of second-language speech sounds. Journal of Speech, Language and Hearing Research 53(2): 298–310.
Irujo, Suzanne 1993. Steering clear: Avoidance in the production of idioms. International Review of Applied Linguistics 31(3): 205–219.



Iverson, Jana M., Olga Capirci, Emiddia Longobardi and M. Cristina Caselli 1999. Gesturing in mother-child interactions. Cognitive Development 14(1): 57–75.
Jenkins, Susan and Isabelle Parra 2003. Multiple layers of meaning in an oral proficiency test: The complementary roles of nonverbal, paralinguistic, and verbal behaviors in assessment decisions. Modern Language Journal 87(1): 90–107.
Jungheim, Nick O. 1991. A study on the classroom acquisition of gestures in Japan. Ryutsukeizaidaigaku Ronshu 26(2): 61–68.
Kelly, Spencer D. and Angela Lee 2012. When actions speak too much louder than words: Gesture disrupts word learning when phonetic demands are high. Language and Cognitive Processes 27(6): 793–807.
Kelly, Spencer D., Tara McDevitt and Megan Esch 2009. Brief training with co-speech gesture lends a hand to word learning in a foreign language. Language and Cognitive Processes 24(2): 313–334.
Kida, Tsuyoshi 2005. Appropriation du geste par les étrangers: Le cas d'étudiants japonais apprenant le français. Unpublished Ph.D. dissertation, Université de Provence (Aix-Marseille I), Aix-en-Provence.
Kita, Sotaro 2009. Cross-cultural variation of speech-accompanying gesture: A review. Language and Cognitive Processes 24(2): 145–167.
Kita, Sotaro and James Essegbey 2001. Pointing left in Ghana: How a taboo on the use of the left hand influences gestural practice. Gesture 1(1): 73–95.
Lazaraton, Anne 2004. Gesture and speech in the vocabulary explanations of one ESL teacher: A microanalytic inquiry. Language Learning 54(1): 79–117.
Lee, Jina 2008. Gesture and private speech in second language acquisition. Studies in Second Language Acquisition 30(2): 169–190.
Macedonia, Manuela, Karsten Müller and Angela D. Friederici 2011. The impact of iconic gestures on foreign language word learning and its neural substrate. Human Brain Mapping 32(6): 982–998.
McCafferty, Steven G. 1998. Nonverbal expression and L2 private speech. Applied Linguistics 19(1): 73–96.
McCafferty, Steven G. and Marianne Gullberg 2008. Special issue Gesture and SLA: Toward an integrated approach. Studies in Second Language Acquisition 30(2): 133–146.
McCafferty, Steven G. and Gale Stam (eds.) 2008. Gesture: Second Language Acquisition and Classroom Research. New York: Routledge.
Mori, Junko and Makoto Hayashi 2006. The achievement of intersubjectivity through embodied completions: A study of interactions between first and second language speakers. Applied Linguistics 27(2): 195–219.
Morris, Desmond, Peter Collett, Peter Marsh and Marie O'Shaughnessy 1979. Gestures, Their Origins and Distribution. London: Cape.
Nagpal, Jaya, Elena Nicoladis and Paula Marentette 2011. Predicting individual differences in L2 speakers' gestures. International Journal of Bilingualism 15(2): 205–214.
Nicoladis, Elena 2007. The effect of bilingualism on the use of manual gestures. Applied Psycholinguistics 28(3): 441–454.
Olsher, David 2004. Talk and gesture: The embodied completion of sequential actions in spoken interaction. In: Rod Gardner and Johannes Wagner (eds.), Second Language Conversations, 221–245. London: Continuum.
Özyürek, Asli 2002. Speech-language relationship across languages and in second language learners: Implications for spatial thinking and speaking. In: Barbora Skarabela (ed.), BUCLD Proceedings, Volume 26, 500–509. Somerville, MA: Cascadilla Press.
Perdue, Clive 2000. Organising principles of learner varieties. Studies in Second Language Acquisition 22(3): 299–305.
Selinker, Larry 1972. Interlanguage. International Review of Applied Linguistics 10(3): 209–231.


Sime, Daniela 2006. What do learners make of teachers' gestures in the language classroom? International Review of Applied Linguistics 44(2): 209–228.
Slobin, Dan I. 1996. From "thought and language" to "thinking for speaking". In: John J. Gumperz and Stephen C. Levinson (eds.), Rethinking Linguistic Relativity, 70–96. Cambridge, UK: Cambridge University Press.
Smotrova, Tetyana and James P. Lantolf 2013. The function of gesture in lexically focused L2 instructional conversations. The Modern Language Journal 97(2): 397–416.
So, Wing Chee 2010. Cross-cultural transfer in gesture frequency in Chinese-English bilinguals. Language and Cognitive Processes 25(10): 1335–1353.
Stam, Gale 2006. Thinking for Speaking about motion: L1 and L2 speech and gesture. International Review of Applied Linguistics 44(2): 143–169.
Sueyoshi, Ayano and Debra M. Hardison 2005. The role of gestures and facial cues in second language listening comprehension. Language Learning 55(4): 661–699.
Tabensky, Alexis 2008. Expository discourse in a second language classroom: How learners use gesture. In: Steven G. McCafferty and Gale Stam (eds.), Gesture: Second Language Acquisition and Classroom Research, 298–320. New York: Routledge.
Tellier, Marion 2008. The effect of gestures on second language memorisation by young children. Gesture 8(2): 219–235.
Wolfgang, Aaron and Zella Wolofsky 1991. The ability of new Canadians to decode gestures generated by Canadians of Anglo-Celtic backgrounds. International Journal of Intercultural Relations 15(1): 47–64.
Yoshioka, Keiko 2008. Gesture and information structure in first and second language. Gesture 8(2): 236–255.

Marianne Gullberg, Lund (Sweden)

143. Further changes in L2 Thinking for Speaking?

1. Thinking for speaking
2. Study
3. Results
4. Discussion and conclusion
5. References

Abstract

Cross-linguistic research has shown that languages differ typologically in how motion events are indicated lexically and syntactically, and that speakers of these languages have different patterns of thinking for speaking (for a review, see Han and Cadierno 2010). Spanish speakers express path linguistically on verbs, their path gestures tend to occur with path verbs, and their manner gestures may occur without manner in speech, whereas English speakers express path linguistically on satellites, their path gestures tend to occur with satellite units, and their manner gestures rarely occur without manner in speech. Stam (2006b) has shown that the English narrations of Spanish learners of English have aspects of their first language (Spanish) and aspects of their second language (English) thinking for speaking patterns. She has further shown that these patterns continue to change over time. An L2 learner's thinking for speaking about path in English became more native-like, but her thinking for speaking about manner did not (Stam 2010b). This paper investigates whether the learner's L2 thinking for speaking patterns continued to change from 2006 to 2011. It shows that her thinking for speaking about path and manner had continued to change, but her thinking for speaking about boundary crossings had not.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1875–1886

1. Thinking for speaking

Cross-linguistic research has shown that languages differ typologically in how motion events are indicated lexically and syntactically, and that speakers of these languages have different patterns of thinking for speaking, the thinking that occurs in the process of speaking (Slobin 1991; for a review, see Han and Cadierno 2010). Based on how path and manner are encoded, languages have been categorized into three types: verb-framed, satellite-framed, and equipollently-framed (Slobin 2006; Talmy 2000).

Spanish and English exemplify two of these typologically different languages (Slobin 2006; Talmy 2000). Spanish is a verb-framed language, whereas English is a satellite-framed language. In Spanish, motion and path are indicated by the verb, and manner, if present in speech, is indicated outside the verb by an adjunct, an adverbial such as a gerund or a phrase. For example, in él entra bailando 'he enters dancing', the verb entra 'enters' indicates path, while the gerund bailando 'dancing' indicates manner. In English, motion and manner are indicated by the verb, and path is indicated by a satellite, a particle. For example, in he dances in, the verb dances indicates manner, while the particle in indicates path.

Spanish speakers when narrating a motion event tend to describe states and emphasize settings, while English speakers tend to describe processes and accumulate path components (for a review, see Stam 2010b). In addition, the gestures the speakers make follow the same patterns: Spanish speakers' path gestures tend to occur with the verb and do not cross boundaries, while English speakers' tend to occur with a satellite unit and can cross boundaries (Stam 2010b).

1.1. Thinking for speaking and second language acquisition

Slobin hypothesized that many language patterns acquired in childhood are "resistant to restructuring in adult second language acquisition" (1996: 89). Therefore, several studies (for reviews, see Cadierno 2008, 2013; Stam 2010b) have investigated his thinking for speaking hypothesis and second language acquisition to determine (i) whether it is possible for learners to acquire another thinking for speaking pattern, (ii) what pattern learners are thinking in when they are speaking their second language – their first language (L1), their second language (L2), or somewhere in between – and (iii) whether this changes with proficiency level.

Stam (1998, 2006a, 2006b, 2008), Kellerman and van Hoof (2003), Lewis (2012), and Negueruela et al. (2004) looked at Spanish and English speech and gesture to investigate whether learners' thinking-for-speaking patterns about path change when they acquire a second language. Their findings varied, however, as a result of differences in their study designs (Stam 2010b). Kellerman and van Hoof (2003) and Negueruela et al. (2004) found that L1 Spanish speakers' gestures indicated that they were still thinking for speaking in their L1 Spanish when narrating in L2 English, whereas Stam (1998, 2006a) found that when L2 English learners narrated in English, their thinking for speaking patterns were a mixture of L1 and L2 patterns, reflecting their interlanguage systems. Furthermore, Lewis (2012), looking at L1 English learners of L2 Spanish in a study abroad program, found that the majority of the participants showed L2 thinking for speaking patterns for path in their L2 after six months abroad. These results suggest that it is possible for thinking for speaking patterns to change, but it is not clear to what extent.

In the only longitudinal study to date, Stam (2010b) found that an L2 learner's expression of path changed both linguistically and gesturally in English from 1997 to 2006, but her expression of manner did not. By 2006, the learner's linguistic expression of path followed the English thinking-for-speaking pattern: she consistently expressed path with a satellite. In addition, by 2006 her gestures were more native-English-speaker-like. They were less segmented; more occurred with ground noun phrases and with more than one element, and fewer occurred with verbs and other. Of interest is the question whether L2 thinking for speaking can continue to change. It is the purpose of this paper to explore this possibility.

2. Study

This study, a follow-up to Stam (2010b), investigated whether an L2 learner's thinking-for-speaking patterns in English continued to change from 2006 to 2011. It sought answers to the following questions: (i) How does the learner express path and manner linguistically and gesturally in 2011? (ii) How does this compare with her expression of path and manner in 1997, in 2006, and with native speakers of English? (iii) What are the implications for thinking for speaking changing in an L2?

2.1. Participant

The participant was a Mexican-Spanish-speaking learner of English at the advanced proficiency level at National Louis University at the time that she was originally videotaped in 1997. She had completed the former ESOL program, a semi-intensive five-level integrated skills program with a grammatically based curriculum designed to provide English language learners with the English necessary to succeed in undergraduate studies at the University, and was taking regular English classes. She had been studying English for two years and had been working at a bank for nine months, and she reported using English 40% and Spanish 60% of the time. By 2006, she had graduated from the university with a degree in computer information systems management and had been working at a bank as an accounting specialist for seven years, and she reported using English and Spanish equally (Stam 2010b). In 2011, she was unemployed and looking for a job, and again reported using English and Spanish equally (50% and 50%).


2.2. Procedures

The same procedures were followed in 1997, 2006, and 2011. The participant was shown a Sylvester and Tweety Bird cartoon, Canary Row (Freleng 1950), in two segments and asked to narrate each segment in Spanish and English to two different listeners: a Spanish-speaking one and an English-speaking one. The order was counterbalanced, with the initial order for the narration of the first segment randomly assigned in 1997 and the same order followed in 2006 and 2011 (Spanish-English, English-Spanish). The narrations were videotaped, and the participant was not told that thinking for speaking or gestures were a focus of the study.

2.3. Coding

One episode, which contained three motion events – (i) Sylvester climbs up inside the drainpipe, (ii) the ball goes inside Sylvester, and (iii) Sylvester and the bowling ball move/roll down and out of the drainpipe, across/down the street, and into a bowling alley – was coded using McNeill's coding scheme (1992) to determine how path and manner were expressed both linguistically and gesturally in English. The function of the gesture in terms of motion event component (path, manner, ground) and the meaning of the gesture were noted (for example, Sylvester climbing up the drainpipe). Questions on the coding or timing of gestures were brought to lab meetings at the McNeill Lab Center for Gesture and Speech Research at the University of Chicago, where members of the lab watched the videotaped segments in question and reached a consensus on what the coding should be, as well as to the 19th Annual Sociocultural Theory and Second Language Learning Research Working Group Meeting (2012).

2.4. Data analysis

Two types of data were analyzed and compared for the 1997, 2006, and 2011 narrations: speech, and speech and gesture. These data were then compared with those of native-English speakers from Stam (2006a).

2.4.1. Speech analysis

The narrations were analyzed for how path was expressed linguistically.

2.4.2. Speech and gesture analysis

The synchrony of the gesture in relation to speech was established by watching the video recording in slow motion and frame by frame (30 frames/sec) with the accompanying audio to establish the onsets and offsets of gesture strokes (Stam 2006b). Path (path, path and ground), manner (manner, path and manner, manner and ground), and ground gestures were identified and counted. Then, what motion event speech element the stroke of the path gesture co-occurred with (verb, satellite, ground noun phrase, more than one element, and other) was noted and counted, and percentages for the co-occurrence were calculated and compared (see Tab. 143.1 for motion event speech categories).

Tab. 143.1: Motion event speech categories (Stam 2006a: 111)

Speech Element | Examples
Verb = V, SV, VO, conjunction (S) V | goes; he goes; throws the ball; and he goes
Satellite = adverbs, prepositions of path | through; up; to; into
Ground noun phrase | the drainpipe
More than one = V + satellite, V + satellite + ground noun phrase, satellite + ground noun phrase | comes out; comes out the drainpipe; out the drainpipe
Other = conjunctions, subjects (alone), prepositional phrases, adjectives, pauses | he, with the ball inside

Also, whether manner gestures occurred with the manner in speech was noted and tabulated. Finally, how speech and gesture interacted, that is, what aspects of the motion event the speech and gesture emphasized (for example, process versus ground setting description), was examined. Verbs, subjects and verbs, verbs and objects, and conjunctions (subjects) and verbs were considered as verbs (Stam 2006b); all verbs that had co-occurring path gestures were counted, not just motion verbs; and both adverbs and prepositions of motion were included as satellites, as these prepositions can express direction (Talmy 2000). Also, in regard to gestures sometimes falling on incomplete words and grammatical constituents, the following scheme was used: "(1) if the gesture fell on a syllable of the word, it was counted as co-occurring with the full speech element, for example, co from come was counted as a verb; (2) if it was a case of co-articulation, for example s in from gets in, it was counted as a satellite; (3) and if the gesture fell on a preposition and an article, for example to the, it was counted as a satellite" (Stam 2008: 239–240).
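The percentage comparisons described in this section are simple co-occurrence tallies. As a minimal sketch (the category labels follow the motion event speech categories of Tab. 143.1; the sample counts are invented for illustration, not the study's actual coded data), the distribution for a set of coded path gestures can be computed like this:

```python
from collections import Counter

# Motion event speech categories (after Tab. 143.1).
CATEGORIES = ["verb", "satellite", "ground NP", "more than one", "other"]

def cooccurrence_percentages(coded_gestures):
    """Percentage of path gestures co-occurring with each speech element."""
    counts = Counter(coded_gestures)
    total = sum(counts.values())
    return {cat: round(100 * counts.get(cat, 0) / total, 1) for cat in CATEGORIES}

# Hypothetical coding of 22 path gestures (counts chosen for illustration only).
sample = (["verb"] * 7 + ["satellite"] * 3 + ["ground NP"] * 1
          + ["more than one"] * 1 + ["other"] * 10)
print(cooccurrence_percentages(sample))
# → {'verb': 31.8, 'satellite': 13.6, 'ground NP': 4.5, 'more than one': 4.5, 'other': 45.5}
```

The same tally, run per narration year, yields the percentage profiles compared in the results section.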

3. Results

First the results for speech will be presented and then the results for speech and gesture.

3.1. Speech

In terms of her linguistic expression of path, there was a difference in how she expressed path in English between 1997 and 2006, and this difference persisted in 2011. In 1997, she expressed path 33% of the time with just the verb go without an accompanying satellite or prepositional phrase. This is something that native English speakers do not do – English speakers' verbs are followed by satellites that express path or prepositional phrases that express path and ground (Stam 2006a, 2008). By 2006 and in 2011, the learner was expressing path linguistically with a satellite 100% of the time. However, there was no change in her expression of manner. She did not use the verb roll in 1997, 2006, or 2011. This differed from the native-English speakers, who all used the verb roll (see Tab. 143.2).

Tab. 143.2: Motion verbs + satellites

[Four-column table: L2 Learner 1997 (N=9), L2 Learner 2006 (N=7), L2 Learner 2011 (N=6), and Native Speakers (N=30); the original cell alignment could not be recovered. Learner forms include climb Ø; climb + inside; climb + up; come + out; go Ø; go + inside; go + out, to; go + down, through; go + down, to, up; go + upstairs; put + through; throw + through; throw + into; throw + away; and walk Ø. Native-speaker forms include come + down, out, up; crawl + up; drop + down; fall + back down, into; go + in, into, out, up, up through; knock + down; put + into; roll + down, on down; run Ø; and throw + down, into.]

3.2. Speech and gesture

3.2.1. Path

As previously mentioned, the different patterns of thinking for speaking of native speakers of Spanish and English are also expressed gesturally. English speakers' path gestures tend to co-occur with a satellite or a verb plus satellite (Kellerman and van Hoof 2003; McNeill and Duncan 2000; Stam 2006a, 2006b), while Spanish speakers' path gestures tend to co-occur with a verb or other (McNeill and Duncan 2000; Stam 2006a, 2008). The learner produced a total of 22 path gestures in English in 1997, 17 in 2006, and 10 in 2011. Fig. 143.1 shows the percentage of path gestures she produced with the different motion event speech elements. In 1997, 32% co-occurred with the verb and 45% with other, following the Spanish pattern (Stam 2006a, 2008), but she also had some path gestures that co-occurred with the satellite (the English pattern). Her path gestures were somewhere between the Spanish and English patterns. In 2006, 18% co-occurred with the verb, 12% with the satellite, 18% with the ground noun phrase, 24% with more than one element, and 29% with other. The percentage of path gestures co-occurring with the satellite remained about the same from 1997 to 2006, while both the percentage of path gestures co-occurring with the verb and other decreased, and the percentage co-occurring with the ground noun phrase and more than one element increased. In 2011, 30% co-occurred with the verb, 10% with the satellite, 20% with the ground noun phrase, 30% with more than one element, and 10% with other. Between 2006 and 2011, the percentage of path gestures co-occurring with the satellite and the ground noun phrase remained roughly the same, the percentage co-occurring with the verb increased, the percentage co-occurring with more than one element increased slightly, and the percentage co-occurring with other decreased.

Fig. 143.1: Percentage of path gestures with motion event speech element: L2 learner and native-English speakers

Fig. 143.1 compares the learner's percentage of path gesture results with those found by Stam (2006a) for native-English speakers. As can be seen in the figure, the learner's gestural expression of path in 2006 had become more English-like except for the percentage of gestures that co-occurred with other (the Spanish pattern). In 2011, some aspects of the learner's English-like pattern persisted and even improved, for example, the increase in the percentage of path gestures with ground noun phrase and more than one element and the decrease in the percentage with other. However, other aspects did not: there was an increase in the percentage of path gestures with verbs and no increase in the percentage with satellites. This suggests that although the learner's gestural expression of path in English has continued to change, it has not completely changed to the native-speaker pattern of expression.

3.2.2. Manner

McNeill and Duncan (2000) found that Spanish speakers may have manner in gesture when there is none in the accompanying speech, while English speakers rarely have manner in gesture when there is none in the accompanying speech. In both 1997 and 2006, all of the learner's manner gestures co-occurred with no manner in speech. In contrast, in 2011, 75% of her manner gestures co-occurred with manner in speech and 25% co-occurred with no manner in speech. This is similar to the native-English speakers, who also had 75% of their manner gestures co-occurring with manner in speech and 25% co-occurring with no manner in speech for the three motion events, and suggests that the learner's gestural expression of manner has begun to shift from a Spanish speaker's pattern to an English speaker's (see Tab. 143.3).

Tab. 143.3: Percentage of manner gestures with manner/no manner in speech

Group | Manner in Speech | No Manner in Speech
L2 Learner 1997 (N=3) | 0% | 100% (N=3)
L2 Learner 2006 (N=1) | 0% | 100% (N=1)
L2 Learner 2011 (N=4) | 75% (N=3) | 25% (N=1)
Native Speakers (N=4) | 75% (N=3) | 25% (N=1)

3.3. Speech and gesture interaction

Let us look at how speech and gesture interact in the learner's narrations in English in 1997, 2006, and 2011 and how these compare with native-English speakers' narrations. Stam (2010b: 79) compared an example (Stam 2008: 250) of the learner's description of Sylvester coming out of the drainpipe from 1997 with her description in 2006 to see if there were any changes in her L2 thinking for speaking. Stam found that the learner's expression of the motion event in 2006 had become more similar to that of native-English speakers in that there were fewer gestures, they were all path, and there was no emphasis of ground. These examples (Stam 2010b: 79) will now be compared with an example of the learner's description of the same event in 2011.

In 1997 (example 1), the learner produced 4 gestures: one manner gesture (1a) co-occurring with when, a subordinating conjunction; two path gestures, (1b) co-occurring with the satellite out and (1c) co-occurring with part of the ground noun phrase from the; and one ground gesture (1d) co-occurring with the remainder of the ground noun phrase pipe.

(1) o[[kay when*when h] [e came out] [from the] [ pipe]]
         a                  b            c          d

– a: iconic: both hands, right hand at lap moves up to upper left chest with 1 1/2 circles in toward body and away from body, left hand moves up to upper left side MANNER;
– b: iconic: both hands, right hand at left upper arm moves in toward body and down to left chest, and continues down to lap, left hand moves in toward body and down to left upper arm PATH;
– c: iconic: both hands, right hand at left chest moves down to lap, left hand at upper left side moves down to lap PATH;
– d: iconic: both hands, palms toward center, fingers toward center, joined at left lap GROUND (Stam 2008: 250).

In 2006 (example 2), the learner produced 2 path gestures: one (2a) co-occurring with the verb, and the other (2b) co-occurring with the ground noun phrase.

(2) [[and he goes all] [out of the pipe]]
          a                  b


– a: iconic: right hand wrist bent at waist moves slightly to the right to lower right side PATH;
– b: iconic (reduced repetition of previous gesture): right hand wrist bent at lower right side moves to the right and slightly up PATH (Stam 2010b: 79).

In 2011 (example 3), she produced 3 gestures: a path gesture (3b) co-occurring with more than one element, go down the pipe; an iconic gesture (3a) showing Sylvester with the ball inside, co-occurring with the conjunction and; and a deictic gesture (3c) showing the location of the street and the endpoint, co-occurring with to the.

(3) [[/ and] [/ / go down the pipe all the way] [/ to the street]]
         a          b                               c

– a: iconic: both hands, facing center, fingers facing away, on both sides of the body at right and left extreme periphery;
– b: iconic: both hands, facing center, fingers away from body, right hand at right center periphery, left hand at upper left periphery move down to the right across body to low right periphery and flip up PATH;
– c: deictic: right hand turns over and points down at low right periphery, palm toward center, fingers toward down, left hand lowers to right center, palm toward body, fingers toward right, and both hands hold.

Her gestures in this example indicate that she is thinking of what Sylvester looks like (3a), where he went (3b), and where he ended (3c). In 2011, her speech and gesture for the expression of path are more similar to a native speaker's (example 4) than in 1997 and 2006, as there is only one path gesture for Sylvester and the bowling ball going down and out of the drainpipe, and it co-occurs with more than one element. However, her speech and gesture still differ from a native speaker's, as she also produces a deictic gesture for the street. This shows that the learner is still unable to cross boundaries with her gestures and needs to produce a separate gesture for the endpoint, a Spanish pattern.

(4) [and he comes out the bottom of the drainpipe]

– iconic + deictic: left hand index finger extended at upper left side goes straight down, then curves toward center under right at lap and holds PATH (Stam 2010b: 80).

To summarize, between 1997 and 2011, the learner's linguistic and gestural expression of path changed in English. In 2006, she consistently used satellites, and this use persisted through 2011. From 1997 to 2011, there was a decrease in path gestures with other and an increase in path gestures with ground noun phrases and more than one element. In addition, her speech and gestures became less segmented, and her gestures covered more constituents in utterances, like native-English speakers' gestures do.

The learner's expression of manner did not change in English between 1997 and 2006. She continued to express manner within a Spanish thinking-for-speaking pattern. She continued not to produce the manner verb roll in English like native-English speakers do, and she expressed manner only in gesture when there was none in speech. However, between 2006 and 2011, her expression of manner began to change. Though she was still not using the manner verb roll, she no longer expressed manner only in gesture when there was none in speech. Instead her gestural expression followed the English pattern. Over the fourteen years, her pattern of thinking for speaking about path in English became more native-like, and her pattern of thinking for speaking about manner began to change.

4. Discussion and conclusion

This study sought answers to three questions: how the learner expressed path and manner linguistically and gesturally in 2011, how this compared with her expression of path and manner in 1997, in 2006, and with native speakers of English, and what implications this had for thinking for speaking changing in an L2.

The results show that the learner's linguistic expression of path in English changed between 1997 and 2011. In 1997, she sometimes expressed path linguistically with a satellite, following the English thinking-for-speaking pattern, but she also sometimes expressed it with just a verb, following the Spanish thinking-for-speaking pattern. In 2006 and 2011, her linguistic expression of path followed the English thinking-for-speaking pattern: she consistently expressed path with a satellite. However, her expression of manner did not change. She never used the manner verb roll in 1997, 2006, or 2011.

There was also a change in how she expressed path gesturally in English from 1997 to 2011. There was an increase in path gestures with ground noun phrases and more than one element and a decrease in path gestures with other. Additionally, there was a change in the learner's gestural expression of manner. By 2011, she was following the English pattern of rarely having manner gestures without manner in speech. In addition, the learner's speech and gestures together changed: the gestures covered more speech and became less and less segmented over time.

These differences in the learner's gestural expression of path and manner from 1997 to 2011 reflect a change in her L2 thinking for speaking. Her thinking for speaking about path and manner became more native-like, but not completely, as there was no increase in the number of path gestures with satellites, and her path gestures did not include boundary crossings.
The change in the learner’s gestural expression of manner in 2011 suggests that perhaps manner is not a pattern acquired in childhood that is resistant to change after all (Slobin 1996; Stam 2010b). It just takes time. It is possible that learners first focus on path, the most salient element of a motion event, and then turn to manner. This would be consistent with what Stam reported in a study on the development of first language thinking for speaking in English (Stam 2010a): satellites are learned early and used consistently, and manner use appears later. It is also possible that the gestural change in manner may be due in part to increased interactions with native speakers and mimesis (McCafferty 2008). The change in the learner’s expression of path both linguistically and gesturally, and manner gesturally is probably a result of her increased English proficiency and her use of the language on a daily basis in a number of sociocultural contexts. As the learner

143. Further changes in L2 Thinking for Speaking?

1885

has interacted more in English in American culture, her thinking for speaking has become more native-like. Although this study showed that the learner’s thinking for speaking about path and manner in her L2 changed over a fourteen-year period the results are limited. Only one individual and her speech and gesture in only one episode of her cartoon narration were examined. To get a fuller picture of changes in the learner’s thinking for speaking, more episodes need to be examined. Nevertheless, the fact that some aspects of her L2 thinking for speaking about path and manner have continued to change implies that L2 thinking for speaking is not static. It can change over time. That the learner is still not crossing boundaries with her path gestures like native-English speakers do implies that not all aspects of thinking for speaking change equally. It also raises the question of how long it takes for some aspects to change and whether some are resistant to change as Slobin has proposed. What is needed to explore this question further are more longitudinal studies of second language learners from different language backgrounds as well as studies that test whether L2 thinking for speaking patterns can be explicitly taught.

5. References

Cadierno, Teresa 2008. Learning to talk about motion in a foreign language. In: Peter Robinson and Nick C. Ellis (eds.), Handbook of Cognitive Linguistics and Second Language Acquisition, 239–275. New York: Routledge.
Cadierno, Teresa 2013. Thinking for speaking in second language acquisition. In: Carol A. Chapelle (ed.), The Encyclopedia of Applied Linguistics. Oxford: Blackwell Publishing Ltd.
Freleng, Friz (director) 1950. Canary Row [Animated Film]. New York: Time Warner.
Han, ZhaoHong and Teresa Cadierno (eds.) 2010. Linguistic Relativity in SLA: Thinking for Speaking. Buffalo, NY: Multilingual Matters.
Kellerman, Eric and Anne-Marie van Hoof 2003. Manual accents. International Review of Applied Linguistics 41(3): 251–269.
Lewis, Tasha 2012. The effect of context on the L2 thinking for speaking development of path gestures. L2 Journal 4(2): 247–268.
McCafferty, Steven G. 2008. Mimesis and second language acquisition. Studies in Second Language Acquisition 30(2): 147–167.
McNeill, David 1992. Hand and Mind. Chicago, IL: The University of Chicago Press.
McNeill, David and Susan Duncan 2000. Growth points in thinking-for-speaking. In: David McNeill (ed.), Language and Gesture, 141–161. Cambridge, UK: Cambridge University Press.
Negueruela, Eduardo, James P. Lantolf, Stephanie Rehn Jordan and Jaime Gelabert 2004. The "private function" of gesture in second language speaking activity: A study of motion verbs and gesturing in English and Spanish. International Journal of Applied Linguistics 14(1): 113–147.
Slobin, Dan I. 1991. Learning to think for speaking: Native language, cognition, and rhetorical style. Pragmatics 1: 7–26.
Slobin, Dan I. 1996. From "thought and language" to "thinking for speaking." In: John J. Gumperz and Stephen C. Levinson (eds.), Rethinking Linguistic Relativity, 70–96. Cambridge, UK: Cambridge University Press.
Slobin, Dan I. 2006. What makes manner of motion salient? Explorations in linguistic typology, discourse, and cognition. In: Maya Hickmann and Stéphane Robert (eds.), Space in Languages: Linguistic Systems and Cognitive Categories, 59–81. Amsterdam/Philadelphia: John Benjamins.
Stam, Gale 1998. Changes in patterns of thinking about motion with L2 acquisition. In: Serge Santi, Isabelle Guaïtella, Christian Cavé and Gabrielle Konopczynski (eds.), Oralité et Gestualité: Communication Multimodale, Interaction, 615–619. Paris: L'Harmattan.

Stam, Gale 2006a. Changes in patterns of thinking with second language acquisition. Ph.D. dissertation, Committee on Cognition and Communication, Department of Psychology, The University of Chicago, Chicago, IL.
Stam, Gale 2006b. Thinking for speaking about motion: L1 and L2 speech and gesture. International Review of Applied Linguistics 44(2): 143–169.
Stam, Gale 2008. What gestures reveal about second language acquisition. In: Steven G. McCafferty and Gale Stam (eds.), Gesture: Second Language Acquisition and Classroom Research, 231–255. New York: Routledge.
Stam, Gale 2010a. L1 thinking for speaking before age 3. Paper delivered at the 4th Conference of the International Society for Gesture Studies (ISGS) – Gesture: Evolution, Brain, and Linguistic Structures, Frankfurt (Oder), Germany.
Stam, Gale 2010b. Can an L2 speaker's patterns of thinking for speaking change? In: ZhaoHong Han and Teresa Cadierno (eds.), Linguistic Relativity in L2 Acquisition: Evidence of L1 Thinking for Speaking, 59–83. Buffalo, NY: Multilingual Matters.
Talmy, Leonard 2000. Towards a Cognitive Semantics. Volume II: Typology and Process in Concept Structuring. Cambridge, MA: MIT Press.

Gale A. Stam, Skokie (USA)

144. Gesture and the neuropsychology of language

1. Introduction
2. Background: an evolutionary perspective
3. Neurological substrates of language and gesture
4. Disorders of gesture in cases of apraxia
5. A note on dissociations in case studies
6. How neuropsychology can help gesture classification
7. Conclusions
8. References

Abstract

This article offers an overview of what is currently known about the neurological substrates and cognitive structures involved in gesture use, and of how they relate to what is known about the neuropsychology of language.

1. Introduction

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1886–1897

There are many relationships between communicative action, gesture, and language. For example, kissing as a sign of affection can be, depending on the circumstances, an actual movement of the whole body, a ritualized display of blowing a kiss, or a verbal expression such as “hugs and kisses” or “je t’embrasse” (‘I embrace you’) at the end of informal love letters. These different behavioral forms are not identical, however. Each mode of



communication has its own specificity together with commonalities that enable translation from one mode to another. The attempt to reconcile these two aspects has elicited much discussion. One approach among others has been to examine the cerebral bases of communicative behavior, in order to discover the organization of the mind through the study of brains damaged by various neurological diseases, through imaging techniques, and through simulation in formal models (Shallice and Cooper 2011).

2. Background: an evolutionary perspective

The notion of multimodal communication is very ancient, and it is now anchored in Darwin’s theory of evolution, which assumes “descent with modification”: new forms (species, organs, behaviors) emerge from the recombination and transformation of old parts. At present, several global theories describe the reuse of neural circuitry for both primitive and novel functions (Anderson 2010). The outcome of such natural “tinkering” is either a slightly modified variant of the ancestral structure or a quite innovative product. Human communicative behavior is an instance of change that can be viewed in both ways, depending on whether the emphasis is put on the continuity with other primate species or on the revolution brought by symbolic reference and language acquisition. Debates about the mirror neuron hypothesis illustrate this duality between a modified and an innovative structure. Mirror neurons were initially discovered in the premotor frontal region of the macaque brain (for a review, see Rizzolatti and Craighero 2004). They differ from so-called canonical neurons, which respond to the presentation of isolated objects, by responding to object-directed actions like food grasping, whether performed by the monkey itself or by another individual. Thus, they have bimodal visual and motor properties. On this basis, Rizzolatti and co-workers assumed that mirror neurons could mediate action understanding and, through observation/execution matching, constitute a bridge between primitive mammalian motor control systems and evolved human language and mind reading. In subsequent studies, cells with similar mirror properties were identified in other parts of the monkey brain (superior temporal and inferior parietal).
Results of imaging studies in humans reinforced the hypothesis that a similar mirror-neuron system also exists in our species, allowing imitation, cultural learning, and social understanding, though this claim is still debated (see, e.g., the discussion in Gallese et al. 2011). In several papers, Jacob (2009; see also Jacob and Jeannerod 2005) stressed the necessary distinction between action understanding (anticipating consequences from previous experience) and mind reading (inferring the intentions of other organisms). Generally speaking, mirror neuron theory has not yet been elaborated enough, empirically and computationally, to solve all the problems it is thought to address in the domains of motor cognition, social cognition, language processing, and psychopathology. Arbib (2005, 2006) proposed an expanded and more balanced version of the Mirror System Hypothesis as an evolutionary scenario of “descent with modification”. He assumed a gradual progression in primate evolution from object grasping to language through seven stages (S1–S7):
(i) S1: A motor control system for manual and oral object grasping.
(ii) S2: A mirror system shared with the common human and macaque ancestry.


(iii) S3: A simple imitation system (learning of short sequences through repeated exposure) shared with the common human and chimpanzee ancestry (macaques do not imitate).
(iv) S4: A complex imitation system in the hominid line, enabling single-trial acquisition of novel sequences of actions linked via sub-goals.
(v) S5: Protosign, a manual-based communication system using pantomime (5a) and conventional gestures to disambiguate pantomimes (5b).
(vi) S6: Protolanguage as the combination of protosigns and vocal sounds.
(vii) S7: Language beyond protolanguage (use of syntax and compositional semantics).
This proposal suggests that communicative action and language are not co-extensive, as each stage presupposes additional mechanisms, but that during phylogeny the latter evolved from the former. Thus, the Mirror System Hypothesis makes use of two different, partly overlapping networks.

3. Neurological substrates of language and gesture

Cognitive processes result from orchestrated interactions between multiple distributed brain areas that each mediate functionally specialized operations. Today’s views of the neuroanatomy of language consider wider networks that include, but extend beyond, the classical Broca’s (inferior frontal gyrus) and Wernicke’s (superior temporal gyrus) areas in the left hemisphere. Hickok and Poeppel (2004, 2007), for instance, identify four areas that play an important role in single-word processing (Fig. 144.1a). They assume that the posterior inferior temporal lobe constitutes a sound-meaning interface, that the superior temporal gyrus supports acoustic-phonetic speech codes, that the Sylvian parieto-temporal area constitutes an auditory-motor interface, and that the prefrontal regions (posterior inferior and dorsal premotor) support the articulatory-based speech code. In this

Fig. 144.1a: Functional anatomy of spoken word processing (adapted from Hickok and Poeppel 2004). dPM = dorsal premotor, IF = inferior frontal (Broca’s area), pITL = posterior inferior temporal lobe, Spt = Sylvian parietal temporal, STG = superior temporal gyrus (Wernicke’s area). Subcortical structures are not shown.



Fig. 144.1b: Neuro-anatomy of the human Mirror Neuron System (adapted from Cattaneo and Rizzolatti 2009). IFG = inferior frontal gyrus, PMD = dorsal premotor cortex, IPL = inferior parietal lobule, IPS = intraparietal sulcus, SPL = superior parietal lobule, STS = superior temporal sulcus. Subcortical structures are not shown.

model, the conceptual network is widely distributed and involves anterior temporal, parietal, and frontal regions. The human Mirror Neuron System used for action observation and imitation also encompasses several areas in addition to the prefrontal and parietal regions (Caspers et al. 2010). Cattaneo and Rizzolatti (2009) proposed a map in which different cortical areas correspond to different types of motor acts (Fig. 144.1b): reaching in the superior parietal lobule; actual and simulated tool use (transitive gestures) in various parts of the inferior frontal gyrus and the inferior parietal lobule; execution of intransitive gestures (i.e., expressive or conventional) at the temporo-parietal junction; and observation of upper-limb movements in a portion of the superior temporal sulcus. From this summary, it is clear that these two extended neural networks overlap, though currently one cannot strictly localize mental functions to well-defined brain structures, given the variety of processes involved. For instance, putting the extended index finger on the mouth to request silence requires selecting the correct hand shape and moving the hand to the mouth in the appropriate orientation. Studies of brain-damaged patients have found that selecting a hand configuration and pointing to the self can be impaired independently, following lesions of different parts of the parietal lobe. Furthermore, the same brain areas, as a consequence of neural reuse, may be involved in multiple functions. Broca’s area, for instance, plays a role in speech production and comprehension at the phonological, syntactic, and semantic levels, in action execution, observation, imitation, and understanding, and in music processing.
It can be viewed either as a mosaic of independent elements (multiplied in closely neighboring areas), as a distributed system (the same node belongs to several networks, in which it plays different roles depending on its connections, like an individual who belongs to different social groups), or as the neurological basis of a domain-general ability to process the hierarchical structure of sentences and action plans (e.g., Hagoort’s 2005 binding hypothesis). Thus, the close proximity of brain areas involved in speech and gesture processing is only weak evidence for the interdependence of language and action. If the


claim is that connections exist between these two domains, it is trivial; if the claim is that sensorimotor mechanisms constitute the cerebral basis for language, it is disputable on the basis of the empirical evidence (Mahon and Caramazza 2008). Rather than attempting to “localize” broad mental functions like language and gesture, neuropsychological studies aim at describing more precisely a functional architecture in which different parts of the brain support different processes. Figs. 144.1a and 144.1b have in common a distinction between a ventral (temporal) and a dorsal (fronto-parietal) stream. In the cognitive neuroscience of action, the functions of the ventral stream are to answer the “what” questions by means of long-term semantic representations. The dorsal streams, subdivided into superior and inferior parts (or dorso-dorsal and ventro-dorsal), are involved in answering the “where” and “how” questions of movement planning, respectively. This conception of multiple routes to action originated in neurophysiological studies on the monkey and in the description of dissociations in human brain-damaged patients (Daprati and Sirigu 2006; Jeannerod and Jacob 2005; Milner and Goodale 2008). In cases of optic ataxia (bilateral lesions of the dorsal stream), reaching and pointing actions are inaccurate, while estimating the size of perceived objects by opening the thumb and index finger is unimpaired. Conversely, in various cases of lesions to the ventral stream or to the junction of the ventral and dorsal streams, patients can no longer select appropriate hand shapes from memory, but they remain able to use present objects and to point to their location. Thus, in relation to multiple “vision for action” systems (Rossetti and Pisella 2002), one can distinguish different stores of knowledge and different kinds of motor control processes, depending upon task characteristics.
For instance, Buxbaum and Kalénine (2010) associated the dorso-dorsal stream with knowledge of structures (object shapes and locations, the body schema used in immediate performance) and the ventro-dorsal stream with knowledge of functions (object mechanical properties retrieved from memory). There are of course multiple connections between these different routes. Arbib (2006), for instance, suggested that the putative mirror neuron system constitutes an interface between the ventral and dorsal streams, which can be accidentally injured, entailing various forms of aphasia and apraxia.

4. Disorders of gesture in cases of apraxia

The term apraxia was initially proposed at the end of the 19th century as a substitute for the rival notion of “motor asymbolia”. It survived thanks to the theoretical interpretations given by Hugo Liepmann between 1900 and 1925 (see Goldenberg 2003a). Liepmann reported the case of a patient whose illness rendered him unable to execute simple commands such as “show your nose” or “make a fist” with the right hand. By contrast, he was still able to perform these movements correctly with the left hand, and the instruction to imitate gestures elicited the same manual asymmetry. Subsequent group studies revealed a prevalence of left hemisphere lesions as antecedents of the apraxias, together with varieties of impairment. Liepmann distinguished problems resulting from loss of “movement formulas” (knowing what to do, in front of objects for instance) from problems resulting from broken connections between ideas of movement and motor execution (knowing how to do it). Nowadays, neuropsychologists use the term “apraxia” to refer to various disorders of purposive gesture performance that are caused by brain damage and cannot be explained by primary motor or verbal comprehension deficits (for comprehensive reviews, see Petreska et al. 2007; Rumiati, Papeo, and Corradi-Dell’Acqua 2010). The term represents a generic label for impairments in three different domains: communicative gestures (e.g., saluting), simulated and actual tool use, and the copying of meaningless movements. Apraxia results from extended left hemisphere lesions, and no precise localization can currently be proposed (Goldenberg 2003b). These lesions extend beyond the putative Mirror Neuron System, mainly into the inferior frontal and parietal lobes (Goldenberg et al. 2007; Goldenberg 2009). Following Liepmann’s seminal contribution, contemporary cognitive models of the action system have been proposed that use the labels ideational and ideomotor apraxia for conceptual and motor deficits, respectively. Additional dissociations are also described by means of comprehensive batteries used in the clinical examination of praxis (see, for instance, Bartolo, Cubelli, and Della Sala 2008; Peigneux and Van der Linden 2000; Power et al. 2010). Performance is assessed in tasks that vary on several dimensions:
(i) Input modality: spoken commands, visual models (imitation), real objects (seen and/or manipulated), pictures, […]
(ii) Gesture category: meaningful/meaningless, transitive/intransitive (pantomimes versus conventional gestures), simple/sequential […]
(iii) Musculature involved: face, upper limb (left and right), […]
(iv) Output: production, comprehension (picture matching, judgment of well-formedness, […])
Unfortunately, these tests are not standardized and present several limitations. Lists of gestures are usually short (about 6 to 12 items per category; there is a trade-off between the number of tasks and the number of items), and gesture characteristics such as familiarity, motor complexity, or emotional load are not well controlled. Nonetheless, these instruments enable clinicians to distinguish various possible forms of apraxia (see, e.g., Cubelli et al. 2000 and Roy et al. 2000).
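The four assessment dimensions cross-multiply quickly, which is one reason clinical batteries keep their item lists short. A toy enumeration may make the trade-off concrete; the value sets below are abbreviated from the dimensions listed above, and the per-cell item count is invented for illustration:

```python
from itertools import product

# Value sets abbreviated from the four assessment dimensions;
# real batteries differ in which cells they actually sample.
input_modality = ["spoken command", "visual model", "real object", "picture"]
gesture_category = ["meaningful transitive", "meaningful intransitive", "meaningless"]
musculature = ["face", "left upper limb", "right upper limb"]
output = ["production", "comprehension"]

# Full crossing of the four dimensions: one cell per task condition.
cells = list(product(input_modality, gesture_category, musculature, output))
print(len(cells))  # 4 * 3 * 3 * 2 = 72 task cells

# Even a modest 10 items per cell would require 720 trials per patient;
# hence the trade-off between number of tasks and number of items.
```

A battery that sampled only a handful of these cells with 6 to 12 items each already approaches the limits of a clinical session, which explains why gesture characteristics such as familiarity or motor complexity often go uncontrolled.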
Conceptual apraxia impairs gesture comprehension in addition to gesture production from verbal instruction. In contrast, pantomime agnosia is characterized by impaired recognition but normal production. Defective processing of transitive gestures (recognition and use) with normal processing of intransitive gestures is interpreted as an “amnesia of usage”, a specific impairment of the knowledge of how objects are manipulated. Other deficits affect gesture production at various stages while recognition remains correct. The information processing models of praxis describe multiple routes from input to output through the ventral and dorsal streams (Buxbaum and Kalénine 2010; Rumiati et al. 2010). The semantic route is required to perform meaningful gestures from verbal command, because language comprehension and action semantics are involved. Two more direct routes allow translation from observed gesture to executed gesture (imitation of meaningless postures) and from objects to actual use (reaching, grasping, and manipulating). Thus, gesture processing may be impaired for different reasons. However, the interpretation of cases of dissociation is complicated by the problem of task demands presented in the following section.

5. A note on dissociations in case studies

Shallice (1988) thoroughly discussed the methodology of case studies in neuropsychology, in order to establish the precise conditions under which differences in performance allow



Fig. 144.2: Performance as a function of available cognitive resources: in the more impaired patient A, the score is higher in task II than in task I; the inverse difference is observed in patient B. In both tasks, B’s score is higher than A’s (adapted from Shallice 1988).

investigators to infer the existence of separate subsystems in the cognitive architecture. The problem arises from variation in task difficulty (see Fig. 144.2). This is illustrated by the findings of Papeo et al. (2010), who compared the processing of actions (video clips) and objects (photographs) in three tasks: pantomime execution, naming, and word/picture matching. One patient, A (case B.B.), scored higher in task I (73% correct tool-use demonstration) than in task II (57% correct action imitation), whereas for another patient, B (case M.B.), performance in task II (100% correct imitation) was better than in task I (87% correct demonstration of tool use); in both tasks, however, patient B was less severely affected than patient A. This pattern of performance does not prove that pantomime imitation and demonstration of object use rely on different components of the system; it is compatible with a single component that may be impaired to various degrees. In the more severely impaired patient A, who also suffered from comprehension deficits in the matching task, the sight of the present object facilitated gesture production in comparison with imitation, which required maintaining the model in short-term memory. In the less impaired patient B, with intact comprehension, the model to be copied provided the necessary information for gesture production, whereas in the presence of a picture of an object the difficulty was to retrieve the appropriate hand configuration from long-term memory. To establish a convincing double dissociation, one would need to find other cases in which A surpasses B in task I while B is better than A in task II, a pattern of results that would be incompatible with the assumption of a single processing component.

In general, as a consequence of lesions to the left cerebral hemisphere, patients with apraxia usually also suffer from language impairments.
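Shallice’s single-resource argument can be made concrete with a small simulation. In the sketch below (all curves and parameter values are invented for illustration), each task’s score is a monotonically increasing function of the patient’s available resources; the two curves cross, so the within-patient ordering of tasks reverses between patients A and B, yet B outperforms A on both tasks:

```python
import math

def score(resource, difficulty, slope):
    """Logistic performance curve, increasing in the available resource.
    difficulty shifts the curve rightward; slope controls its steepness.
    All parameter values here are invented for illustration."""
    return 1 / (1 + math.exp(-slope * (resource - difficulty)))

# Task I rises early but shallowly; task II rises later but steeply,
# so the two curves cross between the patients' resource levels.
def task_I(r):  return score(r, difficulty=0.0, slope=0.8)
def task_II(r): return score(r, difficulty=2.0, slope=3.0)

r_A, r_B = 1.0, 3.0  # patient A is more impaired than patient B

assert task_I(r_A) > task_II(r_A)   # A: task I better than task II
assert task_II(r_B) > task_I(r_B)   # B: task II better than task I (reversed)
assert task_I(r_B) > task_I(r_A)    # yet B beats A on both tasks:
assert task_II(r_B) > task_II(r_A)  # no double dissociation from one component
```

Because both curves increase with the same resource, no parameter choice can make the more impaired patient beat the less impaired one on either task; only that crossed between-patient pattern, the true double dissociation, would rule out the single-component account.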
The two disorders are statistically associated; there are only a few cases of apraxia without aphasia (perhaps due to right hemisphere damage or to individual differences in the pattern of lateralization), but numerous cases of aphasia without apraxia. Does this reflect the greater ease of gesture production, or a greater contribution of the right hemisphere to gesture than to language production? In the study by Papeo et al. (2010) mentioned above, case B.B. was impaired in all tasks: 0% correct naming, 57% correct gesture imitation, and



73% correct pantomime of objects, in addition to word comprehension problems. In this severely impaired patient, as in other persons with global aphasia, pantomime production, although pathological, was better than naming. Conversely, in case N.P. (intact comprehension), the gesture imitation score (60% correct) was lower than the naming scores (actions: 87%; objects: 80%). In all tasks, however, N.P. was superior to B.B. These findings are consistent with the idea that for patients with mild aphasia pantomime production is an unfamiliar task, and that retrieval of a motor image from memory is more demanding than lexical retrieval. The same pattern is observed in healthy elderly persons without brain damage, who can be placed at the extreme right of the curves drawn in Fig. 144.2, with better object naming than gesture imitation. Thus, depending on the severity of the brain damage (the amount of resources available, in Shallice’s 1988 terms), naming can be an easier or a more difficult task than gesture production, and without further information one cannot know whether separate subsystems or a single conceptual system are involved. Similar problems raised by variability in task difficulty can be noted in a study comparing pantomime and object recognition, pantomime imitation, and actual use of objects in a series of 37 patients (Negri et al. 2007). Usually, patients with apraxia have no problem with object recognition, and imitation is generally more impaired than object use. Negri et al. reported several cases of dissociation. However, their object recognition task was very easy, and in case T.O., for example, a score of 97% correct responses was paradoxically rated as pathological, while 85% correct gesture imitation was within the range of normal performance.
Likewise, in case B.E., object use was judged more impaired than pantomime imitation despite similar raw scores in the two tasks, because in the whole sample, as in the control group, imitation was more difficult than actual use. Several factors intervene to explain such a difference: sensory information provided by the object and absent in the gesture to be imitated, working memory load, task familiarity, etc. In the absence of control for these factors, one cannot decide whether symbolic and actual use of objects depend on a single subsystem or on distinct ones. Another factor that affects performance is the automatic versus voluntary nature of the task. In a study by Trojano, Labruna, and Grossi (2007), four patients with apraxia were video-recorded in two natural situations, a meal and a conversation with the psychologist. Actions (tool use vs. object prehension) and speech-related gestures (representational and meaningless) were coded and, when correctly produced, selected for further testing. The test consisted of imitation of pantomimes, non-tool actions, and meaningless conversational gestures. All patients made spatial errors when reproducing, under artificial conditions, movements previously found to belong to their repertoire. Again, the tasks varied in processing demands.
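The T.O. example shows that “pathological” is a norm-relative label: a raw score is judged against task-specific control performance, so 97% can fall outside normal limits on a near-ceiling task while 85% stays inside them on a more variable one. A numerical sketch, in which the control means, standard deviations, and the −2 SD cutoff are invented for illustration:

```python
def z_score(score, control_mean, control_sd):
    """Distance from the control mean in control standard deviations."""
    return (score - control_mean) / control_sd

# Object recognition: controls are near ceiling with tiny variance,
# so even 97% correct lies far below them.
z_recognition = z_score(0.97, control_mean=0.999, control_sd=0.005)

# Gesture imitation: controls are more variable, so 85% is unremarkable.
z_imitation = z_score(0.85, control_mean=0.90, control_sd=0.08)

assert z_recognition < -2  # beyond a conventional -2 SD cutoff: "pathological"
assert z_imitation > -2    # within normal limits despite the lower raw score
```

The same logic explains B.E.: equal raw scores in two tasks can translate into unequal impairment once each score is referred to its own control distribution.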

6. How neuropsychology can help gesture classification

Despite the methodological problems associated with differences in task difficulty, results of neuropsychological studies can be informative concerning Kendon’s continuum, which aligns gesticulation, autonomous gestures (pantomime and conventional gestures), and sign language along a single dimension, whereas McNeill (2005) considers multiple dimensions that distinguish gesticulation (without linguistic properties) from sign languages (with these properties). For Kendon (2004), gestures are kinds of action, whereas McNeill puts the emphasis on the language-imagery dialectic, which characterizes speech-related gesticulation and is absent in other uses of gesture.


Of particular interest for this debate are studies of rare brain-damaged patients with impairments in the use of sign languages (Corina and Knapp 2008). In these cases, the examination of language and praxis involves the same visual and manual modalities. Some of these patients remain able to pantomime the use of objects that they cannot “name” by signing, providing evidence of a (simple) dissociation between the production of autonomous gestures and sign language. For example, in the British Sign Language used by “Charles”, a patient studied by Marshall et al. (2004), some signs resembled gestures spontaneously performed by non-signers to represent an object such as a cigarette (a pantomime of smoking), whereas other signs had a specific morphology that differed from the gestures used by hearing speakers. The language examination showed a global naming score of 50% correct, inferior to the correct pantomime score of 82%. However, signs that were similar to spontaneous gestures were more often correct (64%) than signs with a different morphology (36%). Right hemisphere lesions do not provoke similar aphasic symptoms but entail various other communication problems. For instance, users of American Sign Language with right hemisphere damage produced very few lexical errors in a story narration task but a high proportion of errors in producing the signs called “classifiers” (Hickok et al. 2009). These signs combine a hand shape configuration that refers to a generic semantic class (people, vehicles, etc.) with a motion or a location that visually represents the motion or location of the referent (for instance, the trajectory of a car in the narrative). Thus, they combine properties of signs and iconic gestures. These studies of aphasia in sign languages indicate that the cerebral lateralization of gesture production varies from strong left hemisphere dominance to bilateral control, depending on the lexical status of the movement.
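As a quick consistency check on the scores reported for “Charles”: if the two sign categories contained equal numbers of naming items (an assumption, since the item split is not given in the summary above), the category scores average out exactly to the reported global naming score:

```python
# Assumption (for illustration only): equal numbers of naming items in the
# gesture-like and different-morphology sign categories.
gesture_like = 0.64      # signs resembling spontaneous co-speech gestures
different_morph = 0.36   # signs with a morphology unlike co-speech gestures

# Unweighted mean of the two category scores.
global_naming = (gesture_like + different_morph) / 2

assert abs(global_naming - 0.50) < 1e-9  # matches the reported 50% overall
```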
Neuropsychological investigations have also compared the production of conventional gestures and pantomimes. In patients with apraxia, production of transitive gestures (simulated use of objects) is generally more impaired than production of intransitive gestures, i.e., conventional or expressive gestures such as indicating a bad smell (Stamenova, Roy, and Black 2010). In some cases, the deficit concerns only transitive gestures, with a spared ability to perform conventional gestures (Dumont, Ska, and Schiavetto 1999), while the inverse dissociation has not yet been described. One may wonder whether these two categories of gesture relate to different kinds of knowledge: motor cognition for tool use and social cognition for intransitive gestures (Buxbaum, Kyle, and Menon 2005). However, the absence of documented cases showing an inverse dissociation (i.e., impaired production of conventional gestures with preserved pantomimes) favors the hypothesis of a single system. One can account for the observed difference in terms of task demands, for instance by considering that setting the correct movement parameters is more difficult when the instruction is to represent an imaginary object manipulation than when it is to perform a familiar conventional gesture. Very few studies have been devoted to the analysis of the speech-related gestures of patients with apraxia. Following Hostetter and Alibali (2008), if these gestures are seen as simulated actions, one should expect impaired gesticulation in patients whose apraxia results from defective conceptualization of action. By contrast, if the patients’ problem is to execute correctly the movement they have in mind, they should have no difficulty accompanying speech with spontaneous hand gestures. Rose and Douglas (2003) selected seven patients with a diagnosis of conceptual apraxia and aphasia.
During a short conversation, these patients performed a large number of gestures classified as descriptive (iconics), codified (e.g., head nods), or pantomimes. There was no significant



correlation between scores on the apraxia assessment and the use of pantomimes in conversation. These findings of differences between spontaneous gestures and gestures performed on command can be interpreted in several ways, including differences in task demands concerning gesture accuracy, automatic versus voluntary processing, facilitation by the conversational context, and familiarity. By contrast, Hogrefe et al. (2011) found significant correlations between the quality of spontaneous gestures and scores on two tests of the clinical examination: pantomime to command and semantic processing of pictures. However, the procedures differed considerably between the two studies in patient selection (various degrees of semantic impairment), speech elicitation conditions (conversation versus narrative recall), and gesture coding (frequency and classification versus ratings of formal diversity and comprehensibility).

7. Conclusions

The success of the concept of a “mirror neuron system” is impressive if assessed through bibliometric indices. Undoubtedly, the ability to match audio-visual representations of observed actions with motor representations of the same actions is fundamental for imitation and social understanding, two important prerequisites of language acquisition and use (Rizzolatti, Fogassi, and Gallese 2001). However, Rizzolatti and co-workers have admitted on several occasions that mirroring is only part of the solution to the problems they address, and results of neuropsychological studies indicate that we are still far from knowing how the system works (e.g., how does the sight of an action sometimes “resonate” with, or evoke, complementary actions? how do apparently similar gestures receive different interpretations depending on the context?). Investigations of brain-damaged patients and findings obtained through neuroimaging techniques converge on the conclusion that extended brain networks are involved in language and gesture processing. Yet not all the regions connected within these networks play equivalent roles. Depending on their location and on individual brain history, lesions entail very diverse consequences, mixing specific impairments with spared abilities. The main lesson from the neuropsychological studies, captured in the term “cognitive architecture”, is that the brain forms an integrated but composite system: a kaleidoscope rather than a mirror of the surrounding world.

Acknowledgements The author is funded as Research Director by the Fund for Scientific Research (FNRS, Belgium). Gratitude is expressed to Agnesa Pillon, Dana Samson, and Martin Edwards for their insightful remarks on a preliminary version of the chapter.

8. References

Anderson, Michael L. 2010. Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences 33(4): 245–313.
Arbib, Michael A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics. Behavioral and Brain Sciences 28(2): 105–167.
Arbib, Michael A. 2006. Aphasia, apraxia and the evolution of the language-ready brain. Aphasiology 20(9/10/11): 1125–1155.


Bartolo, Angela, Roberto Cubelli and Sergio Della Sala 2008. Cognitive approach to the assessment of limb apraxia. The Clinical Neuropsychologist 22(1): 27–45.
Buxbaum, Laurel J. and Solène Kalénine 2010. Action knowledge, visuomotor activation, and embodiment in the two action systems. Annals of the New York Academy of Sciences 1191: 201–218.
Buxbaum, Laurel J., Kathleen M. Kyle and Rukmini Menon 2005. On beyond mirror neurons: Internal representations subserving imitation and recognition of skilled object-related actions in humans. Cognitive Brain Research 25(1): 226–239.
Cattaneo, Luigi and Giacomo Rizzolatti 2009. The mirror neuron system. Archives of Neurology 66(5): 557–560.
Caspers, Svenja, Karl Zilles, Angela R. Laird and Simon B. Eickhoff 2010. ALE meta-analysis of action observation and imitation in the human brain. NeuroImage 50(3): 1148–1167.
Corina, David P. and Heather Patterson Knapp 2008. Signed language and human action processing: Evidence for functional constraints on the human Mirror-Neuron System. Annals of the New York Academy of Sciences 1145: 100–112.
Cubelli, Roberto, Clelia Marchetti, Giuseppina Boscolo and Sergio Della Sala 2000. Cognition in action: Testing a model of limb apraxia. Brain and Cognition 44(2): 144–165.
Daprati, Elena and Angela Sirigu 2006. How we interact with objects: learning from brain lesions. Trends in Cognitive Sciences 10(6): 265–270.
Dumont, Catherine, Bernadette Ska and Alessandra Schiavetto 1999. Selective impairment of transitive gestures: An unusual case of apraxia. Neurocase 5(5): 447–458.
Gallese, Vittorio, Morton Ann Gernsbacher, Cecilia Heyes, Gregory Hickok and Marco Iacoboni 2011. Mirror neuron forum. Perspectives on Psychological Science 6(4): 369–407.
Goldenberg, Georg 2003a. Apraxia and beyond: Life and work of Hugo Liepmann. Cortex 39(3): 509–524.
Goldenberg, Georg 2003b. Pantomime of object use: a challenge to cerebral localization of cognitive function. NeuroImage 20(1): 101–106.
Goldenberg, Georg 2009. Apraxia and the parietal lobes. Neuropsychologia 47(6): 1449–1459.
Goldenberg, Georg, Joachim Hermsdörfer, Ralf Glindemann, Chris Rorden and Hans-Otto Karnath 2007. Pantomime of tool use depends on integrity of left inferior frontal cortex. Cerebral Cortex 17(12): 2769–2776.
Hagoort, Peter 2005. On Broca, brain, and binding: a new framework. Trends in Cognitive Sciences 9(9): 416–423.
Hickok, Gregory, Herbert Pickell, Edward Klima and Ursula Bellugi 2009. Neural dissociation in the production of lexical versus classifier signs in ASL: Distinct patterns of hemispheric asymmetry. Neuropsychologia 47(2): 382–387.
Hickok, Gregory and David Poeppel 2004. Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 92(1/2): 67–99.
Hickok, Gregory and David Poeppel 2007. The cortical organization of speech perception. Nature Reviews Neuroscience 8(5): 393–402.
Hogrefe, Katharina, Wolfram Ziegler, Nicole Weidinger and Georg Goldenberg 2011. Non-verbal communication in severe aphasia: Influence of aphasia, apraxia, or semantic processing? Cortex 48(8): 952–962.
Hostetter, Autumn B. and Martha W. Alibali 2008. Visible embodiment: Gestures as simulated action. Psychonomic Bulletin and Review 15(3): 495–514.
Jacob, Pierre 2009. A philosopher’s reflections on the discovery of mirror neurons. Topics in Cognitive Science 1(3): 570–575.
Jacob, Pierre and Marc Jeannerod 2005. The motor theory of social cognition: a critique. Trends in Cognitive Sciences 9(1): 21–25.
Jeannerod, Marc and Pierre Jacob 2005. Visual cognition: a new look at the two-visual systems model. Neuropsychologia 43(12): 301–312.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge, UK: Cambridge University Press.

144. Gesture and the neuropsychology of language



Pierre Feyereisen, Louvain-la-Neuve (Belgium)



145. Gestures in aphasia

1. Introduction
2. Preliminary: varieties of aphasia and the architecture of language processing
3. Gesture and speech: shared and specific mechanisms
4. The contribution of the right hemisphere to the production of speech-related gestures
5. Gestures and the treatment of aphasia
6. Conclusions: aphasia and the neuropsychological foundations of language and gesture
7. References

Abstract

This chapter offers an overview of the state of the art in current research on gestures used by patients suffering from various kinds of aphasia. These studies show that the neural substrates of language and gesture are partially shared and partially specific.

1. Introduction

Since the beginning of the scientific study of aphasia, in the second half of the 19th century, debates about the specificity of language processing have never ceased. On the one hand, some scholars consider aphasia to result from lesions of brain regions that are specialized for language production and comprehension. On the other hand, other scholars view aphasia as a consequence of more general disorders. Among these global approaches, the focus may be either on interpersonal communication or on action control. In another direction, following a long tradition in neuropsychology, one assumes that arm movements and oral articulation are controlled by the same system located in Broca's area (e.g., Gentilucci and Dalla Volta 2008); on this view, language evolved in the human species as a form of hand and mouth gesture. In the same vein, according to contemporary embodied theories of cognition, language use is thought to be grounded in sensory-motor processes. The present chapter on patients with aphasia follows a companion, more general chapter on the neuropsychology of language and communicative action. It concentrates on recent studies (published since 2000; for a review of the earlier literature, see Feyereisen 1999). The focus here is on speech-related gestures.

2. Preliminary: varieties of aphasia and the architecture of language processing

Different forms of disorders can be distinguished following clinical assessment of aphasia (for comprehensive overviews, see Hillis 2007; Saffran and Schwartz 2003). In the traditional approach, which is based on the classical work in aphasiology and on the use of standardized tests, four dimensions of spoken language use are assessed: fluency in oral expression (i.e., number of words per minute), auditory comprehension, repetition, and naming. Picture naming is generally impaired in all kinds of aphasia, although the number and kinds of errors vary. Fluency can be in the normal range or strongly reduced, in cases of fluent and non-fluent aphasia, respectively.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1898–1905

The speech of



persons with fluent aphasia is often characterized by an over-representation of high-frequency words like pronouns or empty generic terms ("things", "to do", "it is", etc.). Combination of the three bipolar (+/−) dimensions of fluency, comprehension, and repetition defines eight aphasic syndromes. For instance, so-called Broca's aphasia is a non-fluent form with relatively spared comprehension and defective repetition. Wernicke's aphasia is a fluent form with problems in both comprehension and repetition. Lesion localization is generally more anterior in Broca's aphasia and more posterior in Wernicke's aphasia, but the regions involved are not well defined within the left hemisphere. These syndromes do correspond to neurological factors (different lesions entail different impairments), but the consequences of lesions are largely unpredictable and there are large individual variations in patterns of performance. The traditional classification of aphasic syndromes has an acknowledged clinical utility, but it does not help us to understand the brain mechanisms underlying language use because it lacks a detailed model of the information processing architecture. More recent approaches to the examination of patients with aphasia rely on models of language processing that distinguish spoken and written modalities, comprehension and production processes, and phonological, lexical, and syntactic levels (see Feyereisen volume 1). Patients are considered as a series of single cases rather than as members of groups corresponding to the traditional syndromes. These examinations consist of a large set of tasks addressing multiple aspects of the target component. For instance, assessment of lexical semantics may consist of written and spoken picture naming tasks, picture/word matching tasks, semantic association tasks, and answering questions about features, and it may include a comparison of various categories like natural versus man-made objects, body parts and tools, actions, etc. 
As a result, studies of patients with brain damage reveal the fractionation of the cognitive system into various more or less specialized components. If gesture and speech processing involve multiple separable components, some may be shared by the two modalities and others may be more specific. Accordingly, depending on the part of the brain that is damaged, aphasia may either be associated with gesture impairments or allow compensation for language disorders by gestures. From such a perspective, one can suppose that a specific deficit in the phonological encoding of lexical items, for instance, will spare gesture use. By contrast, a general degradation of semantic-conceptual knowledge, diagnosed from impaired performance in several naming, matching, and judgment tasks, will likely also affect communication through gestures. Does the empirical evidence support these predictions?

3. Gesture and speech: shared and specific mechanisms

In the study of Carlomagno et al. (2005), a referential communication task was designed to compare gesture and speech production in different kinds of patients with either aphasia or dementia of the Alzheimer type, both suffering from lexical-semantic impairments. All patients experienced difficulties in conveying the requested information verbally, but the source of these difficulties differed. The discourse of patients with dementia was often vague and elicited misunderstandings. In contrast, persons with aphasia were able to compensate for word-finding problems through iconic gestures and thus their communication was more efficient. Brain damage that causes language impairments may have very different impacts on gesture use. Whereas large or diffuse lesions affect several components of the cognitive


system, some other lesions may entail very selective consequences, as in the case of "Marcel" described by Kemmerer, Chandrasekaran, and Tranel (2007). Marcel was a forty-year-old man who was the victim of a car accident that caused extended cortical lesions, resulting in severe naming disorders but intact semantic knowledge. On several occasions, Marcel demonstrated, using pantomimes, accurate comprehension of pictures that he could not name. Of particular interest was his behaviour while retelling the cartoons used by Kita and Özyürek (2003) in their cross-linguistic study of co-verbal gesture. Languages differ in the way they express the path and the manner of a motion. So-called satellite-framed languages like English and German express path through particle satellites and are endowed with a rich vocabulary of verbs describing manner (e.g., "to walk across"). So-called verb-framed languages like French and Spanish express path through the verb and manner through adverbs or prepositional phrases (e.g., "traverser à pied"). Kita and Özyürek (2003) found that differences in vocabulary were associated with differences in gestures. To describe a particular motion in a cartoon retelling, English speakers frequently accompanied the verb "to swing" with an arc movement, while speakers of Turkish and Japanese, languages which lack such a verb, often gestured a straight movement. Presented with the same kind of animated material, Marcel performed gestures that were typical of English speakers, even though he was unable to use the verb "to swing" and the particle "across". The investigators concluded that conceptual structures were unaffected in this patient, as evidenced by gesture production and by scores in the semantic tasks that did not require naming, whereas the naming impairment was likely due to severely disturbed phonological encoding. 
Lanyon and Rose (2009) investigated the relationship between gesture production and language impairments in persons with aphasia by referring to the model of Krauss, Chen, and Gottesman (2000) and testing the hypothesis that spontaneous gestures may facilitate lexical access. They analyzed the transcripts of conversations with 18 speakers suffering from various kinds of aphasia, with or without semantic impairments. Distinctions were made between fluent and non-fluent speech and, in cases of dysfluency, between successful and unsuccessful word retrieval. Gesture types were also distinguished: meaningful (iconics, pantomimes, emblems), which constituted 94% of the produced gestures, or meaningless (beat gestures: 6%). Gesture production was associated with word-finding difficulties but was found as frequently during unsuccessful as during successful word-retrieval episodes. Thus, there was no clear evidence that gestures helped word retrieval. As an alternative hypothesis, speech interruptions might trigger gesture production (Feyereisen 2006). However, different patterns of language impairment have to be considered. Gesture production was associated with successful word retrieval in a subgroup of five patients with intact semantic processing, indicating that gesture production can help lexical access only if there is no central comprehension impairment. Cocks et al. (2010) were also interested in the production of iconic gestures during states of word-finding difficulty called tip-of-the-tongue states. They carried out a detailed investigation of a single case, Mrs LT, who suffered from conduction aphasia. In that pathology, the semantic system is intact but repetition is severely impaired, as is speech production in other conditions. Discourse is characterized by multiple attempts to say the same word, with phonological errors. Globally, the iconic gestures performed by LT while retelling a cartoon story were similar in semantic form to those of control participants. 
In tip-of-the-tongue states, rates of successful retrieval did not differ depending on the presence or absence of gesture. However, LT’s gestures produced during tip-of-



the-tongue states depicted shapes of characters and objects more often than events, and were indicative of intact semantic knowledge about the target words and difficulties at the level of phonological encoding. These findings are consistent with those of the studies reviewed in the preceding paragraphs and indicate that the system representing semantic knowledge is shared across speech and gesture production, but that there are separations between the post-semantic speech and gesture production systems. Further information about the hypothetical facilitation of lexical access by gesture production is available from studies on aphasia rehabilitation in which therapists elicit various kinds of movements during picture naming tasks (see section 5 below).

4. The contribution of the right hemisphere to the production of speech-related gestures

The ability of persons with aphasia to perform communicative gestures may depend on the preservation of some regions of the left hemisphere or on a contribution of the intact right hemisphere. In the latter case, we can predict that patients with right hemisphere damage should be impaired in some aspects of gesture processing. The roles of the right hemisphere in gesture production may be very diverse: emotional arousal, prosody, discourse pragmatics, semantic representations, mental imagery, spatial attention, etc. (Feyereisen 1999). Many scientists have assumed a privileged link between spatial cognition (including mental imagery) and gesture production (see Alibali 2005 for a review), but studies on gestural production or comprehension in patients with right hemisphere damage who show visuo-spatial impairments are very rare. Thus, further investigations are necessary to understand how the production of iconic gestures may relate to impairments of identified components of mental imagery, visual and/or motor. A recent multiple single-case study examined prosody as a factor that may affect gesture production after right hemisphere damage (Cocks, Hird, and Kirsner 2007). Five patients were compared to a group of healthy persons of the same age range in several speech production tasks: personal, procedural, and emotional narratives, and a comic book description. Acoustic analyses yielded objective measures of prosody, and visuospatial processing was also assessed. The group of patients was found to be very heterogeneous. Gesture production declined in some of the patients, but no systematic co-occurring decline was found in prosody or visuospatial abilities. In addition, the nature of the speech production task influenced gesture production in both groups of participants. 
For instance, in some patients with diminished variation of fundamental frequency in prosody, the production of beat gestures was reduced in emotional but not in procedural discourse. This study illustrates the difficulty of investigating gesture production in patients suffering from various impairments following brain damage. Less problematic is the experimental study of communicative competence on the comprehension side, for which there is less variation in the healthy population. Patients with right hemisphere damage or left hemisphere damage were compared to healthy control participants, with the hypothesis of a major contribution of the right hemisphere to pragmatic abilities. In that domain, experimental tasks generally involve linguistic processing, but here a nonverbal variant was devised, initially to study the development of pragmatic inferences about gesture intention in young children (Cutica, Bucciarelli, and Bara 2006). The test consisted of short videos of social actions, each followed by a multiple choice among four pictures, one representing the intended consequence of the action and the


three others representing inappropriate reactions. For instance, the film showed a boy and a girl walking to a car and the boy opening the passenger door. The correct response was to choose the picture of the girl entering the car. The necessary inferences may be simple, as in this example, or more complex, as in the case of a pointing gesture used as an indirect request. A distinction was also made between standard communication, in which gesture use was conventional, and non-standard communication, in which the intention was to deceive or to express irony (for instance, hand clapping to express sadness after a failure). As expected, results showed that patients with right hemisphere damage made fewer correct inferences than controls in all but the simplest condition, whereas patients with left hemisphere damage made errors only in the complex non-standard communication condition, in which nonverbal expression conflicted with actual intention. Up to now, the most convincing evidence about the role of the right hemisphere in gesture production has been obtained in studies of patients with callosal disconnection, also called split-brain patients (Kita and Lausberg 2008; Lausberg et al. 2007). In normal conditions, the language regions of the left hemisphere are connected to the right motor cortex, which controls left hand movements, via the fibres forming the corpus callosum. In patients with callosotomy, after surgery undergone to alleviate intractable epilepsy, the left hand no longer receives input from the left hemisphere and is controlled by structures located in the intact right hemisphere. Lausberg et al. (2007) coded gestures performed during interviews by four patients of this kind. They found a significant left hand preference for communicative gestures in two of them, who had shown clear left hemisphere dominance for language processing in previous examinations. 
Patterns of lateralization were more complex in the two other cases. Three of these patients were tested again on a different task of describing animations in video clips, a task in which control participants use one or the other hand in similar proportions, depending on the presented stimulus (Kita and Lausberg 2008). Again, and more clearly in the case of AA than in the two others, the left hand was used to accompany the verbal delivery of visuo-spatial information. These left hand gestures, however, were not as well coordinated with speech as in control participants. The implication of these observations is that gestures can be generated in the right hemisphere, but that the synchronization of left hand gestures with speech might require intact connections between the two hemispheres, as is usually the case in normal conditions and in patients with aphasia.

5. Gestures and the treatment of aphasia

Analysis of gesture production in persons with aphasia has two kinds of implications for rehabilitation, which may focus either on communication or on language restoration (see the comprehensive review by Rose 2006, followed by peer commentaries). Reciprocally, the outcomes of these different behavioral interventions can inform us about the relationships between gestures and speech. Interventions that focused on word retrieval were inspired by the hypothesis that lexical access can be facilitated by gesture production. The outcomes of these studies were mixed: naming performance improved after several sessions of gestural training in one subgroup of patients, while the therapy was ineffective in another. Why facilitation occurred in some patients and not in others remained unexplained. For instance, associated limb apraxia (i.e., impairment in producing gestures on command) was not found to be a critical factor. At least two mechanisms might intervene to improve naming



performance through gesture: the retrieval of visuo-motor features of objects or actions in the conceptual system during the production of representational gestures, or the spreading of neural activation triggered by movement execution, whether meaningful or meaningless. Crosson (2008), for instance, suggested that using the left hand to prime the motor regions of the right hemisphere can facilitate language production. The idea behind the treatment was that, by repeatedly pairing a complex hand movement with picture naming, intact regions of the right frontal lobe involved in motor control might eventually assume word production functions. In a different direction, Rose and Douglas (2001) compared the efficiency of several interventions in 6 patients suffering from various kinds of aphasia and found some positive effects of performing an iconic gesture on naming, but no effect of instructions to simply point to the picture or to close the eyes and activate visual images of the objects. In a subsequent study, one of the patients, JB, who had previously shown no facilitation effect and suffered from semantic impairments, was included in a more intensive treatment program of 14 sessions in which three conditions were compared: verbal (semantic judgments and repetition), gestural (performance of iconic gestures), and verbal plus gestural (Rose and Douglas 2008). Significant improvements over the baseline were observed after some sessions, but to the same extent in the purely verbal as in the gestural conditions. Thus, gesture execution was not necessary. Similarly, Marangolo et al. (2010) found that in four patients without comprehension deficits, mere observation of actions performed by the experimenter had the same positive effect as copying the actions, whereas imitation of meaningless movements had no effect. In the study of Raymer et al. 
(2006), gestural treatment was found to be effective in facilitating the retrieval of object nouns and action verbs in equal proportions, which is not surprising since the objects presented were associated with pantomimes of action (for instance, salt shaking). Further investigations of the role of item and task properties in the naming performance of individual patients are needed to better understand how gesture might facilitate lexical access. Nonetheless, independently of their effects on language production, gestures have a communicative value of their own and can themselves be the target of rehabilitation (Daumüller and Goldenberg 2010).

6. Conclusions: aphasia and the neuropsychological foundations of language and gesture

In the present state of research on speech-related gestures used by patients with brain damage, the amount of knowledge that has been established is still limited. Do we find specific gestural impairments that relate to linguistic impairments? Given the extent of individual differences in gesture production by healthy speakers and the absence of standards of well-formedness, it is difficult to identify a motor clumsiness that would be analogous to errors in spoken or signed languages. One can simply note that in cases of severe conceptual impairments, for instance in global aphasia or in dementia of the Alzheimer type, many gestures are unclear, while in other kinds of patients they allow efficient communication. Gesture comprehension by patients with aphasia also remains to be more thoroughly investigated. The main lesson from aphasia, however, is that language breakdowns are very diverse and that one cannot expect to find a one-to-one correspondence between selective impairments of components of the language system and particular uses of gestures.



Acknowledgements

The author is funded as Research Director by the Fund for Scientific Research (FNRS, Belgium). Gratitude is expressed to Agnesa Pillon, Dana Samson, and Martin Edwards for their insightful remarks on a preliminary version of the chapter.

7. References

Carlomagno, Sergio, Maria Pandolfi, Andrea Marini, Gabriella Di Iasi and Carla Cristilli 2005. Coverbal gestures in Alzheimer's type dementia. Cortex 41(4): 535–546.
Cocks, Naomi, Lucy Dipper, Ruth Middleton and Gary Morgan 2010. What can iconic gestures tell us about the language system? A case of conduction aphasia. International Journal of Language and Communication Disorders 46(4): 423–436.
Cocks, Naomi, Kathryn Hird and Kim Kirsner 2007. The relationship between right hemisphere damage and gesture in spontaneous discourse. Aphasiology 21(3/4): 299–319.
Crosson, Bruce 2008. An intention manipulation to change lateralization of word production in nonfluent aphasia: Current status. Seminars in Speech and Language 29(3): 188–200.
Cutica, Ilaria, Monica Bucciarelli and Bruno G. Bara 2006. Neuropragmatics: Extralinguistic pragmatic ability is better preserved in left-hemisphere-damaged patients than in right-hemisphere-damaged patients. Brain and Language 98(1): 12–25.
Daumüller, Maike and Georg Goldenberg 2010. Therapy to improve gestural expression in aphasia: a controlled clinical trial. Clinical Rehabilitation 24(1): 55–65.
Feyereisen, Pierre 1999. Neuropsychology of communicative movements. In: Lynn S. Manning and Ruth Campbell (eds.), Gesture, Speech and Sign, 3–25. New York: Oxford University Press.
Feyereisen, Pierre 2006. How could gesture facilitate lexical access? Advances in Speech-Language Pathology 8(2): 128–133.
Feyereisen, Pierre volume 1. Psycholinguistics of speech and gesture: Production, comprehension, architecture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 156–168. Berlin/Boston: De Gruyter Mouton.
Gentilucci, Maurizio and Riccardo Dalla Volta 2008. Spoken language and arm gestures are controlled by the same motor control system. The Quarterly Journal of Experimental Psychology 61(6): 944–957.
Hillis, Argye E. 2007. Aphasia: Progress in the last quarter of a century. Neurology 69(2): 200–213.
Kemmerer, David, Bharath Chandrasekaran and Daniel Tranel 2007. A case of impaired verbalization but preserved gesticulation of motion events. Cognitive Neuropsychology 24(1): 70–114.
Kita, Sotaro and Hedda Lausberg 2008. Generation of co-speech gestures based on spatial imagery from the right hemisphere: Evidence from split-brain patients. Cortex 44(2): 131–139.
Kita, Sotaro and Asli Özyürek 2003. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48(1): 16–32.
Krauss, Robert M., Yihsiu Chen and Rebecca F. Gottesman 2000. Lexical gestures and lexical access: a process model. In: David McNeill (ed.), Language and Gesture, 261–283. Cambridge, UK: Cambridge University Press.
Lanyon, Lucette and Miranda L. Rose 2009. Do the hands have it? The facilitation effects of arm and hand gesture on word retrieval in aphasia. Aphasiology 23: 809–832.
Lausberg, Hedda, Eran Zaidel, Robyn F. Cruz and Alain Ptito 2007. Speech independent production of communicative gestures: Evidence from patients with complete callosal disconnection. Neuropsychologia 45(13): 3092–3104.



Marangolo, Paola, Silvia Bonifazi, Francesco Tomaiuola, Laila Craighero, Michela Coccia, Gianmarco Altoè, Leandro Provinciali and Anna Cantagallo 2010. Improving language without words: First evidence from aphasia. Neuropsychologia 48(13): 3824–3833.
McNeill, David and Susan D. Duncan 2011. Gestures and growth points in language disorders. In: Jackie Guendouzi, Filip Loncke and Mandy J. Williams (eds.), The Handbook of Psycholinguistic and Cognitive Processes: Perspectives in Communication Disorders, 663–685. New York/London: Psychology Press.
Raymer, Anastasia M., Floris Singletary, Amy Rodriguez, Maribel Ciampitti, Kenneth M. Heilman and Leslie J. Gonzalez Rothi 2006. Effects of gesture+verbal treatment for noun and verb retrieval in aphasia. Journal of the International Neuropsychological Society 12(6): 867–882.
Rose, Miranda 2006. The utility of arm and hand gestures in the treatment of aphasia. Advances in Speech-Language Pathology 8(2): 92–109.
Rose, Miranda and Jacinta Douglas 2001. The differential facilitatory effects of gesture and visualisation processes on object naming in aphasia. Aphasiology 15(10/11): 977–990.
Rose, Miranda and Jacinta Douglas 2008. Treating a semantic word production deficit in aphasia with verbal and gesture methods. Aphasiology 22(1): 20–41.
Saffran, Eleanor M. and Myrna F. Schwartz 2003. Language. In: Michela Gallagher and Randy J. Nelson (eds.), Handbook of Psychology, Volume 3: Biological Psychology, 595–636. New York: Wiley.

Pierre Feyereisen, Louvain-la-Neuve (Belgium)

146. Body movements and mental illness: Alterations of movement behavior associated with eating disorders, schizophrenia, and depression

1. Eating disorders
2. Schizophrenia
3. Depression
4. References

Abstract

This chapter gives an overview of alterations of movement behavior in individuals with mental disorders. The review focuses on the major diagnostic groups of eating disorders, schizophrenia, and depression. For these disorders there is a solid empirical database on movement behavior, i.e., several empirical studies have been conducted by independent researchers using different diagnostic tools for movement behavior. The interdisciplinary data show that these three diagnostic groups are associated with diagnosis-specific alterations of movement behavior.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1905–1913.

1. Eating disorders

In patients with anorexia nervosa, and more rarely in those with bulimia, hyperactivity is an established symptom that is documented in the standard diagnostic manuals such as the International Classification of Diseases (ICD) and the Diagnostic and Statistical Manual of Mental Disorders (DSM). About 75% of the patients with anorexia and about half of those with bulimia show episodes of hyperactivity in the course of the illness (Davis 1997; Kron et al. 1978).

Both somatic and mental factors play a role in the genesis of hyperactivity. On the physical level, starvation induces hyperactivity, and hyperactivity reduces the feeling of hunger (Broocks, Liu, and Pirke 1990; Edholm et al. 1955; Katch, Michael, and Jones 1969; Keys et al. 1950; Routtenberg 1968). The vicious circle of hyperactivity is consolidated by its psychotropic effect: physical activity results in a short-term improvement of mood (Pierce et al. 1993; Raglin 1990) and in a long-term decrease of depression and anxiety (Jambor et al. 1994; Norvell and Belles 1993). Various psychodynamic factors contribute to hyperactivity as well. It can serve as a deliberate intervention to lose weight, as a self-punishment for having eaten, or as a permanent affirmation of one's control over one's own body (Beumont et al. 1994). It can help patients to feel themselves (Kruger and Schofield 1986) and to fight against feelings of emptiness or self-disintegration (Stanton-Jones 1992). Moreover, some patients with eating disorders perform physical training in an obsessive-compulsive manner (Davis 1997). It would appear that these different etiologies result in different phenomenological forms of hyperactivity. However, thus far, no research has been conducted to disentangle these forms of hyperactivity by movement analysis. A more sophisticated diagnostic differentiation of the various forms of hyperactivity would help to develop specific and more effective therapeutic interventions.

In contrast to hyperactivity as a quantitatively conceptualized disturbance, qualitative alterations of movement behavior in individuals with eating disorders have thus far not been considered in standard diagnostic manuals.
However, there is ample empirical evidence for their occurrence, in particular in studies employing Laban-based movement analysis, a comprehensive descriptive analysis of whole-body movement in space (Laban [1950] 1988). Burn (1987) reported that anorectic patients had a more bound movement flow, less flow from the upper to the lower body, more peripheral movement initiation, and more sustained movement. In a major study comparing 60 patients with anorexia and bulimia to a matched healthy control group, Lausberg et al. (1996) reported a significantly smaller movement area, less weight shift, more isolated use of the parts of the body, less integration of the lower body, more peripheral movement initiation, less strength, and more bound flow of movement. While the patients with anorexia and bulimia showed the same trends relative to the control group, the anorexia group was more disturbed than the bulimia group both in movement behavior and in personality, as measured with the questionnaire "Freiburger Persönlichkeitsinventar". Moreover, Shenton (1990) reported bound flow, a limited use of weight, and a distorted use of space and time in anorectic patients.

A neurological examination of motor behavior in anorectic patients revealed dysdiadochokinesis (Gillberg, Rastam, and Gillberg 1994), a deficit in the smooth alternating innervation of agonist and antagonist muscles. It is linked to the symptom of bound movement flow as measured with Laban movement analysis. Furthermore, using a technical device to measure balance (DELOS), Hölter, Troska, and Beudels (2008) demonstrated a disturbance of balance in patients with anorexia. The difficulty in keeping the body in balance is well compatible with the movement pattern of patients with eating disorders described in the studies by Burn (1987), Lausberg et al. (1996), and Shenton (1990).

Two studies have examined the patients' movement behavior in the course of a successful therapy.
In the study by Gillberg, Rastam, and Gillberg (1994) mentioned above, the dysdiadochokinesis persisted even after the improvement of the eating behavior.


However, Lausberg et al. (1988), employing a more comprehensive Laban-based movement analysis, demonstrated that after therapy the patients with anorexia showed significant changes in several movement parameters: the balance was more stable, the shaping of the body was more progressive, the direction of limb movements was centrifugal rather than centripetal, the kinesphere (reach of limb movements) was wider, the movement area was larger, and there was less spatial distance to the therapist.

To summarize, patients with eating disorders, specifically anorexia, show not only hyperactivity but also qualitative alterations of movement behavior. As compared to healthy controls, the patients' movement behavior is characterized by more bound flow, more isolated use of the parts of the body, less integration of the lower body, more peripheral movement initiation, less strength, less weight shift, and a smaller movement area. This movement pattern corresponds to the patients' disturbed body image (Lausberg 2008). The bound movement flow, the isolated moving of the parts of the body, and the peripheral movement initiation are all movement patterns that facilitate control over body movement. As such, the behavior matches the patients' striving for control over their own bodies. Furthermore, the avoidance of weight shifts corresponds to the wish to be weightless and ethereal. The avoidance of movement of the lower body and of the trunk reflects the rejection of "problematic" parts of the body such as the belly, hips, and backside.

2. Schizophrenia

Since the beginnings of modern psychiatry, alterations of movement behavior have been documented in patients with schizophrenia (Kahlbaum 1874; Kleist 1943; Wernicke 1900; Reiter 1926). Traditionally, hypokinetic and hyperkinetic alterations of movement behavior in schizophrenic patients have been distinguished. Hypokinetic disorders comprise bradykinesia (slow movement), akinesia/hypokinesia (absence/poverty of body movements), amimia/hypomimia (absence/poverty of facial expression), catalepsy (maintaining a fixed body position for a long time), catatonia (a state of immobility), waxy flexibility, rigidity, mutism (absence of speaking), and retardation. Hyperkinetic disorders include mannerisms, habits, stereotypies, agitation, hyperactivity, and restlessness. Some of these traditional parameters have been adopted in current standard diagnostic manuals, e.g., agitation, catalepsy, or waxy flexibility. However, it has been demonstrated that the objectivity and reliability of these traditional parameters is low (e.g., Wallbott 1989).

Current research on movement behavior in individuals with schizophrenia is complicated by the fact that nowadays the majority of schizophrenic patients are treated with neuroleptic medication from an early stage of their illness. Neuroleptic medication has severe and socially stigmatizing side effects on the patients' movement behavior. The neuroleptically induced movement disorders are classified as acute dystonia/early dyskinesia (involuntary movements such as torticollis, tongue protrusion, and grimacing), parkinsonism (hypokinesia and rigidity), akathisia (restlessness with an involuntary inability to sit or stand still), and tardive/late dyskinesia (involuntary movements such as chewing and sucking movements and grimacing).
Since tardive dyskinesia may be irreversible even after discontinuation of the medication, its early detection is indispensable for increasing the chance of reversal (Trosch and Nasrallah 2004). Motor side effects occur in schizophrenic patients to extremely diverse degrees across individuals. The most reliable predictor identified thus far is the occurrence of movement disorders in the patient's previous medical history. In a review, Höffler (1995) reported large ranges of prevalence for early dyskinesia, parkinsonism, akathisia, and late dyskinesia. The divergence between the prevalence studies may be explained by differences in samples, diagnostic procedures, and types and doses of the medication, but it also indicates a lack of objectivity and reliability of the standard psychiatric diagnostic movement scales (Abnormal Involuntary Movement Scale (AIMS), Rockland Scale, Hillside Akathisia Scale, Simpson-Angus Scale, etc.). Contrary to the popular belief that the prevalence of motor side effects has decreased since the introduction of the so-called atypical neuroleptics, the prevalence has doubled in the last 20 years (Halliday et al. 2002).

It is evident at first sight that there is an overlap between the movement behavior alterations that were described traditionally in the pre-neuroleptic era and those that are nowadays attributed to neuroleptic medication. The fact that neuroleptically induced movement disorders are characterized by symptoms similar to those that are intrinsic to schizophrenia causes diagnostic and therapeutic problems (e.g., Lausberg and Hellweg 1998), a condition that has been termed the "catatonic dilemma" (Brenner and Rheuban 1978).

Given the widespread application of neuroleptic medication, there are only a few current systematic empirical studies that investigate the movement behavior of neuroleptic-naive schizophrenic patients (Caligiuri, Lohr, and Jeste 1993; Chatterjee et al. 1995; Owens and Johnstone 1982; Rogers 1985). Owens and Johnstone (1982) reported involuntary movements in patients with severe chronic schizophrenia who had never been exposed to neuroleptic medication. Caligiuri, Lohr, and Jeste (1993) and Chatterjee et al. (1995) found parkinsonism in neuroleptic-naive patients. Likewise, Rogers (1985) and Bräunig (1995) reported similar alterations of movement behavior in neuroleptic-free and medicated schizophrenic patients.
Thus, these studies confirm the "catatonic dilemma". However, it is argued here that this problem is partly a methodological one, since the studies on neuroleptic-naive schizophrenic patients used the same movement diagnostic scales as the studies on neuroleptic side effects. These movement scales do not seem to be sensitive enough to distinguish between medication side effects and schizophrenic symptoms. In fact, more comprehensive analyses reveal further alterations of movement behavior in schizophrenic patients (Davis 1981, 1978; Jones 1965; Wallbott 1989; Wolf-Schein, Fisch, and Cohen 1985). Condon and Brosin (1969) observed self-dyssynchrony in schizophrenic patients. Davis (1978), using the Movement Diagnostic Scale (MDS), reported fragmentation of body movements in chronic schizophrenia. Furthermore, the analysis of hand movement behavior in schizophrenic patients revealed distinct patterns of body-focused and object-focused activity in the acute clinical state as compared to the post-acute state (Freedman 1972; Freedman and Hoffman 1967).

The proposition that more sensitive analysis systems are required to disentangle medication side effects and schizophrenic symptoms in the patients' movement behavior is supported by Wilder (1985) and Manschreck et al. (1985). Wilder (1985) demonstrated that the Movement Diagnostic Scale was an effective tool to differentiate between a placebo and a neuroleptic condition in schizophrenic patients. Manschreck et al. (1985) observed a stronger deficit in motor synchrony in unmedicated schizophrenic patients as compared to medicated ones.

To summarize, schizophrenic patients display a broad range of movement behavior alterations. Since most schizophrenic patients are nowadays treated with neuroleptic medication, a substantial proportion of the alterations of movement behavior is due to medication side effects. However, some of these movement disturbances are associated with the illness, as they have been demonstrated in the pre-neuroleptic era as well as in the few current empirical studies on unmedicated schizophrenic individuals. Thus, sensitive movement diagnostic tools have to be employed to differentiate between schizophrenic movement symptoms and medication side effects.

3. Depression

As for schizophrenic patients, specific alterations concerning different aspects of movement behavior have also been described for depressive patients since the beginnings of modern psychiatry. However, their documentation in current diagnostic manuals is largely limited to psychomotor inhibition or agitation. One of the most prominent alterations in the movement behavior of depressive individuals is a slumped posture (Bader et al. 1999; Bleuler 1949; Kraepelin 1913; Kretschmer 1921; Lemke et al. 2000; Michalak et al. 2009). The association between this posture, which is characterized by giving in to gravity, and a depressed mood is so obvious that it is reflected in universal metaphorical categories (Lakoff and Johnson 1999; Narayanan 1997): "I am down/up" is used as a metaphor for "I am unhappy/happy". In depressive individuals, the neglect of the upward direction seems not to be limited to posture but is also present in dynamic body movement. Michalak et al. (2009), who conducted a three-dimensional analysis of gait, demonstrated that depressive patients showed significantly less vertical and sagittal movement and more lateral/horizontal movement than healthy controls. The reduction of vertical movements persisted even after the improvement of the clinical symptoms of depression. Furthermore, depressive individuals exhibit a reduced gait velocity, a reduced stride length, an increased cycle duration, and a reduced arm swing (Bader et al. 1999; Lemke et al. 2000; Michalak et al. 2009). In their gestural behavior, they show a decrease in the frequency of communicative gestures and facial expressions (Ekman and Friesen 1974; Ellgring 1986; Ulrich 1977; Ulrich and Harms 1979; Wallbott 1989) and a decrease in the velocity of these movements (Ellgring 1986; Juckel 2005; Wallbott 1989). These alterations are likely to reflect the depressive mood.
However, it is noteworthy that most of these studies were conducted with medicated depressive patients. Therefore, it has to be considered that the reduction in velocity, expansion, and expressivity may be partly caused by medication side effects.

Furthermore, depressive patients show a high amount of self-touch gestures (Freedman 1972; Freedman and Hoffman 1967; Lausberg 1995; Ulrich 1977; Ulrich and Harms 1979). Self-touch gestures are displayed when individuals experience stress or negative emotions (Barosso et al. 1978; Freedman et al. 1972; Freedman and Bucci 1981; Lausberg and Kryger 2011; Sainsbury 1955; Sousa-Poza and Rohrberg 1977). Freedman and Bucci (1981) suggested that body-focused activity has a distinct filtering function when individuals are exposed to external stimulation such as in psychiatric interviews. Moreover, Lausberg (2013) has proposed that on-body movements serve to regulate hyper- or hypoarousal. In body-image research, it has long been suggested that self-touch stabilizes the body borders (Joraschky 1983).

After a successful psychotherapy or pharmacotherapy, the improvement of the depressive disorder is accompanied by changes in movement behavior, such as an increase in object-focused gestures, an increase in the velocity of gestures, and a decrease in body-focused gestures (Ellgring 1986; Freedman 1972; Freedman and Hoffman 1967; Lausberg 1995; Ulrich 1977; Ulrich and Harms 1979; Wallbott 1989).


To summarize, depressive individuals typically display a broad range of alterations of movement behavior. Prominent are a slumped posture and a reduction of vertical movements. Furthermore, they show a reduction of velocity, expansion, and expressive repertoire in their gait, gestures, and facial expression. In contrast, the amount of self-touching behavior is increased. The diagnostic relevance of these parameters is evidenced by the fact that they improve in line with the decrease of the clinical depression.

4. References

Bader, Jean-Pierre, J. Bühler, Jerome Endrass, Andreas Klipstein and Daniel Hell 1999. Muskelkraft und Gangcharakteristika depressiver Menschen. Der Nervenarzt 70: 613–619.
Barosso, Felix, Norman Freedman, Stanley Grand and Jacques van Meel 1978. Evocation of Two Types of Hand Movements in Information Processing. Journal of Experimental Psychology 4(2): 321–329.
Beumont, Peter J.V., Arthur Brenden, Janice D. Russell and Stephen W. Touyz 1994. Excessive Physical Activity in Dieting Disorder Patients: Proposals for a Supervised Exercise Program. International Journal of Eating Disorders 15(1): 21–36.
Bleuler, Manfred 1949. Lehrbuch der Psychiatrie. 8th edition. Berlin: Springer.
Bräunig, Peter (ed.) 1995. Differenzierung katatoner und neuroleptika-induzierter Bewegungsstörungen. Stuttgart/New York: Georg Thieme.
Brenner, Ira and William J. Rheuban 1978. The catatonic dilemma. The American Journal of Psychiatry 135(10): 1242–1243.
Broocks, Andreas, James Liu and Karl-Martin Pirke 1990. Semistarvation-induced hyperactivity compensates for decreased norepinephrine and dopamine turnover in the medio-basal hypothalamus of the rat. Journal of Neural Transmission 79(1–2): 113–124.
Burn, Holly 1987. The Movement Behaviour of Anorectics – The Control Issue. American Journal of Dance Therapy 10(1): 54–76.
Caligiuri, Michael P., James B. Lohr and Dilip V. Jeste 1993. Parkinsonism in neuroleptic-naive schizophrenic patients. The American Journal of Psychiatry 150(9): 1343–1348.
Chatterjee, A., M. Chakos, A. Koreen, S. Geisler, B. Sheitman, M. Woerner, J.M. Kane, J. Alvir and J.A. Lieberman 1995. Prevalence and clinical correlates of extrapyramidal signs and spontaneous dyskinesia in never-medicated schizophrenic patients. The American Journal of Psychiatry 152(12): 1724–1729.
Condon, William S. and Henry W. Brosin 1969. Micro Linguistic-Kinesic Events in Schizophrenic Behaviour. In: Gregory Bateson, Ray Birdwhistell, Henry W. Brosin, Charles Hockett and Norman A. McQuown (eds.), The Natural History of an Interview, 812–837. New York: Grune and Stratton.
Davis, Caroline 1997. Eating Disorders and Hyperactivity: A Psychobiological Perspective. Canadian Journal of Psychiatry 42(2): 168–175.
Davis, Martha 1978. Movement characteristics of hospitalized psychiatric patients. In: Maureen Needham Costonis (ed.), Therapy in Motion, 89–112. Urbana: University of Illinois.
Davis, Martha 1981. Movement characteristics of hospitalized psychiatric patients. American Journal of Dance Therapy 4(1): 52–71.
Davis, Martha 1997. Guide to movement analysis methods. Pittsburgh, PA: Behavioral Measurement Database Services.
Edholm, O.G., J.G. Fletcher, E.M. Widdowson and R.A. McCance 1955. The energy expenditure and food intake of individual men. British Journal of Nutrition 9(3): 286–300.
Ekman, Paul and Wallace V. Friesen 1974. Nonverbal behavior and psychopathology. In: Raymond J. Friedman and Martin M. Katz (eds.), The Psychology of Depression, 203–232. New York: John Wiley and Sons.
Ellgring, Heiner 1986. Nonverbal expression of psychological states in psychiatric patients. European Archives of Psychiatry and Neurological Sciences 236(1): 31–34.
Freedman, Norbert 1972. The analysis of movement behavior during clinical interview. In: Aron W. Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 153–175. New York: Pergamon Press.
Freedman, Norbert and Wilma Bucci 1981. On kinetic filtering in associative monologue. Semiotica 34(3/4): 225–249.
Freedman, Norbert and Stanley P. Hoffmann 1967. Kinetic behaviour in altered clinical states: Approach to objective analysis of motor behaviour during clinical interviews. Perceptual and Motor Skills 24: 527–539.
Freedman, Norbert, James O'Hanlon, Philip Oltman and Herman A. Witkin 1972. The imprint of psychological differentiation on kinetic behaviour in varying communicative contexts. Journal of Abnormal Psychology 79(3): 239–258.
Gillberg, Christopher, Maria Rastam and Maria Gillberg 1994. Anorexia Nervosa: Physical Health and Neurodevelopment at 16 and 21 Years. Developmental Medicine and Child Neurology 36(7): 567–575.
Halliday, Jennifer, Susan Farrington, Shiona Macdonald, Tom MacEwan, Val Sharkey and Robin McCreadie 2002. Nithsdale schizophrenia surveys 23: Movement disorders. British Journal of Psychiatry 181: 422–427.
Höffler, Jürgen 1995. Extrapyramidalmotorische Nebenwirkungen unter Neuroleptika – Phänomenologie und Prävalenz. In: Peter Bräunig (ed.), Differenzierung katatoner und neuroleptika-induzierter Bewegungsstörungen, 12–17. Stuttgart/New York: Georg Thieme.
Hölter, Gerd T., Svenja Troska and Wolfgang Beudels 2008. Körper- und bewegungsbezogenes Verhalten und Erleben von anorektischen jungen Frauen – ausgewählte Befunde zur Gleichgewichtsregulation und zum Körpererleben. In: Peter Joraschky, Hedda Lausberg and Karin Pöhlmann (eds.), Körperorientierte Diagnostik und Psychotherapie bei Essstörungen, 89–108. Gießen: Psychosozial.
Jambor, Elizabeth A., Mary E. Rudisill, Esther M. Weekes and Thomas J. Michaud 1994. Association among fitness components, anxiety, and confidence following aerobic training in aquarunning. Perceptual and Motor Skills 78(2): 595–602.
Jones, I.H. 1965. Observations on Schizophrenic Stereotypes. Comprehensive Psychiatry 6(5): 323–335.
Joraschky, Peter 1983. Das Körperschema und das Körperselbst als Regulationsprinzipien der Organismus-Umwelt-Interaktion. München: Minerva.
Juckel, Georg 2005. Mimik und Emotionalität – am Beispiel depressiver Patienten. Psychoneuro 31(7/8): 379–384.
Kahlbaum, Karl Ludwig 1874. Die Katatonie oder das Spannungsirresein. Berlin: Hirschwald.
Katch, Frank I., Ernest D. Michael and Evelyn M. Jones 1969. Effects of physical training on the body composition and diet of females. Research Quarterly 40: 99–104.
Keys, Ancel, Josef Brozek, Austin Henschel, Olaf Mickelsen and Henry L. Taylor 1950. The Biology of Human Starvation. Minneapolis, MN: University of Minnesota Press.
Kleist, Karl 1943. Die Katatonien. Nervenarzt 16: 1–10.
Kraepelin, Emil 1913. Psychiatrie. 8th edition. Leipzig: Johann Ambrosius Barth.
Kretschmer, Ernst 1921. Körperbau und Charakter. Berlin: Springer.
Kron, Leo, Jack L. Katz, Gregory Gorzynski and Herbert Weiner 1978. Hyperactivity in anorexia nervosa: A fundamental clinical feature. Comprehensive Psychiatry 19(5): 433–440.
Krueger, David W. and Ellen Schofield 1986. Dance/movement therapy of eating disordered patients: A model. American Journal of Dance Therapy 13(4): 323–331.
Laban, Rudolf 1988. The Mastery of Movement. Worcester: Northcote House. First published [1950].
Lakoff, George and Mark Johnson 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.


Lausberg, Hedda 1995. Bewegungsverhalten als Prozeßparameter in einer kontrollierten Studie mit funktioneller Entspannung. Unpublished paper presented at the 42. Arbeitstagung des Deutschen Kollegiums für Psychosomatische Medizin, Universität Jena.
Lausberg, Hedda 2008. Bewegungsdiagnostik und -therapie in der Behandlung von Körperbild-Störungen bei Patienten/Patientinnen mit Essstörungen. In: Peter Joraschky, Hedda Lausberg and Angela v. Arnim (eds.), Körperorientierte Diagnostik und Psychotherapie bei Essstörungen, 109–128. Gießen: Psychosozial-Verlag.
Lausberg, Hedda (ed.) 2013. Understanding Body Movement: A Guide to Empirical Research on Nonverbal Behaviour. With an Introduction to the NEUROGES Coding System. Frankfurt a. M.: Peter Lang.
Lausberg, Hedda and R. Hellweg 1998. "Catatonic dilemma". Therapy with lorazepam and clozapine. Nervenarzt 69(9): 818–822.
Lausberg, Hedda and Monika Kryger 2011. Gestisches Verhalten als Indikator therapeutischer Prozesse in der verbalen Psychotherapie: Zur Funktion der Selbstberührungen und zur Repräsentation von Objektbeziehungen in gestischen Darstellungen. Psychotherapie-Wissenschaft 1: 41–55.
Lausberg, Hedda, Jörn von Wietersheim and Hubert Feiereis 1996. Movement Behaviour of Patients with Eating Disorders and Inflammatory Bowel Disease: A Controlled Study. Psychotherapy and Psychosomatics 65(6): 272–276.
Lausberg, Hedda, Jörn von Wietersheim, Eberhard Wilke and Hubert Feiereis 1988. Bewegungsbeschreibung psychosomatischer Patienten in der Tanztherapie. Psychotherapie, Psychosomatik, Medizinische Psychologie 38: 259–264.
Lemke, Matthias R., Thomas Wendorff, Brigitt Mieth, Katharina Buhl and Martin Linnemann 2000. Spatiotemporal gait patterns during over ground locomotion in major depression compared with healthy controls. Journal of Psychiatric Research 34(4–5): 277–283.
Manschreck, Theo C., Brendan A. Maher, Niels G. Waller, Donna Ames and Craig A. Latham 1985. Deficient motor synchrony in schizophrenic disorders: Clinical correlates. Biological Psychiatry 20(9): 990–1002.
Michalak, Johannes, Nikolaus F. Troje, Julia Fischer, Patrick Vollmar, Thomas Heidenreich and Dietmar Schulte 2009. Embodiment of sadness and depression – gait patterns associated with dysphoric mood. Psychosomatic Medicine 71(5): 580–587.
Narayanan, Srini 1997. Embodiment in Language Understanding: Sensory-Motor Representations for Metaphoric Reasoning About Event Descriptions. PhD thesis, Computer Science Division, EECS Department, University of California, Berkeley.
Norvell, Nancy and Dale Belles 1993. Psychological and physical benefits of circuit weight training in law enforcement personnel. Journal of Consulting and Clinical Psychology 61(3): 520–527.
Owens, D.G. Cunningham and Eve C. Johnstone 1982. Spontaneous involuntary disorders of movement. Archives of General Psychiatry 39(4): 452–461.
Pierce, Edgar F., Morris W. Eastman, Hem L. Tripathi, Kristen G. Olson and William L. Dewey 1993. Beta-endorphin response to endurance exercise: Relationship to exercise dependence. Perceptual and Motor Skills 77(3): 767–770.
Raglin, John S. 1990. Exercise and mental health: Beneficial and detrimental effects. Sports Medicine 9(6): 323–329.
Reiter, Paul J. 1926. Extrapyramidal motor-disturbances in dementia praecox. Acta Psychiatrica et Neurologica 1: 287–305.
Rogers, D. 1985. The motor disorders of severe psychiatric illness: A conflict of paradigms. The British Journal of Psychiatry 147: 221–232.
Routtenberg, Aryeh 1968. Self-starvation of rats living in activity wheels: Adaptation effects. Journal of Comparative and Physiological Psychology 66(1): 234–238.
Sainsbury, Peter 1955. Gestural Movement During Psychiatric Interview. Psychosomatic Medicine 17: 454–469.
Shenton, J. 1990. Move for the Better. Therapy Weekly 13: 4.


Sousa-Poza, Joaquin F. and Robert Rohrberg 1977. Body movements in relation to type of information (person- and non-person oriented) and cognitive style (field dependence). Human Communication Research 4(1): 19–29.
Stanton-Jones, Kristina 1992. An Introduction to Dance Movement Therapy in Psychiatry. London/New York: Routledge.
Trosch, Richard and Henry Nasrallah 2004. The neurological burden of EPS: Have we really solved the problem in the era of atypicals? Video, University of Florida and the Distance Learning Network, Boalsburg.
Ulrich, Gerald 1977. Videoanalytische Methoden zur Erfassung averbaler Verhaltensparameter bei depressiven Syndromen. Pharmakopsychiatrie 10: 176–182.
Ulrich, Gerald and K. Harms 1979. Video-analytic study of manual kinesics and its lateralization in the course of treatment of depressive syndromes. Acta Psychiatrica Scandinavica 59(5): 481–492.
Wallbott, Harald G. 1989. Movement quality changes in psychopathological disorders. In: Bruce Kirkcaldy (ed.), Normalities and Abnormalities in Human Movement (Medicine and Sport Science 29), 128–146. Basel: Karger.
Wernicke, Carl 1900. Grundriss der Psychiatrie in klinischen Vorlesungen. Leipzig: Thieme.
Wilder, Vicky Nichols 1985. Effects of Antipsychotic Medication on the Movement Pathologies of Chronic Schizophrenic Patients. Philadelphia, PA: Hahnemann University Graduate School.
Wolf-Schein, Enid G., Gene S. Fisch and Ira L. Cohen 1985. A Study of the Use of Nonverbal Systems in the Differential Diagnosis of Autistic, Mentally Retarded and Fragile X Individuals. American Journal of Dance Therapy 8(1): 67–80.

Hedda Lausberg, Cologne (Germany)

147. Bodily communication and deception

1. Theoretical approaches to the study of behavioral correlates of deception
2. Results from recent meta-analyses
3. Moderator variables
4. Implications for detecting deception
5. References

Abstract

In social interactions, people generally assume that communicators speak the truth. Nonetheless, deception is an essential ingredient of human (and animal) social life. In many daily situations the consequences of being caught lying are not very high, but in certain circumstances, such as intimate relationships or criminal proceedings, the consequences can be quite severe. Although deception is difficult to study in situ for obvious ethical and legal reasons, the importance of detecting lies in these latter situations has sparked hundreds of studies on the correlates of deception. We focus here on nonverbal and paraverbal cues to deception. It is assumed that there is an evolutionary arms race in which deception has, on the one hand, brought an evolutionary advantage that is, on the other hand, counteracted by other organisms' attempts to develop strategies to detect deception (cf. Smith 1987). To provide an example from daily life: children learn that they get caught lying when they avert their gaze from their mother when being questioned about some wrongdoing they are trying to deny. By looking their mother straight in the eyes they learn to get by, at least for a while, until their mother catches on that this is the strategy they use. Now it is the children's turn again to change their strategies… In the following, we first describe several theoretical approaches that make sometimes congruent, sometimes rival predictions about classes of nonverbal and paraverbal behaviors which may or may not be associated with deception. Thereafter, we review the empirical evidence on the validity of these indicators of deception. Towards the end, we contrast these findings with people's beliefs about these cues and the difficulties of detecting deception.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1913–1921.

1. Theoretical approaches to the study of behavioral correlates of deception

Traditionally, four different approaches to predicting the associations of nonverbal and paraverbal behaviors with deception have been discussed: the attempted control, the arousal, the cognitive load, and the emotional approach (Zuckerman and Driver 1985; Zuckerman, DePaulo, and Rosenthal 1981). More recently, DePaulo et al. (2003) have added self-presentation and impression management as an encompassing framework.

1.1. Attempted control approach

The main assumption of the attempted control approach is that liars will try to control all those behaviors that may serve as cues to deception (Ekman and Friesen 1969). Because deceivers wish to suppress suspicious movements as much as possible, the controlled behavior “may appear planned, rehearsed, and lacking in spontaneity” (Zuckerman et al. 1981: 7). In their “leakage theory”, Ekman and Friesen (1969) postulated that some nonverbal channels are less controllable than others. According to this theory, the face is most carefully monitored by the deceiver, while lower body parts, such as legs and feet, are the least in his or her awareness (Ekman and Friesen 1969). When deceivers try to control their behavior, they will focus their attention on those movements that are closely connected with cultural stereotypes concerning behaviors deceivers usually show (Akehurst et al. 1996; Zuckerman, Koestner, and Driver 1981). Ekman and Friesen (1974) asked their stimulus persons what behavior should be controlled in order to tell a lie without being detected. As predicted, the majority mentioned the face, not the body; the difference was highly significant. Hocking and Leathers (1980) surveyed communication students with a questionnaire which revealed that most of the 164 respondents expected an increase in body movements, such as arm, hand, foot, and leg movements, and a decrease in eye contact during deception. More recent studies confirmed that these beliefs are held by lay people and professionals alike worldwide (e.g., Breuer, Sporer, and Reinhard 2005; Global Deception Research Team 2006). Combining these stereotypes and the control approach with the assumptions of leakage theory, it is expected that hand, arm, and finger movements should decrease to a certain extent when a person is lying. Movements of legs and feet should decrease to a lesser extent, because they are less well controlled. In contrast, the frequency of eye contact and smiles should increase. With respect to paraverbal behaviors, predictions are less clear-cut. For example, liars may speak more slowly if they are concerned that certain contents may give them away. On the other hand, if liars believe that they may be detected by speaking too slowly, they may attempt to counteract this by increasing their verbal output.

1.2. Arousal approach

The arousal approach is based on psychophysiological reactions that are related to deception. This assumption is probably as old as written records of attempts to discover deception (see Kleinmuntz and Szucko 1982; Trovillo 1939). The assumption that lying causes increased arousal that manifests itself in different physiological changes (e.g., in the galvanic skin response) is the basis of the polygraph examinations that are employed to detect liars (National Research Council 2003). The linkage between heightened physiological arousal and nonverbal correlates that can be observed without any technical aid has also been investigated. It is assumed that the same processes that are responsible for the observed psychophysiological phenomena also influence nonverbal and paraverbal behaviors. For this reason, deceivers are expected to show an increase in eye blinks, more head movements, adaptors, and movements of the extremities (hands, legs, and feet), as well as a higher pitch and more speech errors compared to truth-tellers. However, it is not always guaranteed that only liars are highly aroused. What about truth-tellers who are unjustly suspected of lying? This may often be the case in real-life situations during police interviews. In a study by deTurck and Miller (1985), both unaroused and aroused nondeceivers were compared to deceivers. For this purpose, half of the nondeceivers were subjected to aversive white noise to increase their arousal artificially. Although the arousal level of deceivers and aroused nondeceivers was comparable, all nonverbal cues that distinguished between deceivers and unaroused truth-tellers also differed between deceivers and aroused truth-tellers. Thus, according to this study, the nonverbal correlates of deception appear not to be caused by arousal per se, but only by arousal that is induced by deception (deTurck and Miller 1985).

1.3. The affective approach

The affective approach is based on the assumption that deception is associated with different affective reactions that can have an impact on nonverbal behavior (Ekman 2001; Knapp, Hart, and Dennis 1974). According to Ekman (2001), the two affects most frequently connected with deception are fear that the lie might be detected and followed by unpleasant consequences, and guilt, because lying is socially regarded as forbidden and morally reprehensible. Pressured by these emotions, deceivers will signal their discomfort through nonverbal cues (Knapp et al. 1974). Regarding the impact of affect on nonverbal behavior, Vrij (2008) assumes that guilt leads to a decrease in eye contact, while fear results in an increase in movements. On the one hand, adaptors are expected to increase as a function of discomfort and anxiety (Ekman and Friesen 1972); on the other hand, a tendency to dissociate oneself from the negative experience of lying (termed withdrawal; Miller and Burgoon 1982) may be associated with a decrease of illustrators (Ekman 2001; Zuckerman et al. 1981). It is less clear from this approach how paraverbal behaviors such as speech rate would change. Mehrabian (1971) assumed that negative affect should result in a slower speech rate.

1.4. Cognitive load approach and working memory capacity

The main assumption of the cognitive load approach is that it is more difficult to invent a plausible lie than to tell the truth (Cody, Marston, and Foster 1984; Vrij 2008; Zuckerman et al. 1981). The human information processing system is therefore strained much more when a deceptive story is constructed, because the deceiver has to avoid inconsistencies while embedding his or her account in existing facts that are already known. This leads to an increased cognitive load that is manifested in “specific verbal and nonverbal cues” (Miller and Stiff 1993: 55). With any highly complex task, as cognitive effort increases, people reduce the amount of eye contact and increase body-focused and object-focused adaptor behavior. The cognitive load approach can also be linked to models of working memory (see Sporer and Schwandt 2006, 2007). We propose that the monitoring of both the verbal and nonverbal behavior of a communicator required to produce a complex lie takes place in working memory (see Baddeley 2000). Given the limited processing capacity of working memory, we would expect the speech rate to decline when telling a lie compared to telling the truth, as the latter places fewer demands on processing capacity (cf. DePaulo, Lassiter, and Stone 1982). We assume that the “central executive”, which is considered a component of working memory (Baddeley 2000), is also responsible for the control of nonverbal behaviors. Consequently, with increasing complexity of a lie, the control of nonverbal and paraverbal behaviors should diminish. Thus, such a working memory model incorporates both the attempted control and the cognitive load approach and expands their predictions. Specifically, the working memory model predicts that some of the behaviors studied may be better cues to deception when the communicator had little time to prepare his or her lie.
Hence, this model can serve as a basis for understanding (complex) lies and the accompanying nonverbal and paraverbal reactions. Note that this model is also compatible with DePaulo’s self-presentational perspective described next. Attempting to regulate one’s behavior is assumed to usurp cognitive resources (Baumeister 1998).

1.5. Self-presentation and impression management

According to DePaulo’s self-presentational perspective on nonverbal behavior (DePaulo 1992; DePaulo and Friedman 1998), people generally try to present themselves in a positive light to others. To succeed, liars must present themselves in a way that appears sincere to others. Although truth-tellers face the same problem when they want to appear credible, liars’ self-presentations are often not as convincingly embraced as truthful ones and may show greater signs of deliberateness (DePaulo et al. 2003). DePaulo’s self-presentational perspective not only complements the previously discussed approaches but also integrates them to some extent. It allows additional predictions about the role of certain moderators, for example, regarding planning and motivation, in particular the identity-relevance of certain lies. Buller and Burgoon’s (1996) Interpersonal Deception Theory goes even further by emphasizing strategic aspects of deceivers’ planned or rehearsed lies as well as adjustments to receivers’ reactions.

2. Results from recent meta-analyses

Tab. 147.1 summarizes the results of several recent meta-analyses (DePaulo et al. 2003; Sporer and Schwandt 2006, 2007) and compares them to former meta-analyses of other

Tab. 147.1: Mean effect sizes (r) of nonverbal and paraverbal indicators of deception in different meta-analyses, in comparison to beliefs about deception of students and different occupational groups (from Sporer and Schwandt 2006, 2007)

Variable | Zuckerman and Driver (1985) | DePaulo et al. (2003) | Sporer and Schwandt (2006, 2007): weighted mean r | Sporer and Schwandt (2006, 2007): unweighted mean r | Beliefs, students (Zuckerman et al. 1981) | Beliefs, professional groups (Köhnken 1988)

Nonverbal behaviors in the head area
Blinking | .24* | .03 | .00 | .00 | .32 | .53
Eye contact | –.01 | .00 | –.01 | –.02 | a | –.45
Gaze aversion | a | .01/.03b | .03 | .02 | .53 | a
Head movements | –.09 | –.01 | .06 | .05 | .29 | .49
Nodding | a | .00 | –.09* | –.05 | a | a
Smiling | –.04 | .00 | –.03 | –.07* | .15 | .23

Nonverbal behaviors in the body area
Adaptors | .17** | .08* | .02 | .07** | .84 | .79
Hand movements | a | .00 | –.19** | –.18** | a | a
Gestures | –.06 | –.07* | .02 | .02 | .10 | .58
Foot and leg movements | –.01 | –.04 | –.07* | –.05 | .67 | .56
Body animation | –.01 | .02 | .01 | .03 | .56 | .66

Paraverbal behaviors
Message duration | –.09* | –.01c,d | –.04 | –.06* | .22 | .15
Number of words | a | –.01c,d | –.01 | .01 | a | a
Rate of speaking | –.03 | .03 | .01 | .01 | .56 | .65
Filled pauses | .26** | .00 | .04 | .03 | .54 | a
Unfilled pauses | a | .00 | –.02 | .02 | a | a
Pitch | .32* | .10* | .10* | .13* | .43 | a
Repetitions | a | .10* | .08 | .11 | a | .77
Response latency | –.01 | .01 | .11** | .09** | .32 | .79
Speech errors | .11* | .00 | .04 | .06* | .72 | .70

Note. The effect sizes d reported by Zuckerman and Driver (1985), DePaulo et al. (2003), Zuckerman et al. (1981) and Köhnken (1988) were converted into effect sizes r. Positive values of r indicate an increase, negative values a decrease of the behavior during deception. a not investigated. b Gaze aversion and eye shifts were coded separately. c Number of words or message duration. d When talking time is operationalized as part of the total interaction, there is a reliable association (r = –.17, p < .05). *p < .05; **p < .01.


authors. To allow for comparability between the different results, the reported d-values (standardized mean differences) were converted into the effect size measure r (point-biserial correlation coefficient), which can be interpreted analogously to a correlation coefficient. Positive r-values denote an increase of a behavioral cue during deception, negative values a decrease. A point-biserial r = .10 is considered a small, r = .24 a medium, and r = .37 a large relation (Cohen 1988). The results can be summarized as follows:
– According to these findings, there are hardly any reliable nonverbal behavioral patterns of deception in the head area, which presumably is best controlled by liars. Only the meta-analysis by Sporer and Schwandt (2007) showed tendencies for liars to nod and to smile less frequently.
– In the body area, self-manipulations slightly increase, whereas hand (and finger) movements as well as illustrators tend to decrease when lying.
– Among the paraverbal indicators, there is generally evidence for an increase in voice pitch, which presumably can only be detected with instrumental aids (Sporer and Schwandt 2006). However, studies using commercially available “voice stress analyzers” have found little support for their validity (National Research Council 2003).
– Surprisingly, lies do not seem to be shorter in general, though recent studies using more complex statements seem to suggest this relationship (e.g., Sporer et al. 2011). Consistent with a greater demand on working memory, answers are given with a delay but are not necessarily associated with more speech errors (Sporer and Petermann 2011). In general, Freudian slips seem to be rare (Sporer et al. 2011).
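To make the conversion concrete, the standard formula for turning a d-value into a point-biserial r with equal group sizes is r = d / sqrt(d² + 4). The sketch below is our own illustration, not code from the meta-analyses; the function name and the unequal-n variant are our additions.

```python
import math

def d_to_r(d, n1=None, n2=None):
    """Convert a standardized mean difference d into a point-biserial r.

    With equal (or unknown) group sizes: r = d / sqrt(d**2 + 4).
    With known unequal group sizes, 4 is replaced by (n1 + n2)**2 / (n1 * n2).
    """
    if n1 is None or n2 is None:
        return d / math.sqrt(d ** 2 + 4)
    correction = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + correction)

# Cohen's (1988) d benchmarks map onto the r benchmarks cited in the text:
# d = 0.20 -> r ~ .10 (small), d = 0.50 -> r ~ .24 (medium), d = 0.80 -> r ~ .37 (large)
```

Note that the mapping is nonlinear: doubling d does not double r, which is one reason the converted values in Tab. 147.1 cluster near zero.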

3. Moderator variables

The type and number of indicators of deceptive behavior are influenced by the context of the deceptive situation. Thus, several moderator variables are likely to have an impact on these behaviors according to the meta-analyses by DePaulo et al. (2003) and Sporer and Schwandt (2006, 2007):
– Under high motivation, liars appear more tense, show fewer leg, foot, hand, and finger movements, speak with higher-pitched voices, and take longer to respond (termed the motivational impairment effect; DePaulo and Kirkendol 1989).
– With only a short opportunity to prepare or plan, liars nod less, show fewer head, hand, and finger movements, speak with a higher-pitched voice, and take longer to respond.
– Unsanctioned lies are shorter in duration, contain more filled pauses, and are characterized by longer response latencies.
There are also other variables that moderate effect sizes for one or the other cue to deception, but results are not as clear-cut. Importantly, when messages are longer, some of the differences described may become more noticeable (DePaulo et al. 2003). Also, the topic lied about (facts, emotions, or attitudes and feelings) may be an important moderator (Sporer and Schwandt 2006, 2007).
Some authors have argued that an analysis of microexpressions (1/5th to 1/25th of a second), which supposedly are difficult to suppress, may be useful to detect feigned emotions (Ekman 2001). In particular, feigned smiles, which involve only the zygomatic major muscle, can supposedly be differentiated from genuine, so-called Duchenne smiles, which involve both the zygomatic major muscle and the orbicularis oculi. These assumptions, which have been popularized by a contemporary television series (“Lie to me”), for which Ekman serves as an advisor, and taught to airport security personnel, have only recently been subjected to systematic empirical tests. Using cumbersome frame-by-frame analyses of videotaped material, Porter and ten Brinke (2008) found that inconsistent microexpressions occur rather seldom overall, are observed only as partial microexpressions in either the upper or the lower part of the face, last longer than expected, and occur in genuine as well as in simulated and masked expressions. In summary, contrary to common beliefs, few, if any, nonverbal cues show strong associations with deceptive behavior, and the occurrence of these cues may depend on a host of factors.

4. Implications for detecting deception

In a large-scale meta-analysis, Bond and DePaulo (2006) found that people attempting to detect deception perform barely above chance level, with a weighted average of 53.4% classification accuracy across close to 300 studies. Results differed depending on the mode of presentation: When both visual and auditory information were available (M = 54.0%), results were slightly better than when only auditory information (M = 53.0%) was the basis for judgments, which in turn was better than when visual information alone (M = 50.5%) was present. This implies that a primary focus on nonverbal behavior is not the best way to detect deception. Nonetheless, lay people seem to believe that nonverbal cues are the royal road to detecting deception, and many police training manuals and training courses seem to be based on that assumption (Masip et al. 2010). In contrast, both a recent meta-analysis of training programs to detect deception (Hauch et al. in press) and recent experimental studies by Reinhard and his colleagues (e.g., Reinhard et al. 2011) indicate that closer scrutiny of the content of a message leads to higher accuracy rates than reliance on nonverbal behaviors (see also Sporer 2004). In fact, some reviewers of the literature have even gone so far as to label attributions of credibility based on observations of a witness’s or defendant’s face or emotional expressions as “dangerous decisions” which could jeopardize legal decision-making (Porter and ten Brinke 2009). Considering the cross-cultural differences in many nonverbal behaviors and in their perception as a function of cross-ethnic contact (e.g., Elfenbein and Ambady 2003; see also numerous other chapters of this Handbook), reliance on nonverbal cues to detect deception seems clearly ill-advised when persons from different cultural backgrounds interact with each other.
Perhaps, with the advancement of more precise (unobtrusive) measurement methods, some cues may show more reliable associations with deception, particularly in high-stakes lies.
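As an aside, the weighted average reported by Bond and DePaulo simply weights each study’s accuracy by a factor such as its sample size. A minimal sketch with hypothetical numbers (the three studies below are invented for illustration, not the meta-analytic data):

```python
def weighted_mean_accuracy(accuracies, weights):
    """Weighted mean of per-study accuracy rates (weights, e.g., sample sizes)."""
    total = sum(weights)
    return sum(a * w for a, w in zip(accuracies, weights)) / total

# Hypothetical example: three studies with 54%, 51%, and 56% accuracy
# and 120, 80, and 40 judges, respectively.
overall = weighted_mean_accuracy([54.0, 51.0, 56.0], [120, 80, 40])
```

Weighting by study size prevents small, noisy studies from pulling the overall estimate away from the better-powered ones.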

5. References

Akehurst, Lucy, Günter Koehnken, Aldert Vrij and Ray Bull 1996. Lay persons’ and police officers’ beliefs regarding deceptive behaviour. Applied Cognitive Psychology 10(6): 461–471.
Baddeley, Alan D. 2000. Short-term and working memory. In: Endel Tulving and Fergus I.M. Craik (eds.), Handbook of Memory, 77–92. Oxford, UK: Oxford University Press.
Baumeister, Roy F. 1998. The self. In: Daniel T. Gilbert, Susan T. Fiske and Gardner Lindzey (eds.), Handbook of Social Psychology, Volume 1, 680–740. Boston: McGraw-Hill.


Bond, Charles F. Jr. and Bella M. DePaulo 2006. Accuracy of deception judgments. Personality and Social Psychology Review 10(3): 214–234.
Breuer, Maike M., Siegfried L. Sporer and Marc-André Reinhard 2005. Subjektive Indikatoren von Täuschung: Die Bedeutung von Situation und Gelegenheit zur Vorbereitung [Subjective indicators of deception: The role of situation and opportunity to prepare]. Zeitschrift für Sozialpsychologie 36(4): 189–201.
Buller, David B. and Judee K. Burgoon 1996. Interpersonal deception theory. Communication Theory 6(3): 203–242.
Cody, Michael J., Peter J. Marston and Myrna Foster 1984. Deception: Paralinguistic and verbal leakage. In: Robert N. Bostrom (ed.), Communication Yearbook 8, 464–490. Beverly Hills, CA: Sage.
Cohen, Jacob 1988. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Erlbaum.
DePaulo, Bella M. 1992. Nonverbal behavior and self-presentation. Psychological Bulletin 111(2): 203–243.
DePaulo, Bella M. and H.S. Friedman 1998. Nonverbal communication. In: Daniel Gilbert, Susan T. Fiske and Gardner Lindzey (eds.), Handbook of Social Psychology, Volume 2, 3–40. New York: Random House.
DePaulo, Bella M. and Susan E. Kirkendol 1989. The motivational impairment effect in the communication of deception. In: John C. Yuille (ed.), Credibility Assessment, 51–70. Dordrecht: Kluwer.
DePaulo, Bella M., G. Dan Lassiter and Julie I. Stone 1982. Attentional determinants of success at detecting deception and truth. Personality and Social Psychology Bulletin 8(2): 273–279.
DePaulo, Bella M., James J. Lindsay, Brian E. Malone, Laura Muhlenbruck, Kelly Charlton and Harris Cooper 2003. Cues to deception. Psychological Bulletin 129(1): 74–118.
deTurck, Mark A. and Gerald R. Miller 1985. Deception and arousal: Isolating the behavioral correlates of deception. Human Communication Research 12(2): 181–201.
Ekman, Paul 2001. Telling Lies: Clues to Deceit in the Marketplace, Marriage, and Politics. New York: Norton.
Ekman, Paul and Wallace V. Friesen 1969. Nonverbal leakage and clues to deception.
Psychiatry 32(1): 88–106.
Ekman, Paul and Wallace V. Friesen 1972. Hand movements. Journal of Communication 22(4): 353–374.
Ekman, Paul and Wallace V. Friesen 1974. Detecting deception from the body and face. Journal of Personality and Social Psychology 29(3): 288–298.
Elfenbein, Hillary A. and Nalini Ambady 2003. When familiarity breeds accuracy: Cultural exposure and facial emotion recognition. Journal of Personality and Social Psychology 85(2): 276–290.
Global Deception Research Team 2006. A world of lies. Journal of Cross-Cultural Psychology 37(1): 60–74.
Hauch, Valerie, Siegfried L. Sporer, Stephen W. Michael and Christian A. Meissner in press. Does training improve detection of deception? A meta-analysis. Communication Research.
Hocking, John E. and Dale G. Leathers 1980. Nonverbal indicators of deception: A new theoretical perspective. Communication Monographs 47(2): 119–131.
Kleinmuntz, Benjamin and Julian J. Szucko 1982. On the fallibility of lie detection. Law and Society Review 17(1): 85–104.
Knapp, Mark L., Roderick P. Hart and Harry S. Dennis 1974. An exploration of deception as a communication construct. Human Communication Research 1(1): 15–29.
Masip, Jaume, Carmen Herrero, Eugenio Garrido and Alberto Barba 2010. Is the behaviour analysis interview just common sense? Applied Cognitive Psychology 25(4): 593–604.
Mehrabian, Albert 1971. Nonverbal betrayal of feeling. Journal of Experimental Research in Personality 5(1): 64–73.
Miller, Gerald R. and Judee K. Burgoon 1982. Factors affecting assessments of witness credibility. In: Norbert L. Kerr and Robert M. Bray (eds.), The Psychology of the Courtroom, 169–194. San Diego, CA: Academic Press.


Miller, Gerald R. and James B. Stiff 1993. Deceptive Communication. Newbury Park, CA: Sage.
National Research Council 2003. The Polygraph and Lie Detection. Washington, DC: National Academy Press.
Porter, Stephen and Leanne ten Brinke 2008. Reading between the lies: Identifying concealed and falsified emotions in universal facial expressions. Psychological Science 19(5): 508–514.
Porter, Stephen and Leanne ten Brinke 2009. Dangerous decisions: A theoretical framework for understanding how judges assess credibility in the courtroom. Legal and Criminological Psychology 14(1): 119–134.
Reinhard, Marc-André, Siegfried L. Sporer, Martin Scharmach and Tamara Marksteiner 2011. Listening, not watching: Situational familiarity, efficacy expectations and the ability to detect deception. Journal of Personality and Social Psychology 101(3): 467–484.
Smith, Euclid O. 1987. Deception and evolutionary biology. Cultural Anthropology 2(1): 50–64.
Sporer, Siegfried L. 2004. Reality monitoring and the detection of deception. In: Pär Anders Granhag and Leif A. Strömwall (eds.), The Detection of Deception in Forensic Contexts, 64–102. New York, NY: Cambridge University Press.
Sporer, Siegfried L., Maike M. Breuer, J. Zander and M. Krompass 2011. Nonverbal and paraverbal correlates of deception: Does preparation make a difference? Unpublished manuscript, Department of Psychology and Sports Science, University of Giessen, Germany.
Sporer, Siegfried L. and Nina F. Petermann 2011. Paraverbal cues to deception as a function of interview type. Paper presented at the Meeting of the American and European Psychology-Law Society in Miami, Florida.
Sporer, Siegfried L. and Barbara Schwandt 2006. Paraverbal indicators of deception: A meta-analytic synthesis. Applied Cognitive Psychology 20(4): 421–446.
Sporer, Siegfried L. and Barbara Schwandt 2007. Moderators of nonverbal indicators of deception: A meta-analytic synthesis. Psychology, Public Policy, and Law 13(1): 1–34.
Ten Brinke, Leanne, Diana Stimson and Dana Carney 2014. Some evidence for unconscious lie detection. Psychological Science 25(5): 1098–1105.
Trovillo, Paul V. 1939. A history of deception detection. Journal of Criminal Law and Criminology 29: 848–899; 30: 104–119.
Vrij, Aldert 2008. Detecting Lies and Deceit: The Psychology of Lying and the Implications for Professional Practice. Chichester: John Wiley.
Zuckerman, Marvin and Robert Driver 1985. Telling lies: Verbal and nonverbal correlates of deception. In: Aron W. Siegman and Stanley Feldstein (eds.), Nonverbal Communication: An Integrated Perspective, 129–147. Hillsdale, NJ: Lawrence Erlbaum.
Zuckerman, Marvin, Bella M. DePaulo and Robert Rosenthal 1981. Verbal and nonverbal communication of deception. In: Leonard Berkowitz (ed.), Advances in Experimental Social Psychology, Volume 14, 1–59. New York: Academic Press.
Zuckerman, Marvin, Richard Koestner and Robert Driver 1981. Beliefs about cues associated with deception. Journal of Nonverbal Behavior 6(2): 105–114.

Siegfried L. Sporer, Giessen (Germany)


148. Multimodal discourse comprehension

1. Introduction
2. Event-related brain potentials (ERPs)
3. Semantic retrieval and iconic co-speech gestures
4. Iconic gestures and word processing
5. Speech-gesture integration
6. Conclusions
7. References

Abstract

Multi-modal discourse comprehension involves understanding speech as well as accompanying gestures. Research using event-related brain potentials (ERPs) suggests that iconic co-speech gestures activate brain systems for semantic retrieval in a way similar to that of visual representations such as pictures. Event-related brain potentials research indicates that the processing of iconic gestures is sensitive to contextual congruity, and that gestures can facilitate the processing of semantically related speech, as well as that of related picture probes. Taken together, these studies suggest that both speech and gestures activate conceptual representations in semantic memory, and that language comprehenders can integrate information from the two channels to form visually enhanced cognitive models of the discourse referents.

1. Introduction

Anyone who has had a phone conversation about how to replace the sound card in a computer, how to change the oil in a car, or how to make lasagna knows the importance of explanatory gestures for communication. Multi-modal discourse involves the use of both visual (gestures) and auditory (speech) information to communicate about topics such as the skilled activities mentioned above. Research in our laboratory concerns the cognitive and neural basis of multi-modal discourse comprehension, especially how people combine information conveyed by co-speech iconic gestures with that conveyed in the accompanying speech. Iconic gestures are body movements that signal visuo-spatial properties of objects and events described in the accompanying speech. For example, a speaker might indicate the size of a bowl by holding his hands apart from each other, creating perceptual similarity between the span of the bowl and the span of the open space between his hands. Here we review studies concerning the cognitive and neural substrate of multi-modal discourse comprehension. In particular, we discuss studies concerning which brain systems are engaged by iconic gestures, which cognitive processes mediate their comprehension, and how language users integrate propositional information conveyed by speech with analogue information conveyed by gestures.

2. Event-related brain potentials (ERPs)

Recent research in our laboratory has used event-related brain potentials (ERPs) to study the real-time comprehension of multi-modal discourse. Event-related brain potentials represent brain activity in the cortex recorded non-invasively via electrodes placed on the scalp (see Coulson 2007 for a review). Tiny signals from these electrodes are amplified and digitized, yielding the electroencephalogram (EEG). By averaging portions of the electroencephalogram that are time-locked to the presentation of a specific class of stimuli, it is possible to extract a record of brain activity temporally correlated with the cortical processes engaged by that type of stimulus. The resulting event-related brain potential waveform can be analyzed as a series of positive- and negative-going deflections (commonly referred to as components) that are characterized by their polarity (negative or positive voltage), time course, and distribution across scalp electrode sites.
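The time-locked averaging logic can be illustrated in a few lines. This is a toy sketch with invented array shapes, not the laboratory’s actual pipeline; real ERP analyses additionally involve filtering, artifact rejection, and baseline correction, typically in dedicated EEG software.

```python
import numpy as np

def average_erp(eeg, event_samples, sfreq, tmin=-0.1, tmax=0.7):
    """Cut epochs around each event and average them into an ERP.

    eeg: channels x samples array of continuous EEG.
    event_samples: sample indices at which the stimulus class occurred.
    sfreq: sampling rate in Hz; tmin/tmax: epoch window in seconds.
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([eeg[:, s + start:s + stop] for s in event_samples])
    # Averaging over epochs cancels activity not time-locked to the stimulus,
    # leaving the stimulus-locked waveform (channels x time).
    return epochs.mean(axis=0)
```

The key property is that random background activity averages toward zero across trials, while deflections consistently time-locked to stimulus onset survive the averaging.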

2.1. N400 component

An ERP component particularly relevant to semantic processing is the N400, which was discovered during early research on language comprehension (Kutas and Hillyard 1980). Kutas and Hillyard recorded ERPs to the last word of sentences that ended either congruously (as in (i)) or incongruously (as in (ii)).

(i) I take my coffee with cream and sugar.
(ii) I take my coffee with cream and dog.

By averaging the signal elicited by congruous and incongruous sentence completions, respectively, these investigators were able to reveal systematic differences in the event-related brain potentials to these stimulus categories. Incongruous sentence completions elicited a larger negative wave than congruous ones between 300 and 700ms after word onset, peaking after approximately 400ms (hence the name N400). Subsequent research has shown that N400 components are generated whenever stimulus events induce conceptual processing, and that they reflect brain activity involved in the retrieval of information from semantic memory. A more negative N400 reflects more effortful semantic retrieval, and the size of the N400 is reduced when contextual factors result in the pre-activation of relevant semantic features (see Kutas and Federmeier 2011 for a review).
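Congruity comparisons of this kind are conventionally quantified as a difference wave (incongruous minus congruous ERP) whose mean amplitude is measured in the N400 window. A minimal sketch, with the 300 to 700ms window taken from the findings above and the function name our own:

```python
import numpy as np

def n400_effect(erp_incongruous, erp_congruous, times, t_start=0.3, t_end=0.7):
    """Mean amplitude of the incongruous-minus-congruous difference wave
    in the N400 window; a negative value indicates an N400 congruity effect."""
    diff = np.asarray(erp_incongruous) - np.asarray(erp_congruous)
    window = (np.asarray(times) >= t_start) & (np.asarray(times) <= t_end)
    return diff[window].mean()
```

Working on the difference wave isolates the congruity manipulation from components common to both conditions, which is why N400 effects are usually reported this way.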

2.2. Word and picture N400

Besides words, an N400-like component has also been elicited by image-based stimuli such as line drawings and photographs. The two components differ somewhat in scalp distribution, suggesting that different brain areas are active in the processing of words and pictures. However, both the word and the picture N400 peak around 400ms after stimulus onset, and both components are sensitive to contextual congruency, being larger for stimuli that follow unrelated than related items. For example, either a pictorial or a verbal representation of a cat following one of a hamburger elicits a larger N400 than a pictorial or verbal representation of a cat following one of a dog (Holcomb and McPherson 1994). Moreover, just as pseudo-words elicit larger N400s than unrelated words, unrecognizable images elicit larger N400s than do recognizable (unrelated) ones (McPherson and Holcomb 1999). Finally, the size of both the word and the picture N400 is modulated by the global, discourse-level coherence of a word or picture within a story context, being larger for words and images in incoherent than in coherent stories (West and Holcomb 2002). In view of these similarities, it appears that the word and picture N400 components index at least partially overlapping semantic systems that respond to different modalities in comparable ways.


3. Semantic retrieval and iconic co-speech gestures

An important issue in iconic gesture comprehension concerns the way in which such gestures engage the brain’s various semantic systems. On the one hand, iconic gestures involve movements of the body, and thus might be expected to rely on brain systems engaged in action recognition. On the other hand, iconic gestures are representational, and thus might be expected to rely on brain systems engaged in language comprehension. But whereas the meaning of words (and signs) depends on conventional associations among speakers (and signers) of the relevant language, the meaning of iconic gestures relies on visual similarities between gestures and the things they represent. Because the meaning of co-speech iconic gestures derives at least to some extent from their visual properties, one might expect their comprehension to be mediated by semantic retrieval processes similar to those invoked by line drawings and photographs of real-world objects. To investigate this hypothesis, Wu and Coulson (2005) recorded participants’ event-related brain potentials as they watched videos of spontaneously produced iconic gestures. Stimuli were taken from a corpus of iconic co-speech gestures that was collected by videotaping a young man describing cartoon segments. He was told that the experimenters were creating materials for a memory experiment and was unaware of the intent to elicit spontaneous gestures. Occurrences of iconic gesture were digitized into short, soundless video clips (2.3 seconds each), and these clips were presented either after the original cartoon segments the speaker was describing (congruous gestures) or after a different cartoon (incongruous gestures). Consistent with the claim that iconic gestures activate semantic information in a similar way to pictures or words, Wu and Coulson observed an N400-like response in participants’ event-related brain potentials.
A negative-going wave that peaked approximately 450ms after gesture onset, the so-called Gesture N450, was larger for incongruous than congruous gestures. These data suggest that the contextual congruity of spontaneously produced iconic gestures modulated activity in brain systems mediating semantic memory retrieval.
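The logic of these congruency comparisons can be sketched in a few lines of code. The sketch below is purely illustrative: it simulates single-channel epochs with an invented negative deflection, then performs the time-locked averaging and mean-amplitude measurement that such ERP analyses rely on. The amplitudes, noise level, window, and trial counts are made up, not taken from the studies above.

```python
# Toy sketch of the ERP congruency logic described above: epochs are
# time-locked to gesture onset, averaged per condition, and the mean
# amplitude in the N400/N450 window is compared across conditions.
# All numbers here are simulated for illustration, not real EEG data.
import numpy as np

rng = np.random.default_rng(0)
sfreq = 250                                  # samples per second
times = np.arange(0, 0.9, 1 / sfreq)         # 0-900 ms after gesture onset

def simulate_epochs(n_trials, n400_amp):
    """One channel: a negative deflection peaking ~450 ms, plus noise."""
    deflection = n400_amp * np.exp(-((times - 0.45) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(0, 1.0, size=(n_trials, times.size))
    return deflection + noise                # shape: (trials, samples)

congruous = simulate_epochs(40, n400_amp=-2.0)
incongruous = simulate_epochs(40, n400_amp=-5.0)

# Time-locked averaging yields the ERP; then measure the mean amplitude
# in a 350-550 ms window spanning the component.
window = (times >= 0.35) & (times <= 0.55)
erp_cong = congruous.mean(axis=0)
erp_incong = incongruous.mean(axis=0)
effect = erp_incong[window].mean() - erp_cong[window].mean()
print(f"congruency effect: {effect:.2f} (negative = larger N400)")
```

The difference wave (incongruous minus congruous) comes out negative, mirroring the direction of the Gesture N450 effect reported above.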

3.1. N300 and N400 components

One difference between the ERPs to gestures and those to images is that the latter elicit an anterior negativity peaking around 300 ms after stimulus onset (the N300), as well as the more broadly distributed negativity (the N400) peaking approximately 400 ms post-stimulus (McPherson and Holcomb 1999). Consequently, the N300 has been argued to index an image-specific semantic system, while the N400 indexes a more general one (McPherson and Holcomb 1999). However, given that videos of everyday actions, such as a man shaving or chopping vegetables, also elicit an N400 without an accompanying N300 (Sitnikova, Kuperberg, and Holcomb 2003), this difference might simply be an artifact of the dynamic nature of gestures. That is, image-specific semantic systems may be engaged by static and dynamic images alike, but for technical reasons the N300 is only evident in event-related brain potentials to static images. Wu and Coulson (2011) explored this possibility by recording the electroencephalogram as participants viewed cartoons followed by either a video or a still image of a speaker talking about the clip. Still images were captured from the videos such that they conveyed the gist of the gesture. As in the study by Wu and Coulson (2005), gestures shown in the videos and still images were either congruous or incongruous with the preceding cartoon. Event-related brain potentials were time-locked to either the onset of the video or the onset of the image. Congruency reduced the amplitude of the N400 component elicited by dynamic gestures (videos), and of both the N300 and N400 components elicited by static gestures (still images). The smaller negativities elicited by congruous relative to incongruous stimuli suggest that, for both dynamic and static gestures, contextually congruent information facilitated semantic processing. Comparison of the topographic distributions of the congruency effects suggested similar neural generators for dynamic and static gestures, but larger effects for static ones, due to the better time-locking afforded by the presentation of a static stimulus.

3.2. Images of gestures versus objects

Wu and Coulson (2011) also compared event-related brain potential congruency effects for static gestures with ERP picture-priming effects for photographs of objects, recorded in a separate experiment with the same participants. The timing and polarity of these effects were similar (that is, both yielded N300 and N400 effects), but the scalp topography suggested that somewhat different neural generators were active in the processing of gestures versus the processing of photographs of objects. The observed similarities suggest that the brain treats gestural information much as it does other sorts of visual representations. The differences presumably reflect both the recruitment of neural systems specifically dedicated to the representation of the human body and the more abstract nature of gestures relative to the visual representations in the photographs. That is, the relationship between photographs and the objects they depict depends on basic perceptual similarity, while that between the cartoons and our informant's gestures derived from shared relational structure. As an illustration, consider a cartoon segment in which Nibbles, Jerry's mischievous young cousin from Tom and Jerry cartoons, jumps onto the rim of a candlestick and begins chomping at the base of the candle. His actions cause the candle to topple, much in the manner of a tree being felled, onto Jerry's head. The subsequent gesture is shown in Fig. 148.1, next to an illustration of a single frame from the cartoon. The speaker's left forearm and extended hand depict the long, straight shape of the candle, as well as its horizontal orientation. Further, the largely parallel configuration of his left forearm above his right one is analogous to the parallel relationship between the fallen candle and the plate of doughnuts beneath it.

Fig. 148.1: Artist’s rendition of the cartoon (left) described by the informant’s gesture (right)


From a purely perceptual perspective, however, there are few similarities between the gesture and the cartoon. The candle points to the right while the speaker's hand points to the left; the candle is cylindrical, whereas the speaker's hand is flat; moreover, the candle in the original cartoon was red, whereas the speaker's arm was covered by the sleeve of a plaid shirt. The similarity between the gesture and the cartoon derives instead from shared relations between sets of features, such as the shape and orientation of the speaker's left forearm and the candle, and from higher-order relations between sets of items, such as the speaker's right and left forearms, and the candle and the table.

4. Iconic gestures and word processing

Despite the somewhat schematic nature of iconic gestures, our research indicates that they engage the brain in a way similar to that of visual images, prompting sensitivity to visual context and activating the semantic retrieval processes indexed by the N400 component of the event-related brain potential. Gestures embedded in multi-modal discourse contexts might thus serve to activate stored knowledge about their referents, priming related words and concepts. Given that the interpretation of gestures can proceed based on their visual properties, such priming might be expected to occur even in the absence of contextual support. To test this hypothesis, Wu and Coulson (2007b) recorded event-related brain potentials as healthy adults watched silent videos of spontaneously produced iconic gestures followed by probe words that were either related or unrelated to them. Related probe words elicited a less negative N400 than unrelated probes, suggesting they were easier to process. These data imply that even in the absence of supporting linguistic context, iconic gestures activate information in semantic memory about the phenomena they depict. However, a more pertinent question might be whether co-speech iconic gestures affect the processing of the speech that accompanies them. To address this question, Wu and Coulson (2010) recorded participants' electroencephalogram as they viewed short segments of spontaneous discourse accompanied either by iconic gestures or by an uninformative image of the speaker. Each discourse segment was followed by either a related or an unrelated picture probe. Event-related brain potentials were computed time-locked to the onset of all content words throughout the audio stream, as well as to the picture probes. We found that gestures modulated event-related brain potentials to content words co-timed with the first gesture in a discourse segment, relative to the same words presented with static freeze frames of the speaker. Effects were observed 200–550 ms after speech onset, a time interval associated with semantic processing. Gestures also increased sensitivity to picture probe relatedness, as indexed by event-related brain potentials to the pictures.

5. Speech-gesture integration

Research reviewed above suggests that iconic gestures activate information in semantic memory via abstract perceptual similarities between the gestures and the objects and actions they represent. In multi-modal discourse, gestures can activate information about discourse referents, thus facilitating semantic processing of relevant speech. In fact, in such contexts, gestures often provide information that goes beyond that presented in the accompanying speech. For example, describing the foyer of his house, a man said "When you go in there's an oriental rug," while tracing a circle in the air. Integrating the information presented in speech with that in the gesture yields the inference that the oriental rug in question is round. In this way, complementary information presented in speech and gesture results in an enhanced understanding of the discourse referent, as listeners integrate knowledge about oriental rugs that is activated by the speech with visual information activated by the gesture, yielding a more specific cognitive model than that prompted by either channel alone. To test this model of multi-modal discourse comprehension, Wu and Coulson (2007a) asked participants to watch videos of a man describing everyday objects and events, and recorded event-related brain potentials to two sorts of related picture probes, as well as to pictures that were unrelated to the preceding videos. Related picture probes either agreed with information conveyed through both speech and gesture (cross-modal probes) or through speech alone (speech-only probes). For example, one video prime showed a man saying, "It's actually a double door," while holding his hands vertically above one another (see Fig. 148.2). The speech-only probe depicted a set of French doors, while the cross-modal probe depicted a Dutch door (two small, vertically arrayed doors). To test for intrinsic differences in processing difficulty, all speech-only and cross-modal probes also served as unrelated probes by appearing after a different video, for example one in which the man described a couch.

Fig. 148.2: Examples of materials from Wu and Coulson (2007a)

Wu and Coulson (2007a) found that videos of spontaneous discourse involving speech and gesture led to greater priming for the cross-modal picture probes, which agreed with information conveyed through both channels (e.g., a picture of a Dutch door), relative to the speech-only probes (e.g., a picture of French doors). Cross-modal probes elicited a larger N400 relatedness effect than did speech-only probes. Cross-modal probes also elicited an N300 relatedness effect, whereas speech-only ones did not. These findings support McNeill's (1992) proposal that listeners combine information from speech and gestures to arrive at an enhanced understanding of their interlocutor's meaning. They further suggest that iconic gestures activate image-specific information about the concepts they denote. In some cases, gestures in this study provided critical information denoting a certain kind of item within a class (e.g., a Dutch rather than a French door; a cupboard shelf rather than a wall shelf; a gas stove knob rather than a door knob). In other cases, they portrayed salient visuo-spatial features of objects (e.g., the location of a logo on a T-shirt, the shape of a vase, the degree of openness of a car window). Finally, some gestures demonstrated the manner of action execution (e.g., mixing with a spoon rather than an electric mixer, writing by hand rather than typing on a keyboard, painting with vertical rather than horizontal brush strokes). The fact that participants experienced greater ease in understanding cross-modal than speech-only picture probes suggests that distinctions such as these were incorporated into participants' models of the speaker's intended message, even though they were never made overt in speech.
An important theoretical consequence of this finding is the idea that during comprehension, listeners integrate meanings encoded both linguistically and gesturally, resulting in visually specific conceptual representations.

6. Conclusions

Here we reviewed experiments using event-related brain potentials which suggest that the contextual congruity of spontaneously produced iconic gestures modulates activity in brain systems mediating semantic memory retrieval (Wu and Coulson 2005). The data suggest that the brain treats gestures much as it treats other sorts of visual representations, though the cortical networks engaged by gesture-based versus image-based depictions are likely only partially overlapping. Subtle differences in the brain activity elicited by gestures versus pictures of objects may reflect both the recruitment of brain systems dedicated to the representation of the human body and the more abstract nature of gestures relative to photographs (Wu and Coulson 2011). Gestures embedded in multi-modal discourse contexts have been shown to activate stored knowledge about their referents, priming related words (Wu and Coulson 2007b) and facilitating semantic processing of accompanying speech (Wu and Coulson 2010). Moreover, when comprehending multimodal discourse, language users dynamically combine information conveyed by speech with that in gestures to formulate visually specific cognitive models of discourse referents (Wu and Coulson 2007a).

7. References

Coulson, Seana 2007. Electrifying results: ERP data and Cognitive Linguistics. In: Monica Gonzalez-Marquez, Irene Mittelberg, Seana Coulson and Michael Spivey (eds.), Methods in Cognitive Linguistics, 400–427. Amsterdam: John Benjamins.



Holcomb, Phillip J. and Warren B. McPherson 1994. Event-related brain potentials reflect semantic priming in an object decision task. Brain and Cognition 24(2): 259–276.
Kutas, Marta and Kara D. Federmeier 2011. Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology 62: 621–647.
Kutas, Marta and Steven A. Hillyard 1980. Reading senseless sentences: Brain potentials reflect semantic incongruity. Science 207(4427): 203–205.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
McPherson, Warner B. and Phillip J. Holcomb 1999. An electrophysiological investigation of semantic priming with pictures of real objects. Psychophysiology 36(1): 53–65.
Sitnikova, Tatiana, Gina R. Kuperberg and Phillip J. Holcomb 2003. Semantic integration in videos of real-world events: An electrophysiological investigation. Psychophysiology 40(1): 160–164.
West, Caroline W. and Phillip J. Holcomb 2002. Event-related potentials during discourse-level semantic integration of complex pictures. Cognitive Brain Research 13(3): 363–375.
Wu, Ying Choon and Seana Coulson 2005. Meaningful gestures: Electrophysiological indices of iconic gesture comprehension. Psychophysiology 42(6): 654–667.
Wu, Ying Choon and Seana Coulson 2007a. How iconic gestures enhance communication: An ERP study. Brain and Language 101: 234–245.
Wu, Ying Choon and Seana Coulson 2007b. Iconic gestures prime related concepts: An ERP study. Psychonomic Bulletin and Review 14(1): 57–63.
Wu, Ying Choon and Seana Coulson 2010. Gestures modulate speech processing early in utterances. NeuroReport 21(7): 522–526.
Wu, Ying Choon and Seana Coulson 2011. Are depictive gestures like pictures? Commonalities and differences in semantic processing. Brain and Language 119(3): 184–195.

Seana Coulson, San Diego (USA) Ying Choon Wu, San Diego (USA)

149. Cognitive operations that take place in the Perception-Action Loop

1. The nature of the Perception-Action Loop
2. The location of cognitive operations
3. Continuity between cognitive processes and cognitive operations
4. References

Abstract

The Perception-Action Loop provides a way to understand how the continuous outflow of motor movement changes the continuous inflow of sensory input in ways that carry meaningful information. Sensory input in its own right is information, as are the changes that occur between and within the inflow and outflow of information. This interaction of information is analog and remains analog: the computations performed on this information are largely undistorted and not carved into discrete digital bits or stages. Whether one is determining one's heading while walking, solving a difficult problem with a diagram, or having a conversation with another person, important movements of the limbs, eyes, and speech apparatus induce informative changes in the environment that are perceived by the self and others. The external portion of the Perception-Action Loop, where action directly influences sensation, is where many cognitive operations (to be distinguished from cognitive processes) take place, and it is also where the Perception-Action Loops of two or more people can become entrained to produce joint action, joint perception, and even joint cognition.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1929–1935

1. The nature of the Perception-Action Loop

The Perception-Action Loop refers to a circle of causal influence whereby an organism's actions (or even partial motor movements) change its perceptual input on a millisecond timescale, such that the organism's cognizing, action planning, movement execution, and resulting flow of incoming sensory input are all updated continuously in time (Neisser 1976). There are many canonical examples of the Perception-Action Loop: catching a fly ball, jumping rope, solving a metal puzzle with the aid of your hands, and so on. However, the (sometimes high-level) cognitive processes within this loop are far more pervasive than those simple examples might suggest. This chapter will highlight some examples of cognitive operations that emerge out of the Perception-Action Loop.

The idea of a continuous Perception-Action Loop has its intellectual roots in the 'reflex arc' concept from over a century ago. John Dewey (1896) famously critiqued the original version of the reflex arc as being too piecemeal in its treatment of the mind: "The sensory stimulus is one thing, the central activity, standing for the idea, is another thing, and the motor discharge, standing for the act proper, is a third. As a result, the reflex arc is not a comprehensive, or organic unity, but a patchwork of disjointed parts, a mechanical conjunction of unallied processes" (Dewey 1896: 137). Instead, Dewey urged the field to embrace the continuous flow from perception to cognition to action: "What is wanted is that sensory stimulus, central connections and motor responses shall be viewed, not as separate and complete entities in themselves, but as divisions of labor, functioning factors, within the single concrete whole, now designated the reflex arc" (Dewey 1896: 137). Half a century later, while assisting the Air Force in improving training for pilots landing planes, J. J. Gibson helped close the loop on that arc.
Gibson (1950) developed the concept of 'optic flow' as a specifically visuomotor example of a continuous Perception-Action Loop. Patterns of light are reflected off surfaces, textures, and objects in the environment and continuously flow across the retina as the organism moves through space. In the case of self-controlled movement, be it a cat stalking its prey or a pilot landing a plane, the temporal structure of retinal flow abides by physical laws of optics that unambiguously place the organism in one, and only one, place in the environment at a given point in time. That particular pattern of optic flow over a short period of time could not have been obtained from any other place in the environment, nor could any other trajectory of movement through the environment have produced exactly that optic flow. In the case of landing a plane, every millisecond of directed pressure of the pilot's hands on the control stick is directly yoked to the next millisecond of optic flow on the retinas. When actions are treated as continuous in time, and sensation is treated as continuous in time, the reflex arc turns into a continuous Perception-Action Loop in which there are no individuated stimuli and no individuated responses. Rather than cognition being an encapsulated module between a perception stage and an action stage, it may instead be a more abstract and adaptively functional process that emerges from sensorimotor circuitry.

Sensorimotor skills are learned in the course of early development and have been shown to scaffold into higher-level cognitive processes. Thelen et al. (2001) explored the relationship between the motoric components of children's reaching movements and their spatial memory for the locations of hidden objects. In the A-not-B task, young children often perseverate in reaching for a hidden toy at location A, where they have just obtained the toy many times in a row, despite having just been shown that the toy was hidden at location B (Piaget 1954). In the past, this perseveration was thought to be a purely memory-based error, but Thelen and colleagues showed that it has a strong spatial motor component. Simply introducing the motoric perturbation of adding weights to the child's arms (so that each reach is more effortful) significantly reduces the frequency of the A-not-B error, suggesting that the child is aware of the new location, but a motor trace is still strongly active for the old location and thus dominates when a reach is easier (Thelen et al. 2001). The same principle is witnessed with adults as well. Ballard, Hayhoe, and Pelz (1995) examined eye movement patterns in a block-copying task. The eye movements revealed a pattern of looking that relied heavily on referencing the model pattern at all stages of the copying process, instead of memorizing the block pattern first in its entirety. However, when the model block pattern was far enough away from the block-copying workspace that torso movements were required (not just eye movements), people suddenly began loading up internal working memory to aid their task performance (Ballard, Hayhoe, and Pelz 1995).
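The A-not-B account above, in which a still-active motor trace for A dominates when reaching is easy, can be caricatured in a few lines. This is a toy sketch, not Thelen et al.'s actual dynamic field model; the function, variable names, and numbers are all invented for illustration.

```python
# Toy caricature of the A-not-B result described above: the reach is a
# competition between a habitual motor trace favoring location A and the
# fresh perceptual cue for location B. Weighting the arms (more effortful
# reaching) is modeled here, purely for illustration, as damping the
# habit-driven term, so the cue for B wins.
def choose_reach(habit_A, cue_B, effort):
    # Higher reach effort attenuates the automatic, habit-driven drive.
    drive_A = habit_A / (1.0 + effort)
    return "A" if drive_A > cue_B else "B"

habit_A = 2.0     # motor trace built up over many rewarded reaches to A
cue_B = 1.5       # the toy was just hidden at B

print(choose_reach(habit_A, cue_B, effort=0.0))  # easy reach: perseverates to A
print(choose_reach(habit_A, cue_B, effort=1.0))  # weighted arms: reaches to B
```

The point of the caricature is only that the same cue for B can win or lose depending on how cheap the habitual reach is, which is the shape of Thelen and colleagues' finding.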
Relatively low-level sensorimotor routines of interacting with the environment are often sufficient to perform what we call cognitive operations, such as remembering where objects are and arranging them into complex patterns. And when those sensorimotor routines become more metabolically taxing, the organism may switch to relying somewhat more on internal neural cognitive processes. Nonetheless, the Perception-Action Loop is always making some form of contribution to the adaptive behavior of the organism, and thus internal cognitive processes must be able to communicate with perception and action effortlessly and unavoidably in order to facilitate goal-directed actions. In fact, the connection between internal cognitive processes and this Perception-Action Loop is so tightly coupled that cognition constantly cascades into actions, and therefore traces of cognition can be measured in almost any action the body can perform. This unavoidable leaking of cognition into everyday tasks, such as driving or simply reaching for a glass of water, leaves traces of our cognition in the environment, which we can measure and use to gauge the dynamics of an immensely complex and interactive system.
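Gibson's point that the optic flow field constrains the observer's heading can be illustrated with a toy computation. Under pure translation, flow vectors radiate from a single focus of expansion (FOE), so each flow vector v at image point p is parallel to (p - f) and contributes one linear constraint on the FOE position f. The sketch below uses synthetic data with an invented noise level; real flow estimation is far messier than this.

```python
# Toy illustration: recover the focus of expansion (FOE) from a radial
# optic flow field. Parallelism of v and (p - f) gives, per vector, the
# linear constraint  v_y*f_x - v_x*f_y = v_y*p_x - v_x*p_y,
# which we solve in the least-squares sense. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
true_foe = np.array([0.3, -0.2])             # hypothetical heading point

points = rng.uniform(-1, 1, size=(200, 2))   # image positions
flow = 0.5 * (points - true_foe)             # radial expansion pattern
flow += rng.normal(0, 0.01, size=flow.shape) # small measurement noise

# One parallelism constraint per flow vector, stacked into A f = b.
A = np.column_stack([flow[:, 1], -flow[:, 0]])
b = flow[:, 1] * points[:, 0] - flow[:, 0] * points[:, 1]
foe_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated FOE:", foe_est)             # close to (0.3, -0.2)
```

The recovered point matches the heading that generated the flow, which is the sense in which a given flow pattern "could not have been obtained from any other place in the environment."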

2. The location of cognitive operations

Cognition itself may in fact arise out of the interplay of perception-action cycling and learning from the environment. Supervised learning is a class of neural network modeling that has typically been criticized as positing that there must be explicit instruction, or symbolic teaching of a more distributed representation. However, this 'teacher' can instead be thought of as a natural environmental signal: direct perception of the outcome of one's own actions can act as the teacher and thereby update connections in the brain in order to shape behavior. This kind of "distal supervised learner" is characterized by Jordan and Rumelhart's (1992) forward model of motor learning (see also Kawato 1999) and demonstrated with an example of learning to throw darts. In a typical model, a supervised learning algorithm will contain a set of a priori variables that the model needs to attempt to learn. In Jordan and Rumelhart's work, the supervisor is reconceptualized as the physical visual target of a dart board: perception of the target and an understanding of the goal of darts is all that is required to drive the system's learning, but the model has the added advantage of having a supervisory goal instead of being unsupervised (having no specific goal) (Jordan and Rumelhart 1992). With each dart throw, the model uses the perceived result of where the dart landed to update the dynamics of the arm joints to improve the next throw (Jordan and Rumelhart 1992). In this way, the supervisor is actually an emergent property of the sensory stream of optical information and a cognitive process. In observing someone learning to play darts, the typical categorization of what is 'cognitive' would be the goal, the learning, and any strategy we might observe. However, these are not separate from the actions performed or the environment perceived. The recurrent aspect of the Perception-Action Loop is where the cognition happens. The internal cognitive processes and the external cognitive operations are both part of the Perception-Action Loop (see Fig. 149.1). This point is best illustrated by looking at the externalization of cognition into the body, a field sometimes referred to as embodied cognition. When some of cognition can be externalized or offloaded into the environment, the brain is free to do simpler, more efficient processing. For example, Kirsh and Maglio (1994) hypothesized that expert Tetris players would be extremely good at mental rotation, thereby requiring fewer of the button presses that rotate a block before dropping each piece into place.
What they found was quite the opposite: expert Tetris players pressed the rotation button as fast as possible, much more so than amateur players of the game (Kirsh and Maglio 1994). Thus, instead of using a strategy that comes with a high cognitive load (mental rotation), they externally rotated the object and simply perceptually matched the object's features to the arrangement below (Kirsh and Maglio 1994). The actions performed (pressing the button to rotate the block) and the perception of the results of those actions (the orientation of the block and the space needing to be filled) do much of what we would call cognition in this case: no mental rotation required.

Fig. 149.1: In the Perception-Action Loop, the flow of information smoothly transitions from exhibiting a perceptual character to exhibiting a cognitive character to exhibiting a motoric character, and then once again a cognitive character, as actions in the environment change the flow of sensory input in a manner that is equivalent to performing cognitive operations.

When an action is performed, it is also perceived: there is some form of sensory information (proprioceptive, visual, etc.) that accompanies every action. The nervous system is built in such a way that these signals can flow in parallel: afferent nervous pathways (toward the central nervous system) may be activated at the same time as efferent nervous pathways (away from the central nervous system). This autocatalytic loop can in itself foster cognition (Chemero 2009; Gibson 1979). Autocatalysis is a chemical reaction in which the product is also a catalyst of the reaction. In terms of behavior, think of the dart-throwing example again: the outcome of the dart throw becomes the catalyst for updating the dart-throwing motor parameters. This is in stark contrast to the typical feed-forward, stage-based account, which would posit an intermediary stage of cognitive processing that evaluates performance and then relays the output of that processing to the motor system. Instead, the motor and perceptual processes interact to give rise to a complex process that we then call cognition. Even abstract conceptual insights can be instigated by certain eye movement patterns on a diagram associated with an insight problem.
Grant and Spivey (2003) found that participants who were thirty seconds away from solving a diagram-based version of Karl Duncker's (1945) famous tumor-and-lasers radiation problem tended to make a characteristic pattern of eye movements on the diagram. By contrast, participants who were thirty seconds away from giving up on the problem exhibited significantly less of that eye-movement pattern (Grant and Spivey 2003). Importantly, this pattern of eye movements was not merely an indicator to the experimenters that a participant was about to solve the problem; it actually assisted the participant in arriving at the solution. In Grant and Spivey's second experiment, they subtly animated the diagram in a way that induced that pattern of eye movements, and the proportion of participants solving the problem doubled. Thomas and Lleras (2007) followed up this work with a secondary task in which people were explicitly instructed to move their eyes in a particular pattern across the diagram. People who moved their eyes in a pattern that produced saccade paths converging on the tumor exhibited a higher rate of finding the solution (Thomas and Lleras 2007). As one might expect, based on the overlap between attentional mechanisms and eye movement mechanisms, even when participants did not move their eyes at all, but were instead instructed to shift their attention covertly in this converging-lines pattern, performance on the insight problem was again improved (Thomas and Lleras 2009). Based on results like these, it may be useful to treat the Perception-Action Loop that couples the organism with its environment (Neisser 1976) not only as the place where perception and action take place, but also as the location of cognitive operations.
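The distal-supervision idea running through this section, in which the perceived outcome of an action serves as the teaching signal, can be reduced to a toy sketch. The linear "physics," the parameter names, and the learning rate below are invented for brevity; this is not Jordan and Rumelhart's actual network, only the shape of the learning loop they describe.

```python
# Minimal sketch of "distal supervised learning" as discussed above:
# no explicit teacher supplies the correct motor command. Instead, the
# perceived landing position of each throw is compared to the visual
# target, and that error updates the motor parameter. The linear toy
# forward model below stands in for the real arm/dart dynamics.
def land(theta):
    """Toy forward model: where the dart lands given motor parameter theta."""
    return 3.0 * theta - 1.0

target = 2.0          # the bullseye, specified perceptually
theta = 0.0           # initial motor parameter
lr = 0.05             # learning rate

for _ in range(200):
    error = land(theta) - target      # perceived outcome vs. perceived target
    theta -= lr * 3.0 * error         # gradient step through the forward model

print(round(land(theta), 3))          # prints 2.0: throws now hit the target
```

Nothing in the loop ever names a "correct" theta; the environment's feedback, routed through the forward model, does the supervising, which is exactly the sense in which the teacher is "an emergent property of the sensory stream."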

3. Continuity between cognitive processes and cognitive operations

Movements of the hands and eyes are not the only motor outputs we produce that enact changes in our environment. We also use speech, which can induce others to carry out actions. We say, "Let's go to the movies together," or "Can you pass the salt?" or "Get off my lawn!" Treating language as a form of action is not a new idea (Clark 1996; Searle 1965; see also Hutchins 1995). However, in the past, this 'Language-as-Action' perspective has tended not to use laboratory methods with the kind of millisecond timing that allows one to observe the Perception-Action Loop in all its glory. When Shockley, Santana, and Fowler (2003) recorded postural sway during conversation (by measuring fluctuations in the body's center of mass), they observed recurrent patterns in the sway that were correlated across two participants who were talking about the same thing. When those same two participants were conversing with different people about unrelated things, their postural sway patterns were no longer correlated (Shockley, Santana, and Fowler 2003). Evidently, when two people co-create their linguistic environment by delivering speech acts back and forth to each other, with reference to objects in their shared visual environment, their bodies become somewhat entrained with one another, even at the level of subtle fluctuations in the location of their centers of mass. In much the same way that self-motion changes how the environment impacts our senses, and moving objects around changes the way the environment impacts the senses of everyone in that environment, contributing words and sentences to the environment also changes how everyone within range of that communication thinks about their environment. For example, when two people are in separate rooms having an unscripted two-way conversation over headsets about a shared visual display, their eye movements become correlated, such that they are looking at the same parts of the display at almost exactly the same time (Richardson, Dale, and Kirkham 2007).
When a listener watches a person deliver a monologue, the listener’s brain activity patterns (as measured by EEG) are correlated with those of the speaker (Kuhlen, Allefeld, and Haynes 2012). And when two people discuss a joint decision about a perceptual event, coordination in their language use predicts improved accuracy in their joint perceptual task (Fusaroli et al. 2012). Language can thus be thought of as a kind of technological invention that allows a form of externalization of our thoughts, such that they become part of the environment, and therefore part of other people’s Perception-Action Loops (Clark 2003). Understanding the Perception-Action Loop can radically change how you think about the mind. When you reach for a coffee mug, each new millisecond of visual input changes what the motor cortex is doing for the next millisecond of shaping the hand in preparation for a simple grasping action. When you turn a metal puzzle around in your hands, each new millisecond of shift in the visual angle and each new millisecond of altered pressure on the fingers provide sensory feedback that changes the way you think about the possible solutions. When you look at a map, each new eye movement affects the next, and this process determines the sequencing of images that are fed into the planning process by which a travel route gets organized. And every word that comes out of your mouth concretizes the fuzzy concept that triggered it, allowing you to convert and combine vague ideas, through monologue and dialogue, into specific plans, agreements, coordinated actions, and social conventions. Examined in this way, it can be a bit of a shock to realize that so much of our cognition and our thinking – indeed, so much of who we are – is constructed outside of our brains rather than inside them.
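The coordination findings above rest on quantifying how one person’s time series (gaze position, postural sway) co-varies with another’s across a range of temporal lags. The cited studies used cross-recurrence analysis; as a simpler illustrative stand-in, a lagged cross-correlation conveys the same idea. This is a sketch, not the authors’ method; the function name and test signal are invented:

```python
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Normalized cross-correlation of two equal-length time series
    at integer lags from -max_lag to +max_lag. A peak at a positive
    lag means y trails x by that many samples."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag < 0:
            r[i] = np.mean(x[-lag:] * y[:n + lag])
        else:
            r[i] = np.mean(x[:n - lag] * y[lag:])
    return lags, r
```

Applied to two gaze-position series sampled at a common rate, a correlation peak at a positive lag of about two seconds would correspond to listener gaze trailing speaker gaze, in the spirit of the Richardson, Dale, and Kirkham (2007) analysis.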

149. Cognitive operations that take place in the Perception-Action Loop

4. References

Ballard, Dana H., Mary M. Hayhoe and Jeff B. Pelz 1995. Memory representations in natural tasks. Journal of Cognitive Neuroscience 7: 66–80.
Chemero, Anthony 2009. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Clark, Andy 2003. Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press.
Clark, Herbert H. 1996. Using Language. Cambridge: Cambridge University Press.
Dewey, John 1896. The reflex arc concept in psychology. Psychological Review 3: 357–370.
Duncker, Karl 1945. On Problem Solving. Psychological Monographs 58(5): i–113.
Fusaroli, Riccardo, Bahador Bahrami, Karsten Olsen, Andreas Roepstorff, Geraint Rees, Chris Frith and Kristian Tylén 2012. Coming to terms: Quantifying the benefits of linguistic coordination. Psychological Science 23(8): 931–939.
Gibson, James Jerome 1950. The Perception of the Visual World. Oxford: Houghton Mifflin.
Gibson, James Jerome 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Grant, Elizabeth and Michael Spivey 2003. Eye movements and problem solving: Guiding attention guides thought. Psychological Science 14(5): 462–466.
Hutchins, Edwin 1995. Cognition in the Wild. Cambridge, MA: MIT Press.
Jordan, Michael I. and David E. Rumelhart 1992. Forward models: Supervised learning with a distal teacher. Cognitive Science 16: 307–354.
Kawato, Mitsuo 1999. Internal models for motor control and trajectory planning. Current Opinion in Neurobiology 9(6): 718–727.
Kirsh, David and Paul Maglio 1994. On distinguishing epistemic from pragmatic action. Cognitive Science 18: 513–549.
Kuhlen, Anna K., Carsten Allefeld and John-Dylan Haynes 2012. Content-specific coordination of listeners’ to speakers’ EEG during communication. Frontiers in Human Neuroscience 6, Article 266.
Neisser, Ulric 1976. Cognition and Reality: Principles and Implications of Cognitive Psychology. San Francisco, CA: W.H. Freeman.
Piaget, Jean 1954. The Construction of Reality in the Child. New York: Basic Books.
Richardson, Daniel C., Rick Dale and Natasha Z. Kirkham 2007. The art of conversation is coordination: Common ground and the coupling of eye movements during dialogue. Psychological Science 18(5): 407–413.
Searle, John R. 1965. What is a speech act? In: Max Black (ed.), Philosophy in America, 221–239. London: George Allen and Unwin.
Shockley, Kevin, Marie-Vee Santana and Carol A. Fowler 2003. Mutual interpersonal postural constraints are involved in cooperative conversation. Journal of Experimental Psychology: Human Perception and Performance 29(2): 326–332.
Thelen, Esther, Gregor Schöner, Christian Scheier and Linda B. Smith 2001. The dynamics of embodiment: A field theory of infant perseverative reaching. Behavioral and Brain Sciences 24(1): 1–34.
Thomas, Laura E. and Alejandro Lleras 2007. Moving eyes and moving thought: On the spatial compatibility between eye movements and cognition. Psychonomic Bulletin and Review 14(4): 663–668.
Thomas, Laura E. and Alejandro Lleras 2009. Covert shifts of attention function as an implicit aid to insight. Cognition 111(2): 168–174.

Stephanie Huette, Memphis, TN (USA) Michael Spivey, Merced, CA (USA)



IX. Embodiment

150. Gesture and working memory

1. Introduction
2. Speech and gesture production and working memory
3. Speech and gesture perception and working memory
4. Encoding gesture in working memory
5. Potential mechanisms
6. Conclusion
7. References

Abstract

Hand gestures have been hypothesized to influence ongoing working memory processes for speakers and listeners. There is clear evidence that speakers’ demand on working memory is reduced when they gesture along with their speech, and that this effect is greatest for speakers with the smallest working memory capacity. Listeners may show a similar beneficial effect of gesture on working memory; at present, however, there is only indirect support for this hypothesis. Moreover, it is not known how gestures are stored in working memory or how they influence ongoing working memory processes. Gestures are likely to involve visual, spatial, or action representations in ongoing communication.

1. Introduction

Hand gestures clearly communicate information from speakers to listeners (Alibali, Flevares, and Goldin-Meadow 1997; Beattie and Shovelton 2002; Cook and Tanenhaus 2009; Driskell and Radtke 2003; Graham and Argyle 1975; Riseborough 1981; Valenzeno, Alibali, and Klatzky 2003). Yet, in addition to this direct communicative function, gestures influence speakers and listeners in other ways during communication. One proposal has been that gesture influences speakers’ and listeners’ use of working memory during communication. Working memory is a cognitive system for the temporary storage and manipulation of information during ongoing processing. It is generally considered to have limited capacity, although the nature of this capacity limit has been the subject of considerable debate. Baddeley and Hitch (1974) proposed a multi-component model of working memory that has been very successful and that has been adopted in the study of gesture. In this model, working memory includes at least two subsystems, the phonological loop and the visuospatial sketchpad, controlled by a central executive. The capacity of one’s working memory is known to relate to performance in language processing, both production (Daneman and Green 1986; Hartsuiker and Barkhuysen 2006; Power 1985) and comprehension (Daneman and Green 1986; Just and Carpenter 1992), as well as reading (Daneman and Carpenter 1980; Gathercole et al. 2004). Speakers rely on their working memory to speak fluently. When speakers are asked to maintain information in working memory during communication, they produce shorter, less elaborate sentences, they pause more (Jou and Harris 1992; Kemper, Herman, and Lian 2003; Power 1985), and they make more errors in subject–verb agreement (Hartsuiker and Barkhuysen 2006).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1936–1942


Resource-based accounts of cognition, such as working memory, stipulate that, because of competition for resources, doing two tasks will generally be more difficult than doing one. However, combining behaviors can sometimes yield facilitation beyond performing one of the behaviors in isolation. For example, maintaining information in both a spoken and a visual form places less demand on working memory than maintaining it in a single format (Goolkasian and Foos 2005). Gesturing along with speech may similarly reduce demand on working memory during communication, for speakers (Goldin-Meadow et al. 2001; Wagner, Nusbaum, and Goldin-Meadow 2004) and potentially for listeners as well. Indeed, when speakers are restricted from gesturing, they produce shorter, less elaborate sentences and pause more, particularly for speech with spatial content (Graham and Heywood 1975; Rauscher, Krauss, and Chen 1996). These changes in speech resemble those reported when speakers are placed under working memory load, and suggest that producing gestures while speaking may help speakers manage demand on working memory.

2. Speech and gesture production and working memory

More direct tests of the hypothesis that gesture influences speakers’ demand on working memory have used a dual-task paradigm to examine how gesturing versus not gesturing affects demand for working memory resources. In these studies, participants are asked to maintain information in mind while explaining a task to an experimenter. After the explanation, the information maintained in working memory is recalled. The amount of information recalled can be used as an index of the demand on working memory during the explanation task: when the explanation requires greater demand on working memory, participants should perform less well on the secondary working memory task. In general, because participants perform better on secondary working memory tasks when they gesture during their explanation, findings using the dual-task paradigm suggest that gesturing can decrease demand on working memory during speaking (Goldin-Meadow et al. 2001; Wagner, Nusbaum, and Goldin-Meadow 2004). This facilitatory effect of gesturing along with speech has been observed across tasks, across types of gestures, and across speakers of different ages. Nine- to ten-year-old children show the effect when explaining mathematical equivalence problems, and adults show it when explaining mathematical factoring problems (Goldin-Meadow et al. 2001; Wagner, Nusbaum, and Goldin-Meadow 2004). Both of these math tasks primarily elicit deictic gestures. Seven- and eight-year-old children also show the effect when explaining Piagetian conservation tasks, which elicit primarily iconic gestures (Ping and Goldin-Meadow 2010). The facilitatory effect also appears independent of the nature of the working memory resources, although the evidence for this is less robust than the generalization across tasks and ages.
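The dual-task logic can be made concrete with a toy computation: recall on the secondary memory task indexes the working memory left over from the primary explanation task. All numbers below are invented for illustration and are not data from the cited studies:

```python
# Hypothetical recall data: proportion of memorized items correctly
# recalled after each explanation (all values invented for illustration).
recall = {
    "gesture":    [0.80, 0.75, 0.90, 0.85, 0.70],
    "no_gesture": [0.60, 0.55, 0.70, 0.65, 0.50],
}

def mean(xs):
    return sum(xs) / len(xs)

# Higher recall on the secondary task is read as lower working-memory
# demand during the primary explanation task.
benefit = mean(recall["gesture"]) - mean(recall["no_gesture"])
print(round(benefit, 2))  # -> 0.2
```

A positive difference, as in the reported studies, is interpreted as gesturing having freed working memory resources during the explanation.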
Wagner, Nusbaum, and Goldin-Meadow (2004) reported that adults explaining math factoring problems showed a working memory benefit for both verbal material (remembering letters) and spatial material (remembering grid patterns). This finding suggests that speakers cannot simply be shifting working memory demand from one store (verbal) to the other (visuospatial). The effect is also observed regardless of whether the presence or absence of gesture is a natural, spontaneous occurrence or the result of experimental instructions (Cook, Yip, and Goldin-Meadow 2012). This finding eliminates the potential confounds that speakers gesture when demand on working memory is low, or that something that leads speakers to gesture also leads to reduced demand on working memory. Instead, it seems that it is truly the act of producing meaningful movements along with speech that is responsible for decreasing speakers’ demand on working memory. One possible explanation for these findings is that movement in general facilitates speech production. However, the facilitatory effect of gesturing on working memory is related to the meaning expressed in gesture, rather than simply emerging as a consequence of moving while speaking (Cook, Yip, and Goldin-Meadow 2012; Ping and Goldin-Meadow 2008, 2010; Wagner, Nusbaum, and Goldin-Meadow 2004). For example, when speakers are asked to produce circular arm movements while speaking, they do not show a benefit on the secondary working memory task, even though they produce these movements with similar rate and timing as spontaneous gesture (Cook, Yip, and Goldin-Meadow 2012). Thus, moving in a meaningful way along with speech, which by definition is gesturing, is necessary for speakers to reduce their demand on working memory. The findings from dual-task paradigms therefore lead to the conclusion that combining gesture with speech decreases speakers’ demand on working memory resources during communication.

A second approach to examining the role of working memory in gesture production has been to link gesture production to an individual’s working memory capacity. As noted above, the capacity of one’s working memory can generally be linked to language production. With respect to gesture, working memory capacity appears to be related to the facilitatory effect of gesture seen in dual-task paradigms.
When working memory capacity is assessed separately from performance on the working memory task used in dual-task paradigms, individuals with lower working memory capacity show a greater working memory benefit associated with gesturing during an explanation (Cook 2005; Marstaller and Burianová 2013). Thus, producing gesture seems to decrease demands on working memory for speakers, particularly for speakers with less working memory capacity. If gesture can reduce demand on working memory, we might expect speakers to use gesture as a strategic resource for managing that demand while speaking. There is some evidence for this hypothesis. First, speakers gesture more when they are required to maintain to-be-communicated spatial information in working memory than when this material is available in the immediate environment (de Ruiter 1998; Morsella and Krauss 2004; Wesp et al. 2001). Second, speakers appear to gesture more when conceptual (and presumably working memory) demands are higher, particularly demands on spatial conceptualization (Hostetter, Alibali, and Kita 2007; Kita and Davies 2009; Melinger and Kita 2007). For example, Morsella and Krauss (2004) found that speakers gestured at a higher rate when describing pictures that were hard to describe and/or hard to code verbally. Alibali, Kita, and Young (2000) found that children produced more meaningful gestures when engaged in a conceptual task rather than a description task, with the lexical demands of the two tasks equated. However, none of these studies used material that was uniquely difficult with respect to working memory, so it is not clear that these findings offer clear support for a strategic use of gesture when demand on working memory is high.

3. Speech and gesture perception and working memory

Although it is known that working memory is involved in language perception, and that gestures influence listeners during language perception, there has been less examination of the function of gesture in relation to listeners’ working memory processes. There have not been any dual-task studies of listener processing, nor studies linking the effect of gesture on listeners to listeners’ working memory capacity. Instead, most research on gesture perception has focused on whether listeners detect information in gesture, rather than on how listeners’ processing is influenced by the presence of gesture. Listeners do show better immediate recall of verbal material that has been presented with gesture (Thompson 1995), which suggests that this material may have been more readily encoded into working memory. Moreover, this effect is particularly pronounced in children, who may have fewer working memory resources available (Thompson, Driscoll, and Markson 1998). As with the effect of gesture on working memory processes in speakers, the effect of gesture on listeners’ immediate memory depends on the meaning encoded in gesture. Specifically, listeners benefit more from gestures that express meaning than from gestures that do not (Feyereisen 2006; Thompson, Driscoll, and Markson 1998).

4. Encoding gesture in working memory

It is not clear how gestural information is stored and/or maintained in working memory. Gestures may be directly stored in speakers’ and/or listeners’ working memory, or may indirectly influence information that is stored there. Although working memory was originally conceived of as an amodal system for the maintenance of information, there is now considerable evidence that coding in working memory includes sensory and motor aspects of represented entities (Baddeley 1986; Wilson 2001). Linguistic material is clearly stored in working memory in a phonological format, because phonological properties of the to-be-stored material influence capacity, for both spoken (Baddeley 1986) and signed (Wilson and Emmorey 1997, 1998, 2003) languages. It is possible that gestures are based in an action code in working memory. At least some gestures appear to be generated from speakers’ motor representations (Cook and Tanenhaus 2009; Hostetter and Alibali 2008). Speakers’ prior motor experience influences the form of the gestures they produce, as well as the information listeners extract from these gestures (Cook and Tanenhaus 2009). In addition, speakers produce more representational gestures for material that they have produced with their own actions (Hostetter and Alibali 2010), and for material with motor imagery compared with visual imagery (Feyereisen and Havard 1999). There has been some work examining how actions are stored in working memory. Memory for hand and arm actions appears distinct from spatial memory (Smyth and Pendleton 1989; Smyth, Pearson, and Pendleton 1988). Wood (2007) directly examined how action information is stored in working memory and found that working memory for actions has a limited capacity of two to three actions and is independent of object and spatial working memory. Actions appear to be stored as integrated action representations, because participants could maintain as many complex, multipart, longer actions as single, simple actions (Wood 2007). Because participants can maintain both objects and actions, and both locations and actions, in working memory at the same time, working memory for actions appears independent of visual and spatial working memory (Wood 2007).
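Capacity estimates like Wood’s come from change-detection performance. The text does not specify the estimator, but capacity in such tasks is conventionally computed with Cowan’s K, K = N × (hit rate − false-alarm rate), for set size N. A minimal sketch, with all performance numbers invented for illustration:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Estimate working-memory capacity from change-detection
    performance using Cowan's K: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical action change-detection performance at set size 4:
k = cowan_k(set_size=4, hit_rate=0.85, false_alarm_rate=0.20)
print(round(k, 2))  # -> 2.6
```

An estimate between two and three items would be consistent with the capacity for actions reported by Wood (2007).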


5. Potential mechanisms

Several hypotheses have been proposed to explain the interaction of gesture and working memory. One possibility is that maintaining a gestured representation of a to-be-articulated entity or relation may lessen or eliminate the need to maintain a concurrent verbal representation of this material (e.g., Wagner, Nusbaum, and Goldin-Meadow 2004). Speakers may be able to maintain some aspects of the to-be-articulated explanation in one form and different aspects in the alternative form, and then use the available information to generate a more complete representation as needed during communication. One specific way in which information could be distributed across working memory systems is via the different articulators that are used. Entities which are more similar in articulation are more difficult to maintain simultaneously in working memory, and this effect is independent of the nature of the articulation (speech or sign language) (Wilson and Emmorey 1997). Representation in both speech and gesture may help reduce interference among to-be-articulated items in working memory by allowing encoding across multiple articulators (hand and mouth). If so, other articulators would be expected to show a similar effect, although hand and mouth offer a number of degrees of freedom that is not matched by other articulators. Alternatively, the act of gesturing may allow speakers to reduce demand on working memory by replacing resource-intensive representations with indexical representations of external objects (Ballard et al. 1997). On this account, gesture may function to offload some demand on working memory onto the available environment. However, this explanation cannot completely account for the observed effect of gesture on working memory, because when speakers produce iconic gestures in neutral space, the gestures cannot serve an indexical function, and speakers still show an associated working memory benefit (Ping and Goldin-Meadow 2010). More generally, coding in gesture might be more efficient with respect to particular characteristics of the to-be-communicated information. For example, the math and conservation problems used in much of the work relating gesture and working memory rely on location and on spatial and shape information. Gestures may enable particularly efficient coding of these and other features, while speech, in contrast, might be more efficient at coding categorical relations among entities.

6. Conclusion

It is clear that gesture can facilitate speakers’ working memory processes. It is less clear how gesture influences listeners’ working memory processes. To truly understand the influence of gesture on working memory, we will need to improve our understanding of action encoding in working memory, as well as our understanding of how listeners process gesture.

7. References

Alibali, Martha W., Lucia Flevares and Susan Goldin-Meadow 1997. Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology 89(1): 183–193.
Alibali, Martha W., Sotaro Kita and Amanda Young 2000. Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes 15(6): 593–613.

Baddeley, Alan D. 1986. Working Memory. Oxford: Oxford University Press.
Baddeley, Alan D. and Graham Hitch 1974. Working memory. In: Gordon H. Bower (ed.), The Psychology of Learning and Motivation: Advances in Research and Theory, Volume 8, 47–89. New York: Academic Press.
Ballard, Dana H., Mary Hayhoe, Polly Pook and Rajesh Rao 1997. Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences 20(4): 723–767.
Beattie, Geoffrey and Heather Shovelton 2002. An experimental investigation of some properties of individual iconic gestures that mediate their communicative power. British Journal of Psychology 93(2): 179–192.
Cook, Susan W. 2005. Gesture, movement and working memory: A functional account. Ph.D. dissertation, Department of Psychology, University of Chicago.
Cook, Susan W., Terina K. Yip and Susan Goldin-Meadow 2012. Gestures, but not meaningless movements, lighten working memory load when explaining math. Language and Cognitive Processes 27(4): 594–610.
Cook, Susan W. and Michael Tanenhaus 2009. Embodied communication: Speakers’ gestures affect listeners’ actions. Cognition 113(1): 98–104.
Daneman, Meredyth and Patricia Carpenter 1980. Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior 19(4): 450–466.
Daneman, Meredyth and Ian Green 1986. Individual differences in comprehending and producing words in context. Journal of Memory and Language 25(1): 1–18.
de Ruiter, Jan-Peter 1998. Gesture and speech production. Ph.D. dissertation, Faculty of Social Sciences, Catholic University of Nijmegen.
Driskell, James and Paul Radtke 2003. The effect of gesture on speech production and comprehension. Human Factors 45(3): 445–454.
Feyereisen, Pierre 2006. Further investigation on the mnemonic effect of gestures: Their meaning matters. European Journal of Cognitive Psychology 18(2): 185–205.
Feyereisen, Pierre and Isabelle Havard 1999. Mental imagery and production of hand gestures while speaking in younger and older adults. Journal of Nonverbal Behavior 23(2): 153–171.
Gathercole, Susan, Susan Pickering, Camilla Knight and Zoe Stegmann 2004. Working memory skills and educational attainment: Evidence from national curriculum assessments at 7 and 14 years of age. Applied Cognitive Psychology 18(1): 1–16.
Goldin-Meadow, Susan, Howard Nusbaum, Spencer Kelly and Susan Wagner 2001. Explaining math: Gesturing lightens the load. Psychological Science 12(6): 516–522.
Goolkasian, Paula and Paul Foos 2005. Bimodal format effects in working memory. American Journal of Psychology 118(1): 61–77.
Graham, Jean and Michael Argyle 1975. A cross-cultural study of the communication of extraverbal meaning by gestures. International Journal of Psychology 10(1): 57–67.
Graham, Jean A. and Simon Heywood 1975. The effects of elimination of hand gestures and of verbal codability on speech performance. European Journal of Social Psychology 5(2): 189–195.
Hartsuiker, Robert and Pashiera Barkhuysen 2006. Language production and working memory: The case of subject-verb agreement. Language and Cognitive Processes 21(1–3): 181–204.
Hostetter, Autumn and Martha W. Alibali 2008. Visible embodiment: Gestures as simulated action. Psychonomic Bulletin and Review 15(3): 495–514.
Hostetter, Autumn and Martha W. Alibali 2010. Language, gesture, action! A test of the Gesture as Simulated Action framework. Journal of Memory and Language 63(2): 245–257.
Hostetter, Autumn, Martha W. Alibali and Sotaro Kita 2007. I see it in my hands’ eye: Representational gestures reflect conceptual demands. Language and Cognitive Processes 22(3): 313–336.
Jou, Jerwen and Richard Harris 1992. The effect of divided attention on speech production. Bulletin of the Psychonomic Society 30(4): 301–304.
Just, Marcel and Patricia Carpenter 1992. A capacity theory of comprehension: Individual differences in working memory. Psychological Review 99(1): 122–149.


Kemper, Susan, Ruth Herman and Cindy Lian 2003. The costs of doing two things at once for young and older adults: Talking while walking, finger tapping, and ignoring speech or noise. Psychology and Aging 18(2): 181–192.
Kita, Sotaro and Stephen T. Davies 2009. Competing conceptual representations trigger co-speech representational gestures. Language and Cognitive Processes 24(5): 761–775.
Marstaller, Lars and Hana Burianová 2013. Individual differences in the gesture effect on working memory. Psychonomic Bulletin and Review 20(3): 496–500.
Melinger, Alissa and Sotaro Kita 2007. Conceptualisation load triggers gesture production. Language and Cognitive Processes 22(4): 473–500.
Morsella, Ezequiel and Robert Krauss 2004. The role of gestures in spatial working memory and speech. American Journal of Psychology 117(3): 411–424.
Ping, Raedy and Susan Goldin-Meadow 2008. Hands in the air: Using ungrounded iconic gestures to teach children conservation of quantity. Developmental Psychology 44(5): 1277–1287.
Ping, Raedy and Susan Goldin-Meadow 2010. Gesturing saves cognitive resources when talking about nonpresent objects. Cognitive Science 34(4): 602–619.
Power, Mick 1985. Sentence production and working memory. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology 37(3): 367–385.
Rauscher, Frances, Robert Krauss and Yihsiu Chen 1996. Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science 7(4): 226–231.
Riseborough, Margaret 1981. Physiographic gestures as decoding facilitators: Three experiments exploring a neglected facet of communication. Journal of Nonverbal Behavior 5(3): 172–183.
Smyth, Mary and Lindsey Pendleton 1989. Working memory for movements. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology 41(2): 235–250.
Smyth, Mary, Norma Pearson and Lindsey Pendleton 1988. Movement and working memory: Patterns and positions in space. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology 40(3): 497–514.
Thompson, Laura 1995. Encoding and memory for visible speech and gestures: A comparison between young and older adults. Psychology and Aging 10(2): 215–228.
Thompson, Laura, Donna Driscoll and Lori Markson 1998. Memory for visual-spoken language in children and adults. Journal of Nonverbal Behavior 22(3): 167–187.
Valenzeno, Laura, Martha W. Alibali and Roberta Klatzky 2003. Teachers’ gestures facilitate students’ learning: A lesson in symmetry. Contemporary Educational Psychology 28(2): 187–204.
Wagner, Susan, Howard Nusbaum and Susan Goldin-Meadow 2004. Probing the mental representation of gesture: Is handwaving spatial? Journal of Memory and Language 50(4): 395–407.
Wesp, Richard, Jennifer Hesse, Donna Keutmann and Karen Wheaton 2001. Gestures maintain spatial imagery. American Journal of Psychology 114(4): 591–600.
Wilson, Margaret 2001. The case for sensorimotor coding in working memory. Psychonomic Bulletin and Review 8(1): 44–57.
Wilson, Margaret and Karen Emmorey 1997. A visuospatial “phonological loop” in working memory: Evidence from American Sign Language. Memory and Cognition 25(3): 313–320.
Wilson, Margaret and Karen Emmorey 1998. A “word length effect” for sign language: Further evidence for the role of language in structuring working memory. Memory and Cognition 26(3): 584–590.
Wilson, Margaret and Karen Emmorey 2003. The effect of irrelevant visual input on working memory for sign language. Journal of Deaf Studies and Deaf Education 8(2): 97–103.
Wood, Justin 2007. Visual working memory for observed actions. Journal of Experimental Psychology: General 136(4): 639–652.

Susan Wagner Cook, Iowa (USA)


151. Body movements in robotics

1. Introduction
2. Types of robot and types of body movement
3. Motion generation for robots
4. Final remarks
5. References

Abstract

The article on body movements in robotics describes why body movements are a relevant object of interest for research on robots and categorizes different types of movement based on their functional purpose and intent. Following an overview of robot appearances and application areas, a focus is placed on robotic platforms that are recognized as “social robots” and which may come in anthropomorphic, zoomorphic, caricatured, or functional designs. Body movements are discussed with respect to locomotion, manipulative movement, and expressive movement, with an emphasis on movements that express communicative functions, such as gesture and facial expression. With respect to motion generation, off-line methods employing hard-coded action sequences or pre-recorded motions are contrasted with on-line methods which may be based on imitation learning or which transfer multimodal motion scheduling from virtual humanoid agents to humanoid robots. In particular, research challenges with regard to motor control for arbitrary, expressive hand-arm movement and its coordination with other interaction modalities are discussed.

1. Introduction

Robotics, the engineering science and technology of robots, is a highly interdisciplinary field that brings together mechanical, electrical, and software engineering with areas like motion science and human-machine interaction. With roots dating back to mechanical devices in antiquity that resembled the appearance and mimicked the behavior of natural living beings, the field had become a thriving research area by the beginning of the twenty-first century. While industrial robots with functional layouts and production-specific purposes dominated from the late 1950s, remarkable advances made since then have led to great diversity in the mechanical design of robots and in the range of robotics applications. Today’s robot appearances range from legged robots and walking machines, pet robots that move like living creatures, and face robots that express emotions, to full-blown humanoid or even android robots that resemble the upper torso or the full body of humans. Some are capable of interacting with humans or learning from humans by imitation. Robotics applications range widely and include office and museum attendants, toys and entertainment devices, household and service robots, route guides, educational robots, robots for elderly assistance, therapy and rehabilitation, and more.

Humanoid robots in particular have attracted increasing attention. The term “humanoid” was coined by the Japanese robotics pioneer Ichiro Kato of Waseda University in Tokyo to denote a human-shaped robot mimicking human-like movements and functions (Matsusaka 2008). Many roboticists believe that in the not-too-distant future humanoid robots could become social companions and interact with humans in their daily lives, and a number of industrial companies produce and sell robots tailored to human needs. While the creation of artificial beings that assist humans in their living space or work environment is a major engineering effort, the other objective in creating such robots is scientific: in order to develop robots with human-like functions, researchers need to understand such functions in detail, for example, how body movements are produced and how they can be related to conveying communicative intent. Thus robots are also being used as platforms for research on human communication, anthropomorphism, and embodiment. Although a major branch of current robotics research is dedicated to the design of more lifelike or even human-like robots, a remaining challenge is the generation of smooth and efficient body movements. For instance, it has been proposed that robots’ human-like bodies should enable people to intuitively understand their body language and to behave as if they were communicating with humans (Kanda et al. 2003).

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1943–1948

2. Types of robot and types of body movement

In the following we give an overview of the state of the art of robotic body movements. Our discussions will focus on robotic platforms that are recognized as “social robots”, which purposefully interact socially both with each other and, more importantly, with humans. A wide range of application areas for socially interactive robots has been identified, and such robots have been classified into four broad categories with regard to their design and outer appearance (Fong, Nourbakhsh, and Dautenhahn 2003; examples added for illustration):

– Anthropomorphic: The robot’s body is structurally and functionally similar to a human body (e.g., the Honda humanoid robot ASIMO, android robots); this can potentially elicit expectations from the human interaction partner and attributions to the robot which exceed its actual capabilities.

– Zoomorphic: By imitating living creatures such as dogs (e.g., Sony Aibo), cats (e.g., Philips iCat), a dinosaur (Pleo), or a seal (Paro), the objective typically is to establish a human-creature relationship which, in contrast to anthropomorphic design, does not evoke as high expectations on the human’s side.

– Caricatured: Based on findings in computer animation indicating that the appearance of a character does not have to be realistic to be perceived as believable, caricatured design (e.g., Flobi) helps to mitigate unreasonable expectations and can potentially direct attention to or distract from specific features of the robot.

– Functional: The features of the robot are designed in a way that illustrates the tasks the robot can perform, allowing the user to recognize both its capabilities and limitations; such a design is preferably used for service robots (e.g., ARMAR, BIRON).

Given this classification of robot design, the body movements exhibited by different types of robots vary depending on the category the robot belongs to and the actual design of the robot.
Robotic body movements can be further subdivided into different types of movement, which can occur across all the design categories listed above. A minimal breakdown encompasses the following movement types, which fundamentally differ with regard to their functional purpose and intent:

– Locomotion: Any type of motion that is intended to change the current location of the moving body. In current robotic systems, this is typically achieved via multi-legged walking, crawling, rolling, swimming, or flying.

– Manipulative movement: Any type of robotic movement that physically perturbs or changes the state of the robot’s environment, mostly accomplished via object manipulation (e.g., grasping or moving objects, playing musical instruments).

– Expressive movement: Any type of motion serving to express, utter, or communicate. This includes non-verbal behavior (e.g., body language such as communicative arm gestures, gaze, or facial expression) as well as verbal behavior if it induces motion (e.g., lip movements).

Note that these movement classes are not mutually exclusive; for example, locomotion might also result in a manipulative movement. Combinations of movement classes are also possible, for example when handling a physical object while simultaneously walking or gesturing. While movements falling into the first two classes mainly serve non-communicative functions, the third class describes movement behaviors that a robot may use to express itself. Since the focus of this handbook lies on communication and its relationship to body and language, we put an emphasis on the third movement class, i.e., expressive movement, in the following. If a robot is designed to interact socially with humans, it must generate and display human-like social communicative cues as part of its expressive behavior (Breazeal and Scassellati 1999). Ideally, it will use its body effectively to perform communicative tasks in a human environment. Non-verbal behaviors along with speech are primary candidates for extending the communicative capabilities of social robots as well as making them appear more lifelike.
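The non-exclusiveness of these classes can be made concrete in a short sketch (the class and function names below are ours, purely for illustration):

```python
from enum import Flag, auto

class MovementType(Flag):
    """The three movement classes; one motion may carry several flags."""
    LOCOMOTION = auto()    # changes the location of the moving body
    MANIPULATIVE = auto()  # perturbs the state of the environment
    EXPRESSIVE = auto()    # serves to express or communicate

def classify(changes_location, perturbs_environment, communicates):
    """Assign every applicable class to one observed movement."""
    result = MovementType(0)
    if changes_location:
        result |= MovementType.LOCOMOTION
    if perturbs_environment:
        result |= MovementType.MANIPULATIVE
    if communicates:
        result |= MovementType.EXPRESSIVE
    return result

# Carrying an object while walking combines two classes.
carry_while_walking = classify(True, True, False)
```

Modeling the classes as combinable flags, rather than as a single label, directly reflects the non-exclusive nature of the taxonomy.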
In such a manner, the robot can convey intentionality, suggesting to the human interaction partner that the robot has internal states, communicative intent, beliefs, and desires (Dautenhahn 1997). In an effort to endow robots with natural and believable behaviors, various approaches to expressive robotic body movements are dedicated to the generation of “conceptual” body movements and their coordination with other modalities. For example, when used in an environment that requires rich conversational skills, such as a tutoring situation, the robot needs to combine verbal and non-verbal communicative behavior, e.g., speech and hand/arm gesture, and synchronize the two modalities appropriately. In a similar fashion, dancing or orchestra-conducting robots need to synchronize their body movements with the rhythm and timing of the accompanying music, which represents the concurrent modality in such a scenario.

Equipping robots with the ability to produce multimodal behavior, particularly when it serves a communicative purpose, represents an important step towards improved efficiency and quality of human-robot interaction. However, for a robot required to generate multimodal output such as speech and gesture, fine synchronization of two or more modalities poses a major challenge. In many existing approaches, synchronization of different modalities is either achieved only approximately or by solely adapting one modality to another. Given the limitations of robotic platforms, as imposed for example by motor velocity limits, these approaches may prove to be insufficient, and the need for mutual adaptation mechanisms becomes evident.
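A minimal sketch of such mutual adaptation between speech and gesture timing might look as follows (the scheduling policy and all numbers are an illustrative simplification of ours, not an algorithm from the cited literature; times are in seconds):

```python
def schedule_stroke(stroke_distance_rad, max_joint_velocity,
                    affiliate_onset, preparation_start):
    """Return (stroke_start, speech_delay) so that the gesture stroke
    can coincide with the affiliated word despite velocity limits."""
    min_stroke_time = stroke_distance_rad / max_joint_velocity
    available = affiliate_onset - preparation_start
    if available >= min_stroke_time:
        # The gesture adapts: start the stroke just in time.
        return affiliate_onset - min_stroke_time, 0.0
    # The robot cannot move faster, so the speech side adapts:
    # the affiliated word is delayed by the remaining difference.
    return preparation_start, min_stroke_time - available

# A 1.2 rad stroke at max 2.0 rad/s needs 0.6 s, but the word
# would start after 0.5 s: speech is delayed by about 0.1 s.
start, delay = schedule_stroke(1.2, 2.0, affiliate_onset=0.5,
                               preparation_start=0.0)
```

The key point is that adaptation can flow in both directions: when the motor system can meet the deadline, the gesture adapts; when it cannot, the speech timing yields.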



3. Motion generation for robots

The generation of body movement for robots can be achieved in either an off-line or an on-line manner. Off-line approaches are highly controllable and easier to implement; however, they lack flexibility when used in interaction scenarios. One simple off-line method is hard-coding action sequences based on motor primitives, using either forward or inverse kinematics to control individual joints. Each posture interpolating a trajectory is pre-defined and exactly calculated in advance. Such manually pre-programmed movements are often applied for demonstration purposes or for tele-operated robots, which are typically used in Wizard-of-Oz scenarios (Steinfeld et al. 2009) in human-robot interaction studies.

As a step towards more easily programmable robots, transferring motion from a living demonstrator (human or animal) to synthesize natural movements for a robot body has become a standard alternative to manual programming. When used as an off-line method, this approach relies on pre-recorded motion data, typically collected via a marker-based capture system, which is subsequently retargeted to the kinematic model of the robot. To solve the so-called correspondence problem (i.e., mapping movements performed by the demonstrating human body to the reproducing robotic body, given the physical differences between the two) and to meet constraints imposed by the kinematic target model, adjustments can be made by editing the captured movements based on optimization algorithms. This allows for the generation of highly realistic movements, albeit at the cost of highly time-consuming post-processing of the captured motion data. Moreover, off-line methods using pre-recorded motions act on the assumption that the environment is static; thus they neither account for dynamic changes that can occur, nor do they provide for sensory feedback signaling the robot’s current state. This ultimately results in a lack of robustness to uncertainties in the environment.
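The core of off-line retargeting can be illustrated with a deliberately crude sketch: clamping each captured frame to the robot's joint limits stands in for the optimization-based motion editing described above (all values are invented):

```python
def retarget(captured_angles, joint_limits):
    """Map captured human joint-angle frames onto a robot's kinematic
    model by clamping each value to the robot's joint limits."""
    return [[max(lo, min(hi, angle))
             for angle, (lo, hi) in zip(frame, joint_limits)]
            for frame in captured_angles]

# A human shoulder reaches 2.1 rad, but the robot joint stops at 1.8.
motion = [[0.0, 0.5], [1.0, 1.2], [2.1, 1.5]]
limits = [(-1.8, 1.8), (-1.6, 1.6)]
robot_motion = retarget(motion, limits)  # last frame becomes [1.8, 1.5]
```

Real retargeting additionally preserves trajectory smoothness and end-effector positions, which is precisely why the optimization-based post-processing mentioned above is so time-consuming.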
For this reason, and in consideration of human-robot interaction taking place in rich dynamic environments, interest in on-line and, ideally, marker-free motion transfer has been gaining momentum (Dariush et al. 2008). Learning from demonstration, also referred to as imitation learning, allows for a simplified process of programming complex motions, especially for humanoid robots. By merely showing the robot a task demonstrated by a human teacher – either relying on markers placed on anatomical landmarks or utilizing a markerless system, e.g., time-of-flight depth cameras or the Microsoft Kinect – the robot is able to subsequently reproduce the motion performed by the human. Other work goes a step further and extends the approach by not only attempting to replicate motion based on observation, but also by trying to understand the goal of a demonstrated action, focusing on the intention of the teacher. For a comprehensive overview of robot programming by demonstration see Billard et al. (2008).

Another approach to the generation of communicative body movement for humanoid robots builds upon the experience gained from the development of action generation frameworks used for virtual humanoid agents. While generating expressive behavior such as gesture, especially together with synchronized speech, is a recent development in robotics, it has been addressed in various ways within the domain of embodied conversational agents. An on-line approach for speech and gesture generation with the Honda humanoid robot (Salem et al. 2012) builds on the Articulated Communicator Engine originally developed and used for the virtual agent Max (see next chapter). The framework takes into account the meaning conveyed in non-verbal utterances by coupling the planning of both content and form across the two modalities of gesture and speech. However, when transferring the concept of a multimodal scheduler from the domain of embodied conversational agents to actual robots, physical constraints such as joint and velocity limits become a challenge. As a consequence, strategies to handle these constraints need to be explicitly addressed. In general, synchronization of concurrent modalities has proven to be more difficult in the significantly more complex domain of robots.
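One simple strategy for handling such velocity limits, namely uniformly stretching a movement planned for a virtual agent until it becomes feasible on the robot, can be sketched as follows (an illustration of the general idea only, not the cited system's actual scheduler):

```python
def rescale_duration(keyframes, duration, max_velocity):
    """Stretch a planned movement so that no joint exceeds the robot's
    velocity limit. keyframes: joint-angle vectors, uniformly spaced
    in time over the given duration (seconds)."""
    dt = duration / (len(keyframes) - 1)
    peak = max(abs(b - a)
               for f0, f1 in zip(keyframes, keyframes[1:])
               for a, b in zip(f0, f1)) / dt
    if peak <= max_velocity:
        return duration                      # feasible as planned
    return duration * peak / max_velocity    # uniform time stretch

# A 0.5 s gesture that would need 3.0 rad/s on a 2.0 rad/s joint
# is stretched to 0.75 s.
new_duration = rescale_duration([[0.0], [1.5]], 0.5, 2.0)
```

A scheduler that uses such a feasibility check can then propagate the new duration back to the speech synthesizer, which is one way the mutual adaptation discussed above can be realized.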

4. Final remarks

In this article we presented an overview of what is currently known about body movements in robotics: why body movements are a relevant object of interest for research on robots, and which specific forms of movement are considered. Body movements in robotics have been discussed here with respect to locomotion, manipulative movement, and expressive movement, with an emphasis on the latter, that is, body movements that express communicative functions. A crucial step in the attempt to build social robots is to endow them with expressive non-verbal behaviors. One such behavior is gesture, frequently used by human speakers to emphasize, supplement, or complement what they express in speech, for example, by pointing to objects referred to in verbal utterances or by gestures giving directions. While quite a number of approaches to generating gestural body movements in robots can be found in the literature, many research challenges remain, especially with regard to motor control for arbitrary, expressive hand-arm movement and its coordination with other interaction modalities.

A further line of research in robotic body movement pertains to robot head movement and facial expression. Such robots have a variety of appearances, ranging from caricatured (e.g., Flobi, iCat) to anthropomorphic (e.g., iCub, WE-4RII). These robots commonly employ a set of servo motors that control different parts of the face, such as the eyebrows, eyes, eyelids, mouth/lips, and head position. Based on such features, many different facial expressions, including happiness, surprise, anger, sadness, joy, or fear, can be displayed and used to communicate the emotional states of the robot. Likewise, gaze – achieved by head and eye movements – can be used as a means to naturally convey the internal state of a robot, such as its attentional focus.
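The servo-based control of facial features lends itself to a compact sketch; the servo names, value ranges, and expression table below are hypothetical and not taken from any of the robots mentioned:

```python
# Normalized servo positions in [-1, 1]; all entries are invented.
NEUTRAL = {"brow_left": 0.0, "brow_right": 0.0, "lip_corners": 0.0}

EXPRESSIONS = {
    "happiness": {"lip_corners": 0.8},
    "surprise":  {"brow_left": 0.9, "brow_right": 0.9},
    "sadness":   {"brow_left": -0.4, "brow_right": -0.4,
                  "lip_corners": -0.6},
}

def servo_targets(expression, intensity=1.0):
    """Blend an expression into the neutral face at a given intensity."""
    targets = dict(NEUTRAL)
    for servo, value in EXPRESSIONS[expression].items():
        targets[servo] = intensity * value
    return targets

half_sad = servo_targets("sadness", 0.5)  # e.g., lip_corners -> -0.3
```

Scaling expressions by an intensity parameter is one simple way such systems can grade the displayed emotional state rather than switching between fixed poses.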

5. References

Billard, Aude, Sylvain Calinon, Rüdiger Dillmann and Stefan Schaal 2008. Robot programming by demonstration. In: Bruno Siciliano and Oussama Khatib (eds.), Handbook of Robotics, 1371–1394. Berlin/New York: Springer.

Breazeal, Cynthia and Brian Scassellati 1999. How to build robots that make friends and influence people. Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, 858–863. Kyongju (Korea).

Dariush, Behzad, Michael Gienger, Arjun Arumbakkam, Christian Goerick, Youding Zhu and Kikuo Fujimura 2008. Online and markerless motion retargeting with kinematic constraints. Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 191–198. Nice (France).

Dautenhahn, Kerstin 1997. Ants don’t have friends – thoughts on socially intelligent agents. Technical Report FS-97-02, 22–27. AAAI Fall Symposium on Communicative Action in Humans and Machines, November 8–10, 1997, MIT, Cambridge, MA, USA.

Fong, Terrence, Illah Nourbakhsh and Kerstin Dautenhahn 2003. A survey of socially interactive robots. Robotics and Autonomous Systems 42(3/4): 143–166.

Kanda, Takayuki, Hiroshi Ishiguro, Michita Imai and Tetsuo Ono 2003. Body movement analysis of human-robot interaction. Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI 2003), 177–182. Acapulco, Mexico.

Matsusaka, Yosuke 2008. History and current researches on building a human interface for humanoid robots. In: Ipke Wachsmuth and Günther Knoblich (eds.), Modeling Communication with Robots and Virtual Humans, 109–124. Berlin/Heidelberg/New York: Springer.

Salem, Maha, Stefan Kopp, Ipke Wachsmuth and Frank Joublin 2010. Towards an integrated model of speech and gesture production for multi-modal robot behavior. Proceedings of the 2010 IEEE International Symposium on Robot and Human Interactive Communication, 649–654. Viareggio, Italy.

Steinfeld, A., O. C. Jenkins and B. Scassellati 2009. The Oz of Wizard: Simulating the human for interaction research. In: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2009), 101–108. La Jolla, CA, USA.

Ipke Wachsmuth, Bielefeld (Germany)
Maha Salem, Bielefeld (Germany)

152. Gestures, postures, gaze, and movements in computer science: Embodied agents

1. Embodied conversational agents
2. Computational simulation of expressive non-verbal behavior
3. Future challenges
4. References

Abstract

This chapter introduces embodied conversational agents: virtual characters that can engage in human-like multimodal conversational behavior. The chapter focuses in particular on how expressive non-verbal behavior is simulated in them. The different approaches currently used to plan the form of such behaviors, as well as to realize them by means of computer animation techniques, are explained and compared. Finally, future challenges in this field are pointed out.

1. Embodied conversational agents

Computers increasingly meet their users in the form of embodied human-like agents. Virtual characters are found in entertainment systems as well as in serious applications like information presentation in health communication, interactive museum guides, or web-based agents for customer support. Likewise, humanoid robots have been developed to help people with household chores or to provide assistance in collaborative working situations. A common rationale is that embodied agents can foster human-machine cooperation by enabling human-like face-to-face interaction. This long-standing vision has turned into systematic research in the field of “Embodied Conversational Agents” (Cassell et al. 2000). Originating from systems like “Gandalf” (Thórisson 1997) and “REA” (Cassell et al. 1999), this has led to numerous agents built upon theoretically grounded, computational accounts of verbal and nonverbal behaviors and their use as a function of contextual factors like content, discourse, interaction regulation, attitudes, roles, or emotions (see Fig. 152.1).

Fig. 152.1: (from left to right) REA (Cassell et al. 1999), MAX (Kopp et al. 2005), SASO-ST (Swartout et al. 2006), BILLIE (Kopp 2010).

Generally, Embodied Conversational Agents are computer systems able to perceive and generate multimodal communicative behavior, to exhibit natural timing in doing so, and to respond and contribute reasonably to ongoing conversation (see Cassell et al. 2000). In addition, researchers have started to investigate how Embodied Conversational Agents can also engage in more implicit and dynamic processes of communication, e.g., back-channel feedback, alignment, or inter-personal synchrony (Kopp 2010). A key issue of this research is how non-verbal conversational behavior can best be modeled in computational terms. Work in the field of human-robot interaction has focused on the recognition of human users and their behaviors, while research on virtual agents has mostly concentrated on the simulation and synthesis of all kinds of conversational behavior. The remainder of this chapter will outline the state of the art in how communicative behavior can be generated with embodied agents. We will often focus on simulated hand gestures; the described techniques are similar (or even identical) to those used for other behaviors like gaze, head movements, or posture.

2. Computational simulation of expressive non-verbal behavior

A vast number of approaches have been geared toward the simulation of specific behaviors, e.g., facial expressions, gaze, or gesture. All of them share, by and large, three major steps to be taken to map a given communicative goal, along with additional factors like certainty or affective state, onto graphical behavior animations: (i) planning of communicative content, (ii) planning of a behavioral realization of this intent, and (iii) realization of the planned behaviors.


While the systems implemented have often concentrated on processing steps that pertain to one particular level, and have short-circuited others, these subsequent stages are in principle involved in the generation of each conversational behavior an agent is to perform. A standardization initiative called SAIBA (http://www.mindmakers.org) has set out to formulate XML specification languages for the interfaces between these three stages. The Function Markup Language, as an interface between content planning and behavior planning, describes communicative and expressive content without any reference to physical behavior. The Behavior Markup Language, as the interface between behavior planning and behavior realization, describes multimodal behaviors as they are to be realized by the final stage of the generation pipeline. The Behavior Markup Language provides a means of describing the significant features of behaviors like gesture, gaze, face, and other bodily movements, along with the synchronization constraints that hold between them within a coherent multimodal ensemble.

The coarse three-stage pipeline originates mainly from approaches in the historically earlier work on the automated generation of language or text (Reiter and Dale 2000). Interestingly, it loosely corresponds to the stages of speech and gesture production commonly assumed in psychology and linguistics (e.g., de Ruiter 2007), namely, conceptualizing a (preverbal) message, formulating it in language and gesture, and producing the overt phonation and movements. However, while there is considerable disagreement about how much and at which stages verbal and nonverbal production interact (McNeill 2005), computational modeling approaches a priori tend – and to some extent need – to assume independent modules that interact in a message-based way.
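The three SAIBA stages can be caricatured in a few lines of code; the data structures below are simplified stand-ins for FML and BML, and all names are our own:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:                    # content planning output (FML-like)
    speech_act: str
    referent: str = ""

@dataclass
class BehaviorPlan:              # behavior planning output (BML-like)
    speech: str
    gestures: list = field(default_factory=list)

def plan_behavior(intent):
    """Behavior planning: map a communicative function onto behaviors."""
    if intent.speech_act == "inform_location" and intent.referent:
        return BehaviorPlan(
            speech=f"The {intent.referent} is over there.",
            gestures=[("point", intent.referent)])
    return BehaviorPlan(speech="Okay.")

def realize(plan):
    """Behavior realization: emit commands for the animation engine."""
    commands = [("say", plan.speech)]
    commands += [("animate",) + g for g in plan.gestures]
    return commands

commands = realize(plan_behavior(Intent("inform_location", "exit")))
```

The point of the staged design, as in SAIBA itself, is that each interface carries only the information the next stage needs: the intent says nothing about the body, and the plan says nothing about joint angles.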
We will review the methods and techniques employed in the final two stages of behavior generation: deciding the form of behaviors, and creating the respective movements with artificial bodies.

2.1. Behavior realization: Moving an agent’s body

Behavior synthesis generally starts from an abstract request to perform a certain behavior (e.g., a pointing gesture to a target at a specific time). Such requests arise numerously and asynchronously in Embodied Conversational Agents, which commonly consist of many components working in parallel. They need to be processed in a fast and timely manner, and the result of this processing is an instantaneous visualization (“frame”) of the agent’s body, updated carefully and no less than 25 times per second to create the illusion of smooth motion (“render loop”). The process of getting from the behavior request to the graphically rendered output is realized differently in different systems, depending on the required naturalness, flexibility, or ease of modeling. The overall structure, however, is the same, as shown in Fig. 152.2.

Fig. 152.2: General structure of the animation loop.

A behavior request is first turned into one or more animations that employ one or multiple motion control algorithms running in the render loop. These algorithms break down the motion into single postures, defined in terms of certain control parameters. These parameters are then used to determine the agent’s body configuration, usually using a skeleton consisting of rigid links connected via joints. At this stage, improbable body configurations can be prevented by imposing limits on these joints. The results are joint angles that are then used to update the graphical body geometry. This geometry is then rendered to a graphical output, and the animation loop starts over again.

Existing Embodied Conversational Agents differ in the motion control algorithms they employ. Systems that emphasize naturalness of motion usually employ so-called “motion capturing” to record movements performed by human actors. The movements are stored and then simply replayed as a primitive form of motion control. Much research has been devoted to this data-based animation technique and, nowadays, there are sophisticated methods for blending or superposing motions or for adapting them to fit new criteria like given timing constraints, style of motion, energy expenditure, or collision avoidance (e.g., Gleicher 2000). Yet, this fitting is only possible to a certain degree without distorting the original motion, and the range of motions producible with this technique is hence limited by the range of the stored data. This technique has found its main application in producing behaviors whose form is predefined (from whole utterances down to emblematic gestures), which can be generated by combining and adjusting a number of stereotypical motions (e.g., beat gestures or conversational gestures), or for which limited adaptation is sufficient (e.g., breathing or postural sway). For approaches using this technique for gesture animation see, for example, Neff et al. (2008) or Stone et al. (2004).

More flexible animation is possible with motion control algorithms that perform some form of online control over the movement. The standard technique is called “parametric keyframing” and consists in specifying (either beforehand or at runtime) certain key postures and interpolating between them automatically. This allows for generating a set of control parameters for each frame, which could be target positions for the hand or orientation vectors for the head. Using inverse kinematics techniques, such “external” parameters can be transformed into “internal” body skeleton parameters, namely joint angles. Keyframing is a traditional method that stems from the making of animated cartoons and is usually among the first ones used in Embodied Conversational Agents (Cassell et al. 1999; Hartmann, Mancini, and Pelachaud 2006) because of its simplicity and the degree of control it still affords. The highest flexibility and generativity is provided by procedural animation.
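In its simplest linear form, parametric keyframing reduces to interpolating joint-angle postures frame by frame (a bare-bones sketch of ours; real engines interpolate more smoothly and run inverse kinematics on the result):

```python
def interpolate_keyframes(key_times, key_postures, fps=25):
    """Linearly interpolate between key postures, emitting one
    joint-angle vector per render-loop frame."""
    frames = []
    for i in range(int(key_times[-1] * fps) + 1):
        t = i / fps
        k = 0                                   # find surrounding keys
        while k + 2 < len(key_times) and key_times[k + 1] <= t:
            k += 1
        u = (t - key_times[k]) / (key_times[k + 1] - key_times[k])
        u = min(max(u, 0.0), 1.0)
        frames.append([a + u * (b - a) for a, b in
                       zip(key_postures[k], key_postures[k + 1])])
    return frames

# Two key postures one second apart yield 26 frames at 25 fps.
frames = interpolate_keyframes([0.0, 1.0], [[0.0, 0.0], [1.0, -0.5]])
```

Each emitted frame corresponds to one tick of the render loop described above; joint limits would be applied to the resulting angles before updating the body geometry.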
With procedural animation, motion control is exerted online and throughout the entire movement by specialized procedures that can employ an explicit model of the target trajectory or some other model of the flow of control parameters over time (e.g., learned from data). This technique is often used for the generation of behaviors with specific external features, like iconic gestures that are to reproduce a particular form (Bergmann and Kopp 2009). In general, however, the different approaches discussed here are suited to different behaviors to different degrees. Indeed, it has long been recognized that the automatic simulation of embodied agents, which needs to encompass multiple behaviors at the same time, requires an adept combination of several motion controllers. Embodied Conversational Agent animation engines consequently feature a range of them (e.g., Kopp and Wachsmuth 2004).
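A procedural controller, by contrast, computes the trajectory itself; the sketch below moves the hand along a straight line with a smooth, bell-shaped velocity profile (a minimum-jerk-style profile, which is a common choice in the animation literature rather than a feature of any particular system cited here):

```python
def stroke_trajectory(start, end, duration, fps=25):
    """Procedurally generate hand positions for a gesture stroke
    using a minimum-jerk-style position profile."""
    n = round(duration * fps)
    frames = []
    for i in range(n + 1):
        tau = i / n                              # normalized time [0, 1]
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # smooth ease in/out
        frames.append([p0 + s * (p1 - p0) for p0, p1 in zip(start, end)])
    return frames

# 0.6 s stroke at 25 fps: 16 positions from start to end.
traj = stroke_trajectory([0.0, 0.0, 0.0], [0.2, 0.1, 0.3], 0.6)
```

Because the trajectory is computed rather than replayed, parameters like duration or target position can be changed at runtime, which is exactly the flexibility that data-based replay lacks.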

2.2. Behavior planning: Deciding what movements to produce

Behavior planning in Embodied Conversational Agents is in charge of determining behavioral forms that fulfill certain communicative goals in the current dialog and discourse context. The solution to this problem is, to some extent, contingent upon the method used for behavior realization. Three approaches to solving it have evolved (see Fig. 152.3): lexicon-based, data-based, and model-based generation.

Fig. 152.3: Overview of different approaches used to plan an agent’s nonverbal behaviors.

– Lexicon-based generation: Most existing systems have employed a lexicon-based approach (Fig. 152.3a) for generating nonverbal behavior (see, e.g., Kopp and Wachsmuth 2002; Krenn and Pirker 2004 for gesture; Poggi 2001 for gaze). Here, a predefined repository of behavior templates, annotated with the functions they can fulfill, is used to pick from in a context-sensitive manner with meaning-to-behavior rules. The BEAT system (Cassell, Vilhjálmsson, and Bickmore 2001) employed XML-based selection rules to collect possible behaviors and then used priority rules and filters to cut this set down to a realizable combination. In a second step, selected templates are refined or adjusted to meet contextual constraints like timing, expressivity, movement style, or target locations (Hartmann, Mancini, and Pelachaud 2006; Ruttkay 2007). This approach has been adopted widely for its simplicity, and it has been applied to many kinds of behaviors regardless of how adequately they can be lexicalized (e.g., symbolic gestures are conventionalized, while iconic gestures are fully flexible).

– Data-based generation: Data-based models (also called “shallow models”) rely on large behavioral data sets in two ways. First, the data provide a repository from which behaviors or behavior segments are picked (Fig. 152.3b). Often, the data sets are annotated to this end with information about the content or functions of behaviors. For example, Stone et al. (2004) use segmented motion capture data and recombine segments with speech samples to generate coherent multimodal utterances. Second, the data can be used to identify how probable isolated or combined occurrences of behaviors are. These probabilities can be used to pick and concatenate the most likely behavior(s) given a communicative goal or function. For example, the system by Neff et al. (2008) learns statistical gesture profiles from annotated multimodal behavior and uses these to produce character-specific discourse gestures and beats for annotated text. Data-based generation directly lends itself to using motion capture animations. Due to the required amount of data and the difficulty of annotating it with rich meta-information, however, such approaches have been used mainly for non-representational gestures (which do not require as much meta-information) or for single speakers (which limits the size and variability of the required behavioral data).

– Model-based generation: The most flexible but at the same time theoretically most challenging way is to realize a generative model (Fig. 152.3c) that can come up with even novel behaviors on demand. This approach requires detailed knowledge about the respective behavior and, in particular, about how its single features combine to fulfill communicative goals or functions. Using this knowledge, the model determines combinations of feature values, possibly leaving others underspecified. This approach has been used, e.g., for sign language or head movements (Heylen 2008). The first system to apply this to gestures was the NUMACK system (Kopp, Tepper, and Cassell 2004), which used empirically suggested mappings from visuo-spatial features of the referent object onto gesture features like handshape, position, or movement trajectory. The system by Bergmann and Kopp (2009) employed a Bayesian decision network to plan iconic gestures on the fly. Based on a large data corpus, local probabilistic models are learned that correlate input features (shape of the referent object, communicative goal, information structure, previous gesture) with gesture features like occurrence, handedness, or general representation technique (e.g., drawing, posturing, shaping). Other decisions are modeled in terms of explicit rules and determine features like hand position, orientation, and handshape from a visuo-spatial representation of the object referred to. Evaluations show that this hybrid approach can reproduce empirically observed gestures and that newly generated gestures are rated as positive and helpful by humans (Bergmann, Kopp, and Eyssel 2010).
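The lexicon-based select-and-filter idea, in particular, fits into a few lines; the lexicon entries, priorities, and filtering rule below are invented for illustration and far cruder than in BEAT:

```python
# function -> candidate (behavior, priority) pairs; all entries invented
LEXICON = {
    "emphasize": [("beat_gesture", 1), ("eyebrow_raise", 2)],
    "refer_object": [("point_gesture", 1), ("gaze_at_object", 2)],
}

def plan_behaviors(functions, max_hand_gestures=1):
    """Collect candidates for all communicative functions, then filter
    to what one body can realize: at most one hand gesture at a time."""
    candidates = []
    for f in functions:
        candidates += LEXICON.get(f, [])
    hand = [b for b in candidates if b[0].endswith("_gesture")]
    hand = sorted(hand, key=lambda b: b[1])[:max_hand_gestures]
    other = [b for b in candidates if not b[0].endswith("_gesture")]
    return [name for name, _ in hand + other]

plan = plan_behaviors(["refer_object", "emphasize"])
```

The two-phase structure, over-generating candidates and then filtering down to a physically realizable set, is the essence of the lexicon-based pipeline described above.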

3. Future challenges

The methods described in this chapter target two main problems in simulating communication with embodied agents: animating expressive movements with their bodies, and planning the form and shape of such movements as a function of communicative demands. Each of the different techniques has its advantages and disadvantages, and there is a trade-off between generativity (and hence communicative flexibility and autonomy) on the one hand, and naturalness and subtle expressivity of the agent on the other. Choosing a technique is thus a matter of preference, and one main challenge for behavior generation in Embodied Conversational Agents is to resolve this conflict and to improve on both axes simultaneously. Right now, a combination of methods is required when we want to have behaviors like gaze, posture, and various kinds of gesture together in Embodied Conversational Agents. Systems that are seamlessly integrated in this respect, however, still lie ahead in the future.

Other challenges arise from the fact that we need better knowledge of the various behaviors and their use in interactive communication. Research on Embodied Conversational Agents can help to advance the state of theoretical knowledge by devising, implementing, and evaluating concrete predictive models. In fact, all current agents rely on empirical studies to inform their models. Regarding gesture, the model-based approach described by Bergmann and Kopp (2009) has in that way yielded novel findings about the internal structure of iconic gestures, the various influences their features are subject to (e.g., the same speaker’s previous gesture or the complexity of the shape being described), and their internal causal relationships. Yet, a fuller picture of the semantic potential and, moreover, the pragmatic functions of such features is still lacking. In particular, behaviors are always produced as parts of multimodal ensembles. The interaction between features of behaviors like speech and gesture, gaze, facial expression, or posture poses the next big challenge that computational models for embodied agents need to master. While practical methods for generating multimodal behavior have been proposed, usually following the lexicon-based approach, deeper models are still being investigated at the level of the interaction of two modalities (e.g., speech and gesture, or language and eye gaze). Finally, the perception and production of behaviors are not separate and potentially influence each other, and this can help interaction partners to coordinate in multiple ways (Kopp 2010). An account of this, operationalized in agents that link perceptual and behavioral skills in an integrated architecture, is ultimately needed to simulate how the body is moved in communication.

4. References

Bergmann, Kirsten and Stefan Kopp 2009. Increasing expressiveness for virtual agents – Autonomous generation of speech and gesture for spatial description tasks. In: Keith S. Decker, Jaime Simão Sichman, Carles Sierra and Cristiano Castelfranchi (eds.), Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '09), Vol. 1, 361–368. Ann Arbor, MI: IFAAMAS.
Bergmann, Kirsten, Stefan Kopp and Friederike Eyssel 2010. Individualized gesturing outperforms average gesturing – Evaluating gesture production in virtual humans. In: Jan Allbeck, Norman Badler, Timothy W. Bickmore, Catherine Pelachaud and Alla Safonova (eds.), Proceedings of the 10th Conference on Intelligent Virtual Agents, 104–111. Berlin/Heidelberg: Springer Verlag.
Cassell, Justine, Timothy W. Bickmore, Mark Billinghurst, Lee Campbell, Kenny Chang, Hannes Högni Vilhjálmsson and Hao Yan 1999. Embodiment in conversational interfaces: Rea. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '99), 520–527. New York, NY: ACM Press.
Cassell, Justine, Julian Sullivan, Scott Prevost and Elizabeth Churchill (eds.) 2000. Embodied Conversational Agents. Cambridge, MA: MIT Press.
Cassell, Justine, Hannes Vilhjálmsson and Timothy Bickmore 2001. BEAT: The behavior expression animation toolkit. In: Proceedings of SIGGRAPH '01, 477–486. New York, NY: ACM Press.
De Ruiter, Jan Peter 2007. Postcards from the mind: The relationship between speech, imagistic gesture, and thought. Gesture 7(1): 21–38.
Gleicher, Michael 2000. Animation from observation: Motion capture and motion editing. ACM SIGGRAPH Computer Graphics 33(4): 51–54.
Hartmann, Björn, Mauricio Mancini and Catherine Pelachaud 2006. Implementing expressive gesture synthesis for embodied conversational agents. In: Sylvie Gibet, Nicolas Courty and Jean-François Kamp (eds.), Gesture in Human-Computer Interaction and Simulation, 45–55. Berlin/Heidelberg/New York: Springer Verlag.
Heylen, Dirk 2008. Listening heads. In: Ipke Wachsmuth and Günther Knoblich (eds.), Modeling Communication with Robots and Virtual Humans, 241–259. Berlin/Heidelberg: Springer Verlag.
Kopp, Stefan 2010. Social resonance and embodied coordination in face-to-face conversation with artificial interlocutors. Speech Communication 52(6): 587–597.
Kopp, Stefan, Lars Gesellensetter, Nicole C. Krämer and Ipke Wachsmuth 2005. A conversational agent as museum guide – Design and evaluation of a real-world application. In: Themis Panayiotopoulos (ed.), Intelligent Virtual Agents, Proceedings, Vol. 3661, 329–343. Berlin: Springer Verlag.

Kopp, Stefan, Paul Tepper and Justine Cassell 2004. Towards integrated microplanning of language and iconic gesture for multimodal output. In: Proceedings of the International Conference on Multimodal Interfaces (ICMI '04), 97–104. New York, NY: ACM Press.
Kopp, Stefan and Ipke Wachsmuth 2002. Model-based animation of coverbal gesture. In: Proceedings of Computer Animation 2002, 252–257. Los Alamitos, CA: IEEE Press.
Kopp, Stefan and Ipke Wachsmuth 2004. Synthesizing multimodal utterances for conversational agents. Computer Animation and Virtual Worlds 15(1): 39–52.
Krenn, Brigitte and Hannes Pirker 2004. Defining the gesticon: Language and gesture coordination for interacting embodied agents. In: Proceedings of the AISB-2004 Symposium on Language, Speech and Gesture for Expressive Characters, 107–115. University of Leeds, UK.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
Neff, Michael, Michael Kipp, Irene Albrecht and Hans-Peter Seidel 2008. Gesture modeling and animation based on a probabilistic recreation of speaker style. ACM Transactions on Graphics 27(1): 1–24.
Poggi, Isabella 2001. The lexicon and the alphabet of gesture, gaze, and touch. In: Proceedings of IVA 2001. Lecture Notes in Computer Science, Volume 2190, 235–236. Heidelberg: Springer Verlag.
Reiter, Ehud and Robert Dale 2000. Building Natural-Language Generation Systems. Cambridge: Cambridge University Press.
Ruttkay, Zsofi 2007. Presenting in style by virtual humans. In: Anna Esposito (ed.), Verbal and Nonverbal Communication Behaviours, 23–36. Berlin/Heidelberg: Springer Verlag.
Stone, Matthew, Douglas DeCarlo, Insuk Oh, Christian Rodriguez, Adrian Stere, Alyssa Lees and Chris Bregler 2004. Speaking with hands: Creating animated conversational characters from recordings of human performance. ACM Transactions on Graphics 23(3): 506–513.
Swartout, William, Jonathan Gratch, Randy Hill, Ed Hovy, Stacy Marsella, Jeff Rickel and David Traum 2006. Toward virtual humans. AI Magazine 27(2): 96–108.
Thórisson, Kristinn R. 1997. Gandalf: An embodied humanoid capable of real-time multimodal dialogue with people. In: Proceedings of AGENTS '97, 536–537. New York, NY: ACM Press.

Stefan Kopp, Bielefeld (Germany)

153. The psychology of gestures and gesture-like movements in non-human primates

1. What are nonhuman primates?
2. What is a gesture?
3. What is a psychological approach?
4. What is intentional communication?
5. Which cognitive aspects are of interest?
6. Conclusion and outlook
7. References

Abstract
Research into gestural communication of nonhuman primates is often inspired by an interest in the evolutionary roots of human language. The focus on intentionally used behaviors is central to this approach, which aims at investigating the cognitive mechanisms characterizing gesture use in monkeys and apes. This chapter describes some of the key characteristics that are important in this context and discusses the evidence on which the claim is built that gestures of nonhuman primates represent intentionally and flexibly used means of communication. It first provides a brief introduction to what primates are and how a gesture is defined, before the psychological approach to gestural communication is described in more detail, with a focus on the cognitive mechanisms underlying gesture use in nonhuman primates.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1955–1962

1. What are nonhuman primates?

The order primates has traditionally been divided into prosimians (lemurs, lorises, tarsiers) and anthropoids (Old World monkeys, New World monkeys, apes, humans) (Fleagle 1999). While humans inhabit all continents and almost every climate zone of the world, nonhuman primates live in the tropical and subtropical regions of Africa, Asia, and the Americas. Unlike other mammalian orders, primates lack a shared characteristic that is unique to this group. However, one major behavioral trait of primates is their tendency to be highly social throughout all life stages. In contrast to the typically short-term associations of other mammals, group membership in primates tends to be highly regular (Smuts et al. 1987).

2. What is a gesture?

Although the term gesture is frequently used in the scientific world and in everyday life, definitions can vary significantly, e.g., with regard to the body parts that execute gestures, the modalities considered, and the relationship of these non-verbal communicative means with language. For example, human gestures can be defined as a form of non-verbal communication in which visible bodily actions communicate particular messages, either in place of speech or closely intertwined with spoken words (Kendon 2004). However, because nonhuman primates do not have language, researchers usually adopt definitions and criteria from research into gestures of pre-verbal children (Bates et al. 1979; Leavens, Russell, and Hopkins 2005). Thus, gestures of nonhuman primates are commonly defined as mechanically ineffective behaviors that are directed at a particular recipient, are tailored to the attentional state of the audience, and are characterized by the sender's persistence and elaboration when the initial communicative attempts fail (Call and Tomasello 2007; Leavens, Russell, and Hopkins 2005) (see section 5). Unlike research into human gestures, which focuses on the visual modality, researchers interested in nonhuman primates also consider auditory gestures, which generate sound with body parts other than the vocal cords, as well as tactile gestures, which involve physical contact between the two interacting partners. Furthermore, many researchers do not restrict gestures to the use of the hands but include movements of the limbs or the head, as well as body postures. Some studies also consider facial expressions as facial gestures (Ferrari et al. 2003; Maestripieri 1999), while others treat facial expressions as a separate mode of visual communication (Call and Tomasello 2007).
Unlike facial expressions and vocalizations, gestures are not identified based on specific structural properties, but are generally classified according to their function or use (but see Roberts et al. 2012).

3. What is a psychological approach?

The communication of nonhuman animals receives considerable attention from a variety of disciplines such as biology and psychology, but also anthropology, linguistics, and neuroscience. One major reason for this interest in animal communication is the search for the evolutionary roots of human language and the assumption that by comparing humans and other animals, it is possible to identify those behaviors that are uniquely human and those that are shared with other species, which might represent potential precursors to human language (Slocombe, Waller, and Liebal 2011; Wilcox 1999). Biologists and psychologists traditionally use different, though related, perspectives when studying animal behavior. While biologists focus more on the ultimate aspects of behavior and are particularly interested in why and how a specific behavior has evolved, psychologists are more interested in the proximate aspects, which include the cognitive, emotional, or physiological mechanisms underlying behavior and the development of behavior during an individual's lifetime (Tinbergen 1963; Waller et al. 2013). This has important implications for research into the gestural communication of nonhuman primates. Here, researchers mostly use a psychological approach, which centers on the proximate aspects and thus on the cognitive mechanisms underlying gesture use (see section 5). Key to this approach is the question of whether gestural communication in nonhuman primates is intentional, as intentional use is one of the major characteristics of human language.

4. What is intentional communication?

In gesture research, the term “intentional communication” is used to describe purposeful, goal-directed behavior, with the sender having voluntary control over the production of a particular signal (Benga 2005). At the same time, this does not necessarily imply that the recipient understands that this signal is an intentional act of communication (Genty et al. 2009). The aim of this approach to primate communication is to identify those signals that are characterized by variability between individuals and flexibility in use, as opposed to signals that are used by all individuals of one species, often for very specific functions and in very specific contexts (Tomasello 2008). In contrast to such phylogenetically ritualized signals, which have evolved under very specific selection pressures, intentionally used signals are most likely acquired by some form of learning during an individual's lifetime. Thus, research into primate gestures is particularly motivated by an interest in the cognitive aspects of primate communication, because the intentional use of signals implies voluntary control and thus the potential for a more flexible and sophisticated use of these signals.

5. Which cognitive aspects are of interest?

In the following, different features are discussed that are commonly used to identify acts of intentional communication in nonhuman primates (for a detailed discussion, see Liebal et al. 2013: Chapter 8).

5.1. Presence and attentional state of the audience

The sender's sensitivity to the presence of an audience is termed the audience effect, and refers to a signal only being used when someone is present and thus able to perceive the signal (Rogers and Kaplan 2000). Most existing studies focus on great apes' interactions with a human experimenter and demonstrate that they only produce gestures in the presence, but not in the absence, of the human (e.g., Hostetter, Cantero, and Hopkins 2001; Poss et al. 2006). There is considerably more research investigating if and how nonhuman primates adjust their gestures to the recipient's attentional state. While tactile and auditory gestures can be perceived regardless of whether the recipient is attending or not, visual gestures require the visual attention of the recipient. In interactions with conspecifics, several species including monkeys, gibbons, and great apes use visual gestures only if the recipient is visually attending (see Call and Tomasello 2007). In interactions with humans, both great apes and monkeys adjust their gesture use to the attentional state of a human experimenter: They gesture more and use visual gestures only if the human is oriented towards them (e.g., Anderson et al. 2010; Hostetter, Cantero, and Hopkins 2001; Maille et al. 2012). However, in more complex situations with two human experimenters with differing attentional states and varying body orientations, chimpanzees did not seem to show sensitivity to the attentional state of the human when producing pointing gestures (Povinelli and Eddy 1996). There is some evidence, though, that apes use the orientation of the human's face to infer whether the human can perceive their pointing gestures, while the body orientation informs the ape whether the human is able to give any food at all. Thus, the orientation of the face and the body provide different information (Kaminski, Call, and Tomasello 2004).

5.2. Use of attention-getters

Closely related to the previous section is the question of whether nonhuman primates use particular gestures to attract the attention of a non-attending individual. Studies that focused on interactions between conspecifics found that both siamangs and orangutans do not use auditory and tactile gestures more if the recipient is not attending, indicating that these potential attention-getting gestures are used regardless of the attentional state of the recipient (Liebal, Pika, and Tomasello 2004, 2006). Furthermore, there is little evidence that great apes use attention-getting gestures first to attract the recipient's attention before producing a visual gesture (Liebal, Call, and Tomasello 2004; Tempelmann and Liebal 2012). Thus, it is currently unclear whether nonhuman primates use specific gestures to attract the attention of others or whether such gestures are used to trigger others into action (Liebal and Call 2012). In interactions with humans, great apes use more auditory gestures and vocalizations if a human experimenter is turned away and thus not attending (Hostetter, Cantero, and Hopkins 2001; Poss et al. 2006). However, if great apes are given the opportunity to change their position in relation to the orientation of a human experimenter, they prefer to walk in front of the human, where they use visual gestures to beg for food, rather than using auditory or tactile gestures behind the human to attract her attention (Liebal et al. 2004). Thus, rather than manipulating the attentional state of their partner, chimpanzees move into the visual field of another individual to ensure that their communicative behaviors are perceived (Liebal, Call, and Tomasello 2004).

5.3. Flexible use across different contexts

Gesture researchers usually highlight the flexible use of these signals (Call and Tomasello 2007; Tomasello 2008), but flexibility can be defined in different ways. It can refer to the flexible usage of gestures across different contexts, or to the ability to combine components of an existing repertoire into longer sequences to enable a more flexible use of a relatively limited repertoire. In regard to flexible usage, great apes use the majority of gestures for more than one function, and several gestures can be used to achieve the same goal (e.g., Genty et al. 2009; Tomasello et al. 1997). As a consequence of this flexible use across different contexts, many gestures do not have a specific meaning; rather, the information they convey is defined by the context in which they are used. In regard to the combination of gestures, sequences have been described for several great ape species in both captive and wild settings (Hobaiter and Byrne 2011; Liebal, Pika, and Tomasello 2004; Tanner 2004). Altogether, there is little evidence that gesture combinations are used for new functions or for functions other than those of their single components, thus indicating that great apes do not create sequences to communicate new meanings. Instead, sequences seem to represent the sender's communicative strategies to react flexibly to the recipient's behavior. For example, gesture sequences of chimpanzees emerge if the recipient does not respond to the initial gesture (Liebal et al. 2004), and gorillas use sequences as a means to adjust the communicative interactions between two individuals (Genty et al. 2009; Tanner 2004). Interestingly, there is some evidence that gesture sequences of chimpanzees reflect some kind of developmental process, since they shift from the initially long and redundant sequences of rapid-fire gestures used by youngsters to selecting more effective single iterative gestures as adults (Hobaiter and Byrne 2011). Thus, across these studies, gesture sequences are not used as premeditated constructs to increase the flexibility or efficacy of gesture use; rather, they seem to represent strategies to react appropriately to the recipient's behavior.

5.4. Persistence and elaboration

Instances in which a recipient does not react to the first gesture are very interesting, since they reveal how flexibly nonhuman primates can react in such situations. If senders persist in their communicative attempts, they can either repeat the same signal or elaborate their gesture use by changing the type or intensity of the gesture in order to achieve the recipient's response. In interactions with conspecifics, there is evidence that both wild and captive chimpanzees persist in their communicative attempts after their initial gesture failed (Hobaiter and Byrne 2011; Liebal, Call, and Tomasello 2004), while gorillas and orangutans are less likely to continue to gesture if there is no response from the recipient (Genty and Byrne 2010; Tempelmann and Liebal 2012). Whether these results reflect differences between species or are caused by different methodologies across studies is currently unclear. Most gesture sequences of great apes, however, are repetitions of the same gesture. Even if different gesture types are combined, there is little evidence that these elaborated sequences are more successful in obtaining an appropriate response from the recipient than single gestures (Genty and Byrne 2010; Liebal, Call, and Tomasello 2004; Tempelmann and Liebal 2012). Most evidence for elaboration in gesture use comes from studies on great apes' interactions with humans. For example, orangutans adjust their communicative behavior when begging for food from a human depending on whether the human's response met their goal fully, only partly, or not at all (Cartmill and Byrne 2007). Thus, orangutans stop gesturing when they get the whole banana; they repeat the same gesture if they receive only half instead of the whole banana, indicating persistence; and they switch to other gestures in case the human offers them a completely different food item than they requested.


IX. Embodiment

5.5. Learning of novel gestures

This section specifically refers to novel gestures that are created by particular individuals and are not part of a species' repertoire, but which may spread across individuals within one group. This would indicate some form of flexibility, in that new gestures can be added to a species' repertoire. For example, an eye-covering gesture has been documented for mandrills in only one out of many groups (Laidre 2008). In chimpanzees, the hand-clasp is unique to certain communities, suggesting that this gesture was newly created and subsequently acquired by other individuals within the group (van Leeuwen et al. 2012). However, very little is known about how nonhuman primates acquire their gestures, and more longitudinal studies are needed to identify the mechanisms underlying gesture acquisition (Schneider, Call, and Liebal 2012a, b).

6. Conclusion and outlook

Research on gestural communication in nonhuman primates usually takes a psychological perspective and thus focuses on the cognitive mechanisms underlying gesture use in monkeys and apes. The intentional use of gestures is of central importance in this field of research, and a variety of cognitive skills are used to identify intentional acts of communication. An increasing body of research on several species of apes, but also some monkey species, shows that nonhuman primates use their gestures only in the presence of an audience, adjust them to the attentional state of the recipient, and persist in their communicative attempts if their initial gestures fail to elicit a response from the recipient. However, studies examining the use of specific attention-getting gestures to manipulate the recipient's attentional state have revealed inconsistent findings, as have studies on the function of gesture sequences. Furthermore, some communicative strategies seem to vary depending on whether apes are interacting with conspecifics or a human experimenter. It is important to emphasize, however, that the majority of knowledge on gesture use comes from studies on great apes in captive settings. Therefore, future research needs to consider other primate species, particularly monkeys, in both captive and wild settings. Furthermore, little is known about the developmental processes and the factors that influence gesture acquisition during ontogeny. Finally, gesture is only one of several modalities nonhuman primates use to communicate with others, in addition to facial expressions, vocalizations, and olfactory signals. Future research should specifically address these different facets of primate communication and the ways in which these modalities interact with and influence each other.

7. References

Anderson, James R., Hika Kuroshima, Yuko Hattori and Kazuo Fujita 2010. Flexibility in the use of requesting gestures in squirrel monkeys (Saimiri sciureus). American Journal of Primatology 72(8): 707–714.
Bates, Elizabeth, Laura Benigni, Inge Bretherton, Luigia Camaioni and Virginia Volterra 1979. The Emergence of Symbols: Cognition and Communication in Infancy. New York: Academic Press.
Benga, Oana 2005. Intentional communication and the anterior cingulate cortex. Interaction Studies 6(2): 201–221.
Call, Josep and Michael Tomasello (eds.) 2007. The Gestural Communication of Apes and Monkeys. Mahwah, NJ: Erlbaum.

Cartmill, Erica A. and Richard W. Byrne 2007. Orangutans modify their gestural signaling according to their audience's comprehension. Current Biology 17(15): 1345–1348.
Ferrari, Pier F., Vittorio Gallese, Giacomo Rizzolatti and Leonardo Fogassi 2003. Mirror neurons responding to the observation of ingestive and communicative mouth actions in the monkey ventral premotor cortex. European Journal of Neuroscience 17(8): 1703–1714.
Fleagle, John G. 1999. Primate Adaptation and Evolution. San Diego, CA: Academic Press.
Genty, Emilie, Thomas Breuer, Catherine Hobaiter and Richard W. Byrne 2009. Gestural communication of the gorilla (Gorilla gorilla): Repertoire, intentionality and possible origins. Animal Cognition 12(3): 527–546.
Genty, Emilie and Richard W. Byrne 2010. Why do gorillas make sequences of gestures? Animal Cognition 13(2): 287–301.
Hobaiter, Catherine and Richard W. Byrne 2011. Serial gesturing by wild chimpanzees: Its nature and function for communication. Animal Cognition 14(6): 827–838.
Hostetter, Autumn B., Monica Cantero and William D. Hopkins 2001. Differential use of vocal and gestural communication by chimpanzees (Pan troglodytes) in response to the attentional status of a human (Homo sapiens). Journal of Comparative Psychology 115(4): 337–343.
Kaminski, Juliane, Josep Call and Michael Tomasello 2004. Body orientation and face orientation: Two factors controlling apes' begging behavior from humans. Animal Cognition 7(4): 216–233.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge, UK: Cambridge University Press.
Laidre, Mark E. 2008. Do captive mandrills invent new gestures? Animal Cognition 11(2): 179–187.
Leavens, David A., Jamie L. Russell and William D. Hopkins 2005. Intentionality as measured in the persistence and elaboration of communication by chimpanzees (Pan troglodytes). Child Development 76(1): 291–306.
Liebal, Katja and Josep Call 2012. The origins of non-human primates' manual gestures. Philosophical Transactions of the Royal Society B: Biological Sciences 367(1585): 118–128.
Liebal, Katja, Josep Call, Simone Pika and Michael Tomasello 2004. To move or not to move: How apes adjust to the attentional state of others. Interaction Studies 5(2): 199–219.
Liebal, Katja, Josep Call and Michael Tomasello 2004. The use of gesture sequences in chimpanzees. American Journal of Primatology 64(4): 377–396.
Liebal, Katja, Simone Pika and Michael Tomasello 2004. Social communication in siamangs (Symphalangus syndactylus): Use of gestures and facial expressions. Primates 45(1): 41–57.
Liebal, Katja, Simone Pika and Michael Tomasello 2006. Gestural communication of orangutans (Pongo pygmaeus). Gesture 6(1): 1–38.
Liebal, Katja, Bridget M. Waller, Anne M. Burrows and Katie E. Slocombe 2013. Primate Communication: A Multimodal Approach. Cambridge, UK: Cambridge University Press.
Maestripieri, Dario 1999. Primate social organization, gestural repertoire size, and communication dynamics: A comparative study of macaques. In: Barbara J. King (ed.), The Evolution of Language: Assessing the Evidence from Nonhuman Primates, 55–77. Santa Fe: School of American Research.
Maille, Audrey, Lucie Engelhart, Marie Bourjade and Catherine Blois-Heulin 2012. To beg, or not to beg? That is the question: Mangabeys modify their production of requesting gestures in response to human's attentional states. PLoS ONE 7(7): e41197.
Poss, Sarah R., Chris Kuhar, Tara Stoinski and William D. Hopkins 2006. Differential use of attentional and visual communicative signaling by orangutans (Pongo pygmaeus) and gorillas (Gorilla gorilla) in response to the attentional status of a human. American Journal of Primatology 68(10): 978–992.
Povinelli, Daniel J. and Timothy J. Eddy 1996. Factors influencing young chimpanzees' (Pan troglodytes) recognition of attention. Journal of Comparative Psychology 110(4): 336–345.
Roberts, Anne I., Sarah J. Vick, Sam G. B. Roberts, Hannah M. Buchanan-Smith and Klaus Zuberbühler 2012. A structure-based repertoire of manual gestures in wild chimpanzees: Statistical analyses of a graded communication system. Evolution and Human Behavior 33(5): 578–589.
Rogers, Lesley J. and Gisela Kaplan 2000. Songs, Roars, and Rituals: Communication in Birds, Mammals, and Other Animals. Cambridge, MA: Harvard University Press.


Schneider, Christel, Josep Call and Katja Liebal 2012a. What role do mothers play in the gestural acquisition of Pan paniscus and Pan troglodytes? International Journal of Primatology 33: 246–262.
Schneider, Christel, Josep Call and Katja Liebal 2012b. Onset and early use of gestural communication in nonhuman great apes. American Journal of Primatology 74: 102–113.
Slocombe, Katie E., Bridget M. Waller and Katja Liebal 2011. The language void: The need for multimodality in primate communication research. Animal Behaviour 81(5): 919–924.
Smuts, Barbara B., Dorothy L. Cheney, Robert M. Seyfarth, Richard W. Wrangham and Thomas Struhsaker 1987. Primate Societies. Chicago, IL: University of Chicago Press.
Tanner, Joanne E. 2004. Gestural phrases and gestural exchanges by a pair of zoo-living lowland gorillas. Gesture 4(1): 1–24.
Tempelmann, Sebastian and Katja Liebal 2012. Spontaneous use of gesture sequences in orangutans: A case for strategy? In: Simone Pika and Katja Liebal (eds.), Recent Developments in Primate Gesture Research, 73–91. Amsterdam: John Benjamins.
Tinbergen, Niko 1963. On aims and methods of ethology. Zeitschrift für Tierpsychologie 20(4): 410–433.
Tomasello, Michael 2008. Origins of Human Communication. Cambridge, MA: MIT Press.
Tomasello, Michael, Josep Call, Jennifer Warren, Thomas G. Frost, Malinda Carpenter and Katherine Nagell 1997. The ontogeny of chimpanzee gestural signals: A comparison across groups and generations. Evolution of Communication 1(2): 223–259.
van Leeuwen, Edwin J. C., Katherine A. Cronin, Daniel B. M. Haun, Roger Mundry and Mark D. Bodamer 2012. Neighbouring chimpanzee communities show different preferences in social grooming behaviour. Proceedings of the Royal Society B: Biological Sciences 279(1746): 4362–4367.
Waller, Bridget M., Katja Liebal, Anne M. Burrows and Katie E. Slocombe 2013. How can a multimodal approach to primate communication help us understand the evolution of communication? Evolutionary Psychology: An International Journal of Evolutionary Approaches to Psychology and Behavior 11: 538–549.
Wilcox, Sherman 1999. The invention and ritualization of language. In: Barbara J. King (ed.), The Evolution of Language: Assessing the Evidence from Nonhuman Primates, 351–384. Santa Fe: School of American Research.

Katja Liebal, Berlin (Germany)

154. An evolutionary perspective on facial behavior

1. Asking the right questions
2. The ultimate function(s) of facial behavior
3. Conclusion
4. References

Abstract
The field of human facial expression contains an impressive record of psychological research on the developmental, emotional, and social aspects of facial behavior. Although most psychological research on facial behavior refers to Darwin's (1872) ideas about emotional expression, it rarely integrates modern principles derived from ethological research. I will argue that the latest developments in ethology and behavioral ecology should help clarify the function of facial behavior.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1962–1968

1. Asking the right questions

Tinbergen (1963) prescribed that behavior should be approached using four levels of explanation: proximate mechanisms, ontogeny, phylogeny, and ultimate function. Disciplines deriving from ethology and sociobiology have typically investigated the last two levels of explanation, whereas the first two levels motivated research programs in social psychology and developmental psychology. These four levels of analysis, also called Tinbergen's four whys, are not alternatives to each other but represent complementary ways of reaching a deeper understanding of behavior.

The first level of explanation is concerned with the proximate mechanisms that cause behavior to occur at a particular point in time. Proximate mechanisms of facial behavior represent the psychobiological factors underlying its production, such as neuromuscular activity (Rinn 1984), emotional states (Ekman and Oster 1979), cognitive evaluations of the environment (Scherer 1992), and social motives (Fridlund 1991). It is unlikely that any one of these factors prevails over the others, as facial movements often result from a combination of cognitive, social, and emotional aspects (Parkinson 2005).

The second level of explanation looks at the ontogenetic causes of behavior, i.e., the developmental processes through which biological dispositions interact with cognitive and social changes to form complex and sophisticated expressions. Starting with elementary, inherited patterns of muscular facial activity, individuals gradually learn the circumstances in which facial behavior is rewarding and acquire flexibility in the control of their faces in order to adapt to a diversity of social situations. The gradual modification of expressive patterns over the life span therefore results from complex interactions between inherited features and the social environment (Zivin 1985).

The third level of explanation deals with the evolutionary history of the behavior, or its phylogenetic causes. Scientists working on this level of analysis often use the comparative method to trace the evolution of a behavior through the observation of species that are closely related on the evolutionary tree (Preuschoft and van Hooff 1995). Facial expressions are believed to originate in movements of protection and auto-regulation (Andrew 1963), which later became formalized to serve a communicative function (van Hooff 1967). This functional change may have occurred in response to selective pressures related to social organization (Parr, Waller, and Fugate 2005).

The fourth level of explanation is the one most closely linked to natural selection and pertains to the function, or ultimate cause, of behavior. The evolutionary function of behavior relates to how the behavior promotes individual survival and the production of offspring. The proposed functions of facial behavior will be discussed in more detail in the following section.

2. The ultimate function(s) of facial behavior

Function of behavior relates to the beneficial consequences that it has on survival and reproductive success (Hinde 1975). Although, ultimately, the function of all adaptive behaviors will be the same – contributing to survival and reproductive success – what actually interests ethologists is how precisely a behavior achieves these goals. Function therefore refers to the chain of consequences that occur between the performance of a behavior and the improvement in survival and reproductive success. When speaking of function, ethologists also refer to the consequences of a difference in the presence of a particular kind of behavior (Hinde 1975). For example, the function of a given facial expression will be evaluated in the light of its beneficial consequences for the individual showing it in comparison to individuals not showing it. Evidence about function also comes from the context in which the behavior occurs most frequently. For example, if a facial expression is displayed at higher rates in conflicts than in affiliation, its function should be related to the regulation of agonistic interactions.

In order to understand the function of facial behavior we must consider the socioecological circumstances in which humans evolved. Environmental challenges such as the control of valuable resources and the protection from predators led to the evolution of sociality, i.e., the formation and maintenance of stable social groups (van Schaik 1983). In this context, the capacity to manage social relationships became a necessity for survival and reproduction, and this necessity may have driven the evolution of behavior and cognition in primates, including humans (Dunbar 1998; Humphrey 1976). Communication evolved as a successful strategy to cope with the social environment because it allowed individuals to form cooperative relationships but also to manage social conflicts. Facial behavior is an integral part of communication, as evidenced by its high frequency in social situations (Adams and Kirkevold 1978) and its consequences for both social perception (Knutson 1996) and social interaction (Camras 1977).
It is therefore plausible that the function of facial behavior mostly concerns the regulation of social relationships. The alternative is that facial movements function to regulate non-social aspects of life, such as sensory regulation and individual adjustment to stressful events.

2.1. Intra-individual functions of facial behavior

Charles Darwin's (1872) book The Expression of the Emotions in Man and Animals was the first to provide an evolutionary account of facial behavior. Darwin considered facial expressions as habits that are useful to the organism for the performance of basic actions; for example, raising the eyebrows increases the range of vision (Darwin 1872: 281). This proposal was recently investigated in a study on the role of facial displays in sensory regulation. Susskind and colleagues (2008) showed that the facial expression of fear – which involves, among other movements, raising the eyebrows – increases the visual field, nasal volume, and air velocity during inspiration, whereas the opposite pattern is observed for the disgust expression. Facial movements would therefore be functional in and of themselves because they would provide direct advantages in situations that require the use of the senses, for example, in searching for environmental information or in reducing exposure to noxious stimuli. This view is also compatible with the idea that facial behavior plays a central role in preparing the organism to cope with fundamental life events (Scherer 1992). In that respect, facial behavior could act as adaptive behavior per se or could be part of a chain of events that leads to adaptive action.

Another proposed intra-individual function of facial behavior is that, because it would contribute to a better regulation of emotional life (Gross and Levenson 1997), it would significantly improve well-being and, ultimately, personal adjustment to aversive events across the life span. For example, positive emotional expression could facilitate the adaptive response to stress (Keltner and Bonanno 1997). In similar fashion, Duchenne

smiles displayed during an interview were found to predict better long-term adjustment through both social integration and the undoing of negative emotion (Papa and Bonanno 2008). The proprioceptive feedback of facial behavior on emotional experience, also known as the facial feedback hypothesis (Laird 1974; McIntosh 1996), forms the basis of modern thinking about intra-personal functions of facial expression. By means of bidirectional links between facial movements and emotion, the former would accentuate emotional experience and magnify the individual benefits that derive from it. More research is needed to understand the role of facial behavior in individual adjustment, in particular to show that the consequences of facial behavior for coping strategies do not derive from better social integration.

2.2. Communicative functions of facial behavior

Darwin (1872: 355–356) also recognized that facial behavior is important for communication, as he believed people often use facial movements to emphasize utterances and to express their states of mind. Early ethologists likewise conceived of the face as a source of information about motivational states and behavioral intentions (Andrew 1963; Hinde 1966). Primatologists later applied the ethological concept of display to the study of facial behavior (van Hooff 1967), a concept that embraces the idea that motor movements with self-regulatory functions (e.g., food intake, respiration, protection from aversive stimuli) have been formalized into communicative signals through an evolutionary process called ritualization (Huxley 1966). The evolutionary emancipation of facial motor patterns involved changes in the form and temporal dynamics of the behavior to improve information transfer (Fridlund 1991) and the capacity to provoke adaptive responses in perceivers (Krebs and Dawkins 1984; Rendall, Owren, and Ryan 2009).

The communicative function of facial behavior covers two aspects: the transfer of information and the influence of social partners. The idea that facial behavior functions to transfer information from a sender to a receiver has dominated research programs in psychology and ethology for the last hundred years. The transferred information is believed to concern the sender's emotional state (Buck 1994), cognitive appraisal (Scherer 1992), social motives (Fridlund 1991), and personality (Knutson 1996). Facial behavior is also believed to convey information about objects and events in the environment so that perceivers can use that knowledge to interpret ambiguous situations (Sorce et al. 1985). Finally, facial behavior could communicate the status of the relationship between sender and receiver (Frank 1988).
Facial behavior may not only function to transfer information but also to influence other individuals to the sender’s advantage. The idea that the main function of signaling is not to transfer information from a sender to a receiver but to provoke responses in perceivers that are adaptive to the signaler originates in the study of animal communication (Krebs and Dawkins 1984; Rendall, Owren, and Ryan 2009). The influence view contends that the transfer of information is not necessary for a signal to evolve, provided that it is efficient at influencing perceivers’ behavior in a way that benefits the sender. This view is not very popular in psychology, though it has been used to interpret the function of nonverbal vocalizations (Owren and Bachorowski 2003), and it is also compatible with the idea that nonverbal behaviors function as means of social control (Patterson 1982). The views that facial behavior functions to convey information and to influence the behavior of others have to take into account the issues of reliability and deception. When


influence is detrimental to perceivers (i.e., when responses to signals jeopardize receivers' survival and reproduction), natural selection should favor the evolution of cognitive abilities aimed at filtering social stimuli that are adaptive to the organism and at avoiding harmful ones. The acquisition of, and the selective responding to, social information that is relevant to survival and reproduction is considered a major aspect of social cognition (Sander, Grafman, and Zalla 2003; Wilson and Sperber 2006). The tendency of perceivers to acquire adaptive social information therefore constituted the psychological landscape to which signalers had to adapt, and may have created, in humans, the necessity to convey reliable information.

Evolutionary biologists have identified three functional categories of reliable signals: costly handicaps, minimal-cost signals, and indices. Zahavi (1975) argued that in order to be reliable a signal must be costly, i.e., it must seriously impair the fitness of individuals who do not possess the advertised quality, in such a way that they are prevented from producing the signal. Zahavi contended that the cost of the signal guarantees its honesty because it is directly related to the disposition it is meant to advertise. In Zahavi's (1975) terminology, a costly signal is called a handicap. Other authors maintained that signals need not always be costly to be reliable, as in the case of minimal-cost signals (Maynard Smith and Harper 1995). When the sender and receiver place the outcomes of the interaction in the same order of preference, as in cooperative interactions, there is no need for signalers to deceive nor for receivers to develop resistance to deception; signals are then expected to be of low intensity and reliable (Krebs and Dawkins 1984). Another specific type of low-cost reliable signal is the index.
An index is said to be reliable because it demonstrates a quality that cannot be faked due to physical constraints (Maynard Smith and Harper 1995). Facial expressions of emotion could act as reliable signals of personality attributes and social dispositions (Frank 1988). For example, the Duchenne smile could be an honest signal of altruistic dispositions because it involves a facial movement (cheek raising) that is difficult to control voluntarily (Brown, Palameta, and Moore 2003; Mehu, Grammer, and Dunbar 2007). It is not clear, however, in which category of reliable signals the Duchenne smile should be classified, because the costs related to its production and perception have not been systematically investigated. This line of research suggests that it is the physiological component of the expression that would guarantee its honesty, either in the form of added costs to the signal's production, in which case it would be a handicap, or in the form of a hard-wired connection between emotional processes and facial muscle activation, in which case it would act as an index. More research into the costs, contexts, and social consequences of facial behavior should help clarify its functional nature.

3. Conclusion

I have reviewed the different proposals made to explain the function of facial behavior, namely its intra-individual and interpersonal functions. In order to qualify as an adaptation, a behavior must have biologically advantageous consequences on which natural selection can operate (Hinde 1975), but it must also show modification for a particular function (West-Eberhard 1992). The present argument defends the idea that facial behavior involved in sensory regulation was co-opted by natural selection as a result of its effects on perceivers during social interactions. These effects entail beneficial responses from

other group members, responses that are possibly mediated by perceivers' acquisition of adaptive social and environmental information. Evidence for a social function should therefore demonstrate that facial behavior has changed qualitatively to fulfill a communicative role. For example, differences should be found between the form of facial movements displayed in situations where sensory regulation is needed and the form of facial movements shown during social interactions. More specifically, facial behavior observed in social situations should include components that make it more easily detectable by perceivers: it should be more conspicuous, stereotypical, and redundant, and include alerting components. It is therefore crucial that future research investigates facial behavior as it occurs in both social and non-social contexts, preferably in natural environments.

4. References

Adams, Robert M. and Barbara Kirkevold 1978. Looking, smiling, laughing, and moving in restaurants: sex and age differences. Environmental Psychology and Nonverbal Behavior 3(2): 117–121.
Andrew, Richard J. 1963. Evolution of facial expression. Science 141: 1034–1041.
Brown, William M., Boris Palameta and Chris Moore 2003. Are there nonverbal cues to commitment? An exploratory study using the zero-acquaintance video presentation paradigm. Evolutionary Psychology 1: 42–69.
Buck, Ross 1994. Social and emotional functions in facial expression and communication: the readout hypothesis. Biological Psychology 38(2/3): 95–115.
Camras, Linda A. 1977. Facial expressions used by children in a conflict situation. Child Development 48: 1431–1435.
Darwin, Charles 1872. The Expression of the Emotions in Man and Animals. London: John Murray.
Dunbar, Robin I. M. 1998. The social brain hypothesis. Evolutionary Anthropology 6(5): 178–190.
Ekman, Paul and Harriet Oster 1979. Facial expressions of emotion. Annual Review of Psychology 30: 527–554.
Frank, Robert H. 1988. Passions Within Reason: The Strategic Role of the Emotions. New York: Norton.
Fridlund, Alan J. 1991. Evolution and facial action in reflex, social motive, and paralanguage. Biological Psychology 32(1): 3–100.
Gross, James J. and Robert W. Levenson 1997. Hiding feelings: The acute effects of inhibiting negative and positive emotion. Journal of Abnormal Psychology 106(1): 95–103.
Hinde, Robert A. 1966. Ritualization and social communication in Rhesus monkeys. Philosophical Transactions of the Royal Society of London. Series B. Biological Sciences 251: 285–294.
Hinde, Robert A. 1975. The concept of function. In: Gerard Baerends and Aubrey Manning (eds.), Function and Evolution in Behaviour, 3–15. Oxford: Clarendon Press.
Humphrey, Nicholas K. 1976. The social function of intellect. In: Patrick P. G. Bateson and Robert A. Hinde (eds.), Growing Points in Ethology, 303–318. Cambridge: Cambridge University Press.
Huxley, Julian 1966. A discussion on ritualization of behaviour in animals and man. Philosophical Transactions of the Royal Society of London. Series B. Biological Sciences 251: 249–271.
Keltner, Dacher and George A. Bonanno 1997. A study of laughter and dissociation: distinct correlates of laughter and smiling during bereavement. Journal of Personality and Social Psychology 73(4): 687–702.
Knutson, Brian 1996. Facial expressions of emotion influence interpersonal trait inferences. Journal of Nonverbal Behavior 20(3): 165–182.
Krebs, John R. and Richard Dawkins 1984. Animal signals: mind-reading and manipulation. In: John R. Krebs and Nicholas B. Davies (eds.), Behavioural Ecology: An Evolutionary Approach, Volume 2, 380–402. Oxford: Blackwell Scientific Publications.
Laird, James D. 1974. Self-attribution of emotion: The effects of expressive behavior on the quality of emotional experience. Journal of Personality and Social Psychology 29(4): 475–486.
Maynard Smith, John and David G. Harper 1995. Animal signals: models and terminology. Journal of Theoretical Biology 177(3): 305–311.
McIntosh, Daniel N. 1996. Facial feedback hypotheses: Evidence, implications, and directions. Motivation and Emotion 20(2): 121–147.
Mehu, Marc, Karl Grammer and Robin I. M. Dunbar 2007. Smiles when sharing. Evolution and Human Behavior 28(6): 415–422.
Owren, Michael J. and Jo-Anne Bachorowski 2003. Reconsidering the evolution of nonlinguistic communication: the case of laughter. Journal of Nonverbal Behavior 27(3): 183–200.
Papa, Anthony and George A. Bonanno 2008. Smiling in the face of adversity: the interpersonal and intrapersonal functions of smiling. Emotion 8(1): 1–12.
Parkinson, Brian 2005. Do facial movements express emotions or communicate motives? Personality and Social Psychology Review 9(4): 278–311.
Parr, Lisa A., Bridget M. Waller and Jennifer Fugate 2005. Emotional communication in primates: implications for neurobiology. Current Opinion in Neurobiology 15(6): 716–720.
Patterson, Miles L. 1982. A sequential functional model of nonverbal exchange. Psychological Review 89(3): 231–249.
Preuschoft, Signe and Jan A. R. A. M. van Hooff 1995. Homologizing primate facial displays: a critical review of methods. Folia Primatologica 65(3): 121–137.
Rendall, Drew, Michael J. Owren and Michael J. Ryan 2009. What do animal signals mean? Animal Behaviour 78(2): 233–240.
Rinn, William E. 1984. The neuropsychology of facial expression: a review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin 95(1): 52–77.
Sander, David, Jordan Grafman and Tiziana Zalla 2003. The human amygdala: An evolved system for relevance detection. Reviews in the Neurosciences 14(4): 303–316.
Scherer, Klaus R. 1992. What does facial expression express? In: Kenneth T. Strongman (ed.), International Review of Studies of Emotion, Volume 2, 139–165. Chichester, UK: Wiley.
Sorce, James F., Robert N. Emde, Joseph J. Campos and Mary D. Klinnert 1985. Maternal emotional signaling: Its effect on the visual cliff behavior of 1-year-olds. Developmental Psychology 21(1): 195–200.
Susskind, Joshua M., Daniel H. Lee, Andrée Cusi, Roman Feiman, Wojtek Grabski and Adam K. Anderson 2008. Expressing fear enhances sensory acquisition. Nature Neuroscience 11(7): 843–850.
Tinbergen, Niko 1963. On aims and methods of ethology. Zeitschrift für Tierpsychologie 20(4): 410–433.
Van Hooff, Jan A. R. A. M. 1967. The facial displays of the catarrhine monkeys and apes. In: Desmond Morris (ed.), Primate Ethology, 7–68. London: Weidenfeld and Nicolson.
Van Schaik, Carel P. 1983. Why are diurnal primates living in groups? Behaviour 87(1/2): 120–144.
West-Eberhard, Mary Jane 1992. Adaptation: current usages. In: Evelyn F. Keller and Elizabeth A. Lloyd (eds.), Keywords in Evolutionary Biology, 13–18. Cambridge, MA: Harvard University Press.
Wilson, Deirdre and Dan Sperber 2006. Relevance theory. In: Laurence R. Horn and Gregory Ward (eds.), Handbook of Pragmatics, 607–632. Oxford: Blackwell Publishing.
Zahavi, Amotz 1975. Mate selection: selection for a handicap. Journal of Theoretical Biology 53(1): 205–214.
Zivin, Gail 1985. Separating the issues in the study of expressive development: A framing chapter. In: Gail Zivin (ed.), The Development of Expressive Behavior: Biology-Environment Interactions, 3–25. Orlando, FL: Academic Press.

Marc Mehu, Geneva (Switzerland) and Vienna (Austria)

155. On the consequences of living without facial expression

1. Introduction
2. Experiences of people living with facial paralysis
3. Emotional consequences of facial paralysis
4. Psychological consequences of facial paralysis
5. Expressive behavior of people with facial paralysis
6. Social perception of people with impoverished facial expression
7. Misdiagnosing people with impoverished facial expression
8. Facilitating social interaction with facial paralysis
9. Conclusions
10. References

Abstract

Although the importance of the face in communication is well known, there has been little discussion of the ramifications for those who lack facial expression: individuals with facial paralysis, such as Bell's palsy and Möbius syndrome, and facial movement disorders, such as Parkinson's disease. By examining the challenges experienced by these individuals, this chapter not only highlights the importance of facial expression but also reveals the role of the rest of the body in emotional experience, communication, and interaction. First, the qualitative experiences and psychological adjustment of people with facial paralysis are examined; then applied and theoretical implications of facial paralysis for facial feedback theory, mimicry, and empathy are covered. Next, the tendency for people to form inaccurate impressions of the emotions and traits of people with facial paralysis is discussed. Some people with facial paralysis compensate for their lack of facial expression by increasing expressivity in their bodies and voices, and these compensatory expressions may improve impressions of them. Importantly, the potential risks of misdiagnosing people with facial paralysis and other facial movement disorders with psychological disorders such as autism, depression, or apathy are considered. The chapter concludes with ways to facilitate social interaction.

1. Introduction

The importance of facial expression in social interaction is well documented; it serves to communicate emotion, initiate and regulate the dynamics of conversation, develop rapport, and build social connectedness (Ekman 1986; Tickle-Degnen 2006). There is, however, little research on the consequences of impoverished facial expression. The face is often regarded as the most salient social communication channel, though we use it alongside other expressive channels, including the body and voice (Noller 1985). Examining the challenges experienced by people with facial paralysis or palsy not only highlights the importance of facial expression but, crucially, reveals something of the role of the rest of the body in emotional experience, communication, and interaction. This chapter describes the psychological and communicative consequences of facial paralysis, both for people with facial paralysis and for those interacting with them, and ways to facilitate interaction between the two.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1969–1982


1.1. Types of facial paralysis

Facial paralysis is a relatively common disorder with a variety of causes. Bleicher et al. (1996) estimated the incidence of facial paralysis to be 50 cases per 100,000. Facial paralysis can be congenital or acquired, unilateral or bilateral, and complete or incomplete. Acquired facial paralysis can result from a variety of causes, including idiopathic Bell's palsy, Guillain-Barré syndrome, sarcoidosis, Lyme disease, stroke, and damage to the facial nerve from neoplasms or trauma. Bell's palsy is the most common cause of facial paralysis, affecting 25 people per 100,000 annually (Bleicher et al. 1996). It is usually unilateral and temporary, typically resolving completely within six weeks, though approximately 16% of Bell's palsy cases do not recover, or recover incompletely (Peitersen 1992). In some cases, partial recovery is accompanied by synkinesis, an erroneous reinnervation of facial muscles, resulting in abnormal facial movements (e.g., eyelid closure with smiling) as well as facial tightness and pain.

Congenital facial paralysis may result from birth trauma (e.g., from forceps delivery) or prenatal maldevelopments (e.g., Möbius Syndrome or Hemifacial Microsomia). Estimates for the occurrence of congenital facial paralysis vary widely, from 2 to 8 per 1,000 births per year (Hughes et al. 1999). Birth trauma is the most common cause of congenital facial paralysis, at 2 per 1,000 births (Falco and Eriksson 1990). Möbius Syndrome is a congenital, non-progressive condition characterized by the underdevelopment of the 6th and 7th cranial nerves, resulting in facial paralysis that is most often severe and bilateral, and by impaired lateral movement of the eyes (Briegel 2006; Möbius 1888). It is properly considered a sequence rather than a syndrome, and people may have a wide variety of associated symptoms such as micrognathia and limb and chest wall malformations (Briegel 2006).
It is also a very rare condition, occurring in 2 per 100,000 births (Verzijl et al. 2003).

Parkinson's disease classically involves impaired movement initiation, rigidity, tremors, and postural instability (Birkmayer and Hornykiewicz 1961). It affects 17 per 100,000 people per year (Twelves, Perkins, and Counsell 2003). Unlike facial paralysis, expressivity in Parkinson's disease is reduced not only in the face but also in the body and voice, resulting in an expressive mask (Tickle-Degnen and Lyons 2004). People with Parkinson's disease and expressive masking are often thought to have become dull, boring, or depressed (Cole 1998; Tickle-Degnen and Lyons 2004).

1.2. Physical consequences of facial paralysis

Facial paralysis results in difficulties in physical functioning. During the first few weeks or months after birth, babies with facial paralysis experience feeding problems due to difficulty in sucking and swallowing (Verzijl et al. 2003). Later in life, individuals with facial paralysis may experience dry eyes due to insufficient eyelid closure; drooling of saliva or food while eating; and problems with the articulation of labial sounds (Sjögreen, Andersson-Norinder, and Jacobsson 2001), which often result in flaccid dysarthria, a speech disorder found in 20 out of 22 cases of Möbius Syndrome in one study (Meyerson and Foushee 1978). To compensate, labial sounds are often replaced by similar sounds produced by placing the tongue behind, against, or between the front teeth (Sjögreen, Andersson-Norinder, and Jacobsson 2001). Most children with facial paralysis experience delayed speech development, but, as Meyerson and Foushee (1978) noted, people with facial paralysis usually develop understandable speech with these compensations.

155. On the consequences of living without facial expression

1971

2. Experiences of people living with facial paralysis

Qualitative research on facial paralysis provides a rich background about the experiences of people with facial paralysis. In their book on living without facial expression, Cole and Spalding (2008) gave the biographies of a dozen or so people with Möbius Syndrome. One of the problems of such an approach is that people's experiences differ and are not quantifiable. Some themes, however, emerged. Several people who remembered their childhood felt an emotional disconnection when younger. One woman described how, as a child,

I did not do ballet or horse riding; I did hospitals and operations. I had the eye doctor and the foot doctor and a speech therapist, and a face doctor. My limitations were a fact of life. I never thought I was a person; I used to think I was a collection of bits. I thought I had all these different doctors to look after all the different bits. 'Celia' was not there; that was a name people called the collection of bits. (Cole and Spalding 2008: 14)

Another woman, now aged 40, who had become more expressive as a teenager and adult learned to use prosody and gesture more, and yet, she says, All my gesture is voluntary, even now. Everything I do, I think about… With Möbius you have to be so much more wordy and articulate and this requires intelligence and can be hard and tiring. For me the word is stronger than facial expression. Without the word, how to express the feeling? I am interested now in non-facial aspects; gesture and tone of voice. Gesture is part of language, is a language and people with Möbius do not always learn it; they must be taught. As you grow up the social feedback from others has far more meaning than as a child. A meaningful smile from you triggers an emotional response from me. As a teenager I was articulate but this was not sufficient. (Cole and Spalding 2008: 190)

Thus far we have focused on emotional expression and communication. But two other aspects of living with Möbius must be considered: education and relationships. The parents of one UK teenager, Gemma, were interviewed. Gemma is in the top stream of an excellent school and obviously very bright and motivated. Yet with her Möbius came severe visual, hearing, and speech problems. To reach her potential she has needed not only medical and audiological assistance but also speech and language therapists, special needs teachers, well-briefed mainstream teachers and, above all, the tireless advocacy of her parents for her to receive her rights. One hopes others will be as fortunate.

Lastly, intimacy may be difficult for someone who experiences the stigma of facial difference. Since attraction is often based on the face, initially at least, those with visible difference can be disadvantaged. One man with Möbius Syndrome, in his 50s, related: "The hardest thing for me to do in my whole life is to take the risk of being physical with a woman. I am petrified of the fear of being rejected. Between dates I was so wounded."

Despite the challenges of facial paralysis, many people with the condition are personally and professionally successful. Meyerson (2001) collected qualitative interviews from 18 such individuals with Möbius Syndrome. Participants' sources of strength included family support, faith, humor, sense of self, special skills, determination, and networking, a similar list to that of most people. Participants reported using eye contact to signal confidence and using gestures, vocal prosody, and verbal disclosure to communicate emotion. They reported that their friends and family were able to see the person behind

1972

IX. Embodiment the paralysis and recognize their emotions without difficulty; people can learn to focus on channels other than the face when interacting with people with facial paralysis. Bogart, Tickle-Degnen, and Joffe (2012) and Bogart (in press) conducted focus groups with adults and teenagers with Möbius Syndrome. Participants felt their social interactions were mostly positive but reported negative experiences resulting from social stigma and people misunderstanding their facial expressions or speech. Five factors influencing their social interaction experiences were resilience/sensitivity, social engagement/ disengagement, social support/stigma, being understood/misunderstood, and public awareness/lack of awareness of Möbius Syndrome. Participants reported using compensatory expressive strategies including the voice to convey emotion, gestures, touch to create personal closeness, clothing to express personality, and humor. Möbius Syndrome is a particularly stigmatizing disability because it involves an inexpressive face, giving the person the appearance of a lack of emotion or intelligence, leading to hesitancy in initiating interaction.

3. Emotional consequences of facial paralysis

Darwin ([1872] 1998) theorized that facial expression of emotion evolved in animals and humans due to its adaptive signaling value. A large body of research now suggests the existence of universal facial expressions of emotion (e.g., anger, contempt, disgust, fear, happiness, sadness, and surprise) produced and recognized across nearly all cultures (Ekman, Sorenson, and Friesen 1969). Consequently, facial paralysis leaves individuals unable to communicate using one of the only universal languages (Ekman 1986).

3.1. Facial feedback hypothesis

The facial feedback hypothesis contends that facial expression is necessary or sufficient to experience emotion (Izard 1971; Tomkins 1962, 1963). Several studies have suggested that facial expression is sufficient to generate or modulate emotional experience. In a classic study of this hypothesis, participants held pens in their mouths in a way that either inhibited or facilitated smiling (Strack, Martin, and Stepper 1988). When they read cartoons, the people whose smiles were facilitated found the cartoons funnier than those whose smiles were inhibited. In another study, people receiving facial Botox injections (resulting in temporary partial facial paralysis) viewed emotionally evocative video clips and rated their emotional reactions (Davis et al. 2010). Relative to controls, participants who received Botox did not differ in their emotional reactions to strongly negative or positive videos but showed a decrease in their emotional reactions to mildly positive stimuli. The authors concluded that facial feedback may subtly modulate emotional experience.

There has been less support for the strong version of the hypothesis, which holds that facial expression is necessary to feel emotion. Levenson and Ekman (in preparation) presented emotionally evocative videos to 10 individuals with Möbius Syndrome, and measured their physiological responses (e.g., galvanic skin response and heart rate) and self-reported emotional experience. Participants with Möbius Syndrome showed a normal pattern of physiological responses to the emotional stimuli and a normal intensity of emotional experience compared to controls. This study provides evidence that, for people with Möbius Syndrome, facial feedback is not necessary to experience emotion. People with congenital facial paralysis may have adapted to retain intact emotional experience without facial movement in ways that those with temporary loss of movement have not.

155. On the consequences of living without facial expression


3.2. Expressive mimicry

People naturally and automatically mimic each other’s facial expressions, body movements, and vocal attributes (Chartrand and Bargh 1999). According to one theory, the reverse simulation model of embodiment, people recognize facial expressions by implicitly mimicking observed expressions, in turn generating the corresponding emotional experience in the observer (Goldman and Sripada 2005). Researchers have attempted to inhibit participants’ facial movement (e.g., by having participants hold pens in their mouths) and found that this reduced their emotion recognition ability (Niedenthal et al. 2001; Oberman and Ramachandran 2007). However, these manipulations were potentially distracting, and studying people with facial paralysis seems a better test of this theory. Calder et al. (2000) studied 3 individuals with Möbius Syndrome and found they exhibited normal ability on a facial expression recognition task. In a more challenging task involving recognition of morphed expressions, one individual showed impairments. In a larger follow-up study, Bogart and Matsumoto (2010a) examined the ability of 37 people with Möbius Syndrome to recognize facial expressions compared to 37 age- and gender-matched controls. People with Möbius Syndrome did not differ from the control group or normative data in emotion recognition accuracy. Bate et al. (2013) recently investigated this further and found similar results to Calder’s earlier work, with small deficits in face processing in those with Möbius sequence. Any differences between studies in this area are likely to be methodological. Among people with Möbius Syndrome, facial mimicry is not necessary for facial expression recognition.
The difference in findings between the Möbius Syndrome studies and the artificially inhibited movement studies may reflect the potentially distracting manipulations used in the artificial studies, but it is also possible that people with Möbius Syndrome are able to perform normally because they have adapted to their condition. It would be interesting to compare the emotion experience and recognition abilities of people with congenital and acquired facial paralysis, but to our knowledge, there has only been one study of emotion in acquired facial paralysis (Keillor et al. 2002). No evidence for reduced emotional experience or facial expression recognition ability was found.

Mimicry is crucial for empathy; it helps people to understand another’s emotions, and it communicates that understanding. Embodiment theories propose that facial feedback from mimicry of others’ expressions generates emotion in the mimicker, resulting in emotional convergence (Goldman and Sripada 2005). As Merleau-Ponty (1964) suggested, “I live in the facial expression of the other, as I feel him living in mine.” This raises the possibility that empathy may be a challenge for people with facial paralysis to receive, convey, or experience, for reasons including stigmatization of facial difference, difficulty communicating emotion, and a possible difficulty recognizing and embodying others’ emotions (Cole 2001). We suggest here that the primary breakdown of empathy for those with facial paralysis is the inability of others to recognize the facial expressions of people with facial paralysis. The research described above provides evidence that people with facial paralysis have normal emotional experience and emotion recognition ability. People with facial paralysis are able to feel empathy, but, without facial expression, they will not appear empathetic.
Furthermore, it may be difficult for others to feel empathetic towards someone with facial paralysis, since there is no facial expression for them to mimic. In this way, facial paralysis can be a source of a major emotional disconnect.


3.3. Channels of expression

Although facial expression plays an important role in emotion and social interaction, people also use other expressive channels to communicate (Noller 1985), including the body (e.g., gestures, posture, proximity) and the voice (e.g., prosody, language). Usually, these channels play a supporting role to the face in expression; however, for people with facial paralysis, these may become their primary modes of expression, their compensatory expressive channels. Some people compensate for their facial paralysis by increasing their use of these channels (Bogart, Tickle-Degnen, and Ambady 2012). In fact, Cole and Spalding (2008) suggested that since people with Möbius Syndrome are unable to develop emotional embodiment through the face, they may compensate by “bootstrapping” their emotional embodiment with compensatory expression through gesture and voice.

4. Psychological consequences of facial paralysis

A paralyzed face is a disfigured face, both in motion and at rest. Facial paralysis results not only in an absence of appropriate facial expression but also in loss of muscle tone and wrinkling. Often, in acquired facial paralysis, the face may be asymmetrical, with expression evident on only one side of the face. The smile of a person with unilateral facial paralysis is often not recognizable as a smile; rather, it may resemble a sneer, and to reduce this, people may try to decrease their facial movement. Due to a lack of muscle tone, the face may sag, especially around the eyes and the mouth, giving the appearance of sadness when, ironically, no expression exists. Facial disfigurement is one of the most stigmatizing of disabilities (Macgregor 1990); studies of those with acquired or congenital disfiguring conditions have found unusually high rates of depression and anxiety (Rumsey et al. 2004). Those with congenital facial paralysis have lived their entire lives without facial expression; in contrast, people with acquired facial paralysis must relearn to communicate without their face and adjust to others’ changed reactions to them. Therefore, it is possible that people with congenital facial paralysis may be better adapted than people with acquired facial paralysis. There is a dearth of research on the psychological consequences of facial paralysis. Most studies have been small and have included people with a variety of types of facial paralysis. As acquired facial paralysis is far more common than congenital, research has been dominated by the former.

4.1. Studies of acquired facial paralysis

People with acquired facial paralysis have high levels of psychological distress such as anxiety and depression (Neely and Neufeld 1996; VanSwearingen and Brach 1996; VanSwearingen et al. 1998; VanSwearingen, Cohn, and Bajaj-Luthra 1999). VanSwearingen, Cohn, and Bajaj-Luthra (1999) found that a specific impairment in smiling in people with facial paralysis predicted depression, even when controlling for overall impairment and disability. It is unclear whether the relationship between impairment of smiling and depression resulted from an endogenous cause (i.e., a lack of facial feedback) or an exogenous cause (i.e., lack of positive social feedback from others).

4.2. Studies of congenital facial paralysis

Only a few studies have examined psychological adjustment in people with congenital facial paralysis (Bogart and Matsumoto 2010b; Briegel 2007,
2012; Briegel, Hofmann, and Schwab 2007; Briegel, Hofmann, and Schwab 2010). In the first study of psychopathology and personality aspects of subjects with Möbius Syndrome aged 17 years or older, Briegel (2007) examined 22 out of 29 adults known to the German Möbius Syndrome Foundation. Eight had a psychiatric diagnosis (predominantly major depression), and 6 participants had suicidal thoughts. According to the Derogatis Symptom Checklist Revised (Derogatis 1977), 7 of 20 subjects met criteria for a clinical case. Participants had a non-significant tendency towards greater depression and anxiety than the general population. Compared to the general population, subjects with Möbius Syndrome showed increased interpersonal sensitivity and inhibitedness. Their life satisfaction, achievement orientation, and extraversion were significantly reduced. The study suggested that adults with Möbius Syndrome and normal intelligence are at high risk of developing psychiatric disorders (especially major depression) and an introverted personality, likely because they experience more social rejection and fewer positive interactions.

In contrast, in the largest psychological study of Möbius Syndrome to date, Bogart and Matsumoto (2010b) examined self-reported measures of anxiety, depression, social functioning, and satisfaction with life (using the Hospital Anxiety and Depression Scale, Zigmond and Snaith 1983, and the Texas Social Behavior Inventory, Helmreich and Stapp 1974) in a US sample of 37 adults, compared to 37 age- and gender-matched control participants without facial paralysis, and to normative data. The only significant difference was that the Möbius group reported lower social functioning. People with Möbius Syndrome in this study did not show increased levels of depression or anxiety, or decreased satisfaction with life, compared to the general population.

Briegel et al. (2010) studied 31 children with Möbius Syndrome aged 4–17, using the Child Behavior Checklist (CBCL) 4–18 (Arbeitsgruppe Deutsche Child Behavior Checklist 1991). Parents reported frequent social problems (12.9% vs. 2% in the normative sample), especially among adolescents (25%) compared with children (5.3%). However, as another more recent study has shown, children with Möbius Syndrome rated their own social problems more positively than their caregivers did (Briegel 2012).
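The age- and gender-matched control design used in several of these studies (e.g., 37 cases each paired with a control) can be sketched as a simple greedy nearest-age pairing. This is an illustrative sketch only, not the procedure the authors report; the field names and the greedy matching rule are assumptions for the example:

```python
# Illustrative greedy matching: pair each case with the not-yet-used control
# of the same gender whose age is closest. Field names are hypothetical.

def match_controls(cases, pool):
    """Return (case, control) pairs, matching on gender and nearest age."""
    available = list(pool)
    pairs = []
    for case in cases:
        candidates = [c for c in available if c["gender"] == case["gender"]]
        best = min(candidates, key=lambda c: abs(c["age"] - case["age"]))
        available.remove(best)  # each control is used at most once
        pairs.append((case, best))
    return pairs
```

A greedy pass like this does not guarantee a globally optimal pairing; real studies may instead match within age bands or use optimal assignment.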

4.3. Interpreting these findings

In the above studies examining social functioning in Möbius Syndrome, difficulties were found (Bogart and Matsumoto 2010b; Briegel 2007, 2012). But while Briegel’s (2007) study showed a non-significant trend towards increased depression compared to normative data, Bogart and Matsumoto (2010b) found no differences between people with Möbius and a matched control group or normative data. These different results may reflect several factors, for example, differences in measures, culture, sample size, and caregiver vs. self-report. Bogart and Matsumoto’s (2010b) study was conducted with an American sample, while Briegel’s study was conducted with a German one. There are cultural differences in the stigma ascribed to disability and visible difference (Yang et al. 2007), and these may affect the adjustment of an individual living in a given culture. Bogart’s study also had a larger sample and a more robust design due to the inclusion of a matched control group. The range of adjustment found in the studies described shows that reactions to facial paralysis vary widely: some people have problems such as depression and anxiety, while others are quite resilient. Future research should examine further the sources of resilience in facial paralysis and consider ways of assisting those with problems in this area.


5. Expressive behavior of people with facial paralysis

Though social functioning problems seem common among people with congenital and acquired facial paralysis (Bogart and Matsumoto 2010b; Briegel 2007), they may adapt by using compensatory expressivity. In the first behavioral study of facial paralysis, Bogart, Tickle-Degnen, and Ambady (2012) examined whether people with congenital facial paralysis, who have been adapting to facial paralysis their entire lives, display more compensatory expressivity compared to those with acquired facial paralysis (onset averaging 12 years prior). People with facial paralysis were videotaped while interviewed about emotional events in their lives. During standardized points in the interview, their emotional language was analyzed using the Linguistic Inquiry and Word Count (Pennebaker, Booth, and Francis 2007) and their nonverbal expressivity was rated by trained coders. As hypothesized, people with congenital facial paralysis were more expressive in their bodies, voices, and emotional language. Indeed, during his interview, a man with Möbius Syndrome reported using compensatory expression: “The tone, the volume, the rate, the timbre of the voice, and body language, I use to supplement in ways that my face can’t provide […] I have a whole repertoire of laughs that I use to respond to different situations.”
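The dictionary-based word counting that underlies LIWC-style analysis of emotional language can be illustrated with a toy sketch. The word lists below are hypothetical stand-ins invented for the example, not the actual LIWC dictionaries (Pennebaker, Booth, and Francis 2007):

```python
# Toy dictionary-based emotion word counting in the spirit of LIWC: each
# category score is the percentage of all words that fall in that category's
# word list. The word lists here are invented for illustration only.
import re

EMOTION_WORDS = {
    "positive": {"happy", "glad", "love", "laugh", "hope"},
    "negative": {"sad", "afraid", "hurt", "angry", "lonely"},
}

def emotion_rates(transcript):
    """Return each category's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", transcript.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {category: 100.0 * sum(word in vocab for word in words) / total
            for category, vocab in EMOTION_WORDS.items()}
```

For the sentence “I was sad at first, but I laugh about it now and I am happy.”, this toy lexicon counts 2 of 15 words as positive and 1 of 15 as negative; the real LIWC applies far larger, validated dictionaries in the same percentage-of-words fashion.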

6. Social perception of people with impoverished facial expression

It is important to consider the way others perceive people with facial paralysis, as this is one of the main determinants of their social functioning. In everyday life, people form first impressions about others’ interpersonal attributes quickly and automatically (Ambady and Rosenthal 1992). When shown a short episode of behavioral information, such as a brief video clip (a “thin slice”), participants can make accurate judgments about a person’s emotions, personality, competence, and many other social outcomes (Ambady and Rosenthal 1992). People rely heavily on the face when forming these impressions, so when the signal quality of the face is poor due to facial paralysis or Parkinson’s disease, their impressions are inaccurate (Bogart, Tickle-Degnen, and Ambady in press; Tickle-Degnen and Lyons 2004). In fact, people with facial paralysis report being particularly concerned by strangers’ first impressions of them (Bogart in press; Bogart, Tickle-Degnen, and Joffe 2012). They reported being mistaken as sad, unfriendly, or even intellectually disabled.

The way people form impressions of people with facial paralysis and Parkinson’s disease has been examined experimentally using a thin-slice design involving videotaped interviews (Bogart, Tickle-Degnen, and Ambady in press; Hemmesch, Tickle-Degnen, and Zebrowitz 2009; Tickle-Degnen and Lyons 2004). Various social perceivers, including healthcare professionals, psychology undergraduates, and older adults, viewed clips as short as 20 seconds and rated their impressions of the people. When viewing more severe facial paralysis or expressive masking, perceivers were inaccurate and negatively biased in rating attributes such as emotion, likeability, and personality traits such as extraversion and neuroticism. People with facial paralysis, unlike people with Parkinson’s disease, can compensate for their lack of facial expression with their bodies and voices.
Bogart, Tickle-Degnen, and Ambady (in press) found that perceivers rated people with facial paralysis who used a high amount of compensatory expression more positively than those who used less, regardless of the severity of their facial paralysis. So, these behaviors can improve the
accuracy of perceivers’ impressions and reduce misunderstandings. Additionally, this suggests that perceivers integrate emotional information from various channels (e.g., face, body, voice) in a holistic manner, rather than focusing on only the face.

7. Misdiagnosing people with impoverished facial expression

One of the most serious consequences of the tendency to form inaccurate impressions of people with impoverished facial expression is the potential misdiagnosis of psychological conditions like intellectual disability, autism, depression, and apathy in these individuals. Flat affect may indicate depression in the typical population, but this cannot be used in people with facial paralysis or Parkinson’s disease. An unresponsive face and speech difficulties also put people at risk of being mistaken as intellectually disabled. If this occurs early on, it may result in different socialization and education, and subsequent disparities in future opportunities for children with facial paralysis. Researchers have found incidences of intellectual disability (which is usually mild) ranging between 0% (Ghabrial et al. 1998; Verzijl, Padberg, and Zwarts 2005) and 75% (Cronemberger et al. 2001). In spite of this large range, intellectual disability is usually estimated to occur in about 10–15% of individuals with Möbius Syndrome (Kuklik 2000; Johansson et al. 2001). In many studies, especially earlier ones, conclusions have not been based on standardized intelligence tests, whilst in others heterogeneous and non-equivalent tests have been used (Briegel 2006). Both Verzijl, Padberg, and Zwarts (2005) and Briegel et al. (2009), who found a 0–9% incidence of intellectual disability, pointed out that intelligence tests which are less dependent on time constraints should be preferred for subjects with Möbius Syndrome; otherwise neurological and physical disabilities could cause falsely low results. Similarly, researchers have found rates of autism in Möbius Syndrome ranging widely from 0% to 29% (Bandim et al. 2003; Briegel et al. 2009; Briegel et al. 2010; Gillberg and Steffenburg 1989; Johansson et al. 2001; Verzijl et al. 2003). There are several possible explanations for the wide range of reported incidence.
Generally, diagnosing autistic disorders in Möbius patients is very challenging (Briegel 2006). Möbius Syndrome can impose social interaction difficulties which may be mistaken for symptoms of autism, including impaired facial expression, impaired eye-to-eye gaze, difficulty in developing peer relationships, and lack of social or emotional reciprocity. Other diagnostic difficulties result from developmental delays, especially speech and language delays, and, most of all, concomitant intellectual disability. The Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association 2000) lists three major categories of diagnostic criteria for autism: impairment in social interaction, delayed speech, and restricted repetitive stereotyped behaviour. The first two might, at least in part, be accounted for by the physical symptoms of Möbius Syndrome. Therefore, the younger the patient, the more difficult it is to make a diagnosis of autism spectrum disorder (Briegel 2006). Additionally, there are methodological problems in several studies: lack of information about the diagnostic instruments used (Verzijl et al. 2003) and, most of all, overrepresentation of intellectual disability (Bandim et al. 2003; Briegel 2006; Gillberg and Steffenburg 1989). In the most recent and methodologically strongest study, with 22 participants aged 6–16 years who all underwent physical and psychological examination, none of the participants fulfilled diagnostic criteria for autism spectrum disorder at a clinical consensus conference, indicating that Möbius Syndrome is less frequently associated with autism spectrum disorder than formerly thought (Briegel et al. 2010). Across six studies worldwide, including a total of 132 Möbius patients, a secure diagnosis of autism spectrum disorder has been made in 18 patients, and 17 of the 18 had intellectual disabilities. Therefore, Briegel (2006) concluded that only one association could undoubtedly be shown: the already well-known association of autism with intellectual disability.

Cole (1998) warned of the potential for misdiagnosing people with Parkinson’s disease due to their expressive masking. Nearly 30 studies have reported high rates of apathy, a symptom or syndrome characterized by a lack of motivation or goal-seeking behaviour, in people with Parkinson’s disease. In a review of this research, Bogart (2011) suggested that people with Parkinson’s disease are likely to be misperceived as apathetic due to, among other reasons, their expressive masking symptoms.

We caution readers to avoid the tendency to view people with impoverished facial expression as having psychological disorders. When diagnosing individuals, clinicians and researchers should rely on information other than the face, such as the body and voice (in facial paralysis) and the content of the person’s speech. Caution should be used when diagnosing young children with facial paralysis with these conditions. Because of the symptoms associated with facial paralysis, they should be allowed more time to reach developmental milestones.

8. Facilitating social interaction with facial paralysis

We have shown that though there are serious social consequences of facial paralysis, people can compensate for their lack of facial expression, and people interacting with those with facial paralysis can look beyond the face to perceive expression holistically. Social functioning can be facilitated by encouraging those with facial paralysis to use compensatory expression, particularly children, who may not have developed these adaptations yet, or people who have recently acquired facial paralysis, and by training social perceivers to focus on these expressive channels. Bogart, Tickle-Degnen, and Ambady (in press) conducted a pilot study in which perceivers were trained to look beyond the expressive mask of Parkinson’s disease to focus on the content of the person’s speech. The results were promising: after training, perceivers’ impressions of the personalities of people with Parkinson’s disease were more positive. Such training may be particularly useful for family members, teachers, and healthcare practitioners of people with these conditions.

9. Conclusions

Throughout this chapter, we have presented evidence suggesting that the primary barriers to social functioning with facial paralysis are others’ difficulty recognizing the expressions of people with facial paralysis and the stigma associated with the condition. Some people with facial paralysis may need help to develop ways of managing others’ responses to them, whilst those who interact with people with facial paralysis need to tune into those other cues. Greater public awareness of facial paralysis may help reduce stigma and people’s hesitancy to interact with those who have it. More broadly, this chapter has highlighted the importance of the face for emotional communication, empathy, and social connectedness, while demonstrating the role of the whole body in communication and social perception.


10. References

Ambady, Nalini and Robert Rosenthal 1992. Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. Psychological Bulletin 111(2): 256–274.
American Psychiatric Association 2000. Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision. Washington, D.C.: American Psychiatric Association.
Arbeitsgruppe Deutsche Child Behavior Checklist 1998. Elternfragebogen über das Verhalten von Kindern und Jugendlichen: deutsche Bearbeitung der Child Behavior Checklist (CBCL/4–18). Einführung und Anleitung zur Handauswertung. 2. Auflage mit deutschen Normen bearbeitet von Manfred Döpfner, Julia Plück, Sven Bölte, Klaus Lenz, Peter Melchers and Klaus Heim. Köln: Arbeitsgruppe Kinder-, Jugend- und Familiendiagnostik (KJFD).
Bandim, Jose, Liana Ventura, Marilyn Miller, Henderson Almeida and Ana Costa 2003. Autism and Mobius sequence: An exploratory study of children in northeastern Brazil. Arquivos de Neuro-Psiquiatria 61(2a): 181–185.
Bate, Sarah, Sarah J. Cook, Joseph Mole and Jonathan Cole 2013. First report of generalized face processing difficulties in Mobius sequence. PLoS ONE 8(4): e62656.
Birkmayer, Walter and Oleh Hornykiewicz 1961. The L-3,4-dioxyphenylalanine (DOPA) effect in Parkinson-akinesia. Wiener Klinische Wochenschrift 73: 787–788.
Bleicher, Joel, Steve Hamiel, Jon Gengler and Jeff Antimarino 1996. A survey of facial paralysis: Etiology and incidence. Ear, Nose and Throat Journal 75(6): 355–358.
Bogart, Kathleen Rives, Linda Tickle-Degnen and Nalini Ambady 2012. Compensatory expressive behavior for facial paralysis: Adaptation to congenital or acquired disability. Rehabilitation Psychology 57(1): 43–51.
Bogart, Kathleen Rives, Linda Tickle-Degnen and Matthew Joffe 2012. Social interaction experiences of adults with Moebius syndrome: A focus group. Journal of Health Psychology 17(8): 1212–1222.
Bogart, Kathleen Rives 2011.
Is apathy a valid and meaningful symptom or syndrome in Parkinson’s disease? A critical review. Health Psychology 30(4): 386–400.
Bogart, Kathleen Rives, Linda Tickle-Degnen and Nalini Ambady in press. Communicating without the face: Holistic perception of emotions of people with facial paralysis. Basic and Applied Social Psychology.
Bogart, Kathleen Rives in press. “People are all about appearances”: A focus group of teenagers with Moebius syndrome. Journal of Health Psychology.
Bogart, Kathleen Rives and David Matsumoto 2010a. Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome. Social Neuroscience 5(2): 241–251.
Bogart, Kathleen Rives and David Matsumoto 2010b. Living with Moebius syndrome: Adjustment, social competence, and satisfaction with life. Cleft Palate-Craniofacial Journal 47(2): 134–142.
Briegel, Wolfgang 2006. Neuropsychiatric findings of Mobius sequence: A review. Clinical Genetics 70(2): 91–97.
Briegel, Wolfgang 2007. Psychopathology and personality aspects of adults with Mobius sequence. Clinical Genetics 71(4): 376–377.
Briegel, Wolfgang 2012. Self-perception of children and adolescents with Möbius sequence. Research in Developmental Disabilities 33(1): 54–59.
Briegel, Wolfgang, Christina Hofmann and K. Otfried Schwab 2007. Moebius sequence: Behaviour problems of preschool children and parental stress. Genetic Counseling 18(3): 267–275.
Briegel, Wolfgang, Martina Schimek, Inge Kamp-Becker, Christina Hofmann and K. Otfried Schwab 2009. Autism spectrum disorders in children and adolescents with Moebius sequence. European Child and Adolescent Psychiatry 18(8): 515–519.
Briegel, Wolfgang, Martina Schimek, Dirk Knapp, Roman Holderbach, Patricia Wenzel and Eva-Maria Knapp 2009. Cognitive evaluation in children and adolescents with Mobius sequence. Child: Care, Health and Development 35(5): 650–655.


Briegel, Wolfgang, Christina Hofmann and K. Otfried Schwab 2010. Behaviour problems of patients with Moebius sequence and parental stress. Journal of Paediatrics and Child Health 46(4): 144–148.
Briegel, Wolfgang, Martina Schimek and Inge Kamp-Becker 2010. Moebius sequence and autism spectrum disorders – less frequently associated than formerly thought. Research in Developmental Disabilities 31(6): 1462–1466.
Calder, Andrew, Jill Keane, Facundo Manes, Nagui Antoun and Andrew Young 2000. Impaired recognition and experience of disgust following brain injury. Nature Neuroscience 3(11): 1077–1078.
Chartrand, Tanya and John Bargh 1999. The chameleon effect: The perception-behavior link and social interaction. Journal of Personality and Social Psychology 76(6): 893–910.
Cole, Jonathan 1998. About Face. Massachusetts: The Massachusetts Institute of Technology Press.
Cole, Jonathan 2001. Empathy needs a face. Journal of Consciousness Studies 8(5–7): 51–68.
Cole, Jonathan and Henrietta Spalding 2008. The Invisible Smile. Oxford/New York: Oxford University Press.
Cronemberger, Monica F., Jose Belmiro de Castro Moreira, Decio Brunoni, Tomas Scalamandre Mendoca, Eliezia H. De Lima Alvarenga, Ana Maria Pfeiffer Pereira Rizzo and Sandra Maria Martins Diogo 2001. Ocular and clinical manifestations of Möbius’ Syndrome. Journal of Pediatric Ophthalmology and Strabismus 38(3): 156–162.
Darwin, Charles 1998. The Expression of Emotion in Man and Animals. New York: Oxford University Press. First published [1872].
Davis, Joshua, Ann Senghas, Fredric Brandt and Kevin Ochsner 2010. The effects of BOTOX injections on emotional experience. Emotion 10(3): 433–440.
Derogatis, Leonard 1977. SCL-90-R: Administration, Scoring and Procedures Manual-I for the R(evised) Version. Baltimore: Johns Hopkins University School of Medicine.
Ekman, Paul, E. Richard Sorenson and Wallace V. Friesen 1969. Pan-cultural elements in facial displays of emotion. Science 164(3875): 86–88.
Ekman, Paul 1986.
Psychosocial aspects of facial paralysis. In: Mark May (ed.), The Facial Nerve. New York: Thieme.
Falco, N. and E. Eriksson 1990. Facial nerve palsy in the newborn: Incidence and outcome. Plastic and Reconstructive Surgery 85(1): 1–4.
Ghabrial, Raf, Georgina Kourt, Peter A. Lipson and Stephen F. Martin 1998. Möbius’ Syndrome: Features and etiology. Journal of Pediatric Ophthalmology and Strabismus 35(6): 304–311.
Gillberg, Christopher and Suzanne Steffenburg 1989. Autistic behaviour in Moebius Syndrome. Acta Paediatrica Scandinavica 78: 314–316.
Goldman, Alvin I. and Chandra Sripada 2005. Simulationist models of face-based emotion recognition. Cognition 94(3): 193–213.
Helmreich, Robert and Joy Stapp 1974. Short forms of the Texas Social Behavior Inventory (TSBI), an objective measure of self-esteem. Bulletin of the Psychonomic Society 4(5A): 473–475.
Hemmesch, Amanda R., Linda Tickle-Degnen and Leslie A. Zebrowitz 2009. The influence of facial masking and sex on older adults’ impressions of individuals with Parkinson’s disease. Psychology and Aging 24(3): 542–549.
Hughes, C. Anthony, Earl H. Harley, Gregory Milmoe, Rupa Bala and Andrew Martorella 1999. Birth trauma in the head and neck. Archives of Otolaryngology – Head and Neck Surgery 125(2): 193.
Izard, Carroll E. 1971. The Face of Emotion. New York: Appleton-Century-Crofts.
Johansson, Maria, Elisabet Wentz, Elisabeth Fernell, Kerstin Stromland, Marilyn Miller and Christopher Gillberg 2001. Autistic spectrum disorders in Mobius sequence: A comprehensive study of 25 individuals. Developmental Medicine and Child Neurology 43(5): 338–345.
Keillor, Jocelyn, Anna Barrett, Gregory Crucian, Sarah Kortenkamp and Kenneth Heilman 2002. Emotional experience and perception in the absence of facial feedback. Journal of the International Neuropsychological Society 8(1): 130–135.

155. On the consequences of living without facial expression Kuklik, Miloslav 2000. Poland ⫺ Mobius Syndrome and disruption spectrum affecting the face and extremities: a review paper and presentation of five cases. Acta Chirurgiae Plasticae 42(3): 95⫺103. Levenson, Robert and Paul Ekman 2011. Emotional experience of individuals with Moebius Syndrome. Manuscript in preparation. Macgregor, Frances Cooke 1990. Facial disfigurement: Problems and management of social interaction and implications for mental health. Aesthetic Plastic Surgery 14(1): 249⫺257. Merleau-Ponty, Maurice 1964. The Primacy of Perception. Evanston, IL: Northwestern University Press. Meyerson, Marion and David R. Foushee 1978. Speech, language and hearing in Moebius Syndrome: a study of 22 patients. Developmental Medicine and Child Neurology 20(3): 357⫺365. Meyerson, Marion 2001. Resiliency and success in adults with Moebius Syndrome. Cleft Palate ⫺ Craniofacial Journal 38(3): 231⫺235. Möbius, Paul Julius 1888. Über angeborene doppelseitige Abducens-Facialis-Lähmung. Münchener medizinische Wochenschrift 35: 91⫺94. Neely, J. Gail and Peggy S. Neufeld 1996. Defining functional limitation, disability, and societal limitations in patients with facial paresis: initial pilot questionnaire. Otology and Neurotology 17(2): 340⫺342. ˚ se H. Innes-Ker 2001. When Niedenthal, Paula M., Markus Brauer, Jamin B. Halberstadt and A did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cognition and Emotion 15(6): 853⫺864. Noller, Patricia 1985. Video primacy ⫺ A further look. Journal of Nonverbal Behavior 9(1): 28⫺47. Oberman, Lindsay and Vilayanur Ramachandran 2007. The simulating social mind: The role of the mirror neuron system and simulation in the social and communicative deficits of autism spectrum disorders. Psychological Bulletin 133(2): 310⫺327. Peitersen, Erik 1992. Natural history of Bell’s palsy. 
Acta Oto-Laryngologica 112(S492): 122⫺124. Pennebaker, James, Roger J. Booth and Martha E. Francis 2007. LIWC2007 Operator’s Manual. Austin, TX: LIWC.net. Rumsey, Nichola, Alex Clarke, Paul White, Menna Wyn-Williams and Wendy Garlick 2004. Altered body image: appearance-related concerns of people with visible disfigurement. Journal of Advanced Nursing 48(5): 443⫺453. Strack, Fritz, Leonard L. Martin and Sabine Stepper 1988. Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. Journal of Personality and Social Psychology 54(5): 768⫺777. Sjögreen, Lotta, Jan Andersson-Norinder, and Catharina Jacobsson 2001. Development of speech, feeding, eating, and facial expression in Mobius sequence. International Journal of Pediatric Otorhinolaryngology 60(3): 197⫺204. Tickle-Degnen, Linda and Kathleen Doyle Lyons 2004. Practitioners’ impressions of patients with Parkinson’s disease: The social ecology of the expressive mask. Social Science and Medicine 58(3): 603⫺614. Tickle-Degnen, Linda 2006. Nonverbal behavior and its functions in the ecosystem of rapport. In: Valerie Manusov and Miles Patterson (eds.), The SAGE Handbook of Nonverbal Communication. Thousand Oaks, CA: Sage. Tomkins, Silvan S. 1962. Affect Imagery Consciousnes, Volume 1: The positive affects. Oxford, UK: Springer. Tomkins, Silvan S. 1963. Affect Imagery Consciousness, Volume 2: The negative affects. New York: Tavistock/Routledge. Twelves, Dominique, Kate S.M. Perkins and Carl Counsell 2003. Systematic review of incidence studies of Parkinson’s disease. Movement Disorders 18(1): 19⫺31. VanSwearingen, Jessie M. and Jennifer S. Brach 1996. The Facial Disability Index: reliability and validity of a disability assessment instrument for disorders of the facial neuromuscular system. Physical Therapy 76(12): 1288⫺1298.

1981

1982

IX. Embodiment VanSwearingen, Jessie M., Jeffrey F. Cohn and Anu Bajaj-Luthra 1999. Specific impairment of smiling increases the severity of depressive symptoms in patients with facial neuromuscular disorders. Aesthetic Plastic Surgery 23(6): 416⫺423. VanSwearingen, Jessie M., Jeffrey F. Cohn, Joanne Turnbull, Todd Mrzai, and Peter Johnson 1998. Psychological distress: linking impairment with disability in facial neuromotor disorders. Otolaryngology-Head and Neck Surgery 118(6): 790⫺796. Verzijl, Harrie¨tte T.F.M., Bert van der Zwaag, Johannes R.M. Cruysberg and George W. Padberg 2003. Möebius Syndrome redefined. Neurology 61(3): 327⫺333. Verzijl, Harrie¨tte T.F.M., George W. Padberg and Machiel J. Zwarts 2005. The spectrum of Mobius Syndrome: an electrophysiological study. Brain 128(7): 1728⫺1736. Yang, Lawrence H., Arthur Kleinman, Bruce G. Link, Jo C. Phelan, Sing Lee and Byron Good 2007. Culture and stigma: adding moral experience to stigma theory. Social Science and Medicine 64(7): 1524⫺1535. Zigmond, Anthony S. and R. Philip Snaith 1983. The Hospital Anxiety and Depression Scale. Acta Paediatrica Scandinavica 67(6): 361⫺370.

Kathleen Rives Bogart, Corvallis, OR (USA) Jonathan Cole, Bournemouth and Poole (UK) Wolfgang Briegel, Schweinfurt (Germany)

IX. Embodiment

156. Multimodal forms of expressing emotions: The case of interjections

1. Definition of interjections
2. Properties of interjections
3. Verbal, nonverbal, and bodily aspects
4. Acquisition in first language
5. Origin and diachronic development of interjections
6. References

Abstract

In their prototypical sense interjections are semi-automatic utterances providing an insight into the speaker’s emotional state of mind. Classifications of interjections are based on form (primary vs. secondary interjections like Ah! vs. I see!) and function (emotive, cognitive, conative, and phatic). Importantly, interjections vary regarding their degree of expressivity: emotive interjections like Ow! are the most expressive ones, followed by cognitive (Ah!) and conative interjections (Shh!), while phatic interjections (mhm) barely show any expressivity at all. Primary interjections are sometimes difficult to handle in linguistic terms because of their properties; they are closely linked to gestures, often contain non-speech sounds (Ugh!), may violate phonotactic constraints (mhm), are non-referential yet display intricate semantic structures, and correspond to full sentences on the syntactic level. Although their origin is not well investigated due to their oral nature, four sources can be distinguished: (1) body reflexes (Brr!), (2) onomatopoetic structures (Shh!), (3) loans (Ouch!), and (4) development from secondary to primary interjections (May God blind me! > Corblimey! > Blimey!/Cor!). Regarding first language acquisition it seems as if arbitrary interjections like Yuck! are acquired through input, while “natural” interjections like Ugh! might not need to be learned after all.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1982–1989

1. Definition of interjections

In their prototypical sense interjections are spontaneous expressions providing an insight into the speaker’s emotional state of mind. They are per definitionem (see Latin interiacere ‘throw in between’) a part of spoken language and practically never occur in writing, except to give a natural impression of spoken language (e.g., in comic strips, dramas, etc.). Ameka distinguishes three classes of interjections: expressive, conative, and phatic. Expressive interjections are “symptoms of the speaker’s mental state” (Ameka 1992a: 113) and can be further divided into emotive and cognitive interjections. The latter serve to signal cognitive processes of comprehension (Ah!, Aha!). The former, on the other hand, are what we would like to call prototypical interjections. Their primary purpose is the spontaneous expression of strong and subjective emotionality (Nübling 2004: 17). They represent an immediate reaction to, or some sort of affective comment on, a verbal or non-verbal event. Prototypical interjections have an “I feel” component in their semantic structure: Ow! “I feel pain”, Yuck! “I feel revolted”, Brr! “I feel cold”, etc. According to Goffman, they equal a “natural overflowing, a flooding up of previously contained feeling, a bursting of normal restraints, a case of being caught off guard” (Goffman 1978: 800). These interjections do not need an addressee. Hence, speakers can, and in fact do, produce them even when they are all by themselves. Conative interjections, on the other hand, aim at getting a reaction from the listener (Ameka 1992b), so the addressee is indispensable. The utterances Shh! “I want you to be silent” and Psst! “I want your attention so I can talk to you confidentially” only make sense in social contexts. Phatic interjections are used in discourse as a means of backchanneling (mhm, hm, etc.)
(Ameka 1992b) and although they are “thrown in between” the interlocutor’s utterances to reassure him of one’s attention and that one is following the argument, they are not exclaimed and have no emotive content. The presentation of the different kinds of interjections illustrates an important aspect: they vary not only regarding their function, but also regarding their expressivity. Placed on a continuum of expressivity, emotive interjections are the most expressive ones, followed by cognitive and – far behind – conative interjections. Phatic interjections then represent the other end of the continuum, barely showing any expressivity at all. Importantly, even though only prototypical interjections express the speaker’s emotional state, the other interjections, too, express something about the speakers’ state of mind: whether they finally understand something, whether they want other people to be silent or whether they signal that they are paying attention to what is being said, etc. Tesnière also included onomatopoeia when discussing interjections (1976: 99). However, as these are representations of sounds in the outside world (Oink!, Boom!, Woosh!, etc.) rather than expressions of the speaker’s state of mind, we do not classify them as interjections. Furthermore, interjections are not only categorised according to their function but also according to their origin: in very simple terms, primary interjections are those interjections
that, strictly speaking, are not used otherwise (Ow!, Whoops!, Yuck!, etc.), whereas secondary interjections also have an independent semantic value (Oh my God!, Oh dear!, I see!, etc. → may God help us, my dear Susie, he saw you yesterday). This is a straightforward distinction, but sometimes speakers blur the boundaries by turning a primary interjection into a lexical word: I’ll give you ow!, This is yucky!, That one ouched, etc. (examples from transcripts of the Manchester and Wells corpora in the CHILDES database). In the ensuing discussion of the functional properties of interjections, we will focus on prototypical (i.e., emotive) interjections. The formal features presented apply to all of the classes of interjections introduced above.
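The two cross-cutting classifications sketched in this section (function crossed with origin, ordered on the expressivity continuum) can be made concrete in a small data structure. The following Python snippet is a purely illustrative sketch: the numeric expressivity ranks and the sample lexicon are our own assumptions for demonstration, not values drawn from the literature.

```python
# Illustrative inventory combining the two classifications discussed above:
# function (emotive, cognitive, conative, phatic) and origin (primary vs.
# secondary). The ranks encode the continuum emotive > cognitive > conative
# > phatic; the sample entries are assumptions for demonstration only.
EXPRESSIVITY = {"emotive": 3, "cognitive": 2, "conative": 1, "phatic": 0}

INTERJECTIONS = [
    {"form": "Ow!",    "function": "emotive",   "origin": "primary"},
    {"form": "Ah!",    "function": "cognitive", "origin": "primary"},
    {"form": "Shh!",   "function": "conative",  "origin": "primary"},
    {"form": "mhm",    "function": "phatic",    "origin": "primary"},
    {"form": "I see!", "function": "cognitive", "origin": "secondary"},
]

def by_expressivity(items):
    """Sort interjections from most to least expressive."""
    return sorted(items, key=lambda i: EXPRESSIVITY[i["function"]], reverse=True)

print([i["form"] for i in by_expressivity(INTERJECTIONS)])
# → ['Ow!', 'Ah!', 'I see!', 'Shh!', 'mhm']
```

Sorting by rank reproduces the continuum described above, with emotive forms first and phatic forms last.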

2. Properties of interjections

2.1. Functions

Prototypical interjections primarily serve the purpose of expressing strong and subjective emotionality. These emotions can vary from astonishment (Wow!) to joy (Whoopee!) and relief (Phew!), perplexity (Oh!) and wonder (Ah!), surprise (Whoops!, Oh!), even to contempt (Tut-tut!) and rejection (No!), from fear (Aah!) to revulsion (Ugh!) to pain (Ouch!) and the sensation of cold (Brr!) (see Nübling 2004: 17). In their prototypical sense interjections are produced semi-automatically, as reflexes to certain circumstances. So when (not seriously) hurting ourselves we usually cry Ow! or Ouch!, and when a minor mishap occurs we are likely to say Whoops! or Oopsie!, for instance. When using interjections in social contexts, we give voice to our emotions, and the listener gets an impression of our state of mind, but we do not talk about it as such. Interjections are non-referential as they do not allow discourse about third parties. They only make sense in the context in which speakers produce them, and they only make sense to the ego and the alter ego. Due to their sudden, reflex-like production, interjections, strictly speaking, do not count as messages. They can, however, have illocutionary functions. Being equivalents to full sentences, they are highly pragmatic units expressing basic emotions (Nübling 2004: 20). Fries, on the other hand, claims that interjections have no communicative purpose:

All these expressions seem to be spontaneous reactions to a situation suddenly confronting the speaker. […] They may, of course, be overheard by a listener and the hearer gains some impression of the kind of situation to which the speaker is reacting. […] These forms […] are not used to elicit regular responses from those who hear them. Their purpose is not communicative. (Fries 1952: 53)

He is right in saying that prototypical interjections are not designed to get a response from a listener. Nevertheless, their semantic content is highly communicative: if an elderly woman were to slip on an icy pavement and fall down, crying out Ow! automatically, this tells potential bystanders that the woman has hurt herself and might need some help. Furthermore, interjections can also have a face-saving function in social contexts, as is the case for Whoops! and its variants:

Oops! defines the event as a mere accident, shows we know it has happened, and hopefully insulates it from the rest of our behaviour – indicating that failure of control was not generated by some obscure intent unfamiliar to humanity, or some general defect in competence. (Goffman 1978: 801)


In the preceding section we have shown that interjections can be classified according to their function (emotive, cognitive, phatic, conative). These categories, however, are not always straightforward. An interjection might have multiple functions in a context and, accordingly, multiple categorizations: in the quote by Goffman, Oops! contains a conative element in its semantic structure, namely “I want you to know that this was just a minor mishap and that I’m normally able to do this properly”. In the same way, phatic interjections might also be cognitive when signaling that one is following the discourse, as this would not be possible without cognitive processing.

2.2. Form

Interjections display a number of typical characteristics. Firstly, in the case of primary interjections, they are usually very short (exception: Whoopsadaisy!) and always monomorphemic. Hence, they cannot be divided any further into smaller meaningful units. Secondly, they are equivalents to full sentences (Ow! typically means “I feel pain” and Ugh! “I feel revolted”, for instance). Thirdly, they are syntactically independent and for this reason always context-bound. This means that they generally occur in isolation from the other speech material and do not form an integral part of sentences or clauses. In order to infer what an interjection means, contextual knowledge is imperative unless speakers verbalize the cause for its production (e.g., “Ouch! I bumped my head!”, “Ugh! I hate spiders!”). As a result of their syntactic independence, they are not subject to derivational or inflectional processes. Finally, interjections can, but need not, defy the phonological and phonotactic rules of a language. The use of dental clicks in Tut-tut!, for instance, is uncommon in the English language, as is the use of non-English sounds like [x, χ] in the possible realizations of Ugh! (see Wells 2008: 352). Furthermore, English words require a nucleus, but strictly speaking there is none in interjections such as Tut-tut! (pronounced with two dental clicks), mhm, and hm. These characteristics are the reason why it is problematic to classify interjections as proper words. They display a number of unusual characteristics, thus eluding straightforward linguistic classification.
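The phonotactic point just made (that forms like mhm, hm, and Pst! lack the vowel nucleus English words otherwise require) can be approximated mechanically. The sketch below is hedged in an obvious way: it treats spelling as a crude proxy for pronunciation, so it would misjudge forms like Tut-tut!, whose orthographic vowels correspond to dental clicks.

```python
# Crude orthographic test for a vowel nucleus. A real phonotactic check
# would need a phonemic transcription; spelling is only a rough proxy
# (e.g., "tut-tut" has vowel letters but is pronounced with clicks).
VOWEL_LETTERS = set("aeiouy")

def has_nucleus(form: str) -> bool:
    """Return True if the written form contains any vowel letter."""
    return any(ch in VOWEL_LETTERS for ch in form.lower())

# Ordinary words pass; the nucleus-less interjections cited above do not.
for form in ["ouch", "yuck", "mhm", "hm", "pst", "shh"]:
    print(form, has_nucleus(form))  # ouch/yuck → True; mhm, hm, pst, shh → False
```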

3. Verbal, nonverbal, and bodily aspects

The fact that there is a close link between gestures or body movements and interjections is indisputable (Ameka 1992a; Kowal and O’Connell 2004; Nübling 2004). In The Expression of the Emotions in Man and Animals Darwin gives a detailed account of the relation between interjections and bodily aspects:

As the sensation of disgust primarily arises in connection with the act of eating or tasting something, it is natural that its expression should consist chiefly in movements around the mouth. […] With respect to the face, moderate disgust is exhibited in various ways: by the mouth being widely opened, as if to let an offensive morsel drop out; by spitting; by blowing out of the protruded lips; or by a sound as of clearing the throat. Such guttural sounds are written ach or ugh; and their utterance is sometimes accompanied by a shudder […]. (Darwin 1872: 258; our emphasis)

The sounds produced in these circumstances can indeed resemble the English interjections of disgust, namely Phew! and Ugh!. Later on in the same work Darwin notes the following on the expression of surprise:

[…] whenever astonishment, surprise or amazement is felt […], our mouths are generally opened; yet the lips are often a little protruded […] As a strong expiration naturally follows the deep inspiration which accompanies the first sense of startled surprise, and as the lips are often protruded, the various sounds which are then commonly uttered can apparently be accounted for […] One of the commonest sounds is a deep Oh; and this would naturally follow […] from the mouth being moderately opened and the lips protruded. (Darwin 1872: 284–285)

These bodily aspects linked to emotions are not language-specific because they are biological, hence universal: “toutes [interjections] tiennent immédiatement à la fabrique générale de la machine organique et au sentiment de la nature humaine, qui est partout le même dans les grands et premiers mouvements corporels” [“all (interjections) derive directly from the general make-up of the organic machine and from the sentiment of human nature, which is everywhere the same in the great and primary bodily movements”] (Beauzée [1767] 1974). Even though only some emotive interjections may be regarded as renderings of body reflexes (Ugh!, Brr!, Phew!), they are all accompanied by gestures. When surprised, people tend to widen their eyes and put their hand to their mouth, and the face displays a number of expressions when speakers feel revolted: they may wrinkle up or pinch their nose, screw up their face, cover mouth and nose with their hand, poke out their tongue, and avert their face or even their entire body from the disgusting stimulus. When hurting ourselves, we automatically touch the part of the body that aches, usually rubbing it to ease the pain. When feeling cold, we frequently wrap our arms around ourselves, rubbing our hands on our upper arms to feel warmer. Considering that emotive interjections are spontaneous displays of the speaker’s state of mind and that humans invariably express emotions mainly through body language, it makes perfect sense that there is such a close relation between the utterance of interjections and the body. While emotion can be expressed through body language without the use of interjections, the utterance of emotive interjections is always accompanied by gestures.

4. Acquisition in first language

The question of how children acquire interjections has not yet been sufficiently addressed. Asano conducted a study on the acquisition of Oops!, Ouch!, and Yuck! in early childhood in the late nineties using three American English corpora from the CHILDES database. Her results suggested that children seem to use these interjections in a manner different from adults and that children extend the usage of interjections as they grow older (Asano 1997: 14). According to her findings, the order of acquisition is possibly Oops! – Yuck! – Ouch!. In 2008 Stange investigated the acquisition of Ow! and Ouch!, Ugh!, Yuck!, and Phew! as well as Whoops! and Whoopsadaisy! plus their variants in early childhood using the Wells corpus from the CHILDES database. These were British English data, and her results showed that Ugh!, Oops!, and Ow! were already acquired by the age of 1;5 and that the other interjections followed roughly between the ages of 2;0 and 2;5. Stange also found that children displayed differences in the usage of Ow! and Ugh! compared to adults. Nonetheless, the input by adults plays an important role in the acquisition process; the interjections frequently used by adults were the ones that the children acquired earliest and used most frequently (Stange 2009).


(like Yuck! and Whoopsadaisy!). The latter are definitely acquired through input as these are also the ones that differ cross-linguistically (e.g., French Berk!, German Igitt!, English Yuck!, etc.):

[…] when a parent plucks up a toddler and […] “playfully” swings or tosses it in the air, the prime mover may utter an Oopsadaisy! – stretched out to cover the period of groundlessness, counter-acting its feeling of being out of control, and at the same time instructing the child in the terminology and role of spill cries. (Goffman 1978: 802)

“Natural” primary interjections, on the other hand, might not necessarily be learnt, which would account both for their early occurrence in child language and their crosslinguistic similarity (e.g., German Pfui!, English Phew!, Russian Фу!, Polish Fu!, Welsh Whiw!, etc.). Both Stange and Asano suggest that children understand the meaning of emotive interjections already at the age of around 1;0 and that phonological factors are responsible for the lag in production (Asano 1997: 14; Stange 2009: 110).
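Corpus findings like Asano’s and Stange’s rest on counting interjection tokens per speaker in CHILDES transcripts. The following sketch assumes a radically simplified CHAT-style line format (`*CHI: oops .`); real CHILDES files carry headers, dependent tiers, and markup that a serious study would have to handle, and the target list here is only an illustrative subset.

```python
import re
from collections import Counter

# Illustrative subset of the interjections studied by Asano and Stange.
TARGETS = {"oops", "ouch", "ow", "yuck", "ugh", "whoops", "phew"}

def count_interjections(chat_lines, speaker="CHI"):
    """Count target interjection tokens uttered by one speaker in a
    simplified CHAT-style transcript (lines like '*CHI: oops .').
    Real CHILDES files carry extra tiers and markup this sketch ignores."""
    counts = Counter()
    for line in chat_lines:
        m = re.match(r"\*%s:\s*(.*)" % re.escape(speaker), line)
        if m:
            for token in re.findall(r"[a-z]+", m.group(1).lower()):
                if token in TARGETS:
                    counts[token] += 1
    return counts

sample = ["*CHI: oops .", "*MOT: whoopsadaisy !", "*CHI: ow ow that hurt ."]
print(count_interjections(sample))  # Counter({'ow': 2, 'oops': 1})
```

Note that the mother’s whoopsadaisy is ignored twice over: it is uttered by a different speaker and is not in the target set, so whole-word matching keeps it from being counted as whoops.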

5. Origin and diachronic development of interjections

The origin of interjections is not well investigated due to their oral nature. Nevertheless, four diachronic paths to interjectionality can be distinguished (Nübling 2001):

(i) Reflex interjections: Many interjections are the result of body reflexes. German Huch! “oops” expresses surprise or affright and imitates the air which is instinctively ejected or inhaled abruptly when somebody is frightened or surprised. Another reflex interjection is Brrrrr! (a long bilabial vibrant which cannot be written adequately), which is generally used when somebody is so cold that their teeth are chattering. As can be seen, these interjections often contain special sounds or sound sequences which do not belong to the phonological inventory of the respective language or defy its phonotactic constraints. This kind of interjection shares many similarities crosslinguistically. Good examples are the above-mentioned interjections German Pfui!, English Phew!, Danish/Swedish Fy!, French Fi!, Spanish Puaj!, Chinese Pei!. They always express disgust or contempt and seem to represent either the gesture of spitting or that of refusing (bad) food or air by closing the lips. In all these languages the corresponding interjection consists of a monosyllabic word with an initial labial consonant. In every case, it is closely linked to the body and to unintended reflexes.

(ii) Onomatopoetic interjections: Some interjections imitate the sound of other acoustic sources which are not the result of body reflexes. German Hui! and English Woosh! or Whee!, which express admiration, probably reproduce the sound of quickly moving objects or that of a whistle. Furthermore, the verbalisation of susurration is very similar in different languages, e.g., English Hist! or Shush!, German Pst!, French Chut! [ʃyt].
It is not by chance that these languages use the voiceless sibilants [s] and [ʃ] – sometimes in combination with voiceless fricatives or plosives – for they best represent the noise of susurration. If there are any vowels at all (German Pst! does not use any, for instance), they are high and consequently not very sonorous. The purpose of these interjections is to signal to somebody to be quiet.

(iii) Loan interjections: A completely different but frequent source consists of loans from other languages. Perhaps it is the foreign character and the prestige language which
increases expressivity and emotionality. German borrowed French and English interjections such as Olala! and Sapperlot! < French sacré nom de dieu “holy name of God”, and Shit!, Oh (my) God!, Jesus! [ˈdʒiːzəs] from English.

(iv) From secondary to primary interjections: Sometimes complex expressions such as Oh my God! develop into primary interjections which no longer reveal their origin. Every language abounds in polylexical expressions (Reisigl 1999). If they are used very frequently, they “interjectionalise” into short expressions by losing their original lexical meaning, i.e., by bleaching the literal meaning of the words involved. At the same time, they undergo iconic formal erosion. A well-documented example is German Oje! [oˈjeː], sometimes shortened to je [jeː], which expresses negative surprise, consternation or compassion. It can be reduplicated to Oje oje! or extended to Ojeee!, which iconically leads to intensification. Here, it can even adopt tonal structures, i.e., the voice can go up and down. Different tonemes express different emotions: 1. level: ‘disappointment’, 2. rise+fall: ‘horror, consternation’, 3. fall: ‘compassion’. Sometimes, the final vowel is shortened to [oˈje]; in this case it denotes skepticism. All these features are typical of primary interjections (Ehlich 1986, 2007). The original underlying construction was Oh mein Jesus! (‘oh my Jesus’). The lexical (religious) content has been completely bleached, and the pragmatic function as an invocation disappeared long ago. The expression turned into a primary interjection by simplifying its formal structure, which is now opaque and consists of a simple VCV sequence. Today it is on the same level as the German primary interjections Aha!, Oho!, Ach!. Further examples of this kind of historical development are German Jemine! [(o)jɛmiˈneː] < Jesus domine, Herrje! < Herr Jesus, and Potz(blitz)! < Gottes (Blitz) “God’s thunderbolt”.
We conclude this article with a quote by Wilkins which illustrates the complexity and hybrid status of interjections:

Interjections are hard to handle in linguistic terms, not because they are peripheral to the concerns of linguistics, but because they embody, almost simultaneously, all the concerns of linguistics. They are lexemes and utterances; they have to be described semantically and pragmatically; they require “the examination of our relation to social situations at large, not merely our relation to conversation” (Goffman 1981: 90); as utterances they are verbless and nounless […]; their relation to other areas of the lexicon must be investigated not only synchronically but also through a study of their diachronic development; […] and they are not only associated with the strictly linguistic component, but are also closely associated with non-linguistic, gestural means of communication (Wilkins 1992: 155–156).

6. References

Ameka, Felix 1992a. Interjections: the universal yet neglected part of speech. Journal of Pragmatics 18(2/3): 101–118.
Ameka, Felix 1992b. The meaning of phatic and conative interjections. Journal of Pragmatics 18(2/3): 245–271.
Asano, Yoshiteru 1997. Acquisition of English interjections ouch, yuck, and oops in early childhood. Colorado Research in Linguistics 15: 1–15.
Beauzée, Nicolas 1974. Grammaire Générale ou Exposition Raisonnée des Éléments Nécessaires du Langage. Tome II. Nouvelle impression en facsimilé de l’édition de 1767. Stuttgart-Bad Cannstatt: Frommann. First published [1767].

Darwin, Charles 1872. The Expression of the Emotions in Man and Animals. London: Murray.
Ehlich, Konrad 1986. Interjektionen. Tübingen: Niemeyer.
Ehlich, Konrad 2007. Interjektion und Responsiv. In: Ludger Hoffmann (ed.), Handbuch der deutschen Wortarten, 423–444. Berlin/New York: Walter de Gruyter.
Fries, Charles C. 1952. The Structure of English. New York: Harcourt.
Goffman, Erving 1978. Response cries. Language 54(4): 787–815.
Kowal, Sabine and Daniel C. O’Connell 2004. Einleitung zur Sonderausgabe Interjektionen. Zeitschrift für Semiotik 26(1–2): 3–10.
Nübling, Damaris 2001. Von oh mein Jesus! zu oje! – Der Interjektionalisierungspfad von der sekundären zur primären Interjektion. Deutsche Sprache 1: 20–45.
Nübling, Damaris 2004. Die prototypische Interjektion: Ein Definitionsvorschlag. Zeitschrift für Semiotik 26(1–2): 11–46.
Reisigl, Martin 1999. Sekundäre Interjektionen. Eine diskursanalytische Annäherung. Frankfurt am Main: Peter Lang.
Stange, Ulrike 2009. The Acquisition of Interjections in Early Childhood. Hamburg: Diplomica Verlag.
Tesnière, Lucien 1976. Éléments de Syntaxe Structurale, 2e édition revue et corrigée, 3e tirage. Paris: Klincksieck.
Wells, John C. 2008. Longman Pronunciation Dictionary. Harlow: Longman.
Wilkins, David P. 1992. Interjections as deictics. Journal of Pragmatics 18(2–3): 119–158.

Ulrike Stange, Mainz (Germany) Damaris Nübling, Mainz (Germany)

157. Some issues in the semiotics of gesture: The perspective of comparative semiotics

1. Introduction
2. Signs and other meanings
3. Grounding indexicality
4. Iconicity
5. Conclusion
6. References

Abstract

Semiotics is basically about differences and similarities between different vehicles for conveying meaning. On the one hand, questions formulated in other semiotic domains may be of interest to the study of gesture; on the other hand, answers to these questions within the study of gesture are important to general semiotics. Here we will be looking at the nature of signs as opposed to other meanings, as well as to actions. We will also scrutinize indexicality as contiguity and as directionality, as well as whether it precedes or results from the act of signification. Without going deeply into the nature of the iconic scale, we will consider the relevance to gesture of the distinction between primary and secondary iconicity, and of the hierarchies of the world taken for granted.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 1989–1999


1. Introduction

Some of the domains that now form part of semiotics, such as, most notably, the semiotics of pictures, had hardly been studied at all before the revival of semiotic theory in the middle of the last century. Gesture, however, has been the focus of a long tradition from Condillac to Efron, and even played a part, in a sense, in ancient rhetoric. Nevertheless, the study of gesture, too, received a new impetus at the same time as, and in part due to, the re-emergence of semiotics. There is therefore every reason to remind students of gesture not to lose track of the task of semiotics as a global discipline: to compare different semiotic resources in order to determine their differences and similarities. In the present case, that means determining the place of gesture among other kinds of semiotic resources. Not all students of gesture, it must be admitted, feel inclined to address the big questions concerning the nature of semiosis and its different kinds. Yet it is not only the Peircean ethics of terminology, but also the practical purpose of semiotic inquiry, which requires all scholars not to use established terms in new meanings, and to avoid introducing new terms for notions already entrenched in semiotic theory. This supposes a minimum of contact between those working in general semiotics and the students of specific domains, such as gesture studies.

2. Signs and other meanings

2.1. On the notion of sign

Even though the scope of meaning is nowadays sometimes taken to be wider than the notion of sign (see Sonesson 1989), there is every reason to investigate whether some particular kind of semiotic resource, such as gesture, consists of signs or some other type of meaning, or perhaps contains both signs and other meanings. Signs, or some particular kinds of signs, are conceivably only mastered by children at a relatively mature age, and are possibly not accessible at all to other kinds of animals. A concept of sign is needed to pursue this question experimentally (see Hribar, Sonesson, and Call in press; Sonesson 2012a; Zlatev et al. 2013). To structuralists, from Saussure to Eco, it was easy to determine whether something was a sign or not: signs were conventional. Thus, Eco tried to show that putative iconic signs (by which he mostly meant pictures) were actually conventional; indeed, he even hinted that there must be something conventional to indexical signs (such as pointing). Saussure himself mused that miming could be of the nature of signs, since it must imply a rudiment of conventionality. Nowadays, most semioticians would probably recognize that convention enters into every kind of meaning. Hjelmslev ([1943] 1969) required all signs to be made up of combinations of smaller elements that as such were meaningless, that is, to have “double articulation”. Eco (1984) therefore argued that iconic signs must be made up of meaningless iconemes, and Birdwhistell (1970) claimed to have found kinemes in gestures. Both ideas have long since been shown to be untenable (see Sonesson 2010a).
We are left with the Saussurean (1968–74) definition, according to which a sign consists of a signifier and a signified, or the Peircean one, which states that signs are formed out of representamen, object, and interpretant; but we are offered no means to tell whether any particular item, or even a dyad or triad of items, answers to any of these labels. There is of course the common-sense notion, often invoked by Peirce (1931–58), according to which one of these things "stands for" (one of) the other(s), but it is not clear what this means. Searle (1995) describes the constitutive rules giving rise to what he calls institutional reality, including signs, using the formula "X counts as Y in C". This may really characterize the kind of meaning with which chessmen are endowed, to the extent that they are all, in an extended sense, pawns in the game. As Saussure taught us, an array of buttons is enough to distinguish the different bundles of possible movements that the chessmen embody. Traditional chessmen, however, also look like kings, queens, horses, and so on, and in this respect they are like language, which, in spite of what Saussure seems to say, also refers us to the world of our experience (see Sonesson 2009a, 2010b). We need to define a notion of sign based on our intuitive understanding of what a sign is: it should include words among the signs, and it should at least exclude perception (though Peirce would not agree with the latter). I have put together such a definition, inspired by the work of the phenomenologist Edmund Husserl and the psychologist Jean Piaget (Sonesson 1989). It starts from Husserl's idea that perception is imbued with meaning. It is not only that everything we see has its own shape, its colors, and its parts. If we look at a cube, we necessarily perceive it from a certain angle, showing certain sides entirely, others in part, and yet others not at all, and revealing some of its properties; but as long as we see it as a cube, the other sides and properties are part of what we see.
Yet the part of the cube which we most directly see is normally also the focus of our attention, at least as long as we do not start to wonder whether the object is really hollow on the other side, or the like. Different variations are possible here, which I have discussed elsewhere (see Sonesson 2011, 2012b). In the case of a sign, however, there clearly must be one item which is most directly given but not in focus, namely the expression, and another item which is indirectly given, yet in focus, namely the content (and/or the referent). Nevertheless, the same situation might perhaps occur even in cases where we would not want to say there is any sign present: if I look at the side of the cube which is directly present to me while concentrating hard on how it may look from the hidden side. Another criterion is needed, and it can be found in Piaget's notion of the semiotic function, which is defined by differentiation. As such, the criterion of differentiation is not very clear, so I believe it must be applied to the result of using the other criteria, which means that, like the latter, it cannot be a sufficient criterion. Applying differentiation to the relation between an item which is directly given but not in focus and another item which is in focus but indirectly given, there seem to be mainly two possibilities: the differentiation can be spatio-temporal or categorical. I can turn the cube over to have a look at its other sides, but this is not possible in the case of a sign: there is no spatio-temporal continuity. I can collect the cube together with other cubes into a class of cubes, but it does not make sense to do the same thing with expression and content: they form part of different categories. Words (including those of "sign language") are clearly signs in this sense, and so are pictures (Sonesson 1989, 2010a, 2011).
In the case of everyday gestures, there can be no doubt that emblems fulfill these criteria, as do Efron’s ([1941] 1972) kinetographic and iconographic gestures (see Kendon 2004). Other cases so far seem less clear.

2.2. Sign and ground

According to the classical division of signs made by Peirce (following upon many other thinkers who made similar distinctions in other terms), there are three kinds of signs: icons, indices, and symbols. In very general (non-Peircean) terms, iconicity is the relation between two items that is based on (an experience of) similarity between one or several properties of the items; and indexicality is a relation founded on a contiguity (or neighborhood) existing between two items or properties thereof. If such a neighborhood exists between two things that are experienced as being parts of the same whole, we may more specifically call this relation factoriality (see Sonesson 1989, 1998). Finally, if there is neither similarity nor contiguity between the two items involved, but some kind of regularity, which may be a simple habit, an explicit convention, or something in between, we are dealing with symbolicity. Peirce thinks of all these as signs, because his notion of sign is very broad (as he realized late in life), more or less equivalent to any kind of meaning, including perception. Contiguity and factoriality are present everywhere in the perceptual world without as yet forming signs: we will say, in that case, that they are mere indexicalities. An index, then, must be understood as indexicality (an indexical relation or ground) plus the sign function. Analogously, the perception of similarities (which is an iconic ground) will give rise to an icon only when it is combined with the sign function. No matter what Peirce may have meant, it makes more systematic and evolutionary sense to look upon iconicity and indexicality as being only potentials for something being a sign. Iconicity, indexicality, and symbolicity merely describe that which connects two objects; they do not tell us whether the result is a sign or not (Tab. 157.1).

Tab. 157.1: The relationship between principles, grounds, and signs, from the point of view of Peirce (adding a Thirdness of ground from Sonesson's point of view).

              Firstness             Secondness                Thirdness
Principle     Iconicity             Indexicality              Symbolicity
Ground        Iconic ground         (= indexical ground)      (= symbolic ground)
Sign          Iconic sign (icon)    Indexical sign (index)    (=) symbolic sign (symbol)

The sign thus defines a principle of relevance, which Peirce, in his earlier texts, called a ground, but other factors, such as attention, may pinpoint other such principles. These considerations allow us to separate the study of the phylogenetic and ontogenetic emergence of iconicity, indexicality, and symbolicity from that of the corresponding signs (see Sonesson 1998, 2001). Thus, in a recent study (Zlatev et al. 2013), we found that children understood indexical vehicles (such as pointing and markers) much earlier than iconic vehicles (such as pictures and scale-models). This could mean that indices are easier to understand than icons, but an alternative explanation is that indexicality may function to indicate something, which was the task at hand, without having sign character, whereas iconicity without sign character would only yield another item of the same category. Further experiments have to be designed in order to separate these two interpretations.

157. Some issues in the semiotics of gesture: The perspective of comparative semiotics

2.3. To interpret the world and to change it

Not all "visible actions" count as "utterances", as Kendon (2004) puts it, that is, as signs. Mukařovský (1978) distinguishes between actions that have the function of changing the world and those that merely change the interpretation of the world; Greimas (1970) opposes praxis and gesture in similar terms. Vygotsky (1978) similarly separates technical and psychological tools. Nevertheless, Rodriguez and Moro (1999) and Andrén (2010) have observed that bodily movements with props may still function as gestures rather than as actions. On the other hand, it is conceivable that some gestures making use of no props could be seen as actions rather than signs. Many years ago, when I directed a group analyzing video clips from French television (Sonesson 1981; see Sonesson 2009b), I suggested extending what I called the proxemic model to the study of gesture. In proxemics, which studies the meaning of distances between two or more subjects, movement does not serve to convey a content, but to create a space, whether intimate, public, or somewhere in between. These spaces do not really exist in the physical world, of course, but they are real in the human Lifeworld, or at least in some varieties of it.

Fig. 157.1: Differential valorization of a spatial relation, according to the position of the palm: relation to the ground (1), relation to the above (2), relation to the back (3), inclusion of the other (4a), pointing outside of own space (4b)

The multinational group which I headed in Paris in the late seventies and early eighties, which analyzed the gestures of French marketers, agreed that the meaning of many hand movements occurring in front of the chest, considered as a scene, should be understood as ways of relating to the chest itself, to the ground, or to the other person. Today I do not know how much to make of this multinational consensus. However, among the "palm down" and "palm up" gestures in Kendon's (2004) recent discussion, there are certainly some which, in particular cultures, have gained an emblematic meaning, but others could rather be understood as creators of spatial relationships, and so could these same emblematic gestures outside their domain of validity, similarly to how we understood them in our "proxemic model" (Sonesson 1981, 2009b). It is not obvious how one would show that a particular kind of gesture fits better with the action model than with the sign model, but those who dedicate themselves to the study of gesture may do well to integrate the action model, too, into their analytical toolkit.


IX. Embodiment

3. Grounding indexicality

3.1. Indexicality and direction

In Peirce's work, there are many definitions of the three kinds of signs other than the ones referred to above, but, as I have argued elsewhere (Sonesson 1989), these three are the only ones which seem able to exhaust the universe of signs, considered from the particular perspective of motivating the content from the expression. It is in particular in the case of indices that Peirce seems to propose many other criteria, which do not yield the same result, notably causality, which, as I have shown elsewhere (Sonesson 1998), would delimit a much narrower group of signs. Although Peirce never explicitly defines the index by means of direction, the choice of the term index clearly suggests that, like the pointing finger, indexicality serves to mark a direction, and this also seems to be how the term has been understood in much of the psychological literature. Starting out from semiotics, however, I have argued that signs showing a direction are only a subcategory of those relying on contiguity (Sonesson 1989: 47, 1998). In fact, an arrow or a pointing finger may be just as contiguous to something at its beginning as to something at its end, and yet both would normally be understood as signs only of what is at their end, and indeed of a particular spot at the end. Interestingly, René Thom (1973) conceives of indexicality in terms of the forward thrust of the arrowhead, as imagined in water, or the sense of it slipping from our hands. Since this is very much a distinction in the spirit of Gestalt psychology, I suggested we should use the term vectoriality to describe it. Directionality may, however, be a more familiar term, and it can be used as long as it is understood that this is not simply indexicality. It seems natural for an indexical sign incorporating vectoriality to be easier to grasp than one without any vectoriality. In an experiment (Zlatev et al. 2013), we tried to separate mere contiguity from vectoriality (directionality), in the form of a marker and pointing, respectively. Four chimpanzees were tested at Lund University Primate Research Station Furuvik, and three groups of children at the Humanities Laboratory, Lund University. In the majority of cases, the results for the apes failed to reach significance. Still, there was a tendency for indexical signs to be more often correctly interpreted than iconic signs. Preliminary results for the children show the same tendency and thus support the hypothesis that 18-month-olds most often understand pointing and more rarely markers, while only some 24-month-olds understand the iconic signs. The 30-month-olds usually understand all four types of signs. In our study, the difference between pointing and marker was not significant, but that may have been because we did not manage to completely hide the action by means of which the marker was placed, which has its own directionality. This poses the question whether it is contiguity or directionality that accounts for pointing and marker being easier to grasp than picture and replica. Other studies are necessary to separate these two criteria.

3.2. Contiguity assumed or created

There is an ambiguity in the use of the term indexicality which, as far as I know, had not been noted before I did so (Sonesson 1989, 1998): the items involved may either be contiguous at some moment before the act of signification, or they may become so precisely at the moment of, and because of, the act of signification. In other words, the contiguity may be currently perceptible and actually perceived, as in the case of an arrow pointing the way, or it may have existed at a time anterior to the time of perception, as in the case of footprints or photography. Elsewhere, I have suggested that we could speak of performative or abductive indices, according to whether the contiguity is created by the sign itself or, conversely, is a condition for the use of the sign (see Sonesson 1989, 1998). Indeed, an arrow or a pointing finger works by creating a neighborhood, which did not exist before, to the thing it points to, but footprints can be interpreted because we know beforehand about the relationship between feet and the different kinds of surfaces on which they are liable to leave their marks. There is no independently existing category of pointed-out objects, but there certainly is a category of feet, which (if we take this only to mean human feet) can be differentiated into big feet and small feet, male feet and female feet, flat feet and the feet of traditional Chinese women, etc. The former signs are performative, because they create the relationship they are about (following Austin's classic definition); the latter are abductive, because they rely on our ability to draw conclusions from one singular fact to another on the foundation of habitual relationships (following Peirce's characterization of abduction). In this sense, pointing is of course performative, but so is the marker. It may seem, therefore, that gesture can only convey indexicality abductively, through the intermediary of iconicity: thus, for instance, in Mallery's ([1881] 1972) example of the content "woman" being indicated by the palm held at a small height, the position of the palm is iconic for height, but height is connected to woman by abductive indexicality. Nevertheless, abbreviated gestures may be said to rely on abductive indexicality by themselves, as in the classical interpretation of pointing as an abbreviation of reaching.

4. Iconicity

4.1. The scale of iconicity

As I have pointed out elsewhere (Sonesson 2001), iconicity, in the Peircean sense, goes well beyond depiction, and thus beyond McNeill's (2005: 39) "iconic gestures", to include his "metaphoric gestures", some emblems, and perhaps all beats. In the following, I will take this for granted and go on to discuss more complicated issues of iconicity. The idea of a "scale of iconicity" seems first to have been introduced by Charles Morris (1946): in this sense, a film is more iconic of a person than a painted portrait is, because it includes movement, etc. Abraham Moles (1981) constructed a scale comprising thirteen degrees of iconicity, from the object itself (100%) to its verbal description (0%). Such a conception of iconicity is problematic, not only because distinctions of a different nature appear to be amalgamated, but also because it takes for granted that identity is the highest degree of iconicity and that the illusion of perceptual resemblance typically produced, in different ways, by the scale model and the picture sign is as close as we can come to iconicity besides identity itself (see Sonesson 1998). Nevertheless, on a very general level, e.g. in the simple case of distinguishing drawings, black-and-white photographs, and color photographs, this idea has been confirmed by psychological experiments, including our own (see Hribar, Sonesson, and Call in press). Kendon (2004: 2) rightly takes exception to what McNeill (2005: 5) calls "Kendon's continuum", which is the application of the iconicity scale to gesture, as well as to the "expanded" version proposed by Gullberg (1998). The results from both the study of pictures and that of gesture thus suggest that we should rather consider a number of different parameters on which expression and content may vary. The general issue is too big to analyze here, but we will have a look at two related notions.

4.2. Primary and secondary iconicity

A distinction can be made between two kinds of iconic signs: those that become signs because they are iconic, and those that are understood as iconic because they are signs (see Sonesson 1994, 2008, 2010a, 2010b). In other terms, a primary iconic sign is a sign in which the perception of a similarity between an expression E and a content C is at least a partial reason for E being taken to be the expression of a sign whose content is C. That is, iconicity is really the motivation (the ground), or rather one of the motivations, for positing the sign function. A secondary iconic sign, on the other hand, is a sign in which our knowledge that E is the expression of a sign whose content is C, in some particular system of interpretation, is at least a partial reason for perceiving the similarity of E and C. Here, then, it is the sign relation that partially motivates the relationship of iconicity. Pictures are of course primary iconic signs in this sense. To be exact, they are primary iconic signs to human adults, for there is every reason to believe that they are not signs at all, but simply objects as such, to apes and to children below the age of 2 or 3 years. Secondary iconic signs, however, are often identical to the object itself, but in some context, such as a shop-window or an exhibition, they may be turned into signs of themselves. There are two ways in which iconicity may be secondary: either there is too much iconicity for the sign to work on its own, as when objects become signs of themselves in some capacity, or there is too little iconicity for the sign function to emerge without outside help. Thus, iconicity may be said to be secondary either because of profusion or because of depletion. A car, which is not a sign on the street, becomes a sign at a car exhibition, as does Duchamp's urinal in a museum.
In other cases, the sign function must precede the perception of iconicity, because there is too little resemblance without it, as in the manual signs of the North American Indians, which, according to Mallery (1972: 94–95), seem reasonable once we are informed about their meaning. This is also the case with "droodles", such as the sketch that may be seen either as an olive dropping into a Martini glass or as a close-up of a girl in a scanty bathing suit, or as some third thing, according to the label that is attached. While both scenes, and many more, are possible to discover in the drawing, both are clearly underdetermined by it. Adopting my terms, de Cuypere (2008) claims that linguistic iconicity is exclusively secondary. Supposing this to be true, would it apply also to gesture? Mallery's example, quoted above, would seem to suggest so. Nevertheless, no matter what similarities there may be between language and gesture (and in particular "sign languages"), gesture resembles pictures in being founded on a system of transformations from perceptual experience, which language is not (or only very marginally so). More specifically, gesture and pictures both share in the sensory modality which dominates human perception, visuality, into which we are fairly accustomed to translating other domains of our experience. On the other hand, it may seem that gesture must rely on secondary iconicity, both because of too much iconicity and because of too little. If we primarily look upon human arms as technical tools, in Vygotsky's sense, the iconicity that they may convey as gestures would necessarily have to be secondary, because of the difficulty of seeing arms as anything other than arms. At the same time, the similarity between perceptual experience and what may be mimicked using the arms would normally seem to remain at the level of droodles, because of the constraints intrinsic to the shapes of the arms themselves. Thus, gesture appears to be based on secondary iconicity because of both depletion and profusion. If so, this suggests that the two categories of secondary iconicity delineated above are not exclusive. As far as I know, there are no empirical studies bearing on this issue. This is especially unfortunate, because the reasoning above already shows that the distinction between primary and secondary iconicity cannot account for the facts, even at the level of intuitions. Nonetheless, there has to be a dialectics of theory and praxis: before the distinction, however faulty, has been put to any empirical test, it does not make much sense to revise it.

4.3. The dominance hierarchy

A prerequisite for signs working at all, and for iconic signs in particular, would seem to be the hierarchy of dominance which is part and parcel of the Lifeworld, the "world taken for granted". Elsewhere (Sonesson 1989), I have suggested that there are in fact two such hierarchies: one which is more abstract, and which accounts for some objects serving more naturally than others as expressions in signs; and another which is more concrete, and indeed more directly relative to human beings, their bodies, and other properties, and which serves to explain why the effect of iconicity can be brought off much more simply in the case of certain contents. In the first case, I have indicated that a two-dimensional object functions more readily as the expression of a three-dimensional content than the reverse; that something static can more easily signify something which is susceptible of movement than the opposite; and that an inanimate object can more easily stand for an animate one than the other way round. DeLoache (2000) has independently invoked the first principle when explaining why pictures are more easily interpreted by children than scale-models; and Mandler (2004) has shown that very small children are aware of the distinction between animate and inanimate objects. The second, more concrete, but somewhat overlapping, scale suggests that very little information is needed to convey the idea of a face; somewhat more for a human being; a little more for an animal; and somewhat more again for an object which is not an animal but which is susceptible of movement. There is ample proof that human faces are indeed very high up on this scale (see Messer 1994). We are presently in the process of investigating whether the other surmises can be substantiated in the case of pictures. The idea would not necessarily be that these hierarchies are innate; they may result from commonalities in the human situation.
In any case, I would suppose them to be universal. We can so far only speculate what relevance these scales may have for gesture. Unlike the case of pictures, the expression of gesture is always made out of the same material: arms and hands and, marginally, some other body parts. All these are three-dimensional objects. Clearly these three-dimensional objects may stand for other three-dimensional objects and even, in the limiting case, for two-dimensional ones. This is in contradiction to the first hierarchy. Gesture is also unlike pictures in having an expression plane that is normally in movement (although some movements, notably in the case of emblems and pointing, may really only serve to move the hands into a position which is the real carrier of the content). Perhaps the inclusion of movement is a facilitating factor here. If so, this would be an interesting result from the point of view of general semiotics, not only for the semiotics of gesture. Nevertheless, the fact that arms and hands, which are a direct part of human embodiment, contradict general principles of the Lifeworld does not necessarily show that these principles are invalid in the general case. It is not clear how much can be made of the second hierarchy in the case of gesture. As I suggested above, the shapes of arms and hands impose severe constraints on the iconicity of gestures; it is really at the level of droodles. Again, the movement that gesture incorporates certainly serves to liberate it partly from those constraints. It is at present an empirical question whether gesture may yet profit from the principles of the second hierarchy in its rendering of faces, human beings, animals, moving bodies, and/or other objects of the Lifeworld.

5. Conclusion

We have looked at some issues that loom large in general semiotics and in the semiotics of pictures, in order to evaluate their relevance to the semiotics of gesture. The answers to many of these queries seem unclear at the moment, but it may be worthwhile for students of gesture to incorporate these issues into their future investigations. In any case, any such answers would be of immense interest to comparative semiotics.

6. References

Andrén, Mats 2010. Children's Gestures from 18 to 30 Months. Ph.D. dissertation, Department of Linguistics and Phonetics, Lund University.
Birdwhistell, Ray L. 1970. Kinesics and Context: Essays in Body Motion Communication. Philadelphia: University of Pennsylvania Press.
De Cuypere, Ludovic 2008. Limiting the Iconic: From the Metatheoretical Foundations to the Creative Possibilities of Iconicity in Language. Amsterdam: John Benjamins.
DeLoache, Judy S. 2000. Dual representation and young children's use of scale models. Child Development 71(2): 329–338.
Eco, Umberto 1984. Semiotics and the Philosophy of Language. Bloomington: Indiana University Press.
Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton. First published [1941].
Greimas, Algirdas J. 1970. Du Sens. Paris: Seuil.
Gullberg, Marianne 1998. Gesture as a Communication Strategy in Second Language Discourse: A Study of Learners of French and Swedish. Lund: Lund University Press.
Hjelmslev, Louis 1969. Prolegomena to a Theory of Language. Madison: University of Wisconsin Press. First published [1943].
Hribar, Alenka, Göran Sonesson and Josep Call 2014. From sign to action: Studies in chimpanzee pictorial competence. Semiotica 198: 205–240.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. New York: Cambridge University Press.
Mandler, Jean M. 2004. The Foundations of Mind: The Origins of Conceptual Thought. Oxford: Oxford University Press.
Mallery, Garrick 1972. Sign Language Among North American Indians Compared With That Among Other Peoples and Deaf-mutes. Photomechanic reprint. The Hague: Mouton. First published [1881].
Messer, David J. 1994. The Development of Communication: From Social Interaction to Language. Chichester: Wiley.
Moles, Abraham A. 1981. L'Image – Communication fonctionnelle. Bruxelles: Casterman.
Morris, Charles W. 1946. Signs, Language and Behavior. New York: Prentice-Hall.
Mukařovský, Jan 1978. Structure, Sign, and Function. New Haven: Yale University Press.
McNeill, David 2005. Gesture and Thought. Chicago, IL: University of Chicago Press.
Peirce, Charles Sanders 1931–58. Collected Papers I–VIII. Edited by Charles Hartshorne and Paul Weiss. Cambridge, MA: Belknap Press of Harvard University Press.
Rodriguez, Cintia and Christiane Moro 1999. El Mágico Número Tres. Barcelona: Paidós.
Saussure, Ferdinand de 1968–74. Cours de linguistique générale I–II. Édition critique par Rudolf Engler. Wiesbaden: Harrassowitz.
Searle, John 1995. The Construction of Social Reality. London: Allen Lane.
Sonesson, Göran 1981. Esquisse d'une taxonomie de la spatialité gestuelle. Paris: École des Hautes Études en Sciences Sociales.
Sonesson, Göran 1989. Pictorial Concepts. Lund: Lund University Press.
Sonesson, Göran 1994. Prolegomena to a semiotic analysis of prehistoric visual displays. Semiotica 100(3/4): 267–332.
Sonesson, Göran 1998. Entries on 'Icon', 'Iconicity', 'Index', 'Indexicality'. In: Paul Bouissac, Göran Sonesson, Paul Thibault and Terry Threadgold (eds.), Encyclopedia of Semiotics. New York: Oxford University Press.
Sonesson, Göran 2001. De l'iconicité des images à l'iconicité des gestes. In: Christian Cavé, Isabelle Guïtelle and Serge Santi (eds.), Oralité et Gestualité: Interactions et Comportements Multimodaux dans la Communication. Actes du Colloque ORAGE 2001, Aix-en-Provence, 18–22 juin 2001, 47–55. Paris: L'Harmattan.
Sonesson, Göran 2009a. New considerations on the proper study of man, and, marginally, some other animals. Cognitive Semiotics 4: 133–168.
Sonesson, Göran 2009b. Au-delà du langage de la danse: Les significations du corps. Degrés – Revue de Synthèse à Orientation Sémiologique (139–40): C1–C25.
Sonesson, Göran 2009c. Here comes the semiotic species: Reflections on the semiotic turn in the cognitive sciences. In: Brady Wagoner (ed.), Symbolic Transformations: The Mind in Movement Through Culture and Society, 38–58. London: Routledge.
Sonesson, Göran 2010a. Pictorial semiotics. In: Thomas A. Sebeok and Marcel Danesi (eds.), Encyclopedic Dictionary of Semiotics. Berlin/New York: De Gruyter Mouton.
Sonesson, Göran 2010b. Semiosis and the elusive final interpretant of understanding. Semiotica 179(1/4): 145–258.
Sonesson, Göran 2011. The mind in the picture and the picture in the mind: A phenomenological approach to cognitive semiotics. Lexia – Rivista di Semiotica (07/08): 167–182.
Sonesson, Göran 2012a. Semiosis beyond signs: On two or three missing links on the way to human beings. In: Theresa S. Schilhab, Frederik Stjernfelt and Terrence W. Deacon (eds.), The Symbolic Species Evolved, 81–96. Dordrecht: Springer.
Sonesson, Göran 2012b. The foundation of cognitive semiotics in the phenomenology of signs and meanings. Intellectica (212/2, 58): 207–239.
Thom, René 1973. De l'icône au symbole. Cahiers Internationaux du Symbolisme (22–23): 85–106.
Vygotsky, Lev 1978. Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
Zlatev, Jordan, Elainie Alenkær Madsen, Sara Lenninger, Tomas Persson, Susan Sayehli, Göran Sonesson and Joost van de Weijer 2013. Understanding communicative intentions and semiotic vehicles by children and chimpanzees. Cognitive Development 28(3): 312–329.

Göran Sonesson, Lund (Sweden)


158. Embodied meaning, inside and out: The coupling of gesture and mental simulation

1. Inside and out: Mental simulation and gesture
2. Speaker simulation shapes speaker gesture
3. Speaker gesture shapes speaker simulation
4. Speaker gesture shapes listener simulation
5. Listener simulation shapes perception of speaker gesture
6. Conclusion: Simulation, gesture, and the embodiment of meaning
7. References

Abstract

During situated interaction, meaning construction is embodied inside and out. Meaning is embodied internally when we create embodied simulations, co-opting brain areas specialized for perception or action to create dynamic mental representations, rich in sensorimotor detail. Thinking of petting a kitten, for instance, might include visual simulation of its appearance and motor simulation of the act of petting – all in brain areas typically used to see and touch kittens. At the same time, meaning is embodied externally in representational gestures, actions of the hands and body that represent objects, actions, and ideas. In this chapter, we argue that these internal and external embodiments are tightly coupled, with bidirectional causal influences between gesture and simulation, both within the speaker and between speaker and listener. By embodying meaning, inside and out, language users take advantage of the complementary semiotic affordances of thought and action, brain and body.

1. Inside and out: Mental simulation and gesture

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2000–2007

The creation of meaning is intimately tied to the body. Or rather, bodies. Meaning-making is seldom solitary, and the prototypical linguistic encounter involves multiple interacting agents working together to negotiate shared meaning. These multiple interlocutors use their bodies as fully fleshed out semiotic resources during situated talk (Goodwin 2000; Kendon 2004; McNeill 2005). That is, they gesture, moving their hands and bodies in meaningful ways. Meaning is also embodied less visibly. The neural activity supporting language comprehension and production relies on brain areas that are repurposed from perception and action – that is, "embodied" brain areas – and these areas coordinate during comprehension to create "embodied simulations" of linguistic content (Barsalou 2008; Glenberg 2010). In this chapter, we argue that simulation and gesture are forms of embodiment that are tightly and multiply coupled during situated meaning-making. To start, the body makes an invisible contribution to meaning in virtue of interlocutors' embodied simulations (Barsalou 2008). When comprehenders hear language describing bodily actions, their brains' motor systems become engaged in a content-specific way. That is, hearing a sentence about kicking engages motor circuitry responsible for controlling leg actions, just as hearing a sentence about chewing lights up neural areas for actual mouth-moving (Pulvermüller and Fadiga 2010). This content-specific use of


brain circuits specialized for action is paralleled in perception: Processing language about visible objects and events activates visual regions of the brain (Stanfield and Zwaan 2001), while language about audible things prompts a detailed auditory simulation of their sound (Winter and Bergen 2012). There is now convergent evidence for embodied simulation during language comprehension from brain imaging, behavioral experimentation, and neuropsychological lesion studies (Bergen 2012). This internal simulation of what it might be like to perform described actions or perceive described objects and events has been most thoroughly studied during language comprehension, but there's initial evidence that it plays a role in language production as well. One leading idea (Sato 2010) is that speakers perform embodied simulations at the phase of message generation (cf. Levelt 1993) during language production. If a message includes percepts or actions, embodied simulation might play a role in its construction. A second, more visible way in which the body is recruited to create meaning is through the production of gestures, spontaneous movements – typically of the hands – that take advantage of the motor-visual channel to complement speech. Representational gestures, in particular, use a combination of hand morphology, motion trajectory, and spatial location to represent concrete or abstract referents through literal or metaphoric resemblance. While speech and gesture are tightly coupled in both timing and meaning, they often express information that is complementary rather than identical (McNeill 2005). You might ask, "Have you grown?" while using an extended index finger to trace an upward trajectory – communicating that the question is about height, not heft. Or you could say, "My mood has really changed", while pointing upward, using space metaphorically to communicate that your mood has changed for the better.

Fig. 158.1: Relations between gesture and simulation: speaker simulation drives speaker gesture (a); speaker gesture influences speaker simulation (b) and listener simulation (c); and listener simulation shapes their interpretation of speaker gesture (d).


Situated meaning-making is embodied inside and out: in the internal embodied simulations that accompany message formulation and comprehension, and in the external meaningful gestures that accompany speech. Crucially, these embodied processes work in concert (Fig. 158.1): a speaker's mental simulation drives her gestures (section 2); her gestures influence her own (section 3) and listeners' simulations (section 4); and a listener's ongoing simulation may even shape her interpretation of speakers' gestures (section 5).

2. Speaker simulation shapes speaker gesture

Embodied simulation and gesture are both particularly well suited to representing action. Motor simulation re-engages neural circuitry that is also responsible for action performance; gesture, as meaningful manual action, can (re-)enact actions with high fidelity. Pursuing this similarity, Hostetter and Alibali (2008) argued that external gestures are actually the product of internal action simulation (see Fig. 158.1a). On their account, whenever motor or visual simulation of action exceeds a threshold of activation, the simulated action is expressed as real, gestural action. This threshold may be sensitive to a number of factors, including neural connectivity, cognitive load, and socio-communicative norms. They suggest further that speech-related activation of the motor system makes simulated action most likely to "spill out" as gesture during language production, although gestures are sometimes produced in the absence of speech (e.g., Chu and Kita 2011). A number of other authors have suggested that gestures are the product of imagistic or motor representations generated during language production (e.g., Kita and Özyürek 2003; McNeill 2005), although none of these have explicitly connected those imagistic processes to embodied neural simulation. In line with this proposal, several recent studies have found that increased action simulation predicts increased representational gestures. In Hostetter and Alibali (2010), for instance, participants either created or viewed geometric patterns, and then described them from memory. Participants produced more representational gestures when describing patterns that they had created themselves rather than merely viewed. Similarly, Sassenberg and Van Der Meer (2010) reported that, during route descriptions, representational gestures were more common during increased mental simulation, independent of task difficulty. Gesture also reflects specific details of ongoing simulation.
When people describe how they solved the Tower of Hanoi puzzle, which involves rearranging different sized disks, the trajectories of their gestures reflect the trajectories of the manual actions they actually used to solve the puzzle (Wagner Cook and Tanenhaus 2009), suggesting that gesture form is shaped by the particulars of motor simulation. Embodied simulation, moreover, can vary in focus, perspective, length, and degree of detail, and grammatical choices during language production may reflect these features of simulation. For example, producing progressive language may reflect increased focus in simulation on the ongoing process of the described state or event, as contrasted, say, with the resulting end-state (Bergen and Wheeler 2010; Madden and Zwaan 2003). And indeed, when speakers produce progressive rather than perfect sentences, their accompanying gestures are longer-lasting and more complex (Parrill, Bergen, and Lichtenstein 2013). Similarly, during event descriptions, the viewpoint adopted in gesture – either internal or external to the event – is shaped by the body's involvement in the described event (Parrill 2010) and by the kind of motion information encoded in speech (Parrill 2011), both of which


may reflect aspects of simulated viewpoint (Brunyé et al. 2009). To the extent that aspect and other grammatical features accurately diagnose properties of embodied simulation, it appears that speakers express the product of simulation in not only the linguistic channel, but also the gestural channel.

3. Speaker gesture shapes speaker simulation

A speaker's gestures can also prompt and shape their own embodied simulation (Fig. 158.1b). Representational gestures have been proposed to help maintain mental imagery during language production (de Ruiter 2000), or to "help speakers organize rich spatio-motoric information into packages suitable for speaking" (Kita 2000: 163). More generally, producing representational gestures may encourage an embodied simulation of the objects and events represented in gesture. For instance, Alibali and colleagues (2011) had participants solve variants of the following gear-rotation puzzle: If there are five gears in a row, each interlocked with its neighbors, and the first gear turns counterclockwise, then in what direction will the last gear turn? This puzzle can be solved using either a "sensorimotor" strategy that relies on mentally simulating the gears' rotation or a more formal "parity" strategy that takes advantage of the alternating pattern of gear rotation (i.e., odd gears will turn counterclockwise, even gears clockwise). Gesturing made participants more likely to adopt the sensorimotor strategy, compared to trials where they spontaneously chose not to gesture and to trials where gesture was inhibited. Gesturing also improves performance on classic mental rotation tasks (Chu and Kita 2011), but only if the gesture is actually produced and not merely seen (Goldin-Meadow et al. 2012). Producing gestures, therefore, may encourage embodied simulation during problem solving. Gesture production can also evoke detailed sensorimotor information. In one study, Beilock and Goldin-Meadow (2010) had participants solve the Tower of Hanoi puzzle (pre-test) and then solve it again after a brief pause (post-test).
For half the participants, the relative weights of the disks were surreptitiously switched before the post-test, which impaired performance – but only if, between pre-test and post-test, participants explained how they had solved the puzzle in the pre-test. The fact that participants were unaffected by the switched weights if they hadn't explained their solution – and thus hadn't gestured about the task – suggests that gesture itself was responsible for shaping speakers' embodied representation of the task. Indeed, the impact of switching weights was mediated by the kinds of gestures produced during the explanation: The more the gestures represented the relative weights of the disks, the more post-test performance was affected by the switched weights. In other words, gesturing about the disks' task-irrelevant weights added precise weight information to participants' representations of the puzzle. These multimodal influences of gesture on thought suggest that gestures are not merely visible (Hostetter and Alibali 2008) but also a distinctly felt form of embodiment, shaping the precise sensorimotor information included in accompanying simulation (cf. Goldin-Meadow et al. 2012).
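The "parity" strategy from the gear-rotation puzzle discussed above can be stated very compactly as code: interlocked gears alternate direction, so only the parity of the gear count matters. The sketch below is purely illustrative; the function name and interface are ours, not part of the study by Alibali and colleagues (2011).

```python
def last_gear_direction(n_gears, first_direction="counterclockwise"):
    """Parity strategy for the gear-rotation puzzle: adjacent interlocked
    gears turn in opposite directions, so only gear-count parity matters."""
    if n_gears < 1:
        raise ValueError("need at least one gear")
    opposite = {"clockwise": "counterclockwise",
                "counterclockwise": "clockwise"}
    # Odd-numbered gears turn with the first gear; even-numbered gears reverse.
    return first_direction if n_gears % 2 == 1 else opposite[first_direction]

# Five interlocked gears, the first turning counterclockwise:
print(last_gear_direction(5))  # counterclockwise
```

The contrast with the "sensorimotor" strategy is precisely that this formal rule never simulates the intermediate rotations; the experimental finding was that gesturing pushed participants toward simulating them anyway.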

4. Speaker gesture shapes listener simulation

One of the classic findings in the study of co-speech gesture is that speakers' gestures contribute to listeners' comprehension, affecting representations in long-term memory


(e.g., Rogers 1978). One mechanism for this influence may be the modulation of listeners' motor or visual simulation (Fig. 158.1c), and several recent studies support this proposal. Viewing gestures that represent actions may prompt a listener to simulate the performance of that action. Gestures that encode fine details of the trajectory of a manual action can lead the listener to include those details in their subsequent reproduction of the action, as if viewing the gesture prompted a detailed motor simulation of the action (Wagner Cook and Tanenhaus 2009). Moreover, a speaker's gestures may guide the listener's creation of a detailed visual simulation. In one experiment, Wu and Coulson (2007) showed participants videos of a man describing everyday objects, followed by a picture of the mentioned object. Crucially, the shape of the pictured object could be compatible with both the man's speech and gesture, or with his speech alone. For example, when the man said, "It's actually a double door", this could refer to Dutch-style double doors in which two half-doors are stacked one on top of the other, or French-style double doors where two full doors are placed side by side. When his speech was accompanied by a bimanual gesture that evoked Dutch-style double doors – his hands stacked on top of each other – then a subsequent picture of Dutch-style double doors would be related to both his speech and his gesture, while a picture of French-style double doors would be related to his speech but not his gesture. They found that pictures were easier to process when they matched both speech and gesture, as if the fine details of the man's gesture shaped the listener's visual simulation of the speech content. Gestures produced by a speaker during discourse, therefore, can rapidly shape the fine details of a listener's motor and visual simulation of discourse content.

5. Listener simulation shapes perception of speaker gesture

Any particular representational gesture can, in principle, represent an infinite range of hypothetical referents. Consider a gesture tracing an upward trajectory. Does it represent the flightpath of a bumblebee? Or a disco-dancing move? The precise meaning of a gesture is often disambiguated by discursive context or concurrent speech (e.g., "It flew to the flower" vs. "He danced like Travolta"). Another possible source of disambiguation, currently underexplored, is the comprehender's embodied simulation at the moment they perceive a gesture. A comprehender's concurrent or preceding simulation may shape their interpretation of a speaker's gestures, perhaps by selectively focusing attention on specific facets of a gesture's motor or visuospatial properties (Fig. 158.1d). Consider the different ways in which a gesture can stand in for a referent: by depicting, tracing an object's shape in the air; by enacting, recreating some part of an action; or by modeling, using the hand to stand in for another object (Kendon 2004; cf. Müller 2009; Streeck 2008). A single gesture can be ambiguous among these. Consider a hypothetical gesture produced while saying, "I wiped the water off the table", in which a face-down open-palm handshape sweeps across a horizontal plane. This gesture could be depicting the tabletop's shape, height, or size; enacting the manual action of wiping; or using the hand to model the cloth itself. We hypothesize that the listener's interpretation of such a gesture, and thus the gesture's contribution to ongoing comprehension, may be shaped by their embodied simulation at the time of gesture perception. If the preceding linguistic context has prompted a motor simulation of the actions involved in cleaning ("My arm was so tired from wiping down the tabletop"), then the ongoing motor simulation at the moment of gesture perception might lead the listener to interpret the


gesture as an enactment. In contrast, if the conversation up until then has focused on the table’s appearance (“It was a gorgeous antique table with the flattest top you’ve ever seen, but there was water on it”), then the ongoing visual simulation of the table’s appearance could increase attention to the way the gesture represents the visuospatial properties of the table or spill (e.g., its height or shape), prompting a depictive interpretation. Similarly, the viewpoint of a listener’s ongoing simulation, either internal or external to an event, may influence their interpretation of gestures that are ambiguous with respect to viewpoint. The perceived meaning of a speaker’s gesture, therefore, could be shaped by properties of the interlocutor’s ongoing embodied simulation, although at present we know of no empirical support for this proposal.

6. Conclusion: Simulation, gesture, and the embodiment of meaning

Situated meaning-making involves the body in at least two ways: as gesture and as embodied simulation. As we have seen, these two sources of embodiment, external and internal, are tightly coupled both within and between interlocutors (Fig. 158.1). This chapter has focused on concrete meanings: physical actions and objects, spatial arrays, perceivable events. But the human semantic potential far outstrips the concrete, allowing us to communicate about things unseen, displaced in time and space, and things unseeable like time, love, and mathematics. Even these abstract domains, however, may become meaningful through our bodies. Concepts as abstract as time and arithmetic may be mapped metaphorically to more concrete domains, and thus rely on sensorimotor simulations of those concrete source domains (Gibbs 2006; Lakoff and Johnson 1980). Gestures about abstract concepts that are understood metaphorically, moreover, often reflect the concepts' spatial and embodied sources (Cienki and Müller 2008; Marghetis and Núñez 2013). For instance, even though numbers do not literally vary in size, talk of "bigger" or "smaller" numbers may involve both spatial simulations of size and gestures that reflect spatial metaphors for number (Núñez and Marghetis 2014). The fact that one source of embodiment is tucked inside the skull and the other is cast out into the world has implications for the representational work that they can do. First, gestures are visible to interlocutors, both intended and unintended, and are thus distinctly public; simulation, on the other hand, is shielded from others by the braincase, and is thus a private form of embodiment.
Second, simulation is multimodal and thus generates a richly embodied representation, including taste and smell (e.g., Louwerse and Connell 2011); gesture, tied to the manual modality, forces a schematization of the represented content, tied more to action than to any other modality (Goldin-Meadow et al. 2012). Third, simulation isn't limited by the physiology of the hand (or body), and so it can represent the impossible, the difficult, the novel – but it is softly constrained by physiology and experience, so that it is more difficult to simulate unnatural or difficult actions (e.g., Flusberg and Boroditsky 2011). The body proper, by contrast, is perfectly suited for representing itself, without recourse to internal representations of its physiology; manual gestures afford rich representations of manual actions. In sum, gesture and simulation differ in terms of their publicity, their multimodal richness, and their representational affordances.


"It is my body which gives meaning," wrote Merleau-Ponty, "not only to the natural object, but also to cultural objects like words" ([1945] 2002: 273). This meaning-giving body, we have argued, makes its contribution in at least two ways – internal simulation and external gestures – each shaping the other during situated interaction.

7. References

Alibali, Martha W., Robert C. Spencer, Lucy Knox and Sotaro Kita 2011. Spontaneous gestures influence strategy choices in problem solving. Psychological Science 22(9): 1138–1144.
Barsalou, Lawrence W. 2008. Grounded cognition. Annual Review of Psychology 59(1): 617–645.
Beilock, Sian L. and Susan Goldin-Meadow 2010. Gesture changes thought by grounding it in action. Psychological Science 21(11): 1605–1610.
Bergen, Benjamin 2012. Louder Than Words: The New Science of How the Mind Makes Meaning. New York: Basic Books.
Bergen, Benjamin and Kathryn Wheeler 2010. Grammatical aspect and mental simulation. Brain and Language 112(3): 150–158.
Brunyé, Tad T., Tali Ditman, Carolyne R. Mahoney, Jason S. Augustyn and Holly A. Taylor 2009. When you and I share perspectives: Pronouns modulate perspective-taking during narrative comprehension. Psychological Science 20(1): 27–32.
Chu, Mingyuan and Sotaro Kita 2011. The nature of gestures' beneficial role in spatial problem solving. Journal of Experimental Psychology: General 140(1): 102–115.
Cienki, Alan and Cornelia Müller 2008. Metaphor, gesture, and thought. In: Raymond Gibbs (ed.), The Cambridge Handbook of Metaphor and Thought, 483–501. Cambridge: Cambridge University Press.
De Ruiter, Jan P. 2000. The production of gesture and speech. In: David McNeill (ed.), Language and Gesture, 284–311. Cambridge: Cambridge University Press.
Flusberg, Stephen and Lera Boroditsky 2011. Are things that are hard to physically move also hard to imagine moving? Psychonomic Bulletin and Review 18(1): 158–164.
Gibbs, Raymond 2006. Metaphor interpretation as embodied simulation. Mind and Language 21(3): 434–458.
Glenberg, Arthur M. 2010. Embodiment as a unifying perspective for psychology. Wiley Interdisciplinary Reviews: Cognitive Science 1(4): 586–596.
Goldin-Meadow, Susan, Susan C. Levine, Elena Zinchenko, Terina KuangYi Yip, Naureen Hemani and Laiah Factor 2012. Doing gesture promotes learning a mental transformation task better than seeing gesture. Developmental Science 15(6): 876–884.
Goodwin, Charles 2000. Action and embodiment within situated human interaction. Journal of Pragmatics 32(10): 1489–1522.
Hostetter, Autumn B. and Martha W. Alibali 2008. Visible embodiment: Gestures as simulated action. Psychonomic Bulletin and Review 15(3): 495–514.
Hostetter, Autumn B. and Martha W. Alibali 2010. Language, gesture, action! A test of the Gesture as Simulated Action framework. Journal of Memory and Language 63(2): 245–257.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kita, Sotaro 2000. How representational gestures help speaking. In: David McNeill (ed.), Language and Gesture, 162–185. Cambridge: Cambridge University Press.
Kita, Sotaro and Asli Özyürek 2003. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48(1): 16–32.
Lakoff, George and Mark Johnson 1980. Metaphors We Live By. Chicago: University of Chicago Press.
Levelt, Willem J. 1993. Speaking: From Intention to Articulation, Volume 1. Cambridge, MA: MIT Press.


Louwerse, Max M. and Louise Connell 2011. A taste of words: Linguistic context and perceptual simulation predict the modality of words. Cognitive Science 35(2): 381–398.
Madden, Carol J. and Rolf A. Zwaan 2003. How does verb aspect constrain event representations? Memory and Cognition 31(5): 663–672.
Marghetis, Tyler and Rafael Núñez 2013. The motion behind the symbols: A vital role for dynamism in the conceptualization of limits and continuity in expert mathematics. Topics in Cognitive Science 5(2): 299–316.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
Merleau-Ponty, Maurice 2002. Phenomenology of Perception. New York: Routledge.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), The Routledge Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Núñez, Rafael and Tyler Marghetis 2014. Cognitive linguistics and the concept(s) of number. In: Roi Cohen Kadosh and Ann Dowker (eds.), The Oxford Handbook of Numerical Cognition. Oxford: Oxford University Press.
Parrill, Fey 2010. Viewpoint in speech-gesture integration: Linguistic structure, discourse structure, and event structure. Language and Cognitive Processes 25(5): 650–668.
Parrill, Fey 2011. The relation between the encoding of motion event information and viewpoint in English-accompanying gestures. Gesture 11(1): 61–80.
Parrill, Fey, Benjamin Bergen and Patricia Lichtenstein 2013. Grammatical aspect, gesture, and conceptualization: Using co-speech gesture to reveal event representations. Cognitive Linguistics 24(1): 135–158.
Pulvermüller, Friedemann and Luciano Fadiga 2010. Active perception: Sensorimotor circuits as a cortical basis for language. Nature Reviews Neuroscience 11(5): 351–360.
Rogers, William T. 1978. The contribution of kinesic illustrators toward the comprehension of verbal behavior within utterances. Human Communication Research 5(1): 54–62.
Sassenberg, Uta and Elke Van Der Meer 2010. Do we really gesture more when it is more difficult? Cognitive Science 34(4): 643–664.
Sato, Manami 2010. Message in the body: Effects of simulation in sentence production. PhD dissertation, University of Hawai'i at Manoa.
Stanfield, Robert A. and Rolf A. Zwaan 2001. The effect of implied orientation derived from verbal context on picture recognition. Psychological Science 12(2): 153–156.
Streeck, Jürgen 2008. Depicting by gesture. Gesture 8(3): 285–301.
Wagner Cook, Susan and Michael Tanenhaus 2009. Embodied communication: Speakers' gestures affect listeners' actions. Cognition 113(1): 98–104.
Winter, Bodo and Benjamin Bergen 2012. Language comprehenders represent object distance both visually and auditorily. Language and Cognition 4(1): 1–16.
Wu, Ying Choon and Seana Coulson 2007. How iconic gestures enhance communication: An ERP study. Brain and Language 101(3): 234–245.

Tyler Marghetis, San Diego (USA)
Benjamin K. Bergen, San Diego (USA)


159. Embodied and distributed contexts of collaborative remembering

1. Context and collaborative remembering
2. Context, cognition, and interaction
3. Towards an embodied and distributed view on context
4. The need for integration
5. References

Abstract

This article offers an embodied and distributed perspective on the ways in which contexts influence collaborative remembering in small groups in everyday environments. The approach aims to lay the grounds for an ecologically valid theory of context in collaborative remembering, one that accounts for the mutual interdependencies between minds, bodies, and environment that guide joint remembering processes in real-world activities.

1. Context and collaborative remembering

If we agree that human cognitive activity is linked to high-level cognitive processes by way of embodied interaction with the culturally organized material and social world (Hutchins 2010: 712), a detailed description of the context in which processes of collaborative remembering unfold is essential. Studies in cognitive psychology (Harris et al. 2011; Hirst and Echterhoff 2012) have shown that the conversational context of remembering directly influences how individual and shared memories are formed and communicated. However, beyond stating the key role that context plays in guiding memory processes, these studies do not provide further evidence that would shed light on how context actually works in shaping cognitive, embodied, and discourse processes of collaborative remembering. Moreover, if the context of remembering is crucial in determining how memories are formed and communicated, memory research in cognitive psychology needs to account explicitly for its methodological limitations, above all the lack of ecological validity of experimental techniques that sometimes have little to do with the actual activities in which people remember together in their everyday lives. On the other hand, studies from cognitive and ethnographic perspectives in cognitive science (Dahlbäck, Kristiansson, and Stjernberg 2013) and computer science (Wu et al. 2008) that were conducted in naturalistic settings where people are engaged in situated activities show that the context of remembering is crucial in shaping the ways in which people construct and communicate their memories. Computer scientists (Wu et al. 2008) have investigated the cognitive strategies that families create to cope with amnesia in real-world activities. This study explores the communicative strategies that families create to compensate for the memory impairment of one of their members.
These communicative strategies include the use of technological devices (e.g., calendars, personal digital assistants (PDAs), and journals) as well as discursive practices. This investigation shows how, by means of cognitive processes distributed across participants and technological devices, families may work as cognitive systems coping with amnesia. In order to examine the ways in which older adults cope with cognitive decline in natural settings, cognitive scientists (Dahlbäck, Kristiansson, and Stjernberg 2013) have conducted extensive field work in home healthcare services. These studies show that collaborative remembering depends on the interanimation "of a complex distributed web of processes involving both internal or intracranial and external sources" (Dahlbäck, Kristiansson, and Stjernberg 2013: 1), which are tightly constrained by the dynamics of the activities in which individuals are involved. Thus, naturally occurring situations should be taken as the loci for the analysis of collaborative remembering. However, despite the fact that these ecologically valid studies in computer and cognitive science sustain the view that remembering is an embodied, distributed, and situated practice, which should be examined in real-world activities, they have not accounted in detail for the central role that embodied resources (e.g., gestures, eye-gaze, and facial expression) play in conversational remembering. Nor have they provided a detailed analysis of this role or a description of the context in which such multidimensional activities occur. Going further, they have not made explicit how this context influences the interactional practices taking place in it. I believe that this important methodological limitation is related to the fact that most of their analyses are based solely on written transcripts of social interaction, and thus do not consider the richness of multimodal interaction in conversations about past experiences.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2008–2016
A better understanding of the multimodal nature of joint and collaborative remembering activities in natural settings is crucial if we agree with Kendon’s take (1986) regarding the key role that gesticulation plays in everyday communication: I believe gesticulation arises as an integral part of an individual’s communicative effort and that, furthermore, it has a direct role to play in this process. Gesticulation is often an important component of the utterance unit produced, in the sense that the utterance unit cannot be fully comprehended unless its gestural component is taken into consideration. In many instances it can be shown that the gesticulatory component has a complementary relationship to what is encoded in words, so that the full significance of the utterance can only be grasped if both words and gesture are taken into account. (Kendon 1986: 12)

The following example, taken from a data set of an ongoing project on alignment and collaborative remembering in small groups (Cienki, Bietti, and Kok 2014), illustrates the ways in which instances of lexical, syntactic, and gestural alignment provide the interactional structure of collaborative remembering in a family conversation about a specific activity (what the family had experienced in the morning) within a larger event (a family trip to Hawaii in August 2010). In the first turn (line 1), Diego employs a yes-no question, ¿se acuerdan? ‘do you remember?’, to check whether the memories of the specific event that they are jointly remembering are shared to some extent by those he is talking to. Moreover, Diego’s yes-no question not only serves to check whether the other family members are able to recall this shared experience but also backs up his own individual memories of the event, reassuring him that this is how it happened. In other words, the question, acting as an embodied reminder (Bietti 2013a, b; Bietti and Galiana Castelló 2013), may work to make Diego’s individual memories more reliable in terms of his own certainty.


IX. Embodiment

Fig. 159.1: Fragment of the recording of the family participant group

159. Embodied and distributed contexts of collaborative remembering

In this case, the pragmatic function of the yes-no question is related to the group activity of collaboratively reconstructing a shared account of their experience together. If we agree that “one of the most prominent differences between questions and assertions is the obligation to respond” (Levinson 2012: 16), the obligation to respond created by Diego’s question in the first turn triggers the formation of an adjacency pair (Schegloff 2007). An adjacency pair is composed of two turns, each produced by a different speaker and each of a different utterance type. For example, Diego’s question ¿se acuerdan? ‘do you remember?’ (L.1) requires the addressee to produce an answer of a different type: ah fuimos a esa playa ‘ah we went to this beach’ (Dolores, L.2). However, if we go beyond the verbal interaction and look at the ways in which this adjacency pair is realized between Diego and Dolores in fully interactional and multimodal terms, it becomes clear that Diego’s changing gaze direction as he focuses on the interior of the interactional space (Mondada 2009), along with his pointing gesture, also plays a central role in the sequence. According to Mondada (2009: 1979), “interactional spaces are actively and constantly shaped and sustained by the participants’ bodies, glances and gestures during the interaction”. Hence, these co-constructed interactional settings play a central role in guiding mutual attention and reciprocity. Diego’s gaze and pointing gesture should be considered embodied resources for mobilizing a response in the interaction (Stivers and Rossano 2012). These embodied resources reinforce Dolores’ accountability for responding to his question, which acts as a reminder of what they did in the specific shared event in question. Several scholars (Goodwin 1994; Kendon 1990) have documented the regulatory function of speakers’ gazes in social interaction. Moreover, in experimental settings it has been shown (Bavelas, Coates, and Johnson 2002: 576) that “the listener tended to respond when the speakers looked at him or her”, providing compelling evidence that collaborative activities in face-to-face interaction are not driven by verbal resources alone. This example shows that the notion of adjacency pair (Schegloff 2007) needs to be complemented by a concept able to account for the embodied resources that are often present in such minimal interactional sequences. Hence, I believe that the notion of projective pairs (Clark 2012) is more appropriate for dealing with examples of embodied interaction and communication. As Clark puts it:

[Projective pairs] like adjacency pairs, consist of two communicative acts in sequence from different people, with the first part projecting the second. The difference is that either part may be any type of communicative acts – spoken, gestural or otherwise. The proposal here is that question-answer pairs are types of projective pairs, and so one or both parts may be wordless. (Clark 2012: 82)

In the next turn, Dolores (L.2) completes the projective pair initiated by Diego (L.1). Dolores agrees with Diego while pointing at him as she utters esa ‘that’. However, Dolores’ agreement with Diego is not realized exclusively by verbal means. As we can observe in Fig. 159.1b, Dolores also returns the pointing gesture to Diego. Dolores’ embodied agreement with Diego manifests a clear instance of sequential alignment (Pickering and Garrod 2004), which I believe serves to create the grounds for jointly constructing a shared account of what they experienced together that specific day on the beach. This joint and collaborative construction of shared and distributed memories of the event that they experienced together is behaviorally grounded in the continuous repetition and re-use of syntactic structures and lexical items between lines 2 and 5, along with the gesture performed by each of the participants as they represent how wide the beach was (Fig. 159.1c). The re-use and repetition of syntactic structures (e.g., fuimos a la playa ‘we went to the beach’) and lexical items (e.g., playa ‘beach’) represent a case of verbal alignment by means of dialogic syntax (Du Bois 2010). By dialogic syntax, I refer to “structural similarities between immediately co-present segments in a broader conversational context” (Du Bois 2010: 2). In these lines, the repetition of syntactic structures and lexical items supports the agreement between Diego and Dolores and helps them to coordinate their shared memories of the events they are talking about, namely that they visited that particular beach on the first day of their stay in Maui, and that it was wide. However, as Fig. 159.1a, b, and c illustrate, these instances of dialogic syntax are not the only mechanisms guiding the collaborative construction of shared and distributed memories.
The embodied resources that Diego and Dolores manifest in the shared focus of visual attention and the sequential alignment of pointing gestures also guide this collaborative activity. This example shows how important such gesticulations are for the collaborative activity of joint remembering triggered by Diego’s reminder. Interestingly, the interactional sequence taken to illustrate how questions act as reminders in collaborative remembering activities in multimodal interaction ends with a declarative turn by Diego. He states that he had not remembered how wide the beach was (L.7). Thus, the interactional sequence shows not only how memories of shared events are reconstructed in joint activities but also that such collaborative and embodied processes may facilitate the retrieval of details that Diego had forgotten. The aim of presenting this sequence of interaction is to show the need for an integrative perspective on context in collaborative remembering, one that takes into account the interplay between mind, body, and environment guiding these cognitive activities.

2. Context, cognition, and interaction

In order to proceed meaningfully in social cooperation and interpersonal communication, speakers need to take for granted that, to some extent, their representations are shared with their addressees (Givón 2005; Tomasello 2008). By means of situated and distributed (family members talking about their trip to Hawaii) and embodied (gestures, pointing, gaze) activities, family members obtain the information needed to adapt and re-adapt behavior during the course of social interactions. This re-adaptation of behavior during the ongoing communicative interaction is triggered by the updating of common ground between participants. The concept of common ground (Clark and Brennan 1991) refers to the shared knowledge that is essential for communication between people. Several authors (Givón 2005; van Dijk 2008, 2009) argue that in social interactions, the sensation that we share goal-specific and relevant information with our addressees relies on our subjective and unique representation of the context in which the interaction unfolds. According to this representationalist view of context, the speaker’s representation of the context includes a representation of the mind of the interlocutor that may shift constantly from one utterance to the next during live communication. This cognitive and linguistic process allows us to make strategic hypotheses about what our addressee knows. A language user’s representation of the context is not only about his or her interlocutor’s epistemic (knowledge) and deontic (intention) states. Rather, it is constituted by the interplay of the following schematic categories: setting (time and place), current action, and participants with their social and cognitive properties, such as identities, goals, and knowledge (van Dijk 2008).
Van Dijk (2008) goes a step further and claims that the pragmatic and communicative relevance of context models rests on the fact that they control the way in which speakers bring into line, or accommodate, their utterances to the communicative situation. The family conversation shown above illustrates that family members do indeed have a representation of the social interaction in which they are participating, which basically consists of collaboratively remembering their trip to Hawaii in 2010. Traces of representations of the context in which the interaction unfolds are given by the identities and social relationships of the participants (family members). These regulate and determine how they address each other (e.g., in an informal style). In other words, every time the same family members engage in a new social interaction, they may be taking leads from representations of similar situations grounded in cultural models (Shore 1996), driven by socially shared knowledge and individual memories of personal experiences in similar situations. In relation to setting, I agree with the representationalist view of context, which claims that participants in a social interaction should have a continuously updated mental model of the time and place in which the interaction unfolds. In the family conversation, such a representation of the setting would be given by the location (in this case, at home) and the time (August 2011) when and where the interaction takes place, as well as by a representation of the participants’ positions across the interactional space, that is, by remembering and keeping track of who is sitting where (Diego seems to know where his pointing gesture must be directed, Fig. 159.1a). A relevant trace of these operating internal representations of the social interaction in general, and of the specific activity (collaborative remembering) in particular, is Diego’s question acting as a reminder in line 1, which assumes that his individual memories are to some extent shared with the other family members who participated in the same events. However, when we take a closer look at the micro-dynamics of the unfolding interaction, manifested by the coordinated orchestration of multiple behavioral channels (at least between Diego and Dolores, who take the more active roles in the interactional sequence) that constitutes the formation of projective pairs, the internalist and representationalist view of context raises some questions.

3. Towards an embodied and distributed view of context

Experimental studies on lexical and syntactic alignment in psycholinguistics (e.g., Pickering and Garrod 2004) have shown that hearing another person use a certain linguistic form (e.g., a particular sentence structure) makes that form more cognitively accessible, thus increasing the likelihood of the hearer producing related behavior. Moreover, instances of lexical and syntactic alignment in conversation seem to facilitate the construction of shared situation models (van Dijk and Kintsch 1983; Zwaan and Radvansky 1998), that is, shared representations of the events that the participants are talking about. In relation to prosodic alignment in task-based and spontaneous dyadic conversations, several studies (e.g., Truong and Heylen 2012) have found compelling evidence of overall alignment in intensity, pitch, voice quality, and speaking rate between participants. As regards eye-gaze coordination in unscripted conversation, Richardson, Dale, and Kirkham (2007) have demonstrated that conversational interaction involves a tight coupling of visual attention: as their experimental subjects discussed a work of art, their eye movements became distinctly aligned in time. These researchers argue that the better the alignment, the better the participants understand each other; in other words, the better they succeed in fulfilling the shared goal or intention of communicating with one another.
In line with this inquiry into the influence of bottom-up embodied behavior on cognitive processes, recent investigations of collaborative remembering in everyday small-group environments (Cienki, Bietti, and Kok 2014) have pointed out the central role that instances of sequential and simultaneous alignment of multiple behavioral channels (language, manual gesture, facial expression, body position, and eye-gaze) play in these cognitive activities. They do so by showing the web of mutual dependencies between historically and culturally grounded high-level representations (setting, participants, and macro-activity) and dynamic coordinated processes operating over micro time-scales, that is, at the time-scale shaped by the unfolding interaction. These findings accord with several studies of the key role played by the successful coordination of information from internal representations, sustained by individual cognitive resources (e.g., memory systems), and external representations, based on the interacting body and distributed resources such as cognitive tools for improving problem-solving performance (Zhang and Wang 2009).



4. The need for integration

Conceptualizations of context grounded in internal representations of social interactions (Givón 2005; van Dijk 2008, 2009) account for the fact that new communicative situations do not lead participants to construct completely new representations of the context from scratch. Building a completely new representation for each communicative interaction would require too much cognitive effort, and such cognitive processes would therefore not be efficient. Hence, internal representations of the context in which social interactions take place must be partially planned in advance. This advance planning reflects the historically and culturally grounded dynamics of certain elements (e.g., participants, setting, and macro-activity) which constitute an important part of the contexts shaping our everyday cognitive activities. However, as several studies in both experimental and natural settings have demonstrated, the alignment of multiple behavioral channels, together with the interplay of internal and external representations, plays a central role in determining the ways in which specific activities unfold at a micro time-scale, and thereby also shapes contexts in social interactions. Future conceptualizations of context in situated cognitive activities will have to take into account the need to analyze the web of mutual dependencies among historical, cultural, cognitive, linguistic, embodied, and distributed resources unfolding over multiple time-scales. Otherwise, we will still be accounting for only one aspect (e.g., the cognitive or the interactional) of this multidimensional phenomenon.

Notes

Since this article focuses only on approaches that have studied context from cognitive and linguistic perspectives, other important theories of context in anthropological linguistics (e.g., Duranti and Goodwin 1992; Gumperz 1992), conversation analysis (Schegloff 1997), interactional sociolinguistics (e.g., Fetzer 2007), systemic functional linguistics (e.g., Halliday and Hasan 1985), philosophy (e.g., Rysiew 2011, for a review), sociology (e.g., Goffman 1974), and cognitive psychology (e.g., Schank and Abelson 1977) have been intentionally excluded.

5. References

Bavelas, Janet Beavin, Linda Coates and Trudy Johnson 2002. Listener responses as a collaborative process: The role of gaze. Journal of Communication 52(3): 566–580.
Bietti, Lucas M. 2013a. Reminders as interactive and embodied tools for socially distributed and situated remembering. Sage Open 3.
Bietti, Lucas M. 2013b. Embodied reminders in family interactions: Multimodal collaboration in remembering activities. Discourse Studies 15(6): 665–685.
Bietti, Lucas M. and F. Galiana Castelló 2013. Embodied reminders in family interactions: Multimodal collaboration in remembering activities. Discourse Studies 15(6): 665–686.
Cienki, Alan, Lucas M. Bietti and Kasper Kok 2014. Multimodal alignment during collaborative remembering. Memory Studies 7(2).
Clark, Herbert H. 2012. Wordless questions, wordless answers. In: Jan P. de Ruiter (ed.), Questions: Formal, Functional and Interactional Perspectives, 81–99. Cambridge: Cambridge University Press.

Clark, Herbert H. and Susan A. Brennan 1991. Grounding in communication. In: Lauren B. Resnick, John M. Levine and Stephanie D. Teasley (eds.), Perspectives on Socially Shared Cognition, 127–149. Washington: APA Books.
Dahlbäck, Nils, Mattias Kristiansson and Frederik Stjernberg 2013. Distributed remembering through active structuring of activities and environments. Review of Philosophy and Psychology 4(1): 153–165.
Du Bois, John 2010. Towards a dialogic syntax. Unpublished manuscript.
Duranti, Alessandro and Charles Goodwin 1992. Re-thinking context: An introduction. In: Alessandro Duranti and Charles Goodwin (eds.), Rethinking Context: Language as an Interactive Phenomenon, 1–42. Cambridge: Cambridge University Press.
Fetzer, Anita 2007. Context, contexts and appropriateness. In: Anita Fetzer (ed.), Context and Appropriateness, 3–27. Amsterdam/Philadelphia: John Benjamins.
Givón, Talmy 2005. Context as Other Minds: The Pragmatics of Sociality, Cognition and Communication. Amsterdam: John Benjamins.
Goffman, Erving 1974. Frame Analysis: An Essay on the Organization of Experience. London: Harper and Row.
Goodwin, Charles 1994. Professional vision. American Anthropologist 96(3): 606–633.
Gumperz, John 1992. Contextualization and understanding. In: Alessandro Duranti and Charles Goodwin (eds.), Rethinking Context: Language as an Interactive Phenomenon, 229–269. Cambridge: Cambridge University Press.
Halliday, Michael A. K. and Ruqaiya Hasan 1985. Language, Context, and Text: Aspects of Language in a Social-semiotic Perspective. Geelong, VIC: Deakin University Press.
Harris, Celia B., Paul G. Keil, John Sutton, Amanda Barnier and Doris McIlwain 2011. We remember, we forget: Collaborative remembering in older couples. Discourse Processes 48(4): 267–303.
Hirst, William and Gerald Echterhoff 2012. Remembering in conversations: The social sharing and reshaping of memories. Annual Review of Psychology 63(1): 55–79.
Hutchins, Edwin 2010. Cognitive ecology. Topics in Cognitive Science 2(4): 705–715.
Kendon, Adam 1986. Some reasons for studying gesture. Semiotica 62(1/2): 3–28.
Kendon, Adam 1990. Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press.
Levinson, Stephen C. 2012. Interrogative intimations: On a possible social economics of interrogatives. In: Jan P. de Ruiter (ed.), Questions: Formal, Functional and Interactional Perspectives, 11–32. Cambridge: Cambridge University Press.
Mondada, Lorenza 2009. Emergent focused interactions in public places: A systematic analysis of the multimodal achievement of a common interactional space. Journal of Pragmatics 41(10): 1977–1997.
Pickering, Martin and Simon Garrod 2004. Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences 27(2): 169–190.
Richardson, Daniel C., Richard Dale and Natasha Z. Kirkham 2007. The art of conversation is coordination. Psychological Science 18(5): 407–413.
Rysiew, Patrick 2011. Epistemic contextualism. In: Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Winter 2011 Edition.
Schegloff, Emanuel A. 1997. Whose text? Whose context? Discourse and Society 8(2): 165–187.
Schegloff, Emanuel A. 2007. Sequence Organization in Interaction: A Primer in Conversation Analysis, Volume 1. Cambridge: Cambridge University Press.
Schank, Roger C. and Robert P. Abelson 1977. Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale, NJ: Lawrence Erlbaum Associates.
Shore, Bradd 1996. Culture in Mind: Cognition, Culture and the Problem of Meaning. New York: Oxford University Press.
Stivers, Tanya and Federico Rossano 2012. Mobilising response in interaction: A compositional view of questions. In: Jan P. de Ruiter (ed.), Questions: Formal, Functional and Interactional Perspectives, 58–80. Cambridge: Cambridge University Press.



Tomasello, Michael 2008. Origins of Human Communication. Cambridge, MA: MIT Press.
Truong, Khiet P. and Dirk Heylen 2012. Measuring prosodic alignment in cooperative task-based conversations. Proceedings of the 13th Annual Conference of the International Speech Communication Association (InterSpeech 2012), September 9–13, Portland, Oregon.
Van Dijk, Teun A. 2008. Discourse and Context: A Sociocognitive Approach. Cambridge: Cambridge University Press.
Van Dijk, Teun A. 2009. Society and Discourse: How Context Controls Text and Talk. Cambridge: Cambridge University Press.
Van Dijk, Teun A. and Walter Kintsch 1983. Strategies of Discourse Comprehension. New York: Academic Press.
Wu, Michael, Jeremy Birnholtz, Brian Richards, Ronald Baecker and Mike Massimi 2008. Collaborating to remember: A distributed cognition account of families coping with memory impairments. In: Proceedings of the ACM CHI 2008 Conference on Human Factors in Computing Systems, 825–834.
Zhang, Jiajie and Hongbin Wang 2009. An exploration of the relations between external representations and working memory. PLoS ONE 4(8).
Zwaan, Rolf A. and Gabriel A. Radvansky 1998. Situation models in language and memory. Psychological Bulletin 123(2): 162–185.

Lucas M. Bietti, Paris (France)

160. Living bodies: Co-enacting experience

1. Introduction
2. The standard view of embodiment: Universal, minimal, individual
3. From embodiment to inter-bodily co-enacting
4. Conclusion
5. References

Abstract

We advocate a move away from the received notion of embodiment that operates in much of cognitive science and cognitive linguistics and a corresponding move towards the notion of inter-bodily co-enacting, which affords salient features and phenomena for the study of language in social interaction. Human bodies come in a wide variety of forms; bodies are different both in how they sense and in how they are sensible to others. We review the paradoxes and limitations of embodiment when “the human body” or “all human bodies” are characterized in simultaneously universal, individual, and minimal (sub-personal) terms. The implicit logic of this use of “embodiment” holds that cognition is the activity of isolated individual minds (even if they are indeed “embodied” minds), and that only by guaranteeing the sameness in structure will we reach sameness in meaning and thereby secure communicative success. To offer an alternative to this view, we draw on distributed and enactive cognition and interaction studies to demonstrate how specific sense-making bodies-in-interaction participate in the coordination dynamics that afford meaning, understood as consequences in experience.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2016–2025

1. Introduction

In this piece we argue that much of the current broad usage of the term “embodiment” in cognitive science and cognitive linguistics rests on an implicit, under-thematized notion of “the body”. This body consists of basic structures, organization, processes, and parameters for sense, movement, and orientation. Everyone’s having the same body, which does the same sorts of things and reacts in the same sorts of ways to similar environmental stimuli, is taken to ground meaning, specifically linguistic meaning construction, as developed for example in conceptual metaphor theory. However, in order to capture the features relevant to collaborative meaning-construction in linguistic interaction, we will need to look at the particularities of and differences between living bodies as well. Other aspects of embodiment, such as a body’s unique physical appearance, personal history of pain and pleasure experiences, and restrictions or enhancements of bodily being in the world, are highly salient and constitutive of the meaning co-created in conversations, and therefore must not be left out of inquiry into the embodiment of linguistic sense-making (understood as whole-body sense-making). These days the academic landscape is changing as a result of a double movement consisting, on the one hand, of a biologization (naturalization) of the humanities and social sciences and, on the other hand, of a sociologization (culturalization) within the natural sciences, as noted by the German sociologist Werner Vogd (2010). We can and indeed should bring sociological, cultural-political, and cognitive approaches together, once we realize that experience, perspective, and value are the collective engine of sense-making.
In doing this cross-disciplinary work, we build on the enactive approach to cognitive science, which develops this notion of interactive embodied sense-making, and the distributed approach to cognition and language, which studies the dynamic unfolding of sense in particular ecologies that span individual bodies and multiple timescales.

2. The standard view of embodiment: Universal, minimal, individual

The meaning of the notion of “embodiment” in cognitive science and cognitive linguistics is notably multiple. It has been characterized in terms of three constituent levels or aspects (Gibbs 2006; Johnson 1999; Lakoff and Johnson 1999); it has been mined for six premises (Wilson 2002; Ziemke 2003); and, scanning the web of related disciplines in which it features as a core notion, Tim Rohrer comes up with twelve distinct usages (2007: 28–29). Rohrer generalizes these twelve into two broad senses of embodiment that pertain to cognition and language: “embodiment as broadly experiential” and “embodiment as the bodily substrate” (2007: 31), noting that the latter is increasingly in focus, while interaction between these two senses remains systematically understudied (2007: 44). This polysemy points to the richness of a multi-level phenomenon, but also flags dangers of abstraction and overgeneralization. For several reasons we discuss here, this once-revolutionary and yet still underdetermined term may be too blunt an instrument for researching everyday, meaningful interactions. Face-to-face conversations involve multiple, whole living bodies co-enacting meaning. Meaning emerges via engagement with a particular environment and with particular others. Meaning enacting draws on specific complexes of shared discourses and common grounds, and it unfolds across multiple timescales. Meaning, by which we mean consequences in experience, is an ongoing process of co-achievement via various interactions with shared symbols and emergent interactive dynamics. Yet the notion of embodiment, even if it were to put on all of its many colorful hats and fascinators, is fundamentally structured so as to tell us about one body: a general, universal, and yet, paradoxically, individual and isolated body. To make our point clear: we are not denying that human bodies in general share physical processes and features of a universal character. Instead, we are arguing that when academics theorize about embodiment they tend to use an idealization, a standard (an implicit mental image or representation, if you will) of “the human body”, rather than real bodies. In this way the notion of embodiment is often underdetermined and can wind up being reductive, even if unintentionally so. Robin Zebrowski (2009: 5) aptly observes that in cognitive science and conceptual metaphor theory (as well as in other fields, such as biology and bio-ethics), there operates an unquestioned myth of a “standard” body. Following the work of George Lakoff (1987), she argues instead for a radial category structure for describing human bodies (Zebrowski 2009: 266). Understanding human embodiment as varied by design may offer a more nuanced perspective than what is sometimes found in cognitive linguistics.
For example, Zoltán Kövecses, whose work on cross-cultural conceptual metaphors of emotion is field-defining, employs a notion of differential experiential focus which brings into view the perspective in which “the body”, taken as the seat of conceptualization and meaning, is at once individual and universal: “Embodiment leads to universality. All human bodies are the same.” (Kövecses 2013: 1) All hearts beat, all temperatures elevate, and all palms sweat, while “differences in cultural knowledge and pragmatic discourse functions” explain the global diversity of emotion concepts and metaphorical expressions (Kövecses 2003: 183). Zebrowski’s comment on universality in conceptual metaphor theory is useful here:

While the theories of conceptual metaphor seem empirically correct, we must examine what it means to have ‘the kinds of bodies we have,’ since it seems to us as though in the multitude of metaphors given, there is an assumption that physical bodies are standard across individuals, and it is only culture and language that differ in their interpretations of these bodily universals. (Zebrowski 2009: 15)

Moreover, we question whether these basic physiological features that bodies have in common can account for all of the relevant features of co-enactments of meaning in language use (see section 3). In addition to the logic of the universal body, work in cognitive science and cognitive linguistics that trades on the notion of embodiment may employ a minimal notion of embodiment: the brain-as-body (Gallagher 2013). Zebrowski details a long history of understanding the brain as a static machine that performs the same functions in the same way in each body-container in which it is found (2009: 90–94). Lakoff and Feldman’s Neural Theory of Language Project at the University of California, Berkeley (e.g., Feldman and Narayanan 2004) and simulation-based explanations of language understanding (Barsalou 1999; Bergen, Narayan, and Feldman 2003) ground the notion of embodied language and meaning on events, structures, and processes in the brain. For example, Feldman (2010: 1) writes: “One major scientific advance in recent decades has been Embodiment – the realization that scientific understanding of mind and language entails detailed modeling of the human brain and how it evolved to control a physical body in a social community.” Thus we can see that the notion of embodiment that is taken to ground linguistic meaning and human cognition seems to require subscription to an abstraction of “the body”. The universal, standard body is minimal or largely sub-personal, consisting, for example, of a brain and nervous system, other physiological structures, spatial orientation, and a sensory-motor system. (Note that it does not, for the purposes typically considered relevant to language studies, consist of skin color, hair texture, genitalia, weight, age, ability, or other features that contribute to the meaning experienced and generated by living human bodies in social interaction.) This ideal commonality is very tempting because it shortcuts an ever-looming philosophical issue: we have to explain how all the individual brains locked away in individual bodies share meaning and understand each other. If we make the case that body structure, physiology, orientation, sensorimotor systems, anatomically afforded movements, etc. are indeed held in common, and if the (common) body is the ground of meaning, then this sharing is secured. There is another compelling reason to redraw the terms of “embodiment”: the unreflective ways in which the term operates in theories of cognition and language maintain an impassable rift between the individual and the social. Mainstream cognitive science and philosophy of mind generally take biology as first and foremost an individual phenomenon, while sociality is understood as something purely collective and public.
Correspondingly, cognition is construed as an individual, internal, and private process (e.g., the result of deep hidden structures), while communication conversely is conceived as purely social, public, and outer (e.g., an external manifestation of inner thought). These general notions are indeed established and important figures of thought that have helped us to make distinctions about the world. But an unfortunate implication of these distinctions is that they often come off as mutually exclusive. On a dichotomous reading, what is social is understood as that which by definition does not belong to nature or biology, and the other way round. Likewise the cognitive is defined in virtue of its not being something “out there” in the world of communication.

Furthermore, these dichotomies share the underlying premise that it is the skin that constitutes the principal boundary between the inner and the outer, and consequently the demarcation between what can be described and understood in biological or in sociological terms respectively. (For critiques of this idea of the skin, or in terms of cognition, the skull, as the principal boundary limiting the arena of cognition, see Clark 2010; Cowley and Vallée-Tourangeau 2013; Steffensen and Cowley 2010; Stewart, Gapenne, and Di Paolo 2010). Moreover, the outer social world of communication and cultural practices is typically captured as belonging to the context of human actions. The context then is often, metaphorically speaking, understood as a kind of “social container” that surrounds and encapsulates the doings of the individual bodies, which are apprehended as separate entities constraining the limits of biological and cognitive functions.
While the notion of embodiment that characterizes “the second generation of cognitive science” (Lakoff and Johnson 1999: 77–78) ties together cognition and body, it often leaves unchallenged the dichotomy between the outer social world and the inner world of thought and (embodied) cognition.


IX. Embodiment

The main project of cognitive linguistics has been to ground thought and language in embodied experience, and rightly so. Yet when “embodied experience” is undertheorized and left to its own conceptual devices, implicit folk ideas about what a body is take hold. Moreover, excitement about incorporating empirical and specifically neuroscientific approaches into the humanities has led to a shift away from developing the notion of experiential embodiment to focusing on the causal powers of the bodily substrate, as Rohrer observes (2007: 37). Then, as exemplified in Kövecses’ work, the attempt to explain how embodied cognitive structures are influenced by socio-cultural facets casts the two as distinct entities: embodied cognition on the one hand and context and culture on the other. Actual living bodies, with their idiosyncrasies, in their historical, geographical, social-cultural performances, communicating and communing, are lost; only mysteries about intentions and others’ minds remain.

In analyzing and explaining language, communication, and cognition, how can we recover a rich sense of bodies as they show up meaningfully and shape the meaning of our everyday lives? Our body-selves show up for others according to different, shifting perspectives and purposes: as a body with breasts (large, small, or missing), with dark or light skin, with missing or robotic appendages; bodies to invade, bodies to embrace, bodies to ignore, bodies to aspire to. We should not miss these dimensions of interbodily being and acting together when we investigate communication and other forms of collective human sense-making. As stated above, language activity in conversational interaction (at least) takes place as a social, multi-party, ecologically embedded practice (as observed by many in the dialogic tradition, e.g., Linell 2009: 49).
If we begin with the interaction itself or with the dialogical system (Steffensen 2012) as the target of analysis, rather than a neatly divisible dyad of speaker-listener, then the pressing need to cross the solipsistic abyss that separates one’s mind from the mind of the other is lessened if not obliterated (De Jaegher and Di Paolo 2007). Two developing paradigms in cognitive science – distributed cognition and enactive cognition – begin from this new starting place, thus offering avenues to re-thinking the role of “embodiment” in language, communication, thinking, and meaning.

3. From embodiment to inter-bodily co-enacting

The emerging paradigms of enactive and distributed cognition strike out on a middle way, attempting to dissolve the dichotomies of biological/cultural, individual/social, inner/outer, standard/non-standard as “merely abstractions from the interactive (enactive) process that is experience” (Johnson and Rohrer 2007: 47). What notion of body can we find in these approaches? First of all, there is a shift in perspective from embodiment as an encapsulated feature of individual cognition to a broader focus on a wider cognitive ecology, that is, the coactions of bodies participating in an environment. In the words of Evan Thompson, “[t]he roots of mental life lie not simply in the brain, but ramify through the body and the environment. Our mental lives involve our body and the world beyond the surface membrane of our organism” (Thompson 2007: ix). This suggests that the unit of analysis in cognitive science may shift from the body, and its embodied cognition, as a well-defined isolated phenomenon, to the inter-relation between bodies and environmental structures that make up an extended ecology (Steffensen 2011). Here it is crucial to bear in mind that the notion of ecology does not correspond directly to the more familiar concept of context. The ecology is not an outer frame that just surrounds or contains the individual agents, and it cannot be captured in the simple outer-inner dichotomy. Rather, the ecology emerges from the active sense-making of agents employing the physical materials and socio-cultural resources of the environment, and furthermore “the ecology is embodied to the extent that it allows us to be sensitive to the sensitivity of others” (Steffensen and Cowley 2010: 333). In other words, the human body does not exist in isolation; instead we co-evolve with the environment. Therefore embodiment cannot be reduced to an isolated individual body.

Furthermore, given the non-isolation of human bodies, any body’s smell, appearance, proximity, and style of movement, for example, directly impinge upon, perturb, or have meaning for the bodies around it. In other words, the idiosyncratic differences of bodies in interaction make a difference to sense-making. Put yet another way, a fuller concept of bodies in interaction should refer not only to organismic existence in its characteristic modes of motility, sensing, and perspective, but also to living bodies’ uniquely sensible presences that carry significance for other bodies and that contribute to a gestalt “felt sense” of a situation (Johnson 2007). While there are compelling reasons and cases in which it is useful or appropriate to think of our body as the container of our organs and physiological processes, it does not follow from this that the skin constitutes the limit of our experiential world. Instead it is the body’s semi-permeable nature, its breach, which provides us with the possibility of experience in the first place. [...]
Embodiment may be a nomological condition for agency but it is ‘embodiment’ broadly conceived, for it is the agent’s capacity to transgress its boundaries, to spill over into the bodily experience of others, which establishes the community of felt co-engagement. (Stuart 2010: 307–308)

The growing and overlapping fields of interaction studies, gesture studies, dynamical systems approaches to cognition, and multimodal metaphor research collectively suggest that the shifting and concatenating rhythms of the in-between, while indeed difficult to parse for the purposes of quantitative analysis, must be included when we undertake to explain meaning construction. One extant route is found in video-based gesture studies (e.g., Kappelhoff and Müller 2011; Kendon 2004; Streeck 2009), which “suggest breaking away from the idea that communication consists of distinctive channels for the verbal and the non-verbal, to demonstrate the ways in which social action and interaction involve the interplay of talk, visible and material conduct” (Heath, Hindmarsh, and Luff 2010: 9). While this could be the topic of another entry (or book, or series), we maintain that the meaning of interaction is found in the consequences in experience that are afforded to and modulated by participants, that is, by living bodies and body-selves. Interaction is continually re-organized via coordination processes in which people participate but over which they do not exercise full control (De Jaegher and Di Paolo 2007). These processes may be measured in terms of metaphoricity or other identifiable moments of change or breakdown (Jensen and Cuffari in preparation).

The enactivist paradigm in cognitive science also offers resources for rethinking embodiment in the way that is here recommended. On the enactivist view (Froese and Di Paolo 2009; Maturana and Varela 1980; Varela, Thompson, and Rosch 1991), cognition is the active sense-making of an autonomous living being as it navigates, creatively explores, and evaluates its world in movement, perception, and response. This sense-making is always co-authored by the environment and by others in it, as captured by the notion of structural coupling (Varela, Thompson, and Rosch 1991). In an organism-environment interaction, the coupled domains are reciprocally co-constituting; sensory inputs guide organism actions, and organism actions modulate the environment and thus modify the sensory returns. On this view,

[…] what the world ‘is’ for the organism amounts to neither more nor less than the consequence of its actions for its sensory inputs; this in turn clearly depends on the repertoire of possible actions. This is the heart of the concept of enaction: every living organism enacts, or as Maturana (1987) liked to say brings forth the world in which it exists. (Stewart 2010: 3)

Thus in enactivism we find that cognition is a feature of living bodies, which by definition exist in interactions; furthermore, reality itself is a product of life’s dynamically unfolding couplings and interactions. For present purposes, the pivotal consequences of this view are threefold. First, bodies exist in relation, and are social (Johnson and Rohrer 2007: 43). Second, the phenomenal or experiential world is a function of acting, living, social bodies (Thompson 2007: 237). Third, each living body uniquely enacts its own precarious perspective, or “needful freedom” (Jonas 1966: 80) via its on-going cognizing or sense-making. Note that the enactivist’s core tenets of autopoiesis (Maturana and Varela 1980) and adaptivity (Di Paolo 2006) put the focus on life as it occurs in so many unique and precarious perspectives. In enactivism, specificity and idiosyncrasy are “built-in”. Ongoing work in enactivism now approaches “pathologies” such as autism (De Jaegher 2013), schizophrenia and other kinds of mental illness (e.g., Fuchs 2009), and locked-in syndrome (Kyselo 2012) from the perspective of how specific living bodies inter-enact worlds of significance.

Ultimately, then, the move to ground language and sense-making in the interactions, experiences, and in-betweens of living bodies, rather than in an abstract notion of universal-individual embodiment, will engender confrontations with the fundamental underdeterminations that haunt everyday co-enacted meaning and communication. While preserving this possibility of radical difference, we can also follow a pragmatist lead in noting that “mind arises through communication by a conversation of gestures in a social process or context of experience” (Mead 1934: 50). Solipsism is a worry structurally related to the premise that minds are fundamentally individual, and hence should not linger here.
When we begin with the social act as “primitive” for the emergence of cognition, consciousness, and self-hood, shared meaning is not precluded from the start (Mead 1934: 47). Nevertheless, this challenge to understand human meaning-making as grounded in inter-bodily being can serve as a motivation for new methods in research. As mentioned, the interactive and multimodal turn observable in cognitive linguistics today is a promising response, a journeying forth on this middle way. Recent work brings the notions of inter-corporeal and multi-body cognition to neuroscience as well (e.g., Dumas 2011; Froese and Fuchs 2012).

4. Conclusion

Importantly, in reviewing these different possibilities for grounding meaning in bodily life, our point is not to discard the individual as a living organism, self, or person. To the contrary, we aim to recover the individual in its particularity of perspective and experience, as a unique center of care, agency, and sense-making that has a unique history and knows of unique affordances. Nor is our point to deny identifiable similarities in dimensions of human bodily existence. For example, “sense-making” and participating in distributed dialogical systems are assumed to be basic traits of human bodily life. What we call for are treatments of embodiment that maintain conceptual space and curiosity for the full range of significant aspects of sensing and sensible bodies as they interact, experience, and live from moment to month to year. For the purposes of studying conversations and other live interactions involving language, the term “embodiment” must at least be complemented by, if not replaced with, descriptions such as “interbodily” and “bodies”, to maintain the reality of plurality and difference that shapes our exchanges and our shared meaning. There is only so much that can be understood about sense-making on the basis of what “all human bodies” have in common. A crucial question to be addressed in further cross-disciplinary research, then, is what relationships obtain between experience, individual body-selves, and the dynamic coordinations or meanings that they enact in coming together in particular times and places. While we have not taken on this work here, we have tried to clear space for this way of thinking by calling for more complex and pluralistic treatments of bodies in interaction.

Acknowledgements

Thanks to George Fourlas and Miriam Kyselo for helpful comments. This work is supported by the Marie-Curie Initial Training Network, “TESIS: Towards an Embodied Science of InterSubjectivity” (FP7-PEOPLE-2010-ITN, 264828).

5. References

Barsalou, Lawrence W. 1999. Perceptual symbol systems. Behavioral and Brain Sciences 22: 577–609.
Bergen, Benjamin, Shweta Narayan and Jerome Feldman 2003. Embodied verbal semantics: Evidence from an image-verb matching task. In: Richard Alterman and David Kirsh (eds.), Proceedings of the 25th Annual Conference of the Cognitive Science Society, 139–144. Mahwah, NJ: Lawrence Erlbaum.
Clark, Andy 2010. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford: Oxford University Press.
Cowley, Stephen J. (ed.) 2011. Distributed Language. Amsterdam/Philadelphia: John Benjamins.
Cowley, Stephen J. and Frederic Vallée-Tourangeau (eds.) 2013. Cognition Beyond the Body: Interactivity and Human Thinking. Dordrecht: Springer.
De Jaegher, Hanne 2013. Embodiment and sense-making in autism. Frontiers in Integrative Neuroscience 7: 15.
De Jaegher, Hanne and Ezequiel Di Paolo 2007. Participatory sense-making. Phenomenology and the Cognitive Sciences 6(4): 485–507.
Di Paolo, Ezequiel A. 2006. Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive Sciences 4(4): 429–452.
Dumas, Guillaume 2011. Towards a two-body neuroscience. Communicative and Integrative Biology 4(3): 349–352.
Feldman, Jerome A. 2010. Cognitive Science should be unified: Comment on Griffiths et al. and McClelland et al. Trends in Cognitive Sciences 14(8): 341.
Feldman, Jerome A. and Srini Narayanan 2004. Embodied meaning in a neural theory of language. Brain and Language 89: 385–392.
Froese, Tom and Ezequiel Di Paolo 2009. Sociality and the life–mind continuity thesis. Phenomenology and the Cognitive Sciences 8(4): 439–463.
Froese, Tom and Thomas Fuchs 2012. The extended body: A case study in the neurophenomenology of social interaction. Phenomenology and the Cognitive Sciences 11(2): 205–235.
Fuchs, Thomas 2009. Embodied cognitive neuroscience and its consequences for psychiatry. Poiesis and Praxis 6(3–4): 219–233.
Gallagher, Shaun 2013. Embodied intersubjectivity and psychopathology. TESIS Munich Workshop: Psychopathology and Psychotherapy: An Interpersonal Approach. Klinikum rechts der Isar (MRI), Department of Psychosomatic Medicine and Psychotherapy, Munich, Germany, June 7 2013.
Gibbs, Raymond W. 2006. Embodiment and Cognitive Science. Cambridge: Cambridge University Press.
Heath, Christian, Jon Hindmarsh and Paul Luff 2010. Video in Qualitative Research. Thousand Oaks: Sage Publications.
Jensen, Thomas W. and Elena C. Cuffari in preparation. Doubleness in experience: A distributed enactive approach to metaphor in real life data.
Johnson, Mark 1999. Embodied reason. In: Gail Weiss and Honi F. Haber (eds.), Perspectives on Embodiment: The Intersections of Nature and Culture, 81–102. New York: Routledge.
Johnson, Mark 2007. The Meaning of the Body: Aesthetics of Human Understanding. Chicago: University of Chicago Press.
Johnson, Mark and Tim Rohrer 2007. We are live creatures: Embodiment, American pragmatism, and the cognitive organism. In: Tom Ziemke, Jordan Zlatev and Roslyn M. Frank (eds.), Body, Language and Mind, Volume 1, Embodiment, 17–54. Berlin/New York: Mouton de Gruyter.
Jonas, Hans 1966. The Phenomenon of Life: Toward a Philosophical Biology. New York: Harper and Row.
Kappelhoff, Hermann and Cornelia Müller 2011. Embodied meaning construction: Multimodal metaphor and expressive movement in speech, gesture, and feature film. Metaphor and the Social World 1(2): 121–153.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kövecses, Zoltán 2003. Metaphor and Emotion: Language, Culture, and Body in Human Feeling. Cambridge: Cambridge University Press.
Kövecses, Zoltán 2013. Conceptualizing emotions: A cognitive linguistic perspective. Researching and Applying Metaphor (RaAM) Seminar 2013: Metaphor, Metonymy and Emotions. Poznań, Poland, May 4 2013.
Kyselo, Miriam 2012. From body to self – Towards a socially enacted autonomy, with implications for locked-in syndrome and schizophrenia. Unpublished PhD dissertation, University of Osnabrück.
Lakoff, George 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press.
Lakoff, George and Mark Johnson 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.
Linell, Per 2009. Rethinking Language, Mind, and World Dialogically: Interactional and Contextual Theories of Human Sense-Making. Charlotte: Information Age Publishing.
Maturana, Humberto R. and Francisco J. Varela 1980. Autopoiesis and Cognition: The Realization of the Living. Dordrecht/Boston: D. Reidel Publishing Company.
Mead, George H. 1934. Mind, Self and Society from the Standpoint of a Social Behaviorist. Chicago, IL: University of Chicago Press.
Rohrer, Tim 2007. Embodiment and experientialism. In: Dirk Geeraerts (ed.), The Oxford Handbook of Cognitive Linguistics, 19–47. Oxford/New York: Oxford University Press.
Steffensen, Sune V. 2011. Beyond mind: An extended ecology of languaging. In: Stephen Cowley (ed.), Distributed Language, 185–210. Amsterdam/Philadelphia: John Benjamins.
Steffensen, Sune V. 2012. Care and conversing in dialogical systems. Language Sciences 34(5): 513–531.
Steffensen, Sune V. and Stephen Cowley 2010. Signifying bodies and health: A non-local aftermath. In: Stephen Cowley, Joao C. Major, Sune V. Steffensen and Alfredo Dinis (eds.), Signifying Bodies: Biosemiosis, Interaction and Health, 331–356. Braga: The Faculty of Philosophy of Braga.
Stewart, John 2010. Foundational issues in enaction as a paradigm for cognitive science: From the origin of life to consciousness and writing. In: John Stewart, Olivier Gapenne and Ezequiel A. Di Paolo (eds.), Enaction: Toward a New Paradigm for Cognitive Science, 1–32. Cambridge, MA: Massachusetts Institute of Technology Press.
Stewart, John, Olivier Gapenne and Ezequiel A. Di Paolo (eds.) 2010. Enaction: Toward a New Paradigm for Cognitive Science. Cambridge, MA: Massachusetts Institute of Technology Press.
Streeck, Jürgen 2009. Gesturecraft: The Manu-Facture of Meaning. Amsterdam/Philadelphia: John Benjamins.
Stuart, Susan 2010. Enkinaesthesia, biosemiotics, and the ethiosphere. In: Stephen Cowley, Joao C. Major, Sune V. Steffensen and Alfredo Dinis (eds.), Signifying Bodies: Biosemiosis, Interaction and Health, 305–330. Braga: The Faculty of Philosophy of Braga.
Thompson, Evan 2007. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Harvard University Press.
Varela, Francisco J., Evan Thompson and Eleanor Rosch 1991. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: Massachusetts Institute of Technology Press.
Vogd, Werner 2010. Gehirn und Gesellschaft. Weilerswist: Velbrück Wissenschaft.
Wilson, Margaret 2002. Six views of embodied cognition. Psychonomic Bulletin and Review 9(4): 625–636.
Zebrowski, Robin L. 2009. We are plastic: Human variability and myth of the standard body. Unpublished PhD dissertation, University of Oregon, Eugene, Oregon.
Ziemke, Tom 2003. What’s that thing called embodiment? In: Richard Alterman and David Kirsh (eds.), Proceedings of the 25th Annual Conference of the Cognitive Science Society, 1134–1139. Mahwah, NJ: Lawrence Erlbaum.

Elena Clare Cuffari, San Sebastian (Spain)
Thomas Wiben Jensen, Slagelse (Denmark)


161. Aproprioception, gesture, and cognitive being

1. Introduction
2. The study of gesture and its implications
3. IW’s gestures
4. Discussion: Growth points, material carriers, and inhabitance
5. To sum up
6. References

Abstract

Aproprioception is the loss of proprioceptive feedback, the sense of one’s own bodily position and movements in time and space. Aproprioception renders practical action nearly impossible without mental concentration and visual supervision; in contrast, it does not affect gestures. Gestures remain intact in a morphokinetic sense and in synchrony with speech, as if proprioception plays little role in gesture planning and execution. Using the case of IW (“The Man Who Lost His Body”, the title of a BBC Horizon program devoted to his case), the present chapter discusses the phenomenon of aproprioception and its relation and relevance to gestures and cognitive being.

1. Introduction

Mr. Ian Waterman, sometimes referred to as “IW”, suffered at age 19 a sudden, total deafferentation of his body from the neck down – the near total loss of all the touch, proprioception, and limb spatial position senses that tell you, without looking, where your body is and what it is doing. The loss followed a never-diagnosed fever that is believed to have set off an auto-immune reaction. The immediate behavioral effect was immobility, even though IW’s motor system was unaffected and there was no paralysis. The problem was not lack of movement per se but lack of control. Upon awakening after three days, IW nightmarishly found that he had no control over what his body did – he was unable to sit up, walk, feed himself, or manipulate objects; none of the ordinary actions of everyday life, let alone the complex actions required for his vocation.

To imagine what deafferentation is like, try this experiment: Sit down at a table (something IW could not have done at first) and place your hands below the surface; open and close one hand, close the other and extend a finger; put the open hand over the closed hand, and so forth. You know at all times what your hands are doing and where they are, but IW would not know any of this – he would know that he had willed his hands to move but, without vision, would have no idea of what they are doing or where they are located. The IW case is a fascinating study of a person who has lost his body schema (to use Gallagher’s 2005 terminology), or “his body” as in the title of the 1998 BBC Horizon program about IW, The Man Who Lost His Body. The neuronopathy destroyed all sensory neurons roughly below the neck level in proportion to their myelination and conduction speed, sparing fibers underlying temperature and pain. The initial medical prognosis was that IW would spend the rest of his days confined to a wheelchair.
Not one who takes setbacks lightly, IW commenced a rigorous, self-designed and self-administered program of movement practice with the aim of learning to move again, endlessly performing motions over and over in different combinations, different trajectories, different distances and velocities, until he could, by thinking about the motion and using vision as his guide, plan and execute many movements flawlessly – so flawlessly, indeed, that observers find nothing unusual about them. The original description of IW and his self-administered recovery was called Pride and a Daily Marathon, a title chosen to describe the rigor and determination of IW battling the catastrophe that had befallen him (see Cole 1995). After more than 30 years, IW has developed an entirely new way of initiating and controlling movement. He has perfected this style to an astonishing degree. His movements depend on having constant visual contact with his limbs and the environment, including the surrounding space, objects to be manipulated, and any other objects in the immediate vicinity. Every movement is planned in advance, the force and direction calculated intuitively, and the movement monitored as it is taking place. Given all these requirements, it is impressive to see IW move without visible flaw at normal speeds. Although his gait seems somewhat lumbering (he calls it controlled falling), his arm and hand movements are truly indistinguishable from normal. However, if vision is denied, IW can no longer control his hands and arms accurately.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2026–2048

2. The study of gesture and its implications

Below we describe several experiments on gesture, conducted with IW, but first we explain briefly the kinds of gestures we focus on, how we study them, and what they reveal of a specific mode of cognition during speech.

2.1. The gesture continuum

The word “gesture” covers a range of communicative events. The term is nonetheless convenient and we shall retain it for this chapter, but first we draw some crucial distinctions. The gestures of concern to us are an integral component of language, not a substitute, accompaniment, or ornament. Such gestures are synchronous and co-expressive with speech, not redundant, and not signs, salutes, or so-called emblems (see below). They are frequent – about 90% of spoken utterances in descriptive discourse are accompanied by them (Nobe 2000). They occur in similar form across many cultures (we have observed speakers from more than 20, including “high-gesture” cultures, such as Naples). The gestures so described were termed “gesticulations” by Kendon (1988); other gestures in his terminology were “language-like” and “pantomime” – all contrasted to “signs”. Arranged on a continuum, they can be organized as follows (McNeill 1992):

Spontaneous Gesticulation → Language-like → Pantomime → Emblems → Signs

The differences along the gesture continuum map onto three dimensions: how necessary speech is to the gesture; how language-like the gesture is; and how conventionalized its form is. These three perhaps can be reduced to an unnamed deeper dimension. Nonetheless, it is useful to see how points on the continuum differ on the three. So as one goes from gesticulation to sign, the relationship of gesture to speech changes:


– The obligatory presence of speech declines.
– Language-like properties increase.
– Socially regulated conventional signs replace self-generated form-meaning pairs.

Language-like gestures have a different timing relationship with speech from gesticulations. For example in “he goes [-]”, a gesture synchronizes with a momentary pause in speech, a vacant grammatical slot. Here gesture substitutes for speech. An emblem is a culturally established morpheme (or semi-morpheme, because it does not usually have combinatoric, “syntagmatic” values), such as the “OK” sign and others. Emblems can occur with or without speech. Pantomime is gesture without speech, often in sequences and usually comprised of simulated actions. What distinguishes pantomime from gesticulation is that the latter, but not the former, is integrated with speech. Pantomime, if it relates to speaking at all, does so as a “gap filler”. Speech-gesticulation combinations are cognitive constructions, and occur where speech and gesture are co-expressive of the same idea. Movement by itself offers no clue to whether a gesture is “gesticulation” or “pantomime”; what matters is whether the two modes of semiosis, linguistic form and gesture, simultaneously co-express one idea unit. Sign languages are full, socially constituted, non-spoken languages.

Even though “gesticulation” (hereafter, “gesture”) is only one point on the continuum, it dominates gesture output in storytelling, living space descriptions, academic discourse (including prepared lectures), and conversations. Such gestures synchronize with speech at points where they and speech embody shared underlying meanings in discourse, possess “communicative dynamism” (Firbas 1971), and are points of maximal discursive force (McNeill and Duncan 2000). Commonly, 99% if not all gestures in such contexts count as “gesticulation”. An example from a student participant in one of our earliest experiments is shown in Fig. 161.1.

2.2. Gestures and speech – two simultaneous modes of semiosis

Fig. 161.1 illustrates synchronous co-expressive speech and a gesture recorded during a narration. The speaker had just watched a cartoon and was recounting it to a listener from memory. We explained that the task was storytelling and did not mention gesture (the same method was used with IW). The speaker was describing an event in which one character (Sylvester) attempted to reach another character (Tweety) by climbing up the inside of a drainpipe; a pipe conveniently topping out next to a window where Tweety was perched. The speaker said, “and he goes up thróugh the pipe this time” (with prosodic emphasis on “thróugh”). Co-expressively with “up” her hand rose and with “thróugh” her fingers spread outward to create an interior space. The upward movement and the opening of the hand were simultaneous and both synchronized precisely with “up thróugh”, the linguistic package that carried the related meanings. The prosodic emphasis on “thróugh”, highlighting interiority, is matched by the added complexity of the gesture, the spreading and upturning of the fingers. What we mean by co-expressivity here is this joint highlighting of the ideas of rising and interiority, plus their joint contribution to communicative dynamism. (More extensive accounts are in McNeill 1992 and McNeill 2005.)

Fig. 161.1: Gesture combining entity, upward movement, and interiority in one symbol.

However, also note the differences between the two types of semiosis. Speech componentializes the event: a directed path (“up”) plus the idea of interiority (“thróugh”). This analytic segregation further requires that direction and interiority be concatenated, to obtain the composite meaning of the whole. In contrast, gesture is a synthesis. The whole emerges as one symbol. The semantic elements in speech are simultaneously aspects of this imagery whole. No concatenation is required. Meaning determination moves from whole to parts, not from parts to whole. The effect is a uniquely gestural way of packaging meaning – something like “rising hollowness” in the example. Thus, speech and gesture, co-expressive but non-redundant, represent one event (climbing up inside the pipe) in two forms: analytic/combinatoric and global/synthetic – at the same instant.

This kind of gesticulation is also our focus in the IW case. IW is unquestionably capable of combinations of unlike semiotic modes of these kinds in packaging meanings. It is important, however, to register a distinction within the gesticulation type introduced by IW himself. Some of his gestures, he says, are constructed: planned in advance, launched at will, and controlled in timing and motion throughout – carried out, in other words, exactly as he carries out his practical, world-related movements. His second type he calls “throw-aways” – “ones that just happen. Sometimes I’ll be aware of them because there may be something around me […] but most are just thrown away”. “Throw-aways” are not explicitly planned and monitored, and precisely for this reason are of great interest.

2.3. The binding of speech and gesture

A final point is the binding of gestures and speech when they participate in the formation of cognitive units, a binding so strong that efforts to separate them fail: either speech and gesture remain together or they are jointly interfered with; in either case the speech-gesture bond is unbroken. We expect the same to hold for IW's "throw-away" gestures (his "constructeds", arising from deliberate planning, generally do not show the same binding with speech). The following are experimental examples of tight binding, gleaned independently of IW from the gesture literature:

– Delayed auditory feedback – the experience of hearing your own voice played back after a short delay – produces major speech disturbances but does not interrupt speech-gesture synchrony (McNeill 1992).
– Stuttering and gesture are incompatible. The onset of a gesture inoculates against stuttering and, conversely, the onset of stuttering during a gesture interrupts it instantly (Mayberry and Jaques 2000).
– People blind from birth, who have never seen gestures and have no benefit from experiencing them in others, nonetheless gesture, and do so even to other blind people whom they know to be blind (Iverson and Goldin-Meadow 1997).
– Memory loss interrupts speech and gesture jointly; it is not that gesture is a "gap filler" when memory fails (McNeill 2005).
– Conversely, gestures protect memory from interference (Goldin-Meadow et al. 1993).

The speech-gesture units in these settings are held together by the requirements of idea-unit formation: thought in speech takes place simultaneously in imagery and linguistic form; to think while speaking is to be active in both modes at once. Speech and gesture are thus yoked, because both are essential to this distinctive form of cognition. For a recent statement of the "growth point" hypothesis that explains this double essence of thinking while speaking, see McNeill et al. (2008). We return to the growth point at the end of this chapter.

3. IW's gestures

Thanks to the BBC, IW, Cole, Gallagher, and the University of Chicago researchers could gather at the University for filming in July 1997. We wanted to record IW under a variety of conditions, both with and without vision. IW cannot simply be blindfolded, since he would be unable to orient himself and would be at risk of falling over. Following an idea of Nobuhiro Furuyama, we devised, and had constructed by David Klein, a tray-like blind, pictured in Fig. 161.2, that could be pulled down in front of him, blocking vision of his hands while allowing him space to move and preserving his visual contact with his surroundings. IW was videotaped retelling the animated cartoon described above. He was also recorded under the blind in casual conversation with Jonathan Cole. In 1997, we did not appreciate the importance of testing IW's instrumental actions without vision, but we had an opportunity to test his performance on this kind of task in April 2002, with financial support from the Wellcome Trust in a grant to Jonathan Cole, when IW, Cole, and Gallagher came back for a second visit to the University of Chicago.

Fig. 161.2: IW seated at the blind designed for gesture experiments.

3.1. Significant variables in assessing IW's gesture performance

To have a systematic approach to IW's gestures, we pay specific attention to the following variables:

(i) Timing: synchronization with co-expressive speech.
(ii) Morphokinesis: the shape of the gesture in terms of hand forms and use of space.
(iii) Topokinesis: the location of the hands relative to each other in space, including but not limited to the approach of one hand by the other.
(iv) Character viewpoint (CVPT): the perspective of the character being described; a gesture from the character viewpoint is close to mimicry.
(v) Observer viewpoint (OVPT): the perspective of the narrator or an observer.

With vision, IW's gestures display all the above features (over a sample of gestures). Without vision, they show some but not all: exact timing with speech, morphokinetic accuracy, and observer viewpoint. Topokinetic accuracy and character viewpoint, however, become rare. The loss or reduction of these two particular features implies that his gestures without vision depart from the pathway of world-related action control (regarding character viewpoint as mimicry or action simulation). The preservation of speech-gesture synchrony implies that the system that remains is integrated with speech. The ensemble of preserved and lost features suggests a dedicated thought-language-hand link.
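To make the coding scheme concrete, the five variables can be sketched as a per-gesture annotation record. The structure and the `summarize` helper below are hypothetical illustrations of ours, not the authors' actual transcription format:

```python
from dataclasses import dataclass

@dataclass
class GestureAnnotation:
    """One coded gesture; fields follow the five variables in the text."""
    synchronized_with_speech: bool  # (i) timing with co-expressive speech
    morphokinesis_accurate: bool    # (ii) hand shape and use of space
    topokinesis_accurate: bool      # (iii) hands correctly located relative to each other
    viewpoint: str                  # (iv)/(v): "CVPT" (character) or "OVPT" (observer)

def summarize(sample):
    """Proportion of gestures in a sample preserving each feature."""
    n = len(sample)
    return {
        "timing": sum(g.synchronized_with_speech for g in sample) / n,
        "morphokinetic": sum(g.morphokinesis_accurate for g in sample) / n,
        "topokinetic": sum(g.topokinesis_accurate for g in sample) / n,
        "CVPT": sum(g.viewpoint == "CVPT" for g in sample) / n,
    }

# Invented toy sample mirroring the pattern the text reports for IW without
# vision: timing and morphokinesis preserved, topokinesis and CVPT rare.
no_vision = [
    GestureAnnotation(True, True, False, "OVPT"),
    GestureAnnotation(True, True, False, "OVPT"),
]
```

Running `summarize(no_vision)` on this toy sample yields full preservation of timing and morphokinesis and none of topokinesis or character viewpoint, the profile the text describes.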

3.2. IW's gestures with and without vision (1997)

IW's gestures with vision are similar to those produced by normal speakers, although they are fewer in number and tend to be isolated, performed one by one, in keeping with his self-conscious constructed-gestures strategy. Fig. 161.3 shows a narrative gesture made with vision. IW was describing Sylvester after he had swallowed a bowling ball that Tweety had dropped inside the pipe. Both morphokinesis and topokinesis are indistinguishable from normal. His hand appears to bracket a small figure in the central gesture space and move it downward, wobbling slightly right and left as it went down. The motion is co-expressive with the synchronous speech: [// he // wiggles his way down] (boldface indicates speech accompanying gesture). The only clue that control is other than normal is that IW looks at his hand during the gesture. The viewpoint in this case is that of an observer; elsewhere in his full description of the bowling ball episode, the character viewpoint also occurs (observer viewpoint and character viewpoint refer to the accompanying gestures, not the spoken forms):

observer viewpoint: "tiny little bird" – left hand appears to outline bird (cf. Fig. 161.3)
character viewpoint: "bowling ball" – both hands appear to thrust down on ball
observer viewpoint: "wiggles his way down" – left hand again outlines bird, wiggles
character viewpoint: "places it" – left hand appears to push down ball
observer viewpoint: "gets a strike" – hands move laterally from center space

Fig. 161.3: IW iconic gesture with vision.

Fig. 161.4a, b: IW coordinated two-handed iconic gesture without vision.

Fig. 161.4 illustrates a narrative gesture without vision, a coordinated two-handed tableau in which the left hand is Sylvester and the right hand is a trolley pursuing him. IW was saying, "[and the (a)tram (b)caught him up]" (a and b referring to the first and second panels of the illustration). His right hand moved to the left in exact synchrony with the co-expressive "caught". Moreover, a post-stroke hold extended the stroke image through "him" and "up" (underlining) and thus maintained full synchrony of the meaningful configuration in the stroke with still-unfolding co-expressive speech. It is important to recall that this synchrony and co-expressivity were achieved without proprioceptive or spatial feedback. We thus see in IW, without any feedback, the double semiosis of synchronous gesture and speech.

Fig. 161.4 demonstrates another similarity of IW's "throw-aways" to normal gestures. The gesture is complex: it uses two hands doing different things in relation to each other, the whole imagery depicting a situation in which the entities identified in speech are changing their relationships in time and space. Such complexity contributes to communicative dynamism; that is the case in Fig. 161.4 – the event is the denouement of a buildup and the main discursive point.

3.3. Topokinetic versus morphokinetic accuracy

The gesture in Fig. 161.4 was accurate morphokinetically but not topokinetically: as the right hand approached the left, the two hands did not line up. Fig. 161.5 illustrates another case of topokinetic approximation. IW was describing "a square plank of wood" and sketched a square in the gesture space. The illustration captures the misalignment of his hands as he completed the top of the square and was about to move both hands downward for its sides.

Fig. 161.5: Lack of topokinetic accuracy without vision.

We also asked IW to sketch simple geometric shapes in the air without vision. Morphokinetically, a triangle and a circle were readily created, but topokinetically there was always some disparity (Fig. 161.6a–b show the end positions of a triangle and circle, respectively).

Fig. 161.6a, 6b: IW's misalignment as he outlines a triangle and circle without vision.

For comparison, we also asked undergraduate students at the University of Chicago to sketch geometric figures without vision. Fig. 161.7 is the end point of one such sketch of a triangle. Positioning is exact to the millimeter.

Fig. 161.7: Accurate completion of a triangle, without vision, by a subject with intact proprioception and spatial sense.

3.4. Instrumental actions

Similarly, instrumental actions without vision are difficult for IW. Such actions require topokinetic accuracy. Fig. 161.8 shows two steps in IW's attempt to remove the cap from a thermos bottle. The first is immediately after Jonathan Cole has placed the thermos in IW's right hand and placed his left hand on the cap (IW is strongly left-handed); the second is a second later, when IW has begun to twist the cap off. As can be seen, his left hand has fallen off the cap and is turning in midair. Similar disconnects without vision occurred during other instrumental actions (threading a cloth through a ring, hitting a toy xylophone, etc. – this last being of interest since IW could have used acoustic feedback, or its absence, to know when his hand had drifted off target, but still he could not perform the action).

Fig. 161.8a, b: IW attempts to perform an instrumental action (removing cap from a thermos).


3.5. Significance of the IW results so far

The IW case shows that, without vision, gestures continue to occur with accuracy up to the morphokinetic level and possess the tight binding at points of co-expression with speech that characterizes unaffected gestures – all this without feedback of any kind. An important hypothesis is that a dedicated thought-language-hand brain link underlies combinations of semiotically unlike meaning packages, and that this link can be partially dissociated from the brain circuits involved in world-related actions. IW's use of space is especially informative. Although he has no exact sense of where his hands are, he can align them morphokinetically to create a "triangle", because triangularity affords a direct mapping of a concept into space. Likewise, the meaning of "catching up to" is sufficient to guide the hands into a morphokinetic embodiment of this idea, without an intervening action, real or simulated (cf. discussion in Gallagher 2005). The morphokinetic/topokinetic distinction also explains the near disappearance of character viewpoint without vision. Gestures like "holding it" and "places it", with character viewpoint, resemble Tweety's instrumental actions of holding the bowling ball and placing it. These character-viewpoint gestures have meanings as simulated actions of a kind that require the level of control that, for IW, only visual guidance provides. Possibly for this reason they become difficult when vision is absent.

3.6. IW can control speech and gesture in tandem (1997)

A striking demonstration of the thought-language-hand link is that IW, without vision, can modulate the speed at which he presents meanings in both speech and gesture, and do this in tandem. As his speech slows, his gesture slows too, and to the same extent, so that speech-gesture synchrony is exactly preserved. If what he is forming are cognitive units comprised of co-expressive speech and gesture imagery in synchrony, this joint modulation of speed is explicable: he does it based on his sense (which is available to him) of how long the joint imagery-linguistic cognitive unit remains "alive"; peripheral sensory feedback need not be part of it. During a conversation with Jonathan Cole while still under the blind, IW reduced his speech rate at one point by about one-half (paralinguistic slowing), and speech and gesture remained in synchrony:

Normal: "and I'm startin' t'use m'hands now"
Slow: "because I'm startin' t'get into trying to explain things"

The gestures are of a familiar metaphoric type in which a process is depicted as a rotation in space (possibly derived from ancient mechanisms, perhaps millwheels or clockworks: metaphoric gestures often "freeze-dry" images that now exist only in gesture form; cf. McNeill 1992 for other examples). IW executes the metaphor twice: first at normal speed, then at slow speed. The crucial observation is that the hand rotations are locked to the same landmarks in speech despite the different speeds. IW's hands rotate in phase at normal speed and in opposite phase at slow speed. Nonetheless, if we look at where the hands orbit inward and outward, we find that the rotations at both speeds coincide with the same lexical words, where they exist, and with the same stress peaks throughout. Fig. 161.9 shows the maximum inward and outward hand motions and the coincident speech. Brackets indicate where the linguistic content was identical at the two rates.


Normal speed (bracketed material = 0.56 sec., 5 syllables): "and [I'm startin' t'] use m'hands now"

Slow speed (bracketed material = 0.76 sec., 5 syllables): "'cuz [I'm startin' t'] get into"

"and I'm" / "'cuz I'm": hands move outward, then inward from the position shown.

"startin'" / "startin'": hands again move outward, now starting to move out of phase.

"t' use m-" / "t' get in-": at right, hands rotate out of phase – left hand rotates maximally in, right hand maximally out; this corresponds to both hands maximally in at left, with the hands rotating in phase.

"-y hands now" / "-to try(in')": hands back in phase, both moving outward.

Fig. 161.9: IW changes rate of speech and gesture in tandem, maintaining synchrony. Note that the outward and inward motions of the hands occur at the same points in speech. While the similarities of motion at the two speeds are compellingly obvious in the original video, we are confined here, of course, to still images that are, we acknowledge, not as easily deciphered. But if you look at the images and relate them to the appended comments, you can see the force of the example.
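The durations reported for Fig. 161.9 allow a back-of-the-envelope check on the slowing over the identical bracketed span; a minimal sketch using only the figures given in the text:

```python
# Identical 5-syllable bracketed span ("I'm startin' t'") at the two rates,
# using the durations given for Fig. 161.9.
SYLLABLES = 5
normal_rate = SYLLABLES / 0.56  # syllables per second at normal speed
slow_rate = SYLLABLES / 0.76    # syllables per second at slow speed

# Within this span the slow version runs at roughly three-quarters of the
# normal rate; the "about one-half" reduction mentioned earlier refers to
# the overall paralinguistic slowing, not this bracketed span alone.
ratio = slow_rate / normal_rate
```

On these figures, `ratio` comes out near 0.74, i.e. the bracketed material slowed somewhat less than the utterance as a whole.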

This agreement across speeds shows that whatever controlled the slow-down was exactly the same for speech and gesture. Bennett Bertenthal (pers. comm.) points out a possible mechanism for this tandem reduction. Speech and gesture, slowing together, could reflect the operation of a pacesetter in the brain that survived IW's deafferentation; for example, the hand moves outward with a peak, an association that could be maintained over a range of speeds. The rotating hands were, as noted, metaphors for the idea of a process. The pacesetter accordingly could be activated by the thought-language-hand link and co-opted by a significance other than the action of rotation itself. This metaphoric significance is consistent with the timing, since the hands rotated only while IW was saying "I'm starting to…", and there was actually a cessation of gesture between the first (normal-speed) and second (reduced-speed) rotations as he said "and that's because", indicating that the rotation, and any phonetic linkages it claimed, was specifically organized around presenting the idea of a process as a rotation in space.

3.7. Summary of IW's gestures without vision

The following points summarize what we have seen of IW's gestures in the absence of visual, proprioceptive, or spatial position feedback:

– Gestures have diminished character viewpoint.
– Gestures preserve morphokinetic accuracy and lose topokinetic accuracy.
– Gestures are co-expressive and synchronized with speech.

3.8. Phantom limb gestures

Vilayanur S. Ramachandran and Sandra Blakeslee in "Phantoms in the Brain" (1998) describe Mirabelle, a young woman born without arms. Yet she experiences phantom arms and performs "gestures" with them – nonmoving gestures, but imagery in actional-visual form.

Dr.: "How do you know that you have phantom limbs?"
M.: "Well, because as I'm talking to you, they are gesticulating. They point to objects when I point to things." "When I walk, doctor, my phantom arms don't swing like normal arms, like your arms. They stay frozen on the side like this" (her stumps hanging straight down). "But when I talk, my phantoms gesticulate. In fact, they're moving now as I speak." (Ramachandran and Blakeslee 1998: 41)

Mirabelle's case points to a conclusion similar to IW's: the dissociation of gesture from practical action. In Mirabelle's case, moreover, intentions create the sensation of gestures when no motion is possible. Presumably, again, the same thought-language-hand link is responsible.

3.9. Overall significance of the IW case

The IW case suggests that control of the hands and the relevant motor neurons is possible directly from the thought-linguistic system. Without vision, IW's dissociation of gesture, which remains intact, from instrumental action, which is impaired, implies that the "know-how" of gesture is not the same as the "know-how" of instrumental movement. In terms of brain function, this implies that producing a gesture cannot be accounted for entirely by the circuits for instrumental actions; at some point the gesture enters a circuit of its own, and there it is tied to speech. A likely locus of this dedicated thought-language-hand link is Brodmann areas 44 and 45: Broca's area. The earlier-mentioned paper by McNeill et al. (2008) presents a theory of how this link could have been selected evolutionarily in this brain area (the "Mead's Loop" model in that paper).

4. Discussion: Growth points, material carriers, and inhabitance

To conclude this chapter, we describe the "growth point" (GP) hypothesis mentioned briefly earlier and Vygotsky's concept of a material carrier (in Rieber and Carton 1987); relate these to the concept of inhabitance from Merleau-Ponty (1962), elaborating somewhat on the phenomenology of gesture; and explain the interconnections among all three concepts as they apply to the IW case.

4.1. The growth point

It is beyond doubt that IW, at least in his "throw-aways", is creating what we term growth points. Growth points organize speech and thought. Using Vygotsky's (1987: 4–5) concept of a "minimal unit" with the property of being a whole, a GP is an irreducible, minimal unit of imagery-language code combination. It is the smallest packet of an idea unit encompassing the unlike semiotic modes of imagery and linguistic encoding that we observe when speech and gesture coincide at points of co-expressiveness. A growth point is empirically recoverable, inferred from speech-gesture synchrony and co-expressiveness. It is inferred (not "operationally defined") from

(i) gesture form,
(ii) coincident linguistic segment(s),
(iii) co-expression of the same idea unit, and
(iv) what Vygotsky (1987: 243) termed a "psychological predicate" – the point of newsworthy content that is being differentiated from the immediate context of speaking (of which more below).
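Read conjunctively, the four inference criteria can be sketched as a schematic test. The dictionary fields below are invented stand-ins for illustration, not an established annotation standard:

```python
def infer_growth_point(gesture, speech_segment):
    """Schematic test for growth-point inference, following criteria (i)-(iv).

    All dictionary keys here are illustrative stand-ins of our own devising,
    not an established annotation standard.
    """
    return bool(
        gesture["form"] is not None                                    # (i) gesture form
        and gesture["stroke_interval"] == speech_segment["interval"]   # (ii) coincident segment(s)
        and gesture["idea_unit"] == speech_segment["idea_unit"]        # (iii) same idea unit
        and speech_segment["is_psychological_predicate"]               # (iv) newsworthy content
    )

# The "up through" example of Fig. 161.1: ascent and interiority co-expressed.
# Timings are invented for illustration.
gesture = {"form": "rising hand, fingers spreading to an interior space",
           "stroke_interval": (1.2, 1.6),
           "idea_unit": "ascent inside the pipe"}
speech = {"interval": (1.2, 1.6),
          "idea_unit": "ascent inside the pipe",
          "is_psychological_predicate": True}
```

With these (invented) values the test passes; remove the synchrony, the shared idea unit, or the psychological-predicate status and it fails, mirroring the conjunctive character of the inference.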

The temporal and semantic synchronies represented in Fig. 161.1, and shown dramatically by IW when he reduced speed in speech and gesture in tandem, imply a growth point in which imagery and linguistic information are jointly present, so that one does not occur without the other. In Fig. 161.1 we infer the simultaneous presence of the idea of ascent inside the pipe in the two unlike semiotic modes. Even when the information ("semantic content") in speech and gesture is similar, it is formed according to contrasting semiotic modes.

The growth point is so named because it is a distillation of a growth process – an ontogenetic-like process, but vastly sped up and made functional in online thinking-for-speaking. According to this framework, it is the initial unit of thinking-for-speaking (Slobin 1987) out of which a dynamic process of utterance-level and discourse-level organization emerges. Imagery and spoken form are mutually influencing. It is not that imagery is the input to spoken form, or spoken form the input to imagery; the growth point is fundamentally both. The existence of simultaneous unlike modes creates instability: an idea in two contending forms at once. This instability is nonetheless an essential part of the growth point and its role in speaking and thought – it drives thinking-for-speaking to seek resolution (McNeill and Duncan 2000). Stability comes from "unpacking" the growth point into grammatical structures (or viable approximations thereto), with usually further meanings actualized. A surface linguistic form emerges that cradles the growth point in stable and compatible form. This role of grammar – unpacking and supplying "stop-orders" for the changes initiated by imagery-linguistic code instability – is an important clue to how speech in discourse is produced (see McNeill 2005 for detailed discussion).
The reasons why a synchronous semiotic opposition of co-expressive gesture and speech creates instability and initiates change include: (i) conflict (between semiotic modes: analog imagery versus analytic categories), and (ii) resolution (through change: fueling thinking-for-speaking, seeking stability). Simultaneous semiotic modes thus comprise an inherently dynamic psycholinguistic model. In Fig. 161.1, the locution "up thróugh" is analytic: up-ness and interiority are separated. The words also have syntagmatic values acquired from combinations within and beyond the phrase. The gestural image embodies the same information in the form of "Sylvester as a rising hollowness", but without analysis or combinatoric value. Unpacking resolves the tension by placing both components, linguistic and gestural, into a finished syntactic package that does not violate the image, realizes the syntagmatic potential of the linguistic side, and includes the production of further content ("he goes up through it this time", including the metanarrative indexical "this time", which relates the event being described to a previous one).

A final point is that we can fully understand what motivates any image-speech combination only with reference to how a growth point relates to its context of occurrence. The growth point-to-context relationship is mutually constitutive. The growth point is a "psychological predicate" – the point of differentiation from this context – and the speaker represents the context in such a way that this differentiation becomes possible. A robust phenomenon concerning gesture is that the form and timing of gestures select just those features that differentiate the psychological predicate in a context that is at least partly the speaker's own creation (see McNeill 2005: 108–112). We observe all these hallmarks of growth points, including this correlation, in IW's speech and gesture. The "caught him up" gesture, for example, was a psychological predicate that embodied newsworthy content in a context established by the preceding narrative discourse: Sylvester on overhead wires, running to escape a pursuing trolley. The gesture depicted the pursuit and overtaking by the trolley and was exactly synchronous with the linguistic segments "caught him up". The growth point as inferred is this combination of semiotic modes for the idea of Sylvester being overtaken. The unpacking into "and the tram caught him up" settles it into a stable syntactic package (the next element in IW's tale describes how he was then shocked – another growth point, with its instability to be followed by stability through unpacking).

4.2. Material carriers

We get a deeper understanding of such an imagery-language dialectic by introducing the concept of a "material carrier". The concept clarifies why IW, despite his careful attention to movement up to and including the construction of some gestures, performs, without meaning to, unattended "throw-aways". A material carrier – as Elena Levy pointed out to us, the phrase was used by Vygotsky ([1934] 1987) – is the embodiment of meaning in a concrete enactment or material experience. A material carrier appears to enhance the symbolization's representational power. The concept implies that the gesture, the actual motion of the gesture itself, is a dimension of meaning. Such enhancement is possible if the gesture is the very image: not an "expression" or "representation" of it, but the image itself. From this viewpoint, a gesture is an image in its most developed – that is, most materially, naturally embodied – form. The absence of a gesture is the converse, an image in its least material form. The material-carrier concept thus helps explain why sometimes there is no gesture. Of course, gestures may be suppressed in certain fraught situations, but if gestures are occurring in general, then when no gesture occurs we see the lowest level of materialization. We describe here a theoretical model of how materialization has this effect on representational power, and of when gestures do and do not occur with speech (cf. Goldin-Meadow 2003).

A striking illustration of the material carrier is what Cornelia Müller (2008) terms the "waking" of "sleeping metaphors" – new life given to inactive metaphors, in which gesture brings a metaphor's original source back to awareness. Müller views the metaphor dynamically, as a process by which the speaker and her listener generate metaphoricity in the context of the speech event; clearly a conception germane to the position of this book.
The activation of the metaphor, and the semiotic impact of the sparking image, is variable, dependent upon the speaker's thought processes and the context of speaking. The gesture, as a material carrier, is an active component of this process. Müller gives an example of a German metaphor (gefunkt, 'sparked', the equivalent of English 'clicked', for suddenly falling in love). The expression is usually hackneyed and not apprehended as a metaphor. However, it can be awakened by a gesture. A speaker, describing her first love, said "between us somehow it sparked ['clicked']" (Müller's translation). As she said "between us" her hand rose upward next to her face in a ring shape but with an unusual orientation – the fingers pointing at her own face; then, as she uttered the metaphor itself, gefunkt, her hand abruptly turned outward – her gesture materializing the "dead" metaphor as a sudden event, an electrical spark.

IW shows the reality of materialization in yet another form. At one point in the 2002 experiment, Jonathan Cole demonstrated, as IW watched, an object-directed transitive action (removing the cap from the thermos); IW then imitated the action. While he could not perform the action himself without vision (Fig. 161.8), we were interested in seeing whether he could imitate it under conditions where topokinetic accuracy was not a factor, and indeed he could. What was unexpected is that IW spontaneously spoke as he imitated the cap removal, describing his movements as he performed them. This was a fully spontaneous and unanticipated performance, not something we suggested, even though, of course, a spontaneous sprouting of speech is what the growth point hypothesis implies – the two forms of materialization co-occurring.

The inverse experiment happened equally accidentally in a separate study of IW by Bennett Bertenthal (pers. comm.). Here, too, imitation was the task (IW was shown a video, without sound, of other people's gestures and asked to imitate them). As before, he spontaneously began to speak. The experimental assistant asked him not to speak, as that was not part of the experimental protocol. IW complied and – the important observation for material-carrier purposes – his imitations then, in many cases, simplified and shrank in size.
Whereas with speech they had been large, complex, and executed in the space in front of his body (he was not under the blind), without it they were simple, miniaturized, and confined to the space at his lap. This was so even though imitation of other people's gestures was his target and he had vision of his hands. These effects are impressive indications that the two materializations, speech and gesture, co-occur, support, and feed one another, and that when one goes awry or missing the other tends to follow.

4.3. Phenomenology and the scientific study of gesture

The entire conception of speech and gesture moves to a new level when we draw on the work of Maurice Merleau-Ponty (1962) for insight into the thought-language-hand link and the temporal alignment of speech, gesture, and significance into growth points. First, however, we have to elucidate the situation of present-day gesture studies with respect to the notoriously difficult relationship between phenomenology and (cognitive) science. Merleau-Ponty, for one, makes a specific distinction between his philosophy of embodiment and the empirical-scientific approach to the role of the body in language use and cognition in general. Empirical conceptions tend to focus on the body-as-object and describe embodied language use in terms of its objective features, such as the speech sounds uttered, the specific gestures made, or observed patterns of neurological activity. In a process of two or more steps, the speaker – or rather her cognitive system – embodies some pre-existing meaning (a "thought") through the realization of complex combinations of different kinds of material carriers (such as the verbal, the manual, the facial, and the postural modality), and thus linguistic meaning is "externalized". In this approach, the body in language use functions as a machine that can talk, a machine that can "translate" a private and undisclosed thought into the conventionalized medium of material carriers. This kind of mechanistic communicative theory naturally follows from a framework that describes the linguistic event solely from an onlooker's point of view. The empirical scientist takes an observational stance vis-à-vis the object of her investigation, i.e. people involved in a conversation over there, and she relies on inference in order to discover what goes on when people talk.

Merleau-Ponty, on the other hand, stresses the importance of acknowledging the participants' personal involvement in bringing meaning to life while they are caught up in the act of perception, action, and intersubjective communication. From this perspective, we do not have the sensation that the speaker's expressive body mediates between her thoughts and the listener's cognitive capacities; on the contrary, we experience direct access to each other's intentions. Embodied meaning makes immediate sense from the perspective of the speaker and the listener. In fact, in this account, meaning coming into existence, its bodily expression, and, in a sense, even meaning reception are one and the same thing and happen in one and the same process, in the dynamic unfolding of the interaction. In contrast with empirical conceptions, here the speaking subject does not provide her thoughts with a material carrier, nor does the listening subject infer meaning from her objective perception of someone's expressive bodily movements. Rather, speaker and listener are engaged in a process of "participatory sense-making" in which the interaction becomes primary and generates meaning (De Jaegher and Di Paolo 2007; Gallagher 2009). Phenomenological embodiment of linguistic meaning is fundamental, an a priori fact: the mental (a "thought", or "intentional content") and the physiological (its material carrier) are co-emergent.
The emergence of meaning and its bodily expression can therefore be said to constitute two aspects of one and the same phenomenon, viz. the speaker's bodily existence in an intersubjective and meaningful world, in which she participates instead of considering it from an outsider's perspective, and to which she is hence fully attuned. In the next paragraphs we will discuss some implications of this phenomenological framework for any theoretical account of gesture in general and for the case of IW in particular. In discussing IW and multimodal expression, we use his distinction between constructed and throw-away gestures. When it comes to practical action, too, IW seems to capture well the distinction between an observational and a lived perspective, for example by calling his gait "controlled falling". As it happens, Merleau-Ponty in his Structure of Behavior (2006: 8) mentions that according to rationalism "walking is reducible to a succession of recovered falls". The similarity of the two descriptions is not a coincidence. IW's illness has forced him into trading the lived perspective for the observational one, a phenomenological reality for a rationalist one (or at least so for practical actions). Merleau-Ponty's statement about walking being reducible to a succession of recovered falls reflects the one-sidedness of the perceptive/active event as it is framed by rationalism. The subject is burdened with controlling everything, while the world and the objects in it remain passive, unable to guide action. A more embodied account of course ascribes affordances to perceived objects: the ability to guide the subject's actions by virtue of the features they have. The ground affords walking because it "pushes back"; it is flat, stable, and solid. We, our bodies, learn how to walk in relation to the ground, by anticipating its features, and by being reassured in our anticipations.
In rationalism, on the other hand, every step is like a first step because our bodies cannot anticipate that the ground will push back. IW likewise cannot anticipate that the ground will push back, so that his active relation to the world is fully uni-directional: Every aspect of his actions is controlled by himself (interestingly, medical training emphasizes this rationalistic, "fully uni-directional" description as well). With regard to these theoretical antipoles – the empirical-scientific third-person, observational perspective and the phenomenological participatory perspective – where should we locate an approach to gesture that propounds a thought-language-hand link to account for the synchronization of what Duncan (2006) has called the "three rhythmic pulses": speech, gesture, and significance? Lived experience, despite its importance for the understanding of multimodal co-expressivity, by definition cannot be exhaustively described from an observational point of view, yet taking a third-person stance is exactly one of the defining traits of the scientific métier – and thus also of a science of gesture. Language use necessarily precedes doing linguistics, and the unmediated way in which the speaker and her listener grasp the integrated communicative event can, after the fact, never be paralleled by listing the objective features of that event. Only the linguistic subject, because of her participatory and actively engaged stance and her ability to generate meaning in a simultaneously mental, verbal, and embodied-gestural way, can experience the richness this meaning derives from being "lived". The scientific study of the thought-language-hand link, which reflects the power of co-expressiveness, offers a way of grasping this phenomenon – one that exclusively objective descriptions of the role of the body in language use have traditionally had difficulty grasping.
Most empirical-scientific conceptions mistakenly infer – at least implicitly (and after the fact of speaking itself) – that because a communicative event can be divided up into different aspects by the linguistic scientist, a cognitive system necessarily must also process these aspects one by one (and therefore consecutively) before finding ways of integrating them into a coherent interpretation. Because the thought-language-hand link by definition both distinguishes and equates the three pulses, thus fulfilling both scientific and phenomenological aspirations, it enables us to operationalize some aspects of the philosophical concept of the body-as-subject and is capable of inspiring empirical, experimental research. For a first investigation into the philosophical significance of gesture, we may turn to Merleau-Ponty's Phenomenology of Perception (1962) for insight into the duality of gesture and language and the ontological status of the growth point – its multifaceted cognitive or perceptive way of being. Gesture, the instantaneous, global, nonconventional component, is "not an external accompaniment" of speech, which is the sequential, analytic, combinatoric component; it is not a "representation" of meaning, but instead meaning "inhabits" it. Quoting Gelb and Goldstein (1925: 158) as an instance of the mistaken view, Merleau-Ponty wrote:

The link between the word and its living meaning is not an external link of association; the meaning inhabits the word, and language 'is not an external accompaniment to intellectual processes'. We are therefore led to recognize a gestural or existential significance to speech […] Language certainly has inner content, but this is not self-subsistent and self-conscious thought. What then does language express, if it does not express thoughts? It presents or rather it is the subject's taking up of a position in the world of his meanings.
(Merleau-Ponty 1962: 193; we are indebted to Jan Arnold for this quotation)

The growth point is a mechanism geared to this "existential content" of speech – this "taking up a position in the world". Gesture, as part of the growth point, is inhabited by the same "living meaning" that inhabits the word (and, beyond it, the discourse). A deeper answer to the query, therefore – when we see a gesture, what are we seeing? – is that we see part of the speaker's current cognitive being, her very mental existence, at the moment it occurs. The gesture is not an external representation of a person's inner thought; it is not simply the embodied expression of that thought but rather is inseparable from it and facilitatory to the thought's genesis and unfolding. This applies equally to all speakers, IW included. By performing the gesture, an idea is brought into concrete existence and becomes part of the speaker's own cognitive bodily existence at that moment. IW and the similar subjects GL and CF all made a conscious decision to learn gesture, initially under conscious control, in order to appear normal and to express themselves to others completely. Similarly, patients with spinal cord injury and paralysis of the hands – and even arms – will still gesture with shoulder and head as compensation. Though unaware of Merleau-Ponty's analysis, they are aware of the deep need to express through the body. Following Heidegger's removal of the modernist oppositions between subject and object, language and outside world, Merleau-Ponty's account states that a gesture is not a representation, or is not only such: It is a form of being. Gestures (and words, etc., as well) are themselves thinking in one of its many forms – not only expressions of thought, but thought, i.e., cognitive being, itself. To the speaker, gesture and speech are not only "messages" or communications, but a way of cognitively existing, of cognitively being, at the moment of speaking. The speaker who creates a gesture of Sylvester rising up fused with the pipe's hollowness is, according to this interpretation, embodying thought in gesture and enacting meaning; and this action – thought in action – is part of the person's cognitive being at that moment.
Likewise the woman who gestured a sudden transformation with gefunkt, and IW in his rotating metaphor of the "getting into" process that he was undergoing. To make a gesture, from this perspective, is to bring thought into existence on a concrete plane, just as writing out a word can have a similar effect. The greater the felt departure of the thought from the immediate context, the more likely is its materialization in a gesture, because of this contribution to being. Thus, gestures are more or less elaborated depending on the importance of material realization to the existence of the thought. We observe the same elaboration of gesture in proportion to the importance of materialization in IW as well, and this is the final step in demonstrating the utter normality of his gestures of the "throw-away" type.

Our second phenomenological excursion into the nature of speech and gesture concerns the notion of "co-expressive non-redundancy", which was used to signify the convergence of two different modes of semiosis, the analytic/combinatoric verbal mode and the global/synthetic gestural mode, to represent one event (Sylvester climbing up inside the pipe) at the same time. An investigation of the concept of 'co-expressiveness' will shed light on how to interpret its non-redundancy. How is speech-gesture synchrony attained? Because the use of language and gesture is the speaker's taking up of a position in the world – is the speaker's way of cognitively being – the perfect synchrony of the different aspects of the speaker's expressive bodily behavior becomes self-evident. This is Gallagher's point with respect to IW when he states that the timing of his gestures vis-à-vis his speech acts remains intact because "[t]he co-expressiveness of the two modes (gesture and speech) contribute to their synchronization" (2005: 113). As scientists we notice how well speech and gesture are attuned and how they break down together, but this is because we, with our tendency to chop up the world into building blocks, implicitly first take the two linguistic modes as belonging to different systems, as having a life of their own, and then wonder how synchrony might be attained. In a framework, however, which takes cognitive being and the bodily expression of linguistic meaning to be one and the same, co-expressiveness becomes equal to bodily expressiveness in general. As we said, embodiment in the phenomenological sense is an a priori fact, and from this it naturally follows that the co-expressiveness of speech and gesture is a necessary given. Linguistic multimodality is one of the origins of meaning itself, and therefore the different modes are co-expressive.

What does this tell us about the "non-redundancy" of this co-expressiveness? The very appearance of the concept of "redundancy" in a discussion of linguistic multimodality belongs to a minimalistic framework in which the verbal is seen as the fundamental carrier of linguistic meaning and gesture as an additional mode (an "external accompaniment" of speech). When we ask the question "Why do we gesture?" we picture a still body which in the first place is capable of verbally expressing itself and which, in a linguistic event, may opt to add gesture. Instead, if we take our active embodied existence as a given, we could also ask the question "Why wouldn't we gesture?" and picture a body engaged in the world, for which it is only natural to use its full capacities of expression. In this sense, what was "redundant" not only becomes "non-redundant", but even "obligatory": Using all of your body to convey linguistic meaning is standard practice. As this is an article on IW and gesture, we have focused on the manual modality. However, Merleau-Ponty's use of the term le geste cannot be unequivocally translated as "gesture". Le geste refers to any aspect of the body deployed to convey meaning.
But of course, because Merleau-Ponty's phenomenology of language is one about bodily expression in general, anything said there holds for the manual modality also. Historically, manual gestures have been the principal focus of observation (there may be evolutionary reasons to expect the hands to be primary), but studies, especially recent ones, have included the head (McClave et al. 2008), gaze (McNeill et al. 2010), and vocal gestures (Okrent 2002) within a single framework of semiosis. These can be powerfully unified with the conception that linguistic meaning is obligatorily conveyed with all of the body in unison (and that it is the suppression of elements that is exceptional). Susan Goldin-Meadow (1999) has found that the use of gesture reduces the cognitive burden on the part of the hearer as well as on the part of the speaker, and as such a combination of speech and gesture makes the intended meaning more easily understandable (instead of soliciting the heightened cognitive activity which we would expect from an increase of contextual information). Our phenomenological framework can easily accommodate these findings: if we describe linguistic action in terms of a speaking subject engaged in sense-making, the creation of meaning, then using more co-expressive modalities will bring more of the same meaning about, and for a listener it will be harder not to get what is expressed, as all bodily signs point in the same semiotic direction.

To end this section we will apply phenomenological philosophy to better understand the distinction between IW's "throw-aways" and his "constructeds". Recall that his constructeds were fewer in number, were isolated, and were performed one by one and in a self-conscious manner. His "throw-aways", on the other hand, he produces with ease, though with some topokinetic problems. In a sense, by making this distinction, IW summarizes the whole point about the impossibility for third-person empirical-scientific approaches to fully capture the nature of gesture. When IW is unaware of his perfectly synchronized gesturing (when he is producing what he calls "throw-aways"), he is immersed in the first-person point of view and he engages his whole body-as-subject to convey his intentions. He bodily enacts his cognitive being at that time. However, when he is constructing his "constructeds", he takes a third-person, detached, and external stance towards his own performance. He consciously divides up his utterances and hand movements by objectifying their features and then tries to attain synchrony. He takes a meta-cognitive stance (trying to control his hand movements) which clashes with and disrupts what he is trying to express with his hands (whatever the conversation is about). Co-expressiveness of gesture and speech breaks down – and so does synchrony, and to some degree so does the sense-making process.

The term "objective" has two senses. First, it means viewing things from the outside, in third-person perspective – looking at something as object. But it can also mean objective in the sense of scientifically valid as opposed to merely subjectively biased. The phenomenological perspective is contrasted with the first meaning – that is, it takes a first-person perspective, from the view of the subject. But this does not mean that it has no objective validity or that it delivers biased (non-objective) knowledge. Phenomenological reflection can be methodical and controlled, and can provide objectively valid knowledge about subjectivity.

5. To sum up

To sum up this article we can ask: Does IW show growth points; do his gestures act as material carriers; and do his meanings, in Merleau-Pontian fashion, inhabit them? IW's own distinction between "constructed" and "throw-away" gestures is critical at this point. His "throw-aways" are indistinguishable from the gestures of unaffected speakers. That is, they comprise growth points with simultaneously encoded co-expressive linguistic content, to jointly differentiate what is newsworthy in context; they offer the benefits of material carrierhood; and they are inhabited by positions in his world of meanings. IW's very lack of awareness of them suggests this status. Unawareness is to be expected of positions in the world of meanings, and in this respect gestures are no different from most spoken words, of which, qua words, we are also usually unaware as we use them. The occurrence of this complex of processes in IW, despite deafferentation and his reworking of motion and control, suggests the existence of a thought-language-hand link in the human brain, an inheritance for us all, that survived his neuronopathy.

Notes

Computer art from video by Fey Parrill, Ph.D. Except for Fig. 161.9, all illustrations are from McNeill (2005), "Gesture and Thought" (University of Chicago Press), and are used with permission.

6. References

Cole, Jonathan 1995. Pride and a Daily Marathon. Cambridge, MA: Massachusetts Institute of Technology Press.
De Jaegher, Hanneke and Ezequiel Di Paolo 2007. Participatory sense-making: An enactive approach to social cognition. Phenomenology and the Cognitive Sciences 6(4): 485–507.


Duncan, Susan D. 2006. Co-expressivity of speech and gesture: Manner of motion in Spanish, English, and Chinese. In: Proceedings of the 27th Berkeley Linguistics Society Annual Meeting, 353–370. Berkeley, CA: Berkeley Linguistics Society.
Firbas, Jan 1971. On the concept of communicative dynamism in the theory of functional sentence perspective. Philologica Pragensia 8: 135–144.
Gallagher, Shaun 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.
Gallagher, Shaun 2009. Two problems of intersubjectivity. Journal of Consciousness Studies 16(6–8): 289–308.
Gelb, Adhémar and Kurt Goldstein 1925. Über Farbennamenamnesie. Psychologische Forschung 6(1): 127–186.
Goldin-Meadow, Susan 1999. The role of gesture in communication and thinking. Trends in Cognitive Sciences 3(11): 419–429.
Goldin-Meadow, Susan 2003. Hearing Gesture: How Our Hands Help Us Think. Cambridge, MA: Harvard University Press.
Goldin-Meadow, Susan, Howard Nusbaum, Philip Garber and Ruth Breckinridge Church 1993. Transitions in learning: Evidence for simultaneously activated hypotheses. Journal of Experimental Psychology: Human Perception and Performance 19(1): 92–107.
Iverson, Jana M. and Susan Goldin-Meadow 1997. What's communication got to do with it? Gesture in congenitally blind children. Developmental Psychology 33(3): 453–467.
Kendon, Adam 1988. How gestures can become like words. In: Fernando Poyatos (ed.), Cross-Cultural Perspectives in Nonverbal Communication, 131–141. Toronto: Hogrefe.
Mayberry, Rachel and Joselynne Jaques 2000. Gesture production during stuttered speech: Insights into the nature of gesture-speech integration. In: David McNeill (ed.), Language and Gesture, 199–214. Cambridge: Cambridge University Press.
McClave, Evelyn, Helen Kim, Rita Tamer and Milo Mileff 2008. Linguistic movements of the head in Arabic, Bulgarian, Korean, and African American Vernacular English. Gesture 7(3): 343–390.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
McNeill, David and Susan D. Duncan 2000. Growth points in thinking for speaking. In: David McNeill (ed.), Language and Gesture, 141–161. Cambridge: Cambridge University Press.
McNeill, David, Susan D. Duncan, Jonathan Cole, Shaun Gallagher and Bennett Bertenthal 2008. Growth points from the very beginning. Interaction Studies 9(1): 117–132.
McNeill, David, Susan D. Duncan, Amy Franklin, Irene Kimbara, Fey Parrill and Haleema Welji 2010. Mind-merging. In: Ezequiel Morsella (ed.), Expressing Oneself/Expressing One's Self: Communication, Language, Cognition, and Identity: A Festschrift in Honor of Robert M. Krauss, 143–165. London: Taylor and Francis.
Merleau-Ponty, Maurice 1962. Phenomenology of Perception. London: Routledge.
Merleau-Ponty, Maurice 2006. The Structure of Behavior. Pittsburgh: Duquesne University Press.
Müller, Cornelia 2008. Metaphors Dead and Alive, Sleeping and Waking: A Dynamic View. Chicago: University of Chicago Press.
Nobe, Shuichi 2000. Where do most spontaneous representational gestures actually occur with respect to speech? In: David McNeill (ed.), Language and Gesture, 186–198. Cambridge: Cambridge University Press.
Okrent, Arika 2002. A modality-free notion of gesture and how it can help us with the morpheme vs. gesture question in sign language linguistics. In: Richard P. Meier, Kearsy Cormier and David Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Language, 175–198. Cambridge: Cambridge University Press.
Ramachandran, Vilayanur S. and Sandra Blakeslee 1998. Phantoms in the Brain: Probing the Mysteries of the Human Mind. New York: William Morrow.


Rieber, Robert W. and Aaron S. Carton (eds.) 1987. The Collected Works of L.S. Vygotsky. Volume 1: Problems of General Psychology. New York: Plenum.
Slobin, Dan I. 1987. Thinking for speaking. In: Jon Aske, Natasha Beery, Laura Michaelis and Hana Filip (eds.), Proceedings of the Thirteenth Annual Meeting of the Berkeley Linguistics Society, 435–445. Berkeley: Berkeley Linguistics Society.
Vygotsky, Lev S. 1987. Thought and Language. Cambridge, MA: Massachusetts Institute of Technology Press. First published 1934.

Liesbet Quaeghebeur, Antwerpen (Belgium) Susan Duncan, Chicago (USA) Shaun Gallagher, Memphis (USA) Jonathan Cole, Bournemouth (UK) David McNeill, Chicago (USA)

162. Embodying audio-visual media: Concepts and transdisciplinary perspectives

1. Audio-visual media, embodiment, transdisciplinarity: On gaps and bridges
2. Embodiment in film theory: Audio-visual media and inter-affectivity
3. Bodily aspects of film perception: Experimental findings
4. Audio-visual media and the quest for embodiment: Transdisciplinary perspectives
5. References

Abstract

The role of the human body in film perception has been the subject of several publications in film and media studies recently. Inspired by different theoretical perspectives, the respective works altogether show a common interest in taking the spectator's body as an entity that enables scholars to address the complex interrelations of perception, thought, and feeling. At the same time, audio-visual media increasingly attract attention in the fields of experimental psychology and the neurosciences. The article provides an overview regarding approaches to the embodied perception of audio-visual media in different academic fields. It aims at identifying common questions and perspectives – and concludes that research on the embodied perception of movement patterns in audio-visual media can offer exemplary insights regarding these questions.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2048–2061

1. Audio-visual media, embodiment, transdisciplinarity: On gaps and bridges

With cultural, literary, and art studies generally turning more and more towards the human body as a main reference during the last decades of the 20th century, an increasing interest in concepts related to our experience of being-in-the-world, the complex


implications of embodied existence, became noticeable in the field of film and media studies as well (Grodal 2009; Marks 2000; Plantinga 2009; Shaviro 1993; Sobchack 1992; Williams 1991). Altogether, the respective works show – from a film-studies point of view – a broad spectrum of influences, including psychoanalytical, feminist, phenomenological, or cognitive film theory. From an outside perspective though, these works have something important in common: They all share the aspect of highlighting philosophical traditions of thought whose influence was and still is rather restricted to the domain of the humanities. Whether they focus on the specific bodily experience of subjectivity in film viewing (Sobchack 1992), the distinctive corporal temporality of certain film genres (Williams 1991), haptic aspects of audio-visual perception (Marks 2000), or a causal process model of different aspects of film reception (Grodal 2009) – all of these approaches to embodiment share the intention of relating screen images to embodied processes by linking compositional features of audio-visual images to sentiments and thoughts that are assumed to be experienced during the perceptive act; and they all conceptualize this by applying means of hermeneutic and phenomenological traditions of thought to the analysis of audio-visual aesthetics. In doing so, the discourse on embodiment within the field of film and media studies relates embodied perception – the bodily implications of seeing and hearing movies – to temporal dynamics of feeling and understanding. At the same time, a discussion on embodiment has formed in the field of cognitive science and the neurosciences. Against the background of linguistic models on human cognition that have dominated the cognitive sciences for decades (Chomsky 1968), the respective works claim that bodily (Clark and Chalmers 1998), perceptual (Barsalou 1999), or sensory-motor (Moseley et al.
2012; Pulvermüller, Shtyrov, and Ilmoniemi 2005) experiences influence, shape, or ground cognitive processes. Covering a broad range of theses – from the situational, environmental, and temporal framing of cognition to conceptions of abstract thought as body-based (Wilson 2002) – the respective field within cognitive science focuses on functional models of human cognition, often showing a particular interest in the human brain. At first glance, the two discourses do not seem to have much more in common than being filed under the broad term embodiment: Film studies conceive the embodied act of viewing audio-visual media as a specific spatio-temporal experience that literally incorporates perceptive, affective, and cognitive dimensions, while the cognitive sciences attach the bodily relation of perceptive, affective, and cognitive processes to the concept of multimodal (i.e., integrating multiple senses) networks; film studies locate the human body at the heart of a theory of subjective experiences of sensibility, while in the domain of the cognitive sciences the notion of the human body often merely refers to theoretical positions and experimental results hinting that brain regions not classically associated with cognition (e.g., regions engaged in perception, introspection, or motor activity) are involved in cognitive processes. Nevertheless, both lines of research show a common interest: They try to gain insight into the intertwining of perception, affect, and cognition. This article argues that both discourses meet in contemplating the interrelation of movement perception, affective experience, and the bodily bases of meaning constitution. Taking the linkage of movement and affect as a starting point, it aims at sketching out theoretical vanishing points of a transdisciplinary perspective on the embodied experience of audio-visual media, arguing that audio-visual movement patterns shape a specific intersubjective dimension of experiencing film and other audio-visual media. It draws on the concept of cinematic expressive movement (Kappelhoff 2004; Kappelhoff and Bakels 2011) as a theoretical perspective on experimental findings regarding the (neuro-)physiological aspects of film viewing and as a starting point in developing a transdisciplinary perspective on the embodied perception of audio-visual movement patterns. Finally, it hopes to demonstrate in which ways such a perspective on embodying audio-visual media may contribute to the transdisciplinary enterprise of examining the interrelation of body, language, and communication.

2. Embodiment in film theory: Audio-visual media and inter-affectivity

The discussion on embodiment within film and media studies started out against the background of widespread psychoanalytical readings of film (Mulvey 1975) and approaches that conceptualize film viewing as a cognitive process aimed at acquiring information on narrative plot issues (Bordwell 1985). Stating that dominant hermeneutical approaches to film had lost touch with what drives people to see movies (Shaviro 1993), the respective works turned to the phenomenological concept of film experience (Sobchack 1992), which virtually served as a theoretical counterweight. In highlighting the concept of experience (Merleau-Ponty [1945] 2005), the turn to embodiment in film studies marks a significant shift in conceiving the relationship between spectator and film: While psychoanalytical film theory focuses on cinema's illusive potential as a link to the imaginary – with the dream state as a core reference (Metz [1975] 1992) – and cognitive film theory explores schematic analogies between film style and basic principles of human cognition (Bordwell 1989, 1997; Grodal 1997), the phenomenological approach instead considers film experience to be shaped by bodily sensations (Marks 2000) and physical reactions (Williams 1991), thereby focussing on how spectators in the cinema sense and make sense. Unlike in approaches to the spectator's emotion in cognitive film theory (Grodal 2009; Plantinga 2009; Smith 1995; Tan 1996), within phenomenological film theory these bodily sensations are not considered an effect of cognitive understanding – e.g., the comprehension of plot or character constellations in the case of film viewing. As Vivian Sobchack points out, the phenomenological approach considers embodied experience the basis of an intertwined act of perception, affect, and meaning constitution in film: "[T]he film experience is a system of communication based on bodily perception as a vehicle of conscious expression.
It entails the visible, audible, kinetic aspects of sensible experience to make sense visibly, audibly, and haptically” (Sobchack 1992: 9). Following Sobchack, the spectator’s experience of feeling is not subsequent to the cognitive understanding of character intentions or plot constellations; it rather turns out to be accounted for by the perception of cinematic movement itself. In her understanding, the link between cinematic expressivity on the screen and somato-sensual perception on the side of the spectator lies in the gaze of the camera. This gaze always entails a two-fold subjectivity, with the spectator lending his perceptive, subjective, and corporal being-in-the-world to what becomes visible on screen: the perceptive act of the film (see also Schmitt and Greifenstein this volume; Voss 2011). The spectator relates on a bodily level with a kind of subjectivity that is not present on screen, but present through the screen itself: the camera gaze, inviting the spectator to see with somebody else’s eyes, to


hear with somebody else's ears (Sobchack 1992: 128–142). In doing so, the spectator is physically engaged in the tactile implications of cinematic movement – while at the same time always remaining aware of the perceptual presence of another subject in the cinematic gaze (Sobchack 1992: 138). From this point of view, the embodiment of cinematic movement turns into the theoretical anchor of intersubjectivity in a double sense: On the one hand, intersubjectivity is no longer a theoretical perspective on the interrelation of film characters or the linkage between the spectator and these characters. Instead, it describes the relation between the human subject in the audience and the perception of another subjectivity, expressed by the audio-visual cinematic gaze. On the other hand, it denotes the intersubjective dimension of film with regard to a diverse audience; the embodied perception of cinematic movement is considered an intersubjective dimension of film viewing that entails experiences of feeling as well as dynamic meaning constitution:

In a search for rules and principles governing cinematic expression, most of the descriptions and reflections of classical and contemporary film theory have not fully addressed the cinema as life expressing life, as experience expressing experience. Nor have they explored the mutual possession of this experience of perception and its expression by filmmaker, film, and spectator – all spectators viewing, engaged as participants in dynamically and directionally reversible acts that reflexively and reflectively constitute the perception of expression and the expression of perception. Indeed, it is this mutual capacity for and possession of experience through common structures of embodied existence, through similar modes of being-in-the-world, that provide the intersubjective basis of objective cinematic communication. (Sobchack 1992: 5)

In pointing out the role of the spectator’s body with regard to the specific intersubjectivity of cinematic expression, Sobchack’s approach to embodiment offers a renewed perspective on a film theoretical concept that has addressed the linkage of movement perception and the feelings of the film spectator since classical film theory: the concept of expressive movement. The idea that affective experience in drama and cinema is intimately tied to the temporal unfolding of expressive movement has been examined systematically and historically by Hermann Kappelhoff (Greifenstein and Kappelhoff this volume; Kappelhoff 2004; Scherer, Greifenstein, and Kappelhoff this volume). Starting with a closer look at the history of melodrama in the dramatic arts and cinema, this work shows how different media and art forms – such as drama, acting, dance, or film – develop aesthetic strategies that aim at shaping the feelings of a heterogeneous, anonymous audience (“Zuschauergefühl”, see Kappelhoff and Bakels 2011) by aesthetic means of organizing the temporal dynamics of perception. In this regard, the concept of expressive movement became crucial for psychologically oriented film theory at the beginning of the 20th century. In 1916, the psychologist and philosopher Hugo Münsterberg made it a key theoretical building block of his perceptual psychology of cinema. In a theoretical two-step, Münsterberg puts an emphasis on the nexus between the perception of movement and affective experience. In a first step, it is the very concrete movement of the actor on the silver screen that brings this nexus to attention: “[G]estures, actions, and facial play are so interwoven with the psychical process of an intense emotion that every shade can find its characteristic delivery” (Münsterberg 1916: 113). In relating bodily expressions of affect – Münsterberg speaks of “emotion”, but not in accordance with contemporary theories of emotional appraisal (Ellsworth and Scherer 2003) – to the sensation of affect on the side of the spectator, Münsterberg develops an early approach to what is currently being discussed under the term inter-affectivity in the field of social cognition (Niedenthal et al. 2005) or developmental psychology (Stern 2010); the respective works share the idea of a human capacity to share the affective states of others not by decoding their actions, but by experiencing affect through the expressive quality of their bodily movements. As Kappelhoff demonstrates, it is this understanding of expressive movement that connects psychological, linguistic, and anthropological theories (Bühler 1933; Plessner [1941] 1970; Wundt 1900–1920) with concepts from philosophical aesthetics and the theory of arts (Fiedler 1991a, b; Simmel 1995a). Nevertheless, for Münsterberg the unfolding expressivity of the human body also serves as a paradigm. In a second step, he uses it as a theoretical model for conceptualizing the expressivity of film scenes; these are considered aesthetic compositions aiming at the organization of a specific temporal dynamic by means of cinematic movement: “[The] additional expression of the feeling through the medium of the surrounding scene, through background and setting, through lines and forms and movements, is very much more at the disposal of the photo artist” (Münsterberg 1916: 120). In this understanding, expressive movement no longer refers solely to the movements of bodies or objects. Instead, Münsterberg widens the concept by adapting it to the temporal dynamics of aesthetic compositions. In doing so, he paves the way for applying the idea of a link between cinematic expression, perception, and affective experience to compositional features of the audio-visual image.
From this perspective, expressive movement connects dynamics of obvious movement (e.g., movement of actors and objects on screen, camera movement) with more complex forms of transformation (e.g., lighting, rhythmic arrangements of shot lengths, acoustics), which are called “movement” in film theory with regard to their role in modulating perceptual dynamics. In applying a concept that originally focused on human expressivity – facial expressions, gestures – to the audio-visual image as a whole, Münsterberg’s theory reveals a direction of thought quite similar to Vivian Sobchack’s approach to embodied perception in cinema. Münsterberg’s reflections on the affective quality of the actor’s expressive movements on screen resemble contemporary concepts of inter-affectivity, i.e., the dynamic coordination of affects towards a shared affective experience, in the field of developmental psychology (Stern 2010), as well as approaches to embodiment and intersubjectivity in current philosophy and phenomenology (Gallagher 2008; Sheets-Johnstone 2008). By working out the common principles underlying the expressivity of the human body and the audio-visual image, Münsterberg offers a perspective on the latter that makes it graspable as the perceptual object of an ongoing process of inter-affectivity – the spectator experiences the moving audio-visual image as an expression of affect that materializes as his own bodily sensation. With Sobchack, this experience can be considered intersubjective in both ways, i.e., regarding the linkage of the spectator and the moving image as well as an experience that is shared by a heterogeneous audience. As we will see, this understanding of cinematic expressive movement – as an approach to the embodied experience of audio-visual movement patterns – offers an integrating theoretical perspective on the bodily implications of film viewing (Kappelhoff and Bakels 2011).
In the following section, the concept outlined above will serve as a starting point for reflecting on experimental studies on the (neuro-)physiological correlates of film perception.


3. Bodily aspects of film perception: Experimental findings

Considering the vast variety of studies in the field of experimental psychology, this section mostly confines itself – in accordance with the topic of this article, the role of the human body in film perception – to studies highlighting physiological and neurological measures; studies based on behavioral methods, i.e., assigning tasks to or questioning a given number of subjects, are only included if related to phenomena documented on a bodily level. Corresponding studies featuring film clips became increasingly popular during the 1990s. However, it has to be pointed out that the role of film and audio-visual media in (physio- and neuro-)psychological contexts at this time – and, in certain lines of research, still – has to be seen as that of a tool, a means of addressing more general questions rather than a subject-matter of investigation. Leaving aside the question of the similarities and differences between film experience and natural perception that are discussed in philosophical and aesthetic film theory (see section 2), this line of psychological research primarily considers film a reliable means of stimulating emotions. Accordingly, a first group of studies aims at the development of specific stimulus sets, i.e., sets of audio-visual clips that are either assigned to a typology of emotional effects or positioned along the psychological dimensions of valence and arousal (in both cases validated by questioning a sufficiently large number of subjects); these stimulus sets are developed to provide experimental research on emotions with a suitable elicitation tool (Gross and Levenson 1995; Hewig et al. 2005; Rottenberg, Ray, and Gross 2007; Schaefer et al. 2010).
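The general logic of such a validation – collecting per-clip subject ratings and aggregating them along valence and arousal – can be sketched as follows. This is a minimal illustration, not the pipeline of any of the cited studies; the function names, the rating scale, and the threshold-based selection are assumptions for the sake of the example.

```python
from statistics import mean, stdev

def summarize_ratings(ratings):
    """Aggregate subject ratings per clip.

    ratings: dict mapping clip id -> list of (valence, arousal) tuples,
    one tuple per subject, e.g. on a 1-9 self-assessment scale.
    Returns a dict mapping clip id -> per-dimension mean and SD.
    """
    summary = {}
    for clip, pairs in ratings.items():
        valence = [v for v, _ in pairs]
        arousal = [a for _, a in pairs]
        summary[clip] = {
            "valence_mean": mean(valence), "valence_sd": stdev(valence),
            "arousal_mean": mean(arousal), "arousal_sd": stdev(arousal),
        }
    return summary

def select_clips(summary, valence_min, arousal_min):
    """Keep clips whose mean ratings reach a target region of the
    valence/arousal plane, e.g. high valence and high arousal for an
    amusement-eliciting stimulus set."""
    return [clip for clip, s in summary.items()
            if s["valence_mean"] >= valence_min and s["arousal_mean"] >= arousal_min]
```

A typology-based set would replace the dimensional threshold with a categorical rating (amusement, sadness, fear, etc.); the aggregation step stays the same.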
Consequently, a second line of research uses these elicitation tools in order to address general questions regarding the physiological and neurological dimensions of emotional experience (see, for example, Boiten 1998; Goldin et al. 2005; Gross and Levenson 1995). As a consequence, specific perceptual-psychological aspects of film viewing – as opposed to natural perception – remain out of focus; likewise, the interrelations between specific aspects of the respective clips – for example, formal, aesthetic, syntactic, or semantic features – and the assigned emotional quality are not addressed. Nevertheless, studies on film, its features, and corresponding physio- and neuro-psychological phenomena have developed from being a rare exception (Hubert and de Jong-Meyer 1991) into an emerging field of research over the past couple of years, giving birth to the idea of neurocinematics as a distinct field of research (Hasson et al. 2008b). This section gives an orienting overview of studies relevant to this development; of course, such an overview has to be selective. In this case, it aims at sketching out key topics in the field regarding the question at hand – movement perception and the bodily bases of film viewing – namely the extent to which film perception can be considered intersubjective, the episodic structure of film perception, the temporal dynamics of affect, and the relation of audio-visual movement patterns and sensory-motor experiences.

3.1. Film experience and intersubjectivity

Early experimental approaches to the neurological bases of film viewing aimed at identifying and mapping distinct areas of the brain relevant to the processing of audio-visual media and assigning them to the processing of corresponding perceptual dimensions like the perception of color, language, faces, or the human body as a whole (Bartels and Zeki 2004). Nevertheless, the neuro-scientific approach to film viewing took a significant turn when exploratory studies highlighted the potential of a less comprehensive, less modular methodology. In 2004, the research group of Uri Hasson focused on smaller sections of the human brain: By correlating neuronal activations in single sections (so-called voxels) of a three-dimensional grid laid out on neurological data measured while subjects were viewing different films – a method called inter-subject correlation (ISC) – they found temporal courses of neurological activity in single voxels that turned out to be synchronous to a significant degree across a group of subjects (Hasson et al. 2004). These results added to the rather global interest in the functional role of different brain areas the idea of intersubjective processes unfolding in these areas while a given film is being watched. Subsequently, these findings grounded a first approach to relating neuro-scientific methods to positions in the field of film studies, namely the concepts of cognitive film theory, by applying a method that “may serve as an objective scientific measurement for assessing the effect of distinctive styles of filmmaking upon the brain” (Hasson et al. 2008b: 16). Meanwhile, first studies based on inter-subject correlation were able to document such correlations in the prefrontal cortex (Jääskeläinen et al. 2008); these correlations have been interpreted as indications of intersubjective aspects within the cognitive processing of certain films (Nummenmaa et al. 2012). From a film studies perspective, the notion of “contents and styles” draws attention to the highly controversial question of a systematic framework regarding the conceptualization of aesthetic experience and meaning constitution in audio-visual media (see section 2 of this article and Schmitt, Greifenstein, and Kappelhoff this volume).
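The core of the ISC measure – correlating each voxel’s time course across subjects and averaging over subject pairs – can be sketched in a few lines. This is a schematic reconstruction under simplifying assumptions (no preprocessing, anatomical alignment taken for granted); the array layout and function name are illustrative, not Hasson and colleagues’ published pipeline.

```python
import numpy as np

def intersubject_correlation(data):
    """Pairwise inter-subject correlation per voxel.

    data: array of shape (n_subjects, n_voxels, n_timepoints) holding the
    measured time course of every voxel for every subject watching the
    same film. Returns, for each voxel, the Pearson correlation of its
    time course averaged over all subject pairs.
    """
    n_subj, n_vox, n_time = data.shape
    # z-score each voxel time course so a scaled dot product equals Pearson r
    z = (data - data.mean(axis=-1, keepdims=True)) / data.std(axis=-1, keepdims=True)
    isc = np.zeros(n_vox)
    n_pairs = 0
    for i in range(n_subj):
        for j in range(i + 1, n_subj):
            isc += (z[i] * z[j]).sum(axis=-1) / n_time  # per-voxel Pearson r
            n_pairs += 1
    return isc / n_pairs
```

A voxel whose time course is driven by the shared stimulus yields values near 1 across the group, while stimulus-independent activity averages out towards 0.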
Against the background of theories on the embodied experience of expressive movement, these experimental studies raise the question to what extent audio-visual movement patterns can be related to the intersubjective experience of cinematic images. Within experimental psychology, studies based on inter-subject correlation have also drawn attention to another field of work on film: the episodic nature of film experience and its relation to dynamics of remembering.

3.2. Film experience and episodic perception

In the early 1990s, behavioral studies documented that the placing of commercial breaks within episodic units of audio-visual images has a negative impact on subjects’ ability to remember plot lines (Boltz 1992). Whereas the episodic experience of audio-visual media had meanwhile been located within the wider context of a general tendency towards the episodic nature of human perception (Magliano, Miller, and Zwaan 2001), studies applying inter-subject correlation to questions on the episodic experience of audio-visual media (in this case: a TV sitcom) have documented a correlation between the degree to which a film elicits synchronous neurological activities over a group of subjects and single subjects’ ability to remember episodic units (Hasson et al. 2008a). In addition, a recent behavioral study, i.e., a study based on tasks the subjects had to conduct, has documented an agreement of 90% for the episodic encoding of Hollywood films over a group of subjects; interestingly, only 50% of the episode boundaries identified could be explained with obvious shifts in diegetic time or space (Cutting, Brunick, and Candan 2012). From a film studies perspective, these results draw attention to the question of a systematic approach to the episodic perception of films and audio-visuals that can offer explanations other than spatial or temporal continuity on the plot level; once again, a focus on movement patterns might provide insights regarding the aesthetic dimension of segmentation. Although the compositional features that structure the episodic experience of audio-visual media so far remain unidentified in the field of experimental psychology, another line of research highlights temporal processes that could serve as a further lead in gaining insights into this issue: the temporal dynamics of physiological reactions to film viewing.
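A simple way to quantify the kind of inter-subject agreement on episode boundaries reported in such studies can be sketched as follows; the binning approach, the quorum parameter, and the function name are illustrative assumptions, not Cutting, Brunick, and Candan’s actual procedure.

```python
from collections import Counter

def boundary_agreement(subject_boundaries, bin_width=1.0, quorum=0.5):
    """Shared episode boundaries across subjects.

    subject_boundaries: list (one entry per subject) of lists of boundary
    timestamps in seconds, as marked while viewing the same film.
    Timestamps are snapped to bins of bin_width seconds; a bin counts as
    a shared boundary if at least `quorum` of the subjects marked it.
    Returns the set of shared boundary positions (in seconds).
    """
    n = len(subject_boundaries)
    counts = Counter()
    for stamps in subject_boundaries:
        # each subject contributes a given bin at most once
        counts.update({round(t / bin_width) for t in stamps})
    return {b * bin_width for b, c in counts.items() if c / n >= quorum}
```

Boundaries that survive the quorum but coincide with no cut or scene change would be the interesting cases for an aesthetics of segmentation.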

3.3. Temporal dynamics of affect

Again, as early as 1991, Hubert and de Jong-Meyer documented differing temporal patterns of physiological activity for subjects viewing an animated cartoon scene and a suspense film scene (Hubert and de Jong-Meyer 1991). They explained the physiological pattern related to viewing the cartoon – the combination of a temporarily decreased heart rate with a rapid decrease in electrodermal activity and few changes in bodily sensations – with the subjects experiencing an amused state; likewise, the physiological pattern related to viewing the suspense scene – the combination of a temporarily decreased heart rate with a rapid increase in electrodermal activity and marked changes in bodily sensations – was assigned to the experience of a state of irritation. The authors concluded “that the films elicited differential mood patterns” (Hubert and de Jong-Meyer 1991: 1). Subsequently, a number of studies have affirmed these results, whether focusing on the respiratory cycle (Boiten 1998), increasing the number of emotion categories (Christie and Friedman 2004), linking neurological data to temporal dynamics (Goldin et al. 2005), rejecting emotional categories in favor of dimensional categories (Gomez et al. 2005), or increasing the number of different physiological measures (Kreibig et al. 2007). Taken together, all of these studies identified robust and distinctive physiological patterns. With the question of compositional features that structure episodic experience in mind, these results can, from a film studies perspective, be seen as hinting at a linkage between temporal segments of cinematic images – like audio-visual movement patterns (Kappelhoff and Bakels 2011) – and respective physio-psychological patterns; they also highlight the interrelation of movement patterns and the specific affect poetics of different cinematic genres (Kappelhoff and Grotkopp 2012; Visch and Tan 2009).
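The kind of pattern description used in these studies – the direction of change in heart rate and electrodermal activity relative to a pre-stimulus baseline – can be illustrated with a minimal sketch. Sampling layout, function name, and the two-channel reduction are assumptions for illustration, not the cited measurement protocols.

```python
from statistics import mean

def response_pattern(heart_rate, eda, baseline_samples):
    """Signed mean change from a pre-stimulus baseline for two channels.

    heart_rate, eda: equal-length sample sequences for one subject and
    one clip; the first baseline_samples entries are the pre-stimulus
    baseline. Returns (hr_change, eda_change). Following Hubert and
    de Jong-Meyer's description, a decreased heart rate combined with
    increased electrodermal activity would point towards a suspense-like
    response, while decreases in both would fit the amusement pattern.
    """
    hr_change = mean(heart_rate[baseline_samples:]) - mean(heart_rate[:baseline_samples])
    eda_change = mean(eda[baseline_samples:]) - mean(eda[:baseline_samples])
    return hr_change, eda_change
```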
Nevertheless, a crucial step in relating theories on the embodied experience of audio-visual movement to the (neuro-)physiology of film perception still lies ahead: examining, as a start, the linkage between the perception of moving images and sensory-motor experiences.

3.4. Audio-visual movement and sensory-motor experience

The sensory-motor aspects of movement perception in audio-visual media have to be considered mostly virgin territory within the field of experimental psychology and the neurosciences. Nevertheless, a theoretical approach to sensory-motor experiences related to film perception that genuinely derives from the neurosciences can be found in the mirror neuron theory (MNT), which has recently been applied to the editing of film scenes (Gallese and Guerra 2012). In the 1990s, a group of neuro-physiologists led by Giacomo Rizzolatti and Vittorio Gallese, conducting research on the brains of macaque monkeys, discovered that certain motor-activity neurons, responsible for hand and mouth movements, became active not only while the monkeys performed the corresponding movements, but also – though with a weaker intensity – when the monkeys observed others performing these movements (Gallese and Goldman 1998). This discovery was soon interpreted as a hint at the monkeys’ potential to simulate, understand, and experience the actions of others not only on a cognitive level, but also by embodied simulation, i.e., on a pre-cognitive neurological level of motor activity. Though the existence of a similar system in the human brain could so far only be demonstrated indirectly (Rizzolatti and Craighero 2004) and remains controversial (Hickok 2009), over the past two decades the mirror neuron theory has become one of the most famous and widely noted discoveries in the recent history of the neurosciences. Since then, Vittorio Gallese has been pursuing the enterprise of applying mirror neuron theory to the fields of philosophy as well as to theories on the arts and media: Firstly, a possible theoretical link to the philosophy of Maurice Merleau-Ponty was examined in order to establish a neuro-phenomenological approach to embodied simulation (ES) (see Gallese 2005); secondly, the concept of gesture was taken as a theoretical link to apply mirror neuron theory and embodied simulation to the aesthetic experience of painting and sculpture (Freedberg and Gallese 2007). With regard to film analysis, Gallese and Guerra (2012: 200–205) argue in favor of conceptualizing the alignment of camera movements with character movements and intentions as the compositional structure essential to a pre-cognitive, embodied perception of film. From this point of view, a shot sequence from Alfred Hitchcock’s Notorious (USA 1946) is interpreted as resembling a grasping movement, while another sequence from Il grido (Michelangelo Antonioni, Italy 1957), which disallows the alignment of character and camera movement, is considered an instance of a principle of film editing that entails a disembodied film experience.
From a film studies perspective, the mirror neuron theory offers potential insights into general links between movement perception and movement experience (Curtis 2008; Elsaesser and Hagener 2010: 55–81); nevertheless, the approaches to embodied film experience outlined in section 2 of this article (as well as recent experimental studies on subjects “learning” film language; see, for example, Schwan and Ildirar 2010) hint at a difference between film experience and everyday perception. In this regard, the relation of audio-visual movement (e.g., camera movement, editing) and sensory-motor experiences remains a crucial question open to experimental investigation. In the last section of this article, this question will be contextualized within the transdisciplinary discourse on movement perception, affective experience, and embodiment.

4. Audio-visual media and the quest for embodiment: Transdisciplinary perspectives

The previous sections have sketched out a link between theories on embodiment within the field of film studies and perspectives on embodiment in other academic disciplines, especially the field of experimental psychology. Following the work of Vivian Sobchack, Hugo Münsterberg, and Hermann Kappelhoff, a theoretical view on the embodied dimensions of film experience has been outlined that accounts for the intersubjective dynamics of affectivity and meaning constitution in film viewing. At the core of this view lies the concept of cinematic expressive movement (Kappelhoff 2004), conceiving audio-visual movement patterns as cinematic means of organizing a specific spatio-temporal experience that literally incorporates perceptive, affective, and cognitive dimensions of film viewing (Kappelhoff 2004; Kappelhoff and Bakels 2011; Scherer, Greifenstein, and Kappelhoff this volume).


Subsequently, experimental studies on the bodily aspects of film perception within the field of experimental psychology and the neurosciences have been discussed with regard to potential links to and shared perspectives with the concept of cinematic expressive movement. Against this background, several questions could be identified that can only be addressed by setting up a dialog between film theory and psychological models: What is it that conveys the shared experience of a given film by a diverse audience? In what ways do films organize episodic perception? What do films tell us about the temporal dynamics of affect? And what do temporal dynamics of affect tell us about film genres in turn? What are the bodily bases that lead us to assume a pre-cognitive linkage between movement perception and (neuro-)physiological experiences of being moved? These questions – and the theoretical considerations leading to them – highlight the potential of a transdisciplinary approach to the embodied experience of audio-visual images. Namely, the combination of film analytical methods based on the model of cinematic expressive movement with experimental research bears the potential to take the use of film as a tool in experimental studies to another level. In this field of research, film clips are often merely seen as a means to study more or less predictable emotional reactions or the comprehension of narratives with regard to experimental subject groups, i.e., only the effects of audio-visual images. However, a transdisciplinary approach could relate the temporal dynamics of neuro- and physio-psychological processes in film viewing to the aesthetic principles that organize the temporal experience of film, thereby addressing the overarching question of how what is experienced relates to what is being perceived.
Especially in highlighting the role of movement perceptions – and the patterns that shape these perceptions – with regard to dynamics of inter-affectivity and meaning constitution, research on audio-visual images can contribute to investigating models of embodied cognition as developed in the cognitive sciences, overcoming the theoretical gaps exposed at the beginning of this article. In offering repeatable acts of embodied experience – audio-visual clips that aim at organizing embodied perception – research linking the analysis of audio-visual aesthetics to the embodied experience of audio-visual images may help to investigate the intertwining of perception, affect, and cognition experimentally. At the same time, the systematic and dynamic implications of this intertwining seem to denote the common vanishing point of research on embodiment in diverse academic disciplines. In this regard, research on audio-visual movement patterns shares an epistemological interest with theories on the relation of abstract concepts and embodied perception in the cognitive sciences (Barsalou 1999, 2008; Lakoff and Johnson 1980; Turner 1996) and neuro-linguistic research on networks linking semantics with sensory-motor experiences (Moseley et al. 2012; Pulvermüller, Shtyrov, and Ilmoniemi 2005).
Last but not least, the approach that has been sketched out in this article is of transdisciplinary relevance with regard to the use of video and audio-visual media in academic research: With a vast number of academic disciplines turning towards audio-visual media as a subject-matter, source of data, or presentational form in the digital age (see Müller volume 1), research on the embodied experience of moving images can provide theoretical models that not only look at what is assumed to be communicated, but also explain how it is communicated – and thereby introduce the specific interrelation of body, language, and audio-visual communication to an increasingly transdisciplinary discourse on audio-visual images.


5. References

Barsalou, Lawrence W. 1999. Perceptual symbol systems. Behavioral and Brain Sciences 22: 577–660.
Barsalou, Lawrence W. 2008. Grounded cognition. Annual Review of Psychology 59: 617–645.
Bartels, Andreas and Semir Zeki 2004. Functional brain mapping during free viewing of natural scenes. Human Brain Mapping 21(2): 75–85.
Boiten, Frans A. 1998. The effects of emotional behaviour on components of the respiratory cycle. Biological Psychology 49(1): 29–51.
Boltz, Marilyn 1992. Temporal accent structure and the remembering of filmed narratives. Journal of Experimental Psychology: Human Perception and Performance 18(1): 90–105.
Bordwell, David 1985. Narration in the Fiction Film. Madison: University of Wisconsin Press.
Bordwell, David 1989. Making Meaning. Inference and Rhetoric in the Interpretation of Cinema. Cambridge: Harvard University Press.
Bordwell, David 1997. On the History of Film Style. Cambridge: Harvard University Press.
Bühler, Karl 1933. Ausdruckstheorie. Das System an der Geschichte aufgezeigt. Jena: Fischer.
Chomsky, Noam 1968. Language and Mind. New York: Harcourt, Brace and World.
Christie, Israel C. and Bruce H. Friedman 2004. Autonomic specificity of discrete emotion and dimensions of affective space. A multivariate approach. International Journal of Psychophysiology 51(2): 143–153.
Clark, Andy and David Chalmers 1998. The extended mind. Analysis 58(1): 7–19.
Curtis, Robin 2008. Expanded empathy. Movement, mirror neurons and Einfühlung. In: Joseph and Barbara Anderson (eds.), Narration and Spectatorship in Moving Images. Perception, Imagination, Emotion, 49–61. Cambridge: Cambridge Scholars Press.
Cutting, James E., Kaitlin L. Brunick and Ayse Candan 2012. Perceiving event dynamics and parsing Hollywood films. Journal of Experimental Psychology: Human Perception and Performance 38(6): 1476–1490.
Ellsworth, Phoebe C. and Klaus R. Scherer 2003. Appraisal processes in emotion. In: Richard J. Davidson (ed.), Handbook of Affective Sciences, 572–595. Oxford: Oxford University Press.
Elsaesser, Thomas and Malte Hagener 2010. Film Theory. An Introduction Through the Senses. New York: Routledge.
Fiedler, Konrad 1991a. Moderner Naturalismus und künstlerische Wahrheit. In: Konrad Fiedler, Schriften zur Kunst, Vol. I, 82–110. München: Wilhelm Fink.
Fiedler, Konrad 1991b. Über den Ursprung der künstlerischen Tätigkeit. In: Konrad Fiedler, Schriften zur Kunst, Vol. I, 112–220. München: Wilhelm Fink.
Freedberg, David and Vittorio Gallese 2007. Motion, emotion and empathy in esthetic experience. Trends in Cognitive Sciences 11(5): 197–203.
Gallagher, Shaun 2008. Understanding others: Embodied social cognition. In: Paco Calvo and Antoni Gomila (eds.), Handbook of Cognitive Science: An Embodied Approach, 439–452. Amsterdam: Elsevier.
Gallese, Vittorio 2005. Embodied simulation. From neurons to phenomenal experience. Phenomenology and the Cognitive Sciences 4(1): 23–48.
Gallese, Vittorio and Alvin Goldman 1998. Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences 2(12): 493–501.
Gallese, Vittorio and Michele Guerra 2012. Embodying movies: Embodied simulation and film studies. Cinema: Journal of Philosophy and the Moving Image (3): 183–210.
Goldin, Philippe R., Cendri A. Hutcherson, Kevin N. Ochsner, Gary H. Glover, John D. E. Gabrieli and James J. Gross 2005. The neural bases of amusement and sadness. A comparison of block contrast and subject-specific emotion intensity regression approaches. Neuroimage 27(1): 26–36.
Gomez, Patrick, Philippe Zimmermann, Sissel Guttormsen-Schär and Brigitta Danuser 2005. Respiratory responses associated with affective processing of film stimuli. Biological Psychology 68(3): 223–235.


Greifenstein, Sarah and Hermann Kappelhoff this volume. The discovery of the acting body. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2070–2080. Berlin/Boston: De Gruyter Mouton.
Grodal, Torben K. 1997. Moving Pictures. A New Theory of Film Genres, Feelings, and Cognition. Oxford: Clarendon Press.
Grodal, Torben K. 2009. Embodied Visions. Evolution, Emotion, Culture and Film. Oxford: Oxford University Press.
Gross, James J. and Robert W. Levenson 1995. Emotion elicitation using films. Cognition and Emotion 9(1): 87–108.
Hasson, Uri, Orit Furman, Dav Clark, Yadin Dudai and Lila Davachi 2008a. Enhanced intersubject correlations during movie viewing correlate with successful episodic encoding. Neuron 57(3): 452–462.
Hasson, Uri, Yuval Nir, Ifat Levy, Galit Fuhrmann and Rafael Malach 2004. Intersubject synchronization of cortical activity during natural vision. Science 303(5664): 1634–1640.
Hasson, Uri, Ohad Landesman, Barbara Knappmeyer, Ignacio Vallines, Nava Rubin and David J. Heeger 2008b. Neurocinematics: The neuroscience of film. Projections 2(1): 1–26.
Hewig, Johannes, Dirk Hagemann, Jan Seifert, Mario Gollwitzer, Ewald Naumann and Dieter Bartussek 2005. Brief report. A revised film set for the induction of basic emotions. Cognition and Emotion 19(7): 1095–1109.
Hickok, Gregory 2009. Eight problems for the mirror neuron theory of action understanding in monkeys and humans. Journal of Cognitive Neuroscience 21(7): 1229–1243.
Hubert, Walter and Renate de Jong-Meyer 1991. Autonomic, neuroendocrine, and subjective responses to emotion-inducing film stimuli. International Journal of Psychophysiology 11(2): 131–140.
Jääskeläinen, Iiro P., Katri Koskentalo, Marja H. Balk, Taina Autti, Jaakko Kauramäki, Cajus Pomren and Mikko Sams 2008. Inter-subject synchronization of prefrontal cortex hemodynamic activity during natural viewing. The Open Neuroimaging Journal 2(14): 14–19.
Kappelhoff, Hermann 2004. Matrix der Gefühle. Das Kino, das Melodrama und das Theater der Empfindsamkeit. Berlin: Vorwerk 8.
Kappelhoff, Hermann and Jan-Hendrik Bakels 2011. Das Zuschauergefühl. Möglichkeiten qualitativer Medienanalyse. Zeitschrift für Medienwissenschaft 5(2): 78–95.
Kappelhoff, Hermann and Matthias Grotkopp 2012. Film genre and modality. The incestuous nature of genre exemplified by the war film. In: Sébastien Lefait and Philippe Ortoli (eds.), In Praise of Cinematic Bastardy, 29–39. Newcastle upon Tyne: Cambridge Scholars Publishing.
Kappelhoff, Hermann and Cornelia Müller 2011. Embodied meaning construction. Multimodal metaphor and expressive movement in speech, gesture, and feature film. Metaphor and the Social World 1(2): 121–153.
Kreibig, Sylvia D., Frank H. Wilhelm, Walton T. Roth and James J. Gross 2007. Cardiovascular, electrodermal, and respiratory response patterns to fear- and sadness-inducing films. Psychophysiology 44(5): 787–806.
Lakoff, George and Mark Johnson 1980. The metaphorical structure of the human conceptual system. Cognitive Science 4(2): 195–208.
Magliano, Joseph P., Jason Miller and Rolf A. Zwaan 2001. Indexing space and time in film understanding. Applied Cognitive Psychology 15(5): 533–545.
Marks, Laura U. 2000. The Skin of the Film. Intercultural Cinema, Embodiment, and the Senses. Durham: Duke University Press.
Merleau-Ponty, Maurice 2005. Phenomenology of Perception. London/New York: Routledge. First published [1945].
Metz, Christian 1992. The Imaginary Signifier. Psychoanalysis and the Cinema. Bloomington: Indiana University Press. First published [1975].


IX. Embodiment

Moseley, Rachel, Francesca Carota, Olaf Hauk, Bettina Mohr and Friedemann Pulvermüller 2012. A role for the motor system in binding abstract emotional meaning. Cerebral Cortex 22(7): 1634⫺1647. Müller, Cornelia volume 1. Introduction. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 1⫺6. Berlin/Boston: De Gruyter Mouton. Mulvey, Laura 1975. Visual pleasure and narrative cinema. Screen 16(3): 6⫺18. Münsterberg, Hugo 1916. The Photoplay. A Psychological Study. New York/London: D. Appleton and Company. Niedenthal, Paula M., Lawrence W. Barsalou, Piotr Winkielman, Silvia Krauth-Gruber and François Ric 2005. Embodiment in attitudes, social perception, and emotion. Personality and Social Psychology Review 9(3): 184⫺211. Nummenmaa, Lauri, Enrico Glerean, Mikko Viinikainen, Iiro P. Jääskeläinen, Riitta Hari and Mikko Sams 2012. Emotions promote social interaction by synchronizing brain activity across individuals. Proceedings of the National Academy of Sciences 109(24): 9599⫺9604. Plantinga, Carl 2009. Moving Viewers. American Film and the Spectator’s Experience. Berkeley: University of California Press. Plessner, Helmuth 1970. Laughing and Crying: A Study of the Limits of Human Behaviour. Evanston: Northwestern University Press. First published [1941]. Pulvermüller, Friedemann, Yuri Shtyrov and Risto Ilmoniemi 2005. Brain signatures of meaning access in action word recognition. Journal of Cognitive Neuroscience 17(6): 884⫺892. Rizzolatti, Giacomo and Laila Craighero 2004. The mirror-neuron system. Annual Review of Neuroscience 27: 169⫺192. Rottenberg, Jonathan, Rebecca D. Ray and James J. Gross 2007. Emotion elicitation using films. In: James A. Coan and John J.B. Allen (eds.), Handbook of Emotion Elicitation and Assessment.
Series in Affective Science, 9⫺28. New York: Oxford University Press. Schaefer, Alexandre, Frédéric Nils, Xavier Sanchez and Pierre Philippot 2010. Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cognition and Emotion 24(7): 1153⫺1172. Scherer, Thomas, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movement in audio-visuals: Modulating affective experience. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2081⫺2092. Berlin/Boston: De Gruyter Mouton. Schmitt, Christina and Sarah Greifenstein this volume. Cinematic communication and embodiment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2061⫺2070. Berlin/Boston: De Gruyter Mouton. Schmitt, Christina, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movement and metaphoric meaning making in audio-visual media. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2092⫺2112. Berlin/Boston: De Gruyter Mouton. Schwan, Stephan and Sermin Ildirar 2010. Watching film for the first time. How adult viewers interpret perceptual discontinuities in film. Psychological Science 21(7): 970⫺976. Shaviro, Steven 1993. The Cinematic Body. Minneapolis: University of Minnesota Press. Sheets-Johnstone, Maxine 2008. Getting to the heart of emotions and consciousness.
In: Paco Calvo and Antoni Gomila (eds.), Handbook of Cognitive Science: An Embodied Approach, 453⫺465. Amsterdam: Elsevier.


Simmel, Georg 1995a. Aesthetik des Porträts. In: Georg Simmel, Aufsätze und Abhandlungen 1901⫺ 1908, Vol. I, 321⫺332. Frankfurt/Main: Suhrkamp. Simmel, Georg 1995b. Die ästhetische Bedeutung des Gesichts. In: Georg Simmel, Aufsätze und Abhandlungen 1901⫺1908, Vol. I, 36⫺42. Frankfurt/Main: Suhrkamp. Smith, Murray 1995. Engaging Characters. Fiction, Emotion, and the Cinema. Oxford: Clarendon Press. Sobchack, Vivian 1992. The Address of the Eye. A Phenomenology of Film Experience. Princeton: Princeton University. Stern, Daniel N. 2010. Forms of Vitality: Exploring Dynamic Experience in Psychology, the Arts, Psychotherapy and Development. Oxford: Oxford University Press. Tan, Ed S. 1996. Emotion and the Structure of Narrative Film. Film as an Emotion Machine. Mahwah, NJ: Erlbaum. Turner, Mark 1996. The Literary Mind. Oxford: Oxford University Press. Visch, Valentijn T. and Ed S. Tan 2009. Categorizing moving objects into film genres. The effect of animacy attribution, emotional response, and the deviation from non-fiction. Cognition 110 (2): 265⫺272. Voss, Christiane 2011. Film experience and the formation of illusion. The spectator as ‘surrogate body’ for the cinema. Cinema Journal 50(4): 136⫺150. Williams, Linda 1991. Film bodies: Gender, genre, and excess. Film Quarterly 44(4): 2⫺13. Wilson, Margaret 2002. Six views of embodied cognition. Psychonomic Bulletin and Review 9(4): 625⫺636. Wundt, Wilhelm 1900⫺1920. Völkerpsychologie (10 Volumes). Leipzig: Wilhelm Engelmann.

Jan-Hendrik Bakels, Berlin (Germany)

163. Cinematic communication and embodiment

1. Introduction: Film experience and the spectator’s body
2. The phenomenological view on communication
3. The perceiving of an anonymous “other”
4. See the seeing and hear the hearing
5. Conclusion: Embodiment, affect, metaphor, and time
6. References

Abstract

How do films communicate with their spectators? From a neophenomenological perspective on this question, the embodied spectator comes to play a central role. Two aspects are of major interest: on the one hand, the spectator’s experiential presence in the cinema situation; on the other hand, the ways audio-visual images address the spectator’s body. The chapter presents Vivian Sobchack’s theoretical outline of cinematic communication and its focus on embodiment, perception, and expression. Building upon Maurice Merleau-Ponty, cinematic communication is understood as an interplay of concrete acts of expressing and perceiving, substantially preceding all processes of thinking and conceptualizing. In a nutshell, the article elaborates on how spectator and film intertwine in the very moment of media reception. The model’s horizons and potentials are examined by relating it to theories of affective expression and recent research on embodiment.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body ⫺ Language ⫺ Communication (HSK 38.2), de Gruyter, 2061⫺2070

1. Introduction: Film experience and the spectator’s body

Audio-visual images are an integral part of our everyday life and present in a broad variety of media practices: We watch professional or self-made clips on the internet and news reports on TV or smartphones; we project films and TV series on our living room’s wall; and last but not least we go to the cinema. But what is the specific form of communication realized by those audio-visual media? And on what levels do they address a spectator? This is a fundamental and complex issue for film and media studies. In this article, we would like to introduce an approach to these questions that represents a significant shift in modern film theory: With Vivian Sobchack’s (1992) extensive examination of Maurice Merleau-Ponty’s phenomenological work, the paradigm of embodiment has re-entered the debate on the communicative principles between film and spectator. It is a re-entry, for classical film theory had already discussed film as an expressive art and related this to embodied, attentional, and affective dimensions of spectatorship (Balázs [1924] 2010; Eisenstein [1924] 1998; Münsterberg [1916] 2002). However, until Sobchack paved the way again for thinking about the body in film reception, modern approaches did reflect on the relation between spectator and film, but whether they defined it psychoanalytically (e.g., Metz [1977] 2000) or cognitively (e.g., Bordwell 1985), the spectator’s sensuous or even physical presence seemed to be disregarded. In contrast, the body is of major importance in the neophenomenological theory of film reception. Rather than reducing embodiment to a physical or cerebral activity, “cinematic communication” (Sobchack 1992: 5) conceives of embodied experience as a situated and concrete act of sensing.
According to the model, the spectator is audio-visually addressed through her or his body and senses: “[M]y body is not only an object among all objects, (…) but an object which is sensitive to all the rest, which reverberates to all sounds, vibrates to all colors, and provides words with their primordial significance through the way in which it receives them” (Merleau-Ponty, cited in Sobchack 2004: 53). Film images are conceived of as only being realized within the perception and the all-encompassing bodily experience of a spectator: Film images communicate with the audience through concrete acts of seeing and hearing, addressing the synesthetic and attentional experiences of people sitting in the dark of a movie theater. By focusing on the intertwining of audio-visual images and bodily responses, the fundamental interrelatedness of film and spectator is emphasized. This communicative act is at the center of a holistic approach to film reception, and it builds on the basic phenomenological idea: the intertwining of subject and world. We will elaborate the concept of the embodied spectator, which the American film scholar Sobchack developed, by examining the model’s key aspect: What is cinematic communication? Does it share features with everyday face-to-face communication? What are its unique characteristics? To think of film and spectator against the background of such an analogy enables an inspiring perspective on the question of how the spectator is involved in film experience: It brings into play articulatory properties of film
and surveys film images as perceiving act and conduct at once. Film is conceived of as presenting an “other”, an embodied intentionality, which affects the spectator. Finally, we will broaden the perspective by outlining further substantial dimensions of the embodied spectator, which are constitutively linked to Sobchack’s model: How is a spectator not only sensuously involved in the film experience, but also affectively and cognitively? To take up our initial observation again: The specific settings of different media practices vary in significant ways. However, film has shaped audio-visual culture through its artistic and aesthetic means most profoundly. Among other things, it has influenced the ways in which today’s multi-screened audio-visual media articulate themselves (cf. Scherer, Greifenstein, and Kappelhoff this volume). We assume that examining the neophenomenological conceptualization of how film and spectator communicate provides essential insights into communication with audio-visual images in general. Furthermore, we suggest that it offers significant points of contact with embodiment research in other disciplines (Bakels this volume) ⫺ not least with cognitive-linguistic research on verbo-gestural face-to-face communication (cf. Horst et al. this volume).

2. The phenomenological view on communication

According to Sobchack, a spectator is involved in a vital exchange when watching a film, an exchange realized between his body and the film. This cinematic communication is conceived of as a concrete situation in a phenomenological sense. Thus, we do not have a sender, a receiver, and in between a medium with the message (Shannon and Weaver 1963); in particular, this ‘in between’ is not something that has to be encoded in a medium (like something stored in a parcel), passed on, and then decoded. Rather, Sobchack radically overcomes such a model of communication, which would conceive of film just as an object distinct from, and in no relation to, the spectator. She aims at grasping the relation of spectator and film via the concept of embodied experience: the experience of perceptive and expressive acts. With this idea, she builds her film theory upon Merleau-Ponty (especially [1945] 2005, [1951] 2007, [1964] 1968). Examining modes of the nonverbal in face-to-face communication, Merleau-Ponty also concentrates on perceptive and expressive acts. Bodily conduct in general and gestures in particular are conceived of as something that is bound intersubjectively to the respective other: The participants in a face-to-face communication are interwoven with each other via gesticulation. It is in this sense that Merleau-Ponty conceptualizes a conversation between himself and a friend in terms of interrelatedness and unity:

[T]he distance between us, his consent or refusal are immediately read in my gesture; there is not a perception followed by a movement, for both form a system which varies as a whole. If, for example, realizing that I am not going to be obeyed, I vary my gesture, we have here, not two distinct acts of consciousness. What happens is that I see my partner’s unwillingness, and my gesture of impatience emerges from this situation without any intervening thought. (Merleau-Ponty 2005: 127)

Thus, Merleau-Ponty claims for face-to-face interactions that the act of seeing a gesture and the act of performing a gesture are intrinsically bound together: Not willing to come nearer and willing to convince somebody to come over are not separate but shared and interdependent; they are related to each other via gesture and facial expression ⫺ “the distance between us, his consent or refusal are immediately read in my gesture”. Perceptive and expressive acts intrinsically compose a whole, creating a communicative situation in which gesture connects the thinking and feeling of my counterpart and me. Notably, those acts of perception and expression are both at once: what is exchanged (i.e., consent or refusal) as well as the mode in which the exchange is realized (i.e., the gesture and facial expression). Thus, the body is the interface of the exchange. The concept of embodied communication highlights a doubling that characterizes human beings: We are visible, perceivable, audible, and tangible ⫺ at once for another and for ourselves. Such a doubling is addressed when a description of a communicative situation focuses on expressive and perceptive acts. Thus, face-to-face communication is conceptualized as an intertwining or circle realized by two bodies able to sense ⫺ rather than as a dialog of two distinct persons. But how can this apply to our communication with audio-visual media, given that film is a non-human, technical ‘counterpart’? In face-to-face communication, as the term already indicates, we face and thus perceive and respond to each other’s presented facial and gestural expressions. By contrast, film is a one-way communicative partner that addresses its audience via audio-visual expressivity: e.g., color compositions, light valeurs, and the movement of montage. The presented audio-visual images affect spectators, while the film does not respond to the communication. Nevertheless, with Merleau-Ponty in mind, we can say that in perceiving the film’s images, the spectator is bodily intertwined with the film’s unfolding aesthetic and poetic means: He always embodies the film’s images in his perceptive acts.
The spectator’s attitude towards a film is thus evoked by his bodily relatedness to the audio-visual images. This leads to another question: How does the concept of cinematic communication grasp the observing role of film, the camera’s ability to look, to see, and to regard?

3. The perceiving of an anonymous “other”

What a spectator experiences via his receptive process within cinematic communication are not only colors, forms, and sounds. Rather, it is the perception of an “other” (Sobchack 1992: 9) that is present and addresses us most intensively. Notably, this other is neither the director nor a protagonist of the film: For Sobchack, it is the perception of an anonymous other. This means that the spectator experiences a specific anonymous conduct when the camera turns away from something and changes its focus. We would like to illustrate this idea, which applies to every film at every moment, by taking a look at Alfred Hitchcock’s Vertigo (USA 1958). We have chosen the scene in which Scottie (James Stewart) sees Madeleine (Kim Novak) for the first time in an elegant restaurant full of people (min 13:10⫺14:40). The scene’s opening (Fig. 163.1) illustrates very nicely the always-constitutive presence of such an anonymous other within cinematic communication. Here, what Sobchack conceives of as the film’s perceptive act is staged as prominently as its orientation towards something (the woman). To simply say that ‘Scottie is seeing Madeleine’ would not be a phenomenological description. In fact, such a description would disregard the actual staging; it would disregard the camera that frees itself from Scottie and moves from right to left through the restaurant until approaching Madeleine. While the camera movement cannot be characterized literally as Scottie’s perspective, the spectator realizes the presence of someone who is there, perceiving and moving ⫺ someone who after a while lays eyes on Madeleine, bringing her into the focus of attention.

Fig. 163.1: Laying eyes on her ⫺ Scottie sees Madeleine for the first time (Vertigo, Alfred Hitchcock, USA 1958; min 13:16⫺14:00)

Thus, Sobchack argues that in cinematic communication two perceptive acts intertwine: that of the spectator and that of the anonymous other (the film). Therefore, the situation in the cinema is not regarded as a strictly monologic one, in which the spectator is the subject that looks at the object ‘film’. On the contrary, it is conceived of as inherently dialogical, a situation in which two attending subjects take part. This anonymous other whose perspective the spectator perceives can be located. It is a doubly filled phenomenological situatedness, as Sobchack indicates by wordplay and reference to the screen: “Here, where eye (I) am.” This points to the shared space and situation evoked by the interwoven perceptive acts of film and spectator: “Here, where we see” (Sobchack 1992: 10). Thus, film is always both a direct and a mediated experience ⫺ the spectator’s own and direct perception merges with a mediated form of perception. In Vertigo it is the spectator who lays eyes on Madeleine ⫺ but he does so by embodying the perception of an intentional other. In Sobchack’s account, film is much more than a perceivable object; it is always the impression of a perceiving subject, too (Sobchack 1992: 21). But what consequences does this have for the film’s status within cinematic communication? How can we grasp this idea that film images have a conduct that they perform? What does it mean for the spectator to experience film as a perceiving instance?


4. See the seeing and hear the hearing

In a very condensed form, the main idea of Sobchack’s approach is given in the following prominent quotation:

More than any other medium of human communication, the moving picture makes itself sensuously and sensibly manifest as the expression of experience by experience. A film is an act of seeing that makes itself seen, an act of hearing that makes itself heard, an act of physical and reflective movement that makes itself reflexively felt and understood. (Sobchack 1992: 3)

From this viewpoint, film is not to be conceived of as an artwork or product, not as an object that a spectator looks at in the cinema. Notably, the film itself enters the communicative situation as an acting participant; in Sobchack’s words, “the moving picture makes itself manifest”. And what a film presents its spectator with is much more than bodies on a screen, much more than actors, objects, or rooms ⫺ film always shows a perceptive act: “Watching a film, we can see the seeing as well as the seen, hear the hearing as well as the heard, and feel the movement as well as see the moved” (Sobchack 1992: 10). Thus, to the spectator the perception itself becomes present as an expressive activity, while at the same time the spectator is able to observe what the camera focuses on: characters, objects, etc. We would like to illustrate this again with Vertigo. At the beginning of our exemplary scene, there is a shot with a camera movement starting at Scottie’s face and ending at a view of Madeleine from behind. In this way, on the one hand, the spectator sees the seen and experiences what the camera catches sight of: a man and a woman in a crowded restaurant. On the other hand, and at the same time, the spectator sees how Scottie and Madeleine are regarded ⫺ he sees the seeing. But in this way the spectator not only takes notice of the camera’s perceiving; this shot is also a movement that the spectator feels. In one long take, the camera moves slowly through the room, moving away and back from Scottie’s looking face and asserting his sight line, so that the spectator sees a horizontal gliding over an ensemble of fully occupied tables and servers; then the spectator feels how the camera pauses for a moment ⫺ before changing its direction, now moving forward in a very focused way. It is a very soft and slow approach, like being drawn towards someone, here towards one woman: Madeleine.
Moreover, the spectator hears the hearing, realized through a change in sound design: He hears how the room’s acoustic atmosphere ⫺ talking voices and the clattering of crockery and cutlery ⫺ is superseded by soft and melancholic string music right at the moment when the camera starts moving towards Madeleine. (See Fig. 163.2 for a scheme of this cinematic movement image.) Thus, in cinematic communication two perceptive acts are interwoven in a significantly different way than in the situation described by Merleau-Ponty: Though the film does not perceive the spectator as a counterpart, the film perceives something else. It is this ‘something else’ which the spectator perceives in watching the film. And the spectator perceives it always as something that is perceived: He experiences the perceptive act itself. This emphasizes the unique characteristic of cinematic communication. In the cinema the spectator does not perceive another’s visible body as he does in a conversation, but a concrete and subjective perspective on the world. One could say he sees and hears a conduct from an internal perspective that is different from his own, for it is a conduct, realized via cinematic movement patterns, that is accessible to him as the perception of an anonymous other (cf. Sobchack 1992: 128⫺143).

Fig. 163.2: Scheme of the cinematic movement image (Vertigo, Alfred Hitchcock, USA 1958; min 13:16⫺14:00)

5. Conclusion: Embodiment, affect, metaphor, and time

We have presented how neophenomenological film theory understands the spectator as an embodied subject continuously attuned to the film’s images. In this view, a spectator is conceived of as an embodied subject intertwined with the film, and film is considered as making visible and audible two communicative acts at once: Film presents perception scenarios (a perceiving act) while at the same time it shows compositions of movement patterns (an expressing act). In that sense, the film’s dynamic patterns are regarded as expressive acts in very much the same sense as a human gesture is an expressive body movement (cf. Horst et al. this volume; Greifenstein and Kappelhoff this volume; Kappelhoff and Müller 2011; Müller volume 1). The spectator’s attentional, mental, and sensing activities are synchronized throughout the film. Notably, the particular ways in which such a synchronization is realized address an aspect that remains rather implicit in Sobchack’s holistically designed model: the matter of time. However, we consider the fact that films are compositions in and of time to be essential: Embodied audio-vision is constitutively dynamic. Emphasizing the matter of time brings center stage the duration and temporal unfolding of images over the whole course of a film. To be more precise, we conceive of dynamic
structures as the basic mode linking the spectator and his experience to the film: We assume that only temporality brings into being the ‘tuning’ of spectator and film. Therefore, embodiment has to be thought of as a matter of permanent shifting, of a dynamic changing between being involved in the film and experiencing one’s own bodily resonances. Embodied audio-vision can thus be grasped as affective intensities: rhythmic mergings, time spans of tension and release. This is the way in which a cinematic movement transforms into the spectator’s feeling (Zuschauergefühl, see Kappelhoff and Bakels 2011). This line of thought brings in approaches that have shown how the temporal dimension of film relates to the affective resonances of spectators (Kappelhoff 2004; Voss 2011). The spectator is then conceived not only as a perceiving, sensing, and involved body but also as an affected one (for an overview, see Scherer, Greifenstein, and Kappelhoff this volume). Moreover, current empirical research on embodiment and film (for an overview, see Bakels this volume) strengthens the assumption that spectators traverse affective courses owing to the temporal structure of film reception. Such an understanding of an affective embodiment builds on Sobchack’s view of the general arrangement of film and spectator in the cinema setting. But it also goes beyond the general film-spectator relation, for it takes into account that each film is always a composition of very specific and individual audio-visual images. Therefore, it asks how film images address spectators concretely and differently: intensely or slightly, immersing or distancing, shocking or inspiring. This opens up for discussion how specific films, their genre poetics, and their aesthetics can be investigated ⫺ how these films make audiences laugh or cry by means of their complex temporal structures (Kappelhoff and Bakels 2011; Kappelhoff and Grotkopp 2012).
Another aspect that film theory has left largely unconsidered is how meaning making and affective embodiment interplay in film reception. Admittedly, Sobchack (1992, 2004) claims that the spectator in cinematic communication is of course also a sense-making subject, but she does not offer a way to specifically account for this dimension. As current theoretical and analytical attempts show (Kappelhoff and Müller 2011), it is metaphor research that offers a highly fruitful intersection for such an enterprise. This is because figurative phenomena address perceptive and experiential scenarios and embodied conceptualizations. Such a perspective paves the way to investigating audio-visual metaphors with regard to processes of basic meaning making in film (Schmitt, Greifenstein, and Kappelhoff this volume). The processes by which spectators of audio-visual media, on the basis of their corporeal and affective sensing, comprehend complex narratives, direct their attention, and activate meaning cognitively are a matter that is to be further investigated. It is a promising field for studying the nature of, and the relation between, embodiment, cognition, and affect in terms of dynamic situatedness.

6. References

Bakels, Jan-Hendrik this volume. Embodying audio-visual media. Concepts and transdisciplinary perspectives. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2048⫺2061. Berlin/Boston: De Gruyter Mouton.


Balázs, Béla 2010. Visible man or the culture of film. In: Erica Carter (ed.), Béla Balázs: Early Film Theory. Visible Man and the Spirit of Film, 1⫺90. Oxford: Berghahn Books. First published [1924]. Bordwell, David 1985. Narration in the Fiction Film. London: Methuen. Eisenstein, Sergej 1998. The montage of film attractions. In: Richard Taylor (ed.), The Eisenstein Reader, 35⫺52. London: British Film Institute. First published [1924]. Greifenstein, Sarah and Hermann Kappelhoff this volume. The discovery of the acting body. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2070⫺2080. Berlin/Boston: De Gruyter Mouton. Horst, Dorothea, Franziska Boll, Christina Schmitt and Cornelia Müller this volume. Gesture as interactive expressive movement: Inter-affectivity in face-to-face communication. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2112⫺2125. Berlin/Boston: De Gruyter Mouton. Kappelhoff, Hermann 2004. Matrix der Gefühle. Das Kino, das Melodrama und das Theater der Empfindsamkeit. Berlin: Vorwerk 8. Kappelhoff, Hermann and Jan-Hendrik Bakels 2011. Das Zuschauergefühl. Möglichkeiten qualitativer Medienanalyse. Zeitschrift für Medienwissenschaft 5(2): 78⫺95. Kappelhoff, Hermann and Matthias Grotkopp 2012. Film genre and modality. The incestuous nature of genre exemplified by the war film. In: Sébastien Lefait and Philippe Ortoli (eds.), In Praise of Cinematic Bastardy, 29⫺39. Newcastle upon Tyne: Cambridge Scholars Publishing. Kappelhoff, Hermann and Cornelia Müller 2011. Embodied meaning construction.
Multimodal metaphor and expressive movement in speech, gesture, and feature film. Metaphor and the Social World 1(2): 121⫺153. Merleau-Ponty, Maurice 1968. The Visible and the Invisible. Followed by Working Notes. Evanston: Northwestern University. First published [1964]. Merleau-Ponty, Maurice 2005. Phenomenology of Perception. London/New York: Routledge. First published [1945]. Merleau-Ponty, Maurice 2007. The child’s relation with others. In: Ted Toadvine and Leonard Lawlor (eds.), The Merleau-Ponty Reader, 143⫺183. Evanston: Northwestern University. First published [1951]. Metz, Christian 2000. The Imaginary Signifier. Psychoanalysis and the Cinema. Bloomington: Indiana University. First published [1977]. Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 202⫺217. Berlin/Boston: De Gruyter Mouton. Münsterberg, Hugo 2002. The photoplay ⫺ a psychological study. In: Allan Langdale (ed.), Hugo Münsterberg on Film. The Photoplay ⫺ A Psychological Study and Other Writings, 45⫺162. New York/London: Routledge. First published [1916]. Scherer, Thomas, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movements in audio-visual media: Modulating affective experience. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2081⫺2092. Berlin/Boston: De Gruyter Mouton.


Schmitt, Christina, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movement and metaphoric meaning making in audio-visual media. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2092–2112. Berlin/Boston: De Gruyter Mouton.
Shannon, Claude Elwood and Warren Weaver 1963. The Mathematical Theory of Communication. Urbana: University of Illinois.
Sobchack, Vivian 1992. The Address of the Eye. A Phenomenology of Film Experience. Princeton: Princeton University.
Sobchack, Vivian 2004. Carnal Thoughts. Embodiment and Moving Image Culture. Berkeley: University of California.
Voss, Christiane 2011. Film experience and the formation of illusion: The spectator as ‘surrogate body’ for the cinema. Cinema Journal 50(4): 136–150.

Christina Schmitt, Berlin (Germany)
Sarah Greifenstein, Berlin (Germany)

164. The discovery of the acting body

1. Introduction: Affect, expression, and acting
2. The “sensitive gesture” in 18th century acting theories
3. Gesture going beyond speech
4. The temporal unfolding and becoming visible of affect
5. Artificial gestures on stage and real tears in the audience
6. Expressive movement – an aesthetic mode of perception
7. References

Abstract

The notion of gesture as a medium of affect expression goes back to the age of the late Enlightenment and with it to aesthetic thought on the art of acting in theater. While in the dramatic performance practices of the Baroque a system of acting codes predominated, with normative rules and a fixed set of representational affect displays, theater in the age of Enlightenment broke with this tradition, discovering the acting body and with it the idea of making visible a subjectivity, the illusion of a natural feeling. This chapter points out how theories of aesthetics and acting (Lessing, Diderot) elaborated the discovery of the actor’s body and how this thinking is linked to later art and entertainment practices like theatrical melodrama and moving image culture.

[Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2070–2080]

1. Introduction: Affect, expression, and acting

Gestures and other forms of expression are known to be inherently affective phenomena, as they provide the ability to demonstrate subjective states and to convey the “understanding of the other” through behavior and embodied interaction. The way social practices are organized through non-verbal communication like expressive movements seems evident when looking at the face-to-face conversations of daily life (Horst et al. this volume; Müller volume 1). But when it comes to artistic forms of expression such as painting, film, or theater, which are well known for the way they involve viewers affectively, the matter of expressivity seems more complicated. The question here is how feelings are modulated by means of a planned and artificial construct rather than through a ‘natural’ or spontaneous kind of embodiment. The most discussed example of this issue is the theater actor’s ability to demonstrate most convincingly will, affects, and attitudes that he or she does not necessarily feel. Therefore, the actor has become the most prominent figure of thought in theories on expression; at different times, scholars from various fields have discussed the issue using the example of dramatic performance in theater and film (Balázs [1923] 1982, [1924] 2010; Bühler 1933; Engel [1786] 1994; Kappelhoff 1998, 2001, 2004a, 2004b; Löffler 2004; Plessner [1948] 2003). This paradox is pointed out by the philosopher Helmuth Plessner in his theoretical essay on the anthropology of acting: Plessner describes how an illusionary and immersive theater reception takes place, how during a performance the spectator understands the experienced feelings as happening on stage, forgetting that the actor’s speech and gestures had been constructively planned and that the feelings perceived in the actor’s body are instead based on the spectator’s own affectivity while attending the performance (Plessner 2003). Such a phenomenon is what Plessner calls an ‘expression image’ (Ausdrucksbild, Plessner 2003: 408; the term is very similar to what he recurrently calls a ‘movement image’, Bewegungsbild, Plessner [1925] 1982: 78; see also Kappelhoff 2004a). This is a dynamic image that becomes visible as behavior.
In the case of theater performance, it is orchestrated through the actor’s bodily movements, like facial expression, gaze, walk, voice, and gesture. Acting, from his point of view, can be understood as more than just an art. In Plessner’s sense, dramatic performance can be conceived of as a reflection of the exhibiting activity of behavior that is known to all humans in social communication: the need to hide feelings or the need to show an affective behavior which is not felt (Plessner 2003: 407–412). In Plessner’s notion, acting becomes a cultural form of thinking about human existence in its twofold “eccentric position” (Plessner 2003: 417; for a definition of the term, see Plessner [1941] 1970: 32–47), about the paradox that a “human being always and conjointly is a living body” and at the same time “has this living body as this physical thing” (Plessner [1941] 1970: 34–35), or in other words: firstly, to feel one’s own bodily and affective sensations and to express them to others, and secondly, to be able to control bodily expressions deliberately. Moreover, Plessner’s understanding of acting aims at a second crucial point, that of intersubjectivity: what is at center stage is a visible image of a feeling, a vivid presence in movement which is both bodily and perceptively existent, linking the performed gestures of the actor to the spectator’s act of reception (Plessner 2003). Such an artistically staged expressive movement is thus more than a physical action: through a temporal unfolding, stage and audience become synchronized, and the actor’s movement is transformed into an aesthetic mode of perceiving and feeling (for the theoretical notion of ‘expressive movement’, Ausdrucksbewegung, see Kappelhoff 2001, 2004a, 2004b; Kappelhoff and Müller 2011; Scherer, Greifenstein, and Kappelhoff this volume).
Such a thought on human embodiment generally, and on acting in particular, spells out an idea of the nexus between affect and expression that goes back to 18th century thought


in the age of European Enlightenment. The constellation explained by Plessner recalls Diderot’s The Paradox of Acting, which claims that the actor should act rationally in order to move the audience (Diderot [1778*/1830] 1957). What was discovered in the second half of the 18th century in aesthetic theory by scholars like Denis Diderot, Gotthold Ephraim Lessing, and Johann Jakob Engel was a new theatrical performance practice that focused on the actor’s body and its making visible of subjective feelings and individual sensing. Drama had turned away from the Baroque theater’s affect depiction, where an affect was mainly considered to be represented by a rhetorical posture or meaningful declamation taken from a fixed set of affect classifications. The theater of the late Enlightenment instead shed light on the presence of the body and on the staging of a new form of subjectivity. This development was not only a theatrical or artistic one: what emerged in the new form of theater practice and in aesthetic thought on acting was an individual feeling, the articulation of a psychological play that had until then been mainly unknown. This can be described as historically grounding the modern understanding of human feelings and affects by means of cultural and artistic forms of communication. In the following, we will give an overview of selected aspects of aesthetics and dramatic performance in the 18th century, outlining what is meant by the “sensitive gesture” and how a theater practice was reflected upon that dealt intensively with the opposition between language and non-verbal behavior. Central to the discovery of the acting body was also the notion of a certain temporality of affect, which alone made an immersion of theater spectators possible.
We will conclude by contextualizing our topic with current research on embodiment and audio-visual media, as such a view provides insights into the historical and cultural dimension of art and entertainment practices with regard to body, language, and communication.

2. The “sensitive gesture” in 18th century acting theories

In the second half of the 18th century, writings on aesthetics, acting, and theater brought forward a thinking that stood in fruitful dialog with scholarly thought on affect in medicine and physiology. The rise of sentimentalism in literature, philosophy, and the arts during the Enlightenment was not a movement standing in strict opposition to rationalism. Instead, essays on aesthetics mirrored an emerging notion of the primacy of sensory perception for reason and thought, and a changed understanding of nature as presenting itself immediately through the senses was discussed (Fischer-Lichte 1992: 54). Such an idea also refers to an anthropological knowledge of how psychological and physical processes interdepend. Different realms of knowledge called the Cartesian dichotomy of body and soul into doubt. In a way, aesthetics, philosophy, and ethics as well as physiology, medicine, and the emerging psychology had a shared aim: sketching out an integral vision of human existence to demonstrate that visible motions of the body are analogous to, if not bound to, sensed moods and affective resonances (Košenina 1995: 9). In line with these humanist and scientific thoughts of a holistic view of human beings, theories of theater and aesthetics in the mid and late 18th century discovered a new idea of subjectivity. But this idea was not restricted only to the represented theater roles, which shifted from the Baroque’s fixed system of class representatives to the staging of individuals and ordinary people of the bourgeoisie, with their thoughts, sensations, and personal problems. The new theater art turned against a traditional Baroque drama


practice, where actors had to represent affects according to a catalog, a coded, rhetorical, and normative system of bodily positions and actions (e.g., the classification of the German acting teacher Franciscus Lang, see Kindermann 1959: 479–483; cf. Barnett 1987). The Baroque rhetoric of acting, proclaiming bodily movements to be precise, recognizable, and highly explicit in order to depict affect most convincingly, was strictly opposed to what the aesthetic theories of Lessing and Diderot described. In the latter texts, the body was discovered as something to be looked at sensuously, in order to observe a certain behavior, to regard an individual way of walking, talking, and moving. Such a new thinking measured out the limits of representation and language, as it included sensory perceptions of the world as crucial. This, of course, is only to be understood in the cultural and historical context of the 18th century, where the rising bourgeoisie reflected its self-understanding in new concepts of the arts. Sensibility was conceptualized as a new way of thinking and living within the broader movement of Enlightenment, bringing forward an ethical mode of questioning sentiments and feelings (Lloyd 2013). Among other influences, the new thinking had roots in English sensualism and moral philosophy (Fick 2000: 50; Fischer-Lichte 1992: 54). Diderot understood sensibilité as an idea for pervading life and culture more intensively in order to develop virtue and moral sense (Diderot and D’Alembert [1765] 1972: 936). What in English literature was called “sentimentality” was translated by Lessing and Bode into German as “Empfindsamkeit” (Fick 2000: 51).
The term “sensitive gesture” summarizes two similar concepts (empfindsamer Gestus, Kappelhoff 2004a: 63–83): Lessing’s ‘individualizing gesture’ (individualisierender Gestus, Lessing [1769] 1954, 4th piece: 28) and Diderot’s ‘sublime gesture’ (gestes sublimes, Kappelhoff 2004a: 68; Diderot [1751] 1984: 34, 67). Both Lessing and Diderot considered the ordinary gesture observable in daily social behavior as a starting point for imitation, but not as appropriate for the stage. Instead, they believed it was the actor’s task to transform it artistically, shaping it into something more precise, evident, clear, and present for theater audiences. Both described this form of acting as bringing to vision a highly artificial illusion of a natural and individual behavior. Diderot takes daily conduct as material for acting, but he claims that the actor should not only imitate ordinary conduct but orchestrate and form it in order to highlight its significance, to change it into the stage’s sublime gesture (Kappelhoff 2004a: 68). With his notion of the “individualizing gesture”, Lessing aims at the paradox mentioned at the beginning of this article: human behavior as permanently oscillating between rhetorically used sign practices and forms of affect expression. Lessing underlines that actors who are able to embody such an ambivalence of conduct often combine two opposing forces in their performance, when, for instance, both an impulsive passion and a withholding or taming of it are to be perceived through the actor’s body, or, conversely, when a relaxing movement has to be balanced through animated and vital movements. Both opposing forces become visible, for instance, in facial expression as a complex composition (Lessing 1954, 3rd piece: 24–25).
Such an artistic way of investigating, researching, and developing human gesture and expression as a permanent shift between different dimensions of conduct leads to the actor’s body demonstrating movements that seem to be involuntary, through which, in the arrangement of theatrical stage and spectators, the illusionary image of a natural sensation becomes manifest. Of course, this is not an authentic affect of the performer but a highly artistic construction of making affects visible. It was not least Lessing himself who gave a quite detailed description of this newly arisen acting style, using the example of the actress Sophie Friederike Hensel in the


role of Sara (in his play Miß Sara Sampson, 1755). Seeing the play again ten years after the premiere, Lessing is enthusiastic about and impressed by the way the actress performs the dying heroine: how she picks at her clothes with her fingers, a constant twitching of her hands making visible a sentiment of nervousness, creating through bodily means, through the performance of an unsteady motion, the image of a restless soul. Lessing describes the performance of the dying metaphorically as a flickering light that only a few instants later is extinguished (Lessing 1954, 13th piece: 75; cf. Košenina 1995: 120). The description aims at highlighting an aesthetic sensing that the actress developed through expressive means, making it possible to experience even subtle details of bodily movement, like the twitching and picking. Such acting involves the spectator, so that the gestures on stage carry themselves forward into the perception of the spectator, intensifying the capability to sense and to feel compassionately. The affective course of the spectator comprises a certain pleasure in being affected by the acting and theatrical mise-en-scène as well as in being able to feel ‘compassion’ (Mitleid) with the imagined character. Such a highly complex composing of feelings in the spectator’s course of attending the drama, and a certain act of reflecting on this procedure, is what Lessing and Diderot thought of as the artistic and aesthetic cultivation of an activity of sensing (Fick 2000: 45, 138–139; Kappelhoff 2004a: 69–83). The “sensitive gesturing” of theater experience was a design of the new bourgeois subjectivity, aiming at a cultural form of affecting the senses in order to make spectators develop a sense of social and moral reasoning, a humanistic practice so to speak.

3. Gesture going beyond speech

The acting theories of Lessing and Diderot understood the “sensitive gesture” as reflecting an opposition between bodily and verbal articulation in stage action. The actors’ expressive means of bodily movement were considered to go together with being overwhelmed, being incapable of talking and rationalizing, a withdrawal from conscious verbality. The notion behind the ancient rhetorical term eloquentia corporis (which Lessing, Engel, and others refigured, see Barnett 1987: 330; Bühler 1933: 30–31; Košenina 1995) defines this paradox as a bodily form of language that was claimed to be more immediate and universally understood. But Lessing and Diderot, with their thoughts on acting, did not have in mind a body language system with fixed meanings for different affects. Instead, they understood bodily articulation to communicate differently than speech. As an example, the writer William Cockin, influenced by English sensualism, wrote of the famous English actor David Garrick that his acting style was another language,

which instead of the ear, addresses itself to the eye, thereby giving the communications of the heart a double advantage over those of the understanding, and us a double chance to preserve so inestimable a blessing. This language is what arises from the different, almost involuntary movements and configurations of the face and body in our emotions and passions, and which, like that of tones, every one is formed to understand by a kind of intuition (William Cockin [1775] 1974: 90–91).

Again the term “involuntary movements” is mentioned as an ideal of acting that emerges directly from observing the body in motion, orchestrating an image of subjectivity. Diderot, too, writes about Garrick’s complex ways of transforming one affect into another in only a few seconds (Diderot 1957). In this sense, Diderot grasps gesture to be


opposed to verbal language: his Letter on the Deaf and Dumb (1751) is a contribution to the debate on the origins of language. The basic thought of the opposition of gesture and language is grounded here in the idea that non-verbal expressive phenomena are sensorily perceived and thus more direct, while language is a mediated and learned form of communication. He compares the different articulatory modes of gesture and speech in this essay, going so far as to say that gesture is in a way prior to verbal language, being understood by everyone. It is in this sense that Diderot describes a performance of Lady Macbeth in Shakespeare’s play, stating that the gestures of the actress were able to go beyond verbal articulatory means. He describes the somnambulant figure who, with closed eyes, silently moving, re-enacts the washing of blood from her hands, an unconscious act of dealing with the crime of a murder dating back many years. With admiration for the actress, although he does not give her name, Diderot is fascinated by the way silence and bodily means are able to express remorse in a very different way than dialogue would have done. This is what he calls ‘sublime gestures’ (erhabene Gebärden, Diderot 1984: 34). Noteworthy here is the fact that Diderot and Lessing did not reject verbality or prefer bodily forms of articulation, but that they discovered a certain artistically staged affectivity that was articulated not through language alone but predominantly through bodily performance. The opposition between language and gesture, or likewise between the rhetorical use of gesture and an expressive form of bodily movement, was not to be evaluated or judged but was considered a paradox, one investigated in the thinking on acting of Lessing and Diderot, both being concerned with the artistic illusioning of feelings for theater audiences.
Furthermore, this paradox relates to the theoretical distinction between expression and representation in artistically created phenomena of conduct. This refers, for example, to what Engel pointed out in Ideen zu einer Mimik with the terms ‘expressive and painting gestures’ (ausdrückende und malende Gebärden, Engel 1994: 40–70) and to what Bühler later elaborated in his expression theory (Bühler 1933: 40–41) and then refigured in terms of his organon model of language and communication (Bühler [1934] 2011: 34–35; Müller volume 1). In expression theory, the dichotomy of expression and representation in gesture has a long history: it mixes with all kinds of theoretical assumptions about psychophysical processes, as when, for instance, it is taken up again in 19th century psychological research as the dichotomy of deliberately vs. undeliberately performed bodily movements (e.g., in Wundt’s research, see Löffler 2004: 176–178). Such a dichotomy is still present in today’s psychological terms for deliberate vs. spontaneous forms of emotion expression (Meuter 2006: 270). Some positions claim that the distinction has even been broadened and further transformed into the discussion of emotion expression as universal vs. culturally dependent (Gumbrecht 2000: 417; Meuter 2006).

4. The temporal unfolding and becoming visible of affect

The “sensitive gesture” was not only understood as a matter of opposing gesture and speech. It was also a matter of duration, of the temporal organization of the performing act, and with it the immersion of the spectator. What the drama and aesthetic theories of the 18th century rejected were affective poses produced within an instant. Instead, artistic expressive movements were expected to unfold, to develop temporally in order to be perceivable (Kappelhoff 2004a; Košenina 1995: 2).


It was Lessing who came up with this thought of a gestural transformation in which the visible action of the performer’s body became more and more important. He summarized this idea with the term ‘transitory painting’ (transitorische Malerei, Lessing 1954, 5th piece: 34). He describes how the actor’s body step by step becomes transformed into something else, which is no longer the actor and not understood only as the character or role. Instead, Lessing brings into focus the unfolding of gesture and its transformation into a sentiment, a feeling that is realized in the theater spectator’s temporal act of experience (Lessing 1954, 3rd piece: 22–23). Engel pursued Lessing’s idea, stating that the thoughts presented in a drama come into existence only within the temporally unfolding performance (“Die Personen des Dramas tragen Gedanken vor, die eben erst entstehen” [The characters of the drama present thoughts that are only just coming into being], Engel, cited in Bühler 1933: 45; Jeschke 1992: 105–106). Such an act shapes the spectator’s construction of a felt presence, realized through the visualization of the fictive scenario (Vergegenwärtigung, Bühler 1933: 47). The affect, though, only becomes present when it is allowed to demonstrate itself in its course, transitoriness, and becoming (Lessing 1954, 3rd piece; cf. Kappelhoff 2004a: 84–85). Essential for the idea of the “sensitive gesture” was the act of transforming or modifying: the time in which an actor is altered into the gestural body is intrinsically bound to the time span in which the spectators see the character’s affects arising. ‘Modification of the soul’ (Modificationen der Seele) is the term Lessing uses for the transformation of affects going along with the change of bodily movements (cf. Kappelhoff 2004a: 66). Acting on stage was thus conceived of as being able to synchronize the experience of the spectators, their attentional foci, with the movements of gesturing, the intonations of voice, or the rhythm of speaking and pausing.
Crucial for this thought was the refusal to understand time as a chronological succession of events. Instead, gesture as a temporal form was built on the notion of a becoming, the experiential quality of change, of transition, and of duration. Accordingly, feelings themselves were considered to be temporal processes.

5. Artificial gestures on stage and real tears in the audience

The new understanding of expressive phenomena is based on the gesture’s ability to move theater spectators affectively. Lessing already describes how the actor is to conceive of affect expression: for expressing ‘rage’ (Zorn), for example, he claimed that an actor did not actually have to sense the feeling himself but only had to study in detail and imitate its special dynamic form of body movement (he describes, for instance, the shaking lips, the play of the eyebrows, the energetic footstep) in repetitive exercises. After much (also technical) practice, the rage would arise from the body and then also settle in the heart as a feeling, enabling him to act even more convincingly (Lessing 1954, 3rd piece: 23). The core thought is the artistic development of a bodily action, that of finding and staging a dynamic embodiment of affect in its ‘external features’ (äußerliche Merkmale, Lessing 1954, 3rd piece: 23). With such a constructive view of affect through body movement, he formulates what would become a main thought in 19th century emotion theory (e.g., in William James 1884). The most important point in these acting concepts, though, went beyond the individual actor: the paradox of the artificial construction of gestures and their ability to evoke real tears on the side of the spectators. Instead of comprising only the binary relation between actor and character, Diderot, in his writing The Paradox of Acting (1957), introduces a third element into acting theory: the theater spectator. In this most


influential text the author lets two speakers argue about drama and acting practices. One of the best-known passages of the essay is the statement that a good actor does not really feel what he pretends to feel, but should rationally construct his actions and gestural performances in order to move the theater spectators. Some scholars have argued that Diderot sketches out the rational, “cold” actor as the more professional one, and that the notion of a sensitive form of acting is thereby rejected (e.g., Roselt 2005: 134–135). But what has been largely underestimated from this point of view is the extraordinarily modern understanding that governs Diderot’s text. According to Diderot’s concept, what mediates between actor and spectator is not the character as an abstract idea, but the actor’s bodily movements as vivid expression (without the actor needing actually to feel what he performs, as mentioned at the beginning with Plessner): theatrical gesture was thus understood as an interaffective phenomenon. With this shift in acting theory – from the production of gesture to the perception and experiencing of gesture – drama performances were measured by their capacity for creating concrete scenarios, for making complex feelings visible and audible. Through this notion, the oppositions artificial vs. natural (of expression) and affective vs. rational (of the actor), which had once been thought of as concerning only the individual performer, are themselves shifted onto the whole theater space: the idea is that a highly artistic performance is orchestrated on stage while the audience reacts to it with real tears (Kappelhoff 2004a: 63–83). Such an interaffective view of theatrical gestures is taken up again by Kleist in his essay on the marionette theater (Kleist [1810] 1985) and in the 20th century by Meyerhold and Eisenstein in terms of biomechanics (Eisenstein and Tretyakov [1922] 1996).
With such a perspective on acting, which addresses the presence of the spectator’s own affectivity, a new form of art and entertainment practice can be described, one that can also be found in melodramas of the late 18th century, e.g., Jean-Jacques Rousseau’s Pygmalion (1772) or Johann Christian Brandes’ Ariadne auf Naxos (1775). What these early melodramas had in common was that verbal articulation often seemed to be limited. These plays staged scenarios of the protagonist’s inwardness, e.g., the failure of the melodramatic heroine to express herself verbally, her inability to explain, her becoming speechless. Such a notion of powerlessness went along with the separation of dialog and non-verbal acting, where words and gesture no longer cohered. The new form of gestural dynamics often substituted for dialog in moments of intensified drama. This happened when, for instance, a narrative action was interrupted and replaced by a gesture, by the temporally growing and unfolding visibility of affect. This new way of melodramatic gesture and theatrical staging was similar to what Lessing’s and Diderot’s essays pointed out as the new acting style (Kappelhoff 2004a). An awareness of one’s own being affected came into play, and only on that basis was the spectator to imagine the character’s attitudes and the fictional world.

6. Expressive movement – an aesthetic mode of perception

What emerged in the second half of the 18th century was a certain form of acting practice, as well as reflection on it, in which theatrical performing aimed at making visible the actor’s body and with it the various artistically staged images of affects and sensations. In the focus of attention was a sensuous way of looking at bodily expressions as well as a dynamic act of sensing, which can be summarized as the “sensitive gesture”. The actor’s


body was no longer subordinated to speech; instead, the artistic staging of affective gesture emancipated itself, now and then substituting for language, or even extending its expressivity onto the whole stage, merging aesthetically with music, light, and mise-en-scène. The most important shift, though, was the declining importance of the actor’s own feelings. What became central was the actor’s bodily based and artistically staged expressive movement, and with it an aesthetic mode of perception, the ability to modulate the spectators’ affective resonances (Kappelhoff 2001, 2004a, 2004b). But the theories of Lessing and Diderot did not only concern the realm of theater; what was discovered in acting was nothing less than a specific imagination of a certain subjectivity, of what an individual feeling might be. This can be understood as a ground for the modern understanding of human affectivity and embodiment. The ways we feel, the ways we are affectively moved, have a cultural history: over the course of time, they are developed, discovered, and practiced by people through diverse forms of art, entertainment, and communication. For example, it should be noted that modern entertainment practices of the 20th and 21st centuries – like film and audio-visual media – are in a way related to these 18th century aesthetic concepts (Kappelhoff 2004a). A film, for example, confronts its spectators with images that unfold expressively, similarly to gestures (Kappelhoff and Müller 2011; Scherer, Greifenstein, and Kappelhoff this volume; for a deeper account of this perspective on gesture, see also Horst et al. this volume; Müller 2013). From such a perspective, a film scene can be regarded as being embodied by spectators through concrete acts of perception and expression (Bakels this volume; Schmitt and Greifenstein this volume; Sobchack 1992).
The feelings that a film spectator, sitting in a movie theater, goes through are based on expressive features similar to those spelled out regarding the “sensitive gesture”. The staging and shaping of affects, the temporal unfolding not via a body movement but through audio-visual images, intrigues spectators across a broad range of genres and audio-visual presentational forms. Such a perspective focuses on how art and entertainment practices develop certain ways of feeling and thinking. It addresses how felt sensations are to be considered not only as an individual and bodily based matter, but as feelings intertwined with media aesthetics as a form of cultural practice.

7. References

Bakels, Jan-Hendrik this volume. Embodying audio-visual media: Concepts and transdisciplinary perspectives. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2048–2061. Berlin/Boston: De Gruyter Mouton.
Balázs, Béla 1982. Die Erotik der Asta Nielsen. In: Béla Balázs, Schriften zum Film, Volume I. Edited by Helmut H. Diederichs, Wolfgang Gersch and Magda Nagy, 184–186. München/Wien: Hanser. First published [1923].
Balázs, Béla 2010. Visible man or the culture of film. In: Erica Carter (ed.), Béla Balázs: Early Film Theory. Visible Man and the Spirit of Film, 1–90. Oxford: Berghahn Books. First published [1924].
Barnett, Dene 1987. The Art of Gesture: The Practices and Principles of 18th Century Acting. Heidelberg: Winter Universitätsverlag.
Bühler, Karl 1933. Ausdruckstheorie. Das System an der Geschichte aufgezeigt. Jena: Gustav Fischer.
Bühler, Karl 2011. Theory of Language: The Representational Function of Language. Amsterdam/Philadelphia: John Benjamins. First published [1934].


Cockin, William 1974. The Art of Delivering Written Language; Or, an Essay on Reading. In which the Subject is Treated Philosophically as well as with a View to Practice. London: Scolar Press. First published [1775].
Diderot, Denis 1957. The Paradox of Acting. New York: Hill and Wang. Written [1778], first published [1830].
Diderot, Denis 1984. Brief über die Taubstummen. In: Denis Diderot, Ästhetische Schriften, Volume 1. Edited by Friedrich Bassenge, 27–97. Berlin: Das Europäische Buch. First published [1751].
Diderot, Denis and Jean-Baptiste le Rond d'Alembert 1972. Artikel aus der von Diderot und d'Alembert herausgegebenen Enzyklopädie. Edited by Manfred Naumann. Frankfurt/Main: Röderberg. First published [1765].
Eisenstein, Sergej and Sergej Tretyakov 1996. Expressive movement. In: Alma Law and Mel Gordon (eds.), Meyerhold, Eisenstein and Biomechanics: Actor Training in Revolutionary Russia, 173–192. London: McFarland. First published [1922].
Engel, Johann Jakob 1994. Ideen zu einer Mimik. Wuppenau: E & A Verleger. First published [1786].
Fick, Monika 2000. Lessing-Handbuch: Leben – Werk – Wirkung. Stuttgart/Weimar: Metzler.
Fischer-Lichte, Erika 1992. Entwicklung einer neuen Schauspielkunst. In: Wolfgang F. Bender (ed.), Schauspielkunst im 18. Jahrhundert: Grundlagen, Praxis, Autoren, 51–70. Stuttgart: Franz Steiner Verlag.
Gumbrecht, Hans Ulrich 2000. Ausdruck. In: Karlheinz Barck, Martin Fontius, Dieter Schlenstedt, Burkhart Steinwachs and Friedrich Wolfzettel (eds.), Ästhetische Grundbegriffe. Historisches Wörterbuch in sieben Bänden, 416–430. Stuttgart/Weimar: Metzler.
Horst, Dorothea, Franziska Boll, Christina Schmitt and Cornelia Müller this volume. Gesture as interactive expressive movement: Inter-affectivity in face-to-face communication. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2112–2124. Berlin/Boston: De Gruyter Mouton.
James, William 1884. What is an emotion? Mind 9(34): 188–205.
Jeschke, Claudia 1992. Noverre, Lessing, Engel. Zur Theorie der Körperbewegung in der zweiten Hälfte des 18. Jahrhunderts. In: Wolfgang F. Bender (ed.), Schauspielkunst im 18. Jahrhundert: Grundlagen, Praxis, Autoren, 85–112. Stuttgart: Franz Steiner Verlag.
Kappelhoff, Hermann 1998. Empfindungsbilder: Subjektivierte Zeit im melodramatischen Kino. In: Theresia Birkenhauer and Anette Storr (eds.), Zeitlichkeiten: Zur Realität der Künste, 93–119. Berlin: Vorwerk 8.
Kappelhoff, Hermann 2001. Bühne der Emotionen, Leinwand der Empfindung: Das bürgerliche Gesicht. In: Helga Gläser, Bernhard Groß and Hermann Kappelhoff (eds.), Blick, Macht, Gesicht, 9–41. Berlin: Vorwerk 8.
Kappelhoff, Hermann 2004a. Matrix der Gefühle: Das Kino, das Melodrama und das Theater der Empfindsamkeit. Berlin: Vorwerk 8.
Kappelhoff, Hermann 2004b. Unerreichbar, unberührbar, zu spät: Das Gesicht als kinematografische Form der Erfahrung. montage AV 13(2): 29–53.
Kappelhoff, Hermann and Cornelia Müller 2011. Embodied meaning construction: Multimodal metaphor and expressive movement in speech, gesture, and feature film. Metaphor and the Social World 1(2): 121–153.
Kindermann, Heinz 1959. Theatergeschichte Europas. Das Theater der Barockzeit, Volume 3. Salzburg: O. Müller.
Kleist, Heinrich von 1985. Über das Marionettentheater. In: Heinrich von Kleist, Über das Marionettentheater. Aufsätze und Anekdoten, 7–16. Frankfurt/Main: Insel Verlag. First published [1810].
Košenina, Alexander 1995. Anthropologie und Schauspielkunst. Studien zur 'eloquentia corporis' im 18. Jahrhundert. Tübingen: Niemeyer.
Lessing, Gotthold Ephraim 1954. Hamburgische Dramaturgie. In: Gotthold Ephraim Lessing, Gesammelte Werke, Volume 6. Berlin: Aufbau Verlag. First published [1769].


Löffler, Petra 2004. Affektbilder. Eine Mediengeschichte der Mimik. Bielefeld: Transcript.
Lloyd, Henry Martyn (ed.) 2013. The Discourse of Sensibility: The Knowing Body in the Enlightenment. Cham/Heidelberg/New York/Dordrecht: Springer.
Meuter, Norbert 2006. Anthropologie des Ausdrucks. Die Expressivität des Menschen zwischen Natur und Kultur. München: Wilhelm Fink.
Müller, Cornelia volume 1. Gestures as medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1), 202–217. Berlin/Boston: De Gruyter Mouton.
Plessner, Helmuth 1970. Laughing and Crying: A Study of the Limits of Human Behavior. Evanston: Northwestern University Press. First published [1941].
Plessner, Helmuth 1982. Die Deutung des mimischen Ausdrucks. Ein Beitrag zur Lehre vom Bewußtsein des anderen Ichs. In: Helmuth Plessner, Ausdruck und menschliche Natur. Gesammelte Schriften VII. Edited by Günther Dux, Odo Marquart and Elisabeth Ströker, 67–13. Frankfurt/Main: Suhrkamp. First published [1925].
Plessner, Helmuth 2003. Zur Anthropologie des Schauspielers. In: Helmuth Plessner, Ausdruck und menschliche Natur. Gesammelte Schriften VII. Edited by Günther Dux, Odo Marquart and Elisabeth Ströker, 399–418. Frankfurt/Main: Suhrkamp. First published [1948].
Roselt, Jens (ed.) 2005. Seelen mit Methode. Schauspieltheorien vom Barock- bis zum postdramatischen Theater. Berlin: Alexander Verlag.
Scherer, Thomas, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movements in audio-visual media: Modulating affective experience. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2081–2092. Berlin/Boston: De Gruyter Mouton.
Schmitt, Christina and Sarah Greifenstein this volume. Cinematic communication and embodiment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2061–2070. Berlin/Boston: De Gruyter Mouton.
Sobchack, Vivian 1992. The Address of the Eye: A Phenomenology of Film Experience. Princeton: Princeton University Press.

Sarah Greifenstein, Berlin (Germany)
Hermann Kappelhoff, Berlin (Germany)


165. Expressive movements in audio-visual media: Modulating affective experience

1. Introduction: Being touched by moving images
2. How do audio-visual images move spectators?
3. The concept of expressive movement
4. How do expressive movements unfold temporally?
5. Affect modulation in different audio-visual media
6. Conclusion
7. References

Abstract

Following the concept of expressive movement, we understand film and other audio-visual media as being expressive in the sense that the images address the audience by unfolding temporally and gestalt-like, as gestures do in face-to-face communication. Audio-visual expressivity is understood not as the actor's or director's articulation of feeling, but as a property of audio-visual images: their capacity to shape the perceptive, affective, and embodied involvement of spectators. Affect modulations through films, television series, or news reports are primarily grasped on the level of how aesthetic means, like camera movement, editing, or sound, address the spectators' embodied experiences.

1. Introduction: Being touched by moving images

Film and other presentational forms of audio-visual culture have not only established sophisticated means of telling stories and constructing impressive architectures of fictive and diegetic worlds; they are also able to shape complex feelings on the side of spectators. Some films tend to amuse their audience, others make spectators feel outraged about the way a topic is depicted, and still others set a melancholic or contemplative mood. Accordingly, each genre film promises a certain affective experience: being moved is one main expectation of moviegoers, reflected in the decision whether to see a comedy, a romance, a horror film, or a thriller. But how do film and other audio-visual media address the feelings of their spectators? What basic aspects of film are relevant for these processes, and on what levels do audio-visual images interact with viewers? How can affective resonances be related to the very concrete and situated act of sitting in a movie theater and experiencing light projections and sound reproductions? From a psychological point of view, those questions cannot be addressed on all levels: approaches associated with a concept of distinct emotions face the problem of linking preformed and static emotional categories to dynamic audio-visual images. Instead, in this chapter we would like to focus on the aesthetics of film, which we understand as highly relevant for the shaping of complex affective courses and feelings on the side of spectators (Zuschauergefühl, see Kappelhoff and Bakels 2011; see also Kappelhoff 2004a). We assume that film images organize the perceptive processes of spectators dynamically, as they unfold temporally during film reception: while, for example, one scene builds strong tensions and attentional foci, a subsequent scene may relieve the suspense after a few minutes.
Müller, Cienki, Fricke, Ladewig, McNeill and Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), Berlin/Boston: De Gruyter Mouton, 2080–2092.

From this perspective, film can be analyzed not


only on the level of narrative plot and character constellation, but as a complex affective dramaturgy, in other words, as a temporal course that the spectators experientially go through. Furthermore, within the development of a scene, audio-visual images unfold as movement patterns that dynamically structure the process of watching. The way a scene unrolls in complex aesthetic figures of soundscapes, lighting changes, montage sequences, or camera work reveals a certain dimension of movement that realizes itself only in the perception of the spectator. Such an aesthetic addressing of the perceptive, affective, and comprehending activity of the spectator can be understood as cinematic expressive movement (Kappelhoff 2004a). In the following, we give an overview of the concept of expressive movement in film. To that end, we contextualize this take on affect and feeling within the research field on emotions and film (for an overview, see Hediger 2006). Furthermore, we demonstrate, using a specific film example, how the temporal dimension is a decisive property of audio-visual media for forming affective resonances in spectators. Finally, the chapter outlines how affect modulation in different audio-visual media can be approached through a comparative view.

2. How do audio-visual images move spectators?

Ever since the invention of moving images, film theory has intensively discussed the issue of how audio-visuals, and films in particular, are able to arouse emotions and shape the affective experiences of their audience. Several current approaches consider narrative settings, actions of characters, and plot developments as decisive for fiction film to evoke emotions (Grodal 2009; Plantinga 2009; Tan 1996). These cognitively oriented theories conceptualize the spectator's reactions to a film predominantly as a conscious activity, controlled by mental operations of hypothesizing and constructing schemata. Whether a spectator finds a character on screen likeable or not, feels with him or not, is assumed to depend on the cognitive evaluation of a narrative situation. In short, these theories regard the narration as predominant, while aesthetics seem to be only a subsidiary means. Although Ed Tan (1996) distinguishes between emotions evoked by fiction and those stimulated by evaluating the aesthetics of a film, in his approach it is mainly the spectator's conscious and cognitive activity that is supposed to account for emotional processes. Thus, film aesthetics are either regarded as supporting means, as secondary stylistic or technical tools, or as cognitively evaluated aesthetic moments that stand out from the narration. What remains largely unconsidered in such an approach is the way spectators experience film images affectively, bodily, and also unconsciously. There has, however, always existed a line of philosophically oriented theory within the humanities' research on cinema, one that focuses strongly on aesthetics.
Authors from this field of research explore how films themselves offer complex reflections, imaginations, and interpretations (Cavell 1971), and how the process of manifesting affects is intrinsically bound to the aesthetic and poetic forms of audio-visual images (Deleuze [1983] 2008). At the beginnings of film theory, such a holistic view was established by the film theorist and psychologist Hugo Münsterberg ([1916] 2002). Already in the 1910s, he claimed that film images are structurally congruent with mental and emotional activities: for example, a camera movement focusing on an object is congruent with the way humans mentally draw attention towards an object.


This idea was taken up and discussed even more explicitly by neo-phenomenological film theory in the 1990s in terms of embodiment (Sobchack 1992, 2004; see Bakels this volume; Schmitt and Greifenstein this volume). In film and media studies, this approach made possible a return of the spectator's body, and along with it an inquiry into sensuous activities in film reception. From this perspective, films and other audio-visual media are understood as interrelating the spectator's perceptive involvement with the movements of the film. Theories on expressive movement and the expression of affect relate to this concept by focusing on the aesthetic and temporal organization of audio-visual images as a way of addressing spectators' feelings (Zuschauergefühl, see Kappelhoff and Bakels 2011; see also Kappelhoff 2004a). Films involve spectators bodily and affectively by creating, for example, media-specific forms of distance towards a scenario, or by bringing them very close to a face. A high-frequency montage of action and fighting might make spectators feel agitated or troubled, while a long, slow, gliding camera movement above a city or landscape might arouse sensations of relaxation. Likewise, cinematic expressive movements are thought of as being congruent with gestures as media of expression in face-to-face communication (Kappelhoff and Müller 2011; for a deeper account of this perspective on gesture, see also Horst et al. this volume; Müller volume 1). The theoretical concept of expressive movement is in line with other contemporary embodiment theories that understand movement and affective experience as being connected in a similar way, as, e.g., Shaun Gallagher (2008: 449) points out: "Affective and emotional states are not simply qualities of subjective experience; rather, they are given in expressive phenomena, i.e.
they are expressed in bodily gestures and actions, and they thereby become visible to others." What Gallagher refers to is the idea of expression as an intersubjective phenomenon, synchronizing affective experiences through visible and audible forms of movement. He and other authors focus in particular on the link between motion and emotion, and between aesthetic forms and affective responses (Gallagher 2008; Johnson 2007; Sheets-Johnstone 2008; Stern 2010).

3. The concept of expressive movement

Films move their spectators. The film's communication with spectators can be understood as a vital form of aesthetic composition and as bodily and sensory responsiveness. In this sense, when we say that a film moves the spectator, this is meant not metaphorically but quite literally: film images develop as movement patterns, combining different staging tools like sound composition, montage rhythm, camera movements, and acting into one temporal gestalt. They literally move spectators, because they organize their perception processes in the temporal course of film reception. Historically, the term 'expressive movement' derives from a definition of gestures and other expressive phenomena as being not only bodily but also affectively relevant. The theoretical framework of expression has a long tradition in different disciplines, from the 18th century well into the 20th century (e.g., Bühler 1933; Merleau-Ponty [1945] 2005; Plessner [1941] 1970; Wundt 1900–1920). The term is present in several film and media approaches on affect (Aumont 1992; Deleuze 2008; Kappelhoff 2004a; Löffler 2004) as well as on embodiment (Sobchack 1992, 2004). The film-theoretical notion of expressive movement (Balázs [1924] 2010; Deleuze [1983] 2008; Eisenstein [1924] 1998; Kappelhoff 2001, 2004b; Münsterberg 2002) cannot be reduced to the actors' bodily


movements, but aims at regarding the audio-visual image as gestalt-like, as a dynamic unfolding. Thus, expressive movement as a theoretical term comprises different aspects, such as its implications for affect expression in theater and aesthetics (Greifenstein and Kappelhoff this volume), for gesture in face-to-face communication (see Horst et al. this volume; Müller volume 1), and for the embodied dimension of cinematic communication (see Bakels this volume; Schmitt and Greifenstein this volume). Even back in the film theory of the 1920s, the intersection of affect and expression was vividly discussed. One prominent and very early example is Béla Balázs's (2010) notion of expression; he focuses, for example, on the impact of Asta Nielsen's face in Hamlet (Sven Gade and Heinz Schall, GER 1920). He conceives of the intense visibility of affect as closely bound to the way the face is staged. Rather than being merely a technical term, the close-up serves as an aesthetic form in cinema: it enormously enlarges even small parts of a face by approaching it and by enduringly presenting its motions. In this way, even the slightest movements, the most subtle changes, can be observed by the spectator in a prominent way: blown up enormously, projected onto a huge screen. It becomes evident with Balázs, and holds also for its current usage, that the film-theoretical notion of expressive movement denotes exactly this property of audio-visual images: being expressive by making affect perceivable through aesthetic forms, e.g., movement qualities, as well as through light and sound. The film-theoretical concept of cinematic expressive movement (Kappelhoff 2004a) regards this special aesthetic dimension of movement quality as paramount for modulating affects. Expressive movements are understood as patterns of audio-visual staging. They are formed in a continuous merging of different articulatory modalities, such as music, camera work, and acting.
Dynamic patterns of affect, for example a staccato editing, a camera's drifting and calm pausing, or musical and visual rhythms, are created only in the synaesthetic processes of a spectator's perception. These aesthetic compositions are understood not as measurable, technical units, but as forms of synchronizing the perceptive and embodied activity of spectators with the unfolding of images on the screen. In a nutshell, audio-visual expressivity is not understood as the expression of an author or actor, but as the performance and articulation of patterns of the film itself, which in its temporal dimension is similar to forms of expressive behavior in humans (Kappelhoff and Müller 2011). From this perspective, audio-visual images are only secondarily understood as representations of narrative situations; primarily, they address spectators so that they feel themselves as resonating, vivid bodies. Having sketched out the theoretical model of cinematic expressive movement, we now turn towards the question of how affects in film experience are shaped temporally. The following analysis of a film scene will illustrate how such a perspective on the temporality of affect can be approached in an aesthetic and phenomenological analysis.

4. How do expressive movements unfold temporally?

Over the course of a film, audio-visual images unfold in a dynamic way, creating different periods of time, layers, and spans of audio-visual composition. Expressive movements can thus establish different forms of temporality, such as prolongation and compaction, or synchronic versus successive forms of visual and auditory orchestration. In their development over the course of a film, those dynamic patterns create specific temporal gestalts and movement qualities: intense, abrupt patterns as short-term tension and stress for spectators, or a long-aroused expectation through an extended pattern.


To illustrate this temporal unfolding of expressive movements, we now focus on one example taken from the Classical Hollywood war film Bataan (Tay Garnett, USA 1943). On the basis of an exemplary scene, we want to demonstrate the manner and levels of description by which we document how audio-visual movement patterns are temporally staged. We begin with a summary of the depicted action of the scene: entrenched in the jungle, a squad commander of the American army orders one of his soldiers to climb up a palm tree to look out for Japanese enemies. Shortly after reaching the top, the soldier on the lookout is shot from off-screen and drops down dead in front of the eyes of his comrades. The summary already indicates that the scene is rather dramatic and sad, but we suggest that reconstructing how the spectator understands the narrative structure of a sad story does not explain how his or her feelings arise. Put differently, an analysis focusing solely on the narrative level fails to consider how it comes about that a recipient is moved differently when watching a film than when reading a book, even if he or she is following the same story in the different media. In contrast, we assume that the perceivable, aesthetic dimension of expressive movements is essentially relevant to what spectators experience when watching a scene. We suggest that the composition and temporal unfolding of the expressive movement is mainly responsible for making spectators feel and understand over the course of a film. In order to illustrate this assumption, we will take a closer look at how the narrative action in the scene from Bataan is audio-visually staged: we will identify and describe the audio-visual movement patterns that structure it.
We consider audio-visual compositions as expressive movements when dynamics on various levels come to form a distinct gestalt. Such aesthetic orchestrations structurally resemble an unfolding melody, where the beginning is still present at the end, or the dynamic structure of a hand gesture in face-to-face communication, which is prepared, culminates in a stroke, and finally retracts (Kendon 2004). It is this very dynamic gestalt that makes an audio-visual movement expressive. Expressive movements in film can consist of montage patterns as well as camera, sound, mise-en-scène, and acting figurations. Distinct movement qualities are created through the particular ways in which these different articulatory modalities come together and form an unfolding temporal gestalt. The Bataan scene described above is structured by two different movement patterns. Fig. 165.3 shows a diagram in which these patterns are labeled as expressive movement units (1) and (2) (emu 1 and emu 2). The temporal gestalts of the two differ: while the first is composed as a continuous and slow movement, the second is staged in a staccato rhythm. The first movement pattern (Fig. 165.1), which ends with the arrival of the soldier at the top of the tree, is staged as follows: in long takes, we see the soldier receiving the command, slowly approaching the tree, and preparing for the climb. After the order is spoken, only quiet sounds interrupt the silence. When the soldier starts climbing silently up the long, curved trunk, the camera follows him with a soft, sliding movement. This slowly developing upward movement is realized with various means of staging: the camera movement, the visual composition, and the actor's movement merge into a continuous gliding, accompanied by silence on the soundtrack. Together, these articulatory modalities of audio-visual staging temporally merge and create one distinctive movement quality: a slow and silent sliding.
We suggest that it is this quality that audio-visually organizes a calm tension as the spectator's affective experience. Note that this experience does not depend on one isolated means of cinematic staging, e.g.,


Fig. 165.1: Expressive movement unit 1 from the exemplary scene (Bataan, Tay Garnett, USA 1943; time code: min 37:54–39:03)

Fig. 165.2: Expressive movement unit 2 from the exemplary scene (Bataan, Tay Garnett, USA 1943; time code: min 39:03–40:15)


Fig. 165.3: Visualization of the scene's expressive movement units (Bataan, Tay Garnett, USA 1943; time code: min 37:54–40:15)

only a camera movement, but emerges from the movement gestalt composed of all the aesthetic means conjointly. In the second movement pattern (Fig. 165.2) of the scene, the static long shot of the soldier keeping watch at the top of the palm tree is abruptly interrupted by the sudden sound of a gunshot and by his screaming fall. A series of close-ups follows, showing at a high cutting rate the shocked faces of several comrades of the dying soldier. As the cutting rate then slows down, a soldier's face turns from surprise to grief. Accordingly, all the faces appear as one cohesive figure: a disrupted, shocked face, a multiplied expression of fright that, in the slowing down of the montage, transforms as if it were one moving facial expression. Music sets in dramatically after the gunshot and changes tempo and volume, turning into a melancholic tune. Thus, the second movement pattern can be specified and qualified as a sharp staccato that slows down, staged predominantly by montage, sound, and acting. Through the staccato montage and the high frequency of shots of several faces, an experience of shock is addressed. The affective involvement then transforms into a sad alleviation of tension through the slowing down of the montage and the subtle change of facial expressions together with the change of music. The diagram (Fig. 165.3) shows the essential details of the expressive movement patterns, highlighting two important aspects: (i) Expressive movements realize their gestalts by addressing all senses of the spectators synchronously. Such dynamic patterns as the increasing tension of expressive


movement unit 1 and the staccato of expressive movement unit 2, moreover, emerge temporally and multimodally. This is illustrated in Fig. 165.3 by sound volume and montage shots. Notably, the spectators do not perceive the two modalities separately; rather, the processes of synaesthetic perception bring the modalities together into a specific gestalt. (ii) Expressive movements are not distinct, isolated temporal units, but interact dynamically with each other. The scene, as the larger unit, composes these movement patterns, creating an affective course that the spectators go through experientially by watching it. The affective course is a transformation of being addressed in different forms through varying perceptive scenarios and movement patterns. With regard to our example, this refers to how the scene's expressive value changes from a slow, stretched pattern in the scene's first part (expressive movement unit 1) to the abrupt staccato of the second part (expressive movement unit 2). The audio-visual composition stages an affective course on the side of spectators that can be described as a calm tension that is abruptly turned into shock and resonates in a sad alleviation of tension. We thus suggest that expressive movements in films can be understood as temporally structured forms of shaping the spectators' perceptive, affective, and embodied activity over the course of their unfolding. Expressive movements are not a type of movement, not a technically understood, measurable accumulation of seconds, but a specific movement dimension: the images' sense, an experiential quality or rhythm, a temporally organized perceiving of a dynamic whole. It is in this sense that we conceive of film as staging the affective course of spectators through a complex temporal and multimodal aesthetic orchestration. This specific understanding of audio-visual images, however, is not limited to the realm of cinema films.
On the contrary, the insights gained in film research along the lines of this chapter are in their essentials transferable to other settings of media use. Furthermore, as outlined in the following section, we suggest that results from research on cinematic expressive movement and the dynamic forms of affect dramaturgy in film can offer fundamental insights for cross-media observations.

5. Affect modulation in different audio-visual media

The process by which spectators of audio-visual images are moved is indeed not restricted to fictional feature films, but concerns the various media forms of moving-image culture. However, with such a broad perspective on different media, all kinds of problematic aspects come into view; aspects that seem obstructive to a comparative perspective on audio-visuals. To name just a few: the distinction between fiction and documentary, the media-specific settings of, for instance, cinema or television, and their different interfaces in media use. Especially since their "relocation" in the digital age, for example when a cinema film or news broadcast is watched on a mobile phone, "a continuous mingling of technologies, experience forms and practices" (Casetti and Sampietro 2012: 29) can be observed. Relations of apparatus and body are fluid and have a huge impact on the issue of the social and historical situatedness of media, as well as on media-specific communication with audiences. Nevertheless, investigating expressive movement is one way to describe a shared articulation of audio-visuals comparatively by focusing on their aesthetic dimension. In media

165. Expressive movements in audio-visual media: Modulating affective experience

2089

comparison, the shared means of all audio-visual media might be described as an essence of audio-visual staging, remaining the same, no matter on what screen size and in what situational context it is shown: Montage, camera movement, or music are an intersection of all audio-visual images. Those artistic means are perceivable on the level of audiovisual and temporal articulation. As basic criteria, we therefore suggest two aspects for analyzing expressive movements in various media: a temporal and multimodal articulation. Focusing on them, we made the following basic analytical observations: Regarding temporality, exemplary studies of cinema films and commercials have shown that the complexity of expressive movement patterns is strongly dependent on their length: By unfolding over hours, cinema films offer elaborated affective dramaturgies based on complex temporal shifts, turns, and durations. The responses to them vary in their intensities between strong to less, e.g., like a changing from tension to release in horror films, or like the persisting ambivalence of laughing and being sentimentally touched in tragicomedies. For example, in one study on Alfred Hitchcock’s Spellbound (USA 1945) we have identified a complex temporal form of expressive movement that is repeated often over the course of nearly two hours. Through its recurrence the tension is permanently intensified and only dissolved in the last repetition through a variation of the pattern. Commercials on the contrary have shown very short-termed patterns that appear to aim at addressing spectators most intensely and explicitly. We understand those divergent temporal arrangements of different audio-visual media as highly relevant for how spectators experience them affectively. With regard to the multimodal addressing, distinct types of intertwining images and sounds could be observed. 
Especially concerning the use of language, clear differences become obvious: In television news reports, speech predominates over audio-visual staging, with the effect that the visuals are often perceived as mere illustrations, as doublings of the spoken word. Fiction films, on the other hand, tend to make the audio-visual orchestration salient, of which dialogue is only one compositional element. This, however, is rather a matter of genre and presentational form and format. The way a fiction film aims at establishing temporal gestalts through different articulatory modalities can differ decisively as well: The breathless use of language in screwball comedies differs drastically from its use in taciturn westerns, or in musicals, where every utterance or sound can initiate the next song and dance act. Commercials in television tend to modulate strong, unambiguous affective experiences most explicitly through both dialogue and audio-visual staging. Furthermore, the different multimodal forms of staging can not only orchestrate affectivity but also create embodied meanings, e.g., through figurative thought and multimodal metaphors (see Kappelhoff and Müller 2011; Schmitt, Greifenstein, and Kappelhoff this volume). Proposing criteria for investigating different audio-visual media is, of course, only the first step towards a comparative analysis. The different practices of media exhibition and the situatedness of reception, as discussed above, can have very strong influences on the spectators’ embodied experience. With the concept of expressive movement, these media differences can be taken into consideration carefully, while at the same time focusing on the shared means of audio-visual media.
With these propositions for a media analysis that accounts for the different forms of how audio-visual media establish affective, sensuous, and bodily forms of involving spectators, we would like to offer a perspective that considers the very differences in the communicative acts of embodied media reception.


6. Conclusion

To sum up: We have sketched out that cinematic expressive movements in film can be understood as aesthetic forms of addressing the spectators’ perceptions. The way in which a film communicates with its audience is not in the first place a matter of narrated stories. Instead, the communicative act between spectators and the unfolding of audio-visual images takes place on a more basic level: that of performed and perceived movements. Cinematic expressive movement can be understood as the aesthetic dimension of movement, addressing and shaping the way spectators feel.

Acknowledgements

The presented work is an outcome of collaborative research conducted within the interdisciplinary project “Multimodal Metaphor and Expressive Movement” under the direction of Hermann Kappelhoff and Cornelia Müller at the Cluster of Excellence “Languages of Emotion” of the Freie Universität Berlin, in cooperation with the European University Viadrina, Frankfurt (Oder). We would particularly like to thank Christina Schmitt, Jan-Hendrik Bakels, Franziska Boll, and Dorothea Horst for their critical and helpful comments.

7. References

Aumont, Jacques 1992. Du visage au cinéma. Paris: Éditions de l’Étoile.
Bakels, Jan-Hendrik this volume. Embodying audio-visual media. Concepts and transdisciplinary perspectives. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2048⫺2061. Berlin/Boston: De Gruyter Mouton.
Balázs, Béla 2010. Visible man or the culture of film. In: Erica Carter (ed.), Béla Balázs: Early Film Theory. Visible Man and the Spirit of Film, 1⫺90. Oxford: Berghahn Books. First published [1924].
Bühler, Karl 1933. Ausdruckstheorie. Das System an der Geschichte aufgezeigt. Jena: Fischer.
Cassetti, Francesco and Sara Sampietro 2012. With eyes, with hands. The relocation of cinema into the iPhone. In: Pelle Snickars and Patrick Vonderau (eds.), Moving Data. The iPhone and the Future of Media, 19⫺32. New York/Chichester: Columbia University Press.
Cavell, Stanley 1971. The World Viewed: Reflections on the Ontology of Film. New York: Viking Press.
Deleuze, Gilles 2008. Cinema 1: The Movement Image. London: Continuum. First published [1983].
Eisenstein, Sergej 1998. The montage of film attractions. In: Richard Taylor (ed.), The Eisenstein Reader, 35⫺52. London: British Film Institute. First published [1924].
Gallagher, Shaun 2008. Understanding others: Embodied social cognition. In: Paco Calvo and Antoni Gomila (eds.), Handbook of Cognitive Science: An Embodied Approach, 439⫺452. Amsterdam: Elsevier.
Greifenstein, Sarah and Hermann Kappelhoff this volume. The discovery of the acting body. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2070⫺2080. Berlin/Boston: De Gruyter Mouton.
Grodal, Torben K. 2009. Embodied Visions. Evolution, Emotion, Culture and Film. Oxford: Oxford University Press.


Hediger, Vinzenz 2006. Gefühlte Distanz. Zur Modellierung von Emotion in der Film- und Medientheorie. In: Frank Bösch and Manuel Borutta (eds.), Die Massen bewegen. Medien und Emotionen in der Moderne, 42⫺62. Frankfurt Main/New York: Campus.
Horst, Dorothea, Franziska Boll, Christina Schmitt and Cornelia Müller this volume. Gesture as interactive expressive movement: Inter-affectivity in face-to-face communication. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2112⫺2124. Berlin/Boston: De Gruyter Mouton.
Johnson, Mark 2007. The Meaning of the Body. Aesthetics of Human Understanding. Chicago: Chicago University Press.
Kappelhoff, Hermann 2001. Bühne der Empfindungen, Leinwand der Emotionen ⫺ das bürgerliche Gesicht. In: Helga Gläser, Bernhard Groß and Hermann Kappelhoff (eds.), Traversen 7. Blick, Macht, Gesicht, 9⫺41. Berlin: Vorwerk 8.
Kappelhoff, Hermann 2004a. Matrix der Gefühle. Das Kino, das Melodrama und das Theater der Empfindsamkeit. Berlin: Vorwerk 8.
Kappelhoff, Hermann 2004b. Unerreichbar, unberührbar, zu spät ⫺ Das Gesicht als kinematografische Form der Erfahrung. montage AV 13(2): 29⫺53.
Kappelhoff, Hermann and Jan-Hendrik Bakels 2011. Das Zuschauergefühl. Möglichkeiten qualitativer Medienanalyse. Zeitschrift für Medienwissenschaft 5(2): 78⫺95.
Kappelhoff, Hermann and Cornelia Müller 2011. Embodied meaning construction. Multimodal metaphor and expressive movement in speech, gesture, and feature film. Metaphor and the Social World 1(2): 121⫺153.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Löffler, Petra 2004. Affektbilder: eine Mediengeschichte der Mimik. Bielefeld: Transcript.
Merleau-Ponty, Maurice 2005. Phenomenology of Perception. London/New York: Routledge. First published [1945].
Müller, Cornelia volume 1. Gestures as medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 202⫺217. Berlin/Boston: De Gruyter Mouton.
Münsterberg, Hugo 2002. The photoplay ⫺ a psychological study. In: Allan Langdale (ed.), Hugo Münsterberg on Film. The Photoplay ⫺ A Psychological Study and Other Writings, 45⫺162. New York/London: Routledge. First published [1916].
Plantinga, Carl 2009. Moving Viewers. American Film and the Spectator’s Experience. Berkeley: University of California Press.
Plessner, Helmuth 1970. Laughing and Crying: A Study of the Limits of Human Behavior. Evanston: Northwestern University Press. First published [1941].
Schmitt, Christina and Sarah Greifenstein this volume. Cinematic communication and embodiment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2061⫺2070. Berlin/Boston: De Gruyter Mouton.
Schmitt, Christina, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movement and metaphoric meaning making in audio-visual media. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body ⫺ Language ⫺ Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2092⫺2112. Berlin/Boston: De Gruyter Mouton.


Sheets-Johnstone, Maxine 2008. Getting to the heart of emotions and consciousness. In: Paco Calvo and Antoni Gomila (eds.), Handbook of Cognitive Science: An Embodied Approach, 453⫺465. Amsterdam: Elsevier.
Sobchack, Vivian 1992. The Address of the Eye. A Phenomenology of Film Experience. Princeton: Princeton University Press.
Sobchack, Vivian 2004. Carnal Thoughts: Embodiment and Moving Image Culture. Berkeley: University of California Press.
Stern, Daniel N. 2010. Forms of Vitality: Exploring Dynamic Experience in Psychology, the Arts, Psychotherapy and Development. Oxford: Oxford University Press.
Tan, Ed S. 1996. Emotion and the Structure of Narrative Film. Film as an Emotion Machine. Mahwah, NJ: Erlbaum.
Wundt, Wilhelm 1900⫺1920. Völkerpsychologie (10 volumes). Leipzig: Wilhelm Engelmann.

Films:
Bataan 1943. Tay Garnett, USA (DVD: MGM 2000).
Hamlet 1920. Sven Gade and Heinz Schall, GER (DVD: Edition Filmmuseum 2011).
Spellbound 1945. Alfred Hitchcock, USA (DVD: EuroVideo 2002).

Thomas Scherer, Berlin (Germany)
Sarah Greifenstein, Berlin (Germany)
Hermann Kappelhoff, Berlin (Germany)

166. Expressive movement and metaphoric meaning making in audio-visual media

1. Introduction: Sensing and making sense in audio-visual communication
2. Framework: Film and media studies and applied linguistics
3. Case study: A qualitative-descriptive approach to media reception
4. Some implications for metaphor research: Mapping in time
5. Conclusion: Embodiment ⫺ affective grounding of meaning
6. References

Abstract

What kind of sense-producing processes take place in media communication while spectators watch a film or a TV report? Audio-visual images communicate with their spectators not only through dialogs and representations (of people, objects, and actions), but also through articulations of aesthetic means (i.e., camera movement, cadrage, montage, sound, and visual composition). The chapter follows the idea that the aesthetics of the medium are intertwined with ways of producing meaning. More specifically: that audio-visual media modulate the spectator’s embodied processes of at once affective resonances and metaphoric meaning making.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body ⫺ Language ⫺ Communication (HSK 38.2), de Gruyter, 2092⫺2112


1. Introduction: Sensing and making sense in audio-visual communication

How do spectators make sense of audio-visual images? When reflecting on how film and other audio-visual media communicate with spectators, what is astonishing is the media’s twofold ‘orientation’. Thomas Elsaesser and Malte Hagener (2010: 13⫺34) elaborate on this by describing two prominent film-theoretical perspectives that seem to be in opposition to each other. One line of thought ⫺ the so-called realist approach ⫺ understands film as a ‘window onto the world’. Here, the spectator is understood as looking through a frame, seeing cinematically captured instances of what was once real in front of the camera ⫺ seeing and hearing people and objects, buildings, or landscapes. The second line addresses another aspect by understanding film as a ‘picture frame’: Here, the focus is on how film appears in cinema reception, for aesthetic means are assumed to have the capacity to create a new kind of meaning and a novel form of reality for the spectator. In such a formative approach, the main interest is how film is presented to the spectator as a “creation”, a “construction” of a unique perceptual world: “the frame exhibits the medium in its material specificity” (Elsaesser and Hagener 2010: 13⫺15). The images are thought of as shaping the spectator’s perception and mental activities: “film itself constitutes a world, as in images and sounds it creates a thinking” (Fahle 2002: 97). Those two metaphors of film as window or picture frame, which prominently pervade film theory to this day, focus on two poles: on the one hand, on what has been in front of the camera, and on the other hand, on how images present a new view to spectators.
In certain theoretical approaches, the realist and the formative approach are taken not only as two controversial film-theoretical positions regarding the essence of film, but also as explaining the documentary versus the narrative traditions of audio-visual culture (Andrew 1976: 113). For both perspectives on film it is clear that spectators are confronted with images, be they understood as taken from reality or as constructing reality. Furthermore, it is not by chance that a frame is included in the ‘window’ metaphor as well as in the ‘picture frame’ metaphor: This refers to the fine arts, which, among other audio-visual art and media practices, are one of cinema’s roots (Aumont 1997; Jutz and Schlemmer 1989). Images in film, though, are not a simple matter to define: They are described as ‘photographic images’, being static frames on a film reel, or as single shots (Bordwell and Thompson 2001); they can furthermore be understood as movement images (Dalle Vacche 2003; Deleuze [1983] 2008a; Kappelhoff 2004; Koebner and Meder 2006), or as images presenting time, mind, or affect (Deleuze 2008a, [1985] 2008b; Kappelhoff 1998). Matters become even more complicated given that film includes speech ⫺ and thus verbal imagery (Mitchell 1984) is also vitally present in audio-visual images. Bearing those differentiations in mind, the initial question must be posed more precisely: How are those various kinds of images meaningful to the spectators who perceive them? We would like to bring in metaphor to approach this complex issue of how meaning making in audio-visual media reception takes place. The etymology metaphorein ⫺ ‘to transfer’, ‘to transport’ ⫺ points to the capacity of metaphors to realize connections between different realms of thoughts, perceptions, things, images, or subjects. We understand these connections as taking place mainly as a procedure, one that creates for the person thinking the metaphor a concrete experiencing of something through something else.
Metaphoricity thus establishes a “triadic structure”, in which the process of ‘seeing as’ or ‘seeing through’ is as important as, e.g., source and target (Müller 2008: 26⫺32). Therefore we focus less on generalized schemata on a systematic level than on the specifics of metaphoric meaning making in language and media use. We suggest different elements to be part of the figurative construction: Metaphoric elements of different realms of thoughts, perceptions, things, images, or subjects create “shifts of meaning” (Black 1962: 45), elicit image spans and tensions (Bildspanne; Weinrich [1963] 1983), or stage a “scenario” (Müller 2008: 89⫺95). They establish a visualizing and making present of what is absent, abstract, or inherently subjective, e.g., a lived affective, bodily experience (Müller 2008; Müller and Ladewig 2014). Moreover, these realms of metaphor, and with them the forms of imagery, have enormous impact on our processes of meaning making ⫺ in Gilles Fauconnier and Mark Turner’s (2003) words: on “the way we think”. By referring to such a constructive idea of metaphor, we conceive of metaphor in film and other audio-visual media as initiating relations between the different image-producing realms, that is to say, between audio-vision and speech. Furthermore, metaphor is understood as functioning communicatively in a situated manner: Regarding audio-visual communication, the different experiential realms are not assumed to be principally pre-existent in the spectator, but to emerge fluidly ⫺ online ⫺ from the situated and concrete context. We grasp this situatedness as the specific embodied media reception as it takes place, e.g., when a spectator sits in a movie theater and is involved with all senses in a film’s aesthetic performance. This means that the spectator perceptively goes through the audio-visual projection, in which aesthetic shapings specific to each given film unfold as particular forms of addressing her or him bodily, affectively, and cognitively.
In this chapter, we thus would like to introduce a view on meaning making in audio-visuals (or, as one could also say: on thinking in images) that accounts for the multiple and intertwined layers of dynamic imagery by which audio-visual media communicate with their spectators. The model we build upon is a recently developed interdisciplinary perspective on audio-visual communication, language, affect, meaning, and the body: the approach of Multimodal Metaphor and Expressive Movement (for a detailed outline, see Kappelhoff and Müller 2011). It combines knowledge from cognitive linguistics’ research on figurative thinking in verbo-gestural face-to-face discourse with film and media studies’ research on how the expressivity of audio-visual images shapes affective resonances on the side of spectators. Thereby, the approach offers a model of affective and cognitive processes in multimodal media communication. With regard to methodological questions, it shows how descriptive microanalyses can make such intertwined processes evident. Drawing upon this, in the following it will be outlined how metaphoric meaning making can be conceptualized as a process of thinking and understanding that is rooted in an actual embodied and affective experience. Moreover, we will show that the scope of the model offered by Kappelhoff and Müller is not restricted to films produced for cinema screenings. Rather, it addresses audio-visual media in general, pointing to a fundamental dimension of their reception.

2. Framework: Film and media studies and applied linguistics

The overall assumption of the approach of Multimodal Metaphor and Expressive Movement is that media reception is a dynamic and embodied process of communication: The expressivity of images and aesthetics is as important for this idea of communication as is the spectator’s bodily and perceptive resonance to it, for we start from the premise


that film and other audio-visual media address spectators in a multifaceted way. Spectators are affectively involved, and at the same time they are making sense of what they see and hear. On the one hand, expressive movement offers a way to account for spectators’ affective involvement. In face-to-face communication, an expressive movement is a gesture, a vocal interaction, or a facial expression of affect that becomes perceivable for the interlocutor or even co-involves her or him (Gallagher 2008; Horst et al. this volume; Müller volume 1; Plessner [1941] 1970; Stern 2010). In contrast to this, cinematic expressive movements are not depicted human gestures, but are conceived as expressive units of moving audio-vision. What spectators affectively go through when watching a film is intrinsically bound to the way these units, i.e., these movement patterns, unfold dynamically. Different articulatory modalities (e.g., camera movement, montage, or sound) create figurations of movement that establish different gestalt-like forms. These aesthetic shapings of time are understood by film and media studies as being able to shape the spectator’s affective experiences during the reception process (Bakels this volume; Kappelhoff 2004; Kappelhoff and Bakels 2011; Kappelhoff and Müller 2011; Scherer, Greifenstein, and Kappelhoff this volume; Schmitt and Greifenstein this volume). The concept is in line with the frameworks of film aesthetics and affect (Aumont 1997; Balázs [1924] 2010; Bellour 2005; Deleuze 2008a; Münsterberg [1916] 2002) and of film and embodiment (Marks 2000, 2002; Sobchack 1992, 2004; Voss 2011). The way a spectator perceives cinematic forms of movement can be described in terms of different realms of his or her own affective and embodied experience, dynamically created and established online. On the other hand, metaphor is assumed to be relevant for inquiring into cognitive processes like meaning making as an act of conceptualizing.
This assumption draws on George Lakoff and Mark Johnson’s cognitive linguistic definition of metaphor as “experiencing and understanding one kind of thing in terms of another” (1980: 3). However, concrete, situated affective and feeling processes do not play a constitutive role in Conceptual Metaphor Theory ⫺ or, to be more precise, they do so only as consciously realized concepts, e.g., as an emotion metaphor (Kövecses 2000). The term embodiment as addressed by Conceptual Metaphor Theory thus differs from the one we actually speak of in this article. Moreover, by claiming the existence of an overall conceptual system generally incorporated by members of a cultural community, Conceptual Metaphor Theory assumes fixed underlying patterns of already known conceptual realms to be of major importance for metaphors (various studies take up this position with regard to audio-visual media as well; see, e.g., Coëgnarts and Kravanja 2012; Fahlenbrach 2008, 2010; Forceville and Jeulink 2011; see also Whittock 1990 for a first encounter of film studies’ metaphor research with Conceptual Metaphor Theory). The same holds for blending theory (also adapted to film reception; see Oakley and Tobin 2012), which likewise focuses on conceptual domains as “mental spaces” (Grady, Oakley, and Coulson 1997). More particularly, although blending theory implies ways to describe the specifics of conceptualizations, “emergent structures”, and “elaborations”, mental spaces remain bound to pre-established, generalized conceptual domains. In contrast, what we would like to suggest here is addressing the phenomenon of metaphor from the other end ⫺ that is to say, from the point of language and media use. Rather than aiming to find evidence for metaphorically organized conceptual structures on a system level, we are interested in how metaphoric meaning making actually


forms itself diversely and dynamically, depending on the concrete situation and deriving from an actually lived, embodied experience. The act of conceptualizing itself, and how it takes place, is in focus. We therefore depart from Cornelia Müller’s definition of metaphor, grounded in applied linguistics research: “[T]he expression of ‘activation of metaphoricity’ […] implies that the product metaphor is always a result of the procedure for establishing metaphoricity” (Müller 2008: 5). With this dynamic view on cognitive processes, we assume that metaphoric meaning making can be understood as an embodied sensing and act of conceptualizing that emerges in, and is constitutively bound to, the respective communicative act itself. More specifically, we conceive it to be bound to the different and various perceived and imaginary realms that spectators are experiencing and which they metaphorically relate in their process of media reception. Or, to put it in terms of Max Black’s interaction theory: “It would be more illuminating […] to say that the metaphor creates the similarity than to say that it formulates some similarity antecedently existing” (Black 1962: 37). In this theoretical line, we look at how the principal subject is actually “projected upon” the subsidiary subject (Black 1962: 41), how the seeing or imagining of one thing “filters”, “transforms”, and “selects” certain aspects of the other subject (Black 1962: 42), and how both metaphorical elements, through the procedure of sharing an attentional focus, shed light upon each other (Black 1962; Glicksohn and Goodblatt 1993). Especially in audio-visual communication, figurative thought is suggested to be essentially relevant: How spectators of film and audio-visual media comprehend and make meaning on a basic level is assumed to be very often bound to processes of figuration.
Building upon applied linguistics research, metaphors in film are conceived of as neither static nor bound to single verbal instances, nor as primarily based on pre-given concepts and mappings. Rather, we claim metaphoricity to emerge and to be activated (Müller 2008; Müller and Tag 2010) by spectators due to their perception of audio-visuals’ multimodal articulations (Kappelhoff and Müller 2011). In such a theoretical line, metaphors provoke situated processes of comprehension and meaning making in a specific context, based on real-time experiences. This also draws a line to Mark Johnson’s understanding of embodied meaning: “It is our ability to abstract a quality or structure from the continuous flow of our experience and then to discern its relations to other concepts […]” (2007: 92). For investigating how a quality or dynamic structure (cf. also Langacker 2010 for the notion of structure being dynamic in a language-use perspective) becomes abstracted from a real and lived experience in order to be conceptualized, film and other audio-visual media offer a paradigmatic research field. In particular, as outlined by the film-theoretical discourse on expressivity (see, e.g., Kappelhoff 2004, 2008; Scherer, Greifenstein and Kappelhoff this volume; Voss 2011), the flow of images and the flow of thinking are bound to the same temporality. The temporal process of the spectator’s embodied reception is structured along the temporal arrangement of the audio-visuals. Or, as Maurice Merleau-Ponty puts it: The meaning of a film is incorporated into its rhythm just as the meaning of a gesture may immediately be read in that gesture: the film does not mean anything but itself. The idea is presented in a nascent state and emerges from the temporal structure of the film […]. The joy of art lies in its showing how something takes on meaning ⫺ not by referring to already established and acquired ideas but by the temporal […] arrangement of elements. ([1948] 1964: 57)


The audio-visuals’ time of aesthetic performance is thus not only assumed to synchronize with and resonate in spectators’ bodies, i.e., in their processes of feeling and thinking. It also emphasizes the situatedness of meaning making within audio-visual media communication. One can say that, by bringing together perspectives from film and media studies and cognitive linguistics, the approach of Multimodal Metaphor and Expressive Movement accounts for perceiving, feeling, and understanding as essentially linked: “[C]inematic expressive movements shape the same kind of felt experience in a spectator as a bodily expressive movement that accompanies speech. In doing so, expressive movements provide the experiential grounds for the emergence and construction of metaphors.” (Kappelhoff and Müller 2011: 122) This strongly emphasizes how both bodily expressive behavior (regarding social interaction in face-to-face discourse) and aesthetic expressivity (regarding mass media communication) are closely linked to processes of thinking and feeling. Moreover, it supports the assumption that affects, emotions, intentions, or thoughts become intersubjectively perceivable and experienceable in the temporal flow and unfolding of the specific communicative act, as held by phenomenology, the cognitive sciences, and developmental psychology (Gallagher 2008; Sheets-Johnstone 2008; Stern 2010). While in a conversation such an act consists of expressive modalities like, e.g., gesture, facial expression, and verbal language (Horst et al. this volume), in the context of audio-visual presentational forms the aesthetic compositions of, e.g., editing, camera movement, sound, and light articulate cinematic forms of expressivity. Building on this, we assume that analyzing audio-visual media as demonstrated in the following section offers paramount insights into the spectator’s affective and cognitive processes.

3. Case study: A qualitative-descriptive approach to media reception

Fictional formats of audio-vision like films, as well as factual ones like news coverage, are first of all articulations of speech and images, temporally unfolding. That is to say, it is above all those dynamic articulations that spectators are confronted with, and in which they are involved, during media reception. Due to that, we suggest that the way spectators make sense of audio-visuals has very little to do with merely gathering facts. We will outline the scope of this estimation by focusing on TV news coverage. A news report’s audio-visual and verbal unfolding offers insights into media’s various communicative purposes and rhetorical characteristics: addressing, attracting attention, informing, convincing, and so on (notably, research bringing in a rhetorical perspective is relatively rare; cf. Ulrich 2012). Furthermore ⫺ and we see this in essential relation to the previous point ⫺ audio-visual presentation (e.g., editing) is also conceived of in terms of its emotional impact (Detenber and Lang 2011; Unz 2011) and its property of modulating affects (as outlined here). Spectators may feel concerned or even shocked when seeing coverage of the political developments in Afghanistan; or they may feel slightly amused by the way news of the stock market is explained. However, in contrast to genre films that are known to sadden, exhilarate, or thrill in a very obvious manner ⫺ like melodrama, comedy, or horror ⫺ news coverage tends to avoid addressing spectators’ feelings, or seems to treat them subtly. Moreover, bringing news and emotions together seems controversial, as producers of news coverage base their work on the idea of primarily informing audiences by presenting facts rather than entertaining them

2098

IX. Embodiment (as journalism handbooks suggest; cf. Harcup 2009). Nevertheless, for quite some time social-scientific oriented media studies recurrently highlight emotional implications of news coverage (e.g., Milburn and McGrail 1992; Unz, Schwab, and Winterhoff-Spurk 2008; Uribe and Gunter 2007; Winterhoff-Spurk 1998). However, although the question of how image and speech in TV news reports do interact generally is a topic (see Holly 2010 for a linguistic perspective), what has been disregarded mainly is how presentational forms of mass media ⫺ by means of audio-visual imagery ⫺ address spectators sensorily and subconsciously, changing the meaning of verbal information. Collateral with this is the low level of attention that has been paid to affective experience and embodied meaning in news coverage. However, analyses reveal that in TV reports techniques of cinematic staging are applied. These applications of audio-visual aesthetic means address and involve spectators more subtly than those, which are found as well in fiction films. As outlined above, the articulations of those cinematic strategies (e.g., camera movement and montage) ⫺ however subtle they may be ⫺ are assumed to be perceived and thus realized by spectators as cinematic expressive movements, i.e., affective experiences that shape spectators’ feelings (Zuschauergefühl, see Kappelhoff and Bakels 2011). Moreover, the aesthetic modulation of an affective temporal course goes hand in hand with a modulation of metaphoric meaning making. In other words: In the situation of media reception spectators can establish multimodal metaphors by perceiving articulations of words and dynamic images. And such an act of conceptualizing we thus conceive of in no way as a purely intellectual phenomenon, but as being inherently shaped by affectivity. Of course, how audio-visual communication makes spectators think and feel has a lot to do with individual, social, and cultural constitution and knowledge, too. 
Nevertheless, we suggest that metaphors reveal very basic processes of situated meaning making. Such a basic act of conceptualizing builds on verbal and audio-visual forms of imagery (emerging from speech articulation, depicted objects, and aesthetic staging) – forms presented in time that create dynamic scenarios and movement patterns. To illustrate our assumptions, we will offer the analysis of an example and outline the affective and dynamic essence of metaphor – with regard to both the spectator’s cognitive processes and the rhetorical devices deployed by media products for communicative purposes. Metaphors in audio-visuals are thus regarded not merely as phenomena of strong poeticity, but primarily as basic forms of structuring reception processes – i.e., of how spectators make sense of what they see, hear, and feel. Imagine the daily TV newscast – and imagine that the economy expert interviewed does not speak in metaphors: no one would understand anything at all, as the expert would build on specialized knowledge, using technical terms that are incomprehensible to large parts of the audience. De facto, this is a rather unrealistic idea, because metaphors are in fact highly prominent in communication about the economy (Peter et al. 2012; Zink, Ismer, and von Scheve 2012). Figurative speech is very often used with the intent to visualize an abstract concept: the economy is an abstract system, which is neither tangible nor visible per se. A thirteen-second extract from an exemplary report on the financial crisis and its impact on the economy (taken from the German TV newscast Tagesschau, ARD, 20.10.2008, 8:00–8:15 pm) is no exception here, as the voice-over commentary reveals: Die Konjunktur läuft nicht mehr wie geschmiert, und die Finanzmarktkrise könnte bald weiteren Sand ins Getriebe streuen. Die deutsche Wirtschaft rechnet mit einer deutlichen Abschwächung im nächsten Jahr. Deshalb will die Bundesregierung gegensteuern. [‘The economic cycle is not running as smoothly as it was anymore; and the financial crisis may soon continue to throw sand in the gears. The German economy expects a significant weakening next year. Therefore the federal government wants to steer in the opposite direction.’]

166. Expressive movement and metaphoric meaning making in audio-visual media

Fig. 166.1: Overview of a temporal and multimodal unfolding: a 13-second extract from a German TV news report, taken from the Tagesschau (ARD, 20.10.2008, 8:00–8:15 pm) (English translation)

On the verbal level, the chosen extract shows a continuous succession of verbal metaphors, including some common German idioms (highlighted in the translation). This observation, though, does not tell us much about the actual process of metaphoric meaning making evoked and modulated by the news report. We therefore suggest that it is necessary to take into account the specific context – the actually presented articulation of words and images – to make a point regarding metaphoric meaning making in media reception. In particular, we propose that it has to be considered in detail how the various levels of audio-visual imagery unfold temporally as well as multimodally (Fig. 166.1).

3.1. Multimodality – a threefold synchronous controversy of words and audio-vision

For this section, we focus on the multimodality of the example, i.e., the verbal articulations spoken by a voice-over commentary, and the audio-vision. The extract starts out with a verbal metaphor: “The economic cycle is not running as smoothly as it was anymore.” However, what spectators perceive at the same instant are gleaming, well-oiled, smoothly running pistons, powerfully moving up and down. Thus, the metaphoric meaning is not something restricted to the verbal level, but also involves what is dynamically depicted visually, in its specific way. The verbal idiomatic expression (“not running as smoothly”) is only at first sight doubled by the image. In fact, the sensory qualities of the moving image evoke an experiential contrast or paradox including a double negation: what is said is on the one hand doubled (for running smoothly plays a role both verbally and visually), while the negating verbal statement (“is not running”) is simultaneously strongly negated by the visual image (running). (Incidentally, this in particular might be an example of another classical rhetorical figure: litotes.)


Such a relation of opposition between speech and visuality is repeated in the following, where, together with the next shot, another verbal metaphor is used. While we hear “the financial crisis soon may continue to throw sand in the gears”, what is seen is a maintenance worker in a huge engine room wiping off a single, jerkily moving piston with a cloth. Thus, again, the verbal metaphoric meaning matches the sensation induced audio-visually in a contrastive way: the man seen is wiping something off, while “throw sand in” is articulated verbally. At this point, what is visually depicted (machines in motion, the man cleaning the engine room) becomes somewhat secondary or figurative, as the coverage deals not with the cleaning man depicted but with the verbally announced agents: “the economic cycle” and “the financial crisis”. This happens because the multimodal interaction (the spoken “is not running as smoothly” and the visually given smoothly running) merges the two modalities, that of vision and that of verbality. Since in news reports verbal utterances are predominant, the nouns or agents (“economic cycle” and “financial crisis”) are not only grammatical subjects of the verbal phrases but the overall protagonists: they become omnipresent. In the next shot, a contradiction is realized for a third time. While we hear the next verbal metaphor, “the government wants to steer in the opposite direction”, what is visualized is a shot of a long container ship. Visual composition and camera movement stage the ship in a manner that differs from what is said verbally. Rather than showing a ship whose movement is controlled by steering, the ship is staged as a heavy mass making leeway: the visual composition stages a perception of the ship in which it seems to drift in an uncontrolled manner to the right, opposing the camera’s movement to the left.
Thus, the tension between visual movements on the one hand and a verbally induced sensoriness on the other becomes perceivable for the spectator: here, a tension is evoked between the verbal expression “to steer” and the visually staged movement of drifting apart. With the analysis of this temporal and multimodal unfolding in place, it becomes clear that it is rather unrewarding to focus only on the verbally articulated imagery (here: the idioms) in order to grasp the processes of metaphoric meaning making modulated by media reception. Instead, it becomes evident that figurative thought in audio-visuals is restricted neither to a single moment nor exclusively to the level of speech. By capturing the ongoing intertwining of audio-visual images and voice-over commentary, we can observe how a composite meaning is being established, in which what is spoken (s) and what is visualized (v) combine. In particular, a matching of three multimodal equivalences and contradictions takes place:

– “is not running as smoothly” (s) intertwines with running smoothly (v)
– “throw sand in the gears” (s) intertwines with wiping off (v)
– “steer in the opposite direction” (s) intertwines with drifting apart (v)

Apparently, the verbal metaphors are part of a complex metaphoric mapping that takes place in time. Due to the dynamic multimodal interplay, a scenario unfolds that can be described as both imagined and perceptual. Verbal and visual articulations dynamically depict a nautical setting that is experienced through machines in motion, which have to be cleaned in order to keep running and thus to be able to move the ship. And the ship, its engine, maintenance worker, and steersman link, on an embodied level, with the three targets of the verbal metaphors: economy, financial crisis, and government. According to


Fig. 166.2: Along the temporal course a threefold interaction takes place: sensory qualities elicited multimodally both contradict and match. (13-second extract from a German TV news report, taken from the Tagesschau, ARD, 20.10.2008, 8:00–8:15 pm, English translation)

this fluent unfolding of the scenario – or, so to speak, this dynamic multimodal imagery – an economic and political situation is imagined that is seen as a nautical one. Moreover, it is a scenario of a subtly experienced conflict: on the one hand, some aspects of the verbal imagery sensorily correspond to the audio-visual imagery, while on the other hand others contradict it (Fig. 166.2). So far, we have outlined that verbal metaphors are only small elements in the construction of a multimodal metaphor. Put differently, we have shown how the concrete perceptive and sensory experiencing of dynamic visuality complements the act of comprehending speech. It does so due to the sensory qualities of an imagined and a perceived scenario, created by verbal and audio-visual imagery. We will return to this, but for now we first extend the analysis and take a closer look at the dimension of temporality. By looking at the specific cinematic expressive movement and its unfolding over the course of the three shots, we are not analyzing a succession of single elements. Instead, the focus is on the embodied and affective experience that spectators go through: the overall temporal gestalt of the extract. To put it figuratively, in analogy to music: we will now look at the melody and not at singular chords.

3.2. Temporality – an accordance in the course of movement

In its dynamic unfolding, audio-visual staging – such as camera movement or editing – creates different movement patterns that, along with their changes in time, address spectators affectively. With regard to the extract from the news report, it is especially the montage of three shots by which such a cinematic expressive movement is articulated. The flow of images and its specific movement pattern – realized only in the perceptive act of the spectators – can be described as follows: an intense and heavy, circulating up-and-down movement transforms into a more reduced one and finally turns into a slow, gliding, broadening horizontal movement. Thus, for the whole gestalt a movement quality


IX. Embodiment

Fig. 166.3: The montage of the three shots unfolds temporally as an expressive movement (with the pattern of slowing down). In accordance with the movement is the verbal utterance “weakening”. (13-second extract from a German TV news report, taken from the Tagesschau, ARD, 20.10.2008, 8:00–8:15 pm, English translation)

of slowing down is realized, modulating the sensory experience of spectators in the very moment of media reception (Fig. 166.3). It is important to note that the slowing down is not a movement of objects or persons depicted in a single shot, but a movement that is articulated aesthetically by the images unfolding in time through montage. Furthermore, the dynamic pattern of slowing down maps onto the word “weakening”, which is articulated in the voice-over’s next-to-last metaphoric expression: “The German economy expects a significant weakening next year.” The sensory qualities addressed by the verbally articulated imagery (i.e., the verbal metaphoric expression “weakening”) and the audio-visually articulated pattern (the slowing down) are in accordance with each other: together, they highlight the experience of “reducing power”. To qualify more specifically the affective course by which spectators are addressed, it is necessary to focus on the overall situated metaphoric meaning making in the news report extract. This also leads us to some methodological implications regarding metaphor identification in audio-visuals, which can be outlined only briefly in this article.

3.3. Metaphor emergence

The audio-visual movement eliciting the experience of slowing down goes hand in hand with the ongoing experience of conflict, as realized by the interplay of verbal and visual sensory qualities. Through this conflict, which takes place for spectators on a perceptive level, what is said verbally becomes realized as concrete experience: the threat that the financial crisis may soon have significant effects on the German economy is thus not only stated by the voice-over commentary’s speech. Rather, the report reveals that metaphor is an embodied vehicle, that is to say, a means of concrete embodied transmission.


Fig. 166.4: Metaphor emergence due to temporal and multimodal unfolding. (13-second extract from a German TV news report, taken from the Tagesschau, ARD, 20.10.2008, 8:00–8:15 pm, English translation)

The incongruent and conflictual quality emerging from the verbal metaphors, which interact with the visuals, increasingly resonates affectively. Due to the overall repetitive pattern, each of the three described “steps” intensifies the tension between the machines actually seen in motion and the verbal articulation of a future threat (“may continue”, “expects”, “wants to”). This leads to a structure of anticipation that is realized as a feeling of uncertainty through an affective course: “economy” and “financial crisis” are experienced and understood through an ongoing contradictory relation between audio-visual staging and language. Moreover, the movement pattern from high dynamics to a slowed-down drifting thus intertwines with the course of the threefold contradiction. We suggest that thereby, for the spectator, the perception of the economy is mapped onto the concrete conflictual scenario and the bodily experience of slowing down. The braking, reducing dynamics go hand in hand with the verbal utterance of anticipation and transform, within an affective course, into an increasing uncertainty. Emerging from this concrete experiencing is a dynamic metaphor that encompasses the entire sequence, because it is established and elaborated over the course of the three shots and comprises verbal and audio-visual articulation. Drawing on the preceding analysis, it can be formulated as follows: economy as huge ship, difficult to steer and to keep running. Such meaning making is an act of conceptualizing that is grounded affectively. The metaphor is composed dynamically and multimodally of different elements (see also Fig. 166.4):

– The economy is articulated verbally (by the words “economy” and “economic cycle”).


– The ship is seen in the last shot, but its quality, its extensiveness, is constructed step by step: from an interior to an exterior view, from the inside of a ship (machines in motion) to the outside. Furthermore, the aesthetic staging (shot size, camera angle, visual composition) creates the impression of a huge and heavy mass.
– The difficulty of steering and keeping running is intrinsically bound to the affective course that spectators go through: this increasing uncertainty is turned into an affective experience by the orchestration of the repeated verbalization of a future threat in concert with the visual presentation of the machine in motion. This particular interaction of verbal and visual expression creates a movement pattern of an increasing slowing down.

This is the affective course that spectators realize in their perception of this piece. We therefore conclude that what spectators grasp as the information of the news report does not evolve from disembodied cognitive operations. Rather, through the emerging metaphor, an affective sensing is part of the conceptualizing. Metaphoric meaning making becomes active through actually lived, i.e., experienced realms that go hand in hand with the imagination evoked in the process of understanding language. The difficulty of steering and the threat of a machine that stops running are such sensorily based meanings, which evolve from the scenario in which they are presented. Put differently, the expressive movement’s pattern (the slowing down) – which unfolds only in the spectator’s perception and not on the level of object movement – brings affective and experiential sense into the process of understanding. Spectators realize the machines in motion, which turn out to be the gears of a drifting ship, through a concrete feeling of becoming slower, of braking. In short, verbal and audio-visual images merge and create an emergence of meaning.
It is crucial to bear in mind our remark from the beginning that we regard these reconstructed affective and cognitive processes as only one level of media experience: the level which concerns the very basic mediatized form of addressing and appealing to the spectator in a specific way. Of course, within the reception of the analyzed news report other reactions may be evoked as well, due to its media-contextual framing and due to the fact that each situation of media reception is also shaped individually by each spectator (who contributes personal, social, and cultural factors to the reception setting). Nevertheless, and in distinction to this, we target the properties of the medium’s articulation, which can be assumed to be individually embodied and intersubjectively shared by all viewers, and which entail rhetorical, poetic, and aesthetic means to elicit a meaning making that is grounded in affective sensing. In the following, we discuss this observation with regard to its implications for metaphor research more generally.

4. Some implications for metaphor research: Mapping in time

The preceding case study has shown that, even when verbal idioms are used, getting an idea of metaphoricity in audio-visual communication methodologically cannot be a matter of referring to dictionaries in which literal and respective figurative meanings are determined (such an approach to metaphor identification has recently been suggested especially for a corpus-linguistic take on verbal texts; see, e.g., Steen et al. 2010). Furthermore, in audio-visuals the metaphoric mapping process can only secondarily be about characteristics that two conceptual domains share “as such”, as Conceptual Metaphor Theory claims. Instead, we propose that metaphoric meaning in audio-visuals primarily arises and unfolds in a situated manner. The respective language and media use itself thus provides the reference frame that is significant in the first place. This position has been developed within the context of the emergence and activation of metaphoricity (Kappelhoff and Müller 2011; Müller 2008; Müller and Tag 2008). It is influenced by various other approaches dealing with the dynamics of meaning construction and metaphor: the dynamics of systematic metaphors (Cameron 2007, 2011), the construction and shifting of meaning in interaction theory (Black 1962), and the metaphorical blend’s “emergent meaning” in conceptual integration theory (Fauconnier and Turner 2003). Because of this dynamic and constructive view of metaphoricity, we speak of a mapping in time. We understand mapping as dynamically creating two present realms and bringing them together in an unfolding process through which one sheds light on the other. The term mapping is thus not meant to refer to finding already established similarities in semantic fields or conceptual domains beyond a situated media context, but to making interactions possible, constructing similarities ad hoc through images, sensory qualities, or motifs. To bring forward a dynamic view (in Müller’s sense) of metaphor as situatedly arising and unfolding in audio-visual communication, we would like to offer a new terminology with regard to the domains involved in a metaphoric phenomenon. We conceive of the interacting parts as “image-receiving field” and “image-offering field” (Weinrich 1983; see also Müller 2008), but we broaden the meaning of Harald Weinrich’s terms by interpreting them in a strongly phenomenological sense. We therefore speak of two emerging experiential realms that interact and construct the metaphor in film and audio-visuals.
Both of these fields or realms are supposed to be imaginarily and perceptively present while spectators experience audio-visual media (films, TV shows, etc.). Notably, the mapping of these realms does not primarily build on assumed similarities, but is grounded in relations evoked by the concrete audio-visual context. Investigating further semantic and cognitive implications of these realms (e.g., whether they match commonly known, pre-existing concepts) then appears to be a step of its own. More particularly, it is a step of research that goes beyond the focus on basic metaphoric meaning making. Regarding the analyzed TV news report, the two experiential realms dynamically emerge and map onto each other through the multimodally articulated sensory qualities. The first experiential realm (see Fig. 166.5), which evokes the principal subject, is articulated exclusively on the verbal level – this holds also for the first part of the metaphor: economy. The role of “economy” as a protagonist (with the financial crisis as its invisible but effective antagonist) is made clear right from the beginning. The second experiential realm (see Fig. 166.5), establishing the subsidiary subject, in turn emerges highly multimodally – this holds also for the second part of the metaphor: huge ship, difficult to steer and to keep running. A nautical setting is present in an “affective stance” (Müller volume 1; cf. also Horst et al. this volume) of increasing uncertainty, both due to the scenario evoked by the interplay of sensory qualities addressed and articulated in language and audio-visual images, and due to the expressive movement’s pattern. (For an overview of the mapping in time see Fig. 166.5.)

Fig. 166.5: How two experiential realms dynamically emerge and are mapped onto each other in time. (13-second extract from a German TV news report, taken from the Tagesschau, ARD, 20.10.2008, 8:00–8:15 pm, English translation)

Using the term scenario is of course only possible in a perspective that accounts for an embodied activity of the spectator. Following this line of thought, the audio-visual and verbal orchestration creates realms of experience that the spectator goes through: audio-visual media not only articulate different modalities in time, but at the same time compose different thematic elements in order to create new meanings through the shifting of attentional foci. Thereby, what primarily comes into view – and what can be descriptively reconstructed on the level of phenomena developing in time – is how an embodied act of conceptualization is activated. The mapping in time is then an operation of merging similarities and differences of a scenario that relates these two emerging realms within a concrete and embodied act, in which a basic metaphoric meaning making is established. Naturally, we do not deny that “economy” is a concept. But we follow Daniel Casasanto’s notion that to speak about concepts is not to claim that “an essential concept of cat, or game, or happiness is shared by all people at all times, or that our flexible thoughts are instantiations of invariant concepts” (2013: 10, emphasis in original). Accordingly, it is our aim to grasp how a concept like “economy” comes into being, how it is contextualized in discourse, and how spectators comprehend it while, e.g., perceiving a news report. In a nutshell, we are interested in how actual experience (based on seeing, hearing, and language understanding) becomes a concept. It is in this sense that we conceive of metaphoric meaning making as a basic form of abstracting a situated essence from an actual “flow of experience” (Johnson 2007: 92). Approaching metaphoric meaning making via mapping in time thus emphasizes the particular character and temporality of communicative processes, as outlined here with regard to media reception.

5. Conclusion: Embodiment – affective grounding of meaning

It was our aim to outline meaning making in audio-visual communication as being affectively grounded. We have therefore demonstrated how the approach of Multimodal Metaphor and Expressive Movement accounts for (i) media reception as a temporal and multimodal communication of audio-visuals and spectators, taking place situatedly; and (ii) metaphor as a very concrete affective and embodied process of meaning making. (i) The dynamic view of metaphor emergence focuses on investigating pragmatic contexts of embodiment, i.e., language and media use. By using a qualitative-descriptive method for analyzing the complexity and actual unfolding of phenomena, what is at center stage is a phenomenological reconstruction of how the spectator is addressed. In other words: it is reconstructed how the processes of affective and cognitive reception are modulated by the temporality, the multimodality, and the aesthetics of audio-visual media. (ii) Lakoff and Johnson have given us the apt wording that metaphor is an “experiencing and understanding of one kind of thing in terms of another” (1980: 3). With regard to metaphors as manifesting dynamically, as shifting and transforming permanently in concrete communicative acts (Müller 2008), this can be specified as metaphor’s pragmatic establishing of meaning. By primarily focusing on aesthetics, perceptions, expressive movement patterns, and the “rich image meaning” of verbal utterances (Müller 2008: 94), we suggest that a basic understanding of media contents can be reconstructed by applying a form-based metaphor analysis that builds upon the emerging and dynamic mappings of experiential realms, which are always constitutively and inherently bound to affectivity.
Accounting in such a way for “embodied meaning construction” (Kappelhoff and Müller 2011) thus paves the way to deepening the insights into the complex and variable ways metaphoricity is dynamically established in the broad variety of audio-visual media communication. Moreover, it contributes a new perspective to film and media studies’ debates on the narration, interpretation, and reception of audio-visuals (for current positions, see, e.g., Anderson and Fisher Anderson 2007; Kaul, Palmier, and Skrandies 2009; Oakley and Tobin 2012; Suckfüll 2010; Verstraten 2009; Wuss 2009). To conceive of aesthetics and of represented topics in audio-visuals phenomenologically, as being communicated through concrete media experience (Schmitt and Greifenstein this volume), offers perspectives for reconsidering the entrenched distinction between “content” and “form” in mass media research (cf., e.g., Detenber and Lang 2011; Unz 2011) and the concomitant models of communication and emotion. Furthermore, it fruitfully links up with the embodiment discourse in the cognitive sciences (e.g., Casasanto 2013; Johnson 2007), especially when informed by phenomenology (Gallagher 2008; Sheets-Johnstone 2008; see also Gibbs 2006). By understanding audio-visual images as affectively expressive, our account of Multimodal Metaphor and Expressive Movement also links to the concept of vitality affects and affect attunement, both in face-to-face communication (Stern 1985) and in dance, art, and film (Stern 2010; for an account of film, see especially Bellour 2005). In reconstructing the aesthetic addressing of perceptive, affective, and cognitive activity, such an approach to embodied meaning construction also offers perspectives for cognitive neuroscience research interested in “language and the motor system” (Cappa and Pulvermüller 2012). To sum up: we suggest these connections to current embodiment research because we regard audio-visuals not merely as showing us objects or persons (the film-as-window metaphor); rather, we consider them a medium that creates perceptions, images, and new ways of thinking and imagination in the spectators (film as intertwining both the picture-frame and the window metaphor). With the case study, we wanted to demonstrate that for spectators a process of situated meaning making takes place. This process is an activity of experiencing and understanding one thing in terms of another that arises from a temporal flow of audio-visual images. Film and audio-visuals generally present image-like scenarios to us through speech and audio-visual expressivity. In essence, one can say that perceived movement is suggested to establish a certain form of dynamic figurative thinking: metaphor is at the heart of the situated act of conceptualizing when watching films and audio-visual media, as it is highly nourished by different forms of imagery. The spectators’ processes of online meaning construction are assumed to be grounded in affective resonances that are modulated by shifts of tempo, rhythm, and the sensory qualities of audio-visual movement.
Thinking and feeling in audio-visuals are dependent on the various means of aesthetics, the various means of cinematic expressive movement.

Acknowledgements

The presented work is an outcome of the interdisciplinary project “Multimodal Metaphor and Expressive Movement”, headed by Hermann Kappelhoff and Cornelia Müller at the Cluster of Excellence “Languages of Emotion” of the Freie Universität Berlin in cooperation with the European University Viadrina Frankfurt (Oder).

6. References

Anderson, Joseph D. and Barbara Fisher Anderson (eds.) 2007. Narration and Spectatorship in Moving Images. Newcastle: Cambridge Scholars Publishing.
Andrew, Dudley J. 1976. The Major Film Theories. An Introduction. London/Oxford/New York: Oxford University Press.
Aumont, Jacques 1997. The Image. London: British Film Institute.
Bakels, Jan-Hendrik this volume. Embodying audio-visual media. Concepts and transdisciplinary perspectives. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2), 2048–2061. Berlin/Boston: De Gruyter Mouton.
Balázs, Béla 2010. Visible man or the culture of film. In: Erica Carter (ed.), Béla Balázs: Early Film Theory. Visible Man and the Spirit of Film, 1–90. Oxford: Berghahn Books. First published [1924].

166. Expressive movement and metaphoric meaning making in audio-visual media


Bellour, Raymond 2005. Das Entfalten der Emotionen. In: Matthias Brütsch, Vinzenz Hediger, Ursula von Keitz and Margit Tröhler (eds.), Kinogefühle. Emotionalität und Film, 51–102. Marburg: Schüren.
Black, Max 1962. Models and Metaphors. Studies in Language and Philosophy. Ithaca, NY: Cornell University Press.
Bordwell, David and Kristin Thompson 2001. Film Art. An Introduction. New York: McGraw-Hill.
Cameron, Lynne 2007. Patterns of metaphor use in reconciliation talk. Discourse and Society 18(2): 197–222.
Cameron, Lynne 2011. Metaphor and Reconciliation: The Discourse Dynamics of Empathy in Post-Conflict Conversations. New York: Routledge.
Cappa, Stefano F. and Friedemann Pulvermüller 2012. Cortex special issue: Language and the motor system. Editorial. Cortex 48(7): 785–787.
Casasanto, Daniel 2013. Different bodies, different minds: The body-specificity of language and thought. In: Rosario Caballero and Javier E. Díaz Vera (eds.), Sensuous Cognition. Explorations into Human Sentience: Imagination, (E)motion and Perception, 9–17. Berlin/New York: De Gruyter Mouton.
Coëgnarts, Maarten and Peter Kravanja 2012. From thought to modality: A theoretical framework for analysing structural-conceptual metaphors and image metaphors in film. Image and Narrative 13(1): 96–113.
Dalle Vacche, Angela (ed.) 2003. The Visual Turn. Classical Film Theory and Art History. New Brunswick, NJ: Rutgers University Press.
Deleuze, Gilles 2008a. Cinema 1. The Movement Image. London: Continuum. First published [1983].
Deleuze, Gilles 2008b. Cinema 2. The Time Image. London: Continuum. First published [1985].
Detenber, Benjamin H. and Annie Lang 2011. The influence of form and presentation attributes of media on emotion. In: Katrin Döveling, Christian von Scheve and Elly A. Konijn (eds.), The Routledge Handbook of Emotions and the Mass Media, 275–293. London: Routledge.
Elsaesser, Thomas and Malte Hagener 2010. Film Theory. An Introduction Through the Senses. New York: Routledge.
Fahle, Oliver 2002. Zeitspaltungen. Gedächtnis und Erinnerung bei Gilles Deleuze. montage AV 11(1): 97–112.
Fahlenbrach, Kathrin 2008. Emotions in sound. Audio-visual metaphors in the sound design of narrative films. Projections: The Journal for Movies and Mind 2(2): 85–103.
Fahlenbrach, Kathrin 2010. Audiovisuelle Metaphern. Zur Körper- und Affektästhetik in Film und Fernsehen. Marburg: Schüren.
Fauconnier, Gilles and Mark Turner 2003. The Way We Think. Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic Books.
Forceville, Charles and Marloes Jeulink 2011. The flesh and blood of embodied understanding: The Source-Path-Goal schema in animation film. Pragmatics and Cognition 19(1): 37–59.
Gallagher, Shaun 2008. Understanding others: Embodied social cognition. In: Paco Calvo and Antoni Gomila (eds.), Handbook of Cognitive Science: An Embodied Approach, 439–452. San Diego/Oxford/Amsterdam: Elsevier.
Gibbs, Raymond 2006. Embodiment and Cognitive Science. Cambridge: Cambridge University Press.
Glicksohn, Joseph and Chanita Goodblatt 1993. Metaphor and gestalt: Interaction theory revisited. Poetics Today 14(1): 83–97.
Grady, Joseph E., Todd Oakley and Seana Coulson 1997. Blending and metaphor. In: Raymond W. Gibbs and Gerard J. Steen (eds.), Metaphor in Cognitive Linguistics. Selected Papers from the 5th International Cognitive Linguistics Conference, 101–124. Amsterdam/Philadelphia: John Benjamins.
Greifenstein, Sarah and Hermann Kappelhoff this volume. The discovery of the acting body. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in


Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2070–2080. Berlin/Boston: De Gruyter Mouton.
Harcup, Tony 2009. Journalism. Principles and Practice. Los Angeles: SAGE.
Holly, Werner 2010. Besprochene Bilder – bebildertes Sprechen. Audiovisuelle Transkriptivität in Nachrichtenfilmen und Polit-Talkshows. In: Arnulf Deppermann and Angelika Linke (eds.), Sprache intermedial: Stimme und Schrift, Bild und Ton, 359–382. Berlin/New York: De Gruyter Mouton.
Horst, Dorothea, Franziska Boll, Christina Schmitt and Cornelia Müller this volume. Gesture as interactive expressive movement: Inter-affectivity in face-to-face communication. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2112–2125. Berlin/Boston: De Gruyter Mouton.
Johnson, Mark 2007. The Meaning of the Body. Aesthetics of Human Understanding. London/Chicago: University of Chicago Press.
Jutz, Gabriele and Gottfried Schlemmer 1989. Vorschläge zur Filmgeschichtsschreibung. Mit einigen Beispielen zur Geschichte filmischer Repräsentations- und Wahrnehmungskonventionen. In: Knut Hickethier (ed.), Filmgeschichte schreiben. Ansätze, Entwürfe und Methoden, 61–67. Berlin: Edition Sigma.
Kappelhoff, Hermann 1998. Empfindungsbilder: Subjektivierte Zeit im melodramatischen Kino. In: Theresia Birkenhauer and Anette Storr (eds.), Zeitlichkeiten – Zur Realität der Künste, 93–119. Berlin: Vorwerk 8.
Kappelhoff, Hermann 2004. Matrix der Gefühle. Das Kino, das Melodrama und das Theater der Empfindsamkeit. Berlin: Vorwerk 8.
Kappelhoff, Hermann 2008. Die Anschaulichkeit des Sozialen und die Utopie Film. Eisensteins Theorie des Bewegungsbildes. In: Gottfried Boehm, Birgit Mersmann and Christian Spies (eds.), Movens Bild. Zwischen Evidenz und Affekt, 301–324. München: Wilhelm Fink.
Kappelhoff, Hermann and Jan-Hendrik Bakels 2011. Das Zuschauergefühl. Möglichkeiten qualitativer Medienanalyse. Zeitschrift für Medienwissenschaft 5(2): 78–95.
Kappelhoff, Hermann and Cornelia Müller 2011. Embodied meaning construction. Multimodal metaphor and expressive movement in speech, gesture, and feature film. Metaphor and the Social World 1(2): 121–153.
Kaul, Susanne, Jean-Pierre Palmier and Timo Skrandies (eds.) 2009. Erzählen im Film. Unzuverlässigkeit – Audiovisualität – Musik. Bielefeld: Transcript.
Koebner, Thomas and Thomas Meder (eds.) 2006. Bildtheorie und Film. München: Edition Text + Kritik.
Kövecses, Zoltán 2000. Metaphor and Emotion. Language, Culture and Body in Human Feeling. Cambridge: Cambridge University Press.
Lakoff, George and Mark Johnson 1980. Metaphors We Live By. Chicago: University of Chicago Press.
Langacker, Ronald W. 2010. How not to disagree: The emergence of structure from usage. In: Kasper Boye and Elisabeth Engberg-Pedersen (eds.), Language Usage and Language Structure, 107–144. Berlin/New York: De Gruyter Mouton.
Marks, Laura 2000. The Skin of the Film. Intercultural Cinema, Embodiment, and the Senses. Durham: Duke University Press.
Marks, Laura 2002. Touch. Sensuous Theory and Multisensory Media. Minneapolis: University of Minnesota Press.
Merleau-Ponty, Maurice 1964. The film and the new psychology. In: Sense and Non-Sense, 48–59. Evanston, IL: Northwestern University Press. First published [1948].
Milburn, Michael A. and Anne B. McGrail 1992. The dramatic presentation of news and its effects on cognitive complexity. Political Psychology 13(4): 613–632.
Mitchell, William John Thomas 1984. What is an image? New Literary History 15(3): 503–537.

Müller, Cornelia 2008. Metaphors Dead and Alive, Sleeping and Waking. A Dynamic View. Chicago: University of Chicago Press.
Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.), 201–217. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia and Silva H. Ladewig 2014. Metaphors for sensorimotor experiences. Gestures as embodied and dynamic conceptualizations of balance in dance lessons. In: Mike Borkent, Barbara Dancygier and Jennifer Hinnell (eds.), Language and the Creative Mind. Stanford: CSLI.
Müller, Cornelia and Susanne Tag 2010. The dynamics of metaphor. Foregrounding and activation of metaphoricity in conversational interaction. Cognitive Semiotics 6: 85–120.
Münsterberg, Hugo 2002. The photoplay – a psychological study. In: Allan Langdale (ed.), Hugo Münsterberg on Film. The Photoplay – A Psychological Study and Other Writings, 45–162. New York/London: Routledge. First published [1916].
Oakley, Todd and Vera Tobin 2012. Attention, blending, and suspense in classic and experimental film. In: Ralf Schneider and Marcus Hartner (eds.), Blending and the Study of Narrative. Approaches and Applications, 57–83. Berlin: Mouton de Gruyter.
Peter, Nicole, Christine Knoop, Catarina von Wedemeyer and Oliver Lubrich 2012. Sprachbilder der Krise. Metaphern im medialen und wirtschaftlichen Diskurs. In: Anja Peltzer, Kathrin Lämmle and Andreas Wagenknecht (eds.), Krise, Cash und Kommunikation. Die Finanzkrise in den Medien, 49–70. Konstanz: UVK.
Plessner, Helmuth 1970. Laughing and Crying: A Study of the Limits of Human Behavior. Evanston: Northwestern University Press.
First published [1941].
Scherer, Thomas, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movements in audio-visual media: Modulating affective experience. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2081–2092. Berlin/Boston: De Gruyter Mouton.
Schmitt, Christina and Sarah Greifenstein this volume. Cinematic communication and embodiment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2061–2070. Berlin/Boston: De Gruyter Mouton.
Sheets-Johnstone, Maxine 2008. Getting to the heart of emotions and consciousness. In: Paco Calvo and Antoni Gomila (eds.), Handbook of Cognitive Science: An Embodied Approach, 453–465. San Diego/Oxford/Amsterdam: Elsevier.
Sobchack, Vivian 1992. The Address of the Eye. A Phenomenology of Film Experience. Princeton: Princeton University Press.
Sobchack, Vivian 2004. Carnal Thoughts. Embodiment and Moving Image Culture. Berkeley: University of California Press.
Steen, Gerard J., Aletta G. Dorst, J. Berenike Herrmann, Anna Kaal, Tina Krennmayr and Trijntje Pasma 2010. A Method for Linguistic Metaphor Identification: From MIP to MIPVU. Amsterdam: John Benjamins.
Stern, Daniel N. 1985. The Interpersonal World of the Infant. New York: Basic Books.
Stern, Daniel N. 2010. Forms of Vitality. Exploring Dynamic Experience in Psychology and the Arts. Oxford/New York: Oxford University Press.
Suckfüll, Monika 2010. Films that move us. Moments of narrative impact in an animated short film. Projections: The Journal for Movies and Mind 4(2): 41–63.
Ulrich, Anne 2012. Umkämpfte Glaubwürdigkeit.
Visuelle Strategien des Fernsehjournalismus im Irakkrieg 2003. Berlin: Weidler.



Unz, Dagmar 2011. Effects of presentation and editing on emotional responses of viewers. The example of TV news. In: Katrin Döveling, Christian von Scheve and Elly A. Konijn (eds.), The Routledge Handbook of Emotions and the Mass Media, 294–309. London: Routledge.
Unz, Dagmar, Frank Schwab and Peter Winterhoff-Spurk 2008. TV news – The daily horror? Journal of Media Psychology. Theories, Methods, and Applications 20(4): 141–155.
Uribe, Rodrigo and Barrie Gunter 2007. Are ‘sensational’ news stories more likely to trigger viewers’ emotions than non-sensational news stories? A content analysis of British TV news. European Journal of Communication 22(2): 207–228.
Verstraten, Peter 2009. Film Narratology. Toronto: University of Toronto Press.
Voss, Christiane 2011. Film experience and the formation of illusion: The spectator as ‘surrogate body’ for the cinema. Cinema Journal 50(4): 136–150.
Weinrich, Harald 1983. Semantik der kühnen Metapher. In: Anselm Haverkamp (ed.), Theorie der Metapher, 316–339. Darmstadt: Wissenschaftliche Buchgesellschaft. First published [1963].
Whittock, Trevor 1990. Metaphor and Film. Cambridge: Cambridge University Press.
Winterhoff-Spurk, Peter 1998. TV news and the cultivation of emotions. Communications 23(4): 545–556.
Wuss, Peter 2009. Cinematic Narration and its Psychological Impact. Functions of Cognition, Emotion and Play. Newcastle upon Tyne: Cambridge Scholars Publishing.
Zink, Veronika, Sven Ismer and Christian von Scheve 2012. Zwischen Bangen und Hoffen. Die emotionale Konnotation des Sprechens über die Finanzkrise 2008/2009. In: Anja Peltzer, Kathrin Lämmle and Andreas Wagenknecht (eds.), Krise, Cash und Kommunikation. Die Finanzkrise in den Medien, 23–48. Konstanz: UVK.

News coverage: Tagesschau, 20.10.2008, 8:00–8:15 pm, ARD (Consortium of Public Service Broadcasters in Germany)

Christina Schmitt, Berlin (Germany) Sarah Greifenstein, Berlin (Germany) Hermann Kappelhoff, Berlin (Germany)

167. Gesture as interactive expressive movement: Inter-affectivity in face-to-face communication

1. Introduction: The expressive function of gesture
2. Gesture as interaction: Expressing and perceiving gestures
3. Gesture as inter-affectivity: Temporal and interactive emergence and development of affect
4. Conclusion
5. References

Abstract

Gestures have an expressive function in everyday communication. As such, they display emotional experiences immediately. The concept of “gestures as interactive expressive movements” underlines that those attitudes evolve inter-subjectively. By outlining a perspective based on phenomenology and anthropological philosophy (Merleau-Ponty and Plessner), this chapter presents an interactive view of gesture and bodily behavior. It proposes that the modulation of affect and the emergence of emotions are a dynamic feature of the flow of interaction. It suggests that gestures as interactive expressive movements are dynamic acts of modulating affects and of expressing and perceiving emotions. It illustrates this perspective on gestures with a micro-analysis of a face-to-face conversation. The analysis offers significant insights into the situational emergence of shared affectivity and its development over the course of a conversation. It accounts for gesture as an embodied phenomenon through which shared attitudes and mutual understanding are negotiated dynamically and interactively. In conclusion, we argue that an account of gestures as expressive movements points to an embodied idea of sense making and of a mutual understanding that is based on shared corporeality instead of separated individuality.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 2112–2124

1. Introduction: The expressive function of gesture

Over the past years, much work in the field of gesture studies has investigated gesture in its everyday contexts of use. Emphasis has been placed especially on its semantic, pragmatic, and cognitive dimensions. As a result, gesture has been shown to be a dynamic and interactive communicative resource of participants in a discourse, one which inherently intertwines with speech in the emergence of meaning. Adam Kendon (1980: 208) coined the term “gesture-speech ensembles” to underline that speech and gesture are closely linked parts of one and the same process of utterance. As such, gesture promises insights into how “individual forms of expression are transformed by social processes into socially shared communicative codes” (Kendon 2004: 3). David McNeill (1992, 2005) shared this point of view and expanded it with a cognitive perspective. Drawing additionally on Wilhelm Wundt’s (1921) view that gesture externalizes cognitive “inner forms”, he characterized gesture as a “window onto thought” which offers insights into the conceptual basis of language (McNeill 1992: 29). However, this primary focus on the semantic, pragmatic, and cognitive aspects of gesture has left one essential property disregarded: its affective dimension, i.e., the role of gesture in the modulation of affects and the emergence of emotions in face-to-face communication. More specifically, we suggest that with their gestures, speakers not only display individual and shared ideas of things in the world. Rather, and simultaneously, they articulate interactively evolved emotional states. By considering gestures as expressive movements, we address this affective dimension.
Drawing on Helmuth Plessner ([1925] 1982), Maurice Merleau-Ponty ([1945] 2005), and Mark Johnson (2006, 2007), we consider the use of gestures in interaction as an inter-subjective act of expressing and perceiving affective stances (see Kappelhoff 2004; Kappelhoff and Müller 2011). In concrete terms, gestures express affective stances of a speaker and their resonance in an interlocutor at once. The expressive function of gestures is thus of a temporal and deeply inter-subjective nature. This theoretical approach is in line with current phenomenologically informed views on social cognition, coming both from cognitive sciences and psychology (de Jaegher, di Paolo, and Gallagher 2010; Froese and Fuchs 2012; Fuchs and de Jaegher 2009; Gallagher 2008). It is also in line with phenomenological ways of thinking about embodiment that have found their way into cognitive sciences and cognitive linguistics in particular (Gibbs 2006; Zlatev 2010).



In the following, we will suggest that Karl Bühler’s (1933, [1934] 1999) linguistic organon model of language as communication combines effectively with phenomenological and anthropological considerations of the body’s expressivity. This assumption will then be illustrated by an analysis of face-to-face communication. Obviously, a different analytic focus such as the one proposed here requires new descriptive methods. Our analysis can therefore also be conceived of as a paradigmatic methodological case study.

2. Gesture as interaction: Expressing and perceiving gestures

The eminent psychologist Karl Bühler (1933, 1999) had already integrated gesture into his theory of language and expression. Though he did not discuss gesture in relation to his organon model of language, he declares in his theory of expression that gesture is functionally equivalent to language (Bühler 1933: 40). Elsewhere, Cornelia Müller has proposed that gestures can indeed “fulfill the same basic functions as language” (Müller volume 1: 200). They can be “used to express inner states, to appeal to somebody […], and to represent objects and events in the world” (Müller volume 1: 200). Notably, a core assumption of Bühler’s functional model of language is that all three functions – expression, appeal, and representation – are simultaneously present in each communicative act (Bühler 1999: 32). It is exactly this expressive function of language and gesture which has not received much attention in the study of gestures in relation to language and communication, probably due to a widespread reduction of gestures to non-verbal body language (e.g., Watzlawick, Bavelas, and Jackson 1967; see also Müller 1998; Müller, Ladewig, and Bressem volume 1). Nor has the recently growing interest in the embodied nature of mind within cognitive linguistics led to an increased recognition of the affective aspects of embodiment. On the contrary, embodiment has been treated almost exclusively as relevant with regard to the body’s influence on the way people think and speak. For instance, even work focusing on the sensorimotor perceptions and felt experiences which find expression in gestural movements (e.g., Hostetter and Alibali 2008) has discussed those processes neither with regard to emotion expression nor with regard to affective expressivity. As a consequence, it has been widely disregarded that movement is closely linked to emotion and affect.
Such a position, in turn, has been advanced by scholars informed by a phenomenological stance towards the body in communication, and this is the position that we would like to advocate in this chapter (see, e.g., Flach, Söffner, and Margulies 2010; Gibbs 2006; Greifenstein and Kappelhoff this volume; Johnson 2007; Kappelhoff 2004; Kappelhoff and Bakels 2011; Kappelhoff and Müller 2011; Scherer, Greifenstein, and Kappelhoff this volume; Schmitt and Greifenstein this volume; Sheets-Johnstone 1999). Notably, this understanding of gesture and bodily behavior in everyday communication had already been formulated in the first half of the 20th century by phenomenology, Gestalt psychology, and anthropology. Drawing upon Bühler’s three functions (1933, 1999), Müller has suggested that gestures are always expression, appeal (address), and representation at the same time (Müller 2009, volume 1). The mere fact that somebody is employing gestures (instead of not moving the hands at all for communication) shows a high communicative effort on the part of the speaker (Müller 1998: 104; Müller and Tag 2010). Moreover, the particular movement quality of the gestural hand movement embodies its expressive quality (Müller 1998: 104). Put differently, gestures display a high degree of involvement of the speaker in conversation and qualify it affectively.

In his theory of expression, Bühler (1933: 39) recognizes this affective expressivity as a vital facet of a gesture. Giving the example of a drawing gesture, he notes that it can be performed in many different ways: once in a cheerful manner, another time in an angry one, yet another time in a frightened and hesitant one. In each case, the moving hands imitate an action, appeal to an interlocutor, and express an inner state. The different movement qualities express “our affective stance towards the object we are depicting” (Müller volume 1: 202). Thus, it is not only the what of gestural performance which discloses individual or discourse-specific perceptions but also the how, the particular movement quality. It is worth noting that this does not mean treating gesture as a symbol for encoded inner processes or feelings, as, for instance, Samy Molcho (1985) suggests. What happens instead when we see a gesture is that we immediately comprehend the other’s expression of affect without any decipherment or interpretation. It is exactly this intertwining of gestural expression and emotional experience which Merleau-Ponty points out:

Faced with an angry or threatening gesture, I have no need, in order to understand it, to recall the feelings which I myself experienced when I used these gestures on my own account. […] I do not see anger or a threatening attitude as a psychic fact hidden behind the gesture, I read anger in it. The gesture does not make me think of anger, it is anger itself. (Merleau-Ponty 2005: 184)

With this position, Merleau-Ponty distances himself from the concept of ‘empathy’ (Einfühlung), which was already being vividly discussed at the time. Within the paradigm of empathy, a person is able to recognize and understand the emotions of another because he or she projects his or her own recalled emotional and affective experience onto the other. In contrast to such a model, which assumes a gap between inner feeling and outer expression, Merleau-Ponty claims the following: “The meaning of a gesture thus ‘understood’ is not behind it, it is intermingled with the structure of the world outlined by the gesture, and which I take up on my own account” (2005: 186). Beyond the refusal of an arbitrary relation between expression and experience, what is important here is Merleau-Ponty’s notion of structure with regard to behavior. Subscribing with it to a Gestalt-psychological view of gesture, he uses the term “to designate specifically how a Gestalt is organized” (Embree 1980: 94; cf. Merleau-Ponty [1942] 1963). The gesture’s direct intelligibility with regard to emotion is hence grounded in its specific course, i.e., its particular structure. (This is what we have just called the how of gestural performance.) From this it follows that the temporal Gestalt of body movement and affective experience are assumed to be structurally congruent. As Max Wertheimer puts it: “When a man is timid, afraid or energetic, happy or sad, […] the course of his physical processes is Gestalt-identical with the course pursued by the mental processes” ([1925] 1997: 9). Due to such a congruency of movement and affect, comprehending a gesture is, for Merleau-Ponty, inherently tied to the body, which perceives the structure realized by the other’s gesturing. This is as much as to say that in this Gestalt structure, the expressing and perceiving of a gesture intertwine. To put it simply, interlocutors share emotional experiences through their bodies.
Such a phenomenological and Gestalt-psychological understanding of the human body’s expressivity takes up an idea already formulated in Plessner’s (1982) philosophical



anthropology. According to him, the psychic manifests (i.e., expresses) itself immediately and figuratively in the body, such as in gesture, and is thus likewise perceived and understood by others. Plessner elaborates on this social character of bodily movement in particular by introducing it as ‘expressive movement’ (Ausdrucksbewegung). Hermann Kappelhoff and Cornelia Müller summarize this concept as follows: “Expressive movement comes about a direct matching of affective alignments between individualized bodies” (Kappelhoff and Müller 2011: 134; cf., e.g., Plessner 1982: 83–127; for a more detailed account, see Kappelhoff 2004; Kappelhoff and Bakels 2011; Scherer, Greifenstein, and Kappelhoff this volume; Schmitt, Greifenstein, and Kappelhoff this volume). Hence, the specific way of performing a gesture, the expression of an affective stance, is nothing isolated and independent, unlike, for instance, Paul Ekman’s (1973) fairly static notion of discrete emotions expressed in the face. A speaker’s affective attitude at a given moment in conversation is rather inherently tied to the interaction with his interlocutors, to whom he reacts and who in turn react to him: “It would be a mistake to subjectivize these experiences of qualities of motion, as if they were locked up within some private inner world of feelings” (Johnson 2007: 25). On the contrary, “emotions are processes of organism-environment interactions. They involve perceptions and assessments of situations in the continual process of transforming those situations” (Johnson 2007: 66–67, italics by the authors). What this all amounts to is that affective experience in face-to-face communication is conceived of as an embodied, inter-subjective, and dynamic phenomenon: “[A]ffects are not enclosed in an inner mental sphere to be deciphered from outside but come into existence, change and circulate between self and other in the intercorporeal dialogue” (Fuchs and de Jaegher 2009: 479).
We situate ourselves within such a theoretical framework when we speak of gesture as expressive movement, that is, the expression of affect in the movement of the hands. Conceiving of gesture as an inter-subjective act of expression and perception offers a perspective that accounts for the expressive function of gestures in face-to-face communication. Thus, we conceive of gestures as a direct expression of our interlocutor’s psyche, and we think of inter-affectivity rather than of the affect expression of an individual speaker. From such a point of view, the expressivity of gestures calls up bodily resonances in the perceiving interlocutor (Kappelhoff and Müller 2011: 121). Accordingly, what happens during a conversation is that gestures establish a structure of constant affective exchange between interlocutors. In the following, we will offer a way to account for this inter-subjective character of affect by taking a dynamic and interactive perspective on gesture. On the basis of a descriptive micro-analysis of the deployment of gestures over the course of a three-party conversation, we will offer a reconstruction of gesture’s affective dimension in its situational and dynamic contexts.

3. Gesture as inter-affectivity: Temporal and interactive emergence and development of affect

Based on a micro-analysis of a three-party conversation, we will propose that affect emerges in temporal Gestalt-like units arising from the flow of interaction, for which we suggest the term interactive expressive movement unit. So far, we

have described the concept of gesture as expressive movement rather as a facet of single gestures. Now, we would like to enrich this understanding by introducing a unit of expression that is the result of an interaction and which realizes itself in the gesturing of several people conjointly. Notably, this is much more a shift in focus than a conceptual shift: affect in face-to-face communication is assumed to manifest itself as embodied inter-affectivity. Our analysis will document that affect is in fact a dynamic and shared “in-between” phenomenon (see Fuchs and de Jaegher 2009: 482), jointly created by the participating interlocutors. An interactive expressive movement unit is therefore a sequentially organized product of the joint gestural activities of co-participants in an interaction, one which, by definition, entails more than one gesture unit. Although our analysis starts from an account of hand gestures in conjunction with speech, we do take into account further aspects of the multimodal nature of the conversation. That is to say, the analysis also considers interactive expressive bodily behavior, e.g., intonation, facial expression, posture, and eye gaze, as well as verbal interactive behavior, e.g., turn-taking and verbo-gestural metaphoric expressivity. We consider all of them together as one Gestalt of shared affective experience, and we will reconstruct the emergence of shared affectivity among the interlocutors within a situational context.

3.1. Jointly created, jointly shaped: Situational and dynamic emergence of affect

We have chosen a conversation from a data corpus of video-recorded three-party conversations which were initiated by a thematic stimulus and lasted approximately half an hour. Three young women discuss their respective understandings of self-realization. They have different opinions about and different experiences with self-realization, and after about thirteen minutes a situation of conflict emerges. In their discussion, they have reached a point where the critical question is whether one conceives of self-realization as something arising from oneself or as something imposed from outside. In this context, the speakers find the metaphoric image of a whip, which is conceived of in different ways: either as something that is being used to whip oneself (an “internal whip”) or as an external device with which somebody is being whipped (i.e., is being forced from the outside). Two of the women (speakers A and C) agree with the idea of self-realization as something that should emerge from oneself, from inside. In this respect, speaker C describes what she does not consider to be self-realization. She explains: “I think it’s mean if somebody gives me an idea of self-realization, to which I should aspire, according to which I should, like a puppet, like yeah, looking for happiness, but behind it there is actually only ‘Hopp Hopp, do that and that and that’.” Though she does not use the existing German expression einpeitschen or mit der Peitsche antreiben – ‘whipping (on) somebody’ – she says the words typically associated with the act of whipping, namely Hopp Hopp. This exclamation can be related to the idiomatic expression of ‘whipping (on) somebody’, and it can therefore be considered a verbal metaphor.
Furthermore, before and while she is using this verbal metaphoric expression, her right hand is clenched into a fist which she moves quickly back and forth in front of her body: it enacts whipping (see Fig. 167.1). As speaker C is gesturally embodying what she has expressed verbally, metaphoric meaning can be considered as being activated (Müller 2008; Müller and Tag 2010). The third woman (speaker B) insistently defends the necessity of an "internal whip". According to her, fulfilling one's duties and forcing oneself to be disciplined is necessary


IX. Embodiment

Fig. 167.1: The “external whip” (speaker C)

Fig. 167.2: The “internal whip” (speaker B)

to achieve one's proper self-realization. In order to reject speaker C's argument, she makes it very clear that what she is imagining is not a situation in which somebody (from outside) is whipping her on. She says: "But I have the whip also inside my head." Subsequent to this verbal metaphor, she enacts whipping by performing beating movements towards her body with her right fist, as if holding a whip in it (see Fig. 167.2). As her hand is embodying the experiential source domain of the verbal metaphor, i.e., the whip, metaphoricity can be considered as being activated (Müller 2008; Müller and Tag 2010).

Note that the two gestures reveal subtle differences in the women's understanding of the role that whipping plays for self-realization. Speaker C performs the whipping movement away from her body and enacts how somebody whips somebody else: agent and patient are separate; the whip is external. In contrast, speaker B executes the movement as an imagined ego towards herself: agent and patient are the same; the whip is internal. These two contrasting ideas of whipping (and, respectively, of self-realization) are intensively discussed and negotiated by the three women within a two-and-a-half-minute-long sequence of confrontation. The situation culminates in an angry reply by speaker C, who once again verbally negates the necessity of an "external whip" for one's self-realization. As she is saying this, she emphasizes her refusal with a loud and energetic right-hand clap on her thigh. We suggest that this gesture, with its particular movement quality, is directly perceived as being expressive of speaker C's affective attitude (an angry outburst) at this very moment of the utterance. However, this energetic gesture did not arise out of the blue. It is embedded in and results from the interactive dispute about self-realization.
We conceive of the clapping gesture as part of an interactive gestural activity, i.e., as a consequence of previous utterances and as a motivation for subsequent reactions. As such, speaker C's emphatic gesture is, on the one hand, a communicative effort to make herself understood and to emphasize her active participation in the conversation. That is to say, in gesturing she expresses her individual involvement at this point in time. On the other hand, the movement quality of her gesture displays her attitude towards the discussed issue as a response to the interactive situation of confrontation. With its accentuated clapping movement, the gesture is embedded in an intensive affective phase of the conversation. This intense affectivity becomes apparent in the following ways: First, there is a high frequency of interactive gesture use. Second, it is visible in the specific manners of interactive gesture use:
– frequent repetition of gestures or gestural sequences,
– overlapping gesticulation of different speakers, and
– increased speed and stronger accentuation of the gestural movement quality.
These kinds of gesture performance indicate an increasing and mutually perceptible communicative effort. In this phase, all three women participate actively in the conversation: while speaker B constantly tries to make clear why she regards an "internal whip" as an essential aspect of her self-realization, her interlocutors refuse to agree. Without any ratification, she continues to reformulate her argument and again encounters disapproval. There is a constant shifting between pro and contra positions, which is expressed and achieved through the manners of gesturing, including their movement qualities. In this phase, speaker B's gestures are interrupted by the gestures of A and C. In turn, B interrupts them again, repeating her respective gestures, i.e., the "internal whip", and performs them in a large, accentuated, and vigorous manner right in the center of the others' visual focus of attention. A and C react in the same way. This constant intertwining finally leads to a culmination: the moment when speaker C assertively performs the clapping gesture from our example to angrily repel the idea of an "external whip" at last. As a result of the dynamic "interplay of affective exchanges of intensity" (Kappelhoff and Müller 2011: 134) embodied in the participants' gestures, a particular interactive movement pattern evolves which, we suggest, is perceived and experienced by the interlocutors as a whole.
On the basis of the ongoing intensification of mutual incitement, the situation unfolds a choppy rhythm of continuous attempt and rough interruption, of initiating and being rigorously stopped, which accelerates and finally leads to a harsh and energetic eruption. This is experienced as an expression of increasing tension and anger, finally exploding outright. It is such an instance of a Gestalt-like phase that we call an interactive expressive movement unit. Such an interactive expressive movement unit emerges from an interrelated, dynamic creation and shaping of shared affect. Moreover, we suggest that such a unit, which may encompass several minutes (in our example, two and a half), unfolds a particular affective quality here: the movement Gestalt of a rough and choppy rhythm, finally harshly erupting. Furthermore, the particular affective intensity of this interactive expressive movement unit shows up not only in gesturing, but also in speech and bodily interaction. The high degree of involvement and the affective quality are furthermore expressed in the following ways:
– interrupted and discontinued utterances,
– rising intonation,
– high tempo of speech, and
– a high rate of turn-taking.


In our example, we have noticed that the perceptual rhythm of anger articulated gesturally is expressed on the verbal level as well. B is interrupted by A and C, who speak with raised intonation and high volume. B also interrupts the other two, likewise raising her voice and intonation. This constant back and forth of turns, the increased tempo of speech, and the rate of turn-taking further contribute to a steadily increasing tension which characterizes the interactive expressive movement unit. These observations are in line with Johnson's idea of emotions as "processes of organism-environment interactions" (2007: 66). Due to the continual interactive exchange, affect expression emerges as a shared experience unfolding over the course of time. It is this temporality which provides the analytical access to affect. Based on this kind of analysis, an intensity profile of inter-affective experiences can be extracted which documents the dynamic emergence of affective intensities over the course of an interaction. The diagram below (Fig. 167.3) illustrates our analysis of the interactive expressive movement unit described above. It provides an overview of the speakers' involvement and the jointly created and shaped affective experience of a choppy and rough back and forth that finally harshly erupts.

Fig. 167.3: Visualization of one interactive expressive movement unit in face-to-face communication

This brief micro-analysis has documented how the affective experience of increasing tension and anger is composed by the interplay of gesture with intonation, speech tempo, and turn-taking. The movement qualities of the gestures, the raised voices of the interlocutors, as well as the high rate of turn-taking and mutual interruptions coalesce into one Gestalt of the embodied situational experience of confrontation. We have described gesture as a fundamentally interactive and dynamic phenomenon in between the interlocutors. By merging expression and perception of affect, gestures constitute in their temporality an interface for inter-bodily resonances. Therefore, the emergence of affect in face-to-face communication cannot be regarded as a matter of single gestures. In our second analysis, we will consider the temporal course of affect across the entire conversation.

3.2. Negotiating what is shared along a conversation: The temporal emergence of affect

Based on what we have been outlining so far, we assume affect to permeate the whole conversation. That is to say, every conversation can be considered a composition of various interactive expressive movement units. Considering their arrangement and interplay in time offers insights into how the interlocutors' attitudes towards the topics of conversation dynamically develop and change; it allows for a reconstruction of the inter-affectivity of the interaction. To illustrate this idea, imagine a conversation to be like a piece of music, for instance Bedřich Smetana's famous symphonic poem "The Moldau", in which he musically describes the river's course. Starting from small springs which unite into one single current, it flows through woods, meadows, and different landscapes. The river widens, streams toward Prague, and then vanishes into the distance, ending at the Elbe River. Each of these different sections of the river has its own quality of musical movement. For instance, the small springs at the beginning are rendered by a nimble and brimming melody, while the next part, i.e., their unification, is characterized by a rather stately and regular quality. In their interplay, the sections compose one overall image of the Moldau's course, which on that basis can be described in terms of particular patterns of musical movement qualities. Applying this idea to our conversation between the three women means conceiving of it as a river in whose course the discussion about self-realization continuously flows through different sections (i.e., interactive expressive movement units) of affective experience. These situational sections are created and shaped by the dynamic interaction of the participants. As a result, a pattern entailing various situational affective qualities emerges over time.
This encompassing pattern, understood as the composition of all interactive expressive movement units, structures the discussion about self-realization by giving it a particular, interactively created affective course. Put differently, the flow of the conversational river coincides with the development of shared affective experiences. The interactive expressive movement unit described above is located in the middle of the conversation and is characterized by increasing tension and anger. In the same way as we have emphasized that the clapping gesture within it has to be embedded in its related context because it emerged dynamically and interactively, we now wish to underline that this interactive expressive movement unit must be considered as embedded within the larger context of the conversation. That means that the experience of tension and anger which characterizes the interactive expressive movement unit must be related to how it came into being and what is going to follow. To draw upon the analogy to music again: the section of the stately and regularly flowing Moldau is only experienced in this specific manner because it is preceded by the nimble and brimming musical movement quality. Thus, the experience of the composition's single sections along the river's course is not to be conceived of as random. Rather, it is shaped by the dynamic development and change of the movement quality of the music.


The same holds for the shared affective experience of the participants within an interactive expressive movement unit. It simultaneously responds to the development of affect before, and is a link for its further unfolding as the conversation moves on. This means that the conversation consists of a succession of correlating interactive expressive movement units, each with a specific affective quality. It is such a continuous inter-affective interaction of the participants that creates the temporal development of affect over time and grounds the course of the conversation. The dynamic pattern of jointly created affect goes along with the negotiation of a shared attitude towards a given topic.

4. Conclusion

Considering gesture as interactive expressive movement extends the understanding of gesture's expressive function. It overcomes the restriction of linking it to an individual's affective stance. It underlines shared affectivity, or inter-affectivity, between interlocutors, and it is inseparably linked to the context of its inter-subjective environment. That is to say, gesture's affective dimension is not an end in itself, merely providing a visible basis for interpreting another's behavior, feelings, and intentions. Rather, it is where interlocutors "experience a specific feeling of being connected with the other" through a "circular interplay of expressions and reactions […] constantly modifying each partner's bodily state" (Froese and Fuchs 2012: 213). Consequently, as we have demonstrated, face-to-face communication is not an encounter of separate bodies but of one "extended body" which includes self and others (Froese and Fuchs 2012: 213, italics by the authors). Such an embodied idea of interaction has fundamental consequences for the idea of sense making and mutual understanding. It overcomes the "popular though mistaken view that meaning is merely conceptual and propositional in nature" (Johnson 2006: 3). Whenever we communicate with others, meaning is not simply there because we interpret and access hidden mental images or conceptualizations. Rather, meaning making is a dynamically emergent and inter-subjective process which is inseparably connected with the expressing and perceiving body. Affect emerges situationally and dynamically as a shared embodied experience of the interlocutors. The constant process of inter-affective interaction in gesture unfolds over the course of an entire conversation and submits the interlocutors' attitude towards a given theme to a dynamic development. That is to say, reciprocal inter-bodily affective resonance and mutual understanding go hand in hand.
As an interactive, dynamic, and situated phenomenon, affect pervades entire conversations and is interactively elaborated. We therefore suggest that an account of gestures as expressive movements points to an embodied idea of sense making and of mutual understanding that is based on shared corporeality instead of separated individuality.

Acknowledgements

The presented work is an outcome of collaborative research conducted within the interdisciplinary project "Multimodal Metaphor and Expressive Movement" under the direction of Hermann Kappelhoff and Cornelia Müller at the Cluster of Excellence "Languages of Emotion" of the Freie Universität Berlin, in cooperation with the European University Viadrina Frankfurt (Oder). We would particularly like to thank Sarah Greifenstein and Thomas Scherer for their critical and helpful comments on an earlier version of this article. The method of analyzing interactive expressive movement units in face-to-face communication builds upon the film-analytic model for the qualitative description of expressive movements in audio-visuals (see Kappelhoff and Bakels 2011; Scherer, Greifenstein, and Kappelhoff this volume). We thank Mathias Roloff for providing the drawings (www.mathiasroloff.de).

5. References

Bühler, Karl 1933. Ausdruckstheorie: Das System an der Geschichte aufgezeigt. Jena: Fischer.
Bühler, Karl 1999. Sprachtheorie: Die Darstellungsfunktion der Sprache. Stuttgart: UTB. First published [1934].
De Jaegher, Hanne, Ezequiel di Paolo and Shaun Gallagher 2010. Can social interaction constitute social cognition? Trends in Cognitive Sciences 14(10): 441–446.
Ekman, Paul (ed.) 1973. Darwin and Facial Expression: A Century of Research in Review. New York/London: Academic Press.
Embree, Lester 1980. Merleau-Ponty's examination of Gestalt psychology. Research in Phenomenology 10: 89–121.
Flach, Sabine, Jan Söffner and Daniel Margulies (eds.) 2010. Habitus in Habitat I: Emotion and Motion. Bern: Peter Lang.
Froese, Tom and Thomas Fuchs 2012. The extended body: A case study in the neurophenomenology of social interaction. Phenomenology and the Cognitive Sciences 11: 205–235.
Fuchs, Thomas and Hanne De Jaegher 2009. Enactive intersubjectivity: Participatory sense-making and mutual incorporation. Phenomenology and the Cognitive Sciences 8(4): 465–486.
Gallagher, Shaun 2008. Understanding others: Embodied social cognition. In: Paco Calvo Garzón and Toni Gomila (eds.), Elsevier Handbook of Embodied Cognitive Science, 439–452. London: Elsevier.
Gibbs, Raymond 2006. Embodiment and Cognitive Science. Cambridge, UK: Cambridge University Press.
Greifenstein, Sarah and Hermann Kappelhoff this volume. The discovery of the acting body: Expressive movement and dramatic performance. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2070–2080. Berlin/Boston: De Gruyter Mouton.
Hostetter, Autumn and Martha Alibali 2008. Visible embodiment: Gestures as simulated action. Psychonomic Bulletin and Review 15(3): 495–514.
Johnson, Mark 2006. Merleau-Ponty's embodied semantics: From immanent meaning, to gesture, to language. EurAmerica 36(1): 1–27.
Johnson, Mark 2007. The Meaning of the Body: Aesthetics of Human Understanding. Chicago: University of Chicago Press.
Kappelhoff, Hermann 2004. Matrix der Gefühle: Das Kino, das Melodrama und das Theater der Empfindsamkeit. Berlin: Vorwerk 8.
Kappelhoff, Hermann and Jan-Hendrik Bakels 2011. Das Zuschauergefühl: Möglichkeiten qualitativer Medienanalyse. Zeitschrift für Medienwissenschaft 5(2): 78–95.
Kappelhoff, Hermann and Cornelia Müller 2011. Embodied meaning construction: Multimodal metaphor and expressive movement in speech, gesture and feature film. Metaphor in the Social World 1(2): 121–135.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary Ritchie Key (ed.), The Relationship of Verbal and Nonverbal Communication, 207–227. The Hague: Mouton.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge, UK: Cambridge University Press.


McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
Merleau-Ponty, Maurice 1963. The Structure of Behavior. Boston: Beacon Press. First published [1942].
Merleau-Ponty, Maurice 2005. Phenomenology of Perception. London/New York: Routledge. First published [1945].
Molcho, Samy 1985. Body Speech. New York: St. Martin's Press.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Arno Spitz.
Müller, Cornelia 2008. Metaphors Dead and Alive, Sleeping and Waking: A Dynamic View. Chicago: University of Chicago Press.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Routledge's Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Müller, Cornelia volume 1. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1), 202–217. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia, Silva Ladewig and Jana Bressem volume 1. Gestures and speech from a linguistic perspective: A new field and its history. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1), 55–81. Berlin/Boston: De Gruyter Mouton.
Müller, Cornelia and Susanne Tag 2010. The dynamics of metaphor: Foregrounding and activation of metaphoricity in conversational interaction. Cognitive Semiotics 6: 85–120.
Plessner, Helmuth 1982. Die Deutung des mimischen Ausdrucks: Ein Beitrag zur Lehre vom Bewußtsein des anderen Ichs. In: Helmuth Plessner, Gesammelte Schriften, Vol. VII: Ausdruck und menschliche Natur, 67–129. Frankfurt (Main): Suhrkamp. First published [1925].
Scherer, Thomas, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movement in audio-visual media: Modulating affective experience. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2081–2092. Berlin/Boston: De Gruyter Mouton.
Schmitt, Christina and Sarah Greifenstein this volume. Cinematic communication and embodiment. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2061–2070. Berlin/Boston: De Gruyter Mouton.
Schmitt, Christina, Sarah Greifenstein and Hermann Kappelhoff this volume. Expressive movement and metaphoric meaning making in audio-visual media. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2092–2112. Berlin/Boston: De Gruyter Mouton.
Sheets-Johnstone, Maxine 1999. The Primacy of Movement. Amsterdam: John Benjamins.
Watzlawick, Paul, Janet Beavin Bavelas and Don D. Jackson 1967. Pragmatics of Human Communication: A Study of Interactional Patterns, Pathologies and Paradoxes. New York: Norton.
Wertheimer, Max 1997. Gestalt theory. In: William D. Ellis (ed.), Source Book of Gestalt Psychology, 1–11. Goldsboro: The Gestalt Journal Press. First published [1925].
Wundt, Wilhelm 1921. Völkerpsychologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte. Volume 1: Die Sprache, 142–257. Leipzig: Engelmann. First published [1900].


Zlatev, Jordan 2010. Phenomenology and cognitive linguistics. In: Shaun Gallagher and Daniel Schmicking (eds.), Handbook of Phenomenology and Cognitive Science, 401–414. Dordrecht: Springer.

Dorothea Horst, Frankfurt (Oder) (Germany)
Franziska Boll, Frankfurt (Oder) (Germany)
Christina Schmitt, Berlin (Germany)
Cornelia Müller, Frankfurt (Oder) (Germany)

X. Sign language – Visible body movements as language

168. Linguistic structures in a manual modality: Phonology and morphology in sign languages

1. What phonology means for a sign language
2. Phonological universals in the signed modality
3. Morphology
4. Iconicity and the lexicon
5. References

Abstract

This chapter sketches the levels of grammatical organization in signed languages that are closest to the phonetic difference in communication channel between sign and speech: phonology and morphology. Sign languages are argued to display a meaningless level of structure that is similar to phonological organization in spoken languages. Some phonological universals across sign languages are discussed, such as the presence of both one-handed and two-handed signs in all sign language lexicons and the preference for monosyllabic forms. At the level of morphology, this monosyllabicity is reflected in the preference for non-concatenative morphology. Finally, the widespread iconicity in sign languages is not restricted to the lexicon, but can also be observed at the syntactic and discourse levels.

1. What phonology means for a sign language

In the first phonological analysis of a sign language (Stokoe 1960 on American Sign Language, ASL), it was demonstrated that the core defining features of phonology are also present in American Sign Language: there is a limited set of meaningless elements that can recombine to create a very large number of morphemes. In other words, Hockett's (1960) "duality of patterning" is also observed in sign language: there is a dual system of meaningless structure (phonology) on the one hand and meaningful structure (morphology and syntax) on the other. The fact that the actual phonological content differs for signed and spoken languages is nowadays considered a somewhat trivial by-product of the primary communication channels that are used (the "modality difference"): the oral-auditory modality or channel for spoken languages and the manual-visual modality for sign languages. (Both terms underrepresent actual language production: oral languages use not just the oral cavity and oral articulators but also the lungs, larynx, and velum, and sign languages use not just the hands but also non-manual articulators such as the face and the head.)

Stokoe's initial analysis pointed to substantial differences between sign and speech as well: not only is the content of the phonological elements different, referring to different articulators for the two types of languages, but also the way in which elements are

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2127–2133


arranged is different. Where the segments of spoken-language syllables are put in a sequential order, signs show a simultaneous arrangement of the "cheremes" (later called parameters) of a syllable. The term syllable itself has remained controversial in sign language research and was only introduced much later (see Wilbur 1990; Jantunen and Takkinen 2010 for discussion). Later researchers have emphasized that this difference in ordering is only apparent. The parameters of a sign are now generally recognized as corresponding to distinctive features or major classes of features in spoken-language segments, and sequential structure can be found in the movement parameter. Various proposals have been made to analyze the movement in signs in terms of series of segments (Liddell and Johnson 1989; Sandler 1989) or skeletal slots (Brentari 1998; van der Kooij 2002).

The parameters proposed by Stokoe consisted of handshape (which he called "designator"), location ("tabula"), and movement ("signation"). In other words, in every sign there is an active articulator that performs an action at a location, the passive articulator. Several aspects have since been recognized as being of equal importance in lexical signs: one- vs. two-handedness, orientation of the hand(s), and some limited non-manual features, all considered by Stokoe to be of secondary importance. Like handshape, location, and movement, these three parameters are unpredictable and thus in need of lexical specification, and they also carry distinctive power in contrasting minimal pairs (but see below for the limited role of non-manuals). Fig. 168.1 presents three examples of signs that have been analyzed in terms of their main parameters.

Meaning:       pen             boat          mushroom
Handedness:    one-handed      symmetrical   asymmetrical
Handshape:     A               B             B
Orientation:   neutral         inward        downward
Location:      space           space         thumb
Movement:      thumb flexion   forward       contacting

Fig. 168.1: Three lexical items of Sign Language of the Netherlands (NGT) analyzed in terms of their phonological parameters

Although the analysis of signs in terms of parameters is still quite widely used in lexicography, applied linguistics, and psycholinguistics, there have been several proposals for phonological models of signs that use hierarchically ordered distinctive features, just as we know from feature-geometry models of spoken languages. The handshape parameter in particular has been fruitfully analyzed in terms of finger selection and finger configuration (cf. Mandel 1981), each consisting of one or more features in the models of Sandler


(1989), Brentari (1998), and van der Kooij (2002). Two-handed signs are parsimoniously described as either having symmetrical articulators performing the same movement, or having the so-called weak hand serve as the place of articulation of the strong or dominant hand (see van der Hulst 1993; Crasborn 2011 for further discussion). Only in the latter case can the weak hand have a different handshape than the strong hand. These analyses go back to one of the classic studies on sign language phonology by Battison (1978), who first established for American Sign Language that there are phonological limitations on the possible types of two-handed signs. For instance, the "Dominance Condition" excludes simultaneous movement of two different handshapes, a generalization that has so far been found to apply to all sign languages that have been studied.
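The logic of this kind of well-formedness constraint can be sketched computationally. The following Python fragment is our own illustrative formalization, not an established notation: it models a sign as a bundle of the parameters discussed above (the class, field, and function names are invented for this sketch) and checks one reading of Battison's Dominance Condition, namely that a weak hand bearing a different handshape may serve only as a static place of articulation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sign:
    """A lexical sign as a bundle of phonological parameters (illustrative)."""
    meaning: str
    handedness: str                      # "one-handed", "symmetrical", "asymmetrical"
    strong_handshape: str                # e.g. "A", "B"
    weak_handshape: Optional[str] = None # only specified for two-handed signs
    weak_hand_moves: bool = False        # does the weak hand move?

def violates_dominance_condition(sign: Sign) -> bool:
    """True if two hands with different handshapes both move: in asymmetrical
    signs, a differently shaped weak hand must remain the static place of
    articulation of the dominant hand."""
    return (
        sign.handedness == "asymmetrical"
        and sign.weak_handshape is not None
        and sign.weak_handshape != sign.strong_handshape
        and sign.weak_hand_moves
    )

# NGT 'mushroom': the weak B-hand is the static place of articulation
mushroom = Sign("mushroom", "asymmetrical", "B", weak_handshape="B")
print(violates_dominance_condition(mushroom))  # False: well-formed
```

A hypothetical sign in which a weak hand with a different handshape also moved would make the check return True, i.e., it would be excluded by the constraint.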

2. Phonological universals in the signed modality

Aside from the cross-modal universals applying to both signed and spoken languages, there also appear to be universals across signed languages. Battison's conditions on two-handed signs are one instance, but an even more basic one, which has rarely if ever been explicitly noted in the literature, is that all signed languages appear to have both one-handed and two-handed lexical items, and that both types frequently occur in all languages. Not only do all sign languages appear to conform to the above restrictions on two-handed signs, but they also maximally exploit the possible phonological space in having both symmetrical and asymmetrical two-handed signs.

A second phonological universal across sign languages is the strong tendency to have only monosyllabic word forms. Most if not all signs in sign languages have only a single specification for handshape, orientation, location, and movement. If signs have changes in more than one of these parameters (changes in both location and handshape, for instance), these always coincide in time and are thus never executed sequentially. Only in morphologically complex forms such as compounds do we find such sequences of syllables. What all sign languages do have is reduplication. Thus, "repetition" is a distinctive feature in signs, whereas the number of repetitions is not distinctive. Taking a different perspective, we can say that polysyllabic words are frequent in sign languages, but they nearly always consist of reduplicated syllables.

Finally, another phonological generalization that appears to hold for all sign languages is that non-manual activity plays a rather limited role in the lexicon (but see Hermann and Pendzich this volume). Most sign languages have some non-manual features, such as mouth patterns, that are obligatory and meaningless features of specific lexical items.
An example is the mouth pattern “shh” in the NGT sign BE-PRESENT, which cannot be traced back to any spoken Dutch word (Crasborn et al. 2008; Schermer 1990). However, in most if not all sign languages, such “mouth gestures” do not appear to be productive phonological features that systematically distinguish pairs of signs. Each example only occurs once in the lexicon, and there have been no cross-linguistic studies investigating whether there are phonetic-phonological patterns in these mouth gestures or whether each language has a limited set of idiosyncratic forms.

X. Sign language – Visible body movements as language

3. Morphology

Although it is phonetically possible to string together multiple syllables in a sign form, we saw in the previous section that this possibility is not exploited in the phonology of lexical signs. We similarly see very little concatenative morphology in signed languages. Rather, the phonological patterns we see in free morphemes in the lexicon also turn out to apply to most morphologically complex signs. Sign languages do typically have a very rich morphology, however, one that is often compared to the non-concatenative morphology of spoken languages like Arabic (Aronoff, Meir, and Sandler 2005). Elements of the stem are replaced by features that express person or location agreement, aspectual modulations of verbs, and plurality in nouns (Padden 1988; Supalla 1982; see Steinbach 2012 for further discussion). In addition, many sign languages show some type of number incorporation, changing the handshape of lexical items for temporal concepts like week or year to express number (Sagara and Zeshan forthcoming).

The above examples of morphological structure that is frequent across sign languages are all instances of inflectional morphology. Derivational processes, changing the word class from noun to verb or from verb to adjective, for instance, are considerably rarer in sign languages. Some sign languages have been argued to show productive alternations between nouns and verbs, typically involving a modification of the movement of the sign (Supalla and Newport 1978 on American Sign Language; Johnston 2001 on Australian Sign Language; Hunger 2006 on Austrian Sign Language). This is one of the few derivational processes that have ever been discussed, however. Compounding, the third major type of morphological process, appears to exist in most sign languages, but its productivity varies (see Meir et al. 2010; Meir 2012 for further discussion).

A final type of morphology that appears to be present in all sign languages, and that is heavily debated, is the use of classifiers (Schembri 2001; Supalla 1982, 1986; and see the collection of papers in Emmorey 2003 for discussion).
Classifiers are handshapes in a specific orientation that stand for a rather broad semantic class of potential referents (such as flat horizontal things or thin upright things), and that are combined with location and movement features to form complex predicates. While initially analyzed in terms of inflectional morphological processes similar to classifiers in spoken languages like Navaho, more recent views suggest that the resulting signs are more likely to be only partly morphemic, in the sense that locations are not drawn from a phonologically limited set but are created flexibly depending on the layout of the signing space (Liddell 2003). The precise morphological analysis remains a matter of debate (see Zwitserlood 2012 for further discussion).

4. Iconicity and the lexicon

Despite the phonological make-up of lexical items in signed languages and the concomitant duality of patterning, we also find widespread iconicity of forms: resemblances between the form and the meaning of signs (see Brentari 2012 for further discussion). In the example signs in Fig. 168.1, we can see the pushing of the back of the pen, the movement of the boat through the water and the shape of the bow, and the shape of a typical mushroom with a stem and a cap. Although not all such interpretations of a sign necessarily constitute a correct etymological analysis, it does appear that many signs are created by putting together feature values in such a way that they represent an abstract form of a concept that in turn is (often metaphorically) related to the intended meaning (Brennan 1990; Johnston and Ferrara 2010; Taub 2001). It remains an open question to what extent such iconicity can be analyzed in terms of a morphological process such as compounding.

168. Linguistic structures in a manual modality


While many iconic signs are lexicalized, in spontaneous interaction we also find many non-lexicalized or partly lexicalized forms (Johnston and Schembri 1999; see also Wilcox this volume). These come in the form of partly lexicalized elements such as pointing signs, and also in the form of fingerspelled words from spoken languages (Brentari 2001). Moreover, all sign languages appear to make use of “constructed action”, in which bodily actions that are not limited to combinations of phonological elements are used to represent actions of referents (Metzger 1995). In the work of Cuxac and Sallandre (2007) on highly iconic structures in French Sign Language, such elements are even highlighted as constituting the core of sign language communication.

Acknowledgements

Work on this chapter was supported by ERC Starting Grant 210373 “On the Other Hand” awarded to the author.

References

Aronoff, Mark, Irit Meir and Wendy Sandler 2005. The paradox of sign language morphology. Language 81(2): 301–344.
Battison, Robin 1978. Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Brennan, Mary 1990. Word Formation in British Sign Language. Stockholm: University of Stockholm.
Brentari, Diane 1998. A Prosodic Model of Sign Language Phonology. Cambridge, MA: Massachusetts Institute of Technology Press.
Brentari, Diane (ed.) 2001. Foreign Vocabulary in Sign Languages. Mahwah, NJ: Lawrence Erlbaum Associates.
Brentari, Diane 2012. Phonology. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 21–54. Berlin/Boston: De Gruyter Mouton.
Crasborn, Onno 2011. The other hand in sign language phonology. In: Marc van Oostendorp, Colin Ewen, Elisabeth V. Hume and Keren Rice (eds.), Companion to Phonology, Volume 1, 223–240. London: Blackwell.
Crasborn, Onno, Els van der Kooij, Johanna Mesch, Dafydd Waters and Bencie Woll 2008. Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language and Linguistics 11(1): 45–67.
Cuxac, Christian and Marie-Anne Sallandre 2007. Iconicity and arbitrariness in French Sign Language: Highly iconic structures, degenerated iconicity and diagrammatic iconicity. In: Elena Pizzuto, Paola Pietrandrea and Raffaele Simone (eds.), Verbal and Signed Languages, 13–34. New York: Mouton de Gruyter.
Emmorey, Karen (ed.) 2003. Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum Associates.
Hermann, Annika and Nina-Kristin Pendzich this volume. Nonmanual gestures in sign languages. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2133–2145. Berlin/Boston: De Gruyter Mouton.
Hockett, Charles F. 1960. The origin of speech. Scientific American 203(3): 88–96.
Hulst, Harry van der 1996. On the other hand. Lingua 98: 121–143.
Hunger, Barbara 2006. Noun/verb pairs in Austrian Sign Language (ÖGS). Sign Language & Linguistics 2(2): 71–94.
Jantunen, Tommi and Ritva Takkinen 2010. Syllable structure in sign language phonology. In: Diane Brentari (ed.), Sign Languages, 312–331. Cambridge: Cambridge University Press.


Johnston, Trevor and Adam Schembri 1999. On defining a lexeme in a signed language. Sign Language and Linguistics 2(2): 115–185.
Johnston, Trevor 2001. Nouns and verbs in Australian Sign Language: An open and shut case? Journal of Deaf Studies and Deaf Education 6(4): 235–257.
Johnston, Trevor and Lindsay Ferrara 2010. Lexicalization in signed languages. When is an idiom not an idiom? Paper presented at the 3rd UK Cognitive Linguistics Conference, 6–8 July 2010, University of Hertfordshire.
Kooij, Els van der 2002. Phonological categories in Sign Language of the Netherlands. The role of phonetic implementation and iconicity. Ph.D. dissertation, Landelijke Onderzoekschool Taalwetenschap, Utrecht.
Liddell, Scott K. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Liddell, Scott K. and Robert E. Johnson 1989. American Sign Language: The phonological base. Sign Language Studies 64: 195–278.
Mandel, Mark A. 1981. Phonotactics and morphophonology in American Sign Language. Ph.D. dissertation, University of California, Berkeley.
Meir, Irit 2012. Word classes and word formation. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 77–111. Berlin/Boston: De Gruyter Mouton.
Meir, Irit, Mark Aronoff, Wendy Sandler and Carol Padden 2010. Sign languages and compounding. In: Sergio Scalise and Irene Vogel (eds.), Compounding, 301–322. Amsterdam: John Benjamins.
Metzger, Melanie 1995. Constructed dialogue and constructed action in American Sign Language. In: Ceil Lucas (ed.), Sociolinguistics in Deaf Communities, 255–271. Washington, DC: Gallaudet College Press.
Padden, Carol A. 1988. Interaction of Morphology and Syntax in American Sign Language. New York: Garland.
Sagara, Keiko and Ulrike Zeshan forthcoming. Semantic Fields in Sign Languages. Sign Language Typology Series No. 5. Berlin: De Gruyter Mouton & Nijmegen: Ishara Press.
Sandler, Wendy 1989. Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign Language. Dordrecht: Foris.
Schembri, Adam 2001. Issues in the analysis of polycomponential verbs in Australian Sign Language (Auslan). Ph.D. dissertation, University of Sydney, Sydney, Australia.
Schermer, Trude 1990. In search of a language. Influences from spoken Dutch on Sign Language of the Netherlands. Delft: Eburon.
Steinbach, Markus 2012. Plurality. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 112–136. Berlin/Boston: De Gruyter Mouton.
Stokoe, William C. 1960. Sign Language Structure. An Outline of the Visual Communication Systems of the American Deaf (1993 reprint ed.). Buffalo: Dept. of Anthropology and Linguistics, University of Buffalo.
Supalla, Ted 1982. Structure and acquisition of verbs of motion and location in American Sign Language. Ph.D. dissertation, University of California, San Diego.
Supalla, Ted 1986. The classifier system in American Sign Language. In: Colette Craig (ed.), Noun Classes and Categorization, 181–214. Amsterdam: John Benjamins.
Supalla, Ted and Elissa Newport 1978. How many seats in a chair? The derivation of nouns and verbs in American Sign Language. In: Patricia Siple (ed.), Understanding Language through Sign Language Research, 91–133. New York: Academic Press.
Taub, Sarah 2001. Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Wilbur, Ronnie B. 1990. Why syllables? What the notion means for ASL research. In: Susan Fischer and Patricia Siple (eds.), Theoretical Issues in Sign Language Research, Volume 1, 81–108. Chicago: The University of Chicago Press.
Wilcox, Sherman E. this volume. Gestures in sign language. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 2170–2176. Berlin/Boston: De Gruyter Mouton.
Zwitserlood, Inge 2012. Classifiers. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 158–186. Berlin/Boston: De Gruyter Mouton.

Onno Crasborn, Nijmegen (The Netherlands)

169. The grammaticalization of gestures in sign languages

1. Introduction
2. From gesture to sign
3. Case study I: palm-up
4. Case study II: headshake
5. On the grammaticalization of gestures in spoken languages
6. Conclusion
7. References

Abstract

Recent studies on grammaticalization in sign languages have shown that, for the most part, the grammaticalization paths identified in sign languages parallel those previously described for spoken languages. Hence, the general principles of grammaticalization do not depend on the modality of language production and perception. However, in addition to these modality-independent paths, some modality-specific paths have been described. Of special interest in the present context is the grammaticalization of gestures. Since sign language and co-speech gesture share the visual-gestural modality, sign languages, unlike spoken languages, have the potential to integrate manual and non-manual gestures into their linguistic system. Co-speech gestures can be integrated not only as lexical items (lexicalization of gestures) but, more importantly, also as grammatical markers (grammaticalization of gestures). This chapter focuses on the latter type of change and presents two case studies which illustrate the grammaticalization of manual and non-manual gestures in sign languages.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), De Gruyter, 2131–2147

1. Introduction

All natural languages are subject to diachronic change. It is thus not surprising that sign languages, that is, languages which are transmitted in the visual-gestural modality, also undergo changes over time. Just as in spoken languages, such changes may be triggered by external factors (e.g., word order change due to language contact, borrowing) and internal factors (e.g., phonological changes caused by articulatory and perceptual factors). This chapter addresses a specific type of internal change, namely grammaticalization. Grammaticalization is generally considered a process whereby lexical elements – mostly nouns and verbs – develop into (free or bound) grammatical morphemes (Heine and


Kuteva 2002; Hopper and Traugott 1993). Recent studies on grammaticalization in sign languages have revealed that most of the attested pathways are modality-independent – for instance, the path from verb to tense marker and from noun to complementizer (see Pfau and Steinbach [2006, 2011] and Janzen [2012] for overviews). In the present chapter, however, we will depart from this modality-independent perspective on grammaticalization and turn to a modality-specific aspect, the grammaticalization of gestures. As co-speech gesture and sign language share the visual-gestural modality, sign languages have the unique possibility of integrating gestures into their linguistic system, not only as lexical items but also as grammatical markers.

In section 2, we start our investigation by presenting two general strategies for gestures to enter the lexicon/grammar of a sign language. One of these strategies will be further illuminated by means of two case studies, one focusing on a manual (section 3), the other on a non-manual gesture (section 4). In section 5, we briefly address the question to what extent the suggested pathway from gesture to language really is modality-specific.

2. From gesture to sign

It comes as no surprise that signers, just like speakers, make use of iconic and metaphorical co-speech gestures while signing (Emmorey 1999). However, distinguishing signs from gestures of similar shape is not always straightforward. As a working hypothesis, we assume that once a manual or non-manual element has acquired a fixed (possibly underspecified) meaning and shows a systematic use and syntactic distribution (e.g., a specific sentential position for manual elements, grammatically determined scope properties for non-manuals), it can be considered part of the linguistic system of a sign language.

Co-speech gestures, as well as the corresponding lexical signs and grammatical markers, are either produced with the hands (i.e., manual elements) or by means of facial expressions, head movements, or movements of the upper part of the body (i.e., non-manual elements). A crucial difference between sign languages and spoken languages is that only the former share the modality of signal transmission with co-speech gesture. Therefore, sign languages have the unique possibility of directly integrating gestures into their linguistic system. As a consequence, co-speech gestures can develop into lexical items or even grammatical markers (but see section 5 for some speculations about the integration of gestures into the grammatical system of spoken languages).

Wilcox (2004, 2007) observes that co-speech gestures may enter the grammatical system on two different routes: On route I, they first turn into a lexical element which then further develops into a grammatical marker (1a). In contrast, on route II, a gesture directly turns into a grammatical marker (1b) – possibly via an intermediate step of paralinguistic use.

(1) a. Route I:  gesture →① lexical sign →② grammatical marker
    b. Route II: gesture → grammatical marker

Before turning to our two case studies, which both relate to route II, let us briefly illustrate route I with one example, the modal verb CAN in American Sign Language (ASL). Wilcox and Wilcox (1995) argue that this modal verb can be traced back to a gesture meaning “strong”. In a first step, i.e., step ① on route I, the gesture developed into the lexical sign


STRONG, the use of which is illustrated in example (2a), taken from Janzen and Shaffer (2002: 208). Only in step ② did the lexical sign STRONG grammaticalize into the modal verb CAN (2b) (Wilcox and Wilcox 1995: 142). While the first step on route I, just like route II, is clearly modality-specific, the second step is modality-independent. As is well known, the development of modal verbs from lexical elements expressing physical ability or strength is also attested in many spoken languages (Heine and Kuteva 2002).

(2) a. OUR FATHER STRONG OVER MOON STARS WORLD                   [ASL]
       ‘Our father is strong over the moon, and stars and world.’
       ____________________________y/n
    b. TOMORROW INDEX2 CAN DRIVE INDEX2
       ‘Can you drive tomorrow?’

(Notation: All sign language examples are glossed in English small caps. INDEXx = pointing sign used for pronominalization; subscript numbers refer to points in the signing space: 1 = towards signer’s chest, 2 = towards addressee. SIGN-SIGN = indicates that two words are needed to gloss a single sign. The scope, i.e. onset and offset, of grammatical non-manual markers is indicated by the line above the gloss; “y/n” = non-manual marker accompanying yes/no-questions (most importantly, brow raise).)

In the following two sections, we discuss in some detail the grammaticalization of a manual and a non-manual gesture in sign languages: first the “palm-up” gesture, which turned into a multifunctional discourse marker, then the headshake, which is commonly used as a marker of negation. In both sections, we will provide information on the origin and use of the respective gesture in spoken languages and on its use in sign languages. Additionally, we will offer some speculations on how these gestures acquired various grammatical functions and/or language-specific distributional properties. Note that other phenomena that have been argued to follow route II are the grammaticalization of pointing gestures (Pfau 2011) and the grammaticalization of brow raise (Janzen 1999); for further examples and discussion, see Pfau and Steinbach (2006, 2011).

3. Case study I: palm-up

A manual gesture frequently observed in spoken language discourse is the “palm-up” gesture (Kendon 2004; Müller 2004). This gesture, which may be one- or two-handed, involves a hand configuration with the palm open and facing upward and all fingers loosely extended. It is observed in various discourse contexts and has been reported for various cultures. In fact, Müller (2004: 234) states that the “palm-up” gesture “is perhaps one of the most frequently used gestures”. In this section, we first provide information on the gestural origin and use of the “palm-up” gesture before turning to a related sign (glossed as palm-up) that is frequently observed in sign language discourse. It will be argued that palm-up has grammaticalized along the lines of a modality-specific grammaticalization path from co-speech gesture to functional element, as described in the previous section.

3.1. On the origin and use of the palm-up gesture

According to Müller (2004: 236), the “palm-up” gesture has its origin in (i) actions of “[g]iving, showing, or offering an object by presenting it on the open hand”, and


(ii) actions of “[r]eceiving an object or displaying an empty hand”, indicating either readiness to receive that object or “the fact of not having something”. Clearly, both represent very basic manual actions which have probably characterized human interaction since the beginning of mankind (e.g., exchanging goods, begging). In such everyday interactions, objects are given, shown, offered, or received on an extended flat hand. However, there is an important difference between the use of an open palm in such situations and the use of “palm-up” as a co-speech gesture. In the former context, the action of giving and receiving involves concrete objects, whereas in the latter context, the manipulation of objects is extended to abstract objects, with “palm-up” commonly expressing communicative instead of instrumental actions. Müller (2004) argues that this shift can be explained by means of metaphorical mapping, whereby the manipulation of concrete objects (source domain) is mapped onto the act of giving and receiving abstract objects (target domain), as is also attested in spoken language metaphor (cf. Reddy’s (1979) “conduit metaphor” IDEAS ARE OBJECTS, as manifested in expressions like “to offer an idea” or “to give an example”).

In his seminal work on gesture, Kendon (2004) discusses a “palm-up” gesture family, which he refers to as the “Open Hand Supine” (OHS) family, and describes in detail the various communicative functions of the gestures it contains. Within the Open Hand Supine family, he identifies two subgroups, Palm Addressed gestures and Palm Presentation gestures, which can be distinguished based on different movement patterns. Both gesture types share the meaning component of offering, presenting, or receiving something, but here we will only be concerned with Palm Presentation gestures, as the sign we discuss in section 3.2 shares important formal and functional aspects with members of this group.
As for their form, Palm Presentation gestures involve an open hand, with palm oriented upwards, extended into frontal space. The extension is commonly “achieved through a wrist extension, often combined with a slight lowering of the hand, and followed by a hold” (Kendon 2004: 265). Palm Presentation gestures typically occur in “passages in the verbal discourse which serve as an introduction to something the speaker is about to say, or serve as an explanation, comment or clarification of something the speaker has just said” (2004: 266). In example (3), two Palm Presentation gestures co-occur with verbal parts of the utterance in which discourse referents important for the narrative are introduced (Kendon 2004: 267).

(3) So there’s this woman, she’s in the doctor’s office and she can’t …
       |~~~~~~~***********|  >*>*>*>************  |

(Notation: “~~” = preparation of gesture; “|” = gesture phrase boundary; “**” = gesture stroke; “**” = hand held in position at end of stroke; “>*>” = OHS moved right to a new location)

While in (3), the speaker in a sense presents discourse referents by means of “palm up”, in (4), it is part of the discourse itself that is presented to the interlocutor (Müller 2004: 244; example slightly adapted). The fragment in (4) is preceded by “May I ask you a question?”. Then the speaker puts forward the point she wants to make by means of the rhetorical question in (4), part of which is accompanied by a gesture which Müller (2004) glosses as “Palm Up Open Hand” (PUOH). The speaker thus presents “a discursive object, offering it for inspection, and inviting to join in the proposed view” (Müller 2004: 244). The fact that the gesture is performed with both hands is interpreted as an intensification strategy by Müller.

(4) […]
    conoc-es              a     alguien   que                     [Spanish]
    know-2.sg.pres.ind    prep  someone   who
              PUOH (both hands)------------held----------------
    no   hubiera                 tenido          fantasias   irrealisables
    not  aux.3.sg.pst.sbjnct     have.pst.part   phantasies  unrealizable
    ‘Do you know somebody who has not had unrealizable dreams?’

3.2. On the use of palm-up in sign languages

In the last decade, studies on several sign languages have identified a sign similar in form to the Palm Presentation gestures of the Open Hand Supine gesture family. This sign, which we gloss as palm-up, has been studied in detail for Danish Sign Language (DSL; Engberg-Pedersen 2002), American Sign Language (ASL; Conlin, Hagstrom, and Neidle 2003; Hoza 2011), New Zealand Sign Language (NZSL; McKee and Wallingford 2011), and Sign Language of the Netherlands (Nederlandse Gebarentaal, NGT; van Loon 2012). Just like the gesture described in the previous section, palm-up can be one- or two-handed, and it is articulated in neutral space with lax 5-handshape(s) (all fingers extended) and an outward movement, which results in an upward palm orientation. Note that the sign has received various labels in the literature (e.g., “presentation gesture” for Danish Sign Language and “part:indef” or well for American Sign Language), but for the sake of clarity, we will use the gloss palm-up throughout to refer to the sign under consideration.

There is considerable overlap between the various functions of palm-up identified in the four sign languages; we will thus not attempt to describe all functions for all sign languages but instead outline only three of them. First, palm-up may function as a conversation regulator in that it is frequently used to end a turn at talk. This function has been described for American Sign Language (Hoza 2011), New Zealand Sign Language, and Sign Language of the Netherlands, and it is illustrated by the Sign Language of the Netherlands example in (5), a dialogue fragment which actually contains three instances of palm-up (van Loon 2012: 45). Let us first focus on the two clause-final occurrences of palm-up. In both instances, it appears that the signer uses palm-up to end his turn, thus allowing the interlocutor to take the floor.

(5) A: STRONG-BODIED PALM-UP                                     [NGT]
       ‘You have to be strong-bodied.’
    B: PALM-UP INDEX1 USE A-LOT INTERPRETERS YES PALM-UP
       ‘Well, I use a lot of interpreters indeed.’

Other uses of palm-up that can be subsumed under the discourse regulation function are (i) backchannel signal (described for Danish Sign Language, New Zealand Sign Language, and Sign Language of the Netherlands) and (ii) sentence-final question particle in yes/no- and wh-questions in Sign Language of the Netherlands, American Sign Language, and other sign languages (Zeshan 2004b). When appearing at the end of an interrogative clause, as in (6), palm-up can be seen as a special type of turn signal inviting a response from the interlocutor (van Loon 2012: 54). Its interrogative function is highlighted by the fact that the sentence does not contain


the question sign HOW (still, the interpretation is clear, because the sentence-initial indexical sign is accompanied by the Dutch mouthing hoe ‘how’). This function is comparable to the use of Palm Up Open Hand in (4), the crucial difference being that in a sign language, where the linguistic message is articulated by the hands, palm-up cannot be produced simultaneously with the message. Taken together, when used as in (5) and (6), the underlying presentation function of palm-up is still evident.

     hoe
(6) INDEX1 INFLUENCE PALM-UP                                     [NGT]
    ‘How can I have an influence on that?’

There is yet another function of palm-up which can also be subsumed under discourse regulation; in this function, however, palm-up appears sentence-initially, as in the utterance of signer B in (5), and thus opens a turn in a way comparable to the English turn-opener “well”. In this use, palm-up may be analyzed as a discourse marker in that it imposes “a relationship between some aspect of the discourse segment [it is] part of […] and some aspect of a prior discourse segment” (Fraser 1999: 938). Similarly, Traugott (1997) points out that discourse markers commonly occupy a position in the left periphery of the sentence, carrying a special intonation – as is also true for this particular occurrence of palm-up in (5).

Second, palm-up is commonly employed to connect sentences and units smaller than the sentence, thus creating or maintaining coherence within the discourse. This function is attested in all sign languages mentioned above. In (7), we provide an illustrative example from New Zealand Sign Language, in which palm-up serves to connect two sentences, the latter being an evaluative comment on the former (McKee and Wallingford 2011: 229). This use thus appears to be similar to one of the uses of the Palm Presentation gesture described in section 3.1. That is, both the gesture and the sign may occur in contexts expressing (verbal) comments on previous discourse. Moreover, palm-up may also function as a conjunction (“or”) and signal a temporal sequence (“then”) or a causal relationship (“so”) between two propositions.

(7) THEY NOT-ALLOW SHAVE-HEAD NOT-ALLOW PALM-UP OLD-FASHIONED    [NZSL]
    ‘They (the school) didn’t allow shaved heads (which was) old-fashioned.’

Third, signers may use palm-up to convey certain (signer-oriented) epistemic meanings, that is, to assess the certainty or necessity of an utterance. In this use, palm-up is usually accompanied by specific facial expressions which express the signer’s attitude towards the utterance. For American Sign Language, Conlin, Hagstrom, and Neidle (2003) observe that palm-up (which they refer to as a “particle of indefiniteness”) serves to express uncertainty in a number of ways. Amongst other uses, in their data, palm-up surfaces in interrogatives, accompanies the indefinite determiner SOMETHING/ONE, expresses the meaning “according to”, and frequently occurs in sentences containing adverbials such as MAYBE or non-factive verbs such as GUESS and THINK (8) (Conlin, Hagstrom, and Neidle 2003: 10). According to the authors, in all these contexts, palm-up “functions to widen the domain of possibilities under consideration along some contextually determined dimension” (Conlin, Hagstrom, and Neidle 2003: 1).

(8) THINK JOHN SICK PALM-UP                                      [ASL]
    ‘(I) think that John is sick.’

While the accompanying non-manual features are not represented in this example, Conlin, Hagstrom, and Neidle (2003: 15) point out that elements that convey a certain degree of uncertainty are “frequently associated with a tensed nose, lowered brows, and sometimes also raising of the shoulders”. They find that the same non-manual features co-occur with palm-up, which, however, may also be accompanied by raised eyebrows. McKee and Wallingford (2011: 232) report similar findings for New Zealand Sign Language and suggest that different brow positions may be related to slightly different modal meanings (raised brows – possibility; furrowed brows – uncertainty/doubt; for pragmatic functions of non-manuals, see also Dachkovsky and Sandler [2009] and Herrmann [2014]).

3.3. A possible scenario for the grammaticalization of palm-up

As described in section 2, sign languages employ a modality-specific grammaticalization path from co-speech gesture to functional element. Our first case study has shown that this path is also relevant for the grammaticalization of the functional sign palm-up, which developed from the manual co-speech gesture “palm-up”. The spoken language examples presented above have made clear that the co-speech gesture “palm-up” is mainly observed in contexts where discourse referents or propositions are presented to the interlocutor. Not surprisingly, examples from various sign languages reveal that the corresponding sign palm-up is observed in similar, but not necessarily identical, contexts. However, to the best of our knowledge, palm-up does not express a specific lexical meaning related to the presentation of an abstract object; that is, the gesture did not lexicalize into a predicate that conveys a meaning like “suggest” or “put forward (an idea)”. Rather, palm-up is a multifunctional element that is used to express different yet related grammatical or semantic/pragmatic functions.

Although historical data is lacking, it can be argued that the development of palm-up follows route II in (1b) above, along the more fine-grained grammaticalization path suggested in (9). In a first step, the co-speech gesture “palm-up” enters the grammatical system of sign languages with the most general function of a marker of turn-taking; that is, the gesture develops into a marker structuring discourse. The fact that palm-up can only be integrated sequentially into a string of signs may have facilitated its reanalysis as a turn signal. In this use, palm-up occupies an utterance-final position. The same holds for the question particle palm-up; that is, in both uses, palm-up serves to present a piece of discourse to the interlocutor.
Given that the question particle carries an additional grammatical feature, it might well be that this function developed from the more general turn-taking marker, but this possibility is not represented in (9).

(9)  gesture "palm up"
        >  turn-taking marker palm-up  /  question particle palm-up
        >  discourse marker palm-up
        >  conjunction palm-up  /  epistemic marker palm-up

2140

X. Sign language – Visible body movements as language

Once it has entered the grammatical system, palm-up may undergo further changes towards more grammatical meanings. We assume that the next step on the grammaticalization cline is the use of palm-up as a sentence-initial discourse marker/particle, which connects pieces of discourse "to express a response or a reaction to the preceding discourse or attitude towards the following discourse" (Brinton 1996: 37; for the grammaticalization of discourse markers, see also Onodera 2011). This change is characterized by a shift toward increased subjectivity: while still structuring the discourse, the marker now also signals speaker attitude. Crucially, subjectification is taken to be a defining characteristic of grammaticalization processes (Traugott 1995). Both connectivity and subjectification also play a role in the following steps. On the one hand, palm-up developed from an element connecting utterances of different signers into an element connecting clauses of a single signer; in some of these uses, palm-up functions as a conjunction. On the other hand, it grammaticalized further into an epistemic marker. This latter change is characterized by further subjectification, as palm-up now expresses the signer's self-oriented attitude towards her own utterance. The graph in (9) makes clear that we do not have evidence concerning the question whether one of these functions depends on the other, that is, whether, for instance, the epistemic marker developed from the connective. Taken together, we have argued that once the gesture "palm up" entered the language system as the turn-taking marker palm-up, the door was opened for further grammaticalization processes which allowed palm-up to acquire more specific pragmatic and/or grammatical functions – marking a specific sentence type, combining discourse units and clauses, and expressing epistemic meaning. Further empirical research is necessary in order to determine whether the development of palm-up requires a more fine-grained grammaticalization path than the one sketched in (9).

4. Case study II: headshake

According to Kendon (2002: 149), headshakes can be defined as horizontal head movements "either to the left or to the right, and back again, one or more times, the head always returning finally to the position it was in at the start of the movement". Headshakes are attested as co-speech gestures all around the globe, and they are also commonly observed in sign languages of all continents. In this section, we discuss the origin of the headshake as well as its use in both modalities, addressing both similarities and differences. The observed differences will lead us to suggest that the headshake, as used in at least some sign languages, is a grammaticalized gesture.

169. The grammaticalization of gestures in sign languages

2141

4.1. On the origin and use of headshakes

The ubiquity of headshakes raises the question of what motivates the common use of this non-manual gesture. According to some scholars, use of the headshake is rooted in infants' experience during (breast)feeding. Spitz (1957: 91), for instance, claims that "the wide dissemination of the head-shaking 'No' is the consequence of the genetic derivation of this gesture from a universal experience of mankind, namely from the nursing situation": once the child has had enough food, she will turn her head away from the food source – be it the mother's breast or a spoon. Communicative headshakes, used by the child outside this particular situation, are generally assumed to emerge around 12 months. In contrast, Jakobson (1972) suggests that it is actually the head nod which constitutes the basis for gestural head movements. He interprets the head nod as an "obvious visual representation of bowing before the demand", thus symbolizing obedience (Jakobson 1972: 92). Consequently, the semantically opposite sign requires a contrasting head movement. This line of reasoning is appealing as it explains not only the headshake but also the backwards head tilt, which is attested as a sign of negation in some cultures. Under both views, the negative meaning of the headshake is taken to be the basic one – in fact, it is the only one taken into account in these studies. Example (10a) illustrates a headshake accompanying a negative statement (Kendon 2002: 163). Note that the headshake ("hs") does not co-occur with the negative particle not.

        hs                                    hs
(10) a. He was not impressed with us playing with Peter
        hs
     b. I think I was like ten if I …
                     hs
     c. She was very very old

However, headshakes are also frequently observed in non-negative contexts, where they may signal, amongst other things, uncertainty and intensification. In the former function, they may combine with statements that are marked as uncertain by expressions such as "I think" (10b) or "whatever" (McClave 2000: 863), while in the latter function, they commonly accompany evaluative statements including intensifiers such as "absolutely" or "very" (10c) (Kendon 2002: 176). Both these uses can in principle be traced back to the basic negative function: an uncertain statement can be argued to be under the scope of an implicit negative predicate such as "not sure", while intensification may involve the implied meaning of "unbelievable" (see McClave 2000 and Kendon 2002 for discussion of further functions of the headshake; see Harrison 2009 for the interaction of the headshake with manual negative gestures).

4.2. On the use of headshakes in sign languages

McClave (2001) and Zeshan (2004a) observe that signers, just like speakers, occasionally produce gestural headshakes when the signed utterance expresses a meaning of intensification or uncertainty (e.g., in wh-questions). In addition, in basically all sign languages studied to date, negative utterances are accompanied by headshakes. Studies on the expression of sentential negation in various sign languages have revealed that in this context, the headshake is clearly a grammatical marker and not just an optional gestural element. Evidence for this assumption comes from cross-linguistic variation and from the fact that the distribution of the headshake is rule-based. As for cross-linguistic variation, Zeshan (2006) identifies two groups of sign languages. In the first group, negative clauses require the presence of a manual negative sign, be it a negative particle ("not") or a negative argument (e.g., "nobody", "nothing"). Still, a headshake is generally observed, and it usually accompanies only the manual negative sign. Italian Sign Language (LIS) belongs to this group, as illustrated in (11): in (11a), we observe the sentence-final particle non accompanied by a headshake, while (11b) illustrates that the corresponding sentence without non is ungrammatical irrespective of the scope of the headshake (Geraci 2005). Further sign languages that have been reported to be of this manual dominant type include Hong Kong Sign Language (Tang 2006) and Jordanian Sign Language (Hendriks 2008).


                              hs
(11) a.  paolo contract sign non          [LIS]
         'Paolo didn't sign the contract.'
         (     (        (    hs)
     b. *paolo contract sign

However, other sign languages display a different pattern in that propositions are commonly negated by means of a headshake only. Consider the German Sign Language (DGS) examples in (12): (12a) contains the sentence-final particle not and is thus similar to (11a) (the different domain of the headshake will be addressed below). In contrast to Italian Sign Language, however, the version without the manual negative particle is grammatical, too (12b); in this example, the headshake accompanies at least the verb but optionally spreads onto the direct object milk. German Sign Language is thus a non-manual dominant sign language, as the manual marker of negation is optional (Pfau 2008). Other sign languages belonging to this group are Sign Language of the Netherlands (Coerts 1992) and Indopakistani Sign Language (Zeshan 2000).

                            hs
(12) a.  poss1 partner milk drink not          [DGS]
         'My partner doesn't drink milk.'
                       (    )  hs
     b.  poss1 partner milk drink

This typological division is a first indication that the headshake is indeed a linguistic feature, as it is reminiscent of the typological variation in the realm of negation reported for spoken languages (Payne 1985). If the headshake were a mere gesture, its varying status in different sign languages would be unexpected. Things get even more interesting once we zoom in on non-manual dominant sign languages, as once again we find language-specific constraints on the distribution of the headshake. Pfau (2002) and Pfau and Quer (2002) compare patterns of sentential negation in American Sign Language and German Sign Language, two non-manual dominant sign languages. In contrast to German Sign Language (and Italian Sign Language), basic word order in American Sign Language is Subject-Verb-Object, with the negative sign not preceding the verb. As shown in (13a), it is possible for the headshake to co-occur only with not (Neidle et al. 2000: 44). In German Sign Language, however, the corresponding structure is ungrammatical (13b): even when the sentence-final particle is present, the headshake has to also accompany the verb, as was shown in (12a).

              hs
(13) a.  john not buy house                    [ASL]
         'John is not buying a house.'
                                  hs
     b. *poss1 partner milk drink not          [DGS]
         'My partner doesn't drink milk.'

The two sign languages also behave differently when the manual negative sign is dropped – as is commonly the case. Example (12b) already indicated that in German Sign Language, it is possible for the headshake to accompany only the predicate. The same, however, is ungrammatical in American Sign Language (14a): in the absence of not, spreading of the headshake over the entire verb phrase is obligatory in American Sign Language. In other words, (14a) would be grammatical if the direct object house were also under the scope of the headshake. Pfau and Quer (2002) add to the typological picture a third non-manual dominant sign language, Catalan Sign Language. Interestingly, Catalan Sign Language displays yet another pattern: when the manual negative sign is present, it behaves like American Sign Language; when the manual sign is dropped, it patterns with German Sign Language. Note finally that in German Sign Language, when spreading occurs, it has to target entire constituents. Consequently, (14b), where the headshake extends over only part of the direct object, is also judged to be ungrammatical.

              hs
(14) a. *john buy house                        [ASL]
         'John is not buying a house.'
                     hs
     b. *index1 [red flower] like              [DGS]
         'I don't like red flowers.'

Even the few examples we were able to consider here thus reveal that the distribution of the negative headshake is subject to language-specific constraints and, furthermore, that its scope properties are tightly linked to the syntactic structure of the utterance it accompanies. In this respect, the behavior of the headshake clearly differs from that of its gestural counterpart.
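The language-specific constraints reviewed in (11)–(14) amount to a small set of categorical rules. As a rough illustration only (our own schematic abstraction, not part of the original analysis; the function name `negation_ok` and the feature labels are invented for this sketch), the reported judgments can be encoded as a decision procedure:

```python
def negation_ok(language, has_not, hs_covers):
    """Schematic check of the sentential negation patterns in (11)-(14).

    has_not:   whether the manual negative sign is present in the clause
    hs_covers: subset of {"not", "verb", "vp"} spanned by the headshake
    """
    if language == "LIS":
        # Manual dominant: the particle is obligatory and carries the headshake (11a/b).
        return has_not and "not" in hs_covers
    if language == "DGS":
        # Non-manual dominant: the headshake must at least cover the verb,
        # whether or not the particle is present (12a/b vs. 13b).
        return "verb" in hs_covers
    if language == "ASL":
        # With NOT, a headshake on NOT alone suffices (13a); without NOT,
        # it must spread over the entire verb phrase (14a).
        return "not" in hs_covers if has_not else "vp" in hs_covers
    raise ValueError("pattern not modeled for " + language)

# (11a) is grammatical, (13b) is not:
print(negation_ok("LIS", True, {"not"}), negation_ok("DGS", True, {"not"}))
# prints "True False"
```

The point of the abstraction is simply that these judgments are categorical and language-specific, which is precisely what distinguishes the grammaticalized headshake from its gestural counterpart.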

4.3. A possible scenario for the grammaticalization of headshakes

A question that emerges from the previous discussion is how the headshake entered the linguistic system of sign languages and acquired grammatical/functional properties. As a starting point for the following (admittedly speculative) discussion, we want to bring two facts to the reader's attention. First, in any natural language, it is possible to change the polarity of a sentence from affirmative to negative by employing a dedicated morpheme – be it a free particle or an affix. Second, in no spoken language is it possible to negate a proposition by means of a headshake (or some other non-manual co-speech gesture) only (imagine the sentence "I did it" accompanied by a headshake used to convey the meaning "I did not do it"). It thus seems likely that all sign languages started out with a purely manual system, in which sentences are negated by a manual sign alone. This manual sign, which was probably lexicalized from a manual negative gesture, functions as a negative adverbial (negadv). Note that the qualification "purely" captures the fact that such a system is different from the manual dominant systems described above, as in manual dominant sign languages like Italian Sign Language (11), the headshake is typically obligatory. Still, even in a purely manual system, negative sentences may occasionally have been accompanied by a gestural headshake, similar to that observed in oral communication, which implies that its distribution was not tied to particular constituents within the sentence. Repeated co-occurrence of negadv and the headshake may then have given rise to phonological integration of the non-manual, whereby the headshake was reanalyzed as a lexical non-manual component of negadv. At this point, we reach a manual dominant system, that is, a system in which (i) negadv is obligatory and (ii) the non-manual accompanies only negadv, as the two constitute a lexical unit. Once the headshake has entered the linguistic system, the door is opened for a second reanalysis: in a next step, the non-manual dissociates from negadv and turns into a bound affix which combines with the verb. In principle, the system we reach at this point is still manual dominant; yet the headshake has gained an independent functional status. In a final step, negadv becomes optional, yielding a non-manual dominant system, and eventually disappears. The potential end point of this grammaticalization path would thus be a purely non-manual system, which, to the best of our knowledge, is not (yet) attested in any sign language. Also, since the headshake is suprasegmental, comparable to tone in tone languages, it is capable of spreading over syntactically defined domains, for instance, the verb phrase in example (12b) (Pfau 2008). This scenario is summarized in (15a). Let us emphasize that this scenario is not deterministic; it does not necessarily imply that all sign languages will eventually develop into purely non-manual systems.

                       hs            hs              hs
(15) a.  negadv  >  negadv  >  verb negadv  >  verb
     b.  ne  >  ne … pas(int)  >  ne … pas

Clearly, the pattern of language change suggested here is reminiscent of Jespersen's Cycle, according to which "[t]he original negative adverb is first weakened, then found insufficient and therefore strengthened, generally through some additional word, and this in turn may be felt as the negative proper and […] be subject to the same development as the original word" (Jespersen 1917: 4).
Comparing the scenario in (15a) to French split negation (15b), negadv thus patterns with the negative adverb ne, while the headshake initially may have functioned as an intensifier comparable to pas. Subsequently, in non-manual dominant sign languages, the headshake turns into the "negative proper". Beyond these parallels, Jespersen's Cycle has an additional modality-specific flavor in sign languages, as it involves the grammaticalization of a non-manual gesture, resulting in the simultaneous realization of two negative elements (Pfau and Steinbach 2013). Concerning the above scenario, an important disclaimer is in order. A crucial ingredient of our proposal is the assumption that the genesis of negation in sign languages involves the borrowing (i.e., lexicalization and grammaticalization) of manual and non-manual gestures from the hearing community. However, this line of reasoning could be reversed if one adopts a "gestural theory of language origin". According to proponents of this theory, protolanguage was gestural, and sign languages might thus well constitute an earlier stage in the evolution of language (Armstrong and Wilcox 2007; Corballis 2003). Clearly, under this assumption, spoken languages might have borrowed structures and strategies from sign languages. Consequently, various other scenarios come to mind, for instance, one which treats the headshake as the basic clause negator, while manual negative signs, and maybe even negative morphemes in spoken languages, only entered the stage later.

5. On the grammaticalization of gestures in spoken languages

The above discussion took for granted that the pathway from gesture to grammar is modality-specific. But is this really the case? Before concluding the chapter, let us add a few remarks and speculations on the grammaticalization of manual and non-manual gestures in spoken languages. Co-speech gestures commonly mirror (part of) the content or structure of the spoken utterances they accompany: for instance, an emblematic gesture like the "thumb up" gesture may accompany a semantically corresponding lexeme, and a beat gesture may further accentuate prosodically prominent elements. Occasionally, however, manual gestures may contribute meaning that is not expressed through the vocal channel (McNeill 1992). When used in this way, a gesture can be argued to fulfill a lexical function; its specific meaning is still dependent on the utterance, yet it has gained independent semantics. Put differently, a gesture may undergo local lexicalization within a discourse. In a sense, this corresponds to the first step in (1a) above. As for the grammatical use of gestures, we need to distinguish between gestures that express grammatical meaning which is also encoded verbally and gestures that function independently as functional elements. As for the former type, manual gestures may, for instance, be repeated, thereby expressing aspectual meaning (e.g., iterativity) that is also conveyed in the verbal message (Bressem this volume). In the present context, the latter type of gesture is more interesting, as such gestures would be the spoken language equivalent of the process depicted in (1b). Crucially, a fully grammaticalized co-speech gesture is expected to contribute to the syntactic structure of a clause in the same way as functional elements do. This is what we claimed above for some of the uses of palm-up (conjunction, question particle) and the headshake (negation). For spoken languages, however, similar examples are hard to come by. For instance, as for the "palm up" gesture discussed in section 3.1, it is hardly ever the gesture alone that conveys the respective meaning.
An interesting case that comes close to what we are looking for is discussed by Jouitteau (2004). Simplifying somewhat, she observes that Atlantic French (a dialect of French spoken in the western part of France along the Atlantic coast) allows null subjects (pro-drop) if the preverbal position is occupied by a manual or body gesture, such as a head nod, a movement of the hand(s), or a shrug. This distribution contrasts with Standard French, which is not a pro-drop language. Crucially, in Atlantic French, too, the sentences with null subjects would be ungrammatical without the gesture. From this, Jouitteau concludes that the gesture is a fully syntactic element. If her analysis is on the right track, then we are indeed dealing with a case of grammaticalization of manual gestures in a spoken language. With respect to the grammaticalization of non-manual gestures in spoken languages, Gussenhoven (2004: 71–72) discusses the case of intonation. On the one hand, intonation has non-structural paralinguistic properties (i.e., affective properties like anger, friendliness, or surprise). On the other hand, it is also an integral part of linguistic structure, with different grammatical functions associated with specific forms. Gussenhoven claims that universal grammatical intonation patterns are grammaticalized from acoustic gestures, which in turn are motivated by certain biological codes (cf. also Wilcox 2004). For the sake of illustration, let us consider one of the three codes he suggests, the frequency code. This code depends on the correlation between the size of the human larynx and the rate of vocal fold vibration: high pitch is associated with affective properties such as friendliness, lack of dominance, and non-aggressiveness, while low pitch is associated with the opposite properties. The corresponding informational interpretations of the frequency code are the association of low pitch with certainty and of high pitch with uncertainty.
According to Gussenhoven, this association has been grammaticalized in many languages: rising intonation contours are used to mark interrogatives, while non-rising contours mark declaratives. Recall that we argued for a similar scenario in our discussion of the headshake in section 4.3. Headshakes – and probably other grammatical non-manual markers in sign languages – start out as affective visual gestures which grammaticalize via paralinguistic uses into functional elements that signal a specific grammatical meaning, namely negation. This brief discussion thus suggests that in both signed and spoken languages, gestures may be subject to grammaticalization. In both modalities, grammaticalization is constrained by the fact that the source gestures must belong to the same articulatory-perceptual domain as the target language. Consequently, sign languages may integrate visual gestures – be they manual or non-manual – into their grammatical system, while spoken languages generally integrate acoustic gestures (the Atlantic French case being an exception to this generalization). Given that visual gestures provide more input than acoustic gestures, it is not surprising that the grammaticalization of gestures appears much more common in sign languages.

6. Conclusion

In sign languages, manual gestures – in particular, culture-specific emblematic gestures – commonly lexicalize (Janzen 2012). The case studies presented in the previous sections clearly indicate that beyond lexicalization, certain manual and non-manual co-speech gestures that accompany spoken utterances may also fulfill well-defined grammatical functions when used by signers; that is, they may grammaticalize. Grammaticalization may either proceed directly from gesture to grammatical marker (route II) or may involve an intermediate step at which the gesture undergoes lexicalization (route I). We have argued that diachronic changes that take a gestural element as input – that is, route II and the first step on route I – are modality-specific, while the change from lexical to grammatical element parallels grammaticalization phenomena that have been described for spoken languages. According to the scenarios we suggested for "palm up" (9) and the headshake (15a), grammaticalization may involve stepwise changes. It is important to keep in mind, however, that these two developmental pathways are qualitatively different. The grammaticalization path which takes "palm up" as input may apply within a single sign language; that is, the sign palm-up may fulfill various functions within one sign language, with some of these functions being more grammaticalized than others. In contrast, the headshake only acquires a single grammatical function: it always functions as a marker of negation. The various steps in the scenario in (15a) thus reflect different types of negation systems, as used in different sign languages, rather than various functions associated with a single marker.

7. References

Armstrong, David F. and Sherman E. Wilcox 2007. The Gestural Origin of Language. Oxford: Oxford University Press.
Bressem, Jana this volume. Repetitions in gesture. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.), 1641–1649. Berlin/Boston: De Gruyter Mouton.

Brinton, Laurel B. 1996. Pragmatic Markers in English: Grammaticalization and Discourse Functions. Berlin: Mouton de Gruyter.
Coerts, Jane 1992. Nonmanual grammatical markers: An analysis of interrogatives, negations and topicalisations in Sign Language of the Netherlands. Ph.D. dissertation, University of Amsterdam.
Conlin, Francis, Paul Hagstrom and Carol Neidle 2003. A particle of indefiniteness in American Sign Language. Linguistic Discovery 2(1): 1–21.
Corballis, Michael C. 2003. From hand to mouth: The gestural origins of language. In: Morten Christiansen and Simon Kirby (eds.), Language Evolution, 201–218. Oxford: Oxford University Press.
Dachkovsky, Svetlana and Wendy Sandler 2009. Visual intonation in the prosody of a sign language. Language and Speech 52(2/3): 287–314.
Emmorey, Karen 1999. Do signers gesture? In: Lynn Messing and Ruth Campbell (eds.), Gesture, Speech, and Sign, 133–159. Oxford: Oxford University Press.
Engberg-Pedersen, Elisabeth 2002. Gestures in signing: The presentation gesture in Danish Sign Language. In: Rolf Schulmeister and Heimo Reinitzer (eds.), Progress in Sign Language Research: In Honor of Siegmund Prillwitz, 143–162. Hamburg: Signum.
Fraser, Bruce 1999. What are discourse markers? Journal of Pragmatics 31(12): 931–952.
Geraci, Carlo 2005. Negation in LIS (Italian Sign Language). In: Leah Bateman and Cherlon Ussery (eds.), Proceedings of the North East Linguistic Society (NELS 35), 217–229. Amherst, MA: Graduate Linguistics Student Association.
Gussenhoven, Carlos 2004. The Phonology of Tone and Intonation. Cambridge: Cambridge University Press.
Harrison, Simon M. 2009. Grammar, gesture, and cognition: The case of negation in English. Ph.D. dissertation, Université Michel de Montaigne Bordeaux 3.
Heine, Bernd and Tania Kuteva 2002. World Lexicon of Grammaticalization. Cambridge: Cambridge University Press.
Hendriks, Bernadet 2008. Negation in Jordanian Sign Language: A cross-linguistic perspective. In: Pamela Perniss, Roland Pfau and Markus Steinbach (eds.), Visible Variation: Comparative Studies on Sign Language Structure, 104–128. Berlin: Mouton de Gruyter.
Herrmann, Annika 2014. Modal Particles and Focus Particles in Sign Languages. Berlin: De Gruyter Mouton.
Hopper, Paul J. and Elizabeth C. Traugott 1993. Grammaticalization. Cambridge: Cambridge University Press.
Hoza, Jack 2011. The discourse and politeness functions of hey and well in American Sign Language. In: Cynthia B. Roy (ed.), Discourse in Signed Languages, 69–95. Washington, DC: Gallaudet University Press.
Jakobson, Roman 1972. Motor signs for 'Yes' and 'No'. Language in Society 1(1): 91–96.
Janzen, Terry 1999. The grammaticization of topics in American Sign Language. Studies in Language 23(2): 271–306.
Janzen, Terry 2012. Lexicalization and grammaticization. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language: An International Handbook, 816–841. Berlin/Boston: De Gruyter Mouton.
Janzen, Terry and Barbara Shaffer 2002. Gesture as the substrate in the process of ASL grammaticization. In: Richard P. Meier, Kearsy A. Cormier and David G. Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Languages, 199–223. Cambridge: Cambridge University Press.
Jespersen, Otto 1917. Negation in English and Other Languages. Copenhagen: A.F. Høst.
Jouitteau, Mélanie 2004. Gestures as expletives: Multichannel syntax. In: Benjamin Schmeiser, Vineeta Chand, Ann Kelleher and Angelo J. Rodriguez (eds.), Proceedings of WCCFL 23, 422–435. Somerville, MA: Cascadilla Press.
Kendon, Adam 2002. Some uses of the headshake. Gesture 2(2): 147–182.


Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Loon, Esther van 2012. What's in the palm of your hands? Discourse functions of palm-up in Sign Language of the Netherlands. MA thesis, University of Amsterdam.
McClave, Evelyn Z. 2000. Linguistic functions of head movements in the context of speech. Journal of Pragmatics 32(7): 855–878.
McClave, Evelyn Z. 2001. The relationship between spontaneous gestures of the hearing and American Sign Language. Gesture 1(1): 51–72.
McKee, Rachel and Sophia L. Wallingford 2011. 'So, well, whatever': Discourse functions of palm-up in New Zealand Sign Language. Sign Language & Linguistics 14(2): 213–247.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: The University of Chicago Press.
Müller, Cornelia 2004. Forms and uses of the Palm Up Open Hand: A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan and Robert Lee 2000. The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Onodera, Noriko O. 2011. The grammaticalization of discourse markers. In: Heiko Narrog and Bernd Heine (eds.), The Oxford Handbook of Grammaticalization, 614–624. Oxford: Oxford University Press.
Payne, John R. 1985. Negation. In: Timothy Shopen (ed.), Language Typology and Syntactic Description, Volume 1: Clause Structure, 197–242. Cambridge: Cambridge University Press.
Pfau, Roland 2002. Applying morphosyntactic and phonological readjustment rules in natural language negation. In: Richard P. Meier, Kearsy A. Cormier and David G. Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Languages, 263–295. Cambridge: Cambridge University Press.
Pfau, Roland 2008. The grammar of headshake: A typological perspective on German Sign Language negation. Linguistics in Amsterdam 2008(1): 37–74.
Pfau, Roland 2011. A point well taken: On the typology and diachrony of pointing. In: Donna Jo Napoli and Gaurav Mathur (eds.), Deaf around the World: The Impact of Language, 144–163. Oxford: Oxford University Press.
Pfau, Roland and Josep Quer 2002. V-to-Neg raising and negative concord in three sign languages. Rivista di Grammatica Generativa 27: 73–86.
Pfau, Roland and Markus Steinbach 2006. Modality-independent and Modality-specific Aspects of Grammaticalization in Sign Languages. (Linguistics in Potsdam 24.) Potsdam: Universitäts-Verlag. http://opus.kobv.de/ubp/volltexte/2006/1088/.
Pfau, Roland and Markus Steinbach 2011. Grammaticalization in sign languages. In: Heiko Narrog and Bernd Heine (eds.), The Oxford Handbook of Grammaticalization, 683–695. Oxford: Oxford University Press.
Pfau, Roland and Markus Steinbach 2013. Headshakes in Jespersen's Cycle. Paper presented at the 11th Conference on Theoretical Issues in Sign Language Research (TISLR 11), London, July 2013.
Reddy, Michael 1979. The conduit metaphor. In: Andrew Ortony (ed.), Metaphor and Thought. Cambridge: Cambridge University Press.
Spitz, René A. 1957. No and Yes: On the Genesis of Human Communication. New York: International Universities Press.
Tang, Gladys 2006. Questions and negation in Hong Kong Sign Language. In: Ulrike Zeshan (ed.), Interrogative and Negative Constructions in Sign Languages, 198–224. Nijmegen: Ishara Press.
Traugott, Elizabeth C. 1995. Subjectification in grammaticalisation. In: Susan Wright and Dieter Stein (eds.), Subjectivity and Subjectivisation, 31–54. Cambridge: Cambridge University Press.
Traugott, Elizabeth C. 1997. The role of the development of discourse markers in a theory of grammaticalization. Manuscript, Stanford University, CA.


Wilcox, Sherman 2004. Gesture and language: Cross-linguistic and historical data from signed languages. Gesture 4(1): 43–73.
Wilcox, Sherman 2007. Routes from gesture to language. In: Elena Pizzuto, Paola Pietrandrea and Raffaele Simone (eds.), Verbal and Signed Languages: Comparing Structures, Constructs, and Methodologies, 107–131. Berlin: Mouton de Gruyter.
Wilcox, Sherman and Phyllis Wilcox 1995. The gestural expression of modality in ASL. In: Joan Bybee and Suzanne Fleischman (eds.), Modality in Grammar and Discourse, 135–162. Amsterdam: John Benjamins.
Zeshan, Ulrike 2000. Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: John Benjamins.
Zeshan, Ulrike 2004a. Hand, head, and face: Negative constructions in sign languages. Linguistic Typology 8(1): 1–58.
Zeshan, Ulrike 2004b. Interrogative constructions in signed languages: Cross-linguistic perspectives. Language 80(1): 7–39.
Zeshan, Ulrike 2006. Negative and interrogative constructions in sign languages: A case study in sign language typology. In: Ulrike Zeshan (ed.), Interrogative and Negative Constructions in Sign Languages, 28–68. Nijmegen: Ishara Press.

Esther van Loon, Amsterdam (The Netherlands) Roland Pfau, Amsterdam (The Netherlands) Markus Steinbach, Göttingen (Germany)

170. Nonmanual gestures in sign languages

1. Introduction
2. Nonmanual components in sign languages
3. Gestures on a signer's body, head, and face
4. Grammatical nonmanual features in sign languages
5. Distinguishing affective from grammatical nonmanuals
6. Conclusion
7. References

Abstract

Research on sign languages and research on co-speech gestures both used to focus primarily on manual aspects of sign and gesture. However, nonmanual elements performed by the body, the head, and the face also play an essential role in communication, either as gestural means or as grammatical markers. As signers use their upper body to express language, they both sign and gesture in the same so-called visual-gestural modality. Interestingly, many co-speech gestures have found their way into the sign language system as linguistic markers with a grammatical function. Thus, it is quite challenging to distinguish between affective and grammatical nonmanual features. This chapter presents the diversity of nonmanual elements and their manifold functions in sign languages on the continuum from gestural to grammatical markers. As signers quite naturally and intuitively gesture similarly to speakers, we prominently discuss the more specific phenomenon of action role shift at the gesture-sign interface. Furthermore, this chapter provides a survey of grammatical constructions that are systematically marked by grammaticalized nonmanual gestures in sign languages.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2147–2160

X. Sign language – Visible body movements as language

1. Introduction

Irrespective of the fact that signers use their body to encode the grammar of language itself, they obviously also produce and perceive gestures. Like speakers of any other language, signers express these "utterance uses of visible bodily actions" (Kendon 2004, 2008) with their hands, their body, their head, and their face. Thus, there is a common articulatory basis between gesture and sign, which makes it quite difficult to disentangle the two when analyzing sign languages (cf. Özyürek 2012). Despite some differences between the gesturing of speakers and signers (cf. Emmorey 1999), gestures are part of both language modalities – the vocal-auditory modality and the visual-gestural modality – and contribute meaningful information to a spoken or signed utterance. "Together speech and gesture present a more complete version of the meaning than either accomplishes on its own" (McNeill 2000: 7). In sign languages, gestures are much more integrated into the language system, as can be seen in certain phenomena at the interface between sign language grammar and the gesture system such as action role shift. As signers gesture both manually and nonmanually during signing, the issue of differentiating between gesture and sign also applies to nonmanual components of the upper body (see section 2). These different nonmanual components, including facial expressions, head positions, and body movements, may be used for gestural purposes such as conveying emotions along with the sign stream (see section 3). Still, the same features may have a specific grammatical function when they align with manual items or constituents. In fact, many gestural nonmanual expressions in a sign language have a grammatical equivalent (see section 4). Despite the fact that the nonmanual features themselves look alike, there are systematic criteria to differentiate between the two functions (see section 5).
The different nonmanuals can be placed on a continuum from gesture at one end to sign at the other, tracing back grammaticalization processes that show how sign languages incorporate the gestural mode into their language system. A brief conclusion points towards a multilayered language approach to both signed and spoken languages (see section 6).

2. Nonmanual components in sign languages

Research on sign languages has clearly demonstrated that nonmanual expressions may fulfill various functions, either as gestural elements or as linguistic markers operating on all levels of the grammar. The term nonmanual is used for expressions that are articulated on the body, but without the use of the hands (Latin manus 'hand'). Importantly, however, grammatical markings are restricted to specific parts of the body. Lower body movements of the legs, feet, and hips do not have grammatical functions. Instead, all upper body movements including the torso, the head, and the face are relevant both with regard to the grammar and to the expression of affect in the broadest sense. For sign languages, this implies a blurred interface between nonmanuals of completely different kinds.


Tab. 170.1 summarizes various important nonmanual components in sign languages. The articulators body, head, and face can be split up into further individual elements.

Tab. 170.1: Nonmanual markers in sign languages

Body:
– Upper body lean
– Shrugged shoulders

Head:
– Head nod
– Headshake
– Head tilt
– Head and chin movement and position

Face:
– Eye aperture
– Eye gaze
– Eyebrow movement
– Nose movement
– Formation of the cheeks
– Mouth aperture
– Corner of the mouth movement
– Formation of the lips
– Tongue protrusion
– Holistic facial expression

This nonexhaustive list is relevant not only for grammatical marking but also for the gestural use of nonmanuals. In every form of communication – spoken or signed – facial expressions, body movements, and head movements may be used to produce nonmanual gestures such as the expression of emotions and the reaction to external physical triggers. With regard to this gestural use of nonmanuals, the list in Tab. 170.1 can be extended with further markers of the whole body, such as a backward step to express dissociation, for instance. The different nonmanuals can be arranged on a scale with large and highly visible means at one end and small, subtle means at the other. Particularly the nonmanual markers on the face are often quite difficult to detect and require meticulous investigation. For a detailed description of the observed facial movements, the Facial Action Coding System (FACS) is an effective tool (cf. Ekman, Friesen, and Hager 2002). In sign language research, it is common to differentiate between markers that comprise muscles of the upper face and those that are produced on the lower face (cf. Coerts 1992; Wilbur 2003). The category of the upper face usually includes movements of the eyes and the brows. Markings of the lower face can be summarized by the term mouth patterns. Note that in the literature, the term mouth gestures is often used to refer to different mouth patterns that are not gestural but grammatical in nature. Furthermore, facial expressions can be understood and analyzed both as single components and as a holistic unit without the specification of individual parts. It is of central significance that nonmanual articulators may act independently and can also be layered simultaneously for different grammatical and gestural purposes (cf. Wilbur 2003). Some of these articulators exhibit a strong physical relation and may jointly fulfill the same grammatical function.
One example is the forward head tilt as a marker for yes/no-interrogatives in German Sign Language (DGS), which articulatorily also causes a forward body lean in most cases (see section 4).


3. Gestures on a signer's body, head, and face

It is quite obvious that in sign languages, manual and nonmanual articulators are used for gestural purposes in natural conversation. At the same time, however, the identical articulators may systematically convey grammatical information. Sign language studies have long focused on providing evidence for the grammatical status of signs. Since this has become an indisputable fact in linguistic research, the gestural origin of specific signs and nonmanuals and the gesturing of signers themselves are increasingly taken into account and raise interesting issues. Important examples of manual gestures used in signing are the palm-up gesture and a negative gesture articulated with the index finger, which are common within both the spoken and the signed modality (see van Loon, Pfau, and Steinbach this volume). In the following, we will not focus on the different manual gestures but concentrate on nonmanual gestures in sign languages. In the visual-gestural modality as well as in the vocal-auditory modality, emotions, attitudes, and reactions of the communicating interlocutor may be expressed by nonmanuals. Hence, apart from the hands and arms, Müller (2009) lists further articulators like the face, eyes, lips, shoulders, trunk, legs, and feet. Gestures may be expressed with one or more of these articulators. A striking contrast between signers and speakers is that the former mainly use the face to express affective information in the broadest sense, whereas the latter predominantly apply acoustic gestures, the tone of voice, and intonation (cf. Emmorey 1999; Liddell 1980). One important difference between grammatical and gestural nonmanuals is the fact that on the gestural level, single elements can be used without an accompanying signed or spoken word. For instance, it is possible to communicate on a gestural level just by a smile.
In sign language grammar, by contrast, linguistic nonmanuals usually need a manual host element that they align with (see section 4). As a subcategorization of gestural nonmanuals, the distinction between affective and evaluative nonmanuals is reasonable (cf. Emmorey 1999). Facial expressions such as "surprised" and "puzzled" are related to mental states rather than to emotional states (cf. Campbell 1997). In addition, Corina, Bellugi, and Reilly (1999: 309) mention facial expressions that "refer to those expressions which convey a speaker's true-felt emotion and those expressions used in the service of communication to convey the emotional tenor of a past event". Moreover, studies have shown that signers make extensive use of strategies such as back-channeling and turn-taking in discourse (cf. Baker 1977; Coates and Sutton-Spence 2001; Lucas 2002), which Ekman (1979: 183) calls "conversational signals". Similarly, many hearing speakers use smiles, eyebrow movements, and head nods, among others, as listener signals during conversation (cf. Ekman 1979). The manner of usage and the degree of intensity of nonmanual gestures, in particular facial expressions in signed and spoken languages, depend on individual properties of the interlocutor. In addition, gestural nonmanuals may be influenced by the respective cultural background. Nevertheless, some emotional facial expressions are universal, even though it is still a matter of debate how many of them belong to this category. Ekman (1993: 387) argues that "distinctive universal expressions have been identified for anger, fear, disgust, sadness, and enjoyment". Concerning nonmanual gestures in sign languages, the phenomenon of action role shift or constructed action has to be particularly highlighted. This discourse-structuring mechanism is a specific type of perspective shift that a signer uses to take over the role
of another referent or fictional character. On the one hand, such a role shift is used for the reproduction of utterances and thoughts (called quotation role shift, constructed dialogue, constructed discourse, etc.) and on the other hand, which is crucial with regard to the issue of gestures in sign languages, it may be used for the reproduction of actions, emotional states, and mannerisms (called action role shift, constructed action, role playing, etc.; for an overview, see Lillo-Martin 2012). However, as Pfau and Quer (2010: 397) point out, "there is some overlap between both uses of role shift since in quotational role shift, signers frequently take on affective facial expressions of the character whose utterance they report". The prototypical grammatical markers of role shift are body movement, change of head position, and eye gaze change. In addition, similar to intonation in spoken languages, facial expressions are associated with the quoted referent (cf. Herrmann and Steinbach 2012). "Referential shift is a linguistic device that can disambiguate the point of view associated with a facial expression, but the facial expression itself is non-linguistic" (Emmorey 1999: 152). As opposed to quotation role shift, in which the reproduction of utterances is based on lexical signs, for action role shift, manual and nonmanual gestures are of utmost importance. In sign languages, gestural acting can be implemented into narration without the need of lexical signs. Nevertheless, action role shift is subject to certain constraints. The gestural imitation of characters within action role shift is restricted to the upper part of the body. Hence, the lower parts of the body such as the legs and feet are not used for action role shift, which is an important difference to the various possibilities in pantomime. Fig. 170.1 exemplifies action role shift in German Sign Language by a short passage from the fable "The shepherd's boy and the wolf", taken from our data set of five Aesop's fables (for information concerning the fables, see Crasborn et al. 2007).

Fig. 170.1: Gestural-grammatical interplay within action role shift: (a) pure action role shift, (b) action role shift with description (scream), (c) the sign wolf within quotation role shift

This sequence illustrates the systematic integration of action role shift into sign language. In Fig. 170.1(a), the gestural imitation of the shepherd’s boy’s action is carried out through facial expressions, posture, head position, and the hands but without the use of lexical signs. This pure action role shift can be paraphrased as “the boy is standing around, holding his chin while thinking”. The second picture illustrates an action role shift that is accompanied by the sign scream (signs are glossed by small capital letters).


Presented by the narrator, the added sign scream is “an indirect description” of the simultaneously visible action of screaming (Metzger 1995: 264). Hence, the shepherd’s boy is only “partially mapped onto the signer” (Liddell and Metzger 1998: 668). Here, the action of the boy is represented by facial expressions, the head, and the torso, whereas the hands are used for the descriptive remarks of the narrator. In this construction, the combination of the character role and the narrator perspective becomes evident and exemplifies the gestural-grammatical interplay. By using the strategy of description in signed discourse, the narrator clarifies a simultaneously expressed gestural action role shift of a quoted character that might not be visible enough on its own. In general, such instances of action role shift may function as a matrix clause to introduce a following embedded quotation role shift that reports lexical signs (cf. Herrmann and Pendzich to appear). Part of a regular quotation role shift can be seen with the sign wolf in Fig. 170.1(c). Although gestural embodiments may appear in spoken language storytelling, this device does not usually occur in such a systematic fashion as in sign languages. In her study on mouth gestures in Israeli Sign Language (ISL), Sandler (2009) argues for a distinction between iconic gestures, “iconics” in McNeill’s (1992) terms, and mimetic gestures. The observed iconic mouth patterns are distinguished from mimetic gestures because they express information in addition to the signed utterances by “using the mouth to convey properties of other objects or events” symbolically (Sandler 2009: 255). While retelling a cartoon, a deaf signer of Israeli Sign Language, for instance, uses “a repeated opening and closing mouth gesture” to illustrate the repetitive echoing of a ball rolling down a pipe. 
Such iconic mouth gestures – equivalent to co-speech gestures – occur simultaneously with the utterance and function as embellishing or complementary elements (Sandler 2009). Apart from the gestural use of nonmanuals, sign languages make efficient use of grammaticalizing gestural components into the language system. The headshake, for instance, a gestural negative marker in many spoken languages (see Harrison this volume), has become an essential grammatical marker of negation in many sign languages (see van Loon, Pfau, and Steinbach this volume). However, cultural differences may lead to different negation markers such as, for instance, the upward head tilt in Turkish Sign Language (TİD) and Indo-Pakistani Sign Language (IPSL) (cf. Zeshan 2004, 2006). In the next section, we will describe nonmanual gestures that have become part of sign language grammar as grammaticalized and lexicalized elements.

4. Grammatical nonmanual features in sign languages

Nonmanuals are essential for each level of the grammar, which typologically applies to all sign languages investigated so far. Concerning nonmanuals in general, two characteristics are particularly decisive: they are multifunctional, and they may simultaneously combine with manual components as well as with further nonmanual features (cf. Herrmann and Steinbach 2013; Wilbur 2000, 2003). The meanings and functions that are associated with the different articulators are not universal but are specifically determined for each sign language. By investigating nonmanual features, it becomes immediately evident that these linguistic markers very often have gestural equivalents in the surrounding spoken language and originate from cultural gestures in the respective country (cf. Goldin-Meadow 2003; Janzen and Schaffer 2002; Özyürek 2012; Wilcox 2004). Janzen (2012: 836) emphasizes that "gestures are not 'hearing people's' gestures, they belong
to deaf people, too, and evidence is mounting that they are integral to both lexicalization and grammaticalization patterns in sign languages”. Investigating grammatical facial expressions, the distinction between the upper and lower face is relevant with respect to different grammatical functions (cf. Coerts 1992; Wilbur 2003). Research on different sign languages has revealed that the upper face markers are particularly essential for grammatical structures at the syntactic and prosodic levels. For the lexical and morphological levels, mouth patterns play an important role, but the interrelation between upper and lower face seems to be more balanced than often suggested. In the following, we provide examples of specific nonmanuals that operate on the different levels of sign language grammar. With regard to the lexicon, nonmanual signals can be an obligatory, inherent part of specific signs (cf. Becker and von Meyenn 2012; Coerts 1992; Liddell 2003; Woll 2001). The sign recently in German Sign Language is always articulated with a slightly protruded tip of the tongue (see Fig. 170.2(a)). Variations between a lateral or a central tongue protrusion have no effect on the meaning of the sign and may simply be a matter of phonetic variation, perhaps due to differences in dialects. In American Sign Language (ASL), however, apart from the manual articulation, recently requires a small sideward head turn and a tension of the muscles in the cheeks either on the same or on both sides (cf. Liddell 2003). Interestingly, sign languages seem to exhibit modality-specific patterns of lexicalization. Some lexical signs for affective and evaluative concepts are produced with specific facial expressions, head positions, and/or body movements. While signing sad, for instance, signers of German Sign Language use downcast corners of the mouth, a tiny eye aperture, and furrowed eyebrows (see Fig. 170.2(b)). 
It is still an open issue whether these specific components are inherent, obligatory markers of the sign or whether we are dealing with holistic facial expressions.

Fig. 170.2: Lexical facial expressions: (a) tongue protrusion of recently, (b) gesture-based facial expression of sad

Considering morphological constructions, facial expressions are equally essential and are used simultaneously with the manual items. Morphological facial expressions may function as adverbial and adjectival modifications. A basic sign such as write can be adverbially modified by specific facial expressions, as in "write concentrated", "write a lot", and "write carelessly" (for the latter, see the tongue protrusion in Fig. 170.3(a)). Obviously,
we find manual adverbs in German Sign Language, such as maybe and unfortunately, but certain adverbs are solely expressed through nonmanual features and convey their meaning independently from the manual sign. The on- and offsets of adverbial nonmanuals are temporally coordinated with the manual sign or – in the case of sentential adverbs – spread across the clausal domain (cf. Aronoff, Meir, and Sandler 2005; Boyes Braem 1995; Happ and Vorköper 2006; Liddell 1980; Reilly and Anderson 2002). Similar to adverbial modification, nonmanual adjectives may modify nominal elements (cf. Steinbach 2007). To express "a big house", for example, the cheeks are puffed simultaneously with the manual sign house. The syntactic and prosodic marking of signed utterances is carried out in particular by facial expressions of the upper face (cf. Liddell 1980; Herrmann 2012; Sandler 2009, 2012). Raised eyebrows, a gestural indicator of surprise, astonishment, and attentiveness, function in many sign languages as a syntactic marker of various constructions such as topics, yes/no-interrogatives, conditionals, and relative clauses. In general, sentence types are often indicated by specifically determined movements of the eyebrows. In German Sign Language, declarative sentences exhibit a neutral facial expression and head position except when they are negated, confirmed, or emphasized. Interrogatives and imperatives, however, are marked with specific grammatical nonmanuals. Yes/no-interrogatives are accompanied by raised eyebrows and a forward head tilt (see Fig. 170.3(b)). In this case, as well as with certain imperatives, grammatical nonmanuals constitute the only morphosyntactic markers. By contrast, wh-interrogatives comprise manual interrogative wh-elements that are combined with furrowed eyebrows. Syntactic nonmanuals usually spread over syntactic constituents such as the entire clause in the case of yes/no-interrogatives (cf. Boyes Braem 1995; Cecchetto 2012; Neidle et al. 2000; Petronio and Lillo-Martin 1997; Sandler and Lillo-Martin 2006).

Fig. 170.3: Grammatical facial expressions in German Sign Language: (a) morphological facial expression carelessly combined with the sign write, (b) nonmanual markers for yes/no-interrogatives (raised eyebrows, forward head tilt) combined with the sign explain

Two different nonmanual features express regular conditional clauses in German Sign Language: an eyebrow raise accompanying the condition and a head nod performed on the consequence. Thus, the conditional relation is only indicated nonmanually, as can be seen in example (1) (see also Coerts 1992; Reilly, McIntire, and Bellugi 1990).

(1)        r              hn
     mouse put :   start                                              [DGS]
     'If the mouse (card) is put (on the table), then (the game) starts.'

(Notation: Sign language examples are glossed in English small capital letters. The scope, i.e. onset and offset, of grammatical nonmanual markers is indicated by the line above the gloss; 'r' = raised eyebrows, 'hn' = head nod.) Counterfactual conditionals, for instance, are indicated by additional nonmanuals such as a squint, which is added to the existing markings and is argued to have an inherent meaning that is compositionally combined to derive the complex counterfactual meaning (cf. Dachkovsky 2008). Whether the nonmanuals described above are analyzed as pure instantiations of syntactic features or as prosodic markers with pragmatic meaning contributions is still a matter of debate (cf. Herrmann 2012; Neidle et al. 2000; Sandler and Lillo-Martin 2006; Sandler 2010, 2012). However, the systematic alignment patterns and the clear structural linguistic functions differentiate the grammatical use of specific nonmanuals from the affective gestural use (see section 5). With regard to the head as a nonmanual articulator, the negative headshake is a crucial example. The typological comparison of sentential negation in various sign languages illustrates how differently the language systems incorporate this gesture into the grammar. There are systematic restrictions concerning the elements that the headshake aligns with and whether or not spreading of the headshake onto constituents (e.g. the verb and/or object arguments) is permitted. Moreover, some sign languages are nonmanually dominant, meaning that the nonmanual headshake is sufficient to negate a sentence, as in German Sign Language and Catalan Sign Language (LSC), whereas other sign languages, e.g. Italian Sign Language (LIS), require a manual negative element as their main strategy of negation. The headshake in such manually dominant sign languages remains an optional marker (cf. Geraci 2005; Pfau and Quer 2007; Pfau 2008; Quer 2012; Zeshan 2004, 2006).
Furthermore, the body as the largest nonmanual articulator is used for various grammatical purposes. Forward and backward body leans in German Sign Language may differentiate between personal and impersonal politeness forms and may also function as markers of exclusion and inclusion such as with reject, for instance, and with dual and paucal number forms (“the three of us” vs. “the three of you”). In addition, body leans are part of the grammatical marking of quotation role shift and are used to indicate information structural contrast in signed discourse (cf. Happ and Vorköper 2006; Wilbur and Patschke 1998). This cursory survey has revealed that nonmanuals as a highly complex phenomenon are enormously relevant on all grammatical levels in sign languages. Such an element may occur as a single feature, but may systematically combine with other nonmanual markers resulting in complex simultaneous constructions. Grammatical nonmanuals may accompany single signs and can be layered with syntactic or prosodic phrases. The described systematic integration of nonmanual gestures into the grammar is specific to the visual-gestural modality. Even though nonmanual grammatical and gestural markings are performed via the same articulatory channel, they can still be differentiated by specific criteria, which will be discussed in the following section.


5. Distinguishing affective from grammatical nonmanuals

Due to the shared articulatory transmission mode of sign and gesture, as mentioned before, it is challenging to differentiate between affective and grammatical nonmanuals in sign languages. Nevertheless, there are clear criteria to distinguish between the two. Most importantly, linguistic nonmanuals have a defined scope and are timed to align with linguistic units. Affective nonmanuals, on the other hand, may often vary and exhibit gradual and inconsistent spreading behavior. The clear on- and offsets of grammatical nonmanuals, which mainly correspond to constituent structure, stand in opposition to the more global patterns of gestural nonmanuals. Typically, expressions used for grammatical purposes comprise only a few restricted facial articulators; thus, different specifications of facial muscles can be detected in either category. Signers have clear intuitions when it comes to grammaticality judgments, but show more signer-specific variation with affective nonmanuals (cf. Baker-Shenk 1983; Corina, Bellugi, and Reilly 1999; Emmorey 1999; Poizner, Klima, and Bellugi 1987; Wilbur 2003). In their study on the interaction of affective and linguistic eyebrow movements in signed interrogatives, de Vos, van der Kooij, and Crasborn (2009) clearly show that facial gestures need to be taken into account when investigating linguistic markers. Research on sign language acquisition reveals that children acquire the systematic use of grammatical nonmanuals at a later stage than the respective inconsistent affective nonmanual gestures. Nonmanual interrogative marking, for instance, follows the acquisition of the respective manual markers, even though the affective gestures of brow raise and furrowed brows are already present. The same is the case for the marking of conditionals in certain sign languages (cf. Emmorey et al. 1995; Morgan and Woll 2002; Reilly, McIntire, and Bellugi 1990).
Furthermore, with regard to nonmanual morphology, Anderson and Reilly (1998: 139–140) found that "in the case of facial adverbs where there is no explicit affective interference, these non-manual signals are acquired earlier and without significant difficulty". Neuropsychological studies provide further evidence for a differentiation between affective and grammatical nonmanuals, because grammatical facial expressions are found to be processed left-hemispherically, whereas affective facial gestures activate right-hemispheric areas of the brain (cf. Corina, Bellugi, and Reilly 1999; Corina and Spotswood 2012; McCullough, Emmorey, and Sereno 2005; Poizner, Klima, and Bellugi 1987). Studies on categorical perception of facial features with deaf signers have also shown specific differences in the perception of affective and linguistic facial expressions. Linguistic competence in a sign language may have an effect on categorical perception of gestural facial expressions (cf. Campbell et al. 1999; McCullough and Emmorey 2009). In sum, for the status of nonmanual features, either affective or grammatical, we can rely on various distinctive criteria, such as scope and alignment; we find functional differences; and signers have clear intuitions on the grammaticality of utterances. Further evidence for this distinction comes from language acquisition data and psycho- and neurolinguistic studies. Despite the criteria mentioned above, blurred cases appear due to the facts that (i) the same nonmanual feature is typically used in both functions and (ii) we are dealing with a grammaticalization continuum between nonmanual gestures and nonmanual signs.


6. Conclusion

Sign languages, as the natural languages of deaf people around the world, have the unique opportunity to simultaneously layer different manual and nonmanual articulators to produce gestures and convey grammatical meaning. Signs performed by the hands can be accompanied by movements of the body, the head, and the face. In the visual-gestural modality, gestures are expressed via the same channels as language itself. Nonmanual markers such as a backward body lean or raised eyebrows may be nonmanual gestures indicating dissociation or surprise, for instance. However, they can also be an inherent part of a lexical entry or have grammatical functions such as the syntactic marking of yes/no-interrogatives and conditionals. Nonmanuals that play an essential role in sign language grammar have in many cases emerged from the gesture systems of the surrounding spoken language cultures. Signers gesture similarly to hearing people but obviously integrate gestures much more systematically into their language system. The case of action role shift illustrates this ongoing interplay between gestures and signing. Many studies provide evidence for the difference between affective gestural nonmanuals and nonmanuals that fulfill grammatical functions. In any case, it is fruitful to assume a multilayered language approach that broadly reflects the continuum of gestures and grammatical features on a signer's body.

7. References

Anderson, Diane E. and Judy S. Reilly 1998. PAH! The acquisition of adverbials in ASL. Sign Language and Linguistics 1(2): 117–142.
Aronoff, Mark, Irit Meir and Wendy Sandler 2005. The paradox of sign language morphology. Language 81(2): 301–344.
Baker, Charlotte 1977. Regulators and turn-taking in American Sign Language discourse. In: Lynn A. Friedman (ed.), On The Other Hand: New Perspectives on American Sign Language, 215–236. New York: Academic Press.
Baker-Shenk, Charlotte 1983. A Microanalysis of the Nonmanual Components of Questions in American Sign Language. Berkeley, CA: University of California.
Becker, Claudia and Alexander von Meyenn 2012. Phonologie. Der Aufbau gebärdensprachlicher Zeichen. In: Hanna Eichmann, Martje Hansen and Jens Heßmann (eds.), Handbuch Deutsche Gebärdensprache. Sprachwissenschaftliche und anwendungsbezogene Perspektiven, 31–59. Seedorf: Signum.
Boyes Braem, Penny 1995. Einführung in die Gebärdensprache und ihre Erforschung. Hamburg: Signum.
Campbell, Ruth 1997. Making faces: Coextant domains for language and visual cognition. In: Marc Marschark, Patricia Siple, Diane Lillo-Martin, Ruth Campbell and Victoria S. Everhart (eds.), Relations of Language and Thought. The View from Sign Language and Deaf Children, 147–152. New York/Oxford: Oxford University Press.
Campbell, Ruth, Bencie Woll, Philip J. Benson and Simon B. Wallace 1999. Categorical perception of face actions: Their role in sign language and in communicative facial displays. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology 52(1): 67–95.
Cecchetto, Carlo 2012. Sentence types. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 292–315. Berlin: Mouton de Gruyter.
Coates, Jennifer and Rachel Sutton-Spence 2001. Turn-taking patterns in deaf conversation. Journal of Sociolinguistics 5(4): 507–529.
Coerts, Jane 1992. Nonmanual Grammatical Markers. An Analysis of Interrogatives, Negations and Topicalisations in Sign Language of the Netherlands. Amsterdam: University of Amsterdam doctoral dissertation.


X. Sign language – Visible body movements as language

Corina, David P., Ursula Bellugi and Judy Reilly 1999. Neuropsychological studies of linguistic and affective facial expressions in deaf signers. Language and Speech 42(2–3): 307–331.
Corina, David P. and Nicole Spotswood 2012. Neurolinguistics. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 739–762. Berlin: Mouton de Gruyter.
Crasborn, Onno, Johanna Mesch, Dafydd Waters, Annika Nonhebel, Els van der Kooij, Bencie Woll and Brita Bergman 2007. Sharing sign language data online. Experiences from the ECHO Project. International Journal of Corpus Linguistics 12(4): 535–562.
Dachkovsky, Svetlana 2008. Facial expression as intonation in Israeli Sign Language: The case of neutral and counterfactual conditionals. In: Josep Quer (ed.), Signs of the Time. Selected Papers from TISLR 8, 61–82. Seedorf: Signum.
Ekman, Paul 1979. About brows: Emotional and conversational signals. In: Michael von Cranach, Klaus Foppa, Wolf Lepenies and Detlev Ploog (eds.), Human Ethology. Claims and Limits of a New Discipline, 169–202. Cambridge: Cambridge University Press.
Ekman, Paul 1993. Facial expression and emotion. American Psychologist 48(4): 384–392.
Ekman, Paul, Wallace V. Friesen and Joseph C. Hager 2002. Facial Action Coding System. The Manual. Salt Lake City, UT: Research Nexus.
Emmorey, Karen 1999. Do signers gesture? In: Lynn S. Messing and Ruth Campbell (eds.), Gesture, Speech, and Sign, 133–159. New York: Oxford University Press.
Emmorey, Karen, Ursula Bellugi, Angela Friederici and Petra Horn 1995. Effects of age of acquisition on grammatical sensitivity: Evidence from on-line and off-line tasks. Applied Psycholinguistics 16(1): 1–23.
Geraci, Carlo 2005. Negation in LIS (Italian Sign Language). In: Leah Bateman and Cherlon Ussery (eds.), Proceedings of the North East Linguistic Society (NELS 35), 217–229. Amherst, MA: GLSA.
Goldin-Meadow, Susan 2003. Hearing Gesture: How Our Hands Help Us Think. Cambridge, MA: Harvard University Press.
Happ, Daniela and Marc-Oliver Vorköper 2006. Deutsche Gebärdensprache. Ein Lehr- und Arbeitsbuch. Frankfurt a. M.: Fachhochschulverlag.
Harrison, Simon this volume. Head shakes: Etymology, cultural history, and cultural diversity of Yes and No gestures. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 1496–1501. Berlin/Boston: De Gruyter Mouton.
Herrmann, Annika 2012. Prosody in German Sign Language. In: Pilar Prieto and Gorka Elordieta (eds.), Prosody and Meaning, 349–380. Berlin: Mouton de Gruyter.
Herrmann, Annika and Markus Steinbach 2012. Quotation in sign languages. A visible context shift. In: Ingrid van Alphen and Isabelle Buchstaller (eds.), Quotatives. Cross-linguistic and Cross-disciplinary Perspectives, 203–228. Amsterdam: John Benjamins.
Herrmann, Annika and Markus Steinbach (eds.) 2013. Nonmanuals in Sign Language. Amsterdam/Philadelphia: John Benjamins.
Herrmann, Annika and Nina-Kristin Pendzich to appear. Between narrator and protagonist in fables of German Sign Language. In: Annika Hübl and Markus Steinbach (eds.), Linguistic Foundations of Narration in Spoken and Sign Languages. Amsterdam/Philadelphia: John Benjamins.
Janzen, Terry 2012. Lexicalization and grammaticalization. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 816–841. Berlin: Mouton de Gruyter.
Janzen, Terry and Barbara Schaffer 2002. Gesture as the substrate in the process of ASL grammaticization. In: Richard P. Meier, Kearsy Cormier and David Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Languages, 199–223. Cambridge: Cambridge University Press.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.

Kendon, Adam 2008. Some reflections on the relationship between 'gesture' and 'sign'. Gesture 8(3): 348–366.
Liddell, Scott K. 1980. American Sign Language Syntax. The Hague/Paris/New York: Mouton.
Liddell, Scott K. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Liddell, Scott K. and Melanie Metzger 1998. Gesture in sign language discourse. Journal of Pragmatics 30(6): 657–697.
Lillo-Martin, Diane 2012. Utterance reports and constructed action. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 365–387. Berlin: Mouton de Gruyter.
Lucas, Ceil 2002. Turn-Taking, Fingerspelling, and Contact in Signed Languages. Washington, DC: Gallaudet University Press.
McCullough, Stephen, Karen Emmorey and Martin Sereno 2005. Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cognitive Brain Research 22(2): 193–203.
McCullough, Stephen and Karen Emmorey 2009. Categorical perception of affective and linguistic facial expressions. Cognition 110(2): 208–221.
McNeill, David 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago: The University of Chicago Press.
McNeill, David (ed.) 2000. Language and Gesture. Cambridge: Cambridge University Press.
Metzger, Melanie 1995. Constructed dialogue and constructed action in American Sign Language. In: Ceil Lucas (ed.), Sociolinguistics in Deaf Communities, 255–271. Washington, DC: Gallaudet University Press.
Morgan, Gary and Bencie Woll (eds.) 2002. Directions in Sign Language Acquisition. Amsterdam: John Benjamins Publishing Company.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjær (ed.), The Routledge Linguistics Encyclopedia, 214–217. London: Routledge.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan and Robert G. Lee 2000. The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. Cambridge, MA: Massachusetts Institute of Technology Press.
Özyürek, Asli 2012. Gesture. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 626–646. Berlin: Mouton de Gruyter.
Petronio, Karen and Diane Lillo-Martin 1997. WH-movement and the position of Spec-CP: Evidence from American Sign Language. Language 73(1): 18–57.
Pfau, Roland 2008. The grammar of headshake: A typological perspective on German Sign Language negation. Linguistics in Amsterdam 1: 34–71.
Pfau, Roland and Josep Quer 2007. On the syntax of negation and modals in Catalan Sign Language and German Sign Language. In: Pamela Perniss, Roland Pfau and Markus Steinbach (eds.), Visible Variation. Comparative Studies on Sign Language Structure, 129–161. Berlin: Mouton de Gruyter.
Pfau, Roland and Josep Quer 2010. Nonmanuals: Their grammatical and prosodic roles. In: Diane Brentari (ed.), Sign Languages, 381–402. New York: Cambridge University Press.
Poizner, Howard, Edward S. Klima and Ursula Bellugi 1987. What the Hands Reveal about the Brain. Cambridge, MA: Massachusetts Institute of Technology Press.
Quer, Josep 2012. Negation. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 626–646. Berlin: Mouton de Gruyter.
Reilly, Judy S., Marina McIntire and Ursula Bellugi 1990. The acquisition of conditionals in American Sign Language: Grammaticized facial expressions. Applied Psycholinguistics 11(4): 369–392.
Reilly, Judy S. and Diane E. Anderson 2002. Faces. The acquisition of non-manual morphology in ASL. In: Gary Morgan and Bencie Woll (eds.), Directions in Sign Language Acquisition, 159–181. Amsterdam: John Benjamins.


Sandler, Wendy 2009. Symbiotic symbolization by hand and mouth in sign language. Semiotica 174(1/4): 241–275.
Sandler, Wendy 2010. Prosody and syntax in sign languages. Transactions of the Philological Society 108(3): 298–328.
Sandler, Wendy 2012. Visual prosody. In: Roland Pfau, Markus Steinbach and Bencie Woll (eds.), Sign Language. An International Handbook, 55–76. Berlin: Mouton de Gruyter.
Sandler, Wendy and Diane Lillo-Martin 2006. Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Steinbach, Markus 2007. Gebärdensprache. In: Markus Steinbach, Ruth Albert, Heiko Girnth, Annette Hohenberger, Bettina Kümmerling-Meibauer, Jörg Meibauer, Monika Rothweiler and Monika Schwarz-Friesel (eds.), Schnittstellen der germanistischen Linguistik, 137–185. Stuttgart: Metzler.
Van Loon, Esther, Roland Pfau and Markus Steinbach this volume. The grammaticalization of gestures in sign languages. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2), 2133–2149. Berlin/Boston: De Gruyter Mouton.
Vos, Connie de, Els van der Kooij and Onno Crasborn 2009. Mixed signals: Combining linguistic and affective functions of eyebrows in questions in Sign Language of the Netherlands. Language and Speech 52(2/3): 315–339.
Wilbur, Ronnie B. 2000. Phonological and prosodic layering of nonmanuals in American Sign Language. In: Karen Emmorey and Harlan Lane (eds.), The Signs of Language Revisited: Festschrift für Ursula Bellugi and Edward Klima, 213–244. Mahwah, NJ: Lawrence Erlbaum.
Wilbur, Ronnie B. 2003. Modality and the structure of language. Sign languages versus signed systems. In: Marc Marschark and Patricia E. Spencer (eds.), Oxford Handbook of Deaf Studies, Language and Education, 332–346. Oxford: Oxford University Press.
Wilbur, Ronnie B. and Cynthia G. Patschke 1998. Body leans and the marking of contrast in American Sign Language. Journal of Pragmatics 30(3): 275–303.
Wilcox, Sherman E. 2004. Gesture and language. Cross-linguistic and historical data from signed languages. Gesture 4(1): 43–73.
Woll, Bencie 2001. The sign that dares to speak its name: Echo phonology in British Sign Language (BSL). In: Penny Boyes Braem and Rachel Sutton-Spence (eds.), The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Languages, 87–98. Hamburg: Signum.
Zeshan, Ulrike 2004. Hand, head, and face: Negative constructions. Linguistic Typology 8(1): 1–58.
Zeshan, Ulrike 2006. Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press.

Annika Herrmann, Göttingen (Germany) Nina-Kristin Pendzich, Göttingen (Germany)


171. Enactment as a (signed) language communicative strategy

1. Embodiment as posture and action in signed language
2. Can constructed action be considered obligatory?
3. Is constructed action similar across sign languages?
4. Does constructed action appear across language registers?
5. How does constructed action compare to co-speech gestures?
6. Conclusions and future directions
7. References

Abstract

Perhaps one of the most obvious facets of signed languages to non-signers is the manner in which signers use their bodies in mimetic ways to describe the actions of characters while also producing lexical signs and grammatical structures (the latter of which are likely not understandable to a non-signer). This mimetic use of the body appears as a common communicative strategy in sign for depicting animacy, although its designation within signed language grammars is still under debate. Some researchers consider mimetic actions part of the signs and grammar of signed language, while others argue for a status as gestural complements to sign. This brief report summarizes data from several studies to provide one view of how and why signers depict animacy by using this strategy. Additionally, this report considers the extent to which certain communicative strategies are realizations of embodiment in language and communication more generally.

1. Embodiment as posture and action in signed language

Signed languages contain lexical signs and grammar for the creation of meaningful utterances, just like spoken languages do. Users of signed languages, however, also incorporate body movements and postures within the signing stream that depict the actions or persona of a character or other animate object, and they appear to do this frequently. In my work, I refer to such movements and postures as constructed action, or a signer's use of "…their body, head, and eye gaze to report the actions, thoughts, words, and expressions of characters within the discourse" (Metzger 1995). Data from multiple sign languages attest to cross-linguistic constructed action use. While some authors claim that constructed action has gestural qualities (e.g., Liddell and Metzger 1998; Quinto-Pozos and Mehta 2010), other writers have described such meaningful articulations as linguistic devices at the lexical and sentential levels of structure (e.g., Supalla 1982, 2003). Early work on this topic in sign linked constructed action to examples of constructed dialogue (Roy 1989; Winston 1991), following work by Tannen (1989) on similar strategies that are used in spoken language to encourage a listener to become more involved with the speaker and the text or message that is being communicated. Like constructed dialogue, constructed action is not necessarily an accurate rendition of how a character may have actually acted (or might act in the future) or of the persona of that character, but rather the signer's account.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2161–2167


Constructed action in sign can appear via the use of various body parts of the signer, including the arms/hands, torso, head/face, eye gaze, and lower body. For instance, a constructed action rendition of a person tripping might include the upper body (torso, head, and arms) quickly jerking forward with the arms flailing about and a look of surprise on the signer's face. In some emphatic cases of constructed action, the signer might also use the lower body to depict the tripping sequence (i.e., walk forward and act as if she is stumbling on something), but it is more common for constructed action to be depicted on body parts above the waist. See Fig. 171.1 for three renditions of a signer using constructed action to depict a character tripping (from Quinto-Pozos and Mehta 2010).

Fig. 171.1: Three renditions of a tripping scene as demonstrated via Constructed Action (CA); from Quinto-Pozos and Mehta (2010). Used with permission.

Many questions could be asked of constructed action and its role in signed languages. For example, is this form of embodiment of language and communication an obligatory part of signed language, or does it simply appear as one option (no better or worse than others) for communicating information about a character? Presumably, an obligatory reading for constructed action would raise questions about how it interacts with the grammar of a sign language. Further, to what extent does constructed action appear across multiple signed languages? Could this form of embodiment appear only in some sign languages but not others, comparable to a grammatical feature that is not shared by all languages (e.g., overt subjects in sentences)? Another question is whether constructed action could be a register phenomenon, appearing only in certain genres of language use. Or might it be used throughout different types of linguistic discourse, including different levels of formality, and for communication with various profiles of interlocutors or audiences? Responses to this question could inform our understanding of constructed action as it functions throughout a signer's linguistic repertoire. In addition, how does constructed action compare with the co-speech gestures produced by hearing people? Are there similarities and differences across language users that could provide the researcher with clues to commonalities across all communicators with respect to how they use their bodies for communication? Presumably, constructed action could tell us much about embodiment and communication more generally and about the role of the body in supporting communication.

2. Can constructed action be considered obligatory?

It is worthwhile to begin the investigation of a feature of language and communication with general questions that concern its necessity and role in communication. To that end, I conducted a study that used production and judgment data to examine constructed action and its use in American Sign Language (Quinto-Pozos 2007a). For the production task, participants were asked to provide signed portrayals of 20 elicitation videos whose content ranged from animate to inanimate objects, with movement of some sort being depicted. For each elicitation clip, each signer produced two signed portrayals: one without any limitations (hereafter first-production clips) and another in which the participant was usually asked to exclude an instance of constructed action from his/her first portrayal (hereafter second-production clips). Ten Deaf signers of American Sign Language (5 of Deaf parents, 5 of hearing parents) provided production data. In some cases, a participant would refuse to provide a signed portrayal of the elicitation clip without a particular instance of constructed action from the first-production clip, claiming that such a rendition would be unacceptable. For the judgment task reported in Quinto-Pozos (2007a), 18 Deaf signers of American Sign Language (5 of Deaf parents and 13 of hearing parents) viewed, in random order, 33 of the production-clip pairs (first- and second-). Participants judged those clips on two parameters: clarity and degree of "correctness". Participants providing production data differed from those providing judgment data. In more than half of the 33 pairs of clips used for this portion of the study, participants judged first-production clips to be significantly different from second-production clips, with the first-production clips being clearer and more correct than the second-production clips. Based on these data, I concluded that constructed action appears to be obligatory for some users of American Sign Language, both in terms of a user producing constructed action and in terms of what an interlocutor/viewer feels about the use or lack of constructed action to portray an action or movement of a character.
The results reported in Quinto-Pozos (2007a) also raised questions about signed language classifiers. In the context of signed languages, the term classifier commonly refers to handshapes that are used to depict an object in its entirety, such as an American Sign Language 3-handshape to refer to a vehicle and a bent-V handshape to refer to an animal (commonly referred to as entity or semantic classifiers). Other types of classifiers are used to describe objects (i.e., provide information about the size and/or shape of an object; commonly referred to as size-and-shape specifiers, SASSes) or to show how objects are handled (commonly referred to as handling classifiers). Supalla (1982) provides an early description of American Sign Language classifiers, including developmental milestones for their acquisition. With respect to Quinto-Pozos (2007a), why did classifier signs not appear to be sufficient for depicting aspects of the characters and animate objects that the signers were interested in communicating? In other words, why was constructed action seemingly obligatory? In a follow-up analysis (Quinto-Pozos 2007b), I suggested that classifiers were less than optimal for depicting aspects of a character, in comparison with the use of constructed action, because the classifiers did not allow for isomorphic portrayals of the animate referent to the degree that constructed action likely would have. As part of the analysis, I demonstrated how the lack of correspondence between articulator (i.e., classifier handshape) and referent was caused by one of the following: the inability of (entity) classifier signs to provide specific information about salient aspects of an animate entity, limitations on information based on the number and shape of articulators, or motoric constraints imposed by human articulators and limitations on human movement.
I concluded that, when a classifier sign is not optimally communicative and the signer can portray a character’s posture or actions with her body, the constructed action approach is used (and favored).


The two articles from 2007 described here provided an initial picture of the importance of constructed action in American Sign Language and of how constructed action interacts with aspects of language and communication. I noted that constructed action sometimes co-occurs with classifier and other signs (also see Dudis 2004), and it sometimes alternates with them in sequence. It is worth emphasizing that a signer can utilize various articulators at once (e.g., hands, torso, head, and face), which means that constructed action and classifiers can co-occur. Constructed action could appear via a facial expression and a torso movement while the signer is also articulating a classifier handshape to depict the location of the animate being with respect to other objects or entities. Constructed action provides information that cannot be provided efficiently or robustly by using only lexical signs or classifier signs.

3. Is constructed action similar across sign languages?

Constructed action is not only a common feature of American Sign Language; signers of other languages also use it regularly. Since constructed action appears across languages, it is useful to ask whether its use appears obligatory across signed languages, whether it is used for the same purposes, and whether it takes on the same form (i.e., whether the same body parts are involved in its production). A study comparing American Sign Language to an unrelated sign language (British Sign Language) and a historically related sign language (Mexican Sign Language, Lengua de Señas Mexicana) was reported in two conference presentations (Quinto-Pozos, Cormier, and Holzrichter 2006; Quinto-Pozos, Cormier, and Ramsey 2009). The methods employed for the cross-linguistic study were similar to those used for the earlier single-language study of American Sign Language (Quinto-Pozos 2007a), except that the method of choosing the elicitation video clips was refined. The reason for this change is that my co-authors and I believed, based on results from the original study, that the degree of obligatoriness of constructed action may be determined largely by the animacy of the referent (i.e., how "human-like" the movement appears). Specifically, we hypothesized that constructed action would be more obligatory with highly animate referents and less obligatory with referents low in animacy. Given this hypothesis, rather than simply choosing a sampling of clips as was done for the initial study, we introduced an animacy rating task for the cross-linguistic study. This task was completed by hearing non-signers in order to reduce any possible influence from knowledge of a sign language.
Thirty non-signing participants (10 each from the USA, Mexico, and the UK) were asked to rate 31 video clips (showing referents at what we judged to be various levels of animacy; all depicted some type of movement) on a scale of 1 to 5, according to how "human-like" each of the moving items in the clips was. From those we chose a set of 20 clips rated at various levels of animacy (four clips in each of five levels of animacy, from lowest to highest); these clips were then described by Deaf users of the three sign languages. The results largely mirrored those obtained for the single-language study of American Sign Language (Quinto-Pozos 2007a). In addition, perceived animacy seemed to predict constructed action use: there was generally no constructed action for the clips rated as lowest in animacy and many examples of constructed action for the clips rated as most animate. Importantly, even though verb signs and classifiers may differ in form across sign languages (e.g., the preferred classifier handshape used for vehicles across the three languages differs somewhat), various constructed action strategies appeared to be similar. This provides preliminary evidence that constructed action is not unique to any particular sign language, although more detailed cross-linguistic analyses are necessary.

4. Does constructed action appear across language registers?

An important question concerns whether constructed action is primarily a feature of informal language use or whether it appears across a myriad of settings with varying levels of formality. In essence, this question is concerned with language register. A finding that constructed action appears across levels of formality would support its significance in signed language, whereas showing that constructed action appears only in casual registers, because it is characteristically mime-like, would call its obligatoriness across multiple types of language use into question. In order to examine constructed action use in different settings, we designed a study (Quinto-Pozos and Mehta 2010) that utilized an authentic narrative (i.e., a brief history of a Deaf leader's life, written in English) containing segments that we expected would elicit constructed action from a signer when translated into American Sign Language. Two native signers of American Sign Language were each asked to produce the narrative while interacting with three different audiences: a formal setting, a non-formal setting, and a school setting with only children in the audience (ages 9–11). We expected constructed action to be present in the child and non-formal settings, although it was unclear whether it would be produced when interacting with the formal audiences. For the coding of the collected data, we categorized constructed action by body part (arms/hands, face/facial expressions, head, torso, and lower body), and we devised a system for indicating degrees of constructed action ("slight", "moderate", and "exaggerated"). We found that constructed action does indeed occur throughout various settings and degrees of formality: from very formal events to informal gatherings to grade school settings.
Additionally, in a general sense, audience and setting appear to influence the use of gesture insofar as certain aspects of constructed action (e.g., use of the signer's lower body) seem more acceptable with some audiences and settings than others. The classroom setting with 9–10-year-old children elicited the least amount of the lowest degree of constructed action (labeled "slight" in this work), and this finding may be one factor in the significant association found between degree of constructed action and setting. Also with respect to degree, it appears that "exaggerated" constructed action is least common in formal settings. This study revealed how the different body parts that support constructed action can pattern differently across audiences and settings, even though constructed action does appear across different levels of formality.

5. How does constructed action compare to co-speech gestures?

How does constructed action compare with the communicative bodily actions of co-speech gesturers? Presumably, differences may be apparent because signers use their hands and body for linguistic purposes, and those manual and non-manual articulations are governed by the phonologies and grammars of sign languages. However, there may also exist (some) similarities because of basic human strategies that are frequently employed for communication (e.g., the use of visual iconicity). Comparing signers' and co-speech gesturers' strategies for communication could help researchers to understand general visual strategies for the incorporation of embodiment into language and communication. One approach is to consider how signers and co-speech gesturers narrate stories or events. Research on gestural viewpoint suggests that several dimensions determine which perspective a narrator takes, including properties of the event described (Parrill 2010). Certain events evoke gestures from the point of view of a character (CVPT), others from the point of view of an observer (OVPT), and some from both perspectives. Comparisons have been made between observer viewpoint gestures and the use of classifiers, and between character viewpoint gestures and the use of constructed action (Cormier et al. 2012; Quinto-Pozos and Parrill 2008). For this study (Quinto-Pozos and Parrill 2012, in revision), ten American Sign Language signers described the cartoon stimuli used in Parrill (2010). Descriptions were then matched to particular stimulus events. Events shown by Parrill to elicit a particular gestural strategy (character viewpoint, observer viewpoint, or both) were coded for signers' instances of constructed action and classifiers. We divided constructed action into three categories: constructed action involving the torso, facial affect, and constructed action depicting the handling of objects. Signers employed some of the same strategies that co-speech gesturers used. Signers used classifiers the most when gesturers used the point of view of an observer exclusively and the least when gesturers used the point of view of a character exclusively. Additionally, handling constructed action was employed by signers the most when gesturers used the point of view of a character exclusively. However, there were also differences between signers and co-speech gesturers.
Perhaps the main difference is that signers used constructed action and classifier strategies for all categories of events, even when the gesturers used the point of view of a character or the point of view of an observer exclusively. Additionally, signers used constructed action, on average, more often than co-speech gesturers used the point of view of a character. This study revealed similarities in what signers and gesturers choose to employ for communication, although it also highlighted many qualitative differences with respect to strategies across the participants. In short, signers and co-speech gesturers can be compared, and this provides useful data for the consideration of embodiment and its multi-faceted role in human communication.

6. Conclusions and future directions

Based on the studies reported here, several points can be made. Constructed action in signed languages exhibits a strong obligatory tendency for depicting animacy and patterns similarly across sign languages. It patterns differently across registers, but seems to appear in both formal and informal registers. In terms of constraints on use, constructed action commonly co-occurs (sequentially or simultaneously) with classifier constructions, which suggests that there is a robust relationship between the two depictive devices. Additionally, signers tend to use constructed action when co-speech gesturers use character viewpoint gestures. Constructed action serves as a gestural complement to signed language grammars and lexical items, although there appear to be ways that constructed action use and form are constrained by linguistic features. Future work on this topic should examine the constraints that govern constructed action use across sign languages, such as when constructed action can and cannot occur in a phrase, and whether its distribution can be accounted for by theories of syntactic structure. By examining the patterns of constructed action across signed languages and in comparison with co-speech gesture, we will come closer to understanding the role and importance of embodiment in language and communication.

7. References

Cormier, Kearsy, David Quinto-Pozos, Zed Sevcikova and Adam Schembri 2012. Lexicalisation and de-lexicalisation processes in sign languages: Comparing depicting constructions and viewpoint gestures. Language and Communication 32(4): 329–348.
Dudis, Paul 2004. Body-partitioning and real-space blends. Cognitive Linguistics 15(2): 223–238.
Liddell, Scott and Melanie Metzger 1998. Gesture in sign language discourse. Journal of Pragmatics 30(6): 657–697.
Metzger, Melanie 1995. Constructed action and constructed dialogue in American Sign Language. In: Ceil Lucas (ed.), Sociolinguistics in Deaf Communities, 255–271. Washington, DC: Gallaudet University Press.
Quinto-Pozos, David 2007a. Can constructed action be considered obligatory? Lingua 117(7): 1285–1314.
Quinto-Pozos, David 2007b. Why does constructed action seem obligatory? An analysis of classifiers and the lack of articulator-referent correspondence. Sign Language Studies 7(4): 458–506.
Quinto-Pozos, David, Kearsy Cormier and Amanda Holzrichter 2006. The obligatory nature of Constructed Action across 3 sign languages. Paper presentation. Workshop on Cross-linguistic Sign Language Research, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
Quinto-Pozos, David, Kearsy Cormier and Claire Ramsey 2009. Constructed action of highly animate referents: Evidence from American, British and Mexican Sign Languages. Paper presentation. The 35th annual meeting of the Berkeley Linguistics Society, University of California, Berkeley, California.
Quinto-Pozos, David and Sarika Mehta 2010. Register variation in mimetic gestural complements to signed language. Journal of Pragmatics 42(3): 557–584.
Quinto-Pozos, David and Fey Parrill 2008. Enactment as a communicative strategy: A comparison between English co-speech gesture and American Sign Language. Paper presentation. Gestures: A comparison of signed and spoken languages session of the German Linguistics Society Conference, Universität Bamberg, Bamberg, Germany.
Quinto-Pozos, David and Fey Parrill 2012. Comparing viewpoint strategies used by co-speech gesturers and signers. Paper presentation. International Society for Gesture Studies (ISGS) Conference, Lund University, Lund, Sweden.
Quinto-Pozos, David and Fey Parrill in revision. Signers and co-speech gesturers adopt similar strategies for portraying viewpoint in narratives.
Roy, Cynthia 1989. Features of discourse in an American Sign Language lecture. In: Ceil Lucas (ed.), The Sociolinguistics of the Deaf Community, 231–251. San Diego: Academic Press Inc.
Supalla, Ted 1982. Structure and acquisition of verbs of motion and location in ASL. Unpublished doctoral dissertation, University of California, San Diego.
Supalla, Ted 2003. Revisiting visual analogy in ASL classifier predicates. In: Karen Emmorey (ed.), Perspectives on Classifier Constructions in Sign Languages, 249–257. Mahwah, NJ: Lawrence Erlbaum Associates.
Tannen, Deborah 1989. Talking Voices: Repetition, Dialogue, and Imagery in Conversational Discourse. Cambridge: Cambridge University Press.
Winston, Elizabeth A. 1991. Spatial referencing and cohesion in an American Sign Language text. Sign Language Studies 73: 397–409.

David Quinto-Pozos, Austin (USA)


172. Gestures in sign language

1. Introduction
2. Sign and gesture
3. Synchronic interaction of sign and gesture
4. What is sign and what is gesture?
5. Diachronic interaction of sign and gesture
6. Sign and gesture in the evolutionary context
7. References

Abstract This chapter outlines the history and study of the relation between sign and gesture. Studies examining synchronic interaction are reported, placed in their historical context of structuralist linguistics, and evaluated in light of current usage-based theories. Studies of the diachronic incorporation of gesture into signed languages through lexicalization and grammaticalization are discussed. Finally, the relation between sign and gesture in gestural theories of language origin is briefly described.

1. Introduction

For centuries, signed languages were regarded as nothing more than depictive gestures lacking features of language such as phonology, morphology, and syntax. The Roman rhetorician Quintilian mentioned the use of gestures by deaf people in his Institutes of Oratory, classifying gestures as a substitute for speech. The view that signed languages are merely pantomimic gestures culminated in the debate over the use of speech versus signing in the education of deaf children during the Milan Conference of 1880. The president of the conference, Giulio Tarra, argued that signs are nothing more than gesture:

Gesture, instead of addressing the mind, addresses the imagination and the senses. […] While, on the one hand, mimic signs are not sufficient to express the fullness of thought, on the other they enhance and glorify fantasy and all the faculties of the sense of imagination. […] The fantastic language of signs exalts the senses and foments the passions, whereas speech elevates the mind much more naturally, with calm and truth and avoids the danger of exaggerating the sentiment expressed and provoking harmful mental impressions. (Lane 1984: 391, 393–394)

These views concerning sign and gesture persisted into the 21st century, with psychologists, educators, and linguists continuing to deny the linguistic status of signed languages. As a result, the relation between sign and gesture continues to be a topic of heated debate among linguists and gesture scholars.

2. Sign and gesture

The literature on sign and gesture may be broadly divided into two categories: research examining the synchronic relation between sign and gesture and research examining the diachronic or developmental relation. The synchronic approach seeks to distinguish sign from gesture as distinct systems. Synchronically, gesture and sign may co-occur, either simultaneously or in alternation, within an utterance. The diachronic perspective examines how gestures that are documented to be in use among the wider community are incorporated into the linguistic systems of signed languages through lexicalization and grammaticalization.

Müller, Cienki, Fricke, Ladewig, McNeill, Bressem (eds.) 2014, Body – Language – Communication (HSK 38.2), de Gruyter, 2168–2174

3. Synchronic interaction of sign and gesture

A growing body of research examines how gesture and sign interact synchronically. Many sign linguists are now actively exploring the relation between sign and gesture across several of the world’s signed languages. Liddell (2003) proposes that sign and gesture co-occur in several ways, including aspects of spatialized syntax, pointing or indexical signs, and classifier constructions. Liddell uses several factors to distinguish language from gesture, including countability, gradience, and contextual meaning. For example, he argues that location in pointing signs cannot be morphemic because the number of possible locations is uncountable. He applies the same analysis to classifier signs, arguing that while parts of these signs (e.g., handshape) are linguistic, other parts (locations) are variable, gradient elements (Liddell 2003: 212) and should be classified as gesture. This leads him to conclude that “the grammatical, the gradient, and the gestural are so tightly intertwined in ASL that it is the exception, rather than the rule, when an utterance lacks a gradient or gestural component” (Liddell 2003: 354). Liddell sees what he categorizes as the grammatical, the gradient, and the gestural all as necessary parts of a signed language, in many cases appearing simultaneously. As he notes, “I have been describing the ASL language signal as consisting of combinations of signs, grammatical constructions, gradience in the signal produced by the primary articulators as signs are being produced, and gestural activities independent of the primary articulators” (Liddell 2003: 357). Sandler has recently taken note of the need to explore the relation of gesture and sign (Sandler 2009, 2012). One of her proposals is that in spoken language, the linguistic signal is conveyed by the mouth while the hands produce gesture.
Signed languages, she claims, “universally convey gestures that are defined as iconic, and they do this with the mouth. These gestures complement imagistic descriptions presented linguistically by the hands – precisely the reverse distribution from that found in spoken language” (Sandler 2009: 264). Sandler also relies on a set of criterial features to distinguish sign and gesture, taken from McNeill’s (1992) work on gesture. She says that while conceptually intertwined, language and gesture are structurally distinct: “The gestures have the properties that are not associated with linguistic organization: they are holistic, noncombinatoric, idiosyncratic, and context-sensitive. The linguistic signal is the converse: dually patterned, combinatoric, conventionalized, and far less context-dependent in the relevant sense” (Sandler 2009: 268). On the basis of this analysis, she concludes that rather than lying along a continuum, language, whether signed or spoken, and gesture are in a complementary relation: “If the primary language signal is oral, the gesture is manual – and vice versa” (Sandler 2009: 268). Thus, Liddell and Sandler appear to disagree on this aspect of the relation between sign and gesture. Whereas Sandler contends that, at least in the data on which she focused, they stand in an either/or, complementary relation, Liddell sees them as being organically combined. On this basis, Liddell (2003: 357) concludes that “the gradient and gestural aspects of the signal are not peripheral or paralinguistic. They are required to be present and central to the meanings being expressed. In the case of ASL, restricting the analysis of language to symbolic units and their grammatical organization is hopelessly inadequate.”

4. What is sign and what is gesture?

Within the synchronic approach, what counts as gesture and what counts as language must be clearly defined. Typically, sign linguists who make the claim for the synchronic interaction of gesture and language adopt criterial models of language and gesture, assuming for example that linguistic material is categorical and discrete while gestural material is gradient and analog (Liddell 2003). Some sign linguists also distinguish the two by appealing to conventionality versus idiosyncrasy (Johnston and Ferrara 2012). These criteria are based in structuralist theories of language. Usage-based models of language reject these assumptions. For example, prototype models are favored over criterial models (Lakoff 1987; Langacker 2008). Gradience has been shown to pervade language at all levels (Bybee 2010). Whereas some sign linguists classify gradience in morphology as non-linguistic and therefore gesture, spoken language linguists come to quite a different conclusion. Hay and Baayen (2005: 346) ask whether morphological structure is inherently graded and reply, “the issue is controversial, but the evidence that is currently accumulating in the literature suggests that the answer is yes.” Nothing requires the conventional units of language, whether spoken or signed, to be discrete and sharply bounded. Indeed, a usage-based theory predicts gradience and fuzzy boundaries (Bybee and McClelland 2005). As Bybee (2010: 2) notes, “all types of units proposed by linguists show gradience.” She goes on to explain:

The existence of gradience and variation does not negate the regular patterning within languages or the patterning across languages. However, it is important not to view the regularities as primary and the gradience and variation as secondary; rather the same factors operate to produce both regular patterns and the deviations. If language were a fixed mental structure, it would perhaps have discrete categories; but since it is a mental structure that is in constant use and filtered through processing activities that change it, there is variation and gradation. (Bybee 2010: 6)

The significance of these points cannot be overstated. Whether analysts focus on phonemes, morphemes, clauses, grammatical categories, or, in the present case, the relation between sign and gesture, they must ask “whether these discrete boundaries are discovered by linguists or imposed on the basis of theoretical preconception” (Langacker 2008: 37). As was noted, the criteria which many linguists have used to distinguish sign and gesture are often drawn from features of gesture originally reported by McNeill (1985, 1992). Linguists and gesture researchers have questioned whether these factors accurately describe gesture. Enfield (2004), for example, reports that co-speech gestures do show combinatoric principles and linear-segmented organization, and that “both speech and hand movements are capable of expressing both conventional/discrete/analytic and nonconventional/analog/imagistic meaning” (Enfield 2004: 119).


All of this suggests that what is needed in the search for understanding the relation between sign and gesture is less imposing and more discovering. Wilcox and his colleagues (Wilcox 2012; Wilcox and Xavier 2013) have proposed a unified framework in which sign and gesture are seen as expressions of a human expressive ability based on general cognitive, perceptual, motoric, and social abilities. In this usage-based view, sign and gesture are systems characterized by dynamic and emergent properties (Elman 1998; Thelen and Smith 1994).

5. Diachronic interaction of sign and gesture

Wilcox and Wilcox (1995) first provided evidence that gesture may be a source for signs. Their data came from a study of signs expressing grammatical modality in American Sign Language. Many of these signs have gestural origins; they were incorporated into the language first as lexical signs, which then grammaticized to express modal meanings. One example is a gesture indicating upper body strength becoming a lexical sign meaning strong, which then acquired the modal meaning can (physical, mental, or root ability). It is now well documented that gestures become lexicalized and grammaticalized into the linguistic system of signed languages (Janzen 2012; Johnston and Ferrara 2012; Schembri and Johnston 2007). Once lexicalized, a gesture may undergo grammaticalization. Several researchers have documented the process by which lexicalized gestures grammaticalize (Janzen and Shaffer 2002; Wilcox, Rossini, and Antinoro Pizzuto 2007; Wilcox and Wilcox 2009; Wilcox, Rossini, and Antinoro Pizzuto 2010). For example, it has been proposed (Janzen and Shaffer 2002) that a departure gesture used in the Mediterranean region entered French Sign Language (LSF) as the lexical sign partir ‘leave’. Because American Sign Language is historically related to LSF, the sign also appeared in American Sign Language at the turn of the 20th century with the lexical meaning “to depart”. It also occurs with a more grammatical function marking future. Data from Catalan Sign Language (LSC) demonstrate the emergence of grammaticalized modal and evidential forms from gestural sources (Wilcox 2004). The Catalan Sign Language forms evident, clar, presentir, and semblar develop from gestural sources having concrete meanings, which through grammaticalization have developed more subjective and epistemic or evidential senses.
As a lexical morpheme, evident has a range of physical senses denoting visual perception, including intensity of color; “prominence” or “salience”, such as a person who stands out because of her height; “sharp, well-defined”, such as indicating sharpness of an image; and “obvious”, as when looking for an object located in front of you. As a grammatical morpheme evident denotes subjective, evidential meanings such as “without a doubt”, “obviously”, and “logically implied”. The lexical morpheme clar is used in more concrete, lexical meanings to denote “bright” or “light”. It may also be used in a more abstract sense to denote clear content, a person’s skill in signing, or ability to explain a subject clearly. As a grammatical morpheme clar encodes speaker subjectivity and may be used in the same context as the more subjective use of evident. When used as a lexical morpheme, presentir denotes the sense of smell. The grammatical morpheme presentir is used to express the speaker’s inferences about actions or intentions. When used as a lexical morpheme semblar denotes physical resemblance. As a grammatical morpheme, semblar may be used to express the speaker’s subjective belief that an event is or is not likely to occur.


These Catalan Sign Language forms have sources in metaphorical or enacting gestures indicating the eyes and visual perception (evident), bright light (clar), the nose and the sense of smell (presentir), and physical, facial appearance (semblar). The full developmental path thus runs from gesture to lexical morpheme to grammatical morpheme. In addition to manual gestures becoming grammaticalized, a second route leads from gesture to language (Wilcox 2004, 2009; Wilcox, Rossini, and Antinoro Pizzuto 2010). This route begins with facial gestures or manner-of-movement gestures. These gestures do not enter the linguistic system as lexical signs; rather, they first appear as prosody and intonation. As they grammaticalize, they take on grammatical functions, for example as markers of interrogatives, topics, conditionals, verb aspect, and intensification.

6. Sign and gesture in the evolutionary context

While signs were for centuries regarded as nothing more than gesture lacking linguistic features, philosophers and scholars at the same time often conjectured that the origins of language lie in gesture. Gestural theories of language origins have recently been advocated by anthropologists, linguists, and psychologists (Arbib 2012; Hewes 1992; Corballis 1999, 2002, 2003). Among those who have explored the gestural origins of language are William C. Stokoe and his colleagues (Armstrong, Stokoe, and Wilcox 1994). In fact, Stokoe introduced the idea in the first pages of his pioneering work Sign Language Structure (Stokoe 1960: 1): “Communication by a system of gestures is not an exclusively human activity, so that in a broad sense of the term, sign language is as old as the race itself, and its earliest history is equally obscure.” Stokoe traces how the gestural activity of a culture may be the raw material on which the signed languages of deaf people are built (Stokoe 1960: 1–2):

To take a hypothetical example, a shoulder shrug, which for most speakers accompanied a vocal utterance, might be a movement so slight as to be outside the awareness of most speakers; but to the deaf person, the shrug is unaccompanied by anything perceptible except a predictable set of circumstances and responses; in short, it has definite meaning.

Stokoe then returns to the question of the evolutionary linkage between gesture and language and introduces the kernel of an idea to which he would return many times (Armstrong, Stokoe, and Wilcox 1995; Stokoe 1974; Wescott, Hewes, and Stokoe 1974): the notion that the original germ of language was neither a signed language nor a spoken language, but a multimodal-multisensory system in which the balance shifted over evolutionary history from an early system in which the visible gestural component played a significant role to a system in which the acoustic gestural component came to dominate.

7. References

Arbib, Michael A. 2012. How the Brain Got Language: The Mirror System Hypothesis. New York: Oxford University Press.
Armstrong, David F., William C. Stokoe and Sherman Wilcox 1995. Gesture and the Nature of Language. Cambridge: Cambridge University Press.
Armstrong, David F., William C. Stokoe and Sherman E. Wilcox 1994. Signs of the origin of syntax. Current Anthropology 35(4): 349–368.

Bybee, Joan and Jay L. McClelland 2005. Alternatives to the combinatorial paradigm of linguistic theory based on domain general principles of human cognition. The Linguistic Review 22(2–4): 381–410.
Bybee, Joan 2010. Language, Usage and Cognition. Cambridge/New York: Cambridge University Press.
Corballis, Michael C. 1999. The gestural origins of language. American Scientist 87(2): 138–145.
Corballis, Michael C. 2002. From Hand to Mouth: The Origins of Language. Princeton, NJ: Princeton University Press.
Corballis, Michael C. 2003. From mouth to hand: Gesture, speech, and the evolution of right-handedness. Behavioral and Brain Sciences 26(2): 199–208.
Elman, Jeffrey L. 1998. Language as a dynamical system. In: Robert F. Port and Timothy van Gelder (eds.), Mind as Motion: Explorations in the Dynamics of Cognition, 195–225. Cambridge, MA: Massachusetts Institute of Technology Press.
Enfield, N. J. 2004. On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica 149(1/4): 57–124.
Hay, Jennifer B. and Rens H. Baayen 2005. Shifting paradigms: Gradient structure in morphology. Trends in Cognitive Sciences 9(7): 342–348.
Hewes, Gordon W. 1992. Primate communication and the gestural origin of language. Current Anthropology 33(1–2): 65–84.
Janzen, Terry 2012. Lexicalization and grammaticalization. In: Markus Steinbach, Roland Pfau and Bencie Woll (eds.), Handbook on Sign Languages (Handbooks of Linguistics and Communication Science 37), 816–840. Berlin: Mouton de Gruyter.
Janzen, Terry and Barbara Shaffer 2002. Gesture as the substrate in the process of ASL grammaticalization. In: Richard Meier, David Quinto and Kearsy Cormier (eds.), Modality and Structure in Signed and Spoken Languages, 199–223. Cambridge: Cambridge University Press.
Johnston, Trevor and Lindsay L. Ferrara 2012. Lexicalization in signed languages: When is an idiom not an idiom? Selected Papers from UK-CLA Meetings 1: 229–248. http://uk-cla.org.uk/proceedings
Lakoff, George 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago, IL: University of Chicago Press.
Lane, Harlan 1984. When the Mind Hears: A History of the Deaf. New York, NY: Random House.
Langacker, Ronald W. 2008. Cognitive Grammar: A Basic Introduction. Oxford: Oxford University Press.
Liddell, Scott K. 2003. Grammar, Gesture, and Meaning in American Sign Language. New York: Cambridge University Press.
McNeill, David 1985. So you think gestures are nonverbal? Psychological Review 92(3): 350–371.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
Sandler, Wendy 2009. Symbiotic symbolization by hand and mouth in sign language. Semiotica 2009(174): 241–275.
Sandler, Wendy 2012. Dedicated gestures and the emergence of sign language. Gesture 12(3): 265–307.
Schembri, Adam and Trevor Johnston 2007. Australian Sign Language (Auslan): An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press.
Stokoe, William C. 1960. Sign Language Structure. Buffalo/New York: Department of Anthropology and Linguistics, University of Buffalo.
Stokoe, William C. 1974. Motor signs as the first form of language. In: Roger W. Wescott, Gordon W. Hewes and William C. Stokoe (eds.), Language Origins, 35–50. Silver Spring, MD: Linstok Press.
Thelen, Esther and Linda B. Smith 1994. A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: The Massachusetts Institute of Technology Press.


Wescott, Roger W., Gordon W. Hewes and William C. Stokoe (eds.) 1974. Language Origins. Silver Spring, MD: Linstok Press.
Wilcox, Sherman 2004. Gesture and language: Cross-linguistic and historical data from signed languages. Gesture 4(1): 43–75.
Wilcox, Sherman 2009. Symbol and symptom: Routes from gesture to signed language. Annual Review of Cognitive Linguistics 7(1): 89–110.
Wilcox, Sherman 2012. Language in motion: A framework for unifying spoken language, signed language, and gesture. Anuari de Filologia. Estudis de Lingüística 2: 49–57.
Wilcox, Sherman, Paolo Rossini and Elena Antinoro Pizzuto 2010. Grammaticalization in sign languages. In: Diane Brentari (ed.), Sign Languages, 332–354. Cambridge: Cambridge University Press.
Wilcox, Sherman, Paolo Rossini and Elena Antinoro Pizzuto 2007. Routes from gesture to language. In: Elena Antinoro Pizzuto, Paola Pietrandrea and Raffaele Simone (eds.), Verbal and Signed Languages: Comparing Structures, Constructs and Methodologies, 107–131. Berlin/New York: Mouton de Gruyter.
Wilcox, Sherman and Phyllis Wilcox 1995. The gestural expression of modality in American Sign Language. In: Joan Bybee and Suzanne Fleischman (eds.), Modality in Grammar and Discourse, 135–162. Amsterdam: John Benjamins Publishing Company.
Wilcox, Sherman and Phyllis Perrin Wilcox 2009. The analysis of signed languages. In: Bernd Heine and Heiko Narrog (eds.), The Oxford Handbook of Linguistic Analysis, 739–760. Oxford: Oxford University Press.
Wilcox, Sherman and André Xavier 2013. A framework for unifying spoken language, signed language, and gesture. Revista Todas as Letras 15(1): 88–110.

Sherman Wilcox, Albuquerque (USA)

Appendix
Organizations, links, reference publications, and periodicals

Organization: International Society for Gesture Studies (ISGS), www.gesturestudies.com
Organization: Sign Language Linguistics Society (SLLS), www.slls.eu/index2.php5
Book Series: Gesture Studies, www.benjamins.com, www.gesturestudies.com/bookseries.php
Handbook: Sign Language – An International Handbook, www.degruyter.com/view/product/382
Handbook: The Routledge Handbook of Multimodal Analysis, www.routledge.com/books/details/9780415434379/
Handbook: The Sage Handbook on Nonverbal Communication, www.sagepub.com/refbooks/Book226551
Handbooks of Communication Science: Nonverbal Communication, www.degruyter.com/viewbooktoc/product/119484
Handbooks of Linguistics and Communication Science, www.degruyter.com/view/serial/16647
Handbook: Oxford Handbook of Deaf Studies, Language, and Education, http://global.oup.com/academic/?cc=de&lang=en
Journal: GESTURE, www.benjamins.com, www.gesturestudies.com/gesture.php
Journal: Journal of Multimodal Communication Studies, jmcs.home.amu.edu.pl
Journal: Journal of Nonverbal Communication, www.springer.com/psychology
Journal: Multimodal Communication, www.degruyter.com/view/j/mc
Journal: Sign Language & Linguistics, benjamins.com/#catalog/journals/sll/main
Journal: Sign Language Studies, gupress.gallaudet.edu/SLS.html

Indices Authors Index A Aboudan, Rima 826⫺828, 849⫺850 Acredolo, Linda P. 114, 544, 794, 1834⫺ 1835, 1838, 1857⫺1859, 1867 Aijmer, Karin 1536 Alberti, Leon Battista 366, 1244 Aldrete, Gregory S. 331, 341, 1273 Alibali, Martha W. 159, 162⫺164, 399, 518⫺ 520, 640, 793, 795⫺796, 798, 817, 826, 839⫺841, 845⫺846, 1112, 1368, 1462, 1466, 1470, 1663, 1781, 1834, 1836⫺1839, 1872, 1894, 1901, 1936, 1938⫺1939, 2002⫺2003, 2114 Allen, Linda 285, 399, 406, 844, 851, 1189, 1236, 1427⫺1428, 1466, 1470, 1555, 1872 Alturo, Nu´ria 88 Amades, Joan 1266 Ambady, Nalini 616⫺617, 1404, 1919, 1974, 1976 Ameka, Felix 1983, 1985 Amira, Karl von 355 Anderson, Anthony 1375 Anderson, Peter A. 1354 Andre´n, Mats 65, 66, 68, 95, 544, 712, 759, 762, 1111, 1112, 1283, 1284, 1285, 1287, 1288, 1499, 1714, 1718, 1732, 1736, 1741, 1852, 1993 Andrew, Richard J. 258, 1304, 1963, 1965, 2093 Appleton, Lucy J. 258, 1435 Archer, Dawn 291, 909⫺910, 1536 Archer, Dwight 1234 Argentin, Gabriel 1407, 1409, 1467, 1470 Argyle, Michael 58, 104, 264, 285, 396, 810, 842, 846, 893, 1053⫺1054, 1234, 1311, 1313⫺1314, 1324, 1343, 1463, 1465, 1468, 1936 Aristotle 329⫺335, 340⫺341, 364⫺366, 369⫺372, 379⫺380, 430, 438⫺441, 443, 566, 713, 1105, 1400, 1444, 1450 Arkadyev, Peter M. 1289, 1290, 1292, 1298 Armstrong, David 22, 71, 131, 203, 457, 468, 469, 483, 484, 790, 1311, 1615, 1670, 1696, 2144, 2174

Arnheim, Rudolf 713, 763, 1687⫺1688, 1699, 1713, 1718, 1747, 1751, 1753, 1762 Atkinson, Matthew D. 694, 993, 1354, 1404, 1466 Attardo, Salvatore 643, 1490 Auer, Peter 212, 215, 580, 594, 1045, 1048⫺ 1050, 1311 Augustin, Madeleine Moreau 1125, 1273 Austin, Gilbert 272, 326, 328, 514, 533, 686, 1512, 1520, 1527, 1544, 1995 Austin, John 214, 261, 566, 675, 694, 695, 1113, 1506, 1510, 1544, 1548, 1554

B Bacon, Francis 56, 83⫺84, 364⫺365, 369⫺ 370, 372, 422, 467, 476, 1565 Baduel-Mathon, Ce´line 236, 1154 Bakeman, Roger 525⫺526, 880, 882⫺883, 885⫺888, 893⫺894, 897, 899⫺901 Baker-Shenk, Charlotte 1130, 2158 Ballard, Dana 1931, 1940 Bangerter, Adrian 164, 832 Banks, Marcus 984, 988 Barakat, Robert A. 83, 89 Barnett, Dene 55⫺56, 622, 2073⫺2074 Barsalou, Lawrence 158, 161, 162, 184, 194, 271, 514, 800, 1842, 1844, 2000, 2019, 2049, 2057 Bartenieff, Irmgard 307, 311⫺313, 933, 937, 942, 947, 952, 954, 959⫺960, 1229 Bates, Elizabeth 114, 174, 176, 544, 695, 794, 1834, 1956 Bateson, Gregory 101, 227, 232, 235, 396, 678⫺679, 883, 984⫺986, 1325, 1356 Bausch, Pina 424 Bavelas, Beavin Janet 2, 58, 66, 69, 104⫺ 105, 160, 163, 396, 399, 404, 567, 578, 580, 610⫺611, 614, 616, 618⫺619, 622, 737, 761, 775, 822⫺823, 826⫺830, 832, 838⫺839, 843, 847, 849⫺851, 1112, 1301⫺1302, 1307, 1326, 1328, 1363, 1368⫺1370, 1375, 1392⫺ 1394, 1398, 1453, 1458⫺1460, 1463⫺1465,

2180 1467⫺1468, 1531⫺1532, 1542, 1558⫺1560, 1562, 1566, 1576, 1611, 1755, 1826, 2011, 2114 Beattie, Geoffrey 2, 5, 66, 164, 399, 518, 809, 811⫺812, 817, 826⫺828, 838⫺840, 842, 844, 846, 848⫺850, 1023, 1080, 1111, 1327, 1369, 1467⫺1468, 1736⫺1737, 1936 Be´bian, Roch-Ambroise Auguste 389, 390, 1126, 1127, 1128, 1276 Becvar, Amaya 248, 256 Beebe, Beatrice 970 Bel, Je´roˆme 424 Bellugi, Ursula 18, 457, 635, 789, 1041, 1082⫺1083, 1130, 1569, 2152, 2156, 2158 Belsky, Jay 969 Bender, Susanne 969⫺970 Berenz, Norine 996, 999, 1002, 1044 Berglund, Eva 284, 1283, 1285, 1287 Bergmann, Kirsten 65⫺66, 161, 405, 998, 1080, 1111, 1738, 1951, 1953⫺1954 Bergmann, Jörg 993, 994, 998 Bharucha, Jamshed 1433 Bickerton, Derek 495⫺496 Billig, Michael 236, 264, 272, 1243, 1401, 1409 Birdwhistell, Ray L. 9⫺10, 24, 57⫺58, 64, 66, 101, 104, 129, 131, 232, 285, 393, 397, 402⫺403, 934, 940, 985, 988, 1000⫺1001, 1024, 1038⫺1039, 1045, 1053⫺1054, 1080, 1108, 1311, 1315, 1383, 1498, 1619, 1990 Birklein, Silvia 968⫺970 Black, Max 2094, 2096, 2105 Blake, Joanna 544, 1284 Bloom, Lois 997 Bloomfield, Leonard 57, 395, 1387, 1657 Boas, Franz 57, 101, 227, 230⫺232, 395, 432, 984⫺985 Bogen, Joseph E. 169⫺170 Bohle, Ulrike 65⫺66, 69, 404, 578, 591, 594⫺595, 684, 709, 727, 1000, 1002, 1038, 1053, 1080, 1106, 1112⫺1113, 1318, 1363⫺ 1365, 1407, 1531, 1662⫺1663 Bolden, Galina B. 222, 1325, 1378 Bolinger, Dwight 64, 521⫺522, 741, 1387, 1619 Bond, Charles F. 1919 Bonifacio, Giovanni 88, 364⫺365, 368⫺369, 371⫺374, 1244, 1527 Bonnal, Francoise 1127, 1276 Borowitz, Estelle 968 Borra`s-Comes, Joan 1271, 1386 Bouissac, Paul 187, 304⫺305, 869, 1177

Indices Bourdieu, Pierre 231, 331, 443, 676, 678, 680, 755, 1196 Bouvet, Danielle 188, 761, 763, 767, 1695, 1714, 1723⫺1724, 1736, 1750⫺1753 Bowern, Claire 975 Branigan, Holly P. 1375 Bressem, Jana 63⫺66, 129, 214, 402, 709, 733⫺735, 761, 774, 1038, 1060, 1080⫺1083, 1090⫺1100, 1340, 1505, 1532, 1542, 1549, 1560⫺1561, 1563, 1565⫺1570, 1576, 1578⫺ 1579, 1585, 1587⫺1588, 1596, 1600, 1602⫺ 1603, 1606, 1611, 1614⫺1615, 1619, 1622, 1626, 1631, 1634⫺1637, 1641-1643, 1647⫺ 1648, 1652, 1662⫺1665, 1690, 1716, 1741, 1753, 1861, 2145 Brinke, Leanne ten 1919 Brookes, Heather J. 12, 82⫺83, 87⫺95, 236, 402, 696, 1113, 1147⫺1149, 1156, 1476⫺ 1478, 1482, 1492, 1524, 1526, 1528, 1531, 1537, 1541⫺1542, 1561, 1569⫺1570, 1575⫺1576, 1586, 1588, 1596⫺1597, 1619, 1641 Brossard, Alain 1278 Browman, Catherine 236, 786 Brown, Penelope 104, 496, 508, 1208⫺1212, 1317, 1325, 1327, 1524 Bruner, Jerome S. 395, 1328⫺1329, 1834 Brunswik, Egon 551⫺552, 554, 558⫺559, 561, 1343 Buelte, Arnhilt 959 Bühler, Karl 203, 1199, 1623, 1687, 1692, 1739, 1770, 1803⫺1809, 1811⫺1812, 1814⫺ 1815, 1818, 2052, 2071, 2074⫺2076, 2083, 2114⫺2115 Bull, Peter E. 430, 435, 805, 893, 1407 Bulwer, John 56, 88, 96, 351, 364⫺365, 370⫺374, 467, 1527 Burgoon, Judy K. 265, 610⫺611, 613⫺616, 618⫺622, 906, 1351, 1407, 1467, 1915⫺ 1916 Burke, Kenneth 96, 323 Burke, Peter 91, 1527 Burridge, Kate 1491, 1523 Burrows, Anne M. 918, 926⫺927, 929 Burton, Michael 275, 1156 Butterworth, Brian 2, 59, 398, 804⫺805, 808⫺814, 1023, 1102, 1407, 1736, 1790, 1804, 1824⫺1825 Buxbaum, Laurel J. 169, 1890⫺1891, 1894 Bybee, Joan 788, 2172

Authors Index

C Cadierno, Teresa 1875⫺1876 Calame-Griaule, Genevie`ve 1157 Calbris, Genevie`ve 16, 60, 63, 65⫺68, 83, 86, 89, 94⫺95, 187, 190, 399⫺400, 402⫺403, 661, 666⫺668, 682, 709, 711, 777, 1080, 1083, 1100, 1104⫺1106, 1277⫺1278, 1427, 1477, 1488, 1498⫺1499, 1505, 1513⫺1514, 1519, 1541⫺1542, 1545, 1565, 1567, 1576, 1579, 1592⫺1594, 1596⫺1597, 1600, 1619, 1626, 1631, 1633, 1635, 1696, 1714, 1716, 1724⫺1725, 1736⫺1737, 1739⫺1740, 1753, 1767, 1770, 1782⫺1783, 1786 Call, Josep 471, 1958⫺1960 Cameron, Lynne 1107, 1773, 1776, 2105 Capirci, Olga 814, 1834, 1858⫺1859, 1863⫺ 1865 Caprile, Jean-Pierre 1156 Caramazza, Alfonso 1890 Carbone, Lodovico 1274 Carlo, Blasis 420 Casasanto, Daniel 187, 1783, 1785, 2107 Casey, Shannon 827, 986, 1370, 1678 Cassell, Justine 2, 185, 192, 399, 405, 1060, 1327, 1434, 1723, 1770, 1817, 1949, 1951⫺ 1953 Castelfranchi, Cristiano 85, 96, 627, 629⫺ 630, 643, 1490⫺1491 Caussin, Nicolas 367, 371, 1274 Chafe, Wallace 39, 70, 241, 256, 695, 822, 996, 1106⫺1107, 1362, 1664 Chen, Yishiu 157, 271, 399, 518, 616, 839, 846, 1023, 1133, 1361, 1423, 1462, 1464⫺ 1466, 1736, 1790, 1900, 1937 Cherinda, Marcos 1156 Chilton, Josephine 1497⫺1498 Chomsky, Noam 57⫺59, 129⫺131, 183, 471, 496, 534, 566, 635, 734⫺735, 742⫺745, 786, 1620, 1650⫺1651, 1653, 1657⫺1659, 2049 Chovil, Nicole 104⫺105, 160, 163, 399, 404, 580, 611, 615⫺616, 619, 839 Chu, Man-Ping 1237, 1526, 2002⫺2003 Chui, Kawai 1236, 1361, 1393, 1398 Cicero, Marcus Tullius 329⫺331, 335⫺340, 365⫺368, 1243, 1273 Cienki, Alan 2, 59, 66⫺67, 129, 131, 185⫺ 190, 193, 208, 323, 403, 709⫺711, 713, 726, 742, 755⫺756, 761⫺764, 770, 777⫺778, 788, 1104⫺1106, 1195, 1203, 1217, 1488, 1543, 1545, 1550, 1612, 1626, 1689, 1717⫺ 1719, 1721⫺1722, 1724⫺1727, 1735, 1739⫺

1741, 1748, 1754, 1758, 1767, 1770⫺1772, 1776, 1783⫺1784, 1807, 2005, 2009, 2013 Clark, Andy 244, 544, 578, 686, 709, 830, 847, 1217⫺1218, 1238, 1714, 1737, 1851, 1860, 1867, 2014, 2019, 2023, 2049 Clark, Herbert H. 65, 66, 162, 163, 189, 262⫺263, 521, 534, 544, 580, 680, 695, 709, 822, 823, 827, 839, 847, 1369, 1375, 1380, 1433, 1462, 1466, 1468, 1662, 1663, 1755, 1803, 1804, 1934, 2011, 2012 Cleland, Alexandra A. 1375 Coates, Linda 104, 399, 567, 826, 847, 1326, 1328, 2011, 2152 Cocchiara, Giuseppe 1491 Coerts, Jane 2142, 2151, 2155⫺2156 Collett, Peter 1497⫺1498 Colletta, Jean-Marc 1151, 1254, 1280 Condillac, Étienne Bonnot de 22, 56, 129, 203, 378⫺390, 467, 1275⫺1276, 1445, 1990 Condon, William S. 10, 64, 102, 105, 129, 397⫺398, 580, 1016, 1060, 1102, 1301⫺1308, 1316, 1385, 1406, 1908 Conte, Sophie 627, 1274 Cook, Guy 993⫺995 Cooke, Benjamin G. 1158 Cooperrider, Kensy 188, 1184, 1469, 1531, 1755, 1770, 1782, 1785, 1804 Coppola, Marie 118, 122, 500 Corballis, Michael 22⫺23, 203, 457, 462, 468, 474⫺476, 483, 486, 721⫺722, 742, 814, 839, 1658, 2144 Corina, David P. 1345, 1894, 2152, 2158 Cormier, Kearsy 1804, 2166, 2168 Cornelissen, Joep 193, 1776 Correia, Jorge Salgado 1262 Coseriu, Eugenio 1632⫺1633, 1637 Cosnier, Jacques 1278, 1476, 1479 Coulson, Seana 194, 769, 843, 1761, 1923⫺1928, 2004, 2095 Couper-Kuhlen, Elizabeth 580, 590⫺591, 594, 1050, 1106, 1108, 1303, 1365, 1663⫺1664 Cowley, Stephen 2019, 2021 Cramoisy, Sébastien 1274 Creider, Chet A. 89, 91, 236, 1155⫺1156, 1384, 1524 Cressolles, Louis 368, 371⫺372, 1274 Croft, William 130, 188, 197, 703, 1613 Crowley, Terry 975 Cruz, Robyn 399, 714, 939, 968, 1023 Csórdas, Thomas 1441 Cuxac, Christian 18, 21, 1126, 1130, 1133, 1279



D d’Alembert, Jean le Rond 1275, 2073 Dabbs, James M. 1375 Dahlbäck, Nils 2008⫺2009 Dale, Richard 1934, 2013 Danziger, Eve 1182, 1211 Darwin, Charles R. 176, 461, 467, 551, 557⫺ 558, 650, 932⫺933, 1023, 1245, 1339, 1344, 1497, 1850, 1887, 1964⫺1965, 1972, 1985⫺ 1986 Davidson, Jane W. 365, 922, 1434, 1436 Davis, Flora 1178 Davis, Jeffrey 2, 17, 172, 174, 177, 366, 494, 498, 933⫺934, 936⫺937, 939⫺940, 960, 1023⫺1024, 1054, 1179, 1216, 1222, 1386, 1905⫺1906, 1908, 1972 Davis, Martha 934, 937, 939, 940, 960, 1023, 1054, 1908 Deacon, Terence W. 493, 495, 542 Decroux, Etienne 1277 Della Casa, Giovanni 1244 Della Porta, Giovanni Battista 365, 369, 1244 DePaulo, Bella M. 618, 905, 1914, 1916⫺ 1919 Deppermann, Arnulf 404, 585, 993, 1000, 1301, 1303, 1305, 1307⫺1308, 1317, 1531, 1537, 1804 Descartes, Rene´, 127, 380⫺381, 387⫺388, 650 Deutsch, Roland 268⫺269, 908, 914, 1403 Dewey, John 1930 Di Paolo, Ezequiel 2019, 2020, 2021, 2022, 2042, 2113 Di Renzo, Alessio 1129, 1133 Diadori, Pierangela 89, 1491, 1518⫺1519 Diderot, Denis 379, 444, 1275⫺1277, 1446, 2072⫺2078 Dieu Nsonde, Jean de 1157, 1158 Dijk, Teun A. van 2012⫺2014 Dingemanse, Mark 1157⫺1158, 1185 Dittman, Allen T. 285, 810⫺812 Doris, Humphrey 423, 985 Driessen, Henk 1528 Du Bois, John W. 1048⫺1049, 1212, 2011, Dubos, Jean-Baptiste 1273 Duchenne de Boulogne, Guillaume B. 920, 1277 Dudis, Paul 131, 771, 779, 1756, 2166 Duncan, Isadora 422, 425 Duncan, Starkey Jr. 102, 838, 1327, 1362, 1363, 1384, 1386, 1467

Duncan, Susan 2, 21, 33, 39, 49, 59, 61, 67, 69, 135, 143, 146, 399, 483, 485, 498, 844, 875, 1017, 1041, 1062, 1063, 1068, 1099, 1565, 1723, 1734, 1735, 1741, 1789, 1880, 1881, 2028, 2039, 2046 Duranti, Alessandro 142, 154, 229, 234, 975, 993⫺994, 996, 998⫺999, 1001, 1003, 1849, 2014 Dushay, Robert 399

E Eastman, Carol M. 18, 236, 1155⫺1157, 1159 Eberhard-Kaechele, Marianne 969 Echterhoff, Gerald 2008 Eco, Umberto 86, 629⫺630, 1177, 1990 Edward, Klima 101, 178, 285, 395, 558, 1176 Edwards, Jane 993, 999, 1004 Efron, David 12, 16, 57, 82⫺87, 89⫺91, 169, 173⫺174, 178, 184, 230⫺231, 321, 393⫺ 395, 615, 735, 810, 837, 975, 984, 1024, 1060, 1171, 1246, 1271, 1334⫺1335, 1339⫺ 1340, 1453⫺1456, 1460, 1474, 1478⫺1479, 1492, 1503, 1510, 1527⫺1528, 1532, 1537, 1542, 1619, 1631, 1663, 1738⫺1739, 1771, 1824, 1990⫺1991 Ehlich, Konrad 648⫺649, 651, 655⫺656, 745, 993, 996, 1000⫺1002, 1038, 1044⫺ 1045, 1047, 1049⫺1050, 1054, 1311, 1803, 1988 Ehrlich, Uri 321, 328, 795, 1023 Eibl-Eibesfeldt, Irenäus 613, 1054 Ekman, Paul 2, 9, 12, 16, 57⫺58, 82⫺85, 89, 91, 94, 104, 176, 285, 293, 394, 396, 559⫺ 561, 610, 616, 641, 696, 735, 807, 885, 906, 918⫺926, 928, 934, 1000⫺1001, 1023⫺ 1024, 1040, 1053, 1069, 1102, 1177, 1245, 1295, 1311⫺1312, 1314, 1316, 1335⫺1338, 1342⫺1343, 1345, 1353, 1363, 1386, 1392⫺ 1394, 1407, 1433⫺1434, 1453⫺1456, 1458⫺ 1460, 1462⫺1464, 1467, 1474, 1476⫺1477, 1479, 1512, 1524, 1532, 1542, 1637, 1663, 1739, 1824, 1909, 1914⫺1915, 1918⫺1919, 1963, 1969, 1972, 2116, 2151⫺2152 Eliade, Mircea 325, 328 Ellgring, Heiner 934, 1023, 1909, Emmorey, Karen 21, 125, 468⫺470, 475, 487, 521, 702, 827, 1023, 1126, 1134, 1370, 1786, 1939⫺1940, 2130, 2134, 2150, 2152⫺ 2153, 2158

Enfield, N. J. 60, 62⫺63, 65⫺66, 69, 236, 592, 675, 685, 690, 692, 696, 699, 701, 709, 711, 726, 757, 760, 976, 1195, 1216⫺1217, 1363, 1365, 1619, 1646, 1662, 1714, 1717, 1722, 1725, 1740, 1749, 1803⫺1804, 1825⫺1826, 2172 Engel, Johann Jakob 1446⫺1448, 2071⫺2072, 2074⫺2076 Erickson, Frederick 233, 683, 686, 988, 1385 Essegbey, James 236, 976, 1154⫺1155, 1161⫺1162, 1163⫺1166, 1525, 1528, 1825, 1870 Everett, Daniel 747, 975, 1658

F Faraco, Martine 1428⫺1429 Farnell, Brenda 17, 236, 1216 Fauconnier, Gilles 59, 183, 193, 246, 254, 2094, 2105 Fayer, Joan M. 1158 Fazio, Russell H. 268⫺269, 473, 908, 913, 1403 Fetzer, Anita 2014 Feuillet, Raoul Auger 417, 425 Feyereisen, Pierre 2, 52, 59, 154, 158, 163⫺164, 398, 495, 805⫺807, 810, 813, 840⫺841, 848, 1024, 1311, 1453, 1790⫺1791, 1898⫺1900, 1939 Figueroa, Esther 1158 Filliettaz, Laurent 1414 Fillmore, Charles J. 183, 191, 696, 759, 767, 778, 822, 840, 1719, 1748, 1754, 1803⫺1805 Fiske, Donald W. 266, 889, 1336, 1340, 1351, 1362⫺1364, 1384 Fitch, Tecumseh 734⫺735, 742⫺745, 1650⫺1651, 1653, 1658⫺1659 Fitó, Jaume 1270 Floyd, Simeon 610⫺611, 613, 616, 619, 1182⫺1185, 1191, 1785 Fodor, Jerry A. 156⫺158, 534 Fogel, Alan 970 Forceville, Charles 2095 Fornel, Michel de 222, 404, 838, 1371, 1377⫺1378 Fornés, M. Antònia 89, 1268 Francke, Otto 324 Frank, Ruella 969 Franklin, Amy 116⫺117, 119, 813, 817 Freedman, Norbert 66, 170, 192, 1023⫺1024, 1069, 1102, 1112, 1338⫺1339, 1457, 1532, 1908⫺1909

2183 Frege, Gottlob 696, 806 Freud, Anna 959, 968 Freud, Sigmund 805, 807, 810, 817 Fricke, Ellen 2, 58, 60, 63⫺66, 68⫺69, 195, 394, 396, 402⫺403, 709, 711, 713⫺714, 716, 718, 719, 726⫺727, 733⫺735, 741⫺748, 762⫺763, 765, 771, 1023, 1041, 1060, 1063, 1072, 1080⫺1083, 1091⫺1092, 1099⫺1100, 1102, 1104, 1109⫺1110, 1112, 1114, 1453, 1456⫺1459, 1550, 1568, 1576, 1579, 1587⫺ 1588, 1602, 1619⫺1627, 1633, 1635⫺1637, 1644, 1646⫺1648, 1650⫺1658, 1664, 1672⫺ 1673, 1694, 1696, 1714, 1717⫺1719, 1722, 1725, 1727, 1733, 1735, 1739, 1741, 1755, 1788⫺1789, 1791, 1793⫺1795, 1797, 1800⫺ 1801, 1803⫺1807, 1809, 1812⫺1818 Friesen, Wallace 9, 12, 16, 57, 82⫺85, 89, 91, 94, 104, 176, 394, 396, 560⫺561, 610, 616, 641, 696, 735, 807, 885, 918⫺922, 924, 926, 934, 1000⫺1001, 1023⫺1024, 1040, 1053, 1069, 1102, 1176, 1245, 1295, 1311⫺1314, 1316, 1335⫺1336, 1338⫺1339, 1343, 1345, 1363, 1386, 1392⫺1394, 1407, 1433⫺1434, 1453, 1455⫺1456, 1458⫺1459, 1462⫺1467, 1474, 1476⫺1477, 1512, 1524, 1532, 1542, 1637, 1663, 1739, 1824, 1909, 1914⫺1915, 1972, 2151 Fuchs, Thomas 969, 2022, 2113, 2116⫺2117, 2122 Furuyama, Nobuhiro 148, 397, 399, 829, 1369, 1817, 2030 Fusellier-Souza, Ivani 1131

G Gallagher, Shaun 534⫺535, 537, 546, 621, 1775, 1905, 2018, 2026, 2030⫺2031, 2035, 2042, 2044, 2052, 2083, 2095, 2097, 2108, 2113 Gallese, Vittorio 269, 452, 461, 463, 472⫺473, 535, 759, 1842, 1887, 1895, 2055⫺2056 Garcia, Brigitte 1050, 1128, 1132, 1134, 1652, 1857, 1867 Gardner, Beatrix T. 1919 Gardner, Howard 47, 48 Garfinkel, Harold 102, 220, 568, 578⫺579, 995 Garnett, Tay 2085⫺2087 Garrod, Simon 162, 850, 1375, 2011, 2013 Gazzaniga, Michael S. 50, 169


Indices Geeraerts, Dirk 759, 766, 1588, 1632⫺1633, 1749, 1807 Gentilucci, Maurizio 23, 162, 474, 484, 814, 1898 Georges, Demery 419 Gerald, Edelman 292, 790 Gerdes, Paulus 1156, 1859⫺1860, 1867 Gerwing, Jennifer 69, 163, 399, 832, 847, 1112, 1370, 1826 Gibbs, Raymond 67⫺68, 131, 187, 403, 514, 516⫺517, 755, 762, 767, 1369, 1375, 1468, 1718⫺1719, 1722, 1749, 1772⫺1774, 1776, 2005, 2017, 2108, 2113⫺2114 Gibson, James J. 514, 678⫺680, 686, 1930, 1933 Giora, Rachel 1773 Giorgio, Agamben 424 Glauning, Friedrich von 1155 Gnisci, Augusto 271, 880⫺881, 885, 888⫺ 889, 893⫺895, 900⫺901, 908, 1246⫺1247, 1407, 1463, 1467 Goethe, Johann Wolfgang von 1244 Goffman, Erving 7⫺9, 62, 87, 93, 101⫺108, 229, 232, 276, 321, 583⫺585, 599, 618, 694⫺695, 702, 708, 1305, 1308, 1317, 1325⫺1326, 1387⫺1388, 1528, 1543, 1692, 1983⫺1985, 1987⫺1988, 2014 Goldberg, Adele 152, 508, 703 Goldenberg, Georg 1890⫺1891, 1903 Goldin-Meadow, Susan 114⫺122, 125, 129, 158⫺159, 168, 399, 486, 496⫺497, 507⫺ 508, 529, 545, 793⫺801, 805, 840⫺843, 845, 1023⫺1024, 1235⫺1238, 1462, 1470, 1736, 1741, 1804, 1835⫺1838, 1851, 1855, 1859⫺ 1860, 1864, 1936⫺1938, 2003, 2005, 2030, 2040, 2045, 2154 Goldstein, Louis 440, 474, 497⫺498, 515, 785⫺786, 2043 Gombrich, Ernst 713, 1687⫺1688, 1692, 1758 Goodill, Sharon 968 Goodwin, Marjorie Harness 102, 104, 106, 219, 223, 566 Goodwin, Charles 65, 66, 102, 103, 104, 106, 108, 109, 110, 142, 219⫺223, 234, 241, 246, 250, 252, 404, 566, 567, 580⫺583, 585, 590, 592⫺593, 595, 596, 599, 603, 677, 680, 684, 685, 695, 701, 702, 757, 762, 822, 837, 975, 987, 988, 1016, 1038, 1054, 1112, 1170, 1216, 1235, 1326, 1327, 1365, 1369, 1371, 1373, 1377, 1378, 1398, 1414, 1462, 1466,

1664, 1714, 1803, 1804, 1825, 2000, 2011, 2014 Gottman, John M. 265, 880, 882⫺883, 885⫺ 886, 888⫺889, 893, 897, 900⫺901 Grady, Joseph E. 186, 756, 1783, 2095 Graham, Martha 422⫺423, 432, 434 Grammer, Karl 461, 1310⫺1311, 1966 Green, Jerald R. 89, 1478, 1936 Greimas, Algirdas J. 1176, 1178, 1278, 1633, 1993 Grice, Paul H. 62, 261, 471, 542, 621, 690, 696, 699, 998, 1050, 1312, 1434, 1437 Grigorjeva, Svetlana A. 1290 Groß, Ulrike 1157 Guerriero, Sonia A. 906, 1236 Guidetti, Miche`le 399, 1151, 1254, 1280, 1284, 1287, 1478 Güldemann, Tom 1186 Gulliver, Philip 1156 Gumperz, John J. 229, 276, 285, 995⫺996, 999, 1002, 1044, 1311, 2014

H Hackney, Peggy 311⫺312, 947⫺948, 952, 1231 Hadar, Uri 2, 5, 59, 65, 157⫺158, 398⫺399, 518, 804⫺814, 848, 1023, 1407, 1499, 1736, 1790, 1801 Hagoort, Peter 23, 25, 844, 1889 Hall, Crystal C. 1402⫺1404 Hall, Edward T. 612, 1176, 1311⫺1312, 1320 Hall, Judith A. 610, 617, 1302, 1307, 1309, 1351, 1363 Halliday, Michael A. 695, 703, 1377, 1908, 2014 Hanks, William F. 676, 679⫺680, 690, 697, 703, 976, 1199, 1204, 1209, 1803⫺1805, 1807 Hanna, Barbara E. 84, 86, 93, 1475 Harris, Lauren J. 57⫺58, 136, 379, 381, 384⫺385, 475, 483, 1162, 1936, 2008 Harrison, Simon 15, 63⫺65, 68, 195, 684, 719, 733⫺734, 762, 827, 846, 1083, 1093, 1100, 1368, 1418, 1500, 1563, 1565, 1576, 1579, 1595, 1597, 1619, 1636, 1662, 2141, 2154 Hassin, Ran 1402 Hastie, Suzanne 968 Hauser, Marc D. 471, 541, 734⫺735, 742⫺ 745, 1650⫺1651, 1658⫺1659

Authors Index Haviland, John B. 236, 265, 760, 765, 771, 976, 1183, 1196, 1209⫺1210, 1212, 1524, 1623, 1715, 1723, 1741, 1749, 1785, 1804, 1809, 1825 Heath, Christian 102, 104⫺106, 110, 219⫺ 221, 223, 374, 399, 404, 581⫺584, 589, 684, 741, 826, 837, 846, 986⫺988, 1002, 1170, 1325⫺1326, 1368⫺1369, 1377, 1414, 1418, 1424, 1445, 1531, 1804, 2021 Herbert, Robert K. 1524 Heritage, John 102, 219⫺220, 567, 569, 579, 694, 993 Heylen, Dirk 1953⫺1954, 2013 Hinde, Robert A. 58, 1963⫺1966 Hirst, Daniel 1052, 2008 Hitchcock, Alfred 2064⫺2065, 2067, 2089 Hochegger, Hermann 1155⫺1156 Hockett, Charles F. 60, 101, 469⫺470, 529, 985 Holler, Judith 757, 833, 838, 842⫺844, 847⫺ 848, 1080, 1369⫺1371, 1373, 1376⫺1377, 1462, 1466, 1468, 1738 Hollis, Alfred C. 1156 Hommel, Fritz 324 Hooff, Jan van 1963 Hostetter, Autumn B. 17, 158⫺159, 162⫺ 164, 518⫺520, 817, 839, 845, 1466, 1468, 1834, 1837, 1894, 1939, 1958, 2002, 2114 Hubbard, Amy L. 176, 546, 618 Huber, Ernst 457, 466, 927, 1311 Huber, Max 1155 Husserl, Edmund 534, 536, 1792, 1991 Hutchby, Ian 567, 590, 986 Hutchins, Edwin 242⫺243, 246⫺248, 250⫺254, 256, 677, 685, 757, 777, 1934, 2008 Hutchinson Guest, Ann 429, 947, 950, 951, 954, 956 Hymes, Dell H. 228⫺229, 404


J Jackendoff, Ray 529, 534, 690, 742, 1434 Jaegher, Hanne de 2020⫺2022, 2042, 2113, 2116⫺2117 Jäger, Ludwig 1718 Jakobson, Roman 68, 86⫺87, 152, 188, 212, 696, 698, 755, 759⫺762, 765⫺767, 771⫺ 773, 775⫺776, 778, 1497⫺1498, 1714, 1735, 1740⫺1741, 1748⫺1750, 1754⫺1755, 1759, 2140 Janzen, Terry 64, 131, 789, 1568, 2134⫺ 2135, 2146, 2154, 2173 Jay, Timothy 959, 986, 1523 Jefferson, Gail 102, 108⫺110, 219⫺220, 404, 569⫺570, 574, 578, 590, 593, 694⫺695, 893, 897, 998⫺999, 1033, 1044, 1047⫺1049, 1106, 1319, 1362, 1364, 1372, 1434 Jespersen, Otto 2144 Johnson, Mark 68, 70, 188, 210, 403, 516, 676, 748, 755⫺759, 767, 777, 778, 1104, 1105, 1186, 1230, 1278, 1541, 1542, 1543, 1553, 1554, 1562, 1612, 1670, 1718, 1719, 1725, 1726, 1748, 1751, 1754, 1762, 1767, 1769, 1772, 1783, 1807, 1811, 1833, 1842, 1909, 2005, 2017, 2019, 2020, 2021, 2022, 2057, 2083, 2095, 2107, 2113, 2114, 2116, 2120, 2122, 2128 Johnson, Robert 1212, 1414 Johnson, Trudy 104, 567, 1326, 2011 Johnson-Laird, Philip 815⫺816, 1633, 1683, 1805⫺1806 Johnston, Trevor 1126, 1128, 1130, 1134, 2130, 2172⫺2173 Joraschky, Peter 1909 Jorio, Andrea de 9, 14, 17, 56, 88⫺89, 91, 393, 1245⫺1246, 1260, 1488, 1491, 1503, 1514⫺1517, 1521, 1525, 1527⫺1528, 1545, 1593, 1641, 1782, 1786

K I Ingold, Tim 1200, 1203⫺1204 Itkonen, Esa 538, 545 Iverson, Jana 114, 120, 161, 399, 544⫺545, 793⫺794, 1287⫺1288, 1462, 1834⫺1835, 1859, 1863, 1867, 1872, 2030 Izard, Carroll A. 559, 616, 919, 1972

Kaplan, Bernard 41, 296, 508, 1834, 1849, 1957 Kappelhoff, Hermann 67, 69⫺70, 206, 1114, 1775, 2021, 2050⫺2052, 2054⫺2056, 2063, 2067⫺2068, 2071, 2073⫺2078, 2081⫺2084, 2089, 2093⫺2098, 2105, 2107⫺2108, 2113⫺ 2114, 2116, 2119, 2122 Kartashkova, Faina I. 1293, 1298


Indices Keller, Peter. E. 352, 1310, 1319, 1433⫺1435, 1437 Keller, Rudi 1319 Kellom, Tomlinson 418 Kelso, Scott 786, 790 Kendon, Adam 2⫺3, 8⫺21, 23⫺25, 29⫺30, 56, 58⫺59, 61⫺66, 71, 83⫺95, 101⫺105, 107, 129, 138, 158⫺159, 184⫺186, 190, 195, 203, 212⫺214, 219, 222, 232, 241, 256, 293, 316, 319, 379⫺380, 395⫺398, 400⫺404, 406, 481, 483, 487, 489⫺490, 493, 495, 499, 506, 525, 566⫺567, 574, 578, 580, 583, 618, 677, 681, 685, 693, 695, 699⫺701, 703, 708⫺710, 712, 715, 717⫺718, 721⫺726, 734⫺735, 737, 739, 741, 744⫺745, 751, 760⫺761, 771, 788, 793, 805⫺806, 808⫺ 811, 837⫺839, 976, 984⫺985, 988, 993, 996, 999, 1001⫺1002, 1009, 1012, 1016, 1024, 1042⫺1043, 1053⫺1054, 1060⫺1063, 1070, 1072, 1075, 1080, 1083⫺1084, 1093⫺1094, 1100, 1102⫺1104, 1108, 1112⫺1114, 1148, 1171, 1177⫺1178, 1195, 1199, 1216⫺1217, 1234, 1248, 1254⫺1256, 1275, 1285, 1301⫺ 1302, 1304⫺1307, 1311, 1315⫺1317, 1319, 1324⫺1327, 1336, 1340, 1347, 1356, 1361⫺ 1362, 1375, 1382⫺1383, 1387, 1392⫺1393, 1398, 1407, 1414, 1421, 1434, 1454, 1456, 1459⫺1460, 1462⫺1468, 1474⫺1478, 1488, 1491, 1496, 1499⫺1500, 1503⫺1505, 1510, 1512⫺1514, 1517⫺1520, 1524⫺1528, 1531⫺ 1536, 1541⫺1545, 1547⫺1548, 1553⫺1554, 1559⫺1563, 1565⫺1567, 1569⫺1570, 1575⫺ 1576, 1579, 1586, 1592⫺1597, 1600⫺1601, 1611, 1614⫺1615, 1619, 1623, 1626, 1631⫺ 1636, 1642, 1653⫺1654, 1664, 1668, 1691⫺ 1696, 1699, 1714, 1716⫺1717, 1723, 1735, 1739⫺1741, 1748, 1755, 1758, 1768, 1770⫺ 1771, 1782, 1785, 1790, 1801, 1803⫺1804, 1808⫺1809, 1824⫺1825, 1827, 1849, 1893, 1956, 1991, 1993, 1995, 2000, 2004, 2009, 2011, 2021, 2027, 2085, 2113, 2135⫺2136, 2140⫺2141, 2150 Kenwood, Christin 66, 399 Kestenberg, Judith 947, 959⫺970 Kestenberg Amighi, Janet 960⫺961, 963, 968⫺969 Kidwell, Mardi 69, 104⫺106, 109, 113, 1325⫺1330 Kim, Helen 2064 Kimbara, Irene 162, 399, 831, 838, 843, 848, 1015, 1371, 1376, 1378⫺1379, 1737⫺1738, 1741

Kimura, Doreen 168, 170⫺172, 175, 177, 518, 814, 1023⫺1024, 1339 King, Elaine C. 1434, 1437 Kintsch, Walter 2013 Kirk, Lorraine 846, 1156 Kirkham, Natasha 1934, 2013 Kirsh, David 246, 1932⫺1933 Kita, Sotaro 65, 69, 157, 159, 161, 164, 171⫺172, 174, 176, 178, 185, 236, 399, 406, 470, 520, 545, 640, 695⫺696, 714, 723, 744, 765, 771, 814, 837, 839, 843⫺ 845, 848, 851, 870, 911, 976, 1002, 1015, 1023, 1060⫺1063, 1070, 1075, 1102, 1112, 1152, 1154⫺1155, 1161⫺1167, 1182, 1186⫺ 1187, 1195, 1203, 1207, 1211, 1234, 1236⫺ 1237, 1466, 1524⫺1526, 1528, 1663, 1668, 1733⫺1735, 1737, 1755, 1789⫺1790, 1804, 1825, 1834, 1836, 1838, 1870, 1900, 1902, 1938, 2002⫺2003 Klassen, Doreen H. 1157⫺1158 Klein, Zdeneˇk 428, 434, 475, 1297⫺1298, 1807, 2030 Klima, Edward A. 18, 457, 635, 789, 1041, 1082⫺1083, 1130, 1569, 1615, 1646, 2158 Knapp, Mark 610, 1302, 1307, 1314, 1316, 1363, 1894, 1915 Knoblauch, Hubert 988 Knoblich, Gunter 1308, 1433 Knox, Dilwyn 365, 367⫺370, 373⫺374, 1527 Koch, Sabine C. 368, 439, 967⫺970 Kochman, Thomas 1158 Köhler, Wolfgang 525⫺527 Kok, Kasper 1386, 2009, 2013 Konrad, Zinta 1157 Kowal, Sabine 994, 1004, 1985 Krahmer, Emiel 159, 616, 1385, 1397 Krauss, Robert M. 65, 157⫺159, 162, 259⫺ 263, 399, 518, 616, 640, 805⫺807, 809⫺811, 814, 827, 839⫺840, 845⫺846, 1023, 1361, 1462, 1464⫺1466, 1663, 1736, 1790, 1900, 1937⫺1938 Kristeva, Julia 806, 1178 Kristiansson, Mattias 2008⫺2009 Krüger, Reinhard 1491 Krych, Meredyth A. 162, 830 Kunene, Daniel P. 1157⫺1158, 1524 Kunene, Ramona 1147, 1150⫺1152, 1157, 1185, 1528 Kunz, Teresa 970


L La Barre, Frances 968⫺969 Laban, Rudolf 307, 311, 313, 319, 423, 428⫺ 429, 431⫺432, 933, 942⫺945, 947, 949⫺ 957, 959⫺960, 965, 970, 984, 1024⫺1025, 1044, 1054, 1226, 1229, 1339, 1906⫺1907 Ladewig, Silva H. 63⫺66, 68⫺69, 71, 87, 93, 95, 129, 190, 214⫺215, 394, 400, 402⫺404, 709, 711, 713⫺715, 718⫺724, 726⫺727, 733⫺735, 744, 747, 751, 763, 777, 1041, 1060, 1063⫺1064, 1066, 1068⫺1070, 1072, 1074, 1080⫺1084, 1090, 1092⫺1094, 1099⫺ 1100, 1102⫺1107, 1109⫺1112, 1114, 1532⫺ 1533, 1540⫺1543, 1554⫺1555, 1560⫺1563, 1565⫺1570, 1576⫺1579, 1585, 1596, 1600, 1605⫺1606, 1614⫺1615, 1619, 1622, 1626, 1631, 1634⫺1635, 1641⫺1643, 1648, 1652, 1658, 1662⫺1665, 1668⫺1670, 1673, 1690, 1692, 1699, 1717, 1753, 1772, 1774⫺1775, 1860⫺1862, 1866, 2094, 2114 Lakoff, George 68, 183, 186, 188⫺189, 194⫺ 195, 210, 403, 463, 516, 535, 748, 755, 758⫺ 759, 767, 778, 1105, 1186, 1279, 1541⫺1543, 1553⫺1554, 1566, 1605, 1613, 1725⫺1726, 1748, 1754, 1767, 1769, 1772, 1783, 1807, 1811, 1842, 1909, 2005, 2017⫺2019, 2057, 2095, 2107, 2172 Lamb, Warren 944, 957, 959⫺961, 966⫺967, 970 Lane, Harlan 126, 388⫺390, 518, 788, 2170 Langacker, Ronald W. 59, 65, 68, 130, 183⫺ 184, 188⫺189, 191, 194, 524, 690, 703, 708, 759, 776, 1105, 1646, 1670, 1755, 1771, 1773, 2096, 2172 Lapaire, Jean-Re´mi 65, 719, 1280 Lausberg, Hedda 2, 51, 169⫺178, 399, 520, 711, 714, 814, 848, 1023⫺1025, 1030⫺1033, 1038, 1043, 1045, 1054, 1080, 1102, 1109, 1116, 1338⫺1339, 1652, 1699, 1753, 1761, 1902, 1906⫺1909 Lawrie, Douglas A. 404 Lazaraton, Anne 1428, 1871⫺1872 Le Guen, Olivier 1199, 1208⫺1212, 1785 LeBaron, Curtis D. 102, 105, 221, 229, 241, 404, 405, 566, 574, 582, 584, 679, 757, 988, 1325, 1462, 1466, 1727, 1740 Lecoq, Jacques 1277 Leite de Vasconcellos, Jose´, 1478 L’Epe´e, Charles-Michel de 388 Leo´n, Lourdes de 1209

Lerner, Gene H. 104, 222, 574, 578, 590, 1326, 1328, 1378 Lessing, Gotthold Ephraim 444, 1446⫺1447, 2072⫺2078 Levelt, Willem J.M. 157⫺159, 808, 812⫺813, 815, 848, 1370, 1375, 1790, 2001 Levinson, Stephen C. 62, 104, 592, 690, 696⫺699, 822, 1208, 1210, 1216, 1317, 1325, 1327, 1524, 1536, 1677⫺1679, 1681, 1803⫺1805, 1824, 1850, 2010 Levy, Elena T. 2, 67, 69⫺70, 185⫺186, 399, 617, 1015, 1340, 1370, 1398, 1532, 1723, 1733, 1767, 1770, 1804, 1817, 2040 Lewis, Penny 947, 959, 968 Lichtenberg, Georg Christoph 1445 Liddell, Scott K. 21, 68, 131, 505, 521, 702, 772, 779, 1134, 1387, 1651⫺1652, 1756, 1804, 2128, 2130, 2152, 2154⫺2156, 2163, 2171⫺2172 Lieb, Kristin 1405 Liebal, Katja 459, 471, 711, 721⫺722, 1094, 1699, 1957⫺1960 Lillo-Martin, Diane 1387, 1859, 2153, 2156⫺2157 Llinás, Rodolfo 790 Linell, Per 822, 994, 2020 Lis, Magdalena 1296 Liszkowski, Ulf 544, 695, 1209, 1804, 1851 Lloberes, Marina 1271 Locke, John 379, 381, 387 Loman, Susan 947, 959, 961, 967⫺970 Lotan, Nava 968⫺969 Lucy, John A. 32 Luff, Paul 220, 404, 583⫺584, 589, 684, 987⫺988, 1325, 1414, 2021 Lynn, Ulrike 1505, 1507⫺1508, 1619, 1626, 1631⫺1632, 1636 Lyons, John 443, 739, 1625, 1631⫺1633, 1683, 1789, 1792, 1803, 1805, 1812, 1815⫺1816, 1970, 1976

M Machiavelli, Niccolò 494, 1244 Magno Caldognetto, Emanuela 85, 627, 641, 644, 1477⫺1478, 1482, 1488, 1492 Mahon, Bradford Z. 1889, 1890 Mallery, Garrick 17, 88, 393, 1492, 1995⫺1996 Mandel, Mark 700⫺701, 712, 762, 767, 1694⫺1695, 1714, 1756, 2128


Indices Marey, E´tienne-Jules 983, 1277 Mark, Zvi 323 Marsh, Peter 617 Martin, John 423, 424, 428, 429, 432 Masip, Jaume 1919 Maturana, Humbert R. 536, 2021, 2022 Mauss, Marcel 203, 227, 231, 659, 674⫺675, 677⫺678, 683, 710, 1277⫺1278, 1441 Maynard, Douglas W. 577 McCafferty, Steven G. 407, 1388, 1430, 1871⫺1872, 1886 McClave, Evelyn Z. 64, 105⫺106, 159, 399, 745, 1060, 1108, 1155, 1383, 1385, 1387, 1467, 1498⫺1500, 2045, 2141 McLuhan, Marshall 343 McNeill, David 2⫺5, 9, 11, 16, 20⫺21, 30, 32⫺33, 46⫺47, 50, 52, 58⫺61, 63⫺67, 69⫺ 71, 82⫺83, 85, 93⫺94, 116, 120, 122, 125, 129, 135, 153⫺154, 161, 164, 172⫺173, 175⫺176, 178, 184⫺186, 192, 203, 219, 241, 247, 256, 285, 312, 317, 334, 372, 398⫺399, 403, 463, 476, 483, 486, 495⫺ 496, 504, 518⫺519, 521, 525, 529⫺530, 578, 580, 611, 615⫺616, 640, 690, 693⫺ 695, 697, 700, 703, 709⫺710, 712, 714, 719, 721⫺723, 726, 734, 737, 739⫺740, 744⫺ 745, 759⫺760, 762⫺763, 767, 771, 788, 793, 805⫺806, 809⫺811, 813, 838⫺839, 844, 885, 999, 1001⫺1002, 1008⫺1009, 1015⫺1016, 1023⫺1024, 1041⫺1042, 1045, 1062⫺1063, 1070, 1080⫺1081, 1083, 1086⫺ 1088, 1091, 1099, 1102, 1107⫺1110, 1112, 1114, 1177⫺1178, 1195, 1202, 1207, 1234, 1254, 1256, 1276, 1311, 1316, 1340, 1361, 1370, 1383⫺1385, 1392⫺1394, 1398, 1406⫺ 1407, 1423, 1434, 1456⫺1457, 1459⫺1460, 1462⫺1465, 1467, 1476, 1482, 1525⫺1526, 1532, 1542, 1545⫺1548, 1554, 1559, 1562, 1565, 1567, 1570, 1612, 1641⫺1642, 1652, 1656, 1662⫺1664, 1668, 1670, 1672, 1691, 1696, 1698, 1714, 1716, 1719, 1722⫺1723, 1725⫺1726, 1733⫺1737, 1740⫺1741, 1749, 1751, 1755, 1760, 1767, 1770, 1783, 1789, 1791, 1796, 1800⫺1801, 1804, 1817, 1824⫺ 1825, 1834, 1848, 1854, 1878, 1880⫺1881, 1893, 1928, 1950, 1995, 2000⫺2002, 2027, 2030, 2035, 2038⫺2040, 2045⫺2046, 2113, 2145, 2150, 2154, 2172 McQuown, Norman A. 101, 985 Mead, George Herbert 262, 480⫺483, 489⫺ 508, 1233, 2022, 2038 Mead, Margaret 101, 232, 678, 984⫺986

Mehan, Hugh 988 Mehrabian, Albert 619, 1343, 1353, 1386, 1407, 1915 Meissner, Martin 17, 1414, 1695 Melinger, Alissa 158⫺159, 164, 845, 848, 1370, 1938 Me´ne´strier, Claude Francœ ois 417, 419, 423 Meo-Zilio, Giovanni 12, 89, 94, 1478, 1492, 1519 Merlan, Francesca 1524 Merleau-Ponty, Maurice 62, 136, 534, 536, 686, 762, 764, 969, 1201, 1204, 1278, 1726, 1973, 2038, 2041⫺2045, 2050, 2056, 2062⫺ 2064, 2066, 2083, 2096, 2113, 2115 Merlin, Donald 485 Meyer, Christian 236, 686 Meyerhold, Vsevolod E. 1447⫺1448, 1450, 2077 Miller, George A. 628, 815⫺816, 1683, 1805⫺1806 Mitchell, Zachary A. 805 Mittelberg, Irene 59, 63, 66, 68, 129, 188⫺ 190, 394, 400, 403, 709, 711, 714⫺715, 742, 748, 756⫺757, 759⫺763, 765⫺767, 769, 772, 774⫺779, 1080, 1083, 1099⫺1100, 1104⫺1105, 1202, 1231, 1545, 1549, 1564, 1600, 1612, 1615, 1626, 1670, 1693, 1696, 1713⫺1727, 1733⫺1734, 1738⫺1741, 1748⫺ 1751, 1753⫺1756, 1758, 1761⫺1762, 1767⫺ 1768, 1928 Mol, Lisette 234, 846, 1371, 1376, 1738 Mondada, Lorenza 69, 102, 107, 220⫺223, 404, 581⫺585, 596, 837, 987⫺988, 1280, 1301, 1303, 1305⫺1308, 1317⫺1319, 1326, 1364, 1372, 1395, 1414, 1531, 1804, 2010 Monson, Ingrid 1434 Montepare, Joann M. 1400, 1402, 1404⫺ 1405 Montes, Rosa 1536, 1540, 1550 Montredon, Jacques 89, 661, 674, 1278, 1427, 1770 Moore, Carol-Lynne 952⫺953, 957 Moran, Nikki 1437 Morell, Karen L. 1156 Morgenstern, Aliyah 1280, 1850, 1852, 1853, 1858 Mori, Junko 404, 1430, 1872 Morris, Desmond 9, 12, 82⫺83, 85, 87, 89⫺ 91, 93⫺94, 637, 1150⫺1151, 1177, 1234, 1267⫺1268, 1270, 1311, 1347, 1476, 1478, 1491⫺1492, 1496⫺1498, 1503, 1512⫺1513,

Authors Index 1517⫺1519, 1524⫺1525, 1534, 1542, 1575, 1579, 1631⫺1632, 1634, 1636, 1870, 1995 Müller, Cornelia 2, 55, 56, 59⫺70, 82, 84, 93⫺94, 131, 177, 188⫺190, 195, 202⫺205, 208, 212, 214, 313, 365, 368, 370, 372, 374, 394, 396, 400⫺404, 558, 580, 594, 596, 656, 681, 684, 700, 708⫺714, 721, 724, 726, 735, 746, 747, 749, 761, 762, 764, 772, 777, 788, 804⫺806, 969, 1023, 1060, 1063, 1072, 1081, 1093, 1099, 1100, 1195, 1205, 1217, 1226⫺1227, 1254, 1256, 1311, 1318, 1339, 1394, 1420, 1453⫺1457, 1488, 1496, 1502⫺ 1503, 1505, 1512⫺1513, 1528, 1531⫺1537, 1540⫺1545, 1547⫺1550, 1559⫺1568, 1570, 1575⫺1579, 1585⫺1588, 1596, 1598, 1600, 1602, 1606, 1608, 1611, 1615, 1619, 1626, 1631⫺1637, 1641⫺1642, 1645, 1652, 1662⫺ 1665, 1668⫺1673, 1687, 1689⫺1696, 1713⫺ 1714, 1716⫺1719, 1722⫺1727, 1735, 1737, 1739⫺1741, 1748, 1750, 1752⫺1756, 1758⫺ 1759, 1762, 1767⫺1776, 1781, 1789, 1795, 1800, 1804, 1825, 1849, 1859, 1860⫺1862, 2004⫺2005, 2021, 2040⫺2041, 2067⫺2068, 2071, 2075, 2078, 2083⫺2084, 2089, 2094⫺ 2097, 2105, 2107⫺2108, 2113⫺2119, 2135⫺ 2136, 2152 Munari, Bruno 89, 1491 Münsterberg, Hugo 2051⫺2052, 2056, 2062, 2082⫺2083, 2095

N Napoli, Donna Jo 1523⫺1524 Neisser, Ulric 785, 1933 Neumann, Ragnhild 16, 402, 1518⫺1519, 1535, 1542, 1563, 1569, 1576, 1579 Newport, Elissa 113, 117⫺118, 2130 Nicoladis, Elena 163, 399, 846, 1284, 1287, 1871 Nomura, Saeko 252⫺253, 685 North, Marion 960 Nothdurft, Werner 1003 Noverre, Jean Georges 419⫺420, 428, 430⫺431 Nuckolls, Janis B. 1183, 1185⫺1187, 1191 Núñez, Rafael E. 59, 67, 68, 188, 192, 756, 763, 1182, 1184, 1185, 1191, 1469, 1755, 1770, 1782, 1784, 1785, 1804, 2005


O O’Connell, Daniel 994, 1004 Ochs, Elinor E. 580, 583, 685, 993, 995, 997, 998, 1849 Ogston, William 10, 64, 105, 397⫺398, 1016, 1060, 1102, 1301⫺1303, 1307⫺1308, 1316, 1406 Olofson, Harold 1159 Olsher, David 1429, 1872 Omar, Sheih Y.A. 1156 Omar, Yahya Ali 1155, 1156, 1159 Ong, Walter J. 343⫺344, 440, 876 Oppenheimer, Daniel M. 271, 1406 Orie, Olanike O. 236, 1154⫺1155, 1162, 1525 Ortony, Andrew 59

P Padden, Carol 118, 470, 476, 2130 Palen, Leysia 252 Parr, Lisa A. 918, 926⫺929, 1963 Parrill, Fey 61, 68⫺69, 94, 131, 138, 147, 193, 403, 838, 843, 847⫺848, 1016, 1062, 1099, 1109, 1370, 1379, 1384, 1541, 1547⫺1548, 1552, 1554, 1565, 1719, 1722, 1726, 1738, 2002, 2168 Patrick, Peter L. 1158 Patterson, Miles L. 266, 615, 618⫺620, 1314, 1325, 1351 Paul, Ingwer 69, 404, 1106, 1531 Payà, Marta 88, 1270 Payrató, Lluís 83, 88, 89, 91, 92, 94, 1234, 1267, 1474, 1476⫺1477, 1482, 1532, 1536, 1575 Pedraza Gómez, Zandra 1179 Peirce, Charles Sanders 62, 66, 86, 537, 629⫺630, 690, 695⫺696, 699, 714, 741, 761, 763⫺767, 769, 776, 1179, 1423, 1458, 1620⫺1621, 1696, 1712⫺1720, 1722⫺1723, 1725⫺1726, 1733, 1740⫺1741, 1748⫺1749, 1751, 1754, 1789, 1813, 1816, 1991⫺1992, 1994⫺1995 Pereverzeva, Svetlana I. 1289⫺1290, 1292 Perniss, Pamela 131, 1043, 1679, 1697, 1714, 1741 Pfau, Roland 64, 621, 1500, 1568, 1646, 2133, 2135, 2142⫺2144, 2152⫺2154, 2157 Philpott, Stuart B. 17, 616, 1414, 1695 Piaget, Jean 537⫺538, 544, 1296, 1833, 1931, 1991


Pickering, Martin J. 162, 850, 1375, 2011 Pike, Kenneth 58, 86, 129, 393, 397, 734, 740, 745, 751, 1387, 1619, 1696 Pinker, Steven 30, 468⫺469, 475, 499, 504, 508, 529, 534, 742, 1653 Pizzuto, Elena 469, 512, 789, 1126, 1133⫺1134, 1568, 1615, 1804 Plessner, Helmuth 1442, 2052, 2071⫺2072, 2077, 2095, 2113, 2115⫺2116 Poggi, Isabella 82⫺83, 85, 89, 91, 94, 100, 627, 629⫺630, 632⫺633, 635, 638⫺641, 643⫺644, 1148, 1459, 1462⫺1463, 1467, 1477⫺1478, 1482⫺1483, 1485⫺1486, 1488, 1490⫺1493, 1496, 1503, 1524⫺1525, 1531, 1534, 1563, 1575, 1619, 1631⫺1632, 1636, 1650, 1736, 1952 Pomerantz, Anita 599 Popper, Karl 488, 508, 537⫺538 Porter, Stephen 1919 Posner, Roland 60, 86, 89, 92⫺94, 639, 736, 738, 1311, 1476, 1503, 1507, 1542, 1548, 1579, 1619, 1626, 1631⫺1632, 1703⫺1707, 1709⫺1710, 1712 Poyatos, Fernando 89, 94⫺95, 287⫺290, 294⫺295, 297, 329, 1477 Pragglejaz Group 1772 Preston-Dunlop, Valerie 947 Prieto, Pilar 1176⫺1177, 1383, 1386 Psathas, George 113, 569, 1326 Puig, Mercè 89, 1268

Q

Quer, Josep 1500, 2142⫺2143, 2153, 2157 Quera, Vicenç 885, 887⫺888, 893⫺894, 897, 899⫺901 Quintilian, Marcus Fabius 10, 15, 55⫺56, 71, 88, 329⫺331, 334⫺335, 337⫺341, 351, 365⫺368, 370⫺371, 727, 1243⫺1244, 1273, 1516⫺1521, 1540, 1559, 1565, 1782, 2170

R

Radden, Gunter 188, 709, 766 Rainer, Yvonne 424, 433 Rameau, Pierre 417 Ramsden, Pamela 960, 967 Rector, Monica 89, 100, 627, 1176, 1180 Reddy, Michael J. 185, 241, 258, 1767, 2136 Reilly, Judy S. 125, 702, 1257, 2152, 2156, 2158 Reinhard, Marc-André 1914, 1919 Richards, Ivor A. 1770 Richardson, Daniel 2, 475, 517, 910, 1934, 2013 Rickford, Angela E. 1158 Rickford, John R. 1158 Ricoeur, Paul 323, 440 Rijnberk, Gérard Van 1527 Rimé, Bernard 17 Rizzolatti, Giacomo 269, 446, 452⫺453, 457, 466, 468, 472⫺474, 483⫺484, 489, 558, 561, 756, 839, 967, 1278, 1887⫺1889, 1895, 2055⫺2056 Rodriguez, Lydia 1211 Rohrer, Tim 535, 2017, 2020, 2022 Romaniuk, Julia 1296 Rosch, Eleanor 535, 755, 1620, 1622, 1719, 1792⫺1793, 2021⫺2022 Rosenthal, Robert 271, 333, 910, 1355, 1404, 1409, 1914, 1976 Rossano, Federico 104, 1208, 1325, 1326, 1327, 2010 Rousseau, Jean-Jacques 129, 379, 467, 1443, 2077 Ruiter, Jan Peter de 65, 157⫺159, 171, 174, 176, 399, 701, 810, 814, 845, 1023, 1110, 1361, 1363, 1365, 1790, 1801, 1825, 1938, 1950, 1954, 2003

S

Sacks, Harvey 62, 102, 105, 108⫺109, 219⫺220, 222, 229, 404, 569, 574, 578⫺579, 590, 593, 684, 694⫺695, 893, 897, 997, 1033, 1106, 1319, 1362, 1372, 1377, 1434 Sager, Svend F. 1001, 1038, 1040, 1044, 1080, 1082, 1099, 1319⫺1320 Sainte Albine, Rémond 1445 Saitz, Robert L. 89, 91, 1156, 1478, 1542 Salgado, António 1262⫺1263, 1266 Sallandre, Marie-Anne 18, 21, 1126, 1131, 1652 Saltzman, Eliot 786, 790 Sandler, Wendy 476, 1387, 2128, 2130, 2139, 2154, 2156⫺2157, 2171 Sarduy, Severo 1176 Saussure, Ferdinand de 56, 60, 135, 136, 188, 212, 469, 488, 629, 632, 694, 696, 741, 761,

Authors Index 1176, 1620, 1621, 1626, 1631, 1636, 1651, 1736, 1990, 1991 Savage-Rumbaugh, Sue 115, 471, 505, 525⫺ 527, 542⫺543 Scheflen, Albert E. 10, 64, 102, 232, 404, 583, 805, 934, 985, 988, 995, 1024, 1053, 1311, 1315, 1383, 1387, 1562 Schegloff, Emanuel A. 102⫺103, 105⫺106, 109, 219⫺222, 230, 404, 566⫺567, 569, 574, 577⫺581, 585, 590⫺594, 597, 684, 694⫺ 695, 702, 748, 805, 893, 897, 997, 1002, 1033, 1106, 1217, 1222, 1307, 1319, 1324, 1326, 1361⫺1362, 1364, 1369, 1372, 1377, 1434, 1663, 1790, 2010⫺2011, 2014 Schembri, Adam 20, 25, 1083, 2130, 2173 Scherer, Klaus R. 58, 66, 333, 396, 551, 553, 558⫺559, 561, 613, 737, 1053⫺1054, 1112, 1311, 1336⫺1338, 1386, 1963 Schilder, Paul 959 Schmitt, Reinhold 69, 221⫺223, 325, 404, 584, 585, 592, 889, 988, 999, 1000, 1044, 1301, 1303, 1305, 1308, 1311, 1317, 1319, 1320, 1365, 1531, 1663, 1804 Schönherr, Beatrix 1362⫺1363, 1365, 1455, 1663 Schwitalla, Johannes 1362⫺1364, 1664 Searle, John R. 88, 92, 212, 214, 261, 538, 566, 577, 694⫺695, 1113, 1506, 1508, 1510, 1548, 1585, 1703, 1711⫺1712, 1934, 1991 Sebeok, Thomas A. 17, 1527 Seiler, Hansjakob 749, 1789, 1791, 1797⫺ 1798, 1800⫺1801 Selting, Margret 580, 590⫺591, 593⫺594, 597, 601, 603⫺604, 993, 996⫺997, 1004, 1045, 1048⫺1050, 1106, 1303, 1363, 1365, 1663⫺1664, 1861 Senghas, Ann 117, 122, 470, 500⫺501, 545 Serenari, Massimo 636, 639, 1478 Seyfeddinipur, Mandana 64, 87, 89, 399, 401⫺402, 404, 723, 837, 975⫺976, 1023, 1060, 1062⫺1063, 1070, 1072⫺1073, 1075, 1103, 1520, 1541⫺1543, 1563, 1566, 1569, 1576 Shai, Dana 969 Shannon, Claude E. 156, 260, 2063 Sheets-Johnstone, Maxine 70, 430, 435, 1203⫺1204, 1775, 2052, 2083, 2097, 2108 Sherzer, Joel 12, 25, 82, 87⫺88, 90, 228⫺ 229, 236, 1207, 1478, 1531, 1561⫺1562, 1804, 1826 Shockley, Kevin 1934 Sibree, James 1155

Sicard, Roch-Ambroise Cucurron 379, 389⫺390 Sidnell, Jack 223, 592⫺593, 988 Siebicke, Larissa 1156 Sime, Daniela 1428, 1871 Singleton, Jenny L. 116, 122, 125, 129, 486, 796 Sinha, Chris 534⫺535, 538, 1289 Sittl, Karl 355, 360 Slama-Cazacu, Tatiana 65⫺66, 709, 1662, 1664 Slobin, Dan 30, 32⫺33, 37, 184, 508, 710, 1134, 1236, 1256⫺1257, 1689, 1735, 1740, 1870, 1876, 1884⫺1885, 2039 Sloetjes, Han 1038, 1043, 1045, 1054, 1080, 1102, 1116, 1131, 1338⫺1339, 1395 Smith, Linda 1, 678, 786⫺790 So, Wing Chee 1238, 1239 Sobchack, Vivian 2049⫺2052, 2056, 2062⫺2068, 2078, 2083, 2095 Sonesson, Göran 534⫺535, 537⫺538, 546, 1719⫺1720, 1723, 1733, 1740, 1750, 1990⫺1997 Sorin-Barreteau, Liliane 1157 Sossin, K. Mark 959⫺961, 967⫺970 Sparhawk, Carol M. 83, 86, 89, 91, 94, 402, 1064, 1234, 1477, 1561, 1567 Spencer, Kelly 52, 430⫺432 Sperber, Dan 88, 259, 690, 698, 1475, 1966 Stam, Gale 1280, 1869⫺1870, 1876⫺1885 Stefani, Elwys de 222, 584, 988 Steffensen, Sune 2019, 2020, 2021 Steinen, Karl von den 1189⫺1190 Stern, Daniel 537, 1328, 2052, 2083, 2095, 2097, 2108 Stetter, Christian 736, 741, 1620⫺1622, 1626, 1651 Stivers, Tanya 106, 113, 221, 592, 596⫺597, 988, 1218, 1222, 1326, 1328, 2010 Stjernberg, Frederik F. 2008⫺2009 Stokoe, William C. 22, 24, 63, 86, 128⫺129, 131, 203, 457, 463, 468, 476, 635, 790, 1041, 1060, 1064, 1082⫺1083, 1104, 1125⫺1128, 1130, 1311, 1320, 1483, 1565, 1652, 1670, 1673, 1696, 1699, 1867, 2127⫺2128, 2174 Strack, Fritz 268⫺269, 271, 275, 908, 914, 1403, 1972 Streeck, Jürgen 13, 16, 60, 62, 65⫺66, 68⫺69, 102, 104⫺105, 190, 203, 206, 214, 219⫺222, 229, 232, 236, 241, 245⫺246, 252, 400, 404⫺405, 566⫺567, 574, 577, 581⫺582, 584, 596, 675, 679, 681⫺685, 701⫺702, 709,


711⫺712, 714, 718, 726, 745, 757, 762⫺763, 837⫺838, 975, 988, 1002, 1104, 1106, 1109⫺1110, 1113, 1171, 1174, 1196, 1198⫺1201, 1254⫺1255, 1316, 1325⫺1326, 1362, 1364⫺1365, 1369, 1371⫺1372, 1406⫺1407, 1462, 1466, 1496, 1531⫺1534, 1536, 1541, 1543⫺1545, 1554⫺1555, 1559⫺1560, 1579, 1586, 1600, 1626, 1632, 1642, 1662⫺1664, 1692, 1696, 1699, 1714, 1717⫺1719, 1727, 1735, 1740⫺1741, 1749, 1758, 1768, 1774, 1803⫺1804, 2004, 2021 Stuart, Meg 425 Studdert-Kennedy, Michael 786 Sunaoshi, Yukako 1414 Supalla, Ted 2130, 2163, 2165 Sutton, Valerie 875, 1128 Sweetser, Eve 59, 61, 67⫺68, 131, 184, 188, 193, 403, 755⫺756, 759, 761⫺763, 767, 770, 774, 779, 1099, 1105, 1182, 1184⫺1185, 1191, 1545, 1565, 1611, 1693, 1713⫺1714, 1716, 1718⫺1719, 1722⫺1727, 1737, 1740⫺1741, 1748, 1751, 1761⫺1762, 1770, 1784 Swerts, Marc 159, 616, 1397, 1400, 1738

T

Talmy, Leonard 31⫺32, 68, 148, 189, 192, 701, 759, 777⫺778, 1105, 1235, 1678, 1687, 1697⫺1698, 1755, 1805, 1876, 1879 Tannen, Deborah 276, 2163 Taub, Sarah 131, 189, 403, 712, 755, 762, 767, 776, 779, 1695⫺1696, 1714, 1726, 1750, 1753, 2130 Tellier, Marion 1280, 1428, 1872 Ten Have, Paul 570, 590 Teßendorf, Sedinha 63, 68, 190, 214, 394, 396, 400, 402⫺404, 718, 1080, 1083, 1105⫺1106, 1113⫺1114, 1532, 1534, 1536, 1540⫺1542, 1548⫺1550, 1555, 1559⫺1561, 1563⫺1564, 1566⫺1567, 1569, 1575⫺1576, 1579, 1585⫺1586, 1595⫺1596, 1598, 1600, 1608, 1610⫺1611, 1615, 1619, 1626, 1631, 1634, 1637, 1663, 1692, 1768⫺1769, 1771 Thelen, Esther 161, 678, 786⫺787, 789, 1859, 1931, 2173 Theophrastus of Eresus 329⫺330, 333⫺335, 341 Thieberger, Nicholas 975 Thoinot, Arbeau 417 Thompson, Robert F. 1158 Thompson, Sandra A. 39, 118, 580, 1363 Tinbergen, Niko 1957, 1963 Todorov, Alexander 443, 561, 564, 1401⫺1405, 1409 Tomasello, Michael 1, 22, 446, 459⫺460, 468, 471⫺472, 484, 492⫺493, 527⫺528, 542⫺545, 695, 721, 839, 842, 1094, 1330, 1803⫺1804, 1851, 1859, 1956, 2012 Tomkins, Silvan 616, 919, 1972 Tompakov, Roland 1176 Traugott, Elizabeth C. 191, 471, 2134, 2138, 2140 Treis, Yvonne 1524 Tremearne, Arthur J. N. 1155 Trier, Jost 718, 1632⫺1633 Trubetzkoy, Nikolaj 1069, 1656 Truslit, Alexander 1433 Turner, Mark 59, 193, 233, 235, 246, 254, 755, 2057, 2094, 2105

U

Umiker-Sebeok, Donna-Jean 17

V

Varela, Francisco J. 535⫺536, 755, 2021⫺2022 Vasconcelos, José Leite de 1260, 1266 Vávra, Vlastimil 1497, 1498 Versante, Laura 13, 185, 681, 976, 1623, 1803⫺1804, 1809, 1825 Verzijl, Harriette 1970, 1977 Volterra, Virginia 122, 469, 483⫺484, 491, 544, 635, 695, 814, 1714 Vrij, Zuckerman 621, 1915, 1919 Vygotsky, Lev Semenovich 32, 61, 136⫺137, 141, 276, 528, 543, 1734, 1849, 1993, 1996, 2038⫺2040

W

Wachowska, Monika 1296 Wallbott, Harald G. 212, 551⫺554, 1024, 1038, 1053⫺1054, 1311, 1314, 1319, 1336⫺1337, 1907⫺1909 Watzlawick, Paul 2⫺3, 58, 396, 737, 807, 1356, 2114 Weaver, John 418 Weaver, Warren 156, 168, 260, 2063

Webb, Rebecca 63, 67, 94, 188, 402⫺403, 1064, 1080, 1204, 1541, 1545, 1547⫺1548, 1565, 1579 Weil, Pierre 1176 Weinrich, Lotte 404, 1044, 1364, 1519, 1579, 1633, 2094, 2105 Werner, Heinz 41, 508, 1834, 1849, 1857⫺1858, 2017 Wertheimer, Max 2115 Westermann, Diedrich 1155 Whorf, Benjamin Lee 32, 43, 1182 Wigman, Mary 416, 422, 423, 429, 432, 434 Wilbur, Ronnie 1641⫺1642, 2128, 2151, 2154⫺2155, 2157⫺2158 Wilcox, Phyllis 131, 762, 767, 776, 779, 788, 789, 1568, 1759, 2173 Wilcox, Sherman 22, 64⫺66, 68, 71, 93, 131, 189, 203, 457, 468, 469, 483, 484, 518, 524, 709, 727, 759, 779, 788⫺790, 1112, 1311, 1568⫺1569, 1615, 1670, 1696, 1714, 1716, 1726, 1741, 1750, 1756, 1759, 1768, 1804, 1957, 2134, 2144⫺2146 Wilkins, David P. 171, 174, 176, 182, 681, 701, 703, 975⫺976, 1155, 1162, 1196, 1287, 1804, 1809, 1826, 1988 Williams, Robert F. 66, 68, 190, 193, 242⫺245, 254⫺255, 403, 429, 757, 777, 810, 1051 Wilson, Margaret 535, 1940, 2017, 2049 Wilson, Deirdre 88, 259, 698, 1475, 1966 Wittgenstein, Ludwig 212, 538, 571, 676, 686, 1620⫺1622, 1718

Woll, Bencie 9, 483⫺484, 488, 1651, 2155 Wollock, Jeffrey 56, 84, 96, 365, 370⫺374 Wooffitt, Robin 567, 590 Wundt, Wilhelm 56, 83, 153⫺154, 202, 393⫺394, 396, 468, 498, 712, 734, 804, 1245, 1457, 1474, 1545⫺1548, 1619, 1694⫺1695, 1738, 1791, 1796⫺1797, 1800⫺1801, 1810, 1824, 2052, 2075, 2083, 2113

Y

Yirmiya, Nurit 968⫺969

Z

Zaidel, Eran 170⫺171, 182 Zarrilli, Phillip B. 1450 Zaslavsky, Claudia 1156 Zebrowski, Robin 2018 Zelinsky-Wibbelt, Cornelia 741, 1619, 1621, 1625 Zlatev, Jordan 68, 191, 534⫺538, 542, 544⫺546, 757, 777, 1284⫺1285, 1287, 1289, 1718, 1720, 1723, 1741, 1807, 1850, 1852, 1990, 1992, 1994, 2113 Zwaan, Rolf A. 194, 515, 522, 533, 2001⫺2002, 2013, 2054

Subject Index

A

Aborigines 9, 17, 304, 306 ⫺ aboriginal 18⫺19, 486, 975, 987, 1524, 1527, 1695 Abstraction 68, 126⫺127, 136, 210, 319, 373, 381, 394, 400, 419, 463, 521, 526, 553, 565, 660, 673, 678, 715, 726, 755⫺756, 759, 761⫺766, 775, 788, 790, 1082, 1105, 1186, 1545, 1549, 1564, 1612⫺1613, 1626, 1647, 1687, 1692⫺1693, 1705, 1707, 1709, 1711⫺1712, 1715, 1720, 1739, 1747⫺1748, 1753, 1759, 1761⫺1762, 1768, 1776, 2017, 2019 ⫺ abstract action 66, 206, 710 ⫺ abstract concept 68, 431, 463, 469, 516⫺517, 772, 777, 1148, 1157, 1231, 1406, 1463, 1545⫺1546, 1645, 1841⫺1845, 2005, 2057, 2098 ⫺ abstract deixis 185, 189, 700, see also Deixis ⫺ abstract idea 114, 145, 182, 313, 317, 319, 401, 468, 516, 756, 762, 777, 1456⫺1458, 1733, 1776, 2077 ⫺ abstract meaning 190, 388, 399, 1606⫺1607, 1613, 1646⫺1647 ⫺ abstract symbol 157, 160, 193, 1834 ⫺ abstract word 158, 422, 1841, 1843, 1845 Acting ⫺ acting body 2071⫺2073, 2075, 2077 ⫺ acting theory 2072, 2074 ⫺ theories of acting 329, 1444, 1449 Action ⫺ action profile 960, 967 ⫺ action scheme 718, 1548⫺1550, 1552⫺1553, 1564, 1592, 1596⫺1602, 1635, 1643⫺1644, 1768⫺1769 Actor 8, 12⫺13, 18, 21, 30, 117⫺118, 164, 206, 229, 236, 275⫺285, 309, 329⫺330, 332⫺333, 336⫺341, 366, 368, 370, 444, 459, 463, 475, 639, 642, 659, 682⫺683, 841⫺842, 844, 849, 1106, 1237, 1249, 1273, 1350, 1385, 1400⫺1401, 1408, 1440⫺1450, 1463, 1496, 1498, 1500, 1502, 1559, 1594, 1600, 1609, 1814⫺1815, 1950, 2051⫺2052, 2066, 2071⫺2074, 2076⫺2078, 2081, 2084⫺2085 Aesthetics 311, 313, 373, 383, 419, 427⫺428, 432⫺433, 438, 442, 1450⫺1451, 1525, 2049,

2052, 2057, 2068, 2072, 2078, 2081⫺2082, 2084, 2094⫺2095, 2107⫺2108 Affect ⫺ affective stance 55, 206, 2105, 2115⫺2116, 2122 ⫺ inter-affectivity 2052, 2057, 2116⫺2117, 2122 Affiliate ⫺ lexical affiliate 148, 253, 748, 805⫺807, 809, 811⫺813, 840, 1002, 1110, 1217 ⫺ verbal affiliate 1365, 1789⫺1790 Africa ⫺ African diaspora 1154, 1158 ⫺ Bantu languages 1148⫺1150, 1185 ⫺ Dwang 1161⫺1165 ⫺ Ewe 1155, 1157, 1161⫺1164 ⫺ South Africa 89⫺90, 92, 1147, 1149⫺1152, 1478, 1497, 1524, 1526, 1528 ⫺ West Africa 1161⫺1162, 1164, 1166, 1168, 1172, 1174, 1246 Alignment 103, 106, 113, 221, 421, 499, 583, 585, 596, 685, 760, 1108, 1114, 1131⫺1132, 1156, 1229, 1231, 1277, 1310, 1327, 1382, 1384, 1682⫺1683, 1716, 1737⫺1738, 1740, 1949, 2009, 2011, 2013⫺2014, 2056, 2157⫺ 2158 ⫺ temporal alignment 50, 64, 1672, 2041 Annotation ⫺ ANVIL 585, 875, 1010, 1015, 1018⫺1019, 1041, 1043, 1100, 1115, 1131 ⫺ ELAN 585, 875, 1010, 1015, 1018⫺1019, 1030, 1043, 1072, 1094, 1100, 1115, 1131⫺ 1132, 1338, 1340, 1395, 1664, 1680, 1827, 1850, 1852, 1861 ⫺ EXMARaLDA 1015, 1019, 1115 ⫺ Praat 601, 604, 641, 1010⫺1011, 1019, 1395 ⫺ TASX 1017 ⫺ Transana 1015, 1019 Anthropology 1, 3, 5, 24⫺25, 57, 62, 96, 100, 227⫺228, 230, 232⫺234, 240⫺241, 243, 245, 247, 249, 251, 253, 255, 287⫺288, 297, 306, 394, 397, 427⫺428, 430, 432, 438, 444, 533, 760, 934, 982⫺986, 988, 993, 995, 1015, 1024, 1081, 1099, 1270, 1303, 1312, 1442, 1479, 1696, 1713, 1956, 2071, 2114, 2116, 2174

Aphasia 45⫺47, 463, 472, 574, 595, 804, 813⫺814, 817, 1295, 1469⫺1470, 1736⫺1737, 1890, 1892⫺1895, 1898⫺1903, 1905 Apraxia 170, 173, 177, 182, 454, 456, 463, 1469, 1890⫺1895, 1902 Aproprioception 52, 2026 Arapaho 1216⫺1224, see also The Americas Arousal 268, 618⫺620, 633, 638, 701, 911⫺912, 964, 1025, 1342, 1347, 1386, 1483, 1901, 1914⫺1915, 2053 Artificial Intelligence 2, 393⫺394, 405, 1703, 1712 ⫺ social robot 1943⫺1945, 1947 Arts 5, 272, 307⫺309, 313, 317, 319⫺320, 364⫺367, 374, 378, 417, 424, 428, 438, 713, 959, 1177, 1186, 1191, 1227⫺1229, 1244, 1248, 1277, 1440⫺1441, 1444, 1449, 1687⫺1688, 1699, 1769, 2051⫺2052, 2056, 2072⫺2073, 2093 Asia ⫺ Chinese 31, 39⫺40, 154, 355, 370, 386, 502, 1234⫺1238, 1393, 1398, 1442, 1451, 1845, 1869, 1987, 1995 Attention ⫺ attention getter 541⫺542, 1549 ⫺ attentional state 471, 1956⫺1958, 1960 Attunement 961, 966⫺967, 1033, 1302, 2108 Audio-visual 4, 68, 661⫺662, 672, 1072, 1303, 1340, 1385⫺1386, 1392, 1394⫺1395, 1426⫺1427, 1689, 1895, 2049⫺2057, 2062⫺2064, 2068, 2078, 2081⫺2085, 2087⫺2089, 2093⫺2105, 2107⫺2108 Autism 461, 969, 1263, 1969, 1977⫺1978, 2022⫺2023 Aymara 1182, 1184⫺1185, 1784⫺1785, see also The Americas

B

Behavior ⫺ bodily behavior 57⫺58, 184, 393, 397, 404⫺405, 1038, 1045, 1049, 1053⫺1054, 1102, 1302, 1306, 1351⫺1352, 1401, 1404, 1407, 1453, 1455, 1459, 2044, 2113⫺2114, 2117 ⫺ non-verbal behavior 59, 1158, 1311⫺1312, 1314, 1319, 1949 ⫺ movement behavior 932⫺933, 935, 937, 939, 959⫺960, 1905⫺1909 Blending 59, 68, 131, 182⫺183, 186, 193, 256, 258, 294⫺295, 416, 741⫺742, 1524, 1625⫺1626, 1951, 2095

⫺ gestural blending 1624 Body ⫺ body action 94, 943⫺945, 1054, 1586, 1751⫺1752, 1756⫺1758, 1760⫺1761 ⫺ body attitude 318, 943, 961 ⫺ body language 240, 329, 331, 336, 340, 432, 611, 904, 1039, 1177⫺1180, 1241⫺1251, 1289⫺1291, 1408, 1513, 1528, 1944⫺1945, 1976, 1986, 2074, 2114 ⫺ body mirroring 461 ⫺ body movement 3⫺5, 102, 104⫺105, 218, 336, 428, 574, 662, 716, 722, 808, 812, 940, 942, 959, 965, 1029, 1033, 1053, 1158, 1228⫺1229, 1297, 1326, 1385, 1462⫺1464, 1505, 1525, 1772, 1774, 1781, 1906⫺1909, 1943⫺1947, 2067, 2076, 2078, 2115, 2153 ⫺ body parts 4, 10, 64, 104, 115, 290⫺291, 294, 308, 343⫺344, 398, 418, 434, 525, 567, 650, 653, 656, 662, 665, 716, 722, 734, 775, 865, 869, 877, 934, 943⫺944, 950, 952, 954, 956, 963, 979, 1009, 1039, 1053⫺1054, 1082, 1103, 1204, 1209, 1290⫺1291, 1387, 1420⫺1421, 1435, 1509, 1533⫺1534, 1663, 1668, 1723, 1755, 1759, 1899, 1914, 1956, 1997, 2164, 2166⫺2167 ⫺ body posture 100, 103, 218⫺221, 223, 349, 404, 566, 577⫺578, 584⫺585, 590, 592, 597, 641, 756⫺757, 759, 761, 785, 999, 1038, 1053⫺1054, 1082, 1157⫺1158, 1234, 1244, 1301⫺1308, 1314, 1317⫺1318, 1337, 1343, 1352⫺1353, 1355, 1383, 1430, 1444, 1503, 1713, 1716, 1733, 1751⫺1752, 1758, 1850, 1956 ⫺ body-focused 170, 663, 1338, 1908⫺1909, 1916 ⫺ bodily semiotics 756, 1749, 1751 ⫺ techniques of the body 203, 231, 675, 677⫺678, 683, 710, 1768 ⫺ torque 113, 567, 577

C

Catalonia 1266⫺1267, 1269, see also Europe Catchment 46⫺47, 49, 52, 135, 148, 151⫺152, 1114, 1384, 1546, 1641, 1656, 1735, 1741 Category ⫺ functional category 996, 1000, 1338⫺1340, 1966 ⫺ grammatical category 65, 113, 118, 726, 757, 760, 776, 779, 1148, 1484, 1673, 2172, see also Syntax

Chinese 31, 39⫺40, 154, 355, 370, 386, 502, 1234⫺1238, 1393, 1398, 1442, 1451, 1845, 1869, 1987, 1995, see also Asia Cinema ⫺ cinematic communication 2051, 2062⫺2068, 2084 ⫺ cinematic expressive movement 2050, 2052, 2056⫺2057, 2082, 2084, 2088, 2101, 2108 Classification ⫺ functional classification 212, 401, 1042, 1053, 1314, 1334, 1339⫺1340, 1363, 1458, 1533, 1739, see also Function ⫺ gesture classification 83, 395, 1041, 1453, 1482, 1544, 1554, see also Gesture ⫺ semantic classification 400, 1492, 1641⫺1642, see also Semantics Co-expressiveness 137, 139, 485, 488, 1110, 2038, 2043⫺2046 Coding ⫺ decoding 156, 162, 241, 258⫺260, 263, 266, 269⫺270, 529, 551, 614⫺617, 654, 659, 690, 806, 839, 841, 904⫺911, 913⫺914, 1267, 1406, 1477, 1800, 2052 ⫺ encoding/decoding paradigm 260, 263 Cognition ⫺ distributed cognition 240⫺243, 245⫺247, 256, 676, 683, 685, 1413, 2020 ⫺ embodied cognition 69, 271, 535, 676, 959, 1841, 1843, 1845, 1932, 2020, 2057, see also Embodied ⫺ enactive cognition 2017, 2020 ⫺ grounded cognition 530, 1841 ⫺ cognitive ecology 243, 681, 2020 ⫺ cognitive functions; see Functions ⫺ cognitive load 793, 800⫺801, 845, 907, 1837, 1914⫺1916, 1933, 2002 ⫺ cognitive models 5, 240, 253, 255, 1566, 1635, 1891, 1922, 1928 ⫺ cognitive neuroscience 182, 676, 785, 789, 843, 1890 ⫺ cognitive process 3, 67, 69, 242⫺243, 247, 263, 268, 270, 344, 381⫺382, 387, 406, 527, 529, 578, 659, 679, 690, 701, 710, 714, 768, 851, 911, 1041, 1178, 1261, 1342⫺1344, 1426, 1470, 1555, 1566, 1619, 1635, 1673, 1718, 1761, 1803, 1841, 1888, 1922, 1930⫺1933, 1940, 1983, 2008⫺2009, 2013⫺2014, 2090, 2094⫺2098, 2104 ⫺ cognitive resource 267⫺268, 270, 515, 801, 1403, 1837, 1892, 1916, 2013 ⫺ cognitive semantics; see Semantics

⫺ cognitive semiotics 533, 538⫺539, 546, 1769 Combination 21, 39, 41, 46⫺47, 49, 61, 87, 94, 114, 119, 136, 139⫺141, 183, 188, 193, 267, 297, 304, 308, 310, 318, 324, 346, 369, 387, 440, 470, 485, 487, 498, 508, 521, 528, 539, 542, 553, 616, 622, 634, 648⫺649, 656, 664, 670, 689, 699, 701, 703, 723⫺724, 727, 734, 757, 772, 775, 795, 880, 921, 923, 927, 950, 968, 1016, 1030, 1038, 1042, 1044, 1054, 1061, 1085⫺1086, 1088, 1111, 1183, 1188, 1292, 1303, 1325, 1338, 1342, 1354, 1401, 1407, 1414, 1420, 1476, 1479, 1498, 1503, 1546⫺1547, 1555, 1566⫺1567, 1625, 1646, 1651, 1733, 1761, 1785, 1826, 1852, 1857⫺1859, 1861⫺1865, 1888, 1898, 1952⫺1953, 1959, 1963, 1987, 2001, 2038, 2040, 2045, 2055, 2057, 2154 Common ground 156, 163, 235, 253, 578, 689, 696, 698, 826, 828, 832⫺833, 847⫺849, 1355, 1369⫺1370, 1433, 1737⫺1738, 1755, 1760, 1803, 2012 Communication ⫺ bodily communication 2, 71, 164, 195, 215, 235, 258⫺259, 264, 266⫺270, 333, 373, 427⫺428, 451, 459, 461⫺463, 617⫺618, 721, 778, 904⫺905, 908⫺911, 913⫺914, 994⫺995, 999⫺1002, 1208, 1241⫺1246, 1248, 1259, 1264, 1270, 1305, 1320, 1350⫺1352, 1354, 1356⫺1357, 1400, 1402, 1408⫺1409, 1456, 1470, 1722, 1915⫺1919 ⫺ communication accommodation theory 258, 265⫺266, 617, 620 ⫺ communication strategy 1869, 1871 ⫺ communicative intention 158, 261, 528, 690, 1319, 1437, 1826 ⫺ non-verbal communication 57⫺58, 334, 339, 558, 631, 985, 1155, 1158, 1315⫺1316, 1319, 1476, 1537, 1703, 1956 ⫺ political communication 1400⫺1401, 1405, 1408⫺1409, 1467, 1469 Compositionality 91, 94⫺95, 405, 530, 632, 1459, 1565, 1567, 1621, 1625⫺1626 Computer ⫺ computer animation 1944, 1958 ⫺ computer science 742, 877, 929, 1015, 1259, 1263, 1296, 1424, 2008 Conceptualization 2, 32, 69, 127, 130⫺131, 157⫺159, 177, 182⫺184, 186, 191⫺193, 209, 233, 242, 244, 254⫺255, 258, 307⫺308, 319, 394, 399, 535, 566, 568, 580, 679, 682, 708⫺710, 713, 715⫺716, 763, 804, 811, 815,

996, 999⫺1000, 1002, 1004, 1176, 1182, 1184, 1191, 1211, 1231, 1262, 1290, 1292, 1298, 1360, 1612, 1614, 1685, 1688⫺1689, 1699, 1718, 1726⫺1727, 1762, 1769, 1771⫺1773, 1781, 1783⫺1785, 1894, 1938, 2018, 2054, 2063, 2106 Constructed action 2152⫺2153, 2163⫺2168 Construction 9, 20⫺21, 31, 45⫺46, 62, 64, 68⫺69, 109, 115, 136, 146, 152⫺154, 160, 183, 193, 195, 221⫺223, 230, 233⫺234, 241, 252, 263⫺264, 272, 283, 285, 292, 307, 309, 314, 347, 371, 374, 387, 397, 404⫺405, 444, 457⫺458, 462⫺463, 471, 485, 496, 501⫺502, 506⫺508, 512, 517⫺518, 521, 528, 545, 568, 573, 579, 585, 590, 592⫺593, 595, 598⫺599, 601, 603, 632, 671, 691⫺693, 696, 702⫺703, 709, 715, 717, 742, 759, 761, 767⫺768, 770, 779, 808, 810, 817, 993, 995, 998, 1025, 1081, 1100, 1105⫺1106, 1112⫺1114, 1126, 1171, 1178, 1228, 1245, 1261⫺1262, 1288, 1290, 1292, 1310⫺1312, 1314, 1334, 1361⫺1363, 1365, 1371, 1378, 1387, 1463, 1477, 1482, 1543, 1545, 1578, 1626, 1631, 1637, 1646, 1657, 1689, 1691⫺1692, 1725, 1747⫺1749, 1751, 1759, 1768, 1772⫺1773, 1794, 1801, 1851, 1988, 2000⫺2001, 2011, 2013, 2017, 2021, 2028, 2040, 2073, 2076, 2093⫺2094, 2097, 2101, 2105, 2107⫺2108, 2150, 2154⫺2157, 2168, 2171 ⫺ construction grammar 458, 703, 1311 ⫺ meaning construction 62, 69, 214, 307, 309, 314, 405, 521, 528, 709, 715, 717, 742, 759, 761, 767⫺768, 770, 779, 995, 1081, 1100, 1105, 1112⫺1114, 1387, 1545, 1578, 1626, 1725, 1747, 1749, 1759, 1772, 2000, 2017, 2021, 2105, 2107⫺2108 ⫺ social construction 230, 264, 272 Context ⫺ context analysis 102, 174, 232, 404, 684, 985, 988, 1301⫺1303, 1305⫺1306, 1317, 1614 Contiguity 68, 269, 315, 317, 659⫺660, 699⫺700, 714, 755, 757⫺759, 761⫺762, 765⫺767, 769⫺776, 778, 1199, 1717⫺1718, 1723, 1741, 1747⫺1752, 1754⫺1760, 1768, 1992, 1994⫺1995 Convention ⫺ conventionality 84, 91⫺94, 529, 545, 701, 759, 767, 1276, 1283⫺1288, 1547⫺1548, 1560, 1576, 1712, 1715, 1717⫺1718, 1723, 1727, 1733, 1740, 1773, 1990, 2172

⫺ conventionalization 63, 87⫺88, 94, 463, 469, 519, 719, 734, 740⫺741, 762, 767, 1284, 1454, 1459, 1475⫺1476, 1478⫺1479, 1531, 1541⫺1542, 1547⫺1548, 1561, 1564, 1569⫺1570, 1575⫺1576, 1579, 1588, 1596, 1605, 1614⫺1615, 1619, 1714, 1723, 1727, 1741 Conversation ⫺ conversation analysis 2, 5, 102, 218⫺221, 223, 228⫺229, 235, 241, 393⫺394, 404, 564⫺567, 569⫺570, 572, 577⫺580, 582, 589⫺591, 593, 597, 676, 684, 686, 893, 982, 985, 987⫺988, 995, 1003, 1019, 1038, 1044, 1048, 1050, 1303, 1305⫺1306, 1365, 1413, 1804, 2014 ⫺ conversational interaction 100⫺105, 107, 109, 528, 577, 589⫺591, 593, 595, 597⫺599, 601, 603, 605, 1114, 1212, 1324⫺1325, 1327⫺1329, 1382, 1462, 2013, 2020, see also Interaction ⫺ speakership transition 997, 1364 ⫺ temporal overlap 65, 70, 747⫺748, 809, 1110⫺1111, 1643, 1646⫺1647, 1662 ⫺ turn constructional unit 1107, see also Unit ⫺ turn-taking 219, 222, 404, 461, 579, 590, 618, 638, 1033, 1054, 1106, 1169, 1305, 1310, 1347, 1361⫺1365, 1372, 1430, 1467, 1813, 2117, 2120, 2139⫺2140, 2152 ⫺ turns of talk 888, 893, 897 Coordination 11, 50, 101, 104⫺105, 107, 113, 170, 175, 204, 220⫺221, 223, 240, 242⫺245, 247, 249, 251, 253, 255⫺256, 267, 272, 303, 462, 519, 523, 565, 567, 574, 577⫺578, 589, 618, 675, 678, 685, 790, 801, 808⫺809, 813⫺814, 830, 893, 923, 934, 993, 1000⫺1002, 1024, 1026, 1060⫺1061, 1114, 1151, 1200, 1245, 1249, 1301⫺1308, 1316, 1318, 1346, 1355, 1360⫺1362, 1392, 1418, 1428, 1430, 1435⫺1437, 1458, 1466, 1469⫺1470, 1479, 1500, 1531⫺1532, 1646, 1755, 1803, 1848, 1934, 1943, 1945, 1947, 1954, 2013, 2017, 2021, 2052 ⫺ temporal coordination 519, 1316, 1361, 1392 Cortex 24, 52, 250⫺251, 452⫺453, 457, 472⫺473, 496, 558, 615, 1031, 1844, 1889, 1922, 1960, 2054 ⫺ motor cortex 24, 452, 473, 558, 1902, 1934 ⫺ premotor cortex 452, 472, 1889 Counting 50, 92, 240, 243⫺247, 254⫺256, 337, 395, 1154, 1156⫺1157, 1174, 1306, 1837



Culture 2, 57, 64, 82⫺84, 86⫺87, 90, 95⫺96, 101, 187, 190, 203, 230⫺236, 243, 256, 258, 267, 271, 288, 297, 320⫺321, 325, 328, 331, 343⫺345, 347, 349, 351⫺353, 355, 357, 359, 361, 371⫺374, 381⫺384, 387, 393, 395⫺ 396, 405, 420, 422, 425, 427⫺428, 431, 434, 440, 453⫺454, 476, 482, 530, 537⫺538, 610, 612⫺613, 616⫺617, 620, 631, 635, 650, 653, 677⫺678, 686, 691, 736, 763, 810, 834, 910, 924, 957, 984, 986, 1039, 1147, 1151⫺1152, 1155, 1157⫺1158, 1161⫺1162, 1176, 1177⫺ 1178, 1180, 1185, 1191, 1195⫺1196, 1200, 1207, 1212, 1234⫺1235, 1238, 1241⫺1243, 1246⫺1248, 1270⫺1271, 1285, 1287, 1289⫺ 1291, 1295⫺1296, 1320, 1327, 1340, 1350, 1352, 1356⫺1357, 1372, 1401, 1405, 1441⫺ 1442, 1451, 1460, 1463, 1479, 1482, 1492, 1503, 1509⫺1510, 1512⫺1513, 1517, 1526, 1528, 1546, 1548, 1559, 1633, 1771, 1782, 1825⫺1826, 1829, 1845, 1859, 1869, 1885, 1975, 2018, 2020, 2063, 2073, 2078, 2081, 2088, 2093, 2108, 2146, 2174 ⫺ cultural practice 227, 229, 231, 233, 235, 240, 243⫺244, 253⫺254, 428, 482, 493, 540, 674, 983⫺984, 1203, 1261, 1286, 1406, 1723, 1739, 1781, 1783⫺1784, 2019, 2078 ⫺ cross-cultural comparison 234, 1288, 1454, 1465, 1478, 1504, 1575 ⫺ cross-cultural difference 267, 1170, 1234, 1455, 1512, 1519⫺1520, 1919 ⫺ cross-linguistic 35, 69, 125, 159, 844, 1043, 1156, 1237⫺1238, 1290, 1570, 1636⫺1637, 1714, 1735, 1875, 1900, 2129, 2141, 2163, 2166⫺2167

D

Dance 4⫺5, 70, 230, 233, 297, 306⫺315, 317⫺319, 322, 378, 383, 416⫺425, 427⫺435, 438, 453, 530, 538, 540, 932⫺933, 938, 940, 942, 954, 957, 959, 970, 983, 1054, 1147, 1157, 1227⫺1231, 1259, 1262⫺1263, 1270, 1274⫺1276, 1318, 1433, 1441⫺1442, 1734, 1774, 2051, 2089, 2108 ⫺ ballet 367, 416⫺417, 420⫺423, 425, 427⫺435, 438, 453, 1449, 1774⫺1775, 1971 ⫺ narrative dance 307⫺308, 313, see also Narration Deception 466, 561, 615, 621⫺622, 630, 643, 910, 925, 1356, 1382, 1386, 1466, 1913⫺1919, 1965⫺1966, 1988

Deixis 144, 185, 189, 317, 451, 488, 696, 698, 700, 832, 1204, 1209, 1269⫺1270, 1340, 1623, 1625, 1770, 1803⫺1809, 1811⫺1818, 1824 ⫺ origo 60, 1194⫺1195, 1199, 1204, 1457, 1460, 1801, 1803⫺1807, 1810⫺1814, 1818 Delivery 10, 55, 108, 296, 329⫺341, 366⫺372, 380, 1243⫺1244, 1273, 1394, 1458, 1491, 1516, 1518⫺1521, 1532, 1755, 1902, 1970, 2051 Denotation 128, 387, 394, 1792, 1796 Depiction 21, 61⫺62, 185, 204⫺206, 209, 246⫺249, 301, 303, 308, 313, 315⫺316, 318, 365, 519, 604, 668, 681⫺682, 684, 690, 710, 712⫺713, 715⫺716, 750, 764, 869, 1085, 1089⫺1091, 1361, 1570, 1643⫺1645, 1668, 1670, 1684, 1687, 1689⫺1693, 1695⫺1697, 1699, 1721, 1736, 1768, 1771, 1776, 1995, 2072 Development ⫺ cognitive development 159, 384, 790, 840, 1833⫺1838, 1855, see also Cognition ⫺ language development 115, 470, 488, 494, 507⫺508, 1254, 1264, 1296, 1528, 1736, 1849, 1852⫺1853, 1858, 1869⫺1870, see also Language ⫺ mother-child interaction 901, 959, 970 ⫺ motor development 160 ⫺ preverbal 650, 672, 811, 816, 1275, 1950, 2145 Diagram 13, 21, 156, 159, 195, 248⫺250, 463, 538, 689, 697, 702, 759, 762, 764⫺765, 767, 769⫺770, 776, 778, 900⫺901, 961⫺962, 966⫺967, 1015, 1157, 1228, 1679, 1722⫺1726, 1730, 1754 Disease ⫺ depression 969⫺970, 1905⫺1906, 1909, 1969, 1974⫺1975, 1977 ⫺ mental illness 932⫺933, 937, 959, 1905, 1907, 1909, 2022 ⫺ Parkinson's disease 1386, 1969⫺1970, 1976⫺1978 Discourse ⫺ discourse analysis 1, 236, 263, 275⫺278, 283, 285, 566, 993, 998⫺1000, 1038, 1046, 1050, 1134, 1260, 1311 ⫺ discourse marker 1427, 1467, 1532⫺1533, 1541, 1567, 2135, 2138, 2140 Disorder ⫺ eating disorder 1032, 1905⫺1907 Dwang 1161⫺1165, see also Africa


E

Ecology 62, 87, 91, 220, 243, 584⫺585, 595, 603, 617, 674⫺675, 679⫺681, 686, 988, 1352, 1528, 1717, 1963, 2017, 2020⫺2021 Effort 12, 18, 22, 52, 129, 143, 205, 256, 267⫺268, 311, 322, 346, 382, 455, 498, 529, 546, 568, 638, 643, 669⫺670, 682, 708, 832, 849, 858, 860, 865, 877, 932, 940, 943⫺944, 948⫺952, 954⫺957, 959⫺961, 964⫺966, 968, 1012⫺1013, 1019, 1044, 1061⫺1062, 1242, 1344, 1375, 1421⫺1422, 1467, 1498, 1761, 1834, 1916, 1944⫺1945, 2009, 2014, 2114, 2118⫺2119, see also Movement, Shape Embodiment 2, 4, 42, 61, 136, 154, 168, 221, 271⫺272, 307⫺308, 326, 351, 440, 525, 533⫺535, 537⫺541, 543, 545⫺546, 676, 678, 686, 756, 777, 894, 959, 969, 1029, 1201, 1204, 1212, 1273, 1356, 1406, 1427, 1441⫺1442, 1449, 1451, 1766, 1774, 1807, 1812, 1818, 1833, 1841⫺1842, 1850, 1872, 1944, 1954, 1973⫺1974, 2000, 2005, 2016⫺2023, 2035, 2040⫺2042, 2045, 2049⫺2052, 2056⫺2057, 2062⫺2063, 2065, 2067⫺2068, 2071⫺2072, 2076, 2078, 2083, 2095, 2107⫺2108, 2113⫺2114, 2163⫺2164, 2168 ⫺ embodied conversational agent 1946⫺1947, 1949⫺1954 ⫺ embodied resource 218⫺220, 222, 577⫺580, 585, 2000⫺2011 ⫺ embodied semantics 462⫺463, see also Semantics ⫺ embodied social psychology 258, 270 Emotion 2, 4, 55, 210⫺212, 233⫺234, 267, 271, 289, 293, 308⫺309, 313, 315, 318⫺319, 329⫺341, 366, 369, 373, 380, 382⫺383, 394⫺396, 416, 419⫺420, 422⫺423, 425, 431⫺432, 443⫺444, 446, 452, 461⫺462, 469, 516, 521⫺522, 529, 551⫺560, 578, 597, 612⫺617, 621⫺622, 632⫺644, 756, 904, 906, 919⫺925, 933⫺934, 967, 1015, 1023⫺1029, 1032, 1053, 1108, 1227⫺1228, 1231⫺1232, 1260, 1262⫺1263, 1273⫺1274, 1277⫺1278, 1293⫺1296, 1314⫺1315, 1338, 1343⫺1344, 1346, 1354, 1386⫺1387, 1405, 1430, 1440, 1443⫺1449, 1483, 1527, 1758⫺1760, 1762, 1842, 1845, 1909, 1915, 1918, 1943, 1949, 1964⫺1966, 1969⫺1976, 1983⫺1988, 2018, 2050⫺2051, 2053, 2055, 2074⫺2076, 2081⫺2083, 2095, 2097, 2107, 2113⫺2116, 2120, 2150⫺2152 ⫺ emotive 184, 318, 597⫺599, 601, 603⫺604, 1288, 1523, 1983⫺1987

Empathy 271, 461, 558, 959⫺960, 966⫺967, 1015, 1446⫺1447, 1759, 1969, 1973, 1978, 2115 Enactment 14, 18, 61, 185, 189, 246, 250⫺251, 253, 463, 525, 617, 675, 679, 683⫺684, 700, 716, 756, 1106, 1361, 1548, 1727, 1734, 1747, 1750, 1760, 1762, 1768, 2005, 2018, 2040 Enlightenment 5, 56, 215, 378⫺381, 383, 385, 387, 389⫺390, 430, 443, 1244, 1274⫺1275, 2072⫺2073 ERP (Event-Related Potential) 843⫺844, 849, 1923, 1925, 1928 ⫺ n300 1924⫺1925, 1928 ⫺ n400 52, 843, 1923⫺1926, 1928 Ethnography 5, 88, 227⫺229, 231⫺236, 241⫺242, 258, 393, 404, 683, 685, 982, 987⫺988, 1038, 1081, 1297, 1527 ⫺ micro-ethnography 229, 236, 241, 683, 982, 988 Ethology 9, 58, 128, 934, 1298, 1963, 1965 Eurasia ⫺ Russian 89, 137, 351, 354⫺355, 1289⫺1292, 1297⫺1298, 1392⫺1393, 1395⫺1396, 1398, 1492, 1987, see also Russian Europe ⫺ Catalonia 1266⫺1267, 1269 ⫺ France 25, 227, 231, 303, 379, 390, 417, 442⫺443, 613, 670, 674, 678, 683, 686, 983, 1131, 1178, 1272⫺1279, 1413⫺1414, 1418, 1443, 1519, 1524, 1527, 1785, 2145 ⫺ Greece 566, 1243, 1497, 1513, 1515, 1519⫺1520 ⫺ Italy 13⫺15, 87, 89, 96, 102, 275, 373, 378, 681, 685, 1234, 1241⫺1249, 1273, 1324, 1460, 1478, 1487, 1490, 1496, 1512, 1516⫺1520, 1524, 1541, 1848 ⫺ Sweden 1283, 1285, 1287 Evolution ⫺ biological evolution 458 ⫺ cognitive evolution 301 ⫺ evolution of facial expression 928 ⫺ evolution of gesture 480⫺481, 483, 485, 487, 489, 491, 493, 495, 497, 499, 501, 503, 505, 507, 711 ⫺ evolution of language 52, 393, 462⫺463, 542, 788, 790, 1545, 1694, 2144 ⫺ human evolution 306, 469, 481, 488⫺489, 506, 713, 1699 ⫺ intellectual evolution 390 ⫺ natural evolution 1887 Ewe 1155, 1157, 1161⫺1164, see also Africa

Experience 17, 49, 61, 67, 130, 184, 189, 249, 268⫺271, 287, 308⫺317, 332, 336, 346, 381, 405, 422⫺423, 427⫺428, 431, 446, 503, 525⫺528, 536⫺546, 571, 599, 613, 616, 648, 678⫺683, 686, 694, 696, 709⫺726, 755⫺779, 858, 866, 918, 957, 966, 980, 984⫺985, 1017, 1019, 1023, 1104⫺1105, 1163, 1177⫺1186, 1199⫺1201, 1203, 1218, 1262, 1279, 1348, 1370, 1376, 1450, 1550, 1613, 1705, 1718, 1721, 1727, 1751, 1762, 1767⫺1772, 1782⫺1783, 1786, 1811, 1814, 1845, 1849⫺1850, 1887, 1909, 1915, 1965⫺1991, 1996, 2005, 2009⫺2010, 2017⫺2023, 2029, 2040, 2042⫺2043, 2065⫺2066, 2068, 2074, 2076, 2140 ⫺ aesthetic experience 309, 422, 2054, 2056 ⫺ affective experience 2049⫺2056, 2083⫺2089, 2094⫺2104, 2115⫺2122 ⫺ bodily experience 93, 210, 403, 538, 727, 967, 1776, 2021, 2049, 2062, 2094, 2103 ⫺ embodied experience 676, 1185, 1230⫺1231, 1748, 1774, 2020, 2052⫺2057, 2062⫺2063, 2089, 2095⫺2096, 2122 ⫺ motor experience 160, 659, 661, 712, 1939, 2055 Expression ⫺ emotional expression 176, 210, 419, 443⫺444, 552, 559, 564, 613, 615⫺617, 906, 919, 1266, 1284, 1345, 1356, 1463, 1964, 1971 ⫺ facial expression 2⫺3, 57⫺58, 88, 94, 100, 104⫺106, 176, 210, 219⫺221, 223, 235, 240, 271, 276, 296, 331, 335⫺337, 340, 396⫺397, 404, 461, 475, 516, 551⫺552, 554⫺556, 560, 564, 577⫺578, 580, 585, 591⫺592, 599, 604, 611⫺613, 617, 621, 640⫺641, 651, 661⫺663, 671, 689, 716, 721, 906, 910, 913, 918⫺929, 933⫺936, 978, 999⫺1001, 1043, 1127⫺1128, 1130, 1189, 1208, 1244, 1259⫺1260, 1263, 1272, 1274⫺1276, 1290, 1296⫺1297, 1305, 1307⫺1308, 1310⫺1311, 1313, 1315, 1317, 1334⫺1336, 1342⫺1347, 1355, 1372, 1375, 1400, 1426⫺1430, 1434, 1450, 1455, 1476, 1485⫺1488, 1497, 1503, 1578⫺1579, 1584, 1703, 1736, 1758, 1826, 1848, 1850, 1907, 1909, 1943, 1945, 1947, 1949, 1954, 1956, 1960, 1963⫺1966, 1969, 1971⫺1978, 2009, 2013, 2052, 2064, 2071, 2073, 2087, 2095, 2097, 2117, 2134, 2138, 2150⫺2156, 2158, 2166⫺2167 ⫺ forms of expression 7, 21, 164, 183, 193, 195, 202, 295, 395, 529, 612, 653, 1769, 2071, 2113

⫺ gestural expression 158, 188, 306, 309, 313, 401, 661, 671, 717, 1061, 1255, 1287, 1295, 1512, 1515⫺1516, 1564, 1574, 1592, 1631, 1636⫺1637, 1718, 1770, 1781⫺1782, 1790, 1859, 1881, 1883⫺1884, 2064, 2115 ⫺ kinesic expression 11⫺12, 14⫺15, 661 ⫺ linguistic expression 21, 93, 95, 186, 191, 202, 516, 545, 671, 769⫺770, 1291, 1318, 1407, 1466, 1725, 1754, 1767, 1784, 1792, 1877, 1879 ⫺ metaphoric expression 67, 1713, 1725, 1770, 1776, 2102, 2117 ⫺ modes of expression 9, 11, 21, 61⫺62, 71, 429, 441, 656, 674, 711, 737, 948, 1305, 1592, 1596, 1692, 1767, 1974, see also Mode ⫺ verbal expression 7, 9, 11⫺12, 14, 17, 19⫺21, 92, 188, 210, 269, 293, 346, 351, 639, 1255⫺1256, 1270, 1292, 1305, 1317, 1337, 1346, 1371, 1395, 1476, 1503, 1595, 1690, 1770, 1792, 1795, 1799, 1803, 1806⫺1807, 1813, 1824, 1866, 2100 ⫺ vocal expression 23⫺24 Eye ⫺ eye contact 272, 293, 344, 621, 638, 1151, 1208, 1317, 1337, 1351⫺1352, 1357, 1397, 1914⫺1917, 1971 ⫺ eye dialect 999, 1048

F Face-to-face ⫺ face-to-face communication 1, 4, 70, 193, 195, 649, 651, 654, 656, 739, 810, 1261, 2062⫺2064, 2083⫺2085, 2095, 2108, 2114, 2116⫺2117, 2120⫺2123 ⫺ face-to-face dialogue 406, 822, 827, 830, 834, 1468, 1532 ⫺ face-to-face interaction 163, 189, 232⫺ 233, 567, 578, 591⫺592, 597, 1045, 1177, 1259, 1264, 1324, 1326, 1340, 1375, 1387, 1631, 1792, 1803, 1949, 2011, see also Interaction Family Resemblance 88, 1267, 1476, 1479, 1534, 1620⫺1622, 1626, 1718, see also Semantics Figure and Ground 1805 ⫺ foreground 275, 281⫺283, 942⫺943, 955⫺ 956, 1018, 1112, 1178, 1224, 1421, 1450, 1528, 1671⫺1672, 1689, 1691, 1740, 1773, 1805

Subject Index ⫺ foregrounding 67⫺68, 70, 679, 1114, 1397, 1536, 1662, 1671, 1771⫺1773 Film 858, 865, 869⫺870, 893, 2049⫺2057, 2062⫺2068, 2071, 2078, 2081⫺2085, 2088⫺ 2089, 2093⫺2097, 2105, 2107⫺2108 FMRI (Functional Magnetic Resonance Imaging) 176, 250, 256, 453, 496, 714, 843⫺844, 1031, 1843⫺1844 Form ⫺ gestural form 63, 65, 68, 184, 190, 215, 254, 685, 710, 714, 719, 722, 761, 768, 779, 1080, 1084, 1100, 1103⫺1104, 1111, 1114, 1285, 1476, 1512, 1520, 1526, 1560⫺1562, 1564, 1567, 1569, 1577, 1606, 1615, 1631, 1634, 1641, 1652, 1655⫺1656, 1669⫺1670, 1672, 1767, 1770, 1786, 1825 ⫺ gestural parameter 63, 141, 161⫺162, 333, 463, 589, 627, 632, 634⫺635, 638⫺639, 641, 644, 700, 716, 721, 760, 785, 874, 919, 942⫺ 943, 953, 955⫺957, 965, 968, 995, 1001, 1024, 1027, 1041, 1053, 1060, 1064, 1071, 1074, 1080, 1082⫺1083, 1093, 1103⫺1104, 1111, 1129, 1314, 1339, 1354, 1357, 1463, 1483⫺1484, 1486⫺1488, 1498, 1503, 1561, 1565⫺1567, 1569, 1611⫺1612, 1614⫺1615, 1644, 1652, 1655⫺1656, 1670, 1894, 1906⫺ 1907, 1909, 1933, 1950⫺1951, 1996, 2017, 2128⫺2129, 2165 French 20, 33, 86, 89, 96, 100, 136, 190, 202, 231, 291, 302, 305, 359, 372, 378, 380, 385⫺ 390, 399, 424⫺425, 427, 430, 443, 467, 594, 661⫺662, 674, 677, 736, 789, 1130, 1150⫺ 1152, 1154, 1158, 1177⫺1179, 1180, 1241⫺ 1242, 1246⫺1247, 1254, 1269⫺1270, 1276⫺ 1279, 1296, 1356, 1383⫺1384, 1387, 1414, 1427, 1441, 1474, 1476⫺1477, 1479, 1497, 1499, 1527, 1579, 1592, 1680, 1695⫺1696, 1736, 1752, 1782, 1801, 1851⫺1852, 1871⫺ 1872, 1900, 1927⫺1928, 1987⫺1988, 1993, 2004, 2144⫺2146, 2173, see also Europe Function ⫺ cognitive function 176, 182, 245, 343, 790, 845, 1023, 2019, see also Cognition ⫺ communicative function 24, 55, 86, 96, 157, 543, 611, 615, 804, 810, 845⫺846, 1001, 1152, 1156, 1302, 1312, 1459⫺1460, 1468, 1476, 1525, 1537, 1576, 1736, 1771, 1943, 1945, 2136, see also Communication ⫺ emotional function 173, 175, 178, 1024, 1031⫺1032 ⫺ forms and functions 1, 57, 90, 122, 234, 400⫺401, 405, 551, 622, 757, 775, 1060,

1063, 1069, 1083, 1592, 1600, 1605, 1642, 1662, 1712⫺1713, 1727, 1736 ⫺ function of gesture 3, 331, 839, 1288, 1360, 1458, 1537, 1560, 1770, 1960 ⫺ interactive function 87, 287, 294, 399, 828⫺829, 843, 1024, 1043, 1074, 1302, 1307, 1336, 1393, 1457⫺1458, 1467, 1479, 1498, 1713, see also Interaction ⫺ pragmatic function 16, 93, 202, 212⫺213, 320, 327⫺328, 401, 739, 756, 758, 761, 1101, 1107, 1112⫺1113, 1255⫺1257, 1377, 1384, 1395, 1474, 1478⫺1479, 1492, 1531⫺1533, 1537, 1540⫺1542, 1544⫺1550, 1553⫺1554, 1559⫺1560, 1563⫺1564, 1575⫺1579, 1584⫺1588, 1592, 1596⫺1598, 1601, 1606, 1615, 1636, 1646, 1663, 1717, 1760, 1768, 1771, 1826, 1954, 1988, 2010, 2139 ⫺ semantic function 104, 204, 1337 ⫺ social functioning 1975⫺1976, 1978 ⫺ syntactic function 65⫺66, 119, 709, 733, 735⫺736, 745, 1101, 1109, 1337, 1346

G Gaze 3, 7, 49, 85, 103, 159, 176, 219⫺221, 223, 244, 254⫺255, 270, 276, 282, 289, 292, 294⫺295, 306, 318, 354, 404, 505, 565⫺567, 574, 577⫺583, 585, 593, 596⫺597, 601, 603⫺604, 618, 627, 632, 640, 655, 662, 667, 670, 681, 691⫺703, 716, 772, 844, 876, 912, 919, 935, 970, 988, 999, 1001, 1011, 1028, 1038⫺1051, 1053⫺1054, 1126, 1128⫺1130, 1133, 1155, 1171⫺1179, 1183, 1194, 1196⫺ 1198, 1200⫺1203, 1212, 1220⫺1221, 1254, 1260, 1296, 1304⫺1305, 1307, 1320, 1324⫺ 1340, 1343⫺1345, 1350, 1365, 1369, 1372, 1375, 1385, 1397, 1426⫺1430, 1434, 1436⫺ 1437, 1449, 1482, 1498, 1524, 1549, 1578⫺ 1579, 1585, 1734, 1737, 1749, 1804, 1824⫺ 1829, 1848, 1852, 1945, 1947, 1949⫺1954, 1977, 2009⫺2010, 2012⫺2014, 2045, 2050⫺ 2051, 2071, 2117, 2151, 2153, 2163 ⫺ averted gaze 842, 936, 1354, 1357, 1585⫺ 1586, 1917 ⫺ deictic gaze 644 ⫺ gaze shift 109, 193, 280, 662, 1015, 1326, 1329⫺1330 ⫺ mutual gaze 107, 221, 264, 1175, 1208⫺ 1209, 1326⫺1327, 1428 German Expression Psychology 551⫺556, 560

2202 Gesamtvorstellung 1791, 1796⫺1797, 1800 Gestalt 60, 212, 215, 221, 223, 294, 554, 557, 561, 683, 727, 755⫺756, 759, 769, 777⫺779, 1100, 1104⫺1105, 1578, 1596, 1605, 1670, 1687, 1693, 1699, 1712, 1714⫺1715, 1718, 1721⫺1722, 1724, 1727, 1739, 1747, 1749⫺ 1751, 1753⫺1754, 1760, 1762, 1811, 1818, 1994, 2021, 2081, 2083⫺2085, 2087⫺2089, 2095, 2101⫺2102, 2114⫺2117, 2119⫺2120 Gesture ⫺ co-speech gesture 129, 187, 316, 660, 662, 668, 690, 693, 748, 837⫺839, 841, 843⫺845, 847, 849⫺851, 976, 1147, 1187, 1190, 1209⫺ 1210, 1247, 1419⫺1420, 1466⫺1468, 1470, 1576, 1650, 1747, 1750, 1781, 1786⫺1788, 1794, 1799, 2003, 2133⫺2136, 2139, 2143, 2145 ⫺ coverbal gesture 204, 394, 399, 405, 757, 759⫺760, 804⫺805, 1011⫺1013, 1019, 1270, 1476, 1536, 1715, 1747⫺1748 ⫺ gesture acquisition 96, 1259, 1261, 1960 ⫺ gesture code 485, 1413⫺1414, 1418 ⫺ gesture comprehension 518, 839, 841, 843, 1736, 1891, 1903, 1924 ⫺ gesture dictionary 400, 1503 ⫺ gesture family 63, 159, 401⫺402, 710, 717⫺719, 727, 1094, 1505, 1510, 1531⫺ 1532, 1534, 1537, 1543, 1549, 1565⫺1566, 1570, 1576, 1579, 1585, 1587⫺1588, 1592⫺ 1593, 1600, 1602, 1619, 1626, 1631⫺1637, 1692, 1717, 2136⫺2137 ⫺ gesture interpretation 157, 756, 1423, 1722 ⫺ gesture phrase 11, 30, 138, 153, 595, 722⫺ 723, 1012⫺1013, 1061, 1362⫺1363, 1421, 1555, 1641, 2136 ⫺ gesture production 157, 159, 164, 168⫺ 169, 171, 173, 175⫺177, 317⫺318, 399, 406, 518⫺520, 801, 808, 814, 816⫺817, 837, 844, 848, 850⫺851, 1008⫺1009, 1129, 1189, 1191, 1292, 1379, 1460, 1632, 1713, 1719, 1736, 1761, 1790, 1833⫺1835, 1838, 1870, 1872, 1891⫺1894, 1900⫺1903, 1936⫺1938, 1950, 1954, 2003 ⫺ gesture recognition 158, 1419, 1422⫺1424, 1503 ⫺ gesture space 50, 63, 174, 177, 248, 312, 315, 317, 677, 701, 716⫺718, 722⫺726, 746, 756, 760, 768, 772⫺775, 777, 1032, 1041⫺ 1042, 1062, 1064, 1070⫺1071, 1074⫺1075, 1080, 1082⫺1084, 1086⫺1088, 1091⫺1093, 1104, 1166, 1168, 1283, 1512, 1517, 1519, 1521, 1528, 1532, 1543, 1562, 1565, 1568⫺

Indices 1570, 1597⫺1598, 1606⫺1607, 1610⫺1612, 1615, 1634, 1645⫺1646, 1655⫺1657, 1690⫺ 1691, 1716, 1723, 1733, 1741, 1755, 1758, 1771, 1795, 1817, 2031, 2033 ⫺ gesture type 82, 85, 93, 168⫺170, 173⫺ 178, 186, 190, 700, 840, 1009, 1044, 1217, 1368, 1393⫺1394, 1419, 1453, 1455, 1457⫺ 1459, 1462, 1541, 1544⫺1546, 1559⫺1561, 1569⫺1570, 1662⫺1663, 1668, 1672, 1900, 1959, 2136, see also Gesture Category ⫺ gesture unit 10⫺11, 484, 595, 722⫺725, 727, 742, 744, 761, 1061, 1101⫺1103, 1106⫺ 1107, 1290, 1361, 1371, 1378, 1565⫺1566, 1653⫺1657, 2030, 2117 ⫺ gesture use 22, 87⫺88, 90, 195, 395, 794, 822, 827, 846⫺847, 849, 851, 874, 1147, 1152, 1154, 1158, 1191, 1216, 1427, 1430, 1554, 1672, 1770, 1776, 1899, 1902, 1956⫺ 1960 ⫺ gesture variant 665, 667, 670⫺671, 673, 1535 ⫺ gesture-sign-interface 2150 Gesture Category ⫺ adaptor 9, 93, 396, 881, 885⫺886, 889, 906⫺907, 912⫺913, 1102, 1246⫺1247, 1314, 1336, 1338, 1342⫺1343, 1347, 1364, 1375, 1394, 1407, 1434, 1455⫺1456, 1463⫺1467, 1915⫺1917 ⫺ autonomous gesture 83, 635, 1266, 1287, 1474⫺1475, 1482, 1893⫺1894 ⫺ baton 114, 169, 173⫺176, 178, 1029, 1334⫺1335, 1337, 1454⫺1455, 1458⫺1459, 1517, 1532, 1535 ⫺ beat 50, 114, 117, 159, 173⫺174, 176, 187, 336, 351, 519, 605, 631⫺632, 760, 775, 805, 807, 809⫺810, 812, 1009, 1029, 1173, 1202, 1234⫺1235, 1238, 1246, 1254, 1340, 1368, 1370, 1383⫺1386, 1394, 1396⫺1397, 1428, 1435⫺1437, 1456⫺1459, 1463⫺1465, 1482, 1532⫺1533, 1546, 1549, 1554, 1734, 1749, 1871, 1900⫺1901, 1951⫺1954, 1995, 2018, 2145 ⫺ contact gesture 1502⫺1510, 1636 ⫺ conversational gesture 290, 582, 684, 834, 906⫺907, 1042, 1393, 1458, 1562, 1893, 1951 ⫺ emblem 4, 12, 32, 64, 82⫺96, 169, 174⫺ 175, 214, 396, 483, 498, 529⫺530, 631, 661, 696, 804⫺805, 881, 924, 936, 1009, 1028⫺ 1030, 1045, 1113, 1147⫺1148, 1150⫺1151, 1156, 1234⫺1235, 1245⫺1248, 1266⫺1270, 1276, 1284⫺1288, 1290, 1296, 1312, 1314,

Subject Index 1335⫺1337, 1342⫺1343, 1347, 1392, 1394, 1407, 1423, 1427⫺1428, 1434, 1437, 1454⫺ 1456, 1458⫺1460, 1463, 1465, 1474⫺1479, 1482, 1496, 1499, 1512⫺1513, 1515, 1524, 1531⫺1532, 1534⫺1535, 1541⫺1542, 1544, 1546⫺1548, 1554, 1559⫺1560, 1563, 1569⫺ 1570, 1575⫺1576, 1596, 1615, 1623, 1637, 1736, 1771, 1834⫺1835, 1869, 1900, 1991, 1995, 1997, 2027⫺2028 ⫺ emblematic gesture 63, 83⫺84, 87, 173, 184, 230, 236, 361, 614, 735, 747, 1030, 1207, 1212, 1234, 1245, 1266⫺1267, 1269, 1287, 1290, 1298, 1392, 1428, 1474, 1481, 1503, 1526, 1534, 1536, 1541⫺1542, 1569, 1576, 1586, 1588, 1619, 1623, 1626, 1631, 1650, 1663, 1668, 1672, 1711, 1859, 1951, 1993, 2145⫺2146 ⫺ facial gesture 159, 458, 461, 469, 476, 789, 1385⫺1386, 1956, 2158, 2174 ⫺ hand gesture 82, 94, 169, 171⫺172, 175, 187, 193, 210, 269, 307⫺310, 312, 334, 337, 339, 341, 351, 405⫺406, 435, 484, 515, 530, 616, 671, 675, 679⫺681, 684⫺685, 689⫺ 690, 693, 696⫺697, 699⫺700, 716⫺717, 741, 758, 765, 772⫺773, 779, 805, 834, 869, 873⫺874, 877, 880⫺881, 886, 905, 907, 911, 934, 938, 1082, 1157, 1165, 1168, 1174, 1182, 1189, 1227, 1244⫺1245, 1256⫺1257, 1268, 1312, 1336, 1345, 1347, 1354, 1357, 1379, 1394, 1396⫺1398, 1400, 1406⫺1408, 1422, 1424, 1426⫺1427, 1429⫺1430, 1450, 1462⫺1464, 1466⫺1467, 1469, 1498, 1525, 1532⫺1533, 1535, 1566⫺1568, 1623, 1634, 1756, 1774, 1809, 1814, 1894, 1936, 1949, 2085, 2117 ⫺ head gesture 663, 1155, 1334, 1496⫺1498, see also Head ⫺ iconic gesture 30, 45, 51, 65, 85, 114⫺116, 119⫺121, 173, 194, 236, 302, 484, 492, 513, 518, 520, 523, 525⫺530, 533, 542, 544, 567, 616, 632, 640, 644, 682, 697, 714, 747⫺750, 763, 765, 804⫺809, 811⫺814, 816⫺817, 1002, 1186, 1194, 1207, 1209, 1216, 1234, 1283, 1285⫺1288, 1379, 1407, 1423⫺1424, 1428, 1456, 1463, 1483, 1545⫺1546, 1554, 1560, 1672, 1691, 1713⫺1714, 1716, 1719, 1723, 1726, 1733⫺1738, 1740⫺1741, 1797, 1800⫺1801, 1803, 1818, 1824, 1883, 1894, 1899⫺1901, 1903, 1922, 1924, 1926, 1928, 1937, 1940, 1951⫺1954, 1995, 2032, 2154, see also Iconicity ⫺ ideographic 
gesture 1335, 1454

2203 ⫺ illustrator 396, 616, 881, 885⫺887, 1045, 1314, 1335⫺1337, 1363, 1376, 1386, 1393⫺ 1394, 1434, 1437, 1455⫺1456, 1458, 1463, 1465, 1476, 1478, 1532, 1739, 1915, 1918 ⫺ lexical gesture 91⫺92, 157⫺158, 816⫺817, 1148⫺1149, 1217⫺1218, 1224, 1736 ⫺ metaphoric gesture 5, 50, 67, 94, 100, 186, 188⫺189, 206, 208⫺209, 403, 764, 776, 1184, 1254, 1423, 1428, 1456⫺1457, 1466, 1546⫺1548, 1550, 1559⫺1560, 1567, 1668, 1672, 1726, 1740, 1767⫺1768, 1774⫺1776, 1995, 2035, see also Metaphor ⫺ natural gesture 431, 840, 1261, 1446 ⫺ object-related gesture 294, 717, 1789, 1795, 1800 ⫺ performative gesture 213⫺214, 723, 1255, 1540, 1543⫺1544, 1546, 1552⫺1555, 1859, see also Speech Act ⫺ pragmatic gesture 214, 401⫺402, 595, 775, 1114, 1170, 1254⫺1255, 1257, 1285, 1407, 1427, 1531⫺1537, 1544, 1547⫺1548, 1553⫺ 1554, 1559⫺1561, 1576, 1615, 1692, 1768, 1871, see also Pragmatics ⫺ quotable gesture 12⫺13, 24, 82⫺85, 87, 89, 91, 93, 95⫺96, 1147⫺1150, 1152, 1154⫺ 1156, 1260, 1460, 1474⫺1475, 1477⫺1479, 1482, 1512, 1524, 1526, 1534, 1537, 1541, 1568, 1570, 1575, 1596, 1869 ⫺ recurrent gesture 63⫺64, 87, 93⫺94, 402, 711, 719⫺721, 726⫺727, 735, 777, 1084, 1093, 1100, 1113, 1287, 1532⫺1533, 1540⫺ 1544, 1547⫺1548, 1554⫺1555, 1559⫺1570, 1576⫺1588, 1592, 1596⫺1599, 1605⫺1606, 1611, 1615, 1626, 1635, 1637, 1668, 1717, 1741 ⫺ referential gesture 66, 188⫺189, 214, 490, 595, 669, 673, 760, 1113, 1254⫺1255, 1285, 1533, 1542, 1554, 1663⫺1664, 1668, 1671, 1713, 1733, 1738⫺1739, 1770, 1859 ⫺ regulator 396, 1314, 1316, 1335⫺1336, 1338, 1347, 1363, 1393⫺1394, 1434, 1436⫺ 1437, 1455⫺1456, 1459, 2137 ⫺ representational gesture 159, 163, 174, 176, 182, 189, 191, 249, 256, 522, 659⫺660, 662⫺664, 765, 1150, 1186, 1238, 1376, 1428, 1466, 1546⫺1548, 1668, 1733, 1735, 1737, 1739, 1741, 1833⫺1835, 1851, 1871, 1903, 1939, 1952, 2000⫺2004, see also Representation ⫺ speech-replacing gesture 1503, 1534 ⫺ symbolic gesture 8, 12, 83⫺84, 174, 318, 546, 627, 632⫺636, 638, 1335, 1347, 1419,


1423, 1454, 1474, 1482⫺1485, 1487⫺1489, 1491⫺1493, 1495, 1545⫺1548, 1650, 1663, 1738, 1835, 1848, 1850⫺1853, 1857, 1952 ⫺ taboo gesture 1523⫺1526, see also Taboo ⫺ tactile gesture 459, 721, 1094, 1956, 1958 ⫺ temporal gesture 1184, 1191, 1469, 1781⫺ 1786, see also Temporality Grammaticalization 64, 68, 131, 202, 471, 580, 727, 751, 762, 788, 1184, 1562, 1567⫺ 1570, 1605, 1614⫺1615, 1626, 1630, 1637, 1693, 1714, 1727, 1741, 1768, 2133⫺2134, 2137, 2139⫺2141, 2143⫺2146, 2150, 2155, 2158, 2170⫺2171, 2173 Grasping 146, 162, 245, 255, 314, 452, 457, 467, 473⫺474, 499, 517, 526, 558, 682, 806, 1170, 1173, 1516⫺1517, 1519⫺1520, 1541⫺ 1542, 1642, 1691⫺1692, 1751⫺1752, 1772, 1887, 1891, 1934, 1945, 2043, 2056, 2063, 2100 Groupe μ 324 Growth Point 21, 25, 32⫺33, 40, 45⫺47, 49, 51⫺52, 69, 135⫺143, 145⫺149, 151⫺154, 161, 241, 398, 481, 484, 486⫺489, 494⫺501, 503⫺504, 506⫺508, 611, 695, 809, 811, 1008, 1177, 1546, 1734, 1791, 1796, 1800⫺ 1801, 1885, 2026, 2030, 2038⫺2041, 2043, 2046

H Hand ⫺ affordance 240, 242, 245⫺247, 284, 455, 514, 538, 680, 682, 757, 1434, 1691, 1715, 1734, 1748⫺1749, 2000, 2005, 2023, 2042, see also Movement ⫺ hand action 9⫺11, 14⫺15, 20, 23, 240, 245⫺246 ⫺ hand movement 2⫺3, 10, 57, 202, 206, 210, 241, 249, 269, 311, 325⫺326, 341, 451, 472, 492, 503, 526, 597, 632, 667, 689, 691, 693, 697, 699, 701⫺702, 708, 710, 757, 770, 774, 797, 822, 844, 870⫺872, 876, 906, 1023⫺1025, 1027, 1029, 1031⫺1033, 1044, 1200⫺1201, 1231, 1234⫺1235, 1297, 1383⫺ 1384, 1398, 1407, 1455, 1462, 1464, 1467, 1469, 1482, 1653, 1687, 1689, 1698, 1733, 1739, 1749, 1782, 1804, 1861, 1902⫺1903, 1908, 1917, 1993, 2027, 2046, 2114, 2172 ⫺ hand orientation 787, 1028, see also Orientation

⫺ hand position 152, 1364, 1625, 1953, see also Position ⫺ hand shape 13⫺15, 46, 63, 86, 138, 146, 152, 185, 307, 309⫺310, 312, 315, 317, 402, 702, 710, 716⫺717, 720⫺721, 757, 760, 775, 864, 1012, 1028, 1041, 1043⫺1044, 1060, 1062, 1067, 1071, 1074, 1080, 1082⫺1086, 1101, 1104, 1194, 1212, 1223, 1286⫺1287, 1376, 1379, 1407, 1482, 1512⫺1514, 1517⫺ 1521, 1535, 1541, 1543, 1565, 1567, 1595⫺ 1596, 1600, 1611, 1631, 1633⫺1634, 1645, 1670⫺1673, 1684, 1697, 1733, 1754, 1761, 1771, 1795, 1824⫺1825, 1889, 1894, 1953, 2129⫺2130, 2165⫺2166 ⫺ open palm 214, 758, 765, 828, 1567, 1582, 1633, 1715, 1752⫺1756, 1768, 1824⫺1829, 2136 ⫺ palm up open hand 16, 64, 402, 720, 723, 1082, 1256, 1394, 1396⫺1397, 1531⫺1535, 1543⫺1545, 1547, 1552⫺1553, 1555, 1560⫺ 1562, 1565⫺1568, 1579, 1586, 1593, 1633⫺ 1635, 1768, 1771, 2136, 2138 ⫺ purse hand 1151, 1541, 1560 Head ⫺ head movement 9, 105⫺106, 159, 193, 219⫺220, 223, 240, 275, 475, 566, 577⫺578, 597, 604, 632, 640, 662, 670, 716, 920, 935⫺ 936, 938⫺939, 1033, 1155, 1201, 1242⫺ 1243, 1268, 1352, 1383, 1386, 1396, 1429⫺ 1430, 1436, 1454, 1478, 1496, 1498, 1749, 1751, 1850, 1915, 1917, 1947, 1949, 1953, 2134, 2140⫺2141, 2151 ⫺ head nod 106, 109⫺110, 338, 345, 599, 604, 618, 631, 934, 936, 1174, 1220⫺1224, 1255, 1351, 1371, 1377, 1383, 1396, 1428, 1497, 1828, 1894, 2140, 2145, 2151⫺2152, 2156⫺2157 ⫺ head shake 15, 106, 604, 670, 1377, 1486, 1496⫺1500, 1636, 1848⫺1853

I Iconicity ⫺ iconic principle 1279 ⫺ iconic relation 524, 1200, 1545 ⫺ diagrammatic iconicity 741, 761, 769, 1620⫺1622, 1626, 1714, 1724⫺1725 ⫺ metaphor iconicity 769, 1713, 1725⫺1726, see also Metaphor ⫺ primary iconicity 1279 ⫺ secondary iconicity 1723, 1996⫺1997

Subject Index Ideophone 1157, 1185⫺1191 Image ⫺ complex image 140, 1670 ⫺ image schema 68, 130, 189⫺191, 403, 535, 727, 765, 770, 777⫺778, 1101, 1103⫺1105, 1111⫺1112, 1227, 1229⫺1232, 1279, 1562, 1578, 1612⫺1613, 1634, 1670, 1719, 1721, 1753⫺1754, 1807, 1811, 1818, see also Schema ⫺ image schemata 755⫺756, 758⫺759, 777⫺ 779, 1186, 1727, see also Schema ⫺ mental image 161, 325, 1423, 1717, 1793, 1798, 2018 Imagery 20, 29⫺30, 32⫺33, 39⫺40, 43, 46⫺ 49, 51⫺52, 61, 135⫺141, 148, 152⫺154, 161, 186, 190⫺191, 195, 240⫺241, 310, 313, 317⫺319, 373, 386, 403, 481⫺482, 486, 488⫺490, 492, 495⫺499, 503, 505⫺506, 512⫺513, 517⫺519, 528, 530, 673, 675, 682⫺683, 710, 755⫺756, 765, 775⫺776, 814, 845, 919, 1012, 1032, 1041, 1185, 1226⫺1227, 1231, 1641, 1718, 1723, 1733⫺ 1735, 1783, 1786, 1790⫺1791, 1893, 1901, 1939, 2003, 2029⫺2030, 2032, 2035, 2038⫺ 2040, 2093⫺2094, 2098⫺2102, 2108 Imitation ⫺ complex 457⫺458, 462, 1850, 1886 ⫺ simple 458, 460, 462, 1888 Index Finger 13, 119, 144⫺146, 208, 212⫺ 213, 244, 248, 255, 311, 316⫺317, 460, 634, 636⫺638, 665⫺666, 700, 712, 720⫺721, 741, 773⫺774, 787, 805, 876⫺877, 1012, 1043, 1155, 1161⫺1162, 1166, 1168, 1196, 1198, 1202, 1209, 1234, 1245, 1268⫺1269, 1286⫺1287, 1347, 1397, 1407, 1422, 1485, 1512, 1514, 1535, 1543, 1547, 1549, 1567, 1580, 1583, 1623, 1625, 1634, 1644⫺1645, 1669, 1691, 1694, 1720, 1724, 1737⫺1738, 1748, 1753, 1755, 1757⫺1758, 1771, 1782, 1786, 1807, 1809, 1824⫺1829, 1859, 1861, 1863⫺1864, 1883, 1889⫺1890, 2001 Indexicality 346, 577, 579, 698, 714, 755, 759, 761, 766, 769, 775⫺776, 779, 987, 1288, 1696, 1712, 1717, 1732, 1734, 1755⫺ 1756, 1758, 1805, 1992, 1994⫺1995 Inference 504, 553⫺554, 564, 629, 631, 812, 1001, 1013, 1403⫺1405, 1550, 1712, 1718, 1727, 1748, 1761⫺1762, 1775, 1901, 1927, 2042 ⫺ inferencing 759, 761, 771, 778, 1747, 1755, 1762

2205 Integration ⫺ conceptual integration 131, 182⫺183, 186, 193, 258, 2105 ⫺ functional integration 65, 205, 709, 736, 739⫺740, 745 ⫺ gesture-speech integration 843⫺844, 1424 ⫺ integration of gesture 2, 64⫺65, 68, 709, 750, 1099, 1109, 1663⫺1665, 1667⫺1669, 1671, 1673, 2134 ⫺ multimodal integration 745, 750, 1424, 1658, 1737, see also Multimodality ⫺ phonological integration 2144 ⫺ semantic integration 843, 1424, 1663, 1872, see also Semantics ⫺ speech-gesture integration 1406, 1922, 1926 ⫺ syntactic integration 64, 68, 748⫺750, 1188, 1662⫺1665, 1668, 1797, see also Syntax ⫺ systematic integration 1364, 2153, 2157 Intentional 193, 229, 268⫺269, 383, 451, 461⫺462, 466⫺467, 490⫺491, 534, 536⫺ 537, 542⫺543, 559, 610, 614, 618, 641, 694, 708, 814, 1312, 1319, 1403, 1524, 1542, 1658, 1704, 1850, 1957, 1960, 2042, 2065 Interaction ⫺ classroom interaction 1426⫺1428, 1430⫺ 1431 ⫺ conversational interaction 100⫺105, 107, 109, 528, 577, 589⫺591, 593, 595, 597⫺599, 601, 603, 605, 1114, 1212, 1324⫺1325, 1327⫺1329, 1382, 1462, 2013, 2020, see also Conversation ⫺ everyday interaction 22, 90, 341, 590, 975, 1047, 1749, 1857 ⫺ human-computer interaction 1177, 1420, see also Computer ⫺ human-machine interaction 929, 1943, 1949 ⫺ human-robot interaction 1945⫺1946, 1949 ⫺ interaction space 860, 865, 1306, 1314⫺ 1315, 1318 ⫺ natural interaction 580, 843, 1047, 1216 ⫺ participation management 1301, 1303, 1305, 1307, 1428 ⫺ social interaction 107, 113, 218, 220, 223, 229, 253⫺254, 258⫺259, 261⫺265, 267, 269, 271⫺272, 406, 444, 455, 504, 526⫺527, 564⫺566, 569⫺574, 577⫺580, 582, 584⫺ 585, 589, 592, 617, 676⫺677, 686, 694⫺695, 702, 761, 822, 828, 834, 850⫺851, 904, 982, 987⫺988, 1170, 1207, 1304, 1306, 1315,


1318, 1326, 1342⫺1343, 1345, 1347, 1350, 1352, 1354⫺1356, 1405, 1422, 1466, 1474, 1479, 1612, 1749, 1803, 1850, 1855, 1964, 1969, 1972, 1974, 1977⫺1978, 2008, 2011⫺ 2014, 2019, 2097 ⫺ verbal interaction 590, 849, 994, 999⫺ 1000, 1033, 1306, 1318, 2010 Intercultural Communication 1151, 1264, 1320, 1451 Interjection 86, 94, 386, 633, 655, 998, 1189, 1255, 1476, 1506⫺1507, 1599, 1983⫺1988 Interpersonal ⫺ interpersonal adaptation 264, 272 ⫺ interpersonal attitude 1261, 1311, 1313, 1325, 1342⫺1343 Intersubjectivity 1, 252, 263, 546, 569, 572, 676, 683, 1289, 1340, 2023, 2051⫺2053, 2071 Isomorphism 385, 1387, 1714, 1725, 1733, 1739

J Jewish 5, 57, 87, 320⫺323, 325⫺328, 984, 1170, 1246, 1248, 1454, 1492 Joint 290, 344⫺345, 862, 869, 876, 944, 1039, 1086, 1420⫺1422, 1514, 1932, 1946, 1950

K Kinesics 57, 182, 232, 287⫺291, 293⫺294, 296⫺297, 397, 611, 622, 985, 988, 1000, 1024⫺1025, 1176, 1206⫺1207

L Language ⫺ first language 379, 383, 486, 1047, 1288, 1859, 1868, 1869, 1876, 1884, 1983 ⫺ first language acquisition 1047, 1859, 1866, 1983 ⫺ interlanguage 1871, 1877 ⫺ language acquisition 125, 160, 334, 507⫺ 508, 787, 1047, 1212, 1259, 1262, 1264, 1280, 1308, 1382, 1386, 1392, 1499, 1718, 1736, 1848⫺1850, 1857⫺1860, 1864, 1866, 1869⫺1871, 1876, 1887, 1895, 1983, 2158

⫺ language comprehension 156, 160, 171, 505, 512, 516, 520, 533, 1891, 1923⫺1924, 2000⫺2001 ⫺ language faculty 378⫺380, 733, 735, 737⫺ 738, 742⫺743, 745, 751, 1651, 1658⫺1659 ⫺ language learning 114, 505, 794⫺796, 1427, 1842, 1878 ⫺ language of affects and emotions 419, see also Emotion ⫺ language of thought 157, 161, 515 ⫺ language production 168, 171⫺172, 787, 811, 1008, 1379, 1466, 1892, 1898, 1903, 1938, 2001⫺2003, 2127, 2133 ⫺ origin of language 23, 71, 203, 215, 378, 380⫺382, 387, 390, 457, 476, 480⫺484, 497⫺499, 507, 1673, 1804, 2146 ⫺ natural language 56, 88, 96, 335, 378, 385, 467, 504, 681, 727, 743, 1008⫺1009, 1289⫺ 1291, 1297, 1446, 2144 ⫺ second language 787, 1010, 1382, 1386, 1426⫺1427, 1869⫺1871, 1876, 1878, 1885 ⫺ spatial language 187, 1210⫺1211, 1685, see also Space ⫺ universal language 5, 56, 71, 100, 126, 364⫺365, 367, 369⫺374, 378⫺379, 388⫺ 389, 420, 428, 431 Learner 254, 491, 793, 795⫺796, 798, 801, 829, 1263, 1427⫺1430, 1869, 1871, 1876⫺ 1877, 1879⫺1885, 1931 Lexicalization 86, 1465, 1664, 1693, 2133, 2144⫺2146, 2155, 2170⫺2171 Lexicon 85, 113, 115⫺116, 130, 158⫺159, 162, 191, 202, 369, 372, 386⫺387, 455⫺457, 459, 463, 515, 530, 552, 627, 632⫺635, 638⫺639, 644, 679⫺681, 735, 761, 804, 812⫺817, 881, 1043, 1134, 1290, 1295, 1298, 1459, 1468, 1479, 1491, 1588, 1621, 1631, 1637, 1650, 1714, 1772, 1872, 1952, 1954, 1988, 2127, 2129⫺2130, 2134, 2155 ⫺ lexical access 157, 839, 846, 1466, 1900⫺ 1903 Linearity 21, 25, 61, 309, 311, 965, 1126, 1130, 1470, 1650⫺1651 Lingua Franca 1183, 1190, 1216, 1413 Linguistic diversity 1841, 1844 Linguistics ⫺ cognitive linguistics 5, 59, 67, 88, 125, 128⫺131, 182⫺195, 393, 403, 497, 516, 533, 676, 755, 1081, 1226⫺1227, 1229, 1231, 1632, 1695, 1719, 1749, 1769, 1771, 1773, 1781, 1801, 1807, 1812, 1818, 2017⫺2018, 2020, 2022, 2063, 2094, 2097, 2113⫺2114

⫺ corpus linguistics 67, 2104 ⫺ generative linguistics 741, 786, 1620 ⫺ interactional linguistics 235, 580, 589⫺590, 593, 1303, 1365 ⫺ sign language linguistics 1125, 1766 ⫺ structural linguistics 58, 395, 397, 741, 1003, 1303, 1306

M Material carrier 61, 154, 528, 2038, 2040⫺ 2042 Meaning ⫺ abstract meaning 388, 399, 1606⫺1607, 1646⫺1647, see also Abstraction ⫺ composite meaning 139, 693, 698, 2029, 2100 ⫺ construal of meaning 1378, 1573, 1697, 1855 ⫺ conventional meaning 119, 261, 699, 702, 805, 1312, 1503 ⫺ core meaning 63, 152, 635 ⫺ meaning derivation 210, 659⫺661, 665⫺ 666, 668, 670, 673, 713, 718, 1106, 1600, 1633, 2140 ⫺ form-meaning 64, 188, 210, 214, 402, 483, 690, 696, 701, 717⫺719, 735, 740, 1152, 1376, 1483, 1512, 1519, 1547, 1554, 1559, 1564, 1569, 1575⫺1576, 1596, 1619, 1621⫺ 1623, 1716, 1809, 2028 ⫺ gestural meaning 61⫺64, 203, 400, 405, 513, 708⫺712, 715, 718, 723, 725⫺726, 1041, 1081⫺1083, 1100, 1102, 1105⫺1106, 1108⫺1109, 1113⫺1114, 1565, 1567, 1578, 1592, 1600, 1606, 1644, 1646⫺1647, 1687, 1698, 1768 ⫺ interactional meaning 572, 591 ⫺ literal meaning 261, 634⫺635, 638⫺639, 641⫺643, 1490, 1493, 1773, 1988 ⫺ meaning construction 62, 69, 307, 309, 405, 709, 715, 717, 742, 759, 761, 767⫺768, 770, 779, 1081, 1100, 1105, 1112⫺1114, 1387, 1545, 1578, 1626, 1725, 1747, 1749, 1759, 1772, 2000, 2017, 2021, 2105, 2107⫺ 2108 ⫺ metaphoric meaning 67, 70, 710, 715⫺ 716, 759, 1766, 1771⫺1776, 2094⫺2107, 2117, see also Metaphor ⫺ meaning making 2, 62, 193, 256, 308, 400, 405, 520, 677, 683, 775, 942, 949, 956, 1114,

2207 1176, 1293, 1566, 1719, 1727, 1776, 2000, 2002, 2005, 2022, 2068, 2093⫺2108, 2122 ⫺ multimodal meaning 252, 775, 1114, 1662, 1670 ⫺ referential meaning 13, 114, 213, 1254, 1454, 1532, 1606 ⫺ utterance meaning 9, 692, 699, 1109⫺ 1110, 1576, 1641, 1646⫺1647, 1663, 1860 ⫺ word meaning 20, 158, 183, 697, 717, 1772, 1803, 1848 Media ⫺ audio-visual media 4, 2049⫺2051, 2053⫺ 2057, 2062⫺2064, 2068, 2078, 2081⫺2083, 2085, 2087⫺2089, 2093⫺2097, 2099, 2101, 2103, 2105, 2107⫺2108 ⫺ media reception 2062, 2089, 2093⫺2094, 2096⫺2100, 2102, 2104, 2107 Memory 46, 142, 154, 157⫺160, 162, 168, 233, 236, 243, 245, 256, 271, 321⫺322, 325⫺326, 330, 343, 368, 370, 381⫺383, 406, 438, 445, 455, 475, 502, 514, 517, 533, 545, 570⫺571, 629, 632, 634, 636, 638⫺639, 644, 800⫺801, 816, 841, 845, 851, 911⫺912, 919, 970, 975, 980, 983, 995, 1010, 1178, 1262, 1342, 1344, 1375, 1719⫺1721, 1724, 1790⫺ 1791, 1799, 1833, 1837⫺1838, 1845, 1869, 1890, 1892⫺1893, 1915⫺1916, 1918⫺1919, 1922⫺1924, 1926, 1928, 1931, 1936⫺1940, 2002⫺2003, 2008, 2013⫺2014, 2028, 2030 ⫺ working memory 157⫺159, 245, 800⫺801, 845, 995, 1178, 1790⫺1791, 1833, 1837⫺ 1838, 1869, 1893, 1916, 1918⫺1919, 1931, 1936⫺1940 Metaphor ⫺ conceptual metaphor 183, 186⫺188, 193, 210, 517, 535, 764, 767, 1541, 1546, 1553⫺ 1555, 1719, 1726, 1769⫺1770, 1772, 1776, 1781⫺1783, 1785, 1842, 2017⫺2018, 2095, 2105 ⫺ gestural metaphor 1268, 1725, 1776, see also Gesture Category ⫺ metaphor emergence 2102⫺2103, 2107 ⫺ metaphoricity 67⫺68, 70, 709, 760, 769, 1340, 1545⫺1546, 1734, 1762, 1768, 1772⫺ 1773, 2021, 2040, 2093, 2096, 2104⫺2105, 2107, 2118 ⫺ multimodal metaphor 67⫺68, 764, 770, 1725, 1767, 1773, 2021, 2089, 2094, 2097⫺ 2098, 2101, 2107⫺2108, 2122 ⫺ spatial metaphor 187, 1782, 1845, 2005, see also Space Metonymy

2208 ⫺ conceptual metonymy 68, 183, 188⫺189, 210, 403, 535, 1785 ⫺ external metonymy 68, 766⫺767, 769⫺ 774, 776, 1564, 1747, 1750, 1754⫺1757, 1760⫺1762, 1768 ⫺ internal metonymy 68, 766⫺768, 770, 774, 776, 1750⫺1751, 1753⫺1754, 1758, 1760⫺ 1762, 1768 ⫺ metonymic chain 1761 ⫺ metonymic shift 1747⫺1748 Mimetic 68, 191, 313, 332, 335, 379, 385⫺ 386, 438⫺446, 486, 526, 538, 651, 660, 685, 700, 711, 713⫺715, 724⫺725, 765, 777, 1186, 1188, 1227, 1262, 1278, 1669, 1696, 1699, 1718, 1722⫺1723, 1737, 1739, 1850, 2154, 2163 ⫺ mimesis 5, 191, 366, 423, 429, 438⫺445, 485, 538, 713⫺714, 1105, 1188, 1262, 1670⫺1671, 1737, 1884⫺1885 Mimicry 154, 271, 366, 368, 439, 446, 472, 483⫺484, 504, 506, 554, 619, 757, 848, 851, 1125, 1133, 1273, 1301, 1308, 1355, 1371⫺ 1372, 1375⫺1379, 1408, 1430, 1433, 1468, 1737⫺1738, 1741, 1973, 2031 Mirror ⫺ mirror neuron 269, 446, 451⫺454, 456, 462⫺463, 472⫺473, 482⫺483, 489⫺490, 493, 495, 756, 1308, 1887 ⫺ mirror neuron system 194, 453, 516, 1277, 1887⫺1891, 1895 ⫺ mirror system 5, 195, 215, 451⫺453, 455, 457, 459, 461⫺463 Mismatch 94, 139, 501, 793, 796, 798⫺799, 844, 851, 1789, 1835⫺1836, 1855 Möbius Syndrome 1969⫺1977 Modality 1, 64, 66⫺67, 70⫺71, 85, 117, 121, 128, 156, 159, 162, 271, 384, 386⫺388, 403, 435, 486, 519, 521, 541⫺542, 589, 591⫺592, 627, 630, 640⫺643, 661, 680, 684, 689, 693, 697, 726⫺727, 736, 739⫺740, 745, 751, 756, 768⫺770, 848, 910, 1008, 1044⫺1045, 1111, 1115, 1126, 1188, 1195, 1279, 1316⫺1317, 1400, 1414, 1419, 1424, 1436, 1464, 1569, 1602, 1619, 1626, 1647, 1651, 1658, 1689, 1694, 1725⫺1726, 1734, 1754, 1786, 1841, 1843, 1848, 1851, 1853, 1855, 1891, 1945, 1956, 1996, 2005, 2041, 2045, 2127, 2129, 2133⫺2135, 2139, 2144, 2146, 2150, 2152, 2155, 2157, 2173 ⫺ bimodal 383, 757, 1216⫺1219, 1221⫺ 1224, 1887

Indices ⫺ cross-modal 157, 525, 528, 756, 761, 768, 774, 1718, 1727, 1741, 1747, 1749, 1759⫺ 1761, 1927⫺1928, 2129 ⫺ visual-gestural modality 1853, 2133, 2159, 2152, 2157 Mode ⫺ expressive mode 1286 ⫺ (gestural) mode of representation 136, 177, 185, 205, 209, 277, 307, 313, 400⫺401, 429, 433, 481, 711⫺715, 718, 727, 746⫺747, 749⫺750, 765, 1101, 1103⫺1105, 1227, 1278, 1369, 1562, 1578, 1669⫺1670, 1687⫺ 1689, 1691⫺1699, 1722, 1739, 1762, 1800, see also Representation ⫺ iconic mode 767, 771, 774, 1714, 1723, 1725, 1761, see also Iconicity ⫺ metonymic mode 755, 759, 765, 767, 776, 1755, 1760, 1762, see also Metonymy ⫺ mode of expression 9, 11, 21, 61⫺62, 71, 429, 441, 656, 674, 711, 737, 948, 1305, 1592, 1596, 1692, 1767, 1974, see also Expression ⫺ modes of shape change 947⫺948, 951 ⫺ modes of thought 59 ⫺ semiotic mode 137, 154, 481, 485⫺486, 757⫺758, 760, 766, 1751, 2029, 2038⫺2040 Morphology 113, 116⫺117, 130, 183, 248, 306, 397, 402, 470, 498, 638, 735, 761, 769, 779, 923, 925, 929, 1186, 1194, 1270, 1619, 1626, 1633, 1714, 1725, 1727, 1894, 2001, 2127, 2129⫺2130, 2158, 2170, 2172 Motion Capture 246, 858⫺859, 861, 863⫺ 866, 875, 877, 1015, 1017⫺1018, 1030, 1131, 1263, 1437, 1727, 1952⫺1954 Motion Event Typology ⫺ manner of motion 31, 69, 154, 524, 1698, 1735, 1751, 1870, 1885 ⫺ motion event 29, 31⫺32, 39⫺41, 43, 69, 159, 192, 1104⫺1105, 1187⫺1188, 1235, 1237, 1296, 1370, 1683, 1687, 1697⫺1698, 1719, 1721, 1724, 1733, 1735, 1753, 1876, 1878⫺1882, 1884 ⫺ motion generation 1943, 1946 ⫺ path 31⫺46, 67⫺68, 138⫺139, 146, 189⫺ 190, 192, 208, 244, 254⫺255, 406, 502, 702, 712, 851, 1043⫺1044, 1054, 1063, 1075, 1104⫺1105, 1163, 1188, 1194, 1198, 1200⫺ 1201, 1203⫺1204, 1231, 1235⫺1236, 1264, 1335, 1370, 1376, 1405, 1455, 1486, 1525, 1598, 1678, 1698⫺1699, 1721, 1735, 1753, 1770⫺1771, 1845, 1870, 1876⫺1885, 1900, 2028

Subject Index ⫺ satellite-framed 31⫺32, 69, 1235, 1876, 1900 ⫺ verb-framed 31⫺33, 1236, 1735, 1876, 1900 Motivation 64, 68, 86, 190, 268⫺270, 272, 505, 528, 533, 546, 661⫺662, 665⫺667, 673, 703, 711⫺712, 714⫺715, 719, 727, 755, 759, 761⫺762, 982, 1101, 1103⫺1104, 1354, 1403, 1405, 1456, 1498, 1512⫺1514, 1517⫺ 1521, 1547, 1564, 1567, 1577⫺1578, 1585, 1592, 1594⫺1602, 1621⫺1623, 1669, 1687, 1694⫺1695, 1736, 1739, 1767⫺1769, 1916, 1918, 1978, 1996 Motivation and Opportunity as Determinants Model 269, 1403 Movement ⫺ interactive expressive movement 2115⫺ 2117, 2119⫺2122 ⫺ expressive movement 394, 406, 528, 708, 1189, 1434, 1436⫺1437, 1545, 1758, 1775, 1943, 1945, 1947, 1953, 2050⫺2052, 2054, 2056⫺2057, 2071, 2075, 2077⫺2078, 2081⫺ 2089, 2093⫺2095, 2097⫺2099, 2101⫺2105, 2107⫺2108, 2113, 2115⫺2117, 2119⫺2122 ⫺ movement observation 311, 956⫺957 ⫺ movement signature 1024 ⫺ movement trajectory 858, 875, 1092, 1716 ⫺ natural movement 1273 ⫺ rhythm of movement 595, 1227 Movement Analysis ⫺ bound flow 949, 962⫺964, 966, 968, 1906⫺1907 ⫺ free flow 949, 963⫺964, 968 ⫺ kinesphere 945⫺947, 1907 ⫺ Laban Movement Analysis 307, 311, 319, 933, 960, 1024, 1229, 1906 ⫺ motif writing 943, 945, 947, 953⫺956 ⫺ phrasing 287, 292, 312, 943⫺944, 949, 952, 967, 1048, 1051⫺1052, 1435, 1482 ⫺ phrase writing 952⫺956, ⫺ shape 943⫺944, 947⫺948, 950⫺952, 954, 956⫺957, 959⫺962, 965⫺968, 1421, see also Effort ⫺ shape qualities 948, 952, 954 ⫺ shape-flow 948, 960⫺962, 965⫺968 ⫺ shape-flow design 961⫺962, 965 ⫺ shaping in directions 961⫺962, 967 ⫺ shaping in planes 961⫺962, 966 Mudras 322, 324, 435, 1450 Multimodality 2⫺4, 71, 278, 285, 578, 580, 585, 589⫺590, 592⫺593, 595, 627, 640, 643⫺644, 648⫺649, 652, 733⫺740, 745,

2209 749, 751, 999, 1191, 1260⫺1261, 1263⫺ 1264, 1266, 1280, 1301, 1303, 1305, 1308, 1340, 1499, 1658, 1849⫺1850, 1866, 2045, 2099, 2107 ⫺ multimodal attribution 65, 709, 733, 735, 745, 751, 1664 ⫺ multimodal discourse 67, 285, 759⫺760, 772, 1008⫺1009, 1016, 1922⫺1923, 1925, 1927⫺1928 ⫺ multimodal grammar 58, 65, 402⫺403, 709, 711, 727, 733⫺734, 1072, 1099, 1110, 1295, 1664, 1791, 1800⫺1801 ⫺ multimodal interaction 5, 283, 577, 579, 581, 583, 585, 591, 596, 994, 1264, 1301, 1311, 1531, 1537, 1712, 1747, 1754, 1815, 2009, 2011, 2100 ⫺ multimodal skill 1848⫺1849, 1851, 1853, 1855 ⫺ multimodal system 488, 1419 ⫺ multimodal utterance 66, 214, 252, 303, 405, 684, 747, 1423, 1641, 1647, 1662⫺ 1673, 1804, 1815, 1872, 1952, 1954 Music 277, 284, 309, 367, 378, 383, 428, 440, 467, 475, 530, 614, 627, 644, 737, 932, 942, 954⫺955, 957, 1001, 1010, 1012, 1018⫺ 1019, 1259, 1262⫺1263, 1427, 1433⫺1437, 1470, 1711, 1864, 1889, 1945, 2066, 2078, 2084, 2087, 2089, 2101, 2121

N Narration ⫺ authentic narrative 2167 ⫺ cartoon narrative 827⫺828, 842 ⫺ narrative 35, 50, 70, 154, 193, 288, 290, 294, 297, 315, 336, 372, 422, 430, 435, 518, 840, 1009⫺1010, 1033, 1151⫺1152, 1157, 1186, 1188⫺1190, 1211⫺1212, 1248, 1254, 1256⫺1257, 1325, 1457, 1516, 1610, 1691, 1733, 1770, 1785, 1894, 2031⫺2032, 2040, 2050, 2077, 2082, 2084, 2093, 2136 ⫺ narrative context 318, 1384 ⫺ narrative idea 31 ⫺ narrative indicator 146 ⫺ narrative recall 163, 1895 ⫺ narrative strategy 49 ⫺ narrative style 50 ⫺ narrative telling 1211 Native American 434, 864, 1216, see also The Americas

Negation 15, 64⫺65, 68, 119, 475, 665⫺667, 670⫺671, 673, 684, 718⫺720, 1093, 1149, 1255, 1287, 1346⫺1347, 1496⫺1500, 1533, 1560, 1563, 1582⫺1583, 1592⫺1597, 1601⫺1602, 1635⫺1637, 1848⫺1849, 1852⫺1853, 1855, 2099, 2135, 2141⫺2142, 2144⫺2146, 2154, 2157 ⫺ gestures of negation 1592, 1597, 1601⫺1602, 1635, 1852⫺1853 Neurology 1, 3, 182, 394, 959, 1024, 1038 Nheengatú 1182⫺1185, 1785, see also The Americas Non-human primate ⫺ ape 15, 22, 451, 458⫺462, 466⫺468, 471, 474⫺476, 485, 491, 493, 495, 505⫺506, 513, 525⫺528, 530, 537, 541⫺542, 544, 721⫺722, 1094, 1804, 1956⫺1960, 1994, 1996 ⫺ chimpanzee 451, 460, 462, 466, 471⫺473, 476, 482, 493⫺494, 504⫺506, 525⫺526, 919, 927⫺929, 1888 ⫺ great ape 459, 467⫺468, 471, 474, 491, 493, 513, 525⫺526, 528, 541⫺542, 544, 721, 1957⫺1960 ⫺ macaque 451⫺453, 457, 927, 929, 1887, 2055 ⫺ monkey 446, 451⫺452, 457, 459, 462, 466, 473, 492, 530, 1956, 1958, 1960, 2055⫺2056 Notation ⫺ BTS (Berkeley Transcription System) 1134 ⫺ Facial Action Coding System 210, 285, 560, 641, 918⫺929, 1000, 1040, 1343, 1345⫺1347, 2151 ⫺ GAT (Gesprächsanalytisches Transkriptionssystem) 997, 1038, 1044⫺1045, 1048⫺1050, 1861 ⫺ HamNoSys (Hamburger Notationssystem für Gebärdensprachen) 875, 1041⫺1043, 1084, 1128, 1130⫺1131 ⫺ HIAT (Halbinterpretative Arbeitstranskription) 656, 996, 1038, 1044, 1047, 1050 ⫺ Laban Notation 984, 1044, 1054 ⫺ LASG (Linguistic Annotation System for Gestures) 1099, 1866 ⫺ SignWriting 875, 1128⫺1129 Noun 65, 118⫺119, 121, 138, 140⫺141, 157, 403, 429, 512, 684, 703, 709, 726, 733, 735⫺736, 743, 745⫺751, 758, 765, 768⫺769, 772, 778, 812, 1109, 1198, 1477, 1484, 1643, 1647, 1658, 1662, 1664⫺1666, 1668⫺1673, 1725, 1756, 1789, 1791⫺1801, 1808, 1844, 1871, 1877⫺1884, 1905, 2130, 2133, see also Syntax

Indices

O Onomatopoeia 35, 297, 386, 524, 1479, 1620, 1622, 1982⫺1983, 1987 Ontogenetic ritualization 451, 459⫺460, 542 Orientation ⫺ mutual orientation 241, 247⫺248, 256, 263, 581⫺583, 1326, 1398 ⫺ spatial orientation 516, 765, 1305, 1406, 2019

P Pantomime 173⫺175, 177⫺178, 339⫺341, 383, 420, 423, 425, 455, 458, 462⫺463, 472, 483⫺485, 487⫺488, 493, 507, 528⫺530, 537, 541, 543⫺544, 804, 814, 844, 1029⫺1031, 1189, 1273, 1274⫺1276, 1427, 1429⫺1430, 1460, 1463, 1475, 1483, 1664, 1672, 1888, 1891⫺1895, 2027⫺2028, 2153 ⫺ pantomimic 18, 21, 66, 389, 421, 423, 429, 455, 492, 495, 508, 521, 716, 805, 947, 1277, 1474, 1483, 1663, 1668, 2170 Particle 1422, 1500, 1609, 1611, 1876, 1900, 2137⫺2143, 2145, see also Syntax Pattern ⫺ accentuation pattern 1050, see also Prosody ⫺ behavior pattern 397, 1308, 1316, 1334, 1336⫺1339 ⫺ cultural pattern 1151, 1196, 1295, 1737, see also Culture ⫺ embodied pattern 1186, see also Embodiment ⫺ gaze pattern 581, 935, 1200, see also Gaze ⫺ geometric pattern 190, 756, 778, 1227⫺1231, 1670, 2002 ⫺ gesture pattern 1043, 1236, 1870 ⫺ iconic pattern 1718, 1735, 1741, see also Iconicity ⫺ image-schematic pattern 1741, see also Schema ⫺ interaction pattern 272, 960, 1427, see also Interaction ⫺ motion pattern 20, 68, 402, 681, 777, 1088⫺1090, 1101, 1103⫺1105, 1534, 1564, 1694, 1753 ⫺ motor pattern 20, 68, 403, 679, 727, 1101, 1103⫺1105, 1554, 1562, 1578, 1670, 1965 ⫺ movement pattern 86, 94, 311, 423, 517, 760, 774, 789, 933, 940, 957, 959⫺963, 967⫺970, 1000, 1075, 1080, 1083, 1090, 1231, 1503, 1512, 1517, 1519, 1521, 1534, 1541, 1554, 1561, 1592⫺1593, 1601, 1605⫺1606, 1631, 1633⫺1634, 1670, 1693⫺1694, 1718, 1906⫺1907, 1931, 1933, 2049⫺2050, 2052⫺2057, 2067, 2082⫺2083, 2085, 2087⫺2089, 2095, 2098, 2101, 2103⫺2104, 2107, 2119, 2136 ⫺ movement pattern analysis 957, 960, 967 ⫺ patterns of action 7, 10, 14, 504, 755 ⫺ pitch pattern 1052, see also Prosody ⫺ prosodic pattern 521, 595, see also Prosody ⫺ recursive pattern 727, see also Syntax ⫺ rhythmic pattern 963, 1108, 1427, see also Prosody ⫺ sequential pattern 967 ⫺ sound patterns 385, 474 ⫺ syntactic pattern 118, see also Syntax ⫺ thought pattern 173, 525 Perception ⫺ interpersonal perception 264, 561, 906, 908⫺910, 1342 ⫺ language perception 1308, 1938, see also Language ⫺ political perception 1401⫺1402 ⫺ social perception 259, 268, 271⫺272, 561, 909, 1964, 1969, 1976, 1978 ⫺ visual perception 343⫺344, 514, 527, 712, 756, 768, 777, 1185, 1688, 1699, 1751, 2049, 2173⫺2174 Performance 9, 21, 50, 55, 57, 136⫺137, 160, 162, 164, 182, 233⫺234, 256, 269, 308, 319, 324⫺327, 330⫺331, 335, 361, 366, 400, 417, 421, 438, 446, 454, 456, 459⫺460, 481, 505, 508, 520, 526, 627, 643⫺644, 678, 685, 703, 716, 757, 769, 776, 785⫺788, 800⫺801, 807, 858, 860, 873⫺874, 911, 913, 919, 956, 1012, 1031, 1067, 1148⫺1151, 1174, 1185, 1210, 1259, 1262⫺1264, 1270, 1297, 1339⫺1340, 1350⫺1351, 1356, 1377, 1407⫺1408, 1430, 1433⫺1437, 1440⫺1441, 1443⫺1445, 1448⫺1451, 1475, 1477, 1503, 1506⫺1507, 1510, 1516, 1525, 1527⫺1528, 1532, 1609, 1620, 1721, 1725, 1751, 1761, 1775, 1837, 1890⫺1893, 1899, 1902⫺1903, 1931, 1933, 1936, 1938, 1964, 2003⫺2004, 2013, 2031, 2041, 2046, 2071⫺2077, 2084, 2094, 2097, 2115, 2119 Personification 757⫺758, 769, 776, 1752 Phenomenology 62, 177, 423, 535, 539, 546, 567⫺568, 571, 676, 969, 1177, 1203, 2038, 2041, 2043, 2045⫺2046, 2052, 2097, 2113⫺2114 Phonestheme 736, 741⫺742, 1618⫺1622, 1626 Physiology 364, 372, 919, 1231, 1356, 2005, 2019, 2055, 2072 Plane 187, 664, 666⫺667, 774⫺776, 805, 871, 937, 966, 1229, 1231, 1421, 1505, 1535, 1561⫺1562, 1597, 1601, 1694, 1753, 1760, 1770, 1806, 1930, 1997, 2004, 2044 ⫺ sagittal 290, 778, 946, 965⫺966, 1039⫺1040, 1087, 1089⫺1090, 1469, 1646, 1782, 1909 Plural 64, 661⫺662, 665⫺667, 1646⫺1647 Pointing 8, 115, 129, 173, 176, 184⫺185, 189, 191⫺192, 205, 221, 230, 236, 240⫺241, 244⫺245, 248, 251, 255, 280⫺281, 317, 395, 451, 460, 471⫺472, 484, 506⫺507, 525, 542, 544, 567, 578, 596, 616, 656, 675, 698⫺700, 702⫺703, 711, 734, 741, 748, 765, 772, 775⫺776, 794⫺796, 804⫺805, 825, 828, 830, 832, 841⫺842, 866, 966, 978, 1012, 1126, 1151, 1154, 1161⫺1168, 1171, 1174, 1185, 1194, 1198⫺1204, 1211⫺1212, 1216, 1223⫺1224, 1234⫺1235, 1260, 1264, 1269, 1285, 1318, 1329, 1335⫺1336, 1353, 1361, 1376, 1393, 1397, 1407⫺1408, 1428⫺1429, 1455⫺1457, 1459, 1463, 1469, 1484, 1504, 1525, 1533, 1607, 1623⫺1625, 1668, 1720, 1734, 1751, 1758, 1760, 1770⫺1771, 1785, 1814⫺1818, 1848⫺1851, 1853⫺1854, 1861, 1889⫺1890, 1947, 1958, 1990, 1992⫺1995, 1997, 2001, 2041, 2051, 2094, 2171, see also Index Finger ⫺ eye pointing 1155, see also Gaze ⫺ head pointing 1155 ⫺ lip pointing 685, 1155, 1183, 1189, 1207, 1826 ⫺ metaphoric pointing 488, see also Metaphor ⫺ pointing gesture 118, 222, 581⫺582, 652, 680, 695, 696, 864, 876, 979, 1164⫺1168, 1183, 1196, 1198, 1210, 1287, 1364, 1372, 1420, 1423, 1803, 1804⫺1817, 1824⫺1829, 1864, 1902, 1950, 2010⫺2013 ⫺ pointing action 13, 771, 1755, 1890 Polysemy 516, 615, 639, 665, 669⫺671, 1476, 1488, 1499, 1513, 2017 ⫺ polysemous 665, 669⫺671, 673, 760⫺761, 768, 777, 779, 1148, 1740, 1749, 1753 Polysign 665, 668⫺671, 673

Portuguese 89, 1177⫺1178, 1180, 1189⫺1190, 1259⫺1264, 1414, see also Europe Position ⫺ home position 105, 219, 684, 1208 ⫺ location 128, 174, 221, 245, 247⫺248, 250⫺251, 253, 317, 463, 496, 533, 616, 634⫺635, 638⫺639, 700, 708, 723⫺725, 744, 760, 765, 774⫺775, 826, 831⫺832, 842, 847, 921, 1012⫺1013, 1040, 1044, 1053, 1083, 1126⫺1127, 1162⫺1164, 1166, 1182, 1185, 1194, 1198⫺1200, 1202, 1210⫺1211, 1368⫺1370, 1379, 1463, 1475, 1481⫺1484, 1512, 1521, 1525, 1563, 1615, 1652, 1677⫺1685, 1698, 1755, 1758, 1760, 1781⫺1782, 1785, 1793, 1808, 1825, 1854, 1883, 1890, 1894, 1928, 1931, 1933⫺1934, 1940, 1945, 2001, 2012, 2031, 2128⫺2130, 2136, 2166, 2171 ⫺ rest position 10⫺11, 209, 213, 369, 594⫺595, 601, 672, 676, 684, 722⫺725, 744, 776, 937, 1012⫺1013, 1025, 1027, 1061⫺1063, 1067, 1069⫺1071, 1073, 1075, 1102, 1198, 1202, 1269, 1361⫺1363, 1397, 1555, 1652⫺1657 ⫺ temporal position 66, 1107, 1111, 1217, 1532 Power 51, 56, 68, 126⫺127, 228, 231, 234⫺235, 264, 266, 271⫺272, 297, 338⫺339, 348, 352, 354, 357⫺358, 361, 366⫺367, 385, 417, 430⫺431, 441⫺444, 452, 484, 503, 518, 620, 628, 634, 639, 689, 969, 986, 1011, 1350⫺1369, 1372, 1400⫺1401, 1403, 1406, 1420, 1444, 1451, 1466⫺1467, 1509, 1540, 1584, 1689, 1849, 1891, 1936, 2040, 2043, 2102, 2128 Practice 1, 5, 17, 19, 21, 23, 62, 219⫺220, 227⫺236, 240⫺241, 243⫺247, 250, 253⫺254, 256, 276⫺277, 284⫺285, 304, 307⫺308, 310⫺311, 315, 319⫺328, 331, 333, 336, 343⫺345, 347⫺351, 353, 355, 357⫺359, 361, 388, 390, 404⫺405, 421, 428⫺430, 433, 435, 439, 442, 444, 446, 456, 482, 486, 493⫺494, 535, 539⫺540, 545, 572, 578, 584⫺585, 589⫺591, 593⫺599, 601, 603⫺605, 674⫺686, 701, 756, 762, 777, 809, 858, 865, 869, 871, 876, 921, 938, 957, 968, 978, 982⫺988, 997, 1004, 1008⫺1009, 1011, 1017, 1050, 1053, 1073, 1133, 1149, 1151, 1154, 1158, 1161⫺1162, 1174, 1177, 1194, 1196, 1199⫺1200, 1203⫺1204, 1207⫺1209, 1216, 1222⫺1223, 1227⫺1228, 1244, 1260⫺1261, 1271, 1279, 1286, 1288, 1303, 1305, 1320, 1327, 1372, 1406, 1413, 1427, 1430, 1435, 1507, 1524⫺1525, 1527⫺1528, 1554, 1712⫺1713, 1722⫺1723, 1733, 1739, 1747, 1755, 1758, 1761⫺1762, 1781, 1783⫺1785, 1825, 1869, 1871, 2009, 2019⫺2020, 2027, 2045, 2062⫺2063, 2072⫺2074, 2077⫺2078, 2088⫺2089, 2093 Pragmatics 648⫺649, 655, 690, 779, 804, 806, 1100⫺1101, 1112, 1114, 1147, 1150, 1152, 1291, 1297, 1437, 1460, 1483, 1485, 1487, 1489, 1491, 1493, 1528, 1531, 1540⫺1541, 1544, 1578, 1714, 1767, 1804, 1852, 1901, 1988 ⫺ gestural pragmatics 1147, 1150, 1152, 1528 ⫺ pragmatic marker 1531⫺1532 Praxeology 674⫺677, 679, 681, 683, 685⫺686 Precision 16, 126⫺127, 203, 232, 303, 306, 313, 452, 559, 654, 665⫺666, 686, 788, 847, 858, 864, 866, 870⫺874, 877, 881⫺883, 899, 934, 1131, 1421, 1435, 1516⫺1521, 1534⫺1535, 1583, 1634 Process ⫺ automatic process 267⫺268, 1355, 1403 ⫺ information processing models 156, 160⫺161, 1891 ⫺ interactive process 404, 834, 838, 893, 1023⫺1024, 1033, 1112 Profile 778, 910, 938, 959⫺962, 964, 966⫺970, 1244, 1247, 1279, 1295, 1352, 1408, 1747, 2120 ⫺ profiling 70, 523, 772⫺773, 1751 Projection Space 222, 581, 1002, see also Conversation Proprioception 52, 435, 678, 1204, 2026, 2034 Prosody ⫺ accentuation 556, 591, 601, 603, 605, 655, 662, 1049⫺1050, 1071, 1090, 1296, 1392⫺1393, 1396⫺1398, 1448, 2119 ⫺ intonation 64, 108⫺109, 159, 195, 241, 327, 521⫺522, 530, 594⫺595, 611, 640⫺642, 689, 695, 703, 789, 848, 1002, 1004, 1015, 1047⫺1053, 1101, 1106⫺1108, 1156, 1188, 1256, 1296, 1315, 1346, 1360⫺1363, 1365, 1382⫺1387, 1407, 1477, 1488, 1532, 1568, 1615, 1664⫺1665, 1667, 2117, 2119⫺2120, 2138, 2145⫺2146, 2152⫺2153, 2174 ⫺ stress 64, 110, 178, 353, 358, 591, 641, 809, 812, 935, 969⫺970, 980, 1024, 1032, 1046, 1051⫺1052, 1090, 1156, 1382⫺1386, 1395⫺1396, 1428, 1569, 1615, 1909, 1918, 1964, 2035, 2084

Proxemics 282, 611⫺612, 622, 651, 1176, 1310⫺1313, 1315⫺1319, 1334, 1353, 1426, 1993

Q Quechua ⫺ Pastaza Quechua 1183, 1185⫺1186, 1194⫺ 1195, 1197, 1199, 1201⫺1204, see also The Americas

R Reasoning 84, 126⫺127, 183, 242, 247, 249⫺ 250, 368, 379, 387, 528, 614, 778, 788, 911, 1243, 1284, 1499, 1762, 1781, 1835⫺1836, 1997, 2074, 2141, 2144 Recurrence 46, 1114, 1371, 1384, 1559⫺1560, 1613, 1641, 1735, 1737, 2089 Recursion 502, 733, 735⫺736, 742⫺745, 751, 1650⫺1651, 1653, 1655, 1657⫺1658 ⫺ self-embedding 742⫺744, 751, 1650, 1653, 1657⫺1658 Reduplication 711, 723, 1046, 1186, 1189, 1645⫺1646, 2129 Reference ⫺ absolute frame of reference 1183, 1210, 1212 ⫺ deictic reference 581, 1204, 1803, 1805⫺ 1806, 1816, see also Space ⫺ frame of reference 173, 860, 1183, 1199, 1203, 1207, 1210⫺1212, 1647, 1678⫺1680, 1683, 1685, 1806, see also Space ⫺ gestural reference 184, 1782 ⫺ object reference 1456 ⫺ person reference 1212, 1223 ⫺ reference point 68, 189, 826, 923, 950, 1184⫺1185, 1500, 1806 ⫺ reference system 57, 1053, 1210 ⫺ spatial reference 950, 1209⫺1211, see also Space ⫺ spatial reference point 864, 945⫺946, see also Space ⫺ symbolic reference 1887 Relation ⫺ semantic relation 66, 204, 401, 507, 641, 664, 703, 808⫺809, 813, 1101, 1110⫺1112, 1126, 1157, 1575, 1587, see also Semantics ⫺ social relation 578, 1157, 1200, 1203, 1351⫺1352, 1354, 1507, 1570

⫺ temporal relation 811, 1002, 1012, 1101, 1110⫺1111, 1126, 1128, 1256⫺1257, 1361, 1500, 1562, 1692 Repertoire 24, 63, 82⫺83, 87⫺94, 96, 116, 158, 160, 309, 313, 319, 396, 438, 451, 453⫺459, 515, 543, 611, 651, 653⫺654, 664⫺665, 681, 686, 719, 793, 797, 801, 807, 827, 924, 935, 967, 1044, 1092⫺1093, 1147⫺1149, 1152, 1156, 1188, 1190⫺1191, 1212, 1216, 1224, 1257, 1266⫺1267, 1297⫺1298, 1345, 1347, 1422, 1449, 1454⫺1455, 1460, 1463, 1477, 1479, 1503⫺1504, 1508, 1523, 1537, 1542, 1546⫺1547, 1555, 1559, 1568, 1575⫺1581, 1583, 1585⫺1588, 1596, 1694, 1826, 1864, 1893, 1959⫺1960, 1976, 2022, 2164 Repetition 11, 152, 385, 445⫺446, 454, 635, 641, 643, 664, 672, 723, 743, 957, 1208, 1218, 1375⫺1377, 1429, 1486⫺1487, 1543, 1642⫺1647, 1883, 1898, 1900, 1903, 2011, 2089, 2119, 2129 Representation ⫺ gestural modes of representation 185, 205, 209, 307, 313, 400, 711⫺712, 746, 749⫺750, 1562, 1669⫺1670, 1687, 1689, 1691, 1693⫺1697, 1699, 1762, 1800, see also Mode ⫺ gestural representation 69, 84, 92, 185, 400⫺401, 659⫺660, 710, 713, 1211, 1288, 1371, 1600, 1691⫺1692, 1739, 1768, 1790 ⫺ iconic representation 22, 187, 190, 806, 1372, 1623, see also Iconicity ⫺ mental representation 9, 168, 177, 398, 522, 759, 800, 1231, 1379, 1619, 1718, 1749, 1791, 1796⫺1797, 1833 ⫺ metaphorical representation 1211⫺1212 ⫺ mode(s) of representation 136, 177, 185, 205, 209, 277, 307, 313, 400⫺401, 429, 433, 481, 711⫺715, 718, 727, 746⫺747, 749⫺750, 765, 1101, 1103⫺1105, 1227, 1279, 1369, 1562, 1578, 1669⫺1670, 1687⫺1689, 1691⫺1699, 1738⫺1739, 1762, 1800, see also Mode ⫺ prosodic representation 1051 ⫺ symbolic representation 394, 442, 806 ⫺ system of representation 277 ⫺ visual representation 454, 816, 1427, 1895, 2141 Rhetoric ⫺ actio 10, 55, 329, 337, 351, 366, 371, 378, 380, 727, 1273⫺1274, 1516 ⫺ oratory 330, 335⫺338, 340, 366⫺368, 371, 1243, 1273⫺1275, 1409, 1527, 2170



⫺ persuasion 272, 333, 620⫺621, 1170, 1243, 1274, 1466⫺1467 Rhythm 159, 284, 297, 314, 331⫺333, 337, 422, 595, 605, 616, 639, 641, 644, 805, 932, 950, 962, 984, 1050, 1108, 1227⫺1228, 1296, 1303⫺1304, 1307, 1316, 1337, 1382, 1385⫺1386, 1434, 1442, 1444, 1463, 1945, 2076, 2083, 2085, 2088, 2096, 2119⫺2120 Ring 8, 94, 305, 401, 721, 723, 1043, 1184, 1268, 1493, 1511⫺1521, 1531, 1534⫺1535, 1542, 1544, 1563, 1566⫺1567, 1576, 1579, 1583, 1586, 1642⫺1643, 1692, 1710, 2034, 2041 Ritual 17, 19, 93, 227, 230⫺231, 233, 235⫺ 236, 303, 321⫺322, 325, 327⫺328, 331, 344, 346⫺347, 349⫺350, 355⫺361, 365, 423, 425, 428, 434, 439, 445⫺446, 538, 680, 976, 978, 982, 1227⫺1228, 1241, 1244, 1270, 1414, 1427, 1451, 1465, 1523, 1527 ⫺ ritualization 86, 93, 451, 459⫺460, 500, 521, 542, 639, 1476, 1534, 1548, 1965 Russian 89, 137, 351, 354⫺355, 1289⫺1292, 1297⫺1298, 1392⫺1393, 1395⫺1396, 1398, 1492, 1987, see also Eurasia

S Schema 96, 190, 456, 462, 498, 524, 535, 558, 659⫺660, 673⫺674, 681⫺682, 762, 770, 777⫺778, 919, 1018, 1101, 1231, 1279, 1312, 1612⫺1613, 1719, 1721, 1724, 1807, 1811, 1890, 2026, 2066⫺2067, see also Image ⫺ preconceptual schema 96, 673⫺674 Schizophrenia 932⫺933, 937, 939, 985, 1337, 1905, 1907⫺1908, 2022 Segmentation 35, 37, 64, 85, 161, 486, 498, 530, 726, 734, 740, 861, 888, 993, 996, 1004, 1030, 1046⫺1049, 1054, 1061⫺1062, 1073, 1080, 1102⫺1103, 1131, 1133⫺1134, 1176, 1308, 1337, 1422⫺1423, 1459, 1657, 2055 Seiler’s continuum of determination 749 Selection ⫺ categorial selection 746⫺747, 749⫺750 ⫺ evolutionary selection 481 ⫺ natural selection 480, 482, 486, 489, 491, 493, 496, 500, 1963, 1966 ⫺ re-selection 814, 817 ⫺ self-selection 221⫺222

Semantics ⫺ cognitive semantics 755, 759, 764⫺765, 1633, 1715, 1717, see also Cognition ⫺ connotation 84, 88, 176, 913, 950, 1334 ⫺ interpretant 66, 716⫺717, 760, 763, 1200, 1715, 1717, 1719, 1722, 1740, 1789, 1791⫺ 1793, 1795⫺1801, 1990 ⫺ gestural field 1510, 1587⫺1588, 1631, 1633, 1635, 1637 ⫺ onomasiology 1588, 1632, 1635⫺1636 ⫺ prototype 66, 88, 195, 673, 677, 695, 1032, 1318, 1423, 1719, 1768, 1793⫺1794, 1797, 1799⫺1801, 2172 ⫺ semantic analogy 553, 556 ⫺ semantic analysis 85, 635, 809, 1295 ⫺ semantic association 1012, 1464, 1899 ⫺ semantic change 516, 1186, 1524, 1646 ⫺ semantic coherence 11, 519 ⫺ semantic core 88, 634, 710, 717, 719, 727, 1267, 1474⫺1475, 1533, 1536, 1540, 1543, 1559⫺1560, 1562, 1568⫺1569, 1576⫺1578, 1580⫺1585, 1587, 1593, 1605⫺1606, 1611⫺ 1612, 1634⫺1635 ⫺ semantic derivation 659⫺661, 665⫺666, 668, 670, 673 ⫺ semantic domain 817 ⫺ semantic feature 66, 756, 815, 817, 840, 842, 844, 847, 1033, 1111⫺1112, 1619⫺ 1622, 1632, 1644, 1646⫺1647, 1737⫺1738, 1790, 1792, 1923, 2053 ⫺ semantic field 86, 400, 654, 1488, 1602, 1632⫺1633, 1636⫺1637, 2105 ⫺ semantic frame 496 ⫺ semantic intent 9, 12, 307, 317 ⫺ semantic interaction 17, 401 ⫺ semantic relationship 14, 344, 844, 1482 ⫺ semantic role 118, 757⫺758, 767, 769, 1670, 1678, 1752 ⫺ semantic specification 813 ⫺ semantic structure 130, 184, 760, 1506, 1592, 1596, 1631, 1671, 1698, 1714, 1717, 1982, 1985 ⫺ semantic theme 63, 401, 710, 717⫺718, 1148, 1517, 1535, 1562, 1566, 1575⫺1576, 1579, 1587⫺1588, 1592⫺1595, 1598⫺1599, 1601⫺1602, 1631, 1633⫺1635, 1642 ⫺ semantization 717, 733, 735⫺736, 740, 751, 1562, 1570, 1619⫺1623, 1626, 1650, 1769 ⫺ semasiology 1588, 1632⫺1633, 1635 Sensualism 379, 2073⫺2074 ⫺ sensitive gesture 2072⫺2078

Sequentiality ⫺ sequential analysis 222, 893, 900⫺901, 1280, 1543 ⫺ sequential notation 893, 895, 897, 899, 901 ⫺ sequential organization 45, 218, 221, 579, 596, 880, 997 ⫺ sequential pattern 967, see also Pattern Sign ⫺ basic sign 1703⫺1706, 1710, 1712, 2155 ⫺ bodily sign 4, 287⫺288, 294, 343⫺344, 346⫺347, 369, 779, 906, 909, 1297, 1712⫺1713, 1715, 1718, 1723, 1740⫺1741, 1747⫺1749, 2045 ⫺ complex sign 19, 203, 205, 2130 ⫺ composite sign 691⫺693, 698 ⫺ conventional sign 85, 93, 113, 117⫺118, 120, 695⫺696, 698, 702, 1128, 1130, 1134, 1347, 1446, 2028 ⫺ coverbal sign 672 ⫺ gestural sign 68, 209, 385, 389⫺390, 400, 402⫺403, 661⫺662, 664, 666, 668, 671⫺673, 712, 756⫺757, 759⫺760, 763, 765, 769, 772, 775⫺776, 778⫺779, 1104, 1273, 1446, 1564, 1636, 1712, 1714, 1716⫺1717, 1719, 1722, 1740⫺1741, 1747, 1750, 1761, 1767, 1799, 1815 ⫺ gestural sign formation 68, 403, 756⫺757, 759, 763, 1712, 1714, 1716, 1747, 1750 ⫺ iconic sign 346, 373, 469, 1188, 1279, 1694, 1696, 1724, 1740, 1753, 1758, 1814, 1817, 1990, 1992, 1994, 1996, see also Iconicity ⫺ kinesic sign 661 ⫺ linguistic sign 4, 30, 61, 188, 204, 212, 508, 538, 690, 703, 1212, 1651, 1719, 1749, 1786 ⫺ manual sign 384, 389, 471, 504, 760, 767, 772, 1128, 1200, 1756, 1996, 2143, 2156 ⫺ multimodal sign 774, 778, 1717, 1761 ⫺ natural sign 365, 382⫺383, 387⫺388, 1274, 1447 ⫺ primary sign 18⫺19, 659 ⫺ sign carrier 764, 1713⫺1714, 1717⫺1720, 1741, 1749 ⫺ sign filtration 689, 698⫺699 ⫺ sign formation 18, 21, 68, 403, 742, 756⫺757, 759, 763, 767, 1712, 1714, 1716, 1747⫺1748, 1750, 1768 ⫺ sign function 1992, 1996 ⫺ sign model 763, 1717, 1993 ⫺ sign motivation 759

⫺ sign process 736, 759, 764, 766⫺767, 772⫺776, 778, 1336, 1703⫺1705, 1708⫺1710, 1712, 1717⫺1719, 1749, 1761, 1816 ⫺ sign system 4, 121⫺122, 203, 205, 212, 287⫺288, 346, 360, 400, 487, 533, 711, 736, 759, 761⫺762, 1128, 1212, 1312, 1523, 1527, 1533, 1621, 1626, 1631, 1636, 1714, 1716⫺1717, 1740, 1786 ⫺ sign transformation 1709⫺1710 ⫺ sign type 695, 1703, 1705⫺1706, 1710, 1712 ⫺ sign use 533, 537⫺538, 542⫺543 ⫺ victory sign 82, 92, 1619 Sign Language ⫺ ABSL (Al-Sayyid Bedouin Sign Language) 470 ⫺ Alternate Sign Language 9, 19, 87, 711, 727, 1216, 1523, 1527, 1694⫺1695 ⫺ ASL (American Sign Language) 31, 33, 116, 128, 467⫺468, 739, 762, 787, 864, 1042, 1125, 1390, 1483, 1504, 1568, 1569, 1615, 1693, 1695, 1753, 1756, 1759, 1858, 2127, 2134⫺2135, 2138, 2142⫺2143, 2155, 2165, 2171⫺2172 ⫺ classifier 20⫺21, 25, 39, 64, 702, 713, 1126, 1133, 1693⫺1694, 1696⫺1698, 1768, 1854, 1894, 2130, 2165⫺2168, 2171 ⫺ DGS (German Sign Language) 739, 1084, 1500, 1679, 1858, 2142⫺2143, 2151⫺2153, 2155⫺2157 ⫺ gestural nonmanual 2152, 2158 ⫺ grammatical nonmanual 2156⫺2158 ⫺ LIS (Italian Sign Language) 469, 539, 789, 1129, 1387, 1483, 1568⫺1569, 1615, 2141⫺2142, 2157 ⫺ LSC (Catalan Sign Language) 789, 1387, 2143, 2157, 2173⫺2174 ⫺ LSF (French Sign Language) 202, 378, 387, 390, 789, 1278, 1387, 1695, 2137 ⫺ manual dominant 2141⫺2144 ⫺ manual marker 2135, 2158 ⫺ NGT (Sign Language of the Netherlands) 1134, 2128⫺2129, 2137⫺2138 ⫺ NSL (Nicaraguan Sign Language) 121⫺122, 202, 500, 545 ⫺ non-manual dominant 2142⫺2144 ⫺ non-manual marker 2135 ⫺ PISL (Plains Indian Sign Language) 17, 56, 202, 1216 ⫺ Warlpiri Sign Language 19⫺20, 487, 1527

Signal ⫺ acoustic signal 473, 720, 1012, 1015, 1584, 1710 ⫺ boundary signal 1656⫺1657 ⫺ communicative signal 544, 630⫺631, 643 ⫺ composite signal 66, 695, 1663 ⫺ holophrastic signal 1485 ⫺ signal system 1312 ⫺ simple signal 361, 1710 ⫺ speech signal 474, 1051 ⫺ turn signal 1364, 2137, 2139, see also Conversation Simulation 182⫺183, 186, 192⫺194, 242, 271, 405, 445, 453, 461, 512⫺523, 525⫺529, 533, 635, 756, 790, 887, 1330, 1719, 1948, 1951, 1954, 1973, 2000⫺2005, 2018, 2031, 2056 Simultaneity 21, 25, 275, 281, 283⫺284, 287, 292⫺293, 736, 738, 823, 997⫺998, 1000⫺1001, 1004, 1256, 1306, 1650⫺1651 Space ⫺ coordinate system 860, 865, 1652, 1806, 1811, 1818 ⫺ frontal axis 1678⫺1679, 1682, 1685 ⫺ gestural space 1040, 1150, 1157, 1211, 1526, 1781, 1785 ⫺ spatial scale 947 Spanish 31, 33, 35⫺40, 42⫺43, 89, 154, 190, 545, 736, 1051, 1175⫺1178, 1180, 1185, 1202, 1237, 1267, 1269⫺1270, 1384, 1428, 1479, 1536, 1540⫺1541, 1548, 1550, 1579, 1689⫺1690, 1771, 1784, 1845, 1876⫺1878, 1880⫺1885, 1900, 1987, 2137, see also Europe Speech Act ⫺ illocutionary 12, 15, 83, 86, 88, 92, 214, 261, 352⫺353, 653, 655, 1108, 1113, 1149, 1267, 1460, 1474⫺1477, 1525, 1533, 1536⫺1537, 1540⫺1542, 1546⫺1548, 1563, 1566, 1575, 1578, 1580⫺1586, 1588, 1711⫺1712, 1984 ⫺ performative 214, 636, 1113, 1148, 1158, 1255, 1445, 1485⫺1486, 1492, 1493, 1528, 1531, 1533, 1544, 1550⫺1554, 1563, 1566, 1576, 1586, 1596, 1601, 1859, 1995 ⫺ perlocutionary 212⫺214, 261, 352, 1113, 1506, 1586 Speech handling 486, 1559 Storytelling 101⫺102, 105⫺106, 108, 138, 221, 339, 421, 595⫺598, 604, 1152, 1157, 1430, 2028, 2154

Structuralism 57, 212, 545, 733, 1278, 1631, 1637, 1695 Structure ⫺ discourse structure 16, 183, 192⫺193, 695, 1149, 1256⫺1257, 1384, 1393⫺1394, 1398, 1460, 1467, 1544, 1546, 1586, 1641, 1770⫺1771, see also Discourse ⫺ emergent structure 786, 789, 2095 ⫺ image-schematic structure 756, 759, 777, 1585, 1718, 1721, 1727, see also Schema ⫺ information structure 695, 1108, 1216, 1953 ⫺ interactive structure 1106, 1317 ⫺ intonational structure 1012, 1156, 1363, see also Prosody ⫺ kinesic structure 57, 1039, see also Kinesics ⫺ language structure 57, 100, 128, 466, 1147, 1150, 1470, 2174, see also Language ⫺ linguistic structure 71, 128, 152, 183, 205, 215, 402, 566, 580, 590, 690, 710, 727, 735⫺736, 761, 776, 1039, 1303, 1365, 1699, 1714, 2127, 2129, 2145 ⫺ narrative structure 596, 2085, see also Narration ⫺ temporal structure 723, 1040, 1395, 1861, 1930, 2068, 2096 ⫺ salience structure 70, 1108, 1114 ⫺ sentence structure 117, 262, 762, 773, 1725, 2013 ⫺ sequential structure 153, 603⫺604, 694, 711, 1049, 1081, 1093, 1100, 1578, 1791, 2128, see also Sequentiality ⫺ spatial structure 762, 1713, 1727, 1735, 1811 ⫺ structure of speech 55, 287, 401, 786 ⫺ syntactic structure 65, 131, 593, 709, 733, 742, 760, 812, 999, 1109, 1187, 1363, 1365, 1375, 1462, 1467, 1658, 1662, 1664⫺1665, 1670, 1791, 1835, 2011, 2143, 2145, 2168, see also Syntax Synchronization 139, 142, 490, 577, 865, 1301, 1304⫺1305, 1307⫺1308, 1338, 1433⫺1436, 1463, 1467, 1902, 1945, 1947, 1950, 2031, 2043⫺2044, 2067 Synchrony ⫺ interaction synchrony 1301⫺1305, 1307⫺1308, 1355, see also Interaction ⫺ self-synchrony 1303, 1385, 1406 ⫺ temporal synchrony 1378, see also Temporality

Syntax 65, 129⫺130, 162, 183, 220, 280, 369, 386⫺387, 397, 458, 463, 475, 480, 496⫺498, 500, 504, 506, 508, 530, 580, 590⫺591, 593⫺594, 671, 689, 703, 735, 740, 742, 745, 761, 954, 957, 1101, 1107⫺1110, 1114, 1128, 1362⫺1363, 1365, 1372, 1384, 1459, 1477, 1563, 1578, 1595, 1619, 1650, 1658, 1662⫺1664, 1668⫺1669, 1714, 1791, 1797, 1849, 1852, 1859, 1888, 2011, 2127, 2170⫺2171, 2174 ⫺ mixed syntax 1477, 1663⫺1664 ⫺ direct object 140⫺141, 177, 2142⫺2143 System ⫺ coding system 173, 210, 560, 564, 611, 880⫺883, 885, 887⫺889, 894⫺895, 920⫺927, 1000⫺1001, 1023⫺1029, 1030, 1040, 1053⫺1054, 1082, 1084, 1099, 2151, see also Annotation, Notation ⫺ dynamic systems 160, 162, 168, 790 ⫺ tracking systems 858⫺861, 865⫺866, 877, 1421 ⫺ functional system 240, 243⫺246, 249⫺250, 255⫺256, see also Function

T Taboo ⫺ eating taboo 1161⫺1163 ⫺ left-hand taboo 1161⫺1162 ⫺ pointing taboo 1161⫺1163, 1169, see also Pointing Teacher 68, 102, 126, 254⫺255, 294, 322, 333, 360, 617, 643, 727, 757, 769, 788, 796⫺ 797, 837, 957, 988, 1168, 1224, 1263, 1314, 1318, 1426⫺1430, 1486⫺1487, 1516, 1705, 1710, 1725, 1752, 1774, 1806, 1829, 1837⫺ 1838, 1871⫺1872, 1931, 1946, 2073 Temporality 218, 367, 580, 582, 585, 987, 1306, 1308, 2049, 2068, 2072, 2084, 2089, 2096, 2101, 2107, 2120⫺2121 ⫺ temporal dimension 287⫺288, 290, 292, 1652, 2068, 2082, 2084 ⫺ temporal flow 49, 2097, 2108 ⫺ temporal unfolding 804, 809, 1775, 2051, 2067, 2071, 2078, 2085 ⫺ temporal order 501, 943, 949, 954, 956 The Americas ⫺ Andes 1185, 1784 ⫺ Arapaho 1216⫺1224, see also The Americas ⫺ Aymara 1182, 1184⫺1185, 1784⫺1785

⫺ Brazil 87⫺90, 235, 435, 1176⫺1178, 1182⫺1183, 1187, 1189, 1191, 1193, 1785 ⫺ Great Plains 1190, 1216 ⫺ Mayan 1182, 1206⫺1207, 1209, 1211⫺1212 ⫺ Pastaza Quechua 1183, 1185⫺1186, 1194⫺1195, 1197, 1199, 1201⫺1204, see also Quechua ⫺ South America 1177⫺1178, 1180, 1182, 1191, 1195, 1197, 1199, 1201, 1203, 1284, 1519, 1792 ⫺ Tupian 1182, 1187, 1189 ⫺ Upper Xingu 1189⫺1190 Theater 306⫺309, 311, 313, 315, 317, 319, 329, 331, 333, 335, 337, 339, 341, 349, 368, 371, 424, 428, 430, 433, 442⫺443, 537, 942, 945, 1227, 1229, 1231, 1262, 1272⫺1276, 1298, 1440⫺1451, 1793⫺1795, 1798⫺1799, 1813⫺1815, 2062, 2071⫺2078, 2081, 2084, 2094 Thinking for Speaking 3, 32, 39, 142, 508, 710, 713, 1689, 1735, 1740, 1766, 1870, 1876⫺1885 Thought-Language-Hand Link 52, 494⫺495, 2031, 2035, 2038, 2041, 2043, 2046 Time ⫺ time-telling 243, 258, 416 ⫺ timed-event 895, 899⫺900 Touch 3, 85, 91, 170, 174⫺175, 243⫺244, 255, 290⫺292, 297⫺298, 325⫺327, 330, 337, 343, 347, 349, 355, 368, 435, 514, 526, 565, 611⫺612, 619, 621, 627, 632⫺633, 639, 651, 663, 678, 680, 686, 877, 910, 948, 951, 1024, 1044, 1054, 1069, 1086, 1171⫺1172, 1174⫺1175, 1212, 1267⫺1268, 1270, 1375, 1434, 1455, 1477, 1490, 1504, 1506, 1508, 1510, 1512⫺1514, 1547, 1583, 1697, 1737, 1748, 1755⫺1756, 1758, 1909⫺1910, 1972, 1986, 2000, 2026, 2050 Transcription 39, 109⫺110, 152, 284⫺285, 476, 498, 570, 597, 604, 648⫺649, 656, 866, 993⫺994, 996, 998⫺1004, 1008, 1010⫺1011, 1016, 1019, 1038⫺1039, 1041⫺1054, 1080, 1082, 1094, 1099⫺1100, 1115, 1125⫺1131, 1133⫺1134, 1277, 1297, 1319⫺1320, 1324⫺1325, 1340, 1478, 1555, 1664, 1680 Trust 359, 959, 967, 2031 Tupian 1182, 1187, 1189, see also The Americas Typification 403, 733, 735⫺736, 740, 742, 751, 1619⫺1622, 1626, 1650



U Unit ⫺ action unit 653, 920⫺925, 927, 1001, 1343, see also Action ⫺ coding unit 885, 888 ⫺ cognitive unit 153, 487, 634, 2029, 2035, see also Cognition ⫺ composite unit 668, 673 ⫺ discourse unit 148, 1293, 1393, 2140, see also Discourse ⫺ expressive movement unit 2086, 2088, 2116⫺2117, 2119⫺2122 ⫺ expressive unit 653⫺655 ⫺ idea unit 31⫺32, 139, 154, 481⫺482, 485, 498, 725, 1008, 1013, 1110, 1546, 2028, 2030, 2038⫺2039 ⫺ interactional unit 897, see also Interaction ⫺ intonation unit 241, 594, 695, 1015, 1048⫺1050, 1101, 1106⫺1108, 1361, 1664, see also Prosody ⫺ kinesic unit 671, see also Kinesics ⫺ lexical unit 1126, 1134, 1176, 1563, 1772⫺1773, 2144 ⫺ psychological unit 137, 140, 161 ⫺ semantic unit 19⫺20, 139, see also Semantics ⫺ speech unit 398, 804, 808, 811, 1103 ⫺ tone unit 10⫺11, 1051, 1360⫺1362, 1383, 1385, see also Prosody ⫺ utterance unit 2009 Universality 369⫺370, 373, 393, 496, 921, 1285, 1287, 1850, 2018

V Validity 242, 351, 441, 443, 507, 558⫺559, 686, 849, 880⫺883, 885, 887⫺889, 904, 906, 910, 914, 923, 939, 968, 976, 995, 1030⫺ 1031, 1248, 1631, 1914, 1918, 1993, 2008, 2046 Variant 63, 88, 94, 665, 667, 670⫺671, 673, 737, 758, 777, 927, 929, 998, 1039, 1041, 1094, 1134, 1266⫺1267, 1283, 1474, 1476, 1478⫺1479, 1505⫺1507, 1517, 1520, 1534⫺ 1536, 1560⫺1566, 1568⫺1570, 1576, 1578⫺ 1579, 1585, 1592, 1595⫺1596, 1598⫺1599, 1605⫺1606, 1611⫺1615, 1634, 1758, 1810, 1984, 1986, 2003 Variation 94, 159, 228, 230, 234, 260, 302, 304⫺306, 310, 312, 381, 402, 421, 457, 499,

559, 681, 710, 717⫺718, 721⫺722, 810, 839, 850, 877, 937, 1053, 1066, 1127, 1147, 1242, 1269, 1287, 1383, 1386, 1422, 1496, 1498, 1500, 1505, 1512⫺1513, 1517⫺1519, 1541, 1566, 1568, 1595, 1611, 1615, 1633⫺1634, 1845, 1862, 1899, 1991, 2155 Verb 14, 19⫺20, 30⫺33, 35, 38⫺39, 49, 65, 69, 118, 121, 140⫺141, 147⫺148, 152⫺153, 159, 192, 385⫺387, 429, 496, 508, 709, 726, 758, 769⫺770, 772⫺774, 776, 789, 812, 815, 859, 1183⫺1184, 1186⫺1188, 1198, 1217⫺ 1218, 1235⫺1236, 1396, 1484, 1569, 1607, 1615, 1647, 1662, 1665, 1667⫺1669, 1671⫺ 1673, 1680, 1725⫺1726, 1735, 1754, 1756⫺ 1757, 1790, 1844, 1876, 1878⫺1882, 1884, 1900, 1905, 1936, 2023, 2130, 2133⫺2134, 2142⫺2144, 2157, 2166, 2174, see also Syntax Viewpoint ⫺ character viewpoint 43⫺45, 174, 192, 247, 488, 544, 767, 1698, 1721, 1734, 1737, 1751, 2031, 2035, 2037 ⫺ observer viewpoint 43⫺44, 192, 247, 544, 763⫺764, 768, 1497, 1698, 1721, 1733, 1736, 2031 Virtual Reality 866, 869, 871, 873, 875 Visualization 246, 248, 319, 322, 864, 866, 877, 1016, 1019, 1131, 1227, 1950, 2076, 2087, 2120

W Wolof 231, 983, 1171⫺1172, 1174⫺1175, see also Africa Word ⫺ word family 1631⫺1633, 1637 ⫺ word order 122, 386⫺387, 703, 1237, 1429, 1633, 1870⫺1871, 2133, 2142

Y Yemenite Jew 320, 322⫺323, 329, see also Jewish

Z Zulu 1147, 1150⫺1152, 1156, see also Africa