Body – Language – Communication HSK 38.1
Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science (HSK)
Founded by Gerold Ungeheuer (†); co-edited 1985–2001 by Hugo Steger
Edited by Herbert Ernst Wiegand
Volume 38.1
De Gruyter Mouton
Body – Language – Communication An International Handbook on Multimodality in Human Interaction
Edited by Cornelia Müller Alan Cienki Ellen Fricke Silva H. Ladewig David McNeill Sedinha Teßendorf Volume 1
De Gruyter Mouton
ISBN 978-3-11-020962-4
e-ISBN 978-3-11-026131-8
ISSN 1861-5090

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available in the Internet at http://dnb.dnb.de.

© 2013 Walter de Gruyter GmbH, Berlin/Boston
Cover design: Martin Zech, Bremen
Typesetting: Apex CoVantage
Printing: Hubert & Co. GmbH & Co. KG, Göttingen
Printed on acid-free paper
Printed in Germany
www.degruyter.com
Contents
Volume 1

Introduction (Cornelia Müller) 1

I. How the body relates to language and communication: Outlining the subject matter

1. Exploring the utterance roles of visible bodily action: A personal account (Adam Kendon) 7
2. Gesture as a window onto mind and brain, and the relationship to linguistic relativity and ontogenesis (David McNeill) 28
3. Gestures and speech from a linguistic perspective: A new field and its history (Cornelia Müller, Silva H. Ladewig and Jana Bressem) 55
4. Emblems, quotable gestures, or conventionalized body movements (Sedinha Teßendorf) 82
5. Framing, grounding, and coordinating conversational interaction: Posture, gaze, facial expression, and movement in space (Mardi Kidwell) 100
6. Homesign: When gesture is called upon to be language (Susan Goldin-Meadow) 113
7. Speech, sign, and gesture (Sherman Wilcox) 125

II. Perspectives from different disciplines

8. The growth point hypothesis of language and gesture as a dynamic and integrated system (David McNeill) 135
9. Psycholinguistics of speech and gesture: Production, comprehension, architecture (Pierre Feyereisen) 156
10. Neuropsychology of gesture production (Hedda Lausberg) 168
11. Cognitive Linguistics: Spoken language and gesture as expressions of conceptualization (Alan Cienki) 182
12. Gestures as a medium of expression: The linguistic potential of gestures (Cornelia Müller) 202
13. Conversation analysis: Talk and bodily resources for the organization of social interaction (Lorenza Mondada) 218
14. Ethnography: Body, communication, and cultural practices (Christian Meyer) 227
15. Cognitive Anthropology: Distributed cognition and gesture (Robert F. Williams) 240
16. Social psychology: Body and language in social interaction (Marino Bonaiuto and Fridanna Maricchiolo) 258
17. Multimodal (inter)action analysis: An integrative methodology (Sigrid Norris) 275
18. Body gestures, manners, and postures in literature (Fernando Poyatos) 287

III. Historical dimensions

19. Prehistoric gestures: Evidence from artifacts and rock art (Paul Bouissac) 301
20. Indian traditions: A grammar of gestures in classical dance and dance theatre (Rajyashree Ramesh) 306
21. Jewish traditions: Active gestural practices in religious life (Roman Katsman) 320
22. The body in rhetorical delivery and in theater: An overview of classical works (Dorota Dutsch) 329
23. Medieval perspectives in Europe: Oral culture and bodily practices (Dmitri Zakharine) 343
24. Renaissance philosophy: Gesture as universal language (Jeffrey Wollock) 364
25. Enlightenment philosophy: Gestures, language, and the origin of human understanding (Mary M. Copple) 378
26. 20th century: Empirical research of body, language, and communication (Jana Bressem) 393
27. Language – gesture – code: Patterns of movement in artistic dance from the Baroque until today (Susanne Foellmer) 416
28. Communicating with dance: A historiography of aesthetic and anthropological reflections on the relation between dance, language, and representation (Yvonne Hardt) 427
29. Mimesis: The history of a notion (Gunter Gebauer and Christoph Wulf) 438

IV. Contemporary approaches

30. Mirror systems and the neurocognitive substrates of bodily communication and language (Michael A. Arbib) 451
31. Gesture as precursor to speech in evolution (Michael C. Corballis) 466
32. The co-evolution of gesture and speech, and downstream consequences (David McNeill) 480
33. Sensorimotor simulation in speaking, gesturing, and understanding (Marcus Perlman and Raymond W. Gibbs) 512
34. Levels of embodiment and communication (Jordan Zlatev) 533
35. Body and speech as expression of inner states (Eva Krumhuber, Susanne Kaiser, Arvid Kappas and Klaus R. Scherer) 551
36. Fused Bodies: On the interrelatedness of cognition and interaction (Anders R. Hougaard and Gitte Rasmussen) 564
37. Multimodal interaction (Lorenza Mondada) 577
38. Verbal, vocal, and visual practices in conversational interaction (Margret Selting) 589
39. The codes and functions of nonverbal communication (Judee K. Burgoon, Laura K. Guerrero and Cindy H. White) 609
40. Mind, hands, face, and body: A sketch of a goal and belief view of multimodal communication (Isabella Poggi) 627
41. Nonverbal communication in a functional pragmatic perspective (Konrad Ehlich) 648
42. Elements of meaning in gesture: The analogical links (Geneviève Calbris) 658
43. Praxeology of gesture (Jürgen Streeck) 674
44. A "Composite Utterances" approach to meaning (N. J. Enfield) 689
45. Towards a grammar of gestures: A form-based view (Cornelia Müller, Jana Bressem and Silva H. Ladewig) 707
46. Towards a unified grammar of gesture and speech: A multimodal approach (Ellen Fricke) 733
47. The exbodied mind: Cognitive-semiotic principles as motivating forces in gesture (Irene Mittelberg) 755
48. Articulation as gesture: Gesture and the nature of language (Sherman Wilcox) 785
49. How our gestures help us learn (Susan Goldin-Meadow) 792
50. Coverbal gestures: Between communication and speech production (Uri Hadar) 804
51. The social interactive nature of gestures: Theory, assumptions, methods, and findings (Jennifer Gerwing and Janet Bavelas) 821

V. Methods

52. Experimental methods in co-speech gesture research (Judith Holler) 837
53. Documentation of gestures with motion capture (Thies Pfeiffer) 857
54. Documentation of gestures with data gloves (Thies Pfeiffer) 868
55. Reliability and validity of coding systems for bodily forms of communication (Augusto Gnisci, Fridanna Maricchiolo and Marino Bonaiuto) 879
56. Sequential notation and analysis for bodily forms of communication (Augusto Gnisci, Roger Bakeman and Fridanna Maricchiolo) 892
57. Decoding bodily forms of communication (Fridanna Maricchiolo, Angiola Di Conza, Augusto Gnisci and Marino Bonaiuto) 904
58. Analysing facial expression using the facial action coding system (FACS) (Bridget M. Waller and Marcia Smith Pasqualini) 917
59. Coding psychopathology in movement behavior: The movement psychodiagnostic inventory (Martha Davis) 932
60. Laban based analysis and notation of body movement (Antja Kennedy) 941
61. Kestenberg movement analysis (Sabine C. Koch and K. Mark Sossin) 958
62. Doing fieldwork on the body, language, and communication (N. J. Enfield) 974
63. Video as a tool in the social sciences (Lorenza Mondada) 982
64. Approaching notation, coding, and analysis from a conversational analysis point of view (Ulrike Bohle) 992
65. Transcribing gesture with speech (Susan Duncan) 1007
66. Multimodal annotation tools (Susan Duncan, Katharina Rohlfing and Dan Loehr) 1015
67. NEUROGES – A coding system for the empirical analysis of hand movement behaviour as a reflection of cognitive, emotional, and interactive processes (Hedda Lausberg) 1022
68. Transcription systems for gestures, speech, prosody, postures, and gaze (Jana Bressem) 1037
69. A linguistic perspective on the notation of gesture phases (Silva H. Ladewig and Jana Bressem) 1060
70. A linguistic perspective on the notation of form features in gestures (Jana Bressem) 1079
71. Linguistic Annotation System for Gestures (LASG) (Jana Bressem, Silva H. Ladewig and Cornelia Müller) 1098
72. Transcription systems for sign languages: A sketch of the different graphical representations of sign language and their characteristics (Brigitte Garcia and Marie-Anne Sallandre) 1125
Introduction

1. Why a handbook on body, language, and communication?

The handbook offers an encompassing account of the current state of the art in an emerging and highly interdisciplinary field. Given its scope and size, the book has the character of an encyclopedia. It introduces fundamental concepts, theories, and empirical methods, and it documents what is known about the forms and functions of the body as a modality that goes hand in hand with speech in face-to-face communication.

Why do we need a handbook on the body in relation to language and communication? Do we need one at all? We think that, yes, indeed the time is ripe to direct scholarly attention to the very nucleus of human communication: the face-to-face situation. Whenever we speak with each other, it is not only through words; bodily movements are always involved, and they are so closely intertwined with language that they sometimes become part of language or even become language themselves – as is the case in sign languages all around the world. Face-to-face communication is by nature multimodal; it is the nucleus of communication, and it is here where language in onto- and phylogenesis emerges. It is here where the "modern mind" evolves and where intersubjectivity appears on the evolutionary stage (Donald 1993; Tomasello 2000). Other forms of communication, such as writing or talking on the phone, are ultimately derived from this communicative practice. This is one reason for devoting a handbook to these primary forms of interpersonal communication.

Yet this is not the only reason why the relation of the body to language and communication has become a focus of interest in a variety of disciplines such as linguistics, psychology, cognitive science, anthropology, sociology, semiotics, literature, computing, and engineering. The main – albeit mostly not explicitly recognized – triggering force is the contemporary availability of the microscopes of face-to-face communication: film and video. Video technology, being affordable and even more common nowadays than audio recording, has turned into the default medium for documenting face-to-face communication, and it is the specifics of this instrument that have literally created new interests. More and more scholars, from neurology to linguistics, have realized that speakers use their bodies, their hands, arms, and faces when they speak, and they are becoming aware of this because they video-record communication rather than just capturing the audio portion. It is with this discovery that new questions arise which constitute the focus of this handbook: what do these movements mean, how do we analyze them, and how do we classify and annotate them? Body – Language – Communication aims at bringing together the available knowledge to answer these pertinent questions.

Thus it is ultimately the "microscope" of video and film that has triggered the sudden increase of scholarly attention, from a broad range of disciplines, to particular facts – facts that, before the availability of an "objective" documentation of verbal and bodily forms of communication in real-time conditions and natural contexts of use, were simply not recognized as pertinent features of language and communication at all – not even in conversation and discourse analysis. This "microscope" is the prerequisite for empirically grounded, scientific research on the body in relation to language and communication.
One consequence of this development is the sudden increase in interest in the bodily aspects of language and communication, a phenomenon that is apparent from the escalation in the number of research projects, in fields ranging from artificial intelligence to media studies and conversation analysis, on topics which are being subsumed under the term "multimodality".

The handbook offers a perspective on the body as "part" and "partner" of language and communication. In this way it contributes to some of the current key issues of the humanities and the natural sciences: the multimodality of language and communication, and the notion of embodiment as a resource for meaning-making and conceptualization in language and communication. It overcomes the longstanding dichotomy represented in the concepts of verbal and nonverbal communication, and promotes the incorporation of the body as an integral part of language and communication. With this perspective, the handbook documents the bodily and embodied nature of language as such.

We should underline that nonverbal communication studies are products of a fundamentally different concept of how bodily and linguistic forms of communication cooperate in communication. Nonverbal communication studies focus on the social-psychological dimensions of bodily communication and have basically separated the body from language. Informed by Watzlawick's dichotomy of analogic and digital forms of communication and the functional attributions of "social relation" versus "linguistic content", nonverbal communication research has devoted most of its interest to research on social and affective facets of bodily forms of communication (Watzlawick, Bavelas and Jackson 1967). The claim in this approach is that the verbal part of the message is what carries content, while the nonverbal part does not; it conveys only affective and social meaning. This theory has inspired highly important strands of research, among them: the very rich field of studying facial expressions of affect and emotion (Ekman and Davidson 1994; Ekman and Rosenberg 1997) and, in this context, the study of forms of deceit and nonverbal leakage; movement analysis as a measure of psychic integration and disintegration (Davis 1970; Davis and Hadiks 1990; Lausberg, von Wietersheim and Feiereis 1996; Lausberg 1998); and fields of study concerning issues of gender, culture, and social status.

It is not by accident that the analysis of hand movements or gestures plays a minor role in nonverbal communication studies. Gestures were recognized as being not non-verbal enough to be considered of interest for nonverbal communication research (see the debate on the "verbal" or "non-verbal" status of gesture in Psychological Review: Butterworth and Hadar 1989; Feyereisen 1987; McNeill 1985, 1987, 1989). Indeed it is the close integration of gestures with speech (Beattie 2003; Cienki 1998, 2005; Cienki and Müller 2008a, 2008b; Duncan, Cassell and Levy 2007; Fricke 2007, 2012; Kendon 1972, 1980, 2004; McNeill 1992, 2005; Müller 1998, 2009) that has made those forms of bodily movement less interesting for research conducted in the spirit of nonverbal communication. And it is precisely this that makes gestures such an interesting topic for students of language proper. An obvious consequence of the particular orientation of nonverbal communication studies was that relatively little was known about human gesticulation and its integration with language and communication until very recently.
Only when the humanities shifted more significantly towards cognitive science in the 1970s and 80s did gestures begin very slowly to attract the interest of linguists, psychologists and anthropologists. The grounds were laid early on with the pioneering writings of Adam Kendon and David McNeill; they served as a basis on which a steadily increasing community of
scholars from various disciplines could build their research in the 90s on the hand movements that people make when they talk. Since then a field of gesture studies has emerged, with its own journal, book series, society, and biennial international conferences. The research carried out on human and non-human forms and uses of gestures will be widely documented in this handbook.

Hand gestures are the "articulators" that are closest to vocal language: they contribute to all levels of meaning, and they are syntactically, pragmatically, and semantically integrated with speech, forming in Adam Kendon's terms gesture-speech ensembles, and constituting in David McNeill's terms the imagistic part of language, playing a crucial role in the cognitive processes of thinking for speaking. And, as we have mentioned already, it is the hand movements that under certain circumstances may turn into a full-fledged language. Note that this is not true for the face, the torso, or the legs. Along with the vocal tract, the hands are the primary articulators that can become articulators of language. Despite the importance of gestures, however, we will underline in the handbook the fact that it is not only the hands and the vocal tract that are used to communicate: we will highlight the integration of other concomitant forms of visible action as well, such as the face, gaze, posture, and body movement and orientation. With this orientation, the handbook seeks to overcome Watzlawick's dichotomy, which has obscured the close cooperation of visible and audible forms of communication.
2. General statement of goals

The handbook gives an overview of the scope of the wide interdisciplinary field of research that addresses the relation of the body to language and communication. It gives an overview of historical as well as contemporary approaches, presents a variety of currently proposed – sometimes competing – theoretical frameworks, introduces fundamental concepts, and surveys the core controversial issues under scholarly scrutiny. It thus offers a unique tool for experienced scholars as well as for novices: on the one hand it introduces the pertinent theoretical issues under debate from the perspective of various disciplines, and on the other hand it documents the varying methodological approaches which naturally come with the different disciplines involved in this kind of research.

The handbook thus offers for the first time a truly interdisciplinary perspective on one of the most vital topics in the humanities and the natural sciences: the multimodality of human (and non-human) communication. It includes an overview of established methodological procedures for the study of body, language, and communication, including both qualitative and quantitative procedures, and it presents a systematic account of what is known regarding the structures, categories, and functions of gesture, posture, touch, gaze, facial expression, and movement in space. In addition, the handbook covers a wide variety of specific topics and phenomena without giving preference to one specific approach; it aims at providing an unbiased interdisciplinary perspective, allowing for the coexistence of competing theoretical and methodological approaches.

As a consequence of this interdisciplinary scope, the handbook also addresses scholars from a variety of different fields, including linguistics and communication as well as the cognitive sciences, psychology, neurology, and semiotics in particular, but also anthropology, sociology, literature, history, computing and engineering, and all the disciplines that share an interest in bodily forms of language and communication.
To ensure cross-disciplinary transparency, the articles in this handbook are written and conceptualized for an interdisciplinary audience. The handbook may serve both as a resource for specific questions and as a means of gaining an overview of the topics, problems, and questions discussed in the field. It may thus serve as a guideline and orientation for anyone interested in this new field of scientific interest.
3. Structure of the book

The central idea of the book is the integration of visible bodily movements with language as used in face-to-face communication (including distance communication via the new audio-visual media). The term "body" is used to refer to visible bodily movements, and the handbook documents the various dimensions of how the body relates to language and communication. Body movements are inherently intertwined with language and communication: they carry a potential for development into linguistic signs, as in sign languages, but they differ from such signs in that they constitute an integrated ensemble with vocal language. In sign languages all the functions have to be fulfilled by the visual articulators, and this has systematic consequences for the bodily signs and their interrelation. Sign languages and sign linguistics have developed into a major field of their own, and this handbook will help close the gap between research on vocal languages and research on signed languages by focussing on the visible movements of the hearing that are used in conjunction with speech.

This fundamental idea inspires the structure of Body – Language – Communication. As a consequence, articles addressing bodily signs which are "close" to language, such as gestures that are integrated with language and communication (be they spontaneous creations or conventionalized "gesture" words), play a central role. On the other hand, the handbook also includes bodily movements that are less clearly tied to linguistic forms of communication, such as bodily movements in dance or bodily movements as symptoms in clinical diagnoses. At its core the handbook addresses the "multimodality of language and communication".

The handbook Body – Language – Communication is divided into two volumes. The first volume offers the theoretical, notional, and methodological grounds of the field. The second documents what we know about forms, functions, and types of bodily movements and their cross-cultural distribution, and it offers space for the presentation of a range of specific perspectives on the body in relation to communication. It includes chapters on cultures, contexts and interactions, embodiment, cognition and emotion, and it closes with a chapter on visible body movement as sign language. In an appendix, a list of relevant organizations, links, reference publications, and periodicals is provided.

Volume I contains 5 chapters with a total of 72 articles. The first chapter of volume I outlines the subject matter of the book. It begins with the two pioneers of contemporary gesture studies, Adam Kendon and David McNeill, outlining their respective approaches to the study of gestures with speech. The chapter then proceeds with an overview of research on gestures and speech from a linguistic point of view and a documentation of conventionalized gestures, so-called emblems, then extends the scope to how all the other body parts contribute to conversation, and ends with two chapters that document hand movements in manual signed languages, beginning with home signs, i.e. sign systems evolving within one family. Here a particular emphasis is put on how gestures relate to signs in signed languages.

Chapter two outlines the relation of the body to language and communication from the perspective of various different disciplines. Multimodal communication has raised the interest of a wide range of disciplines, and this chapter gives accounts from: Psychology of Language, Psycholinguistics, Neuropsychology, Cognitive Linguistics, Linguistics, Conversation Analysis, Ethnography, Cognitive Anthropology, Social Psychology, Multimodal Interaction, and Literature.

Chapter three presents a documentation of historical and cross-cultural dimensions of research regarding the relation of body movements to language and speech. Starting from prehistoric gestures, Indian traditions of a grammar of gestures in dance, and Jewish traditions and their active gestural practices in religious life, it moves on to European scholarly treatments. It further includes articles on medieval practices of the body, on Renaissance ideas of gesture as a universal language, and on Enlightenment philosophy and the debate around gestures, language, and the origin of human understanding, and it ends with a sketch of 19th- and 20th-century research on body, language, and communication. The historical considerations of body movements as communication are concluded with contributions from the arts and philosophy: dance and the history of the notion of mimesis.

Chapter four offers an encompassing collection of contemporary approaches to how the relation between body motion and language in communication should be conceived. Notably, each author outlines his or her particular view on this subject matter, and we present here views of eminent and senior scholars as well as perspectives advanced by junior colleagues. These articles present theories of, or approaches to, the body in communication in a nutshell. Topics range from mirror systems and gesture as a precursor of speech in evolution to the social interactive nature of gestures.

Chapter five, finally, provides a valuable collection of methods for the analysis of multimodal communication. The methods included here again cover a wide range of disciplines, including quantitative as well as qualitative takes on the analysis of body movement used with and without speech.
4. References

Beattie, Geoffrey 2003. Visible Thought: The New Psychology of Body Language. London: Routledge.
Butterworth, Brian and Uri Hadar 1989. Gesture, speech, and computational stages: A reply to McNeill. Psychological Review 96(1): 168–174.
Cienki, Alan 1998. Metaphoric gestures and some of their relations to verbal metaphorical expressions. In: Jean-Pierre Koenig (ed.), Discourse and Cognition: Bridging the Gap, 189–204. Stanford, CA: Center for the Study of Language and Information.
Cienki, Alan 2005. Metaphor in the "Strict Father" and "Nurturant Parent" cognitive models: Theoretical issues raised in an empirical study. Cognitive Linguistics 16(2): 279–312.
Cienki, Alan and Cornelia Müller (eds.) 2008a. Metaphor and Gesture. Amsterdam: John Benjamins.
Cienki, Alan and Cornelia Müller 2008b. Metaphor, gesture, and thought. In: Raymond W. Gibbs, Jr. (ed.), The Cambridge Handbook of Metaphor and Thought, 483–501. Cambridge: Cambridge University Press.
Davis, Martha 1970. Movement characteristics of hospitalized psychiatric patients. In: Claire Schmais (ed.), Proceedings of the Fifth Annual Conference of the American Dance Therapy Association, 25–45. Columbia: The Association.
Davis, Martha and D. Hadiks 1990. Nonverbal behavior and client state changes during psychotherapy. Journal of Clinical Psychology 46(3): 340–351.
Donald, Merlin 1993. Origins of the Modern Mind. Cambridge, MA: Harvard University Press.
Duncan, Susan, Justine Cassell and Elena Levy (eds.) 2007. Gesture and the Dynamic Dimension of Language. Amsterdam: John Benjamins.
Ekman, Paul and Richard J. Davidson (eds.) 1994. The Nature of Emotion. Oxford: Oxford University Press.
Ekman, Paul and Erika Rosenberg (eds.) 1997. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford: Oxford University Press.
Feyereisen, Pierre 1987. Gestures and speech, interactions and separations: A reply to McNeill. Psychological Review 94(4): 493–498.
Fricke, Ellen 2007. Origo, Geste und Raum: Lokaldeixis im Deutschen. Berlin: de Gruyter.
Fricke, Ellen 2012. Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin: De Gruyter Mouton.
Kendon, Adam 1972. Some relationships between body motion and speech: An analysis of an example. In: Aaron W. Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 177–210. New York: Pergamon Press.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary Ritchie Key (ed.), Nonverbal Communication and Language, 207–227. The Hague: Mouton.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Lausberg, Hedda 1998. Does movement behavior have differential diagnostic potential? American Journal of Dance Therapy 20(2): 85–99.
Lausberg, Hedda, Jörn von Wietersheim and Hubert Feiereis 1996. Movement behaviour of patients with eating disorders and inflammatory bowel disease: A controlled study. Psychotherapy and Psychosomatics 65(6): 272–276.
McNeill, David 1985. So you think gestures are nonverbal? Psychological Review 92(3): 350–371.
McNeill, David 1987. So you do think gestures are nonverbal! A reply to Feyereisen. Psychological Review 94(4): 499–504.
McNeill, David 1989. A straight path – to where? Reply to Butterworth and Hadar. Psychological Review 96(1): 175–179.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press.
Müller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Arno Spitz.
Müller, Cornelia 2008. Metaphors Dead and Alive, Sleeping and Waking: A Dynamic View. Chicago: University of Chicago Press.
Müller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Routledge's Linguistics Encyclopedia, 214–217. Abingdon/New York: Routledge.
Tomasello, Michael 2000. The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press.
Watzlawick, Paul, Janet H. Beavin Bavelas and Don D. Jackson 1967. Pragmatics of Human Communication: A Study of Interactional Patterns, Pathologies and Paradoxes. New York: W.W. Norton.
Cornelia Müller, Frankfurt (Oder) (Germany)
I. How the body relates to language and communication: Outlining the subject matter

1. Exploring the utterance roles of visible bodily action: A personal account

1. Utterance visible action as a domain of inquiry
2. Temporal co-ordination between speech and hand, arm and head actions
3. The semantics of utterance visible actions in relation to the semantics of verbal expression
4. When utterance visible action is the main utterance vehicle
5. Broader implications
6. References
Abstract

In this essay, I offer a survey of the main questions with which I have been engaged in regard to "gesture," or, as I prefer to call it, and as will be explained below, "utterance visible action." In doing so, I hope to make clear the approach I have employed over a long period of time, in which, to put it in the most general terms, visible bodily actions used in the service of utterance are seen as a resource which can be used in many different ways and from which many different forms of expression can be fashioned, depending upon the circumstances of use, the communicative purposes for which they are intended, and how they may be used in relation to other media of expression that may be available. Accordingly, I have sought to describe the diverse ways in which utterance visible actions may be employed, their semiotic properties, and how they work as components of utterance in relation to the other components that may also be being employed.
1. Utterance visible action as a domain of inquiry

As Goffman (1963) pointed out, humans, when in co-presence and by means of visible bodily action, continually provide each other with information about their intentions, interests, feelings and ideas, whether they wish to do so or not. Within a gathering, the pattern of positions, spacing, and directions of gaze of the participants provides much information about who is engaged with whom, the nature of those engagements, and the level and nature of their involvement in the situation. Activities directed toward objects or features of the environment provide information about a person's aims, goals, and interests. There are also actions that are deemed to be expressive, however. Thus, by how people approach each other or withdraw, by patterns of action in the face, and with actions of their forelimbs, they show each other affection, disdain, indifference, concern, gratitude; they challenge or threaten one another; they submit, comply, or defy one another, or they show fear, joy, and so on.
Visible bodily action may also serve as a means of discourse, however. Either by itself, or in collaboration with speaking, visible bodily actions can be used as a means of saying something. For example, one draws attention to something by pointing at it; one may employ one's hands to describe the appearance of something or to suggest the form of a process or the structure of an idea. By means of visible bodily action one can show that one is asking a question, pleading for an answer, is in disagreement, and a host of other things, specific to the current linguistically managed interchange. There are forms of visible bodily action that can serve instead of words, and in some circumstances entire language systems are developed using only visible action. In short, there are many different ways in which visible bodily action may be employed to accomplish expressions that have semantic and pragmatic import similar to, or overlapping with, the semantic and pragmatic import of spoken utterances. This constitutes the utterance uses of visible bodily action. It is this that I shall call utterance visible action, and it corresponds to what is often referred to by the word "gesture." However, because "gesture" is also sometimes used more widely to refer to any kind of purposive action, for example the component actions of practical action sequences, or actions that may have symptomatic significance, such as self-touchings, patting the hair, fiddling with a wedding ring, rubbing the back of the head, and the like; because it is also used as a way of referring to the expressive significance of any sort of action (for example, saying that sending flowers to someone is a "gesture of affection"); and because, too, in some contexts the word "gesture" carries evaluative implications that are not always positive, it seems better to find a new and more specific term. I also think that doing so invites the undertaking, without prejudice, of a comparative semiotic analysis of all of the different ways in which visible bodily action can enter into the creation of utterances (Kendon 2008, 2010).

By "utterance" I mean any action or ensemble of actions that may be employed to provide expression to something that is deemed by participants to be something that the actor meant to express, that was expressed wilfully, that is. Goffman's distinction between information that is "given" and information that is "given off" is helpful in clarifying this (see Goffman 1963: 13–14). As he says, everything we do all of the time "gives off" information about our intentions, interests, attitudes, and the like. However, some kinds of actions are taken to have been done with the intent to express something, whether by words alone, by words combined with actions, or by visible actions alone (as in sign languages). These actions are taken to "give" information: they express what the person "meant," and the actor can be called to account for them. Actions treated by co-participants in this manner are the actions of utterance, and we may establish a domain of concern that attends to the different ways in which visible bodily action can serve as utterance action, and how it may do so (see Kendon 1978, 1981, 1990: Chapter 8; and, especially, Kendon 2004: Chapters 1–2). It is important to stress that this domain cannot be established with sharp boundaries, nor can rigid criteria be established according to which an action is or is not admitted as an utterance visible action.
There is a core of actions about which there seems to be widespread agreement that they comprise utterance visible actions. This includes waving, pointing, the use of symbolic gestures of any kind, and manual actions made while speaking ("gesticulation"), as well as actions performed in the course of creating utterances in sign language. There are always forms of action whose status is ambiguous, however. If we compare actions that tend to be accepted as being done with what we
might call "semantic intent" with those that are not so regarded, we may discover a set of features which actions may have less of or more of. The less they have of these features, the more likely they are to be disregarded, not attended to, or not counted as "meant." Sometimes this ambiguity is exploited. On occasion, someone wishing to convey something to another by means of a visible action, which they want to be understood only by that one specific other and not by anyone else who may be co-present, may alter the performance of their action so that it seems casual or has the character of a "comfort movement" or some other sort of disattendable action (for examples, see de Jorio 2000: 179–180, 185, 188, 260–261; Morris et al. 1979: 67–68, 88–89).

Comfort movements ("self-adaptors" in the terminology of Ekman and Friesen 1969) and other kinds of "mere actions" may well be studied for what they reveal as symptoms of a person's motivational or affective state, thus attracting attention from a psychological point of view (for early studies see Krout 1935; Mahl 1968). Actions considered as "meant," on the other hand, attract attention from a point of view that is closer to that of students of language and discourse. Issues of interest here include questions about the semiotic character of utterance visible actions and how they are employed as components in utterance construction. These modes of expression also raise issues for cognitive theories of language. For example, utterance visible actions are treated by some authors as if they are image-like representations of meaning (McNeill 1992, 2005). When deployed in relation to spoken language, their study may suggest how the mental representation of utterance meaning is multi-levelled and organised as a simultaneous configuration, aspects of which can be represented through utterance visible action at the same time as other aspects can be represented by means of the linear structures of spoken language. Old questions about the relationship between language and the structure of thought, debated extensively in the eighteenth century, may be re-opened in a new way through studies of utterance visible action both in speakers and in signers (Woll 2007; see also Ricken 1994).

I now turn to discuss some of the main themes which have occupied me in my work in this domain. I begin with aspects of how utterance visible action, speech, and verbal expression are related within the utterance. Then I discuss work on utterance visible action when it is the sole vehicle of utterance. This includes a study of a primary (deaf) sign language in Papua New Guinea, and a much larger study of alternate sign languages in use among Australian Aborigines. I conclude with a short survey of what I see as some of the broader implications of these studies.
2. Temporal co-ordination between speech and hand, arm and head actions

My earliest work on utterance visible action was influenced by an early exposure to ethology. I took an interest in the organization of human communication conduct as it may be observed in human co-present interaction. I appreciated very much the fine-grained observation Erving Goffman pioneered in his work on human interaction, and I wanted to investigate what he called the "small behaviors" of spacing and posture, of glances and spoken utterances, of hand actions and head movements – the observable stuff out of which occasions of interaction are fashioned (Goffman 1967: 1). Among other things, this led me to the work of Ray Birdwhistell (see Birdwhistell 1970; Kendon 1972a), who offered very interesting observations on how movements
of the body, especially of the head and face, patterned in relation to aspects of speech. In consequence of this, and adopting methods I had learned from an association with William Condon (Condon and Ogston 1966, 1967; Condon 1976), I undertook a detailed analysis of the bodily action that could be observed in a two-minute film showing a continuous discourse by a man who was engaged in an informal discussion of "national character" in a London pub. In a paper published in 1972 (Kendon 1972b), which reported this analysis, I described how, in association with each "tone unit" (Crystal 1969) in the spoken discourse, one could observe a contrasting pattern of bodily action. Patterns of action of shorter duration might be accompanied by other contrasting patterns of longer duration – so one could say that the movement flow was organized at multiple levels simultaneously. To a considerable degree these multiple levels in the movement flow corresponded to the several different levels of organization in terms of which the flow of speech could be analyzed. I was led to suggest – to quote from the paper – that "the speech production process is manifested in two forms of activity simultaneously: in the vocal organs but also in bodily movement, particularly in movements of the hands and arms" (Kendon 1972b: 205).

In the aforementioned 1972 study, I attempted to deal with all observable movements – the fingers and hands and arms, the shifts in positionings of the trunk, changes in orientation of the head. From this it appeared that the larger segments of discourse were bracketed by sustained changes in posture or new orientations of the head or repeated patterns of head action, while shorter segments of discourse, down to the level of the tone unit and even syllables within the tone unit, were associated with phrases of movement of shorter duration. This was in accord with observations that Birdwhistell and his colleague Albert Scheflen had summarised in earlier publications (see Scheflen 1965). From this single study I concluded that the "utterance" manifested itself in two aspects simultaneously – in speech and in visible bodily action (Kendon 1980a).

In subsequent work, my attention focused more upon speakers' hand actions. Such actions, as is well known, had been in the past, as they have been since, the principal focus of interest in studies of "gesture." There is good reason for this. After all, as Quintilian noted some 2000 years ago, of all the body parts that speakers move, the hands are closest to being instruments of speaking. In his discussion of the role of visible bodily action in Delivery (Actio), he writes that while "other parts of the body merely help the speaker … the hands may almost be said to speak" (see Quintilian, Institutio Oratoria, Book XI, iii. 86–89, in Butler 1922). Subsequent to my 1972 publication, I developed a terminology and a scheme for analyzing the organization of speakers' hand movements and offered some general observations on how these relate to speech (Kendon 1980a). These suggestions were restated and slightly revised later (Kendon 2004: Chapter 7). The slight modifications in terminology given in this revision are reflected in what follows here.
As a starting point I noted how the forelimb movements of utterance visible actions are organised as excursions – the hand or hands are lifted away from a position of rest (on the body, on the arm of a chair, etc.), move out into space and perform some action, and thereafter return to a position of rest, often very similar to the one from which they started. This entire excursion, from position of rest to position of rest, I called a Gesture Unit. Within such an excursion the hand or hands might perform one or more actions – pointing, outlining or sculpting a shape, performing a part of an action pattern, and
so on. This action was called the stroke. Whatever the hand or hands did to organize themselves for this action was called the preparation. The preparation and stroke, taken together, I referred to as a Gesture Phrase, with the stroke and any subsequent sustained position of the hand considered as the nucleus of the Gesture Phrase. Once the hand or hands began to relax, the Gesture Phrase was finished. The hand (or hands) might then start upon a new preparation, in which case a new Gesture Phrase begins, or it might go back to a position of complete rest, in which case the Gesture Unit would be finished. The distinction between the Gesture Unit, the more inclusive unit, and the Gesture Phrase was necessary, for in this way a succession of Gesture Phrases within the frame of a single Gesture Unit could be accommodated. As we had shown (in Kendon 1972b), the nested hierarchical relationship between Gesture Unit and Gesture Phrases corresponded to the nested hierarchical relationship between tone unit groupings at various levels within the spoken discourse. Just as spoken discourse is organized at multiple levels simultaneously, so this appears to be true of associated utterance visible actions of the forelimbs.

Examination of how these Gesture Phrases were organized in relation to their concurrent tone units suggested that the stroke of the Gesture Phrase tended to anticipate slightly, or to coincide with, the tonic center of the tone unit. Looking at the form of action in the stroke and what it seemed to express, it seemed that there was a close coordination between the meanings attributed to the action of the stroke and the meaning being expressed in the associated tone unit. This did not mean that the meanings attributed to the forms of action in the Gesture Phrases were always the same as the meanings expressed in the associated speech. Rather, it meant that there was generally a semantic coherence between them (McNeill 1992 has called this "co-expression"). Sometimes these meanings seemed to parallel verbal meaning, but they often seemed to complement it or add to it in various ways. Uttering, that is, could be done both verbally and kinesically in coordination. This gave rise to the general observation that, somehow, expression in words and expression in visible bodily action are intimately related. These conclusions were, in part, confirmed in independent observations by McNeill and were incorporated and re-stated by him in his book Hand and Mind (McNeill 1992).

Subsequent to this demonstration that a speaker, in using the hands in this way, does so as an integral part of the utterance, I began to investigate the different ways in which these hand actions could be deployed in relation to the speech component of the utterance. From this it appeared that the utterer can be flexible in how this is done. The coordinate use of the two modes of expression is orchestrated in relation to whatever might be the speaker's current rhetorical aim. Thus we described examples where the speaker delayed speech so that a kinesic expression could be foregrounded or completed, examples in which the speaker delayed a kinesic expression so that it could be placed appropriately in relation to what was said, and yet other examples showing how the speaker, though repeating the same verbal expression, employed a different kinesic expression with each repetition. These observations were presented in Chapter 8 of Kendon (2004).
I took them as supporting the view that these manual actions “should be looked upon as fully fashioned components of the finished utterance, produced as an integral part of the ‘object’ that is created when an utterance is fashioned” (Kendon 2004: 157).
3. The semantics of utterance visible actions in relation to the semantics of verbal expression

Utterance visible actions, especially those of the forearms, are generally seen as being done, as we have put it, with "semantic intent." That is, they are seen as actions done by the actor as part of an effort to express meanings. These actions differ widely in terms of the extent and nature of the meanings attributed to them. In most speaking communities, probably in all, there exist shared vocabularies of kinesic expressions which are used with shared meanings. These have, in some cases, been separately described, as if they constitute a distinct class or category. In such cases they have been termed "emblems" (Ekman and Friesen 1969), "symbolic gestures" (Morris et al. 1979) or "quotable gestures" (Kendon 1992). Dictionaries of them have also been attempted (see Meo-Zilio and Mejia 1980–1983 for one of the largest of these). These highly conventional forms are used by speakers in various contexts (unfortunately this has received little systematic attention, but see Sherzer 1991 and Brookes 2001, 2004, 2005). However, even when speakers are not making use of forms with "quotable" meanings, the forms of action they employ still convey meaning in various ways and are governed, to varying degrees, by social conventions. The most well-documented demonstration of this point remains, remarkably, that of David Efron in 1941 (there is no later comparable comparative study; see Efron 1972).

An important question for investigation is how these meanings of utterance visible actions (whether or not they are "quotable") may interact with meanings expressed verbally, and what consequences this interaction may have for how the utterance is understood by others. From the point of view of how the meaning of a speaker's utterance may be interpreted, concurrent or associated utterance visible actions, in virtue of their own meanings and in interaction with what is expressed in words, can extend, enrich, supplement, or complement spoken meaning in various ways and in respect to various aspects and levels of meaning. In a preliminary discussion of this, I suggested five main ways in which these actions may do so (Kendon 2004: 158 et seq.). These may be termed referential, in which the kinesic expression contributes to the referential or propositional meaning of what is being uttered; operational, in which the kinesic expression operates in relation to what is being expressed verbally, as when it confirms it, denies it, or negates it; performative, in which the kinesic action expresses or makes manifest the illocutionary force of the utterance, as in showing whether a question is being asked, a request or an offer is being made, and the like; modal, in which the action provides an interpretative frame for what is being expressed verbally, as in indicating that what the speaker is saying is a quotation, is hypothetical, is to be taken literally, or is to be taken as a joke, and so forth; and parsing or punctuational, where the utterance visible action appears to mark off different segments or components of the discourse, providing emphasis, contrast, parenthesis, and the like, or where it marks up the discourse in relation to aspects of its structure such as theme-rheme or topical focus (see also Kendon 1995). We now give more detail on each of these different functions in turn.
3.1. Referential

There are two ways in which visible actions can contribute to referential or propositional meaning. One way is by pointing. Here the actor, by pointing at something,
can establish what is pointed at as the referent of some deictic expression in the discourse. In a study of pointing (Kendon and Versante 2003; Kendon 2004: Chapter 11), the different hand shapes used when pointing were described (six different forms were identified), and the discourse contexts in which they were used were examined. It emerged that different hand shapes are used in pointing according to the way in which speakers use the referent of the pointing in their discourse. For example, if it was important that the speaker's recipients distinguish one specific object pointed at from another ("Over there you see St. Peters, then to the left you see the Old Vicarage"), the extended index finger was the commonest hand shape. On the other hand, if the speaker referred to something because it is an example of a category ("you see there a fine example of a war memorial"), because the speaker makes a comment about it, or because it is something which has features the speaker's recipients are to take note of ("you can see again the quality of the building in this particular case"), the speaker is more likely to use a hand in which all fingers are extended and held together, palm oriented vertically or upwards. That is, the shape and orientation of the hand employed in pointing is chosen according to how the speaker is treating, in the spoken discourse, the referent of the pointing action. This may reflect a more general feature of utterance visible actions, which is that, very often, they are derived forms of actions made when operating upon or manipulating the objects which they refer to, whether these be literal or metaphorical. When we talk about things, we conjure them up as objects in a virtual presence, and with our hands we may manipulate them in various ways, pushing them into position, touching them as we speak of them, arranging them in relation to one another spatially, and so on (for a view not unrelated to this, see Streeck 2009).

The other way for actors to use their hands in relation to the referential content of their discourse is to use them to do something which itself has referential meaning. These actions may be highly conventionalized, recognized as having quite specific or restricted meanings (often directly expressible in a word or phrase that is regarded as having an equivalent meaning), or they may be forms of action by which a sketch or diagram of some object is provided, by which some pattern of action is depicted, or which provide a movement image analogous to the dynamic character of a process or mode of action. In a survey of numerous recordings of unscripted conversations in various settings, I distinguished six different ways in which visible actions could, in this manner, participate in the referential meaning of the speaker's discourse (Kendon 2004: 158–198). These may be summarized as follows:

(i) A manual expression with a "narrow gloss" ("quotable gesture") is used simultaneously with a word that has an identical or very similar meaning. In Naples, in Italy, where I collected recordings of conversations, it was not uncommon to observe how, from time to time, such expressions were used in the course of talk, so that it was as if the speaker uttered the same word simultaneously in speech and kinesically. A speaker explaining that nowadays in Naples there were too many thieves uttered the word "ladri" (thieves) and used a manual expression which is always glossed with this word.
Again, as a speaker says “money” he rubs the tip of his index finger against the tip of his thumb in an action always glossed as “money.” Yet again, as a (British) speaker describes her job and
says "I do everything from the accounts, to the typing, to the telephone, to the cleaning," as she says "typing" and "telephone" and "cleaning," she does an action, in each case a conventional form, often glossed with the same words that she utters (see Kendon 2004: 178 for these examples). In such cases the semantic relationship between the two modalities appears to be one of complete redundancy. However, a study of the contexts in which this occurs, taking into consideration how the action is performed, suggests that there are various effects speakers achieve by using such narrow gloss expressions in this way. More attention to this kind of use of kinesic expressions is needed.
(ii) Kinesic expressions with a narrow gloss may also be used in parallel with verbal expressions in such a way that they are not semantically redundant but make a significant addition to the content of what the speaker is saying. For example, a city bus driver (in Salerno, Italy), describing the disgraceful behavior of boys on the buses, adds that they behave this way in front of girls, who are not in the least upset, saying that they, too, are happy about it. As he says this, he holds both hands out, index fingers extended, positioned so that the two index fingers are held parallel to one another. In this way he adds the comment that boys and girls are equal participants in this activity, using here a kinesic expression glossed as "same" or "equal," among other meanings given it (de Jorio 2000: 90). Kendon (2004: 181–185) describes this and several other examples.
(iii) Kinesic expressions may be used to make more specific the meaning of something that is being said in words. For example, it is common to observe how an enactment, used in conjunction with a verb phrase, appears to make the meaning of the verb phrase much more specific. In one instance, a speaker speaks of how someone used to "throw ground rice" over ripening cheeses to dry off the cheeses' "sweat." As he says "throw," he shapes his hand as if it is holding powder and does a double wrist extension, as if doing what you would do if you were to scatter a powder over a surface. In this way the actions referred to by the verb "throw" are given a much more specific meaning (Kendon 2004: 185–190).
(iv) Hand actions may be used to create the representation of an object of some kind. This may be deployed in relation to what is being said as if it is an exemplar or an illustration of it. For example, a speaker is explaining how, in a new building being discussed, a security arrangement will include "a bar across the double doors on the inside." As he says "bar," he lifts up his two hands and moves them apart with a hand shape that suggests he is molding a wide horizontal elongate object. As the speaker talks about an object, he uses his hands to create it, as if to bring it forth as an exhibit or illustration (Kendon 2004: 190–191).
(v) Hand actions are often used either as a way of laying out the shape, size, and spatial characteristics or relationships of an object being referred to, or as a way of exhibiting patterns of action which provide either visual or motoric images of processes (Kendon 2004: 191–194).
(vi) Hand actions can also be employed to create objects of reference for deictic expressions. For example, a speaker described a Christmas cake and said it was "this sort of size," using his extended index fingers to sketch out a rectangular area over the table in front of him, thus enabling recipients to envisage a large rectangular object lying on the table (Kendon 2004: 194–197).
3.2. Operational
In contrast to these kinds of uses, hand or head actions are common that function as an operator in relation to the speaker's spoken meaning. An obvious way in which this may be observed is in the use of head or hand actions that add negation to what is being said. This is not always a straightforward matter, however. For example, the head shake, commonly interpreted as a way of saying "no," is of course used for this, but it can also be used when a speaker is not saying "no" to anything directly, but is saying something which implies some kind of negation (Kendon 2002). Likewise, there is a very widely used hand action in which the hand, held with all fingers extended and adducted (a so-called open hand), on a pronated forearm (so the palm faces downwards), is moved horizontally and laterally. Such a hand action is commonly seen in relation to negative statements or statements that imply a negative circumstance (as in a shopkeeper using this action as she explains her supply of a cheese to a customer: "That's the finish of that particular brie"), but it may also be seen in relation to positive absolute statements, as if the hand action serves to forestall any attempts to deny what is being said, as in: "Neapolitan cooking is the best of all cooking," the horizontal hand action acting here as if to say that any contrary claim will be denied (see Kendon 2004: 265–264; see also Harrison 2010).
3.3. Modal
Utterance visible actions may also be used to provide an interpretative frame for a stretch of speech. The use of the "quotation marks" gesture to indicate that the speaker is putting what he is saying in quotes is a common example. In an example drawn from one of my recordings (not published), a speaker is in a conversation with someone about how he negotiated a good deal with a representative of a mobile phone company. In describing his successful negotiation he repeats what he said to the representative in accepting some offer. He says: "yes, I'll have that," and as he does so he holds his hand up to his ear in a Y hand shape, commonly used as a kinesic expression for "telephone." In this way he frames his words as quoted – as what he said to the representative – and shows that he said this while talking on the telephone. In another example, also from my recordings (made in Salerno in 1991), someone discussing a robbery puts forward a speculation about what the robber might have done. As he describes what the robber did, he places a "finger bunch" hand against the side of his forehead and moves it away and upward, expanding his fingers as he does so. This is an action that is widely accepted (in Southern Italy) as a reference to imagination. Here it serves to frame his statement as a hypothesis.
3.4. Performative
Hand actions are often used as a way of making manifest the speech act or illocutionary force of what a speaker is saying. Many examples of this sort of usage were described by Quintilian, and some of the forms he described are also used today (Dutsch 2002; Quintilian Book XI, iii: lines 14–15, 61–149 in Butler 1922). In my own work I have described the use of 'praying hands' or mani giunte and also of the 'finger bunch' or grappolo as devices for marking questions in Neapolitan speakers (Kendon 1995). Some uses of the so-called palm up open hand also fall under this heading, as when a palm up open hand is proffered when a speaker gives an example of something, or when a speaker asks a question of another, holding out the palm-up open hand as if they want something to be put in it (Kendon 2004: 264–281; see also Müller 2004).
3.5. Parsing
Lastly, there is a punctuational, parsing, or discourse structure marking function of speakers' hand or head actions. For example, speakers not uncommonly, in giving a list of items, place their head in a slightly different angular position in relation to each item as they describe it. "Batonic" movements of the hand can be observed to occur in apparent association with features of spoken discourse that are given prominence (see Efron 1972; Ekman and Friesen 1969). However, there are also hand action sequences, such as the "finger-bunch-open-hand" sequence observed in Neapolitan speakers, that are coordinated with the topic-comment structure of the speaker's discourse. A version of this has also been described for Persian speakers in southern Iran (Seyfeddinipur 2004). Also observed among Neapolitan speakers, but observed elsewhere as well, is the thumb-tip-to-index-finger-tip "precision grip." This is often used to mark a stretch of speech which the speaker deems to be of central importance to what is being said, as when the speaker is emphasizing something that is quite specific and important (see Kendon 1995; Kendon 2004: 238–247). For an account of German uses of this hand action see Neumann (2004). For uses by an American speaker see Lempert (2011).
3.6. Discussion
It should be stressed that the account given here of the different ways in which visible actions can contribute to the meaning of an utterance is only a beginning. More complete and more systematic accounts have yet to be provided. Previous partial attempts similar to this include Efron (1972) and McNeill (1992) (and see also Streeck 2009 and Calbris 2011). Furthermore, and it is important to stress this, it should be understood that these semantic and pragmatic functions of utterance visible actions are not mutually exclusive. A given action can serve in more than one way simultaneously, and a given form may function in one way in one context and in a different way in another.

A second point must be made. We have spoken about different ways in which these utterance visible actions can contribute to the meaning of the utterance, pointing out how they may contribute to the propositional content of an utterance, or function in various ways in relation to various aspects of its pragmatic meaning. The different ways we have outlined have been arrived at by observers or analysts, after they have reflected upon how the form of visible action, regarded as in some way intended or meant as part of the speaker's expression, can be related to the semantic or pragmatic content that has been apprehended from the speech. Our ability to do this is based upon our ability to grasp how these actions are intelligible. The basis for this understanding remains obscure, however. Very little attention has been paid to the problem of how the semantic "affiliation" claimed between words and kinesic expressions is justified. Involved here is the question of the intelligibility
of utterance visible actions and how this interacts with the intelligibility of associated spoken expression. The nature of this intelligibility and of this semantic interaction deserves much more systematic attention (one recent relevant discussion is Lascarides and Stone 2009).

Finally, how can we be sure whether, or to what extent, these utterance visible actions make a difference to how recipients grasp or understand the meanings of the utterances they are a part of? We do know, both from everyday experience and from numerous experimental studies (Hostetter 2011; Kendon 1994), that these visible actions do make a difference for recipients, but whether they always do so, and whether they do so in the same way, we cannot say, nor do we have a good understanding of the circumstances in which they may or may not do so (see Rimé and Schiaratura 1991: 272–275 for an interesting start in investigating this issue).

To conclude, the brief survey offered above should make clear the diverse ways in which speakers employ utterance visible action. No simple statement can be made about what these actions do or what they are for. For me, it seems, a consideration of these different modes of use supports the view that these actions are to be regarded as components of a speaker's final product. That is, they are not (or are not only) symptoms of processes leading to verbal expression (as some approaches to them might suggest). Rather, they are integral components of a person's expression which, in the cases we have been considering, are composed as an ensemble of different modalities of expression.
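Purely by way of illustration, and not as part of any method described above, the functional categories of sections 3.1–3.5 can be set out as a minimal annotation sketch of the kind an analyst might use when coding utterance visible actions. The following Python sketch is an invented simplification; the class and field names carry no theoretical weight and are not drawn from the studies cited here.

    from dataclasses import dataclass
    from enum import Enum, auto

    class UtteranceFunction(Enum):
        """Ways an utterance visible action can contribute to an utterance (sections 3.1-3.5)."""
        REFERENTIAL = auto()   # contributes to propositional content (pointing, depiction)
        OPERATIONAL = auto()   # operates on the spoken meaning (e.g. adds or reinforces negation)
        MODAL = auto()         # frames a stretch of speech (e.g. the "quotation marks" gesture)
        PERFORMATIVE = auto()  # displays the speech act (e.g. question-marking hand shapes)
        PARSING = auto()       # punctuates or marks discourse structure

    @dataclass
    class GestureRecord:
        """One coded instance of utterance visible action."""
        speech: str                        # the co-occurring stretch of speech
        form: str                          # informal description of the visible action
        functions: set[UtteranceFunction]  # a single action may serve several functions at once

    # Example coding of the horizontal open-hand action discussed in section 3.2
    example = GestureRecord(
        speech="Neapolitan cooking is the best of all cooking",
        form="open hand, palm down, moved horizontally and laterally",
        functions={UtteranceFunction.OPERATIONAL},
    )

As the discussion above makes clear, the set-valued field is essential: the same action may serve in more than one way at once, and the same form may serve differently in different contexts.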
4. When utterance visible action is the main utterance vehicle
Utterance visible action, as indicated earlier, includes, of course, its use as a means of utterance when it is used on its own, without speech. This comes about in a variety of circumstances. For example, when people are too far away to hear one another, but otherwise need to exchange utterances, visible action is made use of. This may be observed on an occasional basis in all sorts of circumstances, but there are circumstances where it happens as a matter of routine. This has been reported in factories (e.g. Meissner and Philpott 1975), in cities such as Naples (de Jorio 1819: 108 describes the language of the basso popolo as "double" – they also have a language of gesture, that is; see also de Jorio 2000, Mayer 1948, and the discussion in Kendon 2004: 349–354), or among hunter-gatherers, such as Congo Pygmies (Lewis 2009) and Australian Aborigines (Kendon 1988). In these circumstances fairly complex kinesic codes may become established.

There are also circumstances in which speech is prevented for ritual reasons, and here systems of kinesic communication may become highly elaborated, to the extent that they may earn the title "sign language." The most notable examples are the systems found in the central desert areas of Australia, where the practice of tabooing speech as part of mourning ritual (among women) or as part of initiation ceremonies (among men) was and is followed (Kendon 1988), and the systems at one time in widespread use among the Plains Indians of North America (Davis 2010; Farnell 1995; Mallery 1972). Sign languages developed for ritual reasons also were (and perhaps still are) used in some Christian monastic orders (Bruce 2007; Umiker-Sebeok and Sebeok 1987).

Besides this, and best known of all, are the circumstances of deafness. As has long been known, among the deaf, elaborate systems of utterance visible actions are
employed and developed with semiotic features that are comparable to spoken languages. Depending on the community and the place of deaf persons within it, these sign languages may also be used between deaf and hearing, as well as just among the deaf. The literature on these sign languages is now very extensive. For a representative survey see Brentari (2010).

In my own work on utterance visible action as the sole vehicle for utterance, I have undertaken two pieces of research. One was a small-scale study of material collected in Papua New Guinea (Kendon 1980b, 1980c, 1980d), mainly from one deaf young woman. The other was a large-scale study of sign languages in Aboriginal Australia (Kendon 1988).

The work with the material collected in Papua New Guinea was (for me) a pioneering and preliminary effort in many ways, and restricted in scope, since it was based on limited material collected as a result of a chance encounter. While I was attempting to make films of certain kinds of social occasions among the Enga in the Papua New Guinea highlands, a young deaf woman appeared one day near my residence. She talked in signs with great fluency. She was using a system that was used by various families in the valley who had deaf members. The deafness in the valley was said to be a consequence of an epidemic of meningitis some years back. Fortunately, my New Guinean field assistant was able to converse with her, since he also had deaf relatives. He was later able to interpret for me much of what I was able to record, as well as assisting in the recordings. I later undertook a detailed study of some of this material.

Despite its limitations, undertaking such a detailed study led me to confront some fundamental issues regarding the way in which meanings may be encoded in the media of visible bodily action (see the discussion in Kendon 1980c). The fundamental process involved seems to be one in which the actor, by means of a range of different techniques of representation, "brings forth" or "conjures" actions, objects, movements, spatial relations, in this way representing concepts, ideas, and the like, so that they are understood as making reference to these things. This may take the form of a kind of re-enactment, in a fairly elaborated pantomimic manner, of actions and the circumstances in which they occur. Very quickly, however, these forms of action become reduced, schematized, and standardized in various ways as they become a shared means by which meanings may be represented. This is a fundamental and general process that has been described many times by students of autonomous utterance visible actions. Although the terminology is various, the processes of "sign formation" that have been described by Kuschel (1973), Tervoort (1961), Klima and Bellugi (1979), Yau (1992), Eastman (1989), and Cuxac (Cuxac and Sallandre 2007; see also Fusellier-Souza 2006) – to name just a few of the authors who have written about this – are all fundamentally similar. To represent a meaning for someone else (and also, I think, to represent it for oneself), one resorts to a sort of re-creation, as if, by showing the other the thing that is meant, the other will come to grasp it in a way that overlaps with the way it is grasped by oneself. As these representations become socially shared, they rapidly undergo various processes of schematization.
In consequence they are no longer understood only because they are depictions of something but also because they are forms which contrast with other forms in the system; that is, they acquire the status of lexical items in a system. In this process we seem able to observe the processes of language system formation. This provides one of the main reasons why primary sign languages (sign languages of the deaf, that is) have become objects of such intense interest.
In a much larger investigation, I examined Australian Aboriginal sign languages. What interested me here was the fact that these are well developed, fully flexible systems, developed by speaker-hearers who have always had full access to spoken language. These sign languages, developed for the most part for ritual reasons, are also widely used as a convenient alternative to speech in all sorts of circumstances (Kendon 1988: Chapter 14). In an area of central Australia that extends northwards from above Alice Springs in the Northern Territory as far as the border with Arnhem Land, a practice is followed in which a woman, once bereaved, forgoes the use of speech for long periods. This has given rise to complex sign languages which are used among women and which may be used in all circumstances of everyday life. There are many interesting aspects of these sign languages, from cultural and semiotic points of view, and it is very interesting to compare them to other alternate sign languages (such as those reported from North America or in Christian monastic communities) and also to primary sign languages. It is also useful to compare them with other language codes, such as writing or drum and whistle languages (see Kendon 1988: Chapter 13).

Here I will comment on just one issue, which was central in my work, and that is how these central Australian sign languages are related, structurally, to the spoken languages of their users. A comparison of signing among women of different ages that I undertook at the Warlpiri settlement, Yuendumu (Kendon 1984), suggested that, as users become more proficient at using these sign languages, they come to use, more and more, signs that represent the semantic units expressed by the morphemes of the spoken language. A notable feature is that it appears common for signs to develop which represent the meanings of the morphemes of the spoken language, qua morphemes. In consequence, concepts expressed in spoken language by compounds of morphemes get expressed by compound signs that are the equivalents of these morphemes, and not by a separate sign derived from some property of the thing in question.

I give just one example to illustrate this point (for this and other examples see Kendon 1988: 369–372). In Warlpiri "scorpion" is kana-parnta, a compound of kana "digging stick" and -parnta, a possessive suffix, which we can render in English as "having." Thus "scorpion" in Warlpiri is, literally, "digging stick-having." In a language of a neighbouring community, Warlmanpa, the same creature is known as jalangartata, which is a compound of the word jala "mouth" and ngartata "crab." In the signs for "scorpion" we find, however, that in Warlpiri it is a compound sign, the equivalent of a sign for "digging stick" followed by a sign which is used, among other things, as a sign for a possessive. In Warlmanpa, in contrast, the sign is also a compound sign, but this time a compound of the sign for "mouth" followed by the sign for "crab." It is interesting that, in creating signs for these creatures, we do not find a sign for "scorpion" derived from some feature of the animal (its action of raising its tail comes to mind), but signs based on representations of the meanings of the verbal components which make up the verbal expression.
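Purely as an illustration of the structural point just made, the Warlpiri and Warlmanpa "scorpion" signs can be schematized as compounds built from the morphemes of the corresponding spoken words. The following Python sketch is an invented rendering of the example, not a transcription of the actual signs:

    # Invented sketch of the Warlpiri/Warlmanpa "scorpion" example (Kendon 1988: 369-372):
    # the compound sign tracks the morphemes of the spoken word, not a property of the creature.
    spoken_morphemes = {
        ("Warlpiri", "scorpion"): ["kana (digging stick)", "-parnta (having)"],
        ("Warlmanpa", "scorpion"): ["jala (mouth)", "ngartata (crab)"],
    }

    def compound_sign(language: str, concept: str) -> str:
        """Render a compound sign as a sequence of signs for the spoken morphemes."""
        parts = spoken_morphemes[(language, concept)]
        return " + ".join(f"SIGN[{p}]" for p in parts)

    print(compound_sign("Warlpiri", "scorpion"))   # SIGN[kana (digging stick)] + SIGN[-parnta (having)]
    print(compound_sign("Warlmanpa", "scorpion"))  # SIGN[jala (mouth)] + SIGN[ngartata (crab)]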
In sign languages of neighboring language communities in this part of Australia, we can thus find differences in the signs for similar things which derive from the fact that these sign languages develop, in part, as kinesic representations of the semantic units of their respective spoken languages. It is interesting to consider this in relation to some recent findings regarding differences in the manual expressions that accompany object-placement verbs in speakers of different languages. Gullberg (2011) has reported that, in Dutch, the equivalent of the verb "to put" differs according to the nature of the physical object being put somewhere or the orientation of the object being placed. For example, to describe the putting of a vase
on a shelf, or of some other object which has a base on which it stands, one chooses the verb zetten. However, if the object does not have a base, or is something, such as a book, that can be put down on its side, one chooses the verb leggen. In French, on the other hand, one uses the same verb, mettre, whatever the object or its placement orientation might be. Gullberg found that Dutch speakers, if using hand actions as they talked about putting objects somewhere, accompanied their verb phrases with different hand actions, according to which placement verb they used. French speakers, on the other hand, did not use hand actions that were differentiated in this way, regardless of the nature of the object they were talking about. This suggests that, where a language makes semantic distinctions of this sort and manual expressions are also being employed, the manual expressions may reflect these semantic distinctions. The language spoken, thus, may link directly to the kinds of manual expressions that may be used, if these are used when speaking. This is a further piece of evidence in favour of the view that, as Kendon (1980a) put it, "gesticulation and speech are two aspects of the process of utterance." Exactly how this is to be understood is yet to be made clear.

However, the detailed way in which Warlpiri speakers or Warlmanpa speakers have created kinesic expressions for the semantic units their spoken languages supply reinforces the view, also suggested by Gullberg's work (and suggested, too, by the phenomenon we described earlier, in which "narrow gloss" kinesic expressions may be used conjointly with spoken expressions of the same meaning), that word meanings are somehow linked to or grounded in schematic perceptuo-motor patterns, so that, if the hands are also employed when speaking, we see these patterns being drawn upon as a source for the hand actions. For the Warlpiri women, who of necessity had to create kinesic representations of the concepts provided by their language, one strategy was to draw upon repertoires of already existing perceptuo-motor representations.

If this is so, it might mean that the "imagery" that McNeill (2005) suggests is opposed to the categorical expressions of words is not always so sharply separated from them. Kinesic expressions can also be like words. Indeed, they are often highly schematic in form and serve as devices to refer to conceptual categories in ways very similar to words. Cogill-Koez (2000a, 2000b) shows this for "classifier predicates" in sign languages, which have features in common with some kinds of manual expressions seen in speakers (see Kendon 2004: 316–324; Schembri, Jones, and Burnham 2005). Whatever it is that is made available through verbal expression can also be made available by other means. The distinction between imagistic expression and verbal expression may be much less sharp than has often been supposed.
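Gullberg's finding can be caricatured in a short sketch. In Dutch, the choice of placement verb (and, for Dutch speakers, the differentiation of the accompanying hand action) depends on whether the placed object stands on a base, whereas French collapses the distinction. The Python sketch below is an invented simplification for illustration only; it is not Gullberg's coding scheme.

    # Toy rendering of the placement-verb distinction discussed by Gullberg (2011).
    def dutch_placement_verb(object_has_base: bool) -> str:
        # zetten for objects placed upright on a base (a vase); leggen for objects laid down on a side (a book)
        return "zetten" if object_has_base else "leggen"

    def french_placement_verb(object_has_base: bool) -> str:
        # French uses mettre whatever the object or its placement orientation
        return "mettre"

    # Dutch speakers' co-speech hand actions were differentiated along the same lines as the verbs;
    # French speakers' hand actions were not.
    for obj, has_base in [("vase", True), ("book", False)]:
        print(obj, dutch_placement_verb(has_base), french_placement_verb(has_base))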
5. Broader implications
In the foregoing I have touched upon some of the questions I have been concerned with in my studies of utterance visible action. My purpose has been to illustrate the particular perspective in terms of which I have approached the study of this domain of human action. What are some of the broader implications?
5.1. Utterance visible action and speech and the construction of utterances
The various ways, outlined in section 3 above, in which utterance visible actions can enter into the construction of utterances that also involve speech, and the different levels at which
they may do so, suggest that in the process of utterance production the speaker forges "utterance objects" out of materials of diverse semiotic properties. This makes it possible for a speaker to "escape" the constraints of the linearity of verbal expression, at least to some degree. As has recently been pointed out, in sign languages use is made of multiple articulators simultaneously. This means that, in these languages, simultaneous as well as linear constructions must be envisaged as part of their grammar (Vermeerbergen, Leeson, and Crasborn 2007). Once it is seen that speakers also can make use of utterance visible actions as they construct utterances, it will be clear that a similar kind of simultaneity of construction becomes possible. As the examples we have mentioned make clear (and as is clear from the many others that have been described), speakers do in fact exploit this possibility.

For the most part, at least as far as is known, the use of simultaneous constructions in spoken language through the combination of speech and utterance visible action has not, in any community of speakers, become stabilized and formalized as a shared practice to the point that it must be considered as a part of the formal grammar of any spoken language. Such a manner of constructing utterances is widespread nevertheless, and, from the point of view of describing languaging, rather than language, it must be taken into consideration (Kendon 2011).
5.2. The emergence of linguistic symbols
A second issue of great interest that the study of utterance visible action can throw light upon has to do with the emergence of linguistic symbols. We have already referred to this briefly, in reference to the study of sign languages, where the study of the phenomena of "sign formation" has allowed us to see how forms of expression that first come into being as pantomimic or picture-like representations ("Highly Iconic Structures" in Cuxac's terminology – Cuxac and Sallandre 2007) become transformed into economized schematic forms which, in virtue of the fact that they are shared between people as expressions in common, come to exist as autonomous symbolic forms whose characteristics are perceived as arbitrary. Yet the "iconicity" of linguistic expression is always latently present, and it can re-emerge at any time. This is the implication of the presence of analogic forms of expression in sign language, such as may be seen in the use of so-called "classifiers" (Emmorey 2003), the depiction of conceptual relations by means of spatial diagrams (Liddell 2003), and the modification of sign performance to achieve "iconic effects" (e.g. Duncan 2005).

We see comparable processes in speakers, as in the various uses of vocal effects that speakers exploit, but this is even more evident if we take into consideration their uses of visible action. Just as we see, in signed discourse, a continuous interplay between aspects that admit of formal structural description and aspects that are dynamic, analogic, "iconic," so we may see the same things in speakers. Constructing an utterance as a meaningful object, whatever modalities may be used, is always the result of a co-operative adjustment between forms governed by shared formal structures and modes of expression that follow analogic or "iconic" principles. The dialectic between "imagistic" and "linguistic categorical" expression that McNeill (2005) describes in his theory of the "growth point" may be regarded as an attempt to capture this point. However, in my view, the actor is continually adjusting his expressive resources in relation to one another as he seeks to create an "utterance object" that meets his rhetorical aims within the frame of whatever interactional moment he is faced with. I do not
see a dialectical struggle, but an orchestration of resources under the guidance of a communicative aim.

The study of the ways in which utterance visible actions can become shared forms – seen especially in the study of sign languages, but not only there – helps to throw light on the social and semiotic processes that are involved in the creation of "language systems." What might be called the "effort after representation" – making a connection here with Bartlett's (1932) notion of "effort after meaning" – seems to be a fundamental process in language. On this view, the place of so-called "iconic processes," including "sound symbolism," in speech should be re-evaluated. Whereas, when spoken languages (and also, to a degree, sign languages) are considered as abstracted "social objects" and described as formal systems, the "iconic impulse" may seem not to be so very important, when we consider the genesis of language – its continual emergence in everyday interaction, as well as historically – it is clear that it is of fundamental importance.
5.3. Utterance visible action and language origins
There is a long tradition, which goes back at least to Condillac (see his Essai of 1746 – Condillac 2001), that suggests that "gesture" – some form of utterance visible action, which Condillac referred to as le langage d'action – must have been the first form of language (Rosenfeld 2001 provides an excellent discussion). In modern times, especially since the seminal paper of Hewes of 1973, this view has gained increasing support. Scholars such as Donald (1991), Armstrong, Stokoe and Wilcox (1995), Stokoe (2001), Corballis (2002), Arbib (2005, 2012), and Tomasello (2008), among others, have all endorsed this idea, although the details of the evolutionary scenarios offered differ somewhat from one author to another. Common to all of these scholars is the idea that the kind of symbolic action that would support a form of communication that would count as being "linguistic" (just what this means also differs between authors) would have first emerged through visible bodily action. Many different points have been brought up in support of this idea (including, for example, the allegedly inflexible character of ape vocalizations, in contrast to the flexibility of ape gesture use; the fact that the first manifestations of language-like symbolic action in human babies are gestures like pointing; and the readiness with which humans are able to develop full-fledged languages in the medium of visible bodily action), but I think that the fundamental reason for its attraction is that, in the medium of visible action, it is easier to envisage how a transition might be made from literal action to symbolic action. It is harder to imagine how vocalizations could come to have symbolic significance because they do not seem good vehicles for iconic representation, and iconic representation, as already noted, is widely agreed to be a fundamental process in the formation of linguistic symbols.

This "gesture first" language origins scenario has its critics, of course. Perhaps the most important objection raised is the fact that, with the relatively rare exception of deafness, which forces people to express themselves linguistically only with visible bodily action, all human languages are spoken. Furthermore, anatomical and neurophysiological studies suggest that humans are biologically specialized as speaking creatures, and must have evolved as such over a very long period of time. Gesture first scenarios all refer to a transition or a switch from "gesture" to "speech," but none of the advocates of this scenario have been able to provide a convincing account of how or why this might have occurred. On the other hand, although those who argue that language
evolved as a system of vocal expression do not face this "switch" problem, none of them pay very much attention to the intimate interrelations between speaking and visible bodily action we have discussed here. The involvement of manual (and other) bodily action in speaking needs to be accounted for in any proposal put forward to account for the origin of language in evolutionary terms.

Most writers who advocate a "gesture first" theory of language origins draw attention to the commonly noted intimate association between gesture and speech as supporting evidence (as, indeed, I did myself in Kendon 1975). However, given that utterance visible action, when used in conjunction with speech, has a rather different role in utterance and, accordingly, exhibits a different range of semiotic properties than it does when it is employed as the sole vehicle of utterance (as in signing), it is clear that it is not some kind of left-over from a non-speech kind of language and is not appropriately so regarded. It is, rather, an integrated component of contemporary languaging practice. Further, given modern developments in our understanding of the neurological interrelations between speaking and hand actions (for one review see Willems and Hagoort 2007), it seems much better to suppose that speaking and utterance manual action evolved together.

According to a proposal that I am currently working on (expressed in a preliminary way in Kendon 2009), we might better approach the problem if we start out, not by thinking about the actions of speaking and gesturing as being descended with modification only from communicative or expressive actions, but by thinking of them as including modifications of the practical actions involved in manipulating and altering the environment, especially as this is required in the acquisition of food, and including the manipulation and alteration of the behavior of conspecifics, as in mothering, grooming, mating and fighting. MacNeilage (2008) has suggested that the complex oral actions that form the basis of speech have their origins in the oral manipulatory actions that are involved in the management of food intake. Perhaps this could be extended to actions of other parts of the body involved in feeding. If an animal is to masticate its food, food has to be brought into the mouth in some way. Leroi-Gourhan (1993) pointed out that an animal may do this by moving its whole body close enough to foodstuffs so that it can grasp them with its mouth directly. Animals that do this tend to be herbivores, and all four of their limbs are specialized for body support and locomotion. They acquire food by grazing or cropping. On the other hand, many animals, for example squirrels and raccoons, grasp and manipulate foodstuffs with their hands, which they also use to carry food to the mouth. Such animals tend to be carnivores or omnivores, and their forelimbs are equipped as instruments of manipulation, each with five mobile digits. In mammals of this sort, a system of forelimb-mouth co-ordination becomes established. This development is particularly marked in primates, of course, who, perhaps, in adopting an arboreal style of life, have developed forelimbs that can serve in environmental manipulation as well as in body support and locomotion.
This sets the stage for the development of oral-forelimb manipulatory action systems, and this may explain the origin of the co-involvement of hand and mouth in utterance production (see Gentilucci and Corballis 2006). This implies that the actions involved in speaking and in utterance visible action, two forms of action that, as we have seen, are so intimately connected that they must somehow be regarded as two aspects of the same process, are adaptations of oral and manual environmental manipulatory systems employed in practical actions. The adaptations
that allow them to serve communication at a distance arose as practical actions came to function in situations of co-present interaction between conspecifics, at first, perhaps, as "try-out" or "as if" versions of true practical actions (Kendon 1991). On this view the actions of speaking and gesturing do not derive only from earlier forms of expressive actions. We may expect, accordingly, that there will be components of the executive systems involved in speech that will be closely related to those involved in forelimb action and that these will be different from those components of oral and laryngeal action that are part of the vocal-expression system. This view receives some support in the neuroscience literature, where it is reported that the actions of the tongue and lips by which the oral articulatory gestures of speech are achieved, controlled as they are in the pre-motor and motor cortex, can be separated from the actions involved in exhalation and in the activation of the larynx, which produce vocalization; the control circuits for these actions involve sub-cortical structures instead. However, in normal speech, the oral gestures of speech articulation are combined with vocal expression, which provides the affective and motivational components of speaking (see, for example, Ploog 2002).

Engaging in utterance, doing language, as we might say, is thus to be thought of as being derived from forms of action by which a creature intervenes in the world. Languaging (doing language), in consequence, because it is derived from practical action, involves the mobilization of oral and manual practical action systems. It also involves the mobilization of vocal and kinesic expressive systems, as they come to be a part of social action. Utterance visible actions, thus, are neither supplements nor add-ons. They are an integral part of what is involved in taking action in the virtual or fictional world that is always conjured up whenever language is made use of. A theory of language that takes this perspective, we suggest, will be better able to allow us to understand why it is that visible bodily action is also mobilized when speakers speak and why, more generally, speaking – using language in co-present interaction, that is – is always a form of action that involves several different executive systems in co-ordination.
6. References
Arbib, Michael 2005. From monkey-like action to human language: An evolutionary framework for neurolinguistics. Behavioral and Brain Sciences 28: 105–167.
Arbib, Michael 2012. How the Brain Got Language: The Mirror Neuron Hypothesis. Oxford: Oxford University Press.
Armstrong, David F., William C. Stokoe and Sherman E. Wilcox 1995. Gesture and the Nature of Language. Cambridge: Cambridge University Press.
Bartlett, Frederick C. 1932. Remembering: A Study in Experimental and Social Psychology. Cambridge: Cambridge University Press.
Birdwhistell, Ray L. 1970. Kinesics and Context: Essays in Body Motion Communication. Philadelphia: University of Pennsylvania Press.
Brentari, Diane (ed.) 2010. Sign Language. Cambridge: Cambridge University Press.
Brookes, Heather J. 2001. O clever "He's streetwise." When gestures become quotable: The case of the clever gesture. Gesture 1: 167–184.
Brookes, Heather J. 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14: 186–224.
Brookes, Heather J. 2005. What gestures do: Some communicative functions of quotable gestures in conversations among black urban South Africans. Journal of Pragmatics 37: 2044–2085.
Bruce, Scott G. 2007. Silence and Sign Language in Medieval Monasticism: The Cluniac Tradition C.900–1200. Cambridge: Cambridge University Press.
Butler, Harold E. 1922. The Institutio Oratoria of Quintilian. With an English Translation by H. E. Butler. London: William Heinemann.
Calbris, Geneviève 2011. Elements of Meaning in Gesture. Amsterdam: John Benjamins.
Cogill-Koez, Dorothea 2000a. Signed language classifier predicates: Linguistic structures or schematic visual representation? Sign Language and Linguistics 3: 153–207.
Cogill-Koez, Dorothea 2000b. A model of signed language "classifier predicates" as templated visual representation. Sign Language and Linguistics 3: 209–236.
Condillac, Étienne Bonnot de 2001. Essay on the Origin of Human Knowledge. Translated and edited by Hans Aarsleff. Cambridge: Cambridge University Press.
Condon, William S. 1976. An analysis of behavioral organization. Sign Language Studies 13: 285–318.
Condon, William S. and Richard D. Ogston 1966. Sound film analysis of normal and pathological behavior patterns. Journal of Nervous and Mental Disease 143: 338–347.
Condon, William S. and Richard D. Ogston 1967. A segmentation of behavior. Journal of Psychiatric Research 5: 221–235.
Corballis, Michael C. 2002. From Hand to Mouth: The Origins of Language. Princeton, NJ: Princeton University Press.
Crystal, David 1969. Prosodic Systems and Intonation in English. Cambridge: Cambridge University Press.
Cuxac, Christian and Marie-Anne Sallandre 2007. Iconicity and arbitrariness in French sign language: Highly iconic structures, degenerated iconicity and diagrammatic iconicity. In: Elena Pizzuto, Paola Pietrandrea and Raffaele Simone (eds.), Verbal and Signed Languages: Comparing Structures, Concepts and Methodologies, 13–33. Berlin: De Gruyter Mouton.
Davis, Jeffrey E. 2010. Hand Talk: Sign Language among American Indian Nations. Cambridge: Cambridge University Press.
de Jorio, Andrea 1819. Indicazione Del Più Rimarcabile in Napoli E Contorni. Naples: Simoniana [dalla tipografia simoniana].
de Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A Translation of "La Mimica Degli Antichi Investigata Nel Gestire Napoletano" by Andrea De Jorio (1832) and with an Introduction and Notes by Adam Kendon. Bloomington: Indiana University Press.
Donald, Merlin 1991. Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition. Cambridge, MA: Harvard University Press.
Duncan, Susan 2005. Gesture in signing: A case study from Taiwan sign language. Language and Linguistics 6: 279–318.
Dutsch, Dorota 2002. Towards a grammar of gesture: A comparison between the type of hand movements of the orator and the actor in Quintilian's Institutio Oratoria. Gesture 2: 259–281.
Eastman, Gilbert C. 1989. From Mime to Sign. Silver Spring, MD: T. J. Publishers.
Efron, David 1972. Gesture, Race and Culture, Second Edition. The Hague: Mouton. First published [1941].
Ekman, Paul and Wallace Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1: 49–98.
Emmorey, Karen (ed.) 2003. Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum.
Farnell, Brenda 1995. Do You See What I Mean? Plains Indian Sign Talk and the Embodiment of Action. Austin: University of Texas Press.
Fusellier-Souza, Ivani 2006. Emergence and development of signed languages: From a semiogenic point of view. Sign Language Studies 7: 30–56.
Gentilucci, Maurizio and Michael C. Corballis 2006. From manual gesture to speech: A gradual transition. Neuroscience and Biobehavioral Reviews 30: 949–960.
Goffman, Erving 1963. Behavior in Public Places. New York: Free Press of Glencoe.
Goffman, Erving 1967. Interaction Ritual. Chicago: Aldine.
Gullberg, Marianne 2011. Language-specific encoding of placement events in gestures. In: Eric Pederson and Jürgen Bohnemeyer (eds.), Event Representations in Language and Cognition, 166–188. Cambridge: Cambridge University Press.
Harrison, Simon 2010. Evidence for node and scope of negation in coverbal gesture. Gesture 10(1): 29–51.
Hewes, Gordon W. 1973. Primate communication and the gestural origins of language. Current Anthropology 14: 5–24.
Hostetter, Autumn B. 2011. When do gestures communicate? A meta-analysis. Psychological Bulletin 137: 297–315.
Kendon, Adam 1972a. A review of "Kinesics and Context" by Ray L. Birdwhistell. American Journal of Psychology 85: 441–455.
Kendon, Adam 1972b. Some relationships between body motion and speech. An analysis of an example. In: Aaron Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 177–216. Elmsford, NY: Pergamon Press.
Kendon, Adam 1975. Gesticulation, speech and the gesture theory of language origins. Sign Language Studies 9: 349–373.
Kendon, Adam 1978. Differential perception and attentional frame: Two problems for investigation. Semiotica 24: 305–315.
Kendon, Adam 1980a. Gesticulation and speech: Two aspects of the process of utterance. In: Mary Ritchie Key (ed.), The Relationship of Verbal and Nonverbal Communication, 207–227. The Hague: Mouton.
Kendon, Adam 1980b. A description of a deaf-mute sign language from the Enga Province of Papua New Guinea with some comparative discussion. Part I: The formational properties of Enga signs. Semiotica 32: 1–32.
Kendon, Adam 1980c. A description of a deaf-mute sign language from the Enga Province of Papua New Guinea with some comparative discussion. Part II: The semiotic functioning of Enga signs. Semiotica 32: 81–117.
Kendon, Adam 1980d. A description of a deaf-mute sign language from the Enga Province of Papua New Guinea with some comparative discussion. Part III: Aspects of utterance construction. Semiotica 32: 245–313.
Kendon, Adam 1981. Introduction: Current issues in the study of "nonverbal communication." In: Adam Kendon (ed.), Nonverbal Communication, Interaction and Gesture, 1–53. The Hague: Mouton.
Kendon, Adam 1984. Knowledge of sign language in an Australian aboriginal community. Journal of Anthropological Research 40: 556–576.
Kendon, Adam 1988. Sign Languages of Aboriginal Australia: Cultural, Semiotic and Communicative Perspectives. Cambridge: Cambridge University Press.
Kendon, Adam 1990. Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press.
Kendon, Adam 1991. Some considerations for a theory of language origins. Man (N.S.) 26: 602–619.
Kendon, Adam 1992. Some recent work from Italy on quotable gestures ("emblems"). Journal of Linguistic Anthropology 2(1): 77–93.
Kendon, Adam 1994. Do gestures communicate? A review. Research on Language and Social Interaction 27: 175–200.
Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in southern Italian conversation. Journal of Pragmatics 23: 247–279.
Kendon, Adam 2002. Some uses of the head shake. Gesture 2(2): 147–182.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kendon, Adam 2008. Some reflections on "gesture" and "sign." Gesture 8: 348–366.
Kendon, Adam 2009. Manual actions, speech and the nature of language. In: Daniele Gambarara and Alfredo Giviigliano (eds.), Origine e Sviluppo Del Linguaggio, Fra Teoria e Storia, 19–33. Rome: Aracne Editrice.
Kendon, Adam 2010. Pointing and the problem of "gesture": Some reflections. Revista Psicolinguistica Applicata 10: 19–30.
Kendon, Adam 2011. "Gesture first" or "speech first" in language origins? In: Donna Jo Napoli and Gaurav Mathur (eds.), Deaf Around the World, 251–267. New York: Oxford University Press.
Kendon, Adam and Laura Versante 2003. Pointing by hand in "Neapolitan." In: Sotaro Kita (ed.), Pointing: Where Language, Culture and Cognition Meet, 109–137. Mahwah, NJ: Lawrence Erlbaum.
Klima, Edward A. and Ursula Bellugi 1979. The Signs of Language. Cambridge, MA: Harvard University Press.
Krout, Maurice H. 1935. Autistic gestures: An experimental study in symbolic movement. Psychological Monographs 208(46): 1–126.
Kuschel, Rolf 1973. The silent inventor: The creation of a sign language by the only deaf mute on a Polynesian Island. Sign Language Studies 3: 1–27.
Lascarides, Alex and Matthew Stone 2009. Discourse coherence and gesture interpretation. Gesture 9: 147–180.
Lempert, Michael 2011. Barack Obama, being sharp: Indexical order in the pragmatics of precision-grip gesture. Gesture 11(3): 241–270.
Leroi-Gourhan, André 1993. Gesture and Speech. Cambridge: Massachusetts Institute of Technology Press.
Lewis, Jerome 2009. As well as words: Congo pygmy hunting, mimicry and play. In: Rudolf Botha and Chris Knight (eds.), The Cradle of Language, 236–256. Oxford: Oxford University Press.
Liddell, Scott K. 2003. Grammar, Gesture and Meaning in American Sign Language. Cambridge: Cambridge University Press.
MacNeilage, Peter F. 2008. Origin of Speech. Oxford: Oxford University Press.
Mahl, George F. 1968. Gestures and body movements in interviews. Research in Psychotherapy (American Psychological Association) 3: 295–346.
Mallery, Garrick 1972. Sign Language Among North American Indians Compared With That Among Other Peoples and Deaf Mutes. The Hague: Mouton.
Mayer, Carl Augusto 1948. Vita Popolare a Napoli Nell'Età Romantica. Bari: Gius. Laterza & Figli.
McNeill, David 1992. Hand and Mind. Chicago: Chicago University Press.
McNeill, David 2005. Gesture and Thought. Chicago: Chicago University Press.
Meissner, Martin and Stuart B. Philpott 1975. The sign language of sawmill workers in British Columbia. Sign Language Studies 9: 291–308.
Meo-Zilio, Giovanni and Silvia Mejia 1980–1983. Diccionario De Gestos: España E Hispanoamérica. Tomo I (1980), Tomo II (1983). Bogotá: Instituto Caro y Cuervo.
Morris, Desmond, Peter Collett, Peter Marsh and Maria O'Shaughnessy 1979. Gestures: Their Origins and Distribution. London: Jonathan Cape.
Müller, Cornelia 2004. Forms and uses of the palm up open hand: A case of a gesture family? In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 233–256. Berlin: Weidler Buchverlag.
Neumann, Ragnhild 2004. The conventionalization of the ring gesture in German discourse. In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 216–224. Berlin: Weidler Buchverlag.
Ploog, Detlev 2002. Is the neural basis of vocalization different in non-human primates and Homo Sapiens? In: Tim J. Crow (ed.), The Speciation of Modern Homo Sapiens, 121–135. Oxford: Oxford University Press.
Ricken, Ulrich 1994. Linguistics, Anthropology and Philosophy in the French Enlightenment. London: Routledge.
Rimé, Bernard and Laura Schiaratura 1991. Gesture and speech. In: Robert S. Feldman and Bernard Rimé (eds.), Fundamentals of Nonverbal Behavior, 239–281. Cambridge: Cambridge University Press.
Rosenfeld, Sophia 2001. Language and Revolution in France: The Problem of Signs in Late Eighteenth Century France. Stanford, CA: Stanford University Press.
Scheflen, Albert E. 1965. The significance of posture in communication systems. Psychiatry: Journal of Interpersonal Relations 27: 316–331.
Schembri, Adam, Caroline Jones and Denis Burnham 2005. Comparing action gestures and classifier verbs of motion: Evidence from Australian sign language, Taiwan sign language and nonsigners gestures without speech. Journal of Deaf Studies and Deaf Education 10: 272–290.
Seyfeddinipur, Mandana 2004. Meta-discursive gestures from Iran: Some uses of the "Pistol Hand." In: Cornelia Müller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 205–216. Berlin: Weidler Buchverlag.
Sherzer, Joel 1991. The Brazilian thumbs-up gesture. Journal of Linguistic Anthropology 1: 189–197.
Stokoe, William C. 2001. Language in Hand: Why Sign Came Before Speech. Washington, DC: Gallaudet University Press.
Streeck, Jürgen 2009. Gesturecraft: The Manu-Facturing of Meaning. Amsterdam: John Benjamins.
Tervoort, Bernard 1961. Esoteric symbolism in the communication behavior of young deaf children. American Annals of the Deaf 106: 436–480.
Tomasello, Michael 2008. The Origins of Human Communication. Cambridge: Massachusetts Institute of Technology Press.
Umiker-Sebeok, Jean and Thomas A. Sebeok (eds.) 1987. Monastic Sign Languages. Berlin: De Gruyter Mouton.
Vermeerbergen, Myriam, Lorraine Leeson and Onno Crasborn (eds.) 2007. Simultaneity in Signed Languages: Form and Function. Amsterdam: John Benjamins.
Willems, Roel M. and Peter Hagoort 2007. Neural evidence for the interplay between language, gesture and action: A review. Brain and Language 101: 278–289.
Woll, Bencie 2007. Perspectives on linearity and simultaneity. In: Myriam Vermeerbergen, Lorraine Leeson and Onno Crasborn (eds.), Simultaneity in Signed Languages, 337–344. Amsterdam: John Benjamins.
Yau, Shun-Chiu 1992. Creations Gestuelle Et Debuts Du Langage: Creation De Langues Gestuelles Chez Des Sourds Isoles. Paris: Editions Langages Croisés.
Adam Kendon, Philadelphia, PA (USA)
2. Gesture as a window onto mind and brain, and the relationship to linguistic relativity and ontogenesis

1. Introduction
2. "Gesture" in a psychological perspective
3. Example of this perspective
4. The growth point
5. Gesture and linguistic relativity
6. Gesture and ontogenesis
7. Neurogesture
8. Summary and brain model
9. References
Abstract
This paper provides an overview of what is currently known about gestures and speech from a psychological perspective. Spontaneous co-verbal gestures offer insights into imagistic and dynamic forms of thinking while speaking and gesturing. The paper includes motion event studies, also from cross-cultural and developmental perspectives, and studies of speakers with language impairments.

"it's like seeing someone's thought" – Mitsuko Iriye, historian, on observing how to code gestures.
1. Introduction
To see in gesture "someone's thought," as our motto remarks, we look at each case individually and in close detail. Since they are unique in their context of occurrence, gestures, for this purpose, are transcribed one by one, never accumulated, and, since it is often the tiniest features through which thought peeks, we record in detail. Taking gesture at this fine-grained scale, we cover a wide range – gestures in different types of language (the "S-type" and "V-type"), gestures of children, and gestures in neurological disturbances – and find in each area that our "window" provides views of thinking as it takes place, differing across languages, ages, and neurological conditions.
2. “Gesture” in a psychological perspective Defining “gesture” is a necessary but vexing exercise, bound to fall short of a fully satisfying definition. It is a word with many uses, often pejorative and misleading, and to find a replacement would be a real contribution, but one does not appear. One problem is that the meaning of the word is not independent of the perspective one takes. It thus has built into it acceptances and exclusions. It is wise to make these known from the start. A “psychological” perspective implies its own definition of “gesture.” Adam Kendon placed gestures in the category of “actions that have the features of manifest deliberate expressiveness” (2004: 13–14). I adopt this definition; it is the best that I have seen but do so with one qualification and one proviso. The qualification is that gesture cannot be deliberate; as we define them, “gestures” are unwitting and anything but deliberate. (Kendon may have meant by “deliberate” non-accidental, and with this I agree; but the word also conveys “done for a purpose,” and with that I do not agree.) The proviso concerns “action.” If by action we understand movements orchestrated by some significance created by the speaker, this is accurate but (again) are not actions to attain some goal. So our definition, based on Kendon’s but excising “deliberate” and specifying the kind of action (and far from tripping off the tongue), is this: A gesture is an unwitting, non-goal-directed action orchestrated by speaker-created significances, having features of manifest expressiveness. Very often I use “gesture” still more restrictively to mean all of the above, plus: An expressive action that enacts imagery (not necessarily by the hands or hands alone) that is part of the process of speaking.
A slightly different term does denote speech-linked gesture: "gesticulation," which in fact was used by Kendon in an earlier publication (1980). I remain with "gesture" partly for brevity, but more crucially because "gesticulation" carries an image of windmilling arms that is false to the reality we are aiming to explain. This reaction is not idiosyncratic: according to the Oxford English Dictionary, to gesticulate is "To make lively or energetic motions with the limbs or body; esp. as an accompaniment or in lieu of speech".

These are regarded as rival claims about infant predispositions for the forms of language, although in fact they seem closely related, emphasizing one side or the other of the linguistic sign, signifier or signified. Bootstrapping from either is expected to lead to other areas of language. One hypothesis holds that certain semantic (signified) patterns evolved (such as actor-action-recipient, from Pinker 1989); the other that syntactic (signifier) patterns did (such as subject-verb-object, from Gleitman 1990); both provide entrée to the rest of language. Of course, both or neither (as Slobin 2009 suggests in his review) may have evolved. The picture is murky to say the least.

So through the gesture window we see a kind of action of the mind that is linked to imagery and is part of language dynamically conceived, action regarded as part of the process of thinking for (and while) speaking.
3. Example of this perspective Here is one participant in our first experiment, recounting an animated color cartoon that she has just watched. It is typical of many gestures that we see (Fig. 2.1). It is not codified or “quotable” (to use Adam Kendon’s term), a gesture of the kind that appears in gesture atlases and dictionaries of the “gesture language” of some nationality or other. It is instead a unique, unlikely-to-recur, spontaneous, individually formed expression of the speaker’s idea at the moment of speaking. (For notation and the method of data collection, see McNeill this volume b, but briefly, square brackets enclose the gesture phrase; boldface shows the gesture stroke; underlining is a gesture hold, in this case a poststroke hold; “/” is a silent speech pause, and font size reflects prosodic peaks.) [ / and it goes dOWn] BH/mirroring each other in tense spread C-shapes, palms toward center (PTC), move down from upper central periphery to lower central periph.; sudden stop.
Fig. 2.1: Iconic gesture depicting an event from the animated stimulus, Canary Row. In the cartoon, Sylvester, the ever-frustrated cat, attempts to reach Tweety, his perpetual prey, by climbing a drainspout conveniently attached to the side of the building where Tweety in his birdcage is perched. In this instance, Sylvester is climbing the pipe on the inside, a stealth approach. Tweety nonetheless spots him, rushes out of his cage and into the room behind, then reappears with an impossibly large bowling ball, which he drops into the pipe. In the example, the speaker is describing the bowling ball’s descent. Used with permission of Cambridge University Press.
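For readers who code such transcriptions computationally, the notation described above maps naturally onto a structured record. The following is a minimal, hypothetical Python sketch; the field names are illustrative assumptions, not an established coding scheme, and the stroke span is assumed for illustration since boldface is lost in this transcription.

```python
# Hypothetical sketch only: one way to store a gesture coded with the notation
# described above (brackets = gesture phrase, boldface = stroke, underlining =
# post-stroke hold). Field names are illustrative, not an existing standard.
from dataclasses import dataclass

@dataclass
class CodedGesture:
    speech_in_phrase: str      # speech within the square-bracketed gesture phrase
    stroke_span: str           # portion of speech co-occurring with the stroke
    post_stroke_hold: bool     # whether the stroke is followed by a hold
    hand_description: str      # hand shape, orientation, handedness
    movement_description: str  # trajectory of the stroke

bowling_ball_example = CodedGesture(
    speech_in_phrase="/ and it goes dOWn",
    stroke_span="dOWn",        # assumed stroke span; boldface not recoverable here
    post_stroke_hold=True,     # the notation note describes a post-stroke hold
    hand_description="both hands, tense spread C-shapes, palms toward center (PTC)",
    movement_description=("move down from upper central periphery to lower "
                          "central periphery; sudden stop"),
)
```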
It is important to consider the precise temporal details of a gesture. They suggest that in the microgenesis of an utterance the gesture image and linguistic categorization constitute one idea unit, and their timing is an inherent part of how this idea unit is created. The start of the gesture preparation is the dawn of the idea unit, which is kept intact and is unpacked, as a unit, into the full utterance. The phases fall out at the precise moments of their intellectual realizations. The timing of the gesture phases is thus inherent to this developing meaning.
3.1. Interpreting the example It is not implied that gesture and speech always convey the same elements of meaning; they are co-expressive if they capture the same idea, but each may express a different aspect of it. In a referential sense, speech and gesture in the example did convey the same content, but semiotically they are not the same. Speech and gesture are “coexpressive,” meaning that they each express, in their own semiotic ways, the same underlying idea – here, that the bowling ball was moving down inside the pipe – but semiotically they are not redundant. The gesture in this example showed, in one symbolic form, a pipe-shape moving downward. The object and the path through which it passes moved together. They were fused into one package. Such a path-figure-ground unit (to use Talmy’s 2000 terms) does not occur in any language so far as we are aware. Talmy (2000) proposed a typology reflecting how the basic motion verbs of a language incorporate motion event components. In English and other “satellite-framed” languages, such as German, the Scandinavian languages, Chinese and still others, verbs incorporate the manner of motion but not the path (so run, walk, stagger, all denote different manners of motion with direction unspecified). In Spanish, Japanese, Korean, American Sign Language and other “verb-framed” languages, verbs incorporate path but not manner (two English verbs borrowed from Romance give satellite speakers the flavor of verb framing, “exit” and “enter” – “he exited out” or “entered in” is redundant). Yet other languages have verbs that incorporate the figure, or the entity doing the moving; this mindbending situation appears in Atsugewi, a Hokan language spoken in Northern California that Talmy studied in depth and, again, appears in eclectic English (rain as a verb). But apparently no language has single verbs that incorporate the figure and “ground” (landmark) with the path, such as *to inside, a verb of motion meaning “some figure moving upward or downward inside a container”. Nonetheless, the gesture in the example embodied this semantic package, figure plus path plus ground. This kind of fusion is typical of gesture. Meanings that in speech are analyzed into separate linguistic segments can be synthesized into single symbolic forms in gesture. Further, the speaker’s hands, in their tension (“tense spread C-shapes”), may have embodied the narrative idea of the bowling ball as the point of maximum energy of the episode, for which, again, there is no speech equivalent (although in this instance the speaker could in theory have mentioned it). The gesture, then, synthesized several elements of meaning that would be separated in speech, including some (like “downward moving hollowness”) that are impossible in speech. In speech, nearly everything is the reverse – the words “and it goes down,” the intransitive construction, the metapragmatic function of the “and” – all conventional
forms in the codified system of English, and the meaning of the whole is composed out of these separately meaningful parts according to the plan of the intransitive phrase. Due to synchrony, the gesture semiotic presents its content at the same time as the linguistic semiotic, and this duality is an important key to what evolved. This mechanism of combined semiotic opposites is one important spectacle we see through the gesture window.
4. The growth point A growth point (GP) is a mental package that combines both linguistic categorial and imagistic components. It is called a growth point because it is meant to be the initial pulse of thinking for and while speaking, out of which a dynamic process of organization emerges. Growth points are brief dynamic processes, during which idea units take form. It is a minimal unit, in Vygotsky’s 1987 sense of being the smallest unit that retains the quality of being a whole – here, a minimal unit of combined imagery and linguistic form. It is accordingly the smallest packet of an imagery-language dialectic, a minimal unit on the dynamic dimension of language, and the smallest unit of change on the microgenetic scale. For extensive discussion of the growth point, its relationship to context, and how it models the dynamic dimension of language, see the previously cited article (McNeill this volume b).
5. Gesture and linguistic relativity “Whorf,” a gifted amateur linguist, in this discussion is less an individual than an emblem for a range of ideas having to do with the influence of language on thought. The “Whorfian hypothesis” addresses language as a static object. It describes “habitual thought” as a static mode of cognition and how it is shaped through linguistic analogies (see Lucy 1992a, 1992b; Whorf 1956). A corresponding dynamic hypothesis is “thinking for speaking,” introduced by Dan Slobin as follows: “ ‘Thinking for speaking’ involves picking those characteristics that (a) fit some conceptualization of the event, and (b) are readily encodable in the language” (Slobin 1987: 435). In terms of growth points, the thinking for speaking hypothesis is that, during speech, growth points may differ across languages in predictable ways. It might be better called the hypothesis of “thinking while speaking.” A major insight, however the dynamic hypothesis is named, comes from the distinction between “satellite-framed” versus “verb-framed” languages identified by Talmy (1975, 1985, 2000), a distinction referring to how languages package motion event information, including path and manner.
5.1. S-type and V-type languages As pointed out earlier, English follows the satellite-framed or “S-type” semantics (Slobin 1996). In S-type languages, a verb packages the fact of motion with information about manner. Rolls is an example, and the verb describes, in one package, that something is in motion and how it is moving – motion by rolling. The path component, in contrast, is outside the verb. From rolls alone, we have no inkling of the direction of motion; for that we add one or more satellites: rolls out/in/down/up/through.
A complex curvilinear path in an S-type description tends to be resolved into a series of straight segments or paths. For example, "and it goes down but it rolls him out down the rainspout out into the sidewalk into a bowling alley" (a recorded example) contains one verb and six satellites or segments of path (down, out, down, out, into, into). It is also typical of these languages to emphasize ground – each path segment tends to have its own locus with respect to a ground element, as in this example: the sidewalk, rainspout, and bowling alley.

The other broad category of language is the verb-framed or "V-type," of which Spanish, French, Turkish, American Sign Language, and Japanese are examples. In such languages, a verb packages the fact of motion with path (the direction of motion) – ascends or descends, exits or enters, etc. – and it is manner that is conveyed outside the verb (or omitted altogether). Unlike in an S-type language, a complex curvilinear path can be described holistically with a single verb – descends, for example, for the same curvilinear path that was broken into six segments in the English example. V-type languages tend to highlight a whole mise en scène rather than an isolable landmark or ground (a collection of descriptions like "there are tall buildings and a slanted street with some signs around, and he ascends climbing," in contrast to "he climbs up the drainpipe," where upward path is localized to the ground, the drainpipe). This is termed the "setting" by Slobin.
5.2. Implications for growth points
In keeping with the Whorfian hypothesis, gestures also differ between S-type and V-type languages, implying the possibility of different growth points and imagery-language dialectics in languages of the two kinds (see McNeill and Duncan 2000).
5.3. Effects on path
The following comparisons of English and Spanish speakers describing the same bowling ball episode show that speakers of the two languages create different visuospatial imagery.
5.3.1. English
The above example of a speaker describing the aftermath of the bowling ball event divided the event into six path segments, each with its own path gesture:

(i) and it goes down
(ii) but it rolls him out
(iii) down the rain spout
(iv) out into the sidewalk
(v) into a bowling alley
(vi) and he knocks over all the pins
The match between speech and gesture is nearly complete. The speaker’s visuospatial cognition – in gesture – consists of a half dozen straight line segments, not the single curved path that Sylvester actually followed (Fig. 2.2).
Gesture and synchronous speech (see Fig. 2.2):
PATH 1: [/ and it goes down]
PATH 2: but [[it roll] [s him out*]]
PATH 3: [[down the / / ]
PATH 4: [ / rainspo]]
PATH 5: [ut/ out i][nto
PATH 6: the sidew]alk/ into a] [bowling alley
Fig. 2.2: English speaker’s segmentation of a curvilinear path. Computer art in this and subsequent illustrations by Fey Parrill. Used with permission of University of Chicago Press.
5.3.2. Spanish
In video recordings by Karl-Erik McCullough and Lisa Miotto, Spanish speakers, in contrast, represent this scene without significant segmentation. Their gestures are single, unbroken curvilinear trajectories. In speech, the entire path may be covered by a single verb. The following description is by a monolingual speaker, recorded in Guadalajara, Mexico:

(1) [entonces SSS]
    then SSSS
    'he falls'
The accompanying gesture traces a single, unbroken arcing trajectory down and to the side. What had been segmented in English becomes in Spanish one curvaceous gesture that re-creates Sylvester's path. In speech, the speaker made use of onomatopoeia, which is a frequent verb substitute in our Spanish-language narrative materials (Fig. 2.3). To quantify this possible cross-linguistic difference, Tab. 2.1 shows the number of path segments that occur in Spanish and English gestures for the path running from the encounter with the bowling ball inside the pipe to the denouement in the bowling alley. English speakers break this trajectory into 43 percent more segments than Spanish speakers: 3.3 in English and 2.3 in Spanish.
Fig. 2.3: Spanish speaker's single continuous arc for Sylvester's circuitous trip down the pipe, along the street and into the bowling alley (scan images from left to right). Elapsed time is about 1 sec. This illustrates Spanish-style visuospatial cognition of a curved trajectory as a single, unsegmented path. Used with permission of University of Chicago Press.
Tab. 2.1: Segmentation of paths by English- and Spanish-speaking adults (number of gestures)

Segments   English   Spanish
0          0         1
1          3         5
2          7         6
3          3         4
4          2         1
5          1         0
≥6         5         1
Total      21        18
Extremes, moreover, favor English: five English speakers divided the trajectory into six or more segments, compared to only one Spanish speaker. Thus Spanish speakers, even when they divide paths into segments, have fewer and broader segments.
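As a quick check (an illustrative sketch, not part of the original analysis), the "43 percent" figure follows directly from the mean segment counts reported above:

```python
# Illustrative arithmetic only: derive the "43 percent more segments" figure
# from the mean segment counts reported in the text (3.3 vs. 2.3).
english_mean_segments = 3.3
spanish_mean_segments = 2.3
relative_increase = (english_mean_segments - spanish_mean_segments) / spanish_mean_segments
print(f"{relative_increase:.0%}")  # -> 43%
```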
5.4. Effects on manner

5.4.1. Manner fogs
Slobin has observed many times in Spanish speech and writing that manner is cumbersome to include, and consequently speakers and writers tend to avoid it if they can (Slobin 1996). However, manner does not necessarily thereby disappear from the speaker's consciousness. The result is often a "manner fog" – a scene description that has multiple manner occurrences in gesture but lacks manner in speech. An example is the following, a description of Sylvester climbing the pipe on the inside:

(2) e entonces busca la ma[nera (silent pause)]
    'and so he looks for the way'
    Gesture depicts the shape of the pipe: ground.

(3) [de entra][r / / se met][e por el]
    to enter REFL goes-into through the
    Both hands rock and rise simultaneously: manner + path combined (left hand only through "mete").

(4) [desagüe / / ] [/ / si?]
    'drainpipe … yes?'
    Right hand circles in arc: manner + ground (shape of pipe).

(5) [desagüe entra /]
    'drainpipe, enters'
    Both hands briefly in palm-down position (clambering paws) and then rise with a chop-like motion: manner + path combined.
Gestural manner appeared in (3), (4), and (5), despite the total absence of spoken manner references. Thus, while manner may seem absent when speech alone is considered, it can be present, even abundant, in the visuospatial thinking.
Fig. 2.4: Spanish speaker's "manner fog," while describing Sylvester's inside ascent. She is at (3), "[de entra][r / / se met][e por el]" (to enter refl goes-into through the). Her hands continually rock back and forth (= climbing manner) while rising (= upward path) but without verbal mention of manner.
5.5. Manner modulation
In English, the opposite takes place, in a sense. Whereas a manner fog adds manner when it is lacking from speech, modulation adjusts manner that is obligatory in speech. Modulation solves a problem created by English verbs: they often package manner with motion and are accordingly manner verbs as well as verbs of motion, even when a speaker intends to convey only the fact of motion. A gesture, however, can include manner or not, and can accordingly modulate the manner component of verbs in exact accord with intentions. The following examples, from different English speakers, show manner modulation – respectively, reinforcement of manner and removal of manner:

(6) Speaker A (reinforces manner)
    but [it roll]s him out down the
    Both hands sweep to the right and both rotate at the wrist as they go, conveying both path and manner.
The gesture contains manner and synchronizes with the manner verb, "rolls." The context highlighted manner as the point of differentiation. The content and co-occurrence highlight manner and suggest that it was part of the psychological predicate.

(7) Speaker B (removes manner)
    and he rolls [/ down the drai]nspout
    Left hand (loose fist shape, palm side toward self) plunges straight down, conveying path only.
This gesture, despite the presence of the same verb, “rolls,” skips the verb and presents no manner content of its own. It shows path alone, and co-occurs with the satellite, “down.” Both the timing and the shape of the gesture suggest that manner was not a major element of the speaker’s intent and that “rolls,” while referentially accurate, was de-emphasized and functioned as a verb of motion only, with the manner content modulated (the speaker could as well have said “goes down,” but this would have meant editing out the true reference to rolling).
5.6. Chinese and English: Thematic groups and predicate domination
Chinese motion event gestures often resemble those in English, as would be expected from their shared typology as S-type languages. However, there are also differences. Chinese speakers perform a kind of gesture that appears, so far as I am aware, only in that language. It is as if, in English, we said a stick and, simultaneously, performed a gesture showing how to give a blow. While such a combination is obviously possible for an English speaker, it does not occur often, and when it does it is treated as an error by the speaker. Such combinations do take place with Mandarin speakers, however, and seem to do so with some frequency. The hallmark of this Chinese pattern is that a gesture occurs earlier in the temporal sequence of speech than we would find in English or Spanish. In an example and transcription from Susan Duncan we find the following:

(8)  lao tai-tai [na -ge
     old lady hold CLASSIFIER
(9)  da bang hao]-xiang gei
     big stick seem CAUSE
(10) ta da-xia
     him hit-down(verb-satellite)
     'The old lady apparently knocked him down with a big stick'

The gesture (a downward blow with her left hand, fist clenched around "the stick," palm facing center) that accompanied the spoken reference to the stick (da bang 'big stick') was expressively the same as the verb and satellite, da-xia 'hit-down'. However, the speaker's hand promptly relaxed, long before this verb phrase was reached in speech.

Chinese is what Li and Thompson (1976, 1981) termed a "topic prominent" language. Wallace Chafe stated the sense of topicalization intended: "What the topics appear to do is limit the applicability of the main predication to a certain restricted domain […] the topic sets a spatial, temporal, or individual framework within which the main predication holds" (Chafe 1976: 50). In this instance, the domain is what was done with the big stick. English and Spanish, in contrast, are "subject prominent." Utterances in the latter languages are founded on subject-predicate relations. In line with this typological distinction, we find cases like the above, in which gesture provides one element and speech another element, and they jointly create something like a topic frame. This may be again, therefore, the impact of language on thinking for speaking.

In English, too, gestures occasionally occur that depict an event yet to appear in speech (referring here to time lapses far longer than the previously discussed fraction-of-a-second gesture anticipations). Such premature imagery is handled by the speaker as an error, requiring repair. In the following, a gesture shows the result of an action and it occurred with speech describing its cause. This is a semantically
appropriate pairing not unlike the Chinese example, but it involved separating the gesture from the predicate describing the same event. It was repaired first by holding it until the predicate arrived, and then repeating it in enlarged form:

[so it hits him on the hea] [d and he winds up rolling down the stre]et
The two gestures in the first clause depicted Sylvester moving down the street, an event described only in the following clause. The difference between Chinese and English in this situation is apparent in the second line, the point at which the target predication emerged in speech. Unlike the Chinese speaker, whose hands were at rest by now, this English speaker held the gesture (underlined text) and then repeated it in a larger, more definite way when the possible growth point occurred. The subsequent enhanced repeat indicates the relevance of the gesture to the predicate. In other words, the speaker retained the imagery from the first clause for the growth point of the second. She did not, as the Chinese speaker did, use it as a self-contained framing unit when it first appeared.
5.7. Summary: Visuospatial cognition across languages
From the gesture evidence, we infer the following differences in visuospatial cognition across the languages surveyed:
(i) Gestural paths tend to be broken into straight line segments in English and to remain unbroken curvilinear wholes in Spanish. Chinese also tends to break paths into straight line segments.
(ii) Gestural manner tends to expand the encoding resources of Spanish and to modulate them in English (the relationship in Chinese is not known).
(iii) Gestures can combine with linguistic segments to create discourse topics: this occurs in Chinese, but not in English or Spanish.
6. Gesture and ontogenesis 6.1. The decomposition effect “Decomposition” refers to a reduction of motion event complexity in gestures after an earlier stage where, seemingly, they have full complexity. Decomposition suggests that the meanings within gestures are becoming analytically integrated with speech: the path and manner components of motion events come to be handled separately, as they also are in linguistic representations (see “he climbs [manner and the fact of motion] up [path]”). Episodes in the animated cartoon are often comprised of motion events in which path and manner components are simultaneously present. Sylvester rolling down a street with the bowling ball inside him is a motion event incorporating both path (along the street) and manner (rolling). Adults, when they describe such motion events, typically produce gestures showing only path (for example, the hand moving down) or gestures showing in a single gesture both manner and path (for example, the hand
rotating as it goes down for rolling). Manner without path, however, rarely occurs. Children, like adults, have path-only gestures but, unlike adults, they also have large numbers of pure manner gestures and few path+manner gestures.
Fig. 2.5: No decomposition with an English speaking 2;6 year-old, who has path and manner in one gesture. The hand simultaneously swept to the right, moved up and down, and opened and closed. Computer art by Fey Parrill. Used with permission of the University of Chicago Press.
In other words, they "decompose" the motion event to pure manner or pure path, and tend not to have gestures that combine the semantic components. Decomposition, while seemingly regressive, is actually a step forward.

The youngest child from whom we have obtained any kind of recognizable narration of the animated stimulus was a two-and-a-half-year-old English-speaking child. The accompanying illustration (Fig. 2.5) shows her version of Sylvester rolling down the street with the bowling ball inside him (she reasons that it is under him). The important observation is that she does not show the decomposition effect. In a single gesture, the child combines path (her sweeping arc to the right) and manner (in two forms – an undulating trajectory and an opening and closing of her hand as it sweeps right, suggested by the up-and-down arrow).

Is this an adult-like combined manner-path gesture? I believe not. An alternative possible interpretation is suggested by Werner and Kaplan (1963), who described a nonrepresentational mode of cognition in young children, a mode that could also be the basis of this gesture. Werner and Kaplan said that the symbolic actions of young children (in this case, the gesture) have "the character of 'sharing' experiences with the Other rather than of 'communicating' messages to the Other" (1963: 42). Sharing with, as opposed to communicating and representing to, could be what we see in the two-and-a-half-year-old's gesture. The double indication of manner is suggestive of sharing, since this redundancy would not be a relevant force, as it might have been in a communicative representation of this event; the child was merely trying to re-create an interesting spectacle for her mother.

One of the first attempts by children to shape their gestures for combination with language could be the phenomenon of path and manner decomposition. The mechanism causing this could be that the decomposition effect creates in gesture what Karmiloff-Smith (1979) has suggested for speech: When children begin to see elements of meaning in a form, they tend to pull these elements out in their representations to
get a "better grip" on them. Bowerman (1982) added that the elements children select tend to be those with "recurrent organizational significance" in the language. Manner and path would be such elements, and their reduction in gesture to a single component could be this kind of hyperanalytic response. Three illustrations show the decomposition effect in English (age 4, Fig. 2.6), Mandarin (age 3;11, Fig. 2.7), and Spanish (age 3;1, Fig. 2.8).
Fig. 2.6: English-speaking four-year-old with decomposition to manner alone. The child is describing Tweety escaping from his cage. The stimulus event combined a highly circular path with flying. The child reduces it to pure manner – flapping wings, suggested by the two arrows – without path, which was conspicuous in the stimulus and had been emphasized by the adult interlocutor (not shown, but who demonstrated Tweety’s flight in a simultaneous path-manner gesture). The embodiment of the bird, in other words, was reduced to pure manner, path excised. Computer art by Fey Parrill. Used with permission of the University of Chicago Press.
Fig. 2.7: Mandarin-speaking 3;11-year-old with decomposition to manner – clambering without upward motion. The child is describing Sylvester’s clambering up the pipe on the inside. The hands depict manner without upward path (while he says, “ta* [# ta zhei- # yang-zi* /] he* (‘# he this- # way* /’). Direction is shown through his upward-orientated body and arms. Direction is one aspect of path, although there is no upward motion in this case. Computer art by Fey Parrill. Used with permission of the University of Chicago Press.
Fig. 2.8: Spanish-speaking 3;1-year-old with decomposition to manner – clambering without upward motion. The child is likewise describing Sylvester as he climbs up the pipe on the inside. His mother had asked, y lo agarró? ('and did he grab him?') and the child answered, no # /se subió en el tubo [y le corrió] ('no # he went up the tube and he ran'), with the gesture illustrated – both hands clambering without path. Computer art by Fey Parrill. Used with permission of the University of Chicago Press.
In languages as different as English, Mandarin and Spanish, children beyond three years decompose motion events that are fused in the stimulus, and are fused again by adult speakers of the same languages (see “Whorf ” section above).
6.2. Perspective
We gain insight into the decomposition effect and how it forms a step in the child's emerging imagery-language dialectic when we consider gestural viewpoint: the first-person or character viewpoint (C-VPT) and the third-person or observer viewpoint (O-VPT). In observer viewpoint, the speaker's hands are a character or other entity as a whole, the space is a stage or screen on which the action occurs, and the speaker's body is distanced from the event and is "observing" it. In character viewpoint, the speaker's hands are the character's hands, her body is its body, her space its space, etc. – the speaker enters the event and becomes the character in part. Unlike pantomime, character viewpoint is synchronized and co-expressive with speech and forms psychological predicates (see Parrill 2011 for extensive discussion of viewpoint combinations). Tab. 2.2 shows the viewpoints of path decomposed, manner decomposed and fused path + manner gestures for three age groups; all are English speakers. For adults, we see that most gestures are observer viewpoint, both those that fuse manner and path and those with path alone. Few gestures in either viewpoint occur with manner alone. For children, both older and younger, we see something quite different. Not only do we see the decomposition effect, but manner and path are sequestered into different viewpoints. Path tends to be observer viewpoint and manner character viewpoint. This sequestering enforces the path-manner decomposition: if one gesture cannot have both viewpoints, it is impossible to combine the motion event components.
Tab. 2.2: Gestural viewpoints of English speakers at three ages*

Group               Viewpoint   M+P Combined   M Decomposed   P Decomposed
Adults (N=25)       C-VPT       0%             6%             0%
                    O-VPT       38%            0%             58%
7-11 years (N=23)   C-VPT       3%             27%            19%
                    O-VPT       4%             13%            34%
3-6 years (N=45)    C-VPT       5%             25%            2%
                    O-VPT       9%             10%            49%

*All figures are percentages. M = manner; P = path. C-VPT = character viewpoint; O-VPT = observer viewpoint.
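To make the sequestering concrete, the following sketch (illustrative only; the shares it computes are derived from Tab. 2.2, not statistics reported in the chapter) calculates what proportion of manner-only gestures falls in character viewpoint and what proportion of path-only gestures falls in observer viewpoint for each age group.

```python
# Percentages from Tab. 2.2; the derived shares below are illustrative summaries.
table = {  # group: {"M": (C-VPT %, O-VPT %), "P": (C-VPT %, O-VPT %)}
    "Adults":     {"M": (6, 0),   "P": (0, 58)},
    "7-11 years": {"M": (27, 13), "P": (19, 34)},
    "3-6 years":  {"M": (25, 10), "P": (2, 49)},
}
for group, cols in table.items():
    m_c, m_o = cols["M"]
    p_c, p_o = cols["P"]
    manner_share_cvpt = m_c / (m_c + m_o)  # share of manner-only gestures in C-VPT
    path_share_ovpt = p_o / (p_c + p_o)    # share of path-only gestures in O-VPT
    print(f"{group}: manner in C-VPT {manner_share_cvpt:.0%}, "
          f"path in O-VPT {path_share_ovpt:.0%}")
```

For adults the manner column is nearly empty, so the first share carries little weight; for both child groups the shares show manner gravitating to character viewpoint and path to observer viewpoint, which is the sequestering described above.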
The decomposition effect and this viewpoint sequestering are very long lasting; children up to twelve years old still show them. Longevity implies that the final break from decomposition depends on some kind of late-to-emerge development that enables the child (at last) to conceptualize manner in the observer viewpoint. Until this development, whatever it may be, the difference in perspective locks in path-manner decomposition.
6.3. Imitation In an example thanks to Karl-Erik McCullough, the decomposition of manner and its sequestering in character viewpoint is revealed in another way by imitation. Children do not imitate model gestures with manner in observer viewpoint, even when the model is directly in front of them and the imitation is concurrent. They change the model to fit their own decompositional semantics, putting manner into the character viewpoint and omitting path. In Fig. 2.9, a four-year-old is imitating an adult model. The adult depicts manner (running) plus path in observer viewpoint, his hand moving forward, fingers wiggling. The child watches intently; nonetheless, she transforms
Fig. 2.9: Decomposition to manner alone in imitation of model with combined path and manner. Computer art by Fey Parrill. Used with permission of the University of Chicago Press.
the gesture into manner with no path, in character viewpoint (in the gesture, she is Sylvester, her arms moving as if running).
7. Neurogesture I describe here a case of severe Broca’s agrammatic aphasia from Pedelty (1987), a case of Wernicke’s aphasia, also from Pedelty, a case of split-brain gesture, collected in collaboration with Dalia Zaidel, a psychologist at University of California, Los Angeles, and the effects of right hemisphere injury on gesture, collected in collaboration with Laura Pedelty. The first case demonstrates the presence of growth points in Broca’s aphasia, the second the truncation of growth points in Wernicke’s aphasia, and the split-brain case a role in the production of iconic gesture for the right hemisphere, a role confirmed by the study of right-hemisphere injury itself.
7.1. Agrammatic (Broca's) aphasia
To judge from Pedelty's data, Broca's aphasia spares (i) growth points, (ii) the capacity to construct the context from which a growth point is differentiated, and (iii) the formation of psychological predicates, but it impairs the ability to access constructions and to orchestrate sequences of speech and gesture movements.

Fig. 2.10: Gestures by an agrammatic (Broca's) aphasic speaker timed with "an' down t' t' down." The speaker was attempting to describe Tweety's bowling ball going down the drainpipe. Computer art by Fey Parrill. Used with permission of the University of Chicago Press.

The speaker in Fig. 2.10 had viewed the animated stimulus (the bowling ball scene). She clearly was able to remember many details of the scene but suffered extreme impairment of linguistic sequential organization: "cat – bird? – 'nd cat – and uh – the uh – she (unintell.) – 'partment an' t* – that (?) – [eh ///] – old uh – [mied //] – uh – woman – and uh – [she] – like – er ap – [they ap – #] – cat [/] – [an' uh bird /] – [is //] –
I uh – [ch- cheows] – [an' down t' t' down]". Gestures occurred at several points, indicated with square brackets, and appeared to convey newsworthy content. The figure shows a gesture synchronous with "an' down t' t' down," depicting the bowling ball's downward path. Plausibly, this combination of imagery and linguistic categorization was a growth point. The gesture occurred at the same discourse juncture where gestures from normal speakers also occur, implying that for the patient, as for normals, a psychological predicate was being differentiated from a field of oppositions.
7.1.1. Broca catchments We also see Broca’s aphasics briefly overcoming severe agrammatic limits in the course of gesture catchments (catchments are when space, trajectory, hand shape, etc. recur in two or more – not necessarily consecutive – gestures). Such recurring features mark out discourse cohesion and provide an empirical route, based on gestures themselves, to the discovery of the discourse beyond the individual utterance (for more, see McNeill this volume b). In one case, over time, with ongoing spatial recurrences made by repeated gesture points into the upper space, speech advanced from single elements (“el”), to phrases (“on the tracks”), to a single clause (“he saw an el train”) to, finally, in the last slide and without a gesture, a sentence with an embedded clause (“he saw the el train comin’ ”). Of course, the duration of the time it took to reach the final two-clause construction was far too great for normal social discourse (two minutes, seventeen seconds), but it shows that complex linguistic forms are possible with gesture support.
7.2. Wernicke's aphasia
Wernicke's aphasia, in a sense, is the inverse of Broca's. Speech is fluent but semantically and pragmatically empty or unconstrained, with distortions of word forms (paraphasias) and "clangs," unbridled phonetic primings, as in "a little tooki tooki goin' to-it to him." It is difficult to say what exactly the speaker is trying to say in the following, but the recurring speech and gesture seem to reflect his impression of Sylvester's many attempts to reach Tweety ("go to it"):

a little tooki tooki goin to-it to him looki' on a little little tooki goin' to him it's a not digga not næ he weduh like he'll get me mema run to-it they had to is then he put it sutthing to it takun a jo to-it that's nobody to-it I mean pawdi di get to-it she got got got glasses she could look to-it

After injury, gestures, like speech, are garbled and lack intelligible pragmatic or semantic content. Strikingly, one gesture-speech combination ("to-it" with the gesture in Fig. 2.11) seems to have become fixed in his memory and repeatedly occurred; a
growth point that – very abnormally – would not switch off (normal growth points disintegrate after a second or two; see McNeill 1992: 240–244).
Fig. 2.11: Wernicke aphasic recurring imagery with the phrase “to-it.” Each panel shows a speechgesture combination created without meaningful context. The panels represent temporally widely separated points, and show “getting to-it.” Computer art by Fey Parrill. Used with permission of the University of Chicago Press.
Within traditional models of the brain, Wernicke's area supplies linguistic categorial content. It is known to be essential for speech comprehension, which is severely disrupted after injury to the posterior speech area (Gardner 1974). However, it also might play a role in speech production. As inferred from the effect of injuries, Wernicke's area could help generate the categorial content of growth points; this content, in turn, gives the imagery of the growth point a shape that accords with the linguistic meanings. Damage accordingly interferes with the growth point, as we see in the transcript and Fig. 2.11. The repetitiveness in the "to-it" example, whereby an initially meaningful speech-gesture combination (as it appears) became detached from context and sense, means that all ensuing growth points are denied content (since they cannot vary their linguistic categorial parts).
7.3. Right hemisphere injury The right hemisphere is often called “nonlinguistic,” and this label is appropriate in one sense – limited access to the static dimension. But the dynamic dimension – imagery, context, relevance – depends on it. As suggested in the Wernicke discussion, the right hemisphere may be a brain region involved in the formation of growth points. In contrast to Wernicke’s aphasia, where the growth point itself breaks down, right hemisphere damage affects the contextual background of the growth point, catchments and fields of oppositions, and hence psychological predicates. All of this is demonstrated in the cases below, recorded in collaboration with Laura Pedelty (see McNeill and Pedelty 1995 for a summary).
7.3.1. Imagery decline One effect of right hemisphere damage is to reduce the sheer amount of gesture. In turn, reduced output suggests depletion of imagery. Not all right-hemisphere injured
patients display reduced gesture output. Our sample of five such patients clusters at the low end of the distribution. Two other right-hemisphere patients had gesture outputs in the normal range. The difference between the groups presumably is due to details of the injured areas (hand dominance was not a factor). Since the two preserved patients and the five depleted patients were non-overlapping groups, it is misleading to combine them statistically. We therefore focus on the depletion phenomenon and limit our sample to that group. Tab. 2.3 compares Canary Row narrations by 5 right hemisphere patients to those by 3 normal speakers.

Tab. 2.3: Effect of right hemisphere injury on gesture

                         5 RH    3 Normal
Total gestures           20      103
Gestures/clause (avg.)   0.2     1.1
Gestures/minute          4.2     15
Injury has no impact on speech, as measured in the number of clauses and number of words, or the length of time taken to recount the stimulus; if anything, right hemisphere damaged speakers are more talkative on these measures (Tab. 2.4).

Tab. 2.4: Non-effect of right hemisphere injury on speech

           5 RH    3 Normal
Clauses    114     96
Words      773     656
And right hemisphere damaged patients talk faster – more words and clauses, while taking less time (Tab. 2.5).

Tab. 2.5: Effect of right hemisphere injury on speech time

                   5 RH     3 Normal
Minutes            5.3      6.8
Clauses/minute     21.5     14.1
Words/minute       145.8    96.5
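A quick arithmetic check (a minimal sketch, not from the chapter, assuming the minutes in Tab. 2.5 refer to the same narrations as the totals in Tab. 2.4) shows how the per-minute rates follow from the totals:

```python
# Illustrative check: the per-minute rates in Tab. 2.5 are the Tab. 2.4 totals
# divided by the narration times (5.3 and 6.8 minutes).
groups = {
    "5 RH":     {"clauses": 114, "words": 773, "minutes": 5.3},
    "3 Normal": {"clauses": 96,  "words": 656, "minutes": 6.8},
}
for name, g in groups.items():
    clauses_per_min = g["clauses"] / g["minutes"]
    words_per_min = g["words"] / g["minutes"]
    print(f"{name}: {clauses_per_min:.1f} clauses/min, {words_per_min:.1f} words/min")
# -> 5 RH: 21.5 clauses/min, 145.8 words/min
# -> 3 Normal: 14.1 clauses/min, 96.5 words/min
```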
Gesture imagery thus seems to be the specific target of right hemisphere damage.
7.3.2. Cohesion deficit
It is well known that right hemisphere injury interrupts the cohesion and logical coherence of discourse (Gardner et al. 1983). This breakdown of cohesion and coherence is clearly seen in the narratives such patients produce. One patient begins with an event (ascending the drainpipe) and then,
without indicating a transition, jumps to the middle of a different event (involving an organ grinder and a monkey disguise). The narrative then shifts back to the end of the drainpipe event; then moves back to the start of the organ grinder-monkey event; and finally returns to the organ grinder-monkey event, now in the middle. As Susan Duncan has pointed out, the patient seems unaware, in other words, of the logical and temporal flow in the story. The speaker recalls the events from the cartoon in a more or less random order. His narrative strategy was to follow stepwise associations, with each successive association triggering a further step. We shall encounter a similar incremental style in a split-brain patient (patient LB). Reliance on association might be the left hemisphere's modus operandi in both cases.
7.3.3. Unstable growth points
Given the central role of imagery in the formation of growth points, right hemisphere injury could (a) disrupt growth point functioning by disturbing the visuospatial component of the growth point. It could also (b) lead to instability of the imagery-language dialectic, making catchments difficult to achieve. A phenomenon supporting both hypotheses is that, in some right hemisphere patients, chance gaze at one's own gesture causes a change of linguistic categorization. This illustrates an instability and fragility of the language-gesture combination and is a further manifestation of a lack of discourse cohesion. In the following, there is a lexical change after the subject observes her own hopping gesture:

I just saw the* # the cat running ar* run* white and black cat [# running here or there t* hop to here*] here, there, everywhere.

Hand hops forward four times; the labels a-c mark points in the utterance:
a = onset of first hopping gesture
b = between second and third hopping gestures, and the approximate point when her hand entered her field of vision
c = fourth hopping gesture

The speaker was describing Sylvester's running and began with this verb, but, for reasons unknown, her hand was undulating as it moved forward. As she caught sight of her own motion, she started to say "hopping." Kinetic experience also may have been a factor, but it was not sufficient since the change occurred only when her hopping hand moved into her field of view. The example illustrates an imagery-language looseness and release from ongoing cohesive constraints that seems to be a result of right hemisphere damage. The imagery with "running" was not constrained by the linguistic category "to run," in contrast to normal gesture imagery that is strongly adapted to the immediate linguistic environment. It also illustrates, however, that speech and gesture are still tightly bound after right hemisphere damage, in that speech shifted when the undulating gesture came into view.
7.4. The split-brain
The surgical procedure of commissurotomy (the complete separation of the two hemispheres at the corpus callosum) has been performed in selected cases of intractable epilepsy, where further seizures would have led to dangerous brain injury. Such cases have fascinated neuropsychologists for generations. The patients seem to have two sensibilities inside one skull, each half brain with its own powers, personality and limitations. We had an opportunity to test two patients, LB and NG, through the good offices of Colwyn Trevarthen, at the University of Edinburgh, who introduced us to Dalia Zaidel, a psychologist at the University of California, Los Angeles. She was studying and looking after the patients and generously agreed to videotape them retelling our standard animated stimulus (for a general description of the split-brain patient, see Gazzaniga 1970, and for a history of how they have been studied, Gazzaniga 1995).

The split-brain procedure should create obstacles to organizing linguistic output, since the expected coordination of the two hemispheres is no longer available; straightforward organization of linguistic actions should not be possible. In fact, LB and NG appear to follow distinct strategies designed to solve the two-hemisphere problem (see McNeill 1992). LB seems to rely heavily on his left hemisphere, even for the production of gestures, and makes little use of his right hemisphere. NG, in contrast, seems "bicameral," her left hemisphere controlling speech and her right hemisphere her gestures (she was strongly right handed, but a bicameral division of function is possible since each hemisphere has motor links to both sides of the body). Accomplishing this feat implies that NG was communicating to herself externally – her left hemisphere watching her right hemisphere's gestures and her right hemisphere listening to her left hemisphere's speech. As a result, although her gestures were often synchronized with speech, they also could get out of temporal alignment. The most telling asynchrony is when speech precedes co-expressive gesture, a direction almost never seen in normal speech-gesture timing, but not uncommon in NG's performance.

LB had few gestures. Most were beats or simple conduit-like metaphoric gestures with the hand, palm up, "holding a discursive object," performed in the lower center gesture space, near his lap. This absence of iconicity is consistent with a left-hemisphere origin of his gestures. He could make bimanual gestures, almost always two similar hands of the Palm Up or Palm Down Open Hand types, with corresponding metaphoric significances. Again, this could be managed from the left hemisphere via bimanual motor control. His narrative style was list-like, a recitation of steps from the cartoon, sometimes accompanied by counting on his fingers, which also is consistent with a preponderantly left-hemisphere organization. This decontextualized style and minimal gesturing may be what the left hemisphere is capable of on its own. His approach was not unlike that of the right-hemisphere patient described earlier, who also displayed a list-like form of recitation. LB's recall, however, was better, and far more sequential. Such similarity is explained if neither speaker was using his right hemisphere to any degree, albeit for different reasons.

In contrast, NG remembered less, but had gestures of greater iconicity. Her gestures look repetitive and stylized, although this impression is difficult to verify.
Still, her narration, while poorer than LB’s in the amount recalled, was more influenced by a sense of the story line.
LB and NG therefore jointly illustrate one of our main conclusions above – the right hemisphere (available to NG, apparently minimally used by LB) is necessary for situating speech in context and imbuing imagery with linguistically categorized significance; the left hemisphere (relied on by LB, available to NG) orchestrates well-formed speech output but otherwise has minimal ability to apprehend and establish discourse cohesion.
7.4.1. A right-hemisphere coup d'état?
LB had of course an intact right hemisphere. In some instances it appears to have asserted itself in a kind of coup d'état. LB sometimes performed elaborate iconic gestures; the trade-off was that speech then completely stopped. The right brain appears to have taken control and speech – Broca's specialty – ceased for the duration. An example is LB saying, "he had a plan," then speech stopping while an elaborate iconic gesture took place (the elapsed time was more than a second). After the gesture, speech then resumed with "to get up," completing the clause – as if the left hemisphere had switched to standby while the right hemisphere intruded. Each hemisphere was thus performing its specialty but could not coordinate with the other hemisphere. With normal speakers this event is a discourse climax: it would be registered in an iconic gesture with synchronous speech, and together they would highlight the climactic role. Climax is what the right hemisphere would apprehend, and the discourse juncture seems to have activated LB's gesture; but in so doing, his growth point leapt the chasm to the right hemisphere, leaving the left hemisphere, and with it the power of speech, behind. Lausberg et al. (2003) suggest that the isolated left hemisphere simply ignores experiences arising from the right hemisphere (also Zaidel 1978). In this instance, however, LB's left hemisphere was attentive to a right hemisphere gesture – like NG in this regard, possibly traveling an external attention route from one hemisphere to the other.
8. Summary and brain model
This article is meant to give an overview of a "psychological perspective" on gesture – what through this window we see of mind and brain. The vista can be summarized with steps toward a brain model of language and gesture. The language centers of the brain have classically been regarded as just two, Wernicke's and Broca's areas, but if we are on the right track, contextual background information must be present to activate the broader spectrum of brain regions that the model describes. Typical item-recognition and production tests would not tap these other brain regions, but discourse, conversation, play, work, and the exigencies of language in daily life would.

(i) The brain must be able to combine motor systems – manual and vocal/oral – in a systematic, meaning-controlled way.

(ii) There must be a convergence of two cognitive modes – visuospatial and linguistic – and a locus where they converge in a final motor sequence. Broca's area is a logical candidate for this place. It has the further advantage of orchestrating actions that can be realized both manually and within the oral-vocal tract. MacNeilage (2008) relates speech to cyclical open-close patterns of the mandible, and proposes that speech could have evolved out of ingestive motor control. (See language origin theories in McNeill this volume a.)

(iii) More than Broca's and Wernicke's areas underlie language – there is also the right hemisphere and interactions between the right and left hemispheres, as well as possibly the frontal cortex. A germane result is Federmeier and Kutas (1999), who found through evoked potential recordings different information strategies in the right and left sides of the brain – the right they characterized as "integrative," the left as "predictive." These terms relate very well to the hypothesized roles of the right and left hemispheres in the generation of growth points and unpacking. The growth point is integrative, par excellence, and is assembled in the right hemisphere, per the hypothesis of this chapter; unpacking is sequential orchestration, and orchestration would be involved in prediction, when that is the experimental focus. And Kelly, Kravitz and Hopkins (2004) observe evoked response effects (N400) in the right brain when subjects observe speech-gesture mismatches.

(iv) Wernicke's area serves more than comprehension – it also provides categorization, might initiate imagery and might also shape it.

(v) Imagery arises in the right hemisphere and needs Wernicke-originated categorizations to form growth points. Categorial content triggers and/or shapes the imagery in the right hemisphere. At the same time, it is related to the context to which the right hemisphere has access.

(vi) The growth point is unpacked in Broca's area. Growth points may take form in the right hemisphere, but they are dependent on multiple areas across the brain (frontal, posterior left, as well as right and anterior left). In addition, the cerebellum would be involved in the initiation and timing of gesture phases relative to speech effort (see Spencer et al. 2003). However, this area is not necessarily a site specifically influenced by the evolution of language ability.

(vii) Catchments and growth points specifically are shaped under multiple influences – from Wernicke's area, the right hemisphere, and the frontal area – and take form in the right hemisphere. (For catchments, see McNeill this volume b.)
Throughout the model, the concept is that information from the posterior left hemisphere, the right hemisphere, and the prefrontal cortex converges and is synthesized in the frontal left hemisphere motor areas of the brain – Broca's area and the adjacent premotor areas. This circuit could be composed of many smaller circuits – "localized operations [that] in themselves do not constitute an observable behavior […] [but] form part of the neural 'computations' that, linked together in complex neural circuits, are manifested in behaviors" (Lieberman 2002: 39). See Feyereisen (volume 2) for evidence from aproprioception (the loss of proprioception) for a thought-language-hand link in the brain. Broca's area in all this is the unique point of (a) convergence and (b) orchestration of manual and vocal actions guided by growth points and semantically framed language forms. The evolutionary model presented in McNeill (this volume a) specifically aims at explaining orchestration of actions under other significances in this brain area and how it could have been co-opted by language and thought.
9. References
Bowerman, Melissa 1982. Starting to talk worse: Clues to language acquisition from children's late speech errors. In: Sidney Strauss (ed.), U-Shaped Behavioral Growth, 101–145. New York: Academic Press.
Chafe, Wallace 1976. Givenness, contrastiveness, definiteness, subjects, topics, and point of view. In: Charles N. Li (ed.), Subject and Topic, 25–55. New York: Academic Press.
Federmeier, Kara D. and Marta Kutas 1999. Right words and left words: Electrophysiological evidence for hemispheric differences in meaning processing. Cognitive Brain Research 8: 373–392.
Feyereisen, Pierre volume 2. Gesture and the neuropsychology of language. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin: De Gruyter Mouton.
Gardner, Howard 1974. The Shattered Mind. New York: Vintage Books.
Gardner, Howard, Hiram H. Brownell, Wendy Wapner and Diane Michelow 1983. Missing the point: The role of the right hemisphere in the processing of complex linguistic material. In: Ellen Perecman (ed.), Cognitive Processing in the Right Hemisphere, 169–191. New York: Academic Press.
Gazzaniga, Michael S. 1970. The Bisected Brain. New York: Appleton-Century-Crofts.
Gazzaniga, Michael S. 1995. Consciousness and the cerebral hemispheres. In: Michael S. Gazzaniga (ed.), The Cognitive Neurosciences, 1391–1400. Cambridge: Massachusetts Institute of Technology Press.
Gleitman, Lila 1990. The structural sources of verb meanings. Language Acquisition 1(1): 3–55.
Karmiloff-Smith, Annette 1979. Micro- and macrodevelopmental changes in language acquisition and other representational systems. Cognitive Science 3: 91–118.
Kelly, Spencer D., Corinne Kravitz and Michael Hopkins 2004. Neural correlates of bimodal speech and gesture comprehension. Brain and Language 89: 253–260.
Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: May Ritchie Key (ed.), The Relationship of Verbal and Nonverbal Communication, 207–227. The Hague: Mouton.
Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Lausberg, Hedda, Sotaro Kita, Eran Zaidel and Alain Ptito 2003. Split-brain patients neglect left personal space during right-handed gestures. Neuropsychologia 41: 1317–1329.
Li, Charles N. and Sandra A. Thompson 1976. Subject and topic: A new typology of language. In: Charles N. Li (ed.), Subject and Topic, 457–490. New York: Academic Press.
Li, Charles N. and Sandra A. Thompson 1981. Mandarin Chinese: A Functional Reference Grammar. Berkeley: University of California Press.
Lieberman, Philip 2002. On the nature and evolution of the neural bases of human language. Yearbook of Physical Anthropology 45: 36–63.
Lucy, John A. 1992a. Grammatical Categories and Cognition: A Case Study of the Linguistic Relativity Hypothesis. Cambridge: Cambridge University Press.
Lucy, John A. 1992b. Language Diversity and Thought: A Reformulation of the Linguistic Relativity Hypothesis. Cambridge: Cambridge University Press.
MacNeilage, Peter F. 2008. The Origin of Speech. Oxford: Oxford University Press.
McNeill, David 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
McNeill, David this volume a. The co-evolution of gesture and speech, and its downstream consequences. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton.
McNeill, David this volume b. The growth point hypothesis of language and gesture as a dynamic and integrated system. In: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton.
McNeill, David and Susan D. Duncan 2000. Growth points in thinking for speaking. In: David McNeill (ed.), Language and Gesture, 141–161. Cambridge: Cambridge University Press.
McNeill, David and Laura Pedelty 1995. Right brain and gesture. In: Karen Emmorey and Judy Snitzer Reilly (eds.), Sign, Gesture, and Space, 63–85. Hillsdale, NJ: Erlbaum.
Parrill, Fey 2011. The relation between the encoding of motion event information and viewpoint in English-accompanying gestures. Gesture 11: 61–80.
Pedelty, Laura L. 1987. Gesture in Aphasia. Ph.D. dissertation, University of Chicago.
Pinker, Steven 1989. Learnability and Cognition: The Acquisition of Argument Structure. Cambridge, MA: Massachusetts Institute of Technology Press.
Slobin, Dan I. 1987. Thinking for speaking. In: Jon Aske, Natasha Beery, Laura Michaelis and Hana Filip (eds.), Proceedings of the Thirteenth Annual Meeting of the Berkeley Linguistic Society, 435–445. Berkeley, CA: Berkeley Linguistic Society.
Slobin, Dan I. 1996. From "thought and language" to "thinking for speaking." In: John Joseph Gumperz and Stephen C. Levinson (eds.), Rethinking Linguistic Relativity, 70–96. Cambridge: Cambridge University Press.
Slobin, Dan I. 2004. The many ways to search for a frog: Linguistic typology and the expression of motion events. In: Sven Strömqvist and Ludo Verhoeven (eds.), Relating Events in Narrative, Volume 2: Typological and Contextual Perspectives, 219–257. Mahwah, NJ: Lawrence Erlbaum.
Slobin, Dan I. 2009. Review of M. Bowerman & O. Brown (eds). Crosslinguistic perspectives on argument structure: Implications for learnability. Journal of Child Language 36: 697–704.
Spencer, Rebecca M.C., Howard N. Zelaznik, Jörn Diedrichsen and Richard B. Ivry 2003. Disrupted timing of discontinuous but not continuous movements by cerebellar lesions. Science 300: 1437–1439.
Talmy, Leonard 1975. Syntax and semantics of motion. In: John P. Kimball (ed.), Syntax and Semantics, Volume 4, 181–238. New York: Academic Press.
Talmy, Leonard 1985. Lexicalization patterns: Semantic structure in lexical forms. In: Timothy Shopen (ed.), Language Typology and Syntactic Description, Volume III: Grammatical Categories and the Lexicon, 57–149. Cambridge: Cambridge University Press.
Talmy, Leonard 2000. Toward a Cognitive Semantics. Cambridge: Massachusetts Institute of Technology Press.
Vygotsky, Lev Semenovich 1987. Thought and Language. Edited and translated by Eugenia Hanfmann and Gertrude Vakar (revised and edited by Alex Kozulin). Cambridge: Massachusetts Institute of Technology Press.
Werner, Heinz and Bernard Kaplan 1963. Symbol Formation. New York: John Wiley. [Reprinted in 1984 by Erlbaum].
Whorf, Benjamin Lee 1956. Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf. Edited by John B. Carroll. Cambridge: Massachusetts Institute of Technology Press.
Zaidel, Eran 1978. Concepts of cerebral dominance in the split brain. In: Pierre A. Buser and Arlette Rougeul-Buser (eds.), Cerebral Correlates of Conscious Experience, 263–284. Amsterdam: Elsevier.
David McNeill, Chicago, IL (USA)
3. Gestures and speech from a linguistic perspective: A new field and its history
1. Gestures as part of spoken language – a sketch of historical perspectives
2. Gestures as part of language – a sketch of the present state of the art
3. Conclusion
4. References
Abstract

This chapter gives a brief overview of gesture research from a linguistic point of view. It begins with a short overview of the history of research on gestures as part of spoken language and an attempt to understand the longstanding lack of linguistic interest in considering gestures a relevant topic – or a relevant feature of language. It then shows that a new field of gesture research has emerged over the past decades, which regards gesture and speech as inherently intertwined. We have attempted to systematize the findings regarding the nature of gestures and their relation to language in use according to the four aspects currently most widely researched: 1) form and meaning of gestures, 2) gestures and their relation to utterance formation, 3) gestures, language, and cognition, and 4) gestures as a communicative resource in interaction and discourse. In doing this, an overview of the present state of the art of research on gesture as part of spoken language is presented. The chapter is complemented by an encompassing bibliography of current research on gestures and speech from a linguistic perspective.
1. Gestures as part of spoken language – a sketch of historical perspectives

Regarding gestures as part of spoken language or even as a language in itself reaches far back into the Western history of thought. In the tradition of Rhetoric, gestures were considered a major part of the actio, the delivery of a speech. At the height of classical Rhetoric, Quintilian developed a detailed understanding of the communicative functions of co-verbal gestures. Quintilian stands out among other rhetoricians by considering the actio, the bodily performance accompanying a speech, a major aspect of an orator's performance on stage. In his treatise on the education of the orator, the Institutio oratoria (Quintilian, Institutionis oratoriae XI 3, 92–106), he distinguished gestures relating to the parts of a speech (beginning, narration, debate, accusation, conviction), gestures expressing speech-acts (accusing, denouncing, promising, advising, praising, affirming, questioning), gestures expressing affective stance and emotions (certainty, sharpness of accusation, emphasis, affirmation, modesty, anxiety, admiration, indignation, fear, remorse, rage, refusal) and gestures which relate to the structure of the speech itself (presenting, structuring, and emphasizing the speech, enumerating evidence, and discriminating different aspects mentioned verbally) (for further detail see Müller 1998: 33–43; Dutsch this volume; and Graf 1994). Barnett characterizes Quintilian's account of gestures as an art in itself and as "a part of a double language system consisting of a highly detailed
and sophisticated verbal language together with an equally expressive nonverbal language consisting of gesture, postures and actions." (Barnett 1990: 65) Quintilian was convinced that gestures are a natural language of mankind, and he proposed that they have almost all the expressive qualities of words themselves, as they may point, demand, promise, count, display the size and the amount of concrete and abstract entities, indicate time, and they may even be used as adverbs and pronouns (Quintilian: Institutionis oratoriae XI 3, 87) (see Müller 1998: 35 and Dutsch this volume). As for the hands, without which all action would be crippled and enfeebled, it is scarcely possible to describe the variety of their motions, since they are almost as expressive as words. For other portions of the body merely help the speaker, whereas the hands may almost be said to speak. Do we not use them to demand, promise, summon, dismiss, threaten, supplicate, express aversion or fear, question or deny? Do we not employ them to indicate joy, sorrow, hesitation, confession, penitence, measure, quantity, number and time? Have they not power to excite and prohibit, to express approval, wonder or shame? Do they not take the place of adverbs and pronouns when we point at places and things? In fact, though the peoples and nations of the earth speak a multitude of tongues, they share in common the universal language of the hands. The gestures of which I have thus far spoken are such as naturally proceed from us simultaneously with our words (Quintilian: Institutionis oratoriae XI 3, 85–88).
The idea of gestures as a universal language was present in the Renaissance (Bacon, Bulwer), played a prominent role in the philosophy of the Enlightenment (Condillac, Diderot), and was also discussed in Romanticism (Vico, Herder) (for more detail see Copple this volume; Müller 1998; Wollock 1997, 2002, this volume). Notably, this longstanding recognition of gesture's linguistic properties and their potential for language declined over the course of the 19th and 20th centuries. Treatments like de Jorio's Mimica degli Antichi investigata nel gestire napoletano in the early 19th century (see de Jorio [1832] 2000, with an introduction by Adam Kendon) and Wundt's work on the gestures and signs of Neapolitans, Plains Indians and Deaf people (Wundt 1921) did not inspire scholarly reflection upon gesture as part of language (see also Kendon 2004: chapters 3 and 4). Wollock (this volume) summarizes this development and its implications for contemporary reflections on gestures as follows: Renaissance ideas on gesture foreshadow the 18th century, and to some extent even Romanticism (see Vico, Herder). Important for us today is not so much the literal question whether gesture is a universal language, as the fact that in this period gesture called attention to linguistic processes that are certainly universal – psychophysiological processes common to verbal and nonverbal thought – but that were often overlooked, downplayed, or even denied in 20th-century linguistics. (Wollock this volume)
Within 20th-century linguistics gestures were not considered a relevant topic – or a relevant feature of language. Under the auspices of Saussurean linguistics, hand movements that go along with speech were thrown into the wastebasket of parole, or of language use (Saussure, Bally, and Sechehaye 2001; Albrecht 2007). The idea of language as a social system, langue, underlying all forms of language use was critical in defining and establishing a scholarly discipline of linguistics in Europe. It had an immense
impact on the humanities in Europe. Structuralism became one of the most influential schools of thought in the twentieth century. Such a focus on language as a social system distanced the attention of linguists from those phenomena that are characteristic of language use. This also holds for American structuralism, which was strongly marked by the great challenge of documenting the wealth of unwritten languages of Native America (see Bloomfield 1983; Z. Harris 1951). Notably, those were languages without a writing system, spoken languages, characterized by their lack of literacy: un-written languages. Interestingly enough, American linguistic anthropology focused on the de-contextualized systematic features of these languages, not on their particular nature as spoken languages. However, given the goal of identifying the grammatical structures or the linguistic system "behind" the spoken words, this appeared to make perfect sense – concepts of emergent grammar were not discussed at the time (Hopper 1998). An exceptional study that did empirically investigate the forms and functions of gestures used in conjunction with speech comes out of American anthropology: the doctoral dissertation of David Efron ([1941] 1972), a student of Franz Boas. Carried out during the Second World War, it was not conceived as a contribution to linguistic questions, but as an empirical study within the "nature-nurture" debate, seeking an empirical answer to the question: Is human behavior shaped by culture or by nature? To counter racist and eugenicist positions, David Efron went out to study the gestural behavior of traditional Eastern-Jewish and Southern Italian immigrants in New York City and compared their style of gesturing with that of the second-generation immigrants. What he found were hybrid gestural forms in the second generation, such as gestures that blended "American" and "Italian" or "Jewish" forms of gesturing. This was taken as an indication of and support for the nurture position, because it showed the influence of culture on shaping human communicative behavior. Efron's meticulous semiotic analysis of gestural forms was a prerequisite to actually defining and identifying these differences. But the study did not inspire scholars of language to look at gesture. His work and his classification system of gestures were later made widely known by the psychologist Paul Ekman and became a standard reference system in 20th-century research on bodily behavior more generally and non-verbal communication research in particular (Ekman and Friesen 1969). With Chomskyan linguistics taking over the lead in the middle of the twentieth century, linguistics made a turn towards cognitive science: the universal cognitive competence of humans for acquiring language came to be the topic of linguistics proper. "Language performance", and accordingly gesture, was not regarded as a relevant topic of inquiry within the field of linguistics so defined (Chomsky 1965; R. Harris 1995). At roughly the same time, however, there were some isolated attempts to analyze body movements from the point of view of structuralism: Ray Birdwhistell (1970) – a linguistic anthropologist – put forward an account of facial expressions, postures and hand movements for which he coined the term "Kinesics".
He developed a structuralist framework for the description of body-movements and proposed that units of gestures are very much structured like linguistic units: The isolation of gestures and the attempt to understand them led to the most important findings of kinesic research. This original study of gestures gave the first indication that kinesic structure is parallel to language structure. By the study of gestures in context, it
became clear that the kinesic system has forms which are astonishingly like words in a language. (Birdwhistell 1970: 80)
In the sixties, the anthropologist and linguist Kenneth Lee Pike put forward a theoretical framework for language as part of human behavior (Pike 1967). Extending the phonology/phonetics or phoneme/phone distinction of structural linguistics to human behavior in general, he introduced the differentiation between emic and etic aspects of human behavior. He proposed that emic aspects of human behavior concern meaning and etic aspects of behavior address their material characteristics. Pike even argued that these behavioral units could form what nowadays would be termed a "multimodal syntactic structure", namely a sentence in which verbal and gestural forms are systematically integrated. (For a detailed account of Pike's contribution to a multimodal grammar see Fricke 2012, this volume). A pioneer in researching bodily behavior with speech is Adam Kendon (trained in biology and in particular ethology). In the sixties and seventies he researched the patterns of interactive bodily behavior (Kendon, Harris, and Key 1975; Kendon 1990). His analysis of the behavioral units and sequencing of greetings (Kendon and Ferber 1973) revealed that communicative bodily actions are highly structured, meaningful and closely integrated with speech. In the early seventies Kendon provided the first systematic micro-analysis of gestural and vocal units of expression. At that time film recordings became available for scientific research, and the possibility of inspecting these sequences again and again made it possible to discover the fine-grained microstructures of human bodily and verbal behavior. An important outcome of this development in technology was the first micro-analysis of speech and body motion. Kendon showed that units of speech and units of body motion possess a similar hierarchical structure: larger units of movement go along with larger units of speech and smaller units of movement parallel smaller portions of speech (Kendon 1972). It was only about 10 years later that he explicitly formulated the idea of gesture and language as being two sides of one process of utterance (Kendon 1980; for his current view see Kendon 2004, this volume). In the seventies, and also for most of the eighties, linguistics continued to be dominated by generative theory (then reformulated as Government and Binding Theory; see Chomsky 1981). Psychology adopted the concept of non-verbal communication (Argyle 1975; Feldmann and Rimé 1991; Hinde 1972; Ruesch and Kees 1970; Scherer and Ekman 1982; Watzlawick, Bavelas, and Jackson 1967), and gestures as part of speech were regarded as only marginally relevant for such a field of research. Instead, those body movements not related to speech and with functions different from language attracted the most interest. One consequence of this was a substantial increase in research on facial expression (see Ekman and Rosenberg 1997 for an overview). Such a scholarly climate made it difficult to pursue a linguistic perspective on gestures and language throughout the eighties. However, there were other positions: David McNeill (1979) – coming from psychology and linguistics – proposed a theory of language and gesture in which both modalities form one integrated system. Already at that time McNeill and Kendon concentrated on gestures as movements of the hands and on their particular relationship to speech. In contrast to nonverbal communication scholars, they were interested in the movement of the hands because it exhibits a particularly tight interrelatedness with language.
McNeill's idea of gesture as being part of the verbal utterance challenged the distinction of verbal versus non-verbal behavior, which characterized the mainstream research on nonverbal communication at the time. It even triggered a public debate carried out in several articles in the journal Psychological Review. McNeill challenged the psycholinguistic belief in gestures as part of the NON-verbal dimensions of communication by raising the question: So you think gestures are non-verbal? (McNeill 1985). The participants in this debate were Brian Butterworth, Pierre Feyereisen, and Uri Hadar on the one hand and David McNeill on the other (Butterworth and Hadar 1989; Feyereisen 1987; McNeill 1985, 1987, 1989). While McNeill criticized the idea of gestures as being non-verbal, Butterworth and Hadar presented psycholinguistic evidence for their assumption of gestures as something different from speech (see Hadar this volume). In 1992 McNeill published his integrated theory of gestures and speech in what became a landmark book for a psychological and linguistic approach to gesture and speech, "Hand and Mind: What Gestures Reveal about Thought" (McNeill 1992). In this book McNeill develops his theory of language and gesture. He proposes that gesture and speech are different but integrated facets of language: gesture as imagistic, holistic, synthetic and language as arbitrary, analytic, and linear. These two sides of language rest on two different modes of thought – one imagistic, the other propositional – and McNeill considers the dialectic tension between the two modes of thought as propelling thought and communication (for more detail, see McNeill this volume). With its core idea of gestures as "'window' onto thinking" (McNeill and Duncan 2000), as revelatory of imagistic forms of thought, the book matched a turn towards cognitive science in the humanities and raised a great deal of interest in psychological research on language and cognition (for an overview see McNeill 2000). But for linguistics proper, gesture remained a phenomenon at the margins of interest – if at all. This holds even for cognitive linguistics [including cognitive grammar (Langacker 1987), metaphor theory (Ortony 1993), or blending theory (Fauconnier and Turner 2002)], which developed a counter-position to the modularism of generativism (including its further developments as Government and Binding Theory and the minimalist program, see Chomsky 1981, 1992). Cognitive linguistics argues that language rests on general cognitive principles and capacities, challenging the generativist position of linguistic competence as a particular and cognitively distinct module. A cognitive linguistic position quite naturally opens up the gate for a concept of language which is not restricted to the oral mode alone and which allows for an integration of different modalities within one process of utterance (see Cienki 2010, 2012). Despite this theoretical pathway, cognitive linguists have for the most part relied on the analysis of invented sentences – not on data from language use, which might have attracted their attention to the work gestures do in conjunction with speech.
But over the past two decades the situation has changed, and we find an increasing number of publications within cognitive linguistics that do consider gestures as part of linguistic analysis (examples are Cienki 1998a, 1998b; Cienki and Müller 2008b; McNeill and Duncan 2000; Mittelberg 2006, 2010a, 2010b; Müller 1998; Müller and Tag 2010; Sweetser 1998; Núñez and Sweetser 2006). Moreover, outside of Cognitive Linguistics, too, an increasing number of publications with a linguistic perspective on gestures – or at least a perspective that is compatible with a linguistic analysis of gestures – has appeared. In 2004 Kendon's monograph "Gesture: Visible Action as Utterance" was published, presenting an encompassing account of the manifold ways in
which gestures can become part of verbal utterances, including a detailed historical section on gesture classifications as well as a discussion of what makes a movement of the hands a gesture. Other books relevant to a linguistic perspective on gestures include: Calbris' "Elements of meaning in gesture" (2011), Enfield's "The anatomy of meaning: speech, gesture, and composite utterances" (2009), Fricke's "Origo, Geste und Raum: Lokaldeixis im Deutschen" (2007) and "Grammatik multimodal" (2012), McNeill's edited volume "Language and Gesture" (2000) and his book "Gesture and Thought" (2005), Müller's "Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich" (1998), Müller and Posner's edited volume "The semantics and pragmatics of everyday gestures" (2004), and Streeck's "Gesturecraft: The manu-facture of meaning" (2009). After the turn of the century, more and more scholars have begun to look at gestures from a linguistic perspective, focusing on a range of different aspects. In the following sections we will present and discuss the present state of the art of a linguistic view on gestures in more detail. We will concentrate on four main areas of research: form and meaning of gestures; gestures and their relation to utterance formation; gestures, language, and cognition; and gestures as a dynamic communicative resource in discourse.
2. Gestures as part of language – a sketch of the present state of the art

2.1. Form and meaning of gestures

"If we explain the meaning of a gesture we explain the form" (McNeill 1992: 23) is how McNeill sums up his account of the distinct nature of meaning in gestures. There is no "duality of patterning" (Hockett 1958) or "double articulation" (Martinet 1960/1963), there are "no standards of form", no two different systems on the level of form and meaning as in language, where phonemes distinguish meaning and morphemes carry meaning (McNeill 1992: 22; Saussure, Bally, and Sechehaye 2001). In language two distinct systems are matched onto each other by convention and the relation between form (sound) and meaning is characterized by an arbitrary mapping (Saussure, Bally, and Sechehaye 2001). In gestures, on the contrary, the meaning resides in the form: "Kinesic form is not an independent level as sound is an independent level of language. Kinesic form in a gesture is determined by its meaning." (McNeill 1992: 23) Gestures are considered fundamentally different: they are conceived of as motivated signs, created on the spot, that convey meaning in a global-synthetic way. While in language "parts (the words) are combined to create a whole (a sentence); the direction [being] from part to whole", in gestures the meaning of the parts is determined by the whole, the gestalt of the form(s) (in this sense they are considered "global") (McNeill 1992: 19). Gestures convey meaning in a synthetic way: "one gesture can combine many meanings (it is synthetic)" (McNeill 1992: 19). McNeill illustrates his view on the global-synthetic nature of meaning in gestures with an example of a gesture in which wiggling fingers depict a character running along a wire: This gesture-symbol is global in that the whole is not composed out of separately meaningful parts. Rather, the parts gain meaning because of the meaning of the whole. The
wiggling fingers mean running only because we know that the gesture, as a whole, depicts someone running. It's not that a gesture depicting someone running was composed out of separately meaningful parts: wiggling + motion, for instance. The gesture also is synthetic. It combines different meaning elements. The segments of the utterance, "he + running + along the wire," were combined in the gesture into a single depiction of Sylvester-running-along-the-wire. (McNeill 1992: 20–21)
The ways in which gestures convey meaning across sequences of gestures are furthermore characterized as being "non-combinatoric" and "non-hierarchical": "two gestures produced together don't combine to form a larger, more complex gesture. There is no hierarchical structure of gestures made out of other gestures" (McNeill 1992: 21), and even if several gestures are combined this does not result in a more complex gesture: "Even […] several gestures don't combine into a more complex gesture. Each gesture depicts the content from a different angle, bringing out a different aspect or temporal phase, and each is a complete expression of meaning by itself." (McNeill 1992: 21) For McNeill's theory of language and gesture, this sharp distinction between the ways in which meaning is "carried" in language and how it is "conveyed" in gesture is of core importance. McNeill uses a structuralist account of language as a system of arbitrary signs as a contrastive frame that brings out the particular articulatory properties of gestures and that maximizes the differences between the two modes of expression. This sharp distinction is a prerequisite for his theory of language, gesture and thought, in which gestures are considered to reveal a fundamentally different type of thought, one that is imagistic, global-synthetic and holistic, whereas language forces thought into the linearity of speech-sounds and the arbitrariness of linguistic signs. It is also constitutive for his understanding of thinking and speaking as a dynamic process propelled by an imagery-language dialectic, whose basic unit is the so-called "Growth Point" (see McNeill 1992: 219–239 and 2005: 92–97; McNeill and Duncan 2000): "It is this unstable combination of opposites that fuels thought and speech." (McNeill 2005: 92) Notably, in his 2005 book McNeill brings in a phenomenological turn, now taking a non-dualistic point of view with regard to the relation of gesture and mind. Rather than assuming that gestures reveal inner images, he now argues, with reference to the work of Merleau-Ponty (1962), that gestures do not represent meaning but "inhabit" it (McNeill 2005: 91–92). Drawing on Heidegger, he proposes the H-Model of gestures, suggesting that: "To the speaker, gesture and speech are not only 'messages' or communications, but are a way of cognitively existing, of cognitively being, at the moment of speaking." (McNeill 2005: 99) This new concept of gestural meaning and its relation to form is brought together in the concept of gestures as "material carriers" (inspired by Vygotsky 1986), and advances a phenomenological understanding of the meaning of gestures: A material carrier […] is the embodiment of meaning in a concrete enactment or material experience. […] The concept implies that the gesture, the actual motion of the gesture itself, is a dimension of meaning. Such is possible if the gesture is the very image; not an "expression" or "representation" of it, but is it. (McNeill 2005: 98, highlighting in the original)
McNeill's (1992) book set the stage for a view on the meaning of gestures as holistic and "global-synthetic", and it inspired a wealth of research in the domain of language and thought (see McNeill 2000; Parrill 2008; Parrill and Sweetser 2002, 2004; for a discussion see Kendon 2008).
In his 2009 book "Gesturecraft: The manu-facture of meaning" Jürgen Streeck proposes a praxeological account of the meaning of gestures, which is also strongly informed by phenomenology. However, Streeck focuses on the situatedness of meaning making in mundane practices of the world: The point of departure for the research reported in this book, thus, are human beings in their daily activities. The perspective on gesture is informed by the work of phenomenological philosophers (Heidegger 1962; Merleau-Ponty 1962; Polanyi 1958) who have argued that we must understand human understanding by finding it, in the first place, in concrete practical, physical activity in the world, as well as by more recent work in anthropology […], philosophy and linguistics […], educational psychology […], and sociology, which is defined by the view that the human mind – and the symbols that it relies upon – are embodied. (Streeck 2009: 6, highlighting in the original)
Consequently, Streeck conceives of the meaning of gestures in terms of particular situational settings, e.g. "gesture ecologies" such as making sense of the world at hand, disclosing the world within sight, depiction, thinking by hand, displaying communicative action, and ordering and mediating transactions (see Streeck 2009: 7–11). However, he takes the form of gestures – their physiological properties and their character as an instrument, a technique for dealing with the world at hand – as the point of departure for an analysis of gestural meaning within those different ecologies (Streeck 2009: chapter 3). In this regard the form and meaning of gestures are derived from practical actions of the hands and their mode of being in a particular ecological context (see Streeck this volume). While Kendon (2004) also regards gestures as forms of action, notably visible actions, he suggests that gestures must be considered, in the Goffmanian sense, as moves in an interaction (Kendon 2004: 7). As for the meaning of gestures in general, he takes it to be an achievement of the speaker, resulting from a rhetorical goal which motivates the meaning of both speech and gestures and drives the construction of "gesture-speech ensembles": We suggest, […], that the conjunction of the stroke with the informational centre of the spoken phrase is something the speaker achieves. In creating an utterance that uses both modes of expression, the speaker creates an ensemble in which gesture and speech are employed together as partners in a single rhetorical enterprise. (Kendon 2004: 127)
Kendon underlines the flexible, ongoing adjustment of the two modes of expression in this intertwined process of verbo-gestural meaning construction in an ongoing conversation (Kendon 2004: 127). Enfield develops a concept of the meaning of gestures which starts from the perspective of the interpreter (taking a Peircean approach in this regard). In his book on the "Anatomy of meaning: speech, gesture, and composite utterances" (Enfield 2009: IX) he brings together semiotic (Peirce), pragmatic (Grice, Levinson) and interactive (Goffman, Sacks) approaches to the meaning of gestures and language as used in an interaction. However, Enfield is also in line with Kendon's and Streeck's take on the meaning of gesture as situated interactional moves by conceiving of gestures as elements of composite utterances. His proposal opens up further important facets of Kendon's gesture-speech ensembles: "composite utterances [are defined] as a communicative move that incorporates multiple signs of multiple types." (Enfield 2009: 15) The meaning of such
a composite utterance is then a matter of interpreting the co-occurring signs based on a pragmatic heuristic: "Composite utterances are interpreted through the recognition and bringing together of these multiple signs under a pragmatic unity heuristic or co-relevance principle, i.e. an interpreter's steadfast presumption of pragmatic unity despite semiotic complexity." (Enfield 2009: 15, see Enfield this volume) In parallel to those holistic and global approaches to the meaning of gestures, a strand of gesture research has developed which suggests that gestural meaning is to a certain degree decomposable into form features (Calbris 1990, 2003, 2008, 2011, this volume; Kendon 2004; Mittelberg 2006, 2010a; Müller 2004, 2010b; Webb 1996, 1998, inter alia). However, what appear to be opposing views at first sight turn out to be assumptions addressing different types of gestures: while McNeill's characterization of gestures as being global-synthetic and holistic applies to spontaneously created singular gestures, the proposal that gestures are decomposable into form features, as originally proposed by Calbris (1990, this volume), applies to gestures that are either fully conventionalized (e.g. emblematic gestures) or in a process of conventionalization (e.g. recurrent gestures; for the distinction of singular versus recurrent gestures see Müller 2010b, submitted; Müller, Bressem, and Ladewig this volume). For instance, Kendon's concept of a gesture family, which consists of a group of gestures sharing a common formational core and semantic theme, is based on the idea of a core set of form features that make up the core meaning of a gesture. In the gesture families of the open hand, for instance, the critical formational feature is the shared hand shape. "In both of these families the hand shape is 'open.' That is, the hand is held with all digits extended and more or less adducted (they are not 'spread')." (Kendon 2004: 248) Gestures belonging to the Open Hand Prone family share the formational core of the forearm being directed downwards. Within the Open Hand Supine family, in contrast, the forearm is always directed upwards. Differences within one family are marked with regard to the other formational features: movement and orientation of the palm. The Open Hand Prone family, for instance, shows two form variants: in Vertical Palm (VP) gestures the palm of the hand is directed away from the speaker's body, and they indicate "an intention to halt […] a current line of action" (Kendon 2004: 251). Horizontal Palm (ZP) gestures face downward and "obliquely away" (Kendon 2004: 251), and they are always moved in a horizontal lateral movement. Those gestures were observed in contexts which involved "a reference to some line of action that is being suspended, interrupted or cut off." (Kendon 2004: 255) Such "simultaneous structures of gestures" (Müller 2010b, submitted; Müller, Bressem, and Ladewig this volume; Müller et al. 2005) can be described systematically by adapting the four parameters of sign language: "hand shape", "orientation" of the palm, "movement", and "position" in gesture space (Battison 1974; Stokoe 1960; see also Bressem this volume; Bressem, Ladewig, and Müller this volume).
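As a purely illustrative sketch of how such a four-parameter description might be recorded in practice – the attribute names, string values, and the toy family test below are our own assumptions, not the coding schemes developed by Kendon, Bressem, or Ladewig – one could annotate gesture forms along these lines:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class GestureForm:
        """A single gesture annotated with the four form parameters adapted from sign language."""
        hand_shape: str   # e.g. "open hand", "fist", "index finger extended"
        orientation: str  # orientation of the palm, e.g. "palm vertical, facing away"
        movement: str     # e.g. "horizontal lateral movement", "downward stroke"
        position: str     # location in gesture space, e.g. "centre-centre"

    def open_hand_prone_variant(g: GestureForm) -> Optional[str]:
        """Toy decision rule for the two Open Hand Prone variants discussed above.
        The keyword matching is illustrative only; real annotation relies on trained coders."""
        if g.hand_shape != "open hand":
            return None
        if "vertical" in g.orientation and "away" in g.orientation:
            return "Vertical Palm (VP): halting a current line of action"
        if "down" in g.orientation and "lateral" in g.movement:
            return "Horizontal Palm (ZP): a line of action suspended or cut off"
        return None

    example = GestureForm(
        hand_shape="open hand",
        orientation="palm facing down, obliquely away",
        movement="horizontal lateral movement",
        position="centre-centre",
    )
    print(open_hand_prone_variant(example))  # -> Horizontal Palm (ZP): ...

Recording forms in this way also makes the distribution analyses mentioned below (correlations between form features and contexts of use) straightforward to tabulate.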
The idea of decomposing gestures into their meaningful segments was advanced in particular by studies on gestures that pair a recurrent gestural form with a particular semantic theme (Calbris 2003, 2008; Fricke 2010; Harrison 2009, 2010; Ladewig 2010, 2011; Müller 2004, 2010b; Kendon 2004; Bressem, Müller, and Fricke in preparation; Teßendorf 2008). Distribution analyses across different contexts of use revealed that such "recurrent gestures", as they were termed (Ladewig 2007, 2010, 2011; Müller 2010b), often show a variation of meaning that becomes manifest in a correlation between form and context of use. This means that speakers draw upon a repertoire
of gestural forms, which they use recurrently. As McNeill suggests for the Palm Up Open Hand (2005: 48–53), recurrent gestures are in a process of becoming conventionalized; their form-meaning relation is motivated and the motivation is still transparent, but given their recurrent usages in a limited set of contexts, they appear to be best placed somewhere in the middle of a continuum between spontaneously created gestures on the one hand and fully conventionalized emblems on the other hand (for different gesture continua see McNeill 2000: 2–7). Based on a discussion of recurrent gestural forms and recurrent gestural meanings, studies have furthermore documented that gestures can become semanticized as well as grammaticalized. Hence culture-specific gestures can be deployed as lexical or grammatical elements in co-occurrence with speech, or they may even enter sign linguistic systems as lexemes or grammatical morphemes. Accordingly, gestures have, for instance, been identified as markers of negation (Harrison 2009, 2010; Müller and Speckmann 2002; Bressem, Müller, and Fricke in preparation), of Aktionsart (Becker et al. 2011; Bressem 2012; Ladewig and Bressem forthcoming; Müller 2000), of a topic-comment structure (Kendon 1995; Seyfeddinipur 2004), or as plural markers (Bressem 2012). Furthermore, pathways of grammaticalization from gesture to sign have been traced, for instance, for classifier constructions (Pfau and Steinbach 2006; Müller 2009), tense and modality (Janzen and Shaffer 2002; Wilcox 2004; Wilcox and Rossini 2010; Wilcox and Wilcox 1995), topic marking (Janzen and Shaffer 2002), or for the development of pronouns and auxiliaries (Pfau and Steinbach 2006, 2011).
2.2. Gestures and utterance formation

The interplay of gestures and speech in forming utterances has been a subject of investigation in gesture studies from early on. Researched facets of this interplay concern the correlation of bodily movements with patterns of the speech stream, the syntactic integration of gestures into spoken utterances, and the distribution of semantic information over both modalities. Based on observations that the speaker's body dances "synchronously with the articulatory segmentation of his speech" (Condon and Ogston 1967: 234), gesture scholars studied in detail the temporal alignment of gestural and spoken units. Amongst them are studies on parallel hierarchical structures in spoken discourse and in accompanying body movements (e.g., Kendon 1972, 1980) or the temporal alignment between units of body movements and units of speech (Condon and Ogston 1966, 1967; Kendon 1972, 1980; McNeill et al. 2002). In particular, the correlation of kinesic and prosodic stress (e.g., Birdwhistell 1970; Loehr 2004, 2007; McClave 1991, 1994; Scheflen 1973; Tuite 1993) as well as of intonation and movements of particular body parts (Birdwhistell 1970; Scheflen 1973; Bolinger 1983; McClave 2000) was examined. Evidence for the tight link between both modalities has also been gained by showing that the whole "gesture-speech ensemble" may be modified in order to meet the necessities of articulating both modalities at the same time. Both speech and gesture may be repeated, revised or adapted to meet the structure of the other modality (Kendon 1983, 2004; see also Seyfeddinipur 2006). The tight interrelation of both modalities, gesture and speech, finds its expression in the often-quoted remarks that "speech and movement appear together as manifestations of the same process of utterance" (Kendon 1980: 208) and "arise from a single process of utterance formation" (McNeill 1992: 30).
Several studies have also shown that gestures and speech are intertwined on the level of syntax and semantics, each providing necessary information to the formation of an utterance. Gestures are obligatory elements for the use of particular verbal deictic expressions such as so, here or there (e.g., de Ruiter 2000; Fricke 2007; Kita 2003; Streeck 2002; Stukenbrock 2008; inter alia) and may even differ in gestural form depending on the intended reference object of the deictic expression (e.g., Fricke 2007; Kendon 2004). Gestures also stand in close relation with aspects of verbal negation (e.g., Bressem, Müller, and Fricke in preparation; Calbris 1990, 2003, 2008; Harrison 2009; Kendon 2003, 2004; Streeck 2009; inter alia) and may go along with different types of negation, such as negative particles, morphological negation, and implicit negation, as well as with the grammatical aspects of scope and node of negation (e.g., Harrison 2009, 2010; Lapaire 2006). Furthermore, gestures seem to be closely related with grammatical categories of the verbal utterance, so that iconic gestures, for instance, often correlate with nouns, verbs, and adjectives (e.g., Hadar and Krauss 1999; Sowa 2005; Bergmann, Aksu, and Kopp 2011). Various scholars have furthermore argued that gestures can be integrated into the syntactic structure of an utterance (e.g., Andrén 2010; Bohle 2007; Clark 1996; Clark and Gerrig 1990; Goodwin 1986, 2007; Enfield 2009; Langacker 2008; McNeill 2005, 2007; Müller and Tag 2010; Slama-Cazacu 1976; Streeck 1988, 1993, 2002, 2009; Wilcox 2002). Recent empirical studies have expanded those characterizations and suggest that gestures can take over syntactic functions either by accompanying or by substituting speech. Fricke (2012), for instance, distinguishes two forms of integrability: gestures may be integrated by positioning, that is either through occupying a syntactic gap or through temporal overlap; or they may be integrated cataphorically, that is by using the deictic expressions son or solch ('such a'). These deictic expressions demand "a qualitative description that can be instantiated gesturally" (Fricke 2012, our translation). In doing so, gestures expand a verbal noun phrase and serve as an attribute. This phenomenon, also referred to as "multimodal attribution", furnishes evidence for the structural and functional integration of gestures into spoken language and laid the groundwork for developing the framework of a "multimodal grammar" (Fricke 2012). Bressem (2012) and Ladewig (2012) expanded the notion of a multimodal grammar by showing that gestures either accompany or substitute nouns and verbs of spoken utterances. Bressem could show that gestural repetitions can serve an attributive function when they co-occur with noun phrases, and an adverbial function in cases of temporal overlap with verb phrases. The potential of gestures to take over syntactic functions by specifying the shape and size of objects or depicting the manner of an action is in those cases not bound to a cataphoric integration of the gestures into the verbal utterance by explicit linguistic devices (see Fricke 2012), but rather seems to be based on the temporal, semantic, and syntactic overlap of the gestures with speech.
In her study on gestures in syntactic gaps exposed by interrupted utterances, Ladewig could show that gestures do not adopt all kinds of syntactic functions when substituting speech, as was assumed by some authors (e.g., Slama-Cazacu 1976). On the contrary, when replacing speech, gestures preferably fulfill the function of objects and predicates. Based on her study she argued for a "continuum of integrability" (Ladewig 2012: 183) in which the link between gesture and speech can be conceived of as being of varying strength depending on three aspects: the type of integration, the distribution of information over the different modalities, and the order in which speech and gesture are deployed.
In her study, she furthermore found that referential gestures (e.g., gestures referring to concrete or abstract actions, entities, events, or properties) are the most frequently used type of gesture with a substitutive function. This finding questions the widely accepted assumption that the kinds of gestures typically used to replace speech are emblematic or pantomimic gestures. Gestures not only take over syntactic functions when forming multimodal utterances, but they also contribute to the semantics of an utterance. Gestures may replace information, illustrate and emphasize what is being uttered verbally, soften or slightly modify the meaning expressed in speech, or even create a discrepancy between the gestural and verbal meaning (see Bavelas, Kenwood, and Phillips 2002; Bergmann, Aksu, and Kopp 2011; Bressem 2012; Calbris 1990; Engle 2000; Freedman 1977; Fricke 2012; Gut et al. 2002; Kendon 1987, 1998, 2004; Ladewig 2011, 2012; McNeill 1992, 2005; Scherer 1979; Slama-Cazacu 1976). In exploring the semantic relation of gesture and speech, linguistic studies have offered different approaches. Semantic information conveyed in both modalities can be described in terms of image-schematic structures (e.g., Cienki 1998b, 2005; Ladewig 2010, 2011, 2012; Mittelberg 2006, 2010a; Williams 2008) or in terms of semantic features (e.g., Beattie and Shovelton 1999, 2007; Bergmann, Aksu, and Kopp 2011; Bressem 2012; Kopp, Bergmann, and Wachsmuth 2008; Ladewig 2012). Together with the temporal position of gesture and speech, the semantic relation of both modalities as well as the semantic function of gestures can be captured: if gestures double the information expressed in speech, their relation can be described as co-expressive (McNeill 1992, 2005) or redundant (e.g., Gut et al. 2002). If they add information to that expressed in speech, the relation between both can be described as "complementary" or "supplementary". In these cases gestures modify information expressed in speech (e.g., Andrén 2010; Birdwhistell 1970; Bergmann, Aksu, and Kopp 2011; Bressem 2012; Kendon 1986, 2004; Freedman 1977; Fricke 2012; Scherer 1979). Thereby both modalities are considered as being "enriched" by their co-occurrence and by the context in which they are embedded (Enfield 2009, this volume; see also Bressem 2012; Ladewig 2012). At the same time, the range of a gesture's possible meanings is reduced, as the spoken modality provides necessary information to single out a reference object (Ladewig 2012). In doing so, gestures "are not limited to primarily depicting specific situations or individuals" but "can be used to depict types or kinds of things, like prototypes" (Engle 2000: 39). Gestures may single out exemplar interpretations in speech by picking out a specific individual from a collection mentioned in speech (Engle 2000) and thus refer to a meaning or concept associated with a word, that is a prototype, or to an intended object of reference. By being interpretant- or object-related (Fricke 2007, 2012, based on Peirce 1931), gestures are not always and only tied to the representation of referents in the real world, but are also capable of seemingly contradicting the intended object of reference (see Fricke 2012).
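The relational labels just introduced are sometimes operationalized by comparing which semantic features each modality expresses and whether the two overlap in time. The small sketch below is our own simplification under that assumption – it is not a published coding scheme, and actual analyses weigh form, timing, and context in far more detail:

    # Crude heuristic for the gesture-speech relations discussed above:
    # compare the sets of semantic features expressed in each modality.
    def classify_relation(speech_features: set, gesture_features: set) -> str:
        if not speech_features and gesture_features:
            return "substitutive (gesture replaces speech)"
        if gesture_features <= speech_features:
            return "redundant / co-expressive"
        return "complementary / supplementary (gesture adds information)"

    # Example: speech encodes only the rolling motion, the gesture adds a downward path.
    print(classify_relation({"motion:roll"}, {"motion:roll", "path:down"}))
    # -> complementary / supplementary (gesture adds information)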
Gestures that replace speech fulfill a substitutive function and can form an utterance on their own or provide the semantic center of a multimodal utterance (e.g., Bohle 2007; Clark 1996; Clark and Gerrig 1990; McNeill 2005, 2007; Müller and Tag 2010; Slama-Cazacu 1976; Wilcox 2002). The unit created by both modalities has been referred to as a "gesture-speech ensemble" (Kendon 2004), a "multimodal utterance" (Goodwin 2006; Wachsmuth 1999), a "composite utterance" (Enfield 2009, this volume), a "composite signal" (Clark 1996; Clark and Gerrig 1990; Engle 1998), a "multimodal package" (Streeck this volume), and a "hybrid utterance" (Goodwin 2007).
2.3. Gestures, language and cognition

The cognitive foundation of gestures has attracted growing interest in the field of gesture studies since McNeill proposed that gestures offer a "window onto thinking" (McNeill and Duncan 2000: 143; see also McNeill 1992) (for an overview see Cienki 2010). McNeill's 1992 book "Hand and Mind: What Gestures Reveal about Thought" had a major impact on gesture research from a psychological perspective. It was also of paramount importance for raising the interest of cognitive linguists and metaphor scholars (for an overview see Cienki this volume). Notably, McNeill distinguishes in this book iconic from metaphoric gestures – opening up a path to gesture for scholars interested in metaphor research in the early nineties of the past century. Cienki (1998a) and Sweetser (1998) conducted the first studies taking the analysis of metaphoric gestures into cognitive linguistics. Many more studies followed which addressed co-verbal gestures' relation to human cognition and have hitherto taken gesture as an indication for the cognitive linguistic claim that metaphor and metonymy should be regarded as general cognitive processes – or, to put it in Raymond Gibbs' words, that the mind makes use of poetic processes (see Gibbs 1994): […] the traditional view of mind is mistaken, because human cognition is fundamentally shaped by various poetic or figurative processes. Metaphor, metonymy, irony, and other tropes are not linguistic distortions of literal mental thought but constitute basic schemes by which people conceptualize their experience and the external world. (Gibbs 1994: 3)
Research on gestures in relation to verbal metaphoric expressions has made even more specific claims by proposing that gestures used in conjunction with speech may form multimodal metaphors. In doing this, the gesture part of the metaphor very frequently embodies the experiential source domain of the verbalized metaphoric expression (e.g., Calbris 1990, 2003, 2011; Cienki 1998a, 2008; Cienki and Müller 2008b; Kappelhoff and Müller 2011; McNeill 1992; McNeill and Levy 1982; Müller 1998, 2004, 2008a, 2008b; Müller and Cienki 2009; Núñez and Sweetser 2006; Sweetser 1998; Webb 1998; a state-of-the-art overview is given in Cienki and Müller 2008a). What is striking about the studies on metaphor, gesture, and speech is the variable relation of both modalities in expressing metaphoricity: metaphoricity can be expressed either monomodally, that is in speech or gesture, or multimodally, that is in both speech and gesture (Cienki 2008; Cienki and Müller 2008b; Müller and Cienki 2009). The observed distribution of metaphoric meaning across the different modalities led to an enhanced understanding of the "modality-independent nature of metaphoricity" (Müller and Cienki 2009: 321; see also Müller 2008a, 2008b). It has been suggested that metaphors are clearly delimited, countable units – apt for statistical analysis in corpus linguistics – yet when metaphors are studied in the context of multimodal discourse, that is, as a phenomenon of use, it turns out that very often they are not bound to single lexical items but rather evolve over time. This points to an understanding of metaphoricity as a process rather than a product, a process which can evolve dynamically over time in an interaction, through speech, gesture, and other modalities (Kappelhoff and Müller 2011; Kolter et al. 2012). In those multimodally orchestrated interactions, metaphoric gestures in conjunction with speech are used as a foregrounding strategy which activates the metaphoricity of sleeping metaphors (i.e., so-called "dead" metaphors; see Müller 2008a, 2008b and Müller and Tag 2010 for an extended version of this argument).
Also the notion of conceptual metonymy (e.g., Gibbs 1994; Lakoff and Johnson 1980) has recently been receiving increasing attention in the field of gesture studies (Ishino 2001; Mittelberg 2006, 2008, 2010a, 2010b; Müller 1998, 2004, 2009). It is assumed to play a major role in gestural sign formation. Mittelberg (2006, 2008, 2010b), for instance, proposes that observers of gestures follow a metonymic path from a gesture to infer a conceived object. She suggests "that accounting for metonymy in gesture may illuminate links between habitual bodily acts, the abstractive power of the mind, and interpretative/inferential processes." (Mittelberg 2006: 292–293) By introducing the concepts of "internal and external metonymy" (Jakobson and Pomorska 1983; Mittelberg 2006, 2010b), different processes of abstraction involved in the creation and interpretation of gestures are disentangled. Accordingly, in the case of "internal metonymy" an observer of a gesture can infer a whole action or an object of which salient aspects are depicted gesturally. In the case of "external metonymy", objects that are manipulated by the hands can be inferred via a contiguity relation between the object and the hand. Processes of abstraction in gestures are pertinent to the motivation of the form of gestures, and they contribute significantly to the meaning of gestures. They concern the level of pre-conceptual structures such as image schemas (see above), action schemas (e.g., Bressem, Müller and Fricke in preparation; Calbris 2011, this volume; Mittelberg 2008; Teßendorf 2008; Streeck 2008, 2009), mimetic schemas (Zlatev 2002), and motor patterns (Mittelberg 2006; Ladewig and Teßendorf 2008, in preparation). Conceptual blending must be regarded as a higher cognitive process, since it concerns the construction of complex forms of meaning in gestures (Parrill and Sweetser 2004; Sweetser and Parrill volume 2) as well as in signs (Liddell 1998) and in the interactive construction of multi-layered blends, as for instance in the context of a school teacher's explanation of how a clock symbolizes time (Williams 2008). Gestures in language use have also been subject to analyses addressing more specific issues of cognitive grammar: Bressem (2012) on repetitions in gestures, Harrison (2009) on gestures and negation, Ladewig (2012) on the semantic and syntactic integration of gestures, and Wilcox (2004) on the grammaticalization of gestures into signs of signed languages. Núñez (2008) and Streeck (2009) have pointed out that gestures embody what Leonard Talmy terms "fictive motion" (Talmy 1983), thus showing that abstract concepts lexicalized, for instance, as motion verbs (e.g. the road runs along the river) are conceived of as actual body motion. Mittelberg (2008) and Bressem (2012) both found that gestures appear to play a vital role in the establishment of so-called "reference points" (e.g. Langacker 1993). A reference point is a cognitively salient item that provides mental contact with a less salient target. A gestural form serves as a reference point by providing cognitive access to a concrete or abstract object. In so doing, gestures may guide the hearer's attention to particular aspects of a conversation. In reference point relations, gestures may "serve as an index" providing cognitive access to a construed object (Mittelberg 2008: 129).
Broadening the scope from the meaning and functions of single units to cognition and gesture in language as use, Harrison (2009), Andrén (2010), and Bressem (2012) have proposed to conceive of this interplay as multimodal or embodied constructions. Müller (2008a, 2008b), Müller and Tag (2010), and Ladewig (2012) have suggested that gestures display the flow of attention, especially with regard to the foregrounding and activating of metaphoricity (for an extension to audio-visual multimodal metaphor
see Kappelhoff and Müller 2011). Furthermore, Bressem (2012) has suggested that repetitions in gesture follow the attentional flow. A further aspect, which has gained vital interest over the past years in gesture research, concerns the gestural representation of motion events and its relation with grammatical aspects of the verbal utterance and the information distributed across the modalities (e.g., Duncan 2005; Gullberg 2011; Kita 2000; Kita and Özyürek 2002; McNeill 2000; McNeill and Duncan 2000; McNeill and Levy 1982; Müller 1998; Parrill 2008, inter alia). Numerous studies have shown that gestural representations of the same motion event may differ across languages depending on whether the languages are verb- or satellite-framed. Whereas speakers of English, for instance, might express the notion of a ball rolling down a hill in one clause and one gesture, which represents the motion and the direction at the same time, Japanese or Turkish speakers express the same notion in two verbal clauses accompanied by two distinct gestures, one expressing the motion and the other the direction or manner of motion (Kita and Özyürek 2002; Kita et al. 2007). Thus, if meaning is distributed over two spoken clauses, the same meaning is likely to be expressed in two gestures, each expressing meaning similar to that of the spoken clause (Kita et al. 2007). Therefore, gestures reflect information considered relevant for expression (what to say) as well as its linguistic encoding (how to say it), with cross-linguistic consequences. Gestures thus reflect linguistic conceptualization and cross-linguistic differences in such conceptualizations. (Gullberg 2011: 148) To sum up: bringing together gesture studies and cognitive perspectives on language and language as use contributes to the discussion of "embodied cognition", underlining that cognitive processes and conceptual knowledge are deeply rooted in the body's interactions with the world.
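As a schematic illustration of this packaging difference – the renderings and data structures below are our own simplification of the pattern reported in the studies cited above, not transcriptions of their data – the same motion event can be laid out clause by clause:

    # Each pair aligns a spoken clause with the gesture(s) produced with it.
    # Satellite-framed packaging (English-style): manner and path in one clause,
    # accompanied by a single conflated gesture.
    satellite_framed = [
        ("the ball rolls down the hill", ["rotation + downward trajectory (conflated)"]),
    ]

    # Verb-framed packaging (schematic Japanese/Turkish-style rendering):
    # path and manner split over two clauses, each with its own gesture.
    verb_framed = [
        ("the ball descends the hill", ["downward trajectory"]),
        ("(while) rolling", ["rotation"]),
    ]

    def gestures_per_clause(utterance):
        # If meaning is spread over more clauses, more gestures tend to follow.
        return [len(gestures) for _, gestures in utterance]

    print(gestures_per_clause(satellite_framed))  # [1]
    print(gestures_per_clause(verb_framed))       # [1, 1]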
2.4. Gesture as a dynamic communicative resource in the process of meaning construction in discourse

A multimodal perspective on interaction (see Enfield; Gerwing and Bavelas; Hougaard and Rasmussen; Kidwell; Mondada; Streeck, all this volume) not only has implications for how concepts such as utterances (see above), metaphor (see above), conversational pauses (e.g., Bohle 2007; Esposito and Marinaro 2007; Ladewig 2012; Müller and Paul 1999), or turns (e.g., Bohle 2007; Mondada 2007; Schmitt 2005; Streeck and Hartge 1992) need to be conceived of, but it also has fundamental consequences for a theory of language in general. Since gesture should be included in the study of language use, researchers have proposed that language is multimodal in nature, not only on the level of language use but also on the level of the language system (Bressem 2012; Fricke 2012, this volume; Ladewig 2012; Müller 2007, 2008a, 2008b). Furthermore, studies on discourse have revealed a dynamic intertwining of speech and gesture and thereby a dynamic dimension of language. What we see when people speak and gesture is, in McNeill's terms, the product of an online "dialectic between speech and gesture" (McNeill 2005). Both speech and gesture are outcomes of "the moment-by-moment thinking that takes place as one speaks" (McNeill 2005: 15), whereby different modes of thinking are reflected in the two modalities: imagistic thinking in gestures and analytic and categorical thinking in language. Both modes of thinking are seeded in the growth point (McNeill 1992, 2005, this volume). The two modalities, combined in a growth point, are considered equal partners
when creating discourse, participating "in a real-time dialectic during discourse, and thus propel and shape speech and thought as they occur moment to moment" (McNeill 2005: 3).

Another dynamic dimension introduced by McNeill, one "that reveals itself quite naturally when extending one's focus from single gesture-speech units to the unfolding of discourse" (Müller 2007: 109f.), is that of "communicative dynamism" (Firbas 1971). Following Firbas, communicative dynamism is regarded "as the extent to which the message at a given point is pushing the communication forward" (McNeill 1992: 207). McNeill observed that the quantity of gestures as well as the complexity of gestural and spoken expressions "increase at points of topic shift, such as new narrative episodes or new conversational themes" (McNeill and Levy 1993: 365). Furthermore, when speech and gesture synchronize, that is, when they are used in temporal overlap and co-express "a single underlying meaning", the "point of highest communicative dynamism" is reached (McNeill 2007: 20). With this information revealed in speech and gesture, one can trace what a speaker focuses on over the course of a narration: "As the speaker moves between levels and event lines, at any given moment some element is in focus and other elements recede in the background […] The focal element will have the effect of pushing the communication forward" (McNeill 1992: 207).

McNeill's observations on communicative dynamism paved the way for Müller's observations of dynamic meaning activation accompanying the speaker's shifting attentional foci (Müller 2007, 2008a, 2008b; Müller and Tag 2010). Adopting a discourse perspective on the analysis of multimodal communication, she found that meaning (and in particular metaphoric meaning) is not created on the spot but emerges over the flow of the discourse. Through the interplay of the different communicative resources that participants in a conversation have at hand, meaning can be activated to different degrees and become foregrounded for both speaker and recipient. In their analysis of metaphoricity in multimodal communication, Müller and Tag (2010) identified three different foregrounding techniques in which gestures play a significant role. Accordingly, when metaphoricity is expressed in only one modality, that is, in speech or gesture, it is regarded as only minimally activated. When metaphoricity is elaborated or expressed in both speech and gesture, it is considered waking and highly activated. This dynamic foregrounding of different aspects of (metaphoric) meaning goes along with a moving focus of attention (Chafe 1994): "participants in a conversation co-construct an interactively attainable salience structure, that they engage in a process of profiling metaphoric meaning by foregrounding it" (Müller and Tag 2010).

Recent work within the framework of "dynamic multimodal communication" (Müller 2008a, 2008b) focuses on the experiential grounding of metaphoric meaning. More precisely, fine-grained studies of face-to-face communication in therapeutic settings and in the context of dance lessons have revealed that bodily movements as well as their "felt qualities" (Johnson 2005; see also Sheets-Johnstone 1999) provide the affective, embodied grounds of metaphoricity (Kappelhoff and Müller 2011; Kolter et al. 2012).
Metaphoricity can be observed to emerge from bodily movement that is verbalized only at a later point in the conversation, which demonstrates the dynamic dimension of metaphoric meaning. These observations provide empirical evidence of what has been referred to as the "languageing of movement" (Sheets-Johnstone 1999): the translation of body movements into words and, as such, the emergence of meaning from the body.
3. Conclusion

A linguistic perspective on gestures and speech addresses the properties of gestures as a medium of expression, both in conjunction with speech and as a modality with its own particular characteristics. It departs from the assumption that the hands possess the articulatory and functional properties needed to develop a linguistic system (Müller 1998, 2009, this volume; Müller, Bressem, and Ladewig this volume). That the hands can indeed become language is visible in signed languages all over the world. In the early days of sign linguistics the challenge was to prove that signed languages are actually languages, and in order to substantiate this claim, a sharp boundary had to be drawn between gestures and signs. With the increasing recognition of signed languages as full-fledged linguistic systems, however, the stage has opened up for gestures to be studied as precursors of signs (Kendon 2004: chapter 15; Armstrong and Wilcox 2007). This brings us back to claims concerning gestures as the universal language of mankind, especially as Quintilian formulated them. What we see in co-verbal gestures are prerequisites of embodied linguistic structures and patterns that can evolve into language when the oral mode of expression is not a viable form of communication. We would therefore like to suggest that studying gestures and their "grammar" allows us to gain some insight into processes of language evolution within the manual modality.

Despite the lack of reflection on gestures as part of language for most of the twentieth century, a linguistic view on the multimodality of language has by now proven to be a valuable "companion to other present foci, such as psychological or interactional approaches, by expanding the fields of investigations and approaches in gesture studies and thereby contributing to a more thorough understanding of the medium 'gesture' itself as well as the relation of speech and gesture" (Bressem and Ladewig 2011: 87). By allowing for a different point of view on phenomena observable in gestures and their relation to speech, a linguistic view not only further unravels how speech and gesture "arise from a single process of utterance formation" (McNeill 1992: 30) and "appear together as manifestations of the same process of utterance" (Kendon 1980: 208), but moreover underpins the multimodal nature of language use and of language in general.
Acknowledgements

We are grateful to the Volkswagen Foundation for supporting this work with a grant for the interdisciplinary project "Towards a grammar of gesture: Evolution, brain and linguistic structures" (www.togog.org).
4. References

Albrecht, Jörn 2007. Europäischer Strukturalismus: Ein forschungsgeschichtlicher Überblick. Tübingen: Gunter Narr. Andrén, Mats 2010. Children's gestures from 18 to 30 months. Ph.D. dissertation, Centre for Languages and Literature, Lund University. Argyle, Michael 1975. Bodily Communication. New York: International Universities Press. Armstrong, David F. and Sherman E. Wilcox 2007. The Gestural Origin of Language. New York: Oxford University Press.
I. How the body relates to language and communication Barnett, Dene 1990. The art of gesture. In: Volker Kapp (ed.), Die Sprache der Zeichen und Bilder, Rhetorik und nonverbale Kommunikation in der fru¨hen Neuzeit, 65–76. Marburg: Hitzeroth. Battison, Robin 1974. Phonological deletion in American sign language. Sign Language Studies 5: 1–19. Bavelas, Janet Beavin, Trudy Johnson Kenwood and Bruce Phillips 2002. An experimental study of when and how speakers use gesture to communicate. Gesture 2(1): 1–17. Beattie, Geoffrey and Heather Shovelton 1999. Do iconic hand gestures really contribute anything to the semantic information conveyed by speech? An experimental investigation. Semiotica 123(1/2): 1–30. Beattie, Geoffrey and Heather Shovelton 2007. The role of iconic gesture in semantic communication and its theoretical and practical implications. In: Susan D. Duncan, Justine Cassell and Elena Tevy Levy (eds.), Gesture and the Dynamic Dimension of Language, Volume 1, 221–241. Philadelphia: John Benjamins. Becker, Raymond, Alan Cienki, Austin Bennett, Christina Cudina, Camille Debras, Zuzanna Fleischer, Michael Haaheim, Torsten Mu¨ller, Kashmiri Stec and Alessandra Zarcone 2011. Aktionsarten, speech and gesture. Proceedings of the 2nd Workshop on Gesture and Speech in Interaction – GESPIN, Bielefeld, Germany, 5–7 September. Bergmann, Kirsten, Volkan Aksu and Stefan Kopp 2011. The relation of speech and gestures: Temporal synchrony follows semantic synchrony. Paper presented at the 2nd Workshop on Gesture and Speech in Interaction – GESPIN, Bielefeld, Germany, 5–7 September. Birdwhistell, Ray L. 1970. Kinesics and Context. Philadelphia: University of Pennsylvania Press. Bloomfield, Leonard 1983. An Introduction to the Study of Language. Volume 3. Amsterdam: John Benjamins. Bohle, Ulrike 2007. Das Wort ergreifen – das Wort u¨bergeben: Explorative Studie zur Rolle redebegleitender Gesten in der Organisation des Sprecherwechsels. Berlin: Weidler. Bolinger, Dwight 1983. Intonation and gesture. American Speech 58(2): 156–174. Bressem, Jana 2012. Repetitions in gesture: Structures, functions, and cognitive aspects. Ph.D. dissertation, European University Viadrina, Frankfurt (Oder). Bressem, Jana this volume. A linguistic perspective on the notation of form features in gestures. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Bressem, Jana and Silva H. Ladewig 2011. Rethinking gesture phases – articulatory features of gestural movement? Semiotica 184(1/4): 53–91. Bressem, Jana, Silva H. Ladewig and Cornelia Mu¨ller this volume. Linguistic annotation system for gestures. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Bressem, Jana, Cornelia Mu¨ller and Ellen Fricke in preparation. “No, not, none of that” – cases of exclusion and negation in gesture. Butterworth, Brian and Uri Hadar 1989. Gesture, speech, and computational stages: A reply to McNeill. Psychological Review 96(1): 168–174. Calbris, Genevie`ve 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press. Calbris, Genevie`ve 2003. 
From cutting an object to a clear cut analysis. Gesture as the representation of a preconceptual schema linking concrete actions to abstract notions. Gesture 3(1): 19–46. Calbris, Genevie`ve 2008. From left to right…: Coverbal gestures and their symbolic use of space. In: Alan Cienki and Cornelia Mu¨ller (eds.), Metaphor and Gesture, 27–53. Amsterdam: John Benjamins. Calbris, Genevie`ve 2011. Elements of Meaning in Gesture. Amsterdam: John Benjamins.
3. Gestures and speech from a linguistic perspective: A new field and its history Calbris, Genevie`ve this volume. Elements of meaning in gesture. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Chafe, Wallace L. 1994. Discourse, Consciousness, and Time: The Flow and Displacement of Conscious Experience in Speaking and Writing. Chicago: University of Chicago Press. Chomsky, Noam 1965. Aspects of the Theory of Syntax. Cambridge: Massachusetts Institute of Technology Press. Chomsky, Noam 1981. Lectures on Government and Binding. Dordrecht, the Netherlands: Foris. Chomsky, Noam 1992. A Minimalist Program for Linguistic Theory. Cambridge: Massachusetts Institute of Technology Press. Cienki, Alan 1998a. Metaphoric gestures and some of their relations to verbal metaphorical expressions. In: Jean-Pierre Ko¨nig (ed.), Discourse and Cognition: Bridging the Gap, 189–204. Stanford, CA: Center for the Study of Language and Information. Cienki, Alan 1998b. Straight: An image schema and its metaphorical extensions. Cognitive Linguistics 9(2): 107–149. Cienki, Alan 2005. Image schemas and gesture. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 421–442. Berlin: De Gruyter Mouton. Cienki, Alan 2008. Why study metaphor and gesture. In: Alan Cienki and Cornelia Mu¨ller (eds.), Metaphor and Gesture, 5–25. Amsterdam: John Benjamins. Cienki, Alan 2010. Gesture and (cognitive) linguistic theory. In: Rosario Caballero (ed.), Proceedings of the XXVII AESLA International Conference ‘Ways and Modes of Human Communication’, 45–56. Ciudad Real, Spain: Universidad de Castilla-La Mancha. Cienki, Alan 2012. Usage events of spoken language and the symbolic units (may) abstract from them. In: Krzysztof Kosecki and Janusz Badio (eds.), Cognitive Processes in Language, 149– 158. Frankfurt: Peter Lang. Cienki, Alan this volume. Cognitive Linguistics: Spoken language and gesture as expressions of conceptualization. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Cienki, Alan and Cornelia Mu¨ller (eds.) 2008a. Metaphor and Gesture. Amsterdam: John Benjamins. Cienki, Alan and Cornelia Mu¨ller 2008b. Metaphor, gesture and thought. In: Raymond W. Gibbs (ed.), Cambridge Handbook of Metaphor and Thought, 483–501. Cambridge: Cambridge University Press. Clark, Herbert H. 1996. Using Language. Volume 4. Cambridge: Cambridge University Press. Clark, Herbert H. and Richard J. Gerrig 1990. Quotations as demonstrations. Language 66(4): 764–805. Condon, William C. and Richard Ogston 1966. Sound film analysis of normal and pathological behavior patterns. Journal of Nervous and Mental Disease 143(4): 338–347. Condon, William C. and Richard Ogston 1967. A segmentation of behavior. Journal of Psychiatric Research 5: 221–235. Copple, Mary this volume. Enlightenment philosophy: Gestures, language, and the origin of human understanding. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. 
Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A translation of La mimica degli antichi investigata nel gestire napoletano. With an introduction and notes by Adam Kendon. Bloomington: Indiana University Press. First published Fibreno, Naples [1832]. De Ruiter, Jan Peter 2000. The production of gesture and speech. In: David McNeill (ed.), Language and Gesture, 284–311. Cambridge: Cambridge University Press.
I. How the body relates to language and communication Duncan, Susan 2005. Gesture in signing: A case study in Taiwan Sign Language. Language and Linguistics 6(2): 279–318. Dutsch, Dorota this volume. The body in rhetorical delivery and in theatre – An overview of classical works. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Efron, David 1972. Gesture, Race and Culture. Paris: Mouton. First published [1941]. Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica 1(1): 49–98. Ekman, Paul and Erika Rosenberg (eds.) 1997. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). New York: Oxford University Press. Enfield, N. J. 2009. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press. Enfield, N. J. this volume. A ‘Composite Utterances’ approach to meaning. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Engle, Randi A. 1998. Not channels but composite signals: Speech, gesture, diagrams and object demonstrations are integrated in multimodal explanations. In: Morton Ann Gernsbacher and Sharon J. Derry (eds.), Proceedings of the Twentieth Annual Conference of the Cognitive Science Society, 321–326. Mahwah, NJ: Erlbaum. Engle, Randi A. 2000. Toward a theory of multimodal communication combining speech, gestures, diagrams, and demonstrations in instructional explanations. Ph.D. dissertation, Stanford University. Esposito, Anna and Maria Marinaro 2007. What pauses can tell us about speech and gesture partnership. In: Anna Esposito, Maja Bratanic, Eric Keller and Maria Marinaro (eds.), Fundamentals of Verbal and Nonverbal Communication and the Biometric Issue, 45–57. Amsterdam: IOS Press. Fauconnier, Gilles and Mark Turner 2002. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic Books. Feldmann, Robert S. and Bernard Rime´ (eds.) 1991. Fundamentals of Nonverbal Behavior. Cambridge: Cambridge University Press. Feyereisen, Pierre 1987. Gestures and speech, interactions and separations: A reply to McNeill. Psychological Review 94(4): 493–498. Firbas, Jan 1971. On the concept of communicative dynamism in the theory of functional sentence perspective. Brno Studies in English 7: 12–47. Freedman, Norbert 1977. Hands, words and mind: On the structuralization of body movements during discourse and the capacity for verbal representation. In: Norbert Freedman and Stanley Grand (eds.), Communicative Structures and Psychic Structures, 109–132. New York: Plenum. Fricke, Ellen 2007. Origo, Geste und Raum: Lokaldeixis im Deutschen. Berlin: Walter de Gruyter. Fricke, Ellen 2010. Phonaestheme, Kinaestheme und multimodale Grammatik: Wie Artikulationen zu Typen werden, die bedeuten ko¨nnen. In: Sprache und Literatur 41(1): 70–88. Fricke, Ellen 2012. Grammatik multimodal: Wie Wo¨rter und Gesten zusammenwirken. Berlin: De Gruyter Mouton. Fricke, Ellen this volume. 
Towards a unified grammar of gesture and speech. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Gerwing, Jennifer and Janet Beavin Bavelas this volume. The social interactive nature of gestures: theory, assumptions, methods, and findings. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton.
3. Gestures and speech from a linguistic perspective: A new field and its history Gibbs, Raymond W. 1994. The Poetics of Mind: Figurative Thought, Language, and Understanding. Cambridge: Cambridge University Press. Goodwin, Charles 1986. Gesture as a resource for the organization of mutual orientation. Semiotica 62(1/2): 29–49. Goodwin, Charles 2006. Human sociality as mutual orientation in a rich interactive environment: Multimodal utterances and pointing in Aphasia. In: N. J. Enfield and Stephen C. Levinson (eds.), Roots of Human Sociality: Culture, Cognition and Interaction, 97–125. London: Berg. Goodwin, Charles 2007. Environmentally coupled gestures. In: Susan Duncan, Justine Cassell, and Elena Levy (eds.), Gesture and Dynamic Dimensions of Language, 195–212. Amsterdam: John Benjamins. Graf, Fritz 1994. Gestures and conventions: The gestures of Roman actors and orators. In: Jan Bremmer and Herman Roodenburg (eds.), A Cultural History of Gesture, 36–58. Cambridge: Polity Press. Gullberg, Marianne 2011. Thinking, speaking and gesturing about motion in more than one language. In: Aneta Pavlenko (ed.), Thinking and Speaking in Two Languages, 143–169. Bristol: Multilingual Matters. Gut, Ulrike, Karin Looks, Alexandra Thies and Dafydd Gibbon 2002. Cogest: Conversational gesture transcription system version 1.0. Fakulta¨t fu¨r Linguistik und Literaturwissenschaft, Universita¨t Bielefeld, ModeLex Tech. Report, 1. Hadar, Uri this volume. Coverbal gestures: Between communication and speech production. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Hadar, Uri and Robert Krauss 1999. Iconic gestures: The grammatical categories of lexical affiliates. Journal of Neurolinguistics 12(1): 1–12. Harris, Randy A. 1995. The Linguistics Wars. New York: Oxford University Press, USA. Harris, Zellig 1951. Methods in Structural Linguistics. Chicago: Chicago University Press. Harrison, Simon 2009. Grammar, gesture, and cognition: The case of negation in English. Ph.D. dissertation, Universite´ Michel de Montaigne, Bourdeaux 3. Harrison, Simon 2010. Evidence for node and scope of negation in coverbal gesture. Gesture 10(1): 29–51. Heidegger, Martin 1962. Being and Time. Translated by John Macquarrie and Edward Robinson. New York: Harper and Row. Hinde, Robert A. (ed.) 1972. Nonverbal Communication. Cambridge: Cambridge University Press. Hockett, Charles F. 1958. A Course in Modern Linguistics. New York: MacMillan. Hopper, Paul 1998. Emergent grammar. In: Michael Tomasello (ed.), The New Psychology of Language: Cognitive and Functional Approaches to Language Structure, volume 1, 155–175. Mahwah, NJ: Lawrence Erlbaum. Hougaard, Anders and Gitte Rasmussen this volume. Fused bodies: on the interrelatedness of cognition and interaction. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Ishino, Mika 2001. Conceptual metaphors and metonymies of metaphoric gestures of anger in discourse of native speakers of Japanese. Mary Andronis, Christopher Ball, Heidi Elston and Sylvain Neuvel (eds.), CLS 37: The Main Session, 259–273. 
Chicago: Chicago Linguistic Society. Jakobson, Roman and Krystyna Pomorska 1983. Dialogues. Cambridge: Massachusetts Institute of Technology Press. Janzen, Terry and Barbara Shaffer 2002. Gesture as the substrate in the process of ASL grammaticalization. In: Richard P. Meier, Kearsy Cormier and David Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Languages, 199–223. Cambridge: Cambridge University Press. Johnson, Mark 2005. The philosophical significance of image schemas. In: Beate Hampe (ed.), From Perception to Meaning: Image Schemas in Cognitive Linguistics, 15–33. Berlin: De Gruyter Mouton.
I. How the body relates to language and communication Kappelhoff, Hermann and Cornelia Mu¨ller 2011. Embodied meaning construction. Multimodal metaphor and expressive movement in speech, gesture, and feature film. Metaphor in the Social World 1(2): 121–153. Kendon, Adam 1972. Some relationships between body motion and speech: An analysis of an example. In: Aron Wolfe Siegman and Benjamin Pope (eds.), Studies in Dyadic Communication, 177–210. New York: Elsevier. Kendon, Adam 1980. Gesticulation and speech: Two aspects of the process of utterance. In: Mary R. Key (ed.), Nonverbal Communication and Language, 207–227. The Hague: Mouton. Kendon, Adam (1983). Gesture and speech: How they interact. In: John M. Wiemann (ed.), Nonverbal interaction, 13–46. Beverly Hills, California: Sage Publications. Kendon, Adam 1986. Some reasons for studying gesture. Semiotica 62(1/2): 3–28. Kendon, Adam 1987. On gesture: Its complementary relationship with speech. In: Aaron W. Siegman and Stanley Feldstein (eds.), Nonverbal Behavior and Communication, 65–97. London: Lawrence Erlbaum. Kendon, Adam 1990. Conducting Interaction: Patterns of Behaviour in Focused Encounters. Cambridge: Cambridge University Press. Kendon, A. 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics, 23: 247–279. Kendon, Adam 1998. Die wechselseitige Einbettung von Geste und Rede. In: Caroline Schmauser and Thomas Knoll (eds.), Ko¨rperbewegungen und ihre Bedeutungen, 9–19. Berlin: Arno Spitz. Kendon, Adam 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press. Kendon, Adam 2003. Some uses of the head shake. Gesture 2(2): 147–182. Kendon, Adam 2008. Language’s matrix. Gesture 9: 355–372. Kendon, Adam this volume. Exploring the utterance roles of visible bodily action: A personal account. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Kendon, Adam and Andrew Ferber 1973. A description of some human greetings. In: Richard Phillip Michael and John Hurrell Crook (eds.), Comparative Ecology and Behaviour of Primates, 591–668. London: Academic Press. Kendon, Adam, Richard M. Harris and Mary Ritchie Key 1975. The Organization of Behavior in Face-to-Face Interaction. The Hague: Mouton. Kidwell, Mardi this volume. Framing, grounding and coordinating conversational interaction: Posture, gaze, facial expression, and movement in space. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Kita, Sotaro 2000. How representational gestures help speaking. In: David McNeill (ed.), Language and Gesture. Cambridge: Cambridge University Press. Kita, Sotaro 2003. Pointing where language, culture, and cognition meet. Mahwah, NJ: Lawrence Erlbaum. ¨ zyu¨rek 2002. What does cross-linguistic variation in semantic coordination Kita, Sotaro and Asli O of speech and gesture reveal?: Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48: 16–32. 
Kita, Sotaro, Asli Özyürek, Shanley Allen, Amanda Brown, Reyhan Furman and Tomoko Ishizuka 2007. Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production. Language and Cognitive Processes 22(8): 1212–1236. Kolter, Astrid, Silva H. Ladewig, Michela Summa, Sabine Koch, Thomas Fuchs and Cornelia Müller 2012. Body memory and emergence of metaphor in movement and speech. An interdisciplinary case study. In: Sabine Koch, Thomas Fuchs, Michela Summa and Cornelia Müller (eds.), Body Memory, Metaphor, and Movement, 201–226. Amsterdam: John Benjamins.
3. Gestures and speech from a linguistic perspective: A new field and its history Kopp, Stefan, Kirsten Bergmann and Ipke Wachsmuth 2008. Multimodal communication from multimodal thinking – towards an integrated model of speech and gesture production. International Journal of Semantic Computing 2(1): 115–136. Ladewig, Silva H. 2007. The family of the cyclic gesture and its variants – systematic variation of form and contexts. http://www.silvaladewig.de/publications/papers/Ladewig-cyclic_gesture_pdf; accessed January 2008. Ladewig, Silva H. 2010. Beschreiben, suchen und auffordern – Varianten einer rekurrenten Geste. In: Sprache und Literatur 41(1): 89–111. Ladewig, Silva H. 2011. Putting the cyclic gesture on a cognitive basis. CogniTextes 6. http:// cognitextes.revues.org/406. Ladewig, Silva H. 2012 Syntactic and semantic integration of gestures into speech: Structural, cognitive, and conceptual aspects. Ph.D. dissertation, European University Viadrina, Frankfurt (Oder). Ladewig, Silva H. and Jana Bressem forthcoming. New insights into the medium hand – Discovering Structures in gestures based on the four parameters of sign language. Semiotica. Ladewig, Silva H. and Sedinha Teßendorf in preparation. The brushing-aside and the cyclic gesture – reconstructing their underlying patterns. Lakoff, George and Mark Johnson 1980. Metaphors We Live By. Chicago: Chicago University Press. Langacker, Ronald W. 1987. Foundations of Cognitive Grammar: Theoretical Prerequisites. Stanford, CA: Stanford University Press. Langacker, Ronald W. 1993. Reference-point constructions. Cognitive Linguistics 4(1): 1–38. Langacker, Ronald W. 2008. Cognitive Grammar: A Basic Introduction. Oxford: Oxford University Press. Lapaire, Jean-Remı´ 2006. Negation, reification and manipulation in a cognitive grammar of substance. In: Stephanie Bonnefille and Sebastian Salbayre (eds.), La Ne´gation, 333–349. Tours: Presses Universitaires Franc¸ois Rabelais. Liddell, Scott 1998. Grounded blends, gestures, and conceptual shifts. Cognitive Linguistics 9(3): 283–314. Loehr, Dan 2004. Gesture and intonation. Ph.D. dissertation, Georgetown University, Washington, DC. Loehr, Dan 2007. Aspects of rhythm in gesture and speech. Gesture 7(2): 179–214. Martinet, Andre´ (1960/1963). Grundzu¨ge der Allgemeinen Sprachwissenschaft. Stuttgart: Kohlhammer. McClave, Evelyn Z. 1991. Intonation and gesture. Ph.D. dissertation, Georgetown University, Washington, DC. McClave, Evelyn Z. 1994. Gestural beats: The rhythm hypothesis. Journal of Psycholinguistic Research 23(1): 45–66. McClave, Evelyn Z. 2000. Linguistic functions of head movements in the context of speech. Journal of Pragmatics 32(7): 855–878. McNeill, David 1979. The Conceptual Basis of Language. Hillsdale, NJ: Erlbaum. McNeill, David 1985. So you think gestures are nonverbal? Psychological Review 92(3): 350–371. McNeill, David 1987. So you do think gestures are nonverbal. Reply to Feyereisen (1987). Psychological Review 94(4): 499–504. McNeill, David 1989. A straight path – to where? Reply to Butterworth and Hadar. Psychological Review 96(1): 175–179. McNeill, David 1992. Hand and Mind. What Gestures Reveal about Thought. Chicago: University of Chicago Press. McNeill, David (ed.) 2000. Language and Gesture. Cambridge: Cambridge University Press. McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press. McNeill, David 2007. Gesture and thought. 
In: Anna Esposito, Maja Bratanic´, Eric Keller and Maria Marinaro (eds.), Fundamentals of Verbal and Nonverbal Communication and the Biometric Issue, 20–33. Amsterdam: IOS Press. McNeill, David this volume. The growth point hypothesis of language and gesture as a dynamic and integrated system. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International
I. How the body relates to language and communication Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. McNeill, David and Susan D. Duncan 2000. Growth points in thinking-for-speaking. In: David McNeill (ed.), Language and Gesture, 141–161. Cambridge: Cambridge University Press. McNeill, David and Elena T. Levy 1982. Conceptual representations in language activity and gesture. In: Robert J. Jarvella and Wolfgang Klein (eds.), Speech, Place, and Action, 271–295. New York: Wiley and Sons. McNeill, David and Elena T. Levy 1993. Cohesion and gesture. Discourse Processes 16(4): 363–386. McNeill, David, Francis Quek, Karl Eric McCullough, Susan Duncan, Robert Bryll, Xin-Feng Ma and R. Ansari 2002. Dynamic imagery in speech and gesture. In: Bjo¨rn Granstro¨m, David House and Inger Karlsson (eds.), Multimodality in Language and Speech Systems, Volume 19, 27–44. Dordrecht, the Netherlands: Kluwer Academic. Merleau-Ponty, Maurice 1962. Phenomenology of Perception. Translated by Colin Smith. London: Routledge. Mittelberg, Irene 2006. Metaphor and metonymy in language and gesture: Discoursive evidence for multimodal models of grammar. Ph.D. dissertation, Cornell University. Mittelberg, Irene 2008. Peircean semiotics meets conceptual metaphor: Iconic modes in gestural representations of grammar. In: Alan Cienki and Cornelia Mu¨ller (eds.), Metaphor and Gesture, 145–184. Amsterdam: John Benjamins. Mittelberg, Irene 2010a. Geometric and image-schematic patterns in gesture space. In: Vyvyan Evans and Paul Chilton (eds.), Language, Cognition, and Space: The State of the Art and New Directions, 351–385. London: Equinox. Mittelberg, Irene 2010b. Interne und externe Metonymie: Jakobsonsche Kontiguita¨tsbeziehungen in redebegleitenden Gesten. In: Sprache und Literatur 41(1): 112–143. Mondada, Lorenza 2007. Multimodal resources for turn-taking: pointing and the emergence of possible next speakers. Discourse Studies 9(2): 194–225. Mondada, Lorenza this volume. Multimodal interaction. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. Handbooks of Linguistics and Communication Science 38.1. New York: De Gruyter Mouton. Mu¨ller, Cornelia 1998. Redebegleitende Gesten: Kulturgeschichte, Theorie, Sprachvergleich. Berlin: Arno Spitz. Mu¨ller, Cornelia 2000. Zeit als Raum. Eine kognitiv-semantische Mikroanalyse des sprachlichen und gestischen Ausdrucks von Aktionsarten. In: Ernest W. B. Hess-Lu¨ttich and H. Walter Schmitz (eds.), Botschaften verstehen. Kommunikationstheorie und Zeichenpraxis. Festschrift fu¨r Helmut Richter, 211–218. Frankfurt a.M.: Peter Lang. Mu¨ller, Cornelia 2004. Forms and uses of the palm up open hand. A case of a gesture family? In: Cornelia Mu¨ller and Roland Posner (eds.), Semantics and Pragmatics of Everyday Gestures, 234–256. Berlin: Weidler. Mu¨ller, Cornelia 2007. A dynamic view on gesture, language and thought. In: Susan D. Duncan, Justine Cassell and Elena T. Levy (eds.), Gesture and the Dynamic Dimension of Language, 109–116. Amsterdam: John Benjamins. Mu¨ller, Cornelia 2008a. Metaphors Dead and Alive, Sleeping and Waking: A Dynamic View. Chicago: Chicago University Press. Mu¨ller, Cornelia 2008b. What gestures reveal about the nature of metaphor. In: Alan Cienki and Cornelia Mu¨ller (eds.), Metaphor and Gesture, 249–275. Amsterdam: John Benjamins. 
Mu¨ller, Cornelia 2009. Gesture and language. In: Kirsten Malmkjaer (ed.), Routledge’s Linguistics Encyclopedia, 214–217. Abingdon: Routledge. Mu¨ller, Cornelia 2010a. Mimesis und Gestik. In: Gertrud Koch, Martin Vo¨hler und Christiane Voss (eds.), Die Mimesis und ihre Ku¨nste, 149–187. Paderborn: Fink. Mu¨ller, Cornelia 2010b. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. In Sprache und Literatur 41(1): 37–68.
3. Gestures and speech from a linguistic perspective: A new field and its history Mu¨ller, Cornelia this volume. Gestures as a medium of expression: The linguistic potential of gestures. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Mu¨ller, Cornelia submitted. How gestures mean – The construal of meaning in gestures with speech. Mu¨ller, Cornelia, Jana Bressem and Silva H. Ladewig this volume. Towards a grammar of gestures: a form-based view. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Mu¨ller, Cornelia and Alan Cienki 2009. Words, gestures, and beyond: Forms of multimodal metaphor in the use of spoken language. In: Charles Forceville and Eduardo Urios-Aparisi (eds.), Multimodal Metaphor, 297–328. Berlin: De Gruyter Mouton. Mu¨ller, Cornelia, Hedda Lausberg, Ellen Fricke and Katja Liebal 2005. Towards a Grammar of Gesture: Evolution, Brain, and Linguistic Structures. Berlin: Antrag im Rahmen der Fo¨rderinitiative “Schlu¨sselthemen der Geisteswissenschaften. Programm zur Fo¨rderung fachu¨bergreifender und internationaler Zusammenarbeit”. Mu¨ller, Cornelia and Ingwer Paul 1999. Gestikulieren in Sprechpausen. Eine konversationssyntaktische Fallstudie. In: Hartmut Eggert and Janusz Golec (eds.), … wortlos der Sprache ma¨chtig. Schweigen und Sprechen in Literatur und sprachlicher Kommunikation, 265–281. Stuttgart: Metzler. Mu¨ller, Cornelia and Roland Posner (eds.) 2004. The Semantics and Pragmatics of Everyday Gestures. Berlin: Weidler. Mu¨ller, Cornelia and Gerald Speckmann 2002. Gestos con una valoracio´n negativa en la conversacio´n cubana. DeSignis 3: 91–103. Mu¨ller, Cornelia and Susanne Tag 2010. The embodied dynamics of metaphoricity: Activating metaphoricity in conversational interaction. Cognitive Semiotics 6: 85–120. Nu´n˜ez, Raphael 2008. A fresh look at the foundations of mathematics: Gesture and the psychological reality of conceptual metaphor. In: Alan Cienki and Cornelia Mu¨ller (eds.), Metaphor and Gestures, 225–247. Amsterdam: John Benjamins. Nu´n˜ez, Rafael E. and Eve Sweetser 2006. With the future behind them: Convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science 30(3): 401–450. Ortony, Andrew 1993. Metaphor and Thought. Cambridge: Cambridge University Press. Parrill, Fey 2008. Form, meaning and convention: An experimental examination of metaphoric gestures. In: Alan Cienki and Cornelia Mu¨ller (eds.), Metaphor and Gesture, 225–247. Amsterdam: John Benjamins. Parrill, Fey and Eve Sweetser 2002. Representing meaning: Morphemic level analysis with a holistic appraoch to gesture transcription. Paper presented at the First Congress of the International Society of Gesture Studies, The University of Texas, Austin. Parrill, Fey and Eve Sweetser 2004. What we mean by meaning: Conceptual integration in gesture analysis and transcription. Gesture 4(2): 197–219. Peirce, Charles S. 1931. Collected Papers of Charles Sanders Peirce. Cambridge, MA: Harvard University Press. Pfau, Roland and Markus Steinbach 2006. 
Pluralization in sign and in speech: A cross-modal typological study. Linguistic Typology 10(2): 135–182. Pfau, Roland and Markus Steinbach 2011. Grammaticalization in sign languages. In: Bernd Heine and Heiko Narrog (eds.), Handbook of Grammaticalization, 681–693. Oxford: Oxford University Press. Pike, Kenneth Lee 1967. Language in Relation to a Unified Theory of the Structure of Human Behavior (second and revised edition). The Hague: Mouton.
I. How the body relates to language and communication Polanyi, Michael 1958. Personal Knowledge. Chicago: University of Chicago Press. Quintilian, Marcus Fabius 1969. The Institutio Oratoria of Quintilian. With an English translation by H. E. Butler. New York: G. P. Putnam. Ruesch, Jurgen and Weldon Kees 1970. Nonverbal Communication: Notes on the Visual Perception of Human Relations. Berkeley: University of California Press. Saussure, Ferdinand de, Charles Bally and Albert Sechehaye 2001. Grundfragen der allgemeinen Sprachwissenschaft. Berlin: Walter de Gruyter. Scheflen, Albert E. 1973. How Behavior Means. New York: Gordon and Breach. Scherer, Klaus R. 1979. Die Funktionen des nonverbalen Verhaltens im Gespra¨ch. In: Klaus R. Scherer and Harald G. Wallbott (eds.), Nonverbale Kommunikation: Forschungsberichte zum Interaktionsverhalten, 25–32. Weinheim, Germany: Beltz. Scherer, Klaus R. and Paul Ekman 1982. Handbook of Methods in Nonverbal Behavior Research. Cambridge: Cambridge University Press. Schmitt, Reinhold 2005. Zur multimodalen Struktur von turn-taking. Gespra¨chsforschung – Online-Zeitschrift zur verbalen Interaktion 6: 17–61. Seyfeddinipur, Mandana 2004. Meta-discursive gestures from Iran: Some uses of the ‘Pistol Hand’. In: Cornelia Mu¨ller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures, 205–216. Berlin: Weidler. Seyfeddinipur, Mandana 2006. Disfluency: Interrupting speech and gesture (MPI Series in Psycholinguistics 39). Nijmegen: University of Nijmegen. Sheets-Johnstone, Maxine 1999. The Primacy of Movement. New York: John Benjamins. Slama-Cazacu, Tatiana 1976. Nonverbal components in message sequence: “Mixed syntax”. In: William Charles McCormack and Stephen A. Wurm (eds.), Language and Man: Anthropological Issues, 217–227. The Hague: Mouton. Sowa, Timo 2005. Understanding Coverbal Iconic Gestures in Object Shape Descriptions. Berlin: Akademische Verlagsgesellschaft Aka. Stokoe, William C. 1960. Sign Language Structure. Buffalo, NY: Buffalo University Press. Streeck, Ju¨rgen 1988. The significance of gesture: How it is established. International Pragmatics Association Papers in Pragmatics 2(1/2): 60–83. Streeck, Ju¨rgen 1993. Gesture as communication I: Its coordination with gaze and speech. Communication Monographs 60(4): 275–299. Streeck, Ju¨rgen 2002. Grammars, words, and embodied meanings: On the uses and evolution of so and like. Journal of Communication 52(3): 581–596. Streeck, Ju¨rgen 2008. Depicting by gestures. Gesture 8(3): 285–301. Streeck, Ju¨rgen 2009. Gesturecraft: Manufacturing Understanding. Amsterdam: John Benjamins. Streeck, Ju¨rgen this volume. Praxeology of gesture. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Streeck, Ju¨rgen and Ulrike Hartge 1992. Previews: Gestures at the transition place. In: Peter Auer and Alsdo di Luzio (eds.), The Contextualization of Language, 135–157. Amsterdam: John Benjamins. Stukenbrock, Anja 2008 “Wo ist der Hauptschmerz?” – Zeigen am menschlichen Ko¨rper in der medizinischen Kommunikation. Gespra¨chsforschung. Online-Zeitschrift zur verbalen Interaktion 9: 1–33. Sweetser, Eve 1998. Regular metaphoricity in gesture: bodily-based models of speech interaction. Actes du 16e Congres International des Linguistes (CD-ROM). 
Sweetser, Eve and Fey Parrill volume 2. Gestures as conceptual blends. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Volume 2. Berlin: De Gruyter Mouton.
3. Gestures and speech from a linguistic perspective: A new field and its history Talmy, Leonard (1983). How language structures space. In: Herbert L. Pick and Linda P. Acredolo (eds.), Spatial Orientation: Theory, Research, and Application, 225–282. New York: Plenum Press. Teßendorf, Sedinha 2008. Pragmatic and metaphoric gestures – combining functional with cognitive approaches. Unpublished manuscript, European University Viadrina, Frankfurt (Oder). Teßendorf, Sedinha and Silva H. Ladewig 2008. The brushing-aside and the cyclic gesture – reconstructing their underlying patterns, GCLA-08/DGKL-08. Leipzig, Germany. Tuite, Kevin 1993. The production of gesture. Semiotica 93(1/2): 83–105. Vygotsky, Lev 1986. Thought and Language. Edited and translated by Eugenia Hanfmann and Gertrude Vakar, revised and edited by Alex Kozulin. Cambridge: Massachusetts Institute of Technology Press. Wachsmuth, Ipke 1999. Communicative rhythm in gesture and speech. In: Annelies Braffort, Rachid Gherbi, Sylvie Gibet, James Richardson and Daniel Teil (eds.), Gesture-Based Communication in Human-Computer Interaction – Proceedings International Gesture Workshop GW’99, 277–289. Berlin: Springer. Watzlawick, Paul, Janet Beavin Bavelas, and Don D. Jackson 1967. Pragmatics of Human Communication: A Study of Interactional Patterns, Pathologies and Paradoxes. New York: Norton. Webb, Rebecca 1996. Linguistic features of metaphoric gestures. Unpublished Ph.D. dissertation, University of Rochester, New York. Webb, Rebecca 1998. The lexicon and componentiality of American metaphoric gestures. In: Serge Santi, Isabelle Guaitella, Christian Cave´ and Gabrielle Konopczynski (eds.), Oralite´ et Gestualite´: Communication Multimodale, Interaction, 387–391. Paris: L’Harmattan. Wilcox, Sherman 2002. The iconic mapping of space and time in signed languages. In: Liliana Albertazzi (ed.), Unfolding Perceptual Continua, 255–281. Amsterdam: John Benjamins. Wilcox, Sherman 2004. Gesture and language. Gesture 4(1): 3–73. Wilcox, Sherman and Paolo Rossini 2010. Grammaticalization in sign languages. In: Diane Brentari (ed.), Sign Languages, 332–354. Cambridge: Cambridge University Press. Wilcox, Sherman and Phyllis Wilcox 1995. The gestural expression of modality in ASL. In: Joan Bybee and Suzanne Fleischman (eds.), Modality in Grammar and Discourse, 135–162. Amsterdam: John Benjamins. Williams, Robert F. 2008. Gesture as a conceptual mapping tool. In: Alan Cienki and Cornelia Mu¨ller (eds.), Metaphor and Gesture, 55–92. Amsterdam: John Benjamins. Wollock, Jeffrey 1997. The Noblest Animate Motion: Speech, Physiology, and Medicine in Pre-Cartesian Linguistic Thought. Amsterdam: John Benjamins. Wollock, Jeffrey 2002. John Bulwer (1606–1656) and the significance of gesture in 17th century theories of language and cognition. Gesture 2(2): 227–258. Wollock, Jeffrey this volume. Renaissance philosophy: Gesture as universal language. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Wundt, Wilhlem 1921. Vo¨lkerpyschologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte. Erster Band. Die Sprache. Leipzig: Engelmann. Zlatev, Jordan 2002. Mimesis: The “missing link” between signals and symbols in phylogeny and ontogeny? 
In: Anneli Pajunen (ed.), Mimesis, Sign and Language Evolution, 93–122. Publications in General Linguistics 3. Turku: University of Turku Press.
Cornelia Müller, Frankfurt (Oder) (Germany)
Silva H. Ladewig, Frankfurt (Oder) (Germany)
Jana Bressem, Chemnitz (Germany)
4. Emblems, quotable gestures, or conventionalized body movements

1. Introduction
2. Definition(s) and history of the term "emblem"
3. Theoretical approaches
4. Emblem repertoires
5. Cross-cultural findings
6. Characteristics of emblems
7. Concluding remarks
8. References
Abstract

This article offers an account of the nature of emblems, the history of the concept, and its definitions. It sketches the most important theoretical approaches to emblems and their findings, presenting insights from cognitive and psychological, semiotic, and ethnographic and pragmatic perspectives on the subject. We give a brief overview of mono-cultural and cross-cultural repertoires of emblems and address some of the cross-cultural findings concerning this conventional gesture type. At the end of the article, some of the most important characteristics of emblems are described.
1. Introduction

Emblems or quotable gestures are conventional body movements with a precise meaning that can easily be understood without speech by a certain cultural or social group. In this article, we will concentrate on emblematic hand gestures only. Examples of prototypical emblems are the so-called "thumbs up" gesture, which, at least in Western cultures, is used to express something good or positive and can be glossed as "OK" (Morris et al. 1979; Sherzer 1991), or the "V for victory" sign (Brookes 2011; Morris et al. 1979; Schuler 1944), in which the index and middle finger are stretched, the other fingers curled in, and the palm faces the interlocutor. These are the gestures that typically appear in newspaper photographs, in advertisements, or in older paintings; they convey a clear message and often express the attitude of the gesturer.

At least since the beginning of modern gesture studies with the seminal study of David Efron ([1941] 1972), emblems have been regarded as a class of gestures distinct from spontaneous co-speech gestures. The majority of gesture researchers agree that emblems differ from spontaneous co-speech gestures, which are assumed to be created more or less on the spot (see McNeill 1992, 2000, 2005; Müller 1998, 2010, this volume; Poggi 2002, inter alia), in that emblems have developed historically and therefore belong to the gestural repertoire of a certain culture or group. Emblems are conventional gestures that have a standard of well-formedness. It is widely accepted that they have a more or less defined meaning, are easily translatable into a word or phrase, and can therefore be used as a substitute for speech (Ekman and Friesen 1969, 1972; Johnson,
Ekman, and Friesen 1975; McNeill 1992, 2000, 2005; Morris 2002; Morris et al. 1979; Müller 1998, 2010; Payrató 1993; Poggi 2002, 2007, inter alia). This means that emblems have, at least in part, an illocutionary force (Kendon 1995; Payrató 1993, 2003; Poggi 2002). This article will not treat other conventionalized body movements such as coded gestures (e.g., the semaphore language of arm signals, see Morris 2002: 40) or technical gestures (Morris 2002: 38), which are invented and used by a small group for technical communication, e.g., the gestures of crane drivers or firemen. Such gestures usually do not enter the gestural repertoire of a wider group and are therefore not addressed here (see Kendon 2004b: 291ff. for an overview).
2. Definition(s) and history of the term "emblem"

Over the years, terms and definitions for these conventional gestures have varied: they have been called emblematic gestures or emblems (Efron 1972; Ekman and Friesen 1969, 1972; Johnson, Ekman, and Friesen 1975; McNeill 1992, 2000, 2005, inter alia), symbolic gestures (Calbris 1990; Efron 1972; Poggi 2002; Sparhawk 1978; Wundt 1900), semiotic gestures (Barakat 1973), quotable gestures (Brookes 2001, 2005, 2011; Kendon 1984, 1992), autonomous gestures (Kendon 1983; Payrató 1993) or narrow gloss gestures (Kendon 2004b), each term shedding a different light on the phenomenon. In this article, we will adhere to the term emblem because it is the most widespread within the research community, while acknowledging that the label may suggest that these gestures are "semiotically, of the same type, when in fact, […] this is not the case" (Kendon 1992: 92). We also adopt this term because it was coined by David Efron, who, with the first empirical study on gestures, drew scholarly attention to these cultural and conventional gestures, while being aware of Wilhelm Wundt's (1900) concept of symbolic gestures. Symbolic gestures constitute the most complex class in Wundt's gesture classification and are found especially in traditional sign languages. According to Wundt, they are as close to a word as a gesture can be, because their relation to the signified is characterized by associations rather than by iconicity. These associations are strengthened by the constant use of the gesture, which in the end may lead to a completely conventional and arbitrary sign.

Efron's study Gesture and Environment from 1941 (re-published as Gesture, Race, and Culture by Paul Ekman in 1972) introduced the term emblematic gesture to describe symbolic, conventional and arbitrary gestures. Reviewing theories that treat gestures as always pictorial, natural and congenital, Efron picks up the term emblem from the Renaissance and cites the work of Francis Bacon (1640):

Notes therefore of things, which without the helpe and mediation of Words signifie Things, are of two sorts; whereof the first sort is significant of Congruitie, the other ad placitum. Of the former are Hieroglyphiques and Gestures; […] As for Gestures they are, as it were, Transitory Hieroglyphiques. […] This in the meane is plain, that Hieroglyphiques and Gestures ever have some similitude with the thing signified, and are kind of Emblemes. (Bacon 1640: 258–259; Efron 1972: 94–95)
Although Bacon includes all gestures that work as signs without the "mediation" of language and compares them to hieroglyphs, because both kinds of signs are connected to their
signified through similarity, Efron reserves the term symbolic or emblematic gestures for those that are "representing either a visual or a logical object by means of a pictorial or a non-pictorial form which has no morphological relationship to the thing represented" (Efron 1972: 96), in effect reserving the term emblem for arbitrary gestures and excluding those that have "some similitude with the thing signified". Nevertheless, in a footnote he notes that some symbolic gestures are partially similar to their referent and calls them "hybrid movements" (Efron 1972: 96). But since they fall into two different categories and are therefore hard to classify, he refrains from considering them any further. While Bacon in this "very brief discussion uses the term 'hieroglyphic' in a very generic way, for all iconic ideograms" (Jeffrey Wollock, personal communication; see also this volume), Efron reserves this term for those gestures that have no relation of similarity to their signified, but an arbitrary one.

In her survey of the history of gesture studies, Müller (1998: 61–62, footnote 72) points out that with the discovery of the Egyptian hieroglyphs in the 16th and 17th centuries a sudden interest in iconology arose within the intellectual circles of Europe, leading to the development of pictorial symbols, such as emblems, displaying proverbs, idioms and abstract notions. It was in this context that gestures were considered ideograms or emblems. Müller alludes to the fact that some emblematic gestures (in the Efronian sense) are in effect grounded in the gestural representation of proverbs and idioms (Müller 1998: 62; see also Payrató 2008). For David Efron, emblematic gestures are meaningful by virtue of the conventional symbolic connotation that they possess independently of the speech for which they "may, or may not, be an adjunct" (Efron 1972: 96), a characteristic which also holds for deictic or pictorial gestures. The matter of iconicity, arbitrariness, and conventionality has been discussed thoroughly by Barbara E. Hanna (1996; see below).

Most emblem researchers have followed the definition of Ekman and Friesen (1972; a slightly adjusted version of the one presented in 1969), who shifted the focus from conventionality towards the emblem's relation to speech:

Emblems are those nonverbal acts (a) which have a direct verbal translation usually consisting of a word or two, or a phrase, (b) for which this precise meaning is known by most or all members of a group, class, subculture or culture, (c) which are most often deliberately used with the conscious intent to send a particular message to other person(s), (d) for which the person(s) who sees the emblem usually not only knows the emblem's message but also knows that it was deliberately sent to him, and (e) for which the sender usually takes responsibility for having made that communication. A further touchstone of an emblem is whether it can be replaced by a word or two, its message verbalized without substantially modifying the conversation. (Ekman and Friesen 1972: 357)
This definition focuses on the word-likeness of emblems and, following Hanna (1996), has hindered the development of thorough studies on emblems as communicative signs in their own right, leading instead to a series of emblem repertoires (see also Payrató 1993 for a systematic discussion). Adam Kendon qualifies his own definition of emblems as “autonomous or quotable gestures” as a practical user’s definition, thereby explicitly circumventing the difficulties of establishing coherent semiotic criteria, which even within theoretical reasoning are difficult to meet. The term therefore refers to gestures
that “are standardized in form and which can be quoted and glossed apart from a context of spoken utterance” (Kendon 1986: 7–8). With this definition, Kendon captures those gestures which have already made their way “into an explicit list or vocabulary” (Kendon 2004b: 335), such as the “thumbs up gesture”, the “victory gesture” or the “fingers cross gesture” (see Morris et al. 1979 for examples).
3. Theoretical approaches

Emblems have been treated by almost all gesture researchers because they hold a prominent position between conventional and codified gestural systems, such as sign languages, and supposedly idiosyncratic and singular co-speech gestures. This idea is expressed in the so-called Kendon’s continuum, introduced by David McNeill (1992: 37–38) and elaborated in McNeill (2000) and Kendon (2004b), which arranges gesture types on a scale from holistic, spontaneous, idiosyncratic and co-speech-dependent gesticulations to the language-like, conventional signs of sign languages. In between are language-like gestures, pantomimes, and emblems, the last described as having a “segmentation, standards of well-formedness, a historical tradition, and a community of users” (McNeill 1992: 56). What has been looked at when considering emblems depends heavily on the respective researcher’s theoretical assumptions. In the following, we will sketch the most influential approaches.
3.1. Psychological and cognitive perspectives

As noted above, the work of the anthropologists and psychologists Paul Ekman and Wallace V. Friesen (1969, 1972; with Harold G. Johnson 1975) has been most influential. Their goal was to code and classify all nonverbal behavior according to its origin, coding, and usage, being well aware that their endeavor was actually impossible. Emblems were seen as a social phenomenon, almost word-like. Although they adopted the term emblem from Efron, they changed its scope and included iconic gestures in this category. According to Ekman and Friesen, emblems differ from spontaneous gestures mainly because they are used consciously, intentionally and without speech. At the same time, though, they have to be replaceable “by a word or two, its message verbalized without substantially modifying the conversation” (Ekman and Friesen 1972: 357). With this definition, the characteristics of the emblem as a conventional and cultural communicative sign in its own right were moved out of focus. Isabella Poggi (Poggi 1983, 1987, 2002, 2004, 2007, inter alia; Poggi and Zomparelli 1987) has been pursuing a quite similar aim: the establishment of a lexicon for each modality (touch, gaze, gesture), working with a semiotic and cognitive model of communication in terms of the notions of goals and beliefs (see Castelfranchi and Parisi 1980; Poggi 2007). Following Ekman and Friesen, she posits a strict division between emblems and other gestures, emblems being comparable to words in a foreign language and stored the same way (see Poggi and Magno Caldognetto 1997). They are culturally codified, autonomous and translatable. Poggi’s semantic analysis of different aspects of gestures and the establishment of a gesture typology leads to the “proto-grammatical” differentiation between holophrastic emblems and lexical or, more recently, articulated emblems. According to her findings, holophrastic emblems can be compared to
interjections, an equivalent of a complete speech act with a clear and unchangeable illocutionary force, whereas articulated emblems behave like components of a communicative act. Comparable to words, they participate in communicative acts, but their performative character changes according to the context. In short, both approaches can be characterized by their semantic focus and their verbocentric point of view.
3.2. Semiotic perspectives

The following lines of research share a semiotic perspective on the class of emblems. The ontogeny of emblems as a result of ritualization has been exemplified with the method of rational reconstruction by Roland Posner (2002), explicating the cognitive as well as semiotic processes at work. The lexicalization process from a spontaneous gesture to an emblem or a highly conventional gesture is illustrated by Kendon (1988). Considering gestures as signs in their own right widens his perspective to include reflections on general properties of the gestural medium, an issue that, very surprisingly, is rarely addressed. As such, Kendon (1981, 1996, 2004a, 2004b and elsewhere) underlines the characteristics of gestures: the fact that gestures are silent, faster to produce than speech, visible, energetically cost-effective, have a greater immediate impact, do not rely on organized structures of attention and are hideable contributes greatly to the emergence and development of emblem repertoires, which seem to evolve around a restricted set of communicative functions (see below). A semiotic and also linguistic perspective is characteristic of the work of Sparhawk (1976, 1978) and, in a more general way, Calbris (1990, 2003). In her investigation of the formal features of Iranian emblems, using the methodology of Stokoe (1960) for sign languages and of Pike (1947), Sparhawk concludes that there is a set of iconic contrasting features in Persian emblems. That this set does not develop into a whole system of oppositions can be explained by the relatively small number of emblems, which makes such elaboration unnecessary. A part of Geneviève Calbris’ work can be seen in a similar vein. Among other things, such as a systematic analysis of semantic fields in gestures, she investigated the formal properties of French gestures, such as movement pattern, hand shape, and direction, which tend toward a systematic set of form features that is motivated and conventional at the same time, and as such culturally coded (see Calbris 1990). Barbara E. Hanna (1996) has redefined the emblem in a thoroughly semiotic way, drawing on the theories of Eco (1976), Peirce (1931–1958) and Jakobson (1960). She emphasizes the conventional character of emblems as signs within the field of wider semiotics. On the basis of her detailed and encompassing analysis, she concludes that the main characteristics of emblems, as a class of gestures with fuzzy edges, lie in their strong coding and their generality across contexts, where analogous links are unnecessary and questions of motivation and/or arbitrariness can be neglected.
3.3. Cultural, ethnographic and/or pragmatic perspectives

David Efron’s research on gestures was driven by the question of whether gestures were part of nature or part of human culture. He was not the first to investigate emblems from an ethnographic perspective (see Kendon 2004b for an overview) when he
compared the gesture use of US immigrants from Southern Italy with the gesture use of Jewish immigrants from Eastern Europe, but he was the first to apply a variety of empirical methods, for example direct observation combined with sketches, and – as a revolutionary novelty – the compilation and interpretation of film material recorded on the scene within natural communicative settings. Efron found that the use and especially the repertoire of conventional gestures differed greatly between the two groups investigated. While the Italians had an extensive and diversified repertoire of conventional gestures (151 gesture-words, not only emblematic but also physiographic gestures), the Jews hardly made any use of emblems at all; only six rather symbolic movements could be identified. The assimilated groups of both origins, though, had clearly taken over the US American standard displayed by their new status and/or social group and hardly used any emblematic gestures at all. Adam Kendon’s work starts out just where Efron’s ended. With great expertise in alternate sign languages, gesture and culture, his efforts have been directed towards the investigation of gestures in use, in their natural surroundings. He argued quite early (e.g. 1988) against a definitional division between so-called spontaneous or idiosyncratic gestures and conventional or quotable gestures. In a study in 1995, Kendon compared emblems with formally similar conventional gestures, such as the emblematic gesture of the mano a borsa with the finger bunch, a recurrent gesture according to Ladewig (2011a). Both gestures are used pragmatically: the mano a borsa to indicate a certain speech act (request, negative comment), the finger bunch to mark the topic of the utterance. This suggests that a pragmatic use of gestures might be related to a process of conventionalization. In her study of the “pistol hand” in Iran, Seyfeddinipur (2004) obtains similar results. In a comparative study of the gesturing of a Neapolitan and an Englishman, Kendon (2004a) confirmed Efron’s findings about the elaborate repertoire and usage of conventional gestures by the Italian. One possible explanation of this abundant gesture vocabulary seems to lie in what he calls the ecology of interaction in Naples (Kendon 2004a, 2004b). In a review of Morris et al.’s book about the origin and distribution of emblems in Europe (Morris et al. 1979, see below), Kendon (1981) summarized the functions of these gestures on the basis of existing emblem repertoires. As we have noted above, emblems are used to express communicative acts rather than being used as mere substitutes for words (see below for the functions). They are especially used for communicative acts of interpersonal control, for announcing one’s own current state, and as evaluative descriptions of the action or appearance of someone else. Two contextual studies stand out in this line of research: Joel Sherzer (1991) has undertaken a careful context-of-use analysis of the omnipresent “thumbs up gesture” in urban Brazilian settings, as has Heather Brookes for the “clever gesture” (2001) in the South African townships, followed by contextual studies of the “drinking”, “clever” and “money gesture” (2005), and the “HIV gesture” (Brookes 2011) in the same community.
Basing his analysis on the theories of Jakobson and Goffman, Sherzer shows that the “thumbs up” gesture combines the paradigmatic notion of “OK” or “positive” with the syntagmatic or interactive function of “social obligation met”. This combination accounts for the multifunctional use of this emblem, covering almost all functions that Kendon (1981) had extracted from the different repertoires. According to Sherzer, the main reason for its abundant use is that the gesture expresses a key concept of Brazilian culture, representing a friendly and positive linkage between people, “a public
self-image very important to Brazilians” (Sherzer 1991: 196), who actually live in a socially and economically divided society. A quite similar approach is taken in Brookes’ (2001) study of the “clever gesture” in South African townships, a gesture which expresses the concept of being clever in the sense of “streetwise” or “city slick”, an important cultural concept in township life. The different functions of this gesture are connected through its semantic core: a formal reference to seeing. The core, the situational context, and the facial expression constitute the gesture’s functions, as a warning, as a comment, or even as a greeting. In the case of the “HIV gesture” (Brookes 2011), we can actually observe the emergence, frequent use and decay of a gesture (see below). Here, the gesture’s use and prominence are shown to be a result of a taboo, which is connected to the connotations of sex and the severity of this widespread illness, together with social norms of communication, such as politeness. The pragmatic linguist Lluís Payrató (1993, 2003, 2004, 2008, volume 2 of this handbook; Payrató, Alturo, and Payà 2004) has not only compiled a basic repertoire of Catalan emblems, used by a certain social class in Barcelona, but has also introduced solid methods of pragmatics, sociolinguistics, cognitive linguistics, and the ethnography of communication to emblem research. For Payrató, the determinant feature of emblems is their illocutionary force. Using the speech act classification of Searle (1979) to investigate emblematic functions more closely, he confirmed Kendon’s results (1981) and, moreover, was able to show a tendency toward emblematization or conventionalization. Considering the data of the basic Catalan repertoire, it can be said that directive gestures, gestures for interpersonal control and gestures that are based on interactive actions seem to be the ones most likely to undergo emblematization (Payrató 1993: 206). With regard to the structure of an emblem repertoire, Payrató (2003) used prototype theory, family resemblance, and relevance theory (Sperber and Wilson 1995) to account for the different relationships and meanings of single gestures or their variants. On different occasions (Payrató 1993, especially 2001, 2004) he has argued for the implementation of diverse precise linguistic methods in gesture studies and for an opening of traditional linguistics towards the fundamental insights that gesture studies can contribute to the understanding of human communication; examples of such a fruitful integration can be seen throughout his work.
4. Emblem repertoires

The collection of emblems goes back to ancient times. Although Quintilian also addresses conventional gestures, among the first known repertoires are Bonifacio’s treatise on the art of signs (L’arte de’ Cenni, 1616; see Kendon 2004b: 23) and the works of John Bulwer (Chirologia and Chironomia [1644] 1972), both in the context of gestures as the natural language of mankind. In the 19th century, de Jorio’s ([1832] 2000) and Mallery’s ([1881] 2001) works stand out for their detailed description and ethnographic interest. Throughout the centuries there has been a great interest in collecting emblems as cultural gestures, and a detailed historical account would exceed the scope of this article, but Kendon (1981, 2004b) offers a good summary and Bremmer and Roodenburg (1992) present a diachronic view on gesture use. A good overview of emblem repertoires can be found in Kendon (1981, 1996, 2004b) and, with a detailed bibliography, in Payrató (1993); for the Hispanic tradition, see Payrató (2008).
Considering the number of repertoires, one should expect theoretical insights regarding this field. Unfortunately, this is not the case, one of the major reasons being the lack of a common set of techniques and criteria for the elicitation and handling of data (see Kendon 1981; especially Payrató 2001, 2004; Poyatos 1981) and for its embedding in cultural or linguistic theories. Often, the methods of data gathering are left unclear, exceptions being Brookes (2004), Johnson, Ekman, and Friesen (1975), Morris et al. (1979), Payrató (1993), and Sparhawk (1978). These reasons are partly responsible for the fact that concise cultural comparisons are rather scarce. Poyatos’ (1981) review of the findings and methods of Green (1968) and Efron (1972), and Kendon’s (1981) review of Morris et al. (1979) and Saitz and Cervenka (1972), are exceptions that engage with the material on a more theoretical level.
4.1. Mono-cultural repertoires

The following lists are by no means exhaustive, but they try to include the most important repertoires. Some European gesture repertoires are: Posner et al. (in preparation) for Berlin, Germany; Cestero (1999), Gelabert and Martinell (1990), Green (1968), and Poyatos (1970) for Spanish in Spain; Payrató (1993) for Catalan in Barcelona; Calbris (1990, including some contrastive findings with Hungarian and Japanese speakers), Calbris and Montredon (1986), and Wylie (1977) for French; Kreidlin (2004) and Monahan (1983) for Russian; Diadori (1990), Munari (1963), Ricci Bitti (1992), and Poggi (2002, 2004) for Italy; de Jorio (2000) and Paura and Sorge (2002) for Naples, Italy. For the USA there is the repertoire of Johnson, Ekman and Friesen (1975); for Santo Domingo see Pérez (2000); for South Africa see Brookes (2004); Sparhawk (1976, 1978, using an emblem list of Paul Ekman; see also Johnson, Ekman, and Friesen 1975) and Seyfeddinipur (2004) for Iran; Barakat (1973) for the gestures of the Levantine Arabs; and Tumarkin (2002) for Japanese gestures. Interestingly, and in accord with the cliché, the Mediterranean area, especially the countries with a Latin heritage, seems to be very attractive for gesture research (for historical continuities of Latin emblems, see de Jorio 2000 and Fornés and Puig 2008).
4.2. Cross-cultural and contrastive emblem collections

The following repertoires compare emblems either across different countries and different languages or, as in the case of Meo-Zilio and Mejía (1980) and Rector and Trigo (2004), within one language across different geographical areas. Sociolinguistic comparisons within one language and one area are still missing. Influential collections are: Saitz and Cervenka (1972, gestures from Colombia and the USA); Meo-Zilio and Mejía (1980, presenting more than 2000 gestures of Spain and Latin America, and, 1986, presenting the extralinguistic sounds accompanying the gestures); Rector and Trigo (2004, focusing on Portuguese on three different continents); Nilma Nascimento Dominique (2008, comparing Brazilian and Iberian Spanish emblems); Morris et al. (1979, a comparison of the origin, distribution and use of a sample of 20 different gestures in 25 different European countries); Kacem (2012, comparing German and Tunisian emblems, particularly in the school context); Creider (1977, who compared four different groups with different languages within Kenya); Efron (1972, a thorough analysis of emblem use by Southern Italians and by Lithuanian and Polish Jews; see above); and Safadi and Valentine (1990, comparing gestures and nonverbal behavior of the USA and Arab countries).
5. Cross-cultural findings

Cross-cultural findings regarding emblems can be subdivided into issues of varying complexity: differences in the meaning(s) of individual gestures and in their spread and distribution; differences in the cultural key concepts expressed by emblems; and, finally, differences in the use, size and diversity of a gestural repertoire. The fact that, on an individual level, emblems differ from one culture to another can be proven by the mere existence of culture-specific dictionaries or repertoires as listed above. It is, of course, difficult to know why a certain gesture exists in this form in one area and in a different form or with another meaning in another area. Gesturers rely on the iconic interpretation of signs, which leads to widespread and very popular speculations about the origins of emblems (see again Morris et al. 1979 for diverse etymological derivations). Although emblems have been defined as having a clear-cut translation, they are not restricted to one meaning, not only across cultures but also within one culture. As Adam Kendon (1981) observed, there seems to be a link between the range of meanings and their spread. The most widespread gestures in Morris et al. (1979), like the “nose thumb”, for instance, have only one or a few related meanings attributed to them, while the ones with a whole range of (unrelated) meanings are geographically entrenched. Reasons for the spread of emblems can be seen in culture contact, common history, common religion, beliefs, and traditions, a common language, common climate, traveling, and the influence of modern media. None of these factors acts exclusively or predictably. When a certain emblem is tied to a specific idiom, an interjection or the like, it might not cross linguistic borders. When an emblem is tied to religious beliefs, its spread will probably mirror the spread of that religion. In trying to answer the question of what keeps gestures from spreading, Morris et al. (1979: 263–265) propose, among other things, cultural prejudice barriers, linguistic barriers, ideological and religious barriers, geographical barriers, and gesture taboos. On a somewhat different level, the semantic characteristics of the existing repertoire may prevent or shape the adoption of an emblem. Close contextual and ethnographic studies such as those by Brookes (2001), Kendon (1995) and Sherzer (1991) have shown that the frequent use of certain emblems in a community may shed some light on important key concepts or concerns of this community. Being positive and meeting social obligations in everyday interaction is an important characteristic in urban Brazil, just as “doing” being clever and streetwise, and belonging to the right group, is in the townships of South Africa; both are concepts that need to be negotiated within everyday communication and interaction. Brookes’ (2004, 2005) collection of the gesture repertoire of South African urban young men has similar features. By investigating the gesturers and their gesture use in their everyday surroundings, distinguishing different forms and functions in various interactional contexts, she was able to get a very detailed hold on the characteristics of this special repertoire, which belongs to an overall communicative behavior in which gesturing is a skill to be mastered as an important part of male township identity. In order to gain cross-cultural insights, though, other, comparable investigations are required.
Cultural differences in gesture repertoires have been presented most notably by David Efron (1972). He observed that the Italians in his study used more pictorial gestures and had a far bigger repertoire than the Eastern European Jews. To Efron it
seemed that the Italian repertoire could serve as an exclusive means of communication, while the Jews hardly used any emblems at all, and if they did, they were not interpreted consistently. De Jorio (2000), Kendon (1995, 2004a, 2004b), and others have confirmed and described the size and diversity of the (Southern) Italian repertoire (see also Burke 1992). While de Jorio concentrated on the historical aspect of gestures, tracing them back to ancient times, Kendon developed a theory about the overall ecology of Naples as a reason for the abundance of conventional gestures. The dominance of a somewhat theatrical public life, the crowded streets, the overall noise, the interest in display and a tradition of secrecy all, according to Kendon, have their share in the emergence of this refined communication system.
6. Characteristics of emblems

The following sections will touch upon some of the most important characteristics that are at stake when discussing emblems: their semantic domains, the emergence and origin of emblems, their compositionality, conventionality, and their relation to speech.
6.1. Semantic domains: Meanings and functions

Emblems seem to cluster in certain semantic domains: they are used for certain functions and within certain contexts. David Efron (1972: 124) summarized the semantic domains of the Italian “gesture words”, which include gestures about bodily functions, moral qualities, values and attitudes, logical and affective states, and superstitious motives. When Johnson, Ekman and Friesen (1975: 343) compared their findings on American emblems with others, they observed that most emblems were found among greetings and departures, insults, interpersonal directions, replies, comments on one’s own physical state, expressions of affect, and appearance. Summarizing and systematizing the findings of different repertoires (Creider 1977; Efron 1972; Morris et al. 1979; Payrató 1993; Saitz and Cervenka 1972; Sparhawk 1978; Washabaugh 1986; Wylie 1977) on a more abstract level, Kendon (1981, 2004b) also noticed that the great majority of emblems can be divided into three major groups according to the messages they convey: Emblems are used, first, for interpersonal control, such as “stop”, “I am watching you”, or “be quiet”; secondly, for announcing one’s own current state, such as “I am hungry” or “I am late”; and thirdly, as an evaluative description of the action or appearance of someone else, as in “he is crazy” (see Kendon 2004b: 339). What is rare throughout most repertoires are pure nominal glosses, such as the money gesture, where thumb and index finger are rubbed repeatedly, or the scissors gesture, where index and middle finger reenact the opening and closing of a pair of scissors; exceptions are presented by Sparhawk (1978) and Brookes (2004). Brookes classified the South African emblem repertoire by adapting, among other analytical tools, the functional typology proposed by Poggi (1983, 1987), which divides emblems into holophrastic and lexical gestures and which she extends with concept gestures (see Kendon 2004a). Rather surprisingly, in the South African repertoire lexical gestures make up the majority of emblems; they deal with the actions and objects of everyday life, such as gestures referring to a phone, a pen, or cooking, and are used for rather practical reasons. But they also include gestures that reflect the young men’s township identity, relating to typical clothing, crime, and violence, and are
used for the identification of people, for commenting on them, and for threats and warnings. Brookes concludes that the functions of lexical gestures vary and that those emblems that are based on practical objects and actions fulfill a smaller range of functions than the others. Those lexical gestures seem to be close to what Kendon (2004b: chapter 10) has called narrow gloss gestures when they display substantive rather than pragmatic information. The other lexical gestures seem to be used as interactional moves, as described above; a detailed comparison of the functions of lexical emblems with other repertoires has not yet been undertaken. As mentioned above, Payrató (1993) used Searle’s (1979) speech act classification in order to describe the functions of the gestures in the overall Catalan repertoire, consisting of emblems, pseudoemblems and other items, with the latter two categories decreasing in conventionality and precision of meaning. Due to the fact that one gesture can have multiple illocutionary values, the categories (assertives, directives, etc.) were not seen as exclusive. The results reveal that most emblems have an assertive function, followed by directive and expressive functions. What is even more interesting is that the comparison of the three sets shows that the assertive function increases within the less conventional sets, while the directive function decreases. This suggests that there is a clear correlation between gestural functions analyzed in strict linguistic terms and conventionality: gestures with a directive function tend to undergo emblematization more easily, a trend which underlines once again the findings of Adam Kendon and others, namely that emblems cluster around functions that are concerned “with the immediate interaction situation” (Kendon 1981: 142).
6.2. Emergence and origin

As we have noted earlier, the exact origin of emblems remains unclear most of the time. Only for two gestures, so far, can we observe the process of the origin and emergence of a conventional gesture: the “V as victory sign”, as described by Schuler (1944), and “The three letters”, a gesture signifying HIV in South Africa, as described by Brookes (2011). The “V as victory sign” was invented as a secret sign to unite efforts in the fight against Nazi fascism. The “HIV gesture” emerged as a sign to communicate a relevant social (health) issue that was taboo (see Brookes volume 2). Both gestures are connected to the linguistic system. In the case of the “V as victory sign”, the fingers represent the letter “V” as in “victory”; in the second case, the counting gesture “three” was re-semanticised and linked to the verbal expression “the three letters”, meaning HIV, which eventually faded. Here we have a process of obfuscation that starts with the verbal use of an acronym, is then followed by a verbal reference to the mere number of letters in this acronym, and ends with the gestural representation of this number. In both cases, though, the community’s communicative need to address something privately and secretly, while respecting social norms, seems to be relevant not only for the emergence of the gestures but also for their change and durability. Similar processes can be assumed for the emergence of gestures for insults, directions, and the expression of attitudes, for which a medium that is quiet, quickly performed, visible and at the same time disguisable is most apt. Starting from the base of an emblem, Roland Posner (2002) describes in his rational reconstruction the ontogenesis of the emblem of “flapping one’s hand”. This gesture is used to express that something is hot, with all of its metaphorical and metonymical mappings,
and originates in the actual burning of one’s own hand as a bodily experience. Posner’s semiotic and ethological analysis of the emergence of an emblem as a process of ritualization is of a more general scope because it may hold for a wider range of emblems, namely those that are based on body movements of different sorts, regardless of their communicative function. The emergence of a historic emblem, the gesture of “bound hands”, from an action in a ritual context is described as a modulation in Goffman’s terms (see Goffman 1974) by Müller and Haferland (1997). Similar emblems, like the “fingertip kiss”, can be found in Morris et al. (1979). Having started this section with the emergence of emblems out of relevant communicative and social needs, we have come to the emergence of emblems from different bases, such as body movements or ritual actions. Further bases of emblems are other (co-speech) gestures, affect displays and expressions of feelings, adaptors, interpersonal actions, intention movements, (symbolic) objects, idioms and other linguistic expressions, and abstract entities (see Brookes 2011 for an overview, and Kendon 1981). Regarding the Catalan repertoire, Payrató (1993) concludes that gestures based on interactive actions are more likely than others to become emblematized, which, again, seems to match the overall assessment that emblems are concerned with the immediate interactional situation.
6.3. Conventionality

Emblems are conventional gestures and therefore differ from spontaneous, singular or creative co-speech gestures. The only study, to our knowledge, that treats the conventionality of emblems in depth is the one by Barbara E. Hanna (1996). As we have sketched above, according to Hanna, emblems are conventional signs and as such they are strongly coded. Accordingly, they have a standard of form and a notion of generality. For Hanna, an emblem is a replica of a type that is already known and that specifies the form and the meaning. Because of the strong coding, neither an analogous link to the object represented nor a specific context is necessary. While for Hanna convention is essential to the functioning of every sign, what makes emblems specific is “that the interpretation of emblems is governed by strong habits, that emblems are ruled by strong conventions, thus being conventionalized to the point of generality” (1996: 346). Emblems are a category of gestures with fuzzy edges, and conventionality is not exclusive to them. Kendon’s continuum, or continua (McNeill 1992, 2000; see also Kendon 2004b), was a way to determine the characteristics of different gesture types along a continuum comprising their relationship to speech, their linguistic properties, their relationship to conventions, and the character of their semiosis. According to this tradition, emblems lie in between the signs of a sign language and gesticulation, or spontaneous idiosyncratic gestures. The relationship between gestures and signs has been reconsidered recently by Wilcox (2005, this volume) and Kendon (2008), inter alia, insofar as the interconnections are foregrounded rather than the divide. This line of research might open up new perspectives on the question of the conventionality of emblems. From the perspective of co-speech gestures, Kendon’s work has been influential yet again. As mentioned above, Kendon’s (1995) comparative study of emblems and apparently conventional co-speech gestures that were used primarily with pragmatic functions initiated the investigation of what have been called recurrent gestures (Ladewig 2011a, volume 2; Müller 2010, this volume).
Although further research is needed, it appears that there are fundamental overlaps between emblems and recurrent gestures, such as the “palm-up-open-hand” gesture, which presents something on the open hand (see Müller 2004). An experimental study by Fey Parrill (2008) comparing the “palm-up-open-hand” gesture with the “OK” emblem reveals similarities and differences between these two types. While the emblem had a more restricted range of usages, both gestures were acknowledged to have formal variants. Interestingly, standards of well-formedness could not be confirmed for either gesture. More insights into the conventionality of emblems can be found in studies of the process of emblematization, such as those by Brookes (2011) and Payrató (1993). Comparing the three sets of gestures in his repertoire, Payrató was able to conclude that “directive gestures, interpersonal control gestures, and gestures based on interactive actions are the least restrained by the filters in the basic repertoire of Catalan emblems; therefore, they seem to be more likely than any others to reach the highest level of emblematization or conventionalization of body action” (Payrató 1993: 206).
6.4. Compositionality

Another characteristic of emblems is their basic compositionality, meaning that an emblem can consist of more than one formal gestural component, as, for example, when the “thumbs up” gesture is moved repeatedly towards the interlocutor, combining a significant hand configuration with a movement pattern (Calbris 1990, 2003; Kendon 1995, 2004b; McNeill 1992, 2000; Poggi 2002; Sparhawk 1978). The results of Sparhawk’s analysis show that although she could confirm a set of contrasting elements, even some minimal pairs, in the Persian data, they differ notably from the contrastive system of sign languages. Rebecca Webb (1996) has undertaken a similar approach toward so-called metaphoric gestures. Her findings suggest a small set of “morpheme-like” components that can be recombined with other components. Compositionality can also mean that an emblem consists of a hand gesture and a facial expression (Calbris 1990; Payrató 2003; Poggi 2002; Poyatos 1981; Ricci Bitti 1992; Sparhawk 1978, inter alia). The importance of the facial component in emblems has been shown by Poggi (2002: 80). In order to decide whether an emblem represents a fixed communicative act or not, she combined it with different performative faces to see whether they match or mismatch the gestural function. If variations are possible, it is an articulated emblem; if only one facial expression is valid, it is a holophrastic emblem. In a third interpretation, compositionality might mean that two emblems combine into a new one (Calbris 1990; Johnson, Ekman, and Friesen 1975; Morris et al. 1979). Such cases are very rare, but Morris et al. report a combination of the “flat hand-chop threat emblem” with the “ring”, and the combination of the “fig” or “horn gesture” with the “forearm jerk”, so as to double the impact of the insult (Morris et al. 1979: 267). Somewhat differently, compositionality may mean that an emblem is used with a sound, which can be paralinguistic (made by the mouth, by the hand or by another articulator), or with an interjection, for instance (Calbris 1990; Meo-Zilio 1986; Posner 2002; Poyatos 1981). In the case of the “flapping hand gesture” presented by Posner, the original sound of blowing onto the burnt hand and of taking a deep breath develops towards linguistic articulation, ending in two interjections, each of them leading to a different
interpretation of the overall gesture. While one refers to the danger of something, the other refers to its fascination.
6.5. Relation to speech

As with the issue of compositionality, the emblem’s relation to speech can be subdivided. First, and maybe foremost, it concerns the emblem’s relationship to the ongoing verbal discourse. As referred to earlier, the absence of speech has become a widely adopted criterion for emblems. But, as Poyatos (1981: 39–40) observed, it seems that emblems are generally used together with speech, at least within Hispanic culture, and Kendon points to similar observations for Naples (2008: 360). From the opposite perspective, the studies of Ladewig (2011b) and Andrén (2010) have shown that gestures that are not conventional can perfectly fit into syntactic slots where there is no speech, or that they can form utterances by gesture only. Interestingly, empirical investigations of the use of emblems with or without concurrent speech are still lacking. Another way of regarding the relation between emblems and speech concerns emblems that develop on the basis of an idiom, acronym or another linguistic expression and that are therefore language-dependent rather than culture-dependent, a criterion proposed by Payrató (2008). This division is especially useful when emblems are investigated cross-culturally in areas like the Mediterranean, where linguistic, cultural and national borders have spread in different ways and where culture and language contact take place on a daily basis. A last way of looking at emblems and their relation to speech is the comparison of the characteristics of the two communicative systems, speech and gesture. Brookes (2011) has addressed the attitude towards gesture, in contrast to verbal speech. The emblem for HIV, which is established by reference to the spoken acronym, benefits from the fact that “gesture is seen as a secondary, and indirect source, an act of ‘nonsaying’ and thereby respecting social values” (Brookes 2011: 211). Since gesture is not the only communicative system, users feel freer to play around with it (see Calbris 1990 for similar considerations). Not many authors have asked why people use emblems when they are so much like words, although this appears to be a central question. Adam Kendon (1981, 2004b) proposes the following properties of gesture as possible reasons: gestures are quick to perform, can express complex concepts and interactional moves in silence, and do not consume vast amounts of communicative energy. These features make gestures apt for “encounters that are fleeting” (Kendon 2004b: 343), but also for side exchanges within a conversation, or for secret exchanges. Gestures are visible, which makes them suitable for communicative exchanges at a distance.
7. Concluding remarks

Throughout this article, we have tried to sketch the characteristics of emblems as a presumed class of conventional gestures. What is fascinating about them is that they “act as conveyors of meaning in their own right”, as Kendon (1981: 146) puts it. Regarding them as mere word-substitutes has not only obscured their functions within communication, but has also distracted attention from their versatility and dynamics. More recently, studies from different theoretical backgrounds seem to have overcome this
constraint. In some areas, though, such as gesture acquisition, gesture processing, and most of the psycholinguistic tradition, emblems need to receive more attention. Besides more ethnographic and contextual studies, what is essential for future research on emblems is the development of scientific standards that allow for a true comparison of emblem repertoires.
Acknowledgements

I would like to thank Jeffrey Wollock for his insights on emblems in the Renaissance and Cornelia Müller for helpful comments on earlier versions of this chapter.
8. References Andre´n, Mats 2010. Children’s gestures from 18 to 30 months. Ph.D. thesis, Centre for Languages and Literature, Lund University. Bacon, Francis 1640. Of the Advancement and Proficience of Learning. Book VI. Oxford: Young and Forrest. Barakat, Robert A. 1973. Arabic gestures. Journal of Popular Culture 4: 749–793. Bremmer, Jan and Herman Roodenburg (eds.) 1992. A Cultural History of Gesture. Ithaca, NY: Cornell University Press. Brookes, Heather 2001. The case of the clever gesture. Gesture 1(2): 167–184. Brookes, Heather 2004. A repertoire of South African quotable gestures. Journal of Linguistic Anthropology 14(2): 186–224. Brookes, Heather 2005. What gestures do: Some communicative functions of quotable gestures in conversations among Black urban South Africans. Journal of Pragmatics 32: 2044–2085. Brookes, Heather 2011. Amangama amathathu ‘The three letters’. The emergence of a quotable gesture (emblem). Gesture 11(2): 194–217. Brookes, Heather volume 2. Gestures and taboo. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin: De Gruyter Mouton. Bulwer, John 1974. Chirologia or the Natural Language of the Hand, etc. (and) Chiromania or the Art of Manual Rhetoric, etc. Cabonville: Southern Illinois Press First published [1644]. Burke, Peter 1992. The language of gesture in early modern Italy. In: Jan Bremmer and Herman Roodenburg (eds.), A Cultural History of Gesture, 71–83. Ithaca, NY: Cornell University Press. Calbris, Genevie`ve 1990. The Semiotics of French Gestures. Bloomington: Indiana University Press. Calbris, Genevie`ve 2003. From cutting an object to a clear cut analysis: Gesture as the representation of a preconceptual schema linking concrete actions to abstract notions. Gesture 3(1): 19–46. Calbris, Genevie`ve and Jacques Montredon 1986. Des Gestes et des Mots Pour le Dire. Paris: Cle´ International. Castelfranchi, Cristiano and Domenico Parisi 1980. Linguaggio, Conoscenze e Scopi. Bologna: Il Mulino. Cestero, Ana Marı´a 1999. Repertorio Ba´sico de Signos no Verbales del Espan˜ol. Madrid: Arco Libros. Creider, Chet A. 1977. Towards a description of East African Gestures. Sign Language Studies 14: 1–20. De Jorio, Andrea 2000. Gesture in Naples and Gesture in Classical Antiquity. A translation of La mimica degli antichi investigata nel gestire napoletano (Fibreno, Naples 1832), with an introduction and notes by Adam Kendon. Bloomington: Indiana University Press. Diadori, Pierangela 1990. Senza Parole: 100 Gesti degli Italiani. Rome: Bonacci Editore. Eco, Umberto 1976. A Theory of Semiotics. Bloomington: Indiana University Press. Efron, David 1972. Gesture, Race and Culture. The Hague: Mouton First published [1941].
4. Emblems, quotable gestures, or conventionalized body movements Ekman, Paul and Wallace V. Friesen 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1: 49–98. Ekman, Paul and Wallace V. Friesen 1972. Hand movements. Journal of Communication 22: 353–374. Forne´s, Maria Anto`nia and Merce` Puig 2008. El Porque´ de Nuestros Gestos. La Roma de Ayer en la Gestualidad de Hoy. Palma: Edicions Universitat de les Illes Balears. Gelabert, Marı´a Jose´ and Emma Martinell 1990. Diccionario de Gestos con sus Usos Ma´s Usuales. Madrid: Edelsa. Goffman, Erving 1974. Frame Analysis. An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press. Green, Jerald R. 1968. Gesture Inventory for the Teaching of Spanish. Philadelphia: Chilton Books. Hanna, Barbara E. 1996. Defining the emblem. Semiotica 112(3/4): 289–358. Johnson, Harold G., Paul Ekman and Wallace Friesen 1975. Communicative body movements: American emblems. Semiotica 15(4): 335–353. Kacem, Chaouki 2012. Gestenverhalten an Deutschen und Tunesischen Schulen. Ph.D. thesis, Technical University, Berlin. URN: urn:nbn:de:kobv:83-opus-34158 URL: http://opus.kobv. de/tuberlin/volltexte/2012/3415/ Kendon, Adam 1981. Geography of gesture. Semiotica 37(1–2): 129–163. Kendon, Adam 1983. Gesture and speech: How they interact. In: John M. Wieman and Randall P. Harrison (eds.), Nonverbal Interaction, 13–45. Beverly Hills, CA: Sage. Kendon, Adam 1984. Did gesture have the happiness to escape the curse at the confusion of Babel? In: Aaron Wolfgang (ed.), Nonverbal Behavior: Perspectives, Applications, Intercultural Insights, 75–114. Lewiston, NY: C. J. Hogrefe. Kendon, Adam 1986. Some reasons for studying gestures. Semiotica 62: 3–28. Kendon, Adam 1988. How gestures can become like words. In: Fernando Poyatos (ed.), CrossCultural Perspectives in Nonverbal Behavior, 131–141. Toronto: C. J. Hogrefe. Kendon, Adam 1992. Some recent work from Italy on quotable gestures (emblems). Journal of Linguistic Anthropology 2(1): 92–108. Kendon, Adam 1995. Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics 23: 247–279. Kendon, Adam 1996. An agenda for gesture studies. Semiotic Review of Books 7(3): 7–12. Kendon, Adam 2004a. Contrasts in gesticulation. A British and a Neapolitan speaker compared. In: Cornelia Mu¨ller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures. Proceedings of the Berlin Conference, April 1998, 173–193. Berlin: Weidler. Kendon, Adam 2004b. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press. Kendon, Adam 2008. Some reflections on the relationship between ‘gesture’ and ‘sign’. Gesture 8(3): 348–366. Kreidlin, Grigori E. 2004. The Russian dictionary of Gestures. In: Cornelia Mu¨ller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures. Proceedings of the Berlin Conference, April 1998, 173–193. Berlin: Weidler. Ladewig, Silva H. 2011a. Putting a recurrent gesture on a cognitive basis. CogniTexte 6 http:// cognitextes.revues.org/406. Ladewig, Silva H. 2011b. Syntactic and semantic integration of gestures into speech: Structural, cognitive, and conceptual aspects. Ph.D. thesis, European University Viadrina, Frankfurt (Oder). Ladewig, Silva H. volume 2. Recurrent gestures. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. 
Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin: De Gruyter Mouton. Mallery, Garrick 2001. Sign Language among North American Indians. New York: Dover. First published [1881]. McNeill, David 1992. Hand and Mind. What Gestures Reveal about Thought, 2nd edition. Chicago: Chicago University Press.
I. How the body relates to language and communication McNeill, David 2000. Introduction. In: David McNeill (ed.), Language and Gesture, 1–10. Chicago: University of Chicago Press. McNeill, David 2005. Gesture and Thought. Chicago: University of Chicago Press. Meo-Zilio, Giovanni 1986. Expresiones extralingu¨´ısticas concomitantes con expresiones gestuales en el espan˜ol de Ame´rica. In: Sebastian Neumeister (ed.), Actas del IX Congreso de la Asociacio´n Internacional de Hispanistas. Meo-Zilio, Giovanni and Silvia Mejı´a 1980. Diccionario de Gestos: Espan˜a e Hispanoame´rica. Bogota: Instituto Caro y Cuervo. Monahan, Barbara 1983. A Dictionary of Russian Gestures. Ann Arbor, MI: Hermitage. Morris, Desmond 2002. Peoplewatching. London: Vintage. Morris, Desmond, Peter Collett, Peter Marsh and Marie O’Shaughnessy 1979. Gestures. Their Origins and Distributions. New York: Stein and Day. Mu¨ller, Cornelia 1998. Redebegleitende Gesten. Kulturgeschichte – Theorie – Sprachvergleich. Berlin: Arno Spitz. Mu¨ller, Cornelia 2004. The Palm-Up-Open-Hand. A case of a gesture family? In: Cornelia Mu¨ller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures. Proceedings of the Berlin Conference, April 1998, 233–256. Berlin: Weidler. Mu¨ller, Cornelia 2010. Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. In: Sprache und Literatur 41(1): 37–68. Munich: Fink. Mu¨ller, Cornelia this volume. Linguistics: Gestures as a medium of expression. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Mu¨ller, Cornelia and Harald Haferland 1997. Gefesselte Ha¨nde. Zur Semiose performativer Gesten. Mitteilungen des Deutschen Germanistenverbandes 44(3): 29–53. Munari, Bruno 1963. Supplemento al Dizionario Italiano. Milan: Muggiani. Nascimento Dominique, Nilma 2008. Inventario de emblemas gestuales espan˜oles y brasilen˜os. Language Design 10: 5–75. Parrill, Fey 2008. Form, meaning and convention: An experimental examination of metaphoric gestures. In: Alan Cienki and Cornelia Mu¨ller (eds.), Metaphor and Gesture, 225–247. Amsterdam: John Benjamins. Paura, Bruno and Marina Sorge 2002. Comme te L’aggia Dicere? Ovvero L’arte Gestuale a Napoli. Naples: Intra Moenia. Payrato´, Lluı´s 1993. A pragmatic view on autonomous gestures: A first repertoire of Catalan emblems. Journal of Pragmatics 20: 193–216. Payrato´, Lluı´s 2001. Methodological remarks on the study of emblems: The need for common elicitation procedures. In: Christian Cave´, Isabelle Guaitella and Serge Santi (eds.), Oralite´ et Gestualite´: Interactions et Comportements Multimodeaux dans la Communicacion, 262–265. Paris: Harmattan. Payrato´, Lluı´s 2003. What does ‘the same gesture’ mean? A reflection on emblems, their organization and their interpretation. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures, Meaning and Use, 73–81. Porto: Fernando Pessoa University Press. Payrato´, Lluı´s 2004. Notes on pragmatic and social aspects of everyday gestures. In: Cornelia Mu¨ller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures. Proceedings of the Berlin Conference, April 1998, 103–113. Berlin: Weidler. Payrato´, Lluı´s 2008. 
Past, present, and future research on emblems in the Hispanic tradition: Preliminary and methodological considerations. Gesture 8(1): 5–21. Payrato´, Lluı´s volume 2. Emblems or quotable gestures: Structures, categories, and functions. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Jana Bressem (eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.2.) Berlin: De Gruyter Mouton.
4. Emblems, quotable gestures, or conventionalized body movements Payrato´, Lluı´s, Nu´ria Alturo and Marta Paya` (eds.) 2004. Les Fronteres del Llenguatge. Lingu¨ı´stica I Comunicacio´ No Verbal. Barcelona: Promociones y Publicaciones Universitarias. Peirce, Charles Sanders 1960. Collected Papers of Charles Sanders Peirce (1931–1958), Volume I: Principles of Philosophy, Volume II: Elements of Logic, edited by Charles Hartshorne and Paul Weiss. Cambridge, MA: Belknap Press of Harvard University Press. Pe´rez, Faustino 2000. Diccionario de Gestos Dominicanos. Santo Domingo, Republica Domenicana: Faustino Pe´rez. Pike, Kenneth L. 1947. Phonemics: A Technique for Reducing Languages to Writing. Ann Arbor: University of Michigan Press. Poggi, Isabella 1983. La mano a borsa: Analisi semantica di un gesto emblematico olofrastico. In: Grazia Attili and Pio Enrico Ricci Bitti (eds.), Comunicare Senza Parole, 219–238. Rome: Bulzoni. Poggi, Isabella (ed.) 1987. Le Parole nella Testa: Guida a un’ Edicazione Linguistica Cognitivista. Bologna: Il Mulino. Poggi, Isabella 2002. Symbolic gestures. The case of the Italian gestionary. Gesture 2(1): 71–98. Poggi, Isabella 2004. The Italian gestionary. Meaning representation, ambiguity, and context. In: Cornelia Mu¨ller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures. Proceedings of the Berlin Conference, April 1998. Berlin: Weidler. Poggi, Isabella 2007. Mind, Hands, Face and Body: A Goal and Belief View of Multimodal Communication. Berlin: Weidler. Poggi, Isabella and Emanuela Magno Caldognetto 1997. Mani Che Parlano. Padova: Unipress. Poggi, Isabella and Marina Zomparelli 1987. Lessico e grammatica nei gesti e nelle parole. In: Isabella Poggi (ed.), Le Parole nella Testa: Guida a un’ Edicazione Cognitivista, 291–328. Bologna: Il Mulino. Posner, Roland 2002. Everyday gestures as a result of ritualization. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures. Meaning and Use, 217–230. Porto: Fernando Pessoa University Press. Posner, Roland, Reinhard Kru¨ger, Thomas Noll and Massimo Serenari in preparation. The Berlin Dictionary of Everyday Gestures. Berlin: Weidler. Poyatos, Fernando 1970. Kine´sica del espan˜ol actual. Hispania 53: 444–452. Poyatos, Fernando 1981. Gesture inventories: Fieldwork methodology and problems. In: Adam Kendon (ed.), Nonverbal Communication, Interaction, and Gesture. Selections from Semiotica, 371–400. The Hague: Mouton. Rector, Monica and Salvato Trigo 2004. Body signs: Portuguese communication on three continents. In: Cornelia Mu¨ller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures. Proceedings of the Berlin Conference, April 1998, 195–204. Berlin: Weidler. Ricci Bitti, Pio Enrique 1992. Facial and manual components of Italian symbolic Gestures. In: Fernando Poyatos (ed.), Advances in Nonverbal Communication, 187–196. Amsterdam: John Benjamins. Safadi, Michaela and Carol Ann Valentine 1990. Contrastive analysis of American and Arab nonverbal and paralinguistic communication. Semiotica 82(3–4): 269–292. Saitz, Robert L. and Edward J. Cervenka 1972. Handbook of Gestures. The Hague: Mouton. Schuler, Edgar A. 1944. V for victory: A study in symbolic social control. Journal of Social Psychology, 19: 283–299. Searle, John R. 1979. Expression and Meaning. Studies in the Theory of Speech Acts. Cambridge: Cambridge University Press. Seyfeddinipur, Mandana 2004. Meta-discursive gestures from Iran: Some uses of the ‘Pistol Hand’. 
In: Cornelia Mu¨ller and Roland Posner (eds.), The Semantics and Pragmatics of Everyday Gestures. Proceedings of the Berlin Conference, April 1998, 205–216. Berlin: Weidler. Sherzer, Joel 1991. The Brazilian thumbs-up gesture. Journal of Linguistic Anthropology 1(2): 189–197. Sparhawk, Carol M. 1976. Linguistics and gesture: An application of linguistic theory to the study of Persian emblems. Ph.D. thesis, The University of Michigan.
I. How the body relates to language and communication Sparhawk, Carol M. 1978. Contrastive-Identificational features of Persian Gesture. Semiotica 24(1/2): 49–85. Sperber, Dan and Deirdre Wilson 1995. Relevance: Communication and Cognition, 2nd edition. Oxford: Blackwell. Stokoe, William C. 1960. Sign Language Structure. Buffalo, NY: Buffalo University Press. Tumarkin, Petr S. 2002. On a dictionary of Japanese gesture. In: Monica Rector, Isabella Poggi and Nadine Trigo (eds.), Gestures. Meaning and Use. Porto: Fernando Pessoa University Press. Webb, Rebecca 1996. Linguistic features of metaphoric gestures. Ph.D. thesis, New York: University of Rochester. Wilcox, Sherman 2005. Routes from gesture to language. Revista da Abralin 4(1/2): 11–45. Wilcox, Sherman this volume. Speech, sign, and gesture. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body-Language-Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Wollock, Jeffrey this volume. Renaissance philosophy: Gesture as universal language. In: Cornelia Mu¨ller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill and Sedinha Teßendorf (eds.), Body-Language-Communication: An International Handbook on Multimodality in Human Interaction. (Handbooks of Linguistics and Communication Science 38.1.) Berlin: De Gruyter Mouton. Wundt, Wilhelm 1900. Vo¨lkerpsychologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte. Volume 1: Die Sprache, Part 1. Leipzig: Engelmann. Wylie, Lawrence 1977. Beaux Gestes: A Guide to French Body Talk. Cambridge, MA: Undergraduate Press.
Sedinha Teßendorf, Frankfurt (Oder) (Germany)
5. Framing, grounding, and coordinating conversational interaction: Posture, gaze, facial expression, and movement in space
1. Introduction
2. Background
3. Posture
4. Gaze
5. Facial expression
6. Movement
7. The interplay of body and talk in interaction
8. Conclusion
9. References
Abstract
This chapter examines several forms of embodied action in interaction. After discussing the historical emergence of an interactionist approach to embodied action from early figures in American anthropology, to the Palo Alto group, to present-day conversation analysts, it considers research on body posture, gaze, facial expression, and movement in space for their distinct contributions to the moment-by-moment production and management of conversational interaction. The chapter then examines the interplay of these particular forms of embodied action in recurrent interactional activities, using openings and storytelling as examples. As these examples demonstrate, the variety of embodied actions that participants make use of in interaction is part of an extraordinarily powerful yet nuanced toolkit for differentiating their work as particular sorts of participants (i.e., as speaker and recipient, storyteller and story recipient, doctor and patient, etc.), and in the particular sorts of interactional, interpersonal, and institutional business that comprises encounters.
1. Introduction The framing, grounding, and coordination of conversational interaction is a nuanced and complex enterprise, one that is made possible in large part by the relative flexibility of the human body. The head, eyes, mouth, face, torso, legs, arms, hands, fingers, and even the feet comprise moveable elements of the human body that can be arranged and mobilized in conjunction with talk in a potentially limitless variety of configurations. These configurations convey participants’ readiness to interact; the nature and quality of their relationships; the current and unfolding tenor of the immediate interaction; as well as the moment-by-moment differentiation of their identities as speakers and hearers, storytellers and story recipients, doctors and patients, and other such identities that are associated with a variety of interactional, interpersonal, and institutional activities in interaction. These are activities that are constituted via the particulars of participants’ speech and body movements as being recognizably about something, as being directed toward some end, and they comprise the frameworks for meaning and action in interaction.
2. Background The study of the body and speech in interaction as a detailed, naturalistic endeavor owes its beginnings to a confluence of figures from several disciplines and the emergence of technologies – namely, film and video – capable of capturing behaviors that appear one moment and disappear the next. By most accounts, the meeting of scholars at Stanford University’s Center for Advanced Study in the Behavioral Sciences in 1955, and extending via briefer meetings through the late 1960s, marks a pivotal point in this area of study (Kendon 1990; Leeds-Hurwitz 1987), as does the work of sociologist Erving Goffman, and later, of conversation analysts. The Stanford group, sometimes referred to as the Palo Alto group, included such early and mid-twentieth century figures as psychiatrists Frieda Fromm-Reichmann and Henry Brosin; linguists Charles Hockett and Norman McQuown; and anthropologists Alfred Kroeber, Gregory Bateson and Ray Birdwhistell, among others (for a more complete list, see Leeds-Hurwitz 1987). These figures had, in part, inherited their interest in culture and communication, including gesture and body motion, and the desire to study these phenomena closely through film, from an earlier generation of anthropologists who included Frans Boas and Edward Sapir, and, later, Margaret Mead (the latter figures, Boas’ students); they were also influenced by figures in cybernetics and information theory. The group’s initial goal was to use film to understand the role of nonverbal behavior in the treatment of psychiatric patients, but their work came to be associated
with the emergence of a research approach that treated communication as an integrated system of embodied as well as linguistic behaviors that take on meaning not in isolation, but in the contexts of other behaviors and events. The approach came to be known as the structuralist approach to communication or as "context analysis" (Kendon 1990: 15). Its influence can be seen in the work of a number of scholars working from the 1960s onward, including Albert Scheflen, William Condon, Starkey Duncan, and Erik Erikson (discussed in Leeds-Hurwitz 1987: 31–32). Kendon, one of the most prolific contributors to an interactionist approach to the study of body and speech, was a latecomer to the Palo Alto group and notes not only its influence on subsequent interactionist research, but also the influence of Erving Goffman and conversation analysts (Kendon 1990: 38–41 and 44–49). Taking a different tack, Erving Goffman approached the study of interaction not with the methods afforded by film and microanalysis, but rather through astute ethnographic observation and anecdote. He is to be credited with championing interaction as an object of inquiry in its own right within sociology (Goffman 1983), and with providing an analytic apparatus for understanding the basic organization of interaction through, for example, his conceptualizations of the working consensus, participation frameworks, the management of dominant and subordinate involvements, and face work (Goffman 1963, 1981). With Goffman, we understand the basic performativity involved in interactional processes. Goffman's students, Harvey Sacks and Emanuel Schegloff, sometimes at odds with their teacher, went on with their colleague Gail Jefferson to found conversation analysis (see e.g., Sacks, Schegloff, and Jefferson 1974; Schegloff and Sacks 1973). This approach, also informed by the central interest of ethnomethodology in everyday sensemaking (Garfinkel 1967; Heritage 1984), involves the rigorously empirical and detailed study of conversational interaction using recorded, naturally occurring data. It has, since the 1980s, influenced a number of investigations into speech and body movement carried out by such scholars as Charles and Marjorie Harness Goodwin, Christian Heath, Jürgen Streeck, Lorenza Mondada, Curtis LeBaron, and others. To be considered in this chapter are the findings of some of the scholars mentioned above and others: first for their work on how particular forms of body behavior contribute to interaction; and then, in more detail, for how these behaviors work in concert toward the accomplishment of recurrent interactional activities such as openings and storytelling. It should be noted that while much of the research on embodied action in interaction to date focuses primarily on native English speakers of American and British background, some of the work represented in this chapter draws on interactants from other countries, such as Japan, Italy, Finland, and Papua New Guinea, suggesting, perhaps, that at least some uses of the body in interaction are cross-culturally consistent.
3. Posture When two or more people interact, they arrange their bodies to communicate their orientations to engagement. The “ecological huddle” in Goffman’s terms (1961), or F-formation in Kendon’s (1990), is the positioning of one’s body toward another (or others) for interaction, and in ways that convey varying degrees of involvement in any number of other, possibly competing, activities and events. With their body arrangements,
participants create a "frame" of engagement and visibly display their alignment toward one another as interactants. As a number of researchers have noted, the human body provides a segmentally organized hierarchy of resources for communicating participants' engagement in interaction (Goffman 1963; C. Goodwin 1981; M. H. Goodwin 1997; Kendon 1990; Robinson 1998; Schegloff 1998). The head, torso, and legs especially can be arranged to convey different points of attentional focus: for example, the head can be oriented in one direction, the torso in another, and the legs in yet another. When these body segments are aligned in the same direction, a single dominant orientation is communicated; when they are not, they communicate multiple simultaneous orientations that are ranked in accord with the relative stability of each body segment. Put another way, the most stable of these segments, the legs, communicates a person's dominant orientation relative to the torso and the head, while the torso communicates a more dominant orientation relative to the head. Schegloff (1998) writes that when these body segments are arranged divergently, and as such communicate multiple simultaneous involvements, they convey a postural instability that projects a resolution in terms of moving, for example, the least stable segment, the head, back into alignment with the more stable segments, the torso and the legs. Thus, a person's fleeting and transitory involvements are communicated as such relative to their more primary and long-term involvements, and this has important consequences for the forwarding and, alternately, the holding off of interaction. Schegloff (1998) finds, for example, that co-participants to a conversation treat the unstable, or "torqued," body posture of their interlocutors as cause for limiting expansion of a sequence of talk, as when the co-participant turns her head but not her lower body to engage in talk; alternately, he finds that the alignment of the lower body with the torqued head can be cause for sequence expansion. As another case in point, in medical consultations, Robinson (1998) reports that patients entering the consultation room may find that the doctor, who is seated at his desk, has turned his head to greet them, although his legs and torso remain directed forward, oriented to the medical records on the desk in front of him. In this way, the doctor's body, representing a hierarchy of differentially aligned segments, projects his initial engagement with the patient as fleeting, and a return to the business with the records as an impending and dominant involvement – although in the activity context of this encounter, a return to interaction with the patient is projectably imminent. Patients are sensitive to this matter: when the doctor turns back to the medical records, they occupy themselves with such activities as settling in (e.g., shutting the door and taking a seat). When the doctor is ready to begin the business proper, he will typically turn and orient his entire body toward the patient, that is, with head, torso, and lower body simultaneously aligned, and produce a topic-initiating utterance such as "what's the problem?" or "what can we do for you today?"
4. Gaze Gaze, too, is an integral element in the communication of participants’ orientations to engagement, and works in concert with body posture. Looking at another, and another’s looking back, is a critical step in the move from “sheer and mere co-presence” to ratified mutual engagement, and people may avoid others’ gazes, and/or avoid directing
their own gaze to others, to discourage interaction (Goffman 1963). The management of speaker-recipient roles, once interaction has begun, has been taken up by a number of researchers (e.g., Argyle and Cook 1976; Bavelas, Coates, and Johnson 2002; Egbert 1996; C. Goodwin 1981; M. H. Goodwin 1980; Hayashi 2005; Kendon 1967, 1990; Kidwell 1997, 2006; Lerner 2003; Rossano, Brown, and Levinson 2009; Streeck 1993, 1994). Speakers and recipients do not typically gaze at one another continuously, but intermittently: recipients gaze toward speakers as an indication of their attentiveness to talk, and speakers direct their gaze to recipients to show that talk is being addressed to them; recipients typically gaze for a longer duration at speakers, and speakers for a shorter duration (they tend to look away during long turns at talk, as when telling a story; C. Goodwin 1981; Kendon 1967, 1990). When speakers do not have the gaze of a recipient, they may produce cut-offs, re-starts, and other dysfluencies until they secure the recipient's gaze (C. Goodwin 1981). Speakers may also produce such actions as tapping or touching the other, bringing their own face and eyes into the other's line of regard, and, in some cases, even taking hold of the other's face and turning it toward their own; these are actions that are linked with efforts to remediate an encounter with a resistant and/or unwilling interactant (Kidwell 2006). Recipients, too, may take action to get a speaker to begin talk or address ongoing talk to them, for example by directing their gaze (i.e., a show of recipiency) to the would-be speaker, making a sudden body movement, or contacting the other's body via some manner of touching (Heath 1986; Kidwell 1997).
5. Facial expression The face, while an important topic of study in psychological approaches to body communication (especially, for example, in the work of Ekman), has often been overlooked as an element in the coordination and management of conversational interaction. The great mobility of the face, along with the speed (i.e., relative to other body parts) with which it can be deployed in interaction, makes it an especially useful resource as both a stand-in for, and elaborator of, talk. There is a rich line of research into the syntactic and semantic functions of the face in conjunction with speech (e.g., Bavelas and Chovil 1997; Birdwhistell 1970; Chovil 1991; Ekman, Sorenson, and Friesen 1969). However, the face can also be used as a means of regulating talk and other interactional activities. Kendon (1990) writes of the face in a "kissing round" between a man and a woman sitting on a park bench, showing how the face, particularly the woman's, regulates the approach and orientations of the male. While Kendon notes a number of types of facial expressions (for example "dreamy look" and "innocent look"), he specifically notes that a closed-lip smile by the woman invites kissing, while a teeth-exposed smile does not. In this way, the woman's face serves as a resource for projecting not only what she will do next (i.e., kiss or not kiss), but also what she will allow the male to do. In conversational openings, Pillet-Shore (2012) notes that the face, particularly smiling in conjunction with greetings, is used to "do being warm" at the outset of an encounter, and invites further interaction. The face may be displayed prominently in interaction, particularly for the role it plays in the expression of positive affect, but it may also be shielded in interaction, particularly when it is used in expressions of grief. Thus, participants will shield their eyes with their hands or a tissue, turn away, or lower their heads to prevent others from seeing their faces during emotionally painful moments
(Beach and LeBaron 2002; Kidwell 2006). The face itself, as Goffman noted, is one of "the most delicate components of personal appearance" and integrally involved in the interactional work by which participants show themselves via constant control of their facial movements to be situationally present, or "in play" and alive to the obligations of their involvements with others (Goffman 1963: 27). In a more recent study of the face in interaction, Ruusuvuori and Peräkylä (2009) have demonstrated that facial displays not only accompany specific elements of talk, but can project and follow these elements both in redundant and non-redundant ways, in effect, making use of the face to extend the temporal boundaries of an action beyond a turn at talk. They examine the role of the face in storytelling assessments and other types of tellings. As they report, the face may be used by the speaker to foreshadow a stance toward something being described, in this way preparing the listener for how to respond. Following an utterance, the face may be used by a speaker to pursue uptake by a listener who fails to respond, as when a speaker continues to smile after completing talk. They also demonstrate that the listener may respond not only verbally in a way that shows understanding and affiliation with a speaker's stance, but also with a like facial expression: in other words, listeners may reciprocate a speaker's facial expression as a means of producing a reciprocating stance. It has also been reported that listeners may use facial actions in conjunction with acknowledgement tokens and continuers such as "mh hm" and "okay", or as stand-alone responses to another's talk (i.e., without accompanying verbalizations; cf. Bavelas and Chovil 1997).
6. Movement Movement is not so much an overlooked element in the coordination and management of conversational interaction as it is a taken-for-granted one. Someone’s approach toward another, like gaze directed at another, is one of the most basic and pervasive ways by which interaction is initiated and, with the person’s movement away, terminated – a particularly powerful resource for even very young children, who are in the pre- and early-verbal stages of language use (Kidwell and Zimmerman 2007). Body movement as an interactional resource has been considered in other ways as well. For example, police may strategically move their bodies toward a suspect in conjunction with their talk to prompt a confession (LeBaron and Streeck 1997); in a public place such as a museum, visitors are attracted to exhibits that others are attracted to, and move into the spaces left by others when they move on to the next exhibit (Lehn, Heath, and Hindmarsh 2001). Regarding a fundamental organization of body movement, Sacks and Schegloff (2002) showed that moving bodies, including moving hands and limbs, typically return to the place from which they started, that is, to a “home position” (Sacks and Schegloff 2002). During conversation, participants may exhibit “interactional synchrony”, that is, a roughly similar flow of body movements such as postural shifts, positioning of limbs, and head movements by which they make visible and regulate their involvement with one another (Condon and Ogston 1966; Kendon 1970, 1990). Head movements have been found to have quite diverse functions. As a semantic matter, the head can be used with or without speech to signify an affirmative or negative response. McClave (2000) reports on a number of additional semantic patterns: for example, in conjunction with certain words or phrases, lateral head sweeps can be used to show inclusivity;
lateral head shakes can be used to show intensity, disbelief, and/or uncertainty (M. H. Goodwin 1980). Head movements are produced with greater frequency by speakers, and speakers' head movements may trigger listeners' head movements (McClave 2000: 874–875). Listeners also produce head movements as a demonstration of their attention to talk. Head nods may be produced alone, or in conjunction with acknowledgement tokens and continuers such as "mh hm" and "okay". Stivers (2008) notes a distinct difference between the use of head nods and verbal tokens. Specifically, head nods that are placed in the mid-telling position of a story demonstrate an affiliative stance toward that displayed via the speaker's formulation of story events, while verbal tokens demonstrate alignment. Listeners may also make more affective responses with their heads, as when they make a sudden jerk back to show surprise, or a particular sort of comprehension, what Goodwin has called "take" (M. H. Goodwin 1980: 309).
7. The interplay of body and talk in interaction To be considered next is how the interplay of the body behaviors discussed here – posture, gaze, facial expression, and movement – contribute to the constitution of important interactional activities. Openings in interaction and storytelling will be examined for how participants mobilize these body behaviors, in conjunction with talk, to set up and coordinate frameworks for distinct types of activities with distinct types of participation opportunities for those involved.
7.1. Openings In interaction, participants must have some way of beginning an encounter, that is, of indicating their interest in interacting, and their availability and willingness to do so. Openings are critical to the initiation of interaction, not only in terms of coordinating participants’ basic entry into an encounter, but also in terms of proposing something about the nature of participants’ relationship to one another, the business at hand, and, often, the tenor of the interaction to come.
7.1.1. Availability: Establishing and managing physical co-presence Before interaction can begin in face-to-face situations, participants must first come into one another’s physical presence. In this way, they make visible their availability to interact, and can monitor others for their availability and readiness. For example, in medical encounters, a patient coming through the door of the consultation room makes her or himself available to interact with the doctor (Heath 1986; Robinson 1998). In service encounters, a party’s approach toward an information desk is the first step toward the initiation of interaction with the receptionist (Kidwell 2000). The establishment of physical co-presence may be thought of as a pre-initiating move on the way to the initiation of interaction for any number of face-to-face activities (Heath 1984: 250; also, Schegloff 1979). However, as Goffman (1963) writes, the management of physical co-presence itself is an intricate enterprise. When people are in one another’s presence, whether intending interaction or not, they monitor – or “glean” as Goffman writes – information about one another (Goffman 1959). One can imagine that such situations include activities
like waiting for a bus or sitting in a class. These sorts of scenarios represent, in Goffman's terms, the realm of unfocused interaction: situations in which, although people are co-present and attending consciously or unconsciously to any number of embodied or otherwise unspoken communication phenomena by others, ratified social interaction has yet (if at all) to take place (Goffman 1963).
7.1.2. Gaze co-ordination The move to ratified social interaction, that is, in Goffman’s terms to focused interaction, is one in which participants cooperate to sustain a single focus of attention, typically through talk, but also in such activities as playing a game of chess, dancing, performing surgery and any other activity that requires participants’ intentionally coordinated joint action (Goffman 1963). Either concurrently with the establishment of co-presence, or shortly thereafter, a next move toward the initiation of ratified interaction is through participants’ coordination of gaze. Indeed, people can be co-present and withhold gaze from another either because they do not intend interaction, or because they see that another is pre-occupied and not yet ready for interaction. People in public settings may also quickly gaze at another, and then gaze away, performing toward an unacquainted other a moment of “civil inattention”, an act by which they acknowledge another’s presence but convey that they do not intend interaction (Goffman 1963). The establishment of co-presence plus the coordination of mutual gaze are necessary pre-conditions for parties’ entry into ratified social interaction, that is, in the move from “mere and sheer” physical co-presence, to social co-presence (Goffman 1963; Mondada 2009; Pillet-Shore 2011). Once these pre-conditions have been satisfied, participants then work to begin the interaction proper. One of the most pervasive ways this is accomplished is through greetings.
7.1.3. Greetings Greetings may be verbal (Hello! Hey! How's it goin'?) and/or embodied actions (waves, head tosses, handshakes, hugs) that also typically include participants' orientation of their eyes and bodies toward one another and facial displays (e.g., smiles and eyebrow flashes). Greetings proffered and greetings returned are a way that parties acknowledge one another when they come into one another's presence, a fundamental means of "person appreciation", but they also open up the possibility of further interaction and are perhaps the most frequent way that participants begin interaction. Through their lexical and intonational verbal production, in conjunction with their embodied components, greetings reflect and propose something about the character of a relationship: for example, are participants strangers or casual acquaintances, or are they good friends who have not seen one another in a long time? Kendon and Ferber (1973; Kendon 1990), in their classic paper on greetings, describe a recurrent sequence of behaviors by which participants come to greet one another in naturally occurring social gatherings. In the backyard birthday party example, guest and host proceed through distinct phases, what are termed the "distance salutation" (made when the guest first enters through the backyard gate), the "approach", and the "close salutation". In the distance salutation phase, behaviors include sighting, in which participants visually locate one another and typically wait for a return sighting, followed by greeting displays (e.g., a hand wave, a head toss, and/or a "hello") and accompanying
smiles. The approach phase may occur concurrently, or shortly thereafter. This phase is characterized by participants looking away from one another, especially as they get close; participants may also engage in self-grooms (e.g., smoothing their hair, adjusting their clothing) and "body crosses" (crossing one or both arms or hands in front of the body) as they approach. Once participants are near enough to begin the close salutation, they again look at one another and produce another greeting, often followed by or produced simultaneously with such actions as a handshake, embrace, kiss, and/or other sort of touching; this phase is also accompanied by smiles. The authors note that greeting interactions are interrelated with participants' roles as guests and hosts, their degree of familiarity, and their relational status. For example, a host traveling far from the center of the party to greet a guest creates a display of respect and enthusiasm at their arrival; guests entering into the center before being greeted create a show of familiarity, while those who wait on the fringe to be greeted first show relative unfamiliarity. Indeed, the very first moves in face-to-face openings enable participants to discern whether or not they are acquainted with someone and to design their greetings and next moves accordingly. As Pillet-Shore (2011) writes, gaze is used to do critical identification/recognition work, that is, to discern in the very first moments of interaction whether or not participants who are coming into one another's presence already know each other. Participants' distinction between the acquainted and the unacquainted, and the consequences for subsequent interaction, is a major organizing feature of social behavior, as noted by Goffman (1963). Pillet-Shore (2012) documents the systematicity by which participants, upon visually locating another as an acquaintance or not, produce greetings that are recipient designed. Greetings between acquainted parties are produced at a relatively louder volume than surrounding speech, and make use of such features as a higher pitch, "smiley voice" intonation in conjunction with smiles, continuing and rising final intonation, sound stretches, and multiple greeting components (verbal and embodied); these latter two features enable greetings to be produced in overlap. Hence, acquainted participants "do being warm" and index their familiarity, in this way conveying that their identification/recognition work has been successful and that they may move forward in the interaction.
7.2. Storytelling One very common sort of activity that participants engage in in interaction is storytelling. As Sacks (1972) and Jefferson (1978) noted, stories have a distinct structure that consists of (i) initiation, (ii) delivery, and (iii) reception by the story recipients. Each of these components is realized via the moment-by-moment changing configurations of participants’ body behaviors and talk – story teller and story recipients’ – that work to create and sustain the participation frameworks of any given moment (C. Goodwin 1984; M. H. Goodwin 1997). In the following case, for example, three women (A, T, and R) are sitting around a table. They have been playing a board game, and there has been a lull in their talk,
when one of them, A, turns to another, T, and initiates a story (discussed in Kidwell 1997). Transcription conventions can be found in the appendix.
1 A: =*did I **tell you that I met another recovering
2    M-A-***S-N volunteer this wee:k?
((* A turns, shifts gaze to T; **slaps table/ T and R shift their gaze to A just after; ***T shakes head "no"))
A's actions at line 1, that is, her turn toward T in addition to slapping the table and directing her gaze at her, are embodied techniques that, in conjunction with her talk, designate T as her primary addressed recipient. A's actions, however, have the effect of eliciting a display of recipiency from both T and R: they both shift their gaze to A although it is only T (the addressed recipient) who answers the story initiation question that A has posed, which she does with a negative head nod. Story initiations are designed to separate knowing from unknowing recipients, prepare recipients for the kind of story that is being offered, and set up an extended turn space for the teller to deliver the story (C. Goodwin 1981; Jefferson 1978; Sacks 1972). T's action (the negative head nod) informs A that she hasn't heard the story, and, thus, functions as a go-ahead for A to tell the story. Getting the go-ahead, A assumes a distinct teller's posture by returning her gaze back to the center of the table and adjusting her clothing; she then places her elbows on the table, and rests her head in her hands as she speaks at line 6 (see also C. Goodwin 1984). She maintains this position until she once again shifts her gaze to T at line 8. Of note, however, is that it is R, the unaddressed recipient, who responds with continuers and head nods at lines 10 and 12.
6  A: *˚someho:w,˚ (.) I don't even remember how I
7     O:h cuz someone, (0.2) ˚ok˚ this is a guy who's
8     organizing **queery? which is this new
9     r[adio show
10 R: ***[hmm hmm
11 A: ****on WORTS= ((radio station name))
12 R: ***=hm[m hmm
13 A: [okay
((* A places elbows on table, head in hands; **shifts gaze to T; ***onset of R's head nods; ****onset of T's head nods))
The gaze shift by A at line 8 is done as part of a reference check: A wants to confirm that T knows what she is talking about and, in addition to shifting her gaze to T, she produces the word “queery” with a try-marked, rising intonation (Sacks and Schegloff 1979). Getting no indication of recognition from T, she continues with an explanation of what “queery” means in lines 8, 9, and 11 while she looks at T. Although A has not directed any of her gazes to R, and thus has not treated her as someone for whom the story is being told, it is R who responds. R produces continuers that, along with her head nods and gaze toward A, displays – and claims – recipient status; T, for her part, makes only head nods at line 11 (Schegloff 1982). Moreover, R’s continuers and head nods, produced in overlap with A’s explanation rather than at turn construction
unit boundaries, display that she already knows what A is talking about, a way of demonstrating that the story-in-progress is relevant for her, too. In sum, A uses body positioning, movement, and gaze to designate T as her primary addressed recipient. However, R challenges this framework with her embodied actions and vocalizations. By positioning her head nods and continuers as she does, R shows that not only is she a recipient, but that certain story elements are familiar to her, too, and, therefore, that she is entitled to being addressed as a recipient. These moves and other moves by R (not shown here) work to re-shape the participation framework such that A subsequently (albeit briefly) accommodates her as a story recipient.
8. Conclusion As has been discussed here, the human body provides participants with a critical resource for accomplishing and differentiating their work as particular sorts of participants in interaction (i.e., speaker and recipient, storyteller and story recipient, doctor and patient, etc.), and in the variety of interactional, interpersonal, and institutional activities that comprise encounters. The sensitivities of participants to these body behavioral resources speak to the fundamental sociality of a social species in which even the most minimal of movements of the body, face, eyes, hands, head and so on are of consequence for what they understand about what others are doing, and what they themselves are expected to do, upon occasions of their coming together. Together with talk, these resources are part of an extraordinarily powerful yet nuanced toolkit for going about the complex business of being human.
Appendix: Transcription conventions
Below is a list of transcription conventions developed by Gail Jefferson and used in conversation analytic transcriptions of talk. Embodied actions are described in double parentheses following talk; an asterisk (*) designates the point of onset relative to the talk. For other systems of representing embodied action, see C. Goodwin (1981), Heath (1986), and Robinson (1998).
[         brackets indicate overlapping talk
( )       talk heard, but not understood
(word)    a guess at the talk
(.)       very brief pauses
(1.0)     measured silence
wor:d     colon(s) indicates elongation of prior sound
word–     dash indicates cut-off word
=word     equals sign indicates latched speech
word      underline indicates stress on word
WORD      extra loud volume
˚word˚    spoken softly
↑↓        indicate rise and fall in pitch, respectively
.hh       inbreath (preceded by period)
hh        outbreath
9. References Argyle, Michael and Mark Cook 1976. Gaze and Mutual Gaze. Cambridge: Cambridge University Press. Bavelas, Janet, Linda Coates and Trudy Johnson 2002. Listener responses as a collaborative process: The role of gaze. Journal of Communication 52(3): 566–580. Bavelas, Janet and Nicole Chovil 1997. Faces in dialogue. In: James A. Russell and Jose´ Miguel Fernandez-Dols (eds.), The Psychology of Facial Expression, 334–346. Cambridge: Cambridge University Press. Beach, Wayne A. and Curtis D. LeBaron 2002. Body disclosures: Attending to personal problems and reported sexual abuse during a medical encounter. Journal of Communication 52: 617–639. Birdwhistell, Ray L. 1970. Kinesics and Context: Essays on Body Motion Communication. Philadelphia: University of Pennsylvania Press. Chovil, Nicole 1991. Discourse-oriented facial displays in conversation. Research on Language and Social Interaction 25: 163–194. Condon, William S. and William D. Ogston 1966. Sound film analysis of normal and pathological behavior patterns. Journal of Nervous and Mental Disease 143(4): 338–347. Egbert, Maria 1996. Context sensitivity in conversation analysis: Eye gaze and the German repair initiator “bitte.” Language in Society 25: 587–612. Ekman, Paul, E. Richard Sorenson and Wallace V. Friesen 1969. Pan-cultural elements in facial displays of emotions. Science 164(3875): 86–88. Garfinkel, Harold 1967. Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall. Goffman, Erving 1959. Presentation of Self in Everyday Life. New York: Doubleday Anchor. Goffman, Erving 1961. Encounters: Two Studies in the Sociology of Interaction. Indianapolis: Bobbs-Merrill. Goffman, Erving 1963. Behavior in Public Places. New York: Free Press. Goffman, Erving 1981. Forms of Talk. Philadelphia: University of Pennsylvania Press. Goffman, Erving 1983. The interaction order. American Sociological Review 48: 1–17. Goodwin, Charles 1981. Conversational Organization: Interaction between Speakers and Hearers. London: Academic Press. Goodwin, Charles 1984. Notes on story structures and the organization of participation. In: John Maxwell Atkinson and John Heritage (eds.), Structures of Social Action: Studies in Conversation Analysis, 225–246. Cambridge: Cambridge University Press. Goodwin, Marjorie Harness 1980. Processes of mutual monitoring implicated in the production of description sequences. Sociological Inquiry 50: 303–317. Goodwin, Marjorie Harness 1997. By-play: Negotiating evaluation in storytelling. In: Gregory R. Guy, Crawford Feagin, Deborah Schiffrin and John Baugh (eds.), Toward a Social Science of Language: Papers in Honor of William Labov, Volume 2, 77–102. Amsterdam: John Benjamins. Hayashi, Makoto 2005. Joint turn construction through language and the body: Notes on embodiment in coordinated participation in situated activities. Semiotica 156: 21–53. Heritage, John 1984. Garfinkel and Ethnomethodology. Cambridge: Polity Press. Heath, Christian 1984. Talk and recipiency: sequential organization in speech and body movement. In: John Maxwell Atkinson and John Heritage (eds.), Structures of Social Action: Studies in Conversation Analysis, 247–265. Cambridge: Cambridge University Press. Heath, Christian 1986. Body Movement and Speech in Medical Interaction. Cambridge: Cambridge University Press. Jefferson, Gail 1978. Sequential aspects of story telling in conversation. In: Jim N. Schenkein (ed.), Studies in the Organization of Conversational Interaction, 213–248. New York: Academic Press. Kendon, Adam 1967. 
Some functions of gaze direction in social interaction. Acta Psychologica 26: 22–63. Kendon, Adam 1970. Movement coordination in social interaction. Acta Psychologica 32: 1–25.
Kendon, Adam 1990. Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press. Kendon, Adam and Andrew Ferber 1973. A description of some human greetings. In: Richard Phillip Michael and John Hurrell Cook (eds.), Comparative Ecology and Behavior of Primates, 591–668. London: Academic Press. Kidwell, Mardi 1997. Demonstrating recipiency: Resources for the unacknowledged recipient. Issues in Applied Linguistics 8(2): 85–96. Kidwell, Mardi 2000. Common ground in cross-cultural communication: Sequential and institutional contexts in front desk service encounters. Issues in Applied Linguistics 11(1): 17–37. Kidwell, Mardi 2006. "Calm Down!": The role of gaze in the interactional management of hysteria by the police. Discourse Studies 8(6): 745–770. Kidwell, Mardi and Don Zimmerman 2007. Joint attention as action. Journal of Pragmatics 39(3): 592–611. Leeds-Hurwitz, Wendy 1987. The social history of the "Natural History of an Interview": A multidisciplinary investigation of social communication. Research on Language and Social Interaction 20: 1–51. LeBaron, Curtis D. and Jürgen Streeck 1997. Built space and the interactional framing of experience during a murder interrogation. Human Studies 20: 1–25. Lehn, Dirk, Christian Heath and Jon Hindmarsh 2001. Exhibiting interaction: Conduct and collaboration in museums and galleries. Symbolic Interaction 24(2): 189–216. Lerner, Gene H. 2003. Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society 32: 177–201. Lerner, Gene H., Don Zimmerman and Mardi Kidwell 2011. Formal structures of practical tasks: A resource for action in the social lives of very young children. In: Charles Goodwin, Jürgen Streeck and Curtis D. LeBaron (eds.), Multimodality and Human Activity: Research on Human Behavior, Action, and Communication, 44–58. Cambridge: Cambridge University Press. McClave, Evelyn Z. 2000. Linguistic functions of head movements in the context of speech. Journal of Pragmatics 32: 855–878. Mondada, Lorenza 2009. Emergent focused interactions in public places: A systematic analysis of the multimodal achievement of a common interactional space. Journal of Pragmatics 41(10): 1977–1997. Pillet-Shore, Danielle 2011. Doing introductions: The work involved in meeting someone new. Communication Monographs 78(1): 73–95. Pillet-Shore, Danielle 2012. Displaying stance through prosodic recipient design. Research on Language and Social Interaction 45(4): 375–398. Robinson, Jeffrey David 1998. Getting down to business: Talk, gaze, and body orientation during openings of doctor-patient consultations. Human Communication Research 25: 97–123. Rossano, Federico, Penelope Brown and Stephen C. Levinson 2009. Gaze, questioning and culture. In: Jack Sidnell (ed.), Conversation Analysis: Comparative Perspectives, 187–249. Cambridge: Cambridge University Press. Ruusuvuori, Johanna and Anssi Peräkylä 2009. Facial and verbal expressions in assessing stories and topics. Research on Language and Social Interaction 42(4): 377–394. Sacks, Harvey 1972. On the analyzability of stories by children. In: John J. Gumperz and Dell Hymes (eds.), Directions in Sociolinguistics: The Ethnography of Communication, 325–345. New York: Rinehart and Winston. Sacks, Harvey and Emanuel A. Schegloff 1979. Two preferences in the organization of reference to persons in conversation and their interaction.
In: George Psathas (ed.), Everyday Language: Studies in Ethnomethodology, 15–21. New York: Erlbaum. Sacks, Harvey and Emanuel Schegloff 2002. Home position. Gesture 2: 133–146. Sacks, Harvey, Emanuel A. Schegloff and Gail Jefferson 1974. A simplest systematics for the organization of turn-taking for conversation. Language 50: 696–735.
Schegloff, Emanuel A. 1979. Identification and recognition in telephone openings. In: George Psathas (ed.), Everyday Language: Studies in Ethnomethodology, 24–78. New York: Erlbaum. Schegloff, Emanuel A. 1982. Discourse as an interactional achievement: Some uses of "uh huh" and other things that come between sentences. In: Deborah Tannen (ed.), Analyzing Discourse: Text and Talk, 71–93. Washington, DC: Georgetown University Press. Schegloff, Emanuel A. 1998. Body torque. Social Research 65: 535–586. Schegloff, Emanuel A. and Harvey Sacks 1973. Opening up closings. Semiotica 8: 289–327. Stivers, Tanya 2008. Stance, alignment and affiliation during story telling: When nodding is a token of preliminary affiliation. Research on Language and Social Interaction 41: 29–55. Streeck, Jürgen 1993. Gesture as communication I: Its coordination with gaze and speech. Communication Monographs 60: 275–299. Streeck, Jürgen 1994. Gesture as communication II: The audience as co-author. Research on Language and Social Interaction 27: 239–267.
Mardi Kidwell, Durham, NH (USA)
6. Homesign: When gesture is called upon to be language
1. Gesture's role in learning a spoken language
2. Gesture's role when a model for language is not available: Homesign
3. The input to homesign
4. The next step after homesign
5. References
Abstract
When people speak, they gesture, and young children are no exception. In fact, children who are learning spoken language use gesture to take steps into language that they cannot yet take in speech. But not all children are able to make use of the spoken input that surrounds them. Deaf children whose profound hearing losses prevent them from acquiring spoken language and whose hearing parents have not exposed them to sign language also use gesture, called homesigns, to communicate. These homesigns take on the most basic functions and forms of language – lexicon, morphology, sentential structure, grammatical categories, sentential markers for negations, questions, past and future, and phrasal structure. As such, the deaf children's homesign gestures are qualitatively different from the co-speech gestures that surround them and, in this sense, represent first steps in the process of language creation. All children who learn a spoken language use gesture. But some children – deaf children with profound hearing losses, for example – are unable to learn the spoken language that surrounds them. If exposed to a conventional sign language, these deaf children will acquire that language as naturally as hearing children acquire spoken language (Lillo-Martin 1999; Newport and Meier 1985). If, however, deaf children with profound hearing losses are not exposed to sign, they have only gesture to communicate with the hearing individuals in their worlds.
The gestures used by deaf children in these circumstances are known as homesigns. They are different in both form and function from the gestures that hearing children produce to communicate along with speech, and resemble more closely the signs that deaf children of deaf parents and the words that hearing children of hearing parents learn from their respective communities. We begin with a brief look at the gestures that hearing children produce in the early stages of language learning, and then turn to the homesign gestures that deaf children create to substitute for language.
1. Gesture’s role in learning a spoken language Gesture is very often a young child’s first way of communicating with others. At a time when children are limited in the words they know, gesture can extend the range of ideas they are able to express. The earliest gestures children use, typically beginning around 10 months, are deictics, gestures whose referential meaning is given entirely by the context and not by their form, e.g., holding up an object to draw an adult’s attention to that object or, later in development, pointing at the object (Bates et al. 1979). In addition to deictic gestures, children also use iconic gestures. Unlike deictics, the form of an iconic gesture captures aspects of its intended referent and thus its meaning is less dependent on context, e.g., opening and closing the mouth to represent a fish. These iconic gestures are rare in some children, frequent in others. If parents encourage their children to use iconic gestures, these gestures become more frequent, which then facilitates, at least temporarily, the child’s production of words (Goodwyn, Acredolo, and Brown 2000). The remaining types of gestures that adults produce – metaphorics (gestures whose pictorial content presents an abstract idea rather than a concrete object or event) and beats (small baton-like movements that move along with the rhythmical pulsation of speech) – are not produced routinely until relatively late in development. The early gestures that children produce not only predate their words, they predict them. It is, for example, possible to predict a large proportion of the lexical items that eventually appear in a child’s spoken vocabulary from looking at that child’s earlier pointing gestures (Iverson and Goldin-Meadow 2005). Moreover, one of the best predictors of the size of a child’s comprehension vocabulary at 42 months is the number of different objects to which the child pointed at 14 months. Indeed, child gesture at 14 months is a better predictor of later vocabulary size than mother speech at 14 months (Rowe, Ozcaliskan, and Goldin-Meadow 2008; Rowe and Goldin-Meadow 2009). In addition to presaging the shape of their eventual spoken vocabularies, gesture also paves the way for early sentences. Children combine pointing gestures with words to express sentence-like meanings (“open” + point at box) months before they can express these same meanings in a word + word combination (“open box”). Importantly, the age at which children first produce gesture + speech combinations of this sort reliably predicts the age at which they first produce two-word utterances (Goldin-Meadow and Butcher 2003; Iverson and Goldin-Meadow 2005). Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. Moreover, the types of gesture + speech combinations children produce change over time and presage changes in their speech (Ozcaliskan and Goldin-Meadow 2005). For example, children produce gesture + speech combinations conveying more than one
proposition (akin to a complex sentence, e.g., "I like it" + eat gesture) several months before producing a complex sentence entirely in speech ("I like to eat it"). Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.
2. Gesture’s role when a model for language is not available: Homesign Children make use of gestures even if they are not learning language from their elders but are, instead, forced to create their own language. Deaf children whose hearing losses are so severe that they cannot learn a spoken language and whose hearing parents have not exposed them to a sign language nevertheless communicate with the hearing individuals in their worlds and use homesign gestures to do so (Lenneberg 1964; Moores 1974; Tervoort 1961). Interestingly, homesigners use their gestures for the functions to which conventional languages are put. They use homesigns not only to get others to do things for them (i.e., to make requests), but also to share ideas and request information (i.e., to make comments and ask questions). Homesigners even use their gestures to serve some of the more sophisticated functions of language – to tell stories (Phillips, Goldin-Meadow, and Miller 2001), to comment on their own and others’ gestures, and to talk to themselves (Goldin-Meadow 1993). In this sense, the children’s communications are qualitatively different from those produced by language-trained apes who use whatever language they are able to develop to change peoples’ behavior, not to change their ideas (see, for example, Greenfield and Savage-Rumbaugh 1991). The homesigners’ gestures serve the functions of language. The homesigners’ gestures also take on the forms of language. They are structured in language-like ways despite the fact that the children do not have a usable model of a conventional language to guide their gesture creation (Goldin-Meadow 2003). We describe the properties of homesign that have been studied thus far in the following sections.
2.1. Lexicon Like hearing children at the earliest stages of language-learning, deaf homesigners use both pointing gestures and iconic gestures to communicate. Their gestures, rather than being mime-like displays, are discrete units, each of which conveys a particular meaning. Moreover, the gestures are non-situation-specific – a twist gesture, for instance, can be used to request someone to twist open a jar, to indicate that a jar has been twisted open, to comment that a jar cannot be twisted open, or to tell a story about twisting open a jar that is not present in the room. In other words, the homesigner’s gestures are not tied to a particular context, nor are they even tied to the here-and-now (Morford and Goldin-Meadow 1997). In this sense, the gestures warrant the label sign. Homesigners use their pointing gestures to refer to the same range of objects that young hearing children refer to using, first, pointing gestures and, later, words – and in the same distribution (Feldman, Goldin-Meadow, and Gleitman 1978). Both groups of children refer most often to inanimate objects, followed by people and animals. They also both refer to body parts, food, clothing, vehicles, furniture and places, but less frequently.
Homesigners use iconic gestures more frequently than most hearing children learning spoken language. Their iconic gestures function like nouns, verbs, and adjectives in conventional languages (Goldin-Meadow et al. 1994), although there are fundamental differences between iconic gestures and words. The form of an iconic gesture captures an aspect of its referent; the form of a word does not. Interestingly, although iconicity is present in many of the signs of American Sign Language (ASL), deaf children learning American Sign Language do not seem to notice. Most of their early signs are either not iconic (Bonvillian, Orlansky, and Novack 1983) or, if iconic from an adult's point of view, not recognized as iconic by the child (Schlesinger 1978). In contrast, deaf individuals inventing their own homesigns are forced by their social situation to create gestures that not only begin transparent but remain so. If they didn't, no one in their worlds would be able to take any meaning from the gestures they create. Homesigns therefore have an iconic base. Despite the fact that the gestures in a homesign system need to be iconic to be understood, they form a stable lexicon. Homesigners could create each gesture anew every time they use it, as hearing speakers seem to do with their gestures (McNeill 1992). If so, we might still expect some consistency in the forms the gestures take simply because the gestures are iconic and iconicity constrains the set of forms that can be used to convey a meaning. However, we might also expect a great deal of variability around a prototypical form – variability that would crop up simply because each situation is a little different, and a gesture created specifically for that situation is likely to reflect that difference. In fact, it turns out that there is relatively little variability in the set of forms a homesigner uses to convey a particular meaning. The child tends to use the same form, say, two fists breaking apart in a short arc to mean "break", every single time that child gestures about breaking, no matter whether it's a cup breaking, or a piece of chalk breaking, or a car breaking (Goldin-Meadow et al. 1994). Thus, the homesigner's gestures adhere to standards of form, just as a hearing child's words or a deaf child's signs do (Singleton, Morford, and Goldin-Meadow 1993). The difference is that the homesigner's standards are idiosyncratic to the creator rather than shared by a community of language users.
2.2. Morphology Modern languages (both signed and spoken) build up words combinatorially from a repertoire of a few dozen smaller meaningless units. We do not yet know whether homesign has phonological structure (but see Brentari et al. 2012). However, there is evidence that homesigns are composed of parts, each of which is associated with a particular meaning; that is, they have morphological structure (Goldin-Meadow, Mylander, and Butcher 1995; Goldin-Meadow, Mylander, and Franklin 2007). The homesigners could have faithfully reproduced in their gestures the actions that they actually perform. They could have, for example, created gestures that capture the difference between holding a balloon string and holding an umbrella. But they don’t. Instead, the children’s gestures are composed of a limited set of handshape forms, each standing for a class of objects, and a limited set of motion forms, each standing for a class of actions. These handshape and motion components combine freely to create gestures, and the meanings of these gestures are predictable from the meanings of their component parts. For example, a hand shaped like an “O” with the fingers touching the thumb, that is, an
OTouch handshape form, combined with a Revolve motion form means "rotate an object