Audiovisual Translation in Applied Linguistics
Benjamins Current Topics issn 1874-0081 Special issues of established journals tend to circulate within the orbit of the subscribers of those journals. For the Benjamins Current Topics series a number of special issues of various journals have been selected containing salient topics of research with the aim of finding new audiences for topically interesting material, bringing such material to a wider readership in book format. For an overview of all books published in this series, please see benjamins.com/catalog/bct
Volume 111 Audiovisual Translation in Applied Linguistics. Educational perspectives Edited by Laura Incalcaterra McLoughlin, Jennifer Lertola and Noa Talaván These materials were previously published in Translation and Translanguaging in Multilingual Contexts 4:1 (2018).
Audiovisual Translation in Applied Linguistics
Educational perspectives
Edited by
Laura Incalcaterra McLoughlin National University of Ireland, Galway
Jennifer Lertola Università del Piemonte Orientale
Noa Talaván Universidad Nacional de Educación a Distancia
John Benjamins Publishing Company Amsterdam / Philadelphia
The paper used in this publication meets the minimum requirements of the American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ansi z39.48-1984.
doi 10.1075/bct.111
Cataloging-in-Publication Data available from Library of Congress:
lccn 2020022175 (print) / 2020022176 (e-book)
isbn 978 90 272 0755 5 (Hb)
isbn 978 90 272 6074 1 (e-book)
© 2020 – John Benjamins B.V. No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher. John Benjamins Publishing Company · https://benjamins.com
Table of contents

Audiovisual Translation in Language Education: An introduction
Laura Incalcaterra McLoughlin, Jennifer Lertola and Noa Talaván  1

Didactic subtitling in the Foreign Language (FL) classroom. Improving language skills through task-based practice and Form-Focused Instruction (FFI): Background considerations
Valentina Ragni  9

A pedagogical model for integrating film education and audio description in foreign language acquisition
Carmen Herrero and Manuela Escobar  31

The implications of Cognitive Load Theory and exposure to subtitles in English as a Foreign Language (EFL)
Anca Daniela Frumuselu  57

Exploring the possibilities of interactive audiovisual activities for language learning
Stavroula Sokoli  79

Intralingual dubbing as a tool for developing speaking skills
Alicia Sánchez-Requena  103

The use of audio description in foreign language education: A preliminary approach
Marga Navarrete  131

Why is that creature grunting? The use of SDH subtitles in video games from an accessibility perspective
Tomás Costal  153

Studying the language of Dutch audio description: An example of a corpus-based analysis
Nina Reviers  181

Index  206
Audiovisual Translation in Language Education An introduction Laura Incalcaterra McLoughlin, Jennifer Lertola and Noa Talaván
National University of Ireland, Galway | Università del Piemonte Orientale | Universidad Nacional de Educación a Distancia

https://doi.org/10.1075/bct.111.ttmc.00001.edi © 2020 John Benjamins Publishing Company
In recent years, interest in the application of audiovisual translation (AVT) techniques in language teaching has grown beyond unconnected case studies to create a lively network of methodological intertextuality, cross-references, reviews and continuation of previous trials, ultimately defining a recognisable and scalable trend. This volume presents a sample of the current research in this field, with particular reference to case studies that either have a large-scale or international dimension, or can be scaled and replicated with different languages, at different fluency levels and in different contexts. It is our hope that these debates, studies and proposals may arouse the interest of publishers of language learning material and other stakeholders and ultimately lead to the mainstreaming of AVT in language education. The use of AVT in language teaching is not new: subtitles as a support in particular (both interlingual and intralingual) have been utilised extensively for decades, both in teacher-led and in independent learning contexts. Studies on the impact of subtitles on language learners go back to the late 1980s. Indeed, Vanderplank’s (1988) pioneering study on the effectiveness of intralingual teletext subtitles for students of English as a Foreign Language has become a seminal reference for researchers interested in AVT in Applied Linguistics and can veritably be considered the starting point of this particular field of study. Since then, subtitled material has been shown to have beneficial effects on language learners in relation to receptive skills and cultural awareness (Garza 1991; Price 1983; Vanderplank 1988; Winke et al. 2010; Abdolmanafi Rokni and Jannati Ataee 2014), speaking skills (Borrás and Lafayette 1994), and learning strategies in general (Caimi and Mariotti 2015), and also to be a valuable resource for the promotion of bilingualism and multilingualism in countries with more than one national language (Kothari et al. 2004; Kruger et al. 2007; Ayonghe 2009).
These studies concentrate on ‘ready-made’ audiovisual translations: the standard (L2 audio and L1 written text) or reversed (L1 audio and L2 written text) interlingual subtitled version of a video was shown and complemented with a number of additional activities. This volume, however, also looks at a different application of AVT, in which learners are involved in the audiovisual translation process itself, performing tasks such as subtitling, dubbing, or audio describing. This different methodological and operational approach exploits a combination of receptive and productive tasks, facilitating meaningful interaction in the target language. In this context, learners are asked to view the audiovisual text and then translate (or create) the verbal soundtrack in the form of subtitles, dubbing, audio description, voice narration, etc. Early proposals of the potential of this methodology date back to the late 1980s for both dubbing (Duff 1989; Kumai 1996) and subtitling (Díaz Cintas 1995), and their impact has been studied in relation to mnemonic retention, pragmatic awareness, vocabulary acquisition, listening and writing skills, specialised languages, pronunciation and intonation, autonomous learning and motivational factors, intercultural awareness, as well as translation skills (Incalcaterra McLoughlin 2009; Talaván 2010, 2011; Chiu 2012; Lertola 2012; Borghetti and Lertola 2014; Talaván and Rodríguez-Arancón 2014; He and Wasuntarasophit 2015; Lopriore and Ceruti 2015; Talaván and Costal 2017; among others). Generally speaking, teachers and researchers who work on active AVT tasks with their students have found that these tasks encourage not only receptive and productive skills, but also critical thinking, pragmatic and intercultural awareness both in L1 and L2, as well as the ability to extract and infer information from multisemiotic texts. In the future, it may also be useful to focus on the relationship between active AVT tasks and the enhancement of digital literacies in language learners, for example by linking tasks to critical use of online translators, prompting comparisons and appraisals of different AVT versions of the same product – whether available on the market or taken from a student corpus – or encouraging the use of online corpora, and so on. Whilst subtitling has undoubtedly received most attention, possibly thanks to the availability of free and user-friendly subtitling software (Incalcaterra McLoughlin and Lertola 2015; Sokoli 2015), dubbing has also found its way into language teaching to aid phonetic and phonological training, improve speed of delivery and express emotions (Danan 2010; Talaván and Costal 2017). Considered more effective than role-plays because learners can review and improve their performance (Burston 2005), it has also been shown to be more popular than subtitling, with learners feeling that their productive and translation skills had improved considerably more through dubbing than through subtitling (Talaván and Ávila-Cabrera 2015).
Audio description (AD) is a relatively new addition to AVT in Applied Linguistics, but its use shows encouraging results, not just in relation to language acquisition (Clouet 2005; Ibáñez Moreno and Vermeulen 2013, 2015; Talaván and Lertola 2016) but also regarding motivation and intercultural awareness (Ibáñez Moreno and Vermeulen 2015; Cenni and Izzo 2016). The potential for future developments is significant, especially if researchers from accessibility studies, AVT and Applied Linguistics work together towards new language learning frontiers. From a theoretical point of view, Paivio’s Dual Coding Theory (1969, 1986) and Mayer’s Cognitive Theory of Multimedia Learning (2005) are arguably the most widely referenced premises supporting this methodology, together with Krashen’s affective filter hypothesis (1982). In all studies, the learner emerges as an agent, a subject who learns by doing and being directly involved in the communicative experience. Edgar Dale’s pioneering work on audiovisual material in education is also relevant in this context (1946). Dale’s well-known Cone of Experience has seen many variations over the years, but the principle that people generally remember what they do much better than what they just read or hear has remained at the basis of many innovative teaching approaches. Dale did not suggest that mnemonic retention follows a linear path; rather, he stressed that “Direct purposeful experience [is] seen, handled, tasted, felt, touched, smelled” (1946: 39) and is part of the “enactive” level (the level of ‘doing’), where different senses are employed and retention is more effective. Multisensory and intersemiotic experiences, therefore, have a stronger impact on cognitive processes than those relying on just one sense or one semiotic channel. The research and case studies presented in this volume show that AVT can provide such direct experiences: learners watch, listen, read, translate, manipulate, react and re-enact communication events observed (or better, experienced) in their linguistic and extra-linguistic contexts. It is this “doing”, researchers agree, that makes deep learning possible. Perhaps one element that has been somewhat underestimated so far is the way that AVT tasks can help to develop the so-called 21st Century Skills: the combination of software use, teamwork, analysis, creation and evaluation of contextualised and purposeful speech acts facilitates the acquisition of new and essential skills. The Framework for 21st Century Learning (Partnership for 21st Century Learning 2007)1 lists Life and Career Skills, Learning and Innovation Skills and Information, Media and Technology Skills as key student attributes. Learning and Innovation Skills, or the 4Cs, prepare students for “increasingly complex life and work environments”.2 They are creativity, critical thinking,
communication and collaboration, and are all required for the completion of a successful AVT task. Equally important is the ability to understand, contribute to and evaluate media environments, and to utilise technology and select and analyse information, all of which form part of any type of AVT task.

1. http://www.p21.org/ Accessed November 1, 2017.
2. http://www.p21.org/about-us/p21-framework Accessed November 1, 2017.

This volume presents contributions mainly from new voices in AVT research, who discuss innovative approaches and recent case studies. Valentina Ragni explores subtitle creation as a foreign language learning tool. Ragni, who refers to the creation of subtitles as ‘didactic subtitling’, acknowledges the studies on subtitling and draws attention to the need for a clearer theoretical framework to support subtitling activities. She reviews a number of theories from Second Language Acquisition (SLA) literature that underpin interlingual subtitling in foreign language learning. After examining how subtitling can be applied within Task-Based Learning and Teaching (TBLT), she argues for the integration of didactic subtitling and Form-Focused Instruction (FFI). The author acknowledges that it can be challenging to move from theoretical considerations to real-world applications and encourages further investigation of the topic. Carmen Herrero and Manuela Escobar encourage the integration of Film Literacy Education and audio description in the foreign language curriculum. In the context of Spanish as a Foreign Language in Higher Education, the authors propose a pedagogical model designed to assist learners in developing linguistic as well as cultural and intercultural competences, while fostering critical understanding of the aesthetic dimension of films. Based on two case studies carried out in the UK, the model is developed in three sessions: introduction to film language and critical thinking; focus on a Spanish director and a selected movie – in this case Pedro Almodóvar and his movie based on the story of a visually-impaired filmmaker, so as to relate to the accessibility issue – and finally, audio description task performance. Anca Frumuselu considers the pedagogical use of subtitled and captioned material in the foreign language classroom by reviewing relevant theories such as Cognitive Load Theory (CLT), Cognitive Theory of Multimedia Learning (CTML) and Cognitive Affective Theory of Learning with Media (CATLM), which reveal the cognitive processing activated when students are exposed to multimedia and subtitled audiovisual materials. Frumuselu presents the results of two empirical studies showing the benefits of using interlingual (L1) and intralingual (L2) subtitles in the English as a Foreign Language (EFL) classroom in Higher Education for informal and colloquial language learning. Stavroula Sokoli reports on ClipFlair, a European-funded project that has developed a platform specifically created for foreign language learning through interactive captioning and revoicing of video clips. The platform hosts activities that allow language learners to insert their own writing (captioning) or speech
(revoicing) into the videos. Sokoli illustrates the ClipFlair conceptual framework, the educational specifications for the web platform considering the role of learners and teachers in the learning process, the learning context and the teaching approach, as well as examples of ClipFlair activities. She also reports on the learners’ survey carried out in the pilot phase of the project, which involved over a thousand learners who tested 85 language learning activities in 12 languages. Alicia Sánchez-Requena investigates the potential benefits of intralingual dubbing as a tool for developing speaking skills. The author reports on an experimental study with 47 B1-level students learning Spanish in five different secondary schools in England over 12 weeks. Data were gathered with a number of instruments: podcasts, three questionnaires and the teacher-researcher’s notes. Findings from the quantitative and qualitative analysis of the data show concurrent improvement in pronunciation, intonation and speed. Furthermore, intralingual dubbing enhances learners’ motivation and self-confidence. The contribution provides useful guidance on how to employ dubbing in the Spanish FL classroom and thus facilitates teachers’ practice. Marga Navarrete explores the potential of audio description in the development of oral skills through a small-scale experimental study. The study was carried out with six B1-level final-year undergraduate students of Spanish as an FL at Imperial College, London. Students were pre-tested; then, after being introduced to the topic of the video selected for the AD task, they carried out the task in ClipFlair. AD tasks were sent to the teacher for correction and, finally, students’ samples were shown in the classroom to encourage peer-to-peer discussion of aspects of oral performance. Navarrete acknowledges the limitations of the study; however, she considers that learners’ positive responses to the AD task are encouraging and lay the basis for further investigation. Tomás Costal examines the advantages of including Subtitles for the Deaf and Hard of Hearing (SDH) in video games, which are multimedia audiovisual products designed to engage users in high degrees of interaction. He attempts to discern whether accessibility in video games could be improved by reconsidering the way in which linguistic and extralinguistic content is conveyed, by means of an additional textual track. To this end, Costal has compiled a small corpus of popular video games and has carried out an in-depth analysis. He puts forward a preliminary norm to evaluate the quality of subtitling projects specifically oriented to the video game industry. Results can be of interest to scholars as well as practitioners and the industry. Nina Reviers investigates the language features of Dutch audio description. Reviers analyses an annotated audiovisual corpus of 39 Dutch films and series released with AD in Flanders and the Netherlands. The results were compared to data from Dutch reference corpora such as SoNaR and Subtlex-nl.
The quantitative analysis of over 150,000 words reveals that AD language is idiosyncratic and displays a distinctive set of lexico-grammatical features. The analysis of the most salient lexico-grammatical features of Dutch AD language – within the framework of Systemic Functional Linguistics – reveals the types of processes used in AD and how they are expressed linguistically.
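To give a concrete, if simplified, picture of what such a corpus-based comparison involves, the sketch below ranks the words that are over-represented in a toy AD sample relative to a reference frequency list. The tokenisation, the log-likelihood keyness measure and the toy inputs are assumptions made purely for illustration; they are not Reviers' actual pipeline, data or results.

```python
# A minimal sketch of a corpus-frequency comparison of the kind summarised
# above. It is NOT Reviers' actual pipeline: the tokenisation, the Dunning
# log-likelihood keyness measure and the toy inputs are illustrative only.
import math
import re
from collections import Counter

def tokenise(text):
    """Naive lower-cased word tokenisation; a real study would lemmatise and tag."""
    return re.findall(r"[a-zà-ÿ']+", text.lower())

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Dunning's log-likelihood: how strongly a word is over-represented in corpus A."""
    expected_a = size_a * (freq_a + freq_b) / (size_a + size_b)
    expected_b = size_b * (freq_a + freq_b) / (size_a + size_b)
    ll = 0.0
    if freq_a:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2 * ll

def keywords(ad_text, reference_counts, top_n=10):
    """Rank the words most characteristic of the AD sample versus the reference list."""
    ad_counts = Counter(tokenise(ad_text))
    size_ad = sum(ad_counts.values())
    size_ref = sum(reference_counts.values())
    scored = [(w, log_likelihood(c, size_ad, reference_counts.get(w, 0), size_ref))
              for w, c in ad_counts.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

if __name__ == "__main__":
    # Hypothetical stand-ins for an annotated AD corpus and a reference
    # frequency list such as SoNaR or Subtlex-nl (not real data).
    ad_sample = "De man loopt de kamer binnen. Ze kijkt op en glimlacht."
    reference = Counter({"de": 500000, "man": 3000, "en": 400000, "kijkt": 800})
    for word, score in keywords(ad_sample, reference, top_n=5):
        print(f"{word}\t{score:.2f}")
```

A real analysis would of course draw frequencies from the full annotated corpus and from reference corpora such as SoNaR and Subtlex-nl rather than from the toy counts used here.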
References

Abdolmanafi Rokni, Seyed Jalal, and Azam Jannati Ataee. 2014. “The Effect of Movie Subtitles on EFL Learners’ Oral Performance.” International Journal of English Language, Literature & Humanities 1 (5): 201–215.
Ayonghe, Lum Suzanne. 2009. “Subtitling as a Tool for the Promotion of Bilingualism/Multilingualism in Cameroon.” In Language, Literature and Social Discourse in Africa, ed. by Vincent Tanda, Henry Kah Jick, and Pius Ngwa Tamanji, 106–120. University of Buea: Departments of English and Linguistics.
Borghetti, Claudia, and Jennifer Lertola. 2014. “Interlingual Subtitling for Intercultural Language Education: A Case Study.” Language and Intercultural Communication 14 (4): 423–440. https://doi.org/10.1080/14708477.2014.934380
Borrás, Isabel, and Robert C. Lafayette. 1994. “Effects of Multimedia Courseware Subtitling on the Speaking Performance of College Students of French.” The Modern Language Journal 78: 61–75. https://doi.org/10.1111/j.1540-4781.1994.tb02015.x
Burston, Jack. 2005. “Video Dubbing Projects in the Foreign Language Curriculum.” CALICO Journal 23 (1): 79–92. https://doi.org/10.1558/cj.v23i1.79-92
Caimi, Annamaria, and Cristina Mariotti. 2015. “Beyond the Book: The Use of Subtitled Audiovisual Material to Promote Content and Language Integrated Learning in Higher Education.” In Audiovisual Translation. Taking Stock, ed. by Jorge Díaz Cintas and Josélia Neves, 230–243. Newcastle upon Tyne: Cambridge Scholars Publishing.
Cenni, Irene, and Giuliano Izzo. 2016. “Audiodescrizione nella classe di italiano L2. Un esperimento didattico.” Incontri 31 (2): 45–60.
Chiu, Yi-hui. 2012. “Can Film Dubbing Projects Facilitate EFL Learners’ Acquisition of English Pronunciation?” British Journal of Educational Technology 43 (1): E24–E27. https://doi.org/10.1111/j.1467-8535.2011.01252.x
Clouet, Richard. 2005. “Estrategia y propuestas para promover y practicar la escritura creativa en una clase de inglés para traductores.” Actas del IX Simposio Internacional de la Sociedad Española de Didáctica de la Lengua y la Literatura, 319–326.
Dale, Edgar. 1946. Audio-visual Methods in Teaching. New York: The Dryden Press.
Danan, Martine. 2010. “Dubbing Projects for the Language Learner: A Framework for Integrating Audiovisual Translation into Task-Based Instruction.” Computer Assisted Language Learning 23 (5): 441–456. https://doi.org/10.1080/09588221.2010.522528
Díaz Cintas, Jorge. 1995. “El subtitulado como técnica docente.” Vida Hispánica 12: 10–14.
Duff, Alan. 1989. Translation. Oxford: Oxford University Press.
Garza, Thomas J. 1991. “Evaluating the Use of Captioned Video Materials in Advanced Foreign Language Learning.” Foreign Language Annals 24: 239–258. https://doi.org/10.1111/j.1944-9720.1991.tb00469.x
He, Pin, and Sukhum Wasuntarasophit. 2015. “The Effects of Video Dubbing Tasks on Reinforcing Oral Proficiency for Chinese Vocational College Students.” Asian EFL Journal 17 (2): 106–133.
Ibáñez Moreno, Ana, and Anna Vermeulen. 2013. “Audio Description as a Tool to Improve Lexical and Phraseological Competence in Foreign Language Learning.” In Translation in Language Teaching and Assessment, ed. by Dina Tsigari and Georgios Floros, 41–61. Newcastle upon Tyne: Cambridge Scholars Publishing.
Ibáñez Moreno, Ana, and Anna Vermeulen. 2015. “Using VISP (VIdeos for SPeaking), a Mobile App Based on Audio Description, to Promote English Language Learning among Spanish Students: A Case Study.” Procedia – Social and Behavioral Sciences 178: 132–138. https://doi.org/10.1016/j.sbspro.2015.03.169
Incalcaterra McLoughlin, Laura. 2009. “Inter-semiotic Translation in Foreign Language Acquisition: The Case of Subtitles.” In Translation in Second Language Learning and Teaching, ed. by Arnd Witte, Theo Harden, and Alessandra Ramos de Oliveira Harden, 227–244. Bern: Peter Lang.
Incalcaterra McLoughlin, Laura, and Jennifer Lertola. 2015. “Captioning and Revoicing of Clips in Foreign Language Learning: Using ClipFlair for Teaching Italian in Online Learning Environments.” In The Future of Italian Teaching. Media, New Technologies and Multi-Disciplinary Perspectives, ed. by Catherine Ramsey-Portolano, 55–69. Newcastle upon Tyne: Cambridge Scholars Publishing.
Kothari, Brij, Avinash Pandey, and Amita R. Chudgar. 2004. “Reading out of the ‘Idiot Box’: Same-Language Subtitling on Television in India.” Journal of Information Technologies and International Development 2 (1): 23–44. https://doi.org/10.1162/1544752043971170
Krashen, Stephen D. 1982. Principles and Practice in Second Language Acquisition. Oxford: Pergamon.
Kruger, Jan-Louis, Haidee Kruger, and Marlene Verhoef. 2007. “Subtitling and the Promotion of Multilingualism: The Case of Marginalised Languages in South Africa.” Linguistica Antverpiensia 6: 35–49.
Kumai, William N. 1996. “Karaoke Movies: Dubbing Movies for Pronunciation.” The Language Teacher 20 (9). Accessed October 30, 2017. http://jalt-publications.org/tlt/departments/myshare/articles/2049-karaoke-movies-dubbing-movies-pronunciation
Lertola, Jennifer. 2012. “The Effect of the Subtitling Task on Vocabulary Learning.” In Translation Research Project 4, ed. by Anthony Pym and David Orrego-Carmona, 61–70. Tarragona: Universitat Rovira i Virgili.
Lopriore, Lucilla, and Maria Angela Ceruti. 2015. “Subtitling and Language Awareness: A Way and Ways.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 293–321. Bern: Peter Lang.
Mayer, Richard. 2005. “Cognitive Theory of Multimedia Learning.” In The Cambridge Handbook of Multimedia Learning, ed. by Richard E. Mayer, 31–48. New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511816819.004
Paivio, Allan. 1969. “Mental Imagery in Associative Learning and Memory.” Psychological Review 76 (3): 241–263. https://doi.org/10.1037/h0027272
Paivio, Allan. 1986. Mental Representations: A Dual Coding Approach. Oxford: Oxford University Press.
Price, Karen. 1983. “Closed-Captioned TV: An Untapped Resource.” MATSOL Newsletter 12 (2): 1–8.
Sokoli, Stavroula. 2015. “ClipFlair: Foreign Language Learning through Interactive Revoicing and Captioning of Clips.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 127–147. Bern: Peter Lang.
Talaván, Noa. 2010. “Subtitling as a Task and Subtitles as Support: Pedagogical Applications.” In New Insights into Audiovisual Translation and Media Accessibility, ed. by Jorge Díaz Cintas, Anna Matamala, and Josélia Neves, 285–299. Amsterdam: Rodopi. https://doi.org/10.1163/9789042031814_021
Talaván, Noa. 2011. “A Quasi-Experimental Research Project on Subtitling and Foreign Language Acquisition.” In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 197–217. Bern: Peter Lang.
Talaván, Noa, and José Javier Ávila-Cabrera. 2015. “First Insights into the Combination of Dubbing and Subtitling.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 149–172. Bern: Peter Lang.
Talaván, Noa, and Tomás Costal. 2017. “iDub – The Potential of Intralingual Dubbing in Foreign Language Learning: How to Assess the Task.” Language Value 9 (1): 62–88. https://doi.org/10.6035/LanguageV.2017.9.4
Talaván, Noa, and Jennifer Lertola. 2016. “Active Audiodescription to Promote Speaking Skills in Online Environments.” Sintagma 28: 59–74.
Talaván, Noa, and Pilar Rodríguez-Arancón. 2014. “The Use of Reverse Subtitling as an Online Collaborative Language Learning Tool.” The Interpreter and Translator Trainer 8 (1): 84–101. https://doi.org/10.1080/1750399X.2014.908559
Vanderplank, Robert. 1988. “The Value of Teletext Subtitles in Language Learning.” ELT Journal 42 (4): 272–281. https://doi.org/10.1093/elt/42.4.272
Winke, Paula, Susan Gass, and Tetyana Sydorenko. 2010. “The Effects of Captioning Videos Used for Foreign Language Listening Activities.” Language Learning & Technology 14 (1): 65–86.
Didactic subtitling in the Foreign Language (FL) classroom. Improving language skills through task-based practice and Form-Focused Instruction (FFI)
Background considerations
Valentina Ragni
University of Leeds

https://doi.org/10.1075/bct.111.ttmc.00002.rag © 2020 John Benjamins Publishing Company
Didactic subtitling is a relatively new area of investigation that is undergoing a surge in popularity. By bringing together findings from Audiovisual Translation (AVT), Second Language Acquisition (SLA) and psycholinguistics, some theoretical issues related to the practice of subtitle creation in Foreign Language Learning (FLL) are appraised. The article introduces Task-Based Learning and Teaching (TBLT) and reflects on what didactic subtitling can and cannot offer to TBLT approaches. In a still predominantly communicative era, language researchers are questioning the effectiveness of entirely communicative approaches to FLL. Many support the idea that, if successful learning is to be achieved, some Form-Focused Instruction (FFI) is needed. This article reviews relevant FFI literature, and explores how far active subtitling can provide an effective strategy for focussing on form that leads to communicative language development. In doing so, concepts such as noticing, skill development, interaction, pushed output and consciousness-raising are addressed. It is argued that a combination of task-based and form-focused instruction in the subtitling classroom can have great potential and should be investigated further, both theoretically and empirically.

Keywords: audiovisual translation, didactic subtitling, foreign language learning, task-based learning and teaching, form-focused instruction
1. Introduction: An overview of didactic subtitling in FLL
Subtitle creation in Foreign Language Learning (FLL) is a relatively new area of investigation, which has gained considerable popularity over the last few years.
A distinction has been made (Talaván 2010) between subtitle use (subtitles as a support) and subtitle creation (subtitling as a task). This paper is concerned with the latter, namely the addition of subtitles onto a video clip carried out by the learners themselves. Since this audiovisual (AV) practice is specifically referred to in the context of language pedagogy, the term didactic subtitling will be used throughout this paper. From the 1980s to the present day, a plethora of experimental and pedagogical studies have addressed subtitle use in the fields of FLL, Foreign Language Teaching (FLT), Second Language Acquisition (SLA), psycholinguistics (e.g., using eye-tracking) and Audiovisual Translation (AVT). In contrast, very few studies have dealt with subtitle creation specifically and, so far, the vast majority come from the AVT literature. Traditionally, subtitle creation has been of two main types: standard and reverse. In standard subtitling, learners watch and listen to the FL AV text and translate the message into native language subtitles. In reverse subtitling, learners watch and listen in their native language and produce FL subtitles. To date, a number of publications have addressed didactic subtitling (Williams and Thorne 2000; Neves 2004; Sokoli 2015; Talaván 2013; Kantz 2015). Some specifically investigated the creation of standard (Talaván 2011; Incalcaterra McLoughlin and Lertola 2011, 2014 and 2015; Lertola 2012; Lopriore and Ceruti 2015) or reverse subtitles (Talaván and Rodríguez-Arancón 2014; Talaván and Ávila-Cabrera 2015). In these studies, the act of creating subtitles has been found to facilitate retention and promote vocabulary acquisition (Lertola 2012), while providing the opportunity to practise reading ability and listening comprehension alongside the development of transferrable skills such as digital literacy (Incalcaterra McLoughlin and Lertola 2015). Didactic subtitling can also enhance productive abilities such as spelling and summarising, thus counteracting the passivity of other language learning activities (Sokoli 2006), and reinforce both student foreign language (FL) writing and translation skills (Talaván and Ávila-Cabrera 2015). Didactic subtitling constitutes a functional and interactive exercise allowing peers to create together or share their work (Talaván 2010), thus promoting collaboration while also fostering learner autonomy in a distance-learning context (Talaván 2013). It involves a series of micro-activities such as note-taking and information prioritisation (Sokoli 2006) and it challenges students to find synonyms and condense the message (Lertola 2015), thus bolstering pragmatic competence (Lopriore and Ceruti 2015). Moreover, it produces a tangible result that resembles that of a professional subtitler in the real world (Sokoli et al. 2011), it makes it possible to use authentic material in a cultural context (Williams and Thorne 2000), thus offering opportunities for intercultural language education (Borghetti and Lertola 2014), it increases language awareness and fosters metalinguistic reflection (Lopriore and Ceruti 2015), and finally it creates emotionally charged activities
that provide a motivational stimulus (Incalcaterra McLoughlin and Lertola 2014). The EU has recognised such potential and supported a number of projects aimed at spurring the use of video applications for class-based language activities. Examples are Divis1 (Digital Video Streaming and Multilingualism), LeViS2 (Learning Via Subtitling) and ClipFlair.3 The introduction and noticeable increase in the use of these resources demonstrate that the integration of AV activities in FLT is already underway, and highlight the need for a clearer positioning of such activities with respect to theoretical concepts addressed in the FLL and SLA literature.

1. http://www.divisproject.eu/ [accessed 23/02/2016].
2. http://levis.cti.gr/ [accessed 23/02/2016].
3. http://clipflair.net/overview/ [accessed 23/02/2016].
2. Scope, aims and structure
There are numerous ways of integrating audiovisuals in the FL classroom. Some practices do not include translation. For example, students could be asked to add subtitles directly in the FL to a video with music and background noises but no Source Language (SL) dialogues in order to describe what they see in a scene. Some AVT types4 do not include the presence of subtitles, e.g., dubbing, narration and voiceover. For a summary of possible learning activities involving audiovisuals, see Zabalbeascoa et al. (2012, 21–22). In the studies and platforms mentioned above, a number of AV texts were employed, such as TV series, ads, documentaries, animations, etc. Different learner characteristics were addressed, such as age groups (young or adult) and proficiency levels (from CEFR A1 to C2). Different types of institutions (private or public, school or university, undergraduate or postgraduate) and types of classes (e.g., small or large groups, mixed-origin or same language background) were involved. Finally, different types of instructional
delivery, such as face-to-face or distance learning, were exploited. This paper aims at broadening this picture by appraising some issues of theoretical interest related to using subtitling as an FLL tool. In doing so, some ideas for activities and class structure will be mentioned in passing. However, the main goal of this paper is not to present a methodological proposal, but to provide a foundation upon which such proposals can be grounded. Within its length constraints, this article constitutes a first attempt to situate didactic subtitling in the SLA and FLL literature by considering a number of theories and recent developments in these fields that can inform the subsequent design of subtitling activities. Since this is an integration of findings coming from the AVT studies reviewed above and from the SLA literature, the scope will not be restricted a priori only to a specific AVT text, type of learner or proficiency level, yet these will be called upon when relevant. Interlingual subtitling, both standard and reverse, will be at the centre of the discussion, while other AV types and sub-types (e.g., dubbing, intralingual subtitling)5 will be touched upon where relevant. Moreover, and importantly, the focus will be on foreign rather than second language contexts. In the former, language learning happens in the native language environment of the student; in the latter, it happens in the Target Language (TL) environment. For this reason, the more general abbreviation FL, rather than second language (L2), will be used. As mentioned above, Talaván (2010) highlighted the difference between subtitles as a support and subtitling as a task, and so indirectly posed the question of why and how adding subtitles to authentic video material can be considered a task applicable to Communicative Language Teaching (CLT) approaches. The first part of this paper takes a closer look at Task-Based Learning and Teaching (TBLT), provides a definition of tasks and, in doing so, examines how the subtitling task can be exploited within such an approach. In the second part, some constructs from SLA and cognitive psychology will be introduced in order to describe the learning process and assess what the subtitling task can and cannot provide in FL classroom settings. Lastly, the documented shift from purely meaning-based to form-based approaches to FLL and FLT will be addressed, and an argument for the integration of didactic subtitling and Form-Focused Instruction (FFI) will be put forward.

4. For the purpose of this paper, a distinction is made between AVT type (or mode) and sub-type (or submode). Mode and type have been used as synonyms (Gambier 2003), and as such they will be used in this paper too. Broader practices such as subtitling, dubbing, audiodescription, narration, key-word captioning and free commentary are herein considered AVT types/modes. These are practices that can be sub-divided further according to a number of different criteria. Any of their sub-divisions will be considered sub-types, whatever the criteria used in the classification. There are taxonomical challenges in classifying AVT types and subtypes, especially since new modes are created as the discipline expands and redefines itself. A particularly controversial issue is where the line between types and sub-types should be drawn (Hernández Bartolomé and Mendiluce Cabrera 2005). Since taxonomic classification is peripheral to the purposes of this paper, the choice is motivated chiefly by its convenience and clarity in the present discussion.
5. This AV sub-type has also been referred to as monolingual, teletext or same-language subtitling in the AVT literature and bimodal input in the psycholinguistics literature.
3. Defining tasks
Several definitions of task have been given in SLA literature. A useful starting point is Skehan’s definition of a task as “an activity in which meaning is primary, there is some sort of relationship to the real world, task completion has some priority, and the assessment of task performance is in terms of task outcome” (1996, 1). A task is an activity that necessarily requires pragmatic processing of language, where learner attention is primarily focused on meaning. Tasks are concerned with the use of language in context, which resembles, directly or indirectly, the communicative processes involved in real life. Examples include making an airline reservation or filling in an official form. In a task, the student chooses what linguistic resources to use in order to achieve the communicative goal at hand, making any learning that might take place incidental rather than intentional (Ellis 2003). Therefore, a task usually requires participants to see themselves as language users rather than learners. However, the extent to which they will pay attention to meaning when performing a task will vary, as they may momentarily pay attention to form and therefore adopt the role of language learners rather than users (2003); for example, when they look a word up in the dictionary. A task can be designed without the practice of a specific structure in mind (unfocused) or with the aim of eliciting a particular linguistic feature (focused). Even in focused tasks, however, this feature should not be mentioned explicitly in the rubric of the task (2003) and consequently may or may not end up being used. A task can therefore constrain the linguistic forms to be used but cannot specify them, leaving the final choice to the learner.
4. Didactic subtitling as a task
Drawing on previous work, Ellis (2003, 9–10) lists six key criteria a task must satisfy. Each will be analysed in turn, to assess how and why the act of subtitling in itself can be considered a communicative task.

(1) A task is a workplan for learner activity that involves teaching materials. It specifies what learners have to do, yet it is relatively unstructured so that they can choose what linguistic resources to use.
The subtitling activities used in FLT usually involve a lesson plan set by the teacher, who selects the relevant teaching materials (including the video clips to be subtitled) and gives at least a minimal set of instructions to the students, e.g., to translate the clip into their native or foreign language. Often activities are relatively unstructured, as the students choose the linguistic forms to render the
source text (ST) message. Not only that, but learners can also have control over how to watch (when to pause, re-listen, slow down the video) and how to work on the task. For instance, some students tend to print a transcript whenever one is available, while others work directly in the subtitling platform.

(2) A task must have a primary focus on meaning and incorporate a ‘gap’ for the students to fill, be it an information, opinion or reasoning gap.
Much has been said about the polysemiotic dimension of AV media and how it affects communication of meaning in AVT. More than 25 years ago now, Delabastita pointed out that film translation is “not just a matter of language conversion” (1990, 99). From his well-known analysis of the “semiotic nature of the total film sign” (1990, 101) to Lambert and Delabastita’s (1996) classification of how semiotic shifts between verbal and non-verbal channels affect meaning in AVT, passing through Gottlieb’s (1994) idea of diagonal translation, one thing stands uncontested: message conveyance has always been pivotal to the act of translating. Subtitling, as a form of AVT, is an inherently meaning-centred activity. Moreover, in didactic subtitling there is a clear information gap between the AV source and the target, which is left to the students to close by transferring multimodal content into subtitles through their own language resources. This criterion is of particular relevance to the act of subtitling and will be revisited later (Section 5.3).

(3) A task involves real-world processes of language use.
During the subtitling activity, learners will take on the role of a subtitler, to some extent reproducing the real operating conditions of professional work. When faced with comprehensible input, students work at what has been called the i + 1 level (Krashen 1982): they will understand the gist of the message and most of the language, but they will also encounter words and expressions they are not familiar with. They may look up words in a dictionary, do terminology research on a topic and use support materials such as glossaries. If translation cannot be fitted around the space and time constraints typical of the AV medium, they may have to look for alternative ways of conveying a piece of information, for example through synonyms or sentence restructuring. These are all operations professional subtitlers also carry out on a daily basis. Different tasks will have varying degrees of impact on what learners are going to do when communicating in the FL outside the classroom. The act of subtitling per se may not have the same immediate real-world relevance as the act of asking for or giving directions to a certain location; when in the foreign country, students are more likely to need to request and understand directions to a supermarket than to find themselves subtitling a clip. Nevertheless, the language pro-
duced through subtitling can very much reflect that of a real situation. In fact, a video excerpt could be selected precisely because it contains an exchange where directions are asked for by a character and provided by another. Furthermore, the presence of the moving images adds authenticity and memorability to the communicative situation. In fact, the AV input provides a much closer experience to an immersion situation than many other classroom-based activities. I would argue that, from this point of view, subtitling is less artificial than, for example, a spot-the-difference task (where one has to determine whether two pictures are the same or different) and certainly than a fill-in-the-gaps exercise, an operation that learners are highly unlikely to perform outside the classroom.6 Not only can didactic subtitling deploy the same meaning-making processes of a profession that exists in the real world, but it can also very much resemble naturally occurring situations learners are likely to experience outside the classroom.

6. Subtitling and spot-the-difference tasks certainly have different purposes but are both activities that can be used in the FL classroom, and as such, they can be compared in a discussion on exercise type. Since both tasks have been deemed artificial in the published literature (Ellis 2003; Ghia 2011), drawing a parallel between them serves the purpose of highlighting what I consider to be a flaw in the artificiality argument: basing our judgement of learning activities solely on whether they are likely to be performed by the learners outside the class (potentially discarding some activities on such basis) can be dangerous, since other reasons are also relevant and should be considered in their assessment (for example, their different purposes, or the cognitive benefits involved with such activities).

(4) A task does not exclusively involve oral production skills.
Although much of the literature on tasks has concentrated on oral skills (Bygate et al. 2001), Willis and Willis (2007) note that a task can involve any of the four language skills. A task may entail receptive or productive skills, produce an oral or written text and involve monologic or dialogic language use. Didactic subtitling requires students to create a written text. As we have seen above, it was found to help towards the improvement of productive abilities such as overall writing skills, be the output in the FL or SL. Where the foreign input is in the audio (standard subtitling), the subtitling task also entails practice of FL listening comprehension skills (Danan 2004). Where the native language input is in the audio (reverse subtitling), it is FL production skills such as composition, reformulation and spelling that will be practised (Talaván and Rodríguez-Arancón 2014).

(5) A task requires a number of cognitive processes.
Alongside language manipulation, a task involves cognitive operations such as reasoning and perceptual skills. Auditory and visual perception are crucial perceptual skills in audiovisual processing. However, their conceptualisation as
separate components is artificial. In fact, speech perception is a natural multisensory process where both the auditory properties of the speech stream and the visual articulatory attributes of the talker – where available – are automatically attended to (McGurk and MacDonald 1976; Erber 1979). This is also the case during subtitling tasks. When the FL is perceived aurally (standard mode), learners have to combine their listening skills (including the ability to understand intonation, dialects, accents and singing) with their ability to read and interpret signs from the visual input (including character information, movements, proxemics, as well as camera techniques such as close-ups or panoramic views). This integration of auditory and visual perception skills is also crucial when the AV ST is in the native language of the learners (reverse mode), as it will affect and inform the FL output they produce. Ellis maintains that tasks also involve the structuring and restructuring of the FL (2003), which is precisely what, through concept elaboration and problem-solving skills, is achieved during the creation of FL reverse subtitles. Prabhu (1987) discusses reasoning as a thought process involving inferences and perception of patterns and relationships, where deductions, connections and evaluations are made between new and old pieces of information. AV comprehension skills and reasoning abilities are linked during the subtitling task. Within the rich semiotic architecture of the AV text, deductive skills, connections between source and target text, content selection and situation evaluation are needed both to process and produce the FL, for example when learners must go beyond the denotative meaning of words in order to understand or render the affective meaning of an utterance.

(6) A task has a clear communicative outcome.
Finally, a task should have a defined outcome other than the simple use of the FL. The outcome determines whether the task has been completed. For example, a spot-the-difference task would result in the students having a list of differences between the items compared, which constitutes a clear outcome (Ellis 2003). In didactic subtitling, there is such a defined outcome signalling that the task has been completed, namely the production of the subtitled clip. This is a piece of learner-produced language that seeks to communicate a source message to a target audience, so that the latter can have an experience of the AV text equivalent to that of the originally intended audience. Thus, not only will the students produce a tangible, semi-professional result, they will also achieve a well-defined goal by enabling viewers who do not understand the original language to access the AV clip. As we have seen, subtitle creation has specific characteristics that make it a task suitable for use in the communicative language classroom. The position of didactic subtitling in TBLT and the consideration of why this activity can be
considered a task, however, indirectly call for consideration of what features of this task are most relevant to the learning process. We will therefore now explore to what extent and in which ways didactic subtitling can be used to foster language learning in classroom contexts.
5. The role of the subtitling task in the learning process: Reappraisal of form
5.1 Noticing, ‘form’ and FLL

Schmidt’s Noticing Hypothesis (1990 and 2001) proposes that noticing, a concept closely related to attention, may often be a necessity for input to become intake, i.e., to be internalised by the learner and increase its chances of being acquired. So, although some learning may occur without attention, most often focused attention on both forms and meaning is necessary. Since language features are often “infrequent, non-salient and communicatively redundant” (Laufer and Girsai 2008, 697) they may easily be disregarded by the learner unless some attention is focused on their form. In fact, in a still predominantly communicative era, language researchers have been questioning for years the effectiveness of entirely communicative approaches to FLT. Many support the idea that, if successful language learning is to be achieved, some Form-Focused Instruction is needed (Doughty and Williams 1998; Laufer 2006; Loewen 2005; Long 1991). Ellis defines FFI as “any planned or incidental instructional activity that is intended to induce language learners to pay attention to linguistic form” (2001, 1–2), where ‘form’ is used in a broad acceptation, intended as any phonological, lexical, grammatical and pragmatic language aspect focused on during instruction. Therefore, the term includes the function that a particular form fulfils (Laufer 2006). An example would be knowing that the English verbal form -ing, when non-nominalised, usually indicates a continuous action. FFI has been further categorised into Focus on Form (FonF) and Focus on FormS (FonFs). In the former, learners’ attention is drawn to linguistic elements during a communicative task, be it comprehension- or production-based (Ellis 2001). In the latter, learners’ attention is drawn to linguistic elements through “teaching discrete linguistic structures in separate lessons in a sequence determined by syllabus writers” (Laufer and Girsai 2008, 695). FonF can be incidental, if it arises from student need, or planned, if it arises from task design (Laufer 2005). Several scholars maintain that through meaning and communication alone, students might not be able to achieve native-like levels of accuracy (Loewen 2005), native-like speech (Tschirner 2001) or grammatical competence (Laufer 2006). Evidence for this comes from the realisation that some
grammatical structures are not acquired even after years of exposure to comprehensible input in purely communicative situations (Ellis 2001). Tschirner argues that native-like oral production “may occur only when the learner is directed towards the linguistic form in addition to the meaning it encodes” (2001, 308). From a psychological perspective, Sharwood Smith (1993) also argues that both form and meaning must be perceived and processed simultaneously if learner interlanguages are to develop. To do so, he suggests that explicit attention should be drawn to formal properties of the input (which he calls ‘input enhancement’).
5.2 Didactic subtitling and attention to form

Intrinsic to the subtitling task, there are specific time-, space- and picture-related constraints such as minimum and maximum permanence time on screen, maximum number of lines of text and visual ties to the image. The ST often cannot be translated verbatim, naturally requiring reformulating, summarising, sometimes reducing or even omitting information that is not essential to the core message. These technical constraints force learners to prioritise the message and make subtitling an inherently meaning-centred activity, leading the students to use language pragmatically rather than displaying their language knowledge, to engage in an act of communication rather than just practising one pre-selected item, as happens in some traditional exercises and drills. However, while students primarily focus on meaning, they have to concentrate on form too, at least to a certain extent, in order to render said meaning in the translation. In standard subtitling, students have to understand and break down both the FL speech stream and the rest of the multisemiotic content in order to establish what to prioritise and how to transfer the core message into appropriate forms of their native language. In reverse subtitling, understanding the speech stream is not an issue since the auditory input is in the students’ native language. Therefore, students will be able to concentrate their efforts on integrating meaning from linguistic and non-linguistic sources, evaluating and prioritising this information, in order to choose appropriate FL forms to create a coherent piece of FL writing that respects the core message of the original. Students do all the above during the communicative task, while they are mainly concerned with understanding and manipulating messages. From this standpoint, therefore, the act of subtitling can be considered a form of FonF. And indeed, in their comparative study, Laufer and Girsai (2008) treat translation as a form of contrastive FFI. One of the advantages of having students create standard or reverse subtitles is that the contrastive differences between SL and TL are naturally highlighted in the process of filling that information gap between AV text and subtitles. Through this form of highly contextualised translation, students can be quite inventive and have to figure out things for
themselves. This need to take initiative and use their own judgement to link the semiotic codes is likely to increase their awareness of the language. In fact, translation, fiercely criticised for decades in FLT theories, has started to be reconsidered precisely because it has been shown to increase language awareness and provide an opportunity for consciousness raising (see, amongst others, Butzkamm 2003; Scheffler 2013). If, from a pedagogical perspective, we consider translation as a continuum along which different classroom activities can be placed, then translating disconnected, artificial, stand-alone sentences that bear no relevance to learners’ lives (like those used in the much-criticised Grammar Translation method) sits at the diametrically opposite end from translating complex multisemiotic meanings through the creation and addition of subtitles to a piece of rich, authentic video. AVT tasks offer plenty of opportunities for consciousness raising, which should make them a favourable candidate in applied studies aimed at shedding light on the role of the mother tongue in the FL classroom.
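To make the time- and space-related constraints described at the start of this section more concrete, the sketch below checks a draft subtitle against a few typical limits. It is a minimal illustration only: the numeric values (two lines, 42 characters per line, one to six seconds on screen, 17 characters per second) are common professional conventions assumed for the example, not figures given in this article, and a classroom task could of course set different ones.

```python
# Illustrative sketch of the spatio-temporal constraints a subtitler works
# under. The limits below are common conventions used as assumptions here,
# not values prescribed by the article.
from dataclasses import dataclass
from typing import List

@dataclass
class Subtitle:
    start: float        # seconds into the clip
    end: float          # seconds into the clip
    lines: List[str]    # the displayed lines of text

def check_subtitle(sub: Subtitle,
                   max_lines: int = 2,
                   max_chars_per_line: int = 42,
                   min_duration: float = 1.0,
                   max_duration: float = 6.0,
                   max_cps: float = 17.0) -> List[str]:
    """Return a list of constraint violations found in a draft subtitle."""
    problems = []
    duration = sub.end - sub.start
    text_length = sum(len(line) for line in sub.lines)
    if len(sub.lines) > max_lines:
        problems.append(f"too many lines ({len(sub.lines)} > {max_lines})")
    for line in sub.lines:
        if len(line) > max_chars_per_line:
            problems.append(f"line too long ({len(line)} > {max_chars_per_line} characters)")
    if duration < min_duration:
        problems.append(f"on screen too briefly ({duration:.2f}s < {min_duration}s)")
    if duration > max_duration:
        problems.append(f"on screen too long ({duration:.2f}s > {max_duration}s)")
    if duration > 0 and text_length / duration > max_cps:
        problems.append(f"reading speed too high ({text_length / duration:.1f} > {max_cps} characters per second)")
    return problems

if __name__ == "__main__":
    # A deliberately problematic draft: the second line is over-long and the
    # subtitle stays on screen for only 1.2 seconds.
    draft = Subtitle(start=12.0, end=13.2,
                     lines=["I told you already, we are not going",
                            "back there tonight, no matter what he says."])
    for issue in check_subtitle(draft):
        print("-", issue)
```

Subtitling editors typically flag violations of this kind automatically; the point of the sketch is simply to show how the constraints discussed above translate into measurable properties of the learner's output, which in turn force the reformulation and condensation described in this section.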
5.3 Didactic subtitling and production

Another concept that is linked to noticing and relevant to subtitling as a task is that of ‘pushed output’. In her Output Hypothesis, Swain (1985) proposed that noticing the gap between their linguistic resources and a linguistic problem they need to manage pushes learners to look for the adequate knowledge needed to fill that gap. The resulting production is pushed output, which, for Swain, constitutes part of the language learning process. Indeed, translation can be considered a form of pushed output (Laufer and Girsai 2008). When adding subtitles to a rich AV text, students naturally focus on the gap between what the AV text is communicating and their own language resources, which they need to stretch and manipulate in order to get the meaning across. Moreover, closing the gap means generating language, so pushed output is closely related to the process of production. Typically, ‘active’ production skills, such as speaking and writing, are more problematic for learners to master than ‘passive’ ones (Laufer and Girsai 2008). This is the case both because more in-depth knowledge is required to use a word or a structure correctly (pronunciation, spelling, register, different types of meanings, syntagmatic and paradigmatic relationships) and because learners tend to encounter language items receptively more often than they get to practise them actively (Laufer 2005). AVT tasks requiring learners to produce the FL, such as reverse subtitling, monolingual FL subtitling and revoicing practices, can therefore be a particularly fruitful addition to language courses aimed at enabling learners to make active use of language in communicative situations. In addition, words that are acquired productively, i.e., by means of active language use (be it speaking or writing), are less prone to be forgotten than
those acquired passively (Schmitt 1998), providing a further argument for the use of didactic subtitling in FLT as a means of improving productive acquisition of language features.
5.4 Active and passive skills in didactic subtitling

It should be pointed out that dichotomous concepts such as 'passive' and 'active' skills, although commonly used in SLA, do not provide the best fit for the audiovisual environment, where listening and reading – traditionally considered passive activities – can indeed be active (Zabalbeascoa et al. 2012). Nor can didactic subtitling be satisfactorily categorised within the finite categories of listening, reading, writing and speaking. There clearly seems to be more to it than the development of linguistic competence. In fact, more than ten years ago, Gambier already acknowledged that subtitling can be considered translating only "if translation is not viewed as purely a word-for-word transfer" (2003, 179) and takes into account the multimodality of AV communication. Although this terminology is used herein for ease of reference to the SLA literature and in order to make clear what is being referred to in widely understood FLL terms, I support Incalcaterra McLoughlin and Lertola's (2014) argument that the traditional four-skill model may be too restrictive when multimodal meaning is conveyed through the AV medium. Zabalbeascoa et al. (2012) attempted a solution by introducing the concept of AV literacy and proposed six AV-specific skills: AV-watching, AV-listening, AV-reading, AV-speaking, AV-writing and AV-production. Although it is beyond the scope of this article to address these skills directly, they appropriately highlight the need to update linguistic models and classifications in light of the complex set of semiotic relationships that inform multimodal AV communication (see Zabalbeascoa 2008).
5.5 Didactic subtitling and interaction

Within her Output Hypothesis, Swain (1993) also highlighted how, in order to be most useful, student production needs to happen in a meaning-focused environment and through interactional exchanges. Gass (1997) considers FL conversational interaction the basis for FL grammar development. And indeed, interaction has been found to improve second language development (Ohta 2000). One of the most evident drawbacks of the subtitling task seems to be that it is not interactional per se, since it typically involves only the learner and the AV text. This need not be so, however. The great potential of the AV medium also lies in its versatility. The subtitling task could be modified so that students engage in group work, which is considered central to task-based teaching (Ellis 2003).
For example, students could create subtitles in pairs to produce a single, final, agreed-upon translation. If one looks beyond didactic subtitling, other opportunities for both oral and written interaction arise, e.g., through revoicing activities (e.g., two students dub a dialogue) or through chatroom-based exchanges revolving around an AV text (Arslanyilmaz and Pedersen 2010). While productive skills practised through didactic subtitling are valuable, and should find a more stable place in the language classroom, the value of other tasks, in particular oral conversation, remains a given. Since oral tasks are crucial to language development, students should still be allowed ample time to communicate interactionally, especially in FL rather than SL contexts, where classroom hours may be the only opportunity they have to practise the FL.
5.6 Didactic subtitling and multiple exposures

Despite the encouraging findings in the AVT literature reviewed above, one must bear in mind that, as the cognitive psychology literature teaches us, elaborate processing alone is unlikely to result in acquisition. New information – even rich, authentic input such as that provided by audiovisuals – is unlikely to leave a lasting trace in memory if not frequently reactivated (Hulstijn 2001). In fact, some research in AVT and language teaching has already highlighted the need to create classroom activities that ensure multiple exposures (Bueno 2009). If didactic subtitling activities are designed to extend over more than one class, they allow for the reactivation of previously learnt words and structures. For example, a video clip could be introduced and the subtitling task started in a first class (first exposure). The students could then be asked to complete the task at home (second exposure). Finally, in the following class, the clip could be watched again, the task revisited, and reinforcement activities and exercises carried out (third exposure). This structure would seem to work best if the classes were some time apart. If classes were close together, however, the structure could still be adapted, for example by removing the homework phase. Alternatively, reactivation can be promoted by presenting new clips that contain 'old' language students are already familiar with, for example by taking two excerpts from the same source video, or TV series episodes that build on each other. This type of input processing has been termed i − 1 (Day and Bamford 1998), since the level of the clip will be just below (− 1) the current level of competence of the learners (i). In introducing this concept, Day and Bamford mirrored Krashen's idea of comprehensible input, whereby, in order for new language to be acquired, the input has to contain elements just above the learner's knowledge, that is, at the i + 1 level (Krashen 1982). Practising at the i + 1 level fosters language development, while doing so at the i − 1 level fosters fluency (Bruton and Alonso Marks 2004) and is used in
automaticity training (Day and Bamford 1998), especially to reinforce sight vocabulary, i.e., all the “[w]ords that readers are able to recognise automatically” (1998, 13). Since the level of the clip will be just below the level of competence of the students, they will encounter familiar language and comprehend most of the content, which may also boost motivation and provide a sense of satisfaction. In these cases, one might speak of comprehended input (Gass 1997) rather than comprehensible input. So, while the concept of the i + 1 level (Krashen 1982) is certainly relevant and has been referred to in the AVT literature (Incalcaterra McLoughlin and Lertola 2014), practicing at the i − 1 level also has an educational value as reinforcement and rehearsal of previously learnt content.
5.7 TBLT and unpredictability

Finally, a drawback intrinsic to TBLT is that tasks, by their very nature, make both the production and the acquisition of specific forms unpredictable. Since no explicit indication of what forms to use can be given in the rubric and the choice of linguistic resources is left to the learner, one cannot be sure they will produce a given word or structure, even when the task is focused. When reverse subtitling is employed in the classroom, the teacher cannot predict whether students will produce particular FL items, even when the subtitling activity was designed to elicit their use. If production is not certain, accurate production is even less so. In some cases, if a student has made a mistake that does not cause communication to break down, they might not notice their mistake and therefore make no effort to correct it. In addition, the teacher might not find an opportunity to elicit the correct use of the problematic form in a purely task-based learning environment, so students could achieve fluency at the expense of accuracy. To prevent this from happening and to capitalise on learners' language development, Willis maintains that another stage is needed after the task cycle, where instruction examines language forms and entails a level of analysis, in order to "get students to identify and think about particular features of language form and language use" (1996, 102). In fact, such focusing of learner attention on formal features of the language, as Tschirner (2001) notes, could be one of the key advantages that classroom-based instruction has over natural learning.
6. One step further: Focus on FormS
Didactic subtitling creates a situation where form is attended to as student needs arise during communicative tasks. However, as we have seen, the subtitling task alone, despite the rich and meaningful multimodal environment, may not always result in the internalisation of the FL. In some cases, therefore, a more explicit FonFs phase might be necessary.
In such a phase, formal instruction is given, language is treated as the object of study rather than as a communication tool, and students relate to the language as learners rather than users. A number of classroom studies (Lightbown and Spada 1990; White et al. 1991; Spada and Lightbown 1993) have indirectly questioned uninstructed positions in FLL by showing that explicit rule teaching and error correction are superior to implicit learning (DeKeyser 1998). Moreover, Form-Focused Instruction approaches have been found to accelerate the rate of learning (see Long and Robinson 1998) and raise ultimate attainment levels (Pavesi 1986; Eckman et al. 1988, in Long 1991, 47) compared to naturalistic settings where exposure to positive input may be large but formal instruction is almost absent. Some grammar features are certainly more difficult to master without explicit focus on form than others. For example, White (1989) demonstrated that the adjacency principle in English adverb placement was not successfully learnt by French native speakers through positive input alone. This would suggest that some form of additional salience or negative evidence might be required, at least in some cases. Salience can be achieved through input enhancement, for example by highlighting target words or structures through typeface (underlining, italic, bold, etc.) or colouring. Negative evidence can be explicit or implicit and includes grammar rules, overt error correction, recasts and repairs when communication breaks down (Long and Robinson 1998). The body of evidence presented herein suggests that a more explicit FonFs phase after the subtitling task would be beneficial to learners and in some cases might be the only way to effectively enable them to achieve accuracy in linguistic production. Willis and Willis (2007) note that introducing and practising individual forms right before a task is likely to affect the learners so that they will be less likely to focus on getting the meaning across and more likely to display their knowledge of those forms. Therefore, however the FonFs phase is implemented, it is usually presented after the task phase to avoid conditioning the learners. Drawing attention to specific formS related to the topic addressed in the video after the students have completed a subtitling task can be achieved in many ways, including through salience or negative evidence. However, as noted by Spada et al. (2005), how explicitly learner attention is drawn to linguistic forms can vary dramatically, both in the presentation of rules and in feedback on error. In some cases, input can be enhanced post-task without overt instruction or error correction. In AVT, for example, other types of AV input, such as keyword subtitles, could be integrated in a post-subtitling FFI phase as a form of input enhancement. Although it is beyond the scope of this chapter to address the specifics of how FonFs can be achieved, the reader is referred to Willis and Willis (2007) for a comprehensive treatment of task-based teaching that includes detailed examples of how to incorporate FonF and FonFs activities in a task-based curriculum.
Some of the activities they present would require a degree of adaptation, in light of the characteristics of the AV medium and its specific meaning-making process, but they could be a starting point for a principled integration of didactic subtitling (as well as other AV learning tasks) and a task-related FonFs phase. How such integration may best be achieved is yet to be established, and this warrants further investigation into the topic, through both experimental and pedagogical applications.
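As a brief aside on the keyword-subtitle option mentioned above, typographic input enhancement can be added to learner- or teacher-made subtitles quite mechanically in a post-task phase. The sketch below (in Python) is a minimal illustration under two assumptions: the target items are a hypothetical list chosen by the teacher, and the player used to replay the clip accepts the basic formatting tags (such as <b>) commonly supported in SubRip files. It is not a procedure taken from the studies reviewed here.

import re

# Minimal sketch: typographic input enhancement of subtitle text.
# The target items below are hypothetical; basic tags such as <b> are widely,
# though not universally, supported by subtitle players.
TARGET_ITEMS = ["run out of", "deadline", "afford"]

def enhance(subtitle_text, targets=TARGET_ITEMS):
    """Wrap every occurrence of a target item in bold tags."""
    for item in targets:
        pattern = re.compile(re.escape(item), re.IGNORECASE)
        subtitle_text = pattern.sub(lambda m: f"<b>{m.group(0)}</b>", subtitle_text)
    return subtitle_text

print(enhance("We've run out of time and we can't afford to miss the deadline."))
# -> We've <b>run out of</b> time and we can't <b>afford</b> to miss the <b>deadline</b>.

Replaying the clip with the enhanced subtitles after the task keeps the focus on meaning during the task cycle itself, while still drawing attention to the pre-selected forms afterwards.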
7. Conclusions

This chapter has reviewed the literature on didactic subtitling and its findings to date, explained why this AVT type can be considered a task and discussed how it can fit within a task-based view of language learning and teaching. It has also described the learning process by revisiting some cognitive constructs such as attention, noticing, reactivation and pushed output, and demonstrated their relevance to the subtitling task. Starting from recent research that refutes purely communicative approaches, it has also addressed FFI, considering both FonF and FonFs in relation to didactic subtitling. The reasons why this AVT mode is a FonF activity were explained by examining the relationship between processing form and meaning during subtitling. Finally, this chapter has argued in favour of the integration of a more overtly instructional FonFs phase after the subtitling task. While research in FLL has undergone major changes in the last few decades, Tschirner (2001, 306) asked: "have these changes also affected classroom realities?" I now propose they have started to do so, as demonstrated by the AVT studies mentioned at the beginning of this chapter. Of course, shifting from theoretical considerations to practical implementation is rarely a straightforward exercise, and several open questions remain. However, by making explicit some of the cognitive underpinnings of subtitling as a task and demonstrating that it is compatible with widely accepted modern SLA theories, this chapter has shown that this multimodal translation practice has specific acquisitional potential and why it should be investigated further in the context of SLA. As first Spencer (1991) and then Borrás and Lafayette remind us, "it is not the provision of technology in itself but its application in education that will affect learner performance" (1994, 71–72, italics in the original). Therefore, it is only through empirical applications of didactic subtitling such as the ones reviewed herein that evidence can be gathered in order to further knowledge on the topic. Finally, addressing the integration of didactic subtitling and FFI in TBLT provides a starting point for discussing methodological proposals regarding the inclusion of audiovisuals in FL classes and syllabus design. By addressing such topics, it is hoped that teachers and practitioners will
feel inspired to incorporate subtitling tasks or other AV(T) activities in their classroom practice.
References

Arslanyilmaz, Abdurrahman, and Susan Pedersen. 2010. "Improving Language Production via Subtitled Similar Task Videos." Language Teaching Research 14 (4): 377–395. https://doi.org/10.1177/1362168810375363
Borghetti, Claudia, and Jennifer Lertola. 2014. “Interlingual Subtitling for Intercultural Language Education: A Case Study.” Language and Intercultural Communication 14 (4): 423–440. https://doi.org/10.1080/14708477.2014.934380 Borrás, Isabel, and Robert C. Lafayette. 1994. “Effects of Multimedia Courseware Subtitling on the Speaking Performance of College Students of French.” The Modern Language Journal 78 (1): 61–75. https://doi.org/10.1111/j.1540‑4781.1994.tb02015.x Bruton, Anthony, and Emilia Alonso Marks. 2004. “Reading Texts in Instructed L1 and FL Reading: Student Perceptions and Actual Selections.” HISPANIA 87 (4): 770–783. https://doi.org/10.2307/20140909
Bueno, Kathleen A. 2009. “Got Film? Is It a Readily Accessible Window to the Target Language and Culture for Your Students?” Foreign Language Annals 42 (2): 318–339. https://doi.org/10.1111/j.1944‑9720.2009.01023.x
Butzkamm, Wolfgang. 2003. “We Only Learn Language Once. The Role of the Mother Tongue in FL Classrooms: Death of a Dogma.” Language Learning Journal 28: 29–39. https://doi.org/10.1080/09571730385200181
Bygate, Martin, Peter Skehan, and Merrill Swain, eds. 2001. Researching Pedagogic Tasks. Second Language Learning, Teaching and Testing. Harlow: Longman. Danan, Martine. 2004. “Captioning and Subtitling: Undervalued Language Learning Strategies.” Meta: Translators’ Journal 49 (1): 67–77. https://doi.org/10.7202/009021ar Day, Richard R., and Julian Bamford. 1998. Extensive Reading in the Second Language Classroom. New York: Cambridge University Press. https://doi.org/10.1177/003368829802900211
DeKeyser, Robert M. 1998. “Beyond Focus on Form: Cognitive Perspectives on Learning and Practicing Second language.” In Focus on Form in Classroom Second Language Acquisition, ed. by Catherine Doughty, and Jessica Williams, 42–63. Cambridge: Cambridge University Press. Delabastita, Dirk. 1990. “Translation and the Mass Media.” In Translation, History and Culture, ed. by Susan Bassnett, and André Lefevere, 96–109. London: Pinter Publishers. Doughty, Catherine, and Jessica Williams, eds. 1998. Focus on Form in Classroom Second Language Acquisition. Cambridge: Cambridge University Press. Eckman, Fred R., Lawrence E. Bell, and Diane Nelson. 1988. “On the Generalisation of Relative Clause Instruction in the Acquisition of English as a Second Language.” Applied Linguistics 9 (1): 1–20. https://doi.org/10.1093/applin/9.1.1 Ellis, Rod. 2001. “Introduction: Investigating Form‐Focused Instruction.” Language Learning 51 (s1): 1–46. https://doi.org/10.1111/j.1467‑1770.2001.tb00013.x Ellis, Rod. 2003. Task-Based Language Learning and Teaching. Oxford: Oxford University Press.
Erber, Norman P. 1979. “Speech Perception by Profoundly Hearing-impaired Children.” Journal of Speech and Hearing Disorders 44: 255–270. https://doi.org/10.1044/jshd.4403.255 Gambier, Yves. 2003. “Introduction: Screen Transadaptation. Perception and Reception.” Screen Translation, ed. by Yves Gambier, special issue of The Translator 9 (2): 171–189. Gass, Susan M. 1997. Input, Interaction, and the Second Language Learner. Mahwah, NJ: Lawrence Erlbaum Associates. Ghia, Elisa. 2011. “The Acquisition of L2 Syntax through Audiovisual Translation.” In Audiovisual Translation in Close-up: Practical and Theoretical Approaches, 1st ed. by Adriana Şerban, Anna Matamala and Jean-Marc Lavaur, 95–112. Bern: Peter Lang. Gottlieb, Henrik. 1994. “Subtitling: Diagonal Translation.” Perspectives: Studies in Translatology 2 (1): 101–121. https://doi.org/10.1080/0907676X.1994.9961227 Hernández Bartolomé, Ana Isabel, and Gustavo Mendiluce Cabrera. 2005. “New Trends in Audiovisual Translation: The Latest Challenging Modes.” Miscelánea: A Journal of English and American Studies 31: 89–104. Hulstijn, Jan H. 2001. “Intentional and Incidental Second Language Vocabulary Learning: A Reappraisal of Elaboration, Rehearsal and Automaticity.” In Cognition and Second Language Instruction, ed. by Peter Robinson, 258–286. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139524780.011 Incalcaterra McLoughlin, Laura, and Jennifer Lertola. 2011. “Learn through subtitling: Subtitling as an Aid to Language Learning.” In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 243–263. Oxford: Peter Lang. https://doi.org/10.3726/978‑3‑0353‑0167‑0
Incalcaterra McLoughlin, Laura, and Jennifer Lertola. 2014. “Audiovisual Translation in Second Language Acquisition. Integrating Subtitling in the Foreign-language Curriculum.” The Interpreter and Translator Trainer 8 (1): 70–83. https://doi.org/10.1080/1750399X.2014.908558
Incalcaterra McLoughlin, Laura, and Jennifer Lertola. 2015. "Captioning and Revoicing of Clips in Foreign Language Learning Using ClipFlair for Teaching Italian in Online Learning Environments." In The Future of Italian Teaching, ed. by Catherine Ramsey-Portolano, 55–69. Newcastle upon Tyne: Cambridge Scholars Publishing. Kantz, Deirdre. 2015. "Multimodal Subtitling – A Medical Perspective." In Subtitles and Language Learning. Principles, Strategies and Practical Experiences, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 269–292. Bern: Peter Lang. Krashen, Stephen D. 1982. Principles and Practice in Second Language Acquisition. Oxford: Pergamon Press. Lambert, José, and Dirk Delabastita. 1996. "La Traduction de Textes Audiovisuels: Modes et Enjeux Culturels." In Les Transferts Linguistiques dans les Médias Audiovisuels, ed. by Yves Gambier, 33–58. Villeneuve d'Ascq: Presses Universitaires du Septentrion. Laufer, Batia. 2005. "Focus on Form in Second Language Vocabulary Learning." In EUROSLA Yearbook 5, ed. by Susan H. Foster-Cohen, María del Pilar García Mayo, and Jasone Cenoz, 223–250. Amsterdam: John Benjamins. Laufer, Batia. 2006. "Comparing Focus on Form and Focus on Forms in Second-Language Vocabulary Learning." Canadian Modern Language Review 63 (1): 149–166. https://doi.org/10.3138/cmlr.63.1.149
Laufer, Batia, and Nany Girsai. 2008. “Form-focused Instruction in Second Language Vocabulary Learning: A Case for Contrastive Analysis and Translation.” Applied Linguistics 29 (4): 694–716. https://doi.org/10.1093/applin/amn018 Lertola, Jennifer. 2012. “The Effect of the Subtitling Task on Vocabulary Learning.” In Translation Research Project 4, ed. by Anthony Pym, and David Orrego-Carmona, 61–70. Intercultural Studies Group: Tarragona. Lertola, Jennifer. 2015. “Subtitling in Language Teaching: Suggestions for Language Teachers.” In Subtitles and Language Learning. Principles, Strategies and Practical Experiences, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 245–267. Bern: Peter Lang. Lightbown, Patsy M., and Nina Spada. 1990. “Focus on Form and Corrective Feedback in Communicative Language Teaching: Effects on Second Language Learning.” Studies in Second Language Acquisition. 12 (4): 429–448. https://doi.org/10.1017/S0272263100009517 Loewen, Shawn. 2005. “Incidental Focus on Form and Second Language Learning.” Studies in Second Language Acquisition 27 (3): 361–386. https://doi.org/10.1017/S0272263105050163 Long, Michael. 1991. “Focus on Form: A Design Feature in Language Teaching Methodology.” In Foreign Language Research in Cross-cultural Perspective, ed. by Kees de Bot, Ralph B. Ginsberg, and Claire Kramsch, 39–52. Amsterdam: John Benjamins. https://doi.org/10.1075/sibil.2.07lon
Long, Michael, and Peter Robinson. 1998. “Focus on form: Theory, Research and Practice.” In Focus on Form in Classroom Second Language Acquisition, ed. by Catherine Doughty, and Jessica Williams, 15–41. Cambridge: Cambridge University Press. Lopriore, Lucilla, and Maria Angela Ceruti. 2015. “Subtitling and Language Awareness: A Way and Ways.” In Subtitles and Language Learning. Principles, Strategies and Practical Experiences, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 293–321. Bern: Peter Lang. McGurk, Harry, and John MacDonald. 1976. “Hearing Lips and Seeing Voices.” Nature 264: 746–748. https://doi.org/10.1038/264746a0 Neves, Josélia. 2004. “Language Awareness through Training in Subtitling.” In Topics in Audiovisual Translation, ed. by Pilar Orero, 127–140. Amsterdam & Philadelphia: John Benjamins. https://doi.org/10.1075/btl.56.14nev Ohta, Amy S. 2000. “Rethinking Interaction in SLA: Developmentally Appropriate Assistance in the Zone of Proximal Development and the Acquisition of L2 Grammar.” In Sociocultural Theory and Second Language Learning, ed. by James P. Lantolf, 51–78. Oxford: Oxford University Press. Pavesi, Maria. 1986. “Markedness, Discoursal Modes, and Relative Clause Formation in a Formal and an Informal Context.” Studies in Second Language Acquisition 8: 138–155. https://doi.org/10.1017/S0272263100005829
Prabhu, N. S. 1987. Second Language Pedagogy. Oxford: Oxford University Press. Scheffler, Paweł. 2013. "Learners' Perceptions of Grammar-Translation as Consciousness Raising." Language Awareness 22 (3): 255–269. https://doi.org/10.1080/09658416.2012.703673 Schmidt, Richard. 1990. "The Role of Consciousness in Second Language Learning." Applied Linguistics 11: 129–158. https://doi.org/10.1093/applin/11.2.129 Schmidt, Richard. 2001. "Attention." In Cognition and Second Language Instruction, ed. by Peter Robinson, 3–32. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139524780.003
Schmitt, Norbert. 1998. “Tracking the Incremental Acquisition of Second Language Vocabulary: A Longitudinal Study.” Language Learning 48 (2): 281–317. https://doi.org/10.1111/1467‑9922.00042
Sharwood Smith, Michael A. 1993. “Input Enhancement in Instructed SLA: Theoretical Bases.” Studies in Second Language Acquisition 15: 165–179. https://doi.org/10.1017/S0272263100011943
Skehan, Peter. 1996. “A Framework for the Implementation of Task-Based Instruction.” Applied Linguistics 17 (1): 38–62. https://doi.org/10.1093/applin/17.1.38 Sokoli, Stavroula. 2006. “Learning via Subtitling (LvS): A Tool for the Creation of Foreign Language Learning Activities Based on Film Subtitling.” In Proceedings of the Marie Curie Euroconferences MuTra: Audiovisual Translation Scenario, Copenhagen, 1–5 May, ed. by Mary Carroll, and Heidrun Gerzymisch-Arbogast, 66–73. Sokoli, Stavroula. 2015. “ClipFlair: Foreign Language Learning through Interactive Revoicing and Captioning of Clips.” In Subtitles and Language Learning. Principles, Strategies and Practical Experiences, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 127–147. Bern: Peter Lang. Sokoli, Stavroula, Patrick Zabalbeascoa, and Maria Fountana. 2011. “Subtitling Activities for Foreign Language Learning: What Learners and Teachers Think.” In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 219–242. Oxford: Peter Lang. Spada, Nina, and Patsy M. Lightbown. 1993. “Instruction and the Development of Questions in L2 Classrooms.” Studies in Second Language Acquisition 15 (2): 205–221. https://doi.org/10.1017/S0272263100011967
Spada, Nina, Patsy M. Lightbown, and Joanna L. White. 2005. “The Importance of Form/Meaning Mappings in Explicit Form-Focused Instruction.” In Investigations in Instructed Second Language Acquisition, ed. by Alex Housen, and Michel Pierrard, 199–234. Berlin: Mouton de Gruyter. https://doi.org/10.1515/9783110197372.2.199 Spencer, Ken. 1991. “Modes, Media and Methods: The Search for Effectiveness.” British Journal of Educational Technology 22: 12–22. https://doi.org/10.1111/j.1467‑8535.1991.tb00048.x Swain, Merrill. 1985. “Communicative Competence: Some Roles of Comprehensible Input and Comprehensible Output in its Development.” In Input in Second Language Acquisition, ed. by Susan Gass, and Carolyn Madden, 235–256. New York: Newbury House. Swain, Merrill. 1993. “The Output Hypothesis: Just Speaking and Writing Aren’t Enough.” Canadian Modern Language Review, 50 (1): 158–164. https://doi.org/10.3138/cmlr.50.1.158 Talaván, Noa. 2010. “Subtitling as a Task and Subtitles as Support: Pedagogical Applications.” In New Insights into Audiovisual Translation and Media Accessibility, ed. by Jorge Díaz Cintas, Anna Matamala, and Josélia Neves, 285–299. Amsterdam: Rodopi. https://doi.org/10.1163/9789042031814_021
Talaván, Noa. 2011. “A Quasi-experimental Research Project on Subtitling and Foreign Language Acquisition.” In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 197–218. Oxford: Peter Lang. Talaván, Noa. 2013. La subtitulación en el aprendizaje de lenguas extranjeras. Barcelona: Octaedro.
Talaván, Noa, and José J. Ávila-Cabrera. 2015. “First Insights into the Combination of Dubbing and Subtitling as L2 Didactic Tools.” In Subtitles and Language Learning. Principles, Strategies and Practical Experiences, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 149–172. Bern: Peter Lang. Talaván, Noa, and Pilar Rodríguez-Arancón. 2014. “The Use of Reverse Subtitling as an Online Collaborative Language Learning Tool.” The Interpreter and Translator Trainer 8 (1): 84–101. https://doi.org/10.1080/1750399X.2014.908559 Tschirner, Erwin. 2001. “Language Acquisition in the Classroom: The Role of Digital Video.” Computer Assisted Language Learning 14: 305–319. https://doi.org/10.1076/call.14.3.305.5796 White, Lydia. 1989. “The Adjacency Condition on Case Assignment: Do L2 Learners Observe the Subset Principle?” In Linguistic Perspectives on Second Language Acquisition, ed. by Susan M. Gass, and Jacquelyn Schachter, 134–158. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139524544.010 White, Lydia, Nina Spada, Patsy Lightbrown and Leila Ranta. 1991. “Input Enhancement and L2 Question Formation.” Applied Linguistics 12 (4): 416–432. https://doi.org/10.1093/applin/12.4.416
Williams, Helen, and David Thorne. 2000. "The Value of Teletext Subtitling as a Medium for Language Learning." System 28 (2): 217–228. https://doi.org/10.1016/S0346‑251X(00)00008‑7 Willis, Jane. 1996. A Framework for Task-Based Learning. Harlow: Longman. Willis, Dave, and Jane Willis. 2007. Doing Task-Based Teaching. Oxford: Oxford University Press. Zabalbeascoa, Patrick. 2008. "The Nature of the Audiovisual Text and its Parameters." In The Didactics of Audiovisual Translation, ed. by Jorge Díaz Cintas, 21–37. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.77.05zab Zabalbeascoa, Patrick, Stavroula Sokoli and Olga Torres. 2012. "ClipFlair: Foreign Language Learning Through Interactive Revoicing and Captioning of Clips. Lifelong Learning Programme – Key Activity 2. Languages, Multilateral Project. D2.1. Conceptual Framework and Pedagogical Methodology." Accessed May 18, 2017. http://clipflair.net/wp-content/uploads/2014/06/D2.1ConceptualFramework.pdf
A pedagogical model for integrating film education and audio description in foreign language acquisition

Carmen Herrero and Manuela Escobar
Manchester Metropolitan University | Universidad de Sevilla
Films are particularly powerful pedagogical tools that can help improve the linguistic skills of foreign language learners. Audio describing tasks can provide additional benefits. However, for an efficient use of feature films, learners need to be trained on how to elaborate audio description texts and develop active viewing strategies. This article discusses a language teaching approach that advocates the addition of Film Literacy education and audio description tasks to the language curriculum. It focuses on the application of audio description, in both oral and written form, to the acquisition of Spanish as a foreign language in Higher Education. It presents a pedagogical model designed to help students develop linguistic, cultural and intercultural competences while encouraging the aesthetic appreciation of films as cultural objects that can be evaluated through a wide range of critical approaches.

Keywords: film literacy, audio description, language acquisition, pedagogy
1. Introduction
Audiovisual material is a powerful pedagogical tool widely used to improve the linguistic skills of Foreign Language (FL) learners. In the last decade in particular, it has been profitably employed to incorporate audiovisual translation tasks in FL teaching and, more recently, Audio Description (AD) tasks have also been exploited, with encouraging results suggesting that they can help improve the linguistic skills of FL learners. However, for an efficient use of feature films in language teaching, learners would benefit from training on how to elaborate AD texts and develop active viewing strategies. This article discusses a language teaching approach that advocates the addition of education in Film Literacy and AD tasks to the language curriculum.1
It focuses on the application of AD, in both oral and written form, to the acquisition of Spanish as a FL in Higher Education (HE). It presents a pedagogical model designed to help students develop linguistic, cultural and intercultural competences while encouraging the aesthetic appreciation of films as cultural objects that can be evaluated through a wide range of critical approaches.

In the first section we begin by offering a brief analysis of the different approaches to teaching and learning a FL on which our proposal is based. The pedagogical model, which we started to develop over a decade ago, supports the principle that working with multimodal texts can address the educational needs of intercultural awareness and film literacy, adding a new nuance to the notion of Content and Language Integrated Learning (CLIL). Considering films as multimodal texts, the model leads students to pay attention to the multiple modes of meaning and how they interact to render a comprehensive linguistic, social, cultural, and intercultural description. In the second section, we argue that Film Literacy is a type of literacy that has often been overlooked in the language curricula of institutional education at both Secondary and HE levels. We suggest that applying the three key dimensions of film education – Creative, Critical and Cultural (British Film Institute 2008; 2010a; 2010b; 2013) – to AD tasks improves motivation and aids the language learning process. After a brief overview of the use of audiovisual texts for language learning and teaching and the application of audiovisual translation to foreign language acquisition, this second part focuses on a pedagogical approach based on the combination of AD and Film Literacy education. We suggest that, by integrating AD as part of the FL curriculum, learners benefit from the acquisition of a wide range of tools and skills and develop "film sensibility" (British Film Institute 2013). Furthermore, language learning and film education activities equip learners with crucial transferable skills (creativity, critical thinking and cultural/intercultural awareness) that are highly valued by employers. These activities can potentially contribute to lifelong learning by making learners competent and skilled users of media as well as information and communication technologies.
1. Film Literacy has more recently been defined as “the level of understanding of a film, the ability to be conscious and curious in the choice of films; the competence to critically watch a film and to analyse its content, cinematography and technical aspects; and the ability to manipulate its language and technical resources in creative moving image production.” (British Film Institute 2013, 8).
2. Methodological foundation: A literature review of the principles
The framework presented in this study integrates different approaches to teaching and learning a FL. First, it is grounded in the concept of communicative competence (Canale and Swain 1980), further developed into the concepts of grammatical, discourse, sociolinguistic and strategic competences (Canale 1983) and interactional competence (Celce-Murcia 1995; Celce-Murcia et al. 1995). It partially adapts the Task-Based Learning (TBL) theory that proposes the acquisition of a foreign language based on tasks (Nunan 1989), with emphasis on 'transferable skills' (Holmes 1995) and learning by doing.2 It follows the principles of Project-Based Learning (PBL) that include authentic content and cooperative learning (Thomas 2000). Three of the critical concerns for PBL are integrating technology, assessment (computer or physical models, videos, games, writing samples, plays or exhibits), and scaffolding the learning process. A central goal of this theoretical approach is to foster students' acquisition of 21st century competences (Condliffe et al. 2016). This pedagogical proposal is also grounded in the literacy-based approaches guiding some of the predominant curricular and pedagogical reforms directing current FL teaching (Kern 2003; Paesani et al. 2016). The following sections provide a brief summary of these approaches before outlining some research results that have contributed to expanding the use and value of audiovisual media in FL instruction.

2. Nunan (1989, 10) defined task as a "piece of classroom work which involves learners in comprehending, manipulating, producing or interacting in the target language, while their attention is principally focused on meaning rather than form."
2.1 Literacy-based approaches and multimodality

The overall shift brought by the rapid expansion of the Internet and the wide range of practices linked to the new information and communication technologies (ICTs) has turned literacy into a broader, more plural concept. To capture the complexity and changing nature of the term literacy, Leu et al. (2013) opt for a multiple theoretical perspective divided into two levels: lowercase (new literacies) and uppercase (New Literacies). Lowercase theories include those that focus on a specific discipline or area of new literacy and new technology. New Literacies look at the common elements across the theoretical research and practices of the lowercase literacies. One of the central principles of New Literacies is that they are "multiple, multimodal, and multifaceted" (Leu et al. 2013, 1158). Within this context, the term 'multiliteracies' was coined by the New London Group to account for the rapid changes in the concept of literacy, due to globalisation,
technology and increasing linguistic, cultural and social diversity.3

3. The New London Group refers to the ten leaders in the field of literacy pedagogy who met in 1994 in the small town of New London, New Hampshire, in order to discuss the growing importance of cultural and linguistic diversity and multimodal literacy due to the power of new communication technologies. The outcome of their discussions was encapsulated under the term 'Multiliteracies'.

Their seminal work, "A Pedagogy of Multiliteracies: Designing Social Futures", proposes that literacy "now must account for the burgeoning variety of text forms associated with information and multimedia technologies" (1996, 61). Multimodality is another common principle related to New Literacies. Indeed, since the development of Web 2.0, and given the ever-increasing role of visual information in the digital age, communication is increasingly based on multimodal texts, which can be defined as "texts that communicate their message using more than one semiotic mode, or channel of communication" (Openlearn 2010, online). Challenging the traditional view of the dominant role of written texts in teaching and learning, Gunther Kress and Theo van Leeuwen (2001) argued that other modes of communication (such as image, gesture, music, spatial and bodily codes) could also contribute to the multimodal ways of meaning-making and knowledge construction. They identify five design elements in the meaning-making process represented in Figure 1: Linguistic, Visual, Audio, Gestural and Spatial meaning. The multimodal patterns of meaning are combinations of the above semiotic codes. Therefore, multimodal literacies refer to the meaning-making that takes place when interacting with and producing multimodal texts. They focus on the 'modal affordances' and the orchestration and interaction of semiotic resources or modes (language, images, gesture, etc.) in different modalities (visual, aural, haptic, olfactory, and gustatory) during the design of multimodal texts or genres (blogs, posters, websites, films, etc.) (Kress 2010). Kress (2003) noted the cultural, social and discourse values inherent to multimodal texts. Unsurprisingly, educators and researchers are calling for multiple multimodal text exposures in FL, including their use as instructional tools and in creative projects (Chan and Herrero 2010; Baños and Sokoli 2015; Paesani et al. 2016). However, research on the use of multimodal texts in FL settings reveals that, although language learners "develop awareness and understanding of the synesthetic relationship between multimodal resources for making meaning", they do not take "full advantage of the meaning potential of these new modalities" (Paesani et al. 2016, 242). There is no doubt that the impact of audiovisual media on citizens' lives makes the acquisition of critical and creative competences through effective Film and Media Literacy teaching more relevant (Wilson et al. 2011). However, it is not an area that has
Figure 1. Diagram by Cope and Kalantzis (2000), redesigned by decafnomilk.com in Chan and Herrero (2010)
been explored thoroughly in the language classroom (FILTA 2010; Thaler 2014; Herrero 2016). Another level of multiplicity refers to the new social practices and skills necessary to interact online with information and individuals from different social and cultural backgrounds. Therefore, the need for developing intercultural understanding becomes a key issue in any educational model (Byram et al. 2013; Dervin and Liddicoat 2013); and it is recognised as a fundamental soft skill in the literature relevant to the employment prospects of HE graduates (Jones 2013; British Academy 2016). Nevertheless, as noted by Pegrum (2008), Herrero (2009) and Barrett et al. (2014), films are still an underexploited resource for promoting intercultural competence and developing learners’ critical thinking skills. The practices for new literacies require a very different set of values, priorities and attitudes. The advances in ICT encourage participatory and collaborative practices and sharing with others, giving more value to the distribution of
information and knowledge than to the recognition of authorship (Lankshear and Knobel 2003). Henry Jenkins (2008) uses the term participatory culture to explain the growth of user-generated content, ‘distributed cognition’ and ‘collective intelligence’. This new ethos is gradually permeating the development of new literacy strategies in education (Jenkins et al. 2009). The pedagogical model presented in this article is based on the aforementioned methodological principles. In particular, it takes into account the changes in the concept of the term literacy due to the new social, cultural and technological practices, and captures the paramount importance of multimodal communication and Media Literacy in FL learning and teaching.
2.2 The use of audiovisual texts for language learning and teaching

Audiovisual media allow for the simultaneous reception of audio and visual input, e.g., when watching television, videos or films with subtitles. Teachers and researchers have valued audiovisual texts as a resource for improving different areas of linguistic competence for over two decades. Herron (1994) showed, in a comparative study of video versus text materials carried out among university students in France, that the use of video improves listening comprehension. Weyers (1999) confirmed a similar hypothesis for Spanish as an FL through the use of soap operas for several months as part of a guided and structured task inserted into the curriculum. The results also confirmed an improvement in the quantity and quality of the oral production of students exposed to the audiovisual task. Other researchers have corroborated this (Chapple and Curtis 2000, among others). Focusing on language acquisition and more specifically on oral and aural skills, audiovisual texts are generally used to offer students access to a wide range of voices and accents from different geographical areas. They expose FL learners to linguistic varieties (geographical, social, diachronic, situational) as well as to different jargon. Finally, creative tasks and presentations using audiovisual material can help to refine intonation and pronunciation (Baddock 1996; Porcel 2009). Many studies argue in favour of the exploitation of video, films, television and ads as tools to appreciate and practise a variety of grammar structures (Altman 1989; Ruiz Fajardo 1994; Cardillo 1996). As Toro Escudero (2009) states, the diversity of linguistic registers in films illustrates how learning grammar should be related to understanding the syntax of language in use rather than learning the rules of a prescriptive grammar. In relation to semantics, audiovisual texts contribute to a contextualised learning of vocabulary and the visualisation of meaning (Canning-Wilson 2000). Furthermore, their use helps students with incidental learning of vocabulary and
particularly of lexical units and their cultural contexts, such as formulae, collocations and idioms (Argüelles-Díaz 2015). In fact, because of their format, short films and ads are especially useful for a controlled experience of lexical units in context (Guerra Robles 2013; Argüelles-Díaz 2015). Furthermore, the amount of vocabulary necessary to be able to follow a film or television programme in an FL should be considered. The recommendations inferred from Webb and Rodgers' study (2009) for incidental vocabulary acquisition could be applied to the learning of any FL: pre-viewing activities, the use of subtitles in the FL, and an increase in the frequency of contact with new words through regular work with films. Other studies argue that learning is improved when some preliminary information is introduced (an advance organiser) to facilitate audiovisual comprehension and lighten cognitive load before the viewing. The use of descriptions and images, presentation of vocabulary and short questions to guide the learner are some of the recommendations provided by various researchers (Herron et al. 1995; Chung and Huang 1998; Lin and Chen 2006). Given the interrelation between language and culture, it is advisable to integrate a cultural component in FL teaching. As authentic material,4 audiovisual texts are tools that can increase learners' motivation (Sherman 2003) and also help to develop sociocultural competence because they facilitate the understanding of communicative behaviour (Corpas Viñals 2000). Audiovisual texts give the opportunity to observe different registers: formal, informal, academic, etc. (Pérez Basanta 1999; Brandimonte 2003; Meler 2005). They help to contextualise language in use and are therefore ideal for widening comprehension and production of pragmatic meaning, paying attention to both linguistic and social elements and their context (Bustos Gisbert 1997; Corpas Viñals 2000; Vílchez Tallón 2007). It is worth noting that, through the use of videos, and especially films, FL students are exposed to items that could otherwise be difficult to show, such as body language and expressions associated with a specific culture, paralinguistic elements and sublinguistic sounds (Altman 1989; Herrero 2009; Chan and Herrero 2010). Ultimately, audiovisual texts in the FL classroom are ideal tools for supporting 'effective and affective learning' across a wide range of areas (linguistic, socio-pragmatic and cultural competences) (Crespo Fernández 2012).
4. Authentic texts for language teaching relate to two of the meanings considered by Gilmore (2007, 97–8) in his literature review: (a) those that contain "language produced by a real speaker/writer for a real audience"; (b) those that "relate to culture, and the ability to behave or think like a target language group in order to be recognized and validated by them."
2.3 Audiovisual Translation applied to FL teaching and learning

A growing area of research in the field of audiovisual media in Applied Linguistics is Audiovisual Translation (AVT): audio description, dubbing, subtitling, voiceover, etc. There is an increasing number of empirical studies that look into the benefits of AVT applied to the teaching and learning of a FL, especially since the wide availability of IT tools favours a wider use of AVT. Intralinguistic subtitles (from oral to written message in the same language) and interlinguistic subtitles (different languages) facilitate vocabulary acquisition, reading comprehension, oral production and motivation (Vanderplank 1988; 2010; Borrás and Lafayette 1994; Guillory 1998; Bird and Williams 2002; Danan 2004; Caimi 2006; Díaz Cintas 2012). So far, research on audiovisual media and its application to foreign language acquisition has focused mainly on the use of subtitles for L1 or L2 and their potential for the development of such skills as oral and aural comprehension and lexical acquisition, with subtitles as a bridge between reading and aural comprehension. There is also a more direct application to the improvement of linguistic skills for translation teaching and the training of professional translators (Borrás and Lafayette 1994; Gambier 2007; Sokoli et al. 2011; Díaz Cintas 2012). Related to this, there has been significant growth in interest in the use of subtitling as an active task for FL teaching purposes (Williams and Thorne 2000; Sokoli 2006; Bravo 2008; Talaván 2010; Borghetti 2011; Incalcaterra McLoughlin and Lertola 2011; Talaván 2013; Incalcaterra McLoughlin and Lertola 2014). As pointed out by Talaván (2010, 286), "subtitling as a task" (the production of subtitles by students) complements the use of "subtitles as support", by helping learners to improve oral comprehension and fostering autonomous learning. On the other hand, active dubbing is valued for its capacity to enhance active participation by students (Danan 2010). Chiu (2012) and Sánchez-Requena (2016) have used dubbing to improve pronunciation, intonation and fluency in English and Spanish respectively. Navarrete (2013) has shown the advantages of applying dubbing for Spanish learning within the ClipFlair project frame.5 Talaván and Ávila Cabrera (2015) have stressed the use of dubbing to improve writing and speaking as well as learners' translation skills. AD research has mainly analysed the linguistic and semantic content of this type of text (Díaz Cintas 2010), its features as a special text type and its possibilities for translation (Bourne and Jiménez Hurtado 2007; Orero 2007; Maszerowska et al. 2014; Matamala and Orero 2016; Talaván et al. 2016).6 However, AD is less used for teaching purposes, although it is now starting to be applied with a view to improving vocabulary acquisition, as well as the four linguistic skills (Ibáñez Moreno and Vermeulen 2014; Talaván and Lertola 2016).

5. www.clipflair.net (accessed August 7, 2017).
6. Of special interest is the project http://www.adlabproject.eu (accessed August 7, 2017).
3. An inclusive pedagogical proposal
The literature review previously presented has outlined the theoretical approaches on which we are basing our pedagogical approach for the use of AD in FL. First, working with film texts boosts students' interest and enhances FL learning skills; second, film analysis enables learners to understand that films are complex meaning-making documents. Furthermore, the approach builds on media practices that learners face outside formal learning spaces and so facilitates a better understanding of the complexity and vital importance of multimodal communication in today's world.
3.1 Pedagogical approach

This study focuses on active AD of feature films. AD is defined as the techniques and skills applied to compensate for the lack of visual input in any message, providing appropriate sound information that translates or explains the message to a visually impaired receiver (Díaz Cintas 2010). However, for this study it is particularly useful to consider AD as a form of creative writing, a "descriptive narrative" (Greening and Rolph 2007, 127), and a type of text that maintains an "intimate intertextual relation with the filmic text" (Poethe 2005, 40, in Bourne and Jiménez Hurtado 2007, 176). The pedagogical approach that we are presenting in this section focuses on the audiovisual and written components of AD applied to the teaching of Spanish as a Second/Foreign Language. The principles of the model, based on the previously discussed multiliteracies framework, may be stated as follows:

a. The importance of merging language and content in the curriculum.
b. The understanding that a wider range of multimodal texts should be part of the language curriculum.
c. Films are multimodal texts and, therefore, they transmit information through a combination of semiotic systems (image, gesture, music, spatial and bodily codes). They combine what Burn (2013, 2) defines as "contributory modes" (movement, lighting, costume, objects, sets, etc.) and "orchestrating modes" (filming and editing, which are the "overarching framing systems in space and time").
d. Audiovisual texts, and films in particular, are ideal tools for raising students' cultural and intercultural awareness as they allow for reflection on discourse practices as situated discourses (historically and culturally).
e. Film Literacy is an essential competence that language teachers and students should master.
f. AD is a multiliteracy-oriented task that integrates both analytical and creative components (awareness, analysis, reflection and creative language use).
g. AD projects enhance language learners' linguistic, cultural and intercultural competences. They include encoding and decoding as fundamental processes for AD tasks.7
3.2 Strategies and competences in AD

A review of the competences of audio describers provides a useful guide for the design of AD tasks for the language classroom. First, from a linguistic point of view, both academics and professionals agree on the need to summarise information accurately and objectively, in order to adapt the text to the time available between dialogues (Orero 2005); at the same time, there is a need to take the audience into account so that the proper register can be used (Matamala 2006; Matamala and Orero 2007; Vercauteren 2007). Finally, audio describers should possess a wide range of vocabulary, master different linguistic registers, be aware of the consequences of making pragmatic choices, and master rhetorical devices to convey information and add texture to the description (Díaz Cintas 2006; Matamala and Orero 2007). Although professional audio describers are required to develop these skills to a higher level and obtain qualifications, language learners could benefit from being introduced to key professional skills that could lead to postgraduate study in this area; e.g., they should demonstrate grammatical, sociolinguistic, discourse and strategic competences in productive and receptive language skills and mediation, including near-native phonological, grammatical and lexical precision in target language speech, and grammatical and lexical accuracy in target language writing in a wide variety of personal, academic, professional and other domains, and across a full range of genres.8

7. Borghetti (2011) distinguishes two phases in the viewing of films in FL AVT contexts; first, foreign language students attempt to decode the film according to their own schemata, and later they become translators (encoding process) for the target audience. The same principles can be applied to AD, adding a filmic and multimodal analysis.
8. See Díaz Cintas (2006) for the essential and desirable professional competences for audio describers, some of them very relevant to the employability skills required of language learners in the 21st century.
Film is an art form and a contemporary language. Therefore, the aesthetic appreciation of films should be guided by different critical approaches. All these reasons explain the importance of having activities that contribute to film education in schools and Higher Education. Unsurprisingly, the competence related to knowledge of film language and the semiotics of the image, required to provide descriptions that render visual imagery and its impact, is of special interest (Orero and Matamala 2007; Orero 2012; Romero Fresco 2013). The AD standards adopted in different countries (UK, Greece, France, Germany, Spain, and the USA) were the starting point for the creation of the AD pedagogical model that we are presenting in this article. These guidelines are broken down into four major components (when, where, who and what) that constitute the essential parts of the description (Rai et al. 2010). According to Vercauteren (2007), in order to elaborate AD texts, the following questions must be answered: (a) what must be described, (b) when it must be described, (c) how it must be described, (d) how much must be described. Before presenting our pedagogical model, we will discuss these four questions in more detail:

a. What must be described? Snyder (2013) suggests starting with the description of the relevant facts and of who is on screen. The UK Ofcom guidelines on Television Access (2017) include the description of other relevant elements such as on-screen action or information as well as any sound that may be easily identifiable. In summary, the key is to identify and describe those features that are relevant to the storyline.
b. When to describe? AD should take place during gaps or silent moments between dialogues.
c. How much to describe? Clark (2007, online), outlining the standard techniques in AD, provides this useful advice: "describe when necessary, but do not necessarily describe."
d. How to describe visual and aural information? A comparative study of AD guidelines in different European countries (Rai et al. 2010) points out two common categories consistently included in the recommendations: on the one hand, register and style and, on the other, grammar structures.

Table 1 provides a summary of how to describe visual and aural information. The degree of specialisation of the description is not fixed; in our view, it depends on the target audience. Table 2 summarises the questions and guidelines for elaborating an AD script. To sum up, AD requires the ability to summarise, as accurately as possible, the full sense of the original information based on an adequate understanding of the film content and of the meaning of its visual aesthetics.
Table 1. How to describe visual and aural information based on AD guidelines from different countries (Rai et al. 2010)

Register and style
– Simple and easy-flowing style with clear and precise descriptions, avoiding repetition and poor or rude language.
– Avoid uncommon vocabulary or an excessively formal register, so that the reading text sounds natural.
– Write simple sentences and do not provide too much information in a sentence.
– Offer objective descriptions, avoiding personal interpretation.
– Establish the type of register for each film (both in pronunciation and vocabulary).
– Descriptions must agree with the style and genre of the film to cater for the target audience.
– The use of film terminology must focus on well-known terms.

Grammar elements
– Verbal tense and mode must be specified.
– Descriptions should be delivered in the present tense.
– Third-person narrative style helps "to show neutrality and non-interference" (Stempleski 2013, 67).
– Semantic precision of verbs is recommended, instead of a verb plus an adverb.
– Variation of verbs is important to give a vivid account of the action described.
– The use of objective, descriptive and specific adjectives is preferred.
– Colours must be described when relevant.
– Adverbs, following the adjectives, must be objective, descriptive and specific.
– Personal pronouns must be avoided and, when used, it must be very clear to whom they apply.
– It is preferable to repeat the names of the characters to remind the audience who they are.
– Special attention must be devoted to specific terminology, i.e., when the topic of the film includes specific subjects.
Table 2. Questions and guidelines for elaborating an AD script

WHAT
– Moving images: relevant facts (when, where and who) and actions; physical characteristics and relationships of the characters.
– Sounds (source): sound effects difficult to identify; lyrics of songs and dialogues in other languages.
– On-screen text: opening titles, casting, credits, and any signs that appear on the screen as subtitles.

WHEN
– During gaps or silent moments between dialogues.

HOW MUCH
– Essential information for understanding the action.

HOW
– Style and register.
– Grammar structures.
3.3 Film Literacy and AD in FL acquisition: a model

The concept of Film Literacy has a long tradition. The positive impact of film education has been recognised by many media studies researchers and teachers (Buckingham 2003; Ambròs and Breu 2007; Buckingham 2007; Burn and Durran 2007; British Film Institute 2008; Bazalgette 2009; British Film Institute 2010b). The principles of Film Literacy are summarised in the three "Cs": the critical, cultural and creative approaches to Film and Media Literacy (British Film Institute 2008; 2013). The critical approach focuses on recognising different types of stories. The cultural approach means broadening the range of films that students have access to, so that they can engage with a wider range of cultural perspectives; in the context of language teaching, we would like to add an intercultural/transcultural perspective that focuses on mediating between different cultures. Finally, the third approach brings creative filmmaking work to complement, support and expand learners' knowledge and understanding of what films can do. Using a Cultural Studies framework gives learners a set of analytical tools for 'reading' the filmic text and rendering its cultural messages. The analysis of cinematography, mise-en-scène, editing, sound, genre conventions and narrative construction provides solid ground for examining the way in which social, cultural, political and historical representations are conveyed in films, and how they are intertwined. Recent reports and studies propose a model of Film Literacy education that includes critical reception and practice (British Film Institute 2013). Therefore, one of the objectives of the AD task is to support learners in their development of film appreciation and creative practice.
Ferrés and Piscitelli (2012) have proposed dimensions and indicators to define Media Literacy, which comprises Film Literacy. Their proposal focuses on two areas, namely the production of one's own messages and the interaction with outside messages, and on six major indicators: languages, technology, interaction processes, production and dissemination processes, ideology and values, and the aesthetic dimension. Following Ferrés and Piscitelli (2012), we propose in Table 3 a selection of the main competences, skills and knowledge required as part of Film Literacy training in AD. On the one hand, the skills in the area of analysis identify films as textual constructions, whose workings should be deconstructed by considering the different codes of representation (genre, cultural issues, aspects of industry, audience/s, etc.) and micro-components (mise-en-scène, sound, and so on). On the other hand, the model reiterates the importance of being able to become a creative producer of multimedia content.
The resultant model is based on two case studies carried out with undergraduate students at B2 level, according to the Common European Framework
Table 3. Main competences, skills and knowledge required as part of the Film Literacy training in AD, based on Ferrés and Piscitelli (2012, 79–80)

Languages
– Skills in the area of analysis: The ability to interpret and evaluate the various codes of representation and the function they perform within a message.
– Skills in the area of expression: Choose between different systems of representation and different styles according to the communicative situation, the type of content to be transmitted and the type of user.

Technology
– Skills in the area of analysis: The ability to handle technological innovations that make multimodal and multimedia communication possible.
– Skills in the area of expression: Use media and communication tools effectively in a multimedia and multimodal environment.

Interaction processes
– Skills in the area of analysis: Understand basic concepts of audience and of audience studies, their usefulness and limitations. Appreciate messages from other cultures, for intercultural dialogue in an age of media without borders.
– Skills in the area of expression: Demonstrate active participation in the interaction with screens, understood as an opportunity to construct a more complete citizenry, an integral development, to be transformed, and to transform the environment.

Production and dissemination processes
– Skills in the area of analysis: Recognise basic conventions for production systems, programming techniques and broadcasting mechanisms.
– Skills in the area of expression: Select meaningful messages, and use and transform them to make new meanings.

Ideology and values
– Skills in the area of analysis: Search for, organise, contrast, prioritise and synthesise information from different systems and environments. Detect the intentions and interests that underlie corporate and popular productions, their ideology and values, latent or patent, and take a critical stance towards them.
– Skills in the area of expression: Use new media and communication tools to transmit values and contribute to improving the environment, based on social and cultural commitments.

Aesthetics
– Skills in the area of analysis: Enjoy formal aspects of media, that is, not only what is communicated but also how it is communicated. Identify basic aesthetic categories like formal and thematic innovation, originality, style, schools and trends.
– Skills in the area of expression: Produce elementary messages that can be understood and which help to raise the level of personal or collective creativity, originality and sensibility. Appropriate and transform artistic productions, boosting creativity, innovation, experimentation and aesthetic sensibility.
of Reference (CEFR) (Herrero 2014; Herrero and Escobar 2014), and on numerous pedagogical interventions with secondary school students of Spanish as an FL in the UK.9 Based on Project-Based Learning, the training comprises three main types of session. The first one is on visual rhetoric, with an introduction to film language to guide students in carrying out a deeper investigation of the meaning of movies and to help develop their Film Literacy. It provides an introduction to macro analysis (ideology, representation, genre, cultural issues, narrative, aspects of the national film industry, etc.) and micro analysis (cinematography and mise-en-scène). The training guides learners to draw plausible interpretations from relating the two levels of analysis. This session is complemented by an introduction to the multimodal approach, exploring how different modes are orchestrated to produce complex meaning. A second session focuses on auteurship, providing an introduction to the filmmaking of Pedro Almodóvar. The workshop focuses on the style, themes and genres that have characterised Almodóvar's films since his first, subversive work in the 1980s. His films Women on the Verge of a Nervous Breakdown, All about my Mother, Volver and Julieta are relevant to the UK secondary school curriculum (16–18 years old).10 Los abrazos rotos / Broken Embraces (2009) was chosen for the main case study for the following reasons: firstly, it exemplifies the richness of Almodóvar's visual style, with complex metacinematic references and a sophisticated narrative, all within a clear time structure; secondly, it tells the story of a visually impaired filmmaker, bringing relevance to the AD training element. Furthermore, the Spanish and English DVDs provide AD in each language, respectively. Finally, the third session comprises a short introduction to AD as well as relevant activities to support learners in the elaboration of the AD script. Pre-tasks include independent research on cultural aspects relevant to the film (gazpacho, the film Voyage to Italy, the artist César Manrique, Lanzarote, and film noir), followed by film vocabulary and film analysis exercises. The main tasks were designed to practice how to audio describe and to prepare the AD draft script. Scenes were selected because of their relevance (description of characters and spaces), and
because there were silent moments between dialogues. The Study Guide "Audio description. Los abrazos rotos", which includes a summary of the sessions and activities, is available to download from the FILTA Spanish resources area (www.filta.org.uk).11
Preliminary findings show a significant improvement in learners' Film Literacy and accessibility awareness. Students were more conscious of the need to be able to read the film language and understand the aesthetic style in order to produce an AD script. Further evidence of the success of Film Literacy applied to language learning has been collected through professional training days, workshops, and film study days designed for pre-university students and language teachers and delivered since 2009 (i.e., enthusiasm, confidence and motivation; improved attitudes to writing; increased attainment in writing; improved linguistic skills; better understanding and application of concepts, and so forth). The effects noticed on teachers' attitudes and practices, as well as on students' results, suggest that there has been a change towards a more innovative way of conducting the FL class, especially promoting more frequent use of Film Literacy (Herrero 2016; FILTA).
In summary, the use of film in the FL classroom allows for the development and practice of audiovisual comprehension strategies in a holistic way, increasing the visual competence of the learners. In addition, it contributes to the development of film competence, helping learners to perceive, analyse, and comprehend a number of communicative and cultural strategies (Chan and Herrero 2010; Thaler 2014; Herrero 2018). Language learners should be able to communicate in different media forms; therefore, it seems beneficial to introduce a practical component of audiovisual production (writing a screenplay for a short film, dubbing a film clip, or audio describing a short film without dialogues or a teaser), either as an individual task or as team work (Bahloul and Graham 2012; Keddie 2014; Donaghy 2015; Goldstein and Driver 2015; Video for all 2015; Anderson and Macleroy 2016; Herrero 2018).

9. The Common European Framework of Reference for Languages "provides a common basis for the elaboration of language syllabuses, curriculum guidelines, examinations, textbooks, etc. across Europe" (Council of Europe 2001, 1). The CEFR defines six proficiency levels: Basic User (A1–A2), Independent User (B1–B2) and Proficient User (C1–C2). The CEFR specifies which competencies, knowledge and skills learners are expected to reach at each level.

10. Depending on the examination board responsible for setting and awarding secondary education level qualifications in the United Kingdom, students must study either one literary text and one film or two films from a list provided, drawing on advice from subject experts from Higher Education establishments and subject associations. Almodóvar's films are included in the list of prescribed works of all of the awarding bodies.

11. http://filtacommunity.ning.com/page/spanish-study-guides (accessed August 7, 2017; free registration required).
4. Implications of the framework and conclusions
AD is clearly a valuable tool with which to train students to develop their Visual Literacy and their linguistic and cultural knowledge, even though research into its application to language teaching is in its infancy. In this article, we started by considering how research on AD tasks has concentrated mainly on linguistic and semantic content, as well as on the specific features of AD texts and their translatability.
We laid out the conceptual base for AD pedagogy in FL within the multiliteracies framework, emphasising the importance of unifying the study of language and cultural content and of working with multimodal texts, e.g., films that relate to learners' interests. The framework presented in this article provides the tools to support a productive engagement with film to improve FL learners' linguistic, cultural, intercultural and digital competences by elaborating an AD script within the principles of Project-Based Learning. In previous sections, we pointed out some of the competences developed through AVT tasks that are part of the employability skills associated with learning a language (see British Academy 2016). AD requires a number of transferable competences and skills that may be useful for a wide range of professional sectors: linguistic competence; audiovisual and film competence; teamwork skills; cross competences (accessibility awareness); technological or applied competence; personal and general competences; and intercultural communicative competence.
The use of films in the FL classroom presents benefits and challenges for learners and teachers. In order to generate the AD script, learners have to acquire the relevant film terminology and Visual Literacy, as well as pay attention to paralinguistic elements that will help to render a comprehensive linguistic, social, cultural, and intercultural description. For a productive engagement with this type of project, Film Literacy should be included as part of the language curriculum (Chan and Herrero 2010; Lardoux 2014; Herrero 2016). Understanding the basic components of film studies is essential to appreciate cinema as an aesthetic medium and to understand how it generates meaning and responses. Such training is essential for students to audio describe a film (or part of a film) adequately. The framework, which has been tested with Higher Education language students, incorporates a comprehensive approach to bring Film Literacy – via critical, cultural, intercultural and creative approaches – into the FL classroom. It responds to the need to engage language learners in cross-curricular tasks and approaches.
In conclusion, AD creative projects constitute a useful and practical way of offering FL project-based tasks. The activities engage students in the process of critically 'reading' films, creating a text that makes connections and translating images into words. When selecting the appropriate text for students, it is important to consider the background knowledge needed, as well as how the text directs learners' attention to the multimodal orchestration. Ultimately, the AD project incorporates a holistic approach that includes translation skills and critical, cultural and intercultural competences, and supports the development of Film Literacy.
References

Altman, Rick. 1989. The Video Connection: Integrating Video into Language Teaching. Boston, MA: Houghton Mifflin Company. Ambròs, Alba, and Ramón Breu. 2007. Cine y educación: el cine en el aula de primaria y secundaria. Barcelona: Graó. Anderson, Jim, and Vicky Macleroy. 2016. Multilingual Digital Storytelling. London and New York: Routledge. https://doi.org/10.4324/9781315758220 Argüelles Díaz, Alba. 2015. Los anuncios en la clase de ELE: una propuesta didáctica. MA diss. Oviedo: Universidad de Oviedo. Accessed October 29, 2017. http://hdl.handle.net/10651/33779 Baddock, Barry. 1996. Using Films in the English Class. Hertfordshire: Phoenix ELT. Bahloul, Maher, and Carolyn Graham, eds. 2012. Lights! Camera! Action and the Brain: The Use of Film in Education. Newcastle upon Tyne: Cambridge Scholars. Baños, Rocío, and Stavroula Sokoli. 2015. "Learning Foreign Languages with ClipFlair: Using Captioning and Revoicing Activities to Increase Students' Motivation and Engagement." In 10 Years of the LLAS e-Learning Symposium: Case Studies in Good Practice, ed. by Kate Borthwick, Erika Corradini, and Alison Dickens, 203–213. Dublin and Voillans: Research-publishing.net. Barrett, Martin, Michael Byram, Ildikó Lázár, Pascale Mompoint-Gaillard, and Stavroula Philippou. 2014. Developing Intercultural Competence through Education. Strasbourg: Council of Europe Publishing. Bazalgette, Cary. 2009. Impacts of Moving Image Education: A Summary of Research. Glasgow: Scottish Screen. Bird, Stephen A., and John N. Williams. 2002. "The Effect of Bimodal Input on Implicit and Explicit Memory: An Investigation into the Benefits of Within-language Subtitling." Applied Psycholinguistics 23 (4): 509–533. https://doi.org/10.1017/S0142716402004022 Borghetti, Claudia. 2011. "Intercultural Learning through Subtitling: The Cultural Studies Approach." In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 111–138. Bern: Peter Lang. Borrás, Isabel, and Robert C. Lafayette. 1994. "Effects of Multimedia Course Subtitling on the Speaking Performance of College Students in French." The Modern Language Journal 78 (1): 61–75. https://doi.org/10.1111/j.1540-4781.1994.tb02015.x Bourne, Julián, and Catalina Jiménez Hurtado. 2007. "From the Visual to the Verbal in Two Languages: A Contrastive Analysis of the Audio Description of The Hours in English and Spanish." In Media for All: Subtitling for the Deaf, Audio Description and Sign Language, ed. by Pilar Orero, and Aline Remael, 175–187. Amsterdam: Rodopi. https://doi.org/10.1163/9789401209564_013
Brandimonte, Giovanni. 2003. "El soporte audiovisual en la clase de E/LE: el cine y la televisión." In Medios de Comunicación y Enseñanza del Español como Lengua Extranjera. Actas del XIV Congreso de ASELE, ed. by Hermógenes Perdiguero, and Antonio Álvarez, 870–881. Burgos: Servicio de Publicaciones Universidad de Burgos. Bravo, Conceição. 2008. Putting the Reader in the Picture: Screen Translation and Foreign-Language Learning. PhD diss. Tarragona: Universitat Rovira i Virgili.
British Academy. 2016. Born Global. Accessed March 5, 2016. http://www.britac.ac.uk/bornglobal British Film Institute. 2008. Reframing Literacy. London: British Film Institute. Accessed March 5, 2016. http://www.bfi.org.uk/screening-literacy-film-education-europe British Film Institute. 2010a. Making the Case for Film Education (21st Century Literacies). London: British Film Institute. British Film Institute. 2010b. Film: 21st Century Literacy – Pilot Project Blueprints. London: British Film Institute. British Film Institute. 2013. Screening Literacy in Europe. London: British Film Institute. Buckingham, David. 2003. Media Education: Literacy, Learning and Contemporary Culture. Cambridge: Polity Press. Buckingham, David. 2007. Beyond Technology: Children's Learning in the Age of Digital Culture. Cambridge: Polity. Burn, Andrew, and James Durran. 2007. Media Literacy in Schools: Practice, Production and Progression. London: Sage. Burn, Andrew. 2013. The Kineikonic Mode: Towards a Multimodal Approach to Moving Image Media. Accessed March 5, 2016. http://eprints.ncrm.ac.uk/3085/1/KINEIKONIC_MODE.pdf Bustos Gisbert, José Manuel. 1997. "Aplicaciones del vídeo a la enseñanza de español como lengua extranjera." Carabela 42: 93–105. Byram, Michael, Prue Holmes, and Nicola Savvides. 2013. "Intercultural Communicative Competence in Foreign Language Education: Questions of Theory, Practice and Research." The Language Learning Journal 41 (3): 251–253. https://doi.org/10.1080/09571736.2013.836343
Caimi, Annamaria. 2006. “Audiovisual Translation and Language Learning: The Promotion of Intralingual Subtitles.” The Journal of Specialised Translation 6: 85–98. Canale, Michael, and Merrill Swain. 1980. “Theoretical Bases of Communicative Approaches to Second Language Teaching and Testing.” Applied Linguistics 1 (1): 1–47. https://doi.org/10.1093/applin/1.1.1
Canale, Michael. 1983. “From Communicative Competence to Communicative Language Pedagogy.” Language and Communication 1: 1–47. Canning-Wilson, Christine. 2000. “Practical Aspects of Using Video in the Foreign Language Classroom.” The Internet TESL Journal 6 (11). Accessed March 5, 2016. http://iteslj.org /Articles/Canning-Video.html Cardillo, Darlene S. 1996. “Using a Foreign Film to Improve Second Language Proficiency: Video vs. Interactive Multimedia.” Journal of Educational Technology Systems 25 (2): 169–177. https://doi.org/10.2190/55AE‑AGFW‑8KQF‑6PEH Celce-Murcia, Marianne. 1995. “The Elaboration of Sociolinguistic Competence: Implications for Teacher Education.” In Linguistics and the Education of Language Teachers: Ethnolinguistic, Psycholinguistic, and Sociolinguistic Aspects. Proceedings of the Georgetown University, Round Table on Languages and Linguistics, ed. by James E. Alatis, Carolyn A. Straehle, and Maggie Ronkin, 699–710. Georgetown University Press, Washington DC. Celce-Murcia, Marianne, Zoltan Dörnyei, and Sarah Thurrell. 1995. “Communicative Competence: A Pedagogically Motivated Model with Content Specifications.” Issues in Applied Linguistics 6 (2): 5–35.
Chan, Deborah, and Carmen Herrero. 2010. Using Film to Teach Languages. Manchester: Cornerhouse. Accessed October 29, 2017. https://goo.gl/mosW5h Chapple, Lynda, and Andy Curtis. 2000. “Content-based Instruction in Hong Kong: Student Responses to Film.” System 28 (3): 419–433. https://doi.org/10.1016/S0346‑251X(00)00021‑X Chiu, Yi-hui. 2012. “Can Film Dubbing Projects Facilitate EFL Learners’ Acquisition of English Pronunciation?” British Journal of Educational Technology 43 (1): 24–27. https://doi.org/10.1111/j.1467‑8535.2011.01252.x
Chung, Jing Mei, and Shuchen Huang. 1998. “The Effects of Three Aural Advance Organizers for Video Viewing in a Foreign Language Classroom.” System 26 (4): 553–565. https://doi.org/10.1016/S0346‑251X(98)00037‑2
Clark, Joe. 2007. “Standard Techniques in Audio Description. Joe Clark (Accessibility, Design, Writing) .” Accessed March 5, 2016. https://joeclark.org/access/description/ad-principles .html Condliffe, Barbara, Mary G. Visher, Michael R. Bangser, J. D. Sonia Drohojowska, and Larissa Saco. 2016. Project-Based Learning: A Literature Review. New York, NY: MDRC. Accessed March 5, 2016. https://s3-us-west-1.amazonaws.com/ler /MDRC+PBL+Literature+Review.pdf Cope, Bill, and Mary Kalantzis 2000. Multiliteracies: Literacy Learning and the Design of Social Futures. London: Routledge. Cope, Bill, and Mary Kalantzis. 2009. “Multiliteracies: New Literacies, New Learning.” Pedagogies: An international journal 4 (3): 164–195. https://doi.org/10.1080/15544800903076044
Corpas Viñals, Jaime. 2000. “La utilización del vídeo en el aula de E/LE. El componente cultural.” In Actas del XI Congreso Internacional de ASELE, ed. by María Antonia Martín Zorraquino, and Cristina Díez Pelegrín, 785–791. Zaragoza: Universidad de Zaragoza. Council of Europe. 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press. Crespo Fernández, Ana. 2012. Explotación didáctica de material fílmico en el aula de E/LE: efectividad y afectividad del cine de Pedro Almodóvar. PhD diss. Córdoba: Universidad de Córdoba. Danan, Martine. 2004. “Captioning and Subtitling: Undervalued Language Learning Strategies.” Meta: Translators’ Journal 49 (1): 67–77. https://doi.org/10.7202/009021ar Dervin, Fred, and Anthony Liddicoat, eds. 2013. Linguistics for Intercultural Education. Amsterdam and Philadelphia: John Benjamins Publishing. https://doi.org/10.1075/lllt.33 Díaz Cintas, Jorge. 2006. Competencias profesionales del subtitulador y el audiodescriptor. Madrid: CESyA. Accessed March 5, 2016. http://www.cesya.es/estaticas/jornada /documentos/informe.pdf Díaz Cintas, Jorge. 2010. “La accesibilidad a los medios de comunicación audiovisual a través del subtitulado y de la audiodescripción.” In El español, lengua de traducción para la cooperación y el diálogo, ed. by Luis González, and Pollux Hernúñez, 157–180. Madrid: Instituto Cervantes. Díaz Cintas, Jorge. 2012. “Los subtítulos y la subtitulación en la clase de lengua extranjera.” Abehache, Revista da Associação Brasileira de Hispanistas 2 (3): 95–114. Donaghy, Kieran. 2015. Film in Action: Teaching Language Using Moving Images. Surrey: Delta Publishing.
Ferrés, Joan, and Alejandro Piscitelli. 2012. “Media Competence. Articulated Proposal of Dimensions and Indicators.” Comunicar 19 (38): 75–81. https://doi.org/10.3916/C38‑2012‑02‑08
Film in Language Teaching Association (FILTA). Accessed February 14, 2017. www.filta.org.uk Gambier, Yves. 2007. "Sous-titrage et Apprentissage des Langues." Linguistica Antverpiensia 6: 97–113. Gilmore, Alex. 2007. "Authentic Materials and Authenticity in Foreign Language Learning." Language Teaching 40: 97–118. https://doi.org/10.1017/S0261444807004144 Goldstein, Ben, and Paul Driver. 2015. Language Learning with Digital Video. Cambridge: Cambridge University Press. Greening, Joan, and Deborah Rolph. 2007. "Accessibility: Raising Awareness of Audio Description in the UK." In Media for All: Subtitling for the Deaf, Audio Description and Sign Language, ed. by Pilar Orero, and Aline Remael, 127–138. Amsterdam: Rodopi. https://doi.org/10.1163/9789401209564_010
Guerra Robles, Patricia. 2013. El corto en el aula de ELE y la enseñanza del léxico en contexto. M.A. diss. Oviedo: Universidad de Oviedo. Guillory, Helen Gant. 1998. “The Effects of Keyword Captions to Authentic French Video on Learner Comprehension.” Calico Journal 15 (1): 89–108. Herrero, Carmen. 2009. “El uso de cortometrajes en la clase de español: la representación de las minorías etnicas.” Jornadas didácticas del Instituto Cervantes Manchester, Biblioteca Virtual redELE. Accessed May 10, 2016. http://www.mecd.gob.es/redele/BibliotecaVirtual/2009/Numeros-Especiales/I_JORNADAS_INSTITUTO_CERVANTES _MANCHESTER.html Herrero, Carmen. 2014. “Crossing Boundaries: Developing Intercultural Competence through Film.” Paper presented at the Symposium Raising Intercultural Awareness, Manchester Metropolitan University, 12 June. Herrero, Carmen. 2016. “The Film in Language Teaching Association (FILTA): a Multilingual Community of Practice.” ELT Journal 70 (2): 190–199. https://doi.org/10.1093/elt/ccv080 Herrero, Carmen. 2018. “El cine y otras manifestaciones culturales en ELE”, Iniciación a la metodología de la enseñanza de ELE. In Literatura, cine y otras manifestaciones literarias (4), ed. by Maria Martínez-Atienza de Dios, and Alfonso Zamorano Aguilar, 65-82. Madrid: enCLAVEELE. Herrero, Carmen, and Manuela Escobar. 2014. “Un modelo integrador de cine y audio descripción para el aprendizaje de lenguas extranjeras: Los abrazos rotos (Almodóvar).” Paper presented at the ClipFlair Conference. Innovation in Language Learning: Multimodal Approaches . Universitat Autònoma de Barcelona, 18–20 June. Herron, Carol. 1994. “An Investigation of the Effectiveness of Using an Advance Organizer to Introduce Video in the Foreign Language Classroom.” The Modern Language Journal 78 (2): 190–198. https://doi.org/10.1111/j.1540‑4781.1994.tb02032.x Herron, Carol, Julia E. Hanley, and Steven. P. Cole. 1995. “A Comparison Study of two Advance Organizers for Introducing Beginning Foreign Language Students to Video.” The Modern Language Journal 79 (3): 387–395. https://doi.org/10.1111/j.1540‑4781.1995.tb01116.x Holmes, Len. 1995. “Skills: a Social Perspective.” In Transferable Skills in Higher Education, ed. by Alison Assister, 20–28. London: Kogan Page.
Ibáñez Moreno, Ana, and Anna Vermeulen. 2014. “La audiodescripción como recurso didáctico en el aula de ELE para promover el desarrollo integrado de competencias.” In New Directions in Hispanic Linguistics, ed. by Rafael Orozco, 264–292. Newcastle upon Tyne: Cambridge Scholars Publishing. Incalcaterra McLoughlin, Laura, and Jennifer Lertola. 2011. “Learn through Subtitling: Subtitling as an Aid to Language Learning.” In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 243–263. Bern: Peter Lang. https://doi.org/10.3726/978‑3‑0353‑0167‑0
Incalcaterra McLoughlin, Laura, and Jennifer Lertola. 2014. “Audiovisual Translation in Second Language Acquisition. Integrating Subtitling in the Foreign-language Curriculum.” The Interpreter and Translator Trainer 8 (1): 70–83. https://doi.org/10.1080/1750399X.2014.908558
Jenkins, Henry. 2008. Convergence Culture: la cultura de la convergencia de los medios. Barcelona: Paidós. Jenkins, Henry, et al. 2009. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/8435.001.0001
Jones, Elspeth. 2013. “Internationalization and Employability: the Role of Intercultural Experiences in the Development of Transferable Skills.” Public Money and Management 33 (2): 95–104. https://doi.org/10.1080/09540962.2013.763416 Keddie, Jamie. 2014. Bringing Online Video into the Classroom. Oxford: Oxford University Press. Kern, Richard. 2003. “Literacy as a New Organizing Principle for Foreign Language Education.” In Reading between the Lines: Perspectives on Foreign Language Literacy, ed. by Peter C. Patrikis, 40–59. New Haven, CT: Yale University Press. Kress, Günther, and Theo van Leeuwen. 2001. Multimodal Discourses: The Modes and Media of Contemporary Communication. New York: Oxford University Press. Kress, Günther. 2003. Literacy in the New Media Age. London: Routledge. https://doi.org/10.4324/9780203164754
Kress, Günther. 2010. Multimodality. London: Routledge. Lankshear, Colin, and Michele Knobel. 2003. New Literacies: Changing Knowledge and Classroom Learning. Buckingham: Open University Press. Lardoux, Xavier. 2014. For a European Film Education Policy. Paris: Centre national du cinéma et de l'image animée. Accessed May 15, 2015. http://www.europacreativamedia.cat/rcs_media/For_a_European_Film_Education_Policy.pdf Leu, Donald J., et al. 2013. "New Literacies: A Dual Level Theory of the Changing Nature of Literacy, Instruction, and Assessment." In Theoretical Models and Processes of Reading, 6th ed., ed. by Donna E. Alvermann, Norman J. Unrau, and Robert B. Ruddell, 1150–1181. Newark, DE: International Reading Association. https://doi.org/10.1598/0710.42 Lin, Huifen, and Tsuiping Chen. 2006. "Decreasing Cognitive Load for Novice EFL Learners: Effects of Question and Descriptive Advance Organizers in Facilitating EFL Learners' Comprehension of an Animation-based Content Lesson." System 34 (3): 416–431. Accessed October 29, 2017. https://doi.org/10.1016/j.system.2006.04.008 Maszerowska, Anna, Anna Matamala, and Pilar Orero, eds. 2014. Audio Description. New Perspectives Illustrated. Philadelphia and Amsterdam: John Benjamins. https://doi.org/10.1075/btl.112
Matamala, Anna. 2006. “La accesibilidad en los medios: aspectos lingüísticos y retos de formación.” In Sociedad, integración y televisión en España, ed. by Ricardo Amat, and Álvaro Pérez-Ugena, 293–306. Madrid: Laberinto. Matamala, Anna, and Pilar Orero. 2007. “Designing a Course on Audio Description and Defining the Main Competences of Future Professional.” Linguistica Antverpiensia 6: 329–344. Matamala, Anna, and Pilar Orero. 2016. Researching Audio-Description. New Approaches. Basingstoke: Palgrave Macmillan. https://doi.org/10.1057/978‑1‑137‑56917‑2 Meler, Mirna. 2005. “El anuncio publicitario televisivo en la enseñanza E/LE: una aproximación a los componentes socioculturales.” Cuadernos Canela 17: 89–108. Navarrete, Marga. 2013. “El doblaje como herramienta de aprendizaje en el aula de español y desde el entorno de ClipFlair.” MarcoELE 16: 75–87. New London Group. 1996. “A Pedagogy of Multiliteracies: Designing Social Futures.” Harvard Educational Review 66 (1): 60–92. https://doi.org/10.17763/haer.66.1.17370n67v22j160u Nunan, David. 1989. Designing Tasks for the Communicative Classroom. Cambridge: Cambridge University Press. Ofcom. 2015. Code on Television Access Services. Accessed October 29, 2017. https://www .ofcom.org.uk/__data/assets/pdf_file/0020/97040/Access-service-code-Jan-2017.pdf Openlearn. 2010. “Language and Literacy in a Changing World.” Accessed May 10, 2016. http:// www.open.edu/openlearnworks/mod/oucontent/view.php?id=15196andsection=4.1 Orero, Pilar. 2007. “Sampling Audio Description in Europe.” In Media for All: Subtitling for the Deaf, Audio Description and Sign Language, ed. by Pilar Orero, and Aline Remael, 111–125. Amsterdam: Rodopi. https://doi.org/10.1163/9789401209564_009 Orero, Pilar, and Anna Matamala. 2007. “Accessible Opera: Overcoming Linguistic and Sensorial Barriers.” Perspectives Studies in Translatology 15 (4): 262–267. https://doi.org/10.1080/13670050802326766
Orero, Pilar. 2012. “Film Reading for Writing Audio Descriptions: A Word is Worth a Thousand Images?” In Emerging Topics in Translation: Audio Description, ed. by Elisa Perego, 13–28. Trieste: Edizioni Università di Trieste. Paesani, Kate, Heather W. Allen, and Beatrice Dupuy. 2016. A Multiliteracies Framework for Collegiate Foreign Language Teaching. Upper Saddle River, NJ: Pearson. Pegrum, Mark. 2008. “Film Culture and Identity: Critical Intercultural Literacies for the Language Classroom.” Language and International Communication 8 (2): 136–154. Pérez Basanta, Carmen. 1999. “El uso del vídeo en la enseñanza de una lengua extranjera: Beauty and the Beast. Una actividad para la comprensión oral, la adquisición léxica y la reflexión sobre la estructura del lenguaje narrativo.” In Actas de las VII Jornadas Internacionales sobre la enseñanza de lenguas, ed. by Ángela Celis, and José Ramón Heredia, 369–378. Granada: Servicio de Publicaciones de la Universidad de Granada. Porcel, Carme. 2009. “Using Films in Class.” Modern English Teacher 18 (3): 24–29. Rai, Sonali, Joan Greening, and Leen Petré. 2010. A Comparative Study of Audio Description Guidelines Prevalent in Different Countries. London: RNIB. Romero-Fresco, Pablo. 2013. “Accessible Filmmaking: Joining the Dots between Audiovisual Translation, Accessibility and Filmmaking.” The Journal of Specialised Translation 20: 201–223. Ruiz Fajardo, Guadalupe. 1994. “Vídeo en clase. Virtudes y vicios.” MarcoELE 8: 141–164.
Sánchez-Requena, Alicia. 2016. "Audiovisual Translation in Teaching Foreign Languages: Contributions of Revoicing to Improve Fluency and Pronunciation in Spontaneous Conversations." Porta Linguarum 26: 9–21. Sherman, Jane. 2003. Using Authentic Video in the Language Classroom. Cambridge: Cambridge University Press. Snyder, Joel. 2013. Audio Description: Seeing with the Mind's Eye – A Comprehensive Training Manual and Guide to the History and Applications of Audio Description. PhD diss. Barcelona: Universitat Autònoma de Barcelona. Sokoli, Stavroula. 2006. "Learning via Subtitling (LvS): A Tool for the Creation of Foreign Language Learning Activities Based on Film Subtitling." In Proceedings of the Marie Curie Euroconferences MuTra: Audiovisual Translation Scenario, Copenhagen, 1–5 May, ed. by Mary Carroll, and Heidrun Gerzymisch-Arbogast, 66–73. Sokoli, Stavroula, Patrick Zabalbeascoa, and Maria Fountana. 2011. "Subtitling Activities for Foreign Language Learning: What Learners and Teachers Think." In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 219–242. Bern: Peter Lang. Talaván, Noa. 2010. "Subtitling as a Task and Subtitles as Support: Pedagogical Applications." In New Insights into Audiovisual Translation and Media Accessibility, ed. by Jorge Díaz Cintas, Anna Matamala, and Josélia Neves, 285–299. Amsterdam: Rodopi. https://doi.org/10.1163/9789042031814_021
Talaván, Noa. 2013. La subtitulación en el aprendizaje de lenguas extranjeras. Barcelona: Octaedro. Talaván, Noa, and José Javier Ávila-Cabrera. 2015. “First Insights into the Combination of Dubbing and Subtitling as L2 Didactic Tools.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 149–172. New York: Peter Lang. Talaván, Noa, Jose Javier Ávila-Cabrera, and Tomás Costal. 2016. Traducción y accesibilidad audiovisual. Barcelona: Editorial UOC. Talaván, Noa, and Jennifer Lertola. 2016. “Active Audiodescription to Promote Speaking Skills in Online Environments.” Sintagma, 28: 59–74. Thaler, Engelbert. 2014. Teaching English with Films. Paderborn: Schöningh. Thomas, John W. 2000. “A Review of Research on Project-Based Learning.” Accessed October 30, 2017. http://www.bie.org/images/uploads/general/9d06758fd346969cb63653 d00dca55c0.pdf Toro Escudero, Juan Ignacio. 2009. “Enseñanza del español a través del cine hispano: marco teórico y ejemplos prácticos.” MarcoELE 8. Accessed June 6, 2013. http://www.marcoele .com/descargas/china/ji.toro_cinehispano.pdf Vanderplank, Robert. 1988. “The Value of Teletext Subtitles in Language Learning.” ELT Journal 42 (4): 272–281. https://doi.org/10.1093/elt/42.4.272 Vanderplank, Robert. 2010. “Déjà vu? A Decade of Research on Language Laboratories, Television and Video in Language Learning.” Language Teaching 43 (1): 1–37. https://doi.org/10.1017/S0261444809990267
Vercauteren, Gert. 2007. “Towards a European Guideline for Audio Description.” In Media for All: Subtitling for the Deaf, Audio Description and Sign Language, ed. by Jorge Díaz Cintas, Pilar Orero, and Aline Remael, 139–149. Amsterdam: Rodopi. https://doi.org/10.1163/9789401209564_011
Video for all. Accessed 24 May 2016. http://videoforall.eu
Vílchez Tallón, José Antonio. 2007. La enseñanza del componente pragmático a través de fragmentos de películas. MA diss. Madrid: Universidad de Alcalá. Webb, Stuart, and Michael P. Rodgers. 2009. "The Lexical Coverage of Movies." Applied Linguistics 30 (3): 407–427. https://doi.org/10.1093/applin/amp010 Weyers, Joseph R. 1999. "The Effect of Authentic Video on Communicative Competence." The Modern Language Journal 83 (3): 339–349. https://doi.org/10.1111/0026-7902.00026 Williams, Helen, and David Thorne. 2000. "The Value of Teletext Subtitling as a Medium for Language Learning." System 28 (2): 217–228. https://doi.org/10.1016/S0346-251X(00)00008-7 Wilson, Carolyn, Alton Grizzle, Ramon Tuazon, Kwame Akyempong, and Chi-Kim Cheung. 2011. Media and Information Literacy Curriculum for Teachers. Paris: UNESCO. Accessed October 29, 2017. http://unesdoc.unesco.org/images/0022/002256/225606e.pdf
The implications of Cognitive Load Theory and exposure to subtitles in English as a Foreign Language (EFL)

Anca Daniela Frumuselu
Universitat Rovira i Virgili
The pedagogical use of subtitled and captioned material in the foreign language classroom is upheld by various theories which reveal the cognitive processing activated when students are exposed to multimedia and subtitled audiovisual materials. The three theories that will be considered here are Cognitive Load Theory (CLT), Cognitive Theory of Multimedia Learning (CTML) and Cognitive Affective Theory of Learning with Media (CATLM). The main purpose of the paper is to illustrate the internal mechanisms triggered in learners when various sensorial channels (visual, auditory and textual) coincide simultaneously on screen and how this may affect their cognitive engagement and motivation while learning a foreign language. Additionally, two empirical studies will be presented in the second part of the article in order to provide evidence of the benefits of using subtitled audiovisual materials in the English as a Foreign Language (EFL) classroom in Higher Education. The results show that both interlingual (L1) and intralingual (L2) subtitles prove to have a facilitating role in informal and colloquial language learning in this context. Keywords: cognitive load theory, cognitive theory of multimedia learning, cognitive affective theory of learning with media, subtitled material, colloquial language learning
1. Introduction
Several theories related to the cognitive processing of multimedia and subtitled material have been developed in the last decade in order to shed light on the learning processes activated when learners are exposed to subtitled audiovisual materials. However, the Cognitive Load Theory (CLT) (Sweller 1994) has been scarcely
investigated in relation to the use of Audiovisual Translation (AVT) practices in second or foreign language (L2/FL) learning settings. Thus, the purpose of the present article is to analyse the implications of CLT on the application of subtitled audiovisual material in English as a Foreign Language (EFL) classrooms, so as to better comprehend learners’ manifold inner mechanisms when they are listening to sound, watching images and reading subtitles at the same time. Moreover, additional theories will be explored in relation to CLT, such as the Cognitive Theory of Multimedia Learning (CTML) (Mayer 2009) and the Cognitive Affective Theory of Learning with Media (CATLM) (Moreno 2005; Mayer 2014). These last two theories are considered of paramount importance in order to comprehend the interconnection between the visual, written and auditory sources simultaneously present in subtitled material. Furthermore, the role of motivation and cognitive engagement in the context of FL learning will be examined. In the second part of the article, two research studies on the use of subtitled episodes from the sitcom Friends for informal and colloquial language learning will be discussed. The aim of the studies is to draw evidence on the benefits of using both interlingual and intralingual subtitled materials in the EFL classroom in Higher Education. The results from the two studies show that both interlingual (L1) and intralingual (L2) subtitles facilitate informal and colloquial language learning in the EFL classroom.
2. Principles of Cognitive Load Theory
According to CLT, for instruction to be effective, the brain's capacity for processing information should not be overloaded (Sweller 1994). CLT promotes the idea that the activities students are engaged in should be directed at schema acquisition and automation (Chandler and Sweller 1991; Sweller 1994). In other words, the instructor should not create unnecessary activities that require excessive attention or concentration, as this may lead to overloading of the working memory (WM) and prevent students from acquiring the essential information that is to be learned. This principle is vital in any form of instruction, but it is a fundamental consideration in multimedia instruction, due to the ease with which distractions can arise (Sorden 2012). CLT accounts for three types of cognitive load: intrinsic, extraneous and germane (van Merriënboer and Ayres 2005), as shown in Figure 1.

Figure 1. Cognitive loads in CLT (adapted from Nguyen and Clark 2005, 3)

Intrinsic cognitive load occurs during the interaction between the nature of the material to be learned and the expertise of the learner. It is dependent on the intrinsic nature (level of difficulty) of the learning material and also on the learner's amount of
prior knowledge. Extraneous (also known as extrinsic) cognitive load shares common grounds with Mayer’s (2009) extraneous processing. It is caused by factors that are not essential to the material to be learned and split learners’ attention between several sources of information. Extraneous processing refers to cognitive processing that does not serve the instructional objective and it is caused by poor instructional design and confusion. If extraneous processing exhausts all cognitive capacity due to poor design, then the learner will not be able to fulfill other cognitive processes, such as selecting, integrating and organising, which in turn will lead to poor language retention and performance (Mayer 2003; 2009). Thus, the extraneous load does not contribute in a direct way to the understanding of the material taught. This type of load should be minimised as much as possible in order to avoid WM overload (Homer et al. 2008). The germane cognitive load is the mental effort that is employed by learners to process the new information and to integrate it into their existing knowledge structures. This type of cognitive load is considered to boost learning by organising and integrating information in WM (Sweller et al. 1998). As an overall principle, learning materials should be designed with the aim of reducing extraneous load and giving access to mental resources by increasing the germane load. According to Sweller (2005), WM has a limited capacity to store novel information, as opposed to long-term memory (LTM), which has an unlimited ability to hold cognitive schemas that can vary in their degree of complexity and automation. Therefore, human expertise is considered to come from knowledge gathered in cognitive schemas and not from the capacity to engage in comprehending new elements, yet to be organised in LTM (van Merriënboer and Ayres 2005). A schema is a cognitive concept that organises the elements of information in the order that they will be dealt with. Newly acquired information is modified so as to be consistent with knowledge of the subject matter. In this way, a person’s understanding is arranged into schemas and these schemas determine how new information is processed (Sweller 1994). Furthermore, schemas have the property of holding most of the learned and intellectual skills people show, therefore one’s
knowledge and intellectual abilities depend on schema acquisition. Another property of schemas is that they can reduce WM load, because even a complex schema can be treated as one element when brought into WM. Thus, the capacity of WM is expanded, and consequently the cognitive load of WM is reduced (van Merriënboer and Ayres 2005, 6). Figure 2 shows the process of schema assimilation through the WM into LTM and further up into the process of schema automation.

Figure 2. Schema automation process in CLT (adapted from Howarth 2015, 21)

Even though cognitive schemas are kept in and reclaimed from LTM, new information must be processed in WM. The main concern of CLT is the ease with which information may be processed in WM, as this can be affected by intrinsic cognitive load (the inner nature of the tasks themselves), by the extraneous cognitive load (the way in which the tasks are presented) or by germane cognitive load (the amount of cognitive resources that learners supply in schema construction and automation) (Sweller et al. 1998; van Merriënboer and Ayres 2005). The main principle of CLT is to decrease extraneous cognitive load and to increase germane cognitive load, considering the limitations of the available processing capacity of WM, and hence preventing cognitive overload (Figure 3).
Figure 3. Intrinsic and extraneous cognitive loads (adapted from Chong 2005, 108)
Both schema acquisition and automation share a common characteristic, i.e., they have the effect of reducing WM load. On the one hand, schemas increase the amount of information that can be stored in WM by “chunking individual elements into a single element” (Sweller 1994, 299). On the other hand, automation allows working memory to be sidestepped, as processing that occurs automatically requires less working memory space and consequently, the working capacity is released to carry out other functions. The implications of CLT have been investigated in the context of multimedia learning, mainly due to the use of technology as an instructional tool that processes and regards information in different presentation modes and sensory modalities. The Cognitive Theory of Multimedia Learning depicted by Mayer (2009) is a process theory that supplements CLT. The basic principles in the CTML are the dual coding assumption and the dual channel assumption. While the former refers to the presentation mode of the information (verbal and pictorial), which is processed in separate but interconnected systems, the latter makes reference to the sensory path of information perception, highlighting that visual and auditory information are processed in systems that are different, but that can be interdependent. In the following section, CTML will be described in relation to CLT and the cognitive load perspective in order to discuss the main implications of the two theories upon second/foreign language learning and teaching with subtitles and multimedia input. The use of subtitles as support in the EFL classroom is upheld by these theories, given that adding a sensory channel, i.e., textual in the form of subtitles, apart from the visual and auditory ones, increases the limited capacity of information processing, and prior knowledge is more likely to be activated and become accessible to learners.
3. Cognitive Load Theory and Cognitive Theory of Multimedia Learning
The core principle of both CTML (Mayer 2009) and CLT (Chandler and Sweller 1991; Sweller 2005) revolves around the idea that learners are engaged in three kinds of cognitive processing while learning: extraneous processing, essential processing, and generative processing. Extraneous processing refers to cognitive processing that does not serve the instructional objective and it is caused by poor instructional design and confusion. If extraneous processing exhausts all cognitive capacity due to poor design, then the learner will not be able to fulfill other cognitive processes, such as selecting, integrating and organising, which in turn will lead to poor language retention
and performance. Essential processing is the cognitive processing aimed at mentally representing the displayed material in WM, and it is hampered by complexity. Too many steps and underlying processes at this stage could lead to an overload of a learner's cognitive capacity. That is why learners should be provided with key elements to ease that complexity and allow them to focus on essential processing, resulting in good retention. Generative processing, on the other hand, entails the cognitive processing focused on understanding the material, and it is grounded in the learner's effort to get involved in the learning process, such as selecting, organising and integrating the presented material. This stage is strongly related to the learner's level of motivation and engagement in the learning environment. If learners manage to engage in essential and generative processing, it is likely they will achieve meaningful learning outcomes with good retention and good transfer performance. The ultimate goal in this respect is to reduce extraneous processing while managing essential processing and fostering generative processing (Mayer 2003; 2009; 2014).
With reference to the sensory modality of information, Mayer (2009) states that knowledge is better acquired if the materials are simultaneously presented auditorially and visually. CLT supports this principle by exemplifying the modality effect in relation to the memory load. Hence, a picture-and-text format implies a higher load in visual WM, because both types of information are to be processed in this system. In comparison, the picture-and-narration mode generates a lower amount of cognitive load in visual WM, because auditory and visual information are each processed in their respective system, and thus the total amount of load is distributed between the two systems.
Both CLT and the CTML point out the principle of redundancy as relevant for instructional design if it is used in L2/FL learning contexts. Initially, it was believed that adding redundant printed text to narrated graphics would create extraneous processing. The practice of AVT makes use of printed text in the form of interlingual and intralingual subtitles on screen in addition to sound and image. At first sight, subtitled audiovisual materials may seem inappropriate for language learning and prone to cause extraneous processing and overburden the cognitive load. Mayer et al. (2014) claim that extraneous processing may occur when redundant on-screen subtitles are used. This is believed to happen when the student tries to integrate two verbal streams in order to make sure the printed words correspond to the spoken words, and when the learner scans between the words in the caption area; redundant printed text can lead to split attention in the visual channel, which may cause learners to miss information if the video is too fast-paced or if they need to spend too much time reading the printed words. However, it should be noted that redundancy is considered beneficial if it involves learning an L2/FL, because it appears as reinforcement rather than as a redundant element
(Mayer et al. 2014). On the basis of CLT (Sweller 2005) and CTML (Mayer 2009), the redundancy facilitation hypothesis suggests "a reverse redundancy effect in scenarios where the redundant material can support and reinforce basic cognitive processing that is not yet automated in non-native speakers, while minimizing extra cognitive load" (Mayer et al. 2014, 654). This is the case for the participants who took part in the two research studies, since they were studying for a Bachelor Degree (BA) in English Studies. Accordingly, subtitles do not act as a redundant tool: first, because they are not a word-for-word transcription of the spoken dialogues; and second, because they offer support in understanding the linguistic items in a meaningful, implicit and authentic context. Hence, thanks to subtitles as a support, learners are likely to internalise the linguistic concepts and transform them into automated forms. In this way, their difficulty will diminish over time, thereby reducing the cognitive load of the WM.
4. The role of motivation in the Cognitive Affective Theory of Learning with Media
A relevant aspect of cognitive theories involves the role of motivation in multimedia learning, which according to Mayer (2014) is “the internal state that initiates, maintains, and energises the learner’s effort to engage in the learning process” (171). However, the underlying question is: what motivates learners to engage in the cognitive processes of selecting, organising and integrating elements that are considered vital for meaningful learning to take place (Mayer 2011)? The Cognitive Affective Theory of Learning with Media includes motivational and metacognitive factors that are not mentioned in CTML and in CLT (Moreno 2005). Moreover, Moreno and Mayer (2007, 313) state that “motivational factors mediate learning by increasing or decreasing cognitive engagement” and “metacognitive factors mediate learning by regulating cognitive processing and affect.” The motivational factors are considered to improve student learning by fomenting generative processing as long as the learner is not permanently overwhelmed with extraneous processing or constantly distracted from essential processing (Mayer 2014). A central theme in CATLM is that affective features of an instructional message can influence the level of learner engagement in cognitive processing during learning. Thus, there are three conceptualisations of the effects of adding affective features to a lesson to increase learner motivation: less-is-more, more-is-more and focused-more-is-more (Mayer 2014). The less-is-more conceptualisation considers instructional design as techniques aimed at reducing extraneous processing
(e.g., delete extraneous illustrations and text) and at managing essential processing (e.g., highlight essential material). The more-is-more conceptualisation focuses on instructional design techniques aimed at fostering generative processing (e.g., adding appealing graphics or challenging scenarios). Finally, the focused-more-is-more conceptualisation encompasses all three instructional design objectives in order to motivate learners to engage in generative processing while also offering enough guidance to overcome extraneous processing (e.g., adding appealing graphics that are relevant to the instructional objectives or including challenging learning situations, but also providing sufficient time and guidance to attain the learning objectives).

CATLM extends Mayer’s CTML to media such as virtual reality, agent-based and case-based learning environments, which may offer the learner instructional materials in addition to words and pictures. CATLM is based on several assumptions: humans have separate channels for processing different information modalities (Baddeley 1992); only a few pieces of information can be actively processed at any time in WM within each channel (Sweller 1994); and meaningful learning takes place when the learner makes a conscious effort to engage in cognitive processes such as selecting, organising, and integrating new information with prior knowledge (Moreno and Mayer 2007). Figure 4 shows a model of learning with an interactive multimodal environment, according to CATLM.
Figure 4. A Cognitive-affective model of learning with media (adapted from Moreno and Mayer 2007, 314)
Thus, the instructional media may be made up of verbal explanations presented either in written or spoken words combined with non-verbal representations such as pictures and sounds. In order for meaningful learning to occur, learners need to focus first on the most relevant verbal and non-verbal information, so that further processing can take place in WM. Then, they need to
organise the multiple representations into a coherent mental model and integrate them with their prior knowledge. In interactive learning environments, these cognitive processes are guided partially by prior knowledge activated by the learner and partially by the feedback and instructional methods received in the learning environment. Therefore, learners may make use of their metacognitive abilities to coordinate their motivation and cognitive processing during learning.

Among the researchers who highlight the relevance of the abovementioned theories for the didactic use of subtitles as language learning support, Talaván (2011; 2012; 2013) points out the capacity of the information processing system to expand when an additional channel (i.e., text in the form of subtitles) is added to the picture. Hence, by adding information related to the visual and auditory channels, the limited capacity of information processing claimed by Mayer (2009) is more likely to expand, and prior knowledge activation becomes more accessible (Wuang and Shen 2007). In other words, incoming information is more efficiently coded when it enters through more than one channel, becoming easier to grasp and acquire. Therefore, the communicative situations disclosed in subtitled audiovisual materials take the shape of what Krashen (1985) called comprehensible input, so essential for making progress in FL learning. Moreover, Moreno’s (2005) and Moreno and Mayer’s (2007) assumption is that extraneous processing is reduced through learners’ engagement and motivation in the learning process. The practice of adding a third channel in the form of subtitles can only be productive in establishing mental connections among the different channels and between the textual information and previous knowledge. Thus, comprehension and language retention are facilitated through the use of subtitles as a solid support tool in foreign language teaching. Both teachers and learners should have a clear goal when using them in order to take full advantage of their benefits. Students need to learn how to use subtitles with a meaningful objective in mind, and that objective should not be understanding ‘everything’ they hear. This can be accomplished by making use of a common characteristic of subtitles, i.e., the reduction and contraction of the linguistic information on the screen, which allows subtitles to be appropriately synchronised with speech within the time allocated on screen. In fact, linguistic differences between the aural and the written text can only be beneficial in this context, not only because they foster the learner’s attention and motivation in noticing the differences between speech and the written text on screen, but also because they reinforce learners’ belief in their ability to understand the foreign language they listen to (Talaván 2013).
5. Two empirical studies
The aforementioned theories – CLT, CTML and CATLM – support the pedagogical application of AVT and can be considered the core theories for research studies in the L2/FL classroom that make use, as in this case, of interlingual and intralingual subtitled material. The empirical studies presented below involve the use of multimedia messages, in the shape of subtitled audiovisual materials, to present the language visually, aurally and textually. The interconnections between the three modes enhance the learner’s active participation in the process of language learning. In addition, the relationship between the three modes helps to build internal connections among words and pictures, and eventually to link the verbal, pictorial and written models with learners’ prior knowledge, hence creating external connections. The three processes learners become engaged in while listening, watching and reading multimedia sources are considered vital for better language retention and comprehension and for active learning to take place (Mayer 2009). Likewise, learners have a clear goal in all the tasks they are involved in, as they are not simply watching television series for the sake of doing so, but are focusing on overall comprehension of the episodes, paying special attention to both the contextual and linguistic information about colloquial and informal language use. The motivational factor is fundamental in easing the load of extraneous cognitive processing and enhancing essential and generative cognitive processing.

Despite the range of theoretical and empirical studies already carried out on subtitled audiovisual materials (Bravo 2008, 2010; Danan 1992, 2004; D’Ydewalle and Van de Poel 1999; Koolstra and Beentjes 1999; Talaván 2011; Vanderplank 1988), the aspect of informality and conversational speech in connection with the use of subtitles and audiovisual aids has scarcely been investigated. Thus, this paper outlines two empirical studies that were designed to evaluate the effectiveness of both interlingual and intralingual subtitles on informal and colloquial language acquisition in Higher Education classroom settings.
5.1 Methodology
The testing procedure for the two empirical studies (Study 1 and Study 2) was similar, although there were some differences that marked their distinct purposes. In Study 1, learners’ results were analysed at the beginning and at the end of a seven-week experiment period; the pre-test was also used as a post-test at the end of the experiment in order to track any noticeable progress. In Study 2, the results of the tests were analysed on the basis of learners’ immediate performance, since they had to answer the test immediately after they watched the
episodes during a seven-week experiment period. Thus, learners in the second study had to take the test after each viewing session, immediately after watching each episode, whereas learners in the first study took the pre-/post-test before they started watching the first episode of the series and at the very end of the experiment respectively. Learners in the second study had to remember the target vocabulary from the episodes in an ‘ad-hoc’ way, compared to the process of learning the vocabulary in the first study, given that learners might have already become familiar with the target vocabulary when tested at the very end of the experiment. In this way, learners in Study 1 had time to internalise the target language before they were given the post-test in the seventh week, whereas in the second study they watched episodes every week for seven weeks and their vocabulary recall was tested immediately afterwards, which did not allow them to base their answers on previously acquired knowledge. The data collection was based entirely on test results. No questionnaires or class observation sheets were used, because the main aim of the empirical studies was to look into overall and individual scores in relation to learners’ performance after watching subtitled videos. The statistical analyses offer an in-depth view of the trends in learners’ performance over a period of seven weeks.

5.1.1 Participants
The 49 learners who participated in the two studies were second-year university students (level A2 to C1 of the CEFR), males (24%) and females (76%), between 19 and 25 years old. The number of participants in each study varied depending on the aims of each study and on learners’ class attendance. The number of subjects who took part in each study will be mentioned in its corresponding section, together with details about how students were assigned to the viewing conditions (either interlingual or intralingual). The participants were enrolled in a Bachelor’s Degree in English Studies at Rovira i Virgili University, and the Oral Skills (Listening and Speaking) course was the basis for this research. An online questionnaire1 was distributed before starting data collection in order to gain an insight into subjects’ backgrounds and viewing habits, and to eliminate those who were not suitable for the study.
1. www.encuestafacil.com was used as an online tool to create, distribute and analyse the questionnaires. The collaboration between Universia (www.universia.net) and Encuesta facil gives free access to the consortium of university members to create questionnaires and distribute them for research purposes.
5.1.2 Resources
A set of authentic audiovisual materials (episodes from the North American television series Friends – Season 1 and 2) was selected by the lecturer/researcher to be used as pedagogical support in the classroom. The sitcom Friends2 was considered a meaningful didactic tool for this setting due to its entertaining and motivational traits, highly appropriate for the learners’ age and rich in informal contexts and colloquial language. It also presents communicative, real-life situations, which are highly relevant for the age of the participants. Due to its entertainment value and motivational context, the selected audiovisual material creates a relaxing and enjoyable atmosphere, promoting a low “affective filter”, which greatly benefits language acquisition (Krashen 1985, 3) and is in line with Moreno’s (2005) principles of the Cognitive Affective Theory of Learning with Media. Several types of language tests were created in order to evaluate learners’ response to the authentic materials throughout the two studies. For Study 1, a 30-item (15 open questions and 15 multiple-choice) pre-test – used also as a post-test – was designed to test the colloquial and informal expressions and words (slang, idioms, phrasal verbs, single-word informal lexis, and colloquial fixed formulae). The pre- and post-test were administered before the beginning and at the end of the seven-week study respectively. The 15 multiple-choice questions targeted the informal and colloquial words and expressions that appeared in the viewing sessions, containing one correct option and two distractors; these questions were designed to make students recognise and identify the correct items present in a specific scene and context in the episode. The 15 open questions aimed at developing students’ ability to express the meaning of the items assessed in their own words, and they contained informal words and expressions, such as slang, idioms, and phrasal verbs present in the episodes. A similar type of test was designed for gathering data for Study 2: a 20-item immediate test (10 open questions and 10 multiple-choice questions).
2. The project includes the use of copyrighted materials, i.e., episodes from the official DVD of the sitcom Friends, Season 1 and 2. For the viewing activities, only the corresponding subtitles in Spanish or captions in English from the official DVD were used. The use of copyrighted material for educational and research purposes is allowed by the Spanish intellectual property law, Legislative Decree 1/1996, amended by Law 23/2006 and Law 21/2014, as long as the inclusion of the material is justified and the source of the content is properly mentioned. For this project, the material is used for analysis purposes only. The video material used for the analysis is not published here or distributed by itself nor in its entirety, and it was used solely by the researcher to prepare the testing procedures and the in-class activities.
Table 1. Student distribution in the two subtitling groups according to their proficiency level

G1 – Interlingual subtitles        G2 – Intralingual subtitles
Level     No. students             Level     No. students
A2        1                        A2        2
B1        8                        B1        8
B2        7                        B2        8
C1        2                        C1        4
Total:    18                       Total:    22
This 20-item test was administered after each viewing session. The multiple-choice and open questions were designed in the same way as the pre- and post-test of Study 1.
5.2 Study 1
The data for the first study were collected from only 40 participants, who were randomly assigned to one of the two groups: G1 (interlingual subtitles) or G2 (intralingual subtitles). Both groups included students from A2 to C1 level according to the CEFR, as shown in Table 1. Students in both G1 and G2 watched thirteen episodes from the TV series Friends over a period of 7 weeks. Every week the participants watched 2 episodes of approximately 25 minutes each, so they were exposed to thirteen subtitled episodes, totalling approximately 325 minutes. The 30-item pre-/post-test was administered in order to analyse the effect of the two subtitling modes on informal and colloquial language acquisition before and after the seven-week period. A Welch two-sample t-test was carried out and the effect size was calculated in order to determine the magnitude of the observed effect in our sample and to identify the difference in students’ performance between the two subtitling conditions. Table 2 shows the analysis of the post-test by group using the t-test.

Table 2. T-test of post-test by group, mean scores (SD) in each condition, the level of significance and the effect size (Cohen's r)

Welch two sample t-test
Group/Condition     N     Mean      SD      p-value     r
G1 (Sp.S)           18    10.95     3.94    0.01        .39
G2 (En.S)           22    14.68     4.97
Hence, the t-test shows a significant result relative to the .05 alpha level: the difference between the two groups’ post-test scores
is statistically significant (t = −2.70, p = 0.01). The average for the G2 post-test (Mean (M) = 14.68, Standard Deviation (SD) = 4.97) is higher than the average for the G1 post-test (M = 10.95, SD = 3.94); a mean difference of 3.73 points was therefore found between the two conditions, revealing higher scores under the intralingual condition (G2).3 The effect size was also calculated in order to quantify the difference between the two groups and to measure the effectiveness of the treatment (Coe 2000). Cohen’s rule of thumb suggests approximately 0.1 representing a ‘small’ effect size, 0.3 a ‘medium’ effect size and 0.5 a ‘large’ effect size (Field et al. 2012). Reporting the effect size is considered essential to reinforce the significance value and to support the assumption that if the results do not show at least a small effect size, no meaningful difference can be claimed, even if the alpha value is statistically significant (Cohen et al. 2011). In our sample, the results showed a medium-sized effect (r = .39), meaning that the effect accounts for roughly 15% of the total variance; the difference in means between the two subtitle conditions thus represents a medium effect in the sampled population. Overall, the results disclose a significant effect of the intralingual condition (G2) on participants’ post-test scores when exposed to episodes from the American sitcom Friends. Students were able to rely on the visual, audio and written elements of the videos in order to identify the correct meaning of the informal expressions and words in the context provided. Based on the findings of the study, we can conclude that students who were exposed to subtitled audiovisual materials with intralingual (English) subtitles for a period of 7 weeks benefitted more than those who watched the episodes under the interlingual (Spanish/English) condition. The current findings contradict previous research claiming that interlingual subtitles are more beneficial for language learning than intralingual subtitles (D’Ydewalle and Van de Poel 1999; Koolstra and Beentjes 1999; Bianchi and Ciabattoni 2008; Bravo 2010), but support several investigations that accounted for the benefits of intralingual subtitles (Vanderplank 1988; Garza 1991; Borrás and Lafayette 1994; Bird and Williams 2002; Caimi 2006; Araújo 2008; Chai and Erlam 2008).

3. For a more detailed analysis of the results and the implications of the current study for AVT and FL learning, see the full article Frumuselu et al. (2015).
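For readers who wish to check these figures, the Welch t-test and the t-to-r conversion can be reproduced from the summary statistics reported in Table 2 alone. The following sketch is not the authors’ own analysis script: it simply illustrates the computation with SciPy, and small rounding differences from the reported t = −2.70 are expected because the published means and standard deviations are themselves rounded.

```python
# Illustrative re-computation of the Table 2 statistics from summary data only.
from math import sqrt
from scipy import stats

n1, m1, sd1 = 18, 10.95, 3.94   # G1, interlingual (Spanish) subtitles
n2, m2, sd2 = 22, 14.68, 4.97   # G2, intralingual (English) subtitles

# Welch's two-sample t-test from summary statistics (equal_var=False).
t, p = stats.ttest_ind_from_stats(m1, sd1, n1, m2, sd2, n2, equal_var=False)

# Welch-Satterthwaite degrees of freedom.
v1, v2 = sd1**2 / n1, sd2**2 / n2
df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Effect size via r = sqrt(t^2 / (t^2 + df)), the conversion described in Field et al. (2012).
r = sqrt(t**2 / (t**2 + df))

print(f"t({df:.1f}) = {t:.2f}, p = {p:.3f}, r = {r:.2f}")
# Expected output (approximately): t(38.0) = -2.65, p = 0.012, r = 0.39
```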
5.3 Study 2
The data for Study 2 were gathered at the same time as Study 1 was taking place. Therefore, the same participants (49, A2 to C1 of the CEFR), in the same assigned groups, watched thirteen episodes from the American series Friends over the same period of 7 weeks.
However, the aim of this study was to investigate learners’ short-term and immediate acquisition of colloquial and idiomatic vocabulary, in addition to the pre-/post-test results from Study 1. Thus, the students were tested with a 20-item multiple-choice and open-question test administered after each viewing session. A Linear Mixed Effects Regression (LMER) procedure was used to build several growth models in order to investigate any significant differences between the two subtitle conditions in students’ performance. The exposure time of 7 weeks was deliberately chosen, given previous statements in the literature referring to the process of language acquisition. In order to extract possible pedagogical implications for the teaching community and classroom instruction, D’Ydewalle and Van de Poel (1999) advocate the need for further studies to use longitudinal exposure in order to assess cumulative effects. Similarly, Bisson et al. (2014) state that the lack of differences across conditions in their study might be due to the limited exposure to the FL (only 25 minutes) and that future studies should measure the impact of long-term exposure to subtitled FL films on language acquisition, given the slow process of incidental vocabulary acquisition and its small vocabulary gains. Hence, the research questions in the current study4 aim, on the one hand, to examine whether students progress differently in the acquisition of informal and colloquial vocabulary depending on the subtitle condition (interlingual or intralingual) they are exposed to, and on the other hand, whether highly proficient students progress more over time than less proficient students when exposed to one of the two subtitle conditions. Students’ proficiency is modelled as a quadratic growth curve in this second study, and individual differences between students are assumed for the linear component of the growth in proficiency. This basic model is visualised in Figure 5. As the graph reveals, there is a general tendency for subjects’ scores to increase between sessions 5 and 9, as the estimated growth curves for the individual students show. Nevertheless, a decrease in this trend can be observed towards the last sessions. The tendency is not homogeneous across students: there are obvious differences in their results after the first session, with some scoring higher than others and showing a noticeable increase throughout all thirteen sessions, while others scored high after the first session and tended to decrease towards the last viewing sessions.
4. For further details about the methodology, data analyses and the results of the study, see the unpublished doctoral thesis Frumuselu (2015).
Figure 5. Estimated growth curves for the individual students (each line = a student)
Students’ immediate scores, tested after each session, reveal no significant difference between the two subtitle conditions in the proficiency acquired in terms of informal and colloquial vocabulary, although individual growth was observed under each subtitle condition. Consequently, the current findings contradict the results of Study 1, carried out with pre- and post-tests, in which the intralingual condition was found to be more beneficial than the interlingual one. These differences could be the result of the length of time allocated to learners to internalise the vocabulary and recall it later.
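The chapter does not report which software was used for the LMER analysis. Purely as an illustration, a comparable quadratic growth model with by-student random intercepts and slopes could be specified as follows; the file name and column names are hypothetical and only stand in for a long-format version of the immediate test scores (one row per student per session).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: columns 'score' (immediate test score),
# 'session' (1-13), 'condition' (interlingual vs intralingual), 'student' (ID).
scores = pd.read_csv("immediate_test_scores.csv")
scores["session_sq"] = scores["session"] ** 2

# Quadratic growth model: fixed effects for the linear and quadratic time terms,
# the subtitle condition and its interaction with time; random intercept and
# random linear slope per student.
model = smf.mixedlm(
    "score ~ session + session_sq + C(condition) + C(condition):session",
    data=scores,
    groups=scores["student"],
    re_formula="~session",
)
result = model.fit(reml=True)
print(result.summary())  # fixed-effect estimates and random-effect variances
```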
6. Conclusions
The use of subtitled audiovisual materials as didactic tools in FL learning contexts is supported by the principles of CLT, CTML and CATLM, which offer a solid theoretical rationale. Thus, by distributing the information among the three
systems (audio, visual and textual), it is possible to foster language comprehension and the construction of meaning. Subtitles possess an added value, as they can raise learners’ awareness of salient input, which could otherwise be missed if students rely either only on their listening ability or only on reading the subtitles as a primary task. By grasping the salient language input from subtitled or captioned videos, learners are likely to store the information in LTM. Once the input is stored in LTM, it is split into two parts and kept either in the form of semantic memory or episodic memory. As audiovisuals are mainly episode-orientated, the content of the video can easily be remembered in LTM. Storing the input in an episodic or visual memory form can be beneficial for later activation of the content of semantic memory; in other words, lexical items can be more easily remembered if associated with visual elements, thus easing the load on WM (Wuang and Shen 2007).

The two empirical studies described in this chapter provide evidence of the benefits derived from the use of subtitled audiovisual materials in the EFL classroom in Higher Education. Both interlingual and intralingual subtitles proved to have a facilitating role in informal and colloquial language learning in the EFL classroom. On the one hand, extensive viewing of films with interlingual subtitles seems to ease incidental learning; on the other hand, being exposed to intralingually subtitled programmes helps language acquisition, especially in relation to vocabulary and listening comprehension, as learners can visualise the words they hear and map them onto the written representations. The empirical evidence presented reinforces the didactic application of AVT practices in FL learning and is upheld by the solid theories discussed in the first part of this article. The principles of CLT can be perceived in the procedure and the method employed in both research studies: the lecturer/researcher did not create additional activities that entail excessive attention or concentration; on the contrary, learners were provided with subtitled materials that minimised WM overload and allowed subjects to pay special attention to the essential information. As subtitles come in a summarised form and not in a word-for-word format, they are aimed at conveying the core of the message and not at flooding the screen with extra and unnecessary information. Hence, the textual mode, in the form of subtitles, acts as a reinforcing channel that supports the visual and audio messages. This beneficial use of multi-sensorial transfer of information is acknowledged by both CTML and CLT, as the materials are presented simultaneously via various channels (auditorily, visually and textually), which eases memory load. Learners were able to make connections between the images presented in the television series, the dialogues of the actors and their synthesised written representations in the form of subtitles. Thus, they processed the auditory and visual
information in their respective systems and were able to distribute the cognitive load between the two systems, hence avoiding an overload of WM.

The aspect of motivation is vital in CATLM, as affective features can influence the level of engagement in cognitive processing during learning, and this may lead to decreased or increased learner motivation (Moreno 2005). Given that participants were watching subtitled episodes from a humorous and entertaining sitcom, they were more likely to engage in a kind of cognitive processing that favours the germane cognitive load, by organising and integrating new information in WM. In this way, learners could overcome the intrinsic difficulty of the authentic material by referring to an additional channel, i.e., subtitles. Therefore, they could diminish the extraneous cognitive load and free up mental resources that would boost the germane load. An additional textual channel (L1 or L2 subtitles), which for learners is a comprehensible means of understanding the spoken input, offers students the necessary guidance to overcome extraneous processing, in other words, to cope with a message that would otherwise be delivered only in spoken form.

In order to validate the results, replication of the studies with participants of different age groups and with distinctive characteristics (e.g., students who are not following a language degree), or even with younger or adult EFL learners in different educational contexts, would broaden the perspective on the implications of subtitled audiovisual materials for FL learning. As the current studies were carried out in Spain, a traditionally dubbing country, replication of the studies in subtitling countries could lead to contrasting results and this would enrich the field of subtitling research. EFL learners’ characteristics, such as motivation and interest in the target language, learning styles and proficiency level, also need closer examination in future work. In order to address issues such as motivation, students’ learning styles and individual differences, Danan (2015) recommends more longitudinal and qualitative studies in order to discover how different learners interact with and benefit from subtitles over time, and to assess how to introduce subtitling more productively into language learning curricula.

There are still a number of areas in the field of AVT and FL learning that are waiting to be investigated and explored in the future. The theories that support the existing empirical evidence on the use of subtitled audiovisuals in relation to FL learning stand as proof of the potential of AVT. In this era of technological progress and instant dissemination of information and material over the Internet, the new “digitalized” and “virtualized” generation of users has no difficulty in profiting from the benefits of both “effects of” and “effects with” subtitled and captioned videos and TV programmes (Vanderplank 2015, 33). Having access to digital devices from any part of the world and at any time gives students the essential
autonomy to be in control of their own learning process and to choose the method that suits them best, in an informal leisure activity that can help them build up their linguistic competences.
References

Araújo, Vera. 2008. “The Educational Use of Subtitled Films in EFL Teaching.” In The Didactics of Audiovisual Translation, ed. by Jorge Díaz Cintas, 227–238. Amsterdam: John Benjamins Publishing Company. https://doi.org/10.1075/btl.77.22san
Baddeley, Alan D. 1992. “Working Memory.” Science 255 (5044): 556–59. https://doi.org/10.1126/science.1736359
Bianchi, Francesca, and Tiziana Ciabattoni. 2008. “Captions and Subtitles in EFL Learning: An Investigative Study in a Comprehensive Computer Environment.” In From Didactas to Ecolingua, ed. by Anthony Baldry, Maria Pavesi, Carol Taylor Torsello, and Christopher Taylor, 69–90. Trieste: EUT – Edizioni Università di Trieste.
Bird, Stephen A., and John N. Williams. 2002. “The Effect of Bimodal Input on Implicit and Explicit Memory: An Investigation into the Benefits of within-Language Subtitling.” Applied Psycholinguistics 23 (4): 509–33. https://doi.org/10.1017/S0142716402004022
Bisson, Marie-Josée, Walter J. B. Van Heuven, Kathy Conklin, and Richard J. Tunney. 2014. “Processing of Native and Foreign Language Subtitles in Films: An Eye Tracking Study.” Applied Psycholinguistics 35 (2): 399–418. https://doi.org/10.1017/S0142716412000434
Borrás, Isabel, and Robert C. Lafayette. 1994. “Effects of Multimedia Courseware Subtitling on the Speaking Performance of College Students of French.” The Modern Language Journal 78 (1): 61–75. https://doi.org/10.2307/329253.
Bravo, Conceição. 2008. “Putting the Reader in the Picture. Screen Translation and Foreign-Language Learning.” PhD diss. Rovira i Virgili University. Accessed October 29, 2017. http://tdx.cat/bitstream/handle/10803/8771/Condhino.pdf?sequence=1.
Bravo, Conceição. 2010. “Text on Screen and Text on Air: A Useful Tool for Foreign Language Teachers and Learners.” In New Insights into Audiovisual Translation and Media, ed. by Jorge Díaz Cintas, Anna Matamala, and Josélia Neves, 269–83. Amsterdam: Rodopi. https://doi.org/10.1163/9789042031814_020
Caimi, Annamaria. 2006. “Audiovisual Translation and Language Learning: The Promotion of Intralingual Subtitles.” The Journal of Specialised Translation 6: 85–98. Chai, Judy, and Rosemary Erlam. 2008. “The Effect and the Influence of the Use of Video and Captions on Second Language Learning.” New Zealand Studies in Applied Linguistics 14 (2): 25–44. Chandler, Paul, and John Sweller. 1991. “Cognitive Load Theory and the Format of Instruction.” Cognition and Instruction 8 (4): 293–332. https://doi.org/10.1207/s1532690xci0804_2
Chong, Toh Seong. 2005. “Recent Advances in Cognitive Load Theory Research: Implications for Instructional Designers.” Malaysian Online Journal of Instructional Technology (MOJIT) 2 (3): 106–117. Coe, Robert. 2000. “What Is an ‘Effect Size’?” CEM Centre, University of Durham. Accessed October 29, 2017. http://www.cem.org/effect-size-resources.
Cohen, Louis, Lawrence Manion, and Keith Morrison. 2011. Research Methods in Education. 7th ed. New York: Taylor & Francis Group.
D’Ydewalle, Géry, and Marijke Van de Poel. 1999. “Incidental Foreign-Language Acquisition by Children Watching Subtitled Television Programs.” Journal of Psycholinguistic Research 28 (3): 227–244. https://doi.org/10.1023/A:1023202130625
Danan, Martine. 1992. “Reversed Subtitling and Dual Coding Theory: New Directions for Foreign Language Instruction.” Language Learning 42 (4): 497–527. https://doi.org/10.1111/j.1467‑1770.1992.tb01042.x.
Danan, Martine. 2004. “Captioning and Subtitling: Undervalued Language Learning Strategies.” Meta 49 (1): 67–77. https://doi.org/10.7202/009021ar
Danan, Martine. 2015. “Subtitling as a Language Learning Tool: Past Findings, Current Applications, and Future Paths.” In Subtitles and Language Learning. Principles, Strategies and Practical Experiences, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 41–61. Bern, New York: Peter Lang.
Field, Andy P., Jeremy Miles, and Zoe Field. 2012. Discovering Statistics Using R. London: Sage.
Frumuselu, Anca Daniela. 2015. “Subtitled Television Series inside the EFL Classroom: Long-Term Effects upon Colloquial Language Learning and Oral Production.” PhD diss. Universitat Rovira i Virgili.
Frumuselu, Anca Daniela, Sven De Maeyer, Vincent Donche, and María del Mar Gutiérrez Colon Plana. 2015. “Television Series inside the EFL Classroom: Bridging the Gap between Teaching and Learning Informal Language through Subtitles.” Linguistics and Education 32: 107–117. https://doi.org/10.1016/j.linged.2015.10.001
Garza, Thomas J. 1991. “Evaluating the Use of Captioned Video Materials in Advanced Foreign Language Learning.” Foreign Language Annals 24 (3): 239–58. https://doi.org/10.1111/j.1944‑9720.1991.tb00469.x
Homer, Bruce D., Jan L. Plass, and Linda Blake. 2008. “The Effects of Video on Cognitive Load and Social Presence in Multimedia-Learning.” Computers in Human Behavior 24 (3): 786–797. https://doi.org/10.1016/j.chb.2007.02.009
Howarth, Jeff. 2015. “Learning by Solving Problems: Cognitive Load Theory and the Re-Design of an Introductory GIS Course.” Cartographic Perspectives 80: 18–34. https://doi.org/10.14714/CP80.1320
Koolstra, Cees M., and Johannes W. J. Beentjes. 1999. “Children’s Vocabulary Acquisition in a Foreign Language through Watching Subtitled Television Programs at Home.” ETR&D – Educational Technology Research and Development 47 (1): 51–60. https://doi.org/10.1007/BF02299476
Krashen, Stephen D. 1985. The Input Hypothesis: Issues and Implications. London: Longman. Mayer, Richard E. 2003. “Cognitive Theory of Multimedia Learning.” Learning and Instruction 13: 125–39. https://doi.org/10.1016/S0959‑4752(02)00016‑6 Mayer, Richard E. 2009. Multimedia Learning. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511811678
Mayer, Richard E. 2011. Applying the Science of Learning. Boston, MA: Pearson/Allyn&Bacon. Mayer, Richard E. 2014. “Incorporating Motivation into Multimedia Learning.” Learning and Instruction 29: 171–73. https://doi.org/10.1016/j.learninstruc.2013.04.003 Mayer, Richard E., Hyunjeong Lee, and Alanna Peebles. 2014. “Multimedia Learning in a Second Language: A Cognitive Load Perspective.” Applied Cognitive Psychology 28 (5): 653–660. https://doi.org/10.1002/acp.3050
Moreno, Roxana. 2005. “Instructional Technology: Promise and Pitfalls.” In Technology-Based Education: Bringing Researchers and Practitioners Together, ed. by Lisa M. Pytlik Zillig, Mary Bodvarsson, and Roger Bruning, 1–19. Greenwich, CT: Information Age Publishing. Moreno, Roxana, and Richard Mayer. 2007. “Interactive Multimodal Learning Environments.” Educational Psychology Review 19 (3): 309–326. https://doi.org/10.1007/s10648‑007‑9047‑2 Nguyen, Frank, and Ruth Colvin Clark. 2005. “Efficiency in E-Learning: Proven Instructional Methods for Faster, Better, Online Learning.” Learning Solutions Magazine, November. Accessed October 29, 2017. http://www.clarktraining.com/content/articles/Guild_ELearning.pdf. Sorden, Stephen D. 2012. “The Cognitive Theory of Multimedia Learning.” In Handbook of Educational Theories, 1–31. Charlotte, NC: Information Age Publishing. Sweller, John. 1994. “Cognitive Load Theory, Learning Difficulty, and Instructional Design.” Learning and Instruction 4: 295–312. https://doi.org/10.1016/0959‑4752(94)90003‑5 Sweller, John. 2005. “Implications of Cognitive Load Theory for Multimedia Learning.” In The Cambridge Handbook of Multimedia Learning, ed. by Richard E. Mayer, 19–30. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511816819.003 Sweller, John, Jeroen J. G. van Merriënboer, and Fred G. W. C. Paas. 1998. “Cognitive Architecture and Instructional Design.” Educational Psychology Review 10 (3): 251–296. https://doi.org/10.1023/A:1022193728205
Talaván, Noa. 2011. “A Quasi-Experimental Research Project on Subtitling and Foreign Language Acquisition.” In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Àine Ní Mhainnín, 197–218. Oxford: Peter Lang.
Talaván, Noa. 2012. “Justificación teórico-práctica del uso de los subtítulos en la enseñanza-aprendizaje de idiomas.” Trans. Revista de Traductología 16: 23–37.
Talaván, Noa. 2013. La subtitulación en el aprendizaje de lenguas extranjeras. Barcelona: Octaedro.
Vanderplank, Robert. 1988. “The Value of Teletext Sub-Titles in Language Learning.” ELT Journal 42 (4): 272–81. https://doi.org/10.1093/elt/42.4.272.
Vanderplank, Robert. 2015. “Thirty Years of Research into Captions/Same Language Subtitles and Second/Foreign Language Learning: Distinguishing between ‘Effects Of’ Subtitles and ‘Effects With’ Subtitles for Future Research.” In Subtitles and Language Learning. Principles, Strategies and Practical Experiences, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 19–40. Bern, New York: Peter Lang.
van Merriënboer, Jeroen J. G., and Paul Ayres. 2005. “Research on Cognitive Load Theory and its Design Implications for E-Learning.” ETR&D 53 (3): 5–13. https://doi.org/10.1007/BF02504793
Wuang, Yan-dong, and Cai-fen Shen. 2007. “Tentative Model of Integrating Authentic Captioned Video to Facilitate ESL Learning.” Sino-Us English Teaching 4 (9): 1–13.
Exploring the possibilities of interactive audiovisual activities for language learning
Stavroula Sokoli
Computer Technology Institute & Press “Diophantus”
Language teachers often resort to video to familiarise their students with contextualised linguistic and cultural aspects of communication. Since they tend to consider learning-by-doing more effective than learning-by-viewing, they try to further exploit this valuable asset through active tasks, such as taking notes or answering comprehension questions, silent viewing and predicting, ordering sentences, role-playing, analysing, summarising and describing (Zabalbeascoa et al. 2015). Advances in ICT have enabled more interactive options, with a view to expanding the range of available activities to include audiovisual translation (AVT) activities, such as subtitling and dubbing. This is the focus of ClipFlair, a project which developed a platform for creating and hosting such activities, and a pedagogical proposal based on the idea that language learning can be enhanced with the use of activities asking learners to work from a video by inserting their own writing (captioning) or speech (revoicing). Based on this framework, a whole range of possible activities are open to teachers, beyond standard subtitling and dubbing. This paper starts out by briefly describing previous work in the area and goes on to illustrate the ClipFlair conceptual framework, including the educational specifications for the web platform; concrete examples are then provided in order to expand on the possible audiovisual activities that can be used in a language classroom and beyond. Finally, the paper gives an account of the learner survey carried out during the pilot phase of the project, which included feedback provided by more than a thousand learners and teachers.

Keywords: multimodal activities, language learning, audiovisual translation, captioning, revoicing, subtitling, dubbing, audio description
1. Introduction
The advantages of using video in the foreign language classroom have been widely acknowledged and explored by several scholars carrying out research in this field (e.g., Allan 1985; Baltova 1994; Stempleski and Tomalin 1990; Tschirner 2001; King 2002). Associated benefits include variety, flexibility and adaptability to learning needs, enhancing learner motivation, providing exposure to nonverbal elements and presenting authentic linguistic and cultural aspects of communication in context. Depending on the language learning setting – formal or non-formal, teacher-guided or independent, ICT-aided or not – video can be used in different ways, with varying degrees of engagement on the part of the learner. In an informal setting, a learner could benefit from video by simply watching foreign films with or without subtitles. In a classroom, learners could be asked to take notes or answer comprehension questions, thus preventing passive viewing. Other activities aiming to engage learners and help them become active users of the language include silent viewing and predicting, ordering sentences, role-playing, analysing, summarising and describing (Zabalbeascoa et al. 2015). Advances in ICT have enabled more interactive options, with a view to expanding the range of available activities. One of them is the simulation of the professional environment of a subtitler or a dubber. The practice of AVT implies being involved in an authentic task, situated in a meaningful context, whose outcome, unlike watching subtitles or using viewing techniques, is a tangible, shareable product: the subtitled or dubbed video. Researchers have found that subtitling improves trainee translators’ linguistic skills. Until recently, this conclusion had been reached incidentally, as a side benefit of attending courses designed for future professionals using professional subtitling software (Klerkx 1998; Williams and Thorne 2000; Neves 2004). Subsequent empirical research, focused specifically on language learning, has highlighted certain positive effects of subtitling on aspects such as vocabulary learning (Lertola 2012), listening comprehension (Talaván 2011), idiomatic expression retention and recall (Bravo 2008), writing skills (Talaván and Rodríguez-Arancón 2015; Talaván et al. 2017), pragmatic awareness (Incalcaterra McLoughlin 2009) and intercultural skills (Borghetti and Lertola 2014). Researchers have also focused on the use of other audiovisual translation modes, including dubbing (Burston 2005; Danan 2010; Chiu 2012), reverse dubbing (Talaván and Ávila-Cabrera 2015) and audio description (Ibáñez and Vermeulen 2013; Talaván and Lertola 2016).

Despite the advantages, practical integration of multimodal activities in language learning can be daunting. High-cost professional software combined with a lack of technical knowledge and terminology has discouraged both teachers and
learners. Teachers wanting to provide learners with opportunities to practice language through subtitling activities do not always have the financial capacity to purchase expensive professional subtitling software. Free software, such as Subtitle Workshop or Aegisub, could be an option, but even then, becoming acquainted with the technical requirements to use this software necessitates time and effort. In 2004, there was a first endeavour to overcome these obstacles, with the development of free subtitling software and activities specifically designed for language education, under the Learning via Subtitling project (Sokoli 2006). According to the LeViS survey (Sokoli et al. 2011), learners not only consolidated and improved their linguistic skills but were also enthusiastic about the innovative nature of the approach. The next step was to increase the number of ways to interact with video, by introducing revoicing through the ClipFlair project (Sokoli 2015).
2. A conceptual framework to address multimodal language learning
Our main aim as members1 of the ClipFlair project (www.clipflair.net) was to develop multimodal activities and resources for language learning and make them available through an open and free web platform. In order to do that, we needed a conceptual framework to define key terms and establish the pedagogical approaches and suppositions. Definition of terms helped avoid misunderstandings among project members and users of the platform. But it was also crucial in the effort to widen the range of possibilities for multimodal activities, beyond standard dubbing and subtitling. The general approach of the ClipFlair conceptual framework (Zabalbeascoa et al. 2012) is based on learning through the interaction of verbal elements (written and spoken words) and non-verbal elements (image and sound). The hypothesis is that language learning can be enhanced through the use of activities whereby learners are asked to modify the video by inserting their own writing or speech. Adding written words includes processes such as standard interlingual subtitling but it can also refer to inserting captions for the deaf and hard-of-hearing, intertitles, annotations or speech bubbles. This broad concept is expressed under the umbrella term of captioning. Similarly, we use the term revoicing to include all kinds of recording speech on video, including dubbing, voice over, audio description for the blind and visually impaired, free commentary, karaoke singing, or reciting.

1. The ClipFlair consortium, led by Universitat Pompeu Fabra, was formed by the following universities and institutions: Computer Technology Institute, Universitat Autònoma de Barcelona, Imperial College London, Universitatea “Babeş-Bolyai”, Universidad de Deusto, Tallinn University, University of Warsaw, Universidade do Algarve, and National University of Ireland, Galway.
When multimodal material is used in an interactive way, one of the key issues raised is skill development. The unimodal text approach aims to develop literacy, the ability to read for knowledge, write coherently, and think critically about the written word. Multimodal literacy, accordingly, requires further skills: being able to interpret the multimodal text in its totality, as a complex communication act, and make sense of a combination of verbal and non-verbal sign elements. For this reason, ClipFlair proposes the concept of audiovisual (AV) skills, including AV speaking and AV writing (Zabalbeascoa et al. 2012). These skills refer to the ability to produce speech and writing, respectively, in combination with the video, taking into consideration and adapting to its other elements, such as speed, voice quality, performance and shot transitions. For example, in dubbing there are certain time restrictions and synchronisation demands: the learner’s utterance has to be produced at the same speed as the pace used by the original character. Similarly, AV reading and AV listening refer to written and oral comprehension, respectively, taking into account the combined effect of the elements of the multimodal material.

Another essential question to consider when introducing multimodal activities is what the learners are asked to do with the material, i.e., how they are supposed to respond to the video. Within this approach, the various possible responses can be categorised into three types of verbal production: repeating, rephrasing or reacting. Repeating refers to verbatim rendering of the verbal elements of the clip as literally as possible. Rephrasing means free rendering or rewording the text, and includes concepts such as ‘loose’ paraphrase, gist and summary. Reacting has to do with producing a new communicative contribution in response to a previous one.

Multimodal activities may involve not only the language being learned (L2) but also the learner’s language (L1), resulting in three combinations that are meaningful for language learning: intralingual (L2 to L2), standard interlingual (L2 to L1) and reverse interlingual (L1 to L2). In an intralingual activity, the language of response (repeating, rephrasing or reacting) is the same as the language of the video. An interlingual activity, on the other hand, could be standard, involving a language transfer from the L2 to the L1, or reverse. There are further combinations resulting from the fact that the video might be silent (non-verbal), in which case the activity would be categorised as intersemiotic (non-verbal to L2). A video could also involve a third language (L3), resulting in a multilingual categorisation. The process of combining the aforementioned aspects, i.e., kinds of response, language combinations and skills, multiplies the ideas for new activities even further. An example of the intralingual-repeating-captioning combination would be an activity in which the learner is asked to produce same-language captions for
an L2 video with the aim of practising AV listening and AV writing. An activity based on the intralingual-reacting-revoicing combination may contain a clip where one character’s turns are silenced, so that the learner has to fill in that character’s turns according to what the other characters are saying and doing, or what is happening. In another case of the same combination, the learner may be asked to react to a video with a free commentary, e.g., director’s comments for a DVD. More examples will be examined in the last section of this paper.
3. Educational specifications of the web platform
This ClipFlair conceptual framework establishes fundamental principles and factors involved in language learning which, in turn, lead to the educational specifications of the web platform. It takes into account a set of interrelated and mutually influenced factors: the learners as the centre of the learning process, the teachers and their different roles, the learning context (as well as the teaching approach). Starting with the learners, different levels of participation are catered for, depending on their needs and language level. Learners’ activity ranges from minimum, such as watching a video, to maximum, such as providing subtitles in the L2 without a script or even producing their own clip. Another central idea is that learning is a unique and individual process, and that learners learn at different paces. As a consequence, ClipFlair integrates instructions in the activity in order to give learners the chance to follow them at their own pace and to repeat videos as many times as they need. However, since learning is also a social process, the platform provides collaboration tools including forums, groups and blogs to allow for different levels of learner involvement. Teacher involvement is also seen as a continuum from minimum to maximum contribution. Teachers may find an existing activity that is suitable for their teaching goals and decide to introduce it in their course as it is or with some modifications. Teachers’ involvement is increased when they create their own activity from scratch, which includes establishing the goals and the skills to be developed, finding a suitable clip, writing instructions for the learner, including other useful material or texts for further reference and finally setting up these activity components in a user-friendly way. The maximum teacher involvement would be to deliver a whole course through the platform, to design and upload activities for each course unit, and use the ClipFlair Social network (see Section 4) to form the student group and provide feedback. The learning context where ClipFlair can be used is flexible. In the case of teacher-driven learners who follow a course with predefined units and lessons, the teacher decides how learners can best use activities and whether to integrate
them in the syllabus as supplementary material, remedial work, voluntary work, further reference, etc. Activities may be carried out in the language lab or at home, in face-to-face or distance education courses. At the other end of the continuum, independent learners, who select and organise their own learning path, goals and strategies, are able to use activities freely, to modify and adapt them to their needs or even to create their own. In other words, authoring is not confined to teachers; learners can mix existing material – videos, texts, instructions and scripts – to generate their own activities and share or use them for their own purposes.
4. Areas of the ClipFlair platform
The ClipFlair platform has three main areas: the Gallery, the Studio, and the Social Network. A good place to start when familiarising oneself with the platform is the ClipFlair Gallery (http://gallery.clipflair.net), where all resources, including activities, clips, texts and images, can be accessed. As shown in Figure 1, users can explore the Activity Gallery using filters to narrow down their search, including a filter named ‘for learners of’, where the user selects the language to be learned or practiced, and ‘for speakers of’, where the choice is the language spoken by the learners (either their L1 or another language they understand well enough to be able to carry out the activity). The next filters help users select the language combination they wish to work with, i.e., whether the activity is interlingual, intralingual or multilingual; the language level according to the Common European Framework of Reference (A1 to C2); the estimated time to complete the activity; the skills developed, namely reading, writing, speaking and listening skills, as well as the corresponding AV skills, as described in Section 2; the kind of response required of the learner (repeating, reacting or rephrasing); the kind of revoicing task, which is categorised into dubbing, audio description, voice-over, free commentary and karaoke; and whether the activity involves a captioning task, with the choices of subtitling, multimodal writing, or intertitles. Filters also include the learner type (independent, guided or teacher-dependent) and the age group for which the activity is most suitable. Finally, users of the Gallery have the option of narrowing down their search according to the kind of feedback provided to the learner after the activity is completed, which can be sample answers, self-assessment, in-class, individual or other forms of feedback. There is also a search field where keywords of interest can be entered.

Filtering activities is possible owing to the information provided by the authors for each activity. When they complete one and upload it to the Gallery, they are asked to fill in a form providing structured metadata. This serves not only the purpose of searching and filtering, but also helps users view all the characteristics of an activity when selecting it in the Gallery.
Figure 1. Filters for searching activities in the ClipFlair Gallery
The ClipFlair Studio (http://studio.clipflair.net) offers the necessary captioning and revoicing tools for activity authors and it also constitutes the workspace of language learners. It is basically a zoomable window, a container of floating window components that serve to integrate activity parts: clips, texts, captions, voice recordings, images and maps. From a technical point of view, there is no specific order for the integration of parts into the activity, but the user can start by simply dragging and dropping a video file from the local file manager application into the Studio activity area. A clip component (or window) is instantly created, with playback buttons and a timeline showing the current viewing time. Written information, such as instructions, conventional written exercises, language tips or interesting trivia, can be integrated with the help of the text component, a tool for viewing, editing and formatting text. If the user already has a useful text file, created with another application, he or she can use it directly in the Studio by dragging and dropping it into the activity area.

The captions component allows users to insert, edit and delete captions. When users want to add a caption line, they just need to pause the video and click on the ‘add caption at current time’ button. The caption’s default duration is 2 seconds, but its end time can be modified through the corresponding button. The user may also import existing subtitle files in .srt, .tss and other file formats. Similarly, subtitles created through the Studio can be exported to be used in other subtitling software, to be uploaded to a YouTube video or to be used otherwise. The revoicing component is used for recording voice, as well as saving and listening to saved recordings. Users can first create their revoicing entries (empty lines), which are synchronised with caption lines (if available). Then, they can click on the recording button available on each line to record their voice. Recordings can be saved individually or all together as a single .wav file, which can then be used with other video-editing software. Finally, the Studio offers an image and a map component, for loading and viewing images and maps respectively.

Each component has a back panel which can be accessed through the options button in the upper right corner. In this back panel, the user can edit the component settings, such as the component size or the background colour. The position of the components is also up to the author, as they can be dragged and dropped anywhere in the activity area. When the activity design is completed, editable options can be deactivated so that the learner can focus only on the content of the activity without accidentally changing the position of the components, or any other pre-established feature.
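As a point of reference for readers unfamiliar with subtitle files, the .srt format mentioned above is a simple plain-text format in which each caption consists of a sequence number, a start and end timecode (milliseconds separated by a comma), and the caption text. The two-caption example below is purely illustrative and is not taken from any ClipFlair activity; each caption happens to last the 2 seconds used as the Studio's default duration.

```
1
00:00:01,000 --> 00:00:03,000
Hello, how are you?

2
00:00:04,200 --> 00:00:06,200
Very well, thanks.
```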
Given the component-based nature of the Studio, the same material can be combined in diverse activities in different languages. For example, the same clip may be exploited differently for other levels, or the same set of instructions can be used with other clips. The activity can be saved locally on the user’s computer as a .clipflair file, which is a compressed file that contains all the sub-files necessary to load the activity again. The user can include the clip itself in the .clipflair file, but this will result in a large file that is difficult to exchange. To avoid this, the information on the clip source, namely its URL, can be saved in the activity. After a review process by ClipFlair members, the saved activity is uploaded to the Activity Gallery, where it acquires its own URL. This feature makes sending activities as easy as sharing a link in a blog post, email, or tweet. Alternatively, the .clipflair file can be shared like any other file and then loaded in the Studio. The ClipFlair Studio is launched through the web browser, but it is also available as a desktop application, installed separately to allow offline use. There are various advantages to using ClipFlair instead of other subtitling applications. Firstly, the instructions to carry out the tasks are all contained within the activity, so there is no need to resort to other files and documents. Additionally, activities offered through the Gallery are accessible through a link using a web browser, without having to install specialised software. One of the most distinctive features of ClipFlair is that teacher or peer feedback on the learner’s work can be provided in the form of written or audio comments per caption. When comments are enabled (through the component settings), a new column appears, where the user can write or record an observation, correction or suggestion about the specific caption or revoicing segment. The final area of the platform is ClipFlair Social (http://social.clipflair.net), a network with an increasing number of members (more than 2,300 by early 2020) hosting a community of people interested in innovative ways of using video in language learning. Its aim is to enable users to form online communities, collaborate, interact and share materials through groups and forums. ClipFlair Social hosts tutorials and manuals on how to use or create activities and it also offers a space for users to provide the software developers with feedback on the web application. All areas of the ClipFlair platform are open source and free, i.e., teachers, learners and other interested parties can freely access and use all materials, tools and resources.
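The chapter describes the saved .clipflair file only as a compressed container of sub-files. Assuming, purely for the sake of illustration, that such a container could be opened as a standard zip archive — an assumption not confirmed by the text — a teacher might inspect an activity file and spot an embedded clip roughly as follows; the file name is hypothetical.

import zipfile
from pathlib import Path

def inspect_activity(path: str) -> None:
    """List the sub-files of a saved activity and flag large entries
    (for instance, an embedded clip) that would make the file hard to exchange.
    Assumption: the .clipflair container can be read as a plain zip archive."""
    with zipfile.ZipFile(path) as archive:
        for info in archive.infolist():
            size_mb = info.file_size / 1_000_000
            note = "  <-- large entry, possibly an embedded clip" if size_mb > 5 else ""
            print(f"{info.filename:40s} {size_mb:8.2f} MB{note}")

inspect_activity(str(Path("Rosa.clipflair")))  # hypothetical local activity file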
5.
Activity examples and ideas
This section aims to present activities designed according to the ClipFlair conceptual framework and implemented in the web platform. It includes activities
where learners are involved to varying degrees, ranging from filling gaps in the subtitles to captioning for the deaf and hard-of-hearing. There are also activities aiming to introduce AVT practices which may not be familiar to learners, such as audio description, as well as ways to tap into the potential of silent video clips. When ClipFlair is first introduced to learners, we recommend a simple task with minimum involvement on the part of the learner. One such activity2 could be filling in gaps in existing subtitles, as it requires no knowledge of subtitling rules or software use. An example would be the activity for learners of Spanish named Rosa (see Figure 2), where learners just have to click on the activity link (http://studio.clipflair.net/?activity=Rosa) sent by the teacher and start following the instructions.
Figure 2. Rosa activity as an example of intralingual-repeating-captioning
Before they watch the film scene, they are asked to fill in the missing verbs and prepositions in the existing subtitles, in order to practice specific grammatical points. Then, they are asked to watch the subtitled film and listen to the dialogues to check whether the verbs and prepositions they have added are correct. It is worth noting that all captioning and revoicing tools have been hidden to allow for a simple interface, where the learner can only edit the content of existing subtitles and not add or delete any.

2. All activities presented here have been designed by the author.
The Rosa activity could arguably be done on paper, but working with a film scene is expected to be more motivating for learners, as they deal with material that they normally access in their leisure time, unlike a dialogue on paper. In addition, they can watch the film scene at their own pace, as opposed to watching it in plenary with the entire class and the teacher organising the viewing. Meaningfulness is also a motivating factor, reinforced by the fact that the purpose of the task is to produce a real-life outcome, a subtitled clip, which can be watched by peers and friends, as opposed to completed gaps on paper. More importantly, further skills are fostered compared to those practiced with the paper equivalent, namely AV listening. According to the feedback given by students at the Hellenic University, where this activity was piloted, they managed to practice listening comprehension as they had to listen repeatedly to parts of the dialogue to check their initial solutions.

An activity can also function as an introduction to the standards of Subtitling for the Deaf and Hard of Hearing (SDH), i.e., providing subtitles that allow hard-of-hearing viewers to follow the dialogue. SDH should also provide information about who is speaking or about sound effects that may be important to understanding the plot. The activity entitled Yalom’s Cure (http://studio.clipflair.net/?activity=Yalom-Cap-B2-EN) aims to introduce students to this audiovisual translation mode and raise awareness of accessibility issues. As shown in Figure 3, all the necessary technical instructions for the creation of new captions are provided in a text component titled ‘How to add captions’. General captioning principles are also contained in this activity, including considerations on time and space restrictions and the consequent need for condensing and paraphrasing the dialogue. The learner is also informed about guidelines for correct synchronisation and segmentation. Through this intralingual-rephrasing-captioning activity, learners not only practice AV writing but also AV listening, as the script is not provided and they may have to listen repeatedly without being asked explicitly to do so, as in conventional listening drills. In other words, the direct goal of the activity is to develop AV writing skills, including gist translation and/or summarising, but AV listening skills are also indirectly developed.

It should be noted that the ClipFlair platform can – and has been – used not only for language learning, but also for raising awareness of other ways of communicating, including sign language, as shown by the project ‘Communicate in a different way!’ carried out by the Model Experimental General Lyceum of the University of Macedonia. During the 2014–2015 school year, students subtitled clips to make them accessible for their friends in the Special High School for Deaf and Hard of Hearing of Thessaloniki. This teacher-led initiative aimed at “building bridges and pulling down obstacles” and involved students in a meaningful activity.
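The time and space restrictions mentioned above for SDH can be made concrete with the kind of check subtitlers routinely apply. The limits used below (37 characters per line and 15 characters per second) are commonly cited illustrative values rather than figures given in this chapter, and the snippet is a sketch, not part of the ClipFlair platform.

MAX_CHARS_PER_LINE = 37    # illustrative space limit, not from the chapter
MAX_CHARS_PER_SECOND = 15  # illustrative reading-speed limit, not from the chapter

def check_caption(lines: list[str], in_time: float, out_time: float) -> list[str]:
    """Return warnings if a draft SDH caption breaks the illustrative
    space (line length) or time (reading speed) limits."""
    warnings = []
    for line in lines:
        if len(line) > MAX_CHARS_PER_LINE:
            warnings.append(f"Line too long ({len(line)} chars): condense '{line}'")
    duration = out_time - in_time
    chars = sum(len(line) for line in lines)
    if duration <= 0:
        warnings.append("Out-time must come after in-time")
    elif chars / duration > MAX_CHARS_PER_SECOND:
        warnings.append(
            f"Reading speed {chars / duration:.1f} cps exceeds {MAX_CHARS_PER_SECOND} cps: "
            "condense the text or extend the caption"
        )
    return warnings

# Example: an invented two-line caption with speaker identification, shown for 2 seconds.
print(check_caption(["[SPEAKER] We have been talking about", "this for a very long time."], 10.0, 12.0))

A check of this kind simply makes visible the trade-off learners face in the activity: either condense and paraphrase the dialogue or give the caption more time on screen.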
Figure 3. Yalom’s Cure activity as an example of intralingual-rephrasing-captioning
Learners’ participation is further increased in the next activity example, entitled Lotería de Navidad (http://studio.clipflair.net/?activity=Loteria-REV-B2-C1-ANY.clipflair), which is based on audio description for the visually impaired. Audio description (AD) for language learning has recently attracted researchers’ interest; for example, Ibáñez and Vermeulen (2013, 53) found that “tasks based on AD allow students to observe the importance of selecting the most accurate lexical items and colloquial expressions, since in AD it is of primary importance to select precise words to describe specific scenes.” The activity Lotería de Navidad includes a brief definition of audio description and its goals, as well as some theoretical instructions on how to go about audio describing a clip (see Figure 4). Learners first have to prepare written descriptions of what happens on screen, using the captions area for practical reasons. In other words, captions are used as a place to write the audio description text and are not projected on the clip. Learners are then asked to click on the recording button of each section to record their voice narrating those descriptions. This activity is an example of reacting to the visual input, as opposed to the previously described repeating or rephrasing activities. It is also intersemiotic, as learners are asked to transform visual non-verbal information into oral verbal elements. It also falls into the revoicing category, as it requires adding voice to a clip. This activity can be considered similar to guided composition exercises where learners are asked to write a description of a picture or a video. The critical difference here is that learners develop AV writing skills, and not just writing skills, as their output has to
conform to the time restrictions of the video. The time available for describing is only the time of the specific shot or group of shots, forcing the learners to carefully select the visual elements to describe depending on their relevance to the intended audience. Equally importantly, AV speaking skills are promoted, as the narration has to fit within these time limits (in- and out-times), which means that learners might need to speak quite fast if the text they prepare is long. The words-per-minute (wpm) column helps the learners in the process of written preparation so as not to exceed the recommended number of words per minute (180 wpm). There is also a bar below the recording button which is green when the corresponding voice recording fits within the time limits and turns red if not. A significant difference from conventional unimodal activities is the authentic and meaningful nature of the task which, like the previous activity based on subtitling for the deaf and hard-of-hearing, raises awareness of accessibility issues.
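As a rough illustration of the timing constraint behind the wpm column and the green/red bar, the following sketch (illustrative Python, not ClipFlair code) estimates whether a prepared description can be narrated within a given shot without exceeding the recommended 180 words per minute; the example sentence and shot length are invented.

MAX_WPM = 180  # recommended ceiling mentioned in the activity

def fits_in_shot(description: str, shot_seconds: float, max_wpm: int = MAX_WPM) -> bool:
    """Return True if the description can be narrated within the shot
    without exceeding the recommended words-per-minute ceiling."""
    words = len(description.split())
    required_wpm = words / (shot_seconds / 60)
    print(f"{words} words in {shot_seconds:.1f}s -> {required_wpm:.0f} wpm")
    return required_wpm <= max_wpm

# Example: an invented draft description for a 6-second shot.
fits_in_shot("Una mujer entra en la administración de lotería y saluda al dueño con una sonrisa", 6.0)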
Figure 4. Loteria de Navidad, an intersemiotic-reacting-revoicing activity
Figure 5. Big Buck Bunny activity as an example of intersemiotic-reacting-revoicing
Selecting engaging clips that suit the interests and age of the learners is crucial when creating activities. A humorous animation, such as Big Buck Bunny (http://studio.clipflair.net/?activity=Tutorial), may be more appealing to younger learners. This clip has no verbal elements, only sounds and music; therefore, it can be used for learning any language, i.e., speakers of any L1 can use the clip as there are no dialogues to understand, and they can create captions or narration for any L2 they are learning. Silent clips are also more suitable for learners with no experience in synchronising captions or voice recordings with the original audio, as they do not need to create precise timings, which is a requirement for videos with dialogues. Possible ideas include asking learners to create captions depicting imaginary dialogues or the characters’ thoughts, or to record their voice narrating a story inspired by the clip (see Figure 5).

The activities described above do not involve language transfer because their aim is to reach the maximum possible number of potential users: intralingual L2 to L2 activities can be carried out by speakers of any language learning a specific language, whereas standard interlingual combinations are commonly restricted to learners with a specific L1–L2 combination. Intersemiotic activities cater for an even wider audience, as they lend themselves to easy adaptation into any L2 and only require knowledge of the language the instructions are written in. Regarding the level of language proficiency of the learner, repeating activities seem to be more appropriate for A2 or B1 levels, provided that the speech in the video is not too fast or complicated, whereas rephrasing and reacting seem more suitable for advanced learners. However, all types can be adapted to different levels. For example, the reaction asked of an A2 learner can be simple answers to questions in an interview video where the answers have been muted and only the questions can be heard.
Depending on the nature of the chosen clip, the language level and the skills to be developed, a vast number of activities can be created. For instance, reciting a famous poem and recording it over an evocative clip; revoicing a weather forecast clip; preparing a clip with pictures on a famous person’s biography and then recording the narration of the biography on the clip; creating or choosing an existing Google Earth Tour and narrating a sightseeing tour (revoicing) or adding subtitles to guide the viewers; broadcasting a sports event or discussing recent news; or even revoicing a clip using different voices, accents, or intonations and discussing the result among peers.
6.
Learner survey
The ClipFlair project included a pilot phase in which the activities and the platform were used by learners, mostly in class together with the teacher. From early 2013 to early 2014, a total of 1,250 learners used 85 language learning activities for 12 languages (in descending order of use): English, Spanish, Catalan, Portuguese, Chinese, Romanian, Estonian, Arabic, Polish, Greek, Basque, and Irish. The profile of the learners can be summarised as female (79%) university students (95%), between 18 and 35 years old, who report that they like working with computers (80%) and working with clips to learn foreign languages (82%). As can be seen in the online questionnaire users were asked to complete (Appendix 1), one of the first questions of the survey was whether the student had completed the activity or not. Most learners (91%) completed the activity and, of the 9% who did not, most reported that it was because they did not have time. A negligible percentage of learners (0.6%) found the activity too complicated, did not understand what they had to do, or did not feel comfortable recording their voices. When asked about the ClipFlair activity, about half of them (49%) found the activity easy, whereas 40% thought that it was more or less easy. A significant percentage of learners reported that the activity they carried out was interesting (76%), clear (75%) and fun (64%). A higher percentage found it useful for language learning (84%) and for improving their competence in translation (78%). Most participants answered that they would like to do more activities like it (77%). The survey shows a slight prevalence of captioning activities (54%) over revoicing activities (31%) or activities that involve both captioning and revoicing (15%). It has to be noted here that most learners participating in the survey used ClipFlair in class with their teachers, as mentioned above, which means that the choice of activity was not their own, but their teachers’. The slight preference for
captioning activities does not necessarily mean that teachers think they are more useful for learning; it might be due to technical or classroom limitations (e.g., the lack of microphones on the lab computers or the need to carry out the activity simultaneously, which would result in learners recording fellow students’ voices). As for the platform itself, most learners state that the ClipFlair Studio is user-friendly (78%) and more or less attractive (75%) and that, on the whole, they enjoyed the ClipFlair experience. About 10% of the participants left free-text comments in the survey, including suggestions for improving the platform, such as adding the possibility of saving the project as a video file, or for enriching the activity itself, e.g., by including a glossary. There were also reports that the activity was not adequate for the particular learner’s level (characters in the video speaking too fast) or of technical problems, e.g., that ClipFlair cannot be used in certain operating systems and specific web browsers. On a different note, the fact that one third of the comments consisted of purely positive feedback was considered surprising by the ClipFlair partnership, as it was expected that participants would make the effort to write an observation only if they needed to report complaints or make suggestions. Overall, the learner feedback was very positive, despite the fact that the platform was still in beta. This feedback was used to improve the Studio, for example by enabling text directionality for Arabic and by allowing clips to be loaded locally and not only from online sources.
7.
Concluding remarks
Using video in the foreign language classroom is considered valuable by teachers and scholars because it is appealing, offers variety and flexibility, provides exposure to non-verbal cultural elements, contextualises linguistic aspects and is closer to natural communication than the written mode. Despite these and other advantages, the practical integration of multimodal activities in language learning can be a demanding task, due to technical requirements and a lack of expertise in ICT areas, AVT among them. The ClipFlair project aims to cover the need for ready-to-use multimodal resources by providing a web platform with the necessary tools and materials. It also offers a conceptual framework which promotes a common understanding of terms but, most importantly, widens the range of possibilities for such activities beyond the traditional AVT modes of subtitling and dubbing. This framework proposes the concept of AV skills, including AV writing (captioning) and AV speaking (revoicing). New ideas for activities are inspired by combining the particular skill with the type of response required by the learner (repeating, rephrasing, reacting) as well as the
language combination of the clip and the learner input (interlingual or intralingual). The relevance and value of this framework and the proposed activities have been borne out by the results of a survey involving 1,250 learners, who found the activities not only interesting and fun but also useful for their learning. Research interest is another indication of its impact in the scientific community, as shown in publications dedicated to ClipFlair, such as Talaván and Rodríguez-Arancón (2014), Baños and Sokoli (2015), Incalcaterra McLoughlin and Lertola (2015), and Lertola (2016). Most importantly, though, its significance is evident in its continued use in graduate and postgraduate courses at universities both within and outside the initial consortium, even though the funding period came to an end in 2014. Newly designed activities and materials are still being uploaded to the Social part of the platform and made available to all teachers and learners in the Gallery.
Funding

The ClipFlair project was funded under the Lifelong Learning Programme of the European Commission (Grant Agreement 519085-LLP-2011-ES-KA2-KA2MP).
References

Allan, Margaret. 1985. Teaching English with Video. London: Longman.
Baños, Rocío, and Stavroula Sokoli. 2015. “Learning Foreign Languages with ClipFlair: Using Captioning and Revoicing Activities to Increase Students’ Motivation and Engagement.” In 10 Years of the LLAS Elearning Symposium: Case Studies in Good Practice, ed. by Kate Borthwick, Erika Corradini and Alison Dickens, 203–213. Dublin and Voillans: Research-publishing.net.
Baltova, Iva. 1994. “The Impact of Video on the Comprehension Skills of Core French Students.” The Canadian Modern Language Review 50: 507–532. https://doi.org/10.3138/cmlr.50.3.507
Borghetti, Claudia, and Jennifer Lertola. 2014. “Interlingual Subtitling for Intercultural Language Education: A Case Study.” Language and Intercultural Communication 14 (4): 423–440. https://doi.org/10.1080/14708477.2014.934380
Bravo, Conceição. 2008. Putting the Reader in the Picture: Screen Translation and Foreign Language Learning. PhD diss. University Rovira i Virgili.
Burston, Jack. 2005. “Video Dubbing Projects in the Foreign Language Curriculum.” CALICO Journal 23 (1): 79–92. https://doi.org/10.1558/cj.v23i1.79-92
Chiu, Yi-hui. 2012. “Can Film Dubbing Projects Facilitate EFL Learners’ Acquisition of English Pronunciation?” British Journal of Educational Technology 43 (1): 24–27. https://doi.org/10.1111/j.1467-8535.2011.01252.x
Danan, Martine. 2010. “Dubbing Projects for the Language Learner: A Framework for Integrating Audiovisual Translation into Task-based Instruction.” Computer Assisted Language Learning 23 (5): 441–456. https://doi.org/10.1080/09588221.2010.522528
Ibáñez Moreno, Ana, and Anna Vermeulen. 2013. “Audio Description as a Tool to Improve Lexical and Phraseological Competence in Foreign Language Learning.” In Translation in Language Teaching and Assessment, ed. by Dina Tsagari and Georgios Floros, 45–61. Newcastle upon Tyne: Cambridge Scholars Publishing.
Incalcaterra McLoughlin, Laura. 2009. “Inter-semiotic Translation in Foreign Language Acquisition: The Case of Subtitles.” In Translation in Second Language Learning and Teaching, ed. by Theo Harden, Arnd Witte, and Alessandra Ramos de Oliveira Harden, 227–244. Bern: Peter Lang. https://doi.org/10.3726/978-3-0353-0167-0
Incalcaterra McLoughlin, Laura, and Jennifer Lertola. 2015. “Captioning and Revoicing of Clips in Foreign Language Learning-Using ClipFlair for Teaching Italian in Online Learning Environments.” In The Future of Italian Teaching, ed. by Catherine Ramsey-Portolano, 55–69. Newcastle upon Tyne: Cambridge Scholars Publishing.
King, Jane. 2002. “Using DVD Feature Films in the EFL Classroom.” Computer Assisted Language Learning 15: 509–523. https://doi.org/10.1076/call.15.5.509.13468
Klerkx, Jan. 1998. “The Place of Subtitling in a Translator Training Course.” In Translating for the Media, ed. by Yves Gambier, 259–264. Turku: University of Turku.
Lertola, Jennifer. 2012. “The Effect of Subtitling Task on Vocabulary Learning.” In Translation Research Projects 4, ed. by Anthony Pym and David Orrego-Carmona, 61–70. Tarragona: Intercultural Studies Group.
Lertola, Jennifer. 2016. “La sottotitolazione per apprendenti di italiano L2.” In L’input per l’acquisizione di L2: strutturazione, percezione, elaborazione, ed. by Ada Valentini, 153–162. Firenze: Cesati editore.
Neves, Josélia. 2004. “Language Awareness through Training in Subtitling.” In Topics in Audiovisual Translation, ed. by Pilar Orero, 127–140. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.56.14nev
Sokoli, Stavroula. 2006. “Learning via Subtitling (LvS): A Tool for the Creation of Foreign Language Learning Activities Based on Film Subtitling.” In Proceedings of the Marie Curie Euroconferences MuTra: Audiovisual Translation Scenario, Copenhagen, 1–5 May, ed. by Mary Carroll and Heidrun Gerzymisch-Arbogast, 66–73.
Sokoli, Stavroula. 2015. “ClipFlair: Foreign Language Learning through Interactive Revoicing and Captioning of Clips.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 127–147. Bern: Peter Lang.
Sokoli, Stavroula, Patrick Zabalbeascoa, and Maria Fountana. 2011. “Subtitling Activities for Foreign Language Learning: What Learners and Teachers Think.” In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 219–242. Bern: Peter Lang.
Stempleski, Susan, and Barry Tomalin. 1990. Video in Action: Recipes for Using Video in Language Teaching. London: Prentice Hall.
Talaván, Noa. 2011. “A Quasi-Experimental Research Project on Subtitling and Foreign Language Acquisition.” In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 197–217. Bern: Peter Lang.
Talaván, Noa, and Pilar Rodríguez-Arancón. 2014. “The Use of Interlingual Subtitling to Improve Listening Comprehension Skills in Advanced EFL Students.” In Subtitling and Intercultural Communication. European Languages and Beyond, ed. by Beatrice Garzelli and Michela Baldo, 273–288. Pisa: InterLinguistica, ETS.
Talaván, Noa, and José Javier Ávila-Cabrera. 2015. “First Insights into the Combination of Dubbing and Subtitling as L2 Didactic Tools.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 149–172. Bern: Peter Lang.
Talaván, Noa, Pilar Rodríguez-Arancón, and Elena Martín-Monje. 2015. “The Enhancement of Speaking Skills Practice and Assessment in an Online Environment.” In Tendencias en educación y lingüística, ed. by Lucía Pilar Cancelas y Ouviña and Susana Sánchez Rodriguez, 329–351. Cádiz: Editorial GEU.
Talaván, Noa, and Jennifer Lertola. 2016. “Active Audiodescription to Promote Speaking Skills in Online Environments.” Sintagma 28: 59–74.
Talaván, Noa, Ana Ibáñez, and Elena Bárcena. 2017. “Exploring Collaborative Reverse Subtitling for the Enhancement of Written Production Activities in English as a Second Language.” ReCALL 29 (1): 1–20. https://doi.org/10.1017/S0958344016000197
Tschirner, Erwin. 2001. “Language Acquisition in the Classroom: The Role of Digital Video.” Computer Assisted Language Learning 14 (3–4): 305–319. https://doi.org/10.1076/call.14.3.305.5796
Williams, Helen, and David Thorne. 2000. “The Value of Teletext Subtitling as a Medium for Language Learning.” System 28 (2): 217–228. https://doi.org/10.1016/S0346-251X(00)00008-7
Zabalbeascoa, Patrick, Stavroula Sokoli, and Olga Torres. 2012. “ClipFlair Conceptual Framework and Pedagogical Methodology. ClipFlair project.” Accessed March 15, 2015. http://clipflair.net/wp-content/uploads/2014/06/D2.1ConceptualFramework.pdf
Zabalbeascoa, Patrick, Sara González-Casillas, and Rebecca Pascual-Herce. 2015. “Bringing the SLL Project to Life: Engaging Spanish Teenagers in Learning while Watching Foreign Language Audiovisuals.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 105–126. Bern: Peter Lang.
Appendix
Figure 6. Learner Feedback Questionnaire Part 1
Figure 7. Learner Feedback Questionnaire Part 2
Figure 8. Learner Feedback Questionnaire Part 3
Figure 9. Learner Feedback Questionnaire Part 4
Figure 10. Learner Feedback Questionnaire Part 5
Intralingual dubbing as a tool for developing speaking skills
Alicia Sánchez-Requena
Sheffield Hallam University
Communicating verbally with others is one of the main features of human behaviour, but the time devoted in class to practising this skill is often insufficient. In an attempt to address the need to practise oral conversation and help students feel less anxious in foreign language (FL) contexts, new didactic approaches are being considered. Amongst these, the active use of techniques traditionally employed in audiovisual translation (AVT) has proved to have a positive impact on FL learning. This paper examines the relationship between intralingual dubbing (students’ replacement of the original actors’ voices in one-minute clips) and FL oral expression. The main aim is to provide objective evidence that the use of intralingual dubbing can enhance speed, intonation and pronunciation when speaking spontaneously in Spanish as an FL. A total of 47 participants aged 16–18 with a B1 level of Spanish dubbed videos for 12 weeks. Data are triangulated both qualitatively and quantitatively. Results confirm the main hypothesis and serve as evidence to support theoretical aspects of the inclusion of active AVT techniques in FL speaking classes.

Keywords: audiovisual translation, intralingual dubbing, speed, intonation and pronunciation
1.
Introduction
The ubiquity of screens in our daily lives has had and is still having a remarkable impact on educational contexts: computers, interactive boards, tablets and mobile phones open up new opportunities for a revolution in traditional teaching methods (British Council 2013). In this regard, digital software is improving in availability and quality, creating sophisticated resources that assist students when developing skills such as listening, writing, reading or speaking. This study considers that the time employed in class to practise speaking skills is often insufficient given group sizes, the length of the sessions and the priority given to written
skills in numerous courses. This is particularly relevant because oral expression tends to be an important part of subject assessment. In an attempt to address the need for students to practise oral skills in the foreign language (FL) classroom, new didactic approaches are being considered. For instance, the inclusion of screen devices in the language classroom through the non-professional practice of audiovisual translation (AVT) techniques has shown good signs of success (Talaván 2013; Baños and Sokoli 2015). This paper presents a study on the use of the technique of intralingual dubbing (understood as the replacement of the original actors’ voices with the students’ own voices while paying attention to synchrony) to help students develop their speaking skills. The focus is placed on speed, intonation and pronunciation in spontaneous conversations. The context selected was a group of Spanish language A-level1 students in England (aged 16–18). Practising this exercise on a regular basis can not only help students to develop specific oral expression traits as a result of the repetition and drama techniques involved (Yoshimura and MacWhinney 2007), but also foster a more positive attitude towards oral production tasks in FL learning.
2.
Theoretical framework
Communicating verbally with others is one of our main features as humans (Pinker 1994). Yet, inside the FL classroom, there seems to be a need for more speaking practice. This idea is reinforced by the final report of the European Survey on Language Competences (European Commission 2013), which found that while an average of 30% of European students can follow complex speech in the FL, only 1% of FL students in England can do so. According to the Joint Council for Qualifications (JCQ 2014),2 there has been an ongoing decrease in the number of students choosing languages in the past few years. Furthermore, a slight deterioration in the students’ results has also been observed. This shortcoming becomes more acute when students speak without relying on a memorised text and thus have fewer resources to answer questions in a less prepared, more spontaneous manner. Following a pilot study (Sánchez-Requena 2016), this investigation aims to promote languages in England and suggests that including intralingual dubbing exercises for oral expression will offer these students a beneficial resource. In addition, the A-level course was chosen on the basis that
pupils would have an already advanced set of acquired language skills, and therefore a greater capacity to develop spontaneous speech further in secondary school, and also because it represents the bridge between compulsory and university education. The A-level speaking exam is worth 30% of the overall mark, which reflects the importance of this skill. It lasts 21–23 minutes and is structured in two different parts. There is an element of preparation together with an element of spontaneity, which students often struggle with.

1. A levels correspond to the two years of secondary non-compulsory education prior to university.
2. The JCQ is a body that represents exam boards in the UK (http://www.jcq.org.uk).
2.1 Oral production in Spanish A-level contexts

In the general context of FL, the Common European Framework of Reference for Languages (Council of Europe 2001), one of the most relevant guidelines to teach languages in Europe, includes the following analytic descriptors of spoken language: range, accuracy, fluency, interaction and coherence. In a very general sense, range can be considered as the student’s language variety; accuracy as the precision and quality of the language spoken from a linguistic point of view; fluency has to do with speed and keeping the speech going; interaction relates to those strategies used to communicate with others; and coherence deals with the relationship between all the previous elements together in a given context. These interrelated elements are key factors in oral expression. In the specific context of A-level, the main exam boards considered for this study (Edexcel, Eduqas and AQA) include recurrent terms, such as the ability to interact, fluency, accuracy, range, pronunciation and intonation. Taking into consideration the three examination boards chosen, in Edexcel (2016, 28–29) there are statements in the marking scheme such as “interacts spontaneously”, “occasional hesitation”, “able to sustain the conversation”, “pronunciation and intonation are accurate, intelligible and authentic sound.” In Eduqas (2016, 41) there are statements such as “excellent interaction: engages very well, with spontaneity, and sustains discussion”, “consistently accurate pronunciation and intonation, which sound authentic.” In AQA, the mark scheme is even more specific (AQA 2016, 29–30):

[…] Fluency is defined as delivery at a pace, which reflects natural discourse, although not of the level associated with a native speaker. Hesitation and pauses may occur to allow for a word to be found, for a phrase to be formulated or for self-correction and/or repair strategies to be used. The use of self-correction and/or repair strategies will not be penalised. […] Pronunciation and intonation are not expected to be of a native speaker standard. Serious errors are defined as those which adversely affect communication.
Although some of the exam boards are more precise than others when describing their assessment rubrics, it is often assumed that examiners have an adequate understanding of terms like speed, pronunciation, intonation, hesitation, pauses, self-correction or spontaneity; all of these are key terms in this study and are considered essential to fluency in an FL. Fluency in this study is understood as the ability to hold a conversation in the FL at an adequate speed to promote communication, with acceptable intonation and pronunciation, the competence to self-correct, the ability to fill pauses with resources similar to those of a native speaker and with little repetition of semantic structures, so that the speech is easy to follow (adapted from Sánchez-Requena 2016). Bearing in mind the assessment criteria considered, this study emphasises the fluency of speaking skills with a focus on utterance (the product that results from speaking) and the perceived aspect (the listener’s impression) (Segalowitz 2010). In this research, particular attention has been paid to three fundamental elements: speed, intonation and pronunciation, selected due to their frequency in the above-mentioned marking schemes. Secondary elements are: ease of following the speech, ability to self-correct, vocabulary knowledge, grammar knowledge, hesitations, and pauses in complete silence (adapted from Sánchez Avedaño 2002).
2.2 Benefits and limitations of the use of intralingual dubbing

The burgeoning use of AVT techniques for FL purposes in recent years has provided information about some of the benefits and limitations considered to date. Previous projects in the field have claimed that AVT in the FL classroom enhances motivation, multiple transferable skills (due to the multimodal nature of the material), flexibility (since activities can be adapted to different contexts) and learning independence, among others (Talaván 2013; Baños and Sokoli 2015). In particular, intralingual dubbing exercises allow for the inclusion of the following elements that, as explained below, are considered positive and enriching for the student’s FL learning process (Maley and Duff 2005; Danan 2010): (1) theatre techniques, (2) extra-verbal elements, (3) native-speed speech delivery, (4) ordinary life situations and (5) colloquial expressions. Intralingual dubbing can favour the inclusion of drama techniques in the classroom without the need to perform in front of an audience, as it incorporates observation, body language, voice, and visual elements in the FL (Wakefield 2014). Furthermore, in the case of shy students, the fact that they can hide behind a screen may decrease their level of anxiety in comparison to live performances in front of the whole class, the teacher or an examiner. Body movements and lip synchronisation not only provide information about the foreign culture or its paralinguistic connotations (i.e., intonation,
rhythm), but they also help the student to focus while doing the voice recording (Chiu 2012). This also encourages students to work on their timings and speed when expressing themselves orally in the FL (Navarrete 2013). Students can self-monitor their performance and progress in a way that would not be possible with traditional role-plays, since there is a final product they can watch and listen to repeatedly. The possibility to observe and manipulate clips where ordinary life situations are presented also provides students with a more realistic resource for oral activities (Wagener 2006). In their ‘Store Model of Memory’, Atkinson and Shiffrin (1968) suggest that information only stays in the long-term memory if there is rehearsal. In our context, it could be argued that because students have to practise their dialogue on numerous occasions, this could have a positive impact on their acquisition of new vocabulary (Burston 2005). The use of AVT in the classroom also encounters some limitations (López Cirugeda and Sánchez Ruiz 2013), such as the time needed to prepare the sessions, intellectual property constraints and technological failure. One of the main concerns involving the use of this type of material in the classroom is the time needed to find the most appropriate material and the legality of sharing it, due to copyright issues. In terms of using videos in class, the World Intellectual Property Organisation (WIPO) accepts their use as long as the purpose is justified and the utilisation is fair. There is therefore acceptance in educational contexts with no commercial purposes, as is the case in this study. As far as the software is concerned, nowadays there are free programmes such as Windows Movie Maker, or specific projects like ClipFlair (2011),3 that streamline the process involved in this type of activity. Although such tools might not always be technically reliable, teachers can reduce the number of technical issues by anticipating some common problems (for example, by checking the equipment before the session, having a shared folder with the students in which to save the project, connecting more computers in case they are needed, and checking the size of the video used to prevent images from freezing), and by accepting that some computer failures cannot be controlled in advance. The present work considers that some of the previous claims (both for the advantages and disadvantages of using AVT in FL contexts), although useful and valuable, require more supporting evidence to be confirmed, hence the need for more studies in the field. Nonetheless, in the case of AVT techniques, the present study suggests that the advantages surpass the limitations and that additional teacher training in the field could reduce the number of constraints.
3. For further information on ClipFlair, visit http://clipflair.net
3.
Research objectives and questions
This study has two main objectives. Firstly, it seeks to examine the effect of using an intralingual dubbing technique to develop oral expression in spontaneous conversations among students of Spanish in different schools in the UK. Secondly, the results will lead to the design of a guide for language teachers on how to use dubbing in Spanish as a Foreign Language (SFL) classrooms to develop oral expression, thus facilitating teacher training tasks. Regarding its secondary objectives, this research intends to provide new techniques for working on oral expression inside the classroom and to have a positive impact on how students feel when they speak SFL. Ultimately, this work aims to complement and expand the existing research in the field of AVT in FL teaching by contributing a high number of participants and a focus on an FL other than English, opening a new window for those whose first language is English and who wish to learn other languages. To achieve these objectives, the following questions need to be answered:
1. Does intralingual dubbing improve oral expression in spontaneous conversations?
2. Is the effect more noticeable in speed, intonation or pronunciation?
3. Can intralingual dubbing projects be successfully implemented in a variety of schools?

The answers to these questions will be provided along with the results and discussed in the conclusions, following a description of the intralingual dubbing activities that were implemented in the SFL classroom.
4. Methodology

This study is based on empirical, primary and mixed methods research with an observational-descriptive-reflexive design (Dörnyei 2007). The present study analyses, reflects on and adapts the teaching of an intralingual dubbing technique to improve the oral expression of students of SFL and, more specifically, their speed, intonation and pronunciation in spontaneous conversations. The specific context where this project takes place is non-compulsory secondary education in the UK, with an age range between 16 and 18. A combination of quantitative and qualitative approaches is used in the data analysis, with an emphasis on the qualitative perspective. The data were collected using different tools including podcasts, questionnaires, teacher’s notes and a blog. The source of the data was also varied: the pupils, their subject teachers as observers, four external evaluators (to
impartially assess the oral speech samples), and the teacher-researcher responsible for this study.
4.1 Context and participants

This project was undertaken in 5 different secondary schools around Manchester and the data collection itself lasted 12 weeks. During this time, students had one-hour weekly sessions. The sample consisted of 47 students (6 boys and 41 girls) with a variety of backgrounds and dissimilar socioeconomic status. The schools had different requirements for taking part in the dubbing projects: for the students in two of the schools this project was compulsory, while it was optional for the other three centres. The characteristics of the students are summarised in Table 1.

Table 1. Participant information

Gender: Male 6; Female 41
Mother tongue: Only English 36; Bilingual English + another FL 11 (Italian 2, Urdu 2, Portuguese 2, Pashto 1, Yoruba 1, Polish 1, Chinese 1, Dutch 1)
Level: AS students (16–17 years old) 27; A2 students (17–18 years old) 20
Years of Spanish studies: One year 1; Three years 26; Four years 20
All the students had English as a first language but 11 were bilingual, 4 of them being bilingual in a Romance language (Italian and Portuguese). The age and number of years they had studied Spanish were similar, but there was one student who had only studied one year of Spanish before doing her A-levels. The sample
reflects the heterogeneity of British secondary schools across the board, representative of the current social panorama (Long and Bolton 2016).
4.2 Variables

To fulfil the primary objective of this study, the variables considered are divided into the independent variable (intralingual dubbing) and three dependent variables related to the oral expression elements on which this analysis focuses (speed, intonation and pronunciation). The following is a brief definition of each of them for the purposes of this study:

a. Intralingual dubbing: replacement of the actors’ original voices with the students’ own voices in clips originally in Spanish.
b. Speed: quickness and continuity of the speech.
c. Intonation: combination of frequencies and melodic variations in the speech as a result of opening and closing the vocal folds.
d. Pronunciation: acoustic result of producing phonemes as well as the auditory impression obtained from the interpretation of these acoustic waves.

Concerning specific sounds of pronunciation, the sounds selected as problematic were adapted from Herrero de Haro and Andión (2012). The vowel sounds taken into consideration were /e/, /o/, /u/ and two vowels together (i.e. /au/, /ie/). Regarding the consonants, the sounds considered to be more difficult for the students were the distinction between b/v, s/c and t/d, and the pronunciation of /h/, /p/, /g/ and /r/. The four variables considered are justified because the primary aim is to analyse the impact of intralingual dubbing on speed, intonation and pronunciation in a sample of students with different characteristics, where each student is only compared with his/her own progress. However, it is necessary to bear in mind that other factors may affect the results: whether the project is compulsory or optional; the students’ gender; their socioeconomic status; their teacher’s enthusiasm regarding the project; the students’ experience with oral exams; and whether the students were bilingual or not. Some of these aspects will be acknowledged in the analysis; however, further independent analyses of each of these elements would be particularly welcome.
4.3 Instruments

The instruments used in the data collection of this study are characteristic of qualitative research (Dörnyei 2007) and can be summarised as follows:
a. Podcasts: Students record their voice before and after the project, talking about 5 different generic and familiar topics that they studied in previous years (e.g., family, house and hobbies). Pupils are encouraged to speak for 3 minutes continuously (without pausing the recordings) for each topic, although not all students are able to speak for this long. The recordings include a range of different tenses: present, past and future/conditional.
b. Questionnaires: There are two types of questionnaires. The aim of the first questionnaire is to find out about the students’ experiences during the project. The second questionnaire is intended to reflect the teachers-observers’ thoughts on the project.
c. Teacher’s notes: The teacher’s diaries contain separate information for each school experience. They distinguish between the dynamics of the class, the clips used and the characteristics of the technical equipment employed.
d. Blog to comment on the videos: A blog is created so that the different teachers-observers from the participating schools can provide formative feedback on any aspect that they consider relevant. These comments similarly relate to the dynamics of the class, the material used, and the technical issues.

The different instruments and resources allow for the triangulation of the data gathered through podcasts, questionnaires, teacher’s notes and the blog, analysed from different perspectives: the students, the teachers-observers, four native Spanish assessors, and the teacher-researcher.
4.4 Data collection

In general terms, as can be seen in Table 2, the project is divided into different stages that include finding schools willing to take part, creating the material and designing the dubbing sessions. The data collection itself lasts 12 weeks. In weeks 1 and 12, students record podcasts and complete questionnaires. During the rest of the weeks, students dub clips. The dubbing sessions include 9 videos in total. Students follow a specific routine to work on the clips in 60-minute weekly sessions. Each video is one minute long and its content relates to topics in the course curricula. The speech consists of dialogues between two people (students work in pairs), with a neutral accent and moderate speed. In addition, the camera angle should allow the viewer to see the actors’ mouths when they speak as much as possible. Table 3 presents an overview of those sessions.
Table 2. Summary of the project

Stage 1 (4 weeks)
Selection of material: Clips: dialogues from different TV shows, short films, interviews. Subtitling: faithful transcription of the dialogues by the teacher using Subtitle Workshop. This provides extra written input for the students.
Podcasts: Related to general topics using different verb tenses (3 minutes for each topic).
Taster sessions: Technical problems are solved and final decisions about lesson planning are taken.

Stage 2 (12 weeks)
Dubbing project: Dubbing clips into Spanish. The teacher-researcher takes notes through class observation. Teachers-observers write comments on a blog.

Stage 3 (10 weeks)
Final podcast: Similar to stage 1; slightly different topics.
Questionnaire 1: Students give their opinion about the influence of the dubbing activity on their learning process.
Questionnaire 2: This time teachers-observers have to complete another questionnaire on their own assessment of the intralingual dubbing project.
Analysis of the final tests: Qualitative data: NVivo is used for the analysis. The sources are the students, the teacher-researcher, the teachers-observers and four external assessors. Quantitative data: words per minute (WPM).
Comparison of results: Initial and final test results are compared.
(1) Firstly, the teacher projects the video for the whole class to show what they are going to work on. (2) Then, working in pairs, students read the text aloud following the script on paper; questions regarding vocabulary and pronunciation are solved both with the help of the teacher-researcher and by listening to the original dialogue. (3) As a warm-up activity, students read the text aloud in pairs with the video playing in the background, for a first contact with the original speed. (4) Immediately after, each student practises his/her part of the dialogue following the actor’s performance, pausing the video according to his/her own needs. Mutual help and collaborative work are encouraged. Students receive advice on how to achieve, for example, an adequate speed, with specific examples from the script. (5) Later on, students rehearse the dialogue in pairs. For this step, while one of the students wears headphones, the other only follows the video without sound, and vice versa. (6) Then, students use the software to mute the voices of the actors and record their own. They can record as many attempts as they want within the time given. The most important aspect is that they record the whole dialogue in one go (and not in small parts). (7) Finally, they listen to their performance, comment on it and make notes for improvement in the next class.
Table 3. Dubbing session, step-by-step

1. Before dubbing – Introduction (2 min): Students watch the video in class.
2. Before dubbing – Contextualisation (10 min): Students read the dialogue script and vocabulary questions are solved as a group. The context is also discussed.
3. Dubbing – first part (5 min): Warm-up consisting of reading the text aloud in pairs, becoming familiar with the oral speech and the synchronisation needed for the video.
4. Dubbing – second part (12 min): Individually, each student rehearses his/her part of the dialogue with the help of the video, paying attention to the actors’ voices they are going to replace.
5. Dubbing – third part (15 min): Rehearsal in pairs several times. Students swap headphones so that one of them has the audio and visual input and the other just the visual input. At the end, they only have the video with no sound.
6. Dubbing – fourth part (10 min): Students mute the voice of the original video and record their voice instead. They do several takes until they are satisfied with the results.
7. After dubbing (6 min): Students listen to their work and exchange opinions in pairs.
At the same time, the 10 dubbing sessions are organised in three phases. The first three videos have a focus on speed, the next two videos focus on intonation and there are three videos that place emphasis on specific sounds. The final video allows for the implementation of all the previous knowledge to work on speed, intonation and pronunciation. The time used for each one of the steps mentioned in Table 3 is adjusted depending on the session. For example, some of the videos include more unfamiliar vocabulary than others or students ask for more rehearsal time in certain videos. Videos are approximately one minute long and they are part of short films, TV series or programmes. They were selected because they contain topics related to the students’ academic course content. The speed was considered adequate for the purposes of the project and the accent was neutral Castilian Spanish (similar to what is taught and evaluated in A-level courses).
5.
Results
The results of the present study include both a qualitative and quantitative analysis, with a greater emphasis on the former. This section presents the results obtained in each one of the instruments used to collect the data.
5.1 Podcasts

Podcasts contain students’ non-prepared oral speech before and after engaging in the intralingual dubbing project. A total of 6 recordings per student were analysed (3 pre-recordings and 3 post-recordings), both from a quantitative and a qualitative perspective.

Turning to the quantitative analysis of the podcasts, one of the main elements assessed was words per minute (WPM), which were counted manually.4 Firstly, the speech was transcribed. Secondly, only complete words in Spanish were counted from the first minute of each recording. The reason for not using a computer for the transcription or analysis is the need for human intervention to distinguish words in the FL, unfinished words and self-corrections (SC), that is, instances when students correct themselves in the speech and repeat words as a consequence. The post-recordings show that students increase their speed by an average of 17 WPM. The student who improves the most does so by 50.6 WPM, and 52 WPM after SC. There is one student who does not improve and produces fewer words (participant 23). The data do not provide obvious reasons for a solid explanation; it could simply be due to the personal circumstances of the student on the particular day of the recording. There are 11 students who improve by more than 25 WPM and 12 students who improve by fewer than 10 WPM. If we look at those participants, there is no indicator capable of explaining objectively why some students improve more than others. Finally, there is no evidence or pattern pointing to a difference between bilingual and non-bilingual students.

4. See Appendix 1 (Table 11) for more details on WPM per student.
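The counting was done by hand precisely because deciding what counts as a complete Spanish word, an unfinished word or a self-correction requires a human ear; still, the underlying arithmetic can be illustrated with a short sketch. The timestamped word lists and the marking convention below (a trailing hyphen for unfinished words) are hypothetical conveniences for the example, not the authors’ actual procedure or tooling.

def words_per_minute(transcribed_words: list[tuple[str, float]],
                     window_seconds: float = 60.0) -> int:
    """Count complete words uttered within the first minute of a recording.

    transcribed_words: (word, timestamp-in-seconds) pairs taken from a manual
    transcription; unfinished words are assumed to be marked with a trailing '-'
    and non-target-language insertions with a leading '*', and both are excluded.
    """
    count = 0
    for word, t in transcribed_words:
        if t >= window_seconds:
            break  # only the first minute of the recording is counted
        if word.endswith("-") or word.startswith("*"):
            continue  # unfinished word or word in another language
        count += 1
    return count

# Invented pre/post word lists for one participant (timestamps in seconds).
pre = [("me", 0.5), ("gusta", 0.9), ("ju-", 1.4), ("jugar", 2.0), ("al", 2.4), ("tenis", 2.8)]
post = [("me", 0.4), ("gusta", 0.7), ("mucho", 1.0), ("jugar", 1.4), ("al", 1.7), ("tenis", 2.0)]
print(words_per_minute(pre), words_per_minute(post))  # prints 5 6: the unfinished "ju-" is excluded

Comparing the two counts per participant is the kind of pre/post difference reported above, although the data here are, of course, invented.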
Intralingual dubbing as a tool for developing speaking skills
followed by easy to follow speech (3.23), intonation (3.15) and speed (3.13). Vocabulary acquisition is really close as well (3.12), while grammar (2.87) and ability to self-correct (2.67) obtain the lowest mark. Regarding pauses and wavering when speaking, the information is presented in Figure 2. Students tended to doubt more (wavering) rather than use complete silences (pauses) in their speech, both before and after the project. The scale provided to evaluators in the table included (1) Hardly any; (2) Some; (3) Quite a few; (4) Too many. Students reduced both pauses in complete silence (0.78) and wavering (0.79) on a similar level.
Figure 1. Evaluators’ feedback on oral expression part I
Figure 2. Evaluators’ feedback on oral expression part II
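Returning briefly to the quantitative WPM measure described at the beginning of this subsection: it amounts to simple counting and averaging. The sketch below is purely illustrative — the study counted words by hand — and the [SC] marker for self-corrections and the trailing hyphen for unfinished words are hypothetical transcription conventions introduced here only for the example.

    from statistics import mean
    import re

    def count_words(first_minute: str, include_sc: bool = True) -> int:
        """Count complete Spanish words in the first minute of a transcript.

        Assumed (hypothetical) conventions: self-corrected repetitions are
        wrapped in [SC ...] and unfinished words carry a trailing hyphen.
        """
        text = first_minute
        if not include_sc:
            text = re.sub(r"\[SC[^\]]*\]", " ", text)   # drop self-corrected repetitions
        text = re.sub(r"\[SC\b|\]", " ", text)           # strip any remaining markers
        tokens = [t for t in text.split() if not t.endswith("-")]
        return sum(1 for t in tokens if re.search(r"[a-záéíóúüñ]", t, re.IGNORECASE))

    def wpm_gain(pre: list[str], post: list[str], include_sc: bool = True) -> float:
        """Average WPM difference between the three pre- and three post-recordings."""
        return (mean(count_words(t, include_sc) for t in post)
                - mean(count_words(t, include_sc) for t in pre))

    # e.g. participant 1 in Table 11: pre-recordings average 44 WPM, post-recordings 53 WPM,
    # so the gain is 9 WPM (0.5 WPM once self-corrections are excluded).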
Concerning pronunciation, the sessions included specific explanations of how to pronounce particular sounds. The first aim was to identify the sounds with which students made the most mistakes in the pre-podcasts; their point of departure can be seen in Figures 3 and 4. Before the dubbing tasks, students made more mistakes with the vowels e and o, since e was sometimes pronounced as i and o was pronounced as ou. They made fewer mistakes with u,
which they tended to pronounce as iu, perhaps because there were fewer words in their speech that featured u in comparison to e and o. Similar reasons could explain the groups with two vowels.
Figure 3. Incorrect vowels pronounced by the students; pre-project I (vowels)
Figure 4. Incorrect consonants pronounced by the students; pre-project II (consonants)
Students made more mistakes with consonants to start with, indicating that they generally find consonants harder to pronounce than vowels. The most frequent errors involved the distinction between b/v and t/d, perhaps because teachers had not paid much attention to these pairs, since the emphasis is normally placed on more obvious sounds such as h. Rolling the r and the distinction between s/c also showed a high number of errors. At the other end of the scale, the aspiration of p was the sound with the smallest percentage of mistakes. Figure 5 shows the students' improvement in relation to these sounds. In general, the sound that students improved the most after the dubbing tasks had been implemented was h, followed by p and g. As Figure 4 shows, those three sounds were the ones in which students made the fewest errors pre-project.
Therefore, it could be said from these results that these three sounds seem to be easier for students to correct once they are explicitly pointed out. After the dubbing tasks, the consonant sounds that still proved hardest to pronounce were the rolled r, followed by the distinction between t/d. Among the vowels, students best corrected the pronunciation of vowel groups, followed by e, o and u, although no specific reason was found to explain the difference in improvement across the vowels.
Figure 5. Sounds improved post-project from mistakes made in Figure 3 and 4
Analysing the previous elements only through a qualitative rubric is justified here by the fact that oral expression at A-level is likewise assessed through qualitative rubrics. Given that there were four external evaluators, and that the data from the different sources are complementary rather than contradictory, enough information has been gathered to lend solidity to these results.
5.2 Questionnaires

There is a total of two questionnaires, containing both closed and open questions. The closed questions are presented in this subsection in the form of diagrams, while the open questions of each questionnaire were analysed using NVivo, software that supports qualitative and mixed-methods research. Both closed and open answers are reported for each questionnaire in the following paragraphs.

In questionnaire 1, students gave their opinions on the intralingual dubbing project; its purpose was to find out what the students thought about the project. The questionnaire is divided into four parts: (1) how students thought the intralingual dubbing project influenced their general communication skills; (2) the impact of the project on specific learning areas that affect oral expression; (3) their opinion of the materials used in general; and (4) their observations or free comments on the project in general. The values were given on a scale from 1 to 4: (1) I totally agree/a lot, it has been a very good way to practise/learn/improve my Spanish skills; (2) I am satisfied with what I have practised/learnt; (3) A bit, but not enough; (4) I totally disagree/very little or nothing.

Table 4 gathers the results for the first part of the questionnaire. Regarding the four traditional language skills, students believed that the skill they improved the most was oral expression, which fulfils the aim of the project. Nonetheless, it is particularly relevant that intralingual dubbing helped them to develop all four skills. Regarding learning areas, the information is reflected in Table 5.

Table 4. Students' opinions for each of the skills
(Values: 1 strongly agree; … 4 strongly disagree)

                          1            2            3            4
Listening comprehension   17% (8)      55.3% (26)   25.5% (12)   2.1% (1)
Reading comprehension     29.8% (14)   42.6% (20)   23.4% (11)   4.3% (2)
Oral production           38.3% (18)   34% (16)     23.4% (11)   4.3% (2)
Written production        25.5% (12)   34% (16)     29.8% (14)   10.6% (5)
These results can be analysed from different points of view. 80.8% of the students seem happy with their progress in terms of speed, intonation and pronunciation, areas in which they believe they did improve: speed received the highest proportion of 'strongly agree' answers (55.3%), followed by pronunciation (46.8%) and intonation (27.7%). However, if the two positive values 1 (strongly agree) and 2 (agree) in Table 5 are added together, the order of the three elements of fluency varies: pronunciation comes first (83%), then intonation (74.5%) and finally speed (74.4%). Regarding learning areas such as vocabulary and grammar (addressed only indirectly in the project), adding the two positive values gives a much higher percentage for vocabulary (83%) than for grammar (57.4%). It should be noted that the importance of using a variety of vocabulary when performing the dubbing tasks in the FL was explicitly mentioned in class. Another question was whether the students found the project interesting and motivating. Here, 72.3% of the students answered positively, while, at the other end of the scale, 5 students felt that it was neither motivating nor interesting. Possible reasons for these answers are the student's level (if it was too low, they might have found the tasks difficult), the clips chosen, the fact that the project was compulsory for them, that it took place during lunchtime, or that the sessions lasted 60 minutes and at times some tasks felt rushed.
Table 5. Students' opinions for each of the learning areas
(Values: 1 strongly agree; … 4 strongly disagree)

                                                         1            2            3            4
My ability to speak in Spanish has improved              31.9% (15)   48.9% (23)   14.9% (7)    4.3% (2)
My speed has improved                                    55.3% (26)   19.1% (9)    21.3% (10)   4.3% (2)
My intonation has improved                               27.7% (13)   46.8% (22)   19.1% (9)    6.4% (3)
My pronunciation has improved                            46.8% (22)   36.2% (17)   10.6% (5)    6.4% (3)
Aside from my improvement, I am more aware of the
natural speed, intonation and pronunciation in Spanish   44.7% (21)   40.4% (19)   14.9% (7)
My vocabulary has increased                              38.3% (18)   44.7% (21)   6.4% (3)     10.6% (5)
My grammar has improved                                  17% (8)      40.4% (19)   14.9% (7)
Dubbing has been motivating and interesting for me       40.4% (19)   31.9% (15)   17% (8)      10.6% (5)
I am interested in dubbing in the future to improve
my Spanish                                               21.3% (10)   25.5% (12)   38.3% (18)   14.9% (7)
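For transparency, the 'positive' percentages quoted in the discussion of Table 5 above follow directly from the 'strongly agree' and 'agree' counts over the 47 respondents. The minimal sketch below only restates that arithmetic (the counts are transcribed from Table 5; the code itself is illustrative and not part of the study's procedure):

    # "Strongly agree" and "agree" counts transcribed from Table 5 (n = 47 students).
    N = 47
    top_two_counts = {
        "speed":         (26, 9),
        "intonation":    (13, 22),
        "pronunciation": (22, 17),
        "vocabulary":    (18, 21),
        "grammar":       (8, 19),
    }

    for area, (strongly_agree, agree) in top_two_counts.items():
        pct_sa = 100 * strongly_agree / N
        pct_top2 = 100 * (strongly_agree + agree) / N
        print(f"{area:13s}  strongly agree {pct_sa:4.1f}%   positive (1+2) {pct_top2:4.1f}%")

    # pronunciation and vocabulary -> 83.0% positive; intonation and speed -> 74.5%
    # (the text's 74.4% for speed reflects rounding); grammar -> 57.4%.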
Table 6. Students' opinions on the project

Positive aspects:
– I improved my oral expression: speed, intonation and pronunciation.
– I learned new vocabulary and expressions, particularly useful for the exam.
– I increased my confidence.
– I enjoyed the pair work and the class project.
– I became more aware of how native speakers sound in Spanish, as well as of some cultural aspects.
– I was more aware of my own learning process.
– I enjoyed listening to the videos and watching them at home.
– I liked the variety of contexts and clips.

Negative aspects:
– I think it should not be done at lunchtime.
– I believe the speed of the videos was a bit too fast at times.
– I did not have enough time to listen to what others had produced in class.
– I would have liked more time for each session, since it was a bit rushed at times.
– I enjoyed speaking in Spanish from a script I could read, but not the part where I was assessed spontaneously.
It is particularly significant that approximately 38% of the students 'disagreed' with the statement "I am interested in dubbing again to improve my Spanish". It will be interesting to find more detailed reasons for this, since the great majority found the project motivating and interesting but not all of them would dub again.

The third part of this questionnaire was an open question in which the students could comment on any aspect of the project. A summary of the main opinions is presented in Table 6. In terms of the frequency of the ideas mentioned, among the positive aspects students referred particularly to improved awareness of the three elements of oral expression targeted, as well as to vocabulary acquisition. On the negative side, the most common comment was that the speed of the videos was a bit too difficult. This discouraged some of the students at times, but motivated and challenged others; it is therefore worth considering slower dialogues for the first videos of the project until students familiarise themselves with higher speeds. Another aspect to consider for the future is extending the length of each session, since students would benefit from expanding on the information given in the videos.

Turning now to questionnaire 2, this gathered the teachers-observers' opinions on the intralingual dubbing project. Its structure and scale are the same as in questionnaire 1. In the first part, the teachers-observers indicated which of the four communication skills they considered the students had improved; a summary is included in Table 7.

Table 7. Teachers'-observers' opinions per skill
(Values: 1 strongly agree; … 4 strongly disagree)

                          1         2         3         4
Listening comprehension   60% (3)   20% (1)   20% (1)
Reading comprehension     40% (2)   60% (3)
Oral production           80% (4)   20% (1)
Written production                  80% (4)             20% (1)
All of the teachers-observers (100%) responded positively to the statement that their students had improved their oral expression in SFL. In relation to the other skills, according to the teachers-observers, students improved in the following order: listening comprehension, reading comprehension and written production. In the second part of the questionnaire, teachers were asked about specific learning areas, as shown in Table 8. All of the teachers-observers agreed that pronunciation and intonation had improved more than speed, which could be related to the students' feedback that some of the videos seemed very fast.
Regarding vocabulary and grammar, 80% of the teachers were satisfied with their students' progress in both areas. However, like their students, they thought that the intralingual dubbing project had a greater impact on vocabulary than on grammar. Concerning motivation and self-confidence, 60% of the participants strongly agreed and 40% agreed with the corresponding statements. Finally, it is particularly positive that all the teachers would be interested in carrying out dubbing projects again. The following section of the questionnaire addressed the strong and weak points of the project, as Table 9 shows.

Table 8. Teachers'-observers' opinions for each of the learning areas
(Values: 1 strongly agree; … 4 strongly disagree)

                         1          2          3         4
Speed                    60% (3)    20% (1)    20% (1)
Intonation               80% (4)    20% (1)
Pronunciation            80% (4)    20% (1)
Vocabulary               20% (1)    80% (4)
Grammar revision                    80% (4)              20% (1)
Motivation               60% (3)    40% (2)
Self-confidence          60% (3)    40% (2)
Consider dubbing again   100% (5)
Table 9. Teachers'-observers' opinions on the project

Positive aspects:
– Students enhanced their speed, intonation and pronunciation.
– Students remained very focused during the activities, especially when recording and listening to their voices.
– Students were engaged, interested and recognised the value of the project.
– Students increased their confidence when speaking in SFL.

Suggestions for improvement:
– Students could find the vocabulary by themselves using a dictionary.
– The videos could be more closely related to the exam topics.
– Choose slower videos in the future.
– It would have been nice to have more time per session to stretch pupils further by making them speak spontaneously about the topic of each video.
The teachers’ comments complemented previous data, since they acknowledged that their students improved the different aspects considered essential in
the project. Their suggestions were mainly related to the material chosen and to the time dedicated to each session. More time would provide a chance to work on vocabulary, to let students listen to their recordings after the sessions, and to give more individual feedback from the teacher. Furthermore, the project would benefit from changing some of the videos and finding new, slower clips for the earlier stages of the project, as already mentioned.
5.3 Teacher-researcher's notes

The teacher-researcher's notes contain the weekly impressions of implementing the intralingual dubbing project in each of the schools, as shown in Table 10. They are divided into three sections: the dynamics of the class, the clips, and the characteristics of the technical equipment. The six groups involved (belonging to five different schools) had different experiences, but all in all the results were very satisfactory.
5.4 Blog

The blog (available at https://goo.gl/Zaah2P) was created so that the teacher-researcher could offer formative feedback on how the sessions were going from an outsider's point of view. As with the notes, the focus was on the dynamics of the class, the clips used, and the characteristics of the technical equipment. The project was more successful with students who had a higher level of fluency, while students with a lower level found some of the videos quite challenging. In terms of engagement, students who did the project voluntarily were more engaged, although by the end of the project most students had increased their level of commitment. As for the clips, some of the videos were challenging in terms of speed, especially for weaker students, and some participants would have liked the videos to be more closely related to the exam topics (although this point had already been taken into consideration when selecting the material). Moving on to the technical equipment, computers generally worked fairly well; the main issues were related to the size of the first videos (some of the images froze) and to students not checking the volume or sound of their headphones before recording. All in all, these notes provide useful information for teachers wishing to use intralingual dubbing projects with their students in the future. To conclude, the evidence shows that the results from the different tools and sources complement each other and point in the same direction.
Table 10. Teacher-researcher's notes

Group 1
– Class dynamics: In general, the students were not as engaged. Their level was the lowest in comparison to the other participants involved. I observed them a couple of times in their normal Spanish classes and they also lacked enthusiasm there. The fact that they were the first group I tried the session with (Mondays) did not help either.
– Clips: I just felt a bit frustrated on many occasions because the level of the videos seemed too much for some of them. However, there were some good moments. Some videos worked really well for them.
– Equipment: The equipment was absolutely fine. New computers, new headphones. Very lucky in that respect.

Group 2
– Class dynamics: In general, this group was fantastic. They were really engaged during all the sessions. Some of them spoke other languages at home and maybe this helped them to find the project more accessible. A couple of students had a lower level but the atmosphere in the classroom helped them not to be discouraged.
– Clips: The clips were fine for most of them. Quite challenging at times, but doable. If there were more difficult parts for a couple of the students, they were helped with different tips and finally performed them. The hardest video was 8.
– Equipment: The computer was not working so well in the first couple of sessions, but we changed rooms and everything worked smoothly since then.

Group 3
– Class dynamics: This group worked well since the beginning. Their level was not particularly high but they were very keen and were willing to ask for help whenever they needed to work on specific sentences or paragraphs. They also worked quite independently from the beginning and they always submitted work on time.
– Clips: Some of the clips were quite challenging, since they were an AS group. However, they did not complain and worked hard on them. They kept asking if they could watch the whole video at home.
– Equipment: The equipment in this school was really good. Only a couple of microphones did not work at times but these were minor issues.

Group 4
– Class dynamics: This group worked really well throughout the project. In general, the girls were very busy with other activities at school but if they missed a session they caught up quickly. It was challenging to manage such a big group in such a limited time; however, the students worked very independently and the help of their teacher was also essential.
– Clips: In general, the clips worked fine. Some students whose level was a bit lower struggled at times but once again the group atmosphere helped them to overcome obstacles and improve week by week. Definitely, video 8 was too difficult.
– Equipment: They did not have a language lab, but old laptops. The first clip froze (because of the size). Something important to note!

Group 5
– Class dynamics: This is the only school where I was able to do the activity during lesson time. This put me under more pressure because the teacher really wanted each video to be related to their topics in class. Students were not so keen at the beginning but really got into the project after a few sessions and I think we all enjoyed it.
– Clips: Some of the clips were challenging but they managed all in all. The fact that I was swapping pairs on a regular basis in the final sessions really helped them be more engaged and dynamic. I think we were all pleased with that.
– Equipment: Equipment had ups and downs in the early sessions. It was not always reliable and we needed an IT technician to sort out sound problems.

Group 6
– Class dynamics: This group was very disorganised at the beginning because not all students attended regularly. It was a mixed AS/A2 group but they all seem to have learned important aspects to apply in their oral exam from the project.
– Clips: A2 students were fine with the clips but AS students struggled at times. The help of the other students, and the support from their teacher and myself were essential.
– Equipment: I am very pleased with this equipment. Top quality!
6. Conclusions
The oral expression of the students who took part in the project improved thanks to the intralingual dubbing tasks. The three elements analysed (speed, intonation and pronunciation) were enhanced, both from the speakers' point of view and from the observers' point of view. Out of the three elements considered, students seemed to have gained most awareness of how to improve pronunciation; the external evaluators perceived the greatest improvement in speed and intonation, and the teachers-observers in intonation and pronunciation. The results therefore do not show a clear improvement in one of these three elements over the other two; rather, all components improved concurrently. Other aspects also played a fundamental role in the oral expression of FL learners, such as how easy it was to follow the speech, the students' ability to self-correct, pauses in complete silence, wavering, and vocabulary and grammar knowledge, together with more abstract elements related to how the students felt, such as motivation and self-confidence. Answers from the questionnaires and analyses of the sample showed an improvement in all the aspects mentioned when speaking SFL. These results complement those obtained in the pilot study of this research (Sánchez-Requena 2016), giving more weight to intralingual dubbing projects for A-level students. If we combine both projects, a total of 64 A-level students and 6
schools were exposed to these activities. The variety of schools and student backgrounds suggests that intralingual dubbing projects can be used in a range of Spanish A-level contexts. However, the project appears to be more beneficial when students are in the second year of A-level studies, because they already have some experience of oral exams and see more clearly the purpose of the proposed activities. It is therefore advisable to carry out these projects some months after the commencement of the A-level course. Considering all the previous information, this research shows that intralingual dubbing exercises are a convenient approach for the digital age that can complement traditional classes. Useful feedback has been gathered towards creating a routine or study guide for these activities. One of the most important points to remember, which also corroborates the previous research, is that even when the focus is on one skill, different learning areas improve indirectly. The results of this study also encourage further research into aspects such as differences between monolingual students and students who speak more than one language fluently, the impact of dubbing projects on vocabulary acquisition, and the impact of dubbing exercises from a cognitive point of view.
References

Assessment and Qualifications Alliance (AQA). 2016. Accessed May 14, 2016. https://goo.gl/LR0CYn
Atkinson, Richard, and Richard Shiffrin. 1968. "Human Memory: A Proposed System and Its Control Processes." In The Psychology of Learning and Motivation, ed. by Kenneth W. Spence, and Janet T. Spence, 89–195. New York: Academic Press.
Baños, Rocío, and Stavroula Sokoli. 2015. "Learning Foreign Languages with ClipFlair: Using Captioning and Revoicing Activities to Increase Students' Motivation and Engagement." In 10 Years of the LLAS Elearning Symposium: Case Studies in Good Practice, ed. by Kate Borthwick, Erika Corradini, and Alison Dickens, 203–213. Dublin and Voillans: Research-publishing.net.
British Council. 2013. Innovations in Learning Technologies for English Language Learning, ed. by Gary Motteram. Accessed May 23, 2016. https://goo.gl/1kI7er
Burston, Jack. 2005. "Video Dubbing Projects in the Foreign Language Curriculum." CALICO Journal 23 (1): 72–79.
Chiu, Yi-hui. 2012. "Can Film-Dubbing Projects Facilitate EFL Learners' Acquisition of English Pronunciation?" British Journal of Educational Technology 43 (1): 24–27. https://doi.org/10.1111/j.1467-8535.2011.01252.x
ClipFlair. 2011. Foreign Language Learning through Interactive Revoicing & Captioning of Clips. Accessed May 18, 2016. http://www.clipflair.net
Council of Europe. 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.
Danan, Martine. 2010. "Dubbing Projects for the Language Learner: A Framework for Integrating Audiovisual Translation into Task-Based Instruction." Computer Assisted Language Learning 23 (5): 441–456. https://doi.org/10.1080/09588221.2010.522528
Dörnyei, Zoltán. 2007. Research Methods in Applied Linguistics: Quantitative, Qualitative and Mixed Methodologies. Oxford: Oxford University Press.
Edexcel. 2016. Accessed May 14, 2016. https://goo.gl/jVULHv
Eduqas. 2016. Accessed May 14, 2016. https://goo.gl/RX8xQs
European Commission. 2013. First European Survey on Language Competences. Accessed May 17, 2016. https://goo.gl/wiMGv5
Herrero de Haro, Alfredo, and Miguel A. Andión. 2012. "La enseñanza de la pronunciación del castellano a aprendices irlandeses. Contrastes dialectales de interés." Porta Linguarum 18: 191–212.
Joint Council for Qualifications (JCQ). 2014. Accessed May 15, 2016. http://www.jcq.org.uk
Long, Robert, and Paul Bolton. 2016. "Language Teaching in Schools (England)." House of Commons Library Briefing. Accessed May 17, 2016. https://goo.gl/Oxg6hY
López Cirugeda, Isabel, and Raquel Sánchez Ruiz. 2013. "Subtitling as a Didactic Tool: A Teacher Training Experience." Porta Linguarum 20: 45–62.
Maley, Alan, and Alan Duff. 2005. Drama Techniques. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511733079
Navarrete, Marga. 2013. "El doblaje como herramienta en el aula de español y desde el entorno ClipFlair." MarcoELE 16: 75–87.
Pinker, Steven. 1994. The Language Instinct. New York: Harper Perennial Modern Classics. https://doi.org/10.1037/e412952005-009
Sánchez Avedaño, Carlos. 2002. "La percepción del español como segunda lengua." Filologías y Lingüística 28 (1): 137–163.
Sánchez-Requena, Alicia. 2016. "Audiovisual Translation in Teaching Foreign Languages: Contributions of Dubbing to Develop Fluency and Pronunciation in Spontaneous Conversation." Porta Linguarum 26: 9–21.
Segalowitz, Norman. 2010. Cognitive Bases of Second Language Fluency. New York: Routledge. https://doi.org/10.4324/9780203851357
Talaván, Noa. 2013. La subtitulación en el aprendizaje de lenguas extranjeras. Barcelona: Octaedro.
Wagener, Debbie. 2006. "Promoting Independent Learning Skills Using Video on Digital Laboratories." Computer Assisted Language Learning 19: 279–286. https://doi.org/10.1080/09588220601043180
Wakefield, Jerome C. 2014. "Dubbing as a Method for Language Practice and Learning." In Language Arts in Asia II: English and Chinese through Literature, Drama and Popular Culture, ed. by Christina Decoursey, 160–166. Newcastle upon Tyne, UK: Cambridge Scholars Publishing.
World Intellectual Property Organisation (WIPO). Accessed May 20, 2016. http://www.wipo.int/portal/en/index.html
Yoshimura, Yuki, and Brian MacWhinney. 2007. "The Effect of Oral Repetition on L2 Speech Fluency: An Experimental Tool and Language Tutor." SLaTE 1 (2): 25–28.
Appendix 1

Table 11. WPM produced by participant
(Average WPM of the pre- and post-recordings, the difference between them, and the same figures once self-corrections (SC) are excluded)

Participant      PRE     POST    Diff.    PRE (no SC)   POST (no SC)   Diff. (no SC)
Participant 1    44      53      9        43            43.5           0.5
Participant 2    39.7    60.7    21       37.7          58.7           21
Participant 3    36.3    53.7    17.4     35.7          53.7           18
Participant 4    35      42      7        34            39             5
Participant 5    26.3    37.6    11.3     24.6          33.6           9
Participant 6    69.3    73.3    4        63            64.7           1.7
Participant 7    38      52.7    14.7     35            50.3           15.3
Participant 8    53.3    95.3    42       50.3          88.3           38
Participant 9    59.3    66.3    7        58            65.3           7.3
Participant 10   46      81.7    35.7     45            80.3           35.3
Participant 11   42.7    55.3    12.6     37.3          50.7           13.4
Participant 12   54.7    105.3   50.6     50.7          102.7          52
Participant 13   35      61.7    26.7     34.3          59.7           25.4
Participant 14   37      66      29       29.3          63.3           34
Participant 15   83.7    85.3    1.6      76            82.3           6.3
Participant 16   58.3    85.3    27       49.7          81.3           31.6
Participant 17   19.3    39.3    20       17            39             22
Participant 18   25.3    40.7    15.4     24.7          40             15.3
Participant 19   35.7    48.7    13       35            44.3           9.3
Participant 20   32.7    39.3    6.6      25.3          35.3           10
Participant 21   50.7    79.7    29       47.7          76.3           28.6
Participant 22   36      49      13       32            48             16
Participant 23   64      55      −9       44            43             −1
Participant 24   32.6    47.6    15       29            49.6           20.6
Participant 25   30      55.7    25.7     27.7          54.3           26.6
Participant 26   28.7    33.3    4.6      27.7          32.3           4.6
Participant 27   61      82.3    21.3     58            78.6           20.6
Participant 28   42      53      11       45            52             7
Participant 29   32      56.6    24.6     27.3          55             27.7
Participant 30   30.3    39      8.7      29.6          35.7           6.1
Participant 31   57.7    69.3    11.6     54.7          66.3           11.6
Participant 32   40      56.3    16.3     39            53.3           14.3
Participant 33   40      52      12       39.3          50             10.7
Participant 34   52.7    58.3    5.6      49.3          55.7           6.4
Participant 35   40.7    61.7    21       33.7          60             26.3
Participant 36   67      81.7    14.7     60.7          77.7           17
Participant 37   68.3    82      13.7     65.7          80.3           14.6
Participant 38   62.7    69.7    7        55.7          66.7           11
Participant 39   43      64.3    21.3     37.7          62.3           24.6
Participant 40   43.3    60.3    17       39.7          55.3           15.6
Participant 41   32      43.6    11.6     27.3          43.3           16
Participant 42   50.6    60.6    10       42            55.6           13.6
Participant 43   28      40.7    12.7     25.7          38.7           13
Participant 44   51.3    71      19.7     50            70.7           20.7
Participant 45   38.6    71.3    32.7     36            67.3           31.3
Participant 46   31.3    61.3    30       26            54             28
Participant 47   46.3    51.3    5        46            50.3           4.3
AVERAGE                          16.54                                 17.15
The use of audio description in foreign language education
A preliminary approach

Marga Navarrete
Universidad Nacional de Educación a Distancia (UNED) and University College London
Audio description (AD) is a type of audiovisual translation (AVT) used for making video content accessible to the blind and visually impaired. Over the last decade, the pedagogic potential of AVT in foreign language learning (FLL) has gained increasing recognition by experts. However, AD as a didactic tool in FLL is an innovative area that has received very little attention so far, despite its significant potential for language learners. In addition, many experts in Applied Linguistics have shown a growing interest in the study of fluency, pronunciation and intonation. With these ideas in mind the author of the present article has carried out a small scale preliminary experiment with university students of Spanish as a Foreign Language. This article presents the methodological framework of the experiment which includes the instruments for data gathering. Although only six students completed the experiment, their responses were positive and encouraging as they found active AD tasks useful for language learning. It is hoped that the lessons learnt will inform the methodological framework for larger scale studies. Keywords: audio description, audiovisual translation, oral skills, podcasts, Spanish as a foreign language
1. Introduction
A number of researchers have studied the different modalities of AVT to examine their impact in FLL as didactic tools for the enhancement of a range of learners' skills. AD tasks appear to have great potential for language learning, although at the moment only a very limited number of studies substantiate this claim. In order to contribute to the community of experts interested in this area of research,
this article will present a pilot project developed to study the enhancement of oral skills and participants' perceptions of the impact of AD on language learning. The design of the experiment covers the methodological framework, the context of the study, the participants and the procedures. This will be followed by a description of the instruments used for data gathering and the rationale behind their selection. The analysis of the results, including observations of events that occurred over the course of the experiment, will be discussed by triangulating the data in order to obtain rigorous findings. Finally, relevant conclusions will be highlighted, looking at ways to improve larger scale experiments for further research in this area.
2. Theoretical framework
Over the last two decades, researchers have shown a growing interest in the impact of AVT on language skills. The main modes studied so far have been subtitling and dubbing with a particular emphasis on the former. Initial studies focussed on how non-active subtitling (i.e., the use of subtitles as a support) helped learners’ oral production (Borrás and Lafayette 1994), understanding of oral content (Guillory 1998; Danan 2004; Caimi 2006; Bravo 2008) and vocabulary recall (Bird and Williams 2002; Talaván 2007). Current research has shifted focus to study the benefits of active subtitling. Talaván (2006a) can be said to be the pioneer in this area as she presented the potential benefits of active subtitling to enhance language learning in general terms. Incalcaterra McLoughlin (2009) examined the development of pragmatic awareness and linguistic retention through active subtitling via a series of trials with her students of Italian. Her work was accompanied by a number of relevant studies that focussed on specific skills: listening comprehension (Talaván 2010; Talaván 2011; Talaván and Rodríguez-Arancón 2014), lexical acquisition (Lertola 2012; Montero Pérez et al. 2014), writing skills (Talaván and Rodríguez-Arancón 2014b) and cultural and intercultural awareness (Borghetti 2011; Borghetti and Lertola 2014). The potential of active subtitling in learning for specific purposes (LSPs) was also examined by Talaván (2006b). Incalcaterra McLoughlin and Lertola (2014) explored the integration of subtitling in the FLL curriculum. They established a methodological framework and reported on students’ feedback on their 24-week subtitling module taught annually at the National University of Ireland, Galway. Talaván (2013) and Lertola (2015) also discussed the theoretical framework of active subtitling and put forward practical applications for language learning. In
addition to these studies, López Cirugeda and Sánchez Ruiz (2013) explored the area of teacher training, using teachers-to-be as participants for their study. The second AVT modality studied so far has been dubbing: the activity by which original dialogues are replaced by a translation into another language. Up until now researchers and practitioners have devoted their attention to oral productive skills; with the exception of Burston (2005), who provided an exhaustive analysis of the use of dubbing in the language classroom to enhance the four main language skills, and also foster the acquisition of advanced grammar and vocabulary. Kumai (1996) examined the improvement in phonetic competence, intonation, and speech speed through dubbing tasks where learners felt motivated to reproduce utterances. Wagener (2006) explored the area of independent learning with her students. They had to carry out different tasks involving the editing of clips, such as trying to mimic the original dialogues and testing the potential of consecutive interpretation for vocabulary acquisition and oral skills. Her work was followed by relevant studies in fluency, pronunciation and intonation. Danan (2010) carried out a series of dubbing experiments in which participants had to translate from L1 into L2. She reported not only an enhancement of oral skills, but also, of vocabulary acquisition and student motivation. Chiu’s quantitative and qualitative study (2012) also demonstrated significant improvements in fluency, pronunciation and intonation when students replicated original dialogues in L2. Sánchez-Requena (2016) reported an increase in speech speed: her students had to reproduce the original dialogues of the clips in L2. Although her findings were encouraging, her study was inconclusive in demonstrating an effect on learners’ fluency with spontaneous conversations and pronunciation. Talaván et al. (2015) indicated a general enhancement of oral production. They made students of different disciplines work in a collaborative project of dubbing and reverse subtitling from L1 into L2 in both modalities. Finally, Talaván and Ávila-Cabrera (2014) demonstrated enhancement in both writing and oral productive skills; their students worked on a collaborative project that also combined dubbing and reverse subtitling. In addition to these studies on specific skills, Navarrete’s proposal (2013) tested dubbing tasks with secondary school students who had to replicate the original dialogues of the clips provided. The third AVT modality that has only recently started to be explored by researchers and practitioners as a didactic tool in FLL is AD. Over the last two decades, accessibility regulations have promoted the implementation of AD in all kinds of media products across large geographical areas of the world such as Europe, USA, Canada and Australia. This fairly new movement is making people more aware of the relevance of AD as an accessibility modality. However, despite its innovative nature and its significant potential for language learners, AD has not
yet received the attention that it deserves. Snyder's (2013, 12) description of AD is as follows:

    AD makes the visual images of theatre, media and visual art accessible for people who are blind or have low vision. Using words that are succinct, vivid, and imaginative (via the use of similes or comparisons), describers convey the visual image that is either inaccessible or only partially accessible to a significant segment of the population.
In live plays it is often the audio describer who inserts his/her voice during the parts of the performance where there is no dialogue. In audiovisual media, AD is inserted in between the dialogue of the characters and the original soundtrack. In both cases, AD provides additional and explanatory information about actions, facial expressions and scenery that is transmitted visually. Martínez Martínez (2012) was one of the first scholars to research this area. She discussed the potential of passive AD and explored it with her non-native students of German translation for the acquisition of vocabulary. For her experiment, she focussed on movement verbs and created a series of activities using Hot Potatoes, a free authoring software of the time for creating a variety of activities (e.g., matching, multiple choice, crosswords, etc.). However, AD use in the FLL environment has evolved. Researchers expressed their curiosity in exploring the potential of active tasks. Thus, new studies presented experiments where language learners were expected to work as audio describers. Ibáñez Moreno and Vermeulen (2013) reported an increase in lexical and phraseological acquisition through active AD tasks. They worked with native Dutch speaking students who were learning Spanish with an initial level of B2 (Council of Europe 2001). In another study (Ibáñez Moreno and Vermeulen 2014), two groups of students worked together in a collaborative project that involved the AD of a film clip. The first group consisted of Spanish speakers studying AVT from English into Spanish, and the second group consisted of Dutch speakers studying Spanish who possessed a B2 level of fluency. Both sets of students worked together through a series of sessions where they engaged in discussing and comparing their Spanish ADs. The results of the study demonstrated a general improvement of the four skills in the Dutch speakers. Following these studies, Ibáñez Moreno et al. (2016) developed a mobile assisted language learning application containing short clips that had to be audio described by students. The aim of their experiment was to promote accuracy and fluency in oral production. Their results were encouraging and set the bases for future projects in the area. Gajek and Szarkowska (2013) designed teaching materials using the ClipFlair platform; the aim of their study was to demonstrate that active AD and reverse subtitling for short videos was beneficial in all aspects of
language learning from levels A1-C2. Finally, Talaván and Lertola (2016) carried out a study with students of English for Tourism. They compared two sets of students, an experimental group who carried out online collaborative AD tasks on two clips on tourism, and a control group, who continued with regular activities of their course. The results of their study clearly demonstrated the potential of AD tasks in distance FLL environments and the improvement of oral production skills.
3. The enhancement of oral skills
In order to study the enhancement of oral skills using AVT modalities as didactic tools for FLL, the oral skills of fluency, pronunciation and intonation need to be examined. However, these three concepts pose problems of a different nature: authors do not seem to agree on a single definition of fluency, which is also difficult to measure; assessing pronunciation can be challenging, as one may have to consider the intelligibility of the learner's speech or how close it is to a particular standard; and intonation is likewise difficult to teach and assess. This study attempts to address some of these questions, exploring AD tasks as a didactic tool for the enhancement of oral skills and measuring learners' potential improvement.
3.1 Oral fluency

Over the last few decades, applied linguistics experts have shown an increasing interest in the study of oral production, especially in terms of fluency. This idea is demonstrated by Riggenbach's volume (2000), where she gathered a series of articles by scholars in the area, such as Koponen, Lennon, Fillmore, Brumfit and Segalowitz, among others. Her book attempted to define this complex term analysing its key components. Also, it introduced some empirical studies on the cognitive processes that take part in the development of fluency in FLL. Luoma (2004, 88, emphasis in the original) points out that fluency is a controversial issue since it is difficult to define, and even more, to measure. According to this author, "[f]luency is a thorny issue in assessing speaking. This is partly because the word fluency has a general meaning, as in she is fluent in five languages, and a technical meaning when applied linguists use it to characterise a learner's speech." Fillmore was one of the first researchers to establish an exhaustive definition of oral fluency for language learning. He identified four types of fluency, all of which were limited to oral speech (Fillmore 2000, 51):
– The first type of fluency has to do with time and is related to the ability to keep the speech flow going, i.e., "the ability to talk at length with few pauses, the ability to fill time with talk."
– The second type refers to "the ability to talk in coherent, reasoned and semantically dense sentences."
– The third type "is simply the ability to have appropriate things to say in a wide range of contexts" and is related to socio-linguistics.
– The fourth type refers "to the ability some people have to be creative and imaginative in their language use, to express their ideas in novel ways, to pun, to make up jokes, to vary styles, to create and build on metaphors and so on."
However, there are other definitions of fluency that might be considered when examining this feature. Segalowitz’s definition (2010) proposes a triple division of the notion of fluency: cognitive, utterance and perceived. Cognitive fluency comprises processes that affect utterance production. It is the ability to plan and deliver an efficient speech. Therefore, utterance fluency is defined by the features affected by cognitive fluency, such as the ability to maintain a conversation in L2 with adequate speed, acceptable pronunciation, minimum number of repetitions, use of typical native filler forms, etc. This triangle is completed by perceived fluency, which is the listener’s perceptions of the speaker’s speech. Finally, there have been many quantitative studies that have examined the dichotomy of fluency versus dysfluency. Dysfluency is a term used by many experts such as Riggenbach (2000) and Lennon (1990). In some of these studies, a number of features of spoken language were measured, such as hesitations, speech rate, level of connectedness, pragmatic ability and ability to appropriately claim or surrender turns. Also, non-verbal features were quantified such as spontaneous facial and hand gestures. Although authors have not arrived at an agreed definition of fluency, their reflections were taken into account in order to select the appropriate rubric for assessing fluency in our experiment.
3.2 Pronunciation and intonation

How to evaluate pronunciation within the FLL environment has also proven to be a very challenging area (Luoma 2004). This is because learners' speech tends to be assessed in terms of closeness to native speakers' pronunciation. This however poses another problem: whether there is in fact a standard variation/pronunciation within a particular language. All languages have regional varieties, and it is commonly difficult to establish a consensus to identify a single standard. Furthermore, Luoma points out that although many advanced students can produce
a level of pronunciation and intonation that reflects an efficient and intelligible speech, a much lower number of speakers are able to reach a native and 'standard' level of pronunciation. She therefore suggests (2004, 10) that assessment should be based on effective communication: "[c]ommunicative effectiveness, which is based on comprehensibility and probably guided by native speaker standards but defined in terms of realistic learner achievement, is a better standard for learner pronunciation." Assessment of intonation is yet another complex issue. Levis (1999) discusses two definitions of intonation; the first one, provided by Allen (1971, 74), is rather broad: intonation is viewed as a quality of language that includes prosody (rhythm and melody) and is "produced by tonal height and depth along with stress, volume and varying lengths of pause." Levis (1999, 38) explains that this broad definition is widespread among experts and practitioners "as intonation is often used to refer to the way someone says something." However, he opts for a second definition, based on Ladd's (1996), because it is more specific and "restricts its meaning to significant, linguistic uses of pitch." Ladd (1996, 6) defines intonation as one of the "suprasegmental features [of language] to convey […] sentence level pragmatic meanings in a linguistically structured way." Levis (1999, 37) reviewed American course books on language teaching and criticised the fact that although research and methodology in language teaching in general have evolved over the years, teaching practices of intonation have not changed. First, he blames the inadequacy in which the functions of intonation are viewed, the "overemphasis" of intonation pointing at grammatical relations and the "emphasis of its role in conveying speakers' attitudes and emotions." Second, the lack of a communicative purpose demonstrated by the materials, "focusing instead on uncontextualized, sentence-level practice of intonational forms." He discusses four principles for best practice in teaching intonation, which are applicable to active AD tasks (Levis 1999):
– The first principle is based on the importance of teaching intonation in context. With active AD tasks learners have to describe what they see, following an intonation that is coherent with the context of their scripts.
– The second principle stresses the idea that intonational meanings must be generalisable. Levis (1999, 56, emphasis in the original) is in favour of the widespread idea that "intonation makes an independent contribution to the meaning of utterances." However, "descriptions of intonational meaning in terms of attitude or with specific labels, such as boredom… cannot easily be generalized to new sentences. Thus, an intonation that sounds bored in one sentence, for instance, may sound level-headed in another." Because of the interaction of intonation with other factors, teachers should describe intonational
meaning in general terms, and show specific examples in contextual situations. In AD tasks, images present authentic contexts for students to express meaning via intonational features.
– The third principle alludes to the fact that the teaching of intonation has to be subordinated to a larger communicative approach. This can be easily put into practice when audio describing scenes.
– The fourth principle refers to the idea that intonation needs to be connected with realistic language. Again, AD tasks are an ideal environment where students describe what they see in the clips based on real inter-semiotic linguistic features.

4. The study
The study was a small scale preliminary experiment with university students of Spanish as a Foreign Language with a proficiency level of B1. The experiment took place over two terms in the academic year 2014–15 at Imperial College, London.
4.1 Context

At Imperial College students can take an optional course, which is taught in a session of two hours a week for two terms (40 hours in total). These courses may or may not carry credits, and their value towards the students' final marks in the degree varies.1
4.2 Methodology

The starting point of this proposal is Nunan's reflection (1997, 13) on the relationship between language teaching and research, where he defines the latter as:

    A systematic process of inquiry in which the researcher poses a question or questions, collects relevant data, analyzes and interprets it, and makes the results accessible to others. It looks at the simplistic but persistent distinction between qualitative and quantitative methodologies.
The author refers here to action research, the methodology adopted for this pilot study. However, due to the nature of the data collected, our experiment followed a qualitative, rather than a quantitative, approach to the analysis of the results.

1. These credits refer to the European Credit Transfer and Accumulation System (ECTS), an academic credit system that works as a standard for comparing knowledge attainment and performance of students in Higher Education across the European Union.
This study was designed to focus on two specific areas: the enhancement of oral tasks and students' perceptions of the achievements gained with their task completion. Therefore, the main objective was to respond to the following research questions:

– Do oral skills improve with active AD tasks?
– Are students' perceptions on completion of active AD tasks positive?
The study was of quasi-experimental nature following Cohen et al.’s (2007, 282) research ideas. They pointed out that “often in educational research, it is simply not possible for investigators to undertake true experiments, e.g. in random assignation of participants to control or experimental groups.” Also, a number of methods were used for the experiment, so that a methodological triangulation could take place. Thus, the results obtained, using a variety of instruments for data gathering, could be contrasted in a more efficient way; this procedure is recommended by a number of educational experts (Robson 2002; Cohen et al. 2007). The main data gathering tools of this pilot experiment were a pre-test in the form of a podcast activity, an active AD task, the rubric used to assess students’ improvement in oral skills and two final questionnaires on students’ perceptions of both activities.
4.3 Participants

Initially, 12 participants took part in the experiment, all of them in the final year of their degree, but only 6 finished the course and responded to the questionnaires. It is not uncommon to have a poor record of student completion of these courses. Half of the students took the course as part of their degree; the rest took it as a "non-credit" course, which meant that their Spanish results would not have any impact on their final mark. Many final-year students realise how demanding their degrees are and withdraw from their language or other optional courses.
4.4 Procedures

The procedure of the study is illustrated in Figure 1 and its main stages explained below.

1. Students were tested at the beginning of the course, via their recording of a podcast in the form of a conversation on a familiar topic that had not been prepared in advance.
2. Students worked on a new topic, which in this case was "fracking" (the process that involves drilling down into rocks and injecting substances to release the gas and oil inside them). In addition to discussing the topic and consequently learning new vocabulary, they practised certain linguistic functional features and also revised some grammatical points such as the impersonal use of the particle "se".
3. Students responded to questionnaires to evaluate their perceptions in relation to task completion.
4. Results were analysed.
Figure 1. Procedure for the pilot study
4.5 Resources

Several resources were used for data gathering and analysis of results. At the beginning of the course, students recorded the podcast about themselves. This task had the main objective of testing students' initial level of proficiency. Then, students worked on a reading and comprehension text on fracking, to gain information and learn new vocabulary in advance of the AD task. After that, students had to describe a fracking simulation that lasted about three minutes. For both tasks the researcher used a rubric to analyse students' linguistic competence in pronunciation, intonation, fluency, vocabulary and grammar. The study was completed with a final questionnaire given to students. They had to express their thoughts about the tasks they had carried out: level of difficulty, enjoyment and effectiveness for the four language skills.
4.5.1 Oral task 1: Podcast

This first task was intended to assess students' starting level of proficiency in oral skills, and it also aimed to provide homework practice of these skills. It was designed in the form of a podcast because this type of technology is easy to learn and use, accessible, affordable and adjustable (King and Gura 2007). To complete the activity, students had to respond to personal questions related to their degrees, favourite subjects, hobbies, nationality, family and language background. They were asked to talk about the languages they spoke, the number of years they had been studying Spanish and the reasons why they were interested in learning it. This allowed the researcher to reach a better understanding of the language profiles of the students involved. The researcher examined their answers to assess students' initial skills and to find information about their language track record. She observed how their confidence was boosted when positive and encouraging feedback was provided. This is essential for language learning, especially at the early stages of a new language course. There are many studies that support this idea; for instance, Noels (2001, 126) points out that "the teacher must be viewed as an active participant in the learning process, who provides feedback in a positive and encouraging manner." Students received written feedback not only on fluency, pronunciation and intonation, but also on vocabulary and grammar. Common errors were also discussed in class as part of a group activity. The main stages of the task are summarised in Figure 2.
Figure 2. Oral task 1 (podcast)
4.5.2 Active AD task

In the active AD activity students had to audio describe a 'fracking simulation' clip using the ClipFlair Studio2 software. ClipFlair was a European Union (EU) funded project carried out by ten EU universities, including Imperial College. The main aim was to develop activities for FLL through interactive revoicing (e.g., dubbing and AD) and captioning of clips. A free access platform was designed which included more than 400 tasks, a social network and an area called ClipFlair Studio, where activities can be both designed by practitioners and used by learners. There were three main stages involved in the activity, as can be seen in Figure 4. The first one was carried out in a language lab. Students watched the three-minute clip that showed the fracking procedure and illustrated possible disadvantages of this practice for the environment. As shown clockwise in Figure 3, the activity interface had several components: the clip itself, the revoicing panel, instructions, a blank component where students had to write their AD narrative, and an area with useful online tools such as dictionaries. Students started to write their scripts and became familiar with the ClipFlair Studio interface and its functionality, an example of which is provided in Figure 3.
Figure 3. ClipFlair interface for the AD activity
In the second stage of the activity, students had to complete their task and send it to their teacher. Examples were shown and discussed by students in class not only as regards fluency, pronunciation and intonation, but also in terms of grammar and vocabulary.

2. ClipFlair Studio is available at http://studio.clipflair.net/ (accessed October 30, 2017).
Figure 4. Procedure for the AD activity
4.5.3 Assessment rubric

Figure 5 shows the rubric used to assess students' initial level of oral competence. It was divided into four skills: pronunciation and intonation, fluency, vocabulary and grammar. For each skill, five possible scores could be given to students, 1 being the lowest and 5 the highest level of competence in that particular skill. Pronunciation and intonation were measured in terms of frequency of errors. In the fluency section, hesitations resulted in lower marks. Vocabulary was assessed according to the level of sophistication and accuracy of the words selected. Finally, for the grammar section, appropriateness of tenses, correct use of pronouns, and gender and number agreement were considered, and scores were allocated according to the frequency of grammatical errors.
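As a purely illustrative aid, not part of the original study, the structure of this four-criterion, five-band rubric can be sketched as a simple data model. The criterion keys and the validation helper below are hypothetical names, and the band descriptors are paraphrased from the description above.

```python
# Illustrative sketch of the rubric described above (adapted from Talaván and
# Lertola 2016). Criterion keys are hypothetical; each criterion receives a
# mark from 1 (lowest) to 5 (highest level of competence).

RUBRIC = {
    "pronunciation_intonation": "frequency of pronunciation and intonation errors",
    "fluency": "hesitations lower the mark",
    "vocabulary": "sophistication and accuracy of the words selected",
    "grammar": "tenses, pronouns, gender and number agreement, error frequency",
}

def validate_marks(marks):
    """Check that every rubric criterion has a mark between 1 and 5."""
    for criterion in RUBRIC:
        mark = marks.get(criterion)
        if mark is None:
            raise ValueError(f"Missing mark for {criterion}")
        if not 1 <= mark <= 5:
            raise ValueError(f"Mark for {criterion} outside the 1-5 band: {mark}")
    return marks

# Hypothetical example: one learner's marks for the podcast task.
print(validate_marks({
    "pronunciation_intonation": 4,
    "fluency": 4,
    "vocabulary": 5,
    "grammar": 3,
}))
```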
Figure 5. Assessment rubric (adapted from Talaván and Lertola 2016)
4.5.4 Questionnaires

Two questionnaires were given to participants in order to evaluate their perceptions of the two oral tasks that were part of the experiment. The first one focused on the podcast activity (https://goo.gl/L77UFu), and the second one on the AD task (https://goo.gl/rPsPP4). Both evaluated the following areas:

– Efficiency of the tasks for language learning;
– Level of difficulty and fun of the tasks, as well as willingness to carry out similar tasks in the future;
– Students' feelings about recording and listening to their own voices, and appreciation of the feedback provided by the teacher.
Students were also asked how useful they found the activities for language learning in general and, more specifically, for writing and oral skills, for pronunciation and intonation skills, and for the acquisition of new vocabulary and grammar structures. Both questionnaires included a similar set of questions, but in only one were the students asked if they liked the topic of fracking; this was a controversial topic that required investigating a great number of specialised lexical items. Also, at the beginning of the course, it was observed that some students felt slightly uncomfortable recording and listening to their own voices. This was identified as an important issue to consider carefully, given that, in contrast to other oral tasks, recording students' voices is a key component of AD activities.
5. Analysis and discussion
The experiment faced several challenges. First of all, the institution imposed certain constraints. The Spanish course where the experiment took place was rather demanding: in 40 hours of class lessons, students had to submit six pieces of compulsory, summatively assessed coursework. The course was also content-heavy, and a wide range of areas needed to be covered in terms of topics (of a technical, scientific and literary nature), vocabulary and grammar structures. The activities designed specifically for the study followed the course syllabus, but only formative feedback was provided to the learners. As the tasks were optional, it was difficult to get students to submit them quickly, so no additional tasks could be included in the study. Unfortunately, students at times resist completing optional tasks that do not count towards their final course result.

Another aspect that prevented students from completing the activities faster, which would have allowed time for the submission of new tasks, was the fact that some technical glitches occurred when using the software to record the AD narratives. The revoicing software presented bugs, some of which were solved by the ClipFlair engineers, but it took a while to discover that revoicing would not work at all on Mac systems, that some Windows versions still caused glitches, and that the Chrome internet browser did not fully support revoicing. Another problem was the dependency of ClipFlair Studio on an additional piece of software, Silverlight, a Windows programme that needs to be downloaded for ClipFlair Studio to work. There was some trial-and-error testing, but these problems caused a lack of confidence in the software among the parties involved in the study. Not all learners had the right tools to complete their tasks comfortably at home. Some of them eventually used university labs but, understandably, delays took place.

An additional aim of the tasks was to provide relevant and useful feedback to the learners on how to improve fluency (as well as pronunciation and intonation), based on the analysis and evaluation of student performance when interacting orally. The descriptive data of the oral tasks can be seen in Table 1. It shows the mean for each of the skills tested (on a scale of 1–5), based on the scores of the six students who took both oral tasks.

Table 1. Descriptive analysis of oral tasks (mean results)

                             Pronunciation & Intonation   Fluency   Vocabulary   Grammar
Podcast                      3.66                         4.33      4.16         3.66
Active AD task (fracking)    3.78                         4.35      4.20         3.83
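Purely by way of illustration, the means in Table 1 are simple averages over the six participants' rubric marks. The per-student scores in the sketch below are hypothetical placeholders; only the aggregated means come from the chapter (which appears to truncate to two decimals, e.g. 22/6 reported as 3.66).

```python
# Illustrative sketch: how the per-skill means in Table 1 are obtained.
# The six individual marks are hypothetical; only the reported means in
# Table 1 come from the chapter.

podcast_marks = {
    "Pronunciation & Intonation": [4, 3, 4, 4, 3, 4],
    "Fluency": [5, 4, 4, 5, 4, 4],
    "Vocabulary": [4, 4, 5, 4, 4, 4],
    "Grammar": [4, 3, 4, 4, 3, 4],
}

for skill, marks in podcast_marks.items():
    mean = sum(marks) / len(marks)
    print(f"{skill}: {mean:.2f}")
```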
At first sight, one can see that students obtained slightly better scores in the second task than in the first across the four skills analysed (pronunciation and intonation, fluency, vocabulary and grammar). These figures imply that students' skills might have improved. However, generalisation is not possible, as there was a limited number of participants in the study and only two tasks were used for the comparison of results. Adding tasks to the study and recruiting a larger number of participants would allow for further validation of the results.

It should be noted that the second task was more complex than the first one, so students had to demonstrate a higher level of linguistic competence. For the podcast activity, students were in their comfort zone because they were familiar with the topics they had to comment on, unlike the topic area of the second task. Also, the grammar structures and specialised vocabulary needed to complete the second task made it more demanding. This lends further support to the idea that the slightly better scores in the second task reflect a genuine enhancement of students' oral skills. Nevertheless, it is important to emphasise one of Martín Álvarez's (2014) findings in his study on podcasting as a didactic tool for language learning: he noticed that students improved articulation and pronunciation rather than fluency. This is an area that should be re-examined in future projects on oral skill enhancement when using prepared scripts (for podcasts or any other revoicing AVT tasks).

According to the results of the questionnaire about the podcast activity, which can be seen in Figure 6, students' perceptions were all highly positive. All subjects thought that the feedback on task performance provided by the teacher was relevant. This is probably because students are not used to receiving feedback on oral skills such as fluency, pronunciation and intonation. All students apart from one (who responded "neither") were very keen to complete activities of this type. Also, all apart from one (who responded "agree" rather than "strongly agree") loved the idea of recording and listening to their own voices because they could learn from their mistakes and improve intonation and pronunciation. All students thought that the task was useful for language learning in general (apart from one, who answered "neither") and for learning vocabulary, but it seemed to be less beneficial for learning grammar structures and writing skills in general. This could be due to the fact that the focus of the task was on oral skills. All students seemed to enjoy the task and none of them found it difficult.

Figure 6. Podcast questionnaire (students' responses)

As for the AD fracking activity, as seen in Figure 7, students considered it useful for improving pronunciation and intonation; they found the feedback provided by the teacher on task performance relevant and, although the activity was not particularly easy for everyone, it turned out to be fun, and most students expressed their desire to do more activities like this one. Most of them liked recording their
own voice in order to practise speaking skills. They all agreed that it was useful for learning new vocabulary and for language learning in general, and all but two students, who responded with "disagree" or "neither", thought that it was also useful for learning new grammar structures and for improving writing skills in general. These reflections seem to support Martín Álvarez's (2014) findings: he points out that with the creation of podcasts there was not a clear distinction regarding the improvement of oral and writing skills, especially when students were asked to work with their own scripts and practise reading them several times before the recordings took place.

Responses to the two questionnaires did not differ much as regards students' impressions of the level of difficulty of the tasks. As seen in Figure 6, students did not find the first podcast task difficult, with responses classifying it as either 'very easy', 'easy' or 'neither'. By contrast, two participants viewed the second AD task
as not particularly 'easy', and four responded 'neither' to that question, as seen in Figure 7.

Figure 7. AD Fracking questionnaire (students' responses)

The fact that students unanimously found these types of activities 'useful' and 'fun', and that they could see their effectiveness for language learning in both oral and writing tasks, invites further studies that may allow for a quantitative and qualitative analysis of learners' progress.
6. Conclusion
This article focused on a pilot experiment with undergraduate Spanish language learners at B1 level. Although this is a preliminary study with a limited number of participants, the findings are very encouraging. Students' perceptions of the tasks they completed are very positive and show that they appreciate the potential of this type of activity for their learning. The use of a variety of data-gathering instruments and the triangulation of data has contributed to establishing a sound design for a methodological framework that could also work effectively in larger research projects. For future studies, it is recommended to recruit at least 50 subjects, who should carry out at least 8–10 active AD tasks on a long-term basis. In order to
assess the enhancement of productive oral skills, it may be more beneficial to design initial and final tests in which learners produce spontaneous speech, as opposed to prepared written tasks. The results would measure the potential enhancement of fluency, pronunciation and intonation skills. In addition, it would be advisable to have several observers (who are experts in the area) review the data-gathering instruments and the corresponding data analysis in order to provide additional interpretations of the experiment. This extended framework would lend further reliability to the results and greater validity to future research projects.

Considering that ClipFlair presented some flaws for revoicing activities, other ways of carrying out this type of task should be sought. Alternative programs that integrate video and audio features are available, such as Windows Movie Maker, iMovie for Mac, WeVideo (a user-friendly, free online program) and YouDescribe (a free online tool for adding audio description to YouTube videos). The experiment was successful in confirming the potential of active AD activities in the language learning environment and has set out a possible methodological framework for future experiments. All in all, this study invites other researchers to take these learning outcomes into account in future projects, as there is reason to believe that active AD tasks have great potential for FLL.
References

Allen, Virginia. 1971. "Teaching Intonation, from Theory to Practice." TESOL Quarterly 4: 73–91. https://doi.org/10.2307/3586113 Bird, Stephen, and John Williams. 2002. "The Effect of Bimodal Input on Implicit and Explicit Memory: An Investigation into the Benefits of Within-language Subtitling." Applied Psycholinguistics 23 (4): 509–533. https://doi.org/10.1017/S0142716402004022 Borghetti, Claudia. 2011. "Intercultural Learning through Subtitling: The Cultural Studies Approach." In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 111–137. Bern: Peter Lang. Borghetti, Claudia, and Jennifer Lertola. 2014. "Interlingual Subtitling for Intercultural Language Education: A Case Study." Language and Intercultural Communication 14 (4): 423–440. https://doi.org/10.1080/14708477.2014.934380 Borrás, Isabel, and Robert C. Lafayette. 1994. "Effects of Multimedia Courseware Subtitling on the Speaking Performance of College Students of French." The Modern Language Journal 78 (1): 61–75. https://doi.org/10.1111/j.1540‑4781.1994.tb02015.x Bravo, Conceiçao. 2008. Putting the Reader in the Picture: Screen Translation and Foreign Language Learning. Unpublished PhD diss. Tarragona: Universitat Rovira i Virgili. Burston, Jack. 2005. "Video Dubbing Projects in the Foreign Language Curriculum." CALICO Journal 23 (1): 72–79.
Caimi, Annamaria. 2006. “Audiovisual Translation and Language Learning: The Promotion of Intralingual Subtitles.” The Journal of Specialised Translation 6: 85–98. Accessed October 30, 2017. http://www.jostrans.org/issue06/art_caimi.pdf. Cohen, Louis, Lawrence Manion, and Keith Morrison. 2007. Research Methods in Education. London and New York: Routledge. https://doi.org/10.4324/9780203029053 Council of Europe. 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge, U.K.: Press Syndicate of the University of Cambridge. Chiu, Yi-Hui. 2012. “Can Film Dubbing Projects Facilitate EFL Learners’ Acquisition of English Pronunciation?” British Journal of Educational Technology 43 (1): 24–27. https://doi.org/10.1111/j.1467‑8535.2011.01252.x
Danan, Martine. 2004. “Captioning and Subtitling: Undervalued Language Learning Strategies.” Meta 49 (1): 67–77. https://doi.org/10.7202/009021ar Danan, Martine. 2010. “Dubbing Projects for the Language Learner: A Framework for Integrating Audiovisual Translation into Task-Based Instruction.” Computer Assisted Language Learning 23 (5): 441–456. https://doi.org/10.1080/09588221.2010.522528 Gajek, Elżbieta, and Agnieszka Szarkowska. 2013. “Audiodeskrypcja i napisy jako technikiuczenia się języka – projekt ClipFlair.” Jezyki Obce w Szkole 02 (13), 106–110. Guillory, Helen Gant. 1998. “The effects of Keyword Captions to Authentic French Video on Learner Comprehension.” CALICO Journal 15 (1–3): 89–109. Ibáñez Moreno, Ana, and Anna Vermeulen. 2013. “Audio Description as a Tool to Improve Lexical and Phraseological Competence in Foreign Language Learning.” In Translation in Language Teaching and Assessment, ed. by Dina Tsagari and Georgios Floros, 41–63. Newcastle upon Tyne: Cambridge Scholars Press. Ibáñez Moreno, Ana, and Anna Vermeulen. 2014. “La audiodescripción como recurso didáctico en el aula ELE para promover el desarrollo integrado de competencias.” In New Directions in Hispanic Linguistics, ed. by Rafael Orozco, 263–292. Newcastle upon Tyne: Cambridge Scholars Press. Ibáñez Moreno, Ana, Jordano de la Torre, María, and Anna Vermeulen. 2016. “Diseño y evaluación de VISP, una aplicación móvil para la práctica de la competencia oral.” RIED: Revista Iberoamericana de Educación a Distancia 19: 63–81. Incalcaterra McLoughlin, Laura. 2009. “Inter-semiotic Translation in Foreign Language Learning. The Case of Subtitling.” In Translation in Second Language Teaching and Learning, ed. by Arndt Witte, Theo Harden, and Alessandra Ramos de Oliveira Harden, 227–244. Oxford: Peter Lang. Incalcaterra McLoughlin, Laura, and Jennifer Lertola. 2014. “Audiovisual Translation in Second Language Acquisition. Integrating Subtitling in the Foreign Language Curriculum.” The Interpreter and Translator Trainer 8 (1): 70–83. https://doi.org/10.1080/1750399X.2014.908558
King, Kathleen P., and Mark Gura. 2007. Podcasting for Teachers: Using a New Technology to Revolutionize Teaching and Learning (Emerging Technologies for Evolving Learners). Charlotte: Information Age Publishing. Kumai, William. 1996. "Karaoke Movies: Dubbing Movies for Pronunciation." The Language Teacher Online 20 (9). Accessed October 30, 2017. http://www.jalt-publications.org/tlt/files/96/sept/dub.html Ladd, D. Robert. 1996. Intonational Phonology. Cambridge: Cambridge University Press. Lennon, Paul. 1990. "Investigating Fluency in EFL: A Quantitative Approach." Language Learning 40 (3): 387–417. https://doi.org/10.1111/j.1467‑1770.1990.tb00669.x
Lertola, Jennifer. 2012. “The Effect of the Subtitling Task on Vocabulary Learning.” In Translation Research Projects, ed. by Anthony Pym, and David Orrego-Carmona 61–70. Tarragona: Universitat Rovira i Virgili. Lertola, Jennifer. 2015. “Subtitling in Language Teaching: Suggestions for Language Teachers.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 245–267. Bern: Peter Lang. Levis, John M. 1999. “Intonation in Theory and Practice, Revisited.” TESOL Quarterly 33: 37–63. https://doi.org/10.2307/3588190 López Cirugeda, Isabel, and Raquel Sánchez Ruiz. 2013. “Subtitling as a Didactic Tool. A Teacher Training Experience.” Porta Linguarum 20: 45–62. Luoma, Sari. 2004. Assessing Speaking. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511733017
Martín Álvarez, Francisco J. 2014. El podcasting en la enseñanza de las lenguas extranjeras. Unpublished PhD Diss. Madrid: Universidad Nacional de Educación a Distancia. Montero Perez, Maribel, Peters Elke, Clarebout Geraldine, and Piet Desmet. 2014. “Effects of Captioning on Video Comprehension and Incidental Vocabulary.” Language, Learning and Technology 18 (1): 118–141. Navarrete, Marga. 2013. “El doblaje como herramienta de aprendizaje en el aula de español y desde el entorno de ClipFlair.” MarcoELE 16: 75–87. Accessed October 30, 2017. http:// marcoele.com/descargas/16/8.londres-2.pdf Noels, Kimberly A. 2001. “Learning Spanish as a Second Language: Learners’ Orientations and Perceptions of Their Teachers’ Communication Style.” Language Learning 51 (1): 107–144. https://doi.org/10.1111/0023‑8333.00149
Martínez Martínez, Silvia. 2012. "La audiodescripción (AD) como herramienta didáctica: Adquisición de la competencia léxica." In Traducir en la frontera, ed. by Susana Cruces, Maribel Del Pozo, Ana Luna, and Alberto Álvarez, 87–102. Granada: Atrio. Riggenbach, Heidi. 2000. Perspectives on Fluency. Ann Arbor: The University of Michigan Press. https://doi.org/10.3998/mpub.16109 Robson, Colin. 2002. Real World Research: A Resource for Social Scientists and Practitioner-Researchers. Oxford: Blackwell Publishing. Sánchez-Requena, Alicia. 2016. "Audiovisual Translation in Teaching Foreign Languages: Contributions of Revoicing to Improve Fluency and Pronunciation in Spontaneous Conversations." Porta Linguarum 26: 9–21. Segalowitz, Norman. 2010. Cognitive Bases of Second Language Fluency. New York: Routledge. https://doi.org/10.4324/9780203851357
Snyder, Joel. 2013. Audio Description: Seeing with the Mind’s Eye: A Comprehensive Training Manual and Guide to the History and Applications of Audio Description. Unpublished PhD diss. Barcelona: Universitat Autonoma de Barcelona. Talaván, Noa. 2006a. “Using Subtitles to Enhance Foreign Language Education.” Porta Linguarum 6: 41–52. Talaván, Noa. 2006b. “The Technique of Subtitling for Business English Communication.” RLFE (Revista de Lenguas para Fines Específicos) 11/12: 313–346. Talaván, Noa. 2007. “Learning Vocabulary through Authentic Video and Subtitles.” TESOLSPAIN Newsletter 31: 5–8.
Talaván, Noa. 2010. "Subtitling as a Task and Subtitles as Support: Pedagogical Applications." In New Insights into Audiovisual Translation and Media Accessibility, ed. by Jorge Díaz Cintas, Ana Matamala, and Josélia Neves, 285–299. Amsterdam: Rodopi. https://doi.org/10.1163/9789042031814_021
Talaván, Noa. 2011. "A Quasi-Experimental Research into Subtitling and Foreign Language Education." In Audiovisual Translation Subtitles and Subtitling. Theory and Practice, ed. by Laura Incalcaterra McLoughlin, Marie Biscio, and Máire Áine Ní Mhainnín, 197–217. Oxford: Peter Lang. Talaván, Noa. 2013. La subtitulación en el aprendizaje de lenguas extranjeras. Barcelona: Octaedro. Talaván, Noa, and Pilar Rodríguez-Arancón. 2014. "The Use of Reverse Subtitling as an Online Collaborative Language Learning Tool." The Interpreter and Translator Trainer 8 (1): 84–101. https://doi.org/10.1080/1750399X.2014.908559
Talaván, Noa, and José Javier Avila-Cabrera. 2014. “First Insights into the Combination of Dubbing and Subtitling as L2 Didactic Tools.” In Subtitles and Language Learning, ed. by Yves Gambier, Annamaria Caimi, and Cristina Mariotti, 149–172. Bern: Peter Lang. Talaván, Noa, Elena Bárcena, and Álvaro Villarroel. 2014. “Aprendizaje colaborativo asistido por ordenador para la transferencia de las competencias mediadora y lingüísticocomunicativa en inglés especializado.” In Digital Competence Development in Higher Education: An International Perspective. ed. by María Luisa Pérez Cañado, and Juan Raáez Padilla, 87–106. Bern: Peter Lang. Talaván, Noa, Pilar Rodríguez-Arancón, and Elena Martín-Monje. 2015. “The Enhancement of Speaking Skills Practice and Assessment in an Online Environment.” In Tendencias en Educación y Lingüística, ed. by Lucía P. Cancelas y Ouviña, and Susana Sánchez Rodríguez, 329–351. Cádiz: Editorial GEU. Talaván, Noa, and Jennifer Lertola. 2016. “Active Audiodescription to Promote Speaking Skills in Online Environments.” Sintagma 27: 59–74. Wagener, Debbie. 2006. “Promoting Independent Learning Skills Using Video on Digital Laboratories.” Computer Assisted Language Learning 19: 279–286. https://doi.org/10.1080/09588220601043180
Why is that creature grunting? The use of SDH subtitles in video games from an accessibility perspective

Tomás Costal
Universidad Nacional de Educación a Distancia (UNED)
Video games today are highly complex audiovisual products. Their nature is not only multisemiotic but also interactive. Their potential audience has certain expectations and, especially in the case of digital blockbusters, final users need the advantage of knowledge and the force of numbers. A faux pas in design, continuity or playability will most likely provoke social media outrage and prompt the swift release of official apologies. Conversely, accessibility shortcomings rarely or never have the same impact. The present study puts forward the advantages of including Subtitles for the Deaf and Hard of Hearing (SDH) in popular video games and offers an in-depth analysis of a selection of recent multimedia titles. Drawing on the work of Bernal Merino (2015), O'Hagan and Mangiron (2013) and Trabattoni (2014) on the special characteristics of video games, the main elements around which they are structured and the aspects that determine their success or failure, the author will endeavour to advance a convincing argument in favour of the introduction of SDH subtitling conventions.

Keywords: subtitles for the deaf and hard of hearing, localisation, video games, accessibility, subtitling norms
1. The medium
Video games are multimedia products that have been designed, developed and programmed to engage users and achieve a compelling degree of interaction between one or several individuals and a piece of software whose main purpose is to entertain. To this somewhat detached definition it might be added that concepts such as 'interaction' and 'multimedia' may sound rather vague and consequently elicit divergent responses on the part of the reader. The principal assumption here is that multimedia products are forms of digital content, which
may take different shapes – as in the case of a video game that is made available simultaneously for various platforms, including mobile devices – and, in addition, present all the characteristics of audiovisual creations; in other words, they feature the overlapping combination of a number of communicative channels and make use of complementary linguistic and non-linguistic codes. The concept of interaction between the human player and the machine that executes the video game programme is frequently taken for granted as an inescapable prerequisite in the absence of which the medium would radically change its nature. It could be stated that a video game devoid of interaction with the player would not be different from other forms of passive reception such as films or books but, in fact, this interaction has considerable limitations. Players are never really free to act as they desire, given that their every move – or rather, their avatar’s – has been directed and foreseen by the designer. The existence of alternative paths, ramifications, or even dead ends does not necessarily imply that even the minutest detail has not been thoroughly predicted. This particularity signifies that so-called video game interaction is fundamentally one-sided: players are faced with ready-made virtual environments they initially know very little or nothing about, they gradually acquire a set of skills that are meant to ensure their steady progress, they accumulate experience and, in one form or another, eventually beat the game and start over with a new one following the exact same procedure. As Bogost (2007, 4) points out: “[…] Software is composed of algorithms that model the ways things behave. To write procedurally, one authors code that enforces rules to generate some kind of representation, rather than authoring the representation itself.” Video games conceived as entertainment products, therefore, would stand as an instance of indirect communication between individuals that is mediated by computational commands. The internal architecture of the game remains elusive for the player: the programming code gives the appropriate instructions, the machine obeys those instructions, and a given virtual reality is rendered on the screen for the user’s enjoyment. In Bogost’s words (9, emphasis in the original): “[…] Procedural representation explains processes with other processes. Procedural representation is a form of symbolic expression that uses processes rather than language.” The previous statement largely determines the degree of separation between the semiotics of the audiovisual alone and those of the multimedia: the final product being released – the one the user may purchase – is always multi-layered and highly complex; however, when procedurality is involved, most of the content lies occult for users, at least for the great majority, and their understanding of its intricacies becomes remarkably diminished for that precise reason. To put it another way, procedurality leads to a shift from reflection towards action, in the
sense that players stop asking themselves why something is happening – provided that their technical expertise is not akin to that of the programmers – and move on to thinking what needs to be done in order to fulfil the conditions which have been put forward, implicitly or explicitly, by the artificial intelligence that is offering them a false choice. Egenfeldt-Nielsen et al. (2008, 175) believe that, regardless of their relatively disadvantaged position with respect to their digital counterparts, users still play an important role in the interactive process: The most important component of a game world is the game space, understood as a setting for the gameplay. Game spaces are not realistic, but reductive, they reproduce some features of the game world, but create their own rules in order to facilitate gameplay (and to reduce the processing power required by a computer to run the game).
Thus, there exists a tacit agreement between gamers and the industry, whereby the definition of interaction and reality is transformed once they start moving within the boundaries established by the game itself. The same suspension of disbelief that is required in other media (Whitman-Linsen, 1992; Pedersen, 2011; Chaume, 2012) needs to be persistently applied in the field of gaming if the illusion is not to be broken. In other words, what happens inside the game stays inside the game. Trabattoni (2014, 13, our translation) attributes the success of this consensual deception to the proficient interplay that takes place between a key set of core elements: We could say that video games employ different substances of expression (images, texts, music, sound effects, etc.) that come into syncretism to form a unified textual project. It is not a mere accumulation of languages that “collaborate” in the same communication project, presupposing one another and allowing us to speak of the resulting text as something organic, coherent, sensible.1
Such syncretism makes it possible for this multimedia construct to “facilitate persuasion, not just simulated persuasion,” as Bogost (2007, 284) claims, so that players take action, participate, engage and interact with the only visible layer they are provided entry to. While Trabattoni’s approach focuses on the multisemiotic, and relies heavily upon the conclusions he draws from the application
1. Possiamo allora dire che i videogiochi impiegano differenti sostanze dell’espressione (immagini, scritte, musiche, effetti sonori, ecc.) che entrano in sincretismo tra loro per formare un progetto testuale unitario. Non si tratta allora di un semplice accumulo di linguaggi che “collaborano” allo stesso progetto comunicativo, che si presuppongono reciprocamente e che permettono di parlare del testo risultante come di qualcosa di organico, coerente, sensato.
of an analytical model to a small corpus of video games, Bogost's and Egenfeldt-Nielsen et al.'s objective seems to suggest that video games could perhaps be deemed a separate medium with peculiar features, a medium capable of generating a rhetoric and a discourse of its own. Throughout the present study, we will accept this latter notion and endeavour to discern whether the reconsideration of the way in which linguistic and extralinguistic content is conveyed, by means of an additional textual track, could improve accessibility in video games; and whether idiosyncratic practices should be substituted with more normative behaviours geared towards digital literacy. With this goal in mind, subsequent sections will proceed from the general to the particular: from the language being used to the peculiarities of the SDH modality.
2. The language
One of the latest controversies that have hit the disciplines of Translation Studies, Audiovisual Translation, and Video Game Localisation is closely related to the nature of the profession. Some, like Schäler (2008, 196, emphasis in the original), present arguments that would seem to advocate interdisciplinary convergence: Probably the most difficult distinction to make, however, is that between localisation and translation. Not just localisers, but translators as well adapt products (text) linguistically and culturally so that they can be understood in different locales. However, translation does not necessarily deal with digital material whereas localisation is always happening in the digital world.
On this basis, localisers are, of course, translators who face dissimilar challenges but share a common bond with other types of specialists. Notwithstanding, the question of adapting products linguistically does very little to solve the problem of nomenclature: Is adaptation common to both translation and localisation? Does localisation require more adaptation? Does localisation imply a somewhat more developed sense of cultural awareness?
2.1 Translation vs Localisation

It seems to be common among game localisers who are also stakeholders in the world of academia to highlight the ominous difficulties they confront when they translate video games. Those who would not go as far as claiming that video game localisation has very little to do with other, perhaps more established, traditional, pre-eminent, or mainstream forms of translation, insist on pointing out that localisation demands more versatile professionals.
As Méndez González (2014, 108, our translation) puts it: […] video games are productions that bring together practically every branch of translation: subtitling, dubbing, literary translation, software localisation, legal translation, advertising paratranslation, game covers, guides, accompanying material and a multitude of additional elements to be taken into account.2
One could argue that, today, with the advent of digital publishing and the emergence of new business models (O’Hagan and Mangiron 2013), such competences are no longer exclusive to any particular speciality or group of translators. In fact, it would be difficult to accept the idea that translators and localisers are the only professionals who are expected to train competences like the aforementioned. There are also some who go one step further and coin a phrase to distinguish the mildly acceptable from the hypothetically desirable, as is the case of Edwards (2011, 20): […] Culturization is going a step further beyond localization as it takes a deeper look into a game’s fundamental assumptions and content choices, and then gauges their viability in both the broad, multicultural marketplace as well as in the specific geographic locales. Localization helps gamers simply comprehend the game’s content (primarily through translation), but culturization helps gamers to potentially engage with the game’s content at a much deeper, more meaningful level.
The author has created a hierarchy that brings video game localisation back to the barren discussion of whether certain translation strategies are legitimate, whether or not certain instances of what is conventionally called ‘translation’ constitute normative infringements, respond to a functional and pragmatic need, are a consequence of the work conditions translators might be forced to accept, or even whether the process and the final product might be established as translation proper. Edwards’ claims appear to be associated with the role of, for lack of a better word, ‘culturizer’, an almost premonitory potency: an individual capable of foreseeing what global markets and consumers will demand even before they do, and one who manages to bridge the cultural gap in such a way that engagement, and not mere understanding, is achieved.
2. […] el videojuego es una producción que prácticamente aúna todas las ramas de la traducción: subtitulado, doblaje, traducción literaria, localización de software, traducción jurídica, paratraducción publicitaria, carátulas, guías de juego, material anexo e infinidad de elementos adicionales a tener en cuenta.
2.2 Localisation vs Transcreation

The middle ground between localisation, as an alternative name for translation which specifically refers to digital entertainment multimedia products, and the loosely defined concept of culturization is embodied in what academics like O'Hagan and Mangiron (2013, 104) denominate transcreation:

[…] The notion of "transcreation" draws attention to the presence of the human agency of the translator in the process of translation, inviting variable, nonuniform and at times unpredictable solutions. As such, it contrasts with the focus placed on standardization and uniformity which often characterizes productivity software localization.
Once more, the appellative spectrum has been widened: whereas translation contributes to understanding alone – in other words, to making the content linguistically accessible to a user who does not speak the language in which it is originally written –, localisation combines linguistic versatility with technological expertise, culturization aims towards full familiarisation every step of the way, and transcreation leaves the original content aside – in part or completely – and endeavours to produce new meaning using the source exclusively as a point of departure which does not deserve excessive regard. In our view, the level of appropriacy of a given translation, and the measure of its perceived success, is determined by both controllable and uncontrollable factors such as individual knowledge, professional competence, the existence of industrial standards, audience reception, etc., but its terminological preference in no way influences its ultimate utilitarian purpose. These four separate notions of translation, localisation, culturization and transcreation, reveal the existence of distinct underlying principles which are conducive to separate frameworks of reference for quality assessment. To put it another way, and hopefully with greater transparency: the debate still has not shifted focus from the binary opposition between prescriptiveness and functionality. As we will suggest in our critical analysis of the use of SDH subtitles in video games, the coexistence of such radically different theoretical signposts, productive and practical as they may be under a given set of conditions, may engender incongruities, inconsistencies and errors as by-products of oversimplification.
2.3 Video game content presentation

Whichever interpretation is chosen, and independently of the number of duties translators should be made responsible for, it would not be too adventurous to conjecture that a medium like video games presents content in a wide variety of
forms. Bernal Merino (2015) identifies seven different text types: narrative, oral/dialogic, technical, functional, didactic, promotional and legal. This classification is accompanied by a discussion on whether the processes of translation and localisation are essentially different depending on the game genre. It would stand to reason, for instance, that a fighting and a role-playing game diverged in their preferred method of information conveyance: where the former is supposed to be swift and accurate, the latter may consciously decide to be verbose. Indeed, there is a series of text types that is more visible and perceptible, and thus receives most of the attention and resources available, such as the narrative and oral/dialogic sections of the video game, while there are others that present a much higher tolerance threshold, with the possible exception of legal documents, licence agreements and end-user contracts, where mistakes would obviously be inadmissible.
eventually fossilised as common practices in the industry. In the coming sections, we will explain the origins of this dilemma, provide selected examples, and advance plausible alternatives.
3. The accessibility modality
Video games today, particularly triple A or 'AAA' titles whose cost of production exceeds $100 million (Egenfeldt-Nielsen et al. 2008), make use of audio tracks recorded by voice talents (dubbing actors) or well-known performers from other media – from film to television to theatre – instead of the run-of-the-mill, mostly text-based strategies of old, where a monotonous soundtrack accompanied the original dialogues in Japanese or English for the most part (O'Hagan and Mangiron 2013). Times have undoubtedly changed. Not only is the hardware different and incomparably more powerful than that of yesteryear; the software brings the machine's processing capacity to the limit, and audiences expect to be blown away by their entertainment devices. Two reasons could be put forward in this regard: finishing a video game demands tens, if not hundreds, of hours of gameplay; and the rather steep price of games, together with the aggressive marketing campaigns that announce their launch, leaves a very meagre margin for errors and miscalculations. O'Hagan and Mangiron (2013, 21, emphasis in the original) assert that:

[…] One technological trend is to make certain genres of mainstream console games more like movies, where pre-rendered movie sequences (cut-scenes) and real-time interactive playing scenes seamlessly merge through the use of high definition graphics and dialogues voiced by professional actors. The cinematic features employed in games have in turn led to the use of subtitling and dubbing of dialogues in game localization, though not necessarily following the more established norms of Audiovisual Translation.
Video game developers usually decide who will take charge of the dubbing process very early on, as voice acting represents a very considerable portion of the total budget (Chandler 2014). Localised versions, if dubbed at all, tend to follow the general schedule set for the whole international project, without concessions (Chandler and Deming 2012). Inconsistencies in either the dubbing or the subtitling jobs – not so much the latter, since subtitles are frequently presented as an optional track in the home market – would force developers to incur rocketing costs at critical production stages.
Therefore, linguistic, design and testing departments are the three supporting pillars of the video game industry. Indeed, the ways in which the same video game is perceived by its intended audience vary from culture to culture (O'Hagan and Mangiron 2013; Yuste Frías 2014), and it is also true that the message of the original version may undergo changes that depend on a variety of factors ranging from the economic to the political and ideological, as happens in other audiovisual media (Agost 1999; Chaume 2012). What remains clear is that gamers' reception of a given video game is different when an American production is released in the United States in English, when a Japanese production is released in the United States subtitled, but not dubbed, in English, or when any of the above is launched in Brazil, Italy, Germany or France in their respective local languages. The use of subtitles in video games may ostensibly respond to the progressive evolution of these entertainment products, as well as to the rise of the interactive viewer, who no longer conforms to the creations the industry imposes, but expects to be taken into consideration, listened to, and catered for. Whereas these principles may seem to be generally acceptable and, to a certain extent, simple to implement, there are still too many areas in which the data available contradict this rather optimistic view. Accessibility in video games, for instance, is one such area. O'Hagan and Mangiron (2013, 283) reflect that:

[…] the evolution of game technology can, ironically, erect more accessibility barriers if players with disabilities are not taken into consideration. We highlighted how the evolutionary advances of audio technology eventually led to more common inclusions of recorded human voices into in-game dialogues, making the gameplay experience more realistic and cinematic. However, if the dialogues and sound effects in a game are not subtitled, this becomes an accessibility barrier for DH players – a barrier that did not exist when all text in games was provided in written form.
If video games are to be taken not solely as forms of entertainment, but also as vehicles for the advancement of educational agendas and social awareness, then provision of access to all users, whoever they are and regardless of their diversity, needs to be prioritised. Accessibility is in no way exclusively linked to subtitling; quite the contrary, since audio description for the blind, dubbing, voice-over, narration, interpreting, free commentary and respeaking, among other audiovisual translation modes, could be employed for these purposes in a variety of ways, they should be seen as fundamental tools in the business. Despite this advocacy and the availability of manuals that build upon a long-consolidated theoretical framework (see, for instance, Díaz Cintas and Remael 2007; Pedersen 2011), given their
negligible impact on the very influential gaming industry, an objective and empirical corroboration – however tentative – of the current state of affairs is called for. As we will see from the results of the characterisation presented in the following video game titles, the concept of ‘subtitle’ still is a complex issue to agree upon. SDH subtitles in particular, those that convey “a written rendering on the screen of the characters’ dialogue as well as complementary information to help deaf viewers identify speakers and gain access to paralinguistic information and sound effects that they cannot hear from the soundtrack” (Díaz Cintas 2008a, 7), stand as a promise that remains undelivered by designers, even when the “box and docs” themselves state that closed captions are optional or hardcoded throughout the video game.
4. The corpus
A small corpus of video games has been gathered with the intention of detecting subtitling patterns which are common to diverse stakeholders in the industry: creators, distributors, developers, testers, content supervisors and users. The selection criteria included:

– the availability of cutscene compilations produced by gamers and published in YouTube channels;
– the legitimacy of extracting audiovisual clips and static images from these materials for non-commercial purposes (the status of each product was double-checked against the documents made available online by the owners or copyright holders);
– the total duration of the compilations, which had to be within the range of one to seven hours;
– the inclusion of a subtitle track throughout the cutscene compilation; and
– the possibility of comparing cinematic and in-game sequences, that is, non-interactive and interactive sections with subtitles.

After a cursory analysis of 25 samples, only nine of them finally succeeded in fulfilling these prerequisites: Back to the Future: The Game (Telltale Games 2010); Catherine (Atlus 2011); Anarchy Reigns (Platinum Games 2012); Deadpool (High Moon Studios 2013); Alien: Isolation (Creative Assembly 2014); Castlevania: Lords of Shadow 2 (Mercury Steam 2014); Halo 5 (343 Industries 2015); Rise of the Tomb Raider (Crystal Dynamics 2015); and Street Fighter V (Capcom and Dimps 2016). Each video game characterisation is presented below with additional information regarding the video game genre, the developer and the source
of the cutscene compilation, as well as commentary on all those features which stand out in terms of the textual track or the subtitle lines. While the partial reflections on the clips and screenshots only try to reveal the existence of fossilised incongruities based on past practices, the synthetic assessment – or verdict – at the bottom of the chart will try to offer an encompassing view of the final product as a whole, in the way that it reached its international audience. Finally, even though localised versions exist for several of the video games in our selected corpus, the evaluation of the use of SDH subtitles will be exclusively intralinguistic. The groundwork on which our categories are based combines the normative and informative sections of the Spanish SDH subtitling norm (Aenor 2012) and research work carried out by diverse authors (Díaz Cintas 2007; Díaz Cintas and Remael 2007; Tercedor Sánchez et al. 2007; Neves 2008; Pavesi and Perego 2008; Díaz Cintas 2008b; Gottlieb 2009; Perego 2009; Pereira 2010; Civera and Orero 2010; Báez Montero and Fernández Soneira 2010; Pedersen 2011; Arnáiz-Uzquiza 2015).
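Merely as an illustrative restatement of the selection criteria listed above (the chapter applied them manually to 25 candidate compilations, of which nine passed), a filtering step of this kind could be expressed as follows; the field names and the sample record are hypothetical.

```python
# Illustrative sketch of the corpus selection criteria described above.
# Field names and the example record are hypothetical.

def meets_criteria(c):
    """Return True if a candidate cutscene compilation satisfies all criteria."""
    return (
        c["cutscene_compilation_on_youtube"]          # gamer-produced compilation available
        and c["extraction_permitted_non_commercial"]  # clips/stills may legitimately be extracted
        and 1 <= c["duration_hours"] <= 7             # total duration between one and seven hours
        and c["subtitle_track_throughout"]            # subtitle track present throughout
        and c["cinematic_and_ingame_comparable"]      # cinematic vs in-game sections comparable
    )

candidate = {
    "title": "Back to the Future: The Game",
    "cutscene_compilation_on_youtube": True,
    "extraction_permitted_non_commercial": True,
    "duration_hours": 3.5,  # hypothetical value
    "subtitle_track_throughout": True,
    "cinematic_and_ingame_comparable": True,
}

print(meets_criteria(candidate))  # True for titles retained in the corpus
```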
4.1 First case study

Table 1. Characterisation of Back to the Future: The Game

Title: Back to the Future: The Game
Genre: Graphic adventure
Year: 2010
Developer: Telltale Games
Source of the cutscenes: Gamer's Little Playground – https://goo.gl/eRG9gP

Files analysed

File type and reference: BTTF_2010_1 – https://goo.gl/ri2erS
Observations: Roll-up subtitles presented in two lines. Faithful transcript of the audio track. Frequent colour clash. Small typeface. Lines too long and arbitrarily divided. Text-on-screen overwhelming.

File type and reference: Screenshots – https://goo.gl/eVX5wg
Observations: No attention is paid to the superimposition of the textual and audio tracks, to the point of partly or completely concealing the action that is taking place in the middle, and even upper third, of the screen. Line divisions are confirmed to be arbitrary and syntactically inadequate.

Verdict: Full transcript divided in irregular chunks. Idiosyncratic. In the light of the lack of condensation, it cannot be considered a subtitle proper, much less SDH.
4.2 Second case study

Table 2. Characterisation of Catherine

Title: Catherine
Genre: Adventure
Year: 2011
Developer: Atlus
Source of the cutscenes: UPlayNetwork – https://goo.gl/QTWfq6

Files analysed

File type and reference: C_2011_1 – https://goo.gl/OH7Uya
Observations: No reference is made in the subtitles to the siren, the helicopter noise, or the sound of steps. No subtitle during the news report or the camera movements from the entrance to the table where the protagonists are located and having a chat. It is impossible to determine who is actually speaking; there are no lip movements and no speaker ID. One-liners of extremely different lengths (up to 67 characters).

File type and reference: C_2011_2 – https://goo.gl/tqNO53
Observations: The voices in the protagonist's head are not subtitled. Music and noises not conveyed in writing.

File type and reference: C_2011_3 – https://goo.gl/IxGp5R
Observations: Initial sigh not subtitled, while onomatopoeias (*glug*), phatic expressions (huh?), hesitations (I-I), and doubts (right?) are conveyed using the textual track. No reference is made to the background music, the clinking sounds of the china, or those sound effects introduced to accentuate character reactions. Inclusion of multiple ellipses followed by an exclamation point when the facial expression would suffice. Use of the initial ellipsis in several lines, regardless of the existence of a pause. Idiosyncratic use of the asterisk before and after paralinguistic elements.

File type and reference: C_2011_4 – https://goo.gl/6A8ULK
Observations: No mention of background music. Use of end-of-line dashes instead of ellipsis. Shot changes not respected. No character identification. A one-liner covers the whole screen from left to right (77 characters).

File type and reference: Screenshots – https://goo.gl/3BEKGy
Observations: Sound descriptions and onomatopoeias are tagged using asterisks (beep, click, cough, crazed laughter, gasp, giggle, glug, gulp, huff, munch, phew, sigh, sniff); the correspondence between the actual sound and its description is rarely coincidental. Capitalisation stands for emphasis. Extremely long two-liners may cover the whole lower third of the screen from left to right. Several sound descriptions and dialogue text may be present in the same line. The use of inverted commas is not always typographically correct. Line length registers extreme variability. Subtitle lines may conceal important on-screen elements in close-ups.

Verdict: A clear although unsuccessful attempt has been made at normalising sound description conveyance using a tagging strategy – that of including asterisks (*) before and after the action word. Given that all dialogue lines are transcribed literally throughout the video game, these cannot be considered part of a subtitle. Rather than condensation and reduction, length suffers notable increases owing to the introduction of tags, hesitations, doubts and an unnecessary freight of typographic marks.
4.3 Third case study

Table 3. Characterisation of Anarchy Reigns

Title: Anarchy Reigns
Genre: Beat 'em up
Year: 2012
Developer: Platinum Games
Source of the cutscenes: RabidRetrospectGames – https://goo.gl/MUwTjd

Files analysed

File type and reference: AR_2012_1 – https://goo.gl/41F807
Observations: Unreadable text: bright colour and tiny font. No lip synchrony. Characters identified by name in capital letters under their avatar, which remains almost static throughout the chat. Paralinguistic elements (laughs) and sound effects (robot eyes) not included. Mood indicators (frustrated, tongue-in-cheek) not included either.

File type and reference: Screenshots – https://goo.gl/Om1JTC
Observations: Static avatar dialogic exchanges and cinematic cutscenes make use of different subtitling criteria: the former resort to a larger font, long one-liners (up to almost 60 characters) and two-liners of balanced length, while the latter present an almost unreadable font where three-liners are likely to be found.

Verdict: Irregularly divided full transcript. Not a subtitle. Not SDH.
4.4 Fourth case study

Table 4. Characterisation of Deadpool

Title: Deadpool
Genre: Beat 'em up
Year: 2013
Developer: High Moon Studios
Source of the cutscenes: Red's 3rd Dimension Gaming – https://goo.gl/0nLMZK

Files analysed

File type and reference: DP_2013_1 – https://goo.gl/pD6hmi
Observations: No reference to the rain outside, the phone beep, the gunshot or the final metallic clink in the textual track; those pieces of relevant information are lost to the deaf and hard of hearing (DH) audience. Tiny font. Capitalisation stands for emphatic intonation. Flash effect in short one-liners. Arbitrary line divisions. Excessive line length (up to almost 70 characters). On-screen text is replicated in the textual track when it represents voices inside the main character's head.
Table 4. (continued)

File type and reference: DP_2013_2 – https://goo.gl/sPxeuu
Observations: No reference is made to the background noise. Shot changes are not respected. Arbitrary line divisions (a second line seems to be automatically added when the character count reaches 66).

File type and reference: DP_2013_3 – https://goo.gl/qu6HXo
Observations: No reference is made to the different types of background music and the sound of crickets – which are not visible – or the laser beam, the explosion and the teleportation effects – which are indeed perceptible through the eye and may be left in the visual track alone. Shot changes are not respected. Extreme differences attested in line length for two-liners. On-screen text not subtitled when it is not voiced over. Lines of up to 72 characters. Inclusion of creative onomatopoeia (Ugghhhh…). Attempts have been made at conveying accent via the textual track (What you talkin' 'bout, Summers!).

File type and reference: DP_2013_4 – https://goo.gl/h7votb
Observations: Insistence on the use of idiosyncratic and creative forms to reflect emphasis (sooooo posting this; UUUGGGHHH!). No reference is made to background music or sound effects. Only voiced-over text on screen is subtitled. No mood indicators are included (irony, frustration, elation…).

File type and reference: DP_2013_5 – https://goo.gl/K74Ufv
Observations: Onomatopoeia takes precedence over tagging (Heh heh heh [sic]). Missing commas; orthographic errors. No indication that the voices are conveyed via radio or that there is an echo. Capitalisation means emphasis. Line divisions are arbitrary. One-liners of up to 68 characters. Use of triple exclamation points.

File type and reference: DP_2013_6 – https://goo.gl/Epk54p
Observations: Arbitrary line divisions in conjunction with hectic reading speeds.
Screenshots https://goo.gl /iZkbm9
Creative use of onomatopoeia. Extreme line lengths. Irregular orthography, particularly in terms of capitalisation. Inconsistent use of tags, either asterisks (*) or chevrons (17 cps) Orthography and typography Emphasis
✓
✓
Use of italics Accent conveyance in writing
✓ ✓
✓
✓ ✓
5.1 Tier 1: General level If it were to be interpreted as a rubric, or rather as a points-based scale of good subtitling practice in video games, the general level, or tier 1, would indicate the type of synchronised textual track that is to be expected. Our preliminary norm introduces the binary opposition of transcript versus subtitle as an indicator of the degree of faithfulness with respect to one of the vital components of the audiovisual product: the audio track. Despite the contradictory proclivity towards literalness in Aenor (2012) – contradictory because this aspect, taken to the extreme, is incompatible with the prevention of spikes in reading speed and the observance of most linguistic and stylistic caveats contained in the Spanish norm itself –, most authors (Gottlieb 2009) defend the idea that condensation and reduction are inescapable conditions at the time of editing subtitles adroitly. We would argue that SDH, with its freight of complementary information, without which deaf and hard of hearing audiences would be left in the dark, necessitates an even closer and more careful study of the original content and how it is adapted to the specifications of a new medium. Indeed, the adequacy of a subtitle cannot logically be discussed when it is a transcript that we are being presented with. In other words, if the video game industry creates a textual track
whose only virtue is to show a duplicate of what is being uttered by the characters, then a redefinition of the concept of subtitle is peremptory. In our corpus, a few textual tracks are quite a long way away from being accessible to any audience. Once the terminological conundrum is unveiled, developers must brace themselves to confront yet another dilemma: that of producing a subtitle which offers no additional paralinguistic content, among other features detailed below, or opting for a more complete SDH version. Despite the observations made in the individual characterisation, some of them indicative of an increased accessibility awareness in AAA titles – however misguided – on the part of the industry, no textual track in our corpus is consistently aimed at the achievement of universal accessibility. In fact, a few of the case studies have shown that the label ‘paralinguistic’ and its immediate derivations are the source of much confusion, which a preliminary norm supplemented by a series of practical examples would help to alleviate. In turn, this would diminish the remarkable levels of variance in subtitle presentation.
5.2 Tier 2: Relationship between sound and text A second level of analysis, or tier 2, would concentrate on the information that is contained in the SDH track. Ideally, for a subtitle to be highly accessible, dialogue, music scores, soundtracks, some form of coherent speaker identification, additional text-based instructions, mood indicators and other paralinguistic elements like sound and noise descriptions, as well as onomatopoeia, must be included. In very complex, extremely constrained audiovisual sequences, whether they may be cinematic or in-game, we would suggest the exploitation of non-linguistic channels to successfully complete the process of meaning transference to the player. This could plausibly be achieved by reflecting upon the potential function of controller vibration – or any other haptic accessory – and implementing their introduction in the video game accessibility norm. If, as in one of our Halo 5 clips, we feel that the SDH cannot possibly manage the coexistence of dialogue, soundtrack, gunshots being fired, gasps, explosions, laser beams, and speaker identification, then the subtitler should be able to separate the wheat from the chaff, delineate a relevance hierarchy and resort to as many alternative sensory channels as needed to ensure that semiotic integrity is not torn to pieces. In terms of speaker identification, our corpus reveals that aesthetics may turn out to be detrimental for the players’ level of understanding of the textual track, since different colours do not tag different speakers. What is more, most of the identifications by name attested in the corpus were extremely space-consuming and repetitious, thus affecting the immersive quality of the gaming experience.
Music, soundtrack, mood indicators and paralinguistic elements in SDH are very scarce in our selected samples; in consequence, those should be the parameters on which the focus is placed in forthcoming design projects.
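To make the idea of a relevance hierarchy concrete, the sketch below (not part of this study; the cue labels, priorities and character budget are all illustrative assumptions) ranks competing cues by relevance and routes whatever does not fit into the subtitle's character budget to a haptic or other non-linguistic channel.

```python
# A minimal, self-contained sketch of the "relevance hierarchy" idea discussed
# above: when the subtitle track is saturated, lower-priority cues are routed
# to an alternative sensory channel (e.g. haptics) instead of being dropped.
from dataclasses import dataclass

@dataclass
class Cue:
    label: str      # e.g. "dialogue", "gunshots", "soundtrack"
    priority: int   # 1 = most relevant for following the scene
    chars: int      # characters needed to render the cue as text

MAX_CHARS_PER_SUBTITLE = 74   # two lines of 37 characters (illustrative value)

def distribute(cues):
    """Assign cues to the text track by priority; overflow goes elsewhere."""
    text_track, other_channels, used = [], [], 0
    for cue in sorted(cues, key=lambda c: c.priority):
        if used + cue.chars <= MAX_CHARS_PER_SUBTITLE:
            text_track.append(cue.label)
            used += cue.chars
        else:
            other_channels.append(cue.label)
    return text_track, other_channels

scene = [
    Cue("dialogue", 1, 42), Cue("speaker identification", 2, 10),
    Cue("explosion", 3, 11), Cue("gunshots", 4, 10),
    Cue("soundtrack", 5, 12), Cue("gasps", 6, 7),
]
text, haptics = distribute(scene)
print("text track:", text)                 # highest-priority cues that fit
print("haptic/other channels:", haptics)   # cues delegated to other senses
```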
5.3 Tier 3: Textual level The third level, or tier 3, deals with textual and stylistic aspects. The two categories of sequencing and number of lines intend to measure the levels of cohesion, coherence and synchrony of the final product. Narrative and dialogic styles try to accomplish different communicative goals, and the individuals at the receiving end of the subtitle should notice this separation. CLOS, for instance, employs dissimilar sequencing strategies in narrative cinematic sections and in action-packed in-game episodes. Syntactically appropriate line divisions, which avoid breaking units of meaning, together with the consideration of shot changes as a form of visual syntax cannot be ignored: automatic line breaks based on maximum number of character thresholds may generate illegible texts and result in “gaze shifts from the subtitle area to the image” (Krejtz et al. 2013, 8). An immediate derivation of the previous four parameters, imbricated with synchrony, is the question of moderate reading speeds. Our corpus reveals that quick dialogic exchanges, automatic line divisions, and transcripts whose sequencing is excessively ambitious pave the way for inconsistent or illegible textual tracks. The road from 17 to 21 characters per second seems to be the one most travelled even by recent blockbusters, whereas anything over 15 cps – even fewer in the case of children-oriented products – tends to be discouraged in the Spanish norm and the above references. In terms of orthography and emphatic forms, the options remain as idiosyncratic as they may be, which may seem logical given the absence of agreed-upon industry standards, since there appears to be no identifiable pattern of behaviour. Lastly, regarding the conveyance in written form of regional accents and speaker idiolects, incipient attempts to find strategies like those of onomatopoeia might be starting to emerge; however, the evidence is yet too meagre to draw definite conclusions.
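As a worked illustration of the reading-speed thresholds mentioned above, the following sketch (not taken from the study) computes characters per second for a timed subtitle and checks it against the 17 cps and 15 cps ceilings; the raw character count and the example timecodes are assumptions made for the sake of the example.

```python
# A minimal sketch: characters per second (cps) for a subtitle displayed
# between two timecodes, flagged against the ceilings discussed in the text.

def cps(text: str, in_time: float, out_time: float) -> float:
    """Characters per second for a subtitle shown between two timecodes (s)."""
    visible = text.replace("\n", " ")          # count the line break as a space
    return len(visible) / (out_time - in_time)

def verdict(speed: float, ceiling: float = 17.0) -> str:
    return "within norm" if speed <= ceiling else "too fast"

# Hypothetical two-liner displayed for 2.4 seconds.
line = "He looks to where the sound\nhad come from."
speed = cps(line, in_time=10.0, out_time=12.4)
print(f"{speed:.1f} cps -> {verdict(speed)}")          # against 17 cps
print(f"{speed:.1f} cps -> {verdict(speed, 15.0)}")    # stricter 15 cps ceiling
```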
5.4 Overall assessment None of the video games analysed uses condensation or reduction strategies. Textual tracks contain full transcripts rather than subtitles. If a subtitle typology had to be attributed to the final products, it would be that of the traditional subtitle instead of its accessibility-focused alternative. All instances are predominantly idiosyncratic, to the point of breaking their own linguistic and stylistic rules.
As for the relationship between the two main audiovisual layers of sound and text, all case studies reflect dialogue very faithfully, but omit musical scores and soundtracks in writing; this information is completely lost to DH audiences. Speaker identification, conveyed through the addition of nametags at the beginning of the phrase, appears in four cases. Text on screen and in-game instructions rely on the written medium and are present in all video games in the corpus, although didascalies – a term borrowed from drama which stands for “[…] general information about how the characters utter their speech” (Pereira 2010, 92, note 11) – are almost non-existent: only ROTTR makes quite a timid use of mood indicators. Paralinguistic elements are attested in just three games. As for the purely textual level, sequencing is completely ignored and the appearance and disappearance of subtitle lines on the screen is tightly determined by either the voice-over or the dubbing sound tracks, while the visual dimension is forgotten. The case of CLOS could perhaps constitute an exception in this regard, although its arbitrary divisions and extra lines lead to the conclusion that time and space constraints, and not the intentional application of a set of projectbased guidelines, are the reason behind this patent divergence. Overall, the maximum number of lines exceeds two in a total of four video games. Line divisions are always arbitrary and shot changes are not even a matter of consideration. On average, reading speeds exceed 17 characters per second in seven out of nine cases. Orthography and typography are fairly regular, even though that does not seem to be the case in humorous and creative onomatopoeic games, such as DP and SFV. Emphasis, commonly expressed in the form of capitalisation, appears in three games, while the use of italics and the attempt at conveying a variety of accents in writing have only been observed in two cases.
6. The potential applications
The present study has compiled a small corpus of video games, approached each case in search of inconsistencies and errors first and patterns of good practice afterwards, applied the theoretical frameworks derived from consolidated research in audiovisual translation and accessibility, put forward a preliminary norm to evaluate the quality of a subtitling project specifically oriented to the video game industry, and contrasted the empirical results with the ideal hypothetical aspiration of making multimedia entertainment products more user friendly. The application of this preliminary norm, and its gradual expansion with additional levels that take other communicative channels into account, will be beneficial not just for the adaptation of pre-existing hardware, ancillary devices
and re-releases of popular games, but for the consideration of accessibility as a point of departure instead of an unpredictably burdensome task. If neither the industry nor the users are able to agree on the characteristics that are definitory of a subtitle, then quality cannot be demanded because there is no model to be followed. Our corpus supports the idea that, in the absence of standards, conventions, rules or norms, the final product is affected negatively. Far from restricting the creative freedom the art of video game design involves, a common framework of reference would help to ascertain whether the innovative touch of the artist will reach the user as intended, or whether the omissions incurred in central aspects that guarantee the maintenance of a smooth communicative process will eventually outweigh and occlude those novel ideas that make the project what it is. Transcripts that leave syntactic logic aside can, and should, be replaced with at least one common core version of SDH subtitles which pays heed to the inherent relationship that brings together the visual and the aural. While sign language interpreting may be judged by some as reductive and oriented to a niche audience, well-crafted SDH subtitles with regular divisions, a manageable number of lines and reading speed, sound descriptions, mood indicators, character identifications, good style and correct orthotypography would serve users much better than what the industry currently offers. If transcripts are made regardless of great technical constraints, SDH subtitles that follow an agreed upon norm should not present an insurmountable challenge.
6.1 A note on SDH and language learning The shift from the transcript to the real SDH subtitle is not a choice between the lesser of two evils, but the provision of a fundamental service which, despite having repeatedly been promised, remains undelivered time and time again. In fact, both the home market and the international market would receive the change with open arms, given that video games combine interactivity with the educational potentialities of audiovisual media. Their multisemiotic nature renders video games useful for language learners, who may choose to utilise them as receptive or productive resources. Apart from direct exposure to and manipulation of the entertainment software as was intended by the creator, which already provides plentiful opportunities to train oral and written receptive skills, a more proactive and critical use of subtitled material or, alternatively, material to be subtitled by the viewer, would present the learner with a variety of situations in which a hands-on approach is called for: from the replication of the tasks carried out in the present study – the evaluation of a final product by reinterpreting its every signifying layer – to its intra or
interlinguistic translation and subtitling keeping in mind the needs of the target audience, to the establishment of an empirical typology which guides the subtitler through the complexities of a hierarchy of information processing based on viewer preferences. Subtitles in video games require an immediate redefinition and SDH accessibility is the perfect starting point.
Supplementary audiovisual presentation The author has prepared a supplementary audiovisual presentation which synthesises the contents of his research project: SDH in video games. Xunco English YouTube channel. https://goo.gl/nSTQpX
Acknowledgements The author would like to thank Dr Pablo Romero Fresco (University of Roehampton, London, UK) for his insightful comments on the first version of this manuscript.
References Aenor. 2012. UNE 153010. Subtitulado para personas sordas y personas con discapacidad auditiva. Madrid: Aenor. Agost, Rosa. 1999. Traducción y doblaje: palabras, voces e imágenes. Barcelona: Ariel. Arnáiz-Uzquiza, Verónica. 2015. “Long Questionnaire in Spain.” In The Reception of Subtitles for the Deaf and Hard of Hearing in Europe, ed. by Pablo Romero-Fresco, 95–116. Frankfurt am Main: Peter Lang. Báez Montero, Inmaculada, and Ana María Fernández Soneira. 2010. “Spanish Deaf People as Recipients of Close Captioning.” In Listening to Subtitles: Subtitles for the Deaf and Hard of Hearing, ed. by Anna Matamala, and Pilar Orero, 25–44. Bern: Peter Lang. Bartelt-Krantz, Michaela. 2011. “Game Localization Management: Balancing Linguistic Quality and Financial Efficiency.” Trans. Revista de Traductología 15: 83–88. Bernal Merino, Miguel Ángel. 2015. Translation and Localisation in Video Games: Making Entertainment Software Global. New York: Routledge. Bogost, Ian. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge (MA): The MIT Press. https://doi.org/10.7551/mitpress/5334.001.0001 Chandler, Heather Maxwell. 2014. The Game Production Handbook. 3rd ed. Sudbury (MA): Jones and Bartlett Learning. Chandler, Heather Maxwell, and Stephanie O’Malley Deming. 2012. The Game Localization Handbook. 2nd ed. Burlington (MA): Jones and Bartlett Learning. Chaume, Frederic. 2012. Audiovisual Translation: Dubbing. Manchester: St. Jerome.
Christou, Chris, Jenny McKearney, and Ryan Warden. 2011. “Enabling the Localization of Large Role-Playing Games.” Trans. Revista de Traductología 15: 39–51. https://doi.org/10.24310/TRANS.2011.v0i15.3194
Civera, Clara, and Pilar Orero. 2010. “Introducing Icons in Subtitles for the Deaf and Hard of Hearing: Optimising Reception?” In Listening to Subtitles: Subtitles for the Deaf and Hard of Hearing, ed. by Anna Matamala, and Pilar Orero, 149–162. Bern: Peter Lang. De Pedro Ricoy, Raquel. 2007. “Internationalization vs. Localization: The Translation of Videogame Advertising.” Meta: Translators’ Journal 52 (2): 260–275. https://doi.org/10.7202/016069ar
Díaz Cintas, Jorge. 2007. “Traducción audiovisual y accesibilidad.” In Traducción y accesibilidad. Subtitulación para sordos y audiodescripción para ciegos: nuevas modalidades de traducción audiovisual, ed. by Catalina Jiménez Hurtado, 9–23. Frankfurt: Peter Lang. Díaz Cintas, Jorge. 2008a. “Introduction: The Didactics of Audiovisual Translation.” In The Didactics of Audiovisual Translation, ed. by Jorge Díaz Cintas, 1–18. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.77.03dia Díaz Cintas, Jorge. 2008b. “Teaching and Learning to Subtitle in an Academic Environment.” In The Didactics of Audiovisual Translation, ed. by Jorge Díaz Cintas, 89–104. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.77.10dia Díaz Cintas, Jorge, and Aline Remael. 2007. Audiovisual Translation: Subtitling. Manchester: St. Jerome. Díaz Montón, Diana. 2011. “La traducción amateur de videojuegos al español.” Trans. Revista de Traductología 15: 69–82. https://doi.org/10.24310/TRANS.2011.v0i15.3196 Edwards, Kate. 2011. “Culturalization: The Geopolitical and Cultural Dimension of Game Content.” Trans. Revista de Traductología 15: 19–28. https://doi.org/10.24310/TRANS.2011.v0i15.3192
Egenfeldt-Nielsen, Simon, Jonas Heide Smith, and Susana Pajares Tosca. 2008. Understanding Video Games: The Essential Introduction. New York: Routledge. Gottlieb, Henrik. 2009. “Subtitling Against the Current: Danish Concepts, English Minds.” In New Trends in Audiovisual Translation, ed. by Jorge Díaz Cintas, 21–43. Bristol: Multilingual Matters. https://doi.org/10.21832/9781847691552‑004 Krejtz, Izabela, Agnieszka Szarkowska, and Krzysztof Krejtz. 2013. “The Effect of Shot Changes on Eye Movements in Subtitling.” Journal of Eye Movement Research 6 (5): 1–12. Méndez González, Ramón. 2014. “Traducir para un nuevo entorno cultural: el sector de los videojuegos.” In Traducción e industrias culturales: Nuevas perspectivas de análisis, ed. by Xoán Montero Rodríguez, 105–120. Frankfurt: Peter Lang. Neves, Josélia. 2008. “10 Fallacies about Subtitling for the Deaf and the Hard of Hearing.” The Journal of Specialised Translation 10: 128–143. O’Hagan, Minako, and Carmen Mangiron. 2013. Game Localization. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.106 Pavesi, Maria, and Elisa Perego. 2008. “Tailor-Made Interlingual Subtitling as a Means to Enhance Second Language Acquisition.” In The Didactics of Audiovisual Translation, ed. by Jorge Díaz Cintas, 215–226. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.77.21pav
Pedersen, Jan. 2011. Subtitling Norms for Television: An Exploration Focussing on Extralinguistic Cultural References. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.98
Perego, Elisa. 2009. “The Codification of Nonverbal Information in Subtitled Texts.” In New Trends in Audiovisual Translation, ed. by Jorge Díaz Cintas, 58–69. Bristol: Multilingual Matters. https://doi.org/10.21832/9781847691552‑006 Pereira, Ana. 2010. “Criteria for Elaborating Subtitles for the Deaf and Hard of Hearing Adults in Spain: Description of a Case Study.” In Listening to Subtitles: Subtitles for the Deaf and Hard of Hearing, ed. by Anna Matamala, and Pilar Orero, 87–102. Bern: Peter Lang. Schäler, Reinhard. 2008. “Linguistic Resources and Localisation.” In Topics in Language Resources for Translation and Localisation, ed. by Elia Yuste Rodrigo, 195–214. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.79.13sch Tercedor Sánchez, Maribel, Pilar Lara Burgos, Dolores Herrador Molina, Irene Márquez Linares, and Lourdes Márquez Alhambra. 2007. “Parámetros de análisis en la subtitulación accesible.” In Traducción y accesibilidad. Subtitulación para sordos y audiodescripción para ciegos: nuevas modalidades de traducción audiovisual, ed. by Catalina Jiménez Hurtado, 41–55. Frankfurt: Peter Lang. Trabattoni, Marco. 2014. Shenmue: Una sfida semiotica. Milan: Edizioni Unicopli. Whitman-Linsen, Candace. 1992. Through the Dubbing Glass: The Synchronization of American Motion Pictures into German, French and Spanish. Frankfurt: Peter Lang. Yuste Frías, José. 2014. “Traducción y paratraducción en la localización de videojuegos.” Scientia Traductionis 15: 61–76. https://doi.org/10.5007/1980‑4237.2014n15p61
Video game corpus Alien: Isolation (Creative Assembly 2014). Anarchy Reigns (Platinum Games 2012). Back to the Future: The Game (Telltale Games 2010). Castlevania: Lords of Shadow 2 (Mercury Steam 2014). Catherine (Atlus 2011). Deadpool (High Moon Studios 2013). Halo 5 (343 Industries 2015). Rise of the Tomb Raider (Crystal Dynamics 2015). Street Fighter V (Capcom and Dimps 2016).
Studying the language of Dutch audio description: An example of a corpus-based analysis
Nina Reviers
University of Antwerp
The present paper aims to combine insights from Applied Linguistics, Corpus Linguistics, Multimodality Research and Audiovisual Translation Studies in order to explore language use in a specific form of audiovisual translation, namely Audio Description (AD) for the blind and visually impaired. It is said that the communicative function of ADs and their multimodal context have a significant influence on the lexical, grammatical and syntactical choices describers make. This article aims to uncover these idiosyncratic linguistic patterns by conducting a quantitative and qualitative analysis of an annotated, audiovisual corpus of 39 Dutch films and series that have been released with AD in Flanders and the Netherlands. The paper analyses frequency lists, keywords, part-of-speech distributions and type-token ratios statistically, and subsequently conducts a qualitative analysis taking systemic functional linguistics as a theoretical framework. The results confirm the hypothesis that the language of AD is idiosyncratic and highlight the most salient lexico-grammatical features characterising the language of Dutch AD. Keywords: audio description, corpus studies, audiovisual translation, multimodality
1. Introduction
The field of Audiovisual Translation (AVT) has been an increasingly popular topic in the past three decades and it has been studied from a range of interdisciplinary approaches, gradually moving from the periphery of Translation Studies (TS) to its very centre. One of the youngest disciplines in AVT that has been attracting more and more attention is Audio Description (AD), a form of media accessibility that renders audiovisual texts, such as films, television series and the-
atre performances, accessible to blind and visually impaired audiences. AD is a form of intersemiotic translation that transfers images into words that are delivered aurally in between the sound effects and dialogues of the original audiovisual product. The aim is that audiences – not only the blind and visually impaired, but also those lacking access to the images for various reasons – can understand and enjoy the audiovisual text through the audio channel only. As a professional access service, AD took off in the 1990s and while it is well developed in some countries, it is rare or even non-existent in others (Reviers 2016). Research on AD is also recent. The main research focus has been on descriptive studies of specific genres, such as film, theatre, opera or dance, and on the detailed analysis of specific features of one text or a small collection of texts. Prevalent research topics include the question of content selection in AD, i.e., which visual elements should be prioritised (Vercauteren 2012), the degree of interpretation that is acceptable in AD with regard to, for instance, facial expressions (Orero 2012), and the reception of AD by end users (Fryer and Freeman 2012, 2013; Chmiel and Mazur 2012), to name but a few (see also Remael et al. 2016). Contrary to other forms of AVT such as subtitling and dubbing, AD research has concentrated on the analysis and transfer of the visual aspects of the source text, rather than on the verbal aspects of wording and formulation of the target text. Linguistic aspects of AD, which are the main focus of this study, have been researched only by a handful of scholars, even though the literature frequently highlights the importance of language usage that is concise and comprehensible, yet simultaneously precise and vivid. Bourne and Jiménez (2007) were among the first to study linguistic aspects of AD by contrasting the Spanish and English descriptions of the same film. Kluckhohn (2005) studied cohesion and information order by analysing the AD of the German film Laura, Mein Engel (Runze 1994). AD guidelines for professional describers also advise on how to use or not to use language in AD with regard to aspects such as sentence length and complexity, word choice, tense, cohesion, albeit in very general terms (Ofcom 2000; Remael et al. 2014; Matamala et al. 2010). Salway (2007) tackled linguistic issues in English AD in a larger collection of texts with the TIWO corpus project. These studies, however, do not paint a complete picture and cover only a limited set of languages. What is more, today, scholars and practitioners emphasise that the translation of emotional and aesthetic aspects of the visual image – for which wording and style are particularly relevant – are as important as making the narrative accessible (Fryer and Freeman 2013). Finally, the scholarly study of different forms of AVT, including AD, has been criticised over the years, regarding the lack of methodological and theoretical rigour (Gambier 2006, 2009; Pérez-González 2014 and Remael et al. 2016). After all, it has relied largely on small case-studies and it has not been able to adequately integrate the multimodal aspects of this
type of translation – i.e., the fact that different semiotic signs such as dialogue, music and sound effects interact – into its research designs. In addition, research into the language of Dutch AD, the more specific focus of this study, is virtually non-existent. Whilst at the start of the project AD services in the Low Countries were still in their infancy, in 2016 AD received a new boost under the influence of new regulations, incentives and technological advances.1 At the time of writing, about 15 Dutch titles were available on DVD with AD, almost 60 titles could be accessed with a newly developed app called Earcatch (earcatch.nl) and at least three prime-time television series were aired (with open and closed AD) by the Flemish public broadcaster VRT per year.2 These evolutions underline the importance of research projects aimed at supporting AD in the Low Countries. In particular, Corpus Linguistics research in the field of AD is sparse, with just three relevant projects completed so far. The TIWO project (Television in Words) carried out a detailed analysis of the language of AD in a corpus of 91 British English descriptions (Salway et al. 2004, Salway 2007). The TRACCE project (Jiménez and Seibel 2012) developed what is currently the largest AD corpus, with 300 audio described films in Spanish and 50 in German, English and French. Finally, the recent VIW project (Visuals into Words) conducted at the Autonomous University of Barcelona is the first AD corpus freely available online. It contains 10 English, Spanish and Catalan descriptions of the same short film.3 Trying to move beyond the current state of affairs, the research project described here aims to combine insights from Applied Linguistics, Corpus Linguistics, Multimodality Research and AVT in order to obtain a detailed description of the lexico-grammatical features that characterise language use in AD in Flanders and the Netherlands. The hypothesis is that language use in AD is idiosyncratic and determined by the communicative function the text fulfils for its target audi1. In Flanders, the Flemish Film Fund (Vlaams Audiovisueel Fonds; vaf.be) has made AD mandatory for all the films they finance. In the Netherlands, the development of an app called Earcatch (earcatch.nl) through which the descriptions of Dutch films and series can be accessed, has spurred the development of AD services for both film and television. The Dutch Film Fund (Film Fonds; filmfonds.nl) is actively promoting AD and provides the possibility of partial funding. 2. (a) These data were collected in October 2016. (b) The Flemish Public Broadcaster VRT offers AD as an additional audio channel that can be activated via the language menu for digital tv users (closed AD). In addition, the series is aired in parallel on another channel (OP12), where the AD is automatically activated, so that viewers who do not have digital TV, can also access the AD. 3. For more information, visit the project website at http://pagines.uab.cat/viw/ (accessed March 10, 2017).
ence (Salway 2007; Reviers et al. 2015). In addition, the multimodal context of the text type – the fact that the descriptions of salient visual elements by a narrator have to fit in between the dialogues, music and sound effects of the original audiovisual product – is thought to have a significant influence on the describers’ lexical, grammatical and syntactical choices. The current article presents the first phase of a four-year PhD project, conducted at the University of Antwerp between 2013 and 2017 and provides a first outline of the most salient features of this audiovisual text type. The goal of this first phase was to conduct a quantitative and qualitative lexico-grammatical analysis of a collection of Dutch descriptions. This analysis builds on the results of a pilot study conducted between 2010 and 2011 (Reviers et al. 2015). In Section 2, the article gives a brief description of the corpus and its design. This is followed by a general overview of the corpus data, with a brief outline of the methodology for data extraction and statistical processing. Finally, in Section 3, the data are analysed within the framework of Systemic Functional Linguistics (SFL).
2. The Dutch AD corpus
2.1 Corpus design The corpus under analysis contains 39 descriptions – films and episodes of television series – totalling 154,570 words, and it was collected between 2011 and 2013. At that time, the corpus included the bulk of the material available in Dutch, except for a handful of products which were no longer available or never recorded, or for which we did not obtain copyright permission. But AD has grown exponentially since then, both in Flanders and the Netherlands. As a result, corpus collection is an ongoing process and it is our goal to keep it up to date at all times. That said, the corpus of 39 scripts that is the basis for the current analysis (henceforth referred to as the Dutch AD corpus) is a representative sample that contains three genres: action, drama and humour (see Table 1). While this may seem a rather limited number of genres, it is an adequate reflection of the Dutch AD market. To begin with, there are few Dutch films or TV series in other genres, such as thriller or sci-fi, and what is more, these are not the type of products that are made accessible to the blind and visually impaired. AD producers in the Low Countries seem to focus on popular mainstream film and TV products for the time being. The corpus contains text, video and audio material of the 39 descriptions selected for the analysis. First, the voiced descriptions were transcribed from the DVD and provided with timecodes. The mark-up language XML was selected as
Table 1. Genre distribution in the Dutch AD corpus (154,570 words)
Type | Action | Drama | Humour | Total
film | 13% | 61% | 1% | 76%
episode | 22% | 1% | 1% | 24%
Total | 35% | 62% | 3% | 100%
the representation format for these transcriptions and timecodes, so that partsof-speech (PoS) and lemma annotations could be added automatically with the Frog software.4 The timecodes, in turn, allowed us to link each descriptive unit to its corresponding position in the video file through a specially developed multimodal concordancer, so that the voiced recording and the surrounding multimodal elements (such as sound effects, music and dialogue) could be consulted together with the transcriptions. This design facilitated the computer-based analysis of word, lemma and PoS frequencies, as well as the calculation of other relevant statistics like standardised type token ratio (TTR), average length of words, sentences and descriptive units, and average AD reading speed.5 These data were collected in Excel for further statistical processing. The aim of the statistical analysis is to describe the features of a special language or register in terms of statistically significant differences between the corpus studied and a general language sample, as discussed in the next section.
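The sketch below illustrates, in Python (the language mentioned in note 5), how statistics of this kind can be derived from timed, annotated transcriptions. The XML element and attribute names are invented for the example and do not reproduce the project's actual TEI P5 encoding, and the standardised type-token ratio is computed over fixed-size chunks as a simplification.

```python
# A minimal sketch, not the project's actual pipeline or schema: timed,
# PoS-annotated AD units in XML yield normalised frequencies, a standardised
# TTR and an AD reading speed. <unit>, begin/end, lemma/pos are illustrative.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE = """<ad>
  <unit begin="12.0" end="14.5">
    <w lemma="kijken" pos="verb">kijkt</w>
    <w lemma="naar" pos="preposition">naar</w>
    <w lemma="de" pos="article">de</w>
    <w lemma="deur" pos="noun">deur</w>
  </unit>
  <unit begin="20.0" end="21.5">
    <w lemma="zij" pos="pronoun">ze</w>
    <w lemma="glimlachen" pos="verb">glimlacht</w>
  </unit>
</ad>"""

root = ET.fromstring(SAMPLE)
tokens = [w.text for w in root.iter("w")]
lemmas = Counter(w.get("lemma") for w in root.iter("w"))
pos = Counter(w.get("pos") for w in root.iter("w"))

total = len(tokens)
per_thousand = {k: v / total * 1000 for k, v in pos.items()}  # normalised frequency

def standardised_ttr(toks, chunk=1000):
    """Mean type-token ratio over fixed-size chunks (1,000 words in real corpora)."""
    chunks = [toks[i:i + chunk] for i in range(0, len(toks), chunk)] or [toks]
    return sum(len(set(c)) / len(c) for c in chunks) / len(chunks)

speeds = []
for unit in root.iter("unit"):
    words = list(unit.iter("w"))
    duration = float(unit.get("end")) - float(unit.get("begin"))
    speeds.append(len(words) / duration)        # words per second per AD unit

print("lemma frequencies:", lemmas.most_common(3))
print("PoS per 1,000 words:", per_thousand)
print("standardised TTR (toy chunk size):", round(standardised_ttr(tokens, chunk=4), 2))
print("mean AD reading speed (w/s):", round(sum(speeds) / len(speeds), 2))
```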
2.2 Statistical processing To begin with, the frequency counts extracted from the corpus texts were normalised, i.e., converted into a value per thousand words, because the length of the individual texts varied considerably. Means, medians and standard deviations were subsequently calculated. The standard deviation (sd) is a measure reflecting the variety of results across individual texts in the corpus. It is an estimate of how much the frequency scores in each individual text deviate from the average frequency score; in other words, how dispersed the data are. The standard deviation in our collection of texts was on the low side overall (maximum sd of 15, average 4. (a) The project followed the TEI P5 guidelines for XML developed by the Text Encoding Initiative (http://www.tei-c.org/Guidelines/P5/ (accessed May 23, 2016)). (b) The Frog system is a memory-based morphosyntactic tagger and dependency parser for Dutch developed by the CLiPS research group of the University of Antwerp, Belgium (see Van den Bosch et al. 2007). 5. These data were extracted with Python, a standard programming language in natural language processing (see python.org and nltk.org).
sd of 6.2). This signifies that the results were generally clustered around the average and that the corpus has a high degree of consistency. Next, the lemma and PoS frequencies were contrasted with frequency data from two Dutch reference corpora: the SoNaR corpus, a written text corpus of about 500 million words from different fields and genres, and Subtlex-nl (Keuleers and Brysbaert 2010), a 44 million word corpus of Dutch subtitles.6 This comparison investigates to what extent the frequency data in terms of PoS from the Dutch AD corpus deviate from the values one would expect to find in general language (represented by the SoNaR corpus) and in language use in audiovisual texts more in particular (represented by the Subtlex-nl corpus, which contains a language variety determined by the multimodal character of the source and target text as well). A chi-square goodness-of-fit test was used to calculate the probability or p-value for the PoS frequencies. Usually a p-value of 0.05 is taken as the critical value, below which the observed differences in frequency are significant. The obtained p-values were less than 0.001 for both corpora, which means that the observed differences between our corpus and the reference corpora are significant and not due to chance. Usually it is assumed that the lower the p-value, the more significant the differences between the two samples. However, taking the p-value as a measure for the degree of difference has been criticised lately, and some argue that the calculation of an effect size measure is more accurate (Nuzzo 2014). Therefore, we calculated an effect size (w) for the PoS frequencies as well (for SoNaR effect size w = 0.27; for Subtlex-nl effect size w = 1.02). The effect size was higher in the Subtlex-nl corpus, meaning that language use in our corpus in terms of PoS differs the most from language use in the subtitling corpus. Finally, a p-value and an effect size were calculated for each PoS individually. This time we used a different critical value for p (alpha = 0.002 for SoNaR and alpha = 0.006 for Subtlex-nl) after taking into account a Bonferroni correction for multiple comparisons (Field 2009). This revealed that all measured PoS categories (adjectives, nouns, adverbs, verbs, articles, conjunctions, pronouns and prepositions) deviate from the reference values (p < 0.0001) when comparing with both reference corpora for all categories (see Table 4). However, the effect sizes for each PoS category were rather small (0.1 on average for SoNaR and 0.3 for Subtlexnl). In other words, the results are statistically significant, but the size of the differences is modest. The results discussed above also corroborate what previous research has suggested. In the pilot study preceding the current project, a similar analysis was conducted, albeit with a smaller corpus and with another reference
6. For more information, visit the project website: http://lands.let.ru.nl/cgn/ (accessed October 30, 2017).
corpus, namely the Corpus of Spoken Dutch (CGN).7 The results presented in this article overlap with and extend the results from the pilot study, revealing the same overall tendencies (Reviers et al. 2015). In brief, the results of the quantitative analysis support the hypothesis that there is indeed a language of AD, also in Dutch, with idiosyncratic features.8
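The following sketch shows the shape of this statistical step with placeholder counts rather than the corpus figures reported above: a chi-square goodness-of-fit test against expected proportions from a reference corpus, Cohen's w as the effect size, and a Bonferroni-adjusted alpha for the per-category comparisons.

```python
# A hedged sketch of the chi-square goodness-of-fit, effect size (Cohen's w)
# and Bonferroni steps described above. All counts and proportions below are
# placeholders, not the values obtained for the Dutch AD corpus.
import math
from scipy.stats import chisquare

observed = {"noun": 30300, "verb": 28100, "adjective": 8500, "article": 19100}
reference_props = {"noun": 0.17, "verb": 0.16, "adjective": 0.065, "article": 0.11}

categories = list(observed)
n = sum(observed.values())
# Rescale the reference proportions so expected counts sum to n, as chisquare requires.
ref_total = sum(reference_props[c] for c in categories)
expected = [reference_props[c] / ref_total * n for c in categories]
obs = [observed[c] for c in categories]

chi2, p = chisquare(f_obs=obs, f_exp=expected)
w = math.sqrt(chi2 / n)               # Cohen's w effect size
alpha = 0.05 / len(categories)        # Bonferroni correction for multiple comparisons

print(f"chi2 = {chi2:.1f}, p = {p:.4g}, effect size w = {w:.2f}")
print(f"Bonferroni-adjusted alpha for {len(categories)} comparisons: {alpha:.4f}")
```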
2.3 Overview of the data Tables 2 to 11 provide a summary of the features of the language of AD in Dutch, which are discussed in greater detail in the next section. Table 2 is an overview of the PoS distribution in our Dutch AD corpus. Note that this distribution is based on PoS tags that follow a formal, rather than a functional approach. This means that the Frog software tags nominal adjectives as adjectives, even if they function as nouns in the sentence at issue. Example: De oude liggen daar [*The old are there]. Recalculating frequencies to reflect the functional use of PoS is not unproblematic based on the current tags in the corpus, and certain categories overlap partially. For instance, there is a label “free” that applies to words that are used both adverbially or predicatively. In Table 3, therefore, adjectives as well as verbs used predicatively and adverbially are grouped together, which is not ideal for our purpose. However, Table 3 does put the data from Table 2 into perspective. For instance, where adjectives have a relative frequency of around 6% in the formal approach, these numbers drop to 3% in the functional perspective. In other words, adjectives are more often used in other ways than in the traditional prenominal position. Table 4 summarises the comparison of our corpus to the two reference samples, SoNaR and Subtlex-nl and contains the p-values and effect sizes, which point to statistically significant differences compared to both reference corpora for all PoS categories measured (p < 0.0001). The effect sizes, however, point to rather small differences. Table 5 lists the 50 most frequent lemmas and keywords, taking only the open class words into account. Keywords were sorted based on their effect sizes, taking the lemma frequencies from SoNaR as reference values. In other words, keywords are words that occur more frequently than in the SoNaR corpus.
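As an illustration of the keyword step just described, the sketch below ranks lemmas that are over-represented relative to a reference corpus by an effect size. Since the exact keyness measure is not specified here, the sketch uses Cohen's h for two proportions as an assumption, and all counts are invented examples rather than corpus data.

```python
# A minimal sketch of keyword extraction: lemmas more frequent in the AD corpus
# than in a reference corpus, ranked by an effect size (Cohen's h is assumed).
import math

def cohens_h(p1: float, p2: float) -> float:
    """Effect size for the difference between two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

def keywords(ad_counts, ad_total, ref_counts, ref_total, top=5):
    scored = []
    for lemma, count in ad_counts.items():
        p_ad = count / ad_total
        p_ref = ref_counts.get(lemma, 0) / ref_total
        if p_ad > p_ref:                    # keyword: overused vs the reference
            scored.append((cohens_h(p_ad, p_ref), lemma))
    return [lemma for h, lemma in sorted(scored, reverse=True)[:top]]

ad = {"kijken": 1200, "knikken": 400, "zijn": 2100, "de": 9000}        # toy counts
ref = {"kijken": 40000, "knikken": 3000, "zijn": 600000, "de": 3_000_000}
print(keywords(ad, ad_total=154570, ref_counts=ref, ref_total=50_000_000))
```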
7. The CGN is also part of the analysis in the PhD project that underpins this article, but these data were being processed at the time of writing and are not within the scope of this article. For more information on the CGN corpus, visit the project website: http://lands.let.ru.nl (accessed March 10, 2017). 8. The guidelines are available via the project website (accessed March 10, 2017).
Table 2. PoS distribution in the Dutch AD corpus (formal approach)
Part of speech category | Proportion
Open class | 46.51%
  adjectives | 5.48%
  nouns | 19.61%
  adverbs | 3.22%
  verbs | 18.21%
Closed class | 42.79%
  articles | 12.35%
  interjections | 0.02%
  numerals | 0.52%
  conjunctions | 4.15%
  pronouns | 9.89%
  prepositions | 15.86%
Other (abbreviations, symbols, foreign words) | 10.69%
Table 3. PoS distribution in the Dutch AD corpus (functional approach)
Part of speech category | Proportion
Adjectives and verbs used prenominally or postnominally | 2.8%
nouns | 19.82%
  common nouns | 3.4%
  proper nouns | 16.42%
Adverbs and adjectives/verbs used adverbially or predicatively | 9.74%
  adverbially or predicatively used adjectives | 3.05%
  adverbially or predicatively used verbs | 3.47%
Finite verbs | 14.14%
Table 4. Inferential statistics for the Dutch AD corpus
Part of speech | SoNaR: goodness of fit (p) | SoNaR: effect size (w) | Subtlex-nl: goodness of fit (p) | Subtlex-nl: effect size (w)
adjectives | 0.0000 | 0.02 | 0.000 | 0.15
nouns | 0.0000 | 0.09 | 0.000 | 0.53
adverbs | 0.0000 | 0.10 | 0.000 | 0.05
verbs | 0.0000 | 0.14 | 0.000 | 0.21
articles | 0.0000 | 0.14 | 0.000 | 0.62
conjunctions | 0.0000 | 0.01 | 0.000 | 0.14
pronouns | 0.0000 | 0.04 | 0.000 | 0.07
prepositions | 0.0000 | 0.15 | 0.000 | 0.58
Table 5. Most frequent lemmas and keywords of open class words in the Dutch AD corpus
# | Lemma (translation) | PoS | Keyword (translation) | PoS
1 | kijken [to look] | verb | kijken [to look] | verb
2 | zijn [to be] | verb | knikken [to nod] | verb
3 | komen [to come] | verb | glimlachen [to smile] | verb
4 | staan [to stand] | verb | stappen [to walk] | verb
5 | gaan [to go] | verb | lopen [to run/walk] | verb
6 | lopen [to run] | verb | rennen [to run] | verb
7 | zitten [to sit] | verb | blik [gaze] | common noun
8 | Witse [proper name] | common noun | duwen [to push] | verb
9 | hand [hand] | common noun | schudden [to shake] | verb
10 | weg [away] | adverb | hand [hand] | common noun
11 | nemen [to take] | verb | arm [arm] | common noun
12 | man [man] | common noun | draaien [to turn] | verb
13 | stappen [to walk] | verb | gezicht [face] | common noun
14 | hebben [to have] | verb | deur [door] | common noun
15 | zien [to see] | verb | raam [window] | common noun
16 | worden [to become] | verb | trap [stairs] | common noun
17 | vrouw [woman] | common noun | weg [away] | adverb
18 | geven [to give] | verb | schouder [shoulder] | common noun
19 | auto [car] | common noun | langzaam [slow] | predicative or adverbial adjective
20 | nog [again/more] | adverb | staan [to stand] | verb
21 | oog [eye] | common noun | pakken [to take] | verb
22 | terug [again/back] | adverb | jas [jacket] | common noun
23 | hoofd [head] | common noun | hoofd [head] | common noun
24 | deur [door] | common noun | bed [bed] | common noun
25 | trekken [to pull] | verb | kantoor [office] | common noun
26 | liggen [to lie] | verb | openen [to open] | verb
27 | niet [not] | adverb | bekijken [to look/to examine] | verb
28 | blijven [to stay] | verb | tafel [table] | common noun
29 | blik [gaze] | common noun | oog [eye] | common noun
30 | zitten [to sit] | verb | gooien [to throw] | verb
31 | dan [then] | adverb | neer [down] | adverb
32 | gezicht [face] | common noun | nemen [to take] | verb
33 | even [momentarily] | adverb | glas [glass] | common noun
34 | houden [to hold] | verb | rijden [to drive] | verb
35 | rijden [to drive] | verb | auto [car] | common noun
36 | agent [police officer] | common noun | trekken [to pull] | verb
37 | weer [again] | adverb | haar [hair] | common noun
38 | zetten [to put] | verb | zitten [to sit] | verb
39 | tafel [table] | common noun | slaan [to hit] | verb
40 | dam [dam] | common noun | stoppen [to stop] | verb
41 | bed [bed] | common noun | open [open] | predicative or adverbial adjective
42 | draaien [to turn] | verb | gang [hallway] | common noun
43 | knikken [to nod] | verb | soldaat [soldier] | common noun
44 | halen [to get] | verb | zitten [to sit] | infinitive
45 | vader [father] | common noun | wit [white] | prenominal adjective
46 | leggen [to lay] | verb | kamer [room] | common noun
47 | maken [to make] | verb | agent [police officer] | common noun
48 | water [water] | common noun | man [man] | common noun
49 | arm [arm] | common noun | steken [to put] | verb
50 | moeder [mother] | common noun | zijn [to be] | verb

3. The systemic functional linguistic analysis
3.1 Theoretical framework In the previous section, we have argued that our data confirm the hypothesis that language use in AD is indeed idiosyncratic. What is particularly striking is that the overall language use is not only different to a high degree of statistical significance, but it also differs on many levels, as all PoS categories demonstrate significantly higher or lower frequencies than in our reference samples. The questions that follow then are: 1. How are these linguistic features used and 2. What does this tell us about the communicative function of this text type? In order to answer these questions, specific PoS categories are studied from a Systemic Functional Linguistics perspective. SFL looks at language in use and relates the usage of lexico-grammatical features in particular texts to both the context in
which they occur and the meaning they are meant to convey. Halliday’s work in particular has been fundamental to functionalist and text linguistic approaches to language, and has inspired the disciplines to which the present analysis is indebted, such as Corpus Linguistics, Multimodality and AVT (see Royce 2007 and Sindoni 2011 for a combined approach of SFL and Multimodality). Table 6 presents a concise summary of SFL theory (based on Eggins 1994, 113). A more detailed account is provided by general works on SFL such as Halliday (1994) and Eggins (1994). According to SFL, the aspects that determine how we use language in a particular context are Field, Tenor and Mode. In brief, what the text is about, who is talking to whom and through which specific channels. On the semiotic level, these aspects are expressed by three metafunctions of language: the experiential, the interpersonal and the textual function respectively. Each of these functions is covered by a specific lexico-grammatical system: transitivity, mood and theme organisation. Finally, a text only gains “texture”, i.e., it only becomes a coherent piece of discourse, when it is also cohesive (Halliday 1994, 334). Cohesive devices can again be grouped according to the function they support, i.e., lexical relations for the experiential function, conversational structure for the interpersonal function, and reference and conjunction for the textual level. Table 6. Systemic Functional Linguistics in brief Context Function
Context | Function | Lexico-grammar | Discourse semantics/cohesion
Field | experiential | transitivity | lexical relations
Tenor | interpersonal | mood | conversational structure
Mode | textual | theme/rheme | reference and conjunction
For the present article, those aspects of the context of language that are grouped under the umbrella term Field will be discussed. Eggins (1994, 52) describes Field as “what the language is being used to talk about”, or which aspects of reality are represented. In other words, with respect to AD, it involves the linguistic choices describers make to translate the reality presented by the images of the audiovisual source text. Tenor, by contrast, refers to the interaction between participants and has to do with conversational structure and dialogue, which are not particularly significant for the descriptive units under analysis here. Finally, Mode pertains to the way meaning in a text is organised and how the elements of a text are held together. This is especially important for the end product created by AD, as cohesion and coherence are crucial for a proper understanding by the target audience (Braun 2001, Reviers and Remael 2015). In the next sections, we adopt a bottom-up approach to describe what the lexico-grammatical and cohesive features specific to AD tell us about the experiential meaning expressed by AD.
3.2 Analysis According to Halliday (1994, 106), “[l]anguage enables human beings to build a mental picture of reality.” This is what he calls the “experiential function of language”, which is precisely the function of AD: i.e., describing which reality the images depict, so that blind and visually impaired audiences can form a mental picture of it. This reality can be expressed by a manageable set of process types in the transitivity system. Each process consists of three components: (1) the process itself, expressed by “verbials” (Halliday 1994, 214); (2) the participants, represented by nominals; and (3) the circumstances, translated into adverbials and prepositional phrases. In this context, it is revealing that in the Dutch AD corpus the number of open class words outweighs that of closed class words: 46.5% open class versus 42.8% closed class words (see Table 2). Salway (2007) found the same preponderance of open class words in the TIWO project of English ADs. We see the exact opposite in our reference corpora SoNaR and Subtlex-nl, where closed class words dominate. This finding underlines the importance of the experiential function in AD, as open class words are particularly relevant to such functions. In other words, describers focus on transferring “what they see”: who does what to whom and how. In what follows, we will look at each component of the transitivity system separately. Table 7. Verb forms in the Dutch AD corpus Part of speech category
Part of speech category | Proportion in the corpus | Proportion within the verb category
Verbs | 18.2% | 100%
  infinitives | 1.9% | 10.4%
  present participles | 0.3% | 1.6%
  finite past tenses | 0.2% | 1.1%
  finite present tenses | 14.0% | 77%
  past participles | 1.2% | 6.6%
  nominal and prenominal verbs | 0.6% | 3.3%
3.2.1 Processes – verbials Table 7 summarises the verb ratio in the Dutch AD corpus. Column 2 provides the total percentages in the whole corpus; the third column gives the distribution of the verbs separately. The predominance of finite verb forms, especially in the present tense, stands out (77% of all verbs). Present tense verbs occur significantly more frequently than in both our reference corpora. This seems logical, given that AD tends to describe the action while it is happening, therefore preferring present tenses. The
use of such tenses is also advised by European AD guidelines usually consulted by Dutch describers (Ofcom 2000; Remael et al. 2014) and was also highlighted in the TIWO project for English AD. Past tenses are rare and significantly less frequent than in our reference samples. Given that ADs use the present tense as they describe an ongoing narrative, it is interesting to see in which contexts the past tense is appropriate. The pilot study (Reviers et al. 2015) suggested that past tenses are used to link back to or specify characters (or actions) mentioned in previous descriptions. A qualitative analysis of the past tenses in the Dutch AD corpus confirmed this finding. We discovered that most cases consist of constructions with (defining) relative clauses to identify objects, characters or places that have been presented before. The examples below are a selection from the sentences analysed, in order to illustrate the use of tenses and contain the original sentences from the AD scripts and a back-translation. (1) Van In zit op het kantoor van de grote, kale man die met zijn foto op de voorpagina stond. [Van In sits in the office of the tall, bald man who stood on the front page with his picture]. (2) Ada staat aan de rand van het ravijn waar Esther het verhaal van de diamant vertelde. [Ada stands at the edge of the ravine where Esther told her the story of the diamant].
In the Dutch AD corpus, we also found a few past tenses that seem to reflect characters’ thoughts; they describe facial expressions (as in Example 3) or flashbacks, with characters remembering previous events (as in Example 4). (3) Nu pas ziet Johnny dat Matty weg is. Dit had hij niet verwacht. [Only now does Johnny notice that Matty is gone. He did not expect this]. (4) Hij zoekt zijn gsm en herinnert zich hoe hij hem kwijtspeelde. [He looks for his cell and remembers how he lost it].
In some cases, a past tense is used to describe an event after it has happened on screen, when time constraints prevent narrators from describing the action simultaneously with the images. In most of these cases, the action still tends to be described in the present tense, since it immediately follows the images. But when there is too much time in between the image (and its related sounds) and the description, or when the description is interrupted by other sounds and dialogues, the past tense is used. This allows narrators to make clear to the listener that the description links up with previously heard dialogues or sound effects. (5) Gunther die in de huiskamer alles gevolgd had, gaat naar boven. [Gunther, who followed it all in the living room, goes upstairs].
(6) Hij kijkt naar waar het geluid vandaan kwam. [He looks to where the sound had come from].
Finally, some of the descriptions in the Dutch AD corpus contain translated dialogue. When a character says a few lines in a foreign language, a voiced version of these subtitles for the benefit of the blind and visually impaired (called Audio Subtitling) is not always provided, but the translation of the dialogue is interwoven in the AD. Some of these lines contain past tenses, too. (7) Herinneringen spoken door het hoofd van Nazim. "Ik vraag een rat niet om vergiffenis." "Smeek zijn vergiffenis." "Ik deed wat ik moest doen, ik heb het ongedierte uitgeroeid." Nazim zit nog steeds in de luxueuze villa. [Memories haunt Nazim’s mind. “I don’t ask a rat for forgiveness.” “Beg for his forgiveness.” “I did what I had to do, I killed the pests.” Nazim is still sitting in the luxurious villa].
What stands out most when it comes to the other verb forms is that present participles are more frequent in the Dutch AD corpus as compared to both reference samples. They are mainly used as adverbs, further specifying the action or reflecting facial expressions that accompany the action. The most frequent co-occurrences of verbs and present participles are: vragend kijken [*to look questioning], zoekend kijken [*to look searching], glimlachend komen, [*to come smiling] and aarzelend komen [*to come hesitating]. Another function of the present participle is to create reduced subordinate clauses indicating simultaneity (often in sentence-initial position). Simultaneity is indeed quite a challenge for describers: many actions happen simultaneously on screen in a film or a series, making it hard to render them in linear sentences (Braun 2011). (8) Haast slaapwandelend stapt Matty verder met haar dochter in de richting van de kassa. [Almost sleepwalking, Matty continues to the cash register with her daughter]. (9) Vera komt aarzelend binnen en gaat tegenover haar moeder staan. [Vera comes in hesitantly and faces her mother].
The verbs in the Dutch AD corpus have relatively high frequency scores. The top 50 most frequent lemmas are all closed class words, except for 12 finite verbs. The 7 most frequent open class lemmas are all verbs as well (see Table 5): kijken [to look], zijn [to be], komen [to come], staan [to stand], gaan [to go], lopen [to walk], zitten [to sit]. When we look at the most noticeable keywords (keywords were calculated by comparing lemma frequencies between our corpus and SoNaR), we get a slightly different list: kijken [to look], knikken [to nod], glimlachen [to smile], stappen [to walk/to step], lopen [to walk] and rennen [to race]. The verbs in both
of these lists, however, describe tangible, physical actions and express how the people performing them move around or what they are looking at. The focus of AD language on such material and behavioural processes is confirmed when we take into account which types of processes represented by verbs are most common in the Dutch AD corpus. Their frequencies are illustrated in Table 8 (which only includes verbs with normalised frequencies per thousand exceeding 10). Table 8. Process types in the Dutch AD corpus Process type
Process type | Proportion
behavioural | 15%
existential | 6%
material | 75%
mental | 1%
relational | 2%
verbal | 1%
Material and, to a lesser extent, behavioural processes dominate; while mental processes are particularly scarce. As the lemma analysis above has already revealed, these behavioural processes mainly include verbs indicating where people are looking (kijken [to look], zien [to see], bekijken [to look at/to examine], staren [to stare]). The most frequent mental processes are voelen [to feel], denken [to think], weten [to know], peinzen [to ponder], onderzoeken [to examine], begrijpen [to understand] and ontroeren [to move emotionally]. It is worth noting that in many cases verbs reflecting mental processes are used to describe facial expressions that are visible on the screen. (10) Ze voelt zich betrapt. [She feels caught]. (11) Sofie kijkt diep ontroerd toe. [Sofie watches deeply moved].
3.2.2 Participants – nominals
Participants are described by nominal groups that range from a single noun to quite complex structures including determiners, adjectives and qualifiers. What do our corpus data reveal about these nominal groups and, hence, the participants in AD texts? First, nouns make up the largest PoS category, with almost 20% of all words. They are significantly more frequent than in our reference samples. This is not surprising, since we have already established that the actions these participants perform (translated into verbs) are also more frequent. What is interesting, though, is that over 80% of the nouns are proper nouns and most of them are
character names. For instance, 8 of the first 10 keywords are proper names. The 20% of common nouns in the Dutch AD corpus also refer mainly to human participants. For instance, the words man [man] and vrouw [woman] are in the top 3 of the most frequent common nouns (Table 9). Other high-frequency nouns involve the participants’ body parts (hand, eye, head, arm). Also note that the nouns in our corpus usually refer to concrete and tangible things; abstract nouns are rare.

Table 9. Top 30 most frequent common nouns in the Dutch AD corpus
#    Lemma      Translation
1    hand       [hand]
2    man        [man]
3    vrouw      [woman]
4    auto       [car]
5    oog        [eye]
6    hoofd      [head]
7    deur       [door]
8    blik       [gaze]
9    gezicht    [face]
10   agent      [police officer]
11   tafel      [table]
12   dam        [dam]
13   bed        [bed]
14   vader      [father]
15   water      [water]
16   arm        [arm]
17   moeder     [mother]
18   kantoor    [office]
19   bureau     [desk]
20   foto       [photo]
A noun can be combined with a number of words to form a noun group, including determiners (articles, demonstratives, possessives), adjectives and qualifiers. First, articles are significantly more frequent in the Dutch AD corpus, which is to be expected given the higher frequency of nouns. We also see that there are almost three times more definite articles than indefinite articles, which holds for our reference corpora too. Demonstratives, though, are significantly less frequent. A possible explanation can be found in the guideline to avoid fuzzy cohesion. Describers must make sure it is very clear which participant a demonstrative is referring to, so in case of doubt they prefer to repeat the noun referring to the participant (Ofcom 2000; Remael et al. 2014).
Next, adjectives have been the focal point of previous research in the field of AD (Arma 2011). They are deemed crucial to presenting a vivid, precise and engaging description. In our analysis, we can see that 5.5% of all words are adjectives. Interestingly, while in our pilot study adjectives occurred more frequently than in the reference corpus (which was CGN in the pilot), they occur less frequently than in the SoNaR corpus. This might be due to the nature of the reference corpora, since SoNaR reflects written language and CGN spoken language. It remains to be studied what type of language AD is: it is a spoken form of language, but the texts are prepared in advance in writing. Equally noteworthy is that the frequency of the adjectives in our corpus exceeds the frequencies from the Subtlex-nl corpus, which can be considered a ‘hybrid’ language variety as well, as it is the written version of spoken language. In brief, it is difficult to determine to what extent the frequencies of the adjectives are higher or lower than one would expect for this type of text. That being said, Table 10 shows the adjective type ratios. What is noticeable is that these adjectives are used adverbially or predicatively rather than occurring in their traditional, prenominal function. In other words, adjectives are used more often to further specify a process (e.g., Hij komt aarzelend binnen [*He enters hesitating]) than a participant (e.g., A tall, dark stranger). The same observation can be made with regard to our subtitling reference corpus, but not to our written corpus SoNaR, where prenominal adjectives dominate. We must not forget, however, that the analysis of the verbs has revealed the prenominal use of present and past participles. These account for 0.47% of all words. Previous research has shown a proportion of 1 adjective per 20 nouns in AD (Arma 2011). In our corpus, this proportion is 1 adjective per 7 nouns (also including the prenominal participles).

Table 10. Proportions of adjectives in the Dutch AD corpus
Part of speech category                       Proportion in the corpus    Proportion within the adjectives category
Adjectives                                    5.95%                       100%
  nominal adjectives                          0.09%                       1.5%
  postnominal adjectives                      0.02%                       0.3%
  prenominal adjectives                       2.32%                       39%
  prenominal present participles              0.25%                       4.2%
  prenominal past participles                 0.22%                       3.7%
  adjectives used as adverbs or predicates    3.05%                       51.3%
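Proportions of the kind reported in Table 10, and the adjective-per-noun ratio mentioned above, can be derived mechanically from a PoS-tagged corpus. The sketch below is purely illustrative and makes assumptions of its own: the token list and the position labels attached to the adjectival tags are invented for the example and do not correspond to the Frog/CGN tagset actually used for the Dutch AD corpus.

```python
from collections import Counter

# Toy (token, tag) pairs with hypothetical position labels on adjectives;
# the real corpus uses a much richer tagset and far larger counts.
tagged = [
    ("de", "DET"), ("jonge", "ADJ-prenominal"), ("vrouw", "NOUN"),
    ("komt", "VERB"), ("aarzelend", "ADJ-adverbial"), ("binnen", "ADV"),
    ("haar", "DET"), ("moeder", "NOUN"), ("staat", "VERB"),
    ("bij", "PREP"), ("de", "DET"), ("tafel", "NOUN"),
    ("naast", "PREP"), ("de", "DET"), ("deur", "NOUN"),
    ("en", "CONJ"), ("kijkt", "VERB"), ("moe", "ADJ-predicative"),
]

# Count coarse PoS categories and, within the adjectives, their positions.
pos_counts = Counter(tag.split("-", 1)[0] for _, tag in tagged)
adj_positions = Counter(tag.split("-", 1)[1] for _, tag in tagged
                        if tag.startswith("ADJ"))

total_tokens = sum(pos_counts.values())
print(f"adjectives: {pos_counts['ADJ'] / total_tokens:.1%} of all tokens")
for position, n in adj_positions.items():
    print(f"  {position:12s} {n / pos_counts['ADJ']:.1%} of the adjectives")

# Adjective-per-noun ratio (cf. '1 adjective per 7 nouns' in the text).
print(f"1 adjective per {pos_counts['NOUN'] / pos_counts['ADJ']:.1f} nouns")
```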
Contrary to the verbs, the adjectives in the Dutch AD corpus seem to have lower overall frequencies. This may be surprising, given the importance the guidelines attach to adjectives as a means of giving precise and vivid descriptions (Ofcom 2000; Remael et al. 2014). On the other hand, time constraints can explain why there is not always room for elaborate descriptions including adjectives. The first adjective appears only at rank 129 in the list of most frequent lemmas. The most frequent adjectives include: groot [tall/big], ander [other], wit [white], zwart [black], klein [small]. Other prenominal elements are infrequent.

3.2.3 Circumstances
Adverbs, too, occur relatively infrequently in the Dutch AD corpus. The adverbs with normalised frequencies per thousand words exceeding 10 all refer to time (nog [still], even [for a while], nu [now], dan [then], weer [again]) or to characters’ movement (weg [away] as in weggaan [to go away], terug [back] as in terugkomen [to come back], neer [down] as in neerzitten [to sit down]). Salway (2007) explained the lower frequency of adverbs in the TIWO project as follows: temporal information in AD is expressed through means other than adverbs of time, namely verb tense and the order of speaking. The events described are assumed to follow each other chronologically and therefore do not require an adverb of time; only in the rare case of a significant time lapse is specific mention required. Although AD guidelines emphasise the importance of adverbs for describing facial expressions and emotions, this does not seem to be reflected in our corpus. In this context, however, we need to underline the role of the present participles that are frequently used as adverbs in AD (see 3.2.1) and often reflect characters’ emotions and/or facial expressions. A frequently occurring verb-adverb combination, for instance, is glimlachend kijken [*to look smiling]. Another important means of describing circumstances is the prepositional phrase. As we have seen, prepositional phrases are used to qualify nouns. However, they also serve to express circumstances. Our quantitative analysis indicated the more frequent use of prepositions in AD compared to the reference corpora,
while a qualitative analysis showed that they are mainly used to indicate place and time with the prepositions in [in], op [on], naar [to/towards]. (17) Ze laadt de boodschappen in de koffer. [She puts the groceries in the car]. (18) Er staat een muzikant in het zonnetje te spelen. [A musician is playing in the sunshine].
Even though a large part of the cases that come under the heading of “circumstances” were only analysed qualitatively, it is clear that the most common circumstances expressed are those of time, location and manner. Adverbs contribute somewhat to this function; however, the use of present participles and prepositional phrases to express this level of meaning is what stands out most in our corpus.

3.2.4 Lexical cohesion
The discussion so far has shown that the vocabulary in the Dutch AD corpus is concentrated at the top of the frequency lists, i.e., it is characterised by a number of high-frequency words. The 128 most frequent words account for 50% of all running words in the corpus, and 58 of them are open class words. This suggests a high degree of word repetition, which is confirmed by an analysis of the Type Token Ratio (TTR) of our corpus. The TTR is the ratio between the number of unique words in the corpus (types) and the total number of words (tokens); a corpus with a low TTR therefore contains a considerable amount of repetition. Table 11 contains the Standardised Type Token Ratios (the TTR calculated per thousand words, to minimise the effect of different text lengths) for the corpus in general and for the open class categories individually. Vocabulary variety in the Dutch AD corpus is relatively low, with an overall STTR of 0.38, i.e., on average 38% of the words in each 1,000-word segment are unique. Variety is considerably lower still for verbs (0.27), pointing to an even higher degree of repetition in this category.

Table 11. Standardised Type Token Ratios in the Dutch AD corpus
STTR Dutch AD corpus                        0.38
STTR adjectives (all prenominal items)      0.41
STTR nouns (all nominal items)              0.49
STTR verbs                                  0.27
STTR adverbs                                0.30
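The Standardised Type Token Ratio reported in Table 11 can be approximated as follows: the (lemmatised) token stream is divided into consecutive 1,000-word segments, a TTR is computed for each segment, and the results are averaged. The following sketch illustrates this general procedure under the assumptions stated in the comments; it is not the exact implementation behind the figures above.

```python
def sttr(tokens: list[str], chunk_size: int = 1000) -> float:
    """Standardised Type Token Ratio: mean TTR over consecutive chunks of
    `chunk_size` tokens (an incomplete final chunk is discarded)."""
    ratios = []
    for start in range(0, len(tokens) - chunk_size + 1, chunk_size):
        chunk = tokens[start:start + chunk_size]
        ratios.append(len(set(chunk)) / len(chunk))
    if not ratios:
        raise ValueError("corpus shorter than one chunk")
    return sum(ratios) / len(ratios)

# Hypothetical usage with a lemmatised, PoS-tagged token list; restricting
# the input to, say, verb lemmas yields the category-specific STTR.
# tokens = [lemma for lemma, pos in tagged_corpus]
# print(f"overall STTR: {sttr(tokens):.2f}")
```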
This high degree of repetition is revealing in terms of lexical cohesion, the aspect of discourse semantics that can be related to the experiential function (see Table 6). According to Halliday (1994, 330), lexical cohesion “comes about through the selection of items that are related in some way to those that have gone before.” The first of Halliday’s categories is simple repetition, and the STTR of the Dutch AD corpus suggests that this is the most popular strategy in AD. A second important category is synonymy. An initial qualitative analysis of the most frequent words (with relative frequencies per thousand words >10), however, reveals that the list includes very few synonyms, namely: kijken/zien/bekijken [to look/to see/to examine], lopen/gaan/stappen/rennen [to walk/to go/to step/to race], pakken/nemen [to take/to get], wagen/auto [car/automobile] and vrouw/meisje [woman/girl]. Another subcategory of synonymy discussed by Halliday is that of hyponymy and meronymy, which apply to lexical items with a specific-general and a part-whole relation respectively. The analysis of the nouns above, for instance, revealed that such relations are common in the Dutch AD corpus when describing (human) participants: among the frequent nouns we found man [man] and vrouw [woman], and subsequently lexical items referring to their body parts, such as hand, arm and eyes (meronymy). However, these types of semantic relations between words were not annotated in the Dutch AD corpus, so a quantitative analysis of this kind is beyond the scope of the present study.
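Although semantic relations such as synonymy and meronymy were not annotated in the corpus, simple repetition – the dominant cohesive strategy suggested by the STTR – could in principle be quantified directly. The sketch below is a hypothetical illustration of one such measure (lemma overlap between consecutive description units); the unit segmentation and lemma lists are invented, and no such measure was computed for the present study.

```python
def repetition_rate(units: list[list[str]]) -> float:
    """Share of content lemmas in each AD unit that already occurred in the
    immediately preceding unit (a rough proxy for simple repetition)."""
    repeated = total = 0
    for prev, curr in zip(units, units[1:]):
        prev_set = set(prev)
        repeated += sum(1 for lemma in curr if lemma in prev_set)
        total += len(curr)
    return repeated / total if total else 0.0

# Hypothetical lemmatised description units (content words only).
units = [
    ["vrouw", "kijken", "deur"],
    ["vrouw", "openen", "deur"],
    ["man", "komen", "binnen"],
]
print(f"repetition rate: {repetition_rate(units):.0%}")
```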
4. Concluding remarks
The description of the data extracted from the Dutch AD corpus has first and foremost confirmed our hypothesis that there is a distinct language of audio description, and that this is also true for Dutch. The language of AD is characterised by a set of salient lexico-grammatical features. In addition, we found that AD language is idiosyncratic on all analysed levels, to a high degree of statistical significance, which is in line with the findings of our pilot study (Reviers et al. 2015). Low standard deviations also revealed consistency across scripts, at least in terms of PoS. This means that the choice of PoS seems to be mainly influenced by the inherent constraints imposed by the AD text type, rather than by genre or experience, for instance. It is also quite notable that AD language is characterised by extremely high or low values for certain grammatical categories: it contains very few past tenses, nearly exclusively finite verbs, and a lot of proper names, to name just a few. Finally, AD texts are marked by a high degree of lexical repetition. The discussion of the most salient lexico-grammatical features of Dutch AD from an SFL perspective has revealed what type of experiential meaning is
expressed in this text type. In other words, we discovered which processes are mainly dealt with and, more importantly, how these processes are expressed linguistically. Material and, to a lesser extent, behavioural processes are preponderant. Actions, most often involving movement or gaze, are described in the present tense, as if the action were happening at that moment; past tenses only occur in very specific contexts. In particular, the use of present participles to further specify the action stands out in this language variety. The participants, then, are mostly humans, identified through the use of proper names or, to a lesser extent, by concrete nouns pointing to their physique or body parts. Pronouns seem to be a second choice when it comes to identifying participants. Prenominal adjectives are used relatively frequently to further qualify participants, but the role of identifying relative clauses and prepositional phrases in this context is evident. Finally, the circumstances most commonly expressed are those of time, place and manner. Adverbs play a minor role in the expression of this type of information, but the use of present participles in this context is particularly noteworthy.

The focus of this article was on the experiential function of language, but the corpus can provide similar insights for the other metafunctions as well (for instance regarding reference and conjunction, sentence length and complexity, word length and reading speed), which were not analysed here. Moreover, the present article only covers the textual analysis of the verbal aspects of AD; non-verbal aspects, such as sound effects and music, have not been taken into account. The hypothesis is, of course, that the findings, even on the linguistic level studied so far, are shaped by the interaction with these other sign systems, and the patterns in the data seem to point in that direction. To confirm this hypothesis, a larger project currently underway will cover the other metafunctions as well. In addition, we have developed a multimodal concordancer to analyse the (verbal) corpus data in relation to the sound effects and music that accompany them. These multimodal aspects are particularly relevant to the analysis of textual cohesion and coherence. However, multimodality is a relatively young discipline, and AVT is only now integrating some of its insights into its theoretical and methodological frameworks. Much work remains to be done to create a coherent framework for the analysis of audiovisual translations; in particular, methodological and technical challenges remain with regard to multimodal corpus development, a development to which we aim to contribute.

A final issue worth mentioning is that AD in the Low Countries is still developing. Current practice could therefore be expected to be rather heterogeneous, since professional describers have long worked without a common standard, although the ADLAB guidelines (which have recently been translated into Dutch) are now being followed. Our data, however, reveal significant consistencies across the
corpus, which arguably point to the impact of the intersemiotic functioning of the text. Nevertheless, analyses such as the one presented here should be rerun as new material is published and added to the corpus. To conclude, the current project is one of the few corpus projects in the field of AD and the first of its kind in the Low Countries. The above insights can be of particular value, on the one hand, to the fledgling professional field – providing concrete input for the development of guidelines, for instance – and, on the other, to the university and vocational training of describers. The results yielded by corpus studies in the field of AVT/AD are promising, and it is envisaged that this research approach will gain further importance in the future, as advances are made in multimodal corpus development.
Funding
This paper reports on the results of a PhD project funded by the BOF fund of the University of Antwerp, under the supervision of Prof. Dr Aline Remael and Prof. Dr Reinhild Vandekerckhove (2012–2017).
Acknowledgments
The author would like to thank a number of people who have contributed to the project: the CLiPS Research Group of the University of Antwerp, for implementing an XML interface for the Frog software; Ron van den Brande of the “Centrum voor Teksteditie en Bronstudie”, for his help with the development of an XML tagset following the Text Encoding Initiative; and Javier Serrano of the CAIAC Research Centre of the Autonomous University of Barcelona, for the development of the multimodal concordancer. Finally, thanks are due to all the professionals, companies and (volunteer) organisations who have supplied the material for the AD corpus.
References

Arma, Saveria. 2011. The Language of Filmic Audio Description: A Corpus-Based Analysis of Adjectives. PhD diss., Università degli Studi di Napoli Federico II.
Bourne, Julian, and Catalina Jiménez. 2007. “From the Visual to the Verbal in Two Languages: A Contrastive Analysis of the Audio Description of The Hours in English and Spanish.” In Media for All: Subtitling for the Deaf, Audio Description and Sign Language, ed. by Jorge Díaz Cintas, Pilar Orero, and Aline Remael, 175–187. Amsterdam: Rodopi. https://doi.org/10.1163/9789401209564_013
Braun, Sabine. 2011. “Creating Coherence in Audio Description.” Meta: Translators’ Journal 56 (3): 645–662. https://doi.org/10.7202/1008338ar
Chmiel, Agnieszka, and Iwona Mazur. 2012. “AD Reception Research: Some Methodological Considerations.” In Emerging Topics in Translation: Audio Description, ed. by Elisa Perego, 57–80. Trieste: EUT Edizioni Università di Trieste.
Eggins, Susanne. 1994. An Introduction to Systemic Functional Linguistics. London: Pinter Publishers.
Field, Andy. 2009. Discovering Statistics Using SPSS. London: Sage.
Fryer, Louise, and Jonathan Freeman. 2012. “Presence in Those with and without Sight: Implications for Virtual Reality and Audio Description.” Journal of Cyber Therapy and Rehabilitation 5 (1): 15–23.
Fryer, Louise, and Jonathan Freeman. 2013. “Cinematic Language and the Description of Film: Keeping AD Users in the Frame.” Perspectives: Studies in Translatology 21 (3): 412–426. https://doi.org/10.1080/0907676X.2012.693108
Gambier, Yves. 2006. “Multimodality and Audiovisual Translation.” In Audiovisual Translation Scenarios: Proceedings of the Marie Curie Euroconferences MuTra, ed. by Mary Carroll, Heidrun Gerzymisch-Arbogast, and Sandra Nauert.
Gambier, Yves. 2009. “Challenges in Research on Audiovisual Translation.” In Translation Research Projects, ed. by Anthony Pym, and Alexandra Perekrestenko, 17–15. Tarragona: Intercultural Studies Group.
Halliday, Michael A. 1994. An Introduction to Functional Grammar. New York: Routledge.
Jiménez, Catalina, and Claudia Seibel. 2012. “Multisemiotic and Multimodal Corpus Analysis in Audio Description: TRACCE.” In Media for All: Audiovisual Translation and Media Accessibility at the Crossroads, ed. by Aline Remael, Mary Carroll, and Pilar Orero, 409–425. Amsterdam: Rodopi.
Keuleers, Emmanuel, and Marc Brysbaert. 2010. “Subtlex-nl: A New Measure for Dutch Word Frequency Based on Film Subtitles.” Behavior Research Methods 42 (3): 643–650. https://doi.org/10.3758/BRM.42.3.643
Kluckhöhn, K. 2005. “Informationsstrukturierung als Kompensationsstrategie: Audiodeskription und Syntax.” In Hörfilm. Bildkompensation durch Sprache, ed. by Ulla Fix, 49–66. Berlin: Erich Schmidt Verlag.
Matamala, Anna, Pilar Orero, and Laura Puigdomenech. 2010. “Audio Description of Films: State of the Art and a Protocol Proposal.” In Perspectives on Audiovisual Translation, ed. by Lukasz Bogucki, and Krzysztof Kredens, 27–43. Frankfurt am Main: Peter Lang.
Nuzzo, Regina. 2014. “Statistical Errors: P Values, the ‘Gold Standard’ of Statistical Validity, Are Not As Reliable As Many Scientists Assume.” Nature 506: 150–152. https://doi.org/10.1038/506150a
Ofcom. 2000. ITC Guidance on Standards for Audio Description. London: The Independent Television Commission.
Orero, Pilar. 2012. “Film Reading for Writing Audio Descriptions: A Word is Worth a Thousand Images?” In Emerging Topics in Translation: Audio Description, ed. by Elisa Perego, 13–28. Trieste: EUT Edizioni Università di Trieste.
Pérez-González, Luis. 2014. Audiovisual Translation: Theories, Methods, Issues. London/New York: Routledge. https://doi.org/10.4324/9781315762975
Remael, Aline, Gert Vercauteren, and Nina Reviers, eds. 2014. Pictures Painted in Words: ADLAB Audio Description Guidelines. Trieste: ADLAB.
Remael, Aline, and Nina Reviers. 2018. “Multimodal Cohesion in Accessible Films: A First Inventory.” In The Routledge Handbook of Audiovisual Translation Studies, ed. by Luis Pérez-González, 260–280. London: Routledge.
Remael, Aline, Nina Reviers, and Reinhild Vandekerckhove. 2016. “From Translation Studies and Audiovisual Translation to Media Accessibility.” Target 28 (2): 248–260. https://doi.org/10.1075/target.28.2.06rem
Reviers, Nina, Aline Remael, and Walter Daelemans. 2015. “The Language of Audio Description in Dutch: Results of a Corpus Study.” In New Points of View on Audiovisual Translation and Accessibility, ed. by Anna Jankowska, and Agnieszka Szarkowska, 167–189. Bern: Peter Lang.
Reviers, Nina, and Aline Remael. 2015. “Recreating Multimodal Cohesion in Audio Description: A Case Study of Audio Subtitling in Dutch Multilingual Films.” New Voices in Translation Studies 13: 50–75.
Reviers, Nina. 2016. “Audio Description in Europe: An Update.” The Journal of Specialised Translation 26: 232–247.
Royce, Terry D. 2007. “Intersemiotic Complementarity: A Framework for Multimodal Discourse Analysis.” In New Directions in the Analysis of Multimodal Discourse, ed. by Terry D. Royce, and W. Bowcher, 63–109. Mahwah, NJ: Lawrence Erlbaum.
Salway, Andrew, Elia Tomadaki, and Andrew Vassiliou. 2004. “Building and Analysing a Corpus of Audio Description Scripts.” (Research report). University of Surrey.
Salway, Andrew. 2007. “A Corpus-Based Analysis of Audio Description.” In Media for All: Subtitling for the Deaf, Audio Description and Sign Language, ed. by Jorge Díaz Cintas, Pilar Orero, and Aline Remael, 151–174. Amsterdam: Rodopi. https://doi.org/10.1163/9789401209564_012
Sindoni, Maria G. 2011. Systemic-Functional Grammar and Multimodal Studies: An Introduction with Text Analysis. Pavia: Ibis.
Van den Bosch, Antal, Bertjan Busser, Sander Canisius, and Walter Daelemans. 2007. “An Efficient Memory-Based Morpho-Syntactic Tagger and Parser for Dutch.” In Computational Linguistics in the Netherlands: Selected Papers from the Seventeenth CLIN Meeting, ed. by Peter Dirix, Ineke Schuurman, Vincent Vandeghinste, and Frank van Eynde, 99–114. Leuven, Belgium: KUL.
Vercauteren, Gert. 2012. “Narratological Approach to Content Selection in Audio Description. Towards a Strategy for the Description of Narratological Time.” MonTI: Multidisciplinarity in Audiovisual Translation 4: 207–231. https://doi.org/10.6035/MonTI.2012.4.9