Subtitling
Subtitling: Concepts and Practices provides students, researchers and practitioners with a research-based introduction to the theory and practice of subtitling. The book, inspired by the highly successful Audiovisual Translation: Subtitling by the same authors, is a new publication reflecting the developments in practice and research that mark subtitling today, while considering the way ahead. It supplies the core concepts that will allow its users to acquaint themselves with the technical, linguistic and cultural features of this specific yet extremely diverse form of audiovisual translation and the many contexts in which it is deployed today. The book offers concrete subtitling strategies and contains a wealth of examples in numerous languages for dealing with specific translation problems. State-of-the-art translation technologies and their impact on the profession are explored along with a discussion of the ways in which they cater for the socio-political, multicultural and multilingual challenges that audiovisual productions and their translations must meet today. A truly multimedia package, Subtitling: Concepts and Practices comes with a companion website which includes a wide range of exercises with answer keys, video clips, dialogue lists, a glossary of concepts and terminology used in the industry and much more. It also provides access to a professional desktop subtitle editor, Wincaps Q4, and a leading cloud-based subtitling platform, OOONA.

Jorge Díaz Cintas is Professor of Translation and founder director (2013–2016) of the Centre for Translation Studies (CenTraS) at University College London. He is the author of numerous articles, special issues and books on audiovisual translation. He is the chief editor of the Peter Lang series New Trends in Translation Studies and a member of the European Union expert group LIND (Language Industry). He is the recipient of the Jan Ivarsson Award (2014) and the Xènia Martínez Award (2015) for invaluable services to the field of audiovisual translation.

Aline Remael is Professor Emeritus of Translation Theory and Audiovisual Translation in the Department of Applied Linguistics/Translation and Interpreting at the University of Antwerp. She is founder of OPEN, the departmental Expertise Centre for Accessible Media and Culture, and a member of the departmental research group TricS. Her main research interests and publications are in audiovisual translation, media accessibility and translation as a multimodal practice. She is the former chief editor of Linguistica Antverpiensia NS – Themes in Translation Studies and has been a partner in numerous European accessibility projects and a board member of ESIST, ENPSIT and EST. In 2018 she received the ESIST Jan Ivarsson Award for invaluable services to the field of audiovisual translation.
Translation Practices Explained
Series Editor: Kelly Washbourne
Translation Practices Explained is a series of coursebooks designed to help self-learners and students on translation and interpreting courses. Each volume focuses on a specific aspect of professional translation and interpreting practice, usually corresponding to courses available in translator- and interpreter-training institutions. The authors are practicing translators, interpreters, and/or translator or interpreter trainers. Although specialists, they explain their professional insights in a manner accessible to the wider learning public. Each volume includes activities and exercises designed to help learners consolidate their knowledge, while updated reading lists and website addresses will also help individual learners gain further insight into the realities of professional practice.

Most recent titles in the series:

Consecutive Interpreting: A Short Course – Andrew Gillies
Healthcare Interpreting Explained – Claudia V. Angelelli
Revising and Editing for Translators 4e – Brian Mossop
A Project-Based Approach to Translation Technology – Rosemary Mitchell-Schuitevoerder
Translating Promotional and Advertising Texts 2e – Ira Torresi
Subtitling: Concepts and Practices – Jorge Díaz Cintas and Aline Remael

For more information on any of these and other titles, or to order, please go to www.routledge.com/Translation-Practices-Explained/book-series/TPE

Additional resources for Translation and Interpreting Studies are available on the Routledge Translation Studies Portal: http://routledgetranslationstudiesportal.com/
Subtitling Concepts and Practices
Jorge Díaz Cintas and Aline Remael
First published 2021
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
and by Routledge
52 Vanderbilt Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2021 Jorge Díaz Cintas and Aline Remael

The right of Jorge Díaz Cintas and Aline Remael to be identified as authors of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
Names: Díaz Cintas, Jorge, author. | Remael, Aline, author.
Title: Subtitling : concepts and practices / Jorge Díaz Cintas and Aline Remael.
Other titles: Audiovisual translation
Description: First edition. | Abingdon, Oxon ; New York, NY : Routledge, 2021. | Series: Translation practices explained, 1470-966X | First edition was first published in 2007 by St. Jerome Publishing under the title ‘Audiovisual Translation: Subtitling’. | Includes bibliographical references and index.
Identifiers: LCCN 2020038206 | ISBN 9781138940536 (hardback) | ISBN 9781138940543 (paperback) | ISBN 9781315674278 (ebook)
Subjects: LCSH: Translating and interpreting. | Mass media and language. | Dubbing of motion pictures. | Television programs—Titling. | Motion pictures—Titling. | Audio-visual translation.
Classification: LCC P306.2 .D53 2021 | DDC 778.5/2344—dc22
LC record available at https://lccn.loc.gov/2020038206

ISBN: 978-1-138-94053-6 (hbk)
ISBN: 978-1-138-94054-3 (pbk)
ISBN: 978-1-3156-7427-8 (ebk)

Typeset in Garamond by Apex CoVantage, LLC

Visit the companion website: www.routledge.com/cw/diaz-cintas
Contents
List of figures ix
List of tables x
Acknowledgements xi
Permissions xiii
How to use this book and its companion website xiv
    The book xiv
    The companion website xiv
    OOONA xvi
    Wincaps Q4 xvii

1 Reconceptualizing subtitling 1
    1.1 Preliminary discussion 1
    1.2 The power of the moving image 1
    1.3 From the periphery to the centre 3
    1.4 The many instantiations of audiovisual translation 7
    1.5 Classification of subtitles 11
        1.5.1 Linguistic parameters 11
        1.5.2 Time available for preparation 21
        1.5.3 Display mode 25
        1.5.4 Technical parameters 26
        1.5.5 Methods of projection 27
        1.5.6 Distribution 29
    1.6 Intertitles 30
    1.7 Exercises 31

2 Professional ecosystem 32
    2.1 Preliminary discussion 32
    2.2 The subtitling process 33
    2.3 The professionals 37
    2.4 Dialogue lists 39
    2.5 Templates and master (sub)titles 43
    2.6 Guidelines and style guides 47
    2.7 Subtitling software editors 48
    2.8 The profession 51
        2.8.1 Clients and rates 55
        2.8.2 Deadlines 57
        2.8.3 Authors’ rights and professional associations 58
    2.9 Training 61
    2.10 Exercises 63

3 The semiotics of subtitling 64
    3.1 Preliminary discussion 64
    3.2 Films as multisemiotic and multimodal texts 64
        3.2.1 Screenwriting and film dialogue 66
        3.2.2 Intersemiotic cohesion 69
        3.2.3 The multimodality of language 72
        3.2.4 Camera movement and editing 73
        3.2.5 A blessing in disguise 74
    3.3 Subtitling, soundtrack and text on screen 75
        3.3.1 Subtitling’s vulnerability 76
        3.3.2 Multilingualism and multimodality as a resource for translation 78
        3.3.3 Text on screen 85
        3.3.4 Speech to writing: a matter of compromise 88
    3.4 Exercises 90

4 Spatial and temporal features 91
    4.1 Preliminary discussion 91
    4.2 Code of good subtitling practice 91
    4.3 Spatial dimension 92
        4.3.1 Maximum number of lines and position on screen 93
        4.3.2 Centred and left-aligned 95
        4.3.3 Font type, font size and colour 96
        4.3.4 Maximum number of characters per line 97
        4.3.5 One-liners and two-liners 99
    4.4 Temporal dimension 100
        4.4.1 Frames per second 100
        4.4.2 Synchronization and spotting 101
        4.4.3 Timecodes 103
        4.4.4 Duration of subtitles 105
        4.4.5 Subtitle display rates: characters per second and words per minute 106
        4.4.6 The six-second rule 109
        4.4.7 Gap between subtitles 113
        4.4.8 Shot changes 114
        4.4.9 Feet and frames in cinema 116
    4.5 Exercises 117

5 Formal and textual features 118
    5.1 Preliminary discussion 118
    5.2 In search of conventions 118
    5.3 Punctuation conventions 120
        5.3.1 Comma (,) 120
        5.3.2 Full stop (.) 121
        5.3.3 Colon (:) 122
        5.3.4 Parentheses ( ) 122
        5.3.5 Exclamation marks (!) and question marks (?) 123
        5.3.6 Hyphen (-) 124
        5.3.7 Triple dots (...) 125
        5.3.8 Asterisk (*) 127
        5.3.9 Slash (/) 128
        5.3.10 Other symbols 128
        5.3.11 Capital letters 129
        5.3.12 Quotation marks or inverted commas ("..."), (“...”), (‘...’) 130
    5.4 Other conventions 132
        5.4.1 Italics 132
            5.4.1.1 Songs 134
            5.4.1.2 Onscreen text 135
        5.4.2 Colours 136
        5.4.3 Abbreviations 137
        5.4.4 Numbers 139
            5.4.4.1 Time 140
            5.4.4.2 Measurements and weights 140
    5.5 Subtitling quality 141
    5.6 Exercises 144

6 The linguistics of subtitling 145
    6.1 Preliminary discussion 145
    6.2 Subtitling: translation as text localization 145
    6.3 Text reduction 146
        6.3.1 Condensation and reformulation 151
            6.3.1.1 Condensation and reformulation at word level 151
            6.3.1.2 Condensation and reformulation at clause/sentence level 154
        6.3.2 Omissions 161
            6.3.2.1 Omissions at word level 162
            6.3.2.2 Omissions at clause/sentence level 164
    6.4 Linguistic cohesion and coherence in subtitling 168
    6.5 Segmentation and line breaks 169
        6.5.1 Line breaks within subtitles 172
        6.5.2 Line breaks across subtitles 174
        6.5.3 Rhetorical spotting 175
    6.6 Exercises 177

7 Subtitling language variation and songs 178
    7.1 Preliminary discussion 178
    7.2 Marked speech and language variation 178
        7.2.1 Marked speech: a pragmatic classification 179
            7.2.1.1 Intra-speaker variation: style and register 179
            7.2.1.2 Inter-speaker variation: dialect, sociolect, slang 180
            7.2.1.3 Intra- and inter-speaker variation: entanglements 180
            7.2.1.4 Intra- and inter-speaker variation: swearwords and taboo words 181
        7.2.2 Subtitling marked speech and language variation 182
            7.2.2.1 Complexity in abundance 182
            7.2.2.2 Conflicting priorities, difficult decisions 184
            7.2.2.3 Subtitling intra- and inter-speaker variation 185
            7.2.2.4 Literary styles 186
            7.2.2.5 Forms of address 186
            7.2.2.6 Agrammaticalities 187
            7.2.2.7 Lexical variation 188
            7.2.2.8 Swearwords, expletives and taboo words 189
            7.2.2.9 Accents and pronunciation 194
    7.3 The translation of songs 195
        7.3.1 Deciding what to translate 196
        7.3.2 Deciding how to translate 199
    7.4 Exercises 200

8 Subtitling cultural references, humour and ideology 201
    8.1 Preliminary discussion 201
    8.2 The translation of cultural references 201
        8.2.1 Cultural references: what are they? 202
            8.2.1.1 Real-world cultural references 203
            8.2.1.2 Intertextual cultural references 204
        8.2.2 Cultural references: what determines their translation? 204
        8.2.3 Cultural references: translation strategies 207
    8.3 The translation of humour 217
        8.3.1 Pinning down humour 217
        8.3.2 Subtitling humour 220
            8.3.2.1 Detecting and interpreting humour 220
            8.3.2.2 Translating humour in subtitles 222
    8.4 Ideology, manipulation and (self-)censorship 238
    8.5 Exercises 241

9 Technology in motion 242
    9.1 Preliminary discussion 242
    9.2 Tools for subtitlers 242
    9.3 Machine translation and translation memory in subtitling 243
    9.4 Migrating to the cloud 245
    9.5 Exercises 248

10 References 249
    10.1 Bibliography 249
    10.2 Filmography 262

Index 266
Glossary – available on companion website
Appendices – available on companion website
Figures
1.1 Classification of subtitles according to linguistic parameters 11
1.2 Bilingual subtitles 20
1.3 Classification of subtitles according to the time available for preparation 21
1.4 Classification of subtitles according to the display mode 25
1.5 Cumulative subtitle – part 1 25
1.6 Cumulative subtitle – part 2 26
1.7 Three-dimensional subtitles 29
2.1 Typical subtitling workflow 38
2.2 Detailed dialogue list 41
2.3 Content of a typical dialogue list, as required by Netflix 42
2.4 Industry evolution 49
3.1 Multilingualism in subtitling 82
3.2 Dealing with multilingualism in subtitling 83
3.3 Running text at the bottom of the screen 85
3.4 Concurrent dialogue and onscreen text 86
3.5 Concurrent dialogue and onscreen text 86
3.6 Onscreen text and subtitles – 1 86
3.7 Onscreen text and subtitles – 2 86
3.8 Text messaging on screen 87
3.9 Text messaging on screen 87
4.1 Use of proportional lettering 98
4.2 Frames per second 102
4.3 Frame with timecode – top 104
4.4 Frame with timecode – bottom 104
4.5 Timing rules configuration in Wincaps Q4 108
4.6 Timing rules configuration in OOONA 108
4.7 Spotting before the shot change 115
4.8 Spotting after the shot change 115
5.1 & 5.2 Joanna Lumley’s Greek Odyssey, episode 4, “Mount Olympus and Beyond” 128
6.1 View [Compact Alt+3] in Wincaps Q4 170
8.1 Astérix & Obélix : Mission Cléopâtre 210
8.2 Pulp Fiction 210
8.3 A Touch of Spice 212
8.4 Baby Cart in the Land of Demons 212
9.1 OOONA’s Online Captions & Subtitle Toolkit 246
Tables
4.1 Equivalence between seconds/frames and characters, including spaces (12 cps/150 wpm) 109
4.2 Equivalence between seconds/frames and characters, including spaces (15 cps/180 wpm) 110
4.3 Equivalence between seconds/frames and characters, including spaces (17 cps/200 wpm) 111
4.4 Equivalence between seconds/frames and characters, including spaces (13 cps/160 wpm) 111
4.5 Viewing speed and distribution of gaze between subtitles and images (Romero-Fresco 2015: 338) 113
4.6 Equivalence between feet/frames and number of characters (including spaces) 116
Acknowledgements
We would like to express our gratitude to the many people who have assisted and supported us in the writing, editing and designing of this book and its companion website. For their invaluable help with some of the examples, exercises and useful advice, special thanks go to:

Amer Al-Adwan, Fatimah Aljuied, Alina Amankeldi, Paola Amante, Jackie Ball, Rocío Baños, Laura Barberis, Kathrine Beuchert, Alejandro Bolaños-García-Escribano, Isabel Caddy, Mary Carroll, Frederic Chaume, Hasuria Che Omar, Hao Chen, Claudia Corthout, Ian Crane, Sriparna Das, Max Deryagin, Lucile Desblache, Elena Di Giovanni, Sara Elshlmani, Andrés García García, Yota Georgakopoulou, Ali Gürkan, Jing Han, Tiina Holopainen, Helle Hylleberg, Huihuang Jia, Lily Kahn, Péter Krizsán, Jan-Louis Kruger, Katrien Lievois, Serenella Massida, Josélia Neves, Minako Ohagan, Eirik Olsen, Albert Feng-shuo Pai, Sara Papini, Jan Pedersen, Cécile Renaud, Nina Reviers, Isabelle Robert, Pablo Romero-Fresco, Leena Salmi, Roula Sokoli, Agnieszka Szarkowska, Ayumi Tanaka, Adriana Tortoriello, Reinhild Vandekerckhove, Gert Vercauteren, Chengcheng Wang, Rachel Weissbrod, Maddie Wontner, Xiaochen Xu, Şirin Yener, Justin Long Yuan, Patrick Zabalbeascoa, Juan Zhang, Jiaxin Zhu and Yingyi Zhuan.
We are equally indebted to many colleagues in academia and in the subtitling profession worldwide, as well as to filmmakers, production and distribution companies, professional associations, graphic designers, students and colleagues. Warm thanks also to our colleagues at the Faculty of Arts, TricS Research Group and OPEN-Expertise Centre Accessible Media and Culture at University of Antwerp, and the Centre for Translation Studies (CenTraS) at University College London.

Special thanks are due to colleagues who have given us the opportunity to make this project truly multimedia and relevant to the industry: to John Birch and John Boulton, from BroadStream Solutions, for allowing us to use their professional subtitle creation software Wincaps Q4; and to Wayne Garb and Alex Yoffe, from OOONA, so that learners can use their subtitling toolkit to practice in the cloud.

Our gratitude goes also to Louisa Semlyem and Eleni Steck, our editors at Routledge, for their unwavering support from inception through to production, and to Kelly Washbourne, the series editor, for his thorough reading of the manuscript and insightful feedback.

Finally, we would like to thank our family and friends for always being there, either in person or, in challenging times, at the other end of the camera. And, of course, more than thanks go out to Ian and Mary, for the many reasons they know best.
Permissions
Every effort has been made to secure permission to reproduce copyrighted material. Any omissions brought to our attention will be remedied in future editions.

The vast majority of videos used in this multimedia project are public domain, as indicated in the Public Domain Movies website (publicdomainmovies.net) and, as such, are not subject to copyright or other legal restrictions. The films and cartoons under this category are: 90 Day Wondering, Beneath the 12-Mile Reef, Charade, Cyrano de Bergerac, Father’s Little Dividend, House on Haunted Hill, Horror Express, It’s a Wonderful Life, Little Shop of Horrors, McLintock!, Popeye for President, Royal Wedding, Sita Sings the Blues, The Flying Deuces, The Last Time I Saw Paris, The Man with the Golden Arm, The Night of the Living Dead, The Snows of Kilimanjaro and The Stranger.

The authors and the publishers would like to thank the following copyright holders for granting their permission to use this material:

Aaron Lee, for the photograph used on the front cover of the book (unsplash.com/@aaronlee224);
Aliakbar Campwala, director of the documentary The Invisible Subtitler (youtube.com/watch?v=Pz75i6EsOto);
Chris Fetner, executive producer of the video Meridian, released under a Creative Commons license (youtube.com/watch?v=u1LTx4h7D_0);
Firdaus Kharas, founder of Chocolate Moose Media, for the videos The Three Amigos (chocmoose.com);
Jean-Luc Ducasse, producer of the film Aislados (imdb.com/title/tt0455073);
Jens Rijsdijk, director of the film These Dirty Words (jensrijsdijk.nl);
Marge Glynn, Greenpeace, for the video Turtle Journey: The Crisis in Our Oceans (youtube.com/watch?v=cQB4RAZVMf4);
Nils, from Goodnight Productions, for the video The Time Is Now! (youtube.com/watch?v=WnWTB5MCMPk);
Peter Hartzbech and Jeff Zornig, from iMotions, for the video Mobile Eye Tracking & EEG (imotions.com);
Rafeef Ziadah, for the video Hadeel, music by Phil Monsour (rafeefziadah.net/js_videos/hadeel-live-dublin);
Roland Armstrong and Victor Jamison, producers and directors of the video series TellyGiggles (youtube.com/user/TellyGiggles);
Soledad Zárate, director of the documentary You Don’t Know Me (youtube.com/watch?v=sLfFFEkyrPY and youtube.com/watch?v=v82KcQsTGIk).
How to use this book and its companion website
This multimedia project is addressed to translation trainers, students, researchers, professionals and all those interested in the practice and theory of subtitling. Eminently interactive, it consists of a book and a companion website, containing a wealth of audiovisual material and exercises. The vehicular language is English, and examples taken from existing subtitled films and TV programmes are provided in a wide range of languages, with back translations. In addition, Subtitling explores the most relevant academic challenges of this very specific form of translation and raises a number of fundamental research questions.

Writing for an international audience is certainly ambitious and means that some generalization is inevitable, since subtitling traditions vary from country to country and even from company to company. However, the degree of variation is relative and professional practices are increasingly converging, due to factors related to globalization as well as technological advancements. Most differences in professional practice do not really affect the fundamentals of subtitling, and learners who have acquired an insight into these specific issues will be able to apply this knowledge and these skills in any context.

Unique to this project is the opportunity for readers to engage in the actual practice of subtitling. To this end, a special agreement has been reached with BroadStream Solutions to allow learners to use their professional subtitle creation software Wincaps Q4, as well as with OOONA, so that learners can also practice subtitling in the cloud.

The book

The book is divided into nine chapters that cover all major aspects of the subtitling practice and profession. They all start with suggestions for a preliminary discussion and end with a selection of graded exercises, to reflect on the theory of subtitling and to have a go at real subtitling practice. The book contains only some exercises, while many other activities and materials are hosted on the companion website.

The companion website

The book is meant to be used in combination with a dedicated companion website, which contains additional resources: routledge.com/cw/diaz-cintas
You will need a code to gain access to the website, for which a question and answer method is used by Routledge. On the website portal, you will be asked to fill in a request form and, provided the answer is correct, you will then receive a token by email. Once inside the portal, the material is grouped in sections that, on the whole, coincide with the nine chapters of the book, plus a few more areas, as illustrated next:

OOONA
    Registration
    OOONA guides
    Video tutorials
    Initiation exercise
Wincaps Q4
    Registration
    Set-up
    Wincaps Q4 guides
    Video walkthrough tutorials
    Initiation exercise
Book of Exercises (all chapters)
Chapter 1
    List of exercises – 1
    Exercises
Chapter 2
    Additional material
    Preliminary discussion
    List of exercises – 2
    Exercises
Chapter 3
    Examples
    List of exercises – 3
    Exercises
Chapter 4
    Examples
    List of exercises – 4
    Exercises
Chapter 5
    List of exercises – 5
    Exercises
Chapter 6
    List of exercises – 6
    Exercises
Chapter 7
    List of exercises – 7
    Exercises
Chapter 8
    List of exercises – 8
    Exercises
Chapter 9
    List of exercises – 9
    Exercises
Extra Exercises
    List of extra exercises
    Exercises
Glossary
Appendices
    Appendix 1 – Subtitling software programs
    Appendix 2 – AVT companies
    Appendix 3 – AVT associations and groups
    Appendix 4 – Subtitling language guidelines
    Appendix 5 – Cloud-based subtitling platforms
    Appendix 6 – Adding subtitles to video
Acknowledgements
Permissions
The section OOONA contains information on how to register for a free trial version, while Wincaps Q4 includes details on how to get hold of your free copy of the software, install it on your computer and use it. A Book of Exercises (.pdf) lists all the activities designed by the authors for all the chapters, with instructions on how to exploit them. The nine chapters contain a list of the exercises created for each individual chapter together with material for the various tasks, such as video clips (.mp4), dialogue transcripts (.pdf), Wincaps files (.w32), subtitle files (.ooona), OOONA project files (.json) and the keys (.pdf) to some of the activities. Instructions on how to work with the exercises can be found in the comprehensive Book of Exercises or the individual List of exercises, where all the tasks for that particular chapter are listed, including details on where the material for a given activity (e.g. video, subtitle file, dialogue list) is located on the companion website. The website also features a glossary of terms commonly used in the profession and six appendices with information on the industry and the technology employed in this field. It concludes with a section on copyright issues and acknowledgements.

OOONA

One of the highlights of this multimedia project is the inclusion of exercises to be exploited with the professional cloud-based subtitling platform developed by OOONA (ooona.net). Committed to addressing any subtitling or captioning needs, OOONA have developed a wide range of tools for the creation, translation, review and burning of subtitles and captions. OOONA’s top-of-the-line, cutting-edge technology products are trusted and used by thousands of users around the world. The company has also developed OOONA EDU (ooona.net/ooona-edu), the first educational cloud-based platform to be used by educational centres in their training of new subtitlers.
On the companion website, there is an area with information on how to register for a free trial account [Web > OOONA > Registration]. Once you have registered, you will be able to create subtitles from scratch, to translate from templates and to review the subtitles created by someone else. To facilitate your first approach to OOONA, you should start with the Initiation Exercise [Web > OOONA > Initiation exercise]. To help you with the process of creating subtitles, translating with the help of templates and revising subtitles, you may find useful the various easy guides [Web > OOONA > Guides] as well as the video tutorials [Web > OOONA > Video tutorials]. To work in OOONA with some exercises, you will have to download the appropriate material from the companion website and save it on your computer.
Wincaps Q4

Equally exciting is the inclusion of Wincaps Q4, a professional subtitling software program developed by Screen Subtitling Systems, part of BroadStream Solutions (subtitling.com) and widely used in the broadcast market worldwide, including numerous television stations, access services companies and language service providers specializing in subtitling. Wincaps is also the leading subtitling program for education, used by many universities and colleges around the world to train future subtitlers. The companion website contains a section where you will find details about how to download and activate Wincaps Q4 [Web > Wincaps Q4 > Registration]. This version of Wincaps will be valid for three months and will only work on Windows PC and on one computer. Once you have successfully installed the program on your computer, you can start producing your own subtitles. To facilitate your first approach to Wincaps, we suggest you start with the Initiation Exercise [Web > Wincaps Q4 > Initiation exercise]. To help you with the process of creating subtitles, you may find useful the various easy guides [Web > Wincaps Q4 > Guides] as well as the video tutorials [Web > Wincaps Q4 > Video tutorials]. A datasheet listing the full specifications of Wincaps Q4 can be found on [Web > Wincaps Q4 > Set up > Wincaps Q4 specifications]. Comprehensive instructions on using Wincaps Q4 can be found on the in-product Help menu as well as on [Web > Wincaps Q4 > Set up > Wincaps Q4 user guide].
Important: This free Wincaps Q4 version is to be used exclusively for educational purposes to support the exercises in this book; it is not licensed for any form of commercial use. To purchase a personal subscription when this free version expires, just visit the BroadStream Solutions website: broadstream.com/store

To work with Wincaps Q4, you will have to download the appropriate material from the companion website and save it on your computer.
1
Reconceptualizing subtitling
1.1 Preliminary discussion

1 What is meant by the umbrella concept of audiovisual translation?
2 Which professional practices do you think this term encompasses?
3 How would you define the practice of subtitling?
4 Do you consider subtitling to be a case of translation or adaptation? What are the reasons for your choice?
5 Do you think that you, personally, watch more subtitled productions now than a few years back? If so, why?
6 Is subtitling popular in your country? In which contexts?
7 Explain how subtitling may have been affected by social media.
8 Thinking about video games, 3D, virtual reality and immersive environments, how do you imagine the subtitles of the future?

1.2 The power of the moving image
In recent decades, audiovisual translation has been, without a doubt, one of the most prolific areas of research in the field of Translation Studies, if not the most prolific one. Although it was ignored in academic and educational circles for many years, audiovisual translation (AVT) has existed as a professional practice since the invention of cinema at the turn of the 20th century. However, it was not until the mid-1990s, with the advent of digitization and the proliferation and distribution of audiovisual materials, that it began to gain scholarly prominence and boost its number of acolytes.

In a technologically driven multimedia society like the present one, the value of moving images, accompanied with sound and text, is crucial when it comes to engaging in communication. The transition from the paper page to the digital page has brought about a number of substantial changes that have had a great impact not only on the way in which information and messages are produced and transmitted but also on the role played by users and consumers in this new and dynamic mediascape. In our working and personal lives, we are surrounded by screens of all shapes and sizes: television sets, silver screens, desktop computers, laptops, videogame consoles and smart phones are a common feature of our socio-cultural environment, heavily based on the omnipresence and omnipotence of the image. Enmeshments with technology punctuate
our daily routine as we experience a great deal of exposure to screens – at home, in our work place, on public transport, in libraries, bars, restaurants, cinemas – and consume vast amounts of audiovisual productions to enjoy ourselves, to obtain information, to carry out our work, to learn and study, and to develop and enhance our professional and academic careers. The audiovisualization of communication in our time and age is clearly evident in the ubiquity of the moving images and their impact on our lives. This tallies well with the constant linear increase that has been observed in the use of computers and digital devices, particularly among the younger generations (Ferri 2012). In his discussion on the new divide between digital natives and digital immigrants (the latter a.k.a. Gutenberg natives), Prensky (2001: 1) argues that today’s students, making special reference to graduates in the USA, have spent their entire lives surrounded by and using computers, video games, digital music players, video cams, cell phones, and all the other toys and tools of the digital age. Today’s average college grads have spent fewer than 5,000 hours of their lives reading, but over 10,000 hours playing video games (not to mention 20,000 hours watching TV). Computer games, e-mail, the Internet, cell phones and instant messaging are integral parts of their lives.
Along the same lines, but in the case of the UK, Elks (2012: online) highlights, “children will spend an entire year sat in front of screens by the time they reach seven”, with an average of 6.1 hours a day spent on a computer or watching TV and with “some ten and 11-year-olds having access to five screens at home”. In the battle between the paper and the digital page, it is the latter that seems to be winning and, coupled with the relative affordability of technology and the pervasiveness of the internet, is making a daily occurrence of multimodal communication (Chapter 3) for millions of viewers around the globe.

All forms of communication are based on the production, transmission and reception of a message among various participants. Though, on the surface, this seems to be a rather straightforward process, human interaction of this nature can be complex even when the participants share the same language, and more so when they belong to different linguacultural communities. That is why practices like translation and interpreting have existed for centuries, to facilitate communication and understanding among cultures, as well as to promote networks of power and servitude. To a large extent, translation can be said to be running parallel to the history of communication, and it has experienced a similar transition in recent decades from the printed page to the more dynamic digital screen. From this perspective, and given the exponential growth experienced by audiovisual communication, the concomitant boom witnessed in the volume of audiovisual translation comes as no surprise.

Technological advancements have had a great impact on the way in which we deal with the translation and distribution of audiovisual productions, and the switchover from analogue to digital technology at the turn of the last millennium proved to be particularly pivotal and a harbinger of the things to come. We are in the midst of the fourth industrial revolution (Schwab 2016), a technical transformation that is fundamentally changing the way in which we live, work and relate to one another. In an age defined by technification, digitization and internetization, the world seems to have shrunk, contact across languages and cultures has accelerated and exchanges have become fast and immediate. Symbolically, VHS tapes have long gone, the DVD and VCD came and went in what felt like a blink of an eye,
Blu-ray never quite made it as a household phenomenon, 3D seems to have stalled and, in the age of the cloud, we have become consumers of streaming, with the possession of actual physical items being almost a thing of the past.

The societal influence of AVT has expanded its remit in terms of the number of people that it reaches, the way in which we consume audiovisual productions, and the nature of the programmes that get to be translated. If audiovisual translation originally came into being as a commercial practice aiming to enhance the international reach of feature films, the situation has now changed quite drastically as the gamut of audiovisual genres that are translated nowadays is virtually limitless, whether for commercial, ludic or instructional purposes: films, TV series, cartoons, sports programmes, reality shows, documentaries, cookery programmes, current affairs, edutainment material, commercials, educational lectures and corporate videos to name but a few. The digital revolution has also had an impact on the very essence of newspapers, which have migrated to the web and now host videos on their digital versions that are usually translated with subtitles when language transfer is required.

Consumption of audiovisual productions has also been altered significantly, from the large public spaces represented by cinema, to the segregated family experience of watching television in the privacy of the living room, to the more individualistic approach of binge watching in front of our personal computer, tablet or mobile telephone. The old, neat distinction between the roles of producers and consumers has now morphed into the figure of the prosumer, a direct result of the new potentiality offered by social media and the digital world. In addition to watching, consuming and sharing others’ programmes, netizens are also encouraged to become producers and create their own user-generated content that can be easily assembled with freely available software programs and apps on computers, smartphones or tablets, uploaded to any of the numerous sharing platforms that populate the ether and distributed throughout the world virtually instantaneously.
1.3 From the periphery to the centre
For years, the activity of translating audiovisual programmes was perceived by many academics as falling short of translation proper because of all the spatial and temporal limitations imposed by the medium itself (Chapter 4), which in turn constrain the end result. They preferred to talk about adaptation, an attitude that stymied the early debates about the place of AVT in Translation Studies (TS) and may explain why the field was ignored by TS scholars until relatively recently. In her attempt to dispel inherited assumptions that have had the pernicious effect of academically marginalizing certain translational practices, O’Sullivan (2013: 2) remarks, “[t]ranslation is usually thought of as being about the printed word, but in today’s multimodal environment translators must take account of other signifying elements too”. This warning is echoed by Pérez-González (2014: 185), who bemoans the excessive emphasis placed by AVT researchers on the linguistic analysis of often decontextualized dialogue lines, and for whom the need “to gain a better understanding of the interdependence of semiotic resources in audiovisual texts has become increasingly necessary against a background of accelerating changes in audiovisual textualities”. In this respect, greater scholarly attention to the interplay between dialogue and the rest of the semiotic layers that configure the audiovisual production can only be a positive, if challenging, development for the discipline.
As instances of multimodal productions, audiovisual programmes rely on the interplay between two codes, images and sound; and whereas literature and poetry evoke, audiovisual productions represent and actualize a particular reality through specific images that have been edited together by a director. In this communicative context, meaning is conveyed not only through the dialogue exchanges between characters, but also through images, gestures, camera movements, music, special effects, etc. Information is thus transmitted simultaneously through the acoustic and the visual channels and conveyed through a wide range of signifying codes, articulated according to specific filmic rules and conventions (Chapter 3). As such, Chaume (2012: 100) defines the audiovisual text as “a semiotic construct woven by a series of signifying codes that operate simultaneously to produce meaning”, which can be transmitted through the acoustic (linguistic, paralinguistic, musical, special effects and sound position codes) and the visual channels (iconographic, photographic, shot, mobility, graphic and montage/editing codes). Many of the challenges faced by audiovisual translators result from the interaction of the various codes and from the fact that, in most cases, the only code they can work with is the linguistic one in the form of dialogue, voiceover narration or written insertions in the images, as will be detailed in the following chapters.

Subtitling, along with dubbing and voiceover, is a practice constrained by the need to reach synchrony between the linguistic target text (TT) and these additional translational parameters of images and sound as well as time. The subtitles should not contradict what the characters are doing or saying on screen, and the delivery of the translated message should coincide with that of the original speech. In addition, subtitles entail a change of mode from oral to written and resort frequently to the condensation and omission of lexical items from the original (§6.3) in order to maintain temporal synchrony with the dialogue. As far as space is concerned, the dimensions of the screens are ultimately finite, and the TT will have to accommodate the width of the screen and the agreed safe area (Chapter 4). It is the impact of these rather strict parameters on the textuality of the subtitles that was traditionally used as evidence that justified the lack of academic interest in this activity and, as highlighted by Delabastita (1989: 213) in his early paper, was responsible “for the fact that translation studies of all disciplines have been rather reluctant to include film translation among their subjects of study”.

The expansion of the concept of translation and thereby the object of study of TS from essentially verbal texts to audiovisual texts was much slower than the development of AVT as a translation practice. Jakobson (1959) is often cited as being one of the first academics to open up the field of TS, when he famously established three types of translation, namely intralingual (or rewording), interlingual (or translation proper) and intersemiotic (or transmutation). One of the early scholars to discuss the significance for translation of the multimodal nature of the source text (ST) is Reiss (1971/2000), who in her text typology for translators distinguishes three initial groups, namely (1) content-focused texts, (2) form-focused texts, and (3) appeal-focused texts.
To these, she adds a fourth, overarching category that she refers to as audiomedial texts, which, in her own words, “are distinctive in their dependence on non-linguistic (technical) media and on graphic, acoustic, and visual kinds of expression. It is only in combination with them that the whole complex literary form realizes its full potential” (ibid.: 43), as in radio and television scripts, songs, musicals, operas, and stage plays. Audiomedial texts seem then to be aimed at hearers, since they “are written to
be spoken (or sung) and hence are not read by their audience but heard” (ibid.: 27). From the perspective of audiovisual translation, the term is clearly problematic, and Reiss’s taxonomy is also rather wanting as any reference to the main film translation modes, whether dubbing or subtitling, is conspicuously absent in her work and the emphasis is placed, symptomatically, on the ‘hearer’ rather than the ‘viewer/reader’. A decade later, she revisited the term and changed it to multimedial (Reiss 1981), allowing her to include texts like comics and advertising material, which resort to visual but not acoustic elements.

In an attempt to overcome the limitations of Reiss’s terminological framework of reference and to dispel any potential confusion, Snell-Hornby (2006: 85) later coined four different terms for four different classes of text that all depend on elements other than the verbal:
1 Multimedial texts (in English usually audiovisual) are conveyed by technical and/or electronic media involving both sight and sound (e.g. material for film or television, sub-/surtitling);
2 Multimodal texts involve different modes of verbal and nonverbal expression, comprising both sight and sound, as in drama and opera;
3 Multisemiotic texts use different graphic sign systems, verbal and nonverbal (e.g. comics or print advertisements);
4 Audiomedial texts are those written to be spoken, hence reach their ultimate recipient by means of the human voice and not from the printed page (e.g. political speeches, academic papers).
In all cases, we are clearly dealing with texts that go beyond language and that in Translation Studies “until well into the 1980s were hardly investigated as a specific challenge for translation” (ibid.). Indeed, one of the earliest TS scholars to theorize the various semiotic layers of audiovisual productions and question limitative definitions of translation was Delabastita (1989: 214), who was aware of the risks involved in having a restrictive and normative definition of translation that “is in danger of being applicable to very few, well-selected cases, and of being unsuitable for a description of most actual fact”. He therefore rejects the traditionally straightjacketed definition in favour of a more flexible and operational notion.

Translation must be understood from a flexible, heterogeneous and non-static perspective, one that encompasses a broad set of empirical realities and acknowledges the ever-changing nature of this professional practice. AVT is a prime example of such practices and the continuous process of change to which they are subjected. Furthermore, AVT has contributed greatly to the questioning and reframing of long-established tenets such as text, authorship, original work, translation unit or fidelity to the original that are now well established in TS. These days, translation is generally perceived as a very flexible and inclusive concept, capable of accommodating new professional realities rather than disregarding practices that do not fit into a corseted, outdated notion of a term that was coined many centuries ago, when the cinema, the television and the computer had not yet been invented.

Once it is acknowledged that AVT is a form of translation, the next step is deciding whether audiovisual translation is the generic term that encompasses all its different manifestations. The terminology used in the field has a history of its own. Lacking the tradition of other more established translation domains such as medical, literary
or legal translation, consensus on which hypernym to use to encapsulate translations falling within the audiovisual practice has proved rather elusive. In works published in the 1980s and 1990s, adjectives like constrained (Titford 1982; Mayoral Asensio et al. 1988) and subordinate (Díaz Cintas 1998) translation, which highlight the complexity of AVT by foregrounding that the translator’s task is inhibited by the interplay between the various semiotic layers (images, music, dialogue, etc.), were commonly used but soon began to be criticized for their restrictive connotations. Other scholars preferred terms such as film translation (Snell-Hornby 1988), cinema translation (Hurtado 1994) or film and TV translation (Delabastita 1989) but, as the field of study kept expanding to include other types of programmes (e.g. sitcoms, documentaries, cartoons, etc.) and other media (e.g. cinema, TV, VHS, DVD, VCD, Blu-ray, internet), these concepts also became problematic.

Other nomenclatures, all of them bringing in different nuances, have been coined over the years. A denomination that has caught on in English but not so much in other languages is that of screen translation (Mason 1989). By opening up the concept to include screens as an essential component – be they for television, cinema, computers or smartphones – such an umbrella term offers the potential of encompassing the translation of other products that had been excluded from more restricting conceptions such as film translation: computer games, software programs, mobile applications and webpages. Multimedia translation (Gambier and Gottlieb 2001), multimodal translation (Remael 2001) and multidimensional translation (Gerzymisch-Arbogast and Nauert 2005) are other appellations that have known some currency in the past to account for products where the message is conveyed through multiple media and channels. Yet another concept that has been employed in academic circles is that of transadaptation (Gambier 2003; Neves 2005), in an attempt to justify the hybrid nature of the process characterizing all the different audiovisual translation types.

This terminological fluctuation is no more than a reflection of the changing times and, far from representing a barrier to communication, it can be interpreted as a clear sign of the desire of many translation scholars to maintain an open and flexible approach to the object of study; one that not only acknowledges but also accommodates the new realities emerging in the translation industry. Nowadays, audiovisual translation, abbreviated to AVT, has made it as the standard referent widely used across languages to refer to the field. This coinage has the advantage of including the semiotic dimension (audio and visual) of the product to be translated and is employed to denote the various translation practices (§1.4) implemented in the audiovisual industry. In the beginning it was limited to refer to interlingual practices in which there was a transfer from a source to a target language (TL) but, with the consolidation in the late 1990s of the then novel activities of intralingual subtitling for people who are D/deaf or hard-of-hearing (SDH) and audio description (AD) for people who are blind or partially sighted, the concept quickly expanded to accommodate these forms of textual transformations too.
However, this caused some further terminological disarray as neither of the aforementioned practices, at the beginning at least, implied the interlingual transfer from a source to a TL, one of the traditional defining features of translation activity. The wider notion of accessibility to the media has been therefore put forward to encompass the newer practices (Díaz Cintas 2005a). In societies that aim to be more just and inclusive, accessibility services fulfil a social function by making audiovisual productions available to people with sensory impairments, who otherwise would not be able
to enjoy them. In this sense, whether the communication obstacle consists of a linguistic or a sensorial barrier, the ultimate goal of the translation commission can be said to be identical, i.e. to grant access to an otherwise hermetic source of information and entertainment. From this viewpoint, accessibility becomes the common denominator that underpins all of these practices, be they interlingual translation or not (Greco 2018).

Finally, interactive software programs, video games, virtual reality and immersive environments keep pushing the nature of audiovisual translation into new territories, as documented by Chaume (2013). The impact of growing practices like cybersubtitling (Díaz Cintas 2018), totally deregulated and free from commercial imperatives, is also being felt in the way subtitles are being produced, distributed and consumed, both in terms of content as well as layout. Cybersubtitles subsume myriad different subtitles found on the internet that can be purposely requested by some collectives, i.e. crowdsourced, or generated on a voluntary basis, and the individuals behind their production can be either amateurs or professionals. Three main sub-groups can be distinguished: (1) fansubs, created by fans; (2) guerrilla subtitles, produced by people engaged in political causes; and (3) altruist subtitles, usually commissioned and undertaken by individuals with a close affinity to a project. All these changes are ultimately representative of a broader cultural shift in which different types of audiovisual translation and other translation modes are converging, giving rise to new hybrid forms and catering for different and well-defined target audiences. Ultimately, the key to successful audiovisual translation research and practice is insight and understanding of the product and its expected function, combined with a desire to learn and willingness to adapt.
1.4 The many instantiations of audiovisual translation
Used as an umbrella term, audiovisual translation subsumes a wide variety of translation practices that differ from each other in the nature of their linguistic output and the translation strategies on which they rely. The various ways in which audiovisual productions can be translated into other languages have been discussed by many authors over the years, but the typologies presented by Chaume (2013) and Díaz Cintas (2020) are perhaps two of the most recent and complete. What follows is a panoptic overview of each of the main modes.

In the main, two fundamental approaches can be distinguished when dealing with the linguistic transfer in AVT: either the original dialogue soundtrack is substituted with a newly recorded or live soundtrack in the TL (i.e. revoicing) or it is converted into written text that appears on screen (timed text). Within these two all-encompassing approaches, further sub-categorizations can be established. Thus, revoicing subsumes interpreting, voiceover, narration, dubbing, fandubbing and audio description.

In simultaneous or consecutive interpreting, the source speech is transferred by an interpreter, who listens to the original and verbally translates the content. Though currently restricted to the translation of live speeches and interviews, it used to be a fairly common practice during screenings at film festivals, when the film prints arrived too late and there was not enough time to proceed to their subtitling.

Voiceover (VO) consists in orally presenting the translation of the ST speech over the still audible original voice. Usually, the speaker is heard for a few seconds in the foreign language (FL), after which the volume of the soundtrack is dimmed, so that
the original utterances can still be heard in the background, and the translation in the TL is then overlaid. The translation typically concludes whilst the speaker continues talking for a few more seconds, so that the audience can clearly hear the FL once more. Closely associated with the translation of factual genres, such as documentaries and interviews, it is hailed by some authors (Franco et al. 2010) as a transfer mode that faithfully respects the message of the original text, an assertion that is, of course, highly debatable. Also known as lektoring in some countries – usually done by a man, who reads all the dialogue in a monotone voice – this way of translating audiovisual materials is common for television in Poland, Russia and a few other countries in Eastern Europe, while cinemas prefer to show films subtitled or dubbed.

Narration is also used primarily for the translation of non-fiction and differs from voiceover in that the original speech is completely removed from the soundtrack and replaced by a new voice, in translation. The ensuing translation is often roughly synched with the visuals on screen, and the degree of freedom taken in both the translation and the recording of the new TT can differ greatly as TV documentaries, for instance, are often adapted to fit different time slots.

Dubbing, also known as lip-sync and famously referred to as traduction totale by Cary (1960) because of its many linguistic challenges, consists in the substitution of the dialogue track of an audiovisual production with another track containing the new lines in the TL. It is widely practised in Brazil, China, France, Germany, Japan, Italy, Thailand, Turkey and Spain, among many others. A fictional world within a broader fictional world that is cinema, dubbing’s ultimate fabrication is to make viewers believe that the characters on screen share the same language as the viewer. To achieve this illusion, three types of synchronization need to be respected: (1) lip synchrony, to ensure that the translated sounds fit into the mouth of the onscreen characters, particularly when they are shown in close-up, (2) isochrony, to guarantee that the duration of the source and the target utterances coincide in length, especially when the characters’ lip movements can be seen, and (3) kinetic synchrony, to ensure that the translated dialogue does not enter into conflict with the thespian performance of the actors and that the voices chosen for the new recording are not at odds with the personal attributes and the physical appearance of the onscreen characters.

Fandubbing refers to the dubbing or redubbing, usually done by amateurs or enthusiasts rather than by professional actors, of audiovisual productions that have not been officially dubbed or whose available dubbed versions are deemed to be of poor quality. Mostly interlingual, some of them are also intralingual, in which case the intent is primarily humorous and they are then known as fundubs (Baños 2019).

Finally, audio description (AD) is an access service that, in the words of Remael et al. (2015: 9–10):

offers a verbal description of the relevant (visual) components of a work of art or media product, so that blind and visually impaired patrons can fully grasp its form and content. ... The descriptions of essential visual elements ... have to be inserted into the “natural pauses” in the original soundtrack of the production. It is only in combination with the original sounds, music and dialogues that the AD constitutes a coherent and meaningful whole.
This additional narration describes all visually conveyed information in order to help people with sight loss to follow the plot of the story, access the body language and
facial expressions of the characters as well as their outward appearance and sometimes their psychology, and identify the locations of the action. It also covers as much of the action on screen or stage as can be managed in a linear narration and identifies the source of sounds that might create confusion if they are only perceived aurally (a squeaking sound could be produced by a bird or the hinges of a door). AD can be recorded and added to the original soundtrack, as is usually the case for film and TV, or it can be performed live, as in the case of live stage performances.

Sometimes AD is combined with audio subtitling (AST), an aural rendering of written subtitles or surtitles, to which people with sight loss would not have access otherwise. The subtitles can be read by the same voice narrating the AD or by another voice(s). Occasionally, the audio subtitles are adapted to compensate for the omissions that inevitably occur in interlingual subtitles. Surtitles with live events are often incorporated into the AD or an audio introduction. The latter, an access service used in combination with AD, is a summary of the main plot, including information about the characters and settings, usually of a live event, delivered live or in recorded form before the event takes place. European and nation-specific legislation aimed at encouraging the provision of assistive services in order to enhance access to audiovisual media for people with sensory disabilities has allowed for the quantitative expansion of AD, especially in public service broadcasting, but also on DVDs, in cinemas, theatres and museums and, more recently, on the internet.

The second main approach to AVT consists in adding a written text to the original production, for which some players in the industry, like Netflix, have started to use the umbrella term timed text. These fleeting chunks of text correspond to usually condensed, synchronized translations or transcriptions of the original verbal input found in the source language (SL). As a superordinate concept, timed text can be either interlingual or intralingual, and it comprises the following related practices: subtitling, surtitling, subtitling for people who are D/deaf or hard-of-hearing, live subtitling and cybersubtitling.

Interlingual subtitling, the main topic of this book, may be defined as a translation practice that consists in presenting a written text, generally on the lower part of the screen, that aims to recount the original dialogue exchanged among the various speakers, as well as all the other verbal information that is transmitted visually (letters, inserts, graffiti, text messages, inscriptions, placards, and the like) and aurally (songs, voices off, voiceover narration). In some languages, like Japanese, subtitles can be presented vertically and then tend to appear on the right-hand side of the screen. This is the preferred audiovisual translation mode in countries like Dutch-speaking Belgium, Croatia, Greece, the Netherlands, Portugal and the Scandinavian nations, among many others.

All subtitled programmes are made up of three main constituents: the spoken word, the image and the subtitles. The interaction of these three components, along with the viewer’s ability to read both the images and the written text at a particular speed, and the actual size of the screen, determine the basic characteristics of the audiovisual medium.
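A minimal sketch may help to make the notion of timed text more concrete. In the widely used SubRip (.srt) exchange format – chosen here purely for illustration, and with invented wording – each subtitle, or cue, pairs an in time and an out time with one or two lines of condensed text:

    1
    00:01:04,120 --> 00:01:07,000
    I told you already:
    she's not coming back.

The two timecodes (hours:minutes:seconds,milliseconds) determine exactly when the subtitle appears and disappears, and the wording is condensed so that it can be read comfortably within that interval.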
Subtitles must appear in synchrony with the images and dialogue, provide a semantically adequate account of the SL dialogues, and remain displayed on screen long enough for the viewers to be able to read them (Chapter 4). Surtitling, also known as supertitling in the USA and supratitling, is a close relative of subtitling and refers to the translation, across languages, or transcription, within the same idiom, of dialogue and lyrics in live events such as operas, musical shows, concerts, conferences and theatre performances. They were first developed by the Canadian Opera Company in Toronto, and the first production in the world to be accompanied by surtitles was the staging of Elektra in January 1983. Surtitles were originally projected onto an LED display, usually located above the stage. Though some venues continue with this approach, the enormous dimensions of many prosceniums mean that titles projected on such strip screens risk being invisible or illegible to some sections of the house. An alternative system that many opera houses and theatres have embraced has seen the installation throughout the venue of small, individual computerized screens that are fixed in the seat in front of the patron. Known as seat-back title screens, these individualized contraptions give the audience greater power as they allow surtitles to be simultaneously provided in more than one language. To avoid invading other patrons’ space, these display screens are normally designed so that one cannot see the titles on adjacent screens. In other cases, surtitles are integrated into the stage design, and new experiments involve the use of glasses onto which the surtitles are projected. In a similar way to subtitles – in fact, many in the profession call them subtitles or titles – their aim is to convey the overall meaning of what is being enunciated or sung, whilst complying with the various constraints resulting from time and space limitations. On some occasions, they may add some clarifications, like the characters’ names, so that the audience finds it easier to follow the diegesis. In terms of appearance, they can scroll from right to left but most of the time they are presented like pop-up subtitles of two (or three) lines, which is acknowledged by some to be less distracting for the audience. Given that we are dealing with live performances, timing the subtitles is usually one of the trickiest issues. The subtitles are usually prepared beforehand but launched live by a technician or titler so that they can follow the rhythm and delivery of the live performance as closely as possible, with recent software allowing the projectionist to switch or skip surtitles as required.

The other major assistive service in AVT, subtitling for people who are D/deaf or hard-of-hearing (SDH), a.k.a. captioning, is a practice that consists in presenting on screen a written text that accounts for the dialogue and its paralinguistic dimension, as well as for music, sounds and noises contained in the soundtrack, so that audiences with hearing impairments can access audiovisual material. As opposed to the next category, this type of subtitling is always done for audiovisual programmes that have been pre-recorded. Live subtitling is the production of subtitles for live programmes or events, which can be achieved by several means (§1.5.2). This type of subtitling can be both intralingual and interlingual and be realtime, as in a sports programme, or semi-live, as in some sections of the news, where a script of the content is usually made available shortly before the broadcast. Traditionally, professionals used stenotype or shorthand techniques and different keyboards to transcribe or translate the original dialogue, but these days respeaking, or speech-based live subtitling, is gaining ground in the industry.
The democratization of technology has acted as a fillip for the rise of amateur practices on the internet, giving birth to a raft of new translation operations that are technically similar to subtitling, though still exhibiting some remarkable differences in terms of layout and activated translation strategies, as discussed by Díaz Cintas (2018), who groups them under the broader concept of cybersubtitling.
The next section offers a taxonomy of the many different types of subtitles that can be found in today’s audiovisual landscape, structured according to various parameters. The book then focuses on the challenging field of (commercial) interlingual subtitling. However, for those wishing to expand their horizons beyond this specialization, two comprehensive and up-to-date resources covering the booming field of AVT in its entirety, including media accessibility, are the works by Pérez-González (2019) and Bogucki and Deckert (2020).
1.5 Classification of subtitles
As already discussed, subtitles seem to be ubiquitous in our current digital society, fulfilling countless roles and taking different shapes. Beyond their obvious translation role, subtitles are also used to enhance search engine optimization, as Google ranks pages higher if they contain a link to video content and the keywords within the subtitles are searchable, thus boosting a company’s search engine ranking. Trying to classify subtitles into neat boxes may seem a futile activity, yet it can be a good exercise in order to better appreciate their nature and social importance. In this sense, different typologies can be established depending on the criteria that are used at the outset. One of the main provisos to bear in mind is the fact that the fluidity of the mediascape that surrounds us, together with subtitling’s close relationship with technology, and the speed at which the latter develops, make it rather utopian to be able to come up with a fixed classification. The best way to gain as comprehensive an overview as possible of the many different types in existence is therefore to look at them in manifold ways. To this end, subtitles have been grouped according to the following six criteria: (1) linguistic, (2) time available for preparation, (3) display mode, (4) technical parameters, (5) methods of projection, and (6) medium of distribution.
1.5.1 Linguistic parameters
One of the most traditional classifications of subtitles focuses on their linguistic dimension. From this perspective, as illustrated in Figure 1.1, the following types exist:
Figure 1.1 Classification of subtitles according to linguistic parameters
Intralingual subtitling, known by some as same-language subtitling (SLS), involves a shift from oral to written but, as it is a monolingual activity that remains within the same language, there has been a reluctance in some quarters to refer to this practice as translation. Though these subtitles can also target hearers, the main practice that is usually associated with intralingual subtitles is that of subtitling for people who are D/deaf or hard-of-hearing. Developed in order to ensure greater democratic access to audiovisual programming for people with hearing impairments, this variety is also known as (closed) captioning in American English. During the analogue period, these subtitles, broadcast through Ceefax and Teletext in Europe, were aired on television by means of an independent signal activated only by those interested, who had to access pages 888 or 777 of the teletext service. In North America they were transmitted on what is known as line 21. With the digital switchover in the 2000s, whereby analogue terrestrial television was gradually replaced by digital terrestrial television, TV providers offer a different and upgraded text service based on higher-quality images, a better level of interactivity and faster data rates. The new text service is usually available via a button on the set-top box’s remote control.

In a nutshell, SDH provides text for any audible information contained in a film or video, also known as audibles. Subtitlers thus convert into written text the dialogue exchanges heard in the soundtrack, indicating who is saying what and ideally incorporating all paralinguistic information that accompanies the actors’ utterances and is relevant for the appreciation of the message: emphasis, ironic statements, tone, speech impediments, prosody, accents and use of FLs, among others. They also reflect any other relevant sound features that can be heard and are important for the development of the plot, the understanding of the storyline or the creation of an atmosphere and that a hearing-impaired viewer cannot access directly from the soundtrack, e.g. instrumental music, songs, a telephone ringing, laughter, applause, a knock on the door, the revving of an engine and similar sound effects and environmental noises. On certain television stations, the text changes colour depending on the person who is talking or the emphasis given to certain words or expressions within the same subtitle. Other broadcasters, publishers of DVDs and Blu-rays, and over-the-top (OTT) operators prefer to use labels to identify the speakers. Although they are usually presented at the bottom of the screen, in subtitles of two and, more rarely, three or four lines, they lend themselves to changeable positioning, as it is possible to move them to the left or right of the screen when it is necessary to identify speakers or to indicate where an off-screen sound is coming from.

SDH, including live subtitling, has spread widely in recent decades and continues to develop, both for pre-recorded programmes and live events, thanks to its greater visibility in society and the success achieved by pressure groups campaigning for the interests of this sector of the audience. The fruit of their work is evident from the announcement and passing of new legislation in many countries, obliging television channels to broadcast a certain percentage of their programmes with this access service.
In terms of regulations at an international level, the United Nations (2006), in Article 30 of the Convention on the Rights of Persons with Disabilities, requires that:

States Parties recognize the right of persons with disabilities to take part on an equal basis with others in cultural life, and shall take all appropriate measures to ensure that persons with disabilities ... enjoy access to television programmes, films, theatre and other cultural activities, in accessible formats.
In the European Union, the celebration of the Year of Disabled People in 2003 helped in great measure to give increased visibility to the issue of accessibility to audiovisual media, particularly in those countries that had been lagging behind (Neves 2005). The first piece of legislation passed at European level specifically mentioning accessibility to audiovisual programmes is Directive 2007/65/EC from December 2007, followed by the Audiovisual Media Services Directive 2010/13/EU from March 2010, both issued jointly by the European Parliament and the European Council. Although not a legal obligation, they prompt Member States to encourage media service providers under their jurisdiction to ensure that their services are made more accessible to people with visual or hearing impairments, including through sign language interpreting, subtitles and audio description (European Commission 2018). In April 2019, the new EU Accessibility Act was adopted, providing a broad legislative basis for the provision of access services in the European Union (Remael and Reviers 2020).

As regards television broadcasting, the volume of SDH has undergone spectacular growth in the last decade. In countries like the United Kingdom, where SDH has been on offer since the 1980s, the Communications Act 2003 included, among other measures, provision for the requirements of access services for sensory-impaired viewers. The UK’s communications regulator, the Office of Communications (Ofcom), is currently the agency in charge of promoting and monitoring the offer in this area, requiring television channels to gradually increase the volume of assistive services over a ten-year period, according to the following targets: SDH (80%), sign language interpreting (5%) and AD (10%) for all stations with large audience shares (Ofcom 2017). Whilst the first initiatives concentrated on raising the volume of SDH provided by the broadcasters, Ofcom’s policies today have shifted to focus more on the slippery topic of the quality of the subtitles, while still ensuring that the volume of subtitled programming increases on all channels. As most complaints on the part of the viewers are related to live subtitles, Ofcom decided to explore this area in greater detail and in 2014 published its first report on the quality of live subtitling on television (Ofcom 2014). In the same country, the BBC (British Broadcasting Corporation) is, without a doubt, one of the leading broadcasters when it comes to SDH, having committed itself ahead of regulations, back in 1999, to subtitling 100% of its programmes on its main channels by 2008, and having succeeded in the endeavour (BBC 2008).

In addition to the SDH provision offered by many TV stations around the world, the theatrical, DVD, VCD, Blu-ray and video on demand (VOD) markets have also contributed to the exponential growth of this type of subtitling, making it more widely and easily available. Initiatives like yourlocalcinema.com, in the UK, have channelled their efforts into making cinema more enjoyable for viewers with hearing loss, and the internet is also joining the trend.
Streaming players like Amazon (n.d.: online) portray themselves as “a customer-obsessed company and captions help ensure a consistent viewing experience for all customers, including those who might be hearing-impaired, are non-native English speakers, or prefer to view videos without sound”, while YouTube has experimented with the addition of automatic captions to one billion of their videos (Edelberg 2017), with rather disappointing results, as decried by the ensuing #NoMoreCraptions campaign (Griffin 2017). To date, targets and regulations in most countries are limited to the broadcasting mediascape only. Yet, as a sign of things to come, legislation has been drafted recently in the UK, in the form of the Digital Economy Act 2017, compelling on-demand broadcasters operating on the internet to include subtitles, AD and signing in their programmes (Wilkinson-Jones 2017). If successfully applied, this should have an enormous impact on the volume, if not the quality, of accessible services available to people with sensory impairments. On a pedagogical note, few educational institutions around the globe offer programmes of study with components that touch on SDH or respeaking, and the field is still to be developed.

The second group of intralingual subtitles consists of those specifically devised as a didactic tool for the teaching and learning of FLs, by students and also an ever-increasing number of migrants around the world who have the opportunity of learning the language of their host countries by watching subtitled programmes broadcast on television, published on DVD or streamed via the internet. Some firms and distribution companies have recognized this educational potential, seen a niche in the market and responded with their own initiatives. Columbia Tristar Home Video, for example, was one of the first companies in the 1990s to launch a collection of English-language film videos with English subtitles entitled SpeakUp. Viewers were thus able to strengthen their comprehension of the original utterances by reading on the screen the written dialogue of the actors and recognize or confirm what they had not understood aurally. The conventions applied in this type of subtitling differ substantially from those followed in interlingual subtitling, and it is not uncommon to find subtitles of three lines, full of lexical repetitions, that are a verbatim, word-for-word transcription of the film dialogue, putting some pressure on reading speed. The Spanish newspaper El País also jumped on board, in collaboration with Disney, with its collection Diviértete con el inglés [Have Fun with English]. Over several months in 2002, many classics from Disney were distributed on DVD with their original English soundtrack and English subtitles so that young people could become familiar with the FL in an enjoyable way. In a rather indirect manner, the arrival of DVD marked an inflection point in the consolidation of didactic subtitles, as a track clearly distinct from and independent of that of SDH. For many years, big distributors like Disney and Paramount marketed a number of their DVDs with two tracks of intralingual subtitles in English: one clearly labelled for people who are D/deaf and hard-of-hearing and the other one just in English, without further details. The latter is understood to be the working template (§2.5) created in English for their subsequent translation into FLs. Lacking all the paralinguistic information that is common in SDH and making use of condensation to adjust the text to an adequate reading speed, these subtitles have been an educational source for many viewers avid to learn English. Although, by and large, most productions that make use of this type of subtitling tend to be in English, many companies these days seem to have awoken to the attraction exerted by digital audiovisuals-cum-subtitles and the pedagogic potential they offer for exploiting the teaching and learning of FLs and cultures, such as Spanish (aprenderespanol.org). Television has not been immune to the didactic value of this tool, as explicitly acknowledged by the Association of Public Broadcasting Corporations in Germany (ARD n.d.).
For its part, the international French channel, TV5MONDE, has for years been broadcasting some of its programmes in French with open subtitles, also in French, in order to promote the learning of the language, and has more recently developed two sources of materials for learning (apprendre.tv5monde.com) as well as for teaching (enseigner.tv5monde.com) French.
Intralingual subtitles also help promote literacy among children. The successful project of same-language subtitling carried out in India by Kothari et al. (2004) has led to the marketing of BookBox (bookbox.com), a web-based jukebox of videos in numerous languages from around the world. Involving same-language subtitling of the audiovisual programme, BookBox synchronizes the written text, the audio and the visual media to create an educational and entertaining reading experience for children, who can relate the phonetic sounds with the visual subtitles to accelerate reading skill development in their mother tongue. This approach has also proven to motivate non-literates toward literacy, through entertainment and popular culture, to make reading an automatic and reflex phenomenon in everyday life, and to create a reading culture and an environment for reading. Although it is not primarily produced for the learning of languages as such, it can be argued that SDH can also act as a great educational tool for people with limited knowledge of a country’s language, e.g. immigrants and foreign students, and its added value on this front has been praised by scholars such as Danan (2004) and Vanderplank (2016).

This pedagogical potential has also long been recognized in the case of interlingual subtitles. In the early years, authors such as Dollerup (1974) and Vanderplank (1988) reported that many people in countries like Denmark and Finland, where most foreign programmes are subtitled, had acquired a lot of their knowledge of English by enjoying US films, series and sitcoms subtitled in their mother tongue on television. Interest in this area has also been shown by the European Commission, which in 2011 requested a study on the potential of subtitling to encourage FL learning and improve the mastery of FLs, in the framework of its policy for the promotion of multilingualism (Media Consulting Group 2011). The final report makes a clear distinction between actual learning and the motivation to learn, clarifying that learning is a process that occurs in a complex context in which subtitling is only one of the many factors at work and depends on variables such as the following:

(a) Viewers accustomed to subtitling develop learning strategies more quickly than those accustomed to dubbing.
(b) Depending on the learner’s level, either intralingual or interlingual subtitling will be more appropriate.
(c) Intralingual subtitling seems to be better suited to learning grammar and spelling if the learner is not a beginner, whereas interlingual subtitling is more useful for building vocabulary.
(d) Interlingual subtitles seem to be more effective when the languages at work are closely related.
Their conclusions suggest that subtitling does help to improve FL skills and can also create awareness and provide motivation to learn languages, in both formal and informal contexts, thus contributing to the creation of an environment that encourages multilingualism. Indeed, watching, listening to and reading films and programmes subtitled from other languages helps viewers not only to keep up and improve their linguistic skills in the foreign idiom, but also to contextualize the language and culture of other communities through real-life situations. This familiarization occurs through exposure to the soundtrack (vocabulary, rhythm, intonation, pronunciation) as well as to the images, which bring viewers into contact with the mannerisms and behaviour of citizens from other cultures (gestures, sartorial style, proxemics, geographical spaces). In subtitled productions, it is this unique opportunity of having direct access to the original and being able to compare it with its translation that has been stressed by many theoreticians as one of the most positive additional bonuses of subtitling (Incalcaterra McLoughlin et al. 2011; Gambier et al. 2015). This improved linguistic competence is taken to be mostly due to incidental learning processes – i.e. accidental, unplanned learning within an informal or formal learning situation – which has been recorded to occur when watching foreign audiovisual productions with both interlingual and intralingual subtitles (Van de Poel and d’Ydewalle 2001).

In the current digital age, where watching videos seems to be more popular among some members of society than reading books, companies are making the most of the new opportunities offered by streaming and video-on-demand services to facilitate a more immersive language experience. Apps like Lingvo TV (lingvo.tv) connect Netflix and the user’s phone, and allow the latter to translate particular words in a subtitle stream into a different language. This means that the viewer can watch an Italian film with Italian subtitles and translate certain words into English by tapping them on the phone. The subtitle stream can be seen either on the computer, in Netflix, or on the phone, through the Lingvo TV app (McAlone 2016). Likewise, the language learning platform Fleex (fleex.tv) offers access to tools that can be used with videos from YouTube, Netflix and Amazon Prime, among others. Their set of features contributes to a more interactive experience, allowing users to lower the speed of the audio; to adapt the subtitles to their language level by showing more subtitles in the viewers’ mother tongue if they are beginners and more in English if they are advanced speakers; or to click on any word or expression to obtain its translation, definition and pronunciation. Although this line of enquiry has traditionally concentrated on the use of subtitles from a passive perspective, i.e. that of the viewers reading the written subtitles (Bravo 2008), the availability of subtitling freeware and platforms such as Aegisub, Subtitle Edit, Subtitle Workshop or Amara, to name but a few, has enabled a more dynamic approach to subtitling by allowing those interested to create the actual subtitles (Talaván 2011; Lertola 2013; Talaván et al. 2017). In this sense, the ClipFlair project (Foreign Language Learning through Interactive Captioning and Revoicing of Clips, clipflair.net), funded by the European Union and conducted between 2011 and 2014, had as its main remit the development of educational material for active FL learning through revoicing (including dubbing, audio description, karaoke singing and reciting) and captioning (including subtitling, SDH and video annotations), thus covering the four skills (reading, listening, writing and speaking) and reinforcing cultural awareness. In this respect, the potential of all audiovisual translation modes, not just subtitling, in the teaching and learning of FLs is a currently expanding field (Incalcaterra McLoughlin et al. 2018).

A third type of intralingual subtitling that has gained remarkable popularity in recent years is known as karaoke, sing along and sing-a-long-a.
It is used to transcribe the lyrics of songs or movie musicals so that the public can join in the singing at the same time as the characters on screen and the rest of the audience. These interactive shows began at the Prince Charles Cinema in central London with the classic film musical The Sound of Music and are now famous worldwide. The initiative was a tremendous hit and started a tradition that continued with the intralingual subtitling of other movies and programmes such as The Rocky Horror Picture Show, Grease, Joseph and the Amazing Technicolor Dreamcoat, Frozen and Mamma Mia. Relying on the participation of the audience, who are provided with a fun pack, a host usually leads the audience through a vocal warm-up and gives a comprehensive guide to the accompanying actions. To add to the entertainment, some sing-a-long-a attendees dress up as anything and everything represented in the films.

Another example of intralingual subtitling is the use of subtitles, in movies and other audiovisual programmes, to account for the utterances of speakers whose prosodic accent is considered difficult for the audience to understand, despite being in the same language. English, Spanish, Arabic and French, which are spoken far and wide throughout the world, tend to be the ones more commonly affected. Although rare, such an approach can be activated throughout an entire programme as, for instance, in the case of the British films My Name Is Joe and Trainspotting, which were distributed in the United States with intralingual English subtitles amid fears that North American audiences would have difficulty understanding the accents. The Mexican film Roma also aroused controversy in early 2019, when Netflix decided to subtitle it into Iberian Spanish, prompting the director, Alfonso Cuarón (in Jones 2019: online), to decry the decision as “parochial, ignorant and offensive to Spaniards themselves”. Notwithstanding these sporadic cases, the more habitual practice is to resort to these subtitles only on specific occasions where a need is felt to transcribe utterances from people who speak the language either as a FL or as their own native language but with a strong, local accent and lexical variation that makes it challenging for the rest of the speakers of that language to understand. The sensitivities that can be offended by such an attitude are best illustrated in the row that erupted over the BBC’s subtitling, in their documentary series Countryfile, of a Northern Irish blacksmith, who had inspired poet Seamus Heaney. The corporation was accused of having acted in a patronizing way, angering politicians across the political divide, with one of them branding the act as “part of an ongoing process by the BBC of insulting the Irish people both in culture and language, in [sic] this occasion putting subtitles over the voice” (Glennie 2014: online). Similar incidents have taken place during the broadcasting of the BBC’s documentaries Usain Bolt: The Fastest Man Alive and The Secret History of Our Streets, where some of the English-speaking black contributors are subtitled in English. As argued by Lawson (2012: online):

The problem with the practice of captioning some accents is that it automatically implies that these speakers are deviating from some commonly agreed standard of comprehensible pronounciation [sic]. And it is almost impossible to set that standard without class-based or potentially racist implications and, just as insultingly, assuming a common ear among the audience.
Certain dialogue exchanges in other, lesser-spoken languages are also known to be transcribed in this way. On Flemish television in Belgium, intralingual subtitles are often used to ‘translate’ linguistic variants that the producer of a particular programme feels will not be understood by the entire population. This means that not only are some Dutch programmes from the Netherlands intralingually subtitled on Flemish television, but also some TV series, and parts of reality shows, whenever a character or speaker uses a (regional) Flemish variant that may not be understood by the entire Flemish community, due to phonetic or lexical variation. The language used in these subtitles is standard Dutch, which means that all couleur locale is lost. The desirability of such subtitling is the topic of much debate, as discussed by Remael et al. (2008). In the meantime, many Danes have complained that it is increasingly hard to follow the dialogue of movies produced in their country, as many actors have chosen to speak in local versions of Danish, with the end result that some cinema theatres now provide moviegoers with the option of choosing screenings either with intralingual subtitles in Danish or without them (Noack 2015). Another situation in which these subtitles are used is to render, in writing, dialogue exchanges that are inaudible in some sections of the original programme because of poor-quality recording or environmental noise as happens, for example, in street interviews or footage recorded with a hidden camera.

Telop, an acronym of television opaque projector, is used in Japan and other Asian countries to indicate text superimposed on a screen, such as captions, subtitles or scrolling tickers. It makes use of special effects that are added during the post-production phase, typically onto variety shows, and may appear at any time, on any part of the screen, in myriad colours, fonts and sizes that can be disproportionately large and occupy a sizable portion of the screen (O’Hagan and Sasamoto 2016). Additionally, they can also make use of special display effects like scrolling, and sometimes they can resort to the inclusion of ideograms, emoticons and even pictures. According to Sasamoto et al. (2017), far from being mere aesthetic additions, these intralingual subtitles are deployed in conjunction with other communicative resources in a deliberate attempt to influence viewers’ interpretations, by enhancing certain messages and making affective values in TV programmes more explicit. Invented in Japan in the late 1980s, the use of telop quickly spread to other Asian countries such as South Korea, mainland China, Taiwan and Thailand. Although most people refer to them as subtitles, whether they can be considered as such is certainly debatable as they hardly ever imply the translation of the dialogue and they do not need to be synchronized with the original speech. They are often explanations or sardonic remarks and sometimes literal transcriptions of terms and expressions of the spoken dialogue. As for their length, they can be as short as one word and as long as five to six lines, and sometimes chunks of text can even occupy half of the screen.

In recent years, novel subtitle types have appeared in the Chinese mediascape, which no longer simply translate or transcribe what is being said by the people on screen, but also incorporate extra information. Especially in cybersubtitling, translators on occasion intentionally joke with the audience rather than accurately translate the ST, in a kind of subtitler’s performance. Borrowing from telop, the Chinese practice of 吐槽, tù cáo, is exemplary of this approach. Unlike the generally faithful translation contained in the standard subtitles, the tù cáo version “is inclined to depart from the original text, and the translator’s notes and glosses have been utilized to express the translator’s comments or feelings other than explaining difficult cultural references points” (Zhang 2013: 33).
One way subtitlers do this is by embedding aleatory comments, usually of a humorous and sarcastic nature, about certain phenomena or current events in China in their subtitles. The other tactic consists in inserting comments that express their feelings while translating or that describe their translating experience, as if to remind the viewers that the subtitler is accompanying them while watching the television drama or film.

In countries like Japan, and particularly in China, a new form of subtitles called 弹幕, dànmù, which literally translates as bullet curtain or bullet fire, has become very popular among young audiences. It consists of realtime, user-generated comments, which are dynamic, are contextualized and appear overlaid on top of a video (He et al. 2016; He et al. 2018; Zhang and Cassany 2019). They are essentially snippets of text containing viewers’ comments on and reactions to a particular scene. Written in different font sizes and colours, they often contain smileys and emoticons and are presented scrolling on the screen, from right to left, like a bullet. In China, these subtitles first appeared on fandom platforms around 2007, and since then have become increasingly popular, especially now that the technology to produce them has been enabled on most media streaming platforms, including iQiyi, Youku, and Tencent. Unlike traditional subtitles, they are not subject to time or space constraints, as they do not need to be synchronized with the original dialogue, though, like traditional subtitles, they are limited in the number of characters that a subtitle projection may contain, which is technically restricted to up to 50 characters but normally stands at around 15 to 20 characters, since long texts will roll away too quickly to be properly read. In a clear attempt to boost audiences’ interactivity and participation, the practice is meant to facilitate communication between different viewers and to strengthen affective links between the audience and the audiovisual programme.

The second major type of subtitles falls under the category of interlingual and, in their most common incarnation, they are monolingual, implying the translation from a source into a target language (TL). Gottlieb (1994) calls this diagonal subtitling since it involves a shift from one language to another along with a change of mode, from oral to written. This subtitling is the main focus of this book and is analyzed in depth in the chapters that follow. Bilingual subtitles are a variant within the interlingual category and are produced in geographical areas where two or more languages are spoken. In Belgium, in an attempt to satisfy the Walloon and Flemish communities, subtitles in some cinemas are in French and Flemish Dutch. In Finland, where Swedish is an official language on a par with Finnish, bilingualism is also respected in certain regions, and television and cinema resort to the use of subtitles in both languages. Outside of Europe, in countries such as Jordan and Israel, Hebrew and Arabic often co-exist at the bottom of the screen. In all these cases, the two lines available for subtitles are in constant use, each one dedicated to a different language. To avoid excessive pollution of the image, only two-liners tend to be used, which clearly restricts the amount of text available for translation, although subtitles of four lines may also occur. Another setting where bilingual subtitles are resorted to is international film festivals. In order to attract a wider cosmopolitan audience, some of these festivals screen their foreign films – say Iranian, Spanish or Japanese – with two sets of subtitles. One set is in English, to satisfy the needs of an international audience made up of film directors, producers, distributors, actors and viewers who come from all corners of the world, and the other set of subtitles is in the language of the country where the film festival takes place, e.g. French in Cannes, German in Berlin and Italian in Venice.
Interestingly, bilingual subtitling is a mode much appreciated by young Chinese people, to the extent that most English programmes are now subtitled bilingually in both English and Chinese on some TV stations as well as on the major online platforms providing media streaming services in the country; and Chinese-produced films that are shown on VOD platforms and in cinemas are often bilingually subtitled with both English and Chinese in the hope of boosting their international circulation. The functionality offered by many video players to activate more than one subtitle track at once gives users the option of calling up two different languages onto the screen, usually at the top and the bottom (Figure 1.2), thus creating a learning environment in which the original dialogue appears transcribed and translated at the same time, while it can also be heard:
Figure 1.2 Bilingual subtitles Source: Cyrano de Bergerac
The aim of this linguistic combination is to reap the benefits of concurrently displaying intralingual and interlingual subtitles whilst listening to the original soundtrack. Though there is anecdotal evidence that some learners make use of this learning environment, research on the educational value of this juxtaposition of subtitles is still in its infancy. Despite the immediate caveat that such a set-up risks information overload, experimentation conducted by García (2017: 477) with engineering students concludes that bilingual, or dual, subtitles “contribute to vocabulary production significantly better than intralingual and/or interlingual subtitles due to the fact that when having the translation available, users are aware of the grammatically correct, well-punctuated, and unambiguous written form in both L1 [source language] and L2 [target language]”. To promote and exploit this kind of subtitles, the software application DualSub (bonigarcia.github.io/dualsub), an open-source desktop tool, allows the merging of two .srt files containing the subtitles in two different languages. Yet, if the user only has a single subtitling file, DualSub can automatically translate the input into other languages, with the help of the online service Google Translate.
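The principle behind this kind of merging can be illustrated with a few lines of code. The sketch below is not DualSub’s own implementation, merely a minimal illustration of the general idea: it assumes two .srt files whose cues are already aligned one-to-one with identical timings, and the file names in the commented call are invented for the example.

```python
import re

def parse_srt(path):
    """Read a .srt file and return a list of (index, timing, text) tuples."""
    with open(path, encoding="utf-8") as f:
        blocks = re.split(r"\n\s*\n", f.read().strip())
    cues = []
    for block in blocks:
        lines = block.splitlines()
        # line 0: cue number, line 1: timing, remaining lines: subtitle text
        cues.append((lines[0].strip(), lines[1].strip(), "\n".join(lines[2:])))
    return cues

def merge_dual(src_path, tgt_path, out_path):
    """Stack the target-language text under the source-language text, cue by cue.
    Assumes both files contain the same number of cues with matching timings."""
    src, tgt = parse_srt(src_path), parse_srt(tgt_path)
    with open(out_path, "w", encoding="utf-8") as out:
        for (idx, timing, s_text), (_, _, t_text) in zip(src, tgt):
            out.write(f"{idx}\n{timing}\n{s_text}\n{t_text}\n\n")

# Hypothetical file names, for illustration only:
# merge_dual("film.en.srt", "film.es.srt", "film.dual.srt")
```

A real tool would, in addition, have to cope with cues that are not perfectly aligned across the two languages, which is precisely where the design effort of applications of this kind lies.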
The third and final sub-category within interlingual subtitling is that of multilingual subtitles, in which three or more languages appear simultaneously on screen. As surprising as it may sound to some, this was a rather common occurrence during the analogue period in countries like Malaysia, where open subtitles in Malay, Chinese and English were broadcast at the same time in some audiovisual productions. With the potential offered by digitization, this has become a thing of the past, and these days audiences can choose the language of the subtitles from a menu of closed subtitles in the said languages and also Tamil for certain programmes (§1.5.4).

The traditional, broad and erroneous distinction that equates interlingual subtitles with hearing viewers and intralingual subtitles with the hearing-impaired audience systematically overlooks the professional practice that started with the commercialization of the DVD: interlingual subtitles for people who are D/deaf or hard-of-hearing. Historically, in countries with a strong tradition of dubbing, such as Spain, Germany, Austria, France and Italy, people with hearing impairments could only watch the few programmes that had been originally produced in Spanish, German, French or Italian, and later also subtitled intralingually into these languages. Given that the translating custom of these five countries favours the dubbing of the vast majority of programmes imported from other nations, it has always been a challenge for D/deaf and hard-of-hearing audiences to gain proper access to the information contained in these programmes. They have had to content themselves with the limited offer of foreign programmes that are broadcast with standard subtitles, which do not contain any extra information on sound. Similarly, in other latitudes with a stronger subtitling tradition, for instance Portugal, Greece, the Netherlands, Croatia and the Scandinavian nations, those with a hearing loss have normally been served by the same interlingual subtitles as hearing viewers, even when these have clearly been inappropriate to their needs, since they do not incorporate the paralinguistic information necessary for the hearing impaired to be able to contextualize the action. Yet, with the arrival of the DVD, the situation changed and some foreign films marketed in countries such as Germany, Italy and the UK contained two different tracks of interlingual subtitles: one for the hearing population and a second one for the hearing impaired. Examples of this commercial strategy are the Spanish film Women on the Verge of a Nervous Breakdown, which is retailed on DVD by MGM with two interlingual subtitle tracks in English and a further two in German, as well as the US films Thelma & Louise and Annie Hall, also commercialized digitally by MGM with two sets of interlingual subtitles in German. A similar approach is adopted in the DVDs of the US films My Fair Lady and Vicky Cristina Barcelona, both sold by Warner Bros., whereby the former contains two interlingual subtitle tracks in Italian and the latter has them in Spanish. The choice of languages to be graced with these two types of subtitles seems rather aleatory and perplexing, particularly when the same distributor markets the films in all countries, and the DVD used for the distribution contains numerous languages. It remains a mystery why some languages occasionally have two subtitle tracks and others just one. The excitement raised by these early initiatives was soon dampened by the decline of the DVD format and the stalling of the Blu-ray, as the superseding streaming giants have not yet fully embraced this novel approach to interlingual SDH.
1.5.2 Time available for preparation
By looking at subtitles from this perspective, the following types can be distinguished, as shown in Figure 1.3:
Figure 1.3 Classification of subtitles according to the time available for preparation
The main difference between pre-prepared/offline and live/online subtitles resides in the fact that the former are done after the programme has been shot and ahead of its broadcasting or release, giving translators sufficient time to carry out their work. They are the standard mode in subtitling, allowing for the text to be carefully synchronized with images and soundtrack, with precise in and out timecodes, edited to a reasonable reading speed, and checked for errors. On the other hand, the online type is performed live, i.e. at the same time as the original programme is taking place or being broadcast. The various challenges raised by this second category justify the recommendation by the European Broadcasting Union (EBU 2004: 10) that “live subtitling should be limited to occasions when there is insufficient time to prepare subtitles using other methods”. Within this latter category, two groups can be distinguished. Semi-live subtitling, or as-live subtitling, is a method “typically used for live programmes which are heavily scripted and have pre-recorded inserts” (ibid.), like news bulletins, theatre plays or opera productions. In these cases, the subtitler normally creates a list of subtitles, without fixed timecodes, and during the transmission of the programme or staging of the performance cues these subtitles manually, following the dialogue as closely as possible. Today, software is being developed that allows for the automatic live broadcasting of prepared surtitles, e.g. for opera and for musicals, which are less prone to improvisation than theatre is (soundfocus.nl). The semi-live approach minimizes the risk of errors that can crop up in live subtitling, though textual accuracy also depends on the amount of material that has been given to the subtitler prior to the event. Nevertheless, temporal synchrony continues to be a challenge.

The second type is known as live subtitling, sometimes also called realtime subtitling, which is produced for sports programmes, some TV newscasts, live interviews and talk shows as well as for parliamentary proceedings and in educational settings, to provide linguistic support with lectures. In cases like these, the programme or speech to be subtitled is not at the disposal of the subtitler beforehand. One reason for the advent of intralingual live subtitling for television was the imposition of SDH quotas of up to 100% in some countries, which made it impossible to provide the required amount of subtitling in pre-prepared mode. Historically, four main approaches can be discerned. First, to achieve the speed and accuracy required for live subtitling, a stenographer or stenotypist – known in North America as a stenocaptioner – is in charge of creating the subtitles using a specialist keyboard machine, called a stenotype, to record dictation in shorthand by a series of phonetic symbols, rather than typing letters, and applying writing theories common in court reporting (Robson 2004). This was the pioneering method adopted in the early attempts to cope with the provision of subtitles for live programmes and, although still in practice, is not widely used because of its labour-intensive nature as well as its high costs and the length of training required. The BBC’s experience is “that training a subtitler to re-speak ... takes a couple of months; by comparison, training a stenographic subtitler from scratch can take 3–4 years” (EBU 2004: 20). To overcome the drawbacks of the use of stenography in subtitling, the ergonomic velotype keyboard (velotype.com) was invented.
A Dutch discovery, it allows velotypists to write complete syllables and words by pressing several keys simultaneously, instead of typing character by character. Yet, mastering velotype involves labour-intensive training, which is one of the reasons it is being gradually supplanted by speech recognition. Nevertheless, some live subtitling using velotype, or other keyboards, remains in use in environments that have no access to speech recognition technology or for languages for which no speech-to-text software exists. Velotype, for instance, is sometimes used at conferences and for didactic purposes, and in those cases the subtitles are projected above or under the speaker’s PowerPoint presentation, using software like Text on Top (text-on-top.com/en), which can also be used with respeaking. To a large extent, the subtitles become surtitles, or simply titles, since they may also be projected on a separate screen at live events. The dual-keyboard system, the third approach to live subtitling, which resorted to employing a group of subtitlers, usually three, working on the same programme in collaboration, has virtually been abandoned in favour of methods based on automatic speech recognition (ASR). Taking turns and typing at very fast speeds on standard keyboards, two of the subtitlers would listen to the audio and produce the subtitles, while the third person revised the text produced by the other two, before authorizing its broadcast. In addition to being too costly, this way of working resulted in significant latency, i.e. the delay between the occurrence of speech and the appearance of the subtitle on screen.

The fourth and most common technique today is known as respeaking. It relies on a subtitler/respeaker making use of ASR software to generate the subtitles. It is a process whereby a person listens to the original utterances of a live programme and dictates the speech, i.e. respeaks it, including punctuation marks and some specific features for the D/deaf and the hard-of-hearing audiences, to a microphone connected to a computer, to an ASR app and to a subtitling program, which then displays subtitles on screen with the shortest possible delay (Romero-Fresco 2011). The fact that ASR programs can only work with certain major languages, simply because more research effort and capital investment have been put into them, makes their application impractical in the case of some minoritized languages, at least for the time being. Nonetheless, the technology has been around since the 1990s, steadily improving in the number of languages covered as well as in the accuracy of the recognition and the subsequent reduction of spelling mistakes. At present, different workflows exist side by side. They vary from country to country, from broadcaster to broadcaster and even from programme to programme. Sometimes, the subtitler/respeaker is assisted by an editor, who corrects mistakes before the subtitles are broadcast; sometimes this task is conducted by the respeaker, who, in addition to respeaking the original utterances, can correct misrecognitions, i.e. errors made by the respeaking software, and other errors before airing the subtitles (Remael et al. 2014). The preferred workflow can also be connected to the ASR used and the display mode of the subtitles (§1.5.3). Speaker-independent respeaking software, which produces subtitles directly from the speaker without the filter of the respeaker and eliminates any human involvement in the process, does exist but can only be used in very specific circumstances. The main issues with it are that ASR works best if specialized terminology, names and acronyms are added to its database before use and if the software is trained by the respeaker. Another issue is the need for studio-quality audio input without ambient noise.
If these conditions are not fulfilled, the number of recognition errors increases exponentially, which is why speaker-independent speech recognition is not practicable for live television, whereas it could be an option in contexts where there is time for post-editing. In addition, as people tend to be faster when they speak than when they write, respeaking is increasingly being used to generate pre-prepared subtitles, both intralingual and interlingual, as it is believed to help save time and increase productivity.
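The chunking and latency issues described above can be pictured, in a deliberately simplified way, as a small pipeline that receives recognized text and pushes it to air in one-line cues. The sketch below is not the workflow of any real respeaking or ASR product; the transcript, the character limit and the processing delay are all invented figures used purely to illustrate why a latency of a few seconds arises.

```python
# Hypothetical stand-in for a stream of recognized speech: (time_spoken_s, words).
ASR_FEED = [
    (0.8, "good evening and welcome"),
    (2.1, "to tonight's live coverage"),
    (3.9, "of the opening ceremony"),
]

MAX_CHARS = 37        # an assumed single-line length limit
PROCESS_DELAY = 3.0   # assumed respeaking + recognition + correction delay, in seconds

subtitles = []        # (time_on_air_s, latency_s, text)
buffer, buffer_start = "", None

for spoken_at, words in ASR_FEED:
    # If the incoming words no longer fit on one line, send the buffered cue to air.
    if buffer and len(buffer) + len(words) + 1 > MAX_CHARS:
        on_air = spoken_at + PROCESS_DELAY
        subtitles.append((on_air, on_air - buffer_start, buffer))
        buffer, buffer_start = "", None
    if not buffer:
        buffer_start = spoken_at        # remember when the first word of this cue was spoken
    buffer = f"{buffer} {words}".strip()

if buffer:                              # flush whatever is left at the end of the feed
    on_air = ASR_FEED[-1][0] + PROCESS_DELAY
    subtitles.append((on_air, on_air - buffer_start, buffer))

for on_air, latency, text in subtitles:
    print(f"on air at {on_air:4.1f}s (latency {latency:3.1f}s): {text}")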
Though live subtitling was mainly used in the production of intralingual SDH in the past, it is nowadays making inroads into the field of interlingual subtitling. This is due to the increased multilingual character of some local live TV shows, in which a foreign guest may have to be subtitled interlingually, or due to the need to subtitle live high-profile FL broadcasts such as the inaugural speeches of presidents or Nobel Prize winners or, also, the Eurovision Song Contest. In addition, interlingual live subtitling is also increasingly popular at conferences and other public events, where it can replace simultaneous interpreting. In interlingual live subtitling another challenge is added to the already complex respeaking practice: the audio input must be translated/interpreted live and communicated to the ASR software, while the respeakers/interpreters listen to the new audio input that is to follow and monitor their own speech as they are reproducing the previous input. As a result, the delay or latency, which can already be disturbing in intralingual live subtitling, often increases further in interlingual live subtitling. Since this tends to elicit strong criticism from its users, some broadcasters occasionally grant live subtitlers a slight head start, or broadcast delay, which allows them to keep abreast of the speaker and deliver synchronous subtitles. In terms of workflow, here too a lot of variety exists today as broadcasters are experimenting with this fairly new subtitling practice (Robert and Remael 2017). This type of multitasking strongly resembles simultaneous interpreting with the added challenge of having to talk in a continuous monotone voice to the ASR, adding punctuation and typing in corrections. To train professionals in this challenging job, the European project ILSA (Interlingual Live Subtitling for Access, ilsaproject.eu) has developed the competences needed to successfully operate in this field and has also designed a specialized training course for live interlingual subtitling.

To conclude, any of the subtitles discussed previously under the various categories can be subdivided further according to their lexical density. Edited subtitles, due to the spatial and temporal limits imposed by the medium, are the most commonly used and consumed when watching a subtitled programme, especially in the case of pre-prepared interlingual subtitles. The levels of condensation applied depend on the assumed reading speed of the audience, which has a direct impact on the display rate of the subtitles on screen (§4.4). Verbatim subtitles, on the other hand, are meant to be a full and literal transcription or translation of the spoken words, which risks pushing viewers’ reading speed up to uncomfortable levels. Nevertheless, SDH verbatim live subtitling is still practised in many countries, in spite of research demonstrating that users can retain more information if the subs are in a (slightly) edited mode. What is more, some interlingually subtitled productions that boast of displaying new and ‘better’ or ‘improved’ subtitles tend to base their claim on the fact that the subtitles are usually less condensed than in the previous version(s) rather than on the linguistic quality of the translation (Bywood 2016). The choice between verbatim and edited subtitles remains a particularly hot topic of discussion in SDH (Romero-Fresco 2009; Szarkowska et al. 2011), where scholars and researchers often support editing while hearing-impaired viewers and D/deaf associations tend to demand verbatim subtitles as the only way to have full access to audiovisual programmes.
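The relationship between condensation, display time and reading speed can be made concrete with a simple characters-per-second (cps) calculation. The short sketch below is only an illustration: the 15 cps ceiling, the invented dialogue and the three-second display time are assumed values chosen for the example, not a norm prescribed by this book or by any broadcaster.

```python
def cps(text, in_time, out_time):
    """Characters per second of a subtitle, spaces and punctuation included."""
    return len(text.replace("\n", " ")) / (out_time - in_time)

def fits(text, in_time, out_time, max_cps=15.0):
    """Check whether a cue respects an assumed reading-speed ceiling (here 15 cps)."""
    return cps(text, in_time, out_time) <= max_cps

# A verbatim rendering versus an edited (condensed) one, both displayed for 3 seconds.
verbatim = "Well, I mean, I was just saying that\nwe could maybe go there tomorrow."
edited = "I was saying we could\ngo there tomorrow."

for label, text in (("verbatim", verbatim), ("edited", edited)):
    print(f"{label}: {cps(text, 0.0, 3.0):.1f} cps, fits: {fits(text, 0.0, 3.0)}")
```

Run on these invented lines, the verbatim version comes out well above the assumed ceiling while the edited version stays under it, which is the arithmetic behind the editing-versus-verbatim debate summarized above.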
1.5.3 Display mode
Figure 1.4 presents a taxonomy of subtitles according to their display mode.
Figure 1.4 Classification of subtitles according to the display mode
Subtitles can appear on screen intermittently, in full and in one go, in which case they are known as block, pop-up or pop-on subtitles. They are the standard display mode in pre-prepared subtitling, whether intralingual or interlingual, as a clear consensus exists that block subtitles are the least disruptive of them all, are easier for viewers to read and thus allow them to spend more time looking at the images. According to traditional practice, when two speakers appear in the same subtitle, each one is allocated a line and both appear at the same time on the screen. Cumulative subtitles (Williams 2009) are sometimes used to allow two – exceptionally three – chunks of speech to appear in the same subtitle but not at the very same time. Each part of the text pops up on screen at a different time, in sync with its speaker, but all text chunks leave the screen at the very same time. The second section of the subtitle appears in sync with the second utterance and is added to the first part, which remains on screen (Figures 1.5 and 1.6). These cumulative subtitles are normally used for dramatic effect and to avoid advancing information, i.e. presenting part of a message too early at the risk of diminishing the intended impact of the original message. These subtitles are particularly suitable for jokes (to keep the punch line separate) and quizzes (to separate questions from their answers), as well as to delay a dramatic response or to adhere to the rhythm of a song.
Figure 1.5 Cumulative subtitle – part 1 Source: Meridian
Figure 1.6 Cumulative subtitle – part 2 Source: Meridian
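One way of realizing the behaviour shown in Figures 1.5 and 1.6 in a timed-text file is to repeat the first chunk in a second cue that also carries the new line, so that both chunks leave the screen together. The sketch below generates such a pair of cues in .srt notation; the dialogue and timings are invented for illustration and do not reproduce the Meridian example in the figures.

```python
def fmt(seconds):
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def cumulative_cues(first, second, t_in, t_mid, t_out, start_index=1):
    """Two cues: the first chunk alone, then both chunks until the shared out-time."""
    return (
        f"{start_index}\n{fmt(t_in)} --> {fmt(t_mid)}\n{first}\n\n"
        f"{start_index + 1}\n{fmt(t_mid)} --> {fmt(t_out)}\n{first}\n{second}\n"
    )

# Invented example: the answer pops up 1.5 seconds after the question.
print(cumulative_cues("- Who won the quiz?", "- You did.", 12.0, 13.5, 15.0))
```

The same effect can be encoded differently in other subtitle formats, but the underlying timing logic (staggered in-times, one shared out-time) is the defining feature of the cumulative display mode.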
Roll-up subtitles have been the typical mode favoured when producing live subtitling in most Anglo-Saxon countries, whereas many other nations prefer block subtitles across the board. These subtitles appear as a constant flow of text and are usually displayed word by word or in blocks containing short phrases. They tend to be disliked by viewers because the instability and constant upward movement of the text on screen make reading them a rather challenging and onerous cognitive activity. In the UK, a transition appears to be on the way as, according to Ofcom (2015: online), broadcasters are improving several aspects of subtitling, including “making extensive use of easier to read ‘block subtitles’, which show several words as a single block of text”, even though the text still continues to roll up the screen.
1.5.4 Technical parameters
From a technical perspective, the following two types of subtitles can be identified: (1) open, hard subtitles or hard titles and (2) closed subtitles. The basic difference between them is that, in the first case, the subtitles are not encoded into the video signals and are instead irreversibly burned or projected onto the image, which means that they cannot be removed or turned off. As they are merged with the photography, no special equipment or software is required for playback. The audiovisual programme and the subtitles cannot be disassociated from each other, giving the viewer no choice whatsoever as to their presence on screen since they are always visible. Within this category, forced narrative subtitles are those present in the original audiovisual production to provide information necessary to make it comprehensible for the source audience. A good example of forced subs in an English-language TV series are those used in Jane the Virgin to translate into English the lines of the grandmother, who only speaks Spanish, and they generally fulfil a comedic effect. Another example is the English subtitles used in the British film Four Weddings and a Funeral to translate the sign language exchanges between the male protagonist, Charles (Hugh Grant), and his deaf brother. In addition, they may be employed in the original production to translate foreign-language written content that appears on screen, such as newspaper headlines or street signage that is relevant to the plot and which an English
speaker would not understand. Of course, on occasions, this material may be left untranslated in the original if the film is meant to be seen from the point of view of a particular character who does not speak the language in question. In the second case, closed or pre-rendered subtitles are separate video frames that are overlaid on the original video stream while playing, which means that they can be turned on and off and become visible on screen only when activated by the viewer by means of a decoder. This formatting makes it possible to have multiple language subtitles and to easily switch between them. As these subtitles are usually encoded as bitmap images, i.e. a series of tiny dots or pixels, they cannot render many colours, which is one of the reasons why SDH on DVD rarely makes use of colours to identify speakers. Until the arrival of digitization, interlingual subtitles were always open on television and in cinemas and were distributed with the old VHS tape. Intralingual subtitles, on the other hand, were always closed and broadcast via teletext or line 21. With DVD and VCD and later Blu-ray, all of them much more versatile formats, the situation changed and closed interlingual subtitles became common currency in the industry. However, it is not surprising on occasions to come across programmes on DVD with hard subtitles. In the age of streaming and video on demand, where companies seem to be more conscious of viewers’ likes and dislikes, closed subtitles have become the norm.
1.5.5 Methods of projection
A classification of subtitles from this perspective represents in effect an excursus throughout the history of subtitling:
• Mechanical and thermal
• Photochemical
• Optical
• Laser
• Electronic
• 3D
• Immersive
Unsurprisingly, the technical process of transferring the subtitles to the actual film or audiovisual programme has undergone a considerable evolution, leading to significant improvements in their legibility and stability on screen. This section focuses on the last four types, but those interested in this topic from a historical perspective can refer to Ivarsson and Carroll (1998) and Díaz Cintas (2001a), where a more detailed account of each of them is provided. The current method of impression most commonly used in cinema subtitling is laser. Introduced in the late 1980s, it rapidly proved to be much more effective than the previous methods it began to replace. A laser ray of great precision burns the emulsion of the positive 35 mm and 16 mm copies while printing the subtitles, which, thanks to the timecodes, are perfectly synchronized with the actors’ speech. Through this technical process, the subtitles become an integral part of the film copy, as they have been engraved on the images, and every time the film is projected the subtitles will appear on the lower part of the screen. The reason these open subtitles are white is simply that, as the film is projected onto a white screen and the subtitles are translucent holes burned through the film’s coating, they allow the light of the projector to
go through, thus showing the white colour of the screen behind. If the cinema screen were blue or yellow, the subtitles would then be blue or yellow respectively. One of the upsides of this method is that it permits excellent definition of letters, with enhanced contours that facilitate the contrast between the text and the images, enhancing the legibility of the subtitles. Being engraved onto the film’s copy eliminates any possibility of the subtitles moving or shaking during the projection of the movie. To laser subtitle a full-length feature film takes about ten times the film’s projection time. Electronic subtitling is the other method frequently used in the profession as an alternative to laser subtitling, its greatest advantage being that it allows subtitles to be superimposed on the screen instead of being hardcoded onto the movie. The preferred type in film festivals, these pre-recorded subtitles, which have been previously translated and synchronized, are then beamed by a projector onto the screen. The technology uses a timecode system to ensure that the text is shown in real time, in synchrony with the screening of the film. Electronic subtitling permits extremely versatile subtitling of a single film print, making it possible to project the subtitles onto (or below) the image, in any language, in any colour (though they tend to be white or yellow), and without damaging the original copy. It is cheaper than laser engraving and is used mainly in film festivals where a single copy of the film can be shown with various sets of subtitles in several countries. Another great advantage over laser is that, since electronic subtitles are independent of the audiovisual programme, they can easily be revised and modified from projection to projection. This system is also used to provide better access to movies screened in the cinema for people with hearing loss, without imposing on the hearing population as the subtitles can be projected onto a screen adjacent to the movie screen. In addition, it is a solution for cinema theatres that want to use the same film print for alternative screenings with and without SDH, at different times of the day. Electronic subtitling is also preferred on television and DVD. A Christmas Carol marked a milestone in UK cinema as the first movie ever in the country to become truly accessible in 3D to D/deaf and hard-of-hearing viewers and, hence, as the first film to show 3D intralingual subtitles. The release of Avatar a month later saw the birth of interlingual 3D subtitles to translate the fictional Na’vi language. The surge in interest for 3D stereographic movies has brought about new job profiles – like the 3D subtitle mapper, responsible for the positioning of the subtitles – as well as fresh challenges and novel ways of thinking about subtitling. The main challenges to maintaining the 3D illusion derive from the way the subtitles are positioned on screen and how they interact with the objects and people being depicted. Any conflict between an onscreen object and the subtitle text will destroy the 3D illusion, as is the case with the ghosting effect, where the subtitles have a visible shadow that makes their reading difficult and can lead to physiological side effects in the form of headaches, eyestrain and nausea. In essence, 3D subtitles combine the standard subtitle position, along the X and Y axes of the picture, with a third position along the Z axis. The latter positioning allows the subtitle to float in front of the 3D image, creating the feeling of depth (Figure 1.7).
Figure 1.7 Three-dimensional subtitles
This option is available in digital cinema as well as in 3D Blu-ray releases. Finally, the technological possibilities unleashed by the new forms of immersive content and viewing experience, like 360° video, virtual reality and augmented reality, are also having an impact on the way in which subtitles are being conceptualized and displayed so that they can add value to the immersive environment rather than detract. While in conventional cinema and television, people look at the screen in front of them and the subtitles are mostly displayed within the image, usually at the bottom
of the screen, immersive environments allow viewers to look in any direction, thus raising the question of where best to position the subtitles, be they interlingual or intralingual. Experiments with members of the audience are already being conducted on this front (Brown and Patterson 2017), and it is not hard to fathom that changes affecting the nature of subtitling will be occurring in the not too distant future.
1.5.6 Distribution
Although not as neat as the previous categories, a sixth and last taxonomy can also be somewhat discerned, according to the medium used for the distribution of the audiovisual production, which may affect the way in which subtitles are produced. Thus, subtitles can be made for the following:
• Cinema
• Video, VHS
• DVD, VCD, Blu-ray
• Television
• Internet
Not long ago, the typical commercial release of a feature film started in the cinema, was followed by the copy on video cassette, which was superseded by DVD/Blu-ray, and finished on television. Today, the panorama has changed quite radically and some major films may not even start their life in the cinema but rather on the internet, launched and hosted by OTT distributors like Netflix or Amazon Prime. In this mediascape, it is not unusual to find the same programme on the market with different sets of subtitles. The reasons for this plurality of versions are multifarious. From a commercial perspective, on occasions, the rights to screen films and other audiovisual productions are sold to different distributors and TV stations that decide not to use any previous translations because they consider them somewhat deficient, the actual subtitle files cannot be found or the subtitles are aimed at a different population. For instance, a TV station in Angola may decide against the subtitles produced
in Portugal and commission a new set to take into consideration the linguistic specificities of their Portuguese as well as their audience’s cultural background. It is also not uncommon for private TV stations to create new subtitles that are different from the ones used for cinema exhibition or the ones broadcast by their competitors. From an aesthetics viewpoint, cinema subtitling tends to respect shot changes more strictly than the rest of media and shows a proclivity to go for shorter lines and for clustering the text in the centre of the screen, by giving priority to a two-liner with short lines over a one-liner that is too long. This means that a feature film lasting approximately 90 minutes and containing some 900 to 1,000 subtitles in the cinema copy can end up with around 800 subs in the television version. Technically, to be able to broadcast on television a film that was originally produced for cinema exhibition, the motion picture needs to go through a process called telecine, since the cinema illusion is based on a succession of 24 frames per second and the small screen uses 25 frames per second in PAL (Phase Alternating Line) and SECAM (Séquentiel couleur à mémoire) systems, and 30 in countries that adopt NTSC (National Television System Committee). This means that the pace of the film must be slightly sped up, reducing the length of the movie and the time available for subtitles, which in turn may have an impact on the spotting of the subtitles, justifying the production of new subtitles.
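A rough, purely illustrative sketch of the PAL speed-up helps to see its scale (the figures below are assumptions for the sake of the example, not measurements from any actual release): playing 24 fps material at 25 fps shortens every duration by a factor of 24/25, so both the overall runtime and the time available for each subtitle shrink by 4 per cent.

# Minimal sketch of the 24 -> 25 fps (PAL) speed-up and its effect on timings.
CINEMA_FPS = 24
PAL_FPS = 25

def sped_up(seconds: float) -> float:
    """Duration of a 24 fps span once the material is played back at 25 fps."""
    return seconds * CINEMA_FPS / PAL_FPS

film_runtime = 90 * 60                        # a 90-minute film, in seconds
print(sped_up(film_runtime) / 60)             # 86.4 minutes on PAL television

subtitle_duration = 6.0                       # a 6-second subtitle in the cinema copy
print(round(sped_up(subtitle_duration), 2))   # 5.76 seconds after telecine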
1.6 Intertitles
Intertitles are at the origin of subtitles and can be considered their oldest relatives, the first experiments with intertitles having taken place in the early 20th century. They are also known as title cards and can be defined as a piece of filmed, printed text that appears between scenes. They were a mainstay of silent films and consisted of short sentences written against a dark background, usually white on black. At a time when pure cinema was intended to be made up of images only, whose complex visual coding was self-sufficient, the inclusion of intertitles in the film was seen as an obstruction to the natural flow of the images. Their use was largely curbed as many directors were reluctant to incorporate them in their works, and cineastes like Carl Dreyer and Carl Mayer were celebrated for their sparing use of intertitles (Van Wert 1980). Their main functions were to convey the dialogue exchanges between the characters (dialogue intertitles) as well as the descriptive narrative material related to the images (expository intertitles). Although primarily understood as a means of communication in essence, some directors also exploited them as an artistic and expressive device, on occasions juxtaposing them with the images to create metaphorical meanings or to render inner monologues (Chisholm 1987). The arrival of the talkies in the late 1920s eroded their usefulness in terms of cinematic value, leading to their gradual disappearance from the screens, though some directors may still use them as creative devices. When silent movies travelled, the original intertitles used to be edited out and replaced by new title cards in the TL, produced in the same country as the film. On other occasions, the intertitles were left in the SL and a master of ceremonies would translate and explain them to the rest of the audience during the screening of the film. When translated for today’s viewers, the original print is left intact and the content of the intertitles is either subtitled or voiced over.
The area has been vastly neglected in academic exchanges, both within Film Studies and Translation Studies, though some works have delved into their translation (Díaz Cintas 2001a). More recently, the historiography of film translation, including intertitles, has seen a surge of interest spurred on by scholars like O’Sullivan and Cornu (2019).
1.7 Exercises
For a set of exercises in connection with this chapter go to Web > Chapter 1 > Exercises
2 Professional ecosystem
2.1 Preliminary discussion
2.1 This is a quote from Ivarsson and Carroll (1998: v), in which they refer to a previous edition of their book written by Ivarsson in 1992:
It was a book about subtitling, not translation. Translation is a different art. I decided to call it Subtitling for the Media – A Handbook of an Art, since in my view subtitling, when it is done to high standard, includes so many of the elements essential to art and above all demands so much skill, imagination and creative talent that it is indeed an art.
❶ What are your thoughts about this definition of the subtitling profession? ❷ Do you agree with their concept of subtitling as ‘art’? ❸ Is subtitling a craft?
2.2 What are, in your opinion, the main skills required from a subtitler?
❶ Visit the European Skills, Competences, Qualifications and Occupations (ESCO) website > Classification > Occupations > and search for the term ‘subtitler’. Do you agree with their definition? ❷ Find other sites with information on the skills required from a subtitler and compare them. [Web > Chapter 2 > Preliminary discussion > Discussion 2.2]
2.3 What are your thoughts about the following definition of the figure of the subtitler? A translator has to be multi-talented, but a subtitler also has to be a verbal acrobat – a language virtuoso who can work within the confines of a postage stamp. You only have a limited amount of characters per line to work with, including commas and spaces. If you have one letter too many, you can’t just leave it out. You sometimes have to rewrite the entire sentence. The layout and spotting of subtitles are art forms in themselves, because subtitles have to follow the same rhythm as the film.
2.4 Do you know of any specific tools designed only for the production of subtitles? Which ones?
2.5 Do you think the profession of the subtitler has changed much since 1992? How?
2.6 Who do you think are the main employers in this industry?
2.2 The subtitling process
Subtitling is the result of a team effort, in which several stages have to be followed from the moment a job is commissioned until the audiovisual production can be enjoyed on screen. Gaining a comprehensive and updated overview of the workflows operating in the industry can be rather challenging, as different companies work in different ways and new technological advances and commercial forces tend to have an immediate, disruptive impact on the subtitling profession. The mercurial nature of the entertainment mediascape is best reflected in the exhilarating irruption in the mid-1990s of, at the time, the novel DVD, followed by the Blu-ray in the mid-2000s, both of which in the space of 20 years have gradually declined in importance, overtaken by new distribution and exhibition models like streaming. Although still a commercial reality, the legacy DVD and Blu-ray business can be said to be trending towards zero. Crucially, many of the professional subtitling practices brought about with the advent of digitization continue to be relevant. The client, normally a production or distribution company but sometimes a television station or film festival, contacts the language service provider (LSP) with a commission. General details concerning the name of the client, the title of the audiovisual production, the number of languages into which it needs to be subtitled, the delivery deadline as well as the contact details of the project manager and the translator(s) assigned, among other details, are entered into the system. In the field of subtitling, LSPs are more often than not multilanguage vendors (MLVs) with the capability of offering AVT services in a wide range of languages. Incidentally, film and episode titles are not normally translated by the subtitlers themselves but rather by the marketing departments, in such a way as to attract more viewers and thus maximize revenue. This usually happens when creating the marketing campaign, before the film moves to the translation stage. Somebody in the company watches the commissioned programme to make sure that the copy is not damaged, to verify that the dialogue list (§2.4) is complete and accurate – if the AV production comes with such a document – and to check if there is any other information (songs, hard titles, and the like) that also needs to be translated in addition to the dialogue exchanges. If one of these working documents has not been supplied, a transcript of the utterances will need to be produced ab initio from the soundtrack, either manually or with the help of transcription applications based on automatic speech recognition (ASR). A digital working copy is then made of the original film and, if need be, it will be telecined to convert from the cinema 24-frame rate to the video 25-frame rate (PAL and SECAM systems) or 30-frame rate (NTSC system). To strengthen security and dissuade bootlegging, special anti-piracy measures might be taken, with some companies
blurring the photography or producing low-resolution copies while others prefer to add spoilers or watermarks throughout the production, reminding the viewer about the legal owner of the copyright. Also known in the industry as timing and cueing, spotting is the next task. It consists in determining the in and out times of each and every one of the subtitles in a film, i.e. deciding the exact moment when a subtitle should pop up on screen and when it should leave, according to a series of spatial and temporal considerations discussed in Chapter 4. For many professionals, the ideal situation is when spotting is carried out individually by the experienced subtitlers themselves, though since the advent of digitization at the turn of the 21st century and the spread of multilingual distribution via DVDs, Blu-rays and, more recently, streaming, the task is usually performed by technicians conversant in the SL only. They are then responsible for creating a template or master English file to serve as reference for the translated subtitles, as further broached in §2.5. On occasions, especially in the case of theatrical releases, the film may come with a very detailed combined continuity and subtitle/spotting list, containing all the dialogue already segmented into master titles for the translator to follow (§2.4). A copy of the film and the dialogue list, and/or a file with the master titles, are then forwarded to the translator, although it is not uncommon in the profession for subtitlers to have to work without access to the images, or directly from the soundtrack without a copy of the written dialogue. The reasons are manifold, from fear that illegal copies might be made to a sheer lack of time because the distribution company wants the translation to be underway while the film is still being edited, so that the date of the movie’s premiere can be met, for instance. Before digitization, audiovisual programmes were sent to the translators on a VHS cassette, but these days the file transfer protocol (FTP) is the standard used for the transfer of files and videos between a client and server on a computer network. The most common video file formats in the industry used to be .avi and .mpeg, which are still in use, but these days formats like .wmv, .mov and .mp4 are preferred for storing digital video. When working in the cloud, subtitlers do not need to download any materials as they are securely stored on the vendor’s platform (Chapter 9).
All the videos used in this project are .mp4, with a frame rate of 25 frames per second.
Sometimes there is an additional step between spotting/template-making and translation, called conformance. It happens when, due to time limitations, the spotter has to work not on the final cut of a film or programme but rather on an intermediary cut, known as a locked cut. Conformance, then, consists in (quickly) adapting the subtitles to the final video cut after it arrives. In that final cut, some new shots may have been added, some removed, some made shorter or longer, etc. Occasionally, translation commences after template conformance, but in some instances translators can be asked to start working from a locked cut’s template and then to conform their translations themselves to the final cut when it arrives. Watching the film or programme in its entirety before proceeding to translate is highly advisable, although not always very realistic when having to comply with tight
deadlines. Time permitting, and if working with a written dialogue list, it may be sensible to take notes of the points and issues that could prove problematic at a later stage. Having done the preparatory work, the subtitler can start with the translation from the source to the TL, paying due attention to the actors’ dialogue, but without forgetting other acoustic and visual elements that should also be translated, such as songs, inserts, newspaper headlines, or voices coming from a radio or a television set. Besides being aware of the technical constraints imposed by the medium, translators also have to be familiar with the value added by the images, which is often culturally determined. When working from a template file containing the master titles, known in the industry as originating, translators are requested to translate the English subtitles into their respective working language, filling the corresponding empty boxes of the template. If taking care of timing and segmentation adjustments too, then the subtitler will have to make use of a specialist piece of software to perform this activity. Once the translation (and spotting) have been completed, the resulting work will be saved as a file format that tends to be unique to subtitling. These files can be either specific to the subtitling software manufacturer or one of the many industry standard files that facilitate the exchange of this type of material, such as .stl, .pac, .srt and .ass. In the case of Wincaps Q4, the files created have the extension .w32 and the program allows for easy exportation into many other file formats. OOONA has its own file extension for projects, .json, and like Wincaps Q4 it also permits easy conversion into multiple formats. The special encoding applied by these professional types of file, which can only be read by the appropriate software, means that subtitlers can format their text in a variety of ways and the information will not be lost when exchanging the files: text colour, text background colour, onscreen positioning, bold, italics, foreign characters and the like. Subtitles can also be edited within basic programs such as notepad, in which case they are usually saved as .rtf or .txt. A file format that is gaining momentum is Web Video Text Tracks (WebVTT), .vtt, as it allows for the addition of any form of metadata that is time-aligned with audio and/or video content, thus making it easier for the user to navigate to a particular section of a video. Once the translation has been completed, the subtitler should run final checks to ensure that it is error free, and only then email the file with the translation, in the requested format, to the person who commissioned the job at the LSP, unless they are working in the cloud, in which case the subtitles will be automatically saved on the platform and there will be no need for exchanging files. If the subtitler was not asked to produce the actual subtitles but was instead charged with producing a ‘complete’ translation of the dialogue exchanges without consideration of the technical constraints, the delivered translated text would need to go through a technical adaptation process. In cases like this, a technician or adaptor is then responsible for adjusting the translation to appropriate subtitle events, according to the spatial and temporal limitations that encroach on the various scenes of the film, as well as to the subtitling display rate applied throughout the programme.
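By way of illustration of two of the exchange formats mentioned above, the same (invented) subtitle event looks as follows in SubRip (.srt) and in WebVTT (.vtt); note the sequence number that SubRip requires, the comma versus full stop before the milliseconds, and the WEBVTT header line that the latter expects at the top of the file:

SubRip (.srt):
214
00:14:02,480 --> 00:14:05,200
Where were you last night?
I already told the police everything.

WebVTT (.vtt):
WEBVTT

00:14:02.480 --> 00:14:05.200
Where were you last night?
I already told the police everything.

These are only skeletal excerpts; both formats also accept positioning and styling information, and WebVTT in addition allows cue identifiers and time-aligned metadata blocks of the kind referred to above.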
This practice tends to be more prevalent when subtitling for the cinema or sometimes for a television station than when working for the DVD or the streaming industries. It is also commonly adopted when surtitling for the opera and the theatre, though the professional reality is that these adaptation tasks are
becoming obsolete and, when followed, they are increasingly done by the subtitler rather than a technician. To guarantee a final product of high quality, after the subtitles have been received by the LSP or produced in-house from the complete translation, a revision is carried out by a proofer to detect any possible mistranslations, typos or inconsistencies. Spelling mistakes seem to be more noticeable on screen than on a page and must be avoided at all costs, as their presence may not only be irritating but may also spoil the viewing experience. In the case of languages going through a process of linguistic standardization, like Catalan or Basque in Spain, a linguistic corrector, who works mainly for public service broadcasters, may be called on to ensure that the language used on the screen abides by the stipulated linguistic rules. This stage in the subtitling process is by no means habitual practice and, when it does happen, the subtitler may receive the proofread translation file once again and be asked to accept or reject the proposed suggestions and corrections. The completed translation file is then returned to the LSP or updated in the cloud. In some companies, a subsequent quality control (QC) department checks the accepted translation and, if any questions arise, they send the file back to the translator for correction one more time. Along with checking formatting and language usage, quality controllers, popularly known as QCers, also check the consistency (e.g. that names are translated in the same way throughout the subtitles) and compliance (e.g. that certain terms deemed obscene by the client have not been used) of the translation. In some companies, QCers are expected to fix the subtitles and mark the errors, which then influence the subtitler’s performance metrics. When working for the DVD/Blu-ray industry or subscription video on demand (SVOD) platforms, the file containing the accepted subtitles is converted into an image format and is again checked, in what is known as the simulation. The individual responsible for reviewing this stage checks each subtitle and, if the need arises, corrects any conversion and timing errors, typos or grammatical mistakes. After these final corrections, the subtitles are ready to be delivered to the client. In the case of theatrical releases, and before the subtitles are laser burned onto the celluloid, a simulation of what the film is going to look like with the subtitles is carried out in the presence of the client. Some of the big distribution companies employ a supervisor who is responsible for this activity. If required, amendments or changes are incorporated at this stage. The simulation of an average film – some 90 minutes and a thousand subtitles – takes about three hours, and the subtitler is sometimes, though not always, invited to attend the session. When the simulation is to the taste of the client, the subtitling company can then proceed to the laser engraving of the subtitles on the celluloid. Laser subtitling is still widely used in theatrical releases and, although relatively more expensive, it has proved to be more effective and reliable than any of the previously mentioned methods (§1.5.5). In this process, a laser ray burns the film emulsion in order to engrave the subtitles, producing a black dust that obscures the new written text and requires the celluloid to be washed and dried again in a special machine immediately after the operation.
Once the subtitles have been laser engraved on the film copy, a final viewing takes place to make sure that both the engraving and the washing of the celluloid have gone according to plan. The film is then dispatched to the client. Despite its praised efficiency, laser subtitling is doomed to disappear in the not-too-distant future as digitization and technological advancements offer new and cheaper solutions. In this respect, one of the alternative methods
is electronic subtitling, which is more economical and often utilized in events like film festivals. Up to this point we have seen the stages that are usually followed when subtitling a feature-length movie. However, practice and theory do not always coincide, and some of the stages might be skipped. It cannot be forgotten, either, that some of these operations are being constantly revisited, and what was normal practice a few years back is now obsolete. The digitization of the image, the commercialization of subtitling editors and, more recently, the migration to the cloud are some of the milestones that have led to considerable changes in the profession (Díaz Cintas and Massidda 2019).
2.3 The professionals
The traditional, academic perception of translation as an individual activity clashes with the professional reality of subtitling being the result of a sustained team effort, to which a panoply of professionals contribute. The spotter, or templator, is responsible for the technical task of deciding the in and out times of the subtitles and for creating templates and master titles with relevant annotations for the translators. These professionals tend to share the language of the original programme, although not always, and they might not know any FLs, but they are expected to be technologically literate, with an excellent working knowledge of subtitling programs. They should be conversant with film language and narrative techniques and capable of dealing with issues like shot changes. Translators, on the other hand, are in charge of the language transfer, must have an excellent command of the source and the TLs and cultures, and know the intricacies of shifting from speech to written text. Increasingly, they are known as subtitlers and are expected to be fully familiar with the spatial and temporal constraints that characterize subtitling, as well as conversant with the relevant translational strategies that are commonly applied in this field. In some countries, companies also employ an adaptor, an expert in the media limitations that constrain subtitling and familiar with condensation and reduction strategies in the TL. The role of this professional is to convert the rough translation produced by the translator into subtitle events, searching for shorter synonyms and altering syntactical structures without sacrificing the meaning of the original, even though in some cases they might not know the SL. Although still very prominent in dubbing as a professional in charge of lip synchronization, adaptors are gradually disappearing from the subtitling profession, and their tasks are being subsumed by the subtitler. As professionals with ample experience in subtitling, proofers, revisers or QCers are responsible for checking and curating the work done by other colleagues, paying attention to the linguistic, translational and technical dimensions. Finally, project or client managers are in overall charge of the planning, development and execution of a particular project, liaising not only with the client but also with the rest of the professionals assigned to a given commission. Figure 2.1 offers a synoptic overview of the typical subtitling workflow, from the perspective of the various professionals involved in it.
Figure 2.1 Typical subtitling workflow
The fact that traditionally neither spotters nor adaptors have been required to be conversant in the language of the audiovisual programme has been criticized as a major weakness of such a workflow, particularly because any changes incorporated
can shift the meaning of the original in a negative manner. Indeed, authors such as Luyken et al. (1991: 57) posit, “[i]deally the translation and subtitling functions should be combined in one person which will reduce the risk of error due to the inaccurate communication of concepts”; a combination that has been successfully in place in many countries and is fast spreading to the rest. Indeed, most subtitlers these days carry out both tasks, i.e. translation and adaptation to adhere to the subtitle conventions. As for spotting, and in countries with a longstanding tradition in subtitling, the general approach before the arrival of digitization and the establishment of the DVD industry was for a professional to take care of all subtitling tasks, from the spotting of the original dialogue to the translation and adjustment of the final subtitles, condensing the message if and when necessary. This has been pretty much the norm in some working environments like cinema and broadcasting subtitling. The arrival of the DVD brought about a substantive overhaul in the industry, with the division of the world into zones and the propagation of new working practices that have now been consolidated in the VOD and streaming ecosystems.
A document illustrating the different DVD and Blu-ray regions in the world can be found on Web > Chapter 2 > Additional material > Regions
From the point of view of translation, the most significant development has been the cohabitation on one single DVD, Blu-ray or streamed production of several dubbed and subtitled versions of the same film in various languages. For a producer needing a film or programme to be subtitled into manifold idioms, it is more convenient to send
the audiovisual production to a single multilanguage vendor, who is in principle capable of delivering in all the required languages, than to commission the work from numerous single-language vendors or small companies that will only be able to translate into one or just a few languages. For a large subtitling company game to embark on multilingual projects of this nature, it is operationally easier and financially cheaper to create a single document, known as a template (§2.5), which contains the master titles to be translated into all the different languages. The original dialogue is then spotted in the same way and all the subtitles follow exactly the same timing in all the TLs. In these scenarios, the distinction between spotters/templators and translators/subtitlers is a very real one, despite some of the criticism levelled at such division of the tasks (Nikolić 2015).
2.4 Dialogue lists
The main objective of any translation is to reformulate a message originally produced in an SL into another language, avoiding any misunderstandings in the process. In other translation practices it might arguably be easier to deal with some translational challenges in footnotes or by leaving out or radically changing obscure cultural referents or difficult plays on words, even if this is hardly good translation practice. It is more difficult for consumers to spot mistakes or changes in the translation unless they have direct access to the original text, which is not always the case. Yet, this concealment is impossible in a translation practice as uniquely vulnerable as subtitling (§3.3.1), always defined by the concurrent presence of the original and TTs at all times. In this context, a good dialogue list is a key document that facilitates the task of the audiovisual translator, helping to dispel potential comprehension mistakes. As far as nomenclature goes, many terms exist to refer to this type of document – e.g. screenplay, (as-broadcast) script, dialogue transcript, combined continuity, etc. – but, despite the minor differences among them, the umbrella term dialogue list is used in these pages. A dialogue list is essentially the accurate compilation of all the dialogue exchanges uttered in the film and is a document usually supplied by the film producer or distributor. Besides a verbatim transcription of all the dialogue, the ideal list also offers extra information on implicit socio-cultural connotations, explains plays on words or possible ambiguities, transcribes any written text that appears on screen, elucidates the meaning of colloquial and dialectal terms, gives the correct spelling of all proper names and clarifies implicit as well as explicit allusions, though not all dialogue lists are this complete. The European Broadcasting Union (EBU), as a result of the Conference on Dubbing and Subtitling that took place in Stockholm in 1987, launched a proposal with general guidelines aimed at homogenizing the production of these documents (Ivarsson and Carroll 1998). However, they are only recommendations, and the professional reality is that these documents follow many different layouts and vary enormously from one to the next. Although post-production dialogue lists are vital to carry out an optimal translation, they are rare in the industry, even in the case of films and TV series, which rely on multimillion dollar budgets and are long-term projects. When working with old movies or productions that do not come with a dialogue list, subtitlers need to transcribe the exchanges directly from the soundtrack, either manually or automatically. In these cases, due care has to be taken as poor sound quality, overlapping of dialogue lines, strong accents and environmental noises can easily lead to mishearings and misunderstandings
of the ST and, therefore, to the production of subtitles that are at odds with the original message. In the age of the internet commons, free access to dialogue lists has improved considerably, thanks to the abundant number of portals archiving the screenplays of many films and popular TV series. As these scripts tend to be pre-production, rather than post-production, they do not always coincide with what can be heard on screen, and a thorough checking of their contents is advisable. Dialogue lists of feature films to be shown on cinema screens tend to be the most detailed ones, particularly the ones that accompany most blockbusters hailing from the USA. Large multinationals and streaming giants are fully aware of the economic potential of their productions on the world market and rely on these documents to smooth the translation process. Other cinema industries, on the other hand, do not seem to appreciate the added value of these working documents and, on the few occasions when films come with dialogue lists, they tend to be rather basic, incorporating little extralinguistic information to help translators. Figure 2.2 is an example of the very detailed dialogue list, more precisely known as a combined continuity and subtitle/spotting list, of the film Manhattan Murder Mystery, which can also be found on Web > Chapter 2 > Additional material > Figure 2.2. On the left-hand side of the page the Combined Continuity & Dialogue can be found, consisting of all the utterances that will need to be translated when dubbing the film. The literal transcription contains the detailed and faithful reproduction of what the actors say on screen, including repetitions, exclamations, false starts, redundancies, linguistic fillers, etc. together with information on the performance of the characters, in brackets, such as (overlapping), (chuckles), (sighs) and the like. Whether the character speaks on or off camera is also indicated with the symbols (ON) and (OFF) respectively. The document also contains stage directions, in capital letters, indicating actions, movements of actors, sound effects, lighting and similar features. When provided with this wealth of information, the translator is sometimes asked to translate without a copy of the film. In the centre column, information is provided about the reel in which the utterances are found, i.e. number 2 in the example in Figure 2.2, followed by the sequential number of the subtitle: 2–104, 2–105 and so on. On the right-hand side of the page, entitled Spotting List Footages & Titles, we find what in the profession are known as master (sub)titles, i.e. the dialogue already cut into chunks and pruned of what is considered to be irrelevant information. This is the text that subtitlers are asked to transfer into their TL. It also incorporates myriad explanatory notes aimed at facilitating and speeding up the translator’s task, whether for dubbing or for subtitling, though some of them may seem rather expendable for the translation process as they may state the obvious. The three sets of digits preceding this material mark the spotting of the subtitle, indicating the precise place where it appears (459.8) and disappears (465.14) in the footage of the film, and the length of the speech utterance (6.6), in feet and frames (§4.4.9).
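As a hedged aside on the arithmetic behind those figures: 35 mm film carries 16 frames per foot, so the quoted in and out points can be converted into frames and seconds as sketched below (the conversion assumes 24 fps projection; the snippet is illustrative, not a professional tool).

# Converting the feet+frames figures quoted above into a duration,
# assuming 35 mm stock (16 frames per foot) projected at 24 fps.
FRAMES_PER_FOOT = 16
FPS = 24

def feet_frames_to_frames(feet: int, frames: int) -> int:
    return feet * FRAMES_PER_FOOT + frames

start = feet_frames_to_frames(459, 8)    # subtitle appears at 459 feet 8 frames
end = feet_frames_to_frames(465, 14)     # and disappears at 465 feet 14 frames

duration_frames = end - start            # 102 frames, i.e. 6 feet 6 frames (the '6.6' above)
print(duration_frames / FPS)             # 4.25 seconds on screen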
What is interesting in the case of the condensed dialogue on the right-hand side of the page is its greater linguistic rationalization and naturalization when compared with the more spontaneous dialogue found on the left, something that also happens in the production of templates (§2.5).
Figure 2.2 Detailed dialogue list
Hesitations, repetitions, stutters, false starts, etc. have all been neutralized into much more compact and logical discourse, albeit more standard and less colourful, as can be appreciated in the following two lines in Example 2.1 from the film Manhattan Murder Mystery:
Example 2.1
You said she liked- She liked eating high cholesterol desserts. Is that what you said? → You said she liked eating high cholesterol desserts.
Yes, yes. Yeah, no, no, no, no, I-I-I-I understand. I, uh, yes, no. → Yes, no. I understand.
Traditionally, dialogue lists have largely been used only in the translation of motion pictures to be screened in cinema theatres, while programmes to be broadcast on TV or released on DVD or Blu-ray rarely made use of these working documents and, when they did, they were mere transcriptions of the dialogue exchanges, with virtually no additional information for the subtitlers. The situation seems to be changing these days and OTT distributors like Netflix (n.d. 1) are requesting their media partners to deliver detailed dialogue lists together with their audiovisual productions, which have to contain frame-accurate timing, verbatim transcription of the dialogue and any written text appearing on screen, be it burned-in subtitles (i.e. any subtitle that appears in the original film to translate a FL, for instance) or onscreen text (i.e. main titles, end credits, narrative text, location call-outs, and other supportive or creative text), as well as pertinent annotations. Figure 2.3, which can also be found on Web > Chapter 2 > Additional material > Figure 2.3, is a typical example of such a list:
Figure 2.3 Content of a typical dialogue list, as required by Netflix
A good dialogue list should be compiled by a professional with a flair for the sort of problems involved in linguistic transfer and should contain as many relevant details as possible. Some professionals may find some of these explanations irritating and often too basic, but it cannot be forgotten that the ultimate aim of the list is to be useful to as many translators as possible, all over the world, and from very different cultural backgrounds. What may seem straightforward for a European audiovisual translator subtitling a US or French film may not be so for a translator working into an African or Asian language.
2.5 Templates and master (sub)titles
With the boom of the DVD industry in the 1990s, international subtitling companies were set up, usually headquartered in California and London, to respond to the need for an unprecedented high volume of subtitles, which were required in many different languages and with fast turnaround delivery deadlines, prompted by studios wanting to close the windows between theatrical and DVD releases in order to combat piracy (Georgakopoulou 2012). In a rather short span of time, LSPs found themselves having to create multiple subtitling streams for both scripted (e.g. feature films, TV series) and unscripted material (e.g. commentaries and value-added material) commonly found on DVDs. To address this new agile ecosystem, new workflows were designed based on centralized subtitle creation, making it easier for studios “to keep track of and archive their assets, such as subtitle files, and hold the copyright to reuse them as necessary for adaptation or reformats to other media, e.g. for broadcast, video on demand (VOD), airline releases, etc.” (ibid.: 81). From a strategic and operational viewpoint, the subtitling process was thus split into two very distinct activities: technical spotting and linguistic translation. One of the main pillars of this centralization was the introduction of the template file, also known in the industry as master file, master (sub)titles, genesis file, spotting list and transfile, which continues to be pivotal and widely used in today’s industry. These files contain the fixed in and out timecodes of the subtitles and, more often than not, the SL dialogue has already been compressed to avoid subtitle display rates that exceed the agreed maximum. By contrast, verbatim templates rely on a word-for-word transcription of the dialogue uttered by the actors on screen. In the new cloud-based environments (Chapter 9), workflows are being constantly monitored and the use of templates reassessed. On this front, subtitlers have been awarded a certain degree of latitude as, depending on the companies, they are entitled to alter the timecodes and can ignore the condensed formulations if they prefer to work from the complete utterances heard in the original soundtrack. Templates are usually produced in English to accompany an original production also in English, but they can also be created in English to be used as a first or pivot translation in the subtitling of an audiovisual programme originally shot in a third language (say a film in Greek or Hindi that is to be translated into Czech or Icelandic with the support of English master titles as the relay language). A potential consequence is that any errors or misunderstandings in the English translation will most likely be replicated in the other languages too, and ambiguities, nuances and interpretations of particular lines in the original will also be filtered through English. Unsurprisingly, this modus operandi has been decried by many as problematic and worrying, since not only are most audiovisual productions already produced in English, but even films shot in other languages end up being translated from and filtered through English. The opposing view maintains that this way of working is inevitable, and a solution to enhance the quality of the resulting subs would be the creation of templates that contain very detailed information to help the subtitlers. On rare occasions, pivot master titles in languages other than English may also be used (e.g.
a template in Swedish to subtitle a film from English into Croatian), normally as a means to take advantage of the timings of the already spotted subtitles and not so much to rely on the language of the predefined subtitles (Nikolić 2015).
As for their layout, templates vary substantially across companies, depending on whether they are created for the DVD industry, usually in formats that can be opened with a notepad (.txt, .rtf), or for the streaming market, in which case they tend to be produced in specialist subtitle file formats that can be opened in the relevant cloud applications (Chapter 9). A plain format is the one illustrated in Example 2.2, in which only the most basic information is included:
Example 2.2 (Memento)
Subtitle number  Time in      Time out
0133  00:00:11:04  00:00:12:08
What does he want?
0134  00:00:12:10  00:00:15:20
Wants to know what happened to Jimmy and his money. He thinks I have it and I took it.
0135  00:00:17:15  00:00:18:23
What’s this all about?
0136  00:00:20:17  00:00:24:19
You don’t have a fucking clue, do you? You’re blissfully ignorant, aren’t you?
Example 2.3, on the other hand, not only includes more technical information – i.e. the duration of each one of the subtitles, which is missing in the previous example – but it has also been designed with the help of macros that will not allow translators to write or modify the information contained in the grey rows and will force them to respect a maximum of 37 characters per line, including spaces, when writing their subtitles in the white boxes:
Example 2.3 (U2: The Best of 1990–2000)
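Limits of this kind are also easy to enforce outside a macro-enabled spreadsheet. The snippet below is a minimal, hypothetical illustration of such a check (the 37-character limit is simply the one quoted above for Example 2.3, not a universal standard), applied to one of the Memento subtitles from Example 2.2:

MAX_CHARS = 37  # per-line limit used in Example 2.3, spaces included

def over_limit(subtitle: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Return any lines of a subtitle that exceed the character limit."""
    return [line for line in subtitle.splitlines() if len(line) > max_chars]

sub = "You don't have a fucking clue, do you?\nYou're blissfully ignorant, aren't you?"
for line in over_limit(sub):
    print(f"{len(line)} chars: {line}")   # both lines exceed 37 characters here

In a locked template the translator would have to rephrase or resegment such lines; professional editors and cloud platforms run equivalent checks automatically.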
As already discussed (§2.5), to create the templates, LSPs tend to employ English native speakers who are familiar with the specialist software, conversant with the spotting task and also take the responsibility for condensing and reducing the original dialogue into the master titles that are later sent to all the different subtitlers to be translated into the required TLs. As could be expected, this working approach was received with some controversy when it was first introduced, and continues to be so, particularly among traditional subtitlers and local companies. The advantages and disadvantages of working with these files have been discussed in profuse detail by authors such as Georgakopoulou (2012) and Nikolić (2015). From the industry’s perspective, the central production of subtitles meant an immediate reduction of costs, for the spotting needed to be done only once and then shared among the many translators. The subtitlers, however, saw their rates decrease because they no longer had to take care of the technical dimension of spotting. In terms of working practices, and according to Georgakopoulou (2012: 81):
template files provided the solution to easily centralise the management and quality check of subtitle files without necessarily having in-house staff fluent in all the languages in which subtitles are produced. It also opened up the pool of subtitler resources needed to produce this volume of work to include the entire body of traditional text translators worldwide, who could successfully translate such template files into their native languages with minimal training in subtitle production.
Criticism has been levelled by professionals who argue that using one and the same template to suit all the languages is a practice that ignores idiosyncratic customs, since the subtitling styles followed in each country obey local norms that have developed over the years in response to local needs and, as such, they should be respected rather than disregarded. As evidenced by Georgakopoulou (2010), different countries apply different levels of reduction when creating subtitles, with Nordic countries customarily opting for fewer subtitles than do markets like Spain or France when dealing with exactly the same films. If this is the case, and subtitling is performed according to established, country-specific subtitling norms, then the use of verbatim templates that are locked and contain a literal transcription of all the lines uttered by the actors is bound to prove problematic in countries where reduction of text is of the essence. To appease the disquiet generated by the use of templates, some companies allow their professional subtitlers to tweak the times and the spotting contained in the master list, e.g. they can merge two subtitles or split one subtitle into two. From a linguistic perspective, optimal templates can help ensure spelling accuracy of proper names, for instance, as well as the correct understanding of the original dialogue. Yet, translators working in this environment have to adhere to the fixed cueing of these master (sub)titles and their translations have to render the content of the already pruned subtitles rather than the speech, thus impinging on their freedom since they could otherwise opt to segment the original dialogue in a different manner. This opinion is also shared by Smith (1998: 144–145) when he states, “a ready-made spotting list would tend to encourage translators to divide their subtitles to fit it, even where better
solutions could be found”. In Example 2.4 from Manhattan Murder Mystery, the original dialogue in the left column has been reduced to the master title on the right:
Example 2.4
You want to lie down for a while? We’ll put a cold compress on your head, or a hot compress on your back, or – → You want to lie down? We’ll put a cold compress on your head.
Yet, the subtitler has decided to ignore the master title and adhere more closely to the soundtrack in order to recreate the humorous impact of the original, which would have been lost otherwise, as illustrated in Example 2.5:
Example 2.5
Túmbate, te pondré compresas frías en la frente,
[Lie down, I’ll put cold compresses on your forehead,]
o calientes en la espalda. O... [or hot ones on your back. Or...]
Had the subtitler blindly followed the content of the master title, the humour of the original would have been lost, since there is nothing strange in putting a cold compress on the forehead of someone ill or dizzy. It is indeed the addition of the nonsensical second half of the utterance – i.e. to put hot compresses on someone’s back – that allows for the comedic reading of the text. Discussing the centralization and fragmentation of the subtitling workflow, brought about by structural changes in the industry, Kapsaskis (2011: 174–175) decries the fact that
it is arguable that template files effectively indicate what to translate and how to translate it. To a significant extent, they dictate specific or strategic choices that are often debatable as far as the TL is concerned, and ultimately they tend to replace the audiovisual material as the source text of the translation.
Little empirical research has been conducted to date on the impact that templates have on the quality of the target subtitles. The qualitative analysis carried out by Artegiani and Kapsaskis (2014: 433) of the French, Greek and Spanish subtitles of two episodes of the TV series The Sopranos, which were translated based on template files, shows that “the consequences of text reduction choices in the template left visible traces in all three languages examined and sometimes had an adverse impact on translation quality” as, on many occasions, they were blindly reproduced in the TLs. Some slippages in the unorthodox use of certain punctuation marks and the evidence of stylistic calques, both interferences from the written English text, could also be detected, as subtitlers appear to be using the templates as their main ST for translation rather than the audiovisual material contained in the images and the soundtrack. The results of an online survey conducted by Oziemblewska and Szarkowska (2020)
among practising subtitlers from around the world indicate that the quality of the templates is perceived by many professionals as rather inadequate.

In an industry as multilingually complex and diverse as the subtitling one, template files are here to stay, in one form or another, as they have become a key structural tool of the vendors' workflows, both when working for DVD/Blu-ray publishers and for VOD distributors. However, like dictionaries, dialogue lists and templates are not infallible, and neither are their producers. Subtitlers ought to use these documents with care and be on the alert for any potential conflict between the written text found in the master titles and the information being relayed through the audio and the visuals. In this respect, special attention must be paid to numbers as well as to the right spelling of proper names and loan words from third languages. Rather than acting as a straitjacketing device, these documents should be used with flexibility, as some companies now allow in the cloud, and subtitlers should be granted the freedom to depart from them when they prefer to segment the information differently or to give priority to other details contained in the soundtrack or the images.
2.6 Guidelines and style guides
In addition to a dialogue list of the programme to be translated and/or a list of master titles, subtitlers also ought to receive a style guide, or equivalent, from the broadcaster or subtitling company, in which the main parameters to be applied in their subtitles are made explicit. In an early publication, Zabalbeascoa (1996: 250) was already calling for the systematic use of these documents in the profession:

An essential part of the translator's reference material should be a specialized in-house stylebook, which could include all the information that the employer or firm can anticipate that the translator will need to know and use, including glossaries, television policies and translational norms ... along with a considerable number of practical examples of problems and strategies.
Although not as complete as the ones suggested by Zabalbeascoa, many companies, particularly the large multinationals specializing in subtitling, produce their own guidelines but, as they are for internal use, they have typically seen very little distribution beyond their own employees. Secrecy, for fear of disclosing sensitive information, has traditionally hindered the circulation of these working documents, especially in the case of interlingual subtitling, where some professionals are subject to non-disclosure agreements. Yet, the situation has radically evolved and opened up with the arrival of over-the-top media service providers like Netflix (n.d. 2), which archive all their subtitling style guides in numerous languages on their website, open to anyone willing to access them. The British broadcaster Channel 4's (n.d.) subtitling guidelines for FL programmes can also be freely downloaded from their website, and various audiovisual translation associations have joined forces to come up with consensual guidelines in languages like Croatian, Danish, Dutch, Finnish, French, German, Norwegian and Swedish (§4.2). Glossaries and KNP (key names and phrases) lists are other types of working documents usually provided to the subtitlers.
2.7 Subtitling software editors
After the invention of cinema at the end of the 19th century and the advent of television in the 1950s, the arrival of the internet in the 1990s can be hailed as one of the most significant turning points in the history of human communication. In its relatively short history, the internet has known phenomenal growth and has had a transformative impact on culture, commerce, politics, technology and education, including the rise of near-instant communication by email, instant messaging, telephone calls, two-way interactive video calls and the world wide web with its discussion forums, blogs, social networking and online teaching and shopping sites, among many other key developments.

From a subtitler's viewpoint, the trove of information on the net seems to be boundless. Dictionaries, glossaries, encyclopaedias, specialized thematic websites, distribution lists, automatic translation engines, subtitlers' forums, job offers, archives of dialogue lists and scripts, subtitling file repositories, etc. are just a click away from anybody connected to the internet. From a technological perspective, the internet has greatly transformed the world of translation in general, and that of subtitling in particular, with the development of many software programs – commercial, proprietary and freeware – as well as cloud-based platforms (Chapter 9) designed for subtitling work. The first programs designed exclusively for the professional practice of subtitling started being commercialized in the second half of the 1970s and have, over time, evolved into the generations available today.

A comprehensive list of subtitling programs – commercial, proprietary and freeware – can be found on Web > Appendices > 1. Subtitling software
At the time, subtitlers needed a desktop computer, an external video player in which to play the old VHS or Betacam tapes with the material to be translated, and a television monitor to watch the audiovisual productions. The computer would have a word processor as well as a basic subtitling program, which made it possible to simulate the subtitles against the images on screen. Some subtitlers would also need a stopwatch to perform a more or less accurate timing of the dialogue, while others would use a toggle on the video system.

The switchover from analogue to digital technology at the turn of the 21st century brought about the possibility of digitizing images, thus impacting the very essence of the subtitling profession. The old VHS tape soon became defunct, superseded by the DVD, which in turn gave way to the Blu-ray, with which it has shared the media space for some years, though both are now gradually being forced into oblivion by the seemingly unstoppable forces of internet streaming and VOD, as schematically displayed in Figure 2.4. Subtitling workstations that a few decades ago relied on various pieces of hardware to undertake all the necessary tasks have become obsolete, overtaken by a single computer equipped with software specifically designed for subtitling. And although this is, arguably, still the most typical working environment nowadays, innovations seen in the cloud point to a potential shift of paradigm in the not-too-distant future, whereby the need for installing a specialist subtitling program on one's computer may become a thing of the past.
Figure 2.4 Industry evolution
Unless working with templates, subtitlers who are also in charge of the spotting usually require a computer, a subtitling program and a digital copy of the audiovisual material to be translated in order to perform all the pertinent tasks in front of a single screen. This equipment permits them to have a word processor and a video window open simultaneously on the computer monitor, allowing them to decide the in and out times of each of their subtitles, following the rhythm of the original dialogue and taking care of shot changes, to type their translation, to monitor the display rate and length of their subtitles, to decide on the positioning and colour of the text, to spell check their translation and conduct other checks as required, and to simulate the final product against the images.

Traditionally, a serious obstacle for the freelancer working in this field has been the high price of some of the commercial subtitling programs, which are not cost-effective for those creating subtitles only occasionally or for those working from templates in English. Another downside, from the training perspective, has been the reluctance on the part of many universities and educational centres to invest large sums of money in such equipment, thus thwarting the inclusion of practical modules on this topic as part of their curriculum. To remedy this situation, companies like BroadStream Solutions and OOONA have come up with educational packages that target this niche of the market, thus allowing trainee subtitlers to become familiar with the type of state-of-the-art technology used in the industry.

In the early years, software developers required users to purchase the software in shrink-wrapped form or as an electronic licence operated with a dongle. More recently, however, recognizing that many freelancers take on subtitling jobs only on an occasional basis, making the investment in professional software impracticable, some companies have launched pay-as-you-go versions of their software on more flexible licensing terms. From a commercial perspective, and to compete with the rise
of freeware, a transition has taken place in the industry from capital expenditure (CAPEX), i.e. investing in the purchase of tools, to operating expenditure (OPEX), whereby users rent a piece of software when and if needed. In this manner, professionals do not need to invest a large sum of money up front and can instead buy a monthly, six-monthly or yearly subscription to use the software as and when required.

Competing alternatives to the high price of commercial subtitling programs have also come about. A common practice among some of the leading vendors specializing in subtitling has been the development, by their in-house R&D teams, of their own proprietary subtitling software, which is only made available to those registered in their books as subtitlers. Additionally, the harnessing of collective intelligence and the democratization of technology have materialized in our field in the development of a vast array of free, open source subtitling programs that populate the internet and are widely used in commercial as well as cybersubtitling activities (Díaz Cintas 2018).

Today's subtitling editors are user-friendly and multilingual, with customizable interfaces and shortcuts that allow users to create, edit and convert text-based subtitle files in a myriad of formats. They boast advanced functions such as the automatic calculation of ideal subtitle duration, spell checkers in different languages and frame-per-second conversion. Additionally, most subtitling editors also support style and colour tags and inform subtitlers about the number of characters per second being used in any given subtitle. Yet, as time is money, the capability and functionality of subtitling programs, whether commercial or freeware, are being constantly revisited and enhanced with a view to maximizing subtitlers' productivity and, as a result, reducing labour costs. Improved user interfaces and the automation of certain subtitling tasks, particularly at the technical level, have always been the favoured remit of software engineers.

During the subtitling production phase, users can rely on automatic backup, type horizontal and vertical subtitles and apply any layout deemed appropriate. To speed up the process of synchronizing the subtitles with the soundtrack whilst avoiding crossing shot changes, most subtitling software applications detect shot changes in the audiovisual programme automatically, displaying a timeline in which the video track and the shot change boundaries are shown, thus making it easier and quicker to set the in and out times of the subtitles. In addition, some programs include a shot-change avoidance feature, whereby subtitles are automatically pushed away from shot changes according to whichever specific rules have been given by the client. Another improved feature that enhances the accuracy of subtitle synchronization is the provision of an audio-level waveform bar, whereby changes in soundtrack volume are visually displayed and speech presence can be detected and distinguished from music or background effects. The main benefits of these efficiency tools are twofold. Firstly, subtitlers can skip the scenes with no speech, saving time especially during the final preview or quality check. Secondly, by assisting them in identifying the timing of speech points, these tools make spotting a lot easier, faster and more accurate. The function of playback with audio scrubbing, i.e.
enabling the user to drag a cursor across a segment of a waveform to hear it, both when forwarding and rewinding the video, further contributes to accurate timing. Technology can also assist subtitlers by simplifying the tasks of text input and timecode synchronization. The automatic timing of subtitles is achieved by means of speech alignment technology: the script or transcript of the dialogue is fed to the subtitling program which, equipped with a speech recognition system, synchronizes it with the soundtrack of the video and assigns it a given timecode, taking account
of parameters such as timing rules for shot changes, display rates, minimum gaps between subtitles, and minimum and maximum duration of subtitles. If the script contains more textual information than just the dialogue exchanges, it can still be imported into the software by means of a script extractor capable of parsing the script layout to pull out the dialogue or any other information deemed relevant, such as speaker cues. When subtitling for viewers who are D/deaf or hard-of-hearing, this information can be used to automatically colour the interventions of the different actors, for instance. In the case of live subtitling, automatic speech recognition has been instrumental in the growth of respeaking, both intralingual and interlingual. Some programs have also started to integrate automatic translation engines to offer subtitlers a first draft in the TL.

To complete the translation activity, a raft of checks can be conducted, automatically or manually, to secure high levels of linguistic quality and consistency. These can focus on timing, to guarantee that the minimum and maximum duration of subtitles are respected, the maximum display rate is not exceeded, no overlaps of timecodes have taken place between consecutive subtitles, the minimum gap between chained subtitles is adhered to and shot changes have been respected. They can also centre on presentation and check that font size, type and positioning of text are consistent, the maximum number of characters per line is not exceeded, no subtitles go beyond the safe area and no extra blank spaces or rows appear in the file. Finally, through the proofing check, the software performs an analysis of (customizable) linguistic errors and typos, and ensures that a file complies with the given house style guidelines and/or client rules.

A guide to the checks that can be carried out with Wincaps Q4 can be found on Web > Wincaps Q4 > Guides > 5_Q4_Checks
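By way of illustration, the sketch below shows how a handful of these automatic checks might be implemented. It is a minimal, hypothetical example written in Python, not a description of any particular commercial tool: the Subtitle structure, the check_subtitles function and the limit values (42 characters per line, 17 characters per second, a one-second minimum duration, and so on) are all invented for the purpose of the illustration; in a real project these limits would be taken from the client's style guide.

```python
from dataclasses import dataclass

# Illustrative limit values only; in practice they come from the client's style guide.
MAX_CHARS_PER_LINE = 42
MAX_CPS = 17.0            # maximum display rate, in characters per second
MIN_DURATION = 1.0        # seconds
MAX_DURATION = 7.0        # seconds
MIN_GAP = 0.083           # roughly two frames at 24 fps
SHOT_CHANGE_WINDOW = 0.2  # cues should not sit this close to a detected shot change

@dataclass
class Subtitle:
    start: float      # in-time, in seconds
    end: float        # out-time, in seconds
    lines: list[str]  # one or two lines of text

def check_subtitles(subs: list[Subtitle], shot_changes: list[float]) -> list[str]:
    """Return human-readable warnings for a spotted and translated file."""
    warnings = []
    for i, sub in enumerate(subs, start=1):
        duration = sub.end - sub.start
        chars = sum(len(line) for line in sub.lines)
        if not MIN_DURATION <= duration <= MAX_DURATION:
            warnings.append(f"#{i}: duration {duration:.2f}s outside {MIN_DURATION}-{MAX_DURATION}s")
        if duration > 0 and chars / duration > MAX_CPS:
            warnings.append(f"#{i}: display rate {chars / duration:.1f} cps exceeds {MAX_CPS} cps")
        for line in sub.lines:
            if len(line) > MAX_CHARS_PER_LINE:
                warnings.append(f"#{i}: line exceeds {MAX_CHARS_PER_LINE} characters")
        for cut in shot_changes:
            # A cue placed exactly on the cut is fine; one hovering near it is flagged.
            if 0 < min(abs(sub.start - cut), abs(sub.end - cut)) < SHOT_CHANGE_WINDOW:
                warnings.append(f"#{i}: cue within {SHOT_CHANGE_WINDOW}s of the shot change at {cut:.2f}s")
    for i, (prev, nxt) in enumerate(zip(subs, subs[1:]), start=1):
        if nxt.start < prev.end:
            warnings.append(f"#{i}/#{i + 1}: overlapping timecodes")
        elif nxt.start - prev.end < MIN_GAP:
            warnings.append(f"#{i}/#{i + 1}: gap below the minimum of {MIN_GAP:.3f}s")
    return warnings
```

A production-grade checker would, of course, also verify safe-area positioning, the consistency of font and colour tags, and compliance with the house style guide.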
On completion of the translation, subtitlers can lock the timecodes and/or subtitles to prevent external editing, merge subtitle files, resynch the timing to suit different video formats, export and convert the subtitle file into virtually any format available on the market, and compare source, translated and revised files for quality control (QC). All in all, this functionality helps subtitlers boost their productivity and reach high levels of consistency. Other substantive technological developments that are affecting subtitling practice, like the use of machine translation and memory tools to assist in the actual production of subtitles and the engineering of cloud-based portals to simplify processes throughout the entire work cycle, are explored in greater depth in Chapter 9.
2.8 The profession
The increasing pervasiveness and democratization of technology together with the intensification of globalization processes have facilitated the fluid and immediate circulation of information and cultural messages. The entertainment and multimedia industry plays a crucial role in this agile mediascape through the various AVT practices at its disposal. Technology has come to be an omnipresent reality that infiltrates not only the social and working life of the individual but also the way in which the
external environment is being moulded. To be successful in this ecosystem, translators need to adapt and adjust to the new changes so that they can harness state-of-the-art technologies to their advantage rather than risk being replaced by them.

In recent decades, AVT has been, without a doubt, one of the most prolific areas in the translation industry, if not the most prolific. The production of digital video has been in hyper-growth mode over the last few years. According to Cisco (2019), there will be close to one million minutes of video crossing the internet per second by 2020, and online videos are expected to make up over 82% of all consumer internet traffic by 2022, a figure 15 times higher than that for 2017. This is because of improvements in video technology, wider viewing device options and an increase in content made available online, both from television broadcasters and from other video services. Simultaneously, giants like YouTube estimate that more than 60% of a YouTube channel's views come from outside its country of origin (Reuters 2015), clearly pointing to the pivotal role of translation in cross-cultural communication. The parallel commercial upsurge of translation activity in the audiovisual industry has been corroborated by research conducted on behalf of the Media & Entertainment Services Alliance Europe, underlining that audiovisual media content localization across Europe, the Middle East and Africa is expected to increase from $2 billion in 2017 to over $2.5 billion before 2020 (MESA 2017).

The mushrooming of channels and VOD platforms, driven partly by OTT players like Netflix, Amazon Prime, Viki, Hulu, iQiyi and Iflix, to name but a few, which specialize in the delivery of content over the internet, has opened up more opportunities for programme makers to sell their titles into new markets. To be successful in this venture, most audiovisual productions are accompanied by subtitled and dubbed versions in various languages, with many also including subtitles for people who are D/deaf or hard-of-hearing and audio description for people who are blind or partially sighted.

As highlighted by Bond (2019), traditionally, linear TV broadcasters would focus on distributing programmes on a predefined schedule to household TV sets around the world, while major film studios would direct their efforts to producing box office hits, and streaming services would concentrate on making third-party content available on their platforms. Yet, the lines began to blur when streaming services started producing their own original content and film studios realized they too could get into the streaming game, each one encroaching on the other's territory. The result is that the market for online streaming services is awash with subscription-based platforms pouring billions into original content and vying for the attention of audiences the world over, which in turn has triggered a soaring demand for multilingual localization as a key support service. Never before has translation been so prominent on screen.

Of the various modes of translating audiovisual programmes, subtitling is arguably the most widely used in commercial and social environments for two main reasons: it is cheap and it can be done fast. Subtitles are used in all distribution channels – cinema, television, DVD, Blu-ray and the internet – both intra- and interlingually.
The entertainment industry, and increasingly the corporate and the educational worlds, have been quick to take advantage of the potential offered by digital technology to distribute the same audiovisual programme with numerous subtitled tracks in different languages. On the internet, the presence of subtitles has been boosted by the development and distribution of specialist subtitling freeware, which allows fansubbers and amateur subtitlers to create their own translations and distribute them around the globe (Díaz Cintas and Muñoz-Sánchez 2006). On a more domestic note, one of the most symbolic ways
in which (intralingual) subtitles have been propelled to the media centre stage has been the inclusion of a subtitle button on most TV remote controls, which takes viewers to the subtitles in an easy and straightforward manner. Legislation in many countries is also having a great impact on the total number of subtitled hours that TV stations must broadcast to satisfy the needs of viewers who are D/deaf or hard-of-hearing, with some corporations, like the BBC, subtitling 100% of their output.

As subtitling projects have become larger in terms of the number of hours and languages that need to be translated, their budgets have also increased, making the whole operation an attractive field for many old and new companies setting up innovative businesses or expanding the portfolio of services they provide to their clients. In this highly competitive commercial environment, the role of new technologies aimed at boosting productivity is being keenly explored by many of the stakeholders.

A collateral result of this fast-growing global demand for content that needs to be translated – from high-profile new releases to back-catalogue TV series and films for new audiences in regions where they have not been commercialized previously, without forgetting the myriad of new genres that a few years back would not have been translated – is the perceived "talent crunch" (Estopace 2017: online) in the dubbing and subtitling industry. Given the lack of formal AVT training in many countries, the situation is likely to get worse in the short term, especially in the case of certain language combinations. What is certain, however, is that the demand for audiovisual translation is here to stay, as companies and organizations around the world continue to recognize the immense value of adapting their content into multiple languages to extend their global reach. The industry's playbook is nonetheless still evolving, and subtitlers need to keep abreast of this evolution if they are to develop and sustain a prosperous professional career.

Successfully executing localization at scale is by no means an easy feat for media entertainment buyers. For clients with audiovisual productions that need to be subtitled into a panoply of languages, a normal occurrence in the industry is to approach a large subtitling company that can take care of all the languages involved in the project rather than sending the programmes to various single-language vendors located in the countries where the languages are spoken. Some OTT operators have tried to disrupt this modus operandi. A case in point is Netflix, which launched and then axed its Hermes project (Bond 2018), with which it had hoped to recruit the best subtitlers around the globe under its own steam. The company soon pivoted and realized that media localization suppliers were best placed to test, train, onboard and work directly with subtitlers. The current situation is that streaming giants seem to prefer to appoint various end-to-end localization vendors, who, in turn, manage a wider pool of media localizers, including individual subtitlers and smaller LSPs.

A thorny issue is the fact that many of the big international LSPs specializing in subtitling have their main offices in cities like London and Los Angeles, nerve centres of the audiovisual world, with Bangalore also gaining momentum because of its strategic position to cover the Asian market. This means that, very frequently, translational decisions are not taken in the country where the subtitles will be consumed.
Whereas the diversity of the world out there seems almost limitless, the commercial world of VOD and DVD production appears to be pulling subtitling in the opposite direction. As large multinationals dominate the market, more standardization is encroaching on subtitling practice and the subtitlers' freedom and creativity are curbed, with the use of templates being a case in point (§2.5).
Globalization has also brought about subtitling parameters that are decided outside the country where the programme is finally watched. A greater degree of standardization and homogenization can be observed in the conventions applied when subtitling the same programme into several languages for DVD or VOD distribution. Given that many subtitle tracks can be accessed by anyone anywhere, the tendency is to use the same or very similar conventions in all languages, even though in some cases they might be at odds with domestic practice and longstanding traditions. A development of this nature raises questions about the balance among languages (and cultures) in the audiovisual world, since not only are programmes and films produced in English, but their translations are also being done and decided in the country of origin. The jargon employed in the profession is certainly telling, with expressions such as master titles and genesis files commonly used to name the timed English subtitles used for translation, and territories to refer to the countries where the translated programmes are distributed. The problem is further compounded when, as previously discussed (§2.5), some films from lesser-spoken languages are subtitled using English as a pivot language, rather than translating them from the SL; a practice that may only intensify in the near future given the nascent interest of OTT distributors in the production of shows in languages other than English, which then need to be translated into a myriad of languages to capture audiences around the world (Rodríguez 2017). This development ties in well with new EU regulations that require 30% of the content on all VOD platforms operating in the European bloc to be of home-grown origin (Roxborough 2018), which streamers can only achieve by increasing both original productions and local acquisitions, thus contributing to a hike in localization services from a panoply of SLs.

In a drive to increase efficiency, another practice implemented by some companies consists in converting between varieties of the same language spoken in different parts of the globe, e.g. Castilian Spanish into Latin American Spanish, French into Canadian French, Portuguese into Brazilian Portuguese, Serbian in the Roman alphabet into Serbian in Cyrillic, traditional Chinese into simplified Chinese, and vice versa. The subtitles are initially done from the SL into one of these languages, say Portuguese, and then converted into Brazilian Portuguese. The timing of the subtitles is the same in both languages, and the task consists in changing only the words and expressions that are too local and might not be understood by the other language community. Also in expansion is the use of a neutral variety of the language that can satisfy the needs of several countries where the same language is spoken. Many VOD operators and DVD publishers distribute their productions with a subtitle track in Castilian Spanish and another one in neutral Latin American Spanish, with the latter meant to reach all Hispanic countries in America. To date, little research has been done into the morphological, syntactic and lexical characteristics of this language variant or into the way it is perceived by viewers.

Over the years, many professionals have bemoaned the fact that "film directors and TV producers seldom show any interest in what happens to their works once they are exported to other countries" (Ivarsson 1992: 11).
Though this might be true to a large extent, the reality is that copyright usually rests with the distributors, not with the directors or producers, who therefore do not have much leverage over the translational decisions ultimately taken. When discussing the crucial role of multilingualism in most of his works, Ken Loach (in De Higes-Andino 2014: 215) foregrounds that, in the diegesis of his films, language is massively important and one of his general principles is that "the characters should speak their own language". This plurality of languages tends to disappear in the dubbed versions of some of
his works, like It's a Free World..., a film about migration in which the Polish lines, which are not translated in the original film, are dubbed into Spanish along with the rest of the English exchanges. This procedure not only expunges the linguistic tension found in the original, but it also leads to awkward situations when interpreters appear on the scene to mediate in the communication between characters who, in the Spanish dubbed version, all speak the same language. When made aware of this linguistic travesty, the director acknowledged the faults in the production chain, underlining the fact that cineastes are rarely consulted about the translation of their films.

Notwithstanding this situation, directors and producers ought to embrace translation as yet another creative and artistic factor over which more control needs to be exerted. As foregrounded by Romero-Fresco (2019), translation and accessibility are not typically supervised or controlled by the cineaste or the creative team. In an increasingly globalized market, in which box office receipts abroad can be higher than at home, translation plays a fundamental part in the international success of a movie and, as the required financial investment is very modest when compared with the overall production budget, the return on investment of this language activity can be particularly beneficial. Indeed, in an analysis of the domestic and overseas revenue generated by both the top-grossing and the Best Picture Oscar-winning films premiered since the beginning of the 21st century, Romero-Fresco (ibid.) uncovers that, on average, more than half of the revenue obtained by these films comes from foreign markets and the accessible versions screened for audiences with sensory impairments. However, the estimate is that only around 0.1% of their budgets is devoted to translation and access services. In this respect, and despite a timid evolution, much remains to be done if subtitling, as well as dubbing, is to be understood as an integral part of the production process of a film and not as an afterthought, much in the same way as translation has gradually become a more entrenched component in the internationalization strategy of video games.
2.8.1 Clients and rates
Subtitlers can offer their services, in a freelance or in-house capacity, to a vast array of employers. Language service providers specializing in subtitling and other audiovisual translation modes are the most obvious first port of call for people interested in entering the market. As intermediaries in the business chain, they will, of course, keep part of the budget destined for the translation activity. Working directly with clients in need of subtitles may be more challenging, at least in the early years of a professional's career, but it also tends to be more financially rewarding. Some of these direct clients are production and distribution companies dealing with films, trailers and other audiovisual material meant for theatrical release; private and public television stations; DVD and Blu-ray publishers; production companies dealing with corporate videos; educational centres developing video lectures and other tutorials; and firms working with multimedia products. Other potential clients are the organizers of film or documentary festivals, which usually screen a substantial number of novel productions during a relatively short period of time. Companies working in the interactive software industry, a.k.a. the entertainment software or videogame industry, are also possible employers since they make use of dubbing, voiceover and subtitling when localizing their products.
A list of language service providers working on subtitling can be found on Web > Appendices > 2. AVT companies
Freelancing tends to be the most common form of employment for subtitlers, and only in countries where the volume of subtitling is very high are subtitlers hired in-house. When working freelance, subtitlers can be paid per whole audiovisual programme, per number of words, per subtitle or, more commonly these days, per minute of video. In this latter case, the rates will vary depending on whether the subtitler is asked to perform the spotting or to translate from an already spotted template. Rates vary from country to country and company to company, and translators ought to inform themselves about the going rates, seeking advice from colleagues or professional associations (§2.8.3), to avoid unfair competition and the undercutting of the market with unnecessarily low rates.

There are several factors that impinge on subtitlers' remuneration. When asked to translate from the audiotrack without a dialogue list or template, they should charge higher rates, since the task will take considerably longer. Some translators' associations recommend a surcharge of some 30% to 50%, although this is not always possible and some bargaining might be called for. Subtitlers ought to have sufficient time to do proper research on terminology and cultural referents as well as to revise their own work. The less time allowed for doing the translation, the higher the rates should be. Once again, this desideratum is not always achieved in practice, and it usually depends on the bargaining power of individuals. The medium is another factor affecting rates, and subtitling feature films for theatrical release normally commands higher rates than subtitling for film festivals, broadcasters, DVD/Blu-ray publishers or VOD operators.

Some of the most disruptive changes in AVT come hand in hand with distributors' internationalization strategies and technological advances. In today's subtitling ecosystem, it is not necessary for the professional subtitler to live in the same city, country, or even continent as the client or LSP. The potential unleashed by technology has meant that increasingly large amounts of data, like video clips, can be easily transmitted and exchanged at higher speeds, making freelancing from the comfort of home a distinct possibility for many audiovisual practitioners. Fast broadband speeds allow subtitlers to offer their services to any company around the globe and give firms the opportunity to onboard a multilingual workforce that can be based anywhere in the world. An instantiation of these technical developments can be seen in the new generation of web-based platforms designed for the production and management of subtitles (Chapter 9). Pros and cons can be attached to this situation. The great advantage for translators is the increase in the number of potential clients that can be contacted without having to leave home. The downside is that competition becomes fiercer and, in some cases, companies prefer to recruit their workers from countries where labour costs are lower.

Given that most translators in AVT work freelance and it can be difficult to guarantee a steady source of income, the more versatile the professional is, the more chances they have of securing different jobs within the subtitling industry. That is, the more tasks they are able to perform, the greater their employability potential. As in many other professions, technology has had a profound impact on subtitling praxis and has made life easier for all those working in the field. The profile
expected of subtitlers has changed substantially, and linguistic competence, sociocultural awareness and subject knowledge are no longer sufficient to operate effectively and successfully in this profession. Would-be subtitlers are expected to be fully conversant with information and communication technologies, to demonstrate high technical know-how and familiarity with increasingly powerful subtitling software, and to be capable of quickly acquainting themselves with new programs and specifications, since they are more than likely to have to work concurrently with several different programs and clients.
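To make these variables tangible, consider a purely hypothetical commission: a 45-minute episode paid at an illustrative rate of 5 units per minute of video would come to 225 units; if no dialogue list or template is supplied and a 40% surcharge can be negotiated, the fee rises to 315 units (225 × 1.4). The figures themselves are invented for the sake of the arithmetic; only the 30% to 50% surcharge range mentioned above reflects the recommendations of some translators' associations.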
2.8.2 Deadlines
Although projects in subtitling are virtually always urgent by default, deadlines tend to vary according to the distribution channel. When subtitling films to be screened in the cinema or broadcast on television, translators tend to be allowed more time than when working for film festivals, DVD/Blu-ray publishers or OTT platforms, where the rhythm has accelerated considerably in recent years to keep up with audiences' impatience to watch their favourite programmes, whilst fighting piracy and bootlegging. Film festivals are notoriously frantic, and subtitlers can be asked to translate a film in a couple of days, or even in just a few hours, because of the late arrival of the movie.

Mimicking the sim-ship (simultaneous shipment) approach adopted by the videogame industry, which refers to the concurrent distribution of all content related to a particular video game to all target markets in the appropriate languages, the broadcast scene has coined the term simulcasting, i.e. the broadcasting of programmes or events across more than one medium, or more than one service on the same medium, at exactly the same time. VOD operators have also disrupted traditional distribution and exhibition models, whereby the multiple episodes of a TV series would normally be staggered over various days in one week or follow a week-by-week drop. Nowadays, the concept of day-and-date release is a trend largely driven by the online streaming giants, and consists in releasing multiple episodes of a TV series or a movie around the world in manifold languages to coincide as closely as possible with the release date of the original. With whole seasons becoming available instantaneously on OTT platforms, audiences have taken to binge watching numerous episodes of their beloved series in one sitting.

In this mercurial mediascape, the timespan between the production of original audiovisual footage and its release in multiple languages has been greatly shortened. Faced with timelines that are getting tighter, the answer of vendors has been to increase the speed at which their subtitles are generated. For instance, in a direct reaction to the fast turnaround of subtitles created by fansubbers, large corporations like Netflix have experimented with their subtitling workflows and managed to release each episode of their talk show Chelsea translated into 20 languages to an audience in over 190 countries simultaneously, less than 34 hours after each taping (Roettgers 2016). In an attempt to meet these short deadlines, some vendors have resorted to microtasking, i.e. employing various professionals to work on different sections of the same film or TV series. Needless to say, unless a thorough revision is done at the end, this practice can lead to a lack of cohesion and coherence in the subtitles, as the risk exists that the same terms or expressions might have been translated in different ways by the various translators involved in the commission.

It is rather challenging to ascertain exactly how long it takes, on average, to subtitle a standard-length film of some 90 minutes. The time invested will depend on factors
such as the density of the dialogue, the editing of the scenes and the number of shot changes, the terminology employed, the cultural references mentioned and the overall difficulty of the topic, as well as on the practitioner's experience and expertise. Under normal circumstances, dealing with a film containing some 1,000 to 1,200 subtitles, the spotting can take around two days, and the translator is given between four and seven days to produce the TL subtitles, with an extra couple of days to conduct the QC. In the case of theatrical distribution, and depending on the runtime and the number of subtitles, their laser engraving onto the film copy and the subsequent washing take around ten times the length of the film, so a 90-minute production will require between 15 and 20 hours. The final simulation with the client can be done in a morning or an afternoon. All in all, the whole process of subtitling a full-length film can last some 12 to 15 days from the moment it has been placed with the subtitling company.
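As a quick sanity check of these figures: at roughly ten times the runtime, the engraving and washing of a 90-minute film amounts to about 10 × 90 = 900 minutes, i.e. 15 hours, the lower end of the window quoted above. Adding some two days of spotting, four to seven days of translation, a couple of days of QC and a half-day simulation then brings the total comfortably within the 12- to 15-day turnaround mentioned; the tally is only indicative, since the stages rarely follow one another without interruption.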
2.8.3 Authors' rights and professional associations
At the international level, the Berne Convention for the Protection of Literary and Artistic Works, adopted in 1886, deals with the protection of works and the rights of their authors, providing creators, translators included, with the means to control how their works are used, by whom, and on what terms. As foregrounded by Murillo Chávez (2018), Article 6bis requires members of the Convention to grant authors the moral rights (1) to claim authorship of a work they have created, also referred to as the right of attribution, and (2) to object to any distortion or modification of their work that may be prejudicial to their honour or reputation (a.k.a. the right of integrity). These moral rights are different from the economic or exploitation rights, which generate royalties anytime the subtitles are reused on any of the various media or, these days, for the training of machine engines that can eventually enhance the results of automatic subtitling. Moral rights are conferred on individual authors indefinitely and, in many national laws, they cannot be transferred or waived; that is, translators retain them even after they have signed away their economic rights to the employer.

The broadcasting and distribution rights of original content have traditionally had clear licensing guidelines, unlike the localization industry, which has been laxer about the issue of rights (Nsofor 2020). Against this backdrop, a common occurrence in the industry is for language vendors to include a clause in their service agreements whereby subtitlers are asked to surrender their intellectual property rights to the translation work in favour of the vendor. Ultimately, this transference of rights depends on the copyright laws of the subtitler's country and on whether those laws regard the subtitler's output as having been created in their own country or in the country of the outsourcer, which in the era of globalization may be very difficult, if not impossible, to ascertain. Arguably, translation agencies do this for simplicity's sake, as once they own the economic rights, they do not need to ask, or pay, translators for permission to reuse their subtitles. The fairness of this approach is regularly contested by professional subtitlers in many countries, as it deprives them of a source of income that they consider to be rightly theirs.

In this context, France is rather unique in the sense that dubbing translators and subtitlers are considered full authors under French law, and therefore own the rights to their works indefinitely, both moral and economic. In principle, though rather more complex in practice, they are the only ones who decide what can be done (or not) with their translations, as they are only lending to clients, in exchange for an agreed remuneration, the right to use their work.
Drawing on the Berne Convention, Ivarsson (1992: 106) reminds us that subtitlers "have the same copyright under the Berne and World Conventions as writers and therefore have the right to see their names on works that are published". Along the same lines, Point 5.h, Section III of UNESCO's (1976: online) Recommendation on the Legal Protection of Translators and Translations and the Practical Means to Improve the Status of Translators reinforces this right by explicitly stating:

the name of the author of the translation should appear in a prominent place on all published copies of the translation, on theatre bills, in announcements made in connexion with radio or television broadcasts, in the credit titles of films and in any other promotional material.
As moral rights are inalienable, subtitlers should, in principle, have the last word when deciding whether to have their name acknowledged on the translation. So, subject to their consent, the names of subtitlers should appear in the credits. Yet, the widespread opinion that the best subtitles are those that viewers do not notice runs deep in the profession. From this perspective, the subtitler's task becomes a contradiction in terms: to provide a translation that is written a posteriori on the original programme, flashes intermittently at the bottom of the screen, but pretends not to be there.

In what can be considered an attempt at enforcing invisibility, many distributors and language vendors do not acknowledge the name of the subtitler or the subtitling company, although this practice varies enormously, with some companies and countries being more respectful than others. On certain occasions, subtitlers may decline to have their names acknowledged in a particular production, especially when they disagree with the changes made during the revision process. In those cases in which the decision is taken to recognize the name of the subtitler and/or the LSP, its placement and time of exposure vary according to company and tradition. Film credits are a highly negotiated space among all the film contributors, and common sense should be used when inserting the subtitler credit. Some vendors normally display the acknowledgement credit at the end of the programme, though other companies prefer to add this information at the beginning, mimicking the way it is done in literary translation. So as not to call undue attention to them, the font type and size used for these credits are the same as the ones used for the rest of the subtitles, and they tend to stay on screen no longer than some two seconds. Alongside this onomastic acknowledgement, it would also be desirable for the translated production to include a mention of the date when the subtitles were produced, since it does not always coincide with the production year of the film or TV series.

In addition to the attribution of authorship to the subtitler in a visible manner in the translated productions, other ways of promoting the subtitling profession and the socio-economic standing of translators would be to include their details in national film databases or ad hoc websites, like the Freelance Subtitlers portal on ProZ.com (proz.com/pools/subtitlers), the directory for professionals working in the audiovisual localization industry known as The Poool (the-poool.com) or the Spanish eldoblaje.com, where a section exists for the display of subtitlers with a list of the films translated by each of them. Imitating what has been done in the literary world for much longer, some AVT professional associations have created an annual or biennial prize to recognize the outstanding work carried out by some of the professionals active in this field. Primary examples of this initiative
are the Jan Ivarsson Award (European Association for Studies in Screen Translation, ESIST), the ATRAE prizes (Asociación de Traducción y Adaptación Audiovisual de España) and the Prix ATAA (Association des Traducteurs/Adaptateurs de l'Audiovisuel, France).

The many and disruptive changes taking place in the audiovisual translation industry have led to a greater readiness on the part of translators to come together in an attempt to fight common causes and safeguard decent working conditions. In a relatively short period of time, numerous associations focusing on AVT have been created, mainly in Europe. Such a development can be seen as an initiative building on Point 7, Section III, of the UNESCO (1976: online) Recommendation, which calls for the creation of this type of association, suggesting:

Member States should also promote measures to ensure effective representation of translators and to encourage the creation and development of professional organizations of translators and other organizations or associations representing them, to define the rules and duties which should govern the exercise of the profession, to defend the moral and material interests of translators and to facilitate linguistic, cultural, scientific and technical exchanges among translators and between translators and the authors of works to be translated.
Perhaps one of the most ambitious initiatives is AudioVisual Translators Europe (AVTE), a pan-European umbrella organization that, similarly to the Fédération Internationale des Traducteurs/International Federation of Translators (FIT), functions as a grouping of national associations of AVT translators. Founded in 2011, its main remit is to oversee all matters related to the practice of the profession of audiovisual translation. The organization's aims and objectives are (1) to protect and strengthen the rights and working conditions of audiovisual translators; (2) to uphold and improve professionalism and to promote high standards of AVT and education; (3) to encourage and support member organizations in collective bargaining; (4) to promote cooperation between member organizations, and to support trade organization development by means of the organization of continental and regional groups; (5) to encourage the provision of professional and trade organization education and training for audiovisual translators; (6) to establish and maintain close relations with relevant European, government and non-government organizations; (7) to fight for authors' rights and European reimbursement systems; and (8) to promote the cultural importance of AVT.

The spread and omnipresence of social media have also enabled the mushrooming of communities of subtitlers on platforms like LinkedIn (Subtitlers) and Facebook (Subtitlers & Audiovisual Translators and Subtitling All around the World, among many others). They all act as meeting points for practising and aspiring audiovisual translators and subtitlers, where members can network, keep abreast of changes in the profession, offer help and advice to each other, confer about clients and working conditions, propose solutions to translation challenges, share experiences, mentor newcomers, inform about rights and obligations, and discuss rates and style guides.

A list of AVT associations and groups all over the world can be found on Web > Appendices > 3. AVT associations and groups
2.9 Training

As in many other professional disciplines, the well-being of the subtitling profession depends on the sound training of experts in the field. If only some 30 years ago the teaching, learning and researching of general translation in educational institutions was a rare occurrence, nowadays it is an unquestionable reality. The discipline has advanced, and translation has been part of the educational landscape for some time now. From an initial over-emphasis on literary texts, we have slowly moved on to cover many other areas such as localization, economic, scientific, technical and legal translation. However, despite the growing importance of AVT in our daily lives, many universities have traditionally been rather passive in the preparation of students in this area, and dubbing, subtitling, voiceover and access services have been largely ignored in the curricula. Lack of interest, prohibitive software prices, dearth of teacher expertise, vested interests or mere ignorance may be some of the reasons behind this state of affairs. The end result has been that for many decades AVT could only be learnt hands-on, in-house, outside educational institutions, with little academic backbone.

Yet, despite the multiple challenges encountered when teaching and learning AVT in the context of higher education, the reality is that training in this field has developed substantially in recent years, gaining visibility and prominence in the undergraduate and postgraduate curriculum of numerous translator training programmes around the world, especially in Europe. Likewise, an incipient body of experience-based accounts on the teaching of AVT has seen the light in recent times, ranging from course descriptions (Díaz Cintas and Orero 2003) and multimodal pre-translation analysis (Taylor 2016) to the development of virtual teaching environments (Dorado and Orero 2007; Bartrina 2009) and the didactics of AVT (Díaz Cintas 2008; Cerezo-Merchán 2018). In addition to these formal educational initiatives, stand-alone training courses of a more applied nature have also flourished. Offered by universities and private companies alike, they take the form of online or face-to-face courses and can last from as little as a few hours to several weeks or months.

Emulating other areas of translation specialization, the early pioneering programmes were at postgraduate level, aimed at students with an FL background or prior translation skills. The first AVT-specific course was the postgraduate diploma on film translation launched by Lille 3 University (Bréan and Cornu 2014) in the late 1980s and now discontinued. Interestingly, undergraduate courses on AVT generally took longer to be incorporated into the curriculum, with the first modules appearing in the mid-1990s.

The first instructional courses on AVT used to focus almost entirely on the translation of cinema productions, mainly feature films, as these were the most prominent works being translated in the entertainment industry and enjoyed a canonized status that could be said to run parallel to the translation of literature. In fact, for influential scholars like Bassnett (1980/2002) and Snell-Hornby (1988), subtitling and dubbing were practices subsumed within the larger area of literary translation, as they were understood to deal exclusively with the translation of feature films. Yet, as argued by Chaume (2004b), one of the recurrent early misconceptions in the siting of AVT within Translation Studies was the assumption that this domain dealt with a genre, i.e.
films, rather than a text type, i.e. audiovisual, that encompasses a plurality of genres like documentaries, educational and corporate videos, commercials and many more.
In some programmes of study, notably at undergraduate level though not exclusively, AVT is taught as a general, overarching module that touches on all the various translation modes, including accessibility. Such an approach can certainly whet students' appetite for the field, but it falls short of equipping them with the comprehensive knowledge and instrumental skills that are needed to operate successfully in the profession. To allow for greater granularity and specialization, the curriculum ought to contain discrete modules that focus on the various AVT practices, which is usually achieved with specialist postgraduate programmes centred on AVT. Time permitting, these individual modules may be made up of theoretical core lectures, including profession-oriented topics, and hands-on practical seminars in which students are exposed to translation tasks of increasing linguacultural and technical complexity. Familiarization with the filmmaking process and the language of film should also be nurtured in these educational programmes.

In the particular case of AVT training, the main difference with other translation specialisms lies in its multimodal and multimedia nature, which calls for transversal abilities closely related to digital technology and audiovisual literacy. In this respect, little attention has been paid to unravelling how translation competence intertwines with the various AVT modes, save for the work by Cerezo-Merchán (2018) and Pöchhacker and Remael (2019). Educationists seem to have overlooked the pivotal role that the so-called occupational and instrumental competences, as conceived by the PACTE (2005) group, play in the teaching and learning of AVT. The instrumental competence seems to be particularly relevant in the case of AVT courses as it entails the mastery of AVT-specific software and the ability to work with a plethora of multimedia files and technologies. Likewise, occupational competence, which refers to the abilities necessary to operate appropriately in the translation labour market, seems highly important in an industry subject to substantive time and financial pressures and regulated by high competition and idiosyncratic working conditions.

To train the professionals of the future, forward-looking modules need to be designed and developed now, taking into account the linguacultural dimension as well as the technological possibilities and the market reality. Translators-to-be will need to be able to adapt to an industry in continual development, to an ecosystem on which new technologies such as machine translation, speech recognition and artificial intelligence are already having an impact. In this sense, careers in language engineering, text and subtitle post-editing, live respeaking, microtasking, transcreation and video editing are expected to grow exponentially in the upcoming years, thus demanding a set of add-on skills that higher education institutions will have to include in their existing curricula in order to boost graduates' employability. Training centres will need to re-examine and modify curricula and programmes of study – as much as AVT trainers may need to adjust their syllabi – so as not to lag behind technical advancements and to be able to cater for the new needs of the industry.
Yet, it tends to take a long while before technical innovations instigated by the industry, such as the development of specialist subtitling/dubbing systems, the application of machine translation and the role of post-editing in the field of subtitling, or the more recent migration to cloud-based ecosystems and the embedding of subtitling apps in translation memories, are fully incorporated in the curriculum offered by training centres (Bolaños-García-Escribano and Díaz Cintas 2020a), thus creating an undesirable imbalance between profession and academia.
One of the reasons behind this state of affairs is the fact that collaboration between trainers and the industry has traditionally been minimal, although here too some steady progress is being made, with the degree and type of collaboration varying greatly. To bridge such a gap between academe and industry, a more advantageous relationship should be explored among the interested parties, not only with the aim of licensing appropriate specialist software but also of increasing internship opportunities, organizing workshops and company visits, and promoting real-life professional experiences that can be offered to translators-to-be and members of staff. Indeed, trainers and trainees could be granted wider access to cutting-edge professional tools and platforms for their own interest and technical preparation. Opportunities of this calibre could also be utilized from a research perspective to conduct user experience tests among practitioners and translators-to-be, not only to inform future training but also in exchange for advice on potential improvements to those tools. To ensure that students are able to operate effectively in the future, their instrumental knowledge should be honed through exposure to the latest advances in the industry, including up-to-date technologies and translation workflows.

Scholarly endeavours have been, and will continue to be, crucial in order to fine-tune and improve the teaching of a particular discipline to would-be professionals and ultimately guarantee its sustainability into the future. As already mentioned, it is essential that the long-standing gap between the AVT industry and academia be bridged, so that the latter can benefit from learning about the current and future needs of the former, and the industry can recruit employees with the right skills. Existing synergies of this nature demonstrate that they help scholars to conduct self-reflective assessments of the curriculum on offer at their own institutions while supporting the industry with research it may not have the time to develop, thereby instigating the necessary innovative and transformational changes that will secure the well-being of the discipline in the years to come.

2.10 Exercises

For a set of exercises in connection with this chapter go to Web > Chapter 2 > Exercises
3 The semiotics of subtitling

3.1 Preliminary discussion

Semiotics is the study of signs and symbols and their use or interpretation as part of a signification system used for communication. This, according to Eco (1979: 9), "is an autonomous semiotic construct that has an abstract mode of existence independent of any possible communicative act it makes possible". In other words, it can refer not only to natural language systems but also to any other signifying system that makes use of signs. The semiotics of film refers to the various aural and visual sign systems that films combine to communicate their message or story.

3.1 What, in your view, might the semiotics of subtitling refer to? That is, which sign systems do you expect to be involved in the way subtitles signify, and how?

3.2 Provide some examples of audiovisual productions in which subtitles play a fundamental role in the original version. Analyze a scene.
❶ How do the subtitles contribute to the narrative?
❷ How do they interact with the other sign systems?
❸ What additional challenges might they pose for standard subtitling?

3.3 Films as well as comic strips make use of different semiotic systems.
❶ What do they have in common and in what ways do they differ?
❷ What translation challenges do they share?
❸ What additional challenges are typical of subtitling only?

3.2 Films as multisemiotic and multimodal texts
Subtitles are provided with many different types of audiovisual productions. For convenience's sake the umbrella term 'film' is used here to encompass all of them: fiction films, documentaries, talk shows, corporate videos, TV series, etc. Films are complex multisemiotic texts in which different modes, the term used to refer to the visual and/or aural encodings of different semiotic systems, cooperate to
create potential meaning for the viewer. As Branigan (1992) writes, from the perspective of Film Studies, in film, light and sound create two fundamental systems of space, time and causal interaction: one on the screen, and the other within a story world that viewers conceptualize in their heads. For many early film theorists, cinema represented a universal photographic language that could be understood by all and that would have continued to conquer the world had it not been for the advent of sound, which made translation into different languages necessary. This idea of cinema being a universal visual Esperanto, however, is a fallacy that has been countered many times over. As Dwyer (2005: 301) states:

Historical records reveal ... that the internationalism and supposed universalism of the silent era was in fact underwritten by a vast array of translation practices both linguistic and ideological in nature. During this period translation took many forms encompassing the textual, aural and visual realms. Intertitles were swapped, films were accompanied by live commentators/interpreters, and whole storylines were transformed. Indeed, intertitles were subject to both inter- and intra-lingual translation.
Although both verbal and nonverbal visual and aural signs have always been an integral part of the filmic text, the myth of the universality of images persisted for a long time in Film Studies and eventually also found its way into Translation Studies. Today, it is generally accepted that images do not necessarily carry universal meaning since they are often culturally determined (Chapter 8), and it is also understood that it is not merely due to the advent of talking movies in the late 1920s that AVT had to come to the rescue. Talking movies certainly stimulated the quest for new and appropriate translation solutions, but even early cinema relied on different translation practices (O'Sullivan and Cornu 2019). One of the core challenges of all types of audiovisual translation, including subtitling, resides in the fact that all the filmic sign systems and their visual and aural representations or modes must be taken into account when translating verbal text, in order to create a new meaningful multisemiotic and multimodal whole. The different modes, which together constitute the filmic text and are therefore also central in multimodal research into AVT (Delabastita 1989; Zabalbeascoa 2008; Reviers and Remael 2015; Boria et al. 2019), are:

• the aural-verbal mode (film dialogues, voiceover, narration, lyrics);
• the aural-nonverbal mode (music and sound effects, e.g. background noises);
• the visual-verbal mode (text on screen, credit titles, letters, ads, newspaper headlines);
• the visual-nonverbal mode (editing, photography, mise-en-scène, gestures).
It is through the concurrent interaction of these modes, and their interpretation by the viewer, that meaning is created. In a FL production it is mostly the aural-verbal mode that becomes inaccessible for the new target audience and can thereby jeopardize the multimodal functioning of the target whole. Subtitling, or other forms of AVT, are then activated to fill the gap.
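For readers who approach multimodal analysis computationally, the four modes listed above can be treated as a small annotation vocabulary. The sketch below is purely illustrative and uses invented data: it tags segments of a clip with the modes judged to carry meaning and flags those whose meaning rests mainly on the aural-verbal mode, i.e. the channel that typically becomes inaccessible to a FL audience.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    AURAL_VERBAL = auto()      # dialogue, voiceover, narration, lyrics
    AURAL_NONVERBAL = auto()   # music, sound effects
    VISUAL_VERBAL = auto()     # on-screen text, signs, credit titles
    VISUAL_NONVERBAL = auto()  # editing, photography, mise-en-scène, gestures

@dataclass
class Segment:
    start: float          # seconds
    end: float
    modes: set[Mode]      # modes judged to carry meaning in this segment

def needs_translation(segment: Segment) -> bool:
    """A segment relying on the aural-verbal mode is the one a FL audience loses."""
    return Mode.AURAL_VERBAL in segment.modes

# Invented annotation of a short scene
scene = [
    Segment(0.0, 4.2, {Mode.VISUAL_NONVERBAL, Mode.AURAL_NONVERBAL}),
    Segment(4.2, 9.8, {Mode.AURAL_VERBAL, Mode.VISUAL_NONVERBAL}),
]
print([needs_translation(s) for s in scene])  # [False, True]
```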
In the following subsection we first explore the more traditional verbal components that contribute to the filmic text, the ones that are usually considered to be subtitling's underlying or primary ST: the screenplay and the film dialogue.

3.2.1 Screenwriting and film dialogue
When it comes to subtitling, screenplays are important mainly for two reasons: first, because they are the source of the narrative structure of the film; and second, because they are documents to which subtitlers can resort when no post-production dialogue list is supplied. The internet hosts quite a few good screenplay sites, from where both pre- and post-production screenplays can be downloaded. Having said that, some caution is in order: neither pre-production nor post-production screenplays are entirely reliable and must therefore be double-checked against the film version supplied for translation, especially if they were not provided by the producer or distributor. Likewise, the spelling of proper names, references to historic events, terms in other languages, mentions of numbers and dates, etc., cannot be taken for granted, whether they occur in the screenplays of fiction films or documentaries.

What concerns us here first, however, is the screenplay as a kind of virtual structure. Most commercially released films are highly structured, basically with a view to catching and retaining the attention of the viewers for the entire time slot allotted to the production. Originally, screenplays harked back to the well-structured 19th-century play, relying on theatrical models to tell their stories for the screen. Classical Hollywood films, therefore, have an exposition, development, climax and denouement (Bordwell et al. 1985/1996). In the exposition, all the details the viewers require about the characters, their background, the setting, the historical period and the like are supplied. The development follows an initial disturbance in the life of the protagonist, which sets her/him off on a quest. The climax is the moment of no return toward which the entire film evolves; at this point the conflict in which the protagonist is entangled is either resolved or lost. The climax is followed by a short denouement in which all narrative loose ends are tied up. This basic template has produced a lot of variants by now, although screenwriters are usually encouraged to learn the basics of story structure first, before embarking on experiments (Dancyger and Rush 1991).

Important concepts that keep returning in any mainstream screenwriting manual are continuity and causality, which means that any classical film story avoids narrative gaps at all costs. In mainstream cinema one action always leads to another and everything is somehow connected: not a single window is opened or referred to, so to speak, if it does not have a narrative significance. Given that reduction and deletion are strategies often implemented in subtitling, this is what subtitlers must always keep in mind: events and references to events at the beginning of a programme are usually connected to events that occur much later in the diegesis. Early challenges are set up and used or built upon in subsequent scenes, and even documentary films that have little commercial ambition will try to create some form of suspense to keep their audience interested and improve their ratings (Remael 1998; Hanoulle et al. 2015).

Film dialogues (including partly scripted interviews) are highly implicated in all of this. Dialogue writing is one of the last stages in screenwriting, which starts with a synopsis, followed by a treatment containing a sketch of the different scenes. Only after the outline of the screenplay is established does the writing of dialogue begin. It
goes without saying that dialogue will have to support the story thus outlined, which it does in collaboration and interaction with the film's other semiotic systems. One obvious result of this is that the dialogue exchanges must actually inform the viewers while mimicking the features of everyday conversation; they are what Chaume (2004a) calls a form of oralidad prefabricada [prefabricated orality]. Likewise, interviewees addressing the interviewer of a documentary film are really informing the public of that film and not (only) the journalist. That is to say, there is dialogic interaction between the (fictional or non-fictional) characters on screen, but their words also interact with the film's visual and other aural modes, and this interaction ultimately determines what the film communicates to the spectator. This triangular relationship is, of course, very familiar to anyone working in film, screenwriting or even the theatre, and it has been discussed in depth by the film scholar Vanoye (1985). He distinguishes the horizontal communication between characters from the vertical communication that takes place between the film's apparatus (which includes verbal and nonverbal devices) and the viewer. Through this form of double-layeredness, some of the formal features of film dialogue are meant to suggest the interactional build-up of dialogue, especially in fictional programmes, whereas much of the content is addressed to the viewers who are witnessing the conversation on screen.

How must the subtitler handle this? Subtitling is an ancillary element added to the finished product. It is extraneous to the diegesis or narrative and obviously addresses the viewers. It must therefore contain the (core) information that is meant for the public. Usually, this is thought to be the propositional content of the utterances, but since form and content cannot be completely separated, some of the dialogue exchanges' formal features, which are part of the diegesis, will have to be rendered too. Even though subtitlers will hardly ever have sufficient time for in-depth textual analysis, they should be aware of the existence of the aforementioned underlying narrative structures, of film dialogue's interactional features and its interpersonal connotations as well as its propositional content. A more detailed, if still brief, excursion into the functions that film dialogue fulfils is therefore in order.

According to Kozloff (2000) the functions of film dialogue can be subdivided into two categories: narrative functions on the one hand and functions related mostly to aesthetic effects and ideological persuasion on the other. Thanks to its narrative or narrative-informative function (Remael 2003) film dialogue gives structure to the filmic story, provides (additional) information about when and where it is enacted, supports narrative causality and contributes to the development of the characters. However, it also creates an impression of realism in keeping with its specific context (e.g. in historical films) and controls the amount of information that is imparted to the viewers. The aesthetic and ideological functions of film dialogue can be detected in the way in which it exploits the linguistic resources of language and imparts underlying thematic messages or authorial commentary. Though fruitful, the subdivision is somewhat artificial.
Film dialogue will use its linguistic resources, and especially its interactive features, to support character development or the relations between characters, and to convey an ideologically coloured message at the same time. This is also pointed out by Desilla (2012), who foregrounds that the way characters speak reflects not only their personality, their emotional status and relations with others but also their social class. This is not only achieved through the use of a specific lexicon, marked speech or speech-related idiosyncrasies that reveal background and class but
also through the interactional features of dialogue, or what Linell (1998) calls the dynamics of dialogue. Indeed, as already indicated earlier, film dialogue mimics features of everyday dialogic interaction, such as turn taking and its local production of understanding or conflict, through who speaks first, who keeps the dialogue going or who interrupts or otherwise determines its course. In brief, not only what characters say but also how they say it in terms of linguistic variation and conversation management have an impact on the narrative development. In fact, this includes what is left unspoken or is implied: implicature is part and parcel of everyday conversation and is often used in film as well, where it can exploit the double-layeredness of film dialogue for both narrative and aesthetic purposes, increasing its playfulness, for instance, but also regulating its information flow (Desilla 2012). It is obviously important that subtitlers try to retain as much of this complex dialogic functioning as they possibly can, keeping in mind that the film visuals will always contribute their bit (§3.3).

Let us consider a dialogue excerpt from the published script of American Beauty (Ball 2000: 11–12) by way of example (Example 3.1). This scene follows one in which the protagonist, Lester, has been told that he will have to write a self-evaluation report if he wants to keep his job with the company he is currently working for. Carolyn is his wife.

Example 3.1

EXT. BURNHAM HOUSE – LATE AFTERNOON

A MOVING VAN is parked in front of the COLONIAL HOUSE next door to the Burnhams'. Movers carry furniture toward the house. The Mercedes-Benz pulls into the Burnham driveway. Carolyn drives, Lester is in the passenger seat.

CAROLYN: – there is no decision, you just write the damn thing!
LESTER: You don't think it's weird and kinda fascist?
CAROLYN: Possibly. But you don't want to be unemployed.
LESTER: Oh? Well, let's just all sell our souls and work for Satan, because it's more convenient that way.
CAROLYN: Could you be just a little bit more dramatic, please, huh?

As they get out of the car, Carolyn scopes out the MOVERS next door.

CAROLYN: So we've finally got new neighbours. You know, if the Lomans had let me represent them, instead of – (heavy disdain) – 'The Real Estate King,' that house would never have sat on the market for six months.

She heads into the house, followed by Lester.

LESTER: Well, they were still mad at you for cutting down their sycamore.
CAROLYN: Their sycamore? C'mon! A substantial portion of the root structure was on our property. You know that. How can you call it their sycamore? I wouldn't have the heart to just cut down something that wasn't partially mine, which of course it was.
The entire dialogue is narrative-informative but its interactional message is just as important as its propositional one. The exchange between Lester and Carolyn
first confirms that Lester's job is endangered, which the previous scene already communicated, and is here rendered partly through implicature. The first line of the exchange therefore also has an anaphoric structuring function in that it links the present scene to the previous one. It becomes clear that Lester and his wife, Carolyn, hold different views on how he should react to the way his company is treating him. In addition, Carolyn tells Lester and us, the audience, that the couple have new neighbours and, through implicature, that she is an estate agent. The way she speaks establishes her nervous-tempestuous character and her dominance: not only does she not share Lester's concern about his job, she changes the topic half-way through the scene, thereby using the dialogic features of spontaneous speech to determine the course of the dialogue.

Since film dialogue, like everyday conversation, evolves sequentially, this means that as two interlocutors speak, they build on each other's interventions to move the dialogue forward. In other words, each and every intervention can potentially take the conversation in a different direction. Speakers usually confirm they have understood what has been said before moving on in the interaction, adding new information, giving comments, etc. Each turn at talk, as such interventions are called in conversation analysis, is therefore context confirming and context renewing. This does not only happen when Carolyn radically changes the topic of the conversation. When she says 'possibly', replying to Lester's 'You don't think it's weird and kinda fascist?', she confirms she has heard what Lester has just said, even agreeing with him to some extent. This is the context-confirming part of her utterance. Then she goes on to say 'but you don't want to be unemployed', which gives the conversation a whole new direction. This section of her turn is context renewing. The same happens in Lester's next turn, in which the exclamation 'oh' is context confirming and the remainder of his utterance is context renewing.

Subtitlers sometimes tend to omit short turns at talk when they think they can be easily understood but, first of all, that means they will no longer be accessible for D/deaf or hard-of-hearing viewers and, secondly, such omissions might affect character interaction and thereby also the film narrative (Remael 2003). Not only the topic of the conversation but also the tone in which the lines are delivered, as well as the verbal and physical interaction between the characters, convey the protagonists' disagreement and their communication breakdown. At the end of the scene, as Carolyn heads into the house, followed by Lester, she actually continues to shout words of protest to which he, by then, pays absolutely no attention. These interactional features of the dialogue, and their emotional connotations, as well as the enactment of the dialogue in the broader mise-en-scène, contribute as much to the narrative and to one of the core themes of the film (estrangement) as do the words uttered by the couple.
3.2.2 Intersemiotic cohesion
Since the 1990s, research into multimodal discourse has been drawing on different aspects of Halliday's (1978) social semiotic theory of language and has generated a considerable body of work studying meaning-making in multimodal texts, including what makes them visually and verbally coherent (Liu and O'Halloran 2009; Reviers and Remael 2015; Remael and Reviers 2019).
As Liu and O'Halloran (2009: 368) write, Halliday and Hasan (1976) already pointed out that texture, which is the result of the combination of semantic configurations of both register and cohesion, "is the crucial criterion to distinguish text from 'non-text'". Their well-known example of how texture is created through cohesion goes as follows. The sequence of sentences 'Wash and core six cooking apples. Put them in a fireproof dish' has texture because of the cohesion provided through the co-referentiality of 'them' and 'six cooking apples' (Halliday and Hasan 1976: 2). Stressing the importance of linguistic textual cohesion in a translation context, Baker (1992: 218) defines the concept as the "network of surface relations which link words and expressions to other words and expressions in a text". She contrasts the concept of cohesion with that of coherence, which is "the network of conceptual relations which underlie the surface text" (ibid.). In other words, whereas "coherence is a property of texts that make sense, cohesion refers to those linguistic techniques which are used to facilitate the readers' task of discovering the coherence in your text" (Hannay and Mackenzie 2002: 155), or, from the perspective of Halliday and Hasan (1976), that give it texture. In purely verbal texts, such cohesive devices are reference (e.g. the use of pronouns and various types of anaphoric or cataphoric reference), lexical choice (e.g. repetition versus variation), tense choice and the use of connectives.

With respect to the functioning of multimodal products as texts, Liu and O'Halloran (2009: 369) propose the concept of intersemiotic texture, which refers to

a matter of semantic relations between different modalities realized through Intersemiotic Cohesive Devices in multimodal discourse. It is the crucial attribute of multisemiotic texts that creates integration of words and pictures rather than a mere linkage between the two modes.
The study of intersemiotic cohesion and intersemiotic texture in multimodal texts, especially research using insights from social semiotics in combination with insights from Film Studies (Tseng 2013), is crucial to our understanding of how multimodal products work. On the one hand, this research is relatively young, ongoing and too complex to be covered within the scope of this book; on the other hand, it is especially important for subtitlers to be aware of the interaction between verbal and nonverbal signs in film, and not to focus merely on verbal-verbal interaction when translating. In what follows, we propose a very concrete example of what the interaction between the aural-verbal mode and the visual-nonverbal mode, that is, between words and images, can look like. We demonstrate how narration relies on images, words and their interaction in a scene taken from the film The Snows of Kilimanjaro (Example 3.2).

A few features stand out. To begin with, the narrative relies on camera movement and editing, i.e. extradiegetic visual manipulation that does not belong to the fictional story as such, and on the gestures and looks of the characters, as integrated in the mise-en-scène, i.e. diegetic information that does belong to the story proper. Both supply essential information for the understanding of the film. Besides, there is the wealth of details provided by other diegetic elements such as the setting, props, costumes and the like.
Example 3.2
Film: The Snows of Kilimanjaro
Scene: Beginning
Web > Chapter 3 > Examples > Example 3.2

In this scene, camera movement is used to convey information gradually. First, the viewer sees a close-up of the main character Harry, later the camera moves away from him to reveal a vulture flying overhead, and eventually, a woman, Helen, by his side. Editing, in the form of cuts, varies from a close-up of Harry to a shot of a vulture flying overhead, to a travelling shot of the setting, obviously Africa. There is a mountain (Mount Kilimanjaro of the title) in the background, and there are tents in the foreground. Eventually, the camera returns to the people near the tents. This basically takes care of setting the scene for the action. The gestures and the looks of the characters provide the link between the wound on Harry's leg (also shown in close-up) and the vultures flying above, or perching on a nearby tree, signalling the gravity of his condition. The dialogues then complement and further explain all visually conveyed iconic information. Looks, gestures, facial expressions and language are inseparable, but also result in a degree of redundancy.
In the previous example, different sign systems collaborate to create the narrative but they also create a degree of redundancy. This is typical of much film narration and results from the partial overlap and partial complementarity of the information conveyed by film's different sign systems. In fact, it is one of the features of filmmaking that subtitlers can use to their own advantage. According to Marleau (1982: 274), the relation between image and word can take two concrete forms, and these two forms correspond to two functions which the author also attributes to subtitles (for alternative approaches see Liu and O'Halloran 2009). In some cases the verbal mode further defines information that is also given visually, which Marleau calls fonction d'ancrage or anchoring. In other instances, for which he uses the term fonction de redondance or redundancy, words and images communicate, more or less, the same information. The scene we have just analyzed (Example 3.3) is a clear instance of the first type: the dialogue provides a form of anchoring for the visuals through verbal-visual interaction or intersemiotic cohesion, which then produces a coherent filmic text – one that has intersemiotic texture (Liu and O'Halloran 2009). In this particular case, the cohesion is produced by rather straightforward linguistic means, i.e. personal pronouns that refer to information provided by the visuals rather than to a previous exchange or sentence in the verbal text. In writing, pronouns usually refer anaphorically to nouns, whereas in audiovisual texts pronouns and other features of the dialogue often refer to people or objects on the screen.

Example 3.3
Film: The Snows of Kilimanjaro
Scene: Beginning

Harry: I wonder is it sight or is it scent that brings them?
Helen: They've been about for ever so long they don't mean a thing.
Harry: The marvellous thing is that it's painless now.
Helen: Is it really?
Harry: Yes. That's how you know when it starts.
Without the narrative context and the visual images to support the dialogue, this exchange makes absolutely no sense, as the text has low coherence. Conversely, if the film were to be watched without sound, it would take some guessing to interpret what is happening on screen. This scene is therefore an obvious example of intersemiotic cohesion and anchoring, in which one semiotic system supplements the other; more concretely:

• The personal pronouns 'them' and 'they' refer to the vultures circling above Harry and Helen.
• The indefinite pronoun 'it' in lines three (Harry) and four (Helen) refers to the festering wound (in close-up) on Harry's leg.
• The last 'it', in Harry's third turn, refers to the gangrene that has got hold of the wound and that will eventually kill him.
The referents of 'them', 'they' and the first two mentions of 'it' are visible on screen, within the diegetic story world, and constitute an example of intersemiotic rather than purely linguistic cohesion. The last 'it' is slightly more complicated and left open to the viewers' interpretation: they can replace it with either 'wound' or 'gangrene', basing their judgement on the context or the narration as well as their own knowledge of the world. In other words, this is an example of linguistic implicature (§3.2.1) involving intersemiotic cohesion. It is obvious that for this kind of cohesion to keep working in the subtitled version of the film, the visual-verbal links must be maintained and the subtitles must remain synchronous not only with the audio but also with the images. That is why subtitles should always avoid anticipating, i.e. running ahead of the visual narration on screen, or appearing too late. Still, staying in sync with the dialogues will usually take care of synchrony with the images more or less automatically, whether in a fiction film or a documentary.
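By way of illustration only – the timecodes and tolerances below are invented, not values prescribed by any guideline – the synchrony requirement just described can be expressed as a simple quality check on a subtitle: it should not appear noticeably before its dialogue starts, nor linger long after the dialogue has ended.

```python
# A minimal sketch, assuming subtitle and dialogue cues are given in seconds.
ANTICIPATION_TOLERANCE = 0.25   # assumed: maximum lead-in before speech starts
LAG_TOLERANCE = 2.0             # assumed: maximum lingering after speech ends

def synchrony_issues(sub_in, sub_out, speech_in, speech_out):
    """Return a list of synchrony problems for one subtitle/dialogue pair."""
    issues = []
    if sub_in < speech_in - ANTICIPATION_TOLERANCE:
        issues.append("subtitle anticipates the dialogue")
    if sub_in > speech_in + LAG_TOLERANCE:
        issues.append("subtitle appears too late")
    if sub_out > speech_out + LAG_TOLERANCE:
        issues.append("subtitle lingers too long after the dialogue")
    return issues

# Invented example: the subtitle pops up half a second before the line is spoken.
print(synchrony_issues(sub_in=12.0, sub_out=15.5, speech_in=12.5, speech_out=15.0))
```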
3.2.3 The multimodality of language
Another form of intersemiotic cohesion is at play in the interaction between speech and gesture. In the clip from The Snows of Kilimanjaro, for instance, Harry touches his leg when he says, 'The marvellous thing is that it's painless now', referring to his wound. This is a feature of the so-called multimodality of spoken language itself, which makes use of aural-verbal signs as well as visual-nonverbal gestures and facial expressions. This form of visual-verbal interaction is equally important for film narrative. Dialogue studies in such diverse domains as linguistics, interpreting, anthropology and psychology make use of video recordings that allow scholars to complement their research into how dialogue works from a linguistic viewpoint, with the study of the interaction between word and movement. Luckmann (1990: 53–54) writes that the full meaning of a statement in a dialogue is first produced by an adequate use of linguistic codes and the options of language, but he adds that these "options are chosen or routinely employed, by the speaker, along with body-postures, gestures and facial expressions which are laden with particular meanings". In film, this interaction between words and gestures is always narratively purposeful, as is the positioning of the characters within the mise-en-scène. Another form of intersemiotic cohesion is therefore at work here. Besides, like any form of iconography, body-postures and
gestures also communicate information non-vocally by themselves and are often culture-bound. Some stereotypical differences in gestural meanings are well-known (e.g. shaking the head rather than nodding to signify 'yes' in Greek and Bulgarian), and though some works have been published on the topic (Poyatos 1997), much research is still needed in this area, particularly in the field of AVT, although some publications in the field of audio description have been filling the gap (Mazur 2014), as have recent publications in psychology (Latif et al. 2018). Still, even without worrying too much about culture-bound problems, movements, gestures or a simple nod of the head can in themselves be quite challenging for subtitlers and bring home the need for synchrony, as illustrated in Example 3.4:

Example 3.4
In Anna Campion's film Loaded a group of students is spending the weekend at the house belonging to the aunt of one of the girls. The following exchange is taken from an early scene in the film. One of the guests asks, 'Does your aunt mind us staying here?' and the girl replies, shaking her head, 'No, she hasn't lived here since my uncle died'. Subtitlers cannot afford to contradict this movement, nor the clearly audible 'no'. The negative reply must therefore be retained in the subtitling, even if an affirmative reformulation is, theoretically, possible. If the subtitling were to opt for this latter alternative, the question would have to be phrased as 'Is your aunt happy with us staying here?', for instance, which would then yield a reply along the lines of: 'Yes, she moved out when my uncle died'. However, if there is no obvious reason for changing an affirmative sentence into a negative one, or the other way around (e.g. because it is shorter), it is always advisable to retain synchrony not only with speech, but also with movement, as in this particular case.
The terms habitually used to refer to the movement and positioning of characters, also on stage, are proxemics and kinesics. Proxemics is the branch of knowledge that deals with the amount of space that people feel is necessary to set between themselves and others, whereas kinesics studies the way in which body movements and gestures convey meaning non-vocally. Within Western cultures there is a certain amount of uniformity in the degree of physical closeness that is acceptable between people in a certain situation. The challenge for subtitlers resides in the detection of coherence between movement or closeness and intonation, word choice, as well as other linguistic features, such as (formal and informal) forms of address that will, of course, be codetermined by the narrative situation and the scene as a whole.
3.2.4 Camera movement and editing
Camera movement can also require careful handling when its rhythm somehow conflicts with the linear succession of the subtitles, especially when there is a disruption in the synchrony between visual and acoustic channels. A conversation between two people is often filmed with alternating shots (shot–reverse shot) of the two characters, usually over the opposite character's shoulder. This means that the camera sometimes focuses on the character speaking the lines, but that on other occasions it will direct the viewers' attention to the character who is listening in order to render their reaction. Subtitling will in such cases always follow the speaker, which means that the
subtitle might appear under the person listening, which is not a problem for the hearing public but does require speaker identification in the case of SDH (§1.5.1). In recent productions the pace of editing has increased considerably, to such an extent that it is often impossible and undesirable for the subtitles to follow the visual pace of the film. However, new forms of subtitling such as creative or integrated titles (§3.3.3) actually experiment with different placements and font types on the screen, like some instances of fansubbing, thereby focusing on smooth integration and positioning in the image, combining visual and verbal synchrony.

Generally speaking, it is important for subtitlers to consider scene changes that involve a change in place and/or time when timing subtitles (§4.4.8). In scenes filmed with alternating shots such as the ones discussed previously, the characters remain in one location; in other instances, editing may take the film to another era and setting altogether. Whereas subtitles normally follow the rhythm of speech, meaning that they appear and disappear when a character starts/stops speaking, they are usually allowed to linger on the screen (for several frames) to offer viewers a more comfortable reading pace, if no new dialogue or scene change follows immediately. If the film editing takes the narrative to another location immediately after a character's turn, the subtitle must disappear with or slightly before the cut to a new location, unless the voice provides a sound bridge or link between the scenes.
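The practice just described – releasing a subtitle with, or slightly before, a cut unless a sound bridge carries the voice across it – can be sketched as a simple timing rule. The snippet below is purely illustrative: the frame rate, the snapping window and the data are assumptions, not values prescribed by the authors or by any guideline.

```python
FPS = 25                 # assumed frame rate
SNAP_WINDOW = 12 / FPS   # assumed: out-cues within ~12 frames of a cut get snapped
OFFSET = 2 / FPS         # assumed: leave the screen a couple of frames before the cut

def adjust_out_cue(out_time, shot_changes, sound_bridge=False):
    """Snap a subtitle's out-cue to just before the nearest cut, unless a sound bridge."""
    if sound_bridge:
        return out_time                      # the voice links the scenes: no snapping
    nearest_cut = min(shot_changes, key=lambda cut: abs(cut - out_time), default=None)
    if nearest_cut is not None and abs(nearest_cut - out_time) <= SNAP_WINDOW:
        return nearest_cut - OFFSET          # disappear with, or slightly before, the cut
    return out_time

# Invented example: a cut at 101.4 s pulls the out-cue from 101.2 s to just before the cut.
print(round(adjust_out_cue(101.2, shot_changes=[95.0, 101.4, 110.2]), 2))
```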
3.2.5 A blessing in disguise
Despite the challenges that may arise for the subtitler because of incongruities or overlap between image and sound, it will have become clear by now that the information the images convey can also be an asset in the translation process. Since subtitles are an ancillary translation that is added to the audiovisual production, subtitlers can and must rely on the visuals to abbreviate text when necessary, leaving out redundant information and thereby allowing the film to tell its own story. In fact, it is often the very interaction of the film dialogue with the information given visually that allows for major deletions. This also explains why a print-out of subtitles, just like a dialogue list without scene descriptions, only very rarely makes any sense at all, and why it is essential for subtitlers to have the film at their disposal when they are translating, even though in practice this is not always the case. Consider Examples 3.5 and 3.6:

Example 3.5
In Manhattan Murder Mystery the following scene occurs. As Helen comes down a dangerous-looking flight of stairs, she is warned: 'Watch your step. It's very steep. Be careful'. The subtitles can easily render this as 'Be careful!' in any language, without the need to refer to the step.

Example 3.6
In another scene in the same film, a character is showing off his stamp collection and says: 'Now, let me show you a mint nineteen thirty-three airmail'. A translation stating 'This one is from 1933' will already convey the core message. Note, however, that film genre can sometimes have an impact on decision-making. If this scene were to occur in a documentary on postage stamps, the subtitle might have to be more detailed, depending on how the stamp in question is filmed.
Not only should visually conveyed information be put to good use for the production of short, idiomatic and simple translation solutions, it can also help solve translation problems. The following scene from The Grapes of Wrath provides a good example:

Example 3.7
The Joads, a poor peasant family from Oklahoma, are on their way to California. They have been chased from their land and are making the journey in a ramshackle lorry, joining an army of immigrants on the move. As they are about to embark on their crossing of the Death Valley desert, a bystander remarks: 'Boy, but I'd hate to hit that desert in a jalopy like that!' Referring both to the desert crossing and the 'jalopy' (an old car in a dilapidated condition), one subtitler wrote:

Ik zou niet graag met hen wisselen.
[I'd hate to be in their place.]

Another solution, which makes the most of the information provided by the images, would be: 'I'd hate to travel in one of those', using a deictic pronoun to replace 'jalopy', a word that may be rather difficult to translate concisely in most languages.
In Example 3.7, 'jalopy' is a word that would probably be unknown to many viewers because it is typical of a particular type of non-standard language, and its interpretation is supported by the images. However, as pointed out at the beginning of this chapter, images (or the objects they represent) are not always universally accessible. That is why, occasionally, subtitlers may want to explain some of the visually rendered information. In their study into the relations between visual images and subtitles in the Chinese translation of Finding Nemo, Chen and Wang (2016) confirm that if the visually rendered information is judged to be clear enough, this information is used implicitly to keep the subtitle short or, in their terms, the subtitles then do not verbalize the visually rendered information. However, in instances where the images convey information that is unusual or not easily accessible for the Chinese audience, the subtitles will sometimes verbalize an image. When one of the sharks says, "On my honour, or may I be chopped up and made into soup", the subtitle renders 'soup' as 鱼翅汤 [shark fin soup], a translation that "takes Chinese viewers' cultural background into consideration and facilitates the viewing process by integrating the visual and verbal messages in the subtitle" (ibid.: 75). Likewise, the authors point out that the Chinese subtitles sometimes verbalize the images from a previous scene in more specific anaphoric references than those used in the film dialogue, thereby increasing intersemiotic texture. In another example, a reference to a scene where Nemo was seen riding a turtle, the film dialogue, referring back to that scene, states that Nemo was in the company of "a bunch of turtles", whereas the subtitles specify that he was seen 骑着乌龟 [riding the turtle] (ibid.: 81).

3.3 Subtitling, soundtrack and text on screen

The following sections explore in more detail how a few specific aspects of film's complex signifying systems can impact on subtitling challenges and decisions.
3.3.1 Subtitling's vulnerability
Even though subtitles have become part and parcel of film and TV viewing in many different contexts, and even though the increased need for subtitling is obvious in our globalized and digitized world, many people still hold subtitling quality in rather low esteem. On the one hand, it is true that due to commercial pressure subtitles are not necessarily improving across the board; on the other hand, at the root of this judgemental attitude is the fact that the translated text and the original are delivered concurrently in time and space. This cohabitation of source and target texts allows some viewers to immediately compare both messages, which consumers of other types of translations (e.g. dubbing or literary translation) cannot normally do. Although the two languages also co-exist in interpreting and bilingual publications with parallel texts, the reception of the two messages is not as immediate as in the case of subtitling. While a translated novel or poem, or a dubbed programme, often obscures the original linguistic material, subtitles find themselves in the difficult position of being constantly accompanied by the film dialogue, giving rise to what in the professional world is known as the "gossiping effect" (Törnqvist 1995: 49) or 'feedback effect'. This possibility to contrast original and translation is becoming even easier thanks to the affordability of digital technology that allows viewers to stop a film, return to any previous shot, and freeze a subtitle on the screen for all to see and judge. The co-existence of the two languages unavoidably has its repercussions on the translated programme and, as Gottlieb (1994: 102) foregrounds, "subtitling is an overt type of translation, retaining the original version, thus laying itself bare to criticism from everybody with the slightest knowledge of the source language".

One common subtitling strategy is to transfer all those terms from the original that have strong phonetic or morphological similarities in both languages, and that the viewer may recognize in the original dialogue. Content-wise, the second translation of 'paranoid', in Example 3.8, works just as well as the first, but the first option retains a closer phonetic link with what the audience actually hears:

Example 3.8
Listen, you're getting too paranoid.
1. Hé, je bent paranoïde. [Hey, you're paranoid.]
2. Overdrijf toch niet zo! [Don't exaggerate!]
The non-appearance in the subtitles of recognizable lexical items that are audible in the soundtrack may be a factor directly responsible for the criticisms that many viewers launch against subtitling, although this would need to be further confirmed by reception research. Some viewer comments certainly seem to point out that shorter and less literal translations lead many to believe that the translator has forgotten to translate such-and-such a word that was audible on the soundtrack. In this respect, Karamitroglou (1998: 6) writes:

Investigations in the psychology of viewing indicate that when such linguistic items are recognized by the viewers, the exact, literal, translationally equivalent items are expected to appear in the subtitles as well. This occurs because of the constant presence of an inherently operating checking mechanism in the brain of the viewers which raises the
suspicions that the translation of the original text is not ‘properly’ or ‘correctly’ rendered in the subtitles, every time word-for-word translations for such items are not spotted.
Feedback from the soundtrack can take many shapes. In the opening scene of Mrs. Doubtfire (Example 3.9), Pudgie the parrot is frightened that a cat is going to eat him and yells in a very syncopated manner:

Example 3.9
Subtitle 1: 9-1-1! 9-1-1!
Subtitle 2: Police! Civic authorities! ASPCA! ASAP!
Subtitle 3: Murder! Betrayal! Kidnapped!
The articulation is extremely clear and most viewers will be able to tell that the content of the first subtitle consists of numbers. Thus, a translation that does not resort to an emergency telephone number but explicates the cultural information embedded in this phone number – something along the lines of 'Help! Help!' or 'S.O.S! S.O.S!' – may be received with some reservations by some members of the audience. Likewise, the seven exclamations in subtitles 2 and 3 are punctuated by distinct short silences that separate them and inform the viewer of the number of exclamations in each turn. Yet again, transferring fewer interjections in either of these two subtitles – say three and two – may be considered an infelicitous solution by certain viewers as they will clearly notice the deletion. In this respect, it is equally important in subtitling to try and maintain a close semantic and syntactic correlation between the dialogue of the film and the content of the subtitles by striving for maximum linguistic synchrony whenever possible.

It is thought that viewers find it disturbing to hear information on the soundtrack that they have already read earlier in the subtitle or to read terms that have not yet been uttered. Conversely, it has to be borne in mind that listening to one text in a FL while reading another in the mother tongue may also slow down comprehension, although recent research involving eye tracking is revealing that the effects of cognitive multitasking in subtitling may not be as problematic as initially thought. Both cognitive load and comprehension, as well as personal preferences, are codetermined by many different factors, including the formal characteristics of subtitling (e.g. line breaks) as well as their linguistic features (e.g. whether they are interlingual or intralingual) but also the degree of linguistic complexity of the actual dialogue (Perego and Ghia 2011; Ghia 2012a; Szarkowska and Gerber-Morón 2019).

In view of these psychological and cognitive considerations that are triggered by the concurrence of the source and TLs, we propose to refer to subtitling as an instance of vulnerable translation. Not only must the subtitles respect space and time constraints, they must also strike a close relationship with the original dialogue, linguistically and quantitatively, so that they can somewhat stand up to the scrutiny of an audience that may have some knowledge of the original language. This state of affairs tends to be aggravated when the SL is English, or when the source and target text languages draw from the same linguistic roots. Viewers may easily feel cheated when the aggressive or rude performance of an actor leads them to expect a certain type of vocabulary that is not relayed in the translation, when a laconic exchange becomes a lengthy
subtitle, or when an actor who speaks in linguistic waterfalls is given very brief subtitles. They may then start wondering what was lost in translation, as happens in the well-known scene from the eponymous film in which the actor-protagonist, Bob Harris, must rely on consecutive interpretation to understand the wishes of his (fictional) film director. His employer's instructions are elaborate in Japanese, but the English interpreter's version of them is so brief and simplistic that it elicits expressions of disbelief on Harris's part: 'That's all he said?'; the kind of criticism often heard about subtitling that is perceived to omit information.

There is, of course, a limit to how far one can go in remaining faithful to the ST, not only because of technical limitations of time and space but also because the TL cannot be infinitely stretched. Inexperienced subtitlers sometimes remain too close to the ST, reproducing expressions or phrases too literally, thereby producing unidiomatic language through linguistic contamination. Given the observed hike in the assumed reading speed of the viewers, subtitles contained on DVDs or circulating on the internet tend to stay quantitatively closer to the ST language than subtitles produced for the cinema, for instance. A tentative explanation for this situation may be the fact that some distributors and professionals from the media industry in charge of commissioning subtitles are not subtitling specialists themselves and have little experience with translation generally. They feel that the more literal and complete a translation is, content-wise and formally, the better, claiming that this is also what the viewers prefer. Common sense dictates that a compromise will be required on many occasions. In any case, more differentiated research, from a reception perspective, would be most welcome as subtitles are used in an increasingly varied number of ways, on an increasingly varied number of platforms and by an increasingly varied audience, some of whom may be not just consumers but prosumers of subtitles.

Another dimension that adds to the vulnerability of subtitling is the fact that, unlike what happens in literary translation, for instance, professional subtitlers are normally not allowed to resort to explanatory annotations such as prologues, epilogues, glosses or footnotes to justify some of their translation decisions. The use of metalinguistic headnotes or topnotes, as coined by Díaz Cintas (2005b) in the case of subtitling, is fairly restricted to the field of fansubbing and virtually non-existent in commercial environments. In this sense, subtitlers do not have a proper channel to explain their solutions and cannot inform the audience that the non-inclusion in their subtitles of a particular play on words or an obscure reference is due to the medium's limitations and not to a lack of understanding of the original dialogue exchange. Chapter 7 offers a more in-depth analysis of the translation problems closely linked to the vulnerability of subtitling and looks at some of the most common strategies implemented by subtitlers to overcome them.
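As a closing illustration of the time-and-space constraint and the reading speeds invoked in this section, display rates are commonly operationalized as characters per second (CPS). The figures and the threshold below are illustrative assumptions only, not values endorsed by the authors or by any particular guideline.

```python
def cps(text: str, in_time: float, out_time: float) -> float:
    """Characters per second for one subtitle, spaces and punctuation included."""
    duration = out_time - in_time
    return len(text.replace("\n", " ")) / duration

# Invented two-line subtitle displayed for 2.4 seconds.
subtitle = "I'd hate to travel\nin one of those."
rate = cps(subtitle, in_time=54.0, out_time=56.4)
MAX_CPS = 17  # assumed ceiling for illustration; actual norms vary by client and medium
print(f"{rate:.1f} cps", "within limit" if rate <= MAX_CPS else "needs condensing")
```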
3.3.2 Multilingualism and multimodality as a resource for translation
Multilingualism is a fact of life in the 21st century, and it is part and parcel of communication in all its forms, including the entertainment industry. The topic has attracted a great deal of scholarly attention in recent years, with projects like Trafilm (The Translation of Multilingual Films in Spain; trafilm.net) focusing on the translation of multilingual audiovisual texts so as to discover professional and social practices along with the norms and criteria of this specific translation challenge and to validate
and refine existing theoretical models on audiovisual translation and multilingualism by describing and analyzing a rich collection of data. As Meylaerts and Şerban (2014) point out, multilingualism makes translation and communication issues more visible, and it can highlight internal tensions within cultures. Moreover, if cultures and many of the texts they produce are intrinsically multilingual, this means that "translation does not take place in between monolingual cultures, messages and people but, rather, within and in between multilingual entities" (ibid.: 1). Such a state of affairs further undermines the traditional and shaky TS aim of achieving equivalence between a ST in a given language and a TT in a second language.

Grutman (2009: 182) defines multilingualism as "the co-presence of two or more languages (in a society, a text or an individual)". In some definitions, including Grutman's, multilingualism encompasses language variation within one language, a topic that is dealt with in Chapter 7. For our present purposes, we consider multilingual films to be those in which "at least two different languages are spoken, by a single character or, more commonly, by several characters" (Díaz Cintas 2011: 215). The numerical system devised by Corrius (2008) to distinguish three types of language in multilingual films is summarized in De Higes-Andino (2014: 214) as follows:

• L1, or the first language, is the dominant language in the source text;
• L2, or the second language, is the dominant language in the target text;
• L3, or the third language, is any other language spoken in the film.
As Voellmer and Zabalbeascoa (2014: 238) point out, L1 is to be "conceptually regarded as the main language of an ST, but a text may happen to have more than one main language, with each language being of relatively equal importance, regardless of the presence of any other 'lesser' language". This would be the case in a bilingual film. However, some films may have more than one L1 in addition to one or more L3. A near-bilingual example is Monsoon Wedding, a film about an arranged marriage in India, in which all the characters speak English as well as Hindi and some Punjabi. The spontaneous switch between languages in the film is typical of modern middle-class Indians and reflects the production's mixture of characters. Some of them have returned to India from prolonged stays in America or Australia while others have remained in India and represent attitudes that are rooted in tradition.

According to the UNESCO Institute for Statistics (2012: 12), "filmmakers increasingly incorporate the contemporary context of cultural exchange, characterized by cross-border flows of people, commodities and culture, into the story-world of the film". In order to name such multicultural and multilingual productions, Wahl (2005) coins the term polyglot film genre, and

Under this umbrella term, he proposes a working classification of six different subgenres that feature many languages and are reflective of diegetic, aesthetic and political purpose: episode films, alliance films, globalisation films, immigrant films, colonial films and existential films. In all of them, the use of linguistic polyphony is paramount.
(Díaz Cintas 2011: 216)
That was not the case in Hollywood films from the 1950s to the 1980s, which appeared to avoid linguistic and cultural confrontations that risked revealing cultural or political otherness and therefore divisions or tensions. Though multilingualism has always been a part of film productions to different degrees, such mainstream Hollywood cinema used it mostly in an anecdotal manner, as a form of exoticism, denying FLs and cultures any diegetic significance. The occasional use of foreign words or short exchanges, such as ¡Buenos días! [Good morning!], in these predominantly English-language productions does little more than remind the viewer that the action is taking place in a foreign country, and such words hardly need translation. Such instances of L3 do not suffice to turn these films into polyglot productions, as they often do little more than confirm national stereotypes and, in Wahl's (2007: 337) words, "audio-postcard" foreignness. Examples of this type of superficial multilingualism occur, for instance, in The Snows of Kilimanjaro, which contains some L3 French, Spanish and Swahili besides its L1 English. However, the majority of characters speak English most of the time in this film, with the occasional FL lines serving as little more than exotic flourish. In some films, foreignness is merely signposted through different English accents, which could be mimicked in dubbing but cannot be rendered in subtitles.

In today's audiovisual productions, multilingualism can be extremely complex and therefore pose serious challenges for the translator, as highlighted in the growing body of research into multilingualism and its many functions in film (Dwyer 2005; Wahl 2005, 2007; Bleichenbacher 2008, 2012; Martínez-Sierra et al. 2010; O'Sullivan 2011). On occasions, film directors need to find translation solutions at the production stage to ensure that their work is linguistically and culturally accessible for their original audience. In other words, the original programme may already resort to the use of dubbing, subtitling, diegetic interpreters or other means to deal with the linguistic transfer.

A first, rather mundane reason for multilingualism in European cinema may be the need to secure subsidies. As Díaz Cintas (2011: 219) points out, the MEDIA programme of the EU, established in 1991, supports hundreds of European films every year and has as one of its main objectives that the European audiovisual sector should reflect and respect "Europe's cultural identity and heritage". In line with this, multilingualism in film certainly is an attempt at realism, at reflecting society as it is, including its interculturalism and multilingualism and the inequality this often provokes among individuals and social groups. In some films, such as Lost in Translation, Spanglish and Babel, communication problems due to linguistic difference become one of the film's themes, and translators or interpreters figure among the fictional characters, as is also the case in Dances with Wolves and The Interpreter, to give but a few examples. Such attempts at reflecting social issues in a realistic manner almost automatically carry purposeful, ideological connotations and force viewers to reflect on the prevalence of difference in the world in which they live. In many Belgian (historical) films (Example 3.11), the use of French versus Flemish-Dutch signals social class separation, with the former language connoting a higher status. In addition, multilingualism can fulfil aesthetic and diegetic functions on many different levels.
Just like intralinguistic variation, the use of FLs can also contribute to characterization and/or the foregrounding of national stereotypes as in Monsoon Wedding and Vicky Cristina Barcelona. Likewise, it can foster plot development or trigger unexpected twists in the plot, which happens in the WWII film Inglourious Basterds, when the very poor Italian of three of the protagonists gives away that they are USA soldiers and
leads to their arrest by the Nazis. The inclusion of fictional languages (Avatar, The Lord of the Rings: The Fellowship of the Ring), old languages like Latin (Sebastiane) or Aramaic (The Passion of the Christ), a lingua franca like Esperanto (Captain Fantastic), or sign language (The Piano, La Famille Bélier) can be considered as another filmic device to enhance realism and propel the storyline.

To sum up, technically speaking and irrespective of the function of plurilingualism, subtitlers may have to deal with multilingual films with one or more L1s, and/or one or more L3s, in different combinations, with diegetic interpreters and translators or code-switching characters providing some additional challenges. An additional issue arises when one of the languages of the film, be it L1 or L3, is the same as L2 for (part of) the audience, as in the case of the subtitling into Spanish of the TV series Narcos, where L3 and L2 are both Spanish. As Examples 3.10 and 3.11 highlight, the coincidence of source and TL can prove rather challenging:

Example 3.10
In Frasier, a USA television comedy, both Spanish (L3) and French (L3) enter into the primary English discourse (L1) of the protagonists, with different representational and ideational implications. What options does the Spanish or French subtitler have in this case? In the Spanish L2 subtitles both the English and French are rendered, whereas in the French L2 subtitles the English and the Spanish are translated. However, the foreign touch that these languages add to the original, including all their social implications, is lost, unless the viewer can rely on the soundtrack and other multimodal aspects of the audiovisual production.

Example 3.11
Most original versions of multilingual films will themselves subtitle some of the languages they use. Depuis qu’Otar est parti is a French-Belgian co-production in Georgian, French, and Russian in which the Georgian and Russian dialogues have French subtitles in the French-Belgian version of the film. In the English version the three languages require subtitling, and the viewers have to rely on the narrative and its different locations to determine what language is spoken. In the case of Russian and Georgian the clues are not always forthcoming, and the purpose of the linguistic switches can only be derived from the social connotations they have. However, this problem is no different in the French version as the nuanced implications of multilingualism simply cannot be rendered in the subtitles.
It will be obvious by now that there are no clear-cut solutions for subtitling multilingual films and, on occasions, distributors are barely interested in these aspects. Indeed, Ávila-Cabrera (2013) reports on the Spanish subtitled version of Inglourious Basterds, which, disregarding language plurality, was done entirely on the basis of a monolingual English pivot text. The first decision subtitlers must make when dealing with a polyglot film is whether they will mark multilingualism in the translation. If one decides not to mark multilingualism, a monolingual subtitled version in L2 will be produced. If, on the other hand, the decision is taken to mark linguistic difference, then creative solutions will have to be activated. Instructions from the client apart, the judgement will be based
on the treatment of multilingualism by the filmmakers themselves, on its function and complexity within the film, and on the subtitler’s perception of the target viewers and the degree to which they might have access to the multilingual dialogues. In this sense, passages in Galician or Catalan in an English film will normally have to be subtitled for an English or Swedish audience, for instance, but subtitles may not be required for a Brazilian or Spanish public. Other decisive factors are the way in which the dialogue exchanges are supported by the other semiotic modes of the film, including dialogue’s multimodality, the technical feasibility of activating certain translation solutions as well as their supposed acceptability for the target audience (e.g. an audience used to dubbing). First of all, it is important to remember that subtitling itself does not hide multilingualism and that it actually contributes to the linguistic plurality of films, as it adds an L2 to the source texts’ visual narration as well as their L1 and L3, which remain audible in the TT, as illustrated in Figure 3.1:
Figure 3.1 Multilingualism in subtitling
As previously discussed, such a predicament makes subtitling a vulnerable translation (§3.3.1) but it also has its advantages. Subtitling can make the most of the multisemiotic nature of film, usually to a greater degree than dubbing, which replaces the original soundtrack. Filmmakers themselves will on occasion translate the second L1 or the L3 in the film into subtitles in the main L1, which signals that the foreign utterances are important and must also be subtitled into L2. In Slumdog Millionaire, for instance, the Hindi L3 dialogue exchanges are subtitled into English (L1) in the English-language version. When subtitling the film for a Japanese audience (L2), the Hindi exchanges (L3) should also be translated into L2. In other cases the foreign dialogue may be used for suspense, humour or to trick one of the characters, whereby the filmmaker may wish to limit the perspective of the original audience and refrain from providing any subtitles. In instances like these, subtitles in L2 are not required either. Nowadays, films in which the L3 is an immigrant language are on the rise. The migrants in such films tend to speak both the language of their host nation and that of their country of origin. In It’s a Free World..., which recounts the story of a British single mother who starts a recruitment
agency, five different L3s are spoken besides English. However, as appears from De Higes-Andino’s (2014) interviews with the director and screenwriter, they chose not to subtitle any of the FLs but to use interpreting or self-translation instead to support the plot because they wanted to highlight the immigrants’ communication problems. In cases like this, the interpreter’s utterances in L1 are usually subtitled into L2. However, interpreters can be unreliable. This happens in such diverse films as Hitchcock’s Secret Agent (De Bonis 2014) and the acclaimed Chinese film 鬼子来了 [Devils on the Doorstep] (Takeda 2014), which is set in a village in northern China during the Japanese occupation towards the end of WWII. The Chinese version of the film also contains subtitles, but these do not translate the interpreter who mediates between the Chinese and the Japanese when this entails repeating the Chinese dialogue. The film does, however, subtitle the interpreter whenever he does not render the Chinese ST faithfully (ibid.). The way in which different languages are intertwined in a film, e.g. overlapping dialogue, can render technically impossible the provision of fully adequate subtitles. In those cases, subtitling only the dialogue turns that are the diegetically most important is usually the best solution, as happens in some passages with overlapping L1 and L3 dialogue in It’s a Free World, where the subtitles only account for the L1 (De Higes-Andino 2014).

Numerous solutions have been suggested for subtitlers wishing to explicitly mark the presence of multilingualism in films by making use of certain typographical devices or transferring some of the features from SDH to standard interlingual subtitling, such as the use of colours. In Devils on the Doorstep, for instance, italics are used in the English subtitles to distinguish the Japanese from the Chinese (Takeda 2014), and in the Australian film Head On, it is the Greek L3 that is subtitled in italics (Díaz Cintas 2011). De Higes-Andino et al. (2013) provide an exhaustive list of the (sometimes hypothetical) solutions to date: use of (1) subtitles in the normal format as a baseline, (2) subtitles in italics (which, confusingly, can also signal that a character is off screen), (3) subtitles in different colours, (4) literal intralingual transcription of the foreign utterances in the subtitles, (5) no-translation but indication in brackets of the language being spoken, and (6) no-translation. The different possibilities are placed on a scale that goes from domestication to foreignization, as displayed in Figure 3.2:
Figure 3.2 Dealing with multilingualism in subtitling Source: De Higes-Andino et al. (2013: 139)
The last of their solutions, no-translation, brings to the fore the realization that defining translation as a form of communication that transforms an ST in an L1 into a TT in an L2 is far too simplistic and reductionist, as STs (and target audiences) are seldom monolingual. Since subtitling is a translation mode that can make use of all the different semiotic systems of the film, it can certainly resort to no-translation in some instances. It then relies on the audience to make the most of their knowledge of the film’s L1 and/or L3, as well as its other narrative features such as camera positions, mise-en-scène, geographical location, narrative logic, facial expressions and gestures.

Both Perego (2009) and Sanz-Ortega (2011) have stressed the importance of nonverbal information in multilingual or polyglot films. Referring to Spanglish, which takes place in a bilingual Californian environment and inextricably mixes English L1 with Spanish L3 to purposefully create communication breakdowns among the characters, Sanz Ortega (ibid.: 26) points out that the Spanish exchanges are not subtitled in the original but rendered through different forms of interpreting. In one instance, the gestures of the intradiegetic interpreter actually convey that she may be acting as an interpreter but barely understands what she is translating. Her gestures, which form part of the communication between the characters on a horizontal level, thereby also take care of the vertical communication between characters and spectators (ibid.). In the French film Gazon Maudit (Example 3.12), multilingualism is supported by the characters’ proxemics and gestures to such an extent that subtitling is not required:

Example 3.12
In Gazon Maudit [French Twist] one of the protagonists, Loli (Victoria Abril), occasionally lapses into her native Spanish in very specific instances, mostly in emotional or erotic scenes. These occurrences are not subtitled in the original and need no translation in any other language since the context and the character’s gestures make perfectly clear what is going on. A few linguistically comparable scenes in French also occur in the aforementioned Belgian-Flemish film Lijmen. Short French exchanges or announcements are not subtitled in Dutch, whereas longer French scenes are.
Recent reception research into the efficiency and acceptability of no-translation of L3 in multilingual films has shown that the majority of viewers of the TV series Breaking Bad do not seem to have any problem with the absence of translation and appreciate its effectiveness as a way to mark language diversity (Krämer and Duran Eppler 2018). One important qualification, however, is that the context must make the dialogue clear (to some extent at least), whereby information that can be gleaned from the series’ different visual modes is considered more important by most respondents than is a degree of familiarity with the L3 dialogue. Another caveat raised by the scholars is that dialogue that contains important diegetic information should always be subtitled. As suggested by Heiss (2004: 214) and to avoid the complete wipeout of linguistic diversity in dubbed versions, subtitling is sometimes called upon to support the translation of an L3 in dubbed films. On the one hand, the degree to which this approach will be considered acceptable by the target viewers may vary, depending on their familiarity with practices like dubbing and subtitling. On the other hand, audiences’ watching habits evolve as they are increasingly exposed and become more accustomed to different AVT practices and to multilingualism with all its complications. Krämer and Duran Eppler’s (2018) research also shows that, in a rather intellectual TV series like Breaking Bad, viewers from dubbing countries seem to prefer the subtitling, or even the no-translation,
of the L3 to monolingual dubbing, so that multilingualism can be preserved. Subtitlers must remember, though, that no-subtitling will only work in scenes in which the other filmic cues come to the rescue and that narratively important information must always be subtitled. The fact that an L3 is quantitatively unimportant in a film is no excuse to ignore it: a few well-placed lines in the FL can have a great impact on the plot.
3.3.3 Text on screen
Text on screen can take on many forms, from the verbal signs that are an integral part of the images (newspaper headlines, banners, road signs, etc.) to the added inserts, hard titles, forced titles or burnt-in subtitles that provide temporal information or details about the geographical location or the professional affiliation of the speaker. Traditionally, subtitles always give priority to dialogue over written text or songs, although subtitlers do try to translate any relevant verbal information rendered visually. Given the causal, well-structured narration and mise-en-scène of most mainstream productions, there usually is little or no interference from other semiotic channels whenever written words do appear on screen, as illustrated in Example 3.13:

Example 3.13
In the opening scene of The Man with the Golden Arm, protagonist Frankie returns to town and walks past several neon lights and window signs that typify his neighbourhood, but there is no interference from the film dialogue.
Web > Chapter 3 > Examples > Example 3.13 > GoldenArm-City
In scenes in which narrative text is important because it serves to contextualize the diegesis, as might happen at the beginning of a film, dialogue is absent and the subtitles are moved to the top of the screen to avoid any visual or aesthetic clash with the original text. A classic example is the introductory onscreen narration in the Star Wars films, as displayed in Figure 3.3:
Figure 3.3 Running text at the bottom of the screen Source: Star Wars
In the case of documentaries and corporate videos, it is not uncommon to chance upon scenes in which there is a conflict between the visual and acoustic channels. When the information given aurally is different from the information depicted visually, usually details about the location or biographical information about the speaker, and both sets of information are essential, various options remain. Respecting synchronization between dialogue and subtitle, a concise written summary of the utterance can be given at the top of the screen or close to the original onscreen source text (Figures 3.4 and 3.5), even if this means that not all viewers may have time to read all the information contained in the images and the subtitle:
Figure 3.4 Concurrent dialogue and onscreen text
Figure 3.5 Concurrent dialogue and onscreen text
Source: iMotions – Mobile Eye Tracking & EEG
Source: The Invisible Subtitler
Alternatively, and jeopardizing synchronization, the in-time of the subtitle rendering the source dialogue may be slightly delayed and the translation may appear after the viewer has had some time to read the onscreen text. If written texts follow each other very quickly on screen, for instance in a shot using newspaper headlines to give information about a given period or event, the only solution is to abbreviate and cut, making sure the essence of the original message is preserved as much as possible. Finally, if the onscreen information is recognizable, because source and TL make use of words with the same roots, no translation is required, e.g. for an insert like ‘Paris, 1968’. As for the layout, when the supportive text is presented in such a manner that it only occupies a fraction of the lower part of the screen, the subtitles can then be positioned in different places, as displayed in Figures 3.6 and 3.7:
Figure 3.6 Onscreen text and subtitles – 1
Figure 3.7 Onscreen text and subtitles – 2
Source: Charade
Source: Charade
In a distinct move away from early cinema’s dislike of the written or spoken word, a new trend in some films today has seen the proliferation on screen of subtitles and other types of creative texts in the original production. In some cases this is in response to the multilingual nature of the film, such as in Man on Fire, where the Spanish exchanges are subtitled into English in a dynamic way and the format of the English text shimmers to recreate emotion and augments its size to represent the volume of speech (O’Sullivan 2011). In other cases the text on screen is part of the narrative of the original production, as in the case of the long-running animation series The Simpsons, where ironic comments in the original language are inserted as written text. Aesthetics is another catalyst driving some of these changes, epitomized in the kinetic forced titles used in the TV series Sherlock or films like The Fault in Our Stars (Figure 3.8) or Sex Drive (Figure 3.9), where mobile phone texting takes centre stage, literally.
Figure 3.8 Text messaging on screen Source: The Fault in Our Stars
Figure 3.9 Text messaging on screen Source: Sex Drive
In all these cases the written text is displayed in different places on the screen and in different sizes, fonts and colours. Some of the fictional characters may even refer to the extradiegetic texts in the film dialogue, or actually produce them. Recent research suggests that, in these cases, it might be better to replace the ST with a visually similar TT, referred to as “integrated subtitles” by scholars like Fox (2016). The increased ubiquity of text on screen in film productions and greater awareness among film directors of the impact subtitles can have might contribute to consumers’ and filmmakers’ acceptance of more creative subtitles, also in the case of subtitling for people who are D/deaf or hard-of-hearing (Romero-Fresco 2019).
3.3.4 Speech to writing: a matter of compromise
Not only is subtitling an unusual form of translation because it is added to the ST, it also stands out as a unique translational genre because it renders speech in writing, in a counter-movement from film dialogue, which is written to be spoken. This feature too determines the shape subtitles eventually take. There are two basic types of speech in film, scripted and spontaneous, although entirely spontaneous speech is rare since even interviews in documentaries will undergo a certain degree of editing. Scripted speech can be further subdivided into mimetic dialogue that imitates conversation, stylized dialogue of the type that may be used in period theatre, voiceover and off-screen narration, and texts read from the page or teleprompters such as political speeches and television news.

Most of the audiovisual excerpts compiled on the accompanying website contain examples of supposedly mimetic scripted film dialogue of the type discussed in §3.2.1. In some instances, exchanges sound more or less natural, but other scenes feature hard-boiled dialogue and stereotypical Hollywood lines, reminiscent of famous one-liners like ‘We’ll always have Paris’ from Casablanca or ‘Are you talking to me?’ from Taxi Driver. Many recent productions, and especially films and TV series of the social-realist genre, such as the ones directed by Ken Loach, Mike Leigh, Stephen Frears and Shane Meadows, to name but a few, tend to have realistic-sounding dialogue. Ken Loach is also known for casting non-professional actors. Films with realist and/or social ambitions that feature specific social groups and their slang may also contain more swearwords and taboo words than written language can usually stomach (Chapter 7). Conversely, good examples of stylized theatre dialogue, with some of its often poetic and rhetorical features, would appear in any classical Shakespeare film adaptation, but subtitlers are also faced with them in feature films inspired by the classics such as Shakespeare in Love, period dramas such as Love & Friendship, based on Jane Austen’s novella, or TV series and films such as Downton Abbey. Such dialogue too has its very own challenges as it is transferred from speech to writing. Instances of the challenges presented by the semi-unscripted speech that is so typical of live interviews can be found in many a TV documentary.

The transition from the oral to the written mode means that some of the typical features of spoken language will have to disappear, no matter what subgenre a dialogue belongs to. Then again, the oral features of spoken language in the cinema, on TV or in any other visual medium are relative since orality is co-determined by film’s other semiotic systems and the communicative function(s) that the dialogue must fulfil (§3.2).
We have already pointed out that film dialogue evolves sequentially, like spontaneous conversation. The dialogue excerpt from American Beauty, analyzed in Example 3.1, is typical of dialogue that is written to be spoken in that it is very well-structured and only contains very functional instances of features that are typical of everyday speech. Real-life conversations and some instances of film dialogue can be less straightforward. Consider the following transcription (Example 3.14), adapted from Goodwin (1979: 111–112):

Example 3.14
JOHN: I gave, I gave up smoking cigarettes...
DON: Yeah.
JOHN: I-uh, one – one week ago today, actually.
ANN: Really? And you quit for good?
Not only does John hesitate several times and repeat himself, he only finishes his sentence ‘I gave up smoking cigarettes one week ago today, actually’, after a prompt from Don (‘yeah’). Film dialogue does not render all the hesitations and false starts or requests for confirmation that are typical of conversation or speech in general. It only suggests these conversational features insofar as they have a narrative function. In the scene from American Beauty (§3.2.1) the characters do and yet do not really address each other; their words are meant for the viewers and must convey all the information they require about the story and the characters’ psychology.

Subtitling will usually take film dialogue’s purposeful simplification one step further, ideally without affecting the narrative function of its interactions. However, subtitles are limited to two lines, each allowing for a maximum number of characters that cannot be exceeded, depending on the time the subtitle remains on screen (Chapter 4). The viewer will therefore have a limited amount of time for reading and understanding the transient translation that appears on screen. This is why traditional commercial subtitling has developed a style of its own that has an impact on grammar and register, as well as on the interactional and other oral features of dialogue. It moves speech a step closer to the more careful organization of writing and, although subtitling style varies somewhat across genres, some standard subtitling guidelines are almost universal. Grammar, syntax and lexical items tend to be simplified and cleaned up, whereas interactional features and intonation are only maintained to some extent (e.g. through word order, rhetorical questions, occasional interjections and incomplete sentences). In other words, not all the features of speech are lost in their transfer to written subtitles. Many of these traits can be salvaged in writing, but rendering them all would lead to illegible and exceedingly long subtitles. Since subtitling focuses on the denotative dimension and on those items that are informationally most relevant, often context-renewing clauses are retained, whereas context-confirming ones are dropped.

In documentary films, scripted speech can be challenging because of its heavy information load, but unscripted speech, with all the hesitations typical of oral discourse, can also require quite a bit of interpretation, reformulation and rewriting. In addition, there is the issue of whom the speaker is addressing. Interviews tend to be introduced by journalists, who then also guide the interaction with the interviewee. Since the questions they ask determine topic and topic shifts, the interaction between
journalists and respondents is different from real-life conversation and from film dialogue. Reid (1996) points out that in the case of such interviews, the subtitler may have to act as an intermediary between the interviewee and the TV audience. In the interview situation, the interviewee replies to the interviewer’s questions and often counts on the latter’s expert knowledge when formulating answers. The subtitler may have to evaluate the knowledge of the broader target public of the subtitles and adapt both the form and content of the interviewees’ interventions. The latter may speak poor English, leave their sentences unfinished, use specialized vocabulary or rely on background knowledge that cannot be presupposed in the target audience. This may call for explicitation, explanation and even interpretation. What is more, in an interview situation, the images tend to offer little help. The linguistic features of subtitling are taken up in greater detail in Chapter 6.
3.4 Exercises

For a set of exercises in connection with this chapter go to Web > Chapter 3 > Exercises
4 Spatial and temporal features
4.1 Preliminary discussion
4.1 Think of three characteristics that, in your opinion, define the presentation of subtitles on screen.
4.2 Find three examples of different subtitles from films, video games or other audiovisual productions and discuss why they are special: font type, positioning, use of colours . . .
4.3 Should there exist an international set of guidelines to regulate the technical presentation of subtitles on screen, irrespective of the country, company and/or language? Why (not)?
4.4 Are you aware of any guidelines that discuss the technical presentation of subtitles? Which one/s?
4.2 Code of good subtitling practice
The experience of watching just a few subtitled programmes brings home the realization that there is a general lack of consensus and harmonization as to the presentation of subtitles on screen. Conventions are not always systematically applied, and variations can be observed at a technical level as well as in the layout of the subtitles, both within and across languages. A considerable range of styles has developed over time affecting the length and duration of lines, their display rates, the maximum number of lines, the use of typographical signs (Chapter 5), and the disposition of line breaks, among others. Heterogeneity in practice has typically been perceived as symptomatic of a lack of quality, and many attempts have been made over the years at recommending standards that would prevent the deterioration of quality.

In the late 1990s, Ivarsson and Carroll (1998: 157–159) put forward a Code of Good Subtitling Practice in an attempt to offer general guidelines aimed at preserving and fostering quality in subtitling. The result of a common effort by a working group of professionals and academics under the aegis of the European Association for Studies in Screen Translation (ESIST, esist.org), these guidelines have been widely regarded as a standard in the profession for many years. They are not binding, and companies and professionals alike can adhere to them if they so wish. The document is not only addressed to translators but also to all the stakeholders involved in the process of subtitling. Karamitroglou (1998), Díaz Cintas
(2003) and Al-Adwan (2019) have also worked in this direction, proposing guidelines at pan-European, Spanish and Arab levels respectively. More recently, companies like Netflix (n.d. 2) have also contributed to this debate with the free distribution online of their timed text style guides in over 30 languages. On the other hand, as an instantiation of glocalization forces, a scaling up of efforts and coordinated action can be observed, whereby professional associations have started collaborating with other stakeholders and interested parties to produce consensual guidelines in languages like Croatian, Danish, Dutch, Finnish, French, German, Norwegian and Swedish (§5.2).

A document containing a list of subtitling guidelines in different languages can be found on Web > Appendices > 4. Subtitling language guidelines
Although many regard this setting of parameters as a commendable effort, for others it is nothing short of a dogmatic catalogue of rules and regulations promoting unnecessary uniformity and conflicting with existing national idiosyncrasies. Subtitling conventions applied in many countries have come to be what they are following long periods of practice. They have been socially sifted and are the result of a long tradition, and viewers are now familiar with them. From this perspective, it is understandable that some local companies or television broadcasters may look on these general recommendations with suspicion, seeing them as drawn up by professionals who might not have sufficient insider knowledge of specific local realities.

Be that as it may, most of these guidelines tend to be rather broad in scope so as not to impinge too much on local customs. Moreover, parameters governing the technical dimension of subtitling are less exposed to criticism than guidelines about subtitle layout, which is inextricably linked with the linguistic dimension of the TL and thus more entrenched in its culture. To avoid this type of conflict, many LSPs and some of the new OTT providers produce different guidelines for the many languages in which they work (§5.2). These documents contain sections that are the same for all the languages, while also allowing for other chapters in which language-specific parameters are highlighted. As in the case of the templates, guidelines of this general nature should not be understood as an invasion of any country’s or company’s subtitling tradition but rather as a declaration of good intentions aspiring to set some minimum standards in the profession. Not being set in stone, these recommendations are open to debate, changes and modifications, as illustrated, for instance, in the many updated and revised versions of guidelines logged on Netflix’s (n.d. 2) website.

The objective in the next pages is to explore the main parameters that characterize the production and presentation of subtitles today. They have been grouped under three categories: spatial dimension, temporal dimension and formal and textual features; the latter being discussed separately in Chapter 5.
4.3 Spatial dimension
This section centres on the spatial limitations that shape the practice of subtitling and discusses them from a technical viewpoint. Though subtitlers must be aware of them in order to perform linguistic operations successfully, ultimately the decisions affecting this dimension tend to be taken by technicians, producers, distributors and project managers rather than translators. As an additional cog in the wheel, subtitlers need
to have an overall understanding of the process and know what happens with a programme before and after the actual translation. Above all, future subtitlers should be flexible in their approach, gain an insight into the advantages and disadvantages of different practices, and be consistent when applying the spatial and other conventions proposed by a particular subtitling company.

Even though there is no absolute uniformity in the way subtitles are positioned on the screen, certain trends can be discerned. Today, the situation is one of variation within generally accepted practice. The initial diversity was due to the more or less independent, gradual development of subtitling and subtitling guidelines in different countries, based on individual preference, national literary or cinematic/broadcasting traditions, and the evolution of technology. The development of subtitling for digital media has no doubt also been a determining factor in the emergence of formal guidelines. As already discussed (§2.5), the use of templates together with the fact that streaming and DVD/Blu-ray usually contain several subtitling tracks in different languages has led to more rather than less uniformity, whereas television subtitling remains less uniform.

When appropriate, an indication is provided, preceded by the Wincaps Q4 and the OOONA logos, to highlight the recommended parameters that should be applied when carrying out the exercises hosted on the companion website.

A set of easy guides on how to use Wincaps Q4 can be found on Web > Wincaps Q4 > Guides
A set of easy guides on how to use OOONA can be found on Web > OOONA > Guides
4.3.1 Maximum number of lines and position on screen
The traditional criticism levelled against subtitles for being an unwelcome blemish on the film copy is partly responsible for the old, and to a large extent illogical, adage iterated in the profession that the best subtitles are those that viewers do not notice. According to accepted opinion in the industry, the ideal subtitles should thus be uncluttered and avoid attracting attention to themselves, whether formally or linguistically. This is why, generally speaking, interlingual subtitling is limited to a maximum of two lines, which occupy no more than two twelfths of the screen image. Yet, this rule is being broken daily by the emergence of three-, four- and even five-liners, notably in the cybersubtitles that populate the internet (Díaz Cintas 2018). On this front, subtitling for a hearing audience differs from SDH, which in some countries often makes use of three or even four lines, and bilingual subtitles may also resort sporadically to four lines.
The maximum number of lines per subtitle is two.
In the world of digital video, pictures are made up of individual dots known as pixels, a blending of picture elements. Each frame of the video is a picture 720 pixels wide and 576 pixels high, known in the profession as broadcast resolution. The arrival of 4K
or Ultra HD, used in commercial digital cinema, has resulted in the upscaling of the image quality by allowing an increase in pixels, i.e. 4096 × 2160, and is meant to have the potential to deliver almost as much depth as 3D, without the need for glasses.

Written text and graphics shown on screen may get distorted if they appear too close to the edges because TV manufacturers deal with screen edges differently. This is why all text must be centrally positioned within what is known as a safe area, which is the visible area of the video screen where the text will not be cut regardless of the over-scan (margin of the video image that is normally not visible) of the television used. Since the advent of digital television, the safe area has lost some relevance as digital and high definition televisions, which operate on rectangular frames, do not over-scan images in the same way as older television sets did. The safe area is usually within 10% of each frame edge, e.g. 72 pixels in from the right and left edges and 57 pixels from the top and bottom, which are the settings in OOONA. By default, the following standard parameters are applied by Wincaps Q4 with regard to the safe area: top 32, left 56, right 56, and bottom 32. They should always be respected when working with this subtitling program unless otherwise instructed by the client.

The standard position for subtitles is horizontal at the bottom of the screen since this restricts the obstruction of the image, and this part of the screen is usually of lesser importance to the action. Some languages, like Japanese and Korean, have a long history of placing subtitles vertically on the right-hand side of the screen, especially for theatrical releases. With the arrival of video and DVD, horizontal subtitles have become more common than ever before, although both approaches still co-exist. The positioning of a two-line subtitle at the bottom of the screen does not offer any options since both lines are in use. The situation is different when dealing with one-line subtitles, with some companies using the first – i.e. top – line and some others preferring the second line. This traditional variation in the placement of one-line subtitles is giving way to a more uniform approach these days, with most one-liners habitually appearing on the second – i.e. bottom – line, thus keeping clear of the image as best they can.
One-line subtitles should be written on the second, bottom line available.
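Returning to the pixel figures quoted above, the following minimal sketch shows how the usual 10% rule translates into per-edge safe-area margins for a given frame size. It illustrates the arithmetic only; actual presets vary by client and tool, as the Wincaps Q4 defaults mentioned earlier already show.

```python
def safe_area(frame_w: int = 720, frame_h: int = 576, margin: float = 0.10):
    """Compute per-edge safe-area offsets for a given frame size.

    Returns the (left, top, right, bottom) margins in pixels and the size of
    the usable text box. With a 720 x 576 SD frame and a 10% margin this
    reproduces the 72- and 57-pixel offsets quoted above; client presets
    (e.g. the Wincaps Q4 defaults) can and do differ.
    """
    left = right = int(frame_w * margin)
    top = bottom = int(frame_h * margin)
    usable = (frame_w - left - right, frame_h - top - bottom)
    return (left, top, right, bottom), usable


print(safe_area())            # ((72, 57, 72, 57), (576, 462))
print(safe_area(4096, 2160))  # the same 10% rule applied to a 4K frame
```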
Technology makes it exceedingly easy to place the subtitles, either the two lines or just the bottom one, immediately below the image. Pollution of the original photography is therefore reduced, but more research ought to be carried out to establish whether this aesthetic change has any negative impact on the overall appreciation of the programme, since the eye has to move across a wider screen area in order to scan all the information available. Subtitles can be moved from the bottom of the screen to another position if the need arises. Such a move can occur if:

• the background at the bottom of the screen is so light that the subtitles risk being illegible;
• important action is taking place in the lower part of the screen;
• overlap must be avoided with onscreen text that is displayed at the bottom of the screen while dialogue continues to be heard and must therefore be subtitled. Examples are hard titles providing dates or information about a speaker, or the broadcaster’s logo.
When the decision is taken to displace the subtitles in a film, they are then placed at the top, which is the most common practice, or in the middle of the screen, though this is extremely rare. In the case of non-fiction programmes, like documentaries or interviews, showing original inserts that cannot be edited or moved to another place, subtitles tend to be displayed just above the text appearing on screen. In the case of one-liners, another approach to avoid the collision on screen with any other textually rendered information is to use the top or bottom subtitle line, as appropriate. Alternatively, the position of the subtitle can also be shifted horizontally, to the right or left, so that it does not cover up the open insert or logo. As the exposure time available for the projection of this material is reasonably short, due consideration has to be given to the amount of text that can be comfortably read by the viewer, without creating confusion.

The same challenge arises when data appear both at the top and the bottom of the screen. In these cases, where the overlap is impossible to avoid, the subtitle should be placed where it is easier to read and the subtitle display time has to be reconsidered, usually allowing the translation to appear slightly earlier or to trail a bit longer on screen, if possible, and without disturbing the synchrony exceedingly. Each particular instance will require an ad hoc solution. If the decision goes against re-positioning the subtitles, two options are possible: the translation is encased in a grey or black box that covers up (part of) the original information or a certain degree of asynchrony is allowed in the presentation of the subtitle and the translation of the onscreen text, whereby one or both of them appear on screen slightly out of sync, usually by letting one of them linger on screen for a bit longer.

Since viewers expect subtitles to appear at the bottom of the screen it is common practice not to move them around unnecessarily, though research is being conducted on the dynamic placement of subtitles that are directly integrated into the picture and follow the source of sound as an alternative to traditional practice (Fox 2016). The results of this type of research, in conjunction with experimentation being carried out on the layout of subtitles in immersive environments (§1.5.5), have the potential of bringing about substantive change in the positioning of subtitles in the medium term.
4.3.2 Centred and left-aligned
In the past, TV subtitles were often left-aligned, and some TV channels still left-align subtitles, but they are the exception rather than the rule. On occasions, a mixed approach can be found within the same production, with most subtitles being centre justified while dialogue two-liners appear left-aligned, with each of the lines preceded by a hyphen. By contrast, subtitles are nearly always centred on DVD and VOD platforms, an approach that appears to be influencing TV subtitling conventions. Another reason why subtitles are being centre justified on television as well is because broadcaster logos are sometimes placed in the lower left-hand corner of the screen, thus potentially blocking the first couple of characters in long subtitles, hampering legibility.
In the cinema, subtitles tend to be shown in the middle of the screen, because in a large movie theatre left-aligned subtitles would be too far removed from spectators sitting on the far right. Another reason in favour of centring subtitles has to do with the fact that the action tends to happen in the middle of the screen and, if the subtitles are also centre justified, the eye has to travel less from the image to the text. As far as the reading experience is concerned, left-aligned subtitles always start at the same point on the screen, whereby the eye gets used to it and goes to the same spot when a new subtitle is due to appear. If, on the other hand, subtitles are centred they will always appear at different points on the screen and it will be impossible for the eye to anticipate the location of a new subtitle with the same precision.
All subtitles, including dialogue subtitles, should be centre justified on screen.
4.3.3 Font type, font size and colour
By their very nature, subtitles interfere with the image, and the extent of this interference depends largely on their aesthetic credentials, especially the chosen font type, size and colour. With people spending ever longer time in front of the screen binge watching their favourite series with subtitles, it is surprising how little research has been conducted on the visual appearance of subtitles, bar the extremely detailed and illustrative work by Deryagin (2018). Subtitle fonts are determined by factors such as the platform on which they will be shown, the delivery mechanism and the client. To contribute to the invisibility of subtitling, distributors habitually opt for neutral fonts, without serifs, that do not call undue attention to themselves, like Arial, Verdana or Helvetica in most Western languages, unless the subtitles fulfil a creative function in the programme. In contrast with this, integrated titles, which are normally devised by the creator of the original, tend to make use of bolder font types and play with the positioning of the subs on screen, as seen in series like Sherlock or films like Night Watch and Man on Fire. Large companies, like Netflix, have come up with their own proprietary font, known as Netflix Sans (Brewer 2018).

The size of the font is calculated in pixels and depends on the dimensions of the device, the viewing distance and the screen resolution, among other variables (BBC 2019). The following shows the recommendations for working with Wincaps Q4 and OOONA:

The following font types and sizes are recommended:
Arial 30 – Latin-based, Cyrillic-based and Semitic languages, Thai
Gulim 35 – Korean
MS Gothic 30 – Japanese
SimHei or Simsun 35 – Chinese
Shusha 35 – Hindi
Irrespective of the media, most subtitles are white, although occasionally yellow is used when subtitling classical black and white films, so that the contrast between image and text is sharper. The characters tend to have a black edge or contour as well as a slight drop shadow, which bolsters legibility. When the subtitles appear against a very light background, one of the solutions to enhance their presence is to encase them in a grey or black box. These boxes are standard in subtitling software and can be made to appear throughout the film or simply whenever they are required in concrete subtitles. They can be of solid background or partly transparent, and the shade of grey can be adapted (rendered lighter or darker) depending on whether the background must remain visible to some extent. If the only purpose of the box is to improve legibility, a grey box is preferable because it stands out less on the screen than a black one and is therefore less obtrusive.
All subtitles should be in white colour.
4.3.4 Maximum number of characters per line
Bearing in mind the space available within the screen’s safe area (§4.3.1), the maximum number of characters traditionally allowed per line of subtitle was around 37, including blank spaces and typographical signs, with all taking up one space each, as subtitles were created using monospaced fonts, like Courier and Consolas, whose letters and characters each occupied the same amount of horizontal space. Teletext would only permit 35 characters because of the limitations of analogue technology. With the arrival of digital media, the functionality of subtitling software programs was enhanced inasmuch as they started working with proportional lettering or variable-width fonts, where the letters and spacing between them have different widths, thus allowing for greater rationalization of the space available for subtitles. Proportionally spaced fonts such as Arial, Helvetica and Verdana use a different amount of horizontal space depending on the width of the letter, as illustrated in Example 4.1:

Example 4.1 Types of font
Arial 9 – Meanwhile, the real criminal is out there, tweeting.
Helvetica 9 – Meanwhile, the real criminal is out there, tweeting.
Verdana 9 – Meanwhile, the real criminal is out there, tweeting.
Consolas 9 – Meanwhile, the real criminal is out there, tweeting.
Courier 9 – Meanwhile, the real criminal is out there, tweeting.
Since the introduction of proportional fonts, restricting the maximum number of characters per line (cpl) to 35, 39 or even 42 is no longer an overriding factor. As long as the end text is contained within the confines of the safe area, subtitlers
can write as much text as possible, depending on the font size being used and, crucially, on the actual letters that make up the message as, for instance, an ‘i’ or a ‘j’ takes less space on screen than an ‘m’ or a ‘w’. As fonts have different character widths, it is impossible to determine a priori the exact final pixel width of a line of subtitles, which will always vary depending on the letters used. In Figure 4.1, where the subtitle uses Arial 30 and both lines remain within the safe area, the top line looks longer than the bottom one, even though it actually contains fewer characters (43) than the second line (45). The visual discrepancy here derives from the actual letters being used in each of the lines, with a greater number of leaner characters being used in the second one:
Figure 4.1 Use of proportional lettering Source: Turtle Journey
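The effect of proportional lettering can be checked programmatically. The sketch below assumes the Pillow imaging library is installed and that a TrueType file is available locally (the arial.ttf path and the size of 30 are assumptions for illustration); it measures the pixel width of two strings with the same character count but very different letter shapes.

```python
from PIL import ImageFont  # requires the Pillow package

# The font path and size are assumptions for illustration; point the call at
# any TrueType file available on your system.
font = ImageFont.truetype("arial.ttf", 30)

lines = [
    "immillifillijilli lii",   # 21 characters made of narrow letters
    "WOMWOMWOMWOMWOMWOMWOM",   # 21 characters made of wide letters
]
for line in lines:
    # getlength() returns the horizontal advance of the text in pixels.
    print(f"{len(line)} characters -> {font.getlength(line):.0f} px")
```

Despite identical character counts, the second string occupies considerably more horizontal space, which is precisely why character limits alone no longer determine whether a line fits the safe area.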
Despite these advancements, many subtitling vendors continue to indicate a maximum number of characters per line to their subtitlers. For TV, cinema and DVD a maximum of 37 to 39 characters has been the norm for many years, while these days VOD is elongating the lines to accommodate up to 42 characters, as in the case of Netflix. These values are true for single-byte languages based on the Roman and Cyrillic alphabets, as well as Semitic languages, Hindi and Thai, for instance. In the case of double-byte languages from the Far East, like Chinese, Korean and Japanese, the limit tends to be set at 16 characters per line.
These are the recommended maximum numbers of characters per line (cpl):
42 cpl, with a total of 84 for two lines – Latin-based, Cyrillic-based and Semitic languages, Hindi and Thai
16 cpl, with a total of 32 for two lines – Chinese, Japanese, Korean
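Where a client does impose a hard cap, the check itself is easy to automate. The minimal sketch below applies the limits from the box above; the dictionary keys are ad hoc labels rather than industry terminology, and blank spaces count towards the total just like any other character.

```python
# Recommended caps from the box above; the dictionary keys are ad hoc labels.
CPL_LIMITS = {
    "latin_cyrillic_semitic_hindi_thai": 42,
    "chinese_japanese_korean": 16,
}


def lines_over_limit(lines, language_group):
    """Return (line, length) pairs for lines that exceed the recommended cpl."""
    limit = CPL_LIMITS[language_group]
    return [(line, len(line)) for line in lines if len(line) > limit]


subtitle = [
    "Meanwhile, the real criminal",
    "is out there, tweeting, all day long and every day.",
]
# The second line has 51 characters and is therefore flagged.
print(lines_over_limit(subtitle, "latin_cyrillic_semitic_hindi_thai"))
```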
Diachronically, the evolution has been clearly upwards, due in part to higher-quality projection processes as well as increased viewer exposure to subtitles and to reading intermittent text on screen. Still, a higher number of characters per line often results in more image being covered by text, or in a smaller font being selected, which again hampers legibility and leaves viewers less time to scan the film’s other semiotic channels. It is not uncommon these days to come across subtitles with well over 60 characters in just one line, particularly in cybersubtitling practices (Díaz Cintas 2018). In addition to the aesthetic implications that so much text can have on the film, the issue remains as to whether viewers are given sufficient time to read such long subtitles.

In practice, subtitlers get instructions as to how many characters they can use, either from their clients or from the subtitling company they are working for. With that figure, the software preferences are set accordingly and the program takes care of the monitoring. Most subtitling programs have a function which warns the subtitler when the maximum number of characters has been exceeded, usually by changing colour and becoming red when a violation has taken place.

There is no fixed rule as to the minimum number of characters a subtitle must have, but subtitles with fewer than four or five characters are rare. A subtitle should ideally remain on screen for at least one second so that the eye of the viewer can register its presence, although it is not uncommon to come across subs that stay on screen for as little as 20 or 25 frames, depending on whether the video runs at 25 or 30 frames per second (§4.4.4). Subtitles that are kept on screen for a shorter period of time risk appearing and disappearing like a flash and therefore not being read by the viewer. On the other hand, if a very short subtitle remains on the screen too long, the viewers will have time to read it repeatedly, which can be exasperating and can also break the reading rhythm. On occasions, a one-word subtitle can just as well be incorporated into the preceding or following one.
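The duration side of the same constraint can be checked in much the same way. In the sketch below, the one-second floor follows the rule of thumb just mentioned, while the six-second ceiling is only an illustrative default, not a figure taken from this chapter.

```python
def check_duration(in_frame: int, out_frame: int, fps: int = 25,
                   min_seconds: float = 1.0, max_seconds: float = 6.0) -> str:
    """Warn about subtitles that flash by or linger on screen too long.

    The one-second minimum follows the rule of thumb discussed above; the
    six-second maximum is an illustrative default, not a universal rule.
    """
    duration = (out_frame - in_frame) / fps
    if duration < min_seconds:
        return f"Too short: {duration:.2f} s on screen"
    if duration > max_seconds:
        return f"Too long: {duration:.2f} s on screen"
    return f"OK: {duration:.2f} s on screen"


print(check_duration(1000, 1020))           # 20 frames at 25 fps -> Too short: 0.80 s
print(check_duration(1000, 1180, fps=30))   # 180 frames at 30 fps -> OK: 6.00 s
```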
4.3.5 One-liners and two-liners
The ideal number of lines and their positioning on screen is an issue a bit closer to the subtitler’s jurisdiction. Opinions on whether a one-liner should be preferred over a sub made of two short lines of similar length or whether the top line should be shorter than the bottom one vary across the industry, though some regularities can be detected. Choices regarding the physical distribution of text always try to strike a balance between issues connected with the linguistics of subtitling, since respecting syntactic and semantic units is thought to promote readability, and more purely visual aesthetic matters that can foster legibility. Additionally, as already discussed, the ideal line length also depends on the screen’s safe area and on the positioning of the subtitle, i.e. left-aligned versus centre justified.
The most widespread approach in the industry is to keep the text on one line, rather than two, unless the character limitation is exceeded, particularly in the case of TV, DVD and VOD. The assumption behind this approach is that the images are less polluted with text. In cinema exhibition, however, some companies would prefer, for aesthetic considerations, to go for a subtitle made up of two shorter lines of roughly similar length rather than having a long one-liner, grammar and syntax permitting. Another reason for this choice is the fact that long subtitles force the eye to travel, if only from left to right (or right to left depending on the language), from the end of the top line to the onset of the bottom line (or next subtitle), especially on large cinema screens.

When the decision is to favour a two-line presentation, some sentences lend themselves more easily to become two-liners than others, particularly those composed of several clauses. The division into two lines can also be exploited to underscore the message and/or help render intonation. Compare, for instance, the following alternatives in Example 4.2. In the film dialogue, there is a brief pause after ‘up there’:

Example 4.2
Can you see the light up there, in the window?

Can you see the light
in the window?

Can you see the light?
In the window?
As already mentioned (§4.3.1), when dealing with one-liners, a decision has to be taken on whether to place them on the first/top line or on the second/bottom line of the onscreen space available for the subs. Some companies give preference to subtitle placement on the bottom line since this way the written text is pushed to the edge of the screen and hence interferes less with the image. Other studios give priority to the use of the top line as they prefer the constancy of starting the subtitles always at the same height on the screen. This way, the eye gets used to it and goes automatically to the same place on the screen every time a new subtitle pops up. While this procedure brings hardly any benefits when working for small screens, it may be more productive in cinema subtitling where the dimensions of the screen are bigger and the eye has to travel more.

With two-liners, a recommendation based on aesthetics propounds the pyramidal structure, whereby the top line should be shorter, whenever possible, in order not to pollute the image. However, respecting the syntax of the original and keeping sense blocks together, so as to enhance the readability of the TT, ought to be the overriding factors when deciding line breaks. Priority has to be given to a subtitle that is easy to decipher, rather than to a convoluted subtitle that is symmetrically perfect, may obstruct the image less, but is difficult to read.
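As a very rough illustration of how these competing criteria interact, the sketch below splits a subtitle into at most two lines within a character cap, breaking only at spaces and preferring the break that leaves the top line no longer than the bottom one. It is a naive, purely character-based heuristic: real decisions on line breaks also weigh syntax and sense blocks, which a character count alone cannot capture.

```python
def break_into_lines(text: str, max_cpl: int = 42):
    """Naively split a subtitle into at most two lines of <= max_cpl characters.

    Breaks only at spaces and, among the feasible break points, prefers the
    latest one that keeps the top line no longer than the bottom line, as a
    rough nod to the pyramidal layout mentioned above.
    """
    if len(text) <= max_cpl:
        return [text]  # a one-liner is enough

    # Indices of spaces at which both resulting lines would fit.
    candidates = [
        i for i, ch in enumerate(text)
        if ch == " " and i <= max_cpl and len(text) - i - 1 <= max_cpl
    ]
    if not candidates:
        raise ValueError("Text does not fit into two lines; condense it first.")

    pyramid = [i for i in candidates if i <= len(text) - i - 1]
    cut = (pyramid or candidates)[-1]
    return [text[:cut], text[cut + 1:]]


print(break_into_lines("Can you see the light up there, in the window?", max_cpl=30))
# ['Can you see the light', 'up there, in the window?']
```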
4.4 Temporal dimension

This section explores the second set of parameters that condition the nature of subtitling and revolve around the time available for the presentation of the translation on screen.

4.4.1 Frames per second
Audiovisual productions create the illusion of moving images thanks to the rapid succession of a given number of frames per second (fps); a frame being one of the
many still images which compose the complete film. In cinema, the frame rate is 24 fps, whereas in broadcast the frequency at which frames of video data are scanned on the screen varies depending on the system being used. In the NTSC system, the frame rate is 29.97 fps, whereas for PAL and SECAM the frequency is 25 fps. The technical process of transferring a motion picture film, which runs at 24 fps, into video that can be watched on television sets, video recorders, DVDs/Blu-rays and computers is called telecine.

The PAL (Phase Alternating Line) region covers most of Asia, Africa, Europe, South America and Oceania, as opposed to the NTSC (National Television System Committee) standard, traditionally used in Japan and North America. SECAM (Séquentiel couleur à mémoire [Sequential colour with memory]) was selected in France, Russia and some of their former colonies. As they were designed for analogue colour television broadcast, most countries have switched, or are in the process of switching, to newer digital television standards, there being at least four different standards in use around the world:

1 DVB (Digital Video Broadcasting), the pan-European standard;
2 ATSC (Advanced Television Systems Committee), a replacement for the analogue NTSC standard, used mostly in North America;
3 DTMB (Digital Terrestrial Multimedia Broadcast), the system in the People’s Republic of China; and
4 ISDB (Integrated Services Digital Broadcasting), typical of Japan.
Example 4.3 A document with two world maps illustrating the different regions can be found on Web > Chapter 4 > Examples > Example 4.3
In the new digital mediascape, standard frame rates are 24 fps for cinema, and 25 and 30 fps for the other media, depending on the region. In the case of high definition broadcasting, the number of frames doubles to 50 and 60 fps respectively. From a practical point of view, and although many subtitling programs will recognize the frame rate of a video automatically, subtitlers need to be aware of the frame frequency of the video they will subtitle. To find out the frame rate of a video, right-click on the unopened icon of said video and go to Properties > Details, where such information is contained, as illustrated in Figure 4.2.
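Because subtitle cues are ultimately frame-based, it helps to be able to move between HH:MM:SS:FF timecodes, frame counts and seconds at a given frame rate. The minimal sketch below handles only integer, non-drop-frame rates; NTSC’s 29.97 fps relies on drop-frame timecode, which is more involved and deliberately left out here.

```python
def timecode_to_frames(tc: str, fps: int) -> int:
    """Convert a non-drop-frame HH:MM:SS:FF timecode into a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff


def frames_to_timecode(total_frames: int, fps: int) -> str:
    """Express a frame count as a non-drop-frame HH:MM:SS:FF timecode."""
    total_seconds, ff = divmod(total_frames, fps)
    hh, rem = divmod(total_seconds, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"


# The same timecode corresponds to different frame counts (and running times)
# depending on whether the material plays at 24, 25 or 30 fps.
for fps in (24, 25, 30):
    frames = timecode_to_frames("00:01:30:12", fps)
    print(f"{fps} fps: frame {frames}, {frames / fps:.2f} s elapsed")
```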
4.4.2 Synchronization and spotting
Temporal synchronization is the task whereby dialogue and subtitle content are paired up in an audiovisual production. For many viewers, this is arguably the main factor affecting their appreciation of the quality of a subtitled programme. Poor timing, with subtitles that come in too early or too late or leave the screen without following the original soundtrack, is confusing, detracts from enjoying a programme and has the potential of ruining what may otherwise be an excellent linguistic transfer. Accurate timing is crucial for optimal subtitling since it reinforces the internal semiotic cohesion of the translated programme and plays the essential role of helping the viewer identify who is saying what in the programme.
Figure 4.2 Frames per second
Also known as timing and cueing, the task of spotting consists in determining the in and out times of each and every one of the subtitles in a production, i.e. deciding the exact moment when a subtitle should pop up on screen and when it should leave, according to a series of temporal and visual considerations. When spotting, the faster the pace of the dialogue exchanges, the more challenging the task becomes. As perfect synchronization may not always be attainable, a degree of technical flexibility can be observed in professional practice. In instances when the original dialogue is semantically dense, and it is very difficult to condense or delete information without compromising the message, a certain degree of asynchrony is allowed in the presentation of the subtitles. In these cases, they can appear a few frames before the actual dialogue is uttered and/or leave the screen a fraction of a second after the speaker has actually finished talking. When more time is needed to stay within the maximum display rate, the out-time can be extended up to 12 frames past the timecode at which the audio ends, and it can start up to three frames earlier if necessary. This strategy is frequently used in SDH, when the spotting needs to follow the image more closely than the soundtrack, but restraint is advised when working in interlingual subtitling. Its sporadic application may be of great value to the subtitler, but if used too often it may be easily interpreted as lousy timing.

As opposed to oral speech, written texts, including subtitles, are sequential and can only present dialogue exchanges in a linear manner, i.e. one after the other. This makes the spotting of overlapping dialogue particularly tricky. When more than one person speaks at the same time, the spotter has to make the difficult decision of which information will make it to the TL and which will have to be deleted. This is especially challenging in those scenes in which several people are involved in rapid exchanges, particularly if they are having an argument or are seen in different places, interspersed with numerous shot changes. The timing will have to be done in as clear a way as possible so as not to confuse the viewer, who can hear several voices at the same time and may not know who is saying what. In these cases, a good layout of the subtitles is also essential.

Older studies on viewers' reading speed seem to indicate that the greater the number of words in one subtitle, the less time is spent reading each one of the words. That is, viewers need proportionally more time to read short subtitles than longer ones (Brondeel 1994; Ivarsson and Carroll 1998). Given these findings, it would seem more appropriate in general to resort to two-liners whenever possible; obvious exceptions being cases when the original utterances are very short themselves or when a shot change has to be respected.

To facilitate and speed up the spotting task, many current subtitling programs, like Wincaps Q4, come with functions that offer speech presence indication, which detects the point at which speech begins as well as its duration, offering a graphical representation of the actual speech in the shape of an audio level waveform display. Whereas it can be difficult, via headphones, to discern audibly the precise moment of speech onset, this display aid is very valuable in timing subtitles and making them coincide with the spoken word. Players like YouTube have been experimenting with the potential of ASR tools in order to automatically sync text and audio, provided the text of the dialogue is available in written format.
4.4.3 Timecodes
The introduction of timecodes in the subtitling process brought about changes that altered virtually all stages in the profession, from the timing of the subtitles to their engraving or projection on screen, including the way in which they can be archived, revised and amended. The first timecodes made their appearance in the 1970s, although they only became central to subtitling in the mid-1980s. Before their arrival, stopwatches were used to do the cueing. A timecode generator assigns an eight-digit figure to every single frame of the film or programme. It is a sort of identity sign unique to each frame, making it very easy for any professional to identify a particular frame within the whole programme. The code can be engraved at the top or the bottom of the working copy, where a TCR – timecode reader – indicates the hours, minutes, seconds and frames (Figures 4.3 and 4.4). If the timecodes are not part of the working copy of the film, the subtitling program will generate them:
Figure 4.3 Frame with timecode – top
Figure 4.4 Frame with timecode – bottom
In Figure 4.3, the value 00:08:10:22 indicates that this frame can be found at the beginning of the film (hour 0), 8 minutes (of a total of 60), 10 seconds (of a total of 60), and 22 frames (of a total of 24 in cinema, 25 in television and video in PAL/SECAM systems, and 29.97 in the NTSC system). To indicate that the programme has moved past the first hour, the following code will appear: 01:08:10:22. The most common formats for creating timecodes are the longitudinal timecode (LTC) and the vertical interval timecode (VITC).

Timecodes are an essential tool not only for subtitling, but also for the rest of the AVT modes, such as dubbing, voiceover and audio description. They allow quick and easy location of scenes and frames and permit perfect synchronization between soundtrack, shot changes and written subtitles. To illustrate the value of timecodes, let us consider the dialogue exchange in Example 4.4.

Example 4.4

001  00:37:22:19  00:37:26:01
- Isn't it all down to genes?
- Don't know.

002  00:37:26:21  00:37:31:17
If they could control that,
they could create two-footed players.
In the first subtitle, the numbers mean the following:

001          subtitle number
00:37:22:19  in-time
00:37:26:01  out-time

From these figures, the duration of the exchange can be gleaned by working out the time that has elapsed between the in and out cues, i.e. 3 seconds and 7 frames (when working with 25 fps). There is then a silence of 20 frames until the first speaker starts talking again in subtitle 002. This time, the sentence lasts slightly longer than in the previous example: 4 seconds and 21 frames. Once the length of time the speakers have spoken is known and a decision has been taken as to the display rate that can be applied to the presentation of the subs, the maximum number of characters per subtitle can be worked out (§4.4.6). Normally, the subtitling program does the calculation automatically and timecode alerts warn of any violations in the allocation of times. In the case of some subtitling freeware programs, timecodes are computed in milliseconds and, thus, contain nine digits (00:07:03.769). In these cases, one frame (when working with 25 fps) is equal to 40 milliseconds, i.e. 0.040 seconds.
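The arithmetic behind these values is straightforward. The following is a small sketch, not taken from the book, that converts eight-digit timecodes into frame counts so that durations and silences can be derived; a PAL frame rate of 25 fps is assumed.

```python
# A small sketch of the arithmetic behind Example 4.4: timecodes are turned
# into frame counts, and the difference gives durations and gaps.
FPS = 25

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert an 'HH:MM:SS:FF' timecode into a total number of frames."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def duration(tc_in: str, tc_out: str, fps: int = FPS) -> str:
    """Time elapsed between an in-cue and an out-cue, as seconds:frames."""
    frames = tc_to_frames(tc_out, fps) - tc_to_frames(tc_in, fps)
    return f"{frames // fps:02d}:{frames % fps:02d}"

print(duration("00:37:22:19", "00:37:26:01"))  # 03:07 -> 3 seconds and 7 frames
print(duration("00:37:26:01", "00:37:26:21"))  # 00:20 -> a 20-frame silence
print(duration("00:37:26:21", "00:37:31:17"))  # 04:21 -> 4 seconds and 21 frames
```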
4.4.4 Duration of subtitles
Ultimately, the spotting of the dialogue has to mirror the rhythm of the film and the performance of the actors, and be mindful of pauses, interruptions and other prosodic features that characterize the original speech. Long sentences might have to be split over several subtitles and short sentences combined to avoid telegraphic style. In practice, the golden rule for ideal spotting prescribes that subtitles should keep temporal synchrony with the utterances, whereby a subtitle should appear at the precise moment the person starts speaking and disappear when the person stops speaking. Thanks to an eight-digit timecode, the exact cues are accurately defined in hours, minutes, seconds and frames (§4.4.3).

The maximum amount of text that can be written on each subtitle is dictated by the safe area, the physical length of the lines and the time the person speaks. Traditionally, six seconds has been the recommended maximum exposure time to keep a full two-liner on screen – i.e. two lines of some 35 to 42 characters each – with shorter periods of time permitting proportionally fewer characters (§4.4.6). Viewers' greater familiarity with reading fleeting text on screen, together with the fact that many audiovisual productions have become more verbose and dialogue dense, have led to a reduction in the maximum display time for subs so that more text can find its way into the subs. Hence, some vendors now use five seconds as the new upper limit rather than the traditional six. One exception to this rule is the subtitling of songs, in which case the subtitle can be left hanging on screen beyond the six seconds if the rhythm requires it, usually up to seven.

When a subtitle remains on screen longer than the time the viewer actually needs to decipher it, there is a tendency to read it again. So, to avoid this unnecessary second reading, subtitling programs alert professionals when a subtitle is deemed to stay too long on screen for the amount of text it contains. To adjust the subtitle, the solution will be to write a longer text or to reduce the time the sub remains on screen. Similarly, when spotting the dialogue of a film, chunks that last longer than six seconds should be reconsidered and split into smaller units, as maintaining them on screen longer would not serve any purpose, for viewers are supposed to have read them in six seconds or even less. If it is the same person who continues talking beyond six seconds, the utterance should be divided when a natural pause crops up in the delivery, or at a point where the logic of the sentence allows for it (§6.4). Let us consider the following utterance from the film The French Lieutenant's Woman:

At a time when the male population of London of all ages was one and a quarter million, the prostitutes were receiving clients at a rate of two million per week.
It lasts eight seconds and 16 frames. If cued all together, the subtitle will have to condense the original information dramatically if it is to fit in a two-liner of some 78 to 84 characters, and it will clearly exceed the six seconds. A much better option would be to spot the utterance in two different subtitles, as in Example 4.5, taking into account the logic of the sentence and the fact that the actor makes a slight pause halfway through. A few frames, usually two, will have to be left blank between chained subtitles such as these two (§4.4.7):

Example 4.5

Sub 1: 00:31:12:03 – 00:31:16:22 (duration 04:19)
At a time when the male population of London
of all ages was one and a quarter million,

Sub 2: 00:31:16:24 – 00:31:20:19 (duration 03:20)
the prostitutes were receiving clients at a rate
of two million per week.
At the other end of the scale, to avoid subtitles flashing on screen and to guarantee that viewers have enough time to read the content, the ideal minimum exposure time for a subtitle is commonly agreed at one second – i.e. 24, 25 or 29.97 frames – even with short subtitles that could leave the screen earlier, although some companies set it as low as 20 frames for cinema, PAL and SECAM and 25 frames for NTSC. If the timing of a particular utterance falls under this category of less than one second, two possible strategies can be activated. If another person is speaking immediately before or after, consideration should be given to the possibility of using a dialogue subtitle presenting both persons in the same projection. If the utterance is preceded and followed by pauses, the subtitler should consider allowing for a certain margin of asynchrony at the onset and/or the offset of the subtitle (§4.4.2).

Minimum duration of a subtitle on screen:
20 frames for 24 and 25 fps
25 frames for 29.97 fps

Maximum duration of a subtitle on screen:
6 seconds
4.4.5 Subtitle display rates: characters per second and words per minute
As previously discussed, two of the basic principles in subtitling dictate that the subtitle has to appear and disappear in synchrony with the original dialogue, and that its exposure time on screen has to be sufficient for the viewer to be able to read the content comfortably. The time a sub remains on screen depends therefore on the delivery pace of the original dialogue and the assumed reading speed of the target viewers. When the original speech is uttered at a slow pace, the subtitler will not encounter major hurdles to transfer the information to the TL in its entirety. The problem arises when people on screen speak too fast for the target viewer to be able to read it all in translation within the same time, as individuals are supposed to be faster at hearing than at reading. To address this challenge, two dimensions can be manipulated: the degree of condensation that one is willing to apply to the original dialogue and the speed at which the information is to be presented.

Given the diversity that characterizes the target audience, particularly in factors like age and educational background, it is always challenging to generalize and agree on a reading speed that will be comfortable for all viewers. When deciding the audience's maximum reading speed, it has to be borne in mind that not only do the subtitles have to be read, viewers also have to be given enough time to be able to scan the images and 'read' the photography too. It is this constellation of information that leads Romero-Fresco (2015) to talk about viewing speed rather than reading speed. Consuming subtitles is not a mere reading exercise, as the message has to be assimilated and understood in a very short time span. Furthermore, reading times or display rates cannot be assessed on an absolute basis as they typically only take into account the volume of text, thus ignoring other elements that impinge on the viewer's pace of reading and tend to slow it down. These factors can relate to the form (poor legibility due to the lack of contrast between text and images, unexpected positioning of the subs, presence of action and enthralling visuals), as well as the content (use of complex vocabulary or syntax, abundance of numbers, poor line breaks, demanding dialogue containing plays on words, cultural references and metaphors, etc.). Additionally, the degree of familiarity that viewers can be assumed to have with the SL and with subtitling itself are factors that may impact the pace of reading.

The distribution medium is another consideration to be taken into account, as some professionals and companies believe that the subtitling display rate for TV programmes should be slower than for productions to be distributed in cinemas, on VOD, in airline releases or on DVD/Blu-ray. One of the reasons behind this conviction is that television addresses a wider section of society and thus should cater for individuals of all reading abilities. The technical empowerment of consumers, who can now easily manipulate digital media by pausing and rewinding it if necessary, justifies the greater speeds found on DVDs, Blu-rays and VOD platforms. Yet, with the development of catch-up television in the form of internet streaming, allowing viewers the same control over the audiovisual productions, these discrepancies across media are bound to converge in the near future.

Traditionally, the term reading speed has been used in this context, though in reality the subtitler does not set the viewer's reading speed but rather decides on the maximum speed that should not be exceeded. The subtitling program will then display the rate of presentation of each sub and will usually go red when the maximum rate has been exceeded, or blue when the display rate is too slow and can lead to conflict with the soundtrack, as some members of the audience may have finished reading the text while the character is still speaking. Viewers then need to match their reading pace to the subtitle display rates, which will vary according to the haste with which the onscreen characters speak.
From this perspective, more accurate terms to refer to this parameter are subtitling speed or subtitle display/presentation rate (Pedersen 2011; Sandford 2015). The subtitle display rate is understood as the relationship that exists between the quantity of text contained in a subtitle and the time that it remains on screen. To measure it, two parameters are used: characters per second (cps) and words per minute (wpm). The former is a more transparent unit of measurement, whereas the second one is a bit more opaque, for the length of words varies substantially within and across languages. In the industry, calculations done in wpm are usually based on the English language and assume that the average size for a word is five letters. Be that as it may, the reality is that different subtitling programs and cloud-based platforms compute these parameters differently, as there is a number of variables that can be taken into account – such as including blank spaces and punctuation, adding line breaks, allowing for deviation and counting half-width symbols as 0.5 characters – which are some of the reasons for the discrepancy. Additionally, many subtitling programs, like Wincaps Q4 (Figure 4.5) and OOONA (Figure 4.6), will allow subtitlers to configure their display rate rules, including or excluding blank spaces in the word count, whether working with wpm or cps.
Figure 4.5 Timing rules configuration in Wincaps Q4
Figure 4.6 Timing rules configuration in OOONA
According to this optionality, the sentence 'You can only be rich if you steal.' can be considered to have 34 characters, when including the blank spaces in the display rate, or 27 characters, when the blank spaces are not included. This is a grey area in the industry, as some clients do not seem to be aware of the implications and some professionals argue that the blank spaces between words should not be included in the calculation of the display rate because they do not add to the reader's cognitive load.
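The following is a minimal sketch, not taken from the book, of how the same subtitle yields two different display rates depending on whether blank spaces are counted; the two-second exposure used here is purely illustrative.

```python
# A minimal sketch of computing a display rate in cps, with blank spaces
# either included in or excluded from the character count.
def chars_per_second(text: str, seconds: float, count_spaces: bool = True) -> float:
    """Characters per second for a subtitle exposed for the given time."""
    length = len(text) if count_spaces else len(text.replace(" ", ""))
    return length / seconds

line = "You can only be rich if you steal."
print(len(line), len(line.replace(" ", "")))             # 34 and 27 characters
print(chars_per_second(line, 2.0))                       # 17.0 cps, spaces included
print(chars_per_second(line, 2.0, count_spaces=False))   # 13.5 cps, spaces excluded
```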
Include spaces in the display rate, unless otherwise stated in the exercises.
4.4.6 The six-second rule
Subtitling before the advent of digital media was traditionally based on what is known in the profession as the six-second rule (Laks 1957; D'Ydewalle et al. 1987; Brondeel 1994), according to which an average viewer can comfortably read in six seconds the text written on two full subtitle lines, when each line contains a maximum of some 37 characters, i.e. a total of 74 characters. The rule results from cinema exhibition, and the mathematical reasoning behind it is that two frames allow for one character space. Given that the cinema illusion requires the projection of 24 frames per second – 25 or 29.97 in television – this means that subtitlers can write 12 cps. In six seconds, the total will be 72 characters over two lines. In the case of double-byte languages, these figures are roughly halved. When considered in words per minute, this calculation implies a rather low presentation rate of some 150 wpm, when the blank spaces are included in the display rate and words are considered to contain five letters each. Table 4.1 illustrates the maximum number of characters available for any period of time between one and six seconds.
Table 4.1 Equivalence between seconds/frames and characters, including spaces (12 cps ≈ 150 wpm)

Seconds:frames  Characters    Seconds:frames  Characters    Seconds:frames  Characters
01:00           12            03:00           36            05:00           60
01:04           14            03:04           38            05:04           62
01:08           17            03:08           40            05:08           64
01:12           19            03:12           42            05:12           66
01:16           22            03:16           45            05:16           69
01:20           24            03:20           47            05:20           71
02:00           24            04:00           48            06:00           72
02:04           26            04:04           50
02:08           28            04:08           52
02:12           30            04:12           54
02:16           33            04:16           57
02:20           35            04:20           59
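The arithmetic underlying this and the following tables can be sketched in a few lines. The code below is not taken from the book: it simply multiplies the display rate by the exposure time; values at whole seconds match the published tables, while the intermediate values in the tables are rounded slightly more generously.

```python
# A sketch of the reasoning behind Tables 4.1-4.4: the character allowance is
# roughly the display rate in cps multiplied by the exposure time.
FPS = 25  # PAL frame rate assumed

def max_characters(seconds: int, frames: int = 0, cps: int = 12, fps: int = FPS) -> int:
    """Approximate character allowance for a subtitle shown for seconds:frames."""
    return int(cps * (seconds + frames / fps))

print(max_characters(6, cps=12))  # 72, the traditional full two-liner
print(max_characters(2, cps=17))  # 34, as in Table 4.3
print(max_characters(5, cps=15))  # 75, as in Table 4.2
```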
With the help of this table, the approximate maximum number of characters that can be used to translate any dialogue exchange can be worked out. So, if an actor speaks during 3 seconds and 20 frames, as in sub 2 in Example 4.5, the subtitler can make use of a total of some 47 characters to translate the content into the TL. A subtitle with many more characters will risk not being read by some viewers, while one with far fewer characters may stand out as containing too little information, and viewers may read it before the actor on screen has actually completed the utterance. In both cases, subtitling programs will alert professionals so that they can reconsider their solutions.

These rather slow subtitling rates are the approximate values traditionally applied in the subtitling of audiovisual productions that are aimed at viewers who may find it challenging to read text on screen, like young children. As already discussed, the advent of the DVD revolutionized the audiovisual translation industry at the time. Arguing that viewers were gaining more control over the watching experience and that greater exposure to subtitles was having the benefit of enhancing viewers' reading speed, most DVD publishers hiked display rates to 15 cps, roughly the same as 180 wpm, as shown in Table 4.2.
Table 4.2 Equivalence between seconds/frames and characters, including spaces (15 cps ≈ 180 wpm)

Seconds:frames  Characters    Seconds:frames  Characters    Seconds:frames  Characters
01:00           15            03:00           45            05:00           75
01:04           17            03:04           47            05:04           77
01:08           20            03:08           49            05:08           78
01:12           22            03:12           51            05:12           78
01:16           25            03:16           54            05:16           78
01:20           27            03:20           56            05:20           78
02:00           30            04:00           60            06:00           78
02:04           32            04:04           62
02:08           34            04:08           64
02:12           36            04:12           66
02:16           39            04:16           69
02:20           41            04:20           71
Standards and conventions keep evolving in subtitling, and the arrival of the over-the-top media service providers, particularly Netflix, has been felt as yet another major catalyst in the profession (Chapter 5). They have brought along longer lines of a maximum of 42 characters and, in what concerns subtitling display rates, they have contributed to the consolidation in the industry of the characters per second formula, rather than words per minute, and 17 cps (or 200 wpm) is fast becoming the preferred rate when dealing with general programming for adults (Table 4.3).
Table 4.3 Equivalence between seconds/frames and characters, including spaces (17 cps ≈ 200 wpm)

Seconds:frames  Characters    Seconds:frames  Characters    Seconds:frames  Characters
01:00           17            03:00           51            05:00           84
01:04           19            03:04           53            05:04           84
01:08           22            03:08           55            05:08           84
01:12           24            03:12           57            05:12           84
01:16           27            03:16           60            05:16           84
01:20           30            03:20           62            05:20           84
02:00           34            04:00           68            06:00           84
02:04           36            04:04           70
02:08           38            04:08           72
02:12           40            04:12           74
02:16           43            04:16           77
02:20           45            04:20           79
The recommended display rate for the subtitling of children's programmes stands at 13 cps, or 160 wpm, as shown in Table 4.4.

Table 4.4 Equivalence between seconds/frames and characters, including spaces (13 cps ≈ 160 wpm)

Seconds:frames  Characters    Seconds:frames  Characters    Seconds:frames  Characters
01:00           13            03:00           39            05:00           65
01:04           15            03:04           41            05:04           67
01:08           18            03:08           43            05:08           69
01:12           20            03:12           45            05:12           71
01:16           23            03:16           48            05:16           74
01:20           25            03:20           50            05:20           77
02:00           26            04:00           52            06:00           78
02:04           28            04:04           54
02:08           30            04:08           56
02:12           32            04:12           58
02:16           35            04:16           61
02:20           37            04:20           63
These figures apply to most single-byte languages in the Netflix catalogue, with only a few surprising exceptions, like Arabic (20 cps and 17 cps, for adults and children respectively) and Hindi (22 cps and 18 cps). For the preparation of English templates, their Timed Text Style Guide also recommends the rather high values of 20 cps (i.e. 240 wpm) for adults and 17 cps for children (Netflix n.d. 2).

Subtitling display rates to be applied in the exercises of this book, including blank spaces, unless otherwise indicated:

Latin-based, Cyrillic-based and Semitic languages, Hindi and Thai
Adult programmes: 17 cps or 200 wpm
Children's programmes: 13 cps or 160 wpm

Chinese
Adult programmes: 9 cps
Children's programmes: 7 cps

Japanese
All programmes: 4 cps

Korean
Adult programmes: 12 cps
Children's programmes: 9 cps
The following is a conversion table showing a parallel between the display rates calculated in cps and in wpm. They include the blank spaces in the count and, in the case of wpm, each word is considered to have five letters on average:

12 cps    13 cps    14 cps    15 cps    16 cps    17 cps    18 cps    19 cps    20 cps
150 wpm   160 wpm   170 wpm   180 wpm   190 wpm   200 wpm   215 wpm   225 wpm   240 wpm
The decision not to include the blank spaces in the display rate alleviates the task of subtitlers and allows them a greater margin of manoeuvre since they do not have to condense the text as much. When working in wpm, the calculations are based on a typical word consisting of 4.5 letters on average:

12 cps    13 cps    14 cps    15 cps    16 cps    17 cps    18 cps    19 cps    20 cps
130 wpm   140 wpm   150 wpm   160 wpm   170 wpm   180 wpm   190 wpm   200 wpm   215 wpm
Example 4.6 A document containing the various display rates in wpm and cps, including and excluding the blank spaces, can be found on Web > Chapter 4 > Examples > Example 4.6
The smoothness of the subtitle reading process also depends on factors like the nature and characteristics of the audiovisual material, as well as the language used to create the subtitles. For example, in one study conducted by d'Ydewalle and Van Rensbergen (1989), it was found that children would pay less attention to the subtitles when the film involved a lot of action. The authors noticed that the use of special effects and the way the visuals were edited shifted the balance of content towards the action at the expense of the dialogue and the subtitles. Another factor that influences subtitle reading is related to shot changes, as discussed in §4.4.8.

To try and gauge the impact that the various display rates have on the viewer's reading experience, empirical research with eye tracking has been conducted with hearing, hard-of-hearing and D/deaf participants from six different countries (Romero-Fresco 2015). Unsurprisingly, the higher the subtitle presentation rate, the less time is available for the viewers to watch the images. The results yielded by such experiments are shown in Table 4.5.
Table 4.5 Viewing speed and distribution of gaze between subtitles and images (Romero-Fresco 2015: 338)

Viewing speed   Time spent on subtitles   Time spent on images
120 wpm         ± 40%                     ± 60%
150 wpm         ± 50%                     ± 50%
180 wpm         ± 60%–70%                 ± 40%–30%
200 wpm         ± 80%                     ± 20%
Similar research has also been conducted by Szarkowska and Gerber-Morón (2018), who have looked into whether viewers can keep up with increasingly fast subtitles and whether the way they cope with subtitled content depends on their familiarity with subtitling and on their knowledge of the language of the film soundtrack. By analyzing the eye gaze of people who had watched some video excerpts in English and Hungarian at different speeds (12, 16 and 20 cps), they confirm that most viewers managed to read the subtitles and follow the images well, even in the case of the fastest subs. Slow subtitles triggered more rereading, particularly in English clips, causing more frustration and less enjoyment. Faster subtitles with unreduced text were preferred in the case of English videos, and slower subtitles with text edited down in Hungarian videos.
4.4.7 Gap between subtitles
In a similar way to the separation that is left between words, a small, clear pause has to be created between two consecutively chained subtitles if the viewer is to register that a change of written material has taken place on screen. If a sub is immediately followed by another one without leaving any clean frames (i.e. frames without text) between the two, the eye finds it difficult to realize that new information has been presented. The chances of this mishap occurring are greater when the two successive subtitles share a similar layout. To avoid this potential problem, subtitling programs have an automatic delay function that creates a small pause immediately after the out-time of every subtitle, before the next one can be cued in. This delay function can be set by the user and various values can be selected, but in order to be effective a minimum of two frames is needed.
A minimum gap of two clean frames is to be left between closely consecutive subtitles, regardless of frame rate.
When dialogue is dense and continuous, and the pauses between the lines are shorter than around ten frames, one of the strategies to make the most of the time available is to chain the various subtitles according to the required minimum interval, avoiding leaving short gaps that vary in length. In Example 4.7, the four subtitles in the first version keep differing intervals between them (five, seven and four frames between subs 1 and 2, 2 and 3, and 3 and 4 respectively). The same subtitles have been chained in the second version, leaving the minimum gap of two frames between each of them and thus maintaining a regular subtitle output with no uneven gaps:

Example 4.7

Uneven gaps:

0001 00:00:00:00 00:00:01:14
Hadeel is the name of a girl.

0002 00:00:01:19 00:00:05:15
It's a very common name in Arabic. It means the cooing of doves.

0003 00:00:05:22 00:00:09:16
This is the story of a girl who was killed in a bombing in Gaza.

0004 00:00:09:20 00:00:12:14
And her brother Ahmed lost his sight on this very day.

Chained, with two-frame gaps:

0001 00:00:00:00 00:00:01:14
Hadeel is the name of a girl.

0002 00:00:01:16 00:00:05:15
It's a very common name in Arabic. It means the cooing of doves.

0003 00:00:05:17 00:00:09:16
This is the story of a girl who was killed in a bombing in Gaza.

0004 00:00:09:18 00:00:12:14
And her brother Ahmed lost his sight on this very day.
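The chaining strategy lends itself to automation. The following is a rough sketch, not taken from the book, of how the normalization in Example 4.7 could be scripted: whenever the pause between two consecutive subtitles is shorter than about ten frames, the in-time of the second one is snapped to two clean frames after the previous out-time.

```python
# A rough sketch of chaining consecutive subtitles as in Example 4.7.
FPS = 25
MIN_GAP = 2           # clean frames to leave between chained subtitles
CHAIN_THRESHOLD = 10  # pauses shorter than this are normalized

def tc_to_frames(tc: str) -> int:
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_tc(total: int) -> str:
    ss, ff = divmod(total, FPS)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def chain(subs):
    """subs is a list of (in_tc, out_tc) pairs; returns a chained copy."""
    chained = [subs[0]]
    for tc_in, tc_out in subs[1:]:
        prev_out = tc_to_frames(chained[-1][1])
        if tc_to_frames(tc_in) - prev_out < CHAIN_THRESHOLD:
            tc_in = frames_to_tc(prev_out + MIN_GAP)
        chained.append((tc_in, tc_out))
    return chained

subs = [("00:00:00:00", "00:00:01:14"), ("00:00:01:19", "00:00:05:15"),
        ("00:00:05:22", "00:00:09:16"), ("00:00:09:20", "00:00:12:14")]
for tc_in, tc_out in chain(subs):
    print(tc_in, tc_out)
# The in-times become 00:00:01:16, 00:00:05:17 and 00:00:09:18, as in the example.
```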
4.4.8 Shot changes
Another golden rule in spotting practice recommends that, whenever possible, a subtitle should not cross a shot change, as this has been typically interpreted as a trigger for the re-reading of the subtitle: "if a caption remains on the screen when the scene changes behind it, viewers will automatically start reading the caption over again, assuming that the caption changed with the scene" (Robson 2004: 184). Yet, according to recent research (Krejtz et al. 2013; Szarkowska et al. 2015), there is no conclusive evidence that supports the claim that subtitles are re-read when shot changes occur, although subtitles straddling shot changes are still considered to be disruptive to the viewing experience and should be avoided. Professional practice recommends that a subtitle should leave the screen just before the change takes place and a new subtitle should be spotted after the cut, which functions as a dividing frontier between the subtitles. This may entail splitting a sentence at an appropriate point, or delaying the start of a new sentence to coincide with the shot change. Respecting shot changes has become more of an issue as some of today's fast-moving audiovisual productions rely on editing techniques where cuts are frequent as a means of adding dynamism to the action. Additionally, it is not uncommon for actors to continue speaking over the shot change, making it difficult, not to say impossible, not to break this rule. The distribution format also has an impact, and films to be screened in the cinema tend to adhere to this rule in a much stricter way than productions for television or home entertainment. Priorities have to be set and, whilst soft cuts pose less of a problem, hard cuts should be respected as much as possible. As general guidance, (1) subtitles should not hang over shot changes if the speaker has finished speaking, (2) if a subtitle has to be left hanging over a shot change, it should not be removed too soon after the cut, and (3) a subtitle should never be carried over into the next shot if this means crossing into a clearly different scene, except when the voice provides a sound bridge.

There is some controversy as to when exactly to time the subtitles before and after the shot change. For some professionals it is best to avoid displaying a subtitle precisely as a shot change takes place, because this can be somewhat disturbing to the eye; others prefer to use the exact moment when the cut happens to cue the subtitle out. The BBC (2019), for instance, recommends that, where possible, subtitles should start on the first frame after the shot and end on the last frame before the shot. Netflix (n.d. 2), on the other hand, suggests that when dialogue crosses the shot change the timecodes should be adjusted to either be at the shot change or at least 12 frames from it. Common in the industry is the 12-frame rule, whereby when a subtitle finishes just before a shot change, the out-time may be set either on the shot change or at least 12 frames before the shot change, as illustrated in Figure 4.7.
Figure 4.7 Spotting before the shot change
When a subtitle begins just after a shot change, the in-time of the subtitle may be either two frames or at least 12 frames after the shot change, as shown in Figure 4.8.
Figure 4.8 Spotting after the shot change
Some of today's subtitling programs, like Wincaps Q4 or OOONA Tools, come with a shot change detector that automatically analyzes a video file and identifies the shot changes within it, making the whole process much easier. To cue around shot changes, and as long as the text does not cross over, the recommendation is that the in-time should coincide with the start of the utterance and the out-time with the end, irrespective of their distance to the actual shot change.

Ideally, subtitles should start on the first frame after the shot and end on the last frame before the shot.
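The following is a simplified sketch, not taken from the book, of one possible reading of the cueing rules above: an out-time falling fewer than 12 frames before a cut is pulled onto the cut, and an in-time falling fewer than 12 frames after it is pushed to two frames past it. House styles differ on the exact values, so the thresholds here are only illustrative.

```python
# A simplified sketch of snapping cue times around a detected shot change.
GAP_AFTER_CUT = 2
MIN_DISTANCE = 12

def snap_out_time(out_frame: int, cut_frame: int) -> int:
    """Adjust an out-time (in frames) that ends shortly before a shot change."""
    if 0 <= cut_frame - out_frame < MIN_DISTANCE:
        return cut_frame          # close the subtitle on the cut itself
    return out_frame              # already 12 or more frames away: leave it

def snap_in_time(in_frame: int, cut_frame: int) -> int:
    """Adjust an in-time (in frames) that starts shortly after a shot change."""
    if 0 <= in_frame - cut_frame < MIN_DISTANCE:
        return cut_frame + GAP_AFTER_CUT   # start two frames after the cut
    return in_frame

print(snap_out_time(995, 1000))   # 1000: snapped onto a cut at frame 1000
print(snap_in_time(1007, 1000))   # 1002: moved to two frames after the cut
```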
4.4.9 Feet and frames in cinema
When working for theatrical releases, 35 mm motion pictures are measured using the Imperial units, feet and frames. For the record, a film foot contains 16 frames and, for the viewer to believe the illusion of the moving images on screen, 24 frames have to be projected every second against the silver screen. Thus, a second of film is equal to 1 foot and 8 frames, i.e. 1.5 feet. In order to guarantee a comfortable reading speed, it is a commonly accepted convention that a film foot, i.e. 16 frames, should contain ten characters (including letters, spaces and punctuation marks). In other words, each frame may contain 0.625 spaces or characters. According to these values, and adapting the conversion table proposed by Castro Roig (2001: 279), the following equivalences can be established.
Table 4.6 Equivalence between feet/frames and number of characters (including spaces)

Feet:Frames     Characters   Feet:Frames     Characters   Feet:Frames     Characters   Feet:Frames     Characters
0:01            1            2:01            21           4:01            41           6:01            61
0:03            2            2:03            22           4:03            42           6:03            62
0:05            3            2:05            23           4:05            43           6:05            63
0:07            4            2:07            24           4:07            44           6:07            64
0:09            6            2:09            26           4:09            46           6:09            66
0:11            7            2:11            27           4:11            47           6:11            67
0:13            8            2:13            28           4:13            48           6:13            68
0:15            9            2:15            29           4:15            49           6:15            69
0:16 (= 1:00)   10           2:16 (= 3:00)   30           4:16 (= 5:00)   50           6:16 (= 7:00)   70
1:01            11           3:01            31           5:01            51           7:01            71
1:03            12           3:03            32           5:03            52           7:03            72
1:05            13           3:05            33           5:05            53           7:05            73
1:07            14           3:07            34           5:07            54           7:07            74
1:09            16           3:09            36           5:09            56           7:09            76
1:11            17           3:11            37           5:11            57           7:11            77
1:13            18           3:13            38           5:13            58           7:13            78
1:15            19           3:15            39           5:15            59           7:15            79
1:16 (= 2:00)   20           3:16 (= 4:00)   40           5:16 (= 6:00)   60           7:16 (= 8:00)   80
Example 4.8, from the film The Broken Hearts Club, will help illustrate how to apply these parameters when subtitling:

Example 4.8

Subtitle number   Footage (start)   Footage (end)   Total   Subtitle
0012              90.07             95.02           4.11    Fine. I don't care that I lost. I hate games. Where's the check?
0013              95.06             97.13           2.07    I think you should be proud to have lost.
Subtitle number 0012 indicates that the actor starts his utterance in foot 90, frame 07, and finishes his question in foot 95, frame 02, which means that he has been talking for 4 feet and 11 frames. According to Table 4.6, the subtitler will then be allowed to write the translation in a subtitle of a maximum of 47 characters, including blank spaces. Applying the same analysis, in the second example, the maximum number of characters available for the subtitle is 24. With the possibility of having the film print in digital format and using a subtitling program, this way of translating is fast disappearing in the industry, which prefers to work with frames and seconds, as discussed in the previous sections.
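The following is a small sketch, not taken from the book, of the feet-and-frames arithmetic used in Example 4.8: a 35 mm foot holds 16 frames and allows roughly ten characters, i.e. 0.625 characters per frame.

```python
# A small sketch of converting feet and frames into a character allowance.
FRAMES_PER_FOOT = 16
CHARS_PER_FOOT = 10

def footage_to_frames(feet: int, frames: int) -> int:
    return feet * FRAMES_PER_FOOT + frames

def max_characters(feet: int, frames: int) -> int:
    """Approximate character allowance for an utterance lasting feet:frames."""
    return round(footage_to_frames(feet, frames) * CHARS_PER_FOOT / FRAMES_PER_FOOT)

print(max_characters(4, 11))  # 47 characters available for subtitle 0012
print(max_characters(2, 7))   # 24 characters available for subtitle 0013
```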
4.5 Exercises

For a set of exercises in connection with this chapter go to Web > Chapter 4 > Exercises
5 Formal and textual features

5.1 Preliminary discussion
5.1 Discuss the difference between readability and legibility in relation to subtitling.

5.2 Watch some three or four programmes subtitled into one of the languages you know and which have been commercialized by different companies, or broadcast by different television stations.
❶ Have you noticed any differences in the formal layout in which subtitles are presented? Which differences? Make a list of three main areas of disparity that have called your attention.
❷ What do you think are the reasons for these differences in the presentation of the subtitles? Aesthetic, commercial?
❸ In the programmes you have watched, is there any one way of presenting the subtitles that you prefer above the rest? Why?
❹ Would you discard some of the conventions you have seen on screen? Which one/s? Why?
❺ Would you propose any other conventions to enhance the layout of subtitles? Why?
5.3 In your opinion, what sort of impact can a lack of harmonization have on the industry? And on the audience?

5.4 Do you consider the subtitling done in your country to be of good or bad quality in general? Why?

5.5 Do you believe the quality of subtitling varies according to the means of distribution (cinema, television, DVD, VOD)? Or to the language from which the translation is done?

5.6 Find some subtitling guidelines on the internet and consider the issues they tackle and the advice they provide.
5.2 In search of conventions
Contrary to the static presence of written text on a page, the fleeting nature of subtitles lends them a fragmented quality that complicates their reading. Each subtitle is an isolated event, physically disconnected from the preceding and the following subtitles. In this respect, it can be argued that reading subtitles that can be verbose, that appear and disappear at a given speed, that compete with the images for the viewer's attention and that are accompanied by a soundtrack in another language is a more demanding task than reading a novel or a newspaper. It is rather disruptive, not to say impossible as in the cinema, to stop the screening and rewind to reread a subtitle that has not been fully understood. The original images and soundtrack in the FL are the immediate co-text of the subtitles, which compete with the spoken utterances for the viewers' attention while at the same time complementing them at semiotic (Chapter 3) and linguistic (Chapter 6) levels. Part of the subtitlers' job is to facilitate the audience's reading experience and, in order to do so, two complementary concepts are of the essence: legibility and readability. The former refers to the ease with which a viewer can read a text on screen depending on the type and size of font used, the definition and contrast against the image background and the speed at which the subtitles appear on screen. Readability, on the other hand, is more psycholinguistic and refers to the ease with which a reader can recognize the various components of a text and, ultimately, its meaning, depending on parameters such as syntactical complexity, information density, semantic load and the like. In their pursuit, subtitlers also have to master the use of punctuation and other stylistic resources.

The apparent lack of harmonization in the presentation of subtitles is due, amongst other reasons, to the large number of subtitling providers operating in the market, as well as to the fact that some vendors, television broadcasters and other creators of subtitles do not always have a stylebook with specific instructions in all the languages into which they offer their services. Yet, despite the seemingly myriad approaches that exist to formulate subtitles on screen, their formal presentation is not entirely fortuitous, as they usually adhere to basic conventions of which translators have to be aware. Indeed, through repeated exposure to subtitled programmes, most viewers will have internalized some of the defining features of subtitling and might be puzzled when noticing departures from the assumed norm. Subtitling follows a set of orthotypographic rules that are part and parcel of the grammar of a language, so that viewers watching a subtitled programme do not have to learn a whole new body of norms. Ultimately, subtitles are an instance of written text and, on the whole, they tend to observe the rules that govern standard punctuation, albeit with some idiosyncratic usages.

The space and time parameters that govern the practice of subtitling have been discussed in Chapter 4 and can be said to be applicable, in principle, to the creation of subtitles in any language combination. However, while a certain degree of harmonization might have been attained at a global level on the technical side, agreement on the general orthotypographic requirements to be utilized is stubbornly elusive, as any decisions impinge and depend on the language of the subtitles. Ideally, specific subtitling guidelines should be drafted for each language, if not for linguistic variants (e.g. Portuguese and Brazilian Portuguese), to better suit cultural and linguistic idiosyncrasies. Having said that, it is evident that different national subtitling practices share some conventions, at least at the European level.
On this front, the extensive use of English template files (§2.5) can be said to contribute to the convergence of subtitling trends. As previously discussed (§4.2), most subtitling vendors have their own guidelines that they use as internal documents and, as such, these are only distributed among their employees. The arrival of global players such as Netflix has transformed some of the behaviours in this ecosystem, as the company freely circulates its own guidelines on its website's Partner Help Centre. Likewise, greater mobilization on the part of practising subtitlers has led to the creation and open dissemination of subtitling standards reflecting professional subtitling practice and tradition in various countries.
5.3 Punctuation conventions
Attempting to recommend a fixed and unequivocal set of subtitling guidelines that can be applicable in any language is certainly a daunting task, destined to fail. While many of the rules seem to be backed by logic, some are probably applied arbitrarily and may be difficult to justify over others. In addition, many subtitling vendors maintain their own, unique in-house stylebooks as a way to set themselves apart from their competitors. Therefore, rather than proposing a rigid set of norms, the aim of the following sections is to foreground generally accepted practice, while identifying points of contention and keeping an eye on different alternatives. Ultimately, the objective is to raise awareness about the various options that are available when producing subtitles, with their pros and cons. The discussion draws on the style guides of several companies, which have been scrutinized and synthesized so that conventions with widespread validity in the profession can be discerned. In some cases, it is clearly impossible to assess whether some are 'better' than others. They are simply different. As is nearly always the case in translation, subjectivity plays a big role. However, to enhance systematic practice, some rules are recommended over others so that common parameters can be applied when carrying out the activities proposed in the book and on the companion website. Some examples are flagged with a symbol, which does not mean that they are 'wrong' in the traditional sense of the word, but rather that they do not comply with the conventions favoured in this book. In any case, readers interested in doing the exercises can decide whether to adhere to the guidelines presented in these pages or whether to apply the standards put forward by any of the companies or associations mentioned in §4.2 and also available on the companion website.

All the examples contained in this chapter are in English and follow the conventions that are appropriate for the English language. However, common sense ought to prevail when creating subtitles in other languages, and the basic grammatical conventions used in those languages should be respected. Punctuation rules in other languages can differ greatly from English ones, and subtitlers ought to avoid aping the punctuation used in the English template files or the dialogue lists provided by the clients. Anybody working in subtitling, or translation for that matter, should master their working language and be cognizant of all its nuances: use of tildes, diacritic signs, punctuation in numbers and the like. Spelling in the TL must be perfect, spotless. Equally important is being fully aware of the requirements of the company and applying their guidelines consistently.
5.3.1 Comma (,)
The role of the comma is primarily to show the structure of a sentence, dividing it into sections to make the text more easily understood. It separates parts that are related to each other in the same statement, and its usage signals a slight pause in the reading. It is written immediately after a word, and a blank space is needed before the following word. The use of commas in subtitling does not necessarily fully comply with grammar rules, as they can appear whenever there is a risk of misunderstanding what the original is saying. In some instances, the comma is contrastive and its use has to follow the prosody of the speech. Note the difference between the following two subtitles:
You know he’s bought a car.
You know, he’s bought a car.
Words used as vocatives require commas:
Marilyn, come with us to the beach.
Don’t do it, dad, or it’ll get worse.
Commas are also needed to mark off phrases and clauses inserted in the main sentence, although, whenever possible, these enclosing phrases should be moved to the beginning or the end of the subtitle, as in the example on the right:
His attitude was, in my opinion, very unpleasant.
In my opinion, his attitude was very unpleasant.
Using commas at the end of a subtitle that continues in the next one should be kept to a minimum, since they may be confused with a full stop and lead the viewers to believe that they have reached the syntactical conclusion of the sentence. In some cases, the change of subtitle can be considered a substitute for the comma which may otherwise be necessary in a standard written text. The actual physical disappearance of the written text from the screen imposes a pause in the reading pattern that many consider has the same value as the use of a comma. If no punctuation mark appears at the end of a subtitle line, this automatically signals that the sentence runs on. Given the general agreement that subtitles ought to be semantically and syntactically self-contained, subtitlers have to be cautious when using commas that will tend to lengthen a sentence, and try to finish the idea within the same subtitle projection. In some cases, it may be more appropriate to try to split the original sentence into smaller ones, within or across subtitles.

The use of the semi-colon (;) is very rare in subtitling and should be avoided.
5.3.2 Full stop (.)
The full stop at the end of a subtitle is an unequivocal indication that the sentence is finished and there is no continuation into the next subtitle. It follows the word without a space, and the next line or subtitle starts in upper case:
The lady’s not coming today. The bottle calls. I’m on my way.
Zaza, your fans are waiting for you, and my dinner is going cold. Please be reasonable.
Some companies do not make use of the full stop at the end of a subtitle, which "creates the most confusing and even irritating situation of all, as it may mean two contradictory things: either that the sequence stops there or that it goes on. Needless to say, this makes subtitles following this style very difficult to read" (Cerón 2001: 176).
5.3.3 Colon (:)
A colon signals a small pause and takes the reader’s interest forward by announcing or introducing what is to come. Like the full stop, it is placed immediately after a word and is followed by one blank space:
He used to say: “Never say never”.
He used to say: “Never say never.”
A colon introduces a list, an enumeration or an explanation, in which case the word after the colon must be written in lower case:
They took everything: Sandals, towels,
swimming trunks, and goggles.

They took everything: sandals, towels,
swimming trunks, and goggles.
A colon is never followed by a capital letter, except when the word is a proper noun or the first part in the quotation of somebody else's words, as in the following example.
And then he said: “she doesn’t live here anymore”.
And then he said: “She doesn’t live here anymore.”
5.3.4 Parentheses ( )
Round parentheses serve to set apart relevant but supplementary information that could be omitted without altering the meaning of the sentence, which is the reason why explanations and asides of this nature are the first to be deleted when condensation is required. Parentheses are effective only in very restricted contexts and hardly ever used in subtitling. Although parenthesized material is rather more distanced from the sentence proper than material within commas, the natural tendency in subtitling is to eliminate the parentheses and reconstruct the sentence using commas and, if necessary, adding a connector:
What happens in this clip (nearly 30 actors in total) is that there’s too much activity going on all the time.
What happens in this clip, with nearly 30 actors in total, is that there’s too much activity going on all the time.
In certain cases, as in the following example, parentheses are used in the translation to replicate the parentheses appearing in the written insert that viewers can see on screen. In the opening scene of the film The Broken Hearts Club, the viewer can read onscreen text containing the pronunciation and definition of the term 'meanwhile', which in the film is used in a rather cryptic way by some of the characters. In the Spanish subtitle, the first set of parentheses, along with the phonetic pronunciation of the word, are lost, while the second set has been kept:
meanwhile (mën´hwïl) noun
A red alert amongst friends signalling them to take immediate notice of
→ a passing stranger (usually attractive).

Mientras tanto: sustantivo.
Señal entre amigos para advertirse que tomen
[Meanwhile: substantive. Signal among friends to alert each other to take]

nota de los extraños que pasan (normalmente atractivos)
[notice of passing strangers (normally attractive)]
5.3.5 Exclamation marks (!) and question marks (?)
Exclamation marks indicate shouting, emphasis or strong feelings like anger, scorn, surprise, happiness and disgust. They are also used to foreground irony, to underline insults and expletives, and to give commands and warnings. Both exclamation and question marks often signal the end of a sentence and are written immediately after the word that precedes them, without any blank space in between, and separated with a space from the word that follows them:
Wow ! Where did you buy it ?
Wow! Where did you buy it?
It is an error to add a period after an exclamation or question mark, since they already contain one:
I cannot believe he said it!.
I cannot believe he said it!
As a rule of thumb, subtitles should not be cluttered with unnecessary punctuation that does not carry any added value and is pleonastic. If overused throughout the subtitles, exclamation marks lose their force and become tiresome for the viewer. They convey intensity to a written text and need to be used sparingly since part of that intensity may also be retrievable from the soundtrack or the gestures. On some occasions, particularly when translating directly from a dialogue list or template, the risk exists of copycatting in the subtitles the same amount of punctuation as in the working documents. This is why it is recommended not to rely on the script when deciding whether to use exclamation marks and to listen to the actual speech and the way it is delivered. Double or multiple exclamation or question marks should be avoided in all cases:
Did they really?? I don’t believe it!!
Did they really? I don’t believe it!
Some sentences resemble questions in their structure, but are used as exclamations. They are exclamatory questions and, in standard writing, can usually end with both a question mark and an exclamation mark, in this order. This approach should be avoided in subtitling, and the symbol that best suits the intonation or the meaning should be picked. If no answer is put forward, as in the case of rhetorical questions, the exclamation mark should take precedence:

Isn't she clever?!
Isn’t she clever!
If, on the contrary, the utterance receives a reply, then the question mark should be given priority:
- Isn't she clever?!
- Of course she is.

- Isn't she clever?
- Of course she is.
5.3.6 Hyphen (-)
Hyphens (-), not to be confused with n-dashes (–) or m-dashes (—), which are longer, have the specific function in subtitling of indicating that the text appearing on screen belongs to two different people. Dialogue subtitles always consist of two lines, in which the top one is reserved for the speaker heard first, and the second line for the second speaker. In the standard way of indicating that the onscreen text is a dialogue exchange, each of the lines is preceded by a hyphen followed by a space, though some companies prefer not to leave a space after the hyphen, while some others use a hyphen only in the second line, with a blank space between the punctuation mark and the letter:
- I couldn't move her.
- Be firm, for goodness' sake!

-I couldn't move her.
-Be firm, for goodness' sake!

I couldn't move her.
- Be firm, for goodness' sake!
Although technically possible, it is considered incorrect to include the lines of more than one speaker on a single line. In the following example, either the first statement or the second affirmation needs to be moved to the previous or following subtitle, depending on the spotting that is considered most suitable:

What am I going to do?
- Put it under the rug. - It’s a wall-to-wall carpet.
- What am I going to do? - Put it under the rug. It’s a wall-to-wall carpet.
- What am I going to do? - Put it under the rug. - It’s a wall-to-wall carpet.
Hyphens are also used, conjointly with the repetition of a letter, to graphically represent the stutter in the diction of a character on screen:
And she t-t-t-told him to get lost.
A hyphen to divide words at the end of a line may be common in written texts, but it is never seen in subtitling since it makes reading more difficult:
Those are cops and fire-
men from Albacete.
Those are cops and firemen
from Albacete.
The parenthetical hyphen, used as an alternative to parentheses, is discouraged in subtitling:
A dog - not a cat -
is an animal that barks.
A dog, not a cat, is an animal that barks.
To indicate a dialogue exchange in the same subtitle event, a hyphen is used at the beginning of each of the two lines, followed by a space.
5.3.7 Triple dots (...)
As previously mentioned, each subtitle appears on screen as an individual and isolated item. To boost readability, when a sentence is not finished in one subtitle and needs to be carried over to the next subtitle or subtitles, continuation dots, a.k.a. ellipses, have traditionally been used as a bridge at the end of the first subtitle and the beginning of the following one to alert the viewer to this connection. This use of the three dots is unique to subtitling. No spaces are left between the dots and the words or punctuation marks that precede or follow them. Note that the word following the dots is written in lower case, as it is a continuation of the previous subtitle. To save space, some companies use three and two dots respectively to indicate sentence continuation:
Your friend called. He’s taking me to dinner...
...at some Mafia joint next week.
Your friend called. He’s taking me to dinner...
..at some Mafia joint next week.
Using three dots with this function seems a rather uneconomical way of conveying information in a professional practice where space is at a premium. The most common practice these days is not to use any ellipses when an ongoing sentence is split between two or more consecutive subtitles. If the subtitle does not have a full stop at the end of the line, it means that the sentence is not finished and must continue into the following subtitle. The absence of the dot, together with the fact that the next subtitle starts with a word in lower case, is a sufficient indicator that the second subtitle event is the continuation of the previous one:
Your friend called. He’s taking me to dinner
at some joint next week.
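Purely as an illustration of this convention, a quality-check script could flag sentence continuation along the lines of the hypothetical sketch below; the function name and the simple heuristics are ours, and a real tool would need to handle quotation marks, numbers and other edge cases.

SENTENCE_END = (".", "?", "!")

def continues_into_next(current: str, following: str) -> bool:
    # True if 'current' seems to carry its sentence over into 'following':
    # no sentence-final punctuation at the end of the first subtitle and
    # a lower-case word opening the second one.
    current = current.rstrip()
    following = following.lstrip()
    return not current.endswith(SENTENCE_END) and following[:1].islower()

print(continues_into_next("Your friend called. He's taking me to dinner",
                          "at some joint next week."))   # True
print(continues_into_next("Your friend called.",
                          "He's taking me to dinner."))  # False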
There are occasions when ellipses connecting subtitles are used to strengthen internal cohesion and facilitate the viewer’s understanding of the message. This is particularly true in the case of songs, where the spotting of the lyrics can be tricky because of the rhythm, and the time allocated for the translation of just a few words can be very long. Nino Matas (personal communication, our translation), the subtitler into Spanish of Moulin Rouge, comments that in the film:

most of the subtitles are the lyrics of the songs, and the time gap between subtitles was so long that, if we didn’t put the continuation dots at the beginning of the subtitles, it was very difficult to relate the content with the previous subtitle. In many cases, the vowel at the end of a verse could still be heard when the following subtitle had already been cued in, and sometimes the subtitle was a single word, or two. Without the continuation dots the word was ‘isolated’ for too long, lacking temporal connection with the previous and the next subtitles.
Another function attached to the three dots is the same as in other written texts, i.e. to indicate prosodic features like pauses and hesitations in the way speakers deliver their utterances. A blank space is always left after the dots but not before, as in the following examples:
You mean that... you won’t do it?
It’s not... true, is it, Renato?
When a sentence is carried over to the next subtitle because of the presence of a lengthy pause that calls for two different subtitles, suspension dots are then used at the end of the first subtitle but not at the beginning of the next one, which starts with lower case:
You mean that...
you won’t do it?
When there is a clear interruption, usually followed by a new sentence that marks a change of direction in the thread of the conversation, a space is left after the dots and before the next word, which is spelt with a capital letter:
I knew he’d... But it doesn’t matter now.
When a forced narrative interrupts dialogue, an ellipsis should be used at the end of the sentence in the subtitle that precedes it and at the beginning of the sentence in the subtitle that follows it:
I really enjoy walking in the country...
DO NOT ENTER
...as it makes me feel free and happy.
Triple dots are also used to indicate that a subtitle is starting mid-sentence, when the onset of the original sentence is inaudible or missing in the soundtrack:
...and he never played it again.
They are also used to convey that a sentence or idea has been left unfinished in the original too:
- Come in.
- Do you like cho...?
When a character finishes a sentence that has been started by another person:
- A cake. On top, I want you to write...
- “To Lolo. From Auntie”, as always.
When a list of items is deliberately not completed:
She took her books, records, DVDs...
Triple dots are also used to link subtitles that can be perceived as being far from each other, as in the case of overlapping dialogue, when a second speaker intervenes, interrupting the utterance of the first speaker. In these cases, continuity should be signalled with the three dots at the end and the beginning of the subtitles affected (first example). The dots are not required, however, when the subtitles follow each other (second example):
- We have been travelling since...
- We are all tired here.

...the early hours.
- We are all tired here.
- We have been travelling since the early hours.
The three dots should never be used to transmit to the viewer that the actors are saying a lot more than can actually be fitted in the subtitles. Three dots are never used as continuation dots, unless there is a long pause that forces the cueing of two different subtitles. The absence of a full stop at the end of a subtitle is sufficient to indicate that it is unfinished and continues in the following subtitle.
5.3.8 Asterisk (*)
The asterisk is mainly used in a written text to signify that a letter or letters have been intentionally omitted from what can be considered by some as an objectionable word or expression. Rarely used in the past, asterisks seem to be creeping in as an answer to the ever more frequent bleeps that punctuate some television programmes in certain countries. This prudery is responsible for the fact that some of Helen Mirren’s lines as Detective Superintendent in the British TV series Prime Suspect have been bleeped out when broadcast on American public television. The extent of the manipulation goes to rather extreme and unsophisticated lengths, blurring and obscuring the investigator’s mouth every time she loses her temper, so that her expletives cannot be
lip-read either (Baxter 2004). Frequently, these words are edited out completely unless their obliteration has an impact on the syntax of the sentence or is otherwise too conspicuous, in which case some vendors and broadcasters resort to the use of asterisks, as in the following example from the British documentary Joanna Lumley’s Greek Odyssey, in which a racy poem in Greek is subtitled into English: Μ’ αρέσει το κολαράκι σου, το μυτερό βυζί σου και έτσι αποφάσισα να γαμηθώ μαζί σου
[I like your little bum, your pointed boob and so I have decided to get fucked with you]
Figures 5.1 & 5.2. Joanna Lumley’s Greek Odyssey, episode 4, “Mount Olympus and Beyond”
5.3.9 Slash (/)
This punctuation mark, also called stroke or forward slash, has a very limited presence in subtitling and is sporadically used to abbreviate the message as part of well-known abbreviations like c/o (care of), km/hr (kilometres per hour) or 1/3 (one third) in English. These tend to make more of an appearance in films and programmes where scientific jargon is of the essence:
Pulse 88. Blood pressure 140/95. It’s a bit elevated.
5.3.10 Other symbols
Mathematical symbols of the type + (plus), > (greater than) and = (equal to) should normally be avoided in subtitling, except when subtitling specialized documentaries. Depending on the cultures and languages, symbols with which audiences are familiar can be used occasionally:
# number
% percentage, per cent
$ dollars
€ euros
¥ yen
¢ cents
£ pounds (sterling)
& and
In any case, the full expression ought to be the first choice if the spatial and temporal limitations allow it, and the symbols used only as a last resort:
I forgot to tell you that she owns 80 per cent of the club.
Whether references to foreign currencies ought to be converted has to be considered case by case. Before deciding whether pounds sterling or dollars have to be translated into, say, euros or rupees, or the other way round, consideration needs to be given to how well acquainted the target audience is with the foreign currency, and to how important it is for the viewer to have an exact translation of the amount mentioned rather than just a rough idea of whether it is very much or very little. Normal practice tends to avoid converting foreign currencies into local ones and, whenever possible, the general rule is to use the full name rather than the symbol:
- What’s his stake?
- I’d say 10,000 dollars.
If the limitations do not allow for this, then the symbol can be used. In English, the symbol precedes the amount and is joined to it without a space, whereas in many other languages it tends to be placed after the digit:
After a long conversation we paid them £ 17,000 in cash.
After a long conversation we paid them £17,000 in cash.
If the context is sufficiently clear, the reference to the currency can be omitted in the translation:
Would you like to try our new bacon and egg sandwich for one dollar twenty-nine only? →
Would you like to try our new sandwich for 1.29 only?
5.3.11 Capital letters
Capital letters should be used in subtitling in exactly the same way as they are used in standard writing, i.e. at the beginning of proper names and to start a new sentence after a full stop, a question mark or an exclamation mark. To have the full text capitalized is infrequent, for not only do capital letters occupy more space than small letters, they are also more difficult to read. If used, they are always full caps and never small caps, and the subtitle containing them does not normally have a full stop at the end of the line. The following instances may call for the use of upper case:
• The title of the film or programme being subtitled:
THE KILLING
• Graffiti, newspaper headlines, banners, writing on clothes, messages on computer monitors and any other onscreen text that is relatively short and written in upper case in the original production. The first subtitle is part of the decoration on a celebration cake, while the second translates the content of a banner paraded in a street demonstration:
20 YEARS OF HAPPINESS
FAMILIES OF THOSE MISSING FOR POLITICAL REASONS
• On some occasions, capital and small letters appear in the same subtitle to reflect the different font sizes visible on screen:
TO THEIR LEGITIMATE FAMILIES
The grandmothers of the Plaza
5.3.12 Quotation marks or inverted commas ("..."), (“...”), (‘...’)
When emphasis is needed to foreground certain words or expressions, subtitling is relatively limited in the number of typographical effects that it can exhibit. Although technically possible and easy to activate, effects like bold, underline or a change of colour are virtually never used in standard, interlingual subtitling. The only two main devices that subtitlers can exploit are quotation marks and italics (§5.4.1), whose functions can sometimes overlap depending on the companies. The mere fact that quotation marks take up more space than italics can on occasion tip the balance in favour of italics, even though this might go against standard practice. On the whole, quotation marks are more frequent in subtitling than italics, one of the reasons being legibility, as italics are considered somewhat harder to read than regular Roman text. Quotation marks, also called inverted commas, speech marks or quotes for short, differ in shape depending on the language, and some, like the chevrons («...») used in French and Spanish, are not common in subtitling, where vertical quotes tend to be used instead. Two main types can be distinguished, single (‘...’) and double (“...”), ("..."), with the latter being the most frequently used in subtitling. Quotation marks are mainly used to indicate direct speech and to reproduce the exact words of a citation from a particular source, such as a book, a newspaper, the lines of a film or what someone else has said. They are also used when a person is reading a text out loud:
She should tell him, “Get out, you stupid idiot!”
The dictionary says that a liar is “a person who tells lies.”
If a citation has to continue over several subtitles, different approaches are possible. Following the general convention applied in normal written texts, the inverted
commas are used only at the beginning and the end of the quote. However, given that the viewer can only see one subtitle at a time, some companies prefer to use opening and closing quotation marks in every subtitle to remind the viewer that a quote continues: “Sergeant Chirino, terrified, hides behind the well.
Moreira escapes over the fence. He fires his pistol at the sergeant who is badly hurt in the left arm.”
“Sergeant Chirino, terrified, hides behind the well.”
“Moreira escapes over the fence.”
“He fires his pistol at the sergeant, who is badly hurt in the left arm.”
An intermediate solution, recommended here, consists in opening the quotation marks at the beginning of each subtitle to remind the viewer of the citation, and closing them only in the last subtitle of the series:
“Sergeant Chirino, terrified, hides behind the well.
“Moreira escapes over the fence.
“He fires his pistol at the sergeant, who is badly hurt in the left arm.”
Question and exclamation marks follow quotation marks when not part of the quoted material:
Why did he say “I hate you”?
I asked her, “Why have you done it?”
Full stops and commas are always placed inside the quotation marks:
She literally said, “Trump is a chauvinist pig.”
Single inverted commas are used to quote within a quote:
He stood there and asked, “How is ‘cemetery’ spelt?”
Quotation marks can be used to fulfil several other functions which, depending on the companies, can be covered by italics as well. For example, they can indicate that a word or expression is being used with a metalinguistic value, i.e. to speak about language itself:
Laura, in the leader’s speech, replace the word “scum” with “lout.”
Quotation marks are also used to denote words or expressions that have been made up or are grammatically or phonetically incorrect: She called him “Daiaz” instead of Díaz.
Lexical items that refer to concepts or ideas that are somehow special and should not be taken at face value also call for quotes. Words and expressions in this category tend to be pronounced with special intonation or prosodic emphasis:
You can disguise it all you want under your “Patrick’s good advice.”
By saying your “son” you mean Enrique, don’t you?
Quotation marks are also used to stress the value of nicknames, to underline plays on words or to signal that a word or expression is being used ironically:
At least I don’t call him “fruitcake” like the other kids.
You were the one doing the “hip, dope-smoking homo” act.
5.4 Other conventions
In this section, other typographical conventions that can also be found in subtitling are discussed.
5.4.1 Italics
In addition to quotation marks, italics are the other major typographical recourse available to subtitlers to call attention to certain elements of the text. This printing type slopes to the right (italics) and has the great advantage of adding emphasis to a word or phrase without taking up any extra space on screen. The downside is that it does not stand out prominently on screen and can sometimes be hard to distinguish from the rest of the text. Normally, only languages based on the Roman alphabet make use of italics as a typographical device. One of the unique functions of italics in subtitling is to account for utterances spoken by characters who are not in the scene, and not merely off screen or off camera. If the speaker is not in shot but shares the same space or is physically near the rest of the actors on screen, e.g. in the same room or just outside an open door or a window, Roman type is then used. The same applies when the speaker is on and off camera. Italics are also used to represent voices that are heard through a machine or electronic device, whether on or off screen, such as a door interphone, a radio, a television set, a computer or a loudspeaker. They are also activated when translating the interventions of a character who is at the other end of the telephone line, whom the viewer can hear but not see at any point during the conversation. If, however, the editing of the film resorts to the typical shot–reverse shot – i.e. the camera moving alternately from one speaker to the other – in such a way that the viewer has the opportunity
of seeing both actors, it is advisable not to italicize any of the subtitles. It can be confusing, and arbitrary to some extent, to see the utterances of the two actors keep changing their printing type if italics are assigned to both characters. The confusion is accentuated when the shot changes are fast cut and some of the subtitles have to stay on screen despite the cuts. Italics are also useful to represent voices from within, e.g. voices that are in a character’s mind, interior monologues, voices that are heard in dreams and the voice of an onscreen character expressing unspoken thoughts. They also indicate voiceovers, such as not-in-scene narrators, unless this is the only voice heard in the programme, as may be the case with some documentaries, in which case Roman type is used. Lexical borrowings from other languages are also to be italicized if they are not considered fully integrated in the TL:
And you’re a jamona.
However, italics should not be used for names of pop groups, sports teams, restaurants, companies, drinks or food that are reasonably well known by the target community:
U2 recorded it a few years ago.
They’ve been eating at McDonald’s.
Literary and bibliographical references, titles of publications and books, as well as titles of films, other audiovisual programmes, shows, operas and songs, and names of record albums also go in italics:
- Who’s that?
- Albert, from the Little House.
I want Renato to take you through The Queen of Broadway.
You’re So Vain, that’s Carly Simon.
A special case requiring the use of italics is when another FL, unfamiliar to the target audience, can be heard in the film or programme. If the decision is to translate this information into the TL, all the exchanges in the less-used language in the film ought to be italicized. The Australian film Head On stars a young man caught between his Greek heritage and the world of music, sex and drugs in the city of Melbourne. Although English is the main language throughout the film, Greek exchanges are also very frequent in his conversations with members of his family, and these are subtitled in the original film. The strategy followed in the Spanish subtitled version has been to translate and italicize all conversations conducted in Greek and to use plain letters for the translation of the English dialogue. When a word or expression needs to be highlighted with italics but these are already being used in the same sentence, inverted commas can be used for the emphasis:
Friends, stay with us for more surprises on “People Today.”
Another use attached to italics is to particularize a word or phrase by stressing it. An example can be found in Woody Allen’s film Manhattan Murder Mystery, where the noun Waldron appears in italics as it plays an important part in the development of the plot and the resolution of the mystery. The lack of context leads the protagonist to believe that it must be somebody’s name, only to discover later in the film that it is in fact the name of a hotel. After the discovery, the noun is never again italicized. Italics are also favoured by most companies to translate the lyrics of songs (§5.4.1.1) as well as certain written messages, letters and inserts appearing in the programme or film (§5.4.1.2). This usage competes, however, with upper case (§5.3.11). If the text on screen is short and written in upper case, the subtitles tend to go in capital letters. If, on the contrary, the text is in lower case and relatively long, italics are normally called for.
5.4.1.1 Songs
The translation of songs will depend on the plot-pertinence of the lyrics and on whether the rights have been granted (§7.3). Though practice varies, and some countries and companies avoid italics when dealing with songs and prefer to use the same font as in the rest of the subtitles, albeit displacing the text to the left-hand side of the screen, most languages prefer to italicize the content of the subtitles reproducing song lyrics and to position them in the same manner as the other subtitles. Frequently, song subtitles are punctuated following the conventions of poetry: an uppercase letter is used at the beginning of each line, and only question and exclamation marks can appear at the end of a line, no full stops or commas. However, the recommendation in this book is to apply the same punctuation rules as in the rest of the subtitles to facilitate the task of reading the text. All songs should go in italics:
A kiss may be grand But it won’t pay the rental
A kiss may be grand but it won’t pay the rental
On your humble flat Or help you feed your pussycat
on your humble flat, or help you feed your pussycat.
Men grow cold As girls grow old
Men grow cold as girls grow old.
And we all lose our charms in the end
And we all lose our charms in the end.
But square cut or pear-shaped These rocks don’t lose their shape
But square cut or pear-shaped, these rocks don’t lose their shape.
Diamonds are a girl’s best friend
Diamonds are a girl’s best friend.
An ellipsis should be used when a song continues in the background but is no longer subtitled because the dialogue takes precedence. Because of their rhythm, songs can be timed in a more flexible manner than dialogue exchanges. Sometimes, subtitles will have to be left on screen a bit longer than strictly necessary. In other cases, the syntax of the target text might have to follow the original too closely in order not to advance information. In the previous example, the content and syntax of the last line of the song are relatively straightforward and
should not give rise to particular problems when translated into other languages, like Spanish: Un diamante es el mejor amigo de la mujer.
[A diamond is the best friend of the woman.]
In the film Moulin Rouge, the Spanish translation above is used whenever Nicole Kidman sings the refrain at a standard pace. However, at the very end of the song she slows down and creates long pauses in the delivery of this line. To keep synchrony with the soundtrack, the end result is the spotting of four subtitles instead of one, the use of triple dots to link the subtitles, and a TT with rather twisted syntax, verging on ungrammaticality in Spanish:
Un diamante... [A diamond...]
...es, de la mujer,... [...is, of the woman,....]
...el mejor... [...the best...]
...amigo. [...friend.]
5.4.1.2 Onscreen text
Onscreen text can take many shapes and forms: letters, documents, neon signs, newspapers, SMS texts and the like. When plot-relevant, this information is translated with what are known as forced narrative subtitles, which tend to be in ALL CAPS:
HE’LL BE KILLED
When subtitling long passages of onscreen text, like a letter or a prologue, sentence case should be used in normal font, i.e. no italics. In the event of forced narrative interrupting dialogue, three dots should be used at the end of the subtitle that precedes it and at the beginning of the subtitle that follows it:
I think you should reconsider...
DON’T FEED THE MONKEYS
...your whole approach.
On occasions, viewers are not expected to read the onscreen text as the information is conveyed aurally in the soundtrack. Adapting the recommendations put forward
by Ivarsson and Carroll (1998: 119), the content of the subtitles should be presented in inverted commas, since this can be considered a case of citation. As for the font style, it should be normal in the following instances:
• When the person writing the letter repeats the content out loud whilst writing it.
• When the fictional writer reads the content of the letter aloud once it has been written.
• When the addressee reads the letter out loud.
Italics are called for when:
• The voice of the person writing the letter can be heard off screen, as an interior monologue, whilst the actual text is being written.
• We hear the off-screen voice of the person who has written the letter, whilst we see the addressee reading it on screen.
• We hear the voice of the person who has written the letter off screen, as an interior monologue.
From the point of view of the spotting, forced narrative information should not be combined with dialogue in the same subtitle, and the forced narrative subtitle should last as long as the onscreen text, except for cases where reading speed and/or surrounding dialogue takes precedence.
5.4.2 Colours
One of the main differences between interlingual and intralingual subtitles is their different approach to the use of colours. Subtitles for D/deaf and hard-of-hearing people tend to rely on the use of different colours to identify the various characters that take part in the programme and to add emphasis to certain words and expressions. Given that hearing viewers can retrieve this information effortlessly from the original soundtrack, the use of polychrome features has been deemed irrelevant in interlingual subtitling, in which historically just one colour has been used throughout the entire production. The two main colours used in our profession are white and yellow. In cinema, subtitles are mostly white simply because they have been laser engraved on the celluloid and are an integral part of the film copy. Thus, when they are projected on a big, white screen we see them as white, as what we are seeing is the screen through the celluloid. If the screen were pink, we would see them as pink. On some occasions, they are yellow, which means that the subtitles are not engraved on the copy of the film but electronically projected onto it so as not to damage the celluloid. Although electronic subtitling allows for a wide spectrum of colours in which subtitles can be beamed, only yellow and white are normally used. Productions broadcast on television or distributed on DVD or VOD usually resort to subtitles that are projected onto the programme, rather than engraved, and, once again, only yellow or white are employed. To ensure optimum legibility of the subtitles, especially when they are to appear against a pale backdrop, some companies prefer to display them inside a grey to black
background box, whereas others surround the text with a contrasting colour, e.g. a black outline whose thickness can be selected to accentuate or attenuate the contrast of the text with the images.
5.4.3 Abbreviations
Abbreviation is a general term used to refer to a shortened form of a word or phrase. It can be made up of one or several letters and is normally used because it occupies less space than the word or phrase it replaces. In the case of subtitling, most companies recommend the avoidance of abbreviations, unless there are space and/or time limitations. If used, they should be known by the target audience or, at least, not cause any confusion. There are four main ways of shortening a word in English: clippings, acronyms, contractions and blends.
1 Clippings or shortenings omit syllables, as in ‘fab’ (fabulous), ‘mo’ (moment), ‘flu’ (influenza), ‘plane’ (aeroplane) and ‘pro’ (professional). They behave like proper words and usually belong to an informal register.
2 Acronyms can be formed in various ways. The most common ones contain the first letter of each word, as in ‘FBI’ (Federal Bureau of Investigation), ‘BBC’ (British Broadcasting Corporation) and ‘EU’ (European Union). They are written without full stops between the letters. If real, most of these abbreviations stand for organizations and bodies that tend to have an equivalent in other languages and are thus easy to translate. If made up, they can be used as in the original:
I’ll try to get my old job back at E.U.R.E.S.C.O., I suppose.
I’ll try to get my old job back at EURESCO, I suppose.
They are also created by using the initial letters of words, as in ‘radar’ (RAdio Detection And Ranging), ‘laser’ (Light Amplification by Stimulated Emission of Radiation) or ‘taser’ (Thomas A. Swift’s Electric Rifle). When they are fully lexicalized, they tend to be written in lower case. A new breed of colloquial acronyms in English is based on common expressions like ‘pto/PTO’ (please turn over), ‘fyi/FYI’ (for your information), ‘ott/OTT’ (over the top) or ‘ooo/OOO’ (out of office). Subtitlers into English need to assess carefully whether the register of the original dialogue justifies the use of these abbreviations and should avoid using them simply because they save space.
3 Contractions leave part of the word out, usually the middle, as in ‘Dr’, ‘Mlle’ or ‘km’.
4 Blends or portmanteaux join parts of two words together to become a new word, as in ‘Interpol’ (International Police), ‘Eurovision’ (European Television) or ‘heliport’ (helicopter + airport), and are only rarely written in all capitals:
Your husband was tailed by EUROPOL.
Your husband was tailed by Europol.
To keep the typography clean, abbreviations are normally written without full stops or spaces between the letters:
I don’t give a damn about your P.M.
I don’t give a damn about your PM.
My mom’s not even 50 and she’s right out of W.W.I.
My mom’s not even 50 and she’s right out of WWI.
Abbreviations of the type ‘km’ (kilometre/s), ‘cm’ (centimetre/s), ‘kg’ (kilogram/s), and ‘m’ (million/s) should be used sparingly and priority given to the full spelling of the nouns. If used, they are always written immediately after the numbers and separated by a space:
After having driven 67km, we decided to go back home.
After having driven 67 km, we decided to go back home.
The English language is very prone to using and creating new words like acronyms and clippings, which can become a source of major challenges for subtitlers working into other languages where this linguistic phenomenon is not so widespread. Given the media limitations encountered in subtitling, the translation of expressions like ‘BLT’ (sandwich of bacon, lettuce and tomato), ‘YOLO’ (you only live once), ‘TLC’ (tender loving care) and ‘LBD’ (little black dress), which are pronounced by spelling out the various letters, can prove rather problematic when the TL cannot clip the phrases in the same way and the whole text needs to be spelt out in the subtitles. As for their translation, subtitlers must be careful when transferring abbreviations to their working language since sometimes they remain the same, but in other cases they change radically or can be confused with other abbreviations better known in the TL. In some cases, the abbreviation of a similar institution in the target culture (TC) can be used in the translation, but in other instances the best strategy is to spell out the abbreviation or to replace it with an explanation:
Police! Civic authorities! ASPCA! ASAP!
(ASPCA: American Society for the Prevention of Cruelty to Animals)
→
Police ! Protection civile ! SPA ! Vite !
[Police! Civic Protection! SPA! Quick!]
(SPA: Société Protectrice des Animaux)

I called EMS and they got here as soon as they could, but they were too late.
(EMS: Emergency Medical Services)
→
He llamado a urgencias, pero han llegado tarde.
[I’ve called emergencies but they have arrived late.]
When a made-up abbreviation has an important role in the programme or film and recurs with some frequency, one possible strategy, if the limitations allow it, is to present the actual abbreviation accompanied by a translation explaining it the first time it appears. After that, the abbreviation can be used in its original form every time it appears in the subtitles.
5.4.4 Numbers
Provided there is space available, the general rule is that numbers from one to ten are written out, while from 11 onwards they are written in digits:
There are only 2 entrances to the house. It’s almost seventeen years since you saw the boy.
There are only two entrances to the house. It’s almost 17 years since you saw the boy.
Exceptions to this rule are the numbers of houses, flats, apartments and hotel rooms, which are always written in digits, as well as the days of the month:
My birthday is on the fourth of April.
My birthday is on the 4th of April.
Numbers up to ten are also presented as digits if they are next to abbreviated units of weight and measurement, in which case the contraction is written after the figure, separated by a space:
They were caught in the cinema with seven kg of drugs.
They were caught in the cinema with 7 kg of drugs.
But when weights and measures are not abbreviated, the numbers should be in letters:
She ran 3 kilometres until the nearest beach.
She ran three kilometres until the nearest beach.
A general stylistic rule recommends that when a number begins a sentence or subtitle, it should be spelt out:
30 days since she left?
Thirty days since she left?
Approximate quantities, along with set phrases or idioms involving figures, are usually written in letters:
It won’t work in 1,000,000 years.
It won’t work in a million years.
If the enumeration is random, normally used with a hyperbolic function, ordinal numbers tend to be written with letters:
I’ll tell you for the 30th time.
I’ll tell you for the thirtieth time.
On the whole, there is a high degree of flexibility when dealing with numbers in subtitling, and the aforementioned rules may be broken if the space and time limitations are severe and the text of the subtitle cannot be condensed or reduced any further without compromising the content:
He’s paid five million for the two houses.
He’s paid 5 million for the 2 houses.
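Purely as an illustration, a subtitle editor or checking script might encode the basic one-to-ten rule described at the start of this section as in the hypothetical sketch below; the function name and parameters are ours, and the further exceptions discussed above (dates, room numbers, sentence-initial figures, set phrases) are deliberately left out.

# Words for the numbers that the basic rule asks us to spell out.
NUMBER_WORDS = {
    1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
    6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten",
}

def render_number(n: int, next_to_abbreviated_unit: bool = False) -> str:
    # Spell out one to ten, use digits from 11 onwards, but keep digits
    # when the number sits next to an abbreviated unit such as 'kg'.
    if next_to_abbreviated_unit:
        return str(n)
    return NUMBER_WORDS.get(n, str(n))

print(render_number(2))    # -> "two"
print(render_number(17))   # -> "17"
print(render_number(7, next_to_abbreviated_unit=True) + " kg")  # -> "7 kg"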
When dealing with programmes in which many numbers occur, like some scientific documentaries, special attention has to be paid to the subtitles’ display rate since, while writing the numbers in digits clearly helps to condense the information, the actual reading of a number – say 379 – takes longer than the three spaces used on screen might lead us to believe. The spotting of these subtitles should take these aspects into account and allow viewers to read the message comfortably.
5.4.4.1 Time
When dealing with the time, whether using the 12- or the 24-hour system, the numbers are separated with a colon or a period, but never with a comma or a blank space. The use of the colon is becoming more common because it mimics the way digital clocks show the hour:
Where was he at 7’30?
Where was he at 7,30?
Where was he at 7 30?
Where was he at 7:30? Where was he at 7.30?
Words and phrases that do not strictly function as numbers should be spelt out:
We’ll get there in ½ an hour.
We’ll get there in half an hour.
Due care has to be taken with the use of the abbreviations ‘am’ (ante meridiem) and ‘pm’ (post meridiem) since they may not be appropriate in some languages. If used, a space is left between the number and the abbreviation: She’ll pick you up at 7pm.
She’ll pick you up at 7 pm.
5.4.4.2 Measurements and weights
When translating from English into other languages, it is important to convert measurements to the metric system, unless the original unit of measurement is relevant to the plot. The typical conversions are feet and inches into metres and centimetres for length, stones and pounds into kilograms and grams for weight, and Fahrenheit into Celsius for temperature. Whether the exact translation in digits needs to be given depends on the context. When the intention of the original is to give a general idea of the physical features of people, or to indicate broadly the distance that separates two cities, it is perfectly legitimate to offer an approximation and round the figure obtained after the conversion up or down to the nearest full number. In a science programme an exact translation may be required. Let us take a look at the following English statement:
I got an APB on a Caucasian male. Brown hair and eyes. 160 pounds, 5 feet 11.
Whilst the unit used to inform viewers of the weight of the person is spelt out, i.e. pounds, the height may be somewhat cryptic, and some knowledge of the source culture (SC) is needed to recognize that the element missing after the number 11 is the elliptical ‘inches’. Once the information is clear, and with the help of one of the many conversion calculators available on the net, we discover that the man weighs 72.576 kilograms and is 1.8034 metres tall. However, these figures are far too precise for what is needed in this particular context, and of the following two subtitles, the second one made it to the screen in Spanish:
He pillado a un hombre blanco. Pelo y ojos castaños. 72,576 kg, 1,8034 m. [I have caught a white man. Hair and eyes brown. 72.576 kg, 1.8034 m.]
He pillado a un hombre blanco. Pelo y ojos castaños. 70 kg, 1,80 m. [I have caught a white man. Hair and eyes brown. 70 kg, 1.80 m.]
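The conversion and rounding walked through above can be sketched as follows; the conversion factors are approximations and the rounding choices (nearest 10 kg for weight, centimetre precision for height) are our own illustrative reading of the broadcast subtitle, not a general rule.

LB_TO_KG = 0.4536   # approximate pounds-to-kilograms factor, as in the example above
IN_TO_M = 0.0254    # inches to metres

def convert_apb(pounds: float, feet: int, inches: int) -> tuple[float, float]:
    # Exact metric values for the weight and height given in the statement.
    kg = pounds * LB_TO_KG
    metres = (feet * 12 + inches) * IN_TO_M
    return kg, metres

kg, m = convert_apb(160, 5, 11)
print(f"Exact:   {kg:.3f} kg, {m:.4f} m")             # 72.576 kg, 1.8034 m
print(f"Rounded: {round(kg, -1):.0f} kg, {m:.2f} m")  # 70 kg, 1.80 m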
The use of fractions is not very common in subtitling and is only justified when the topic and nature of the audiovisual production make them appropriate, e.g. a documentary on maths or a science-fiction film:
Only 1/3 of the population managed to escape the fire.
Only a third of the population managed to escape the fire.
Depending on the subtitling program being used, fractions (3/4) might be replaced by fraction characters (¾).
5.5 Subtitling quality
Determining what quality is in subtitling, and in translation more generally, is as important as it is problematic. There is no universally accepted model for measuring translation quality (TQ), and even definitions of high-quality translation vary. Nevertheless, both in a professional context and in a training context, translations are evaluated and their quality is assessed. Generally speaking, quality assessment can be error-based or more holistic, but often the two approaches are combined. In the first instance, we need to distinguish between quality control (QC) and quality management (QM) or quality assurance (QA). In our definition, QM and QA are considered synonyms and hierarchically overarching concepts that encompass different forms of QC. The latter consists of the procedures that are applied before, during and after the translation process. They are applied to the translation process and the translation product, in order to monitor that specific requirements are met. Although this may sound straightforward, the very concept of ‘specific requirements’ throws a spanner in the works, for no two translations from the same ST are ever identical, and when different versions co-exist, one is not necessarily qualitatively better than the other, at least in absolute terms. Quality in translation, including subtitling, is conditioned by socio-cultural evolution: its guidelines and requirements vary with context, time, production and distribution technology as well as diversifying audience requirements. Moreover, over time, ST-oriented QC has evolved into a more functional target text-, TC- and target audience-oriented QC, especially for non-literary texts. Any specific requirements whose implementation has to be controlled are therefore unstable and situationally determined, even though the very concept of control
implies some verifiable standards and/or procedures that can be determined and agreed on by all stakeholders involved in the QC process. It is generally accepted that the only solution is for evaluators to aim for intersubjective reliability (Schäffner 1998), a concept that is akin to interrater reliability in empirical research. This means that QC guidelines or parameters must be formulated clearly enough for different evaluators to understand them as unequivocally as possible and apply them in the same way, so that different evaluators’ QC yields virtually the same or very similar results. The greater the agreement between different evaluators, the greater the chances of a fairly objective evaluation. The questions we must return to then are: what exactly must be evaluated? What should evaluators’ priorities be in what circumstances, and can some form of agreement be reached on this for interlingual subtitling? The two main approaches to QA in subtitling focus either on the translation process – the use of templates in multilingual projects (Nikolić 2015) or the efficiency of subtitling production chains (Abdallah 2011) – or on the translation product, as discussed later. In both cases, quality is defined situationally. In translation process QA, quality is seen to be determined by the efficiency of the workflow, the different professionals involved in the translation process, the materials that are put at the disposal of the translators, the time they are allocated to carry out the translation and the remuneration they will receive. These same factors also impact on the quality of the translation that this workflow produces, bearing in mind the purpose it must fulfil, for whom, where and in what circumstances, as highlighted by Kuo (2015) in her study on the theoretical and practical aspects of subtitling quality, which paints a collective portrait of the subtitling scene by canvassing subtitlers’ views and providing a glimpse of the demographic make-up of the professionals involved in this industry. In their survey on QA and QC in the subtitling industry, which addressed both professional subtitlers and their clients, Robert and Remael (2016) outline the key quality parameters used in subtitling QC today, based on the literature on quality parameters in translation revision, industry guidelines and Ivarsson and Carroll’s (1998) Code of Good Subtitling Practice. The authors identify the following four translation and four technical parameters for subtitling quality:
Translation quality parameters
• Content and transfer (including accuracy, completeness and logic)
• Grammar, spelling and punctuation
• Readability (ease of comprehension and coherence between individual subtitles)
• Appropriateness (socio-cultural features of the audience)
Technical parameters
• Style guide
• Speed
• Spotting
• Formatting
The authors’ findings show that whereas subtitlers claim to focus on all of the aforementioned parameters, their clients tend to pay much more attention to the technical
than the linguistic ones, which also appears to be the case in many of the subtitling guidelines issued by LSPs. What is evident is that all of them are important, since a perfectly translated dialogue exchange that does not adhere to an acceptable display rate would be lost on the audience trying to keep up with the images and text, and a well-timed but incomprehensible or grammatically or stylistically incorrect translation would be detrimental to the viewers’ enjoyment of a film. The NER model (Number of words, Editing errors, Recognition errors), proposed by Romero-Fresco and Martínez Pérez (2015) to gauge the quality and accuracy rate of intralingual live subtitling, has inspired other models in interlingual live subtitling (Romero-Fresco and Pöchhacker 2017; Robert and Remael 2017) and in pre-prepared interlingual subtitling (Pedersen 2017). In the case of the NTR model (Number of words, Translation errors, Recognition errors) used in live subtitling, translation errors are identified on the basis of errors in content and in style, while recognition errors are due to glitches derived from the speech recognition system that is customarily used for live subtitling. Errors in both areas can be minor, major or critical. Minor errors are inconsequential deviations (e.g. the omission of a comma); major errors are noticeable and viewers may find them disturbing, but they can still identify them and possibly mentally rectify them; and critical errors may go unnoticed because they fit the context but change the ST meaning considerably. In NTR, errors are counted, rated and entered in a formula to calculate the subtitle accuracy rate. Based on previous research, an accuracy rating of 98% is considered good. However, the QC procedure does not end here. The accuracy rating must be complemented by a more holistic, overall assessment that considers comments on issues not included in the formula, such as effective editions, the speed, delay and overall flow of the subtitles, speaker identification, the coherence between the original image/sound and the subtitles, whether too much time has been lost in the corrections, etc. (Romero-Fresco and Pöchhacker 2017: 163)
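Purely as an illustration of how such an error-weighted accuracy rate works, the sketch below implements the commonly cited NTR formula, accuracy = (N − T − R) / N × 100; the 0.25/0.5/1 weights for minor, major and critical errors are the values usually reported for the model, but they, and the scoring conventions, should be checked against the published sources before being used in a real assessment.

# Assumed error weights (check against the published NTR model).
WEIGHTS = {"minor": 0.25, "major": 0.5, "critical": 1.0}

def ntr_accuracy(n_words: int, translation_errors: list[str],
                 recognition_errors: list[str]) -> float:
    # Accuracy = (N - T - R) / N * 100, where T and R are the weighted
    # sums of translation and recognition errors respectively.
    t = sum(WEIGHTS[e] for e in translation_errors)
    r = sum(WEIGHTS[e] for e in recognition_errors)
    return (n_words - t - r) / n_words * 100

# A 300-word live-subtitled excerpt with a handful of errors.
score = ntr_accuracy(300,
                     translation_errors=["minor", "major"],
                     recognition_errors=["minor", "critical"])
print(f"{score:.2f}%")  # 99.33%, above the 98% threshold mentioned above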
If one ignores the recognition errors, which are not relevant for pre-prepared interlingual subtitling, the categories this system proposes are not so different from the items listed in the grids discussed by Robert and Remael (2016). What remains problematic, or complex at the very least, however, is the attribution of a particular weight to the different types of errors. Robert and Remael (2017) also point out that adaptations of the NER system, designed for intralingual live subtitling, to the case of interlingual (live) subtitling may not reckon enough with the subjective judgement of viewers and how they experience errors. In brief, we still run up against subjectivity issues when rating errors. Furthermore, when transferring this model to the practice of pre-prepared subtitling, other factors come into play, such as the fact that viewers may be less tolerant of mistakes as they know that the subtitles were prepared beforehand (Pedersen 2017), which ought to be considered when errors are weighted. Pedersen’s (ibid.) error-based FAR model (Functional, Acceptability and Readability) considers quality from the perspective of the viewers’ appreciation of subtitling and suggests the use of local service providers’ subtitling guidelines as an evaluation tool. The guidelines are seen as a kind of translation brief that subtitlers must adhere to, which allows the author to produce a QC tool that is adaptable. The FAR model uses one full subtitle as its control unit and focuses on pragmatic equivalence or the
equivalence of the message that is communicated rather than the words it uses, taking subtitling’s tendency to delete and rephrase into account. Two types of errors are distinguished – semantic and stylistic – which can be rated as minor, standard or serious and given specific scores, much like the NTR model does. Apart from these, Pedersen also identifies readability issues, i.e. errors related to spotting, line breaks, punctuation, reading speed and line length. For some of these, the author also suggests specific ratings. Errors in segmentation that affect more than one subtitle, for instance, are more serious than line-break problems within one subtitle. For issues relating to reading speed and line length, he advises subtitlers to refer to the client’s guidelines. In both NTR and FAR what remains subjective is the weighting attributed to each specific error, which becomes especially tricky if style, situationality and user perceptions are to be taken into account. Additionally, both systems are quite unwieldy and can only be applied to film excerpts, since checking an entire production would be too time consuming. Nevertheless, they have their merits in that they try to systematize subtitling QC, which is why they are being used and tested by both scholars and service providers. Models on quality and QC in interlingual subtitling have not yielded a cut-and-dried solution in the industry that would help us quantify quality, though they confirm the existence of a general agreement on the parameters that are important. Professional subtitlers will have to balance these and consider the genre and context in which they are working as well as the guidelines or translation brief that they may have been given by the employer.
5.6 Exercises
For a set of exercises in connection with this chapter go to Web > Chapter 5 > Exercises
6 The linguistics of subtitling
6.1 Preliminary discussion
Subtitling is sometimes accused of making use of impoverished language because of the technical and linguistic straitjacket imposed on it. Others argue that creativity can overcome all obstacles, even in subtitling, because faithful translation is a relative or even useless concept.
6.1 Discuss what the aforementioned concepts might refer to: ‘impoverished language’, ‘technical and linguistic straitjacket’, and the relativity of faithful translation.
6.2 Try to determine where you stand on these issues, and draw up a list of five items that you expect will be a challenge in subtitling, from a linguistic point of view.
6.3 Watch a short excerpt from a popular FL TV series of your choice in a language you know and with subtitles in your mother tongue. Listen carefully to the dialogue and identify what the subtitles retain, leave out or rephrase. Do you have the impression that the subtitled rendering affects the verbal interaction in the original film? Why (not)? If you notice translation shifts, what do they affect most? Discuss in a small group.
6.4 Repeat the exercise for a completely different audiovisual genre, for instance a medical series, a documentary or a corporate video.
6.2 Subtitling: translation as text localization
Translation Studies has long moved away from the idea of any translation being a faithful rendering of its ST. In the 1980s, the interest of translation scholars gradually turned from a fixation on the ST as a determining factor in the translation from one language and culture to another to an interest in the TT and its function in the new hosting environment. As films and TV programmes are made, sold, re-edited, translated, remade, retranslated, redistributed, repurposed and therefore localized in different formats for different media, the notions of original text, or even source text, and author seem to become increasingly unstable. Nevertheless, STs continue to be
translated into TTs, and there is no denying that translation remains a powerful linguistic as well as cultural act. As texts are translated and localized, their links with different previous texts, the functions they will fulfil in their new target environments, and how they will be received can in no way be taken for granted, especially since texts are travelling further and further afield and their audiences themselves continue to diversify. In today’s globalized and digitized world, source and target texts become confused in yet another way: through fusions of cultures producing hybrid cultural artefacts. Originally, hybridity may have been a feature of postcolonial texts (Mehrez 1992; Bhabha 1994), but nowadays it is central to our global age. As pointed out in §3.3.2, many films resort to the portrayal of characters who speak more than one language or, as will be discussed in great detail in §7.2.2, they use hybrid, mixed or non-standard language variants. To this challenge of rendering hybrid forms or multilingualism in the subtitles, one can add the usual problems of linguistic transfer also encountered in other forms of translation, the complication of dealing with set phrases and idioms when these play with the images, the dilemma of how to deal with swearwords and taboo language, and the challenge of rendering intralinguistic and extralinguistic cultural references that can be heard (and seen) on screen (Chapter 8).

This awareness of the complexity of the ST and its sometimes ambiguous nature has contributed to an increased focus on the creative component in any form of translation (Remael 2003). All texts are written within and bound by certain constraints that writers and translators somehow must overcome. AVT in general, and subtitling in particular, are no different, even though early scholars in the field defined subtitling in terms of spatial and temporal constraints (Titford 1982) that supposedly did not apply to other types of translation. Even today, the technicalities of space and time (Chapter 4), with which the subtitler must indeed reckon, are often referred to in order to foreground, in a somewhat negative manner, the limitations of this particular form of AVT. And yet, many of the constraints within which subtitlers must work are not typical of subtitling only: translated poetry must deal with issues of layout and rhythm, and translated theatre with performance-bound constraints (Espasa 2017). Creative restraints resulting from space and time restrictions, the particularity of transferring speech into writing, and the presence of the images and the original soundtrack are some of the challenges that subtitlers must face, though all forms of translation pose challenges and all translated texts are the result of reading, interpretation and choice. In many if not most cases, the co-existence of more than one communication channel has a direct impact on the translation process. Whether a translation solution is ‘good’ or ‘bad’ therefore depends on a host of factors and cannot be judged out of context. This also means that the apparent simplicity of subtitles does not necessarily make them ineffective as a translation.

6.3 Text reduction

The written version of speech in mainstream commercial subtitles is nearly always a reduced form of the oral ST, although it must be stressed that the degree of reduction varies across countries and companies and that reading speeds are on the increase in some contexts. Nevertheless, subtitles are rarely a verbatim and detailed rendering of the spoken
text, and they need not be. Since subtitles interact with the visual and oral channels of the film, a complete translation is, in fact, not always required. This does not mean that the viewers do not have a right to a qualitatively high-standard translation that will fill in the FL gaps for them. Quantity and quality are hardly the same. The concrete causes of the usual quantitative reduction in text and content are given next:
Why text reduction?
❶ Viewers/listeners can absorb speech more quickly than they can read, so subtitles must give them enough time to register and understand what is written at the bottom of the screen.
❷ Viewers must also be given the opportunity to enjoy the action on screen and listen to the soundtrack, so they must be given sufficient time to combine reading with watching and listening.
❸ Commercial subtitles are limited to a maximum of two lines. How much text they contain depends on the maximum number of characters allowed per line, the time available for any given subtitle, the subtitling display rate applied, which will depend on the assumed reading speed of the audience, and the pace at which the ST is actually pronounced.
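As a back-of-the-envelope illustration of the factors listed in ❸, the sketch below computes the character budget of a subtitle from its on-screen duration and an assumed display rate; the 17 characters-per-second rate and the 42-character line length are illustrative values only, not figures prescribed in this book.

def char_budget(duration_s: float, cps: float = 17.0,
                max_line_len: int = 42, max_lines: int = 2) -> int:
    # Maximum number of characters a subtitle can hold, given its duration,
    # an assumed display rate (characters per second) and the two-line limit.
    by_time = int(duration_s * cps)
    by_space = max_line_len * max_lines
    return min(by_time, by_space)

# A subtitle displayed for 2.5 seconds at 17 cps may hold at most
# 42 characters; longer dialogue must be condensed or omitted.
print(char_budget(2.5))   # -> 42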
There are two types of text reduction: partial and total. Partial reduction is achieved through condensation and a more concise rendering of the ST. Total reduction implies the deletion or omission of lexical items. Having assessed how much time and space are available for a given translation, and having ascertained that some form of text reduction is required, the subtitler then proceeds to:
• eliminate what is not deemed relevant for the comprehension of the message, and/or
• reformulate what is deemed relevant in as concise a form as is possible or required.
Very often both processes are at work, leading to the rewriting that is so typical of subtitling, as in Example 6.1. A man and a woman, who have contacted each other through the personals, meet for the first time. This is part of their conversation:
Example 6.1
Man: Quoi ? Vous avez déjà fait ça ? Rencontrer des hommes par annonce pour... ?
[What? You’ve done this before? Meet men through an ad to...?]
→
What? You’ve done this before? You’ve met men through ads to...
Woman: Non, non, je voulais dire : j’ai déjà rencontré des hommes, qui me plaisaient, enfin, je croyais qu’ils me plaisaient, puis, après, je voyais que non, en fait, je m’étais trompée, ils me plaisaient pas... pas du tout, vous comprenez ?
[No, no, I mean, I have met men before that I liked, well, I thought I liked them, but then, later on, I realized that I didn’t, in fact, I was wrong, I didn’t like them... not at all, you see?]
→
No, I mean I’ve met men I liked...

Well, I thought I did
then realized I didn’t.
I didn’t like them at all.
In the first subtitle nothing has been deleted or abbreviated. In fact, something has been added: a repetition of the personal pronoun 'you' plus the short form of the auxiliary verb 'have', i.e. 'you've' for vous avez, which is the result of subtitles' tendency to structure speech. All in all, the translation is rather literal. It is good to keep in mind that this is a regular occurrence and that the subtitler should not abbreviate the original unless this is absolutely necessary. The subsequent three subtitles cover the woman's reply. She speaks too quickly and says too much for her reply to be contained in just one subtitle. Her intervention has therefore been segmented into three subtitles and, in each one of them, something has been either deleted, or reformulated more concisely, or both. And yet, the woman's description of her experience comes across perfectly. In the second subtitle, one non has been deleted and the adverb déjà [before] has also been eliminated. The third subtitle resorts to a typical feature of English grammar with its use of 'I thought I did', reinforcing internal cohesion by referring back to the previous subtitle 'I've met men I liked'. In other words, the subtitler uses the form and content of the preceding subtitle to build on in the present one, and to reformulate without actually omitting much. The adverbs puis [then] and après [later on] have been joined in the single 'then', which is the obvious solution since this is a ST repetition, if not in form, at least in content. The most far-reaching instance of condensation occurs in the second line of the third subtitle. First of all, the personal pronoun 'I' is not repeated. Secondly, 'realized I didn't' is a reformulation and condensation of je voyais que non, en fait, je m'étais trompée [I realized that I didn't, in fact, I was wrong]; whereas en fait [in fact], used for emphasis only in the original, has been omitted in the translation. The last subtitle again omits and reformulates while condensing. The sentence ils me plaisaient pas... pas du tout, vous comprenez ? [I didn't like them... not at all, you see?] is rendered as 'I didn't like them at all'. When choosing between the negative pas [not] versus pas du tout [not at all], the stronger expression is the obvious choice, since the character corrects herself while speaking and it is the corrected version that is rendered. As for the tag question vous comprenez ? [you see?], it does not add anything to the propositional content of the sentence – even if it does contribute to interpersonal interaction – and so, it can be omitted. The previous self-contained scene is rather straightforward, and it is therefore relatively easy to determine what must be included, what must be rewritten and what must be omitted, given pre-established space limitations. However, this may not always be the case, and foolproof reduction or omission rules do not exist. In general terms, one could say that the subtitler must act on the principle of relevance. What is relevant is "connected or appropriate to the matter at hand" (NOED 1998), but there is more to this concept. It was Gutt (1991) who first applied the theory of relevance to the field of translation. Kovačič (1994) later tested its value for the study of subtitling. Briefly, it claims that communication works on a principle which operates in terms of a balance between processing effort and pay-off. This is known as the mini-max effect, i.e. achieving a maximum effect with a minimum cognitive effort.
Kovačič found the approach was quite useful for analyzing and explaining the logic of subtitling omissions, which cannot simply be put down to linguistic factors. It is the balance between the effort required by the viewer to process an item and its relevance for the understanding of the film narrative that determines whether it is to be included in the translation. This means that subtitlers should view a film in its entirety before embarking on a subtitling assignment, even
if professionals rarely have the time or opportunity to do this: what may appear to be a simple detail no doubt has a function in the larger scheme of the script. Even an inconsequential greeting like 'good morning' cannot automatically be rendered as the much shorter 'hi' or 'hello'. Disregarding matters of style, the ST expression also contains a time reference, whereas neither 'hi' nor 'hello' do. Having seen the entire film gives the subtitler a better outlook on what is and is not redundant. Within the context of one particular scene, and possibly taking the specific viewing context into consideration, the question to be asked in case of doubt is: what requires more effort on the part of the viewer? A shorter subtitle with less information (less reading, more thinking)? Or a slightly longer subtitle with more information (more reading, less thinking)? The question of how much has to be deleted or otherwise reduced also needs to be looked at in a broader filmic context because it will vary from film to film, as well as from scene to scene. When people argue, for instance, they not only tend to speak louder, but also more quickly. Díaz Cintas (2003) counts a reduction of 40% in the Spanish subtitles of Woody Allen's Manhattan Murder Mystery but points out that even that percentage is rather high. Woody Allen's films are notoriously talkative and the speed of delivery is no doubt a determining factor. Lomheim (1995) did the count for three episodes of quite different TV series – thriller, comedy and sci-fi – into Norwegian and came up with deletion rates of 22%, 24% and 37% respectively. Given today's professional practice, the percentages would no doubt be lower, though it has to be borne in mind that reduction levels tend to vary across countries and languages (Georgakopoulou 2010). The most important thing to remember, rather than the percentages in themselves, is that some pruning of dialogue is usually required, but the amount of cutting/reformulating will vary with genre, context, speed of delivery, assumed reading speed of the audience, etc. The views of the client, the subtitler's own perspective on the prospective audience and the way in which the subtitles will be distributed will have an impact too. As we write, new research is investigating current reading speeds, also called subtitle presentation/display rates, as well as the impact that different segmentation solutions have on viewers' experience (Szarkowska 2016). To begin with, there is much variation in the way in which presentation rates are calculated: most professional subtitlers use characters per second (cps), which is the prevalent measurement, but a substantial number also use words per minute and the odd company calculates presentation times in terms of subtitles per second. However, even among those using the cps method, reading speeds vary greatly, with some research findings reporting display rates ranging from 10 to 17 cps, or even 20 cps. It seems that the formal guidelines that vendors issue continue to be based mostly on intuition, habit, tradition or the supposed expectations of viewers. Most if not all companies and subtitlers limit subtitles to two lines, and even if the maximum number of characters allowed per line (for Latin alphabet scripts) also varies, the most frequent range hovers between 37 and 42 characters.
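The measures mentioned here (reduction percentages, characters per second, words per minute) are easy to operationalize. The sketch below is merely illustrative and is not taken from the studies cited: the reduction rate is a crude word-count comparison, and the conversion between words per minute and characters per second depends entirely on the assumed average word length, here set at six characters including the following space.

# A rough sketch of the two measures discussed above; all assumptions are ours.

def reduction_rate(source_text: str, subtitle_text: str) -> float:
    """Percentage of the spoken words that did not make it into the subtitle
    (word-count based, a crude approximation of the published figures)."""
    src, tgt = len(source_text.split()), len(subtitle_text.split())
    return 100.0 * (src - tgt) / src

def wpm_to_cps(words_per_minute: float, avg_chars_per_word: float = 6.0) -> float:
    """Convert a reading speed in words per minute to characters per second,
    assuming an average word length plus one space."""
    return words_per_minute * avg_chars_per_word / 60.0

print(round(reduction_rate(
    "No, no, I mean, I have met men before that I liked",
    "No, I mean I've met men I liked..."), 1))   # 33.3 (per cent of words dropped)
print(wpm_to_cps(180))                           # 18.0 cps under this assumption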
In brief, difference and fluctuation seem to characterize professional practice, and the jury is still out on what the ideal presentation rates and reading speeds might be, as research into this topic, using state-of-the-art eye tracking and other quantitative measurement tools, is ongoing. In terms of rephrasing or omission, whether one or the other or a combination of the two is the best solution must be ascertained in each instance. One factor that may be decisive is the combination of genre and rhetorical function. In a scene depicting a lovers'
quarrel – for instance, an emotionally rather than referentially significant scene – leaving out details and following the rhythm of speech may be better than rephrasing and thereby losing the parallelism with the ST. When subtitling off-screen commentators in a documentary film, rendering all they say may be more important, and therefore a reformulation that allows the subtitler to condense without losing much information may be a better option. The following subtitles in Example 6.2 are from a scene of a feature film in which the characters are having an argument in French. Rendering some of the rhetorical features of their interventions is therefore paramount. The first subtitle combines omission and condensation, while maintaining the rhetorical build-up of the dialogue. Even the repetition of the affirmative 'OK' has not been omitted, although it could easily have been left out. The second subtitle, on the other hand, translates the entire turn.

Example 6.2
Woman: Ok, on n'habitait pas dans un palais, ok, je bossais beaucoup, mais on s'en sortait ! On était heureux !
[OK, we did not live in a palace, OK, I worked a lot, but we did manage! We were happy!]
→ OK, the place was small.
OK, I worked a lot, but we were happy.

Man: Qu'est-ce que t'en sais qu'elle était heureuse ? Elle te l'avait dit ?
[How do you know she was happy? Did she tell you?]
→ How do you know she was happy?
Did she tell you?
No steadfast rules can be given as to when to condense and reformulate, or when to omit. Studying existing subtitles is one of the best ways to learn from professionals, keeping in mind that nobody is perfect. In addition to the technical constraints dictated by time and space, both the co-text and the larger context in which a scene occurs are decisive when taking decisions, as well as its connections with what went before and what is still to come, its formal linguistic and rhetorical-stylistic features, its informative significance, its interaction with visually and orally rendered information, and the like. Details may be lost and prosody may suffer, since subtitles tend to go for what is essential, focusing on the propositional dimension – i.e. the content – of an utterance rather than on its connotative or poetic dimensions, and making sure that the build-up of the programme or narrative comes across. Then again, most of the linguistic losses tend to be compensated by the information conveyed through the other filmic channels. Subtitlers must also bear in mind that they are translating for a target audience that can be more (children's programmes) or less (a talk show) defined. Whereas it has been claimed by the opponents of subtitling that reading subtitles is cognitively more demanding than listening to a dubbed film, the dubbing-subtitling debate has been and remains a largely intuitive one in as far as cognitive load is concerned. However, recent empirical research is painting a much more balanced picture (Kruger and Steyn 2013; Matamala et al. 2017). Whatever the case may be, a good knowledge of the source and TCs, as well as information (or a very educated guess) about the target audience, should allow the subtitler to judge how familiar those viewers might be with the SC and how much they can fill in for themselves. Whichever decision is eventually taken, the solution will have to take into consideration the information provided by the rest of the semiotic layers. Irrespective of the
audience's familiarity with the SL, it is safe to assume that long subtitles under a speaker who says very little, and short subtitles under a speaker who just keeps rattling on, will both have a disturbing effect as the contrast with the original utterances is immediate. In the first case the viewers have too much reading to do, and in the other they may feel cheated out of information.

6.3.1 Condensation and reformulation
How one should condense and rephrase depends on what can be done as much as on what really needs to be done. Besides, some changes are due to inherent linguistic differences between the language pairs involved. In any case, subtitlers must use the TL's intrinsic possibilities to the full, which is why a native or at least near-native command of the TL is essential. It is of the utmost importance that all reformulations are idiomatic, sound natural and do not contain calques. An English phrase such as 'What do you think?' could be replaced by various alternatives in French, depending on the time available for its translation as well as its imbrication with the rest of the information contained in the scene. It does not have to be the fairly literal Qu'est-ce que t'en penses ?, and some alternatives might well be ça te va ? [Is that right for you?] or even a short OK ? The examples discussed next illustrate some of the most common strategies implemented by subtitlers and propose suggestions of ways to go about reducing text without losing too much content. Yet, they are in no way exhaustive and should not be read as dogmatic instructions. Moreover, some of the examples will fit under several headings, because different subtitling tactics are often used in combination and discussing them individually can be seen as somewhat artificial. The bottom line is that subtitlers have to activate certain strategies and come up with solutions whenever they are confronted with a demanding dialogue or scene. Some of the challenges appear to recur and new ones arise as films and other audiovisual productions evolve.

6.3.1.1 Condensation and reformulation at word level

SIMPLIFYING VERBAL PERIPHRASES
Colloquial language, especially English, often makes use of verbal periphrases that can be lengthy and therefore use up valuable space on screen. That is why subtitlers tend to replace them with much shorter verb forms, when appropriate and as illustrated in Example 6.3:

Example 6.3
I'm gonna have this place fixed up... then I'm gonna sell it.
→ Reformaré este local y lo venderé. [I will fix this place and will sell it.]

I should really be going actually.
→ Je dois partir. [I have to leave.]

He is gonna be just the same.
→ Il ne changera pas. [He will not change.]
GENERALIZING ENUMERATIONS
Occasionally, generalizations will replace enumerations, and although they obviously modify the style of the speaker, they can also help to save space, as in Example 6.4: Example 6.4 You lied to us, son. Your own mother → and father.
Tu nous as menti, à nous, tes parents. [You lied to us, to us, your parents.]
However, an approach like this is not always advisable. In Trainspotting 1 and 2, for instance, protagonist Mark Renton's voiceover narration contains very rhythmical and narratively important enumerations that are rendered in full in the DVD subtitles. This monologue in Example 6.5 from Trainspotting 2, which is delivered very rapidly, is barely abbreviated at all, whereas its rhythm is maintained and paraphrasing is used to keep the German idiomatic:

Example 6.5
Choose handbags,
Sag ja zu Handtaschen, [Say yes to handbags,]
choose high-heeled shoes,
sag ja zu Stöckelschuhe, [say yes to high heels,]
choose cashmere and silk
zu Kaschmir und Seide [to cashmere and silk]
to make yourself feel what passes for happy. Choose an i-Phone made by a Chinese → woman
um ein Gefühl von Glück zu simulieren. [to simulate a feeling of happiness.]
Sag ja zum i-Phone, gebaut vom einer Chinesin [Say yes to an i-Phone, made by a Chinese woman]
who jumped out of a window
die sich das Leben nahm [who took her own life]
and stick it in the pocket of your jacket fresh from a South Asian firetrap
und steck es in die Tasche deiner Jacke aus einer Feurefalle in Südasien. [and put in the pocket of your jacket from a South Asian firetrap.]
USING A SHORTER NEAR-SYNONYM OR EQUIVALENT EXPRESSION
One straightforward strategy to reduce subtitle length would be to use shorter synonyms, which is frequently done. Why not write 'we need to talk' rather than 'I have to tell you something'? However, one must keep in mind that:

a Synonyms are almost always near-synonyms rather than exact equivalents.
b Synonyms can belong to different registers and can therefore be less appropriate in a particular context.
c In certain contexts, e.g. a medical documentary, terminological appropriateness may be the overriding factor and take precedence over shorter expressions.
d Function words may make for slower reading than content words – i.e. 'his' as opposed to 'the butcher's' – because they require more cognitive processing on the part of the viewer.
In Example 6.6, the permutations seem to work, even though 'be good to' in the first subtitle is not a literal translation of the French être correct avec. However, the scene as a whole makes clear what is meant. The translational shift becomes less striking, and textual cohesion is maintained when the dialogue is heard, and read, in context. The second example is quite unproblematic.

Example 6.6
QUOI ! J'ai toujours été correct avec elle ! [WHAT! I've always treated her right!]
→ What? I was good to her.

He's got lots of money.
→ Il est riche. [He is rich.]
USING SIMPLE RATHER THAN COMPOUND TENSES
Example 6.7 is almost a classic because in certain contexts a simple past can easily replace a past perfect, that is, if there is no need to state explicitly that one past action occurred before another. Simple tenses tend to take up less space than compound forms. Example 6.7 Son père l’avait foutue à la porte ! [Her father had thrown her out!]
→
Her father threw her out.
Sometimes, a change in tense is inevitable, like in Example 6.8, where English takes a simple past tense and French requires a passé composé, a complex tense. A more literal verbal tense, ‘I have stopped’, would not be grammatically correct in the sentence, hence the compulsory change: Example 6.8 J’ai arrêté de fumer il y a exactement 134 jours... [I have stopped smoking exactly 134 days ago...]
→
I stopped smoking exactly 134 days ago...
In other cases, the subtitler may have a choice, but tenses can only be adapted when the TL is sufficiently flexible and the change does not lead to grammatically incorrect sentences or calques.

CHANGING WORD CLASSES

Very often a change in word class can offer shorter alternatives, for instance, turning a verb into a noun, an adjective into a verb, a verb into an adverb, an adjective into a noun or vice versa, as illustrated in Example 6.9:
Example 6.9
Verb into noun: Je me suis mis à travailler ! [I have started working!]
→ I found a job.

Verb into noun: When General Pinochet was arrested in London...
→ Na de arrestatie van Pinochet... [After the arrest of Pinochet...]

Adjective into verb: That's an expensive weapon!
→ Ça coûte cher ! [That costs a lot!]

Adjective into adverb: I was totally gone, in a profound sleep.
→ Je dormais profondément. [I slept soundly.]

Adjective into noun: I don't want it to be too transparent.
→ Je ne veux pas de transparence. [I don't want transparency.]
RESORTING TO SHORT FORMS AND CONTRACTIONS
Most languages will allow the use of some kind of abbreviation and/or contraction of (a set of) words. When translating into English, short verb forms like 'it'll' or 'I'd', or expressions like 'asap' (as soon as possible), will come in handy, and Dutch pronouns like hij [he], haar [her], and het [it] can occasionally be replaced by ie, r, and t respectively. French and Spanish, for instance, make use of pronominal enclitic forms, but these can entail a change in register, as polite requests might sometimes be turned into more direct forms of address when using the imperative, as seen in Example 6.10:

Example 6.10
Would you like to share it with me?
→ Compartámoslo. [Let's share it.]

Still, an accumulation of such contracted forms may hinder comprehension, whereas in other instances the style of a formal-sounding speaker may require a formal subtitling style, in which case short forms should not be used. The two English subtitles in Example 6.11 are fine, since they recreate informal exchanges. Note that in the first one, the auxiliary 'will' has been omitted altogether, thus reinforcing the spoken nature of the exchange.

Example 6.11
Je dois y aller, je te rappellerai. [I must go, I will call you back.]
→ I've got to go, call you later.

Quoi, il y a quelque chose qui ne va pas ? [What, is there something that is a problem?]
→ What's up? What's the problem?

6.3.1.2 Condensation and reformulation at clause/sentence level
CHANGING NEGATIONS OR QUESTIONS INTO AFFIRMATIVE SENTENCES OR ASSERTIONS, INDIRECT QUESTIONS INTO DIRECT QUESTIONS, ETC.
Sometimes changing the mode of a sentence can have the added benefit of reducing its length. The negative sentence in Example 6.12 has become an affirmative one: a
phrase with 'not', such as 'not (very) large', would obviously have been longer than the present one with 'small', but another change has occurred as well. It is clear from the context that the speaker is discussing the house he used to live in with his wife. The subtitler can therefore use 'place' rather than 'palace' without causing confusion:

Example 6.12
OK, on n'habitait pas dans un palais... [OK, we did not live in a palace...]
→ OK, the place was small...

In Example 6.13, the more or less rhetorical question, which was meant to be informative in any case, has become a statement. This allows for the introductory verb, the one formulating the question, to be deleted and the compound sentence to become a simple one:

Example 6.13
Did I tell you there's a party this coming Friday?
→ Er is 'n feestje vrijdag. [There's a party Friday.]

In Example 6.14, the first question, which actually doubles as an admonishment, has become an imperative, and the second question is also turned into an imperative:

Example 6.14
What are you making a face for?
→ No ponga esa cara. [Don't pull that face.]

Can't you hear the difference?
→ Écoutez donc ! [Listen up!]
SIMPLIFYING INDICATORS OF MODALITY
Modal auxiliaries and other markers of modality are used in language to indicate degrees of uncertainty, possibility, probability, etc. and are very common in formulaic forms of address and polite requests. Omitting or simplifying clauses that contain them can save space, but such actions must be undertaken with care since the omission may result in a translation shift. Potential pitfalls are that the character may then come across as more abrupt, more decisive or less polite. In Example 6.15, the modal auxiliary ‘could’ in the subtitle combines both the ‘if you like’ and ‘can’ found in the ST: Example 6.15 Wij zijn ook zo klaar. Als u wilt, dan kunnen wij u thuis afzetten. [We’ll be ready in a minute too. If you like, we can drop you off at home.]
→
We’ll be ready in a minute. We could give you a lift.
In Example 6.16, the modal auxiliary ‘may’ has been left out altogether, despite the fact that it does modify the original statement, especially since the cautious introductory clause ‘I understand’ has also been deleted: Example 6.16 I understand that it may be the best result, politically, that can be delivered just at the → moment.
Dat is de beste politieke oplossing op dit moment. [That is the best political solution right now.]
By contrast, in Example 6.17, the omission of ‘can’ does not matter content-wise and helps save space. Te speaker is not really asking a question about the other character’s ability to see the light in the window: Example 6.17 Can you see the light up there in → the window?
Vous voyez cette lumière, là-haut ? [You see that light, up there?]
Finally, in Example 6.18, the omission of the modal 'would' breaks no bones either. The hierarchy between nurse and doctor is well established and the nurse's tone is very submissive:

Example 6.18
You wouldn't have time for a cup of tea, doctor?
→
Een kopje thee, dokter? [A cup of tea, doctor?]
TURNING DIRECT SPEECH INTO INDIRECT SPEECH
Although this strategy also allows subtitlers to reduce the TT by getting rid of the presentative verb that normally introduces the words of a speaker, it appears to be less common in practice. In Example 6.19, the verb phrase souvent, je me dis has been reduced to 'Sometimes'. Note that the personal pronoun has been moved back. Je [I], originally the subject of me dis [tell myself] in the first clause, is now the subject of 'am glad':

Example 6.19
Souvent, je me dis : « Tant mieux qu'elle soit partie, comme ça, on est soulagés ». [I often tell myself: "Good thing she went, we're more at ease like this."]
→
Sometimes I’m glad she went. It makes things easier.
CHANGING THE SUBJECT OF A SENTENCE OR PHRASE
This operation of necessity goes hand in hand with a change in sentence structure, but does not necessarily involve major translation shifts. In Example 6.20, 'anyone' has now become the subject instead of ça [that]. 'Anyone can get' is obviously much shorter than a more quantitatively literal 'that can happen to anyone'. As it turns out, the shorter version in the translation is also the most idiomatic one, clearly illustrating that the reformulations that are so typical of subtitling need not be for the worst.
Example 6.20
L'eczéma, ça. Ça peut arriver à n'importe qui. [Eczema, well. That can happen to anyone.]
→ Eczema, anyone can get eczema.

Mi può dare un consiglio? [Can you give me advice?]
→ Mag ik u iets vragen? [May I ask you something?]
The change in subject in the first of the two following subtitles in Example 6.21 is the natural result of the change in verb, whereas in the second subtitle it is the result of a change from passive to active voice:

Example 6.21
Well, I think I know what you mean Travis.
→ Je crois vous avoir compris, Travis. [I think I've understood you, Travis.]

At the end of six months, you shall be taken to Buckingham Palace, in a carriage, beautifully dressed.
→ Dans six mois, je vous emmène à Buckingham Palace. [In six months, I will take you to Buckingham Palace.]
En voiture et avec une belle robe. [In a carriage and with a beautiful dress.]
MANIPULATING THEME AND RHEME
Speech tends to reverse the order of presentation of theme (known information) and rheme (new information) much more than writing does. Speakers normally subvert this order and place the rheme at the beginning of their sentence when they want to draw special attention to a particular issue. This may be because the new information is important to them, or because they want to bring more variation into their style. The result is a change in the standard word order of the sentence and the occurrence at the very beginning of lexical units that would normally be used at the end of the sentence. In Example 6.22, the speaker wants to stress that the grandmother used to take care of quite a few chores. These are therefore placed in initial position, even though the explanation of why he mentions le linge, le repassage [the laundry, the ironing] follows afterwards. The enumeration is meant to signify 'many things'. In the subtitle, the word order and syntax of the utterance have been changed and, in addition, the enumeration has been replaced by a generalization:

Example 6.22
Le linge, le repassage, ta grand-mère s'en chargeait. [The laundry, the ironing, your grandmother did all that.]
→ Your grandmother did all the chores.

The reversal to standard word order and grammar in the subtitles does usually entail an impoverishment or at least a neutralization of the ST style, as well as a reduction of its oral features, although intonation and gestures may provide some compensation. Subtitles resort to these changes in word order to facilitate reading, as the standard theme-rheme order gives priority to known over unknown information. Other examples of theme-rheme shifts are given in Example 6.23:
Example 6.23
And conquer it, you certainly will!
→ Vous serez la victorieuse. [And you'll be the winner.]

Your neighbour isn't the Mr Cohen we all know.
→ M. Cohen n'est pas ton voisin. [Mr Cohen isn't your neighbour.]

我的孩子没了，偏你的就有了。 [The moment I lost my child, you discovered yours.]
→ You discovered your child at the moment I lost mine.
TURNING LONG AND/OR COMPOUND SENTENCES INTO SIMPLE SENTENCES
Another recurring strategy consists in simplifying and cutting up complex sentences that may have to be distributed over several subtitles. Shorter, simple sentences are thought to require less of a reading effort on the part of the viewers, since they do not have to tie up the end and the beginning of a sentence that does not appear on screen all at once. Note also that the second subtitle in Example 6.24 does not equal one sentence as it has been broken off in a logical place, at the end of the clause. The downside of reducing long sentences to a series of shorter ones is that the connections between the ideas expressed in coordinated and subordinated clauses may become less explicit. In the first example, the French coordinating conjunction mais [but] expresses contrast in the ST. The woman had a beautiful body, even though it was 'the body of a woman who had been pregnant'. This indicator of contrast has disappeared from the subtitle, thus reducing the richness of the original. Sacrificing linking words is not recommended, even though it happens in three out of the four examples cited:

Example 6.24
Elle avait un corps – un très beau corps – mais un corps de femme qui avait été enceinte, enfin aurait pu. [She had a body – a very beautiful body – but the body of a woman who had been pregnant, or at least, might have been.]
→ Her body was still very beautiful.
The body of a woman who had been pregnant, or who could have been.

I didn't tell you just 'cause I thought you'd get pissed off.
→ Ik heb niets gezegd. Ik dacht dat je woest zou zijn. [I didn't say anything. I thought you'd be pissed off.]

Here, I've got an idea. Suppose you agree that he can't actually have babies, not having a womb, which is nobody's fault, not even the Romans', but that he can have the right to have babies.
→ J'ai une idée. Admettons qu'il ne puisse pas faire d'enfant. [I have an idea. Let's agree that he can't make a child.]
Il n'a pas d'utérus. Ce n'est la faute de personne, pas même des Romains... [He doesn't have a womb. It's not the fault of anyone, not even the Romans...]
mais il a le droit d'en faire. [but he has the right to make them.]
Mi può dare un consiglio? Io oggi ho perso il treno, e allora devo stare qui ancora una notte, solo che la pensione dove ho dormito ieri sera ha chiuso e... non mi è rimasto molto. [Can you give me advice? I have missed the train today and so I must stay here another night, but the hostel where I slept last night is closed and... I have only little left.]
→ Mag ik u iets vragen? [May I ask you something?]
Ik heb de trein gemist en moet nog een nacht blijven. [I missed the train and must stay another night.]
Maar het pension van gisteren is dicht en... ik heb niet veel meer. [But yesterday's hostel is closed and... I have only little left.]
CONVERTING ACTIVE SENTENCES INTO PASSIVE OR VICE VERSA
The choice of the active versus the passive voice is not a neutral one, since the focus of the attention is on the performer in one case and on the action in the other, and yet, a switch from one to the other can achieve reduction without producing major shifts. In Example 6.25, the switch from passive to active goes hand in hand with the use of a slightly shorter synonym, 'battering', that may seem stronger than the ST word, but is corroborated by the images. It is also a verb that fits better in an active sentence:

Example 6.25
Wij weten dat u al jaren door uw echtgenoot mishandeld wordt. [We know that for years you have been maltreated by your husband.]
→ We know your husband has been battering you for years.
In Example 6.26, it is understood by the immediate co-text that ‘the government’ is responsible for having arrested ‘our heroes’ and the agent is not mentioned: Example 6.26 We knew that was where our heroes were → kept.
We wisten dat onze helden daar zaten. [We knew that our heroes were there.]
Finally, in Example 6.27, due to the switch to the passive voice, the policeman in this subtitle tells the accused that no one is likely to blame him for a given incident, whereas in the dialogue he tells the accused that he should not blame himself. Whether such a shift is acceptable must be determined in context: Example 6.27 Écoutez. Vous avez un casier judiciaire vierge. Vous n’avez pas grand chose à vous reprocher. [Listen. You have a virginal judicial record. You do not have much to reproach yourself.]
→
Listen, you have a clean record, you can’t be blamed for much.
USING PRONOUNS (DEMONSTRATIVE, PERSONAL, POSSESSIVE) AND OTHER DEICTICS TO REPLACE NOUNS, OR NOUN PHRASES
Deictics are words, such as pronouns and articles, or expressions whose meaning is dependent on the context in which they are used (e.g. here, you, that one there, him, etc.). They provide short translation solutions, as they build anaphorically on a situation or on visual information that has already been established. The subtitle in Example 6.28 is still understandable in the context, as the pronoun 'it' refers to the profession of 'hairdresser' mentioned in line one. The word or concept to which an article or pronoun refers should not be too far removed in the subtitle sequence:

Example 6.28
Je suis coiffeur moi. Tout ce que je sais faire, c'est coiffer. [I am a hairdresser. The only thing I know is how to do hair.]
→
I’m a hairdresser. It’s all I know.
In the subtitles in Example 6.29, the pronoun or adverb refers deictically to a person/object that appears on screen, as the subtitle makes use of intersemiotic cohesion between speech and image:

Example 6.29
The murderer must have- like- hidden in this closet, right?
→ Le meurtrier a dû se cacher ici. [The murderer must have hidden here.]

It's been a long time since we've done this. (hug)
→ Dat is lang geleden. [That is a long time ago.]

I didn't kidnap my brother (points at his brother, who's in the same room).
→ Je ne l'ai pas enlevé. [I didn't kidnap him.]

There is no food in this high mountain.
→ Il n'y a rien à manger ici. [There's nothing to eat here.]
MERGING TWO OR MORE PHRASES/SENTENCES INTO ONE
Whereas long compound sentences tend to be rendered in much shorter single or coordinated ones, a series of short sentences in the source dialogue may also be joined, as demonstrated in Example 6.30. Some subtitlers claim that joining sentences in this way can render the connections between actions more explicit and help the viewer see or understand them at a glance. Even though research into the preferred organization of subtitle text in terms of segmentation is inconclusive (Perego 2008), it may well be the case that readability is improved by cutting up lengthy interventions in some cases and linking up bits and pieces in others. Visuals and linguistic context may be determining. The ellipsis in the example, in which 'that day' has been omitted in the subtitle, can be recovered by the context of the conversation, and the translator has decided to give priority to the first question over the second: whatever the interviewee remembers will include what he actually 'did' on that day:
Example 6.30 What are your memories of that day? What → did you do on that day?
Wat herinnert u zich nog? [What do you still remember?]
In Example 6.31, clauses that are separated by a pause in the soundtrack are joined together and the subtitle is guided by the logic of the content of the utterance, rather than the delivery: Example 6.31 Tout simplement : j’ai envoyé une lettre. Avec ma photo. Elle m’a répondu. On s’est fixé un rendez-vous. → [Very simply: I sent a letter. With my picture. She replied. We arranged to meet.]
Very simply. I sent a letter with a photo. She answered and we arranged to meet.
The short renderings found in the two following subtitles in Example 6.32 speak for themselves:

Example 6.32
I want to know what I may take away with me. I don't wanna be accused of stealing.
→ Je veux savoir ce que je peux emporter sans être accusée de vol. [I want to know what I may take with me without being accused of stealing.]

Where did you find this woman? She's a genius.
→ Où tu as trouvé ce génie ? [Where did you find this genius?]

6.3.2 Omissions
Omissions or deletions are unavoidable in subtitling, as many of the previous examples involving reformulation have shown, which is not necessarily problematic. Omission and reformulation tend to go hand in hand, and what works best in any given instance is hard to predict. Usually, the redundancy rule will come to the subtitler's rescue. On some occasions, a word, phrase or its content may be repeated elsewhere in the same or the previous/following subtitle, or it may do no more than expand on an idea, rendering it more explicit. In other instances, the images may fill in a gap. However, in most cases the decision amounts to opting for the lesser loss. Before deciding to omit, subtitlers must ask themselves: will the viewers still be able to understand the message or scene without too much of an effort? And will they not misunderstand it? Subtitlers must become experts in distinguishing what is essential for the storyline from what is ancillary and what can be understood through implicature. The omission of some of the characters' entire turns in Example 6.33 from Secrets and Lies is rather extreme and certainly affects the interaction between the protagonists and possibly their characterization (Remael 2003):
Example 6.33 A: Well, I always... thought she’d ’ad a boy...
Ik dacht dat ze een jongen had. [I thought she had a boy.]
B: She’s a slag.
Ze is een snol. [She’s a slag.]
A: No, she’s not.
Ø
B: She fucking is.
Ø
A: She loves yer. We all love yer.
Ze houdt van je. Wij allemaal. [She loves you. All of us.]
B: You comin’ back?
Kom je terug?
→
[You coming back?]
A: No.
Ø
B: You got to.
Je moet. [You have to.]
A: Why should I?
Ø
B: You gotta face up to it!
Je moet ‘t onder ogen zien. [You have to face up to it.]
A: Face up to what?
Ø

6.3.2.1 Omissions at word level
The decision to omit words in Examples 6.34 and 6.35 is always dictated by the available time and space, and it is taken with redundancy and relevance issues in mind. In addition, some omissions are language-bound because of the asymmetries between languages. When translating from English, for instance, it may not always be necessary or possible to translate question-tags as the TL will probably not have them. If the tag has an important function, then it can be rendered through a linguistic feature of the TL in question. If the tag is superfluous, or does little more than imitate speech, it may have to be omitted. For instance, one idiomatic translation into Dutch of 'close the door, will you?' would be wil je de deur sluiten alsjeblief? [Will you close the door please?]. Linguistic modifiers, mostly adjectives and adverbs, are also obvious candidates for deletion precisely because they do no more than complement the information carried by the main verb or noun. The question is: how important is the modification? In the first of the two following subtitles, the context helps to fill in the missing 'hot' in the Spanish subtitle, and in the second it is not important that the fish are 'brightly coloured' and, in any case, they actually appear on screen:

Example 6.34
Tell me if I've put too much hot fudge.
→ ¿He echado demasiado chocolate? [Have I added too much chocolate?]

No. No, I get nervous when brightly coloured fish are staring at me face-to-face, you know.
→ No, me pongo nervioso cuando los peces me miran. [No, I get nervous when the fish look at me.]
In the next instance, the ongoing conversation signals that the character wants to know if the other is going to make the phone call immediately, even though ‘now’ has been deleted in the Spanish subtitle: Example 6.35 ¿Vas a llamar al Sr. House?
You’re gonna ring Mr House, now? →
[Are you going to ring Mr House?]
In Example 6.36, the deletion of the temporal qualification 'sometimes' does not alter the fact that the subtitle still signifies that one character knows the other merely from seeing her around. Note also the deletion of the original interjection 'uh' and the repetition of the personal pronoun 'I':

Example 6.36
I, uh, I see her at the gym sometimes.
→ La he visto en el gimnasio. [I have seen her at the gym.]
In Example 6.37, all that matters from a diegetic perspective is that the character is woken up, and not whether he was sleeping soundly: Example 6.37 You woke me out of a deep sleep.
→
Me has despertado. [You woke me up.]
Phatic words and expressions also tend to disappear from subtitles because they do not – strictly speaking – advance the action. In mainstream cinema, action refers to the causal events or the actions undertaken/words spoken by characters in order to reach their goal or convey an important point of view. In documentary film, the action may refer to informative content or the argument the filmmaker wishes to present. In the first of the two subtitles in Example 6.38, the emphatic mais enfin and comme ça have both been deleted. In the conversation about make-up presented in the second subtitle, the words 'anyway', 'the fuck' and 'you know' do not make it to the subtitle, which may arguably constitute a form of censorship in this case:

Example 6.38
Mais enfin, Norah, on n'abandonne pas un bébé, comme ça, pendant des heures ! [In heaven's name, Norah, one does not abandon a baby, just like that, for hours!]
→ You don't abandon a baby for hours!

Anyway, whatever the fuck it is, she uses a lot of it, you know.
→ Appelle ça comme tu veux. Elle s'en met un paquet. [Call it what you like. She puts on a load.]
Interpersonal elements that may signal power relations between interlocutors and thereby establish their social standing tend to bite the dust too. Typical examples are greetings, interjections, vocatives, formulas of courtesy, etc. Some repetitions
can also be seen to fall under this heading, mainly when they express hesitation. Not only do such interpersonal features contribute little to the propositional content in the strict sense, formally they also occupy a somewhat isolated position, at the beginning of the sentence, for instance, or between commas. In some contexts, the omission of formulaic expressions of politeness, like in the request in the first subtitle of Example 6.39, does not matter much, nor does the omission of 'you know' in the second subtitle. Even if they change the directness of the utterances somewhat, making them more abrupt, that may be compensated by the visual context.

Example 6.39
A cup of coffee, please.
→ Un café. [A coffee.]

You know, why don't you get some plates?
→ Traiga unos platos. [Get some plates.]
As for the string of hesitations and false starters that pepper Example 6.40, a much briefer repetition of the negative ‘no’ will do in the subtitle: Example 6.40 No, no, no, no, no. I’m-I’m-I’m-I’m I’m j- I’m-I’m ju-... um... I’m a detective. → They-They-They- We-We- lowered the height requirements, so I-
No, no. Yo... soy inspector. Han aumentado la talla mínima, [No, no. I... am a detective. They’ve increased the minimum height,]
y yo... [and I...]
Whether omitting such interpersonal elements is always the best choice is another issue and, as discussed previously, the final decision should comply with the spatial and temporal limitations whilst taking into consideration the semiotic environment in which the utterances occur. In Example 6.41, the protagonist is talking on the phone to the woman he has fallen in love with, but she refuses to see him. He is trying to re-establish contact, not really knowing how to go about it. By suggesting the character's false starts, the subtitle successfully renders his awkwardness, which is decisive in the portrayal of the protagonist in this particular scene:

Example 6.41
I'll come by the headquarters or something and we could em...
→ Je passe vous prendre à la permanence, je sais pas, moi, je pourrais... [I'll come and fetch you from headquarters, I don't know, I, I could...]

6.3.2.2 Omissions at clause/sentence level
Even though it is certainly not advisable to omit entire turns, sentences or even clauses, it can sometimes be unavoidable. In some cases the intervention of a particular character may actually have a very low information load. In a noisy crowded
scene, for instance one that is meant to create an atmosphere, some interventions may not have to be subtitled. In other instances, the music may be too loud for a dialogue exchange to be audible, or several people may be talking at the same time. In such cases one could say that the dialogue is part of the setting. Te last turn spoken by Carolyn in the scene in Example 6.42, from American Beauty, does not really require subtitling beyond ‘I wouldn’t have the heart’, as she is walking away from the camera towards her house at this point. It is the end of the scene and the music gradually takes over. By the time she pronounces the word ‘heart’, her voice has become virtually inaudible and subtitlers working without a script would simply not be able to translate the entire last sentence. Yet, the soundtrack and the scene as a whole convey sufciently that Carolyn is a nervous, fast talker and a selfrighteous woman: Example 6.42 Their sycamore? C’mon! A substantial portion of the root structure was on our property. You know that. How can you call it their sycamore? I wouldn’t have the heart to just cut down something that wasn’t partially mine, which of course it was.
Not all scenes are that straightforward, though. In Example 6.43, protagonist Eliza, from the musical My Fair Lady, is seen dancing and singing in the living room, but she is gradually taken to the bathroom by the servants, who want to get her ready for bed. While Eliza keeps on singing that she ‘could dance all night’, the servants, also singing along, are undressing her and saying that she needs to sleep. Teir interventions are so short and quick that the subtitler could not ft them in between Eliza’s lines. Te subtitling clearly gives priority to the protagonist lines, and the viewer has to rely on the mise-en-scène to understand what exactly is going on: Example 6.43 Eliza: I could have danced all night. Servants: You’re tired out. You must be dead. Eliza: I could have danced all night. Servants: Your face is worn. Your eyes are red. Eliza: And still have begged → Servants: Now say good night, please. Turn out the light, please. Eliza: for more. Servants: It’s really time for you to be in bed.
J’aurais voulu danser sans fin. [I would have danced forever.]
Danser jusqu’à l’aurore. [Danced until dawn.] Danser jusqu’au matin. [Danced until morning.]
Example 6.44 is an extract from a group scene from the flm Manhattan Murder Mystery, in which four interlocutors sometimes speak simultaneously. In such a challenging context, the subtitler has omitted turns that contain information another character rephrases diferently, as well as conversational, oral features that have little more than an interactional and phatic function. Nevertheless, the subtitler has taken due care not to delete the exchanges spoken by the same character more than once:
Example 6.44 Ted: Uh, you really saw his face? You saw, you saw what he looked like? No question. You know exactly who it is. Carol: Yes. Oh, yes, I’m here to tell you... Larry: Oh, no question about it. It was-It was Mr House. There was no... Not a, not a question. I mean, you could see him because, uh, you know, there was-there was just no way that you could avoid it. He was right there. Marcia: To me, it’s obvious. → Larry: Wh... How do you see it? Ted: How obvious? What do you mean? Marcia: Obvious he’s committed the perfect murder. Larry: What do you mean? Ted: What? How? What do you mean? Marcia: Okay, look. You have to start off with another woman who bears some ballpark resemblance to Mrs House.
Vous avez vu son visage ? Vous savez qui c’est ? [You saw his face? You know who it is?]
C’était bien M. House. [It was clearly Mr House.]
On l’a forcément vu. Il était devant nous. [We had to see him. He was in front of us.]
- Pour moi, c’est évident. - Comment ça ? [- To me, it’s obvious. - What do you mean?]
Il a commis un crime parfait. [He’s committed a perfect murder.]
C’est-à-dire ? [That is to say?]
Commençons par une femme grosso modo genre Mme House. [Let’s start with a woman style Mrs House.]
Deciding on the utterances that are relevant is the key in cases like this, and the subtitler must give priority to the character who is conveying crucial information. In the following, shorter example (Example 6.45), repetitions and conversational features are again exploited in order to render four turns in two: Example 6.45 A: Isn’t that your door? B: What? A: Isn’t that your door knocking? B: Yes.
→
- Is dat jouw deur niet? - Ja. [- Isn’t that your door? - Yes.]
Occasionally, the topic of a question is incorporated in the reply of the next speaker. Tat is why, in Example 6.46, the initial question has been deleted so that the subtitle fts within the given time and space available: Example 6.46 A: Why aren’t you coming tomorrow? B: I’m not coming to the party because I’ve → got other plans.
Ik kom niet naar het feestje, omdat ik andere plannen heb. [I am not coming to the party, because I have other plans.]
Luckily, most fction flms make sure that dialogue that is really important is presented free of interferences so that it can be easily heard and understood by the audience; the result being that it can also be easily subtitled because time constraints will not be so strict. Te pace of older flms is generally slower than that of today’s productions, and when they depict group scenes, they are often orchestrated to such an extent that some dialogue exchanges will stand out more clearly than others. Tese are then obvious candidates for subtitling, even if they do not contribute to the story strictly speaking, because not subtitling such conversations would leave the viewers wondering needlessly. In some cases, however, omitting short sentences or clauses may seem unavoidable even though they do fulfl a diegetic function. Making use of the sequentiality of dialogue – i.e. using information contained in a previous or subsequent subtitle – may be a solution. In Example 6.47, the conditional clause Si elle était partie [If she left] has been omitted in the English subtitle, but the deletion is compensated by the question in the previous line. Te conditional clause that follows the query, to which the speaker herself replies, is a repetition of that question and subsumes its content in the actual answer: Example 6.47 Pourquoi elle est partie ? Si elle était partie, c’est qu’elle avait des raisons ! [Why did she leave? If she left, it’s because she had some reasons!]
→
Why did she leave? She must have had a reason.
From the perspective of dialogue studies, one could say that the frst part of the utterance, Si elle était partie [If she left], is the part of the turn that is context confrming. In this sense, it contributes to the overall cohesion of the dialogue but does not contain new information, which largely justifes its deletion. In Example 6.48, the phrase mais on s’en sortait [but we did manage] has been deleted. Tis clause is redundant because the following sentence, on était heureux [we were happy], supersedes it semantically. If people are ‘happy’, they are obviously doing more than ‘managing’. On the other hand, ‘the place was small’ and ‘I worked a lot’ do signal that the couple was far from rich and that the extent of their supposed happiness may have been a matter of opinion: Example 6.48 OK, on n’habitait pas dans un palais, OK, je bossais beaucoup, mais on s’en sortait ! On était heureux ! [OK, we did not live in a palace, OK I worked a lot, but we did manage! We were happy!]
→
OK, the place was small.
OK, I worked a lot, but we were happy.
Clauses or phrases that carry less propositional content are often those that express a point of view or introduce an argument, i.e. have a presentational function. Teir purpose is to introduce the clause that is the main focus, semantically speaking. If deletion is required, these introductions tend to be the frst ones to go, as in Example 6.49:
Example 6.49 I was struck during the apartheid years that you always managed to keep your sense of humour.
→
Tijdens de apartheidsjaren bewaarde u uw gevoel voor humor. [Throughout the apartheid years you kept your sense of humour.]
In some cases, deletions of this nature do have an efect. In Example 6.50, the character’s uncertainty has completely disappeared from the subtitle. In the frst subtitle, the child is not sure whether he has done the right thing by moving in with his partner a day ahead of the planned time; in the second, he knows that he has literally ‘killed a duck’, even if unintentionally, and his ‘Mmmm, I think’ only brings across the fact that he is a bit afraid to break the news. Devoid of the narrative context, the subtitle makes the boy come across quite decisive and resolute. Keeping in mind that research into manageable reading speeds is inconclusive, subtitlers are advised to carefully weigh up the need to reduce text against the relevance of what they are about to omit and, in some cases, push the subtitle presentation time a bit beyond the provider’s instructed limits or alter the in and/or out times to gain extra exposure time: Example 6.50 I thought I’d move in a day early.
→
Mmmm, I think I killed a duck.
→
Ik ben wat vroeger gekomen. [I moved in a bit earlier.]
J’ai tué un canard. [I killed a duck.]
Finally, in the two subtitles of Example 6.51, it is the interaction between the characters that suffers. In the first subtitle, the speaker is not only telling the others 'we don't know this is all true', he is also warning them not to get carried away. In the second, the interlocutor of the ST is being admonished and asked to pay closer attention, whereas the subtitle merely insists on the value of the 'theory' under discussion:

Example 6.51
Hold, hold on, for a second. We don't know this is all true. This is just a theory.
→ No sabemos si todo es cierto, sólo es una teoría. [We don't know if all this is true, it is just a theory.]

Yeah, but it's a great theory. Have you been paying attention? This is a great theory.
→ Sí, pero es una teoría genial. [Yeah, but it's a great theory.]

6.4 Linguistic cohesion and coherence in subtitling
Coherence is a property of texts that are well written and helps the message come across, whereas the term cohesion refers to the techniques writers have at their disposal to promote such coherence. Intersemiotic cohesion in subtitling refers to the way in which the TL of the subtitles is directly connected to the soundtrack and images on screen, making use of the information they supply to create a coherent linguistic-visual whole (§3.2).
In the scene from American Beauty (Example 6.52), Carolyn Burnham refers to the movers that she sees carrying furniture into the neighbouring house. Both flm dialogue and subtitle rely on visually conveyed information to make sense of her frst sentence. What is more, the ‘it’ in the third subtitle refers to ‘that house’ in the character’s second sentence. Te subtitler makes use of the visuals once again, and thanks to this intersemiotic cohesion between words and image the viewers will have no problems connecting the pronoun ‘it’ with the neighbouring house: Example 6.52 Eindelijk, nieuwe buren. So, we’ve finally got new neighbours. You know, if the Lomans had let me represent them, instead of ‘The Real Estate King’, that house would never have sat on the market for six months.
[Finally, new neighbours.]
→
Als de Lomans mij als makelaar hadden genomen [If the Lomans had hired me as estate agent]
was het veel eerder verkocht. [it would have been sold much earlier.]
All the same, the condensation and reduction that is so typical of subtitling, as well as the disruption created by the layout or distribution of a sentence over more than one subtitle, can lead to coherence breakdowns. So, even if the visual and aural channels contribute their bit, it does no harm to avoid low cohesion, and thereby possible coherence problems, in the subtitles themselves. Such lack of coherence can be due to fuzzy references or jumpy transitions, ill-structured sentences, clauses without verbs or illogical segmentation. Earlier on in this chapter, we pointed out how omission often goes hand in hand with reformulation; as subtitlers delete some chunks of information they must also make sure that the logic of the ST is maintained within and across the subtitles. They must continually look back at what they have already translated and anticipate what is still to come. It may very well be difficult to keep track of this while translating, but having finished a scene, it is always advisable to do some revision and editing, taking extra care to check transitions and cross-references, as well as typical examples of anaphoric and cataphoric references. Short circuits in the information flow must be avoided. This is why the subtitling software Wincaps Q4 offers an extra interface, [Compact Alt+3], that makes it easier for subtitlers to check the complete list of subtitles they have created, as displayed in Figure 6.1.

Figure 6.1 View [Compact Alt+3] in Wincaps Q4

6.5 Segmentation and line breaks
Segmenting is closely related to the task of spotting (§4.4.2), and it means dividing something into separate parts or sections. In subtitling, segmentation is the careful division of the ST dialogue, narration, etc. into sections or segments – subtitles – that follow a particular layout, so that the viewers can understand the message at a glance. All professional subtitlers consider segmentation an important factor in facilitating subtitle reading and comprehension. However, in §6.3 we referred to new research on reading speed and line breaks, and the results of this ongoing experimental research and other recent studies using eyetracking (Perego et al. 2010; Rajendran et al. 2013; Szarkowska 2016) now appear to be inconclusive with respect to the exact impact of line breaks and segmentation. This certainly does not mean that the issue has become irrelevant. Szarkowska (2016) also points out that the participants in her experiments consistently reported a slightly higher cognitive load (i.e. a higher degree of effort required) in the erratically segmented versus the properly segmented condition of the subtitles presented to them. In addition, there were more revisits (so-called regressions) to the subtitles in the poorly segmented condition, which means that the viewers went back to the same subtitle more often, leaving them less time to enjoy the images. The researcher also suggests that viewers may be so used to badly segmented subtitles that they simply manage them, and also intimates that the impact of formal subtitle characteristics may vary along with the degree of linguistic complexity of the ST. While awaiting more conclusive experimental results, we therefore base our guidelines on the hypothesis that careful segmentation of the information reinforces coherence and cohesion in subtitling and will usually facilitate reading. Translators are obviously responsible for creating subtitles that can be easily understood, considering the little time they appear on screen and their relative isolation – physical, though not logical – from previous and following subtitles. To attain this objective, one of the golden rules in the profession remains to structure subtitles in such a way that they are semantically and syntactically self-contained. Ideally, any subtitle ought to have a clear structure, avoid any ambiguities that are not intentional and be a complete sentence. However, this is not always possible – it is then that spotting and segmentation become crucial.
Segmenting is done on two levels. A sentence may have to be distributed over the two available lines of a subtitle – line breaks – or it may run on into two or more subtitles. The segmentation rules are basically the same within and across subtitles, but when dividing text over more than one subtitle, one should keep in mind that the memory span of viewers of any age group is limited. Complex sentences are difficult to keep track of and, whenever possible, should be split into shorter ones. When making use of the two lines of a subtitle, the segmentation of the text should follow syntactic and grammatical considerations rather than aesthetic rules, e.g. opting for lines with a symmetrical layout, filling the top line before venturing to the bottom one, or aiming at subtitles with a pyramidal distribution. The second line can thus be shorter than the first one or vice versa, although one should avoid obstructing images such as close-ups of faces. In Example 6.53, the second version is the preferred one because it groups the semantic load of the sentence in a more reader-friendly manner:

Example 6.53
If I make the right amount of
money I might consider moving out.

If I make the right amount of money
I might consider moving out.
In Example 6.54, one subtitle equals one sentence, and the distribution of text over the two available lines in the second and third subtitles also follows the logic of semantic grouping:

Example 6.54
How do you go about answering an ad?

Very simply,
I sent a letter with a photo.

She answered
and we arranged to meet.
In practice, however, it is not always feasible to match a sentence with a subtitle, so it is advisable for each subtitle to make sense in itself, while somehow indicating or suggesting that the sentence continues in the next subtitle, as the comma does in Example 6.55, where it also helps organize the overall sentence structure:

Example 6.55
Either God exists, and there are things he alone understands,

or he doesn't exist and there's nothing to understand.
The speaker in Example 6.56 just rambles on, which is most typical of productions where scriptwriting has not been part of the equation, as is often the case with interviews or reality shows. In situations like these, some form of intervention is inevitable, and in this particular example, the speaker's utterances have been subdivided into shorter sentences to fit the subtitle format. As can be seen, the resulting text is much more formalized and cohesive in its configuration, thus boosting its readability:

Example 6.56
...and he was standing on the other side from me... we were shouting at one another and there were other people, other prisoners and their families, and it was such a lot of noise... sometimes I couldn't even hear what he was saying.
→ Hij stond daar, ik hier, en we schreeuwden naar mekaar. [He was there, I here, and we shouted at each other.]
→ Er stonden nog andere gevangenen en hun verwanten. [There were other prisoners and their relatives.]
→ Er was zoveel lawaai dat we mekaar soms niet verstonden. [There was so much noise that we didn't always understand each other.]
Long sentences occur in different genres, both in fiction and non-fiction films, especially when someone is reporting, telling a story, explaining a problem or giving instructions. Example 6.56 is from a woman describing to a journalist what happened when she went to see her husband in prison for the first time. Part of the message is no doubt the confusion at the prison, but in traditional commercial subtitling that confusion is not usually rendered linguistically and has to be gleaned from the accompanying visuals. Not only has the speaker's sentence been spotted into three clear-cut, individual subtitles in the example, but the logical connection between the two clauses that make up the third subtitle has also become more explicit. Besides lexico-syntactic segmentation, which is the type of segmentation exemplified in the example, rhetorical and visual segmentation may also have a role to play. According to Reid (1996: 100):

The translator will determine the segments which later become one subtitle grammatically (on the basis of semantic units), rhetorically (on the basis of speech rhythms), or visually (on the basis of what happens on the screen in the way of cuts, camera angle changes etc.).

Again, due to new empirical research, the jury is still out on the effect on viewers of subtitles that cross rather than respect shot changes and visual segmentation, as defined by Reid (ibid.). However, this is part of the cueing or spotting of the film and, as such, belongs to the domain of spatial and temporal considerations (§4.4.2), rather than linguistic ones. In the next sections, we focus solely on offering more concrete examples of lexico-syntactic and rhetorical segmentation.

6.5.1 Line breaks within subtitles
In the following examples, the version preceded by ✓ is the one preferred by the authors.

• Do not hyphenate words:

Example 6.57
It's really hard when you have sacri-
ficed everything.

✓ It's really hard
when you've sacrificed everything.

• If a subtitle consists of two or more sentences, consider placing one sentence on each line whenever possible:

Example 6.58
That's his second wife. She
killed herself.

✓ That's his second wife.
She killed herself.

• If a subtitle consists of a sentence with two subordinated or coordinated clauses, and inserting one after the other on a single line is impossible because the maximum number of characters per line would be exceeded or the result would be an extremely long subtitle, use one line for each clause:

Example 6.59
I don't need him here because I can
manage perfectly.

✓ I don't need him here
because I can manage perfectly.

We can't take him along and we
can't leave him here.

✓ We can't take him along
and we can't leave him here.
On occasions, it will be impossible to make such clean-cut divisions, but do bear in mind that it is not necessary to fill the first line completely before moving down to the second. Some professionals feel a degree of equilibrium in line length is more pleasing aesthetically, sometimes the customer will dictate the layout rules, and in other cases a close-up may call for a shorter top line. Generally speaking, deciding on which line should be the longest is first determined by the syntax of the subtitle and the word groups that must be kept together.

• Any disruption of a sense-unit may slow down reading. It is therefore ill advised to separate an adjective from a noun or an adverb, an adverb from a verb, an article from a noun, a first name from a last name, and the like:

Example 6.60
You're right, you're absolutely
right.

✓ You're right,
you're absolutely right.

My mum will drive the
car when we go to the beach.

✓ My mum will drive the car
when we go to the beach.

I bet it's a recipe by Julia
Child.

✓ I bet it's a recipe
by Julia Child.

• Likewise, in English, the line break should not separate the 'to' from the rest of the infinitive, a phrasal verb from its preposition, or a main verb from an auxiliary, reflexive pronoun or negation. Similar rules will apply in the case of other TLs featuring set word groups or collocations:

Example 6.61
Is someone who refuses to
help a victim guilty of a crime?

✓ Is someone who refuses to help a victim
guilty of a crime?

I have no idea what got
into him this morning.

✓ I have no idea
what got into him this morning.

The suspect has not
committed any murder yet.

✓ The suspect
has not committed any murder yet.
• Avoid separating a verb from its direct or indirect object. If a simple sentence has to be distributed over two lines, the subject should ideally go on the top line and the verb plus predicate on the bottom one:

Example 6.62
And I just can't give you
the address either.

✓ And I just can't
give you the address either.

The nurse can instruct the doctor to start
the operation now.

✓ The nurse can instruct the doctor
to start the operation now.

• If a particular sentence is the reply to a question, or a reaction to a statement, the reply/reaction is best placed on the second line, rather than in the next subtitle, unless this would give away information too soon or the exchange is articulated around a shot change:

Example 6.63
Give me those damn keys.

Enough! Stop!

✓ - Give me those damn keys.
- Enough! Stop!

Am I a dog who deserves to die?

Take the keys and let me go.

✓ - Am I a dog who deserves to die?
- Take the keys and let me go.
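Since such guidelines are rule-like, they can also be approximated mechanically, and heuristics of this kind could inform automatic line-break suggestions. The short Python sketch below is merely an illustration of the logic discussed in this section, not a description of Wincaps Q4, OOONA or any other professional tool: the 42-character line limit, the list of words that should not end a line and the scoring weights are assumptions chosen for the example.

# A minimal rule-based line breaker for a two-line subtitle (illustrative
# sketch only). The 42-character limit, the word list and the scoring
# weights are assumptions, not industry standards.

MAX_LINE = 42  # assumed maximum characters per line

# Words that should not end a line because they belong with what follows
# (articles, the 'to' of an infinitive, auxiliaries, negations, etc.).
KEEP_WITH_NEXT = {"a", "an", "the", "to", "of", "by", "has", "have",
                  "not", "can", "will", "would", "my", "your", "his", "her"}

def break_subtitle(text: str) -> tuple[str, str] | None:
    """Return the best (top, bottom) split, or None if one line suffices."""
    if len(text) <= MAX_LINE:
        return None  # no break needed
    words = text.split()
    best, best_score = None, None
    for i in range(1, len(words)):
        top, bottom = " ".join(words[:i]), " ".join(words[i:])
        if len(top) > MAX_LINE or len(bottom) > MAX_LINE:
            continue  # neither line may exceed the assumed limit
        score = abs(len(top) - len(bottom))      # mild preference for balance
        if words[i - 1].lower().rstrip(",.?!") in KEEP_WITH_NEXT:
            score += 100                         # heavy penalty: sense-unit split
        if top.endswith((",", ".", "?", "!")):
            score -= 10                          # punctuation is a natural break
        if best_score is None or score < best_score:
            best, best_score = (top, bottom), score
    return best

# The break falls between 'car' and 'when', keeping the sense-units intact,
# as the guideline above recommends.
print(break_subtitle("My mum will drive the car when we go to the beach."))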
6.5.2 Line breaks across subtitles
Sometimes it is impossible to ensure that the sentence and subtitle coincide, because the utterance lasts too long for it to be incorporated in one subtitle, even if one stretches the traditional six-second rule (§4.4.6) for maximal subtitle length. Alternatively, a shot change may need to be respected, the information load of an utterance may be too great, or a given sentence structure may not lend itself to a division into independent syntactic-semantic units. In cases like these, a sentence will have to run over two or more subtitles, and the same rules regarding syntactic-semantic segmentation should be applied when deciding where to break off. The clause or word group that constitutes the subtitle must make sense and, as far as possible, anticipate the ending that is to come:

Example 6.64
You said you didn't know her,
that you had never met her, but that

was obviously a lie.

✓ You said you didn't know her,
that you had never met her,

but that was obviously a lie.
Still, it is not advisable to stretch a sentence over too many subtitles as this may hamper comprehension. In such cases, dividing the utterance up will avoid stretching the viewer's memory, as in Example 6.65 concerning spotting, taken from one of the first books on subtitling by Ivarsson and Carroll (1998: 91):

Example 6.65
Welcome to the first of four programmes
in this series that every four weeks will show how big money
governs England, and how your money can be used to change society.
We'll see a commercial, soon to be shown in our cinemas.

✓ Welcome to the first of four programmes in this series.
Every four weeks we will show how big money governs England and how your money can be used to change society.
We'll see a commercial, soon to be shown in our cinemas.
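In the same illustrative spirit, the decision of where a long utterance should be carried over into a new subtitle can be sketched as a greedy packing of clause-like units into a fixed character budget. The Python fragment below assumes a two-line budget of 84 characters and a small set of clause markers, both of which are our own assumptions; as Example 6.65 shows, professional spotting also reformulates the text and follows the timing of the soundtrack, which no heuristic of this kind can capture.

import re

MAX_SUBTITLE = 84   # assumed budget: two lines of 42 characters each
# Split after a comma or before a common conjunction/relative pronoun.
CLAUSE_BREAKS = re.compile(r"(?<=,)\s+|\s+(?=(?:and|but|that|which|who)\b)")

def chunk_utterance(text: str) -> list[str]:
    """Greedily pack clause-like units into subtitles within the budget."""
    units = [u.strip() for u in CLAUSE_BREAKS.split(text) if u.strip()]
    subtitles, current = [], ""
    for unit in units:
        candidate = f"{current} {unit}".strip()
        if len(candidate) <= MAX_SUBTITLE:
            current = candidate          # the unit still fits in this subtitle
        else:
            subtitles.append(current)    # close the subtitle and start a new one
            current = unit
    if current:
        subtitles.append(current)
    return subtitles

long_line = ("Welcome to the first of four programmes in this series "
             "that every four weeks will show how big money governs England, "
             "and how your money can be used to change society.")
# Yields three chunks, each breaking at a clause boundary; unlike the
# preferred rendering in Example 6.65, nothing is rewritten or condensed.
for subtitle in chunk_utterance(long_line):
    print(subtitle)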
6.5.3 Rhetorical spotting
Subtitling renders speech in writing. Rhetorical spotting tries to take some of the meaningful features of spoken language into account: hesitations and pauses, speech disorders like stuttering, or the playfulness of quick repartees, for instance, but also the phenomenon known as "disorderly speech" (Parra López 2016). This term refers to the representation of the speech of characters who are temporarily unable to speak or articulate clearly because of drug or alcohol abuse, lack of sleep or other temporary afflictions, which can be important for the diegesis. The way subtitles are segmented and distributed can help reflect some of the dialogue's dynamics. Good rhetorical spotting helps convey surprise, suspense, irony, hesitation, etc. These prosodic features of spoken language serve a purpose in supporting and qualifying the speaker's message. Actually, syntactic-semantic and rhetorical segmentation overlap to some extent since the linguistic and paralinguistic features of speech usually collaborate. Chances are that respecting syntactic and semantic units as well as punctuation will automatically take care of rhetorical segmentation. In some cases, however, the subtitler has to make a decision. Should the reply to a question or an ironic comment be rendered in the same subtitle? What is the best break-off point? Consider the following examples:

Example 6.66
Remind me to go to the baker's in the morning... and to kill him.
→ Remind me to go to the baker's in the morning...
and kill him.

You are looking really good for... a twenty-one-year-old.
→ You are looking really good for...
a twenty-one-year-old.
Another question is whether a short pause or a hesitant oral delivery should be reflected somehow in the subtitle. Breaking off a sentence and continuing with it in the next subtitle may render a character's hesitation, but it may also disrupt the syntactic unity of the subtitle. Is the rhetorical impact of the disruption more relevant than the semantics of the utterance? Sometimes this is a matter of personal preference. A decision simply has to be made, taking into account the context in which the utterance occurs. Compare the following versions of the same subtitle. The first renders the two sentences in one subtitle, whereas the second respects the pause after 'to' in the second sentence and creates a visible hiatus of some frames by pushing the rest of the information to the next subtitle:

Example 6.67
I'm scared.
I would like to get out of here.

←→

I'm scared.
I would like to...

get out of here.
A related issue is the preference for more one-line subtitles rather than fewer two-line ones, with some companies recommending keeping the text on one line always, unless the resulting translation exceeds the character limitation. On the one hand, recent research seems to confirm that two-liners facilitate reading (Szarkowska 2016), though, on the downside, they may not follow the rhythm of the speaker, which may have its very own narrative function. In Example 6.68, the character has just witnessed a car accident. He is addressing the girl who has been hit as she is lying on the road. His voice sounds worried and urgent; his sentences are short and repetitive: he appears to be prodding her with his words, trying to make her respond. In the first version, his sentences have been joined, which is quite all right technically speaking, in terms of space-time constraints, but not from a dramatic-prosodic perspective. In the second version, the subtitler has opted for almost as many short subtitles as there are sentences, following the rhythm of the speaker and retaining most of his brief pauses and repetitions:

Example 6.68
First version:
Miss, you have to stay awake. Answer me.
Count with me. Miss, answer me. Count out loud. Stay with me. Stay with me.

Second version:
Miss, you have to stay awake.
Answer me.
Count with me.
Miss, answer me.
Count out loud.
Stay with me. Stay with me.
The subtitles in the second version are not very efficient in terms of the space they use when compared with the information load they carry. In another type of film, and possibly also another type of scene, it might have been better not to render the speaker's rhythm and to concentrate on the message.

6.6 Exercises

For a set of exercises in connection with this chapter go to Web > Chapter 6 > Exercises
7 Subtitling language variation and songs

7.1 Preliminary discussion
7.1 One of the advantages generally attributed to subtitling is that it respects the original soundtrack and thus encourages the learning and consolidation of FLs. Given that many audiovisual programmes are shot in English and that people are becoming increasingly familiar with this language, what do you think of the following quote from Gottlieb (2001: 258)? For future subtitling, the consequences ... could be that in several minor speech communities, we would not have to waste time subtitling from English. Most viewers would simply argue: “All the people who can read subtitles know English anyway, and besides, our language is not that different from English anymore, so why bother?”
7.2 When the US crime series The Wire was aired on BBC, many Brits switched on intralingual subtitles in order to make sense of the street argot spoken by its characters, most of whom are black American drug dealers and streetwise detectives. The writers of the series, however, claim that viewers who watch The Wire with subtitles miss the point entirely: the slang spoken by the characters is an integral part of their (story) world and if the audience wishes to enter that world they have to work at deciphering its language.
❶ How do you feel about the writers' view? ❷ Would you be tempted to turn on the subtitles? Why (not)? ❸ Do you understand the following expressions, taken from the series: 'He's a yo', 'walk-around money', 'a stash house'? Do you feel it is important to understand all the nuances of the film dialogue?

7.2 Marked speech and language variation
Language, spoken language especially, is as changeable as human beings and their surroundings, whereas writing is traditionally connected with the preservation of knowledge and the creation of prestigious verbal art forms that are perceived to be more permanent than speech and to carry more cultural capital. Since the ST of subtitling is made up of a complex mixture of spoken discourses that are often based on a previous written form, it results in specific hurdles for subtitlers. The challenges reside both in this variable but usually hybrid nature of the spoken STs of fiction films and documentaries and in the regimented nature of traditional commercial subtitling. Such subtitles are a form of writing that tries to mimic speech to different degrees and therefore has ties with both types of discourse, oral and written. As discussed in §3.2.1, film and television dialogues as well as other forms of media talk encountered in documentaries, talk shows or docudramas, fulfil many functions. The present section focuses on the use of language variation or so-called marked speech in audiovisual texts, with special emphasis on the challenges it presents for its subtitling.

7.2.1 Marked speech: a pragmatic classification
Marked speech is broadly defined here as speech that is characterized by language features that are not neutral. In fact, although standard language is traditionally considered the most unmarked speech variant, no form of speech can ever be completely neutral or unmarked since even the speech of a person conversing in the standard language will be marked by that person's linguistic idiosyncrasies. Traditionally, sociolinguistics has distinguished between what Schilling (2002: 375) defines as intra-speaker variation or "variation in the speech of individual speakers", i.e. their individual 'style', which may also vary from one situation to another; and inter-speaker variation or "variation across groups of speakers".

7.2.1.1 Intra-speaker variation: style and register
As Short (2001: 280) writes, "[t]he most commonsensical understanding of the meaning of the word style is that it relates to the typical way(s) in which one or more people do a particular thing". Transferred to a linguistic context this means that "any person's reasonably consistent linguistic behavior could be said to constitute a style, whether it is spoken or written, or whether the language producer is deemed a literary figure or not" (ibid.: 282). It is characterized by the choice of words, grammatical structures and pronunciation. Sometimes, the term idiolect is used to refer to "the speech habits of an individual in a speech community, as distinct from those of a group of people" (Wales 1989: 230). Still, this term too refers to a system of individual stylistic features and is therefore much like a personal style. In addition, style is often discussed in a literary context, referring to an author's style or a group style (Short 2001). This linguistic variation can be a challenge for subtitlers, who may encounter such variants in period dramas, sci-fi productions or film adaptations of literary classics. Register is a type of discourse that is very difficult to distinguish from the concept of style. Speakers resort to style-shifting depending on the context in which they interact, and Cazden (2001: 87) argues that learning language is a process of socialization that "involves learning multiple registers, i.e. particular ways of using language in particular settings within [one's] community". One can therefore just as well refer to a formal or informal style as to a formal or informal register, and both are determined by choice of words, grammatical structures, pronunciation, figures of speech and the like. However, according to Wales (1989: 398–399), register is "used for those systematic variations in linguistic features common to particular non-literary situations, e.g. advertising, legal language, sports commentary". For his part, Trudgill (1999)
offers a definition that edges close to what could be considered jargon, which is determined by lexical features only, is often connected to a profession or specialization, and therefore enters the domain of inter-speaker variation. This sociolinguist defines register as determined by topic, subject matter or activity (e.g. medicine) but also admits that some registers, such as the register of law, have syntactic characteristics besides lexical ones. In brief, whereas technical registers may indicate a profession, other registers may reveal a character's prestige or social position, and in that case register is not easily distinguishable from style. For subtitlers, it is important to detect the language variation, to link it to the context in which it appears, and to understand its implications and its impact on characterization and/or narrative.

7.2.1.2 Inter-speaker variation: dialect, sociolect, slang
The term dialect is generally used to refer to a subordinate variety of a language. A regional dialect, sometimes referred to as geolect, is a language variety associated with a place (e.g. the Bavarian dialect) and is therefore used within certain geographical borders. However, the boundaries can also be of a social nature and delimit social groups who then speak a different social dialect or sociolect (Romaine 2001). Such sociolects are often associated with people's socio-economic status. Again, dialectal features are reflected in lexicon, grammar and pronunciation. Slang is normally used in two different but related senses. It can refer to the very specific speech of subcultures in society, but often it denotes the use of very informal and unconventional vocabulary of more general use. It is a form of marked language that is used with a social purpose, originates mostly in urban areas and is often defined in terms of its unconventionality, ephemerality, use of ellipses, bizarre metaphors, playfulness, re-lexicalizations and even anti-language traits. One example of slang is camp talk, an aesthetic sensibility present in English-speaking spaces, which is generally associated with homosexual identity and has been researched in subtitling by authors such as Villanueva Jordán (2019). It is characterized by the use of specific lexical items, cultural references, intertextuality and implicatures based on world knowledge that is shared by the community, and features in TV shows actively opposing heteronormativity like RuPaul's Drag Race.

7.2.1.3 Intra- and inter-speaker variation: entanglements
To give an idea of the different types of marked speech that can be encountered in film, and to draw attention to the narrative functions these may fulfil, the distinction between intra- and inter-speaker variation remains a useful one. However, today's sociologists are increasingly stressing that intra-speaker variation also impinges on inter-speaker variation as speakers give dialects, sociolects and slangs a personal twist depending on the purpose of their discourse. As Schilling (2013: 327) writes, there is an increasing focus:

on intra-individual patterning of variation, including how individual variants pattern across different discourse contexts; how variants co-occur in, cohere into, individual styles, situational styles (e.g. registers), and group styles (e.g. dialects); and, crucially, how variants are used in a non-aggregate sense, in unfolding discourse.
In other words, speakers use language variation creatively and for specific purposes, even in group settings, while style may refer to a literary style or the style of an author.

7.2.1.4 Intra- and inter-speaker variation: swearwords and taboo words
Taboo words and swearwords are used by individuals in many different settings and for a myriad of purposes, some of which may also be associated with inter-speaker linguistic varieties. A taboo is "a social or religious custom prohibiting or restricting a particular practice, or forbidding association with a particular person, place or thing" (OED online). Allan and Burridge (2006: 11) argue that "taboo refers to a proscription of behaviour for a specifiable community ..., at a specifiable time, in specifiable contexts", and, in this sense, linguistic behaviour is thus subject to and conditioned by the way taboo is understood in different cultures and communities. Taboo words are therefore expressions whose use is restricted or prohibited by social custom. Since swearwords are offensive words used to release anger, frustration, despair, contentment, excitement etc., some swearwords are also taboo words. Different cultures have different sensibilities and, consequently, allow the use of different swearwords and taboo words with different degrees of tolerance. Moreover, these sensibilities change over time and are context-bound. Allan and Burridge (ibid.: vii) propose four categories of taboo: (1) naming and addressing, (2) sex and bodily effluvia, (3) food and smell, and (4) disease, death and killing. Jay (2009: 154) points out, "although there are hundreds of taboo words and phrases, the semantic range of referents that are considered taboo is limited in scope", and groups them as follows:

1 sexual references;
2 profane or blasphemous references;
3 scatological referents and disgusting objects;
4 ethnic-racial-gender slurs;
5 insulting references to perceived psychological, physical or social deviations;
6 ancestral allusions;
7 substandard vulgar terms; and
8 offensive slang.
As for their impact, taboo words can have a deleterious effect on interlocutors, whether by denotation or connotation, and provoke a potent emotional effect. Closely related to taboo words are the concepts of euphemism and dysphemism. Allan and Burridge (2006: 31) define a dysphemism as "a word or phrase with connotations that are offensive either about the denotatum and/or to people addressed or overhearing the utterance". A dysphemism is rude, usually linked to the activity of breaching the etiquette of a community through the connotation of the word and, most importantly, considered taboo by the community. Euphemisms by contrast "are words or phrases used as an alternative to a dispreferred expression. They avoid possible loss of face by the speaker, and also the hearer or some third party" (ibid.: 32). Euphemisms therefore allow interlocutors to talk about taboo topics without embarrassing each other or provoking a strong emotional reaction. In brief, there is a strong correlation between euphemisms and taboo words. In addition, euphemisms seem to be omnitemporal and have a global presence since, as McDonald (1988: vi) points out, "euphemism is
not a peculiarly English phenomenon. Indeed, euphemism is used extensively throughout the world, and has been throughout recorded history". Referring back to the aforementioned distinction between the relative prestige of spoken versus written discourse, it is clear that the mandatory subtitle transition from speech to writing may have an impact on what is acceptable in small or big letters on a screen. The ultimate decision rests with the subtitlers, who themselves will be guided consciously and/or unconsciously by the socio-cultural context within which they work and by any expressed instructions received from the client.

7.2.2 Subtitling marked speech and language variation

Next we look into the varying implications of language variation for subtitling.

7.2.2.1 Complexity in abundance
The current trend is for film language to exhibit more rather than less variation. The work of British filmmakers such as Mike Leigh, Ken Loach and Danny Boyle is well known for its use of substandard British variants, and many contemporary mainstream productions and TV series also make extensive use of diverse forms of marked language. This is done for different reasons: to suggest concrete medical (House M.D., Grey's Anatomy) or political (House of Cards, Borgen, The Good Wife) contexts or to contribute to both personal and social characterization in such very different productions as the drama Brokeback Mountain and the comedy Master of None. Most film language is written to be spoken, and it uses language variation consciously and stylistically. Theoretically speaking, a distinction could be made between fiction film/TV and non-fiction. Dramatic dialogue or film/TV dialogue in fictional productions will use both intra-speaker and inter-speaker variation to construct characters, to build on their relations, to identify their moods and emotions, and to link them to specific social backgrounds (Kozloff 2000; Richardson 2010). In other words, film dialogue mimics everyday conversation's turn-taking patterns (§3.2.1) in order to communicate with and to inform the audience and makes use of selected features of marked language for narrative purposes. This means that films and TV series purposefully aim to provide identifiable instances of language variation through markers in the characters' speech. In addition, the redundancy of film narrative, which also makes use of music, sounds and images, strongly supports the narrative functions of the dialogue and can be used for speech identification: if a character speaks a type of slang connected with gangs, the trait will also be made obvious visually, as in Snatch, Peaky Blinders or Narcos. What Richardson (2010: 44) defines as media talk, that is the different types of oral discourse that feature in non-fiction TV programmes or documentaries (news, interviews, debates, exchanges between talk show hosts and their participants), shares features with naturally occurring speech in some respects and with fictional speech in other respects. Media talk also addresses the audience, and whereas it can be more spontaneous than fictional dialogue, it is often (partially) scripted as well. Marked speech in non-fiction will also reveal the speaker's social or class background, level of
education etc., and is therefore important to reflect in the subtitles – although such choices tend to have ideological implications (§8.4). Both in fiction and non-fiction, however, subtitlers must always balance their attempts to render language variation, a mostly connotative function of the dialogue, against its denotative function, especially its narrative/informative dimension. In addition, the degree to which language variation can be rendered in subtitles will depend, of course, on the technical constraints, on the guidelines the subtitler has received and on the socio-cultural TT context for which the subtitled production is made. This may be very different from the ST context and it may mutate, as productions travel through space and time, which means that the tolerance for certain taboo words may have evolved, for instance. Whatever the case may be, the function of language variation in the audiovisual production should always be evaluated first: is this linguistic variant used throughout or is it only used by some characters? What function does it fulfil? In Trainspotting, for instance, a Scottish dialect is mixed with drugs-related insider jargon. In some scenes, the language used by the protagonist friends is pitted against the very proper standard English spoken by the outsiders. From this perspective, it is important that the contrast is conveyed in translation. As Mével (2017) points out, the reasons why different linguistic variants are juxtaposed on screen can be very intricate and will always be highly context-bound. It is therefore unlikely that any TL should have identical equivalents, and this is a problem posed by most forms of linguistic variation. The connotations of different TC dialects will never be the same as those of the SC dialects they replace. They may even have their own unwanted connotations. Moreover, the variant spoken in a film taking place in, say, 1930s' Chicago, will not even exist in the SC today. At best, subtitlers will manage to suggest this kind of language variation. To do so, they rely on interaction with the film's other signs and on a guesstimate of what viewers from the TC might be expected to fill in themselves. Trying to mimic too much of the linguistic variation in the subtitles can have a reverse effect, as can extensive domestication. One striking example is that of La Haine, a French low-budget film that was hailed as:

an inflammatory political pamphlet [that] raised several burning issues: youth unemployment, youth culture, integration of ethnic minorities, urban violence ... La Haine's youth speak a language that some critics call 'prose-combat'. It combines the particularities of the spontaneous languages spoken today in several French cités. (Jäckel 2001: 223–224)

Its subversive linguistic variant, also known as Le Téci, or langue des cités [language of the cities], is

a very diverse and codified language, a product and an expression of banlieue subculture [that] violently alters and appropriates normative French, primarily through the processes of verlanization (word inversion), truncation, the violation of standard French grammar and syntax, and the inclusion of words from a variety of other languages. (Montgomery 2008: online)
It is a complex marker of 'otherness' that is subject to rapid change and that, as Montgomery demonstrates, is almost entirely lost in the English subtitled version. Then again, the subtitlers' attempts to maintain traces of the original linguistic variation were criticized by Jäckel (2001), who dismissed them as an inappropriate and sloppy pastiche of black American slang. Nevertheless, there are success stories too, and Ellender (2015: 65–66) reports on the British subtitling of the French film Bienvenue chez les Ch'tis, in which the subtitler uses:

an eclectic blend of distinct translation solutions. These range from the freer and more creative – including transposition of pronunciation, juxtaposition of different linguistic registers and national variants, and rewriting wordplays – to the closer-to-the-original and more foreignising such as literal transferral of SL terms and close translation of expressions.
7.2.2.2 Conflicting priorities, difficult decisions

Overall, AVT, and commercial interlingual subtitles especially, still displays a preference for standard language, neutral word order and simple well-formed sentences (§6.2), often limiting the type and amount of marked and improper language to be displayed (Ranzato 2010). In this respect, it does not differ from other forms of translation, such as literary translation, which often undergo a process of neutralization as well. In the case of subtitling, this is largely the consequence of subtitling's traditional concern with clarity, readability and transparency, though the pressure of globalization combined with changeable cultural factors as well as the educational function of subtitling can be said to also contribute to this trend. This instructive function, in particular, is not to be underestimated in times of global migration streams and has been extensively researched by authors such as Talaván (2006, 2010), Incalcaterra McLoughlin et al. (2011) and Ghia (2012b), among others. Another point of contention is the impact of censorship on some forms of marked language, especially taboo words, which seems to be increasing rather than decreasing in various parts of the world, despite the recommendations of some of the distributors, as mentioned in the preliminary discussion. In this sense, to produce foolproof guidelines that can cover all subtitling aspects and can be automatically applied, irrespective of the context, is rather chimerical and the reason why the contradiction between two instructions from the Code of Good Subtitling Practice published by Ivarsson and Carroll (1998: 157) still stands:

8 The language register must be appropriate and correspond with the spoken word.
9 The language should be (grammatically) 'correct' since subtitles serve as a model for literacy.
‘Correct’ can be interpreted in many ways, and a linguistic register that corresponds with ‘the spoken word’ is not necessarily incorrect, though rule 9 implies that subtitlers are usually expected to correct grammar as well as any potential mistakes. In Example 7.1, from the documentary Voices from Robben Island, the member of a political party is being interviewed about apartheid. He mixes present and past tenses, whereas the Dutch subtitle does not: the turn has been rewritten in the simple past:
Example 7.1
We knew that that is where our leaders were kept.
→ We wisten dat daar onze helden zaten. [We knew that our leaders were there.]
Here, the minor correction will probably not affect characterization, which is not the issue in this production anyhow, and it helps improve the readability of the sentence. However, major rewriting – including the correction of mistakes, which does affect the rendering of the ST speaker's style and background – does occur quite often, and commercial subtitling is often criticized for it (Nornes 2007). The borderline between 'merely' correcting grammatical mistakes and interfering in the way a person speaks is tenuous. In Example 7.2, from The Grapes of Wrath, inter-speaker language variation reflecting class and geography through vocabulary and grammar has been neutralized. The French subtitle only gives an indication of the spoken language by using the grammatically incorrect negative j'étais pas rather than the standard je n'étais pas:

Example 7.2
I'd'av walked if my dogs wasn't pooped out.
→ J'aurais marché si j'étais pas crevé. [I'd have walked if I wasn't exhausted.]
The far-reaching effect that such forms of rewriting can have on a film belonging to the documentary genre is discussed by Kaufmann (2004) in her study into the colourful and meaningful variation in the Hebrew spoken by immigrant interviewees in Israel. ARTE, the TV channel broadcasting the film, insisted at the time that the immigrants' ungrammatical Hebrew should be subtitled in standard French. The scholar clearly demonstrates that part of the informative/denotative meaning of the film was thus lost in the translation because of its homogenizing effect. Simply dismissing such linguistically marked features as either unwanted or untranslatable obviously will not do. If all characters speak the same linguistic variant, it can be argued that the use of standard register in the subtitles does not betray any of the characters; however, if one or a few stand out because of the type of language they speak, this should somehow be reflected in the dialogue exchanges for, in such cases, the connotative dimension clearly adds to the denotative meaning.

7.2.2.3 Subtitling intra- and inter-speaker variation
As we pointed out earlier, the concepts of style and register are hard to distinguish from each other. In addition, speakers may add their personal style to forms of inter-speaker variation such as dialects or sociolects. Furthermore, all these forms of marked language are characterized by specific features relating to either lexicon and/or grammar and/or syntax and/or pronunciation, which is why they are all treated together in these pages. Professional registers, in the sense of language variants marked by jargon, are dealt with in the discussion on the subtitling of cultural references (§8.2). From our introduction about linguistic variation, it has become clear that it is important for subtitlers to identify the function of the variant and its communicative importance, bearing in mind that some films are so replete with stylistically marked features that they become an integral part of the story and should therefore be rendered in the translation somehow.
7.2.2.4 Literary styles

Period dramas like Pride and Prejudice, Cyrano de Bergerac and 後宮·甄嬛傳 [Empresses in the Palace] are a case in point, as are literary programmes that include excerpts from plays or quotes from historical novels or poetry. Should the subtitles echo the canonical written translation of the work, if this exists? The issue of what to do when subtitling Shakespeare plays, for instance, is taken up by Reid (1996). When Dutch television decided to broadcast a BBC Shakespeare series in the late 1980s, the subtitled series covered the complete range of possible subtitling strategies: from simple ancillary subtitles that did not try to imitate the playwright's poetic style to fully rhythmic and rhyming translations. What is the best solution? Leaving instructions issued by the customer or channel to one side, the target public, if it is known, may be an important factor. A full literary rendition will not be possible, but then again, it may not be required either. As Reid (ibid.) also mentions, viewers who are interested in drama, literature or poetry may well want to listen to the original, using the subtitles only for support. Besides, rendering a touch of the literary style of any ST is possible by resorting to lexical flourishes in the translation. It may be preferable not to adapt the syntax of the subtitles to that of an older or literary language variant since that might attract attention and slow down comprehension rather than allow the viewer to enjoy the programme, although some will consider this a patronizing attitude. On the other hand, the audience of a period piece will know people spoke a different variant of the SL at the time and will get sufficient additional clues about the period from the setting, props and costumes. So, most of the time subtitles will occasionally hint at the period style, without rendering it in full, as in Example 7.3 from Romeo and Juliet:

Example 7.3
A: What sadness lengthens Romeo's hours?
→ Welk verdriet maakt de tijd zo lang? [What sadness makes time so long?]
B: Not having that, which, having, makes them short.
→ Dat ik niet heb wat hem korten zou. [That I not have what would make it shorter.]
7.2.2.5 Forms of address
A particularly thorny issue for translators generally, and subtitlers in particular, is the translation of formal versus informal second person forms of address, such as vous versus tu in French, Sie versus du in German, usted versus tú in Spanish, u versus je/ge in Dutch and tu versus Lei in Italian. Some of the factors that make interlocutors opt for one rather than the other alternative are age, sex, group membership, position of authority (Anderman 1993) and physical proximity that can be seen in the images. However, the use of the formal versus the informal personal pronoun can also have emotional connotations. In his analysis of the French film Gazon Maudit, Mailhac (2000) points out how a character reverts back from tu to vous after a violent fight once he has calmed down and wants to re-establish some kind of distance. He first
shouts Tu crois que ça va m'empêcher de te foutre mon poing dans la gueule ! [You think that will stop me from fucking hitting you in the face!], to then continue with a formal vous, in Si je vous revois dans le coin, je vous démolis la tête. Et en prime, la batte de cricket, je vous l'enfonce dans le cul ! [If I see you again in the neighbourhood, I'll bash your head in. And as a bonus, I'll push this cricket bat up your arse!]. Each case must be evaluated carefully, since the (unwritten) rules for choosing the formal versus the informal variant also differ from language to language. This means that problems can even occur when translating from, say, French into German, languages that share this morphological contrast. When translating from English, subtitlers have to resort to other visual, linguistic and narrative clues in the source film to determine relationships between characters. They might have to ask themselves questions such as, is this couple's relationship a professional one or a personal one? For how long have these two people known each other? Then they have to determine what this would mean in the TC and decide whether the characters should address each other as tu or vous at that particular point. When translating into English, the use of first names or nicknames in one case, and 'Mr + family name' or a person's function, in the other, are obvious solutions for signalling informality versus formality. An example from the film Guantanamera is given next (Example 7.4), but usually a fair amount of creativity will be required:

Example 7.4
A: Gina... ¿Usted siguió dando clase? [Gina... Did you (formal) carry on teaching?]
→ Gina! Do you still teach, Professor?
B: Ay, no me trates más de usted. [Oh, do not address me with 'usted'.]
A: ¿Seguiste dando clase? [Did you (informal) carry on teaching?]
→ Gina will do. Still teaching?

7.2.2.6 Agrammaticalities
Commercial subtitling almost always corrects grammar mistakes or dialectal grammar. Since dialects may be signalled to the viewers through different channels, it is not necessary to retain all the deviations. Double negations are very common in different English non-standard variants, for instance, and viewers with some knowledge of English will recognize them. How they are dealt with in subtitling varies, as shown in Example 7.5:

Example 7.5
Don't do nothing you wouldn't want me to hear about!
→ Niets ondeugends uithalen, hoor. [Don't do anything naughty, you hear.]
I ain't got no parents.
→ J'en ai pas de parents. [I have none of parents.]
In Example 7.6, both the grammar ('I done' and 'ain't you') and the lexicon ('bust a gut') have been neutralized in the Dutch subtitle, leaving it up to the directness of the language and semi-aggressive tone of the speaker to inform the viewer that the character feels little more than contempt for his interlocutor. The subtitler also relies on the images for extra semiotic information and local colour:

Example 7.6
You're about to bust a gut to know what I done, ain't you? Well, I ain't a guy to let you down.
→ Je wilt dolgraag weten wat ik gedaan heb, hè? Ik zal het je vertellen. [You'd love to know what I've done, wouldn't you? I will tell you.]
In those cases where variants simply must be suggested somehow, because they are part of the narrative but impossible to render, this is usually done through lexical choice and/or spelling variants, as is the case in this rather extreme Example 7.7, quoted by Ellender (2015: 57) from Bienvenue chez les Ch'tis. The Ch'ti construction is simply too complex to render in its entirety:

Example 7.7
Quo qu' c'est qu' teu baves ? [Qu'est-ce que tu dis? / What are you saying?]
→ What you beshmeering?
The subtitler has dropped the auxiliary 'are' and used a very unusual word, 'to besmear' (i.e. to cover in a greasy substance), written as 'beshmeer' to render the unusual Ch'ti dialect pronunciation. The point here is that the subtitler aims to make the subtitles as difficult to understand for the English audience as the Ch'ti variant is in the original for the French audience.

7.2.2.7 Lexical variation
Lexical variants are easier to suggest in subtitling than substandard grammar, as the literal translation in Example 7.8 shows:

Example 7.8
But he's a mate, you know.
→ Mais c'est un pote. [But he's a mate.]
In the passages in Example 7.9, the colourful language spoken by a character from rural Oklahoma in the 1930s has partly disappeared from the subtitles, but the translation still conveys the rather aggressive tone of the man's turns. Besides, the whole poverty-stricken rural context, the poor and rebellious character's confident behaviour, his gestures and his physical interaction with peers and superiors alike support the reduced translation in the subtitles throughout the film. Indeed, no scene is isolated; all scenes build on each other. As for the subtitles in this particular case, the much ruder pif for nez [hooter/nose] in the first subtitle helps to convey the speaker's mood, whereas the choice of renifler [to sniff at] is rather offensive, suggesting animal behaviour. In the second subtitle, the character's protest is central, not his use of language, and hence the lexical neutralization:
Example 7.9
That big nose of yours been goin' over me like a sheep in a vegetable patch.
→ Ton gros pif n'a pas arrêté de renifler de mon côté. [Your big hooter has not stopped sniffing my way.]
I don't like nobody to draw a bead on me.
→ Richt dat geweer niet op me. [Don't point that gun at me.]
In the next set of excerpts (Example 7.10), the Yiddish slang words 'schmuck', which stands for 'contemptible or foolish person' but also means 'penis', and 'schlep', meaning 'to drag oneself', are instances of strong language that also designate the Jewish origin of the speaker. This reference to the speaker's social group is lost in the Spanish subtitles:

Example 7.10
Ted is a schmuck.
→ Ted está chiflado. [Ted is crazy.]
You mean... you want to schlep... all the way out to New Jersey...
→ ¿Quieres ir hasta allí? [You want to go down there?]
Sometimes, the translation strategy of compensation can lend a hand, as happens in Example 7.11. In this subtitle from the same film, the translation underscores the Jewish origins of the speaker, using the hyponym 'synagogue' for the hypernym 'temple':

Example 7.11
You know I gotta be up early, tomorrow. I gotta be in temple.
→ Mañana tengo que madrugar e ir a la sinagoga. [Tomorrow I have to get up early and go to the synagogue.]

7.2.2.8 Swearwords, expletives and taboo words
Taboo words, swearwords and expletive interjections are often toned down in subtitles, thanks to the use of euphemisms, or even deleted if space is limited, as in the subtitles in Example 7.12:

Example 7.12
You know what you are? You are a whore.
→ 你知道吗？你是花心大萝卜 [You know what? You are a radish with a flower heart.]
Machine guns do not add one inch to your dick.
→ الأسلحة لا تزيد من رجولتك [Weapons will not add to your manhood.]
What the fuck are you talking about?
→ ¿De qué me hablas? [What are you talking about?]
I'm into museums, blow jobs, theatre... and golden showers.
→ 我什么都愿意做 [I'd like to do anything.]
Some interventions are even more radical. Especially in unscripted and non-fiction programmes, expletives in the original soundtrack are sometimes bleeped out or muted. In some cases, a digital blur or box across the speaker's mouth will even impede lipreading the offensive utterances. Similar strategies, aiming to physically 'hide' the swearwords in the subtitles, are the use of asterisks, abbreviations or grawlixes, as illustrated in the subtitles in Example 7.13. In the first of them, from the fourth episode of the TV documentary series Joanna Lumley's Greek Odyssey – "Mount Olympus and Beyond", the Greek expletive – μουνί σου [cunt] – has been censored twice. Firstly, it has been bleeped out in the original soundtrack, despite the fact that few viewers may be able to recognize it and, secondly, it has been transcribed into four asterisks in the written subtitles:

Example 7.13
Ρόδα και τριαντάφυλλα σου στέλνω στη γιορτή σου [Flowers and roses I send you for your nameday]
και μια κορδέλα κόκκινη να δέσεις στο μουνί σου. [and a red ribbon to tie to your cunt.]
→ I bring all kinds of roses to your celebration. I also bring a red ribbon to tie up your ****.

A: There is nothing we can do. B: She's my sister, you son of a bitch!
→ - No hay nada que podamos hacer. - Es mi hermana, hijo de p. [- There is nothing we can do. - She's my sister, son of a b.]

A girl saying 'fuck'?
→ ¿Una niña diciendo m$%@? [A girl saying m$%@?]
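The masking strategies just illustrated amount to simple string substitutions. The toy Python sketch below reproduces the three surface patterns mentioned above – asterisks, abbreviation and grawlixes – purely as an illustration; the word list, the choice of strategy and the replacement characters are assumptions made for the example, and whether to intervene at all remains an editorial and, as discussed below, an ideological decision.

import re

TABOO = ["fuck", "bitch", "cunt"]    # assumed, client-supplied word list

def mask(subtitle: str, style: str = "asterisks") -> str:
    """Replace listed taboo words according to the chosen masking style."""
    pattern = re.compile("|".join(rf"\b{re.escape(w)}\w*\b" for w in TABOO),
                         flags=re.IGNORECASE)

    def repl(match: re.Match) -> str:
        word = match.group(0)
        if style == "asterisks":
            return "*" * len(word)       # e.g. 'cunt'  -> '****'
        if style == "abbreviation":
            return word[0] + "."         # e.g. 'bitch' -> 'b.'
        return word[0] + "$%@"           # grawlix-style, as in Example 7.13

    return pattern.sub(repl, subtitle)

print(mask("She's my sister, you son of a bitch!", style="abbreviation"))
# -> She's my sister, you son of a b.!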
However, such words fulfil specific functions in the dialogic interaction and, by extension, in the film story, which means that deleting them is certainly not the only or the best option available. This type of approach may signal censorship or ideological concerns more generally (§8.4). Emotionally charged language usually has a phatic, exclamatory or connotative rather than denotative function and can be quite idiosyncratic, but usually it is also linked to specific situations, ethnic groups, sexual orientation or communities' religious backgrounds. Careful judgement of the emotional impact and the importance of taboo words and swearwords for the storyline is therefore essential, as is an assessment of the target audience's acceptance of this type of language. The classic example of a word that has made an inroad into the English language is 'fuck', one of the most used four-letter words which is still unpronounced in many circles, even if it abounds in cinema conversations. In the opening scene of Four Weddings and a Funeral, for instance, the word is fired at the unsuspecting viewer five times in a row. However, saying such words is one thing whereas writing them is another matter, especially if they appear on screen, since "it is not the same to read a book on your own, in private, as to read (and watch) a film as part of a gregarious group" (Díaz Cintas 2001b: 51). And yet, many curses and swearwords, including 'fuck' and its compounds and derivatives, are becoming increasingly common in subtitles, at least in Europe. In the
subtitles in Example 7.14, the strength of the expletives or taboo words in the source and target texts is well matched:

Example 7.14
Oh, Christ!
→ ¡Joder! [Fuck!]

Même ça, elle a merdé ! [Even that, she's fucked it!]
→ She even fucked that up!

A: You did all that for nothing. B: Oh, fuck my life!
→ - 白忙一场。 - 操！ [- You did it for nothing. - Fuck!]

Now, did you ever see what it can do to a woman's pussy?
→ Et vous avez vu ce que ça peut faire à un con ? [And have you seen what it can do to an asshole?]
The translation of taboo words and invectives is crucial when they contribute to characterization or when they fulfil a thematic function in a film. This happens, for example, in many of Almodóvar's films, where women appropriate the use of macho language normally associated with men (Arnáiz 1998). In La flor de mi secreto, the protagonist's use of sexually explicit taboo language is not only part of her characterization but also an emancipatory tactic. Rendering her idiosyncratic use of expletives and swearwords is therefore crucial from both a narrative and a thematic viewpoint. When Leo confides to her maid that the phosphorus pills she is taking are increasing her sexual appetite rather than helping her with her memory loss (Example 7.15), the TV solution sticks to the coarse original expression without omitting a single element of the comparison, whereas the VHS version tones down the whole expression by eliminating the reference to 'bitch'. It seems here that the deletion cannot be justified in terms of space and/or time constraints, since the TV version can easily accommodate exactly the same amount of information as the video does plus the controversial second part of the comparison:

Example 7.15

A: Tome usted la pastilla del fósforo.
   [Take your phosphorus pill.]
B: Ah, sí... Memoria no recuperaré, pero esto me pone cachonda como una perra.
   [Oh, yes... Memory I won't regain, but this makes me horny like a bitch.]

VHS → Take your phosphorus pill.
      It doesn't help my memory, but God, it makes me horny.

TV → Why don't you take your tablets?
     Ah, yes. They don't help my memory but they make me randy like a bitch.
Not only the TC but also the distribution medium can have an influence on how some expletives or taboo words are translated. In some of the previous examples, the
subtitlers could arguably also be exerting a form of self-censorship, but the translation is ultimately determined by what is deemed acceptable in the TC, so self-censorship is a relative, slippery concept (§8.4). Authors such as Ávila (1997) and Ballester Casado (2001) have discussed self-censorship in the case of dubbing in Spain, though, from an analytical viewpoint, it is rather difficult – if not impossible – to demonstrate which of the manipulative changes that surface in the translation are due to translators' self-censorship and which were imposed by other agents, e.g. the distributors. In Example 7.16, from the Spanish film What Have I Done to Deserve This?, some subtitles are more daring than others depending on the medium in which they have been commercialized, which also reflects a diachronic evolution from the old VHS to the more recent, though now also virtually defunct, DVD:

Example 7.16

Es igual que su padre, la 'joía' por culo.
[She is the same as her father, the fucking bugger.]
VHS → She's just like her father.
DVD → She's like her father, the fucking bitch.

Mira, Crystal, a veces pienso que sólo tienes sensibilidad en el chocho.
[Look, Cristal, sometimes I think you only have feeling in your pussy.]
VHS → I swear you've feeling in one place only.
DVD → Look, I think you only feel with your cunt!
Judging the strength of the utterance to be used in the target film is a delicate issue, not only because of the strong social and ethical implications, but also because one is translating unstable connotative rather than denotative meanings. Swearwords, taboo words and expletive interjections will therefore often have to be translated differently in different contexts, and their meanings can be a matter of subjective interpretation. In his investigation on the use of 'fuck' and its dubbed Catalan translation for TV, Pujol (2006) finds that the word is used in the original to express extreme anger, emphasis, disgust, contempt, surprise and happiness. He also finds that its translation differs even within each category, and when used to relay extreme anger it is translated as fill de puta [son of a whore] and malparit [badly born]. The same happens in subtitling, of course, since such emotionally charged words must always be interpreted in context, as shown in Example 7.17:

Example 7.17

Choose a fucking big television.
→ Choisir une putain de télé.
  [To choose a whore of a television.]

Choose a three-piece suite on hire purchase in a range of fucking fabrics.
→ Choisir un salon à crédit dans un choix de tissus de merde.
  [To buy a living-room suite on hire purchase in a range of shitty fabrics.]

All the fucking chemicals.
→ Ces saloperies chimiques.
  [These filthy chemicals.]
On rare occasions, the subtitles become stronger than the original. This happened in the English subtitles of the Danish TV series The Killing, aired on the BBC – and it did not go unnoticed (Example 7.18). The BBC launched a crackdown on the bad language in the subtitled series after a complaint from a viewer who stated that mild Scandinavian swearwords had consistently been translated with the F-word in English. A review by the subtitling company revealed that 75% of the F-word occurrences had also featured in the original script but that 25% had indeed been added. This is rather unusual, since subtitles normally try to maintain the tone of the original; however, it can happen that subtitlers decide to render the occasional swearword in a stronger translated version in order to compensate for milder translations elsewhere.

Example 7.18

Så sig nu noget, for helvede!
[Then say now something, for hell!]
→ Say something, for fuck's sake!

Hvad fanden laver du?
[What the devil are you doing?]
→ What the fuck are you doing?

Jeg vil skide på din dommerkendelse.
[I will crap on your warrant.]
→ I don't give a fuck about your warrant.

Det rigtige... vi har sgu ikke gjort en skid rigtigt.
[The right... We have damn it not done a shit right.]
→ We've done fuck all...
Ultimately, whether a particular translation decision is justified can only be judged in each case and context individually. Still, the suppression or simplification of a character or interviewee's speech can come across as a case of suppression of the other, of the person from a different population group, of the man or woman who does not fit into the standard or the standard language of the (power) centre or establishment. Changes in register and style may render films more homogeneous, and alterations that impinge on character representation ultimately affect the message of the film, i.e. the content that is subtitling's priority (Remael 2004). Nevertheless, not each and every swearword needs to be translated in order to convey characters' registers and/or personalities: peppering their speech with the occasional well-placed expletive will often do the trick. Moreover, using close synonyms of varying strength is also a way out, as Example 7.17 shows. Even when words like the French foutre [fuck] and the Spanish follar [fuck] are used in their more literal meanings, English offers many options besides the obvious 'fuck', and the subtitler can resort to synonyms like 'screw', 'bonk' or 'shag'.

Notwithstanding the subtitlers' good intentions, however, some stakeholders have gone the extra technical mile with the invention of personal TV censors based on subtitles, which monitor the subtitles for hearing-impaired viewers in search of dubious words. The application then decides – based on the user's preferences – whether to block the entire programme or simply mute the sound for a short while (Díaz Cintas 2012). Fox (2017) provides some revealing examples: if the programme uses the words serial killer, the system could block it altogether; the word damn could be acceptable on the Discovery Channel but muted on all movie channels; and the word bitch might only be permitted during a programme about pets, and never if preceded by you. The TVGuardian (tvguardian.com), for instance, markets itself as "the foul language filter".
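The decision logic of such subtitle-based filters can be pictured with a short sketch. The following Python fragment is only a minimal illustration of the behaviour described above, not the actual implementation of TVGuardian or any other product; the word lists, channel rules and function names are invented for the purpose of the example.

```python
# Hypothetical sketch of a subtitle-based language filter of the kind described
# above (Díaz Cintas 2012; Fox 2017). Word lists, channel rules and names are
# illustrative only and not taken from any real product.

from dataclasses import dataclass, field


@dataclass
class FilterPreferences:
    # Terms that make the user block the whole programme.
    blocked_terms: set = field(default_factory=lambda: {"serial killer"})
    # Terms that make the filter mute the sound for a short while.
    muted_terms: set = field(default_factory=lambda: {"damn", "bitch"})
    # Channels or programme types on which a given term is tolerated.
    channel_exceptions: dict = field(default_factory=lambda: {
        "discovery": {"damn"},     # 'damn' tolerated on this channel
        "pet show": {"bitch"},     # 'bitch' tolerated in a programme about pets
    })


def decide(subtitle: str, channel: str, prefs: FilterPreferences) -> str:
    """Return 'BLOCK', 'MUTE' or 'ALLOW' for one subtitle, given the user's preferences."""
    text = subtitle.lower()
    # A blacklisted term blocks the entire programme.
    if any(term in text for term in prefs.blocked_terms):
        return "BLOCK"
    # Otherwise, mute briefly for objectionable terms,
    # unless the current channel tolerates that particular term.
    tolerated = prefs.channel_exceptions.get(channel.lower(), set())
    if any(term in text and term not in tolerated for term in prefs.muted_terms):
        # A stricter rule set could also treat combinations such as 'you bitch' differently.
        return "MUTE"
    return "ALLOW"


if __name__ == "__main__":
    prefs = FilterPreferences()
    print(decide("Damn, that was close.", "discovery", prefs))          # ALLOW
    print(decide("Damn, that was close.", "movies", prefs))             # MUTE
    print(decide("The serial killer struck again.", "movies", prefs))   # BLOCK
```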
Finally, what often compounds the challenges posed by marked language for translators of audiovisual texts, as opposed to other kinds of texts such as drama or literature, is that oral language variation tends to be greater in audiovisual material. Film is more subject to change, it reflects real-life linguistic evolutions more closely and, last but not least, it must be rendered in a hybrid form of written language within a well-defined spatio-temporal set-up.

7.2.2.9 Accents and pronunciation
Linguistic accents and pronunciation are problematic to render in subtitles, and yet they may be important for the storyline. Fortunately, marked accents often go hand in hand with marked vocabulary. In the British series Cold Feet, the nanny, Ramona, speaks with a heavy Spanish accent and her pronunciation leaves much to be desired, which is an object of concern for David, the father of the child. In the Dutch subtitles, Ramona's pronunciation is rendered in an improvised manner (Example 7.19). Using a proper phonetic transcription system is impossible since most viewers would not be able to read it, so the subtitler adapts the spelling of the TL to suggest a foreign accent. Besides, the Spanish excusita [excuse me] is retained:

Example 7.19

Excusita. I think I take my chicken into the lounge, OK?
→ Excusita. Ik eet m'n kiep in de zietkamer.
  [Excusita. I eate my tcheaken in ze lounche.]
The French film Bienvenue chez les Ch'tis is replete with such examples, since the very unusual pronunciation of the regional Ch'ti dialect, a variant of Picard spoken in the Nord-Pas-de-Calais, is topicalized in the film, and the subtitler marks this pronunciation consistently, as illustrated in Example 7.20:

Example 7.20

Oui, ch'est moi. [Oui, c'est moi.]
[Yes, it's me.]
→ Yesh, it'sh me.
In the musical My Fair Lady, accents and pronunciation are a crucial feature of protagonist Eliza Doolittle's Cockney English (Example 7.21). She is a working-class flower girl, who is literally plucked off the street by Professor Higgins, a phonologist who undertakes to turn her into a high society lady. This, of course, includes teaching her how to speak the Queen's English. Some of the original dialectal features of Eliza's speech must therefore be rendered in the subtitles if the linguistic contrast is to be appreciated by the viewers.
The subtitler resorts to an improvised phonetic transcription in the French subtitles in order to maintain this contrast. Such solutions must obviously be applied only to very short stretches of text, otherwise they risk disrupting the readability of the TT and the enjoyment of the viewing experience. What the young woman says is of little importance, only her pronunciation matters:

Example 7.21

Eliza: The rine in Spine stais minely in the pline.
→ In imbrin inopportin à chaquin est commin dans les plaines d'Espagne.

Higgins: The rain in Spain stays mainly in the plain.
→ Un embrun inopportun à chacun est commun dans les plaines d'Espagne.

Eliza: Didn't I saiy that?
→ C'est ce que j'ai dit.
  [That's what I said.]

Higgins: No, Eliza, you didn't 'saiy' that. You didn't even 'say' that.
→ Non, pas une seule fois encore.
  [No, not one single time.]
Finally, speech impediments can also play a crucial role in characterization, and these too must then be suggested in the subtitles. In Mrs. Doubtfire, voice actor Daniel Hillard imitates the voice of Porky Pig and his unmistakable stutter, reflected in the repetition of certain letters and syllables (Example 7.22). The subtitles make the most of the soundtrack, in which the reduplication of the letter 'p' is clearly audible, and come up with a solution that also plays with the reiteration of the same letter, albeit in a different position:

Example 7.22

Well, in the words of Porky Pig: P-p-p-p-p-p... Piss off, Lou.
→ Nun, in den Worten Schweinchen Dicks:
  [Now, in the words of Dick the Piglet (Porky Pig)]
  Verp-p-p-p-piss dich, Lou.
  [P-p-p-p-piss off, Lou.]
The subtitling of a narratively functional speech impediment to produce humour in The Life of Brian is discussed in §8.3.2.

7.3 The translation of songs
Songs constitute a very specific and yet varied language variant. There is extensive literature on the translation of music, in which the translation of opera is clearly dominant, both in terms of the study of singable translations (Apter 1985) and in terms of translations meant to be read (Desblache 2007; Mateo 2007a, 2007b), two rather self-explanatory concepts discussed in Low (2017). However, the translation of other types of songs and lyrics has also received due attention from scholars
like Gorlée (2005), Susam-Saraeva (2008, 2015) and Martín Castaño (2017). A survey of some of the major publications and issues encountered in this field is given by Bosseaux (2011), and two authors who make sporadic mention of the subtitling of songs are Low (2017) and Franzon (2008). However, articles and chapters dedicated to the subtitling of songs are scarce and tend to be rather conjectural (Ivarsson and Carroll 1998; García 2013) or devoted to very specific issues such as the use of same-language subtitling for the promotion of literacy (Kothari et al. 2010).

The international research group Translating Music (translatingmusic.com) aims to promote the study of music translation and to contribute to new developments in this field by gathering data on relevant publications. With a declared bias towards the investigation of opera, the group explores the interpersonal, intercultural, intralinguistic and interlinguistic bridges on which music and translation intersect, paying special attention to the way in which words linked to music are translated and looking for solutions to improve the provision of such translation, within but also beyond lyrics.

Song subtitling is a challenge for different reasons. Firstly, the usual spatial and temporal constraints apply (Chapter 4), even though their impact on song subtitling varies. Secondly, all song genres occur in audiovisual productions, which means that some of the arguments raised in publications about popular songs may have to be considered in some cases, whereas the concerns raised with respect to the translation of opera may have to be considered in others. Furthermore, all the issues raised elsewhere in this chapter and Chapter 8 with respect to the subtitling of marked language, cultural references and humour also recur when it comes to dealing with songs. Thirdly, viewers read the subtitles while listening to the music and lyrics at the same time, which may have an impact on the translation, rhythmically speaking. This issue, however, has hardly been researched, and reception studies into subtitling preferences for different genres are sorely needed.

In the next sections, we point out the questions that need to be asked when subtitling songs rather than supply clear-cut strategies. Our baseline is that subtitled songs belong to the category of song translations meant to be read rather than sung. We limit ourselves to the study of songs with audible lyrics, that is, lyrics that the film's first target audience has no trouble understanding and that can therefore have a bearing on the interpretation of the film. In Low's (2017) terms, we concentrate on songs that are based on the words and are logocentric, as opposed to songs that rely on the actual music and are therefore musico-centric.

7.3.1 Deciding what to translate
Some songs do not have to be subtitled. Firstly, the clients may have given instructions to that effect in their translation brief, for instance because of copyright issues. Netflix, for one, recommends only subtitling plot-pertinent songs if the rights have been granted. However, copyright issues tend to be very complicated since, if permission to translate needs to be given, both the author and the label or distribution company will usually need to give their go-ahead (Davis-Ponce 2015). Secondly, the dialogue list may not include the lyrics, and the language service provider, the TV channel/cinema or the translator may therefore decide that the songs are not important. In addition, the language in which a song is sung must also be considered. Thus, if the lyrics are not in the same language as the rest of the film (e.g. a French song
in a British film), they ought to be subtitled into a third language (say, Japanese) only if they were also subtitled in English in the original version. However, a film in Catalan, for instance, may not subtitle an Italian song, because the audience can be expected to understand it, whereas that same song would have to be subtitled in a Norwegian or Finnish subtitled version, as the linguistic distance between Italian and either of those two languages is greater than that between Italian and Catalan. If a translation is indeed required but no dialogue list has been provided, the internet is an invaluable source for finding lyrics, although it must be used with caution. The version found on the internet may differ from that in the film, in which case the lyrics heard on the soundtrack take priority.

Some other factors also need to be considered. To begin with, there may be a technical limitation. At the beginning or at the end of a film, a song may overlap with the credits. Theoretically, the song could then be subtitled, but guidelines from distribution companies may require that priority be given to the text on screen. In the middle of a film, there may be a degree of overlap with the dialogue. If this occurs, the subtitles always give priority to speech over lyrics. Indeed, should the opposite approach be required, the film will provide cues: the song will be louder than the voices, or the speech will become gradually inaudible. In scenes with some overlap, decisions as to whether to subtitle a song only have to be made when lyrics and dialogue stop overlapping at some point and the lyrics remain audible.

Then again, some songs may be internationally known whereas others may merely be used to suggest a certain period, in which case the melody may be just as important as the words. Moreover, some songs may have very simple lyrics, and we regularly hear songs that are not translated for us. If the audience can be expected to either know the song or understand it, subtitles are not required, especially if the lyrics do not contribute much to the story. But if a song is rather long, viewers who do not understand the words will start wondering about their meaning at best, and become seriously frustrated in the worst case.

In a relatively recent development, songs in dubbed films, including musicals, are sometimes subtitled today (García 2013). This may be because the original production features a famous singer, the original song is a culturally loaded classic, the original voice talent of an older dubbed film is no longer available for a newly edited version, or one of many other reasons. It is safe to say that if directors decide to include a song with audible lyrics, it is usually done for a purpose, and viewers who do not understand the words cannot estimate their relevance. So, when in doubt, and copyright is not an issue, the safest solution is to subtitle.

Some songs constitute the essence of a film, as in musicals. Others support the narrative more or less explicitly, whether or not they have been written for the film, and whether or not they are extraneous to the story or part of the fictional scene (as when a character switches on the radio or walks into a bar). Others still contribute to the story in a more indirect manner, by suggesting a mood or creating an atmosphere. These cases must be given special attention. Songs in musicals should be subtitled, as should songs that contribute explicitly to the film story, even in documentary films.
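As a purely illustrative summary of the considerations above, the short Python sketch below condenses them into a single decision helper. The rule set, parameter names and their ordering are our own simplification for the sake of the example, not an industry algorithm, and actual decisions remain a matter of editorial judgement and client instructions.

```python
# Toy checklist condensing the considerations discussed above.
# The rules and names are illustrative simplifications only.

def subtitle_song(rights_granted: bool,
                  audience_understands_lyrics: bool,
                  overlaps_dialogue: bool,
                  overlaps_credits: bool) -> bool:
    """Return True if, under this simplified rule set, the song should be subtitled."""
    if not rights_granted:
        return False      # no permission to translate, no subtitles
    if overlaps_dialogue:
        return False      # speech takes priority over lyrics
    if overlaps_credits:
        return False      # on-screen text may be given priority by the client
    if audience_understands_lyrics:
        return False      # known or transparent songs need no subtitles
    return True           # when in doubt, and copyright is not an issue, subtitle


# A plot-relevant foreign-language song, rights cleared, no overlap: subtitle it.
print(subtitle_song(rights_granted=True,
                    audience_understands_lyrics=False,
                    overlaps_dialogue=False,
                    overlaps_credits=False))   # True
```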
On the left in Example 7.23 is a parody of the song "Island in the Sun", rewritten for Boom Boom Bang, a documentary on the country of Malaysia, which questions the virtues of progress if all it means is killing local customs and ethnicity as well as nature. The lyrics of the original 1950s song by Harry Belafonte and Lord Burgess are on the right. The camera focuses on
the two female singers on a stage, and the first and second stanzas of their song run as follows; the words are clearly audible:

Example 7.23

Parody (left):
Our islands in the sun
Taken over for development
Never mind the environment
No more coral now and no more ikan.

This is my island Lankawie
Where all the goods are duty-free
They're building hotels so quickly
Is this the new curse of Mahsuri?

Original (right):
This is my island in the sun
Where my people have toiled since time begun
I may sail on many a sea
Her shores will always be home to me.

Oh island in the sun
Willed to me by my father's hand
All my days I will sing in praise
Of your forest, waters, your shining sand.
The lyrics obviously contribute to the message of the film, a message that is further enhanced through intertextuality for those viewers who know the song on which the new version is based. Unfortunately, not all instances in which a song is used are as clear-cut. Even if the lyrics do not actually provide any insight into a character's thoughts or story but merely suggest their mood, this may be relevant. In the film About a Boy, we hear the song "A Minor Incident" by Badly Drawn Boy when the protagonist, Marcus, returns home after his mother's failed suicide and finds her suicide note. The song reflects his mood and is narratively functional, which therefore calls for translation (Example 7.24). The subtitler renders the content reasonably closely and shifts a few words around to respect, and even emphasize, the rhythm and rhyme. This may upset the standard French word order slightly, but does not come across as unusual precisely because this is a song. A few concepts are rendered more explicitly than in the original and, intriguingly enough, none of the other songs in this film are subtitled:

Example 7.24

There's nothing I could say to make you try to feel OK.
→ Je ne peux rien dire, rien pour te faire sentir bien.
  [I can say nothing, nothing to make you feel good.]

And nothing you could do to stop me feeling the way I do.
→ Et rien dans ton comportement ne changera mes sentiments.
  [And nothing in your behaviour will change my feelings.]

And if the chance should happen that I never see you again,
→ Si le mauvais sort voulait qu'on ne se revoie jamais,
  [If bad luck would want that we never see each other again,]

just remember that I'll always love you.
→ rappelle-toi que je t'aimerai toujours.
  [remember that I'll always love you.]
7.3.2 Deciding how to translate
Like in the case of humour translation (§8.3), most authors writing about the translation of songs (Franzon 2008; Low 2017) favour a functionalist approach, i.e. one that aims to respect the purpose(s) of the original song. This involves a careful analysis of the ST, determining its form and function, as well as a careful analysis of the TT and its form and function, with a focus on achieving a functionally equivalent text. Low's (2017) pentathlon principle distinguishes between five criteria that need to be taken into account for singable translations to achieve such functional equivalence, and it is useful to consider them briefly. In addition to singability itself, his principles relate to naturalness, sense, rhythm and rhyme. Considering that subtitles are read while listening to the song, which itself supports the film narrative to different degrees, all four factors can be important in song subtitling. Since subtitles must be read at a glance, it will not do to produce a translation with complicated word order or otherwise complex language. In the earlier French subtitles from About a Boy, for instance, the word order is slightly unusual but remains easily comprehensible, complying with the principle of naturalness. Overall, it is fair to say that sense is the subtitler's main concern, as in the song about Malaysia, which criticizes thoughtless progress and development. The song from About a Boy reflects Marcus's mood and offers a comment on his relationship with his mother, which must also be retained. In some cases, rendering the atmospheric quality of the lyrics will suffice and a literal translation may not be required. In a musical, on the other hand, songs often take on the function of dialogue, and in those cases the content must really be respected. On these occasions, and this is obviously not limited to musicals, the concept of sense includes formal elements such as marked language that serves characterization and therefore also the narrative. In Example 7.25 from My Fair Lady, Higgins, the phonologist, is commenting on the way the working class's pronunciation betrays their lack of schooling. The French translation has undergone a shift, whereby the reference to the man's pronunciation is gone, but the subtitle still manages to render the gist of the conversation:

Example 7.25

Hey, you, sir, did you go to school?
'What do ya tike me for, a fool?'
→ Vous avez été à l'école ? "Oui... parole !"
  [Did you go to school? "Why... of course!"]

Well, no one taught him 'take' instead of 'tike'.
→ Cela ne fait pas honneur à votre école.
  [That is no credit to your school.]
On the other hand, not only content matters; the melodies of film songs are important too. The jury is still out on whether it is better to try to respect the rhythm or the rhyme of song lyrics in subtitling. Reaching for functional equivalence and achieving the right balance with respect to all criteria in each given instance is the dominant message, always keeping in mind that subtitles are a supporting translation and should not draw too much attention away from the images and soundtrack. A very rhythmical
song like "Duloc" from Shrek needs rhythmical subtitles; the Dutch translation also tries to rhyme but is not always faithful:

Example 7.26

Welcome to Duloc, such a perfect town.
→ Welkom in DuLoc zo'n perfecte stad.
  [Welcome to DuLoc such a perfect town.]

Here we have some rules, let us lay them down.
→ Hoor de regels hier, en nu veel plezier.
  [Hear the rules here, and now have fun.]

Don't make waves, stay in line,
and we'll get along fine.
→ Maar wees toch niet te wild, en blijf mooi in de rij,
  dan is iedereen zo blij.
  [But don't be too wild, and stay in line,
  so everyone is happy.]

Duloc is a perfect place.
→ DuLoc is zoals het moet.
  [DuLoc is as it should be.]

Keep your feet off the grass, shine your shoes, wipe your... face!
Duloc is, Duloc is, Duloc is a perfect place!
→ Dus blijf wel van het gras, veeg je schoenen, en je... snoet.
  [So keep off the grass, wipe your shoes, and your... face.]
  DuLoc is, DuLoc is, DuLoc is zoals het moet!
  [DuLoc is as it should be!]
Franzon (2008) claims that respecting the prosody of the lyrics in subtitling can actually enhance their readability, pointing out that this is the approach taken by the Swedish subtitler of the classic The King and I. The words of the subtitles fit the notes syllabically, and Swedish speakers "hearing the music would see that the subtitles follow the music in terms of rhythm and stress" (ibid.: 392); as a result, they "enjoy the translated lyrics simultaneously with the original melody as sung" (ibid.: 393). On the other hand, focusing on rhyme may upset the pentathlon's balance and may cause the subtitle to attract too much attention. With respect to rhyme, Low's (2017) advice is to consider whether it is central to the song or has been included only out of convention.

7.4 Exercises

For a set of exercises in connection with this chapter go to Web > Chapter 7 > Exercises
8 Subtitling cultural references, humour and ideology

8.1 Preliminary discussion
8.1 Translation challenges that are due to the different grammatical and lexical makeup of languages are translators’ daily fare. However, translators are also confronted with various types of cultural references, or other challenges that are not always of a linguistic nature in the strict sense but rooted in language all the same.
❶ How would you define cultural references?
❷ Give a few examples from your culture and discuss the difficulties a translator into another language may encounter.
❸ What kind of additional challenges might such cultural references hold for subtitlers as compared to, say, literary translators?
8.2 In the second line of the dialogue exchange from Shrek below, Shrek is referring to the princess that he and Donkey are about to rescue. What lies at the heart of the humour in the passage? Would it be difficult to subtitle into your mother tongue? Explain why (not).

Donkey: So, where's this fire-breathing pain-in-the-neck anyway?
Shrek: Inside, waiting for us to rescue her.
Donkey: I was talking about the dragon, Shrek.
8.3 What in your view is the most difficult type of humour to subtitle? Justify your answer.
8.4 What do you understand by the concept of ideology? How can it impact subtitling even before subtitling is undertaken?
8.2 The translation of cultural references

This chapter first considers a few different ways in which cultural references have been defined and categorized, then looks into the challenges they pose for translation and the translation solutions that are on offer.
8.2.1 Cultural references: what are they?

Cultural references (CRs) are references to items that are tied up with a community's culture, history or geography, and they can pose serious translation challenges. CRs are also referred to as culture bound terms, realia, culture bumps and, more recently, culture specific references by Ranzato (2016), who, in her study about dubbing, offers an interesting historical overview of the existing definitions and taxonomies in Translation Studies. Pedersen (2011: 43), who has written one of the most extensive works on the issue in the context of subtitling, uses the term extralinguistic cultural reference (ECR), which he defines as follows:

    Extralinguistic Cultural Reference (ECR) is defined as reference that is attempted by means of any cultural linguistic expression, which refers to an extralinguistic entity or process. The referent of the said expression may prototypically be assumed to be identifiable to a relevant audience as this referent is within the encyclopaedic knowledge of this audience.
In other words, an ECR refers to a phenomenon (referent) that exists in real life (extralinguistic) within a certain culture (cultural) and is expressed through the language of that culture (cultural linguistic expression). The original audience is assumed to be able to understand the reference (it is identifiable) through their cultural background (encyclopaedic knowledge). Pedersen's (ibid.: 45) definition on the one hand uses the term extralinguistic but on the other defines the concept as a cultural linguistic expression, which he explains by specifying, "[t]he expression part of the ECR is always linguistic, and thus always intralinguistic. It is the entity or process to which the ECR refers that is extralinguistic". However, this definition excludes a number of cultural references that Ranzato (2016) does include in her study, i.e. intertextual cultural references or allusions (§8.2). In addition, in some cases, the distinction between an item that is intralinguistic or extralinguistic is difficult to make: think, for example, of slang that is also bound to a specific social and geographically determined group. That is why in this book we opt for the term cultural reference, also leaving out the qualifier 'specific' because of the increasingly international nature of some cultural references in today's globalized world. Cultures, and especially the terms they use, can evolve quickly through mutual contact, due to which the difference between culture specific and non-culture specific terms, or their "cultural embeddedness" (ibid.: 58), is variable both geographically and temporally. Films and TV series are strongly implicated in such evolutions as they are distributed worldwide and through so many different channels (and translations) that some of them reach a vast and extremely diverse audience within the very first months after their release. Audiovisual productions' propensity to travel means that the cultural references they use to give shape to their story also travel extensively. In some cases that may facilitate translation (or restrict the need for translation, as is the case for the now international concept of Halloween, for instance), but it can also have the opposite effect, especially in films that use strongly institutionalized terminology such as legal terms bound to a national legal system. Some films also continue to circulate long after their first release and are often retranslated. Subsequent translations have greater time gaps to span. Furthermore, in subtitling, film semiotics (Chapter 3) is an important determining factor since both the visual and sound systems of a film contribute to the
way it gives shape to its SC(s). In brief, the degree of difficulty that given cultural references will present for new target audiences can never be taken for granted, nor is it easy to determine. Cinema's cultural diversity can present translators with a world of challenges, but these must be identified and assessed in each case before a translation strategy can be devised. The question then is, how does one assess the challenge? Traditional classifications of cultural references often take the shape of lexical taxonomies such as those of Newmark (1988), Nedergaard-Larsen (1993) and Díaz Cintas and Remael (2007). We subsequently reproduce the latter in a slightly adapted form, including Ranzato's (2016) distinction between real-world cultural references and intertextual cultural references or allusions. The concept of real-world cultural references is used to refer to items that originate in a specific culture or cultures, such as those listed in the following categories (all of which include historically bound references). Such items are like the ECRs in Pedersen's (2011) approach. The concept of intertextual cultural allusions, by contrast, focuses on "the intertextual relationship created between two cultural texts, and the effect this relationship has on the audience" (Ranzato 2016: 64) rather than on the real-world cultural origin of the item strictly speaking. Intertextual allusions can be overt (a direct mention of Hamlet in a dialogue exchange) or covert (a quote from Hamlet that the audience is required to identify).

8.2.1.1 Real-world cultural references

• Geographical references
  • to certain phenomena: mistral, tornado, tsunami, calima;
  • to physical, general locations: savannah, downs, plateau, plaza mayor;
  • to physical, unique locations: Lake Tanganyika, San Andreas Fault, Yellow River;
  • to endemic animal and plant species: sequoia, silky sifaka, platypus, pandani;
  • ...
• Ethnographic references
  • to food and drinks: tapas, trattoria, 豆腐 [tofu], Glühwein;
  • to objects from daily life: lederhose, igloo, sticky buds, bukhnoq;
  • to work: farmer, gaucho, machete, man of the cloth, ranch;
  • to art, media and culture: blues, Thanksgiving, it girl, Permeke;
  • to groups: gringo, Cockney, frat boys, Orang Asli, Sami, Miao;
  • to weights and measures: dollar, ounce, feet, pound, stone;
  • to brand names and personal names: SMI, Einstein;
  • ...
• Socio-political references
  • to administrative or territorial units: county, bidonville, constituency;
  • to institutions and functions: Reichstag, sheriff, congress;
  • to socio-cultural life: Ku Klux Klan, Prohibition, landed gentry, kowtowing;
  • to military institutions and objects: Feldwebel, marines, Smith & Wesson;
  • to personal names and institutional names: Che Guevara, Gandhi, NHS;
  • ...
8.2.1.2 Intertextual cultural references

• Overt intertextual allusions: an explicit reference to Hamlet or Game of Thrones.
• Covert intertextual allusions: all types of parody or other allusions taking the form of not explicitly identified references to other cultural artefacts, such as 'A car, a car, my kingdom for a car', playing on the original 'A horse, a horse, my kingdom for a horse' from Richard III.
Pedersen (2011) rightly points out that overlap between the different categories of a taxonomy, such as the previous one, is unavoidable (names, for instance, appear in geographical, ethnographic and socio-political references). We also realize that the aforementioned categories in themselves do not give the subtitler any indication of whether the items they include constitute translation problems in any given instance. Nevertheless, they offer a fairly comprehensive overview of the type of CR a subtitler may encounter.
Cultural references: what determines their translation?
In what follows, we briefy discuss the answer to this question ofered by the two aforementioned publications dedicated to AVT: Pedersen (2011) and Ranzato (2016). Pedersen (2011) introduces a number of parameters that appear to have infuenced the choices of the translators in his corpus of Scandinavian flms and that prospective subtitlers can keep in mind when determining their translation strategies. Tey ofer a methodological framework rather than a set of ready-made translation solutions. A frst determining factor is transculturality. Tis is the degree to which a cultural reference is known to or shared with the TC. Pedersen distinguishes between three degrees of transculturality in his ECRs: transcultural, monocultural and infracultural. Transcultural ECRs can be part of the SC, TC or a third culture, but they are known to all cultures involved in the communication exchange and do not cause translation problems. Many fast food chains from the USA have outlets in European and Latin American countries, for instance, and the products they sell can be considered to be known in all of those countries and therefore simply borrowed in the translation. Te scholar also indicates that the smaller the cultural distance, the more transcultural ECRs are likely to occur in a text. A monocultural ECR, by contrast, causes translation problems because its referent is not as easily identifable by the TC audience as by the SC audience. It therefore requires a conscious strategic translation decision and assessment of the target audience’s encyclopaedic knowledge of the world. An example could be the aforementioned ‘Feldwebel’, under socio-political references, a term of German origin denotating a non-commissioned ofcer rank. Te third category of infracultural ECRs consists of very specialized items of the SC that may not even be known to all members of the original SC audience but may actually, for that very reason, be explained through their co-text or the context in which they occur. One example would be the Flemish maturiteitsexamen [maturity exam], a fnal secondary school exam that was abolished decades ago. Extratextuality distinguishes ECRs that exist in the real world (and are text external) from those that exist only in the diegetic world of the flm (and are text internal), but no clear-cut translation decisions appear to derive from this distinction. More interesting in terms of the subtitler’s decision-making process is the concept of
Cultural references, humour and ideology
205
centrality, which refers to the importance that the ECR has for the scene or narrative. For Pedersen, this centrality works on a scale, which the subtitler must identify in any given case. Te higher the degree of centrality of an ECR, e.g. to the narrative, the less liberties the translator will be able to take with the translation. A parameter that Pedersen (ibid.: 113) rightly considers to be “at the very heart of what makes subtitles diferent from other forms of translation” is that of polysemiotics. Tis refers to the ancillary nature of subtitles, which are always added to a fnished product consisting of diferent semiotic layers, with which they have to interact to a greater or lesser degree, depending on the production at hand. As has been pointed out before (Chapter 3), the nature of the information conveyed by the visuals and the soundtrack will co-determine the amount of information to be included in the subtitles. Likewise, Pedersen’s (ibid.) concept of co-text, or the immediately surrounding dialogue/speech in which an ECR occurs, will co-determine the preferred translation strategy in a given scene. Another factor that has been discussed in great detail in this book (Chapter 4) is what Pedersen (ibid.) calls media-specifc constraints, referring to the technicalities of subtitling, which may require subtitlers to condense their text. Finally, efects of the subtitling situation is an umbrella parameter that encompasses the overall translation goals or overarching translation strategies pertaining to the text as a whole, i.e. the matricial norms, to use Toury’s terminology ... Te facts that infuence the Subtitling Situation [and] would ideally be included in what Nord calls “translation briefs” (1997: 115). (Pedersen 2011: 115)
However, as Pedersen points out, and as has been confrmed elsewhere (Robert and Remael 2016), detailed translation briefs tackling fundamental translation issues are virtually non-existent in subtitling, as guidelines, which amount to translation briefs in subtitling, are limited to technical parameters. Subtitlers therefore have to determine for themselves and through their contacts with broadcasters/employers the global, macro-level strategies that will inevitably inform micro-level strategies relating to the translation of (E)CRs. Ranzato (2016: 65), whose study focuses on dubbing but is most relevant for subtitling cultural references as well, attempts to devise a classifcation that “assumes without ambiguities the point of view of the TC audiences and potential translators and categorizes CSRs from this point of view”, rather than starting from the SC perspective. She uses the term exoticism to refer to the exotic nature of items that are not felt to be an integral part of any culture by the members of that culture, writing that “no objective claim can be made on the supposed exoticism of a given element which in fact may be exotic for the TC but far from exotic for a third culture” (ibid.: 63). Her classifcation, on which ours is based to some extent, is divided into real-world references and intertextual references. Te former comprise not only source cultural, but also intercultural, third culture and TC references. Te latter encompass overt intertextual allusions, covert intertextual allusions and intertextual macroallusions. All of these can be verbal or nonverbal as well as synchronous or asynchronous. In Ranzato’s classifcation of real-world references, source cultural references are those embedded in the SC, no matter how well known they are in the TC (one
206
Cultural references, humour and ideology
such example would be ‘Beijing’s Forbidden City’). Intercultural references have forged an objectively identifable bond between SC and TC such as Greenpeace, which has representatives across the globe. Tey have been absorbed, to various degrees, by the TC, to the extent that both cultures consider them their own, as is also the case with ‘Santa Claus’, a referent equally embedded in the cultures of many countries (ibid.: 66). Tird culture references are elements that “do not originally belong either to the SC or the TC but to a third culture” (ibid.: 67), a category that Pedersen (2011) subsumes within his transcultural references. For Ranzato, they deserve to be a separate group because they pose a diferent challenge to the translator, who needs to determine the degree of familiarity of the item for both the SC and the TC. Examples are the British English concepts of ‘scone’ and ‘Victorian age’ occurring in an Australian production to be translated into Tai or Arabic. Finally, target cultural references are exotic for the SC but not for the TC, which, again, has consequences for the translation strategy to be used, as they may require fewer explanations or may create room for subtleties in the translation. A singular example will be a reference to Russian culture in a French flm to be translated into Russian. Which classifcation is most helpful in identifying degrees of translation challenges will remain subjective to some extent and tied to the translation commission at hand. With reference to Ranzato’s solution, it may not always be feasible for a subtitler to keep track of the degree of exoticism of certain intercultural references, especially since today not all subtitlers translate into their own mother tongue and culture. Similarly, with reference to Pedersen’s solutions, the distinction between a monocultural and an intracultural reference may sometimes be difcult to ascertain. However, both authors’ classifcations are valuable in that they ofer subtitlers a detailed overview of the wide range of factors that they may have to consider when determining their translation strategies. Ranzato’s classifcation has the added bonus of including intertextual cultural references, which are a type of CR increasingly popular in contemporary cinema (Lievois and Remael 2017). Te referents of such allusions are part of another human-made artefact, be it literature or pop culture, and, in the present case, they establish a link between the flm to be translated and one or more other flms, books or other cultural items that belong to the human cultural heritage. Since overt allusions are openly identifed in the ST, they are not difcult to recognize but they may remain a challenge to translate nonetheless, and the translation strategy to be chosen will depend on all the factors discussed earlier. However, it is covert intertextual allusions in the form of indirect references to other texts that are usually considered true allusions. As Lievois and Remael (ibid.: 323) write: Filmic allusions are a form of intertextuality that ofers flm-literate audiences a richer flmic experience by evoking earlier flms in the flm they are watching through the use of diferent types of cues. Employed as a device to activate the memory of a flm seen in the past while watching a new flm, allusions also prompt the audience to interpret the current flm on the basis of their knowledge of the evoked flm.
It is therefore the task of the subtitler to ensure that the reference or ‘marker’, the term used by Ben-Porat (1976) in her seminal work on the poetics of literary allusion, remains indirectly identifable in the flm to be subtitled, which must allow the
audience to connect to the older film. Indeed, part of the charm of allusions is considered to reside in the intellectual exercise of detecting them. Consequently, both spotting an allusion and then finding an adequate way to render it in the TL are challenging for subtitlers. Especially problematic are allusions that refer to a cultural product that is embedded in the SC (in Ranzato's terms) or that have a monocultural referent (in Pedersen's terms). For the sake of completeness, we also mention Ranzato's intertextual macroallusions. In this case the entire ST functions as a referent or marker for another cultural artefact. Such an allusion can be overt (Ranzato gives the example of one episode of the TV series South Park which explicitly presents itself as an adaptation of Charles Dickens's Great Expectations) or covert (the example given here is that of Bridget Jones's Diary as a macroallusion to Jane Austen's Pride and Prejudice). The relevance of intertextual macroallusions for concrete subtitle decisions is hard to assess, though familiarity with the alluded work is clearly recommended.

8.2.3 Cultural references: translation strategies
As discussed by Ranzato (2016), most if not all authors who have studied cultural references have also (re)created overviews of strategies commonly used to tackle them. Overall, the solutions on offer range from very literal transfers to complete recreations, but no existing classification can cover all the translation strategies to which translators or subtitlers resort. As in the case of classifications of CRs, overlap is inevitable and strategies are often combined. We propose a reorganized version of the classification put forward in Díaz Cintas and Remael (2007: 202):

1 Loan
2 Literal translation
3 Calque
4 Explicitation
5 Substitution
6 Transposition
7 Lexical recreation
8 Compensation
9 Omission
In the case of a (1) loan, also known as borrowing, the ST word or phrase is directly incorporated into the TL and the target text because both languages happen to use the exact same word, be it because of historical tradition or because the term is being incipiently used in the TL, as is happening with many concepts from the information communications technology field today (e.g. 'startup', 'firewall', 'app'). Such terms often have the same FL source. Other examples are references to drinks or culinary specialities such as 'cognac' or 'goulash', but also place names that remain unchanged, such as 'Los Angeles', political terms such as 'guerrilla' or names of dances such as 'waltz'. When the languages are based on the same alphabet, e.g. Spanish and English, the loans tend to share the same spelling, with occasional minor adjustments, as in the case of the English 'football' becoming 'fútbol' in Spanish. However, when the languages are rooted in different alphabets, e.g. Chinese and English, the adoption of loan words
may take different routes. For instance, the term may be accepted in its original, usually Roman, alphabet without transliteration, as in the case of 'iPod' or the 'AK4' rifle. Alternatively, and more commonly, the receiving language may incorporate the loan after a process of transliteration, with the recreated term relying on a transcription that follows the phonetics of the original as closely as possible. Examples of this strategy in Chinese are 披萨 (Pi Sa) for 'pizza', 沙发 (Sha Fa) for 'sofa', 咖啡 (Ka Fei) for 'coffee' or 桑拿 (Sang Na) for 'sauna'. Languages based in the Latin alphabet do the same with expressions such as 'hummus', from the Arabic الحمص, 'kowtow' from the Cantonese 叩頭, 'feng shui' from the Chinese 風水, or with references to historical events such as 'perestroika', from the Russian 'Перестройка'. Despite their close resemblance, the use of loan words should be carefully monitored, as some of them may undergo semantic change in the transfer. Ranzato (2016) reports on the Italian loan word 'latte' in the Italian version of the well-known US TV series Friends. The Italian word in the English ST refers to a specific type of coffee, whereas the same word retransferred to its original language means 'milk'. On some occasions, they look like 'false' loans on the surface, when in reality they are lexical recreations, as is the case with the English-sounding term Handy in German to refer to a 'mobile' or 'cellular' phone, or footing in French and Spanish to mean 'jogging'. Conversely, they may be useful in maintaining intertextual allusions since they can literally reproduce an intertextual referent or marker in the subtitle.

A (2) literal translation is a special type of loan, whereby the subtitler borrows the form of expression in the SL and renders each of the elements literally into the TL structure. This is done in such a way that it seems to have been coined in the TL and sounds 'natural'. For instance, the Mexican celebration of Día de los Muertos becomes 'Day of the Dead' in English.

A (3) calque, by contrast, is a literal translation that somehow sounds 'odd' and thus competes with a more fluent expression in the TC, as is the case with Secretario de Estado in Spanish for 'Secretary of State', when Ministro de Asuntos Exteriores [Minister of Foreign Affairs] would be a more common and transparent title. Sometimes such terms do require an explanation. A British or US public with no prior knowledge of Dutch history will find the English calque 'States-General', for the Dutch Staten-Generaal, quite mysterious. This may be problematic in subtitling, where one rarely has room for explanations, unless the context or visuals come to the rescue. Calques can respect the semantic structure of the SL term (lexical calque), as in the case of 'paper tiger' in English, which is a calque of the Chinese phrase 紙老虎 [paper tiger], or the quite established блошиный рынок in Russian or bit pazarı in Turkish to render 'flea market'. On other occasions, calques introduce a new structure into the TL (structural calque), as in the expression 'chop chop' in English, which derives from the Cantonese 速速 [fast, quickly]. The degree of transparency of calques will therefore vary, with some being long established in the hosting language and hence more straightforward – Persian آسمان خراش or Albanian qiellgërvishtës for 'sky-scraper', whereas others still have a foreignizing effect that may confuse the viewer.
In other words, some calques start their lives as proper, disruptive 'calques' and, with time, move on to become 'literal translations'. When a foreignizing effect remains, the question is whether this impact is desirable in the given context. The Spanish and Turkish calques for 'jelly doughnuts' in Example 8.1 might mystify quite a few Spanish and Turkish viewers, and this solution contrasts with the more adaptive ones used in Dutch and Swedish:
Example 8.1

A: Yeah? What's that there?
B: You want? They're jelly doughnuts. You want a jelly doughnut?

Spanish: - ¿Qué es eso? - Buñuelos de jalea.
         [- What is that? - Doughnuts of jelly.]

Dutch: - Wat zijn dat? - Donuts met smurrie. Wil je er een?
       [- What are these? - Doughnuts with gooey stuff. Do you want one?]

Swedish: - Vad är det? - Syltmunk. Vill du ha en?
         [- What is it? - Jam doughnut. Do you want one?]

Turkish: - Nedir o? - Jöleli çörek. Sen de ister misin?
         [- What's that? - Jelly doughnut. Do you want too?]
Likewise, in the next example of a calque (Example 8.2), the speaker is referring to the Anglo-Saxon practice of car boot sales. At such gatherings, people sell second-hand items they no longer need more or less literally from the boot of their car, which serves as a shop window or market stand. Since the practice is unknown in a country like Spain, the following subtitle would no doubt puzzle many members of the Spanish audience:

Example 8.2

You know, start over. Maybe in Mexico. Sell blankets.
We'll work off the hood of the car or something.

Spanish: Empezar de cero, quizá en México. Montar una tienda en el capó del coche.
         [Start from zero, maybe in Mexico. Set up a shop on the hood of the car.]

Dutch: We beginnen een nieuw leven, in Mexico. We gaan met dekens venten.
       [We start a new life in Mexico. We'll go peddle blankets.]

Finnish: Muutetaan ja aloitetaan alusta. Mennään Meksikoon ja aletaan kaupitella huopia.
         [Let's move and start over from the beginning. Let's go to Mexico and start selling blankets.]

Turkish: Yeniden başlayalım, mesela Meksika'da. Arabanın üstünde battaniye satarız.
         [Let's start again, for example in Mexico. We'll sell blankets on the roof of the car.]
An additional feature of the calque (or literal translation in this case) is that it may also be used to ensure the preservation of an intertextual reference. The French film Astérix & Obélix: Mission Cléopâtre abounds with covert intertextual allusions, among others to Quentin Tarantino's Pulp Fiction. The first scene in Astérix, Example 8.3, features a domestic dispute between Caesar and Cleopatra, who are discussing the
grandeur and merit of their respective cultures. The Egyptian queen feels the pyramids surpass any Roman cultural achievement by far:

Example 8.3

Cléopâtre : Jusqu'à nouvel ordre, ô César, ce ne sont pas les Romains qui ont construit les pyramides !
[Until further notice, oh Caesar, it is not the Romans who have built the pyramids!]
→ As far as I know, the Romans didn't build the pyramids.

César : Ces trucs pointus, là ?
[Those pointed things, there?]
→ Those pointy things?
As he replies, Caesar draws a triangle in the air that appears in white lines on screen, as displayed in Figure 8.1. The visual refers to a scene from Pulp Fiction, in which Mia Wallace (Uma Thurman) accuses Vincent Vega (John Travolta) of being a 'square', while also drawing a square 'in the air', which then appears in dotted lines on the screen (Figure 8.2). The dialogue and accompanying gesture provide a link between the verbal and visual modes of the film in both productions, but the subtitle must be a literal translation for this link to be maintained in the translated film, where the direct reference is to the pyramids, ces trucs pointus or 'pointy things'.
Figure 8.1 Astérix & Obélix : Mission Cléopâtre
Figure 8.2 Pulp Fiction
Vinay and Darbelnet (1958/1995: 8) define (4) explicitation as "the process of introducing information into the TL which is present only implicitly in the source language, but which can be derived from the context or the situation". Based on semantic or grammatical considerations, the subtitler tries to make the ST more accessible by meeting the target audience halfway. This can happen through (a) specification or use of a hyponym; (b) generalization or use of a hypernym or superordinate; or (c) addition of extra information. In the first case 'tulip' or 'daisy' might be used for 'flower', while in the second case Le Soir might be translated as 'a Belgian (quality) paper', depending on the space available and the need to explain. Hypernyms are by far the most frequent in subtitling since they are generalizations that usually have an explanatory function, whereas hyponyms narrow down the meaning of a word. It would be unlikely for the word 'dog' to be replaced by 'Schnauzer', for instance. In the case of additions, which also tend to be rare in subtitles, mostly for spatio-temporal reasons, information is added to passages containing cultural references that are expected to cause comprehension problems but are essential for a good understanding of the programme, as is shown in Example 8.4:
Example 8.4
I first saw him at Palantine Campaign Headquarters at 63rd and Broadway.
→ Je l'ai vu à la permanence du candidat Palantine sur la 63ième et Broadway.
[I saw him at the headquarters of the candidate Palantine at 63rd and Broadway.]

Now, you can send him to the chair.
→ Podéis mandarle a la silla eléctrica.
[You can send him to the electric chair.]

After all, this was boomtown.
→ Londres était la ville du boum.
[London was the boomtown.]
The use of hypernyms includes the translation of brand names or abbreviations by the institution or concept they stand for: the Flemish ASO thus becomes 'secondary school' and TVP '(Polish) public TV'. From a denotative point of view this works perfectly, but local colour is obviously lost. The use of hypernyms, often dictated by the need for transparency, contributes to the loss of specificity that is typical of subtitling and shows that subtitlers cannot always opt for the shortest word available, since clarity may have to be given priority. In Example 8.5, 'Mau-Mau', a reference to a violent African secret society of the 1950s, has been replaced in French by cannibales. The speaker is referring to the black Manhattan neighbourhood of Harlem, which, in the meantime, has been gentrified, thus changing the nature of the CR:

Example 8.5
Fucking Mau-Mau land.
→ Un quartier de cannibales.
[A neighbourhood of cannibals.]

Who stole all your jewellery and sold it on eBay?
→ Wie stal je juwelen en verkocht ze op internet?
[Who stole your jewellery and sold it on internet?]
Explicitation can also take the form of translation of a third language, as in the following English example (Example 8.6), where the South African ST audience was expected to know the Zulu Umkhonto We Sizwe and the target audience was not:

Example 8.6
If you hear the name ANC or PAC or Umkhonto We Sizwe you know it is communist and that it is your enemy.
→ Als je 't woord ANC hoorde of PAC of De Speer van de natie,
[If you heard the word ANC or PAC or The Spear of the Nation,]
wist je dat 't om communisten ging, je vijand. Zo was 't je bijgebracht.
[you knew it was about communists, your enemy, so you'd been taught.]
An example in which explicitation leads to the unusual, though not impossible, addition of new information to the TT occurs in Toy Story 3 (Example 8.7). In the scene in which various toys are tied up to the front of a moving garbage truck, surrounded
by flies and other insects, Frog gives the following piece of advice to Lotso, the teddy bear. The subtitler into Arabic takes full advantage of the semiotic context and includes information that is not uttered in the original but fits in well with the images:

Example 8.7
Frog: Hey, buddy. You might want to keep your mouth shut.
→ يااخينا ماتقفل بؤك عشان الحشرات.
[Hey, brother, you better close your mouth because of the insects.]
An extreme case of explicitation is the addition of glosses or topnotes that are rather common in the field of fansubbing but considerably rarer in commercial subtitling (Díaz Cintas 2005b), where space and time constraints have been frequently invoked to explain why subtitlers cannot make use of such metalinguistic devices to justify their solutions. The imperative of having to synchronize original dialogue and subtitles, the need to stay within a maximum of two lines per subtitle, and the widespread belief that the best subtitles are the ones that are not noticed seem to confirm the idea that it is actually impossible to add any extra information alongside the translation. However, this assumption is being slowly questioned and challenged by new practices like the ones found in some commercial films and the ones displayed in Figures 8.3 and 8.4:
Figure 8.3 A Touch of Spice
Figure 8.4 Baby Cart in the Land of Demons
(5) Substitution, as we understand it here, is a variant on explicitation, which consists in replacing the cultural reference in the ST with a similar reference that already exists in the SC or in the TC (cultural substitution), or with an expression that fits the situation but shows no connection with the ST expression (situational substitution). A phenomenon typical of subtitling, it is resorted to when spatial constraints do not allow for the insertion of a rather long term, even if it exists in the TC. Typical examples are the names of culinary dishes that have become popular in different countries. The French sauce hollandaise is literally known as hollandaisesaus in Dutch but it might be translated as botersaus [butter sauce] if the space and time limitations are very strict. A Hungarian goulash will be 'goulash' in almost any European language but will sometimes become a 'stew' for the same reason.
In the case of (6) transposition, a cultural concept from one community is replaced by a cultural concept from another. This strategy is resorted to when the target viewers would probably not understand the ST reference should a loan or literal translation be used, and there is no room for explicitation. Transposition also implies some form of clarification. The Spanish El Corte Inglés [The English Cut] might be replaced by the Dutch HEMA or the English 'John Lewis', although this can result in a conflict with the foreign culture presented on screen, thus affecting the diegesis of the production and endangering the credibility of the translation.

Example 8.8
Eat your heart out, Julia Child.

Italian
→ Da fare invidia agli chef più esperti! [To make jealous the most expert chefs!]

French
→ Souffre en silence, Maïté. [Suffer in silence, Maïté.]

Spanish
→ Muérete de envidia, Arguiñano. [Die of jealousy, Arguiñano.]

German
→ Hättest du das nicht auch gern, Kindchen? [Wouldn't you like that too, little child?]
Likewise, when the ST contains clearly audible names, transposition may be problematic, though sometimes inevitable. The viewer will hear the character say one name and read another, which can be confusing. In the case of brand names that are not commercialized globally, the use of a hypernym might be a better solution, as in Examples 8.9 and 8.10:

Example 8.9
I've brought you some Matey.
→ Ik heb badschuim voor je.
[I've got bubble bath for you.]
Example 8.10
Howie, no. You know, I can't be your Rice-A-Roni.

Portuguese
Não, Howie. Não vou ser o teu prémio de consolação.
[No, Howie. I don't want to be your consolation prize.]

Italian
Howie, no. Non posso essere il tuo gioco da tavolo.
[Howie, no. I cannot be your board game.]
References to 'the Kennedy administration' are standardly 'transposed' in Dutch as de regering Kennedy [the Kennedy government] because the literal translation de Kennedy administratie would have a different meaning. By contrast, the treatment of
measurements and currencies will vary. In Example 8.11, 'dollars' are considered known, but the colloquial term for the subdivisions is not:

Example 8.11
He paid 25 dollars.
→ Il a payé 25 dollars.
[He paid 25 dollars.]

Hey Travis, have you got change for a nickel?
→ Hé, Travis. T'as la monnaie de cinq cents ?
[Hey, Travis. You have change for five cents?]
To conclude, the way in which subtitlers deal with cultural references may also vary from country to country, i.e. it may depend on local norms – but then again norms change, too. Whereas a decade ago Spanish subtitlers would usually opt for explicitations, and Flemish and Dutch subtitlers tended to prefer loans in the case of British or USA films, this difference appears to be fading. The fact that the soundtrack is heard and the audience might spot the discrepancy between subtitle and dialogue, as well as the increased or perceived transculturality of cultural references or the clarification provided by the co-text, are common motivations for favouring loans over transpositions, as illustrated in Examples 8.12 and 8.13:

Example 8.12
You've been a hot topic of conversation ever since you've joined the Structure family.
N.B. Structure is the name of a retailer.
→ Sei spesso argomento di conversazione da quando sei alla Structure.
[You're often a topic of conversation since you've been in Structure.]
Example 8.13
A: You seem in a strange mood.
B: No, no, no. I'm just probably a little drunk.
A: On Perrier?
B: What are you talking about? I had rum cake!

Spanish
- Te noto un poco raro. - Estaré algo borracho.
[- I notice you are a bit strange. - I may be a little drunk.]
- ¿Con Perrier? - He tomado tarta al ron.
[- With Perrier? - I have eaten rum cake.]

French
- Vous êtes bizarre. - Juste un peu ivre.
[- You are strange. - Just a bit drunk.]
- De Perrier ? - J'ai mangé un baba au rhum.
[- With Perrier? - I have eaten a rum doughnut.]

Swedish
- Du verkar konstig idag. - Jag är nog bara lite full.
[- You seem strange today. - I'm probably just a little drunk.]
- På Perrier? - Jag åt en romkaka.
[- On Perrier? - I had a rum cake.]

Portuguese
- Estás agindo estranhamente. - Estou um pouco bêbado.
[- You are acting strangely. - I am a bit drunk.]
- A beber "Perrier"? - O quê? Comi bolo de rum.
[- Drinking "Perrier"? - What? I ate rum cake.]
Yet, in some languages, explicitation by means of hypernyms may be favoured, as shown in Example 8.14:

Example 8.14
Finnish
- Vaikutat hassulta. - Taidan olla huppelissa. [- You look funny. - I guess I am a bit drunk.]
- Kivennäisvedestäkö? - Söin rommikakkua. [- From mineral water? - I ate rum cake.]
(7) Lexical recreation, or the invention of a neologism in the TL, is warranted, and may indeed be inevitable, when the ST speaker makes up new words. As shown in Example 8.15, the neologism is sometimes placed between quotation marks in the subtitle to let the viewers know that it is not a typographical error:

Example 8.15
This definitely rates about a 9.0 on my weird shit-o-meter.
→ Esto merece un 9 en mi "rarezametro".
[This merits a 9 on my "oddity-meter".]

Murder! Betrayal! Kidnapped! No, birdnapped!
→ Mord! Verrat! Geiselnahme! Nein, Vogelnahme!
[Murder! Treason! Hostage taking! No, birdtaking!]
(8) Compensation means making up for a translational loss in one exchange by being more creative or adding something extra in another, though it may not always be practicable in subtitling due to the oral-visual cohabitation of the source and target languages. All the same, compensation can be a blessing for the translation of humour (§8.3). In Example 8.16, the character is complaining about the bitterly cold weather while getting dressed and tucking his shirt inside his trousers, which perfectly justifies the Spanish solution. Indeed, the subtitle avoids making a reference to the rather cryptic concept of a 'chest wig' in Spanish culture while at the same time making the most
of the images and reinforcing the semiotic cohesion of the scene by establishing a new, direct link between the subtitle and the images: Example 8.16 I just regret that I haven’t brought my chest wig.
→ Arrepentido de no haber traído calzoncillos largos.
[Regretful for not having brought long johns.]
Compensation can also be activated to contribute to the linguistic characterization of the actors, as in Example 8.17, where the neutral English word 'company' of the ST has been translated as boîte [joint] in the French subtitles, which is much more informal in style, and suggests the way this character habitually speaks:

Example 8.17
You know in fact, I was gonna put one of your stickers on my taxi, but the company said it was against their policy.
→ Je voulais mettre un autocollant dans mon taxi
[I wanted to put up a sticker in my taxi]
mais la boîte a dit que c'était contre sa politique.
[but the joint said it was against their policy.]
(9) Omission has been discussed extensively in §6.3.2 and is often applied in the case of fast-paced speech, when the deletion of terms and expressions that appear in the original is unavoidable because of stringent space-time limitations. To a lesser extent, this strategy is also activated when the original reference is unknown to the target audience and the rest of the context is clear enough for the utterance to be understood, or when the TL simply does not have the corresponding term. References to ranks or professional positions, e.g. with the police force or in a medical environment, can be very difficult to translate. In the first of the following passages, from the detective series A Touch of Frost (Example 8.18), both spatial constraints and terminology play a part in the decision to omit the word 'superintendent' in Dutch. In the second subtitle, the name of the restaurant, with which the target viewers would probably not be familiar, has been omitted in Spanish:

Example 8.18
A: Does he know you're a policeman?
B: My name is Frost, Superintendent.
→ - Weet hij dat je van de politie bent? - Mijn naam is Frost.
[- Does he know you're a policeman? - My name is Frost.]

I thought we'd have a light dinner, you know, because we had a rich lunch at Twenty-One, I thought.
→ Cenaremos algo ligero, después de esa comilona.
[We'll eat for dinner something light, after that blowout.]
However, omissions may also be ideologically inspired, particularly in the case of the subtitling of taboo words (§7.2.1) and when dealing with sensitive topics like politics, sexual behaviour and religious rituals (§8.4). This section concludes with a brief discussion of the concept of official equivalent, as understood by Pedersen (2011), which in this book is not considered a translation strategy per se, but rather a result. It involves the use of a ready-made solution that is imposed by an authority such as a governmental agency or a broadcaster. The guidelines of the public broadcaster in Flanders, for instance, stipulate that measures and currencies must be converted to metric units and euros respectively for the sake of clarity in non-fiction but not in fiction, since in that case conserving the flavour of, say, an historical programme about the Roman Empire may be more important than rendering a distance accurately in kilometres (VRT 2013: 29). As the example shows, an official equivalent can therefore involve different translation strategies such as a loan or substitution, depending on the circumstances.
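By way of illustration only, the genre-dependent logic behind such a guideline could be sketched in a few lines of Python. The sketch is not taken from the VRT style guide or from this book; the function name, the rounding choice and the genre labels are assumptions about how a workflow might encode the rule just described:

# Minimal sketch, assuming the genre of the programme is known in advance.
# Non-fiction: convert to metric and round for readability; fiction: keep
# the source-culture unit so as not to disturb the flavour of the dialogue.
MILES_TO_KM = 1.609344

def render_distance(miles: float, genre: str) -> str:
    if genre == "non-fiction":
        km = miles * MILES_TO_KM
        return f"about {round(km)} km"   # a rounded figure is easier to read in a subtitle
    return f"{miles:g} miles"            # conserve the original unit in fiction

print(render_distance(50, "non-fiction"))  # about 80 km
print(render_distance(50, "fiction"))      # 50 miles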
8.3 The translation of humour
Like poetry, humour is sometimes deemed untranslatable, but just as much poetry is read in translation, many comedies are watched with subtitles and travel successfully across linguistic borders. As Schröter (2010) claims, humour is certainly translated somehow, so why is it considered to be such a challenge? Part of the answer resides in the fact that humour is a social event, at the same time universal and culture-bound.
8.3.1 Pinning down humour
The first difficulty arises with attempts to define humour. Although we all have an intuitive understanding of what humour is, defining it is such a tricky undertaking that definitions of humour and approaches to its study have accumulated over time, and continue to do so. None of them is or can be conclusive. Even the aim to produce laughter, including variants thereof such as smiling and giggling, cannot safely be named as humour's core characteristic since some humour may also aim to hurt, by being derogatory to certain social groups. Moreover, people may laugh for different reasons. Humour indeed has many purposes. In addition, some humorous utterances may intend to produce laughter but fail, whereas others may not have any humorous intent and produce laughter all the same. This is because, as Chiaro (2014: 17) writes, so many cognitive, emotional, social and expressive factors impact on what is or is not perceived to be humorous that one can never pin it down. We shall therefore refrain from attempting to offer yet another definition, but rather look at some of the recurrent features and functions of humour and the translation challenges it poses for subtitling, some of which will be very reminiscent of those posed by the translation of linguistic variation and cultural references. As was pointed out in the introduction to this chapter, both of these can actually contribute to the creation of humour. It is crucial to remember that humour is a form of communication which relies on a certain degree of common ground between the communicative parties in terms of linguistic and/or socio-cultural background and knowledge of the world. In the case of audiovisual media, some common background in audiovisual communication is also required since in audiovisual products humour will make use of all available
channels. However, in spite of this common ground, humour about a shared issue can have divisive as well as uniting effects: even telling a joke about a person two parties know may amuse some and shock others (Meyer 2000). Most publications dealing with the translation of humour in AVT take the impact of the audiovisual medium on the functioning of humour into account (Martínez-Sierra 2009; Chiaro 2010; De Rosa et al. 2014). On the one hand, they focus on verbally expressed humour, which harks back to the seminal General Theory of Verbal Humour (GTVH) by Attardo and Raskin (1991), because translation, especially subtitling, can supposedly impact on the verbal aspect of humour only. On the other hand, they also point out the challenges of the visual anchoring of verbal humour in audiovisual productions, as well as the way in which nonverbal sounds may contribute to the expression of humorous content. In addition, the concept of verbally expressed humour is extended to include "culture-specific references pertaining to the culture of origin which are frequently involved in humorous tropes" (Chiaro 2010: 2) or, in the words of Minutella (2014: 68), it studies "instances of language and culture-based humour" within an audiovisual context. The distinction between language and culture-based humour is not always clear-cut, just as the distinction between extralinguistic cultural references and linguistically embedded cultural references is not. Still, with respect to humour, Graeme (2010: 34), for instance, maintains a distinction that is close to Pedersen's (2011) with respect to cultural references, whereby verbal humour relies on the language that is used to express it whereas referential humour uses language to convey a comical story or event (that may be culture-based). Whatever the case may be, subtitlers will have to use language to help convey either type of humour, and a crucial impacting factor is the distance between the languages and cultures involved in the subtitled exchange. We will therefore return to the more general concept of exoticism, as coined by Ranzato (2016), to pinpoint degrees of familiarity/unfamiliarity of cultural references in our discussion of humour, especially since the knowledge of both the SC and the TC may cause problems. In fact, some of the humour in a film may originate in a third culture, especially in multicultural and multilingual productions (§3.3.2). There are numerous studies on what the various functions of humour are and on the way in which humour is activated, signalled or expressed (Attardo 2014, 2017). We concentrate here on some of the central features that recur in discussions on the functioning of humour and will help subtitlers detect and understand instances of humour. In GTVH, humour is always seen as an utterance or event that works on more than one level, playing on shifts between its propositional and connotational meaning or on two scripts that both overlap and oppose each other but are presented or perceived as one (Chiaro 2010). Scripts in this context are seen to be cognitive structures that humans possess to organize information and knowledge of the world. Two publications that apply GTVH to subtitling are Asimakoulas (2004) and Williamson and de Pedro Ricoy (2014). The functioning of humour hinges on six parameters, ordered hierarchically: script opposition, logical mechanism, situation, target, narrative strategy and language. However, not all parameters are equally important in all humorous instances.
As mentioned earlier, script opposition means that two scripts must be at the same time overlapping and in opposition to each other. Logical mechanism refers to the mechanism through which the incongruity is revealed (e.g. through
juxtaposition). Situation means that the humorous instance must be about something, i.e. a person, an object, a group etc. The target is the person or persons who are the butt of the humour, a parameter that will be prominent in parody, for instance. The narrative strategy refers to the humour's organization (e.g. working towards a punch line), which may be most prominent in verbal humour or jokes but could, in audiovisual texts, be expanded to include the organization of the four AVT modes, as Williamson and de Pedro Ricoy (2014: 168) suggest, since audiovisual humour may hinge on the interaction of visual–verbal, visual–nonverbal, aural–verbal and aural–nonverbal information. Finally, language denotes the linguistic form the jokes take, i.e. choices "on the phonetic, phonological, morphophonemic, morphological, lexical, syntactic, semantic and pragmatic levels" (Asimakoulas 2004: 822–823). Whereas Attardo (2002) focuses on the mechanisms of humour, three classical humour theories, discussed among others by Meyer (2000) and Larkin-Galiñanes (2017), analyze the causes of humour more generally in terms of the concepts of relief of tension, the presentation of incongruity or the creation of a feeling of superiority. According to the relief theories, people experience humour because they sense that their stress has been reduced in some way thanks to a humorous comment. From the perspective of incongruity theories, people experience humour or laugh because they feel surprise at the juxtaposition of incompatible things or thoughts, or due to an unexpected deviation from a generally accepted norm in a given utterance. According to superiority theories, "people laugh inwardly or outwardly at others because they feel some sort of triumph over them or feel superior in some way to them" (Meyer 2000: 314). As Vandaele (1999) also points out, superiority amounts to a form of increased happiness related to a heightened self-esteem that can make all or some of the parties involved feel they are better than others. These three classical theories' priorities do not exclude each other nor are they, in themselves, sufficient conditions for humour to occur. For instance, causing someone to feel inferior is not necessarily funny (for all). In addition, relief of tension, the presentation of incongruities and the creation of a feeling of superiority can be, and often are, combined (Meyer 2000) and, according to Larkin-Galiñanes (2017), they are essentially complementary. If a person A is ridiculed by a person B's humorous remark, B himself and another person C may feel superior at the expense of A, and C may experience relief at not being victimized herself. Another concomitant feature of this situation is that a bond may have been created between B and C, whereas it has isolated A. In addition, the combination of incongruity and feelings of superiority may have both a norm-confirming effect (confirming superiority) and a norm-undermining effect (questioning superiority). For Meyer (2000), humour is a double-edged sword, which can act as both a uniting and a divisive event. Notwithstanding the mechanisms proposed by Attardo (2002), whether an exchange or incident would be experienced as funny by a subtitling audience depends on a multitude of factors, including the context in which it occurs, its cultural implications, the way in which it is presented, the social background of the viewers, the historical distance between the viewers and the event, the mood of the viewers and their age, gender, sexual orientation and so on.
8.3.2 Subtitling humour
Consider Example 8.19 from Shrek that featured in the preliminary discussion of this chapter:

Example 8.19
DONKEY: So, where's this fire-breathing pain-in-the-neck anyway?
SHREK: Inside, waiting for us to rescue her.
DONKEY: I was talking about the dragon, Shrek.
The words 'fire-breathing' in Donkey's first turn lead the viewer to believe that he is referring to the dragon that the two heroes of the film are about to take on. Shrek's reply is therefore unexpected, an example of incongruity or misdirection, and is meant to provoke laughter because the personal pronoun 'her' in the second turn refers to the princess they are supposed to rescue. Beautiful princesses do not 'breathe fire' in traditional fairy tales, but Shrek is not a traditional fairy tale, it is a parody of one. The film therefore continually undermines our traditional expectations or the generally accepted fairy tale norms: it plays with two cognitive scripts, and we recognize this as viewers through the mechanism of the two characters referring to these different scripts: that of the fairy tale and its parody. Would the exchange be difficult to subtitle? Not from a language point of view since, some text reduction apart, the dialogue exchanges could be translated fairly literally. Not because of other audiovisual factors impacting on the narrative strategy, since the talk is not really anchored in a specific image onscreen. It is, however, linked to the dragon that at some point appears both visually and as part of the traditional fairy tale script. In other words, the humour relies heavily on culturally determined knowledge of the type of fairy tale that Shrek and its specific humour parodies, the target of the humour. Therefore, translating the local instances of humour in the film, including the present passage about dragons, could be challenging when translating for cultures that do not have the same fairy tale tradition/script, or for which dragons have completely different connotations, such as Chinese culture.
8.3.2.1 Detecting and interpreting humour
It should be clear from the aforementioned that humour does not function in isolation. Subtitlers, like other translators and viewers, must therefore detect, interpret and understand the functioning of humoristic instances in context before they can decide whether and how to translate them. In some soaps and television comedies, the use of canned laughter indicates the very place where humour is meant to occur even though it does not offer any help in understanding what is going on, in evaluating the target context or in producing a target equivalent. From a technical perspective, it does call for a synchronous humoristic translation that justifies the laughter. This also happens when characters react to other characters in the film or interviewees laugh at their interlocutors' jokes. If the subtitles fail to produce a humorous translation at the right time in such cases, they enter into conflict with the image and soundtrack, and the viewers will be under the impression that they are missing something. However, lack of synchrony between
source and target texts is not always due to a poor translation; it can also be the result of inaccurate spotting or simply of differences in word order in the two languages. Even though such shifts in synchrony may be confusing for viewers who understand the film dialogue to some extent, provoking laughter slightly earlier or later is not always problematic, given the myriad of restrictions subtitlers face. The next question is: how does one detect humour when there is no canned laughter? In the case of big blockbusters and TV productions, subtitlers will often be given a continuity and spotting dialogue list (§2.4), which alerts them to the jokes. When working with templates, professionals may also be given information in this respect to speed up the translation process. When a dialogue list is not provided, detecting humour in the original will largely depend on the subtitlers' personal sense of humour, their cultural background and world knowledge. However, the aforementioned six humorous parameters can help as they can somehow reflect the basic script incongruity that many see as central to humour. It can become apparent on the language level, for instance, through the many rhetorical devices humorous utterances use. In this respect, Burgers and van Mulken (2017) distinguish between textual and co-textual markers. The former are divided into direct markers ('she said smilingly'), typographic markers (emoticons), morpho-syntactic markers (onomatopoeic references), schematic markers (repetition) and tropes (hyperbole). As for the co-textual markers, these include linguistic markers (change of register from formal to informal), paralinguistic markers (tone of voice), and visual markers (body gestures, facial expressions). In an AVT context, one might also add sound effects (burps, farts) and visual effects of a more cinematic nature such as camera angles. As Vandaele (2002: 165) writes:

It is clear ... that: we may not always be able to grasp the sender's intentions; we may have our own (conscious or unconscious) agenda while grasping intentions; many other contextual elements play a role in the interpretation process; original contexts may be absent; new contexts may emerge continuously; the humorous function of a text may be combined with other textual functions. This means that a translator of humour has to make decisions.
Ultimately, subtitlers will need to spot the disturbance in the audiovisual text on any of the six levels discussed earlier and use their world knowledge to decide whether it is humorous. Once the humour has been located and understood, the decision has to be taken as to whether to translate it, i.e. identifying to what extent the humour in question is part of the texture of the film and therefore crucial for the diegesis. According to Zabalbeascoa (1997: 332):

[I]t would seem that there is often a need to strike a balance between a search for comic effect by making the translated jokes as funny as possible, on the one hand, and, on the other, finding solutions that will not put the viewer off because ... the plot, structure and the coherence of the text are weakened for the sake of certain witty one-liners.
Since humour can occur on different levels, e.g. in the interaction between words and images, as a play on words or as an integral part of the story plot, for instance in experiments with genre features and intertextuality, as in the British mockumentary
The Office, this means that its importance for the programme at hand will vary. Schröter (2010) points out that the quality of the text as a whole must also be considered when translating humour and wordplay, which means that toning the humour down may actually be an appropriate solution in some instances. Literal translation may not always be possible because of the asymmetry between the languages and, in audiovisual productions, free adaptations may result in clashes with the images, the sounds or the logic of the fictional world. Assessing the importance of humorous passages in the film that one is subtitling, on both the micro and the macro level, is essential for the production of adequate translations. Is the source text a comedy in which humour is part of the message and thus has high priority in the communication exchange? Is a given joke a local rhetorical device? Is it important for the joke to be close to the original? Is a faithful translation required or can/should the joke be replaced by a different, equivalent one?

8.3.2.2 Translating humour in subtitles

Having detected, understood and assessed the humour in a scene, the subtitler must then proceed to translate it. The degree of difficulty involved will vary greatly. Verbal humour may sometimes be easier than humour involving great cultural distance, and the strategies to be used will have to be adapted. Zabalbeascoa (1996) proposes a model of priorities and restrictions to account for the considerable degree of variability in translation. He starts from the premise that not all features can be carried over from the ST to the TT. Priorities are therefore goals, ranked in hierarchical order, establishing which formal and functional aspects/elements of the ST should ideally appear in the TT, and which more low-ranking ones may be sacrificed when a choice is inevitable. High-ranking priorities are considered a restrictive factor for lower-ranking priorities. Zabalbeascoa (ibid.) therefore calls a translation project more ambitious the more priorities it has and the higher their ranking. Concomitantly, a TT is richer than another to the extent that its ambitions have been fulfilled. Thus, the difficulty of a translation can be measured by the number and strength of the restrictions it entails, as assessed by the translators and tied to their ambition. Restrictions are not only textual (e.g. ST ambiguity, wordplay) but also contextual constraints or obstacles (e.g. the time allotted to a project) that produce hurdles for fulfilling the priorities or that may determine the absence or presence of certain priorities in a given translation. In other words, how many and which hurdles subtitlers take is not merely inherent in the ST but also depends on the priorities they establish, and, in the case of the translation of humour, the assessment of its importance. The translation challenges and solutions discussed next should be seen in this light: alternative decisions are always an option. Publications discussing translation strategies for humour (De Rosa et al. 2014; Martínez-Sierra 2009; Pai 2017) will list different categories, depending on the type of humour and corpus they are dealing with as well as the theoretical scaffolding behind their approach. Given that the strategies are context-bound, as is the case for cultural references generally, it is impossible to offer an exhaustive list that would cover all potential scenarios.
Some of the more general strategies listed under §8.2.3 will therefore work for the translation of humour as well, whereas some others will probably not. One of the most exhaustive surveys with reference to the translation of puns is still Delabastita’s (1996), which has also been used in
the context of subtitling in combination with GTVH (Williamson and de Pedro Ricoy 2014), and it can serve as a source of inspiration. Some of the strategies, or, rather, the translation resulting from them, show that not all humour can be or is translated. Referring to the translation of puns, Delabastita (1996) distinguishes the following outcomes:

ST pun to TT pun
ST pun to TT non-pun
ST pun to TT related rhetorical device
ST pun to TT omission
ST pun to TT compensatory pun
ST pun to TT pun not present in ST

Apart from the linguistic issues involved, the determining factors in deciding which strategy to activate will, again, be similar to what was the case for CRs: the degree of exoticism involved (Ranzato 2016) and the degree of audiovisual anchoring (Chiaro 2010). It is good to keep in mind, however, that humour sometimes fares better in AVT than in, say, literary translation. Subtitling may be constrained by sound and image, but visually as well as aurally conveyed information can at times be of great help. Even if the universality of images is limited, a lot of film humour still relies on the semiotics of the image and can work through the incongruous juxtaposition of images or through the comical gestures and facial expressions of the speakers. What is more, some of us may regret the dominance of Anglo-Saxon films on our screens, but North American cinema has created an internationally known culture that limits the problems of comprehension at the target end. As Chiaro (2005) highlights, many North American productions (e.g. My Big Fat Greek Wedding, Spanglish) are consciously poured into an international rather than a national mould in order to travel better. The following examples aim to offer an insight into the challenges raised by language issues, exoticism and audiovisual anchoring when dealing with the transfer of humour. They have been classified based on the type of problem they involve, with the understanding that the distinction is not always clear-cut. Culturally determined humour will be entangled in verbal expressions to different degrees, while verbally expressed humour may be anchored in an image or linked to a nonverbal sound. The grouping of the examples therefore remains subjective to some extent, and it is pragmatic in its goal of offering advice on how to achieve a form of functional equivalence where desirable and possible, depending on one's priorities and the restrictions one wishes to overcome. The sources of inspiration for the categories are Zabalbeascoa (1996) and Díaz Cintas and Remael (2007):

1 Language-dependent humour
2 National or community-based humour and exoticism
3 Audiovisual challenges

1 LANGUAGE-DEPENDENT HUMOUR
Zabalbeascoa (1996: 253) defines language-dependent jokes as relying upon "features of natural language for their effect", which is also a characteristic of "verbally expressed
humour", as coined by Chiaro (2005), and includes puns and wordplay. For Delabastita (1996: 128):

Wordplay is the general name for various textual phenomena in which structural features of the language(s) are exploited in order to bring about a communicatively significant confrontation of two (or more) linguistic structures with more or less similar forms and more or less different meanings.
In these circumstances, as Delabastita (1994: 223) also points out, the translation challenges are caused by the fact that the semantic and pragmatic effects of the source text wordplay find their origin in particular structural characteristics of the source language for which more often than not the translator fails to produce a counterpart, such as the existence of certain homophones, near-homophones, polysemic clusters, idioms or grammatical rules.
Substitution and compensation (§8.2.3) are often cited as the best way out (Chiaro 2005: 136), but as some of the following examples show they are not applicable in all cases of language-dependent humour, and, as a result, half-translations and semi-substitutions also occur. Delabastita (1996) identifies the following most common types of lexical wordplay relying on the confrontation of similar forms:

• homophones: different spelling, same pronunciation;
• homographs: same spelling, different pronunciation;
• homonyms: same spelling and pronunciation, different meaning; and
• paronyms: approximate sound and spelling.
However, morphological and lexical structures can be exploited too, as in 'I can't find the oranges, said Tom Fruitlessly', and 'Britain going metric: give them an inch and they will take out a mile' (ibid.: 130). All of these forms, and probably some others, occur in language-dependent humour in audiovisual programmes and do so in many different guises and combinations. To tackle them, subtitlers must first identify the purpose or intended effect(s) of the wordplay and then examine where they can and need to replicate the effect, including the extent to which it can and needs to be replicated and where, considering the co-text and macro filmic context in which they occur. This procedure applies to all types of examples discussed next. In Example 8.20, some of the original humour remains in the translation, although the fun element has become more of an ironic remark. The target viewers might still smile in the same place as the ST viewers, and the jocular style of the speaker is retained. In this case, the English makes use of a homonym: the word 'frank', which means 'honest', but is also a man's name. In the Dutch subtitles 'frank' is translated literally as eerlijk [honest] and the play on words therefore has to be altered completely.
In order to achieve this, the subtitler has made the most of the filmic co-text as the two characters are playing a game:

Example 8.20
A: I'll be Frank.
B: Oh, so who shall I be?
→ - Ik zal eerlijk zijn. - Goed, dan win ik.
[- I'll be honest. - Good, then I win.]
In Example 8.21, the humour derives from the semantic register of cooking both in the source and target texts. 'Mousey', which means 'timid' or 'introverted', is transformed into the Spanish sosa, which means 'lacking salt, bland'. This connotation is what the Spanish joke uses when it says that the two characters can add salt and pepper to each other, whereas the English suggests the two mousey people can have cheese together:

Example 8.21
A: Helen Dubin's wrong for Ted.
B: Yeah?
A: She's too mousey.
B: Well, he's a little mousey too. They can have their little rodent time. They can eat cheese together.

Spanish
Helen no pega con Ted. Es demasiado sosa.
[Helen does not go with Ted. She is too bland.]
El también. Pueden salarse mutuamente y echarse pimienta.
[He too. They can sprinkle salt and pepper on each other.]

German
Helen Dubin ist die Falsche für Ted. Sie ist irgendwie mausgrau.
[Helen Dubin is wrong for Ted. She is somehow mouse grey.]
Er auch. Sie können sich dann zum Käsefondue treffen.
[He too. They can then meet at a cheese fondue.]

Norwegian
Helen Dubin er feil for Ted. Hun er en grå mus.
[Helen Dubin is wrong for Ted. She is a grey mouse.]
Han er også en grå mus. De kan ha gnagerfest og spise ost sammen.
[He is also a grey mouse. They can have a rodent party and eat cheese together.]

Czech
Helen Dubinová není pro Teda. Je plachá jako myš.
[Helen Dubin is not for Ted. She is shy like a mouse.]
On taky trochu. Můžou si společně užívat u sýra.
[He too a little. They can enjoy the cheese together.]
Another play on the double meaning of names, this time invented paronyms, occurs in this excerpt (Example 8.22) from The Life of Brian. The characters of the Roman
governor, as well as the friend he mentions, are actually the victims of the joke. In the French translation similar paronyms are produced: Example 8.22 A: Have you checked?
Vous avez vérifié ? [Have you checked?]
B: Well, no, sir. I think it’s a joke, sir.
Non, mais je crois que c’est une blague. [No, but I think that it’s a joke.]
B: Like 'Sillius Soddus' or 'Biggus Dickus', sir.
Comme " Débilus Crétinus " ou " Enormus Vergus ". [Like "Retarded Cretin" or "Enormous Dickus".]
A: What's so funny about Biggus Dickus?
Qu'y-a-t-il de drôle dans le nom " Enormus Vergus " ? [What's so funny about the name "Enormus Dickus"?]
B: Well, it’s a joke name, sir.
C’est un nom inventé, gouverneur. [It’s an invented name, governor.]
A: I have a very great friend in Rome called Biggus Dickus.
J’ai un ami à Rome qui s’appelle Enormus Vergus. [I have a friend in Rome who’s called Enormus Dickus.]
In Example 8.23, the failed attempt at recreating humour in the Spanish subtitle, or the rather literal translation, which does not actually take the wordplay into account, endangers the logic and comprehensibility of the target sentence, while the translations into Italian and Portuguese do find some creative solutions:

Example 8.23
I was left for another man. A trainer named Dash. I was left for a punctuation mark.

Spanish
Me ha dejado por otro hombre. Un entrenador llamado Dash.
[He has left me for another man. A trainer called Dash.]
Me ha dejado de repente.
[He has left me suddenly.]

Italian
Mi ha lasciato per un altro uomo. Un allenatore di nome Dash.
[He has left me for another man. A trainer called Dash.]
Mi ha lasciato per un detersivo.
[He has left me for a detergent.]

Portuguese
Fui trocado por outro homem. Um treinador chamado Dove.
[I've been swapped for another man. A trainer called Dove.]
Fui trocado por um sabonete.
[I've been swapped for a soap.]
In Example 8.24 from Shrek, the play on the English expression 'to wear something on one's sleeve' has been translated more or less literally into the various languages exemplified, which, however, do not have a similar expression. The exchange becomes nonsensical at best:

Example 8.24
Donkey: We wear our fear out there on our sleeves.
Shrek: Wait a second. Donkeys don't have sleeves.
Donkey: You know what I mean.

Spanish
Los burros no tienen capas. Llevamos el miedo metido en las mangas.
[Donkeys do not have layers. We wear the fear inside the sleeves.]
- Pero los burros no tienen mangas. - Ya sabes a lo que me refiero. [- But donkeys do not have sleeves. - You know what I mean.]
Dutch Wij dragen onze angst op onze mouw. [We wear our fear on our sleeve.]
- Wacht! Ezels hebben geen mouwen. - Je weet wel wat ik bedoel. [- Wait! Donkeys have no sleeves. - You know what I mean.]
Portuguese Os burros não têm camadas. Não escondemos o medo na manga. [Donkeys have no layers. We do not hide fear up our sleeve.]
- Espera! Os burros não têm mangas. - Sabes o que quero dizer. [- Wait! Donkeys have no sleeves. - You know what I mean.]
Some jokes rely on metalinguistic features such as accents, but also on speech impediments. Most of these are impossible to render in subtitles even though occasionally attempts ought to be made because the joke is crucial and/or recurs in different places in the film. As illustrated in §7.2.2, the subtitler of Bienvenue chez les Ch'tis manages quite successfully to find an acceptable solution, since the entire narrative as well as much of its humour hinges on rendering the pronunciation of the characters from northern France in the subtitles. In The Life of Brian, the Roman governor, Pilate, has a speech impediment and cannot pronounce the phoneme 'r', which is exploited by various characters in conversation with him and even by the crowds listening to his speeches. In other words, it is a meaningful linguistic-oral joke, which occurs in a comedy, and therefore ought to get priority. In practice, this does not happen, at least not on the French DVD, although the character's mispronunciation of the 'r' could easily be rendered. The excerpt is already fairly nonsensical in the English version, as it tries to render the communication lapses resulting from Pilate's speech impediment. In the French, the confusion remains, although it is no longer clear what causes it, unless the audience relies completely on the spoken text:
Example 8.25
Pilate: Now, what's youw name, Jew?
→ Alors, comment t'appelles-tu, Juif ? [So, what's your name, Jew?]

Brian: Brian, sir.
Pilate: Bwian, eh?
→ - Brian, gouverneur. - Brian, hein. [- Brian, governor. - Brian, eh?]

Brian: No, no, Brian.
Pilate: The little wascal has spiwit.
→ - Non, Brian. - Ce rebelle a de l'esprit. [- No, Brian. - This rebel has spirit.]

Soldier: Has what, sir?
Pilate: Spiwit.
→ - Il a quoi ? - De l'esprit. [- He has what? - Spirit.]

Soldier: Yes, he did sir.
→ Oui, c'est ça, oui. [Yes, that's it, yes.]

Pilate: No, no, spiwit. Bwavado. A touch of dewing-do.
→ Non, de l'esprit. Du courage, comment dire, du cran. [No, spirit. Courage, let's say, guts.]

Soldier: Oh, about 11, sir.
→ Ils étaient onze, gouverneur. [There were eleven of them, sir.]
Two final examples (Examples 8.26 and 8.27) that provide a bridge into the next category come from Woody Allen's Manhattan Murder Mystery. The humour is neither purely linguistic nor purely cultural and demonstrates the difficulty of pinning down exoticism: some directors are extremely creative with language and invent new phrases or combinations of phrases, some of which even their own linguistic community does not necessarily share. However, this does not always mean that their translation into other languages needs to pose major problems as long as some form of humorous incongruity is retained:

Example 8.26
Adrenaline is leaking out of my ears!

Polish
Adrenalina cieknie mi z uszu.
[Adrenaline is dripping down from my ears.]
Danish Skynd dig! Der løber adrenalin ud af ørerne på mig. [Hurry up! There is adrenaline running out of my ears.]
Spanish ¡La adrenalina me sale por las orejas! [The adrenaline comes out of my ears!]
Hindi चलो, जल्दी। जल्दी। जोश मेरे कानों से निकल रहा है।
[Let’s hurry. Rush. Passion is coming out of my ears.]
In Example 8.27, the incongruity that is meant to elicit a humorous response resides in the contrast between the rather troubling discovery the protagonists make, the
way it is named, i.e. 'Claustrophobia and a dead body', and the additional positive comment on the discovery: 'a neurotic's jackpot'. This comment is somewhat neutralized in the Arabic and Hindi versions but not in the others. In fact, the Hungarian has added an incongruity through its juxtaposition of klausztrofóbia, which is a scientific term (its more informal equivalent being bezártsági érzés) and the term hulla, a very informal word, more akin to the English 'dead body':

Example 8.27
Claustrophobia and a dead body. This is a neurotic's jackpot.

Polish
Klaustrofobia i martwe ciało. To marzenie neurotyka.
[Claustrophobia and a dead body. This is a neurotic's dream.]

Hungarian
Klausztrofóbia és hulla! Neurotikus főnyeremény!
[Claustrophobia and a dead body! Neurotic jackpot!]

Hindi
संवृति भीति और अब एक लाश ǀ मेरी तो फट गई ǀ
[Claustrophobia and now one corpse. So I got torn (I was damn scared)]

Arabic
خوف من الأماكن المقفلة وجثة. هذا يسبّب انهياراً تاماً.
[Fear of closed spaces and a body. This causes complete meltdown.]

2 NATIONAL OR COMMUNITY-BASED HUMOUR AND EXOTICISM
National or community-based humour has its roots in a country and/or community's cultural heritage, traditions or even prejudices and is therefore exotic from a TC point of view. In other words, national or community-based humour and exoticism are two sides of the same coin and they are linked to humour's capability to divide, ridicule or reprimand as well as its potential to unify and generate feelings of superiority or inferiority (§8.3.1). Humorous instances of this nature, in the same way as exchanges involving CRs, may work extratextually or intertextually. Determining the degree of exotic separation between cultures, and/or the degree to which they share common ground, is becoming increasingly difficult in today's globalized world. On the one hand, the world is becoming smaller and Anglo-Saxon culture has become increasingly dominant; on the other hand, the need to reassert cultural difference is on the rise, spearheaded by glocalization forces. These developments have important implications for what is ethically acceptable in the translation of humour, from different prisms like gender, race, body ability and sexual orientation, for instance. In addition, cultures and culture-bound references, including humorous CRs, will continue to change over time. Even in the previous instances of language-based joking, the humour sometimes refers to a cultural issue, and often aims at a target. In the examples given next, both these factors become quite prominent. Díaz Cintas and Remael (2007) distinguish between:

1 Jokes that are international or bi-national
2 Jokes referring to a national culture or institution
3 Jokes reflecting a culture or institution
One may be able to apply these distinctions in some cases, but more often than not it will be more useful to simply assess the degree of exoticism in each instance one
encounters. The general impact of changing international relations apart, some of the viewers witnessing a given humorous exchange may be from a different, third culture, which means that a bi-national joke would risk being lost on them. An illustration of such a situation would be a Spanish viewer watching a Chinese film with English subtitles, or a Turkish one watching a French TV series subtitled in German. Each of the following examples requires the readers/translators to locate the humorous instance in their own linguistic and translational context in order to assess the extent of the challenges involved in translation. To begin with, many instances of humour rely on supposedly shared cultural or world knowledge, referring to film stars, multinationals, tourist attractions, artists or politicians, political events, facts about a country's history, etc. In such cases, their degree of transculturality or exoticism must first be determined before one proceeds to subtitle (§8.2.2). To what extent some (young) people today, in different parts of the world, will recognize what was once deemed an international joke is a matter of opinion, such as the one in Example 8.28, which is transferred by way of a literal translation:

Example 8.28
I can't listen to that much Wagner, you know? I start to get the urge to conquer Poland.

Spanish
No puedo escuchar tanto Wagner. Me entran ganas de conquistar Polonia.
[I can't listen to so much Wagner. I get the urge to conquer Poland.]

Turkish
Bu kadar çok Wagner dinleyemem. Polonya'yı fethetme isteği duyuyorum.
[I can't listen to this much Wagner. I feel the urge to conquer Poland.]

Hungarian
Ennyi Wagner nem fér belém. Mindjárt lerohanom Lengyelországot.
[This much Wagner does not fit into me, I shall invade Poland right away.]

Hindi
इतनी देर वाग्नेर को सुना कि पोलेंड ध्वस्त करने की इच्छा होती है
[Heard Wagner so long that I want to destroy Poland.]

Norwegian
Jeg klarer ikke så mye Wagner. Til slutt vil jeg erobre Polen.
[I can't handle that much Wagner. It ends with me wanting to conquer Poland.]

Polish
Nie mogę słuchać tyle Wagnera. Mam ochotę podbić Polskę.
[I can't listen to so much Wagner. I feel like invading Poland.]
Similarly, it remains to be seen to what extent audiences worldwide can be expected to be familiar with General Motors recalling defective cars, as the very different translation solutions in Example 8.29 demonstrate:

Example 8.29
A: I think you gotta see- I gotta- You gotta, You gotta go back to your shrink.
B: What do you m-
A: I want you to see Doctor Ballard again.
B: Huh? Larry, I went for two years.
A: I'm s- yeah, I know. But you- You know how General Motors will recall defective cars?
Spanish
Creo que tienes que volver al psiquiatra, al Dr. Ballard.
[I think you should return to the psychiatrist, to Dr. Ballard.]
Ya fui durante dos años.
[I already went for two years.]
La General Motors revisa su coches defectuosos.
[General Motors also checks defective cars.]

German
Geh zum Psychologen. Du must wieder zu Dr. Ballard.
[Go to the psychologist. You have to go back to Dr. Ballard.]
Larry, ich war da zwei Jahre lang.
[Larry, I was there for two years.]
General Motors ruft auch defekte Autos zurück.
[General Motors calls defective cars back too.]

Turkish
Psikoloğuna gitmen gerek. Dr. Ballard'a git.
[You need to see your psychologist. Go to Dr Ballard.]
Larry, iki yıl gittim.
[I went for two years, Larry.]
General Motors hatalı arabaları nasıl toplatıyor?
[How does General Motors recall faulty cars?]

Finnish
- Sinun täytyy mennä taas terapiaan. - Larry, kävin siellä kaksi vuotta.
[- You have to go back to therapy. - Larry, I went for two years.]
Sinun täytyy mennä viritykseen.
[You have to go to get yourself tuned.]

Greek
Πρέπει να πας στον ψυχίατρό σου. Να δεις τον δρα Μπάλαρντ.
[You have to go to your psychiatrist. To see Dr Ballard.]
Λάρυ, πήγαινα δύο χρόνια.
[Larry, I went two years.]
Ξέρεις πως η Τζένεραλ Μότορς ανακαλεί αυτοκίνητα;
[Do you know that General Motors recalls cars?]

Polish
Musisz iść do psychologa. Masz pójść do dr Ballarda.
[You need to go to a psychologist. Go to Dr Ballard.]
Chodziłam do niego przez 2 lata.
[I used to go to him for 2 years.]
Wiesz, jak GM odwołuje auta do naprawy?
[Do you know how GM sends cars for repair?]
Halloween, an Irish-American export and the topic of Example 8.30, has become popular enough worldwide thanks to its portrayal in numerous North American films and is now undoubtedly a transcultural reference up to a point. Nevertheless, some viewers will know that Halloween is 'the night of the living dead'; others might just have heard it is a night of 'trick-or-treat' games for children. Some countries will even have developed their own Halloween traditions. It is up to the subtitler to evaluate the scope of the culture-based humorous instance and determine what translation strategy it requires in a particular context:

Example 8.30
How could you see her? She's dead. Not only is she dead, she's even been cremated. It's not even Halloween.

Spanish
¿Cómo ibas a verla? Está muerta e incinerada.
[How could you see her? She is dead and cremated.]
Y no estamos en Halloween.
[And we're not in Halloween.]

French
J'en suis sûr. Tu as vu une morte incinérée ? C'est pas le carnaval !
[I'm sure. You've seen a dead (woman) cremated? It's not carnival!]
In Example 8.31, the cultural references are sometimes rendered through literal translation, and sometimes the translation is closer to explicitation or transposition. However, due to the technicalities of subtitling, the TTs are generally less powerful because the supporting prosody of the aural rendering has been lost:

Example 8.31
Meanwhile, I can't get the-the Flying Dutchman theme out of my mind, you know? Remind me tomorrow to buy up all the Wagner records in town and rent a chain saw.

Spanish
No puedo sacarme de la cabeza el tema de "El Holandés Errante".
[I can't get out of my head the theme of "The Flying Dutchman".]
Recuérdame que me compre todos los discos de Wagner,
[Remind me to buy all Wagner records,]
y alquile una sierra mecánica.
[and rent a chain saw.]

Norwegian
Jeg kan ikke bli kvitt temaet fra Den flygende hollender.
[I can't get rid of the theme from The Flying Dutchman.]
I morgen skal jeg kjøpe alle Wagnerplatene og leie en motorsag.
[Tomorrow I will buy all the Wagner records and rent a chain saw.]

Hebrew
בינתיים אני לא יכול להוציא את מוטיב "ההולנדי המעופף" מראשי.
[For the moment I cannot get the "Flying Dutch" motive out of my head.]
תזכירי לי מחר לקנות את כל תקליטי וגנר ולשכור מסור חשמלי.
[Remind me tomorrow to buy all the Wagner records and hire an electric saw.]

Italian
Intanto non riesco a togliermi il tema del vascello fantasma dalla mente.
[Meanwhile I can't manage to get the theme of the ghost vessel off my mind.]
Ricordami di comprare tutti i dischi di Wagner in città e una sega elettrica!
[Remind me to buy all the records by Wagner in town and an electric saw!]

German
Ich kann The Flying Dutchman nicht mehr aus dem Kopf kriegen.
[I cannot get The Flying Dutchman out of the head.]
Morgen kaufe ich alle Wagner-Platten der Stadt und eine Kettensäge.
[Tomorrow I'll buy all the records by Wagner in the city and a chainsaw.]

French
Le thème du Vaisseau fantôme ne me quitte pas.
[The theme of the Flying Dutchman does not leave me.]
Je vais acheter tous les disques de Wagner et louer une tronçonneuse !
[I am going to buy all the records by Wagner and rent out a chainsaw!]
In Example 8.32, the humour hinges on the cultural reference 'the San Andreas Fault', a continental transform fault that extends roughly 1,200 kilometres through California. Some subtitlers appear to have decided that the fault would be known by their target audience and have retained the reference, while others have resorted to explicitation to maintain the humour:

Example 8.32
Did you see the dumbbells this guy lift? If I lifted dumbbells like that I'd get a hernia the size of the San Andreas Fault.

Portuguese
Viste os pesos que ele levanta? Eu ficaria com uma hernia monstruosa.
[Did you see the weights that he lifts? I'd get a monstrous hernia.]

Finnish
Näitkö, mitä painoja se ukko nostaa? Minä saisin niistä valtavan tyrän.
[Did you see the dumbbells the guy raises? I'd get a huge hernia out of them.]

Polish
Widziałaś, jakie on podnosi ciężarki?
[Did you see the weights he lifts?]
Gdybym ja takie podniósł, miałbym straszną przepuklinę.
[If I lifted such weights, I would get a terrible hernia.]

Swedish
Såg du hans hantlar?
[Did you see his dumbbells?]
Om jag försökte lyfta dem, skulle jag få världens brock.
[If I tried to lift them, I'd get the world's worst hernia.]

German
Hast du die Hanteln gesehen, die er noch stemmt?
[Have you seen the dumbbells, which he still lifts?]
Ich würde mir einen Bruch so groß wie der San Andreas Graben holen.
[It would give me a hernia as large as the San Andreas Fault.]

Arabic
هل رأيت الدمبالت التي يرفعها هذا الرجل؟
[Have you seen the dumbbells that this man lifts up?]
إن رفعتها سأصاب بفتق بحجم صدع سان اندرياس.
[If I lifted them, I would get a crack as the size of San Andreas.]
An instance of humour that has to be assessed carefully in terms of cultural acceptability is the one in Example 8.33, involving taboo topics such as bodily functions, rendered in a somewhat shortened literal translation here:
Example 8.33
Man, you gotta warn somebody before you just crack one off. My mouth was open and everything.
→
Je moet even waarschuwen voordat je er één laat. [You must give a warning before you release one.]
Mijn mond stond open. [My mouth was open.]
In Example 8.34, different subtitlers resorted to different translation decisions, demonstrating that the degree of exoticism of the CR in the source text was assessed differently by some of them. In the example, the Spanish and Finnish subtitlers have opted for the use of a hypernym for the translation of Prozac, the name of the antidepressant that is central to the sarcastic remark, the Danish subtitler has opted for substitution and the Hebrew subtitler has retained the original brand name:

Example 8.34
There's nothing wrong with you that can't be cured with a little Prozac and a polo mallet.

Spanish
No tienes nada que no pueda curarse con una pastilla y un mazo.
[You have nothing that cannot be cured with a tablet and a mallet.]

Danish
Det, du lider af, kan ordnes med lidt Fontex og en trækølle.
[What you suffer from can be arranged with a little Fontex and a wooden spoon.]

Hebrew
את לא סובלת משום דבר שאי אפשר לרפא בפרוזאק ובמקל פולו.
[You don't suffer from anything that cannot be cured with Prozac and a Polo stick.]

Finnish
Rauhoittavat ja poolomaila kyllä parantavat sinut.
[Tranquilizers and a polo mallet will surely cure you.]
Similarly, as shown in Example 8.35, unfamiliarity with the referent could result in a distortion of meaning as well as humour. In the Spanish and Swedish subtitles, it is not clear what 'father' is being referred to, whereas in the Arabic and Greek versions, the humour has simply been eliminated by omitting the reference. In brief: a character is giving a hotel cleaner a tip. The cleaner eyes the one-dollar bill she has just been given sceptically, upon which the tipper, commenting on the face of George Washington depicted on the dollar bill rather than on the amount of the tip, remarks:

Example 8.35
What are you making a face for? He was the father of our country.

Spanish
No ponga esa cara. Fue el padre de la patria.
[Don't pull such a face. He was the father of the country.]

Swedish
Klaga inte. Det är vår landsfader.
[Don't complain. It is our country's father.]

Arabic
لِمَ تمتعضين؟
[Why are you vexing?]

Greek
Τι μορφασμός είναι αυτός;
[What grimace is this?]
Example 8.36, taken from the Chinese dating show If You Are the One, subtitled into English for an Australian audience, also involves explicitation and is a case of intertextuality. 闰土, Run Tu, is originally a male character in the novel 故乡 [My Old Home] by acclaimed Chinese writer Lu Xun, published in 1921. Run Tu is a lively country boy who grows into a dull, reserved and miserable country man. In contemporary mainland Chinese society, the reference primarily means 'a country boy', which carries the connotation of someone who has old-fashioned, outdated tastes and looks quaint. In addition, the name has given rise to wordplay by contemporary, young Chinese people since the word 'Tu' in Chinese can mean both 'earth' and 'out of fashion'. So 'Run Tu' is no longer just the character in Lu Xun's novel but has become a term used to make fun of people who look out of fashion, like a country boy. This prompted the following subtitle, which, at least, retains the semantics of the original and possibly some of its humour, especially in the dating show context since the person who utters these words is trying to win a lady's favour and is referring to himself, i.e. targeting himself with his humour:
Example 8.36
他们觉得我看起来像闰土一样。
→
[They think that I look like Run Tu/a country boy.]
They think I’m a country bumpkin.
A rather specific instance of culture-based humour, determined by the degree of common ground that the parties involved in the exchange share, is the sense of humour typical of a particular country or nationality that has another nation as its target. Many communities make jokes at the expense of sub-communities inside their borders, or poke fun at other nationalities (e.g. Spanish jokes about people from the southern town of Lepe, Dutch or French jokes about silly Belgians and Belgian jokes about stingy Dutch). Such jests in a way rely on a form of intertextuality, as one must know the insider national tradition to understand them. The humour can have religious overtones or be based on historical events, but more often than not, it is inspired by stereotypes and prejudice, sometimes even racism and sexism, and it can target ethnic and sexual communities. Example 8.37, from The Commitments, makes use of more or less international prejudices, known to many with a minimal sense of recent European history or current affairs. The whole passage was retained almost literally in the subtitles. Whether the translation would work outside (some) European countries is questionable:
Example 8.37
The Irish are the blacks of Europe, and Dubliners are the blacks of Ireland. And the north side Dubliners are the blacks of Dublin.
Les Irlandais sont les Noirs de l’Europe. Et les Dublinois, les Noirs d’Irlande. →
[The Irish are the blacks of Europe. And the Dubliners, the blacks of Ireland.]
Et ceux des quartiers nord sont les Noirs de Dublin.
[And those from the north boroughs are the blacks of Dublin.]
In the next example of community-based humour (Example 8.38), at the expense of North Americans, one could say that globalization has also played its part in the translation strategy. The first line of this joke from the British film Chicken Run could be considered a European joke, but its continuation with the unexpected wordplay on 'over here' in line two will no doubt work well in many countries worldwide. Note that the Dutch subtitler has opted for retaining the alliteration and formal repetition, coming up with a semantic variant that is different from the ST's but that works equally well, playing on public opinion about the USA in a slightly different manner:
Example 8.38
Pushy Americans! Always showing up late for every war! Overpaid, oversexed and over here!
Die arrogante Amerikanen! Komen altijd te laat in elke oorlog →
[Those arrogant Americans! Arriving late in every war]
overbetaald, oversekst en overbodig. [overpaid, oversexed and superfluous.]
Another variant on a community-bound sense of humour is discussed by Chiaro (2005), who foregrounds the fact that the translation of humour can even be problematic between countries sharing the same language, such as the UK and the USA, incidentally confirming that the translation of humour is a cultural as much as a linguistic issue. She writes:
There is no denying that British humor on screen is based on the nation's fixation with class and is conventionally not averse to punning, unlike USA comedy which prefers to play on the characterization of the individual and the gag to the punch. (ibid.: 138)
The issue is problematic, which the author acknowledges. Whether or not culture-specific senses of humour really exist has never been proven empirically; they may be a "figment of the imagination of pop psychologists" (ibid.: 140). Yet, comedy shows such as Seinfeld and Friends, imported to Britain from the USA, have been much less successful in the target country. And British television panel shows like Have I Got News for You or QI, which rely heavily on comedy, do not seem to travel that broadly beyond the British shores.
On some occasions, the subtitler may even decide to convey unintentional humour that the original target audience of a film read into the text, turning the director of the production into the butt of humour. This happened in the subtitled version of a Chinese documentary about the Cultural Revolution. At that time, every household was required to have the portrait of Chairman Mao on its walls. For a certain period, it was also required that everyone, when getting up in the morning, stand in front of the portrait of the Chairman, seeking instructions and directions from him, and that, in the evening, they report to the portrait again, telling the Chairman what they had accomplished that day. The standard line referring to this custom appeared in superscript on the documentary: 早请示,晚汇报 [seeking instructions in the morning and reporting back in the evening]. A line of this nature elicits much laughter from contemporary Chinese audiences, who find the habit of reporting to the portrait absurd. In order to retain this feeling of absurdity, triggered by the idea of reporting to a portrait, for her Australian audience, the subtitler in Example 8.39 resorted to a form of explicitation, attempting to cover both a cultural and a historical divide, while taking subtitling's spatial limitations into account:
Example 8.39
早请示,晚汇报。
[Seeking instructions in the morning and reporting back in the evening.]
→
Reporting to the portrait of Chairman Mao day and night.
8.3.3 Audiovisual challenges
Some audiovisual productions rely (mostly) on visual humour. This can be conveyed through shot angles, editing, the typical suspense set-up in which the viewer can see more and knows more than the character(s), but also through kinetic and paralinguistic means such as the gestures and facial expressions of the actors, which, strictly speaking, are supporting elements in verbally expressed humour. Whereas in the past, film was considered a sort of international Esperanto because of its mostly visual storytelling, today the universal value of the images is considered to be less straightforward. Once again, the assessment needs to be made on a case-by-case basis. In spite of cultural differences, there is a certain universality in gestures and miming, as successful series such as Mr Bean demonstrate. Moreover, some forms of visual communication may have become increasingly universal due to cultural globalization and the popularity of certain film genres. However, since in visual humour the image is supposed to do the job, translators can take a back seat and focus on the accompanying dialogue if there is any.
Aural humour also seems to fall outside the remit of the subtitler. Again, a distinction can be made between the paralinguistic characteristics of speech, e.g. accents and intonation, which will provide hints as to the emotional state of speakers, for instance, or, in the case of accents, their country of origin. On occasions like these the humour is usually connected to both linguistic and culture-bound features: a classic example is the Spanish character of Manuel in the British TV series Fawlty Towers. In other cases the sound effects or music (such as recurring thematic music) may also have a humorous effect or underline the impact of actions and/or dialogue and can determine the rhythm of a scene. In the Spanish film ¡Ay, Carmela! the two protagonists, man and wife, are comedians during the Spanish civil war. One of the best gags of the husband is the one in which he farts on stage, much to the amusement of the audience.
To conclude, in some scenes several or all the semiotic layers of the audiovisual text may come together to create humour: language-bound and culture-bound, expressed through verbal and/or visual means. The combination of visual information and metaphors or culture-bound references and wordplay can be especially mind-boggling (Pedersen 2015). The following excerpt from Chicken Run (Example 8.40) combines visually rendered information (a chicken wearing thick glasses) with a linguistic metaphor that relies on it ('four-eyes!'). The Dutch translation is quite satisfactory. Blinde kip [blind chicken] is an offensive Dutch expression used to refer to people with poor sight, which also implies that the person is clumsy precisely because they cannot see very well. In other words, not only do the subtitles retain the insulting reference to the chicken's thick glasses, they add a twist by using a TL expression with 'chicken':
Example 8.40
Don't push me, four-eyes!
→
Niet duwen! Blinde kip! [No pushing! Blind chicken!]
Disrespectful references to chickens abound in this film, but there happens to be none in this exchange. The translation therefore uses compensation for those instances in which the film did use an expression involving chickens and the subtitles did not.
8.4 Ideology, manipulation and (self-)censorship
Commercial subtitling's usual preference for the representation of propositional content over language variation and style, its focus on getting the message across rather than the form, including the translation of cultural references, and its ultimate aim to remain invisible can lead to translational choices that are not altogether neutral. Nornes (2007: 155) goes as far as calling commercial subtitling "abusive subtitling", accusing subtitlers of a "vision of translation that violently appropriates the source text [as] they conform the original to the rules, regulations, idioms, and frame of reference of the target language and its culture". This is no doubt often the case, but the extent of the (conscious) appropriation and/or manipulation, the term used by Díaz Cintas (2012), will vary greatly and is dependent on many factors. Díaz Cintas (ibid.) distinguishes between "technical manipulation", which refers to mostly linguistic manipulations that result from subtitling's spatiotemporal constraints, and "ideological manipulation", which highlights a form of interference inspired by political, religious, sexual, ethical or other considerations. It is this latter type of manipulation that can be considered a tool of conscious self-censorship and/or censorship, though in practice the two cannot always be distinguished. Ideological manipulation can sometimes be the unwanted result of technical manipulation, but time and space constraints may also be used as an excuse to omit taboo words or sexual references, for instance.
It is important for subtitlers to be aware of the ideological content of the production they are translating, the ideological context within which they are working, the pitfalls it may hold and their own ideological position. All subtitlers are informed by their habitus, which according to Bourdieu (1991) consists of our thoughts, tastes, beliefs, interests and our understanding of the world around us and which is "acquired and shaped, explicitly or implicitly, through the range of social experiences made available by socialization and education" (Hanna 2016: 43). This largely corresponds to what Mason (2010: 86) defines as ideology, i.e. "the set of beliefs and values which inform an individual's view of the world and assist their interpretation of events, facts and other aspects of experience". In addition, subtitlers will also be impacted by what Lefevere (1985: 227–228), studying literary translation, has called "patronage", or the "powers (persons, institutions) which help or hinder the reading, writing or rewriting of literature", and in our case, subtitling. This may refer to the board of directors of a broadcaster, a governmental body such as a board of censors, a religious committee, the translation brief issued by a company or the like. In addition, subtitling is a process that involves many steps, and it is not uncommon for changes to be made after the subtitles have been submitted. In other words, linguistic and language-related (cultural) issues are only part of what impacts on the final subtitle, for translators/subtitlers are always "conditioned by the socio-cultural environment and by the rules governing the period in which they operate" (Díaz Cintas 2012: 282). Nevertheless, awareness of one's personal biases, the working and socio-cultural conditions as well as the aim of the translation should allow professionals to take conscious decisions. Even if commercial subtitling strives to be invisible, the reality remains that it is always visible on screen and, therefore, any audience "is constantly aware of the fact that the text they receive is a mediated text" (Kruger 2012: 497). What is more, a text only comes into existence in interaction with its readers, who today have a plethora of media at their disposal where they can access fansubbing, activist subtitling and other forms of alternative subtitling to counterbalance their commercial counterparts (Dwyer 2017; Díaz Cintas 2018). Finally, these influences may also be impacting on commercial subtitling in some cases (Pérez González 2014).
The most flagrant form of censorship is probably non-translation, whether formally enacted by the prohibition to translate or camouflaged by deleting parts of the original audiovisual text, and is an area of research that has been the subject of numerous studies into dubbing in Italy and Spain (Díaz Cintas 2012, 2019). Very common in nations under authoritarian rule, the practice persists to this day, in various forms. As audiovisual productions travel round the world due to digitization and globalization, cultures become more aware of each other's traditions, but culture clashes also increase. One such example is the fate that befell The Wolf of Wall Street, the original DVD of which was not released in the Arab countries and which therefore simply did not get commercial Arabic subtitles. In order to bypass the official channels and to provide access to the film, fansubbers in Lebanon and Jordan took it upon themselves to subtitle it. Despite supposedly being free from governmental interference, and rather tellingly, the fansubs show a high degree of (self-)censorship, as evidenced in the study carried out by Eldalees et al. (2017), in which the various euphemistic translation strategies activated by the fansubbers for the translation of taboo language are categorized.
A quite different form of non-translation occurs on the DVD of La battaglia di Algeri [The Battle of Algiers], a classic docudrama about the beginning of the Algerian independence struggle. In this case, the subtitles appear to give preference to the voice of French authority over that of the Algerian people. In the film, the French colonialists speak French, and any Algerians addressing the French do so as well. However, when the Algerians speak among themselves they use the Algerian Arabic vernacular, a dialect with strong French influence, especially in the lexicon, but Arabic all the same. Whenever Algerian Arabic is spoken, the DVD provides French subtitles. Traditional subtitling rules state that film dialogue should get priority over songs and written text, but also over other types of secondary speech, like announcements over loudspeakers, background comments, and the like. However, what happens when the loudspeaker represents the voice of authority? In the documentary, the French army uses loudspeakers to admonish and instruct the population. In a couple of scenes, these French announcements coincide with comments made by bystanders in Algerian Arabic.
The subtitles give preference to the loudspeakers, probably because they are more important for the narrative development of the film, but at the same time it is the voice of the French colonizer that is heard and the voice of the protesting population that is lost. In the following scene in Example 8.41, the message of the loudspeaker (in French) and the dialogue of a few Algerian women (in Algerian) alternate (Benini 2005: 101). On the English DVD, only the official government message is subtitled. The turns spoken by the women, who pay absolutely no attention to the loudspeakers but are worrying about their sons, are left untranslated:

Example 8.41
A: Ayez confiance en la France et son armée. Le FLN veut vous affamer et vous condamner à la misère.
[A: Show your trust in France and its army. The resistance movement wants to starve you and condemn you to misery.]
B: El-hadu-llah ela es-slâma
[B: Thank God you are unharmed.]
A: La France est votre patrie.
[A: France is your homeland.]
B: Rabbi meakum yâ_ûladi.
[B: God be with you my sons.]
A: Ayez confiance en la France et son armée.
[A: Trust in France and its army.]
B: Saeid hoya ma-sefûs? Saeid hoya ma-sefûs?
[B: Have you not seen Said? Have you not seen Said?]
Given the context in which these lines occur, the implication of the women's words is obviously that it is the French who might have harmed their sons and brothers, not the resistance movement. Moreover, as pointed out by Benini (ibid.), the women are in full view of the camera, whereas the loudspeaker merely repeats a message that has already been broadcast before. Arguably, this can be considered a form of censorship.
Matters of ideology apart, the budget and time available for subtitling will also be a determining factor. The practice of pivot subtitling means that languages that are less known tend to be translated first into English subtitles and then into other languages. Any of the first translator's mistakes or biases are then carried over into the subsequent versions (Kapsaskis 2011; Ávila-Cabrera 2014), an Anglocentric habit that can lead to (involuntary) ideological manipulation. The rationale behind translation decisions can be hard to trace. In some instances, a client's or broadcaster's regulations clearly impact on the decisions pertaining to what to translate or not, as is demonstrated by Kruger (2012) in his study on the South African Broadcasting Company's translation policy with regard to South Africa's 11 official languages, many of which were clearly disadvantaged in terms of what was translated and what was not in comparison with the treatment of English. Another phenomenon that goes beyond the impact that translators themselves have on the subtitles is the way in which some TV channels and VOD providers have recourse to bleeping out swearwords and to replacing curses by asterisks or more complex grawlixes or obscenicons, i.e. cryptic symbols such as @#*?$*!, typically used in cartoons (§7.2.2; Díaz Cintas 2012).
In the following, concluding example from the Chinese dating show If You Are the One (Example 8.42), subtitled into English for an Australian public, the subtitler decided to self-censor the translation, by omitting the reference to Japanese people, for the sake of clarity and humour. The context in which this subtitle occurs is important. The programme is not a political film or docudrama but rather a show that aims to entertain and divert whilst still strongly embedded in Chinese culture:
Example 8.42
他就好像鬼子进村了。
[He’s like a Japanese devil sneaking into a village.]
→
He looks like he’s sneaking into someone’s house.
When a certain male candidate walked on to the talk show stage, looking left and then right, in a sheepish manner, this provoked laughter in the studio audience. The host, Meng Fei, who is known for his witty and sharp humour, commented 他就好像鬼子进村了 [He's like a Japanese devil sneaking into a village]. The 'Japanese devil' refers to Japanese soldiers during the Second Sino-Japanese war, the War of Resistance against Japanese Aggression in China. 'Japanese devils' are regularly portrayed in Chinese films featuring the battles and guerrilla warfare that took place against Japanese soldiers during that war and, in those films, they are usually depicted as sneaky individuals, walking gingerly and with hunched backs, as if scared of being discovered, stepping on mines or falling into traps. The reason why they would 'sneak' into villages was to ambush villagers, burn down their houses and cause destruction. This is therefore the origin of a common Chinese saying that refers to people walking in a suspicious manner as 'Japanese devils walking into a village'. Jing Han, the subtitler of If You Are the One, consciously and deliberately decided to omit the reference to a 'Japanese devil' in her translation for three reasons (personal communication). Firstly, the original Chinese context is lost on the new Australian target audience, which has never watched the guerrilla warfare films in which the Japanese devils are portrayed. Secondly, she wanted to avoid any risk of racist implications, something to which Australian audiences are especially sensitive. Thirdly, and arguably most importantly, the comment in the original was not meant as a political statement but rather as a humorous, sarcastic comment on the behaviour of one of the male participants in the show; a show that is popular because of its lively, funny and sarcastic exchanges.
In the introduction to this chapter on the subtitling of cultural references, humour and the issue of censorship in subtitling, we stated that the three topics are intimately interconnected but that they are also difficult to pinpoint. They vary over time and are embedded in language and cultural traditions, even if they find a more or less concrete representation in the audiovisual production that is to be subtitled and accessible for another language, culture and possibly time. In addition, subtitlers themselves also enter into this equation with their own habitus. The outcome of this encounter will always remain unpredictable, subject to alternative decisions and open to retranslation or resubtitling. For this reason, it is important that subtitlers are aware of the potential challenges and deal with them to the best of their ability.
8.5 Exercises
For a set of exercises in connection with this chapter go to Web > Chapter 8 > Exercises
9 Technology in motion

9.1 Preliminary discussion

9.1 What would you say the new 'technological frontiers' are in the field of subtitling?
9.2 Generally speaking, what do you understand by the concept of 'the cloud'? Can you provide some examples of activities people may carry out in the cloud?
9.3 Do you know of any cloud-based subtitling platforms?
9.4 Thinking specifically about the actual practice of subtitling, what do you think can be the benefits of working in a cloud-based subtitling platform? And the disadvantages?
9.5 What would be, in your opinion, the main pros and cons of working in the cloud for a project manager in charge of a subtitling project in numerous languages?
9.2 Tools for subtitlers

In the 20th century, the introduction of basic computerized tools led to deep transformations in translation workflows and productivity. In only a matter of decades, translation experienced a prodigious, twofold growth: not only did the volume and nature of translations expand but also the proliferation of translation tools and resources allowed for a significant improvement of translators' efficiency. In this respect, LSPs and translation vendors have been exploring for years the possibilities offered by new technologies and practices, such as crowdsourcing and microtasking, in their attempt to keep up with higher demand and faster turnarounds, while maintaining their budgets and keeping their competitors in check. AVT's ample financial and social potential has also been acknowledged by technology developers and manufacturers, who have invested time and effort in enhancing the functionality and efficiency of the tools used in this domain (§2.7). Academia, for its part, has of late veered from the study of linguistic issues to the scrutiny of the role played by technology (Díaz Cintas 2015; Matamala 2017; Díaz Cintas and Massidda 2019). The explosion in the production and circulation of audiovisual programmes, with the subsequent need for their translation into other languages, is bound to continue into the future, thus acting
as an incentive for software developers looking into new areas of expansion. Experimentation to ascertain the potential of machine learning, artificial intelligence and immersive environments in the field of audiovisual translation is bound to feature prominently in future research agendas and, to be successful in this ecosystem, translators need to adapt and adjust to the new changes so that they can harness state-of-the-art technologies to their advantage rather than risking being replaced by them.
The subtitler's workstation has not been immune to the effects of digitization, and multiple shifts and transformations have had an impact on its architecture over the last decades. Most of these shifts are shared with other translation practices and point to a progressive and steady migration to the cloud. Of the various AVT modes, subtitling has been the preferred one when it comes to testing and implementing new technological solutions, while dubbing and voiceover have been traditionally less affected (Baños 2018). Whilst developments in the technical dimension of subtitling have been numerous as regards, for example, spotting, shot changes, audio and speaker recognition and automatic colouring of text, the advances on the linguistic front have been much more modest. Some programs can facilitate text segmentation by automatically dividing the text of a script into subtitles based on linguistic rules, but the results tend to be disappointing and the participation of the translator is crucially required.
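To make concrete what such rule-based segmentation involves, and why its results can disappoint, the sketch below splits a script into subtitle-sized blocks using nothing more than sentence punctuation and a character count. The 42-character line and two-line maximum are common industry defaults assumed here purely for illustration, and the code is a toy example rather than a reproduction of any existing program.

```python
# A minimal, illustrative sketch of rule-based subtitle segmentation.
# It knows nothing about syntax or semantics, which is precisely why
# human revision remains indispensable.

import re

MAX_LINE = 42    # characters per line (a common default, assumed here)
MAX_LINES = 2    # lines per subtitle

def segment(script: str) -> list[str]:
    """Split a script into subtitle-sized blocks on punctuation and length."""
    # Break the text at sentence-final punctuation first.
    sentences = re.split(r"(?<=[.?!])\s+", script.strip())
    subtitles = []
    for sentence in sentences:
        words, lines, current = sentence.split(), [], ""
        for word in words:
            if len(current) + len(word) + 1 > MAX_LINE:
                lines.append(current)
                current = word
            else:
                current = f"{current} {word}".strip()
        lines.append(current)
        # Group the wrapped lines into one- or two-line subtitles.
        for i in range(0, len(lines), MAX_LINES):
            subtitles.append("\n".join(lines[i:i + MAX_LINES]))
    return subtitles

if __name__ == "__main__":
    for block in segment("I can't listen to that much Wagner, you know? "
                         "I start to get the urge to conquer Poland."):
        print(block, end="\n\n")
```

Because the routine counts characters but understands no syntax, it will happily separate an article from its noun or split a sense unit across subtitles, which is precisely the kind of segmentation a translator then has to repair.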
9.3 Machine translation and translation memory in subtitling
Automated approaches to subtitling have taken off in recent decades in an attempt to enable users to scale up productivity and reduce costs by utilizing functions to create subtitles from screenplays and media files as well as translating subtitle content using machine translation (MT) solutions and integrating artificial intelligence functions. Publications have focused on general challenges (Bywood et al. 2017), the quality of the raw output (Popowich et al. 2000; Armstrong et al. 2006; Burchardt et al. 2016), the role of post-editing (De Sousa et al. 2011; Georgakopoulou and Bywood 2014) and the potential gains in productivity for subtitlers using MT as opposed to human translation (Volk 2008; Volk et al. 2010).
One of the pioneering projects (2000–2004) to delve into the applicability of MT to the task of interlingual subtitling was MUSA (MUltilingual Subtitling of multimediA content, sifnos.ilsp.gr/musa). Desktop-based, it had the rather ambitious objective for the time of creating a multimedia system that would convert the audio stream of audiovisual programmes into text transcriptions with the help of an automatic speech recognition (ASR) system. Harnessing the power of ASR and rule-based MT, combined with translation memory (TM) applications, the ensuing output was to be condensed into subtitles subsequently translated into either English, French or Greek. The disappointing results were mainly due to the low efficiency of the MT system, which was too underdeveloped in those years to attain the project's goals.
To enhance the democratization of information and reach out to people with hearing disabilities or who do not speak other languages, YouTube's accessibility services started in 2008 with the development of a caption feature allowing users to add subtitles to their audiovisual content. A year later, the company announced the launch of an auto-translate feature, powered by Google Translate, to provide real-time translation of subtitles by simply clicking on the CC button and selecting the language of the user's choice from a list (Harrenstien 2009).
Funded by the European Union, SUMAT, an online service for SUbtitling by MAchine Translation (cordis.europa.eu/project/rcn/191741_en.html), which ran from 2011 until 2014, was a cloud-based initiative exploring the automation of subtitling. The project employed cloud-based statistical MT engines to translate subtitles in seven bi-directional language pairs thanks to a corpus of monolingual and parallel subtitles supplied by the participating LSPs (Bywood et al. 2017). The output from the automated process was followed by human post-editing, in an attempt to optimize the subtitling workflow and the resulting quality. Despite the reported positive results, issues relating to the copyright of the subtitles used in the building of the engine impeded the commercialization of the system. Though there seems to be scope for the application of MT technology to translate subtitles, further research needs to be conducted to improve the quality of the output and to ascertain the translator's effort to check and post-edit automated content as opposed to subtitling from scratch.
The usefulness of Computer Aided Translation (CAT) tools for the translation of audiovisual programmes is still relatively underexplored, as highlighted by Athanasiadi's (2017) study on current practices in the freelance subtitling industry, where she uncovers that subtitlers are very eager to utilize subtitling tools that incorporate any kind of language assistance, especially translation memories (TMs), to improve their efficiency. These tools have had a great impact in specialized and technical translation, but their uptake in AVT has been limited because of the supposedly fictional nature of many audiovisual productions. Though this might have been true in the past, when most of the materials being subtitled belonged to the entertainment genre, the situation is rapidly evolving, making it worthwhile for professionals to employ assisted translation tools in the subtitling of genres such as corporate videos, technical and scientific documentaries and any audiovisual text with a high percentage of linguistic repetition. In the case of traditional dramas with an episodic format, TMs can help strengthen cohesion across episodes when it comes to the translation of idiosyncratic expressions, certain taboo words or proper names of people and places that are often repeated. Some developers of TM systems have now ventured into this field with the launch of video preview players embedded in their traditional CAT tools so that translators can watch the actual videos while performing their translation and simulate their solutions on screen (Bolaños-García-Escribano and Díaz Cintas 2020b).
Research into speech technologies is accelerating, with new applications being developed and having a palpable impact on the profession. ASR tools have become pivotal in the transcription of original speech into written text, which can then be used to produce captions or subtitle templates for translation into multiple languages or, in combination with MT, to generate interlingual subtitles (Sawaf 2012). Another professional practice making use of ASR is respeaking (§1.5.2). At the other end of the spectrum, text-to-speech technologies are being used to make foreign subtitled programmes accessible to the visually impaired communities by creating a service which automatically produces talking subtitles using synthetic speech.
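To make the ASR-driven template workflow just described more tangible, the sketch below shows, purely illustratively, how a list of timed transcript segments of the kind such systems return might be written out as a plain SRT file for subsequent condensation, correction and translation. The Segment structure, the timings and the demo lines are invented for the example and are not taken from any real engine or platform.

```python
# Illustrative only: convert timed transcript segments (as an ASR system
# might return them) into an SRT subtitle template for later editing.

from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds
    end: float
    text: str

def to_timecode(seconds: float) -> str:
    """Format seconds as an SRT timecode, e.g. 00:01:02,500."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments: list[Segment]) -> str:
    """Render numbered SRT blocks from timed segments."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(f"{i}\n{to_timecode(seg.start)} --> {to_timecode(seg.end)}\n{seg.text}\n")
    return "\n".join(blocks)

if __name__ == "__main__":
    demo = [Segment(1.0, 3.2, "I can't listen to that much Wagner."),
            Segment(3.4, 5.8, "I start to get the urge to conquer Poland.")]
    print(to_srt(demo))
```

A template produced this way is only a starting point: the spotting, the segmentation and the reading speeds still have to be reviewed by a subtitler.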
The other field that is developing fast is speech-to-speech translation, which aims to replace all speech in a video with utterances in a different language and is bound to have an impact on the automation of dubbing. Hardware developments that prove the thirst for subtitles in everyday life include Will Powell's glasses, which, inspired by Google Glass, "provide (almost) real-time subtitles for conversations being held in foreign languages" (Gold 2012: online).
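Returning briefly to the translation memories mentioned earlier in this section, their appeal for episodic material comes down to retrieving previously approved renderings of lines that recur across episodes. The minimal sketch below assumes nothing more than an in-memory store of subtitle pairs and a crude similarity score; it is not modelled on any commercial CAT tool, and the class and method names are invented for illustration.

```python
# Illustrative translation-memory lookup for recurring lines in episodic
# material: exact matches are reused, near matches are offered as fuzzy hits.

from difflib import SequenceMatcher

class SubtitleMemory:
    def __init__(self) -> None:
        self.pairs: dict[str, str] = {}   # source line -> approved translation

    def add(self, source: str, target: str) -> None:
        self.pairs[source] = target

    def lookup(self, source: str, threshold: float = 0.75):
        """Return (match type, suggested translation, score) or None."""
        if source in self.pairs:
            return "exact", self.pairs[source], 1.0
        best, best_score = None, 0.0
        for stored, target in self.pairs.items():
            score = SequenceMatcher(None, source, stored).ratio()
            if score > best_score:
                best, best_score = target, score
        if best_score >= threshold:
            return "fuzzy", best, round(best_score, 2)
        return None

if __name__ == "__main__":
    tm = SubtitleMemory()
    tm.add("Don't push me, four-eyes!", "Niet duwen! Blinde kip!")
    print(tm.lookup("Don't push me, four-eyes!"))          # exact reuse
    print(tm.lookup("Don't push me around, four-eyes!"))   # fuzzy hit
```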
9.4 Migrating to the cloud
The development of Web 2.0 revolutionized the traditional professional translation processes, challenging the concept of high quality as consumers' preferences seem to have shifted towards immediacy, greater interactivity and lower costs. To meet some of these new demands, the industry has been experimenting with the automation and outsourcing of the translation process, which in turn has prompted companies specializing in translation solutions to move to the cloud. As discussed in the previous section, cloud-based platforms have been utilized in the field of subtitling to conduct experiments testing automated solutions and are progressively featuring in subtitlers' workbenches.
The underlying philosophy of cloud computing is to move away from the development of applications that reside on a client's computer or a server and create a virtual space between the devices, where the apps are shared. Devices such as laptops, smartphones, PCs, television sets and tablets can connect to the cloud from anywhere and everywhere, allowing users to perform a wide range of activities: store data (Dropbox, Google Drive), translate (Memsource, XTM Cloud, MateCat), subtitle (OOONA, Amara), play online computer games, and stream videos (Amazon Prime, Hulu, Netflix, Vimeo), among many others. The cloud turn is proving to be a harbinger of far-reaching changes in the professional landscape.
As it happens, some of the first subtitling initiatives conducted in cloud-based environments were the fruit of collaborative projects, initiated and powered by specific organizations or teams of volunteers, such as TED Talks (ted.com), rather than by commercial forces. To avoid having to download and install any specialist programs locally, they rely on online platforms built for the specific purpose of subtitling that are very easy to learn and use, as the contributors tend to be volunteers with limited subtitling skills rather than professional subtitlers. A comprehensive list of cloud-based subtitling platforms can be found on
Web > Appendices > 5. Cloud-based subtitling platforms
When it comes to the AVT industry, the technology-driven modus operandi in place is increasingly being articulated around a global pool of localization teams connected to proprietary cloud-based platforms, with the ultimate goal of improving speed, efficiency and scalability. In this agile environment, subtitling has been the preferred translation mode chosen by most developers to test the new waters. Essentially, cloud subtitling adopts a different holistic working cycle that resembles to some extent the typical chain of subtitling preparation followed by subtitling companies, but online. The first web-based proprietary subtitling system was launched by ZOO Digital back in 2009. Since then, a wide range of cloud-based subtitling tools have been developed by companies throughout the world, many of which are proprietary and can be used exclusively by company employees, while others, like OOONA, are available on demand to general users. What they all have in common is that they try to offer end-to-end services for the production and distribution of digital video, and, as they are browser-based systems, translators can access them with any device connected to the internet in order to carry out spotting and other subtitling-related tasks. Some, like OOONA's subtitling toolkit (Figure 9.1), are modular and offer users a wide range of tools to perform the cueing or text timing of interlingual as well as intralingual subtitles, to translate from templates, to review and proofread other linguists' translations, to convert files into some of the most widespread subtitle formats, to transcribe the original utterances and to burn subtitles onto the images, a.k.a. hard-coding, among many other functionalities. The toolkit is available in a basic as well as a professional version (e.g. Create Pro), which offers advanced functionalities such as shot change detection, waveform representation and audio scrubbing:
Figure 9.1 OOONA’s Online Captions & Subtitle Toolkit
Some of these platforms come with a quality control tool for (semi-)automated checks of the technical and linguistic dimensions of subtitling, cloud encryption to ensure that the content is safely stored online, and a fully visible monitoring/managing system which can optimize the company's internal workflow, from a potential entry test to the selection and onboarding of new freelancers to an automatic invoicing process. Furthermore, they allow users either to upload local video files from a PC's hard drive or to use video files that are hosted in other cloud-storage services (e.g. Google Drive and Dropbox) or video-streaming platforms (e.g. YouTube). Contrary to steadier desktop-based solutions, newer cloud-based subtitling tools also allow users to customize their own hotkeys or shortcuts, which can help solve issues caused, for instance, by the interference of the users' keyboard languages, browser versions and operating systems. Last but not least, these tools also offer the possibility of delivering the final product in different formats, facilitate the archiving and reviewing of the audiovisual productions, and enhance versatility with their option to be integrated with other cloud- and desktop-based systems. Some developers are testing the potential of ASR for the automatic alignment of text with audio to expedite the spotting task, whilst still allowing for subtitle editing, with options on positioning and use of colours in the case of SDH/captioning, as well as various other technical attributes that can be set by the client or the vendor. In some of these systems, as reported by Díaz Cintas and Massidda (2019), the transcription and linguistic transfer processes are assisted by the use of specialist CAT tools and MT systems.
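What such (semi-)automated technical checks amount to can be illustrated with a short sketch that flags overlong lines and excessive reading speeds. The thresholds of 42 characters per line and 17 characters per second are common industry defaults, assumed here for illustration rather than taken from the settings of any specific platform.

```python
# Illustrative sketch of the kind of technical checks a QC tool automates:
# line length and reading speed, measured in characters per second (CPS).

MAX_CHARS_PER_LINE = 42
MAX_CPS = 17.0

def check_subtitle(lines: list[str], start: float, end: float) -> list[str]:
    """Return a list of warnings for one subtitle (times in seconds)."""
    warnings = []
    for n, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            warnings.append(f"line {n} has {len(line)} characters (max {MAX_CHARS_PER_LINE})")
    duration = end - start
    if duration <= 0:
        warnings.append("end time is not after start time")
    else:
        cps = sum(len(line) for line in lines) / duration
        if cps > MAX_CPS:
            warnings.append(f"reading speed {cps:.1f} cps exceeds {MAX_CPS} cps")
    return warnings

if __name__ == "__main__":
    print(check_subtitle(["No puedo escuchar tanto Wagner.",
                          "Me entran ganas de conquistar Polonia."], 10.0, 12.0))
```

Real QC modules typically add further rules, such as minimum durations, gaps between subtitles and respect for shot changes, but they are variations on the same principle of measuring the text against client-defined parameters.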
Technology in motion
247
for all the connected professionals to enjoy the results without having to intervene themselves, thus promoting a more stable and seamless work environment. End users’ access to these platforms vary. As mentioned above, some can only be enjoyed by professionals working directly with the subtitling company whereas other initiatives, which were originally developed as proprietary tools to be used internally, have evolved and customized versions of the platform can now be purchased by other LSPs. Access, however, remains restricted to the companies’ assets. A diferent approach is the one taken by OOONA, who have created a series of applications hosted in the cloud, for which any user anywhere in the world can create an account and purchase a time-limited plan. From a fnancial perspective, cloud-based subtitling solutions tend to represent a positive development for translators as they are totally free to use, as in the case of proprietary systems, or can be handled on a pay-as-you-go basis, thus avoiding having to invest large amounts of money in purchasing licenses and periodical updates. Some cloud tools ofer monthly or yearly subscriptions, a development also mimicked by some desktop specialist software developers, which would work best for those translators who receive subtitling commissions intermittently. In essence, cloud-based systems aim to replace traditional workstations that usually require the installation of desktop-based software. To do so, their ultimate aim is to replicate the functionality of specialist desktop programs, if not to improve on it. Of all the subtitling tasks that can be performed in the cloud, creating subtitles with the online subtitling editor, as opposed to producing them with the help of a specialist desktop program, is perhaps the most challenging one. Slow internet connections can have a signifcant impact on the accuracy of the spotting and the generation of help fles to show the sound waves. Likewise, checking the shot changes can be challenging when it is the subtitler who uploads local video fles to the platform. Tis is one of the reasons why working with templates (§2.5) is becoming increasingly popular in cloud-based workfows, as it poses less technical risks. A set of guidelines on how to use OOONA to spot subtitles, to translate using templates and to review other colleagues’ subtitles can be found on Web > OOONA > Guides
The expansion of social media, in conjunction with cloud systems, has also brought about greater interactivity and connectivity among subtitlers and other specialists involved in translation projects. These initiatives have the potential of permitting translators to work synchronically and help each other, thus creating more ergonomic interfaces, introducing built-in social network connections and boosting teamwork in a profession traditionally characterised by individual, solitary work, especially in the case of freelancers. Nowadays, a common professional scenario is one in which a substantial number of individuals work together on the localization of the same audiovisual programme, in different language combinations and in different geographical spaces, at the same time. In this ecosystem, teams of various professionals work end-to-end on the management, multilingual translation, proofreading and delivery of the same subtitle projects simultaneously. This in turn implies greater volumes of data sharing and the need to implement efficient and sustainable collaborative methods, which the migration to the cloud is trying to address.
From the companies' perspective, an attractive benefit of cloud environments is that the entire project can be managed online, with people working from different geographical locations, which reduces the need for physical offices. In this respect, cloud-based platforms become the ultimate virtual workspace. They can reduce the overheads thanks to their leaner, streamlined workflow management, which helps save time and physical space in the editing, post-production and delivery stages of the process. In conjunction with this, the fact that employees, whether freelancers or in-house staff, can access the platform from any device connected to the internet, irrespective of their physical location, also adds flexibility to the workflow that can result in greater levels of satisfaction. To offset the disruption of potentially slow internet connections, some of the platforms allow users to conduct certain tasks offline, thus enhancing their operability. In an industry characterized by a frenetic rhythm and stringent deadlines, another upside of operating in an online management ecosystem is that the project manager is at all times able to check the progress of all the subtitlers and quality controllers working on a given multilingual project and can guesstimate whether deadlines can be met by some of the freelancers or whether some commissions should be divided into micro-tasks and reassigned to other colleagues. Furthermore, cloud-based systems seem to offer many advantages not only to subtitlers and project managers but also to clients themselves, who, once granted access to the online environment, can then monitor projects more closely and track how the project is progressing. They can also participate actively in the workflow by placing new orders, uploading relevant working documents and material, resolving queries related to the project or reviewing some of the translations before they are actually finished.
Some of the advantages and disadvantages of the technological developments described in this chapter, from a professional perspective, have been discussed by Baños (2018), who argues that for some these changes are a real threat leading to deprofessionalization, while for others they are a welcome paradigm that opens up the profession and bolsters the provision of more accessible and translated audiovisual content. As long as there has been a translation industry, there has also been severe competition around craftsmanship, tools and technology, pushing the boundaries of the profession and altering the relationship among the various stakeholders. Yet, to mitigate anxiety, technology developers should concentrate more on how to assist professionals and facilitate their work than on replacing humans. As technology is constantly evolving, current cloud environments will inevitably be superseded by more innovative systems that will allow users to share information and handle projects more efficiently. It is just a matter of time before the ways in which users operate in cloud-computing environments today experience yet another metamorphosis. In such a mercurial ecosystem, new entrants as well as experienced translators are expected to adapt and accommodate not only to the current technology but also to the new one still to emerge. Flexibility and adaptation in the face of innovation will be the key to professional accomplishment, bearing in mind that the path to success may not lie so much in the technology itself but rather in the innovative use the industry and its professionals make of it.
9.5 Exercises
For a set of exercises in connection with this chapter go to Web > Chapter 9 > Exercises
10 References
10.1 Bibliography
Abdallah, Kristiina. 2011. "Quality problems in AVT production networks: Reconstructing an actor-network in the subtitling industry". In Adriana Şerban, Anna Matamala and Jean-Marc Lavaur (eds.) Audiovisual Translation in Close-Up: Practical and Theoretical Approaches. Bern: Peter Lang, 173–186.
Al-Adwan, Amer. 2019. "Mapping Arabic subtitling conventions: The case of Dubai One and MBC 2". In Said Faiq (ed.) Arabic Translation Across Discourses. London: Routledge, 63–78.
Allan, Keith and Kate Burridge. 2006. Forbidden Words. Taboo and the Censoring of Language. Cambridge: Cambridge University Press.
Amazon. N.d. Caption (Timed Text) Requirements. videodirect.amazon.com/home/help?topicId=G201979140
Anderman, Gunilla. 1993. "Untranslatability: The case of pronouns of address in literature". Perspectives 1(1): 57–67.
Apter, Ronnie. 1985. "A peculiar burden: Some technical problems of translating opera for performance in English". Meta 30(4): 309–319.
ARD. N.d. Deutsch lernen mit Untertiteln in der ARD. daserste.de/specials/service/barrierefreiheit-imersten-untertitel-leichtes-deutsch-100.html
Armstrong, Stephen, Andy Way, Colm Caffrey, Marian Flanagan, Dorothy Kenny and Minako O'Hagan. 2006. "Improving the quality of automated DVD subtitles via example-based machine translation". In Proceedings of Translating and the Computer 28. London: Aslib, 1–13. mt-archive.info/Aslib2006-Armstrong.pdf
Arnáiz, Carmen. 1998. "El humor de las chicas de Almodóvar. La liberación de la lengua femenina". International Journal of Iberian Studies 11(2): 90–100.
Artegiani, Irene and Dionysios Kapsaskis. 2014. "Template files: Asset or anathema? A qualitative analysis of the subtitles of The Sopranos". Perspectives 22(3): 419–436.
Asimakoulas, Dimitrios. 2004. "Towards a model for describing humour translation: A case study of the Greek subtitled versions of Airplane! and Naked Gun". Meta 49(4): 822–842.
Athanasiadi, Rafaella. 2017. "Exploring the potential of machine translation and other language assistive tools in subtitling: A new era?". In Mikołaj Deckert (ed.) Audiovisual Translation: Research and Use. Frankfurt am Main: Peter Lang, 29–49.
Attardo, Salvatore. 2002. "Semiotics and pragmatics of humor communication". Babel A.F.I.A.L 1: 25–66.
Attardo, Salvatore (ed.). 2014. Encyclopedia of Humor Studies. Los Angeles: Sage.
Attardo, Salvatore (ed.). 2017. The Routledge Handbook of Language and Humour. London: Routledge.
Attardo, Salvatore and Victor Raskin. 1991. "Script theory revis(it)ed: Joke similarity and joke representation model". Humor 4(3–4): 293–347.
Ávila, Alejandro. 1997. La censura del doblaje cinematográfico en España. Barcelona: CIMS.
Ávila-Cabrera, José Javier. 2013. "Subtitling multilingual films: The case of Inglourious Basterds". Revista Electrónica de Lingüística Aplicada 12: 87–100.
Ávila-Cabrera, José Javier. 2014. The Subtitling of Offensive and Taboo Language: A Descriptive Study. PhD Thesis. Madrid: Universidad Nacional de Educación a Distancia.
Baker, Mona. 1992. In Other Words: A Coursebook on Translation. London: Routledge. Ball, Alan. 2000/1999. American Beauty. London: Film Four Books. Ballester Casado, Ana. 2001. Traducción y nacionalismo: la recepción del cine americano en España a través del doblaje (1928–1948). Granada: Comares. Baños, Rocío. 2018. “Technology and audiovisual translation”. In Sin-wai Chan (ed.) An Encyclopedia of Practical Translation and Interpreting. Hong Kong: Chinese University Press, 3–30. Baños, Rocío. 2019. “Parodic dubbing in Spain: Digital manifestations of cultural appropriation, repurposing and subversion”. The Journal of Specialised Translation 32: 171–193. jostrans.org/issue32/art_ banos.pdf Bartrina, Francesca. 2009. “Teaching subtitling in a virtual environment”. In Jorge Díaz Cintas and Gunilla Anderman (eds.) Audiovisual Translation: Language Transfer on Screen. Basingstoke: Palgrave Macmillan, 229–239. Bassnett, Susan. 1980/2002. Translation Studies. London: Routledge. Baxter, Sarah. 2004. “Prim America blanks out British TV’s naughty bits”. The Sunday Times, 16 May: 24. BBC. 2008. BBC Vision Celebrates 100% Subtitling. London: British Broadcasting Corporation. bbc.co.uk/pressoffce/pressreleases/stories/2008/05_may/07/subtitling.shtml BBC. 2019. Subtitle Guidelines: Version 1.1.8. bbc.github.io/subtitle-guidelines Benini, Nadia. 2005. Sous-titrage vers le néerlandais du flm La Bataille d’Alger. MA Dissertation. Antwerp: Hoger Instituut voor Vertalers en Tolken. Ben-Porat, Ziva. 1976. “The poetics of literary allusion”. PTL: A Journal for Descriptive Poetics and Theory of Literature 1: 105–128. Bhaba, Homi. 1994. The Location of Culture. London: Routledge. Bleichenbacher, Lukas. 2008. Multilingualism in the Movies: Hollywood Characters and Their Language Choices. Tübingen: Francke. Bleichenbacher, Lukas. 2012. “Linguicism in Hollywood movies? Representations of, and audience reactions to multilingualism in mainstream movie dialogues”. Multilingua: Journal of Cross-Cultural and Interlanguage Communication 31(2): 155–176. Bogucki, Łukasz and Mikołaj Deckert (eds.). 2020. The Palgrave Handbook of Audiovisual Translation and Media Accessibility. Basingstoke: Palgrave Macmillan. Bolaños-García-Escribano, Alejandro and Jorge Díaz Cintas. 2020a. “Audiovisual translation: Subtitling and revoicing”. In Sara Laviosa and María González-Davies (eds.) The Routledge Handbook of Translation and Education. London: Routledge, 207–225. Bolaños-García-Escribano, Alejandro and Jorge Díaz Cintas. 2020b. “The cloud turn in audiovisual translation”. In Łukasz Bogucki and Mikołaj Deckert (eds.) The Palgrave Handbook of Audiovisual Translation and Media Accessibility. Basingstoke: Palgrave Macmillan, 519–544. Bond, Esther. 2018. “Why Netfix shut down its translation portal Hermes”. Slator, 19 October. slator. com/demand-drivers/why-netfix-shut-down-its-translation-portal-hermes Bond, Esther. 2019. “New streaming services from Disney and Warner Bros. spell good news for localization”. Slator, 14 August. slator.com/demand-drivers/new-streaming-services-from-disney-and-warnerbros-spell-good-news-for-localization Bordwell, David, Janet Staiger and Kristin Thompson. 1985/1996. The Classical Hollywood Cinema. London: Routledge. Boria, Monica, Ángeles Carreres, María Noriega-Sánchez and Marcus Tomalin (eds.). 2019. Translation and Multimodality: Beyond Words. London: Routledge. Bosseaux, Charlotte. 2011. “The translation of song”. In Kirsten Malmkjær and Kevin Windle (eds.) The Oxford Handbook of Translation Studies. 
Oxford: Oxford University Press, 1–14. Bourdieu, Pierre. 1991. Language and Symbolic Power. Trans. Gino Raymond and Matthew Adamson. Cambridge, MA: Harvard University Press. Branigan, Edward. 1992. Narrative Comprehension and Film. London: Busum: Coutinho. Bravo, Conceição. 2008. Putting the Reader in the Picture: Screen Translation and Foreign-Language Learning. PhD Thesis. Tarragona: University Rovira i Virgili. sub2learn.ie/downloads/tesicondhino.pdf
Bréan, Samuel and Jean-François Cornu. 2014. “Les débuts du DESS/Master de Traduction Cinématographique et Audiovisuelle de l’Université de Lille 3 – Entretien avec Daniel Becquemont”. L’Écran traduit 3: 23–44. Brewer, Jenny. 2018. “Netfix unveils Netfix Sans, a new custom typeface developed with Dalton Maag”. It’s Nice That, 21 March. itsnicethat.com/news/netfix-sans-typeface-dalton-maag-graphic-design-210318 Brondeel, Herman. 1994. “Teaching subtitling routines”. Meta 34(1): 26–33. Brown, Andy and Jake Patterson. 2017. “Designing subtitles for 360° content”. BBC, Research & Development, 26 October. bbc.co.uk/rd/blog/2017-03-subtitles-360-video-virtual-reality Burchardt, Aljoscha, Arle Lommel, Lindsay Bywood, Kim Harris and Maja Popović. 2016. “Machine translation quality in an audiovisual context”. Target 28(2): 206–221. Burgers, Christian and Margot van Mulken. 2017. “Humor markers”. In Salvatore Attardo (ed.) The Routledge Handbook of Language and Humour. London: Routledge, 285–399. Bywood, Lindsay. 2016. Subtitling the Films of Volker Schlöndorff in English. PhD Thesis. London: University College London. Bywood, Lindsay, Panayota Georgakopoulou and Thierry Etchegoyhen. 2017. “Embracing the threat: Machine translation as a solution for subtitling”. Perspectives 25(3): 492–508. Cary, Edmond. 1960. “La traduction totale: cinema”. Babel 6(3): 110–115. Castro Roig, Xosé. 2001. “El traductor de películas”. In Miguel Duro (ed.) La traducción para el doblaje y la subtitulación. Madrid: Cátedra, 267–298. Cazden, Courtney B. 2001. “Socialization”. In Rajend Mesthrie (ed.) Concise Encyclopedia of Sociolinguistics. Amsterdam: Elsevier, 87–89. Cerezo-Merchán, Beatriz. 2018. “Audiovisual translator training”. In Luis Pérez-González (ed.) The Routledge Handbook of Audiovisual Translation. London: Routledge, 468–482. Cerón Clara. 2001. “Punctuating subtitles: Typographical conventions and their evolution”. In Yves Gambier and Henrik Gottlieb (eds.) (Multi)Media Translation: Concepts, Practices, and Research. Amsterdam: John Benjamins, 173–177. Channel 4. N.d. Channel 4 Subtitling Guidelines for Foreign Language Programmes. channel4.com/media/ documents/corporate/foi-docs/SG_FLP.pdf Chaume, Frederic. 2004a. Cine y traducción. Madrid: Cátedra. Chaume, Frederic. 2004b. “Film studies and translation studies: Two disciplines at stake in audiovisual translation”. Meta 49(1): 12–24. Chaume, Frederic. 2012. Audiovisual Translation: Dubbing. Manchester: St Jerome. Chaume, Frederic. 2013. “The turn of audiovisual translation: New audiences and new technologies”. Translation Spaces 2: 105–123. Chen, Yuping and Wei Wang. 2016. “Relating visual images to subtitling translation in Finding Nemo: A multi-semiotic interplay”. The International Journal for Translation and Interpreting Research 8(1): 69–85. trans-int.org/index.php/transint/article/view/467/249 Chiaro, Delia. 2005. “Foreword: Verbally expressed humor and translation: An overview of a neglected feld”. International Journal of Humour Research 18(2): 135–146. Chiaro, Delia (ed.). 2010. Translation, Humour and the Media: Translation and Humour. Vols. 1 and 2. London: Continuum. Chiaro, Delia. 2014. “Laugh and the world laughs with you: Tickling people’s (transcultural) fancy”. In Gian Luigi De Rosa, Francesca Bianchi, Antonella De Laurentiis and Elisa Perego (eds.) Translating Humour in Audiovisual Texts. Bern: Peter Lang, 15–24. Chisholm, Brad. 1987. “ Reading intertitles”. Journal of Popular Film and Television 15(3): 137–142. Cisco. 2019. 
Cisco Visual Networking Index: Forecast and Trends, 2017–2022 White Paper. cisco.com/c/ en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html Corrius, Montse. 2008. Translating Multilingual Audiovisual Texts: Priorities and Restrictions: Implications and Applications. PhD Thesis. Barcelona: Universitat Autònoma de Barcelona. Danan, Martine. 2004. “Captioning and subtitling: Undervalued language learning strategies”. Meta 49(1): 67–77. Dancyger, Ken and Jeff Rush. 1991. Alternative Scriptwriting: Writing Beyond the Rules. Boston: Focal Press.
Davis-Ponce, Jamie. 2015. “Who owns that song? How to research copyright ownership”. Sonicbids Blog, 16 March. De Bonis, Giuseppe. 2014. “Alfred Hitchcock presents: Multilingualism as a vehicle for . . . suspense: The Italian dubbing of Hitchcock’s multilingual flms”. Linguistica Antverpiensia NS: Themes in Translation Studies 13: 169–192. De Higes-Andino, Irene. 2014. “The translation of multilingual flms: Modes, strategies, constraints and manipulation in the Spanish translations of It’s a Free World . . .” Linguistica Antverpiensia NS: Themes in Translation Studies 13: 211–231. De Higes-Andino, Irene, Ana María Prats-Rodríguez, Juan José Martínez-Sierra and Frederic Chaume. 2013. “Subtitling language diversity in Spanish immigration flms”. Meta 58(1): 134–145. Delabastita, Dirk. 1989. “Translation and mass-communication: Film and TV translation as evidence of cultural dynamics”. Babel 35(4): 193–218. Delabastita, Dirk. 1994. “Word play as a special problem in translation studies”. Target 6(2): 223–243. Delabastita, Dirk. 1996. Word Play and Translation. Manchester: St Jerome. De Rosa, Gian Luigi, Franceca Bianchi, Antonella De Laurentiis and Elisa Perego. 2014. Translating Humour in Audiovisual Texts. Bern: Peter Lang. Deryagin, Max. 2018. Subtitle Appearance Analysis: Part 1: The Font. md-subs.com/saa-subtitle-font Desblache, Lucile. 2007. “Music to my ears, but words to my eyes? Text, opera and their audiences”. Linguistica Antverpiensia NS: Themes in Translation Studies 6: 155–170. Desilla, Louisa. 2012. “Implicatures in flm: Construal and functions in Bridget Jones romantic comedies”. Journal of Pragmatics 44(1): 30–53. De Sousa, Sheila C. M., Wilker Aziz and Lucia Specia. 2011. “Assessing the post-editing effort for automatic and semi-automatic translations of DVD subtitles”. In Proceedings of Recent Advances in Natural Language Processing, 97–103. aclweb.org/anthology/R11-1014 Díaz Cintas, Jorge. 1998. “La labor subtituladora en tanto que instancia de traducción subordinada”. In Pilar Orero (ed.) Actes del III Congrés Internacional sobre Traducció. Barcelona: Universitat Autònoma de Barcelona, 83–90. Díaz Cintas, Jorge. 2001a. La traducción audiovisual: el subtitulado. Salamanca: Almar. Díaz Cintas, Jorge. 2001b. “Sex, (sub)titles and videotapes”. In Lourdes Lorenzo García and Ana María Pereira Rodríguez (eds.) Traducción subordinada II: el subtitulado (inglés – español/galego). Vigo: University of Vigo, 47–67. Díaz Cintas, Jorge. 2003. Teoría y práctica de la subtitulación: inglés-español. Barcelona: Ariel. Díaz Cintas, Jorge. 2005a. “Audiovisual translation today: A question of accessibility for all”. Translating Today 4: 3–5. Díaz Cintas, Jorge. 2005b. “Back to the future in subtitling”. In Heidrun Gerzymisch-Arbogast and Sandra Nauert (eds.) MuTra 2005: Challenges of Multidimensional Translation. Saarbrücken: Saarland University, 16–32. translationconcepts.org/pdf/MuTra_2005_Proceedings.pdf Díaz Cintas, Jorge (ed.). 2008. The Didactics of Audiovisual Translation. Amsterdam: John Benjamins. Díaz Cintas, Jorge. 2011. “Dealing with multilingual flms in audiovisual translation”. In Wolfgang Pöckl, Ingeborg Ohnheiser and Peter Sandrini (eds.) Translation, Sprachvariation, Mehrsprachigkeit. Frankfurt am Main: Peter Lang, 215–233. Díaz Cintas, Jorge. 2012. “Clearing the smoke to see the screen: Ideological manipulation in audiovisual translation”. Meta 57(2): 279–293. Díaz Cintas, Jorge. 2015. “Technological strides in subtitling”. In S. Chan (ed.) 
Routledge Encyclopaedia of Translation Technology. London: Routledge, 632–643. Díaz Cintas, Jorge. 2018. “‘Subtitling’s a carnival’: New practices in cyberspace”. The Journal of Specialised Translation 30: 127–149. Díaz Cintas, Jorge. 2019. “Film censorship in Franco’s Spain: The transforming power of dubbing”. Perspectives 27(2): 182–200. Díaz Cintas, Jorge. 2020. “Audiovisual translation”. In Erik Angelone, Maureen Ehrensberger-Dow and Gary Massey (eds.) The Bloomsbury Companion to Language Industry Studies. London: Bloomsbury, 209–230.
Díaz Cintas, Jorge and Serenella Massidda. 2019. “Technological advances in audiovisual translation”. In Minako O’Hagan (ed.) The Routledge Handbook of Translation and Technology. London: Routledge, 255–270. Díaz Cintas, Jorge and Pablo Muñoz-Sánchez. 2006. “Fansubs: Audiovisual translation in an amateur environment”. The Journal of Specialised Translation 6: 37–52. Díaz Cintas, Jorge and Pilar Orero. 2003. “Course profle. Postgraduate courses in audiovisual translation”. The Translator 9(2): 371–388. Díaz Cintas, Jorge and Aline Remael. 2007. Audiovisual Translation: Subtitling. Manchester: St Jerome. Dollerup, Cay. 1974. “On subtitles in television programmes”. Babel 20: 197–202. Dorado, Carle and Pilar Orero. 2007. “Teaching audiovisual translation online: A partial achievement”. Perspectives 15(3): 191–202. Dwyer, Tessa. 2005. “Universally speaking: Lost in translation and polyglot cinema”. In Dirk Delabastita and Rainier Grutman (eds.) Fictionalising Translation and Multilingualism: Linguistica Antverpiensia NS: Themes in Translation Studies 4: 295–310. Dwyer, Tessa. 2017. Speaking in Subtitles: Revaluing Screen Translation. Edinburgh: Edinburgh University Press. D’Ydewalle, Géry and Johan Van Rensbergen. 1989. “Developmental studies of text-picture interactions in the perception of animated cartoons with text”. Advances in Psychology 58: 233–248. D’Ydewalle, Géry, Johan Van Rensbergen and Joris Pollet. 1987. “Reading a message when the same message is available auditorily in another language: The case of subtitling”. In J. K. O’Regan and A. Lévy-Schoen (eds.) Eye Movements: From Physiology to Cognition. Amsterdam: Elsevier Science Publishers, 313–321. EBU. 2004. EBU Report: Access Services: Includes Recommendations. European Broadcasting Union. tech. ebu.ch/docs/i/i044.pdf Eco, Umberto. 1979. A Theory of Semiotics. Bloomington: Indiana University Press. Edelberg, Elisa. 2017. “YouTube adds automatic captions to 1 billion videos: But that doesn’t mean they’re accessible”. 3PayMedia, 20 February. 3playmedia.com/2017/02/20/youtube-adds-automatic-captions-to1-billion-videos-but-that-doesnt-mean-theyre-accessible Eldalees, Hani Abdulla, Amer Al-Adwan and Rashid Yahiaoui. 2017. “Fansubbing in the Arab world: Modus operandi and prospects”. Arab World English Journal for Translation and Literary Studies 1(1): 48–64. Elks, Sonia. 2012. “Children ‘spend one year in front of screens by the age of seven’”. Metro, 21 May. metro.co.uk/2012/05/21/children-spend-one-year-in-front-of-screens-by-the-age-of-seven-433743 Ellender, Claire. 2015. “Dealing with dialect: The subtitling of Bienvenue chez les Ch’tis into English”. In Jorge Díaz Cintas and Josélia Neves (eds.) Audiovisual Translation: Tacking Stock. Newcastle: Cambridge Scholars Publishing, 46–68. Espasa, Eva. 2017. “Adapting-and accessing-translation for the stage”. In Geraldine Brodie and Emma Cole (eds.) Adapting Translation for the Stage. London: Routledge, 279–287. Estopace, Eden. 2017. “Audiovisual translation hits a sweet spot as subscription video on-demand skyrockets”. Slator, Language Industry Intelligence, 23 November. slator.com/features/audiovisualtranslation-hits-sweet-spot-subscription-video-on-demand-skyrockets European Commission. 2018. Audiovisual Media Services Directive (AVMSD). ec.europa.eu/digital-singlemarket/en/audiovisual-media-services-directive-avmsd Ferri, Paolo. 2012. “Digital and inter-generational divide”. In Antonio Cartelli (ed.) Current Trends and Future Practices for Digital Literacy and Competence. 
Hershey: IGI Global, 1–18. Fox, Wendy. 2016. “Integrated titles: An improved viewing experience?”. In Silvia Hansen-Schirra and Sambor Grucza (eds.) Eyetracking and Applied Linguistics. Berlin: Language Science Press, 5–30. Fox, Wendy. 2017. “A proposed set of placement strategies for integrated titles. Based on an analysis of existing strategies in commercial flms”. inTRAlinea, special issue on Building Bridges between Film Studies and Translation Studies. www.intralinea.org/specials/article/2250 Franco, Eliana, Anna Matamala and Pilar Orero. 2010. Voice-over Translation: An Overview. Bern: Peter Lang. Franzon, Johan. 2008. “Choices in song translation. Singability in print, subtitles and sung performance”. The Translator 14(2): 373–399.
Gambier, Yves. 2003. “Introduction: Screen transadaptation: Perception and reception”. The Translator 9(2): 171–189. Gambier, Yves, Annamaria Caimi and Cristina Mariotti (eds.). 2015. Subtitles and Language Learning: Principles, Strategies and Practical Experiences. Bern: Peter Lang. Gambier, Yves and Henrik Gottlieb (eds.). 2001. (Multi)Media Translation. Amsterdam: John Benjamins. García, Boni. 2017. “Bilingual subtitles for second-language acquisition and application to engineering education as learning pills”. Computer Applications in Engineering Education 25(3): 468–479. García, Martha. 2013. “Subtitling and dubbing songs in musical flms”. Revista de Ciencias Sociales. Comunicacíon, Cultura y Política 4(1): 107–125. Georgakopoulou, Panayota. 2010. Reduction Levels in Subtitling: DVD Subtitling: A Convergence of Trends. Saarbrücken: Lambert Academic Publishing. Georgakopoulou, Panayota. 2012. “Challenges for the audiovisual industry in the digital age: The everchanging needs of subtitle production”. The Journal of Specialised Translation 17: 78–103. Georgakopoulou, Panayota and Lindsay Bywood. 2014. “Machine translation in subtitling and the rising profle of the post-editor”. MultiLingual, January/February: 24–28. Gerzymisch-Arbogast, Heidrun and Sandra Nauert (eds.). 2005. MuTra: Challenges of Multidimensional Translation: Proceedings of the Marie Curie Euroconferences. Saarbrücken: Advanced Translation Research Center. Ghia, Elisa. 2012a. “The impact of subtitling strategies on subtitling reading”. In Elisa Perego (ed.) Eye-Tracking in Audiovisual Translation. Rome: Aracne, 157–182. Ghia, Elisa. 2012b. Subtitling Matters: New Perspectives on Subtitling and Foreign Language Learning. Oxford: Peter Lang. Glennie, Alasdair. 2014. “Anger at BBC subtitles . . . for an Irishman, 96: Unionist and nationalist politicians in Northern Ireland unite in fury against the corporation”. Mail Online, 19 November. tinyurl. com/yc5wfsgv Gold, John. 2012. “See real-time subtitles through Google glass-like apparatus”. Network World, 24 July. www.networkworld.com/article/2190073/see-real-time-subtitles-through-google-glass-like-apparatus. html Goodwin, Charles. 1979. “The interactive construction of a sentence in natural conversation”. In George Psathas (ed.) Everyday Language: Studies in Ethnomethodology. New York: Irvington Publishers, 97–121. Gorlée, Dinda L. (ed.). 2005. Song and Signifcance: Virtues and Vices of Vocal Translation. Amsterdam: Rodopi. Gottlieb, Henrik. 1994. “Subtitling: Diagonal translation”. Perspectives: Studies in Translatology 2(1): 101–121. Gottlieb, Henrik. 2001. “Anglicisms and TV subtitles in an anglifed world”. In Yves Gambier and Henrik Gottlieb (eds.) (Multi)Media Translation. Amsterdam: John Benjamins, 249–258. Graeme, Ritchie. 2010. “Linguistic factors in humour”. In Delia Chiaro (ed.) Translation, Humour and the Media: Translation and Humour 1. London: Continuum, 33–48. Greco, Gian Maria. 2018. “The nature of accessibility studies”. The Journal of Audiovisual Translation 1(1): 205–232. Griffn, Emily. 2017. “#NoMoreCraptions campaign calls for better CCs on YouTube videos”. 3PlayMedia, 26 September. 3playmedia.com/2016/09/26/nomorecraptions-campaign-calls-for-better-ccs-on-youtubevideos Grutman, Rainier. 2009. “Multilingualism”. In Mona Baker and Gabriela Saldhana (eds.) Routledge Encyclopedia of Translation Studies. London: Routledge, 182–186. Gutt, Ernst-August. 1991. Translation and Relevance: Cognition and Context. Oxford: Blackwell. Halliday, M. A. K. 1978. 
Language as Social Semiotic: The Social Interpretation of Language and Meaning. London: Edward Arnold. Halliday, M. A. K. and Ruqaiya Hassan. 1976. Cohesion in English. London: Longman. Hanna, Sameh. 2016. Bourdieu in Translation Studies: The Socio-Cultural Dynamics of Shakespeare in Egypt. London: Routledge.
Hannay, Mike and Lachlan Mackenzie. 2002. Effective Writing in English: A Sourcebook. Bussum: Coutinho. Hanoulle, Sabien, Véronique Hoste and Aline Remael. 2015. “The translation of documentaries: Can terminology-extraction systems reduce the translator’s workload? An experiment involving professional translators”. New Voices in Translation Studies 13: 25–49. Harrenstien, Ken. 2009. “Automatic captions in YouTube”. Google Offcial Blog, 19 November. googleblog. blogspot.com/2009/11/automatic-captions-in-youtube.html He, Ming, Yong Ge, Enhong Chen, Qi Liu and Xuesong Wang. 2018. “Exploring the emerging type of comment for online videos: Danmu”. ACM Transactions on the Wb (TWEB) 12(1): 1–13. He, Ming, Yong Ge, Le Wu, Enhong Chen and Chang Tan. 2016. “Predicting the popularity of Danmuenabled videos: A multi-factor view”. Lecture Notes in Computer Science 9643: 351–366. Heiss, Christine. 2004. “Dubbing multilingual flms: A new challenge”. Meta 49(1): 208–220. Hurtado, Amparo. 1994. “Modalidades y tipos de traducción”. Vasos comunicantes 4: 19–27. Incalcaterra McLoughlin, Laura, Marie Biscio and Máire Áine Ní Mhainnín (eds.). 2011. Audiovisual Translation: Subtitles and Subtitling: Theory and Practice. Oxford: Peter Lang. Incalcaterra McLoughlin, Laura, Jennifer Lertola and Noa Talaván. 2018. “Audiovisual translation in applied linguistics: Educational perspectives”. Special issue of Translation and Translanguaging in Multilingual Contexts 4(1). Ivarsson, Jan. 1992. Subtitling for the Media: A Handbook of an Art. Stockholm: TransEdit. Ivarsson, Jan and Mary Carroll. 1998. Subtitling. Simrishamn: TransEdit. Jäckel, Anne. 2001. “The subtitling of La Haine: A case study”. In Yves Gambier and Henrik Gottlieb (eds.) (Multi)Media Translation: Concepts, Practices, and Research. Amsterdam: John Benjamins, 223–235. Jakobson, Roman. 1959. “On linguistic aspects of translation”. In Lawrence Venuti (ed.) The Translation Studies Reader. London: Routledge, 113–118. Jay, Timothy. 2009. “The utility and ubiquity of taboo words”. Perspectives on Psychological Science 4(2): 153–161. Jones, Sam. 2019. “Alfonso Cuarón condemns Spanish subtitles on Roma”. The Guardian, 9 January. theguardian.com/flm/2019/jan/09/alfonso-cuaron-condemns-netfix-over-roma-subtitles Kapsaskis, Dionysios. 2011. “Professional identity and training of translators in the context of globalisation: The example of subtitling”. The Journal of Specialised Translation 16: 162–184. jostrans.org/ issue16/art_kapsaskis.pdf Karamitroglou, Fotios. 1998. “A proposed set of subtitling standard in Europe”. Translation Journal 2(2). translationjournal.net/journal/04stndrd.htm Kaufmann, Francine. 2004. “Un exemple d’effet pervers de l’uniformisation linguistique dans la traduction d’un documentaire: de l’hébreu des immigrants de ‘saint-Jean’ au français normatif d’ARTE”. Meta 49(1): 148–160. Kothari, Brij, Avinash Pandey and Amita R. Chudgar. 2004. “Reading out of the ‘idiot box’: Samelanguage subtitling on television in India”. Information Technologies and International Development 2(1): 23–44. mitpressjournals.org/doi/pdf/10.1162/1544752043971170 Kothari, Brij, Joe Takeda, Ashok Joshi and Avinash Pandey. 2010. “Same language subtitling: A butterfy for literacy?”. International Journal of Lifelong Education 21(1): 55–66. Kovačič, Irena. 1994. “Relevance as a factor in subtitling reductions”. In Cay Dollerup and Annette Lindegaard (eds.) Teaching Translation and Interpreting 2. Amsterdam: John Benjamins, 245–251. Kozloff, Sarah. 2000. 
Overhearing Film Dialogue. Berkeley: University of California Press. Krämer, Mathias and Eva Duran Eppler. 2018. “The deliberate non-subtitling of L3s in Breaking Bad: A reception study”. Meta 63(2): 365–391. Krejtz, Izabela, Agnieszka Szarkowska and Krzysztof Krejtz. 2013. “The effects of shot changes on eye movements in subtitling”. Journal of Eye Movement Research 6(5): 1–12. Kruger, Jan-Louis. 2012. “Ideology and subtitling: South African soap operas”. Meta 57(2): 279–293. Kruger, Jan-Louis and Faans Steyn. 2013. “Subtitles and eye tracking: Reading and performance”. Reading Research Quarterly 49(1): 105–120.
Kuo, Arista Szu-Yu. 2015. “Professional realities of the subtitling industry: The subtitlers’ perspective”. In Rocío Baños Piñero and Jorge Díaz Cintas (eds.) Audiovisual Translation in a Global Context: Mapping an Ever-Changing Landscape. Basingstoke: Palgrave Macmillan, 163–191. Laks, Simon. 1957. Le Sous-titrage de flms. Sa technique. Son esthétique. Paris. ataa.fr/revue/wp-content/ uploads/2013/06/ET-HS01-p04–46.pdf Larkin-Galiñanes, Cristina. 2017. “An overview of humor theory”. In Salvatore Attardo (ed.) The Routledge Handbook of Language and Humour. London: Routledge, 14–16. Latif, Nida, Agnès Alsius and K. G. Munhall. 2018. “Knowing when to respond: The role of visual informaton in conversational turn exchanges”. Attention Perception & Psychophysics 80: 27–41. Lawson, Mark. 2012. “When does subtitling risk becoming racially offensive?”. The Guardian, 19 July. theguardian.com/tv-and-radio/tvandradioblog/2012/jul/19/subtitling-risk-racially-offensive Lefevere, André. 1985. “Why waste our time on rewrites? The trouble with interpretation and the role of rewriting in an alternative paradigm”. In Theo Hermans (ed.) The Manipulation of Literature: Studies in Literary Translation. London: Croom Helm, 215–243. Lertola, Jennifer. 2013. Subtitling New Media: Audiovisual Translation and Second Language Acquisition. PhD Thesis. Galway: NUI Galway. Lievois, Katrien and Aline Remael. 2017. “Audio-describing visual flmic allusions”. Perspectives 25(2): 323–339. Linell, Per. 1998. Approaching Dialogue: Talk, Interaction and Contexts in Dialogical Perspectives. Amsterdam: John Benjamins. Liu, Yu and Kay L. O’Halloran. 2009. “Intersemiotic texture: Analyzing cohesive devices between language and images”. Social Semiotics 19(4): 367–388. Lomheim, Sylfest. 1995. “L’écriture sur l’écran: stratégies de sous-titrage à NRK, une étude de cas”. Translatio, Nouvelles de la FIT/FIT Newsletter 14(3–4): 288–293. Low, Peter. 2017. Translating Song: Lyrics and Texts. London: Routledge. Luckmann, Thomas. 1990. “Social communication, dialogue and conversation”. In Ivana Markovà and Klaus Foppa (eds.) The Dynamics of Dialogue. Hemel Hempstead: Harvester and Wheatsheaf, 45–61. Luyken, Georg Michael, Thomas Herbst, Jo Langham-Brown, Helen Reid and Hermann Spinhof. 1991. Overcoming Language Barriers in Television: Dubbing and Subtitling for the European Audience. Manchester: European Institute for the Media. Mailhac, Jean-Pierre. 2000. “Subtitling and dubbing, for better or worse? The English video versions of Gazon Maudit”. In Myriam Salama-Carr (ed.) On Translating French Literature and Film II. Amsterdam: Rodopi, 129–154. Marleau, Lucien. 1982. “Les sous-titres . . . un mal nécessaire”. Meta 27(3): 271–285. Martín Castaño, Mónica. 2017. Translating Disney Songs from Little Mermaid (1989) to Tarzan (1999): An Analysis of Translation Strategies Used to Dub and Subtitle Songs into Spanish. Warwick: University of Warwick. Martínez-Sierra, Juan José. 2009. “Translating audiovisual humour: A case study”. Perspectives 13(4): 289–296. Martínez-Sierra, Juan José, José Luis Marti-Ferriol, Irene de Higes-Andino, Ana María Prats-Rodríguez and Frederic Chaume. 2010. “Linguistic diversity in Spanish immigration flms: A translational approach”. In Verena Berger and Miya Komori (eds.) Polyglot Cinema: Migration and Transcultural Narration in France, Italy, Portugal and Spain. Vienna: LIT Verlag, 15–29. Mason, Ian. 1989. “Speaker meaning and reader meaning: Preserving coherence in screen translating”. 
In Rainer Kölmel and Jerry Payne (eds.) Babel: The Cultural and Linguistic Barriers between Nations. Aberdeen: Aberdeen University Press, 13–24. Mason, Ian. 2010. “Discourse, ideology and translation”. In Mona Baker (ed.) Critical Readings in Translation Studies. London: Routledge, 83–95. Matamala, Anna. 2017. “Mapping audiovisual translation investigations: Research approaches and the role of technology”. In Łukasz Bogucki (ed.) Audiovisual Translation: Research and Use. Oxford: Peter Lang, 11–28. Matamala, Anna, Elisa Perego and Sara Bottiroli. 2017. “Dubbing versus subtitling yet again? An empirical study on user comprehension and preferences in Spain”. Babel 63(3): 423–441.
Mateo, Marta. 2007a. “Reception, text and context in the study of opera surtitles”. In Yves Gambier, Miriam Shlesinger and Radegundiz Stolze (eds.) Doubts and Directions in Translation Studies: Selected Contibutions from the EST Congress, Lisbon 2004. Amsterdam: John Benjamins, 169–182. Mateo, Marta. 2007b. “Surtitling today: New uses, attitudes and developments”. Linguistica Antverpiensia NS: Themes in Translation Studies 6: 135–154. Mayoral Asensio, Roberto, Dorothy Kelly and Natividad Gallardo. 1988. “Concept of constrained translation: Non-linguistic perspectives of translation”. Meta 33(3): 356–367. Mazur, Iwona. 2014. “Gestures and facial expressions in audio description”. In Anna Maszerowska, Anna Matamala and Pilar Orero (eds.) Audio Description: New Perspectives Illustrated. Amsterdam: John Benjamins, 179–198. McAlone, Nathan. 2016. “This helpful tool makes Netfix an easy way to learn a new language”. Business Insider UK, 24 May. uk.businessinsider.com/lingvo-tv-helps-you-learn-a-new-language-usingnetfix-2016-5 McDonald, James. 1988. A Dictionary of Obscenity, Taboo and Euphemism. London: Penguin. Media Consulting Group. 2011. Study on the Use of Subtitling: The Potential of Subtitling to Encourage Foreign Language Learning and Improve the Mastery of Foreign Languages. Paris: Media Consulting Group. publications. europa.eu/en/publication-detail/-/publication/e4d5cbf4-a839-4a8a-81d0-7b19a22cc5ce Mehrez, Samia. 1992. “Translation and the postcolonial experience: The Francophone North African text”. In Lawrence Venuti (ed.) Rethinking Translation: Discourse, Subjectivity, Ideology. London: Routledge, 120–138. MESA. 2017. “Study: EMEA content localization service spending hits $2 billion”. Media & Entertainment Services Alliance, 27 June. mesalliance.org/2017/06/27/study-emea-content-localization-service-spendinghits-2-billion Mével, Pierre-Alexis. 2017. Subtitling African-American English into French. Can We Do the Right Thing? Oxford: Peter Lang. Meyer, John C. 2000. “Humor as a double edged sword: Four functions of humor in communication”. Communication Theory 10: 310–331. Meylaerts, Reine and Adriana Şerban. 2014. “Multilingualism at the cinema and on stage: A translation perspective”. Linguistica Antverpiensia NS: Themes in Translation Studies 13: 1–13. Minutella, Vincenza. 2014. “Translating verbally expressed humour in dibbing and subtitling: The Italian version of Shrek”. In Gian Luigi De Rosa, Francesca Bianchi, Antonella De Laurentiis and Elisa Perego (eds.) Translating Humour in Audiovisual Texts. Bern: Peter Lang: 67–88. Montgomery, Coleen. 2008. “Lost in translation: Subtitling banlieue subculture”. Cinephile 4. cinephile. ca/archives/volume-4-post-genre/lost-in-translation-subtitling-banlieue-subculture Murillo Chávez, Javier André. 2018. “COCOpyright and the value of moral rights”. WIPO Magazine, August. wipo.int/wipo_magazine/en/2018/04/article_0003.html Nedergaard-Larsen, Birgit. 1993. “Culture-bound problems in subtitling”. Perspectives 2: 207–251. Netfix. N.d. 1. Dialogue List Scope of Work and Style Guide. tinyurl.com/ya6asynt Netfix. N.d. 2. Timed Text Style Guides. partnerhelp.netfixstudios.com/hc/en-us/sections/203480497Timed-Text-Style-Guides Neves, Josélia. 2005. Audiovisual Translation: Subtitling for the Deaf and the Hard-of-Hearing. PhD Thesis. London: Roehampton University. Newmark, Peter. 1988. A Textbook of Translation. London: Prentice Hall. Nikolić, Kristijan. 2015. “The pros and cons of using templates in subtitling”. 
In Rocío Baños Piñero and Jorge Díaz Cintas (eds.) Audiovisual Translation in a Global Context: Mapping an Ever-Changing Landscape. Basingstoke: Palgrave Macmillan, 192–202. Noack, Rick. 2015. “Mumbling Danish actors force country’s theaters to subtitle Danish flms”. The Washington Post, 6 March. tinyurl.com/y764p4dg NOED. 1998. The New Oxford Dictionary of English. Ed. Judy Pearsall. Oxford: Clarendon Press. Nord, Christiane. 1997. Translating as a Purposeful Activity. Manchester: St Jerome. Nornes, Abé M. 2007. Cinema Babel: Translating Global Cinema. Minneapolis: University of Minnesota Press.
Nsofor, Adebunmi. 2020. “White paper: Localization rights: What content owners & performers need to know about the rapidly evolving marketplace”. SDI Media. sdimedia.com/localization_rights Ofcom. 2014. Ofcom Publishes First Results on Quality of TV Subtitles. ofcom.org.uk/about-ofcom/latest/ media/media-releases/2014/ofcom-publishes-frst-results-on-quality-of-tv-subtitles Ofcom. 2015. Ofcom Publishes Third Report on Quality of Live TV Subtitles. ofcom.org.uk/about-ofcom/ latest/media/media-releases/2015/quality-subtitling-3rd-report Ofcom. 2017. Ofcom’s Code on Television Access Services. ofcom.org.uk/__data/assets/pdf_fle/0020/97040/ Access-service-code-Jan-2017.pdf O’Hagan, Minako and Ryoko Sasamoto. 2016. “Crazy Japanese subtitles? Shedding light on the impact of impact captions with a focus on research methodology”. In Silvia Hansen-Schirra and Sambor Grucza (eds.) Eyetracking and Applied Linguistics. Berlin: Language Science Press, 31–57. O’Sullivan, Carol. 2011. Translating Popular Film. Basingstoke: Palgrave Macmillan. O’Sullivan, Carol. 2013. “Introduction: Multimodality as challenge and resource for translation”. The Journal of Specialised Translation 20: 2–14. O’Sullivan, Carol and Jean-François Cornu. 2019. The Translation of Films, 1900–1950. Oxford: Oxford University Press. Oziemblewska, Magdalena and Agnieszka Szarkowska. 2020. “The quality of templates in subtitling: A survey on current market practices and changing subtitler competences”. Perspectives. DOI: 10.1080/ 0907676X.2020.1791919. PACTE. 2005. “Investigating translation competence: Conceptual and methodological issues”. Meta 50(2): 609–619. Pai, Feng-shuo Albert. 2017. A Relevance-Theoretic Approach to Verbal Humour in British Sitcoms Subtitled in Chinese. PhD Thesis. London: University College London. Parra López, Guillermo. 2016. “Disorderly speech and its translation: Fear and loathing among letters”. In Margherita Dore (ed.) Achieving Consilience: Translation Theories and Practice. Newcastle: Cambridge Scholars Publishing, 82–107. Pedersen, Jan. 2011. Subtitling Norms for Television. Amsterdam: John Benjamins. Pedersen, Jan. 2015. “On the subtitling of visualised metaphors”. The Journal of Specialised Translation 23: 162–180. Pedersen, Jan. 2017. “The FAR model: Assessing quality in interlingual subtitling”. The Journal of Specialised Translation 28: 210–229. Perego, Elisa. 2008. “Subtitles and line-breaks: Towards improved readability”. In Delia Chiaro, Christine Heiss and Chiara Bucaria (eds.) Between Text and Image: Updating Research in Screen Translation. Amsterdam: John Benjamins, 211–225. Perego, Elisa. 2009. “The codifcation of non-verbal information in subtitled audiovisual texts”. In Jorge Díaz Cintas (ed.) New Trends in Audiovisual Translation. Bristol: Multilingual Matters, 58–69. Perego, Elisa, Fabio Del Missier, Marco Porta and Mauro Mosconi. 2010. “The cognitive effects of subtitle processing”. Media Psychology 13(3): 243–272. Perego, Elisa and Elisa Ghia. 2011. “Subtitle consumption according to eye-tracking data: An acquisitional perspective”. In Laura Incalcaterra McLoughlin, Marie Biscio and Maíre Aíne Ní Mhainnín (eds.) Audiovisual Translation, Subtitles and Subtitling: Theory and Practice. Oxford: Peter Lang, 177–195. Pérez-González, Luis. 2014. Audiovisual Translation: Theories, Methods and Issues. London: Routledge. Pérez-González, Luis (ed.). 2019. The Routledge Handbook of Audiovisual Translation. Abingdon: Routledge. Pöchhacker, Franz and Aline Remael. 2019. “New efforts? 
A competence-oriented task analysis of interlingual live subtitling”. Linguistica Antverpiensia NS: Themes in Translation Studies 18: 130–143. Popowich, Fred, Paul McFetridge, Davide Turcato and Janine Toole. 2000. “Machine translation of closed captions”. Machine Translation 15(4): 311–341. Poyatos, Fernando. 1997. Nonverbal Communication and Translation. Amsterdam: John Benjamins. Prensky, Marc. 2001. “Digital natives, digital immigrants part 1”. On the Horizon 9(5): 1–6. Pujol, Dídac. 2006. “The translation and dubbing of ‘fuck’ into Catalan: The case of From Dusk till Dawn”. The Journal of Specialised Translation 6: 121–133.
Rajendran, Dhevi J., Andrew T. Duchowski, Pilar Orero, Juan Martínez Pérez and Pablo Romero-Fresco. 2013. “Effects of chunking on subtitling: A qualitative and quantitative examination”. Perspectives 21(1): 5–21. Ranzato, Irene. 2010. “Localising Cockney: Translating dialect into Italian”. In Jorge Díaz Cintas, Anna Matamala and Josélia Neves (eds.) New Insights Into Audiovisual Translation and Media Accessibility. Amsterdam: Rodopi, 109–122. Ranzato, Irene. 2016. Translating Culture Specifc Reference son Television: The Case of Dubbing. London: Routledge. Reid, Helen. 1996. “Literature on the screen: Subtitle translating for public broadcasting”. SQR Studies in Literature 5: 97–107. Reiss, Katharina. 1971/2000. Translation Criticism: The Potentials and Limitations: Categories and Criteria for Translation Quality Assessment. Trans. Erroll F. Rhodes. Manchester and New York: St Jerome and American Bible Society. Reiss, Katharina. 1981. “Type, kind and individuality of text: Decision making in translation”. Poetics Today 2(4): 121–131. Remael, Aline. 1998. “The poetics of screenwriting manuals: A refection of some of mainstream cinema’s manifest and hidden tangles with the written word”. In Paul Joret and Aline Remael (eds.) Language and Beyond: Actuality and Virtuality in the Relations between Word, Image and Sound. Amsterdam: Rodopi, 203–218. Remael, Aline. 2001. “Some thoughts on the study of multimodal and multimedia translation”. In Yves Gambier and Henrik Gottlieb (eds.) (Multi)Media Translation: Concepts, Practices, and Research. Amsterdam: John Benjamins, 13–22. Remael, Aline. 2003. “Mainstream narrative flm dialogue and subtitling: A case study of Mike Leigh’s Secrets & Lies”. The Translator 9(2): 225–247. Remael, Aline. 2004. “Irak in het vizier. Hedendaagse internationale politiek in ondertitels”. Filter 11(3): 3–12. Remael, Aline, Annick De Houwer and Reinhild Vandekerckhove. 2008. “Intralingual open subtitling in Flanders: Audiovisual translation, linguistic variation and audience needs”. The Journal of Specialised Translation 10: 76–105. Remael, Aline and Nina Reviers. 2019. “Multimodality and audiovisual translation: Cohesion in accessible flms”. In Luis Pérez-González (ed.) Routledge Handbook of Audiovisual Translation. London: Routledge, 260–280. Remael, Aline and Nina Reviers. 2020. “Media accessibility and accessible design”. In Minako O’Hagan (ed.) The Routledge Handbook of Translation and Technology. London: Routledge, 482–497. Remael, Aline, Nina Reviers and Gert Vercauteren. 2015. “Introduction: Basic audio description concepts”. In Aline Remael, Nina Reviers and Gert Vercauteren (eds.) Pictures Painted in Words: ADLAB Audio Description Guidelines. Trieste: University of Trieste. adlabproject.eu/Docs/adlab%20book/index. html Remael, Aline, Luuk Van Waes and Mariëlle Leijten. 2014. “Live subtitling with speech recognition: How to pinpoint the challenges”. In Dror Abend (ed.) Media and Translation: An Interdisciplinary Approach. London: Bloomsbury, 121–147. Reuters. 2015. “YouTube introduces new translation tools to globalise content”. ET Brand Equity, 21 November. brandequity.economictimes.indiatimes.com/news/digital/youtube-introduces-new-translation-tools-toglobalize-content/49860574 Reviers, Nina and Aline Remael. 2015. “Recreating multimodal cohesion in audio description: The case of audio subtitling in Dutch multilingual flms”. New Voices in Translation Studies 13: 50–78. Richardson, Kay. 2010. Television Dramatic Dialogue: A Sociolinguistic Study. 
Oxford: Oxford University Press. Robert, Isabelle and Aline Remael. 2016. “Quality control in the subtitling industry: An exploratory survey study”. Meta 61(3): 578–605. Robert, Isabelle and Aline Remael. 2017. “Assessing quality in live interlingual subtitling: A new challenge”. Linguistica Antverpiensia NS: Themes in Translation Studies 16: 168–195.
Robson, Gary D. 2004. The Closed Captioning Handbook. Amsterdam: Elsevier. Rodríguez, Ashley. 2017. “Netfix says English won’t be its primary viewing language for much longer”. Quartz, 30 March. qz.com/946017/netfix-nfx-says-english-wont-be-its-primary-viewing-languagefor-much-longer-unveiling-a-new-hermes-translator-test Roettgers, Janko. 2016. “The technology behind Netfix’s ‘Chelsea’ talk show”. Variety, 17 May. variety. com/2016/digital/news/netfix-chelsea-encoding-ui-translation-1201776469 Romaine, Suzanne. 2001. “Dialect and dialectology”. In Rajend Mesthrie (ed.) Concise Encyclopedia of Sociolinguistics. Amsterdam: Elsevier, 310–319. Romero-Fresco, Pablo. 2009. “More haste less speed: Edited versus verbatim respoken subtitles”. VIAL 6: 109–133. Romero-Fresco, Pablo. 2011. Subtitling through Speech Recognition: Respeaking. Manchester: St Jerome. Romero-Fresco, Pablo. 2015. “Final thoughts: Viewing speed in subtitling”. In Pablo Romero-Fresco (ed.) The Reception of Subtitles for the Deaf and Hard of Hearing in Europe. Bern: Peter Lang, 335–341. Romero-Fresco, Pablo. 2019. Accessible Filmmaking: Integrating Translation and Accessibility Into the Filmmaking Process. Abingdon: Routledge. Romero-Fresco, Pablo and Juan Martínez Pérez. 2015. “Accuracy rate in live subtitling: The NER model”. In Rocío Baños Piñero and Jorge Díaz Cintas (eds.) Audiovisual Translation in a Global Context: Mapping an Ever-Changing Landscape. Basingstoke: Palgrave Macmillan, 28–50. Romero-Fresco, Pablo and Frank Pöchhacker. 2017. “Quality assessment in interlingual live subtitling: The NTR model assessing quality in live interlingual subtitling: A new challenge”. Linguistica Antverpiensia NS: Themes in Translation Studies 16: 149–167. Roxborough, Scott. 2018. “Netfix content quota in Europe may lead to TV buying spree”. The Hollywood Reporter, 10 November. hollywoodreporter.com/news/netfix-content-quota-europe-may-lead-tvbuying-spree-1150805 Sandford, James. 2015. “The impact of subtitle display rate on enjoyment under normal television viewing conditions”. BBC. downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-fles/WHP306.pdf Sanz-Ortega, Elena. 2011. “Subtitling and the relevance of non-verbal information in polyglot flms”. New Voices in Translation Studies 7: 19–34. tinyurl.com/qno6qjz Sasamoto, Ryoko, Minako O’Hagan and Stephen Doherty. 2017. “Telop, affect, and media design: A multimodal analysis of Japanese TV programs”. Television & New Media 18(5): 427–440. Sawaf, Hassan. 2012. “Automatic speech recognition and hybrid machine translation for high-quality closed-captioning and subtitling for video broadcast”. Proceedings of Association for Machine Translation in the Americas: AMTA, 1–5. pdfs.semanticscholar.org/2eeb/01f115f3418fc5ab752bea719b6883047284. pdf Schäffner, Christina (ed.). 1998. Translation and Quality. Clevedon: Multilingual Matters. Schilling, Natalie. 2002. “Investigating stylistic variation”. In J. K. Chambers, Peter Trudgill and Nathalie Schilling (eds.) The Handbook of Language Variation and Change. Oxford: Blackwell, 375–397. Schilling, Natalie. 2013. “Investigating stylistic variation”. In J. K. Chambers and Natalie Schilling (eds.) The Handbook of Language Variation and Change. Chichester: Wiley-Blaclwell, 325–349. Schröter, Thorsten. 2010. “Language-play, translation and quality with examples from dubbing and subtitling”. In Delia Chiaro (ed.) Translation, Humour and the Media: Translation and Humour 2. London: Continuum, 138–152. Schwab, Klaus. 2016. The Fourth Industrial Revolution. 
Geneva: World Economic Forum. Short, M. 2001. “Style”. In Rajend Mesthrie (ed.) Concise Encyclopedia of Sociolinguistics. Amsterdam: Elsevier, 280–283. Smith, Stephen. 1998. “The language of subtitling”. In Yves Gambier (ed.) Translating for the Media: Papers from the International Conference Languages and the Media. Turku: University of Turku, 139–149. Snell-Hornby, Mary. 1988. Translation Studies: An Integrated Approach. Amsterdam: John Benjamins. Snell-Hornby, Mary. 2006. The Turns of Translation Studies: New Paradigms or Shifting Viewpoints? Amsterdam: John Benjamins.
Susam-Saraeva, Şebnem. 2008. “Translation and music: Changing perspectives, frameworks and signifcance”. The Translator 14(2): 187–200. Susam-Saraeva, Şebnem. 2015. Translation and Popular Music: Transcultural Intimacy in Turkisk-Greek Relations. Oxford: Peter Lang. Szarkowska, Agnieszka. 2016. Report on the Results of an Online Survey on Subtitle Presentation Times and Line Breaks in Interlingual Subtitling: Part 1: Subtitlers. SURE Project. avt.ils.uw.edu.pl/fles/2016/10/ SURE_Report_Survey1.pdf Szarkowska, Agnieszka and Olivia Gerber-Morón. 2018. “Viewers can keep up with fast subtitles: Evidence from eye movements”. PLoS One 13(6). journals.plos.org/plosone/article?id=10.1371/journal. pone.0199331 Szarkowska, Agnieszka and Olivia Gerber-Morón. 2019. “Two or three lines: A mixed-methods study on subtitle processing and preferences”. Perspectives 27(1): 144–164. Szarkowska, Agnieszka, Izabela Krejtz, Zuzanna Klyszejko and Anna Wieczorek. 2011. “Verbatim, standard, or edited? Reading patterns of different captioning styles among deaf, hard of hearing, and hearing viewers”. American Annals of the Deaf 156(4): 363–378. Szarkowska, Agnieszka, Izabela Krejtz, Maria Łogińska, Łukasz Dutke and Krzysztof Krejtz. 2015. “The infuence of shot changes on reading subtitles: A preliminary study”. In Elisa Perego and Silvia Bruti (eds.) Subtitling Today: Shapes and Their Meanings. Newcastle: Cambridge Scholars Publishing, 99–118. Takeda, Kayoko. 2014. “The interpreter as traitor: Multilingualism in Guizi lai le (Devils on the Doorstep)”. Linguistica Antverpiensia NS: Themes in Translation Studies 13: 93–111. Talaván, Noa. 2006. “Using subtitles to enhance foreign language education”. Porta Linguarum 6: 41–52. Talaván, Noa. 2010. “Subtitling as a task and subtitles as support: Pedagogical applications”. In Jorge Díaz Cintas, Anna Matamala and Josélia Neves (eds.) New Insights Into Audiovisual Translation and Media Accessibility. Amsterdam: Rodopi, 285–299. Talaván, Noa. 2011. “A quasi-experimental research project on subtitling and foreign language acquisition”. In Laura Incalcaterra McLoughlin, Marie Biscio and Máire Áine Ní Mhainnín (eds.) Audiovisual Translation: Subtitles and Subtitling: Theory and Practice. Oxford: Peter Lang, 197–217. Talaván, Noa, Ana Ibáñez and Elena Bárcena. 2017. “Exploring collaborative reverse subtitling for the enhancement of written production activities in English as a second language”. ReCALL 29(1): 39–58. Taylor, Christopher. 2016. “The multimodal approach in audiovisual translation”. Target 28(2): 222–236. Titford, Christopher. 1982. “Sub-titling: Constrained translation”. Lebende Sprachen 27(3): 113–116. Törnqvist, Egil. 1995. “Fixed pictures, changing words: Subtitling and dubbing the flm Babettes Gæstebud”. TijdSchrift voor Skandinavistick 16(1): 47–64. Trudgill, Peter. 1999. “Standard English: What it isn’t”. In Tony Bex and Richard Watts (eds.) Standard English: The Widening Debate. London: Routledge, 117–128. Tseng, Chiao-I. 2013. Cohesion in Film: Tracking Film Elements. Basingstoke: Palgrave Macmillan. UNESCO. 1976. Recommendation on the Legal Protection of Translators and Translations and the Practical Means to Improve the Status of Translators, 22 November. portal.unesco.org/en/ev.php-URL_ ID=13089&URL_DO=DO_TOPIC&URL_SECTION=201.html UNESCO Institute for Statistics. 2012. Information Bulletin 8. Montreal. United Nations. 2006. Convention on the Rights of Persons with Disabilities. un.org/disabilities/documents/ convention/convoptprot-e.pdf Vandaele, Jeroen. 
1999. “Each time we laugh: Translated humour in screen comedy”. In Jeroen Vandaele (ed.) Translation and the (Re)Location of Meaning: Selected Papers of the CETRA Research Seminars in Translation Studies 1994–1996. Leuven: CETRA, 237–272. Vandaele, Jeroen. 2002. “(Re)constructing humour: Meaning and means”. The Translator 8(2): 149–172. Van de Poel, Marijke and Géry d’Ydewalle. 2001. “Incidental foreign-language acquisition by children watching subtitled television programs”. In Yves Gambier and Henrik Gottlieb (eds.) (Multi)Media Translation: Concepts, Practices and Research. Amsterdam: John Benjamins, 259–274. Vanderplank, Robert. 1988. “The value of teletext sub-titling in language learning”. ELT Journal 42: 272–281.
Vanderplank, Robert. 2016. Captioned Media in Foreign Language Learning and Teaching Subtitles for the Deaf and Hard-of-Hearing as Tools for Language Learning. Basingstoke: Palgrave Macmillan. Vanoye, Francis. 1985. “Conversations publiques”. Iris 3(1): 99–118. Van Wert, William. 1980. “Intertitles”. Sight and Sound 49(2): 98–105. Villanueva Jordán, Iván Alejandro. 2019. “‘You better werk’. Rasgos del camp talk en la subtitulación al español de Rupaul’s Drag Race”. Cadernos de Tradução 39(3): 156–188. Vinay, Jean-Paul and Jean Darbelnet. 1958/1995. Comparative Stylistics of French and English: A Methodology for Translation. Trans. J. C. Sager and M. J. Hamel. Amsterdam: John Benjamins. Voellmer, Elena and Patrick Zabalbeascoa. 2014. “How multilingual can a dubbed flm be? Language combinations and national traditions as determining factors”. Linguistica Antverpiensia NS: Themes in Translation Studies 13: 232–250. Volk, Martin. 2008. “The automatic translation of flm subtitles: A machine translation success story?”. In Anna Sagvall Hein, Joakim Nivre, Mats Dahllöf and Beáta Megyesi (eds.) Resourceful Language Technology: Festschrift in Honor of Anna Sågvall Hein. Uppsala: Acta Universitatis Upsaliensis, 202–214. Volk, Martin, Rico Sennrich, Christian Hardmeier and Frida Tidström. 2010. “Machine translation of TV subtitles for large scale production”. JEC 2010: Bringing MT to the User. uu.diva-portal.org/smash/ get/diva2:420760/FULLTEXT01.pdf VRT. 2013. Normen en Instructies Voor Open Ondertiteling. Vlaamse Radio- en Televisieomroeporganisatie. Internal Document. Wahl, Chris. 2005. “Discovering a genre: The polyglot flm”. Cinemascope 1. madadayo.it/Cinemascope_archive/cinema-scope.net/index_n1.html Wahl, Chris. 2007. “Du Deutscher, Toi Français, You English: Beautiful! The polyglot flm as a genre”. In Miyase Christensen and Nezih Erdogan (eds.) Shifting Landscapes: Films and Media in European Context. Newcastle: Cambridge Scholars Publishing, 334–350. Wales, Katie. 1989. A Dictionary of Stylistics. London: Longman. Wilkinson-Jones, Phil. 2017. “Digital economy bill will require on-demand programmes to include subtitles”. Cable.co.uk, 9 February. cable.co.uk/news/digital-economy-bill-will-require-on-demand-programmesto-include-subtitles-700001735 Williams, Gareth Ford. 2009. Online Subtitling Editorial Guidelines V1.1. London: BBC. Williamson, Lee and Raquel de Pedro Ricoy. 2014. “The translation of wordplay in interlingual subtitling: A study of Bienvenue chez les Ch’tis and its English subtitles”. Babel 60(2): 164–192. Zabalbeascoa, Patrick. 1996. “Translating jokes for dubbed television situation comedies”. The Translator 2(2): 235–267. Zabalbeascoa, Patrick. 1997. “Dubbing and the nonverbal dimension of translation”. In Fernando Poyatos (ed.) Nonverbal Communication and Translation. Amsterdam: John Benjamins, 327–342. Zabalbeascoa, Patrick. 2008. “The nature of the audiovisual text and its parameters”. In Jorge Díaz Cintas (ed.) The Didactics of Audiovisual Translation. Amsterdam: John Benjamins, 21–38. Zhang, Leticia Tian and Daniel Cassany. 2019. “The ‘danmu’ phenomenon and media participation: Intercultural understanding and language learning through the ministry of time”. Comunicar 58(27): 19–29. Zhang, Xiaochun. 2013. “Fansubbing in China”. MultiLingual, July/August: 30–37. researchgate.net/ publication/279931748_Fansubbing_in_China
10.2 Filmography
90 Day Wondering. 1956. Chuck Jones, USA. About a Boy. 2002. Chris Weitz and Paul Weitz, USA. Aislados. 2005. David Marqués, Spain. American Beauty. 1999. Sam Mendez, USA. Annie Hall. 1977. Woody Allen, USA. Astérix & Obélix: Mission Cléopâtre. 2002. Alain Chabat, France/Germany.
References Avatar. 2009. James Cameron, USA. ¡Ay, Carmela! 1990. Carlos Saura, Spain/Italy. Babel. 2006. Alejandro González Iñárritu, France. Baby Cart in the Land of Demons [Kozure Ôkami: Meifumando]. 1973. Kenji Misumi, Japan. Battaglia di Algeri, La/The Battle of Algiers. 1966. Gillo Pontecorvo, Italy. Beneath the 12-Mile Reef. 1953. Robert D. Webb, USA. Bienvenu chez les Ch’tis/Welcome to the Sticks. 2008. Dany Boon, France. Boom Boom Bang. 1998. Hanno Baethe and Zaki Omar, Germany. Borgen. 2010–2013. Adam Price, Denmark. Bridget Jones’s Diary. 2011. Sharon Maguire, USA. Brokeback Mountain. 2005. Ang Lee, USA. The Broken Hearts Club. 2000. Greg Berlanti, USA. Captain Fantastic. 2016. Matt Ross, USA. Casablanca. 1942. Michael Curtiz, USA. Charade. 1963. Stanley Donen, USA. Chelsea. 2016–2017. Rik Reinholdtsen and Blake Webster, USA. Chicken Run. 2000. Peter Lord and Nick Park, UK. A Christmas Carol. 2009. Robert Zemeckis, USA. Cold Feet. 1997. Mike Bullen, UK. The Commitments. 1991. Alan Parker, Ireland/UK/USA. Countryfle. 1988. BBC, UK. Cyrano de Bergerac. 1950. Michael Cordon, USA. Cyrano de Bergerac. 1990. Jean-Paul Rappeneau, France. Dances with Wolves. 1990. Kevin Costner, USA. Depuis qu’Otar est parti . . . [Since Otar Left]. 2003. Julie Bertucelli, Belgium/France. Downton Abbey. 2010–2015. Julian Fellowes, UK. Empresses in the Palace. 2011–2015. Zheng Xiaolong, China. Famille Bélier, La. 2014. Eric Lartigau, France/Belgium. Father’s Little Dividend. 1950. Vincente Minnelli, USA. The Fault in Our Stars. 2014. Josh Boone, USA. Fawlty Towers. 1975–1979. John Howard Davies and Bob Spiers, UK. Finding Nemo. 2003. Andrew Stanton and Lee Unkrich, USA/Australia. La for de mi secreto [The Flower of My Secret]. 1995. Pedro Amodóvar, Spain/France. The Flying Deuces. 1939. A. Edward Sutherland, USA. Four Weddings and a Funeral. 1994. Mike Newell, UK. Frasier. 1993–2004. David Angell, Peter Casey and David Lee, USA. The French Lieutenant’s Woman. 1981. Karel Reisz, UK. Friends. 1994–2004. David Crane and Martha Kauffman, USA. Frozen. 2013. Chris Buck and Jennifer Lee, USA. Gazon Maudit [French Twist]. 1995. Josiane Balasko, France. The Good Wife. 2009–2016. Michelle King and Robert King, USA. The Grapes of Wrath. 1940. John Ford, USA. Grease. 1978. Randal Kleiser, USA. Grey’s Anatomy. 2005. Shonda Rhimes, USA. Guantanamera. 1995. Tomás Gutiérrez Alea and Juan Carlos Tabío, Cuba/Spain/Germany. Guizi lai le/Devils on the Doorstep. 2000. Wen Jiang, China. La Haine. 1995. Matthieu Kassovitz, France. Have I Got News for You. 1990–present. Ian Hislop and Paul Merton, UK. Head on. 1998. Ana Kokkinos, Australia. Horror Express. 1972. Eugenio Martín, UK/Spain. House M.D. 2004–2012. David Shore, USA. House of Cards. 2013–2018. Beau Willimon, USA.
If You Are the One [非诚勿扰]. 2010–present. Wang Peijie and Xing Wenning, People’s Republic of China. Inglourious Basterds. 2009. Quentin Tarantino, Germany. The Interpreter. 2005. Sydney Pollack, UK. The Invisible Subtitler. 2013. Aliakbar Campwala, UK. It’s a Free World . . . . 2007. Ken Loach, UK. It’s a Wonderful Life. 1946. Frank Capra, USA. J’ai toujours voulu être une sainte [I Always Wanted to be a Saint]. 2003. Geneviève Mersch, Belgium/ Luxembourg. Jane the Virgin. 2014–2019. Jennie Snyder Urman, USA. Joanna Lumley’s Greek Odyssey: Episode 4: Mount Olympus and beyond. 2011. Dominic Ozanne, UK. Joseph and the Amazing Technicolor Dreamcoat. 1999. David Mallet and Steven Pimlott, UK. The Killing [Forbrydelsen]. 2007–2012. Søren Sveistrup, Denmark/Norway/Sweden/Germany. The King and I. 1956. Walter Lang, USA. The Last Time I Saw Paris. 1954. Richard Brooks, USA. The Life of Brian. 1979. Terry Jones, UK. Lijmen. 2000. Robbe De Hert, Belgium. Loaded. 1994. Anna Campion, New Zealand/UK. The Lord of the Rings: The Fellowship of the Ring. 2001. Peter Jackson, New Zealand/USA. Lost in Translation. 2003. Sofa Coppola, USA. Love & Friendship. 2016. Whit Stillman, Ireland. Mamma Mia. 2008. Phyllida Lloyd, USA. Manhattan Murder Mystery. 1993. Woody Allen, USA. Man on Fire. 2004. Tony Scott, USA. The Man with the Golden Arm. 1955. Otto Preminger, USA. Master of None. 2015–2017. Aziz Ansari and Alan Yang, USA. McLintock! 1963. Andrew V. McLaglen, USA. Memento. 2000. Christopher Nolan, USA. Meridian. 2016. Curtis Clark, USA. Monsoon Wedding. 2001. Mira Nair, India. Moulin Rouge. 2001. Baz Luhrmann, Australia. Mr Bean. 1990–1995. Rowan Atkinson and Richard Curtis, UK. Mrs. Doubtfre. 1993. Chris Columbus, USA. My Big Fat Greek Wedding. 2002. Joel Zwick, USA/Canada. My Fair Lady. 1964. George Cukor, USA. My Name Is Joe. 1998. Ken Loach, Spain/Italy/France/UK/Germany. Narcos. 2015–2017. Carlo Bernard, Chris Brancato and Doug Miro, USA/Colombia/Mexio. The Night of the Living Dead. 1968. George A. Romeo, USA. Night Watch [Ночной Дозор, Nochnoy Dozor]. 2004. Timur Bekmambetov, Russia. The Offce. 2001–2003. Ricky Gervais and Stephen Merchant, UK. Pane e Tulipani. 2000. Silvio Soldini, Italy. The Passion of the Christ. 2004. Mel Gibson, USA. Peaky Blinders. 2013–2019. Steven Knight, UK. The Piano. 1993. Jane Campion, New Zealand. Popeye for President. 1956. Seymour Kneitel, USA. Pride and Prejudice. 1995. Simon Langton, UK. Prime Suspect. 1991–2003. Christopher Menaul et al., UK. Pulp Fiction. 1994. Quentin Tarantino, USA. QI. 2003–present. John Lloyd, UK. The Rocky Horror Picture Show. 1975. Jim Sharman, UK. Roma. 2018. Alfonso Cuarón, Mexico. Romeo and Juliet. 1968. Franco Zeffrelli, UK/Italy. Royal Wedding. 1951. Stanley Donen, USA.
RuPaul’s Drag Race. 2009. Nick Murray, USA.
Sebastiane. 1976. Paul Humfress and Derek Jarman, UK.
Secret Agent. 1936. Alfred Hitchcock, UK.
The Secret History of Our Streets. 2012. BBC, UK.
Secrets and Lies. 1996. Mike Leigh, France/UK.
Seinfeld. 1989–1998. Larry David and Jerry Seinfeld, USA.
Sex Drive. 2008. Sean Anders, USA.
Shakespeare in Love. 1998. John Madden, USA/UK.
Sherlock. 2010–2017. Mark Gatiss and Steven Moffat, UK.
Shrek. 2001. Andrew Adamson and Vicky Jenson, USA.
The Simpsons. 1989. James L. Brooks, Matt Groening and Sam Simon, USA.
Sita Sings the Blues. 2008. Nina Paley, USA.
Slumdog Millionaire. 2008. Danny Boyle and Loveleen Tandan, UK.
Snatch. 2000. Guy Ritchie, UK/USA.
The Snows of Kilimanjaro. 1952. Henry King, USA.
The Sopranos. 1999–2007. David Chase, USA.
The Sound of Music. 1965. Robert Wise, USA.
South Park. 1997–2020. Trey Parker, Matt Stone and Brian Graden, USA.
Spanglish. 2004. James L. Brooks, USA.
Star Wars. 1977. George Lucas, USA.
The Stranger. 1946. Orson Welles, USA.
Taxi Driver. 1976. Martin Scorsese, USA.
Thelma & Louise. 1991. Ridley Scott, USA.
These Dirty Words. 2014. Jens Rijsdijk, Netherlands.
A Touch of Frost. 1992–2004. Roger Bamford, Roy Battersby et al., UK.
A Touch of Spice [Πολίτικη Κουζίνα]. 2003. Tassos Boulmetis, Greece.
Toy Story 3. 2010. Lee Unkrich, UK.
Trainspotting. 1996. Danny Boyle, UK.
Trainspotting 2. 2017. Danny Boyle, UK.
U2: The Best of 1990–2000. 2002. Bill Carter and Anton Corbijn, USA.
Usain Bolt: The Fastest Man Alive. 2012. Gael Leiblang, UK.
Vicky Cristina Barcelona. 2008. Woody Allen, Spain/USA.
Voices from Robben Island. 1995. Adam Low, UK.
What Have I Done to Deserve This? [¿Qué he hecho yo para merecer esto?]. 1984. Pedro Almodóvar, Spain.
The Wire. 2002–2008. David Simon, USA.
The Wolf of Wall Street. 2013. Martin Scorsese, USA.
Women on the Verge of a Nervous Breakdown [Mujeres al borde de un ataque de nervios]. 1988. Pedro Almodóvar, Spain.
Index
360° video 28
4K 93
abbreviations 128, 137–138, 140, 190, 211
Abdallah, Kristina 142
accents 12, 17, 39, 80, 194, 227, 237
access services 13, 55, 61
accessibility 6, 11, 13, 55, 62, 243
accuracy 22, 23, 45, 50, 142, 143, 247
acoustic code 4, 5, 35, 73, 86
adaptation 3, 35, 38, 43, 88, 143, 179, 207, 222, 248
adaptor 35, 37
add-ons 62
address (forms of) 73, 154–155, 186–187
Aegisub 16
Al-Adwan, Amer 92
Allan, Keith 181
Amara 16, 163, 245
Amazon 13, 16, 29, 52, 245
Amazon Prime 16, 29, 52, 245
Anderman, Gunilla 186
Apter, Ronnie 195
ARD 14
Armstrong, Stephen 243
Arnáiz, Carmen 191
Artegiani, Irene 46
artificial intelligence (AI) 62, 243
Asimakoulas, Dimitrios 218, 219
Asociación de Traducción y Adaptación Audiovisual de España (ATRAE) 60
assistive services 9, 13
Association des Traducteurs/Adaptateurs de l’Audiovisuel (ATAA) 60
asynchrony 95, 102, 106
Athanasiadi, Rafaella 244
Attardo, Salvatore 218, 219
audio description 6, 73, 104
audio scrubbing 50, 246
audiomedial (texts) 4, 5
Audiovisual Translation (AVT) 1, 3–7, 9–11, 33, 51–53, 56, 59–63, 65, 73, 84, 104, 146, 184, 204, 218, 219, 221, 223, 243–245
Audiovisual Translators Europe (AVTE) 60
augmented reality 28
automatic speech recognition (ASR) 23, 33, 51, 243
Ávila Cabrera, José Javier 81, 240
Ávila, Alejandro 81, 192, 240
Baker, Mona 70
Ball, Alan 68
Ballester Casado, Ana 192
Baños, Rocío 8, 243, 248
Bartrina, Francesca 61
Bassnett, Susan 61
Baxter, Sarah 128
Benini, Nadia 239, 240
Ben-Porat, Ziva 206
Berne Convention 58
Betacam 48
Bhabha, Homi 146
bitmap 27
Bleichenbacher, Lukas 80
Blu-ray 3, 6, 13, 21, 27–29, 33, 36, 38, 42, 47, 48, 52, 55–57, 93, 107
Bogucki, Łukasz 11
Bolaños-Garcia-Escribano, Alejandro 62, 244
Bond, Esther 52–53
BookBox 15
Bordwell, David 66
Boria, Monica 65
Bosseaux, Charlotte 196
Bourdieu, Pierre 238
Branigan, Edward 65
Bravo, Conceição 16
Bréan, Samuel 61
Brewer, Jenny 96
British Broadcasting Corporation (BBC) 13, 137
broadband 56
broadcast resolution 93
Broadstream Solutions 49
Brondeel, Herman 103, 109
Brown, Andy 29, 141, 149
Burchardt, Aljoscha 243
Burgers, Christian 221
Burridge, Kate 181
Bywood, Lindsay 24, 243, 244
calque 207–209
camera 4, 18, 40, 70, 71, 73, 84, 132, 165, 172, 197, 221, 240; movement 4, 70–71, 73–74; position 84
camp talk 180
canned laughter 220, 221
capital expenditure (CAPEX) 50
captioning (closed) 12
Carroll, Mary 27, 32, 39, 91, 103, 136, 175, 184, 196
Cary, Edmond 8
Cassany, Daniel 19
Castro Roig, Xosé 116
Cazden, Courtney 179
Ceefax 12
censorship 163, 190, 192, 238–241
centre (justified) 95–96, 99
Cerezo-Merchán, Beatriz 61, 62
Cerón, Clara 121
Channel 4 47
channel: acoustic 4, 73, 86; semiotic 85, 99, 101; visual 4
character: number 19, 50–51, 89, 97–99, 105, 109, 116, 117, 147, 173; width 97–98
characters per line (cpl) 44, 51, 97–99, 173
characters per second (cps) 107
Chaume, Frederic 4, 7, 61, 67, 107
Chen, Yuping 75
Chiaro, Delia 217, 218, 223–224, 236
Chisholm, Brad 30
cinema translation 6
ClipFlair 16
cloud-based (platforms/systems/ecosystems) 43, 48, 51, 108, 244–248
cloud-subtitling 47
Code of Good Subtitling Practice 91, 142, 184
code: acoustic 4; linguistic 4, 72; verbal 4; visual 4, 30
cognitive: load 77, 108, 150, 170; processing 153
coherence 57, 70, 72–73, 142–143, 168–170, 221
cohesion: intersemiotic 69–72, 160, 168–169; linguistic 72, 168; semiotic 101, 216
colours 18–19, 27, 83, 88, 130, 136, 246
Columbia Tristar Home Video 14
combined continuity 34, 39, 40
comedy 81, 149, 182, 222, 227, 236
compensation 157, 189, 215, 216, 224, 238
competence 16, 57, 62
Computer Aided Translation (CAT) 244
computer games 2, 6, 245
condensation 4, 14, 24, 37, 107, 122, 147, 148, 150–151, 154, 169
conformance 34
connotative meaning 150, 183, 185, 190, 192
constrained translation 4, 6, 223
contraction 139, 154
convention 12, 58, 59, 116, 130, 200
conversation/conversational 89, 165, 166
conversion 35, 36, 50, 112, 116, 140, 141
copyright 34, 43, 54, 58, 59, 196, 197, 244
Cornu, Jean-François 31, 61, 65
corporate videos 145
Corrius, Montse 79
co-text 119, 150, 159, 204, 205, 214, 225
covert translation 203–207, 209
credit: film 59; subtitler 59
crowdsourcing 242
cue/cueing 34, 45, 102, 103, 127, 172, 246
cultural references (CRs): extralinguistic (ECR) 146, 202, 218; intertextual 202–204, 206; intralinguistic 146; real-world 203; third culture 204–206
culture bumps 202
culture-bound (terms) 73, 217, 229, 237
culture-specific references 218, 236
cut 34, 86, 114–115, 133
cybersubtitling/cybersubtitles 7, 93
D/deaf 6, 9, 10, 12, 14, 20, 21, 23, 24, 26, 28, 51–53, 69, 88, 113, 136
D’Ydewalle, Géry 16, 109, 113
Danan, Martine 15
Dancyger, Ken 66
Dànmù 18
Darbelnet, Jean 210
Davis-Ponce, Jamie 196
De Bonis, Giuseppe 83
De Higes-Andino, Irene 54, 79, 83
de Pedro Ricoy, Raquel 218, 219, 223
De Rosa, Gian Luigi 218, 222
De Sousa, Sheila 243
Deckert, Mikołaj 11
deictics 160
Delabastita, Dirk 4, 5, 6, 65, 223, 224
delay (function) 113
deletion 66, 77, 147, 149, 162, 163, 167, 191, 216
denotative meaning 185
deprofessionalization 248
Deryagin, Max 96
Desblache, Lucile 195
Desilla, Louisa 67, 68
desktop-based systems 246
dialect 180, 183, 188, 194, 239
dialogue: dynamics 68, 175; exchange 4, 9, 12, 17, 18, 30, 33, 35, 39, 42, 51, 67, 78, 82, 102–104, 109, 124, 125, 134, 143, 165, 167, 185, 201, 203; functions 199; hard-boiled 88; interactional 67–68, 89; list 33–35, 39–40, 41, 42, 42, 47–48, 56, 66, 74, 120, 123, 196–197, 221; narrative-informative 68; overlapping 39, 83, 103, 127; structuring 67, 69; transcript 39, 42–43, 50; turn 83
Díaz Cintas, Jorge 6, 7, 10, 27, 31, 37, 50, 52, 61, 62, 78–80, 83, 91, 93, 99, 149, 190, 193, 203, 212, 223, 229, 238–240, 242, 244
diegetic 70, 72, 79, 80, 81, 84, 163, 167, 204
Digital Terrestrial Multimedia Broadcast (DTMB) 101
Digital Versatile Disk (DVD) 6, 9, 12–14, 20–21, 27, 28, 29, 33, 36, 38, 42–44, 47–48, 52–57, 78, 93–95, 98, 100–101, 107, 110, 136, 152, 192, 227, 239
Digital Video Broadcasting (DVB) 101
digital: box 190; cinema 28, 94; immigrant 2; native 2
digitization 1, 2, 20, 27, 33, 34, 36–38, 239, 243
discourse: oral 89, 179, 182; written 179, 182
Discovery Channel 194
Disney 14
distribution: channel 52, 57; format 115
Dollerup, Cay 15
domestication 83, 183
Dorado, Carle 61
Dropbox 245, 246
DualSub 20
dubbing: lip-sync 8; partial 52
Duran Eppler, Eva 84
duration: maximum 51, 106; minimum 51, 106
Dwyer, Tessa 65, 80, 239
dysphemism 181
Eco, Umberto 64
Edelberg, Elisa 13
editing 4, 24, 58, 65, 70, 71, 73, 74, 88, 114, 132, 143, 169, 237
Eldalees, Hani Abdulla 239
Elks, Sonia 2
Ellender, Claire 184, 188
ellipsis 126, 134, 160
emoticons 18, 19, 221
encoding 35, 65
entertainment software 55
equivalence/equivalent 79, 109, 110, 111, 116, 143, 144, 199, 223
errors 22, 23, 36, 43, 51, 143, 144
Espasa, Eva 146
Estopace, Eden 53
euphemism 181, 182, 189
European Association for Studies in Screen Translation (ESIST) 91
European Broadcasting Union (EBU) 22, 39
European Commission 13, 15
European Skills, Competences, Qualifications and Occupations (ESCO) 32
exoticism 80, 205, 206, 218, 223, 228–230, 234
expletive 123, 127, 189–193
explicitation 90, 207, 210–215, 233, 235, 237
exposure time 95, 105, 106, 168
extradiegetic 70, 88
extratextuality 204
eyetracking 170
fandubs/fandubbing 7, 8
fansubs/fansubbing 74, 78, 212, 239
FAR model 143
feedback effect 76
Ferri, Paolo 2
File Transfer Protocol (FTP) 34
film festival 19, 33
film translation 4–6, 31, 61
Fleex 16
Flemish television (VRT) 17
font 19, 51, 59, 74, 96, 97, 99, 119, 130, 134, 135, 136
foot/feet 116, 117
footage 18, 40, 57, 117
foreignisation 184
format/formatting 27, 36, 142
forms of address 73, 154, 155, 186
Fox, Wendy 88, 95, 193
frame 33, 34, 42, 50, 93, 94, 100, 101, 103, 104, 105, 114–117, 238
frame rate 33, 34, 101, 114
frames per second (fps) 30, 34, 99, 100, 102, 109
Franco, Eliana 8
Franzon, Johan 196, 199, 200
Gambier, Yves 6, 16
García, Boni 20
García, Martha 196–197
General Theory of Verbal Humour (GTVH) 218
generalization 152, 157, 210
genesis file 43, 54
genre 3, 8, 53, 61, 74, 79, 88, 89, 144, 145, 149, 166, 172, 185, 196, 221, 237, 244
geolect 180
Georgakopoulou, Panayota 43, 45, 149, 243
Gerber-Morón, Olivia 77, 113
Gerzymisch-Arbogast, Heidrun 6
Ghia, Elisa 77, 184
Glennie, Alasdair 17
globalization 51, 54, 58, 184, 235, 237, 239
glocalization 92, 229
gloss 18, 78, 212
Gold, John 244
Goodwin, Charles 89
Google: Drive 245, 246; Glass 244; Translate 20, 243
Gorlée, Dinda L. 196
gossiping effect 76
Gottlieb, Henrik 6, 19, 76, 178
Graeme, Ritchie 218
grawlix 190, 240
Greco, Gian Maria 7
Griffin, Emily 13
Grutman, Rainier 79
Gutt, Ernst-August 148
habitus 238, 241
Halliday, M.A.K. 70
Hanna, Sameh 238
Hannay, Mike 70
Hanoulle, Sabien 66
hard of hearing 6, 9, 10, 12, 14, 20, 21, 23, 28, 51–53, 88, 113, 136
Harrenstien, Ken 243
Hassan, Ruqaiya 70
He, Ming 18–19
headnote 78
hearing impaired 10, 12, 13, 20, 21, 24, 193
Heiss, Christine 84
Hermes project 53
high definition broadcasting 101
Hulu 52, 245
humour: audiovisual 219; aural 237; community-based/culture-based 218, 223, 229, 232, 235; culture-bound 237; functions/functioning of 217–218, 220; language-dependent 223–224; linguistic 230, 236; national 223, 229; translating 222; unintentional 236; verbal/verbally expressed 218, 223–224, 237; visual 219, 237
Hurtado, Amparo 6
hybridity 146
hypernym 6, 189, 210, 213, 234
hyponym 189, 210
iconography 72
ideology 201, 238, 240
idiolect 179
Iflix 52
immersive environments 28
implicature 68–69, 72, 161, 180
Incalcaterra McLoughlin, Laura 16, 184
inserts 9, 22, 35, 85, 95, 134
interactive software 7, 55
interculturalism 196, 205, 206
interjections 77, 89, 163, 189, 192
interlingual 4, 6–11, 14, 15–16, 19–22, 23–25, 27–29, 47, 51, 77, 83, 93, 103, 130, 136, 142–144, 184, 243, 244, 246
Interlingual Live Subtitling for Access (ILSA) 24
interpersonal 67, 148, 163–164, 196
interpreting 72, 206, 220
intersemiotic: cohesion 69–72, 160, 168–169; texture 70–71, 75
intertextual (macro) allusions 203, 204, 206–209
intertitles 30, 31, 65
in-time 86, 104, 115–116
intonation 15, 73, 89, 100, 123, 132, 157, 237
intralingual 4, 6, 8–10, 12, 14–18, 20–25, 27–29, 51, 53, 77, 83, 136, 143, 178, 246
invectives 191
iQiyi 19, 52
irony 123, 175
isochrony 8
italics 35, 83, 130–136
Ivarsson, Jan 27, 32, 39, 54, 59, 60, 91, 103, 136, 142, 175, 184, 196
Jäckel, Anne 183–184
Jakobson, Roman 4
Jan Ivarsson Award 60
Jay, Timothy 181
jokes: bi-national 229–230; international 229–230; national 230; reflecting a culture 229; reflecting an institution 229
Jones, Sam 17
Kapsaskis, Dionysios 46, 240
Karamitroglou, Fotios 76, 91
karaoke 16
Kaufmann, Francine 185
key names and phrases (KNP) lists 47
kinesics 73
kinetic synchrony 8
Kothari, Brij 15, 196
Kovačič, Irena 148
Kozloff, Sarah 67, 182
Krämer, Mathias 84
Krejtz, Izabela 114
Kruger, Jan-Louis 150, 239–240
Kuo, Arista Szu-Yu 142
Laks, Simon 109
language service provider (LSP) 33, 55–56, 196
language: colloquial 151; dominant 79; lesser 79; variation 178–183, 185, 194, 238
Larkin-Galiñanes, Cristina 219
Latif, Nida 73
Lawson, Mark 17
Lefevere, André 238
legibility 27–28, 95, 97, 99, 107, 118, 119, 130
lektoring 8
Lertola, Jennifer 16
lexical: recreation 207–208, 215; taxonomy 203
Lievois, Katrien 206
limitations: space 10, 78, 137, 139, 148, 212, 216; time 10, 78, 137, 139, 212, 216
line breaks 77, 91, 100, 107–108, 144, 169–172, 174; 21 12, 27
Linell, Per 68
linguacultural 2, 62
linguistic: competence 16, 57; corrector 36; variation 68, 179, 183–185, 217
literal translation 76, 153, 188, 199, 207–210, 213, 222, 226, 230, 232–233
Liu, Yu 69–71
loan 47, 207–208, 213, 217
localization: end-to-end 53; multilingual 52
locked cut 34
Lomheim, Sylfest 149
Low, Pete 195–196, 199
Luckmann, Thomas 72
Luyken, Georg Michael 38
lyrics 9, 16, 65, 126, 134, 195–200
machine: learning 243; translation (MT) 51, 62, 243–244
Mackenzie, Lachlan 70
macro 44, 222, 224
Mailhac, Jean-Pierre 186
manipulation: ideological 238, 240; technical 238
Marleau, Lucien 71
Martínez Pérez, Juan 143
Martínez-Sierra, Juan José 80, 218, 222
Mason, Ian 6, 238
Massidda, Serenella 37, 242, 246
Matamala, Anna 150, 242
Mateo, Marta 195
Mayoral Asensio, Roberto 6
Mazur, Iwona 73
McAlone, Nathan 16
McDonald, James 181
Media and Entertainment Services Alliance (MESA) 52
Mehrez, Samia 146
memory tools 51
Memsource 245
merge/merging (subtitles) 20, 26, 45, 51
metadata 35
metalinguistic features 227
Meyer, John 218–219
Meylaerts, Reine 79
MGM 21
microtasking 57, 62, 242
mini-max effect 148
minimum gap 51, 114
Minutella, Vincenza 218
mise-en-scène 65, 69–70, 72, 84–85, 165
misrecognition 23
modality 155
mode: aural 67; aural-verbal 65, 70, 72; aural-nonverbal 65; verbal 65, 71; visual 84, 210; visual-nonverbal 65, 70; visual-verbal 65
Montgomery, Coleen 183–184
multidimensional translation 6
multilanguage vendor (MLV) 33, 39
MUltilingual Subtitling of multimediA content (MUSA) 243
multimedia 1, 5, 51, 55, 62, 243
multimedia translation 6
multimedial (texts) 5
multimodal: communication 2; functioning 65; texts 5, 64, 69–70; translation 6
multimodality 72, 78, 82
multisemiotic 5, 64–65, 70, 82
multitasking 24, 77
Muñoz-Sánchez, Pablo 52
Murillo Chávez, Javier André 58
music 6, 8, 10, 12, 50, 65, 133, 165, 182, 195–196, 200, 237
musical 9, 16, 22, 165, 194, 197, 199
narration 4, 7–9, 65, 70–72, 82, 85, 88, 152, 169
Nedergaard-Larsen, Birgit 203
NER model 143
Netflix 16–17, 29, 42, 47, 52–53, 57, 92, 96, 98, 110, 112, 115, 119, 196, 245
netizen 3
Neves, Josélia 6, 13
Newmark, Peter 203
Nikolić, Kristijan 39, 43, 45, 142
Noack, Rick 18
Nord, Christiane 205
Nornes, Abé M. 185, 238
no-translation 83–84
Nsofor, Adebunmi 58
NTR model 143–144
NTSC 30, 33, 101, 104, 106
numbers 47, 66, 77, 104, 107, 120, 138–140, 149
O’Hagan, Minako 18
O’Halloran, Kay L. 69–71
O’Sullivan, Carol 3, 31, 65, 80, 87
obscenicons 240
Ofcom 13, 26
official equivalent 217
omission 4, 9, 69, 143, 147–150, 155–156, 161–162, 164, 169, 207, 216–217, 223
onscreen narration/text 85
OOONA 93–94, 108, 108, 245, 246, 247
operating expenditure (OPEX) 50
orality 88
Orero, Pilar 61
originating/origination 35
orthotypography/orthotypographic rules 119
OTT 12, 29, 42, 52–54, 57, 92, 137
out-time 103–104, 113, 115–116
over-scan 94
overt translation 76
Oziemblewska, Magdalena 46
Pai, Feng-shuo Albert 222
PAL 30, 33, 101, 106
paralinguistic 4, 10, 12, 14, 21, 175, 221, 237
parody 197, 204, 219–220
Parra López, Guillermo 175
patronage 238
Patterson, Jake 29
Pedersen, Jan 107, 143–144, 202, 204–206, 217, 237
Perego, Elisa 77, 84, 160, 170
Pérez-González, Luis 3, 11
pivot: language 54; translation 43
pixel 98
Pöchhacker, Franz 62, 143
polyglot film 79, 81
Poool (The) 59
Popowich, Fred 243
post-editing 23, 62, 243–244
Poyatos, Fernando 73
Prensky, Marc 2
project/client management 56, 247–248
pronunciation 15–16, 122, 179–180, 184–185, 188, 194–195, 199, 224, 227
proofer 36
prosodic features 105, 126, 175
prosumer 3, 78
proxemics 16, 73, 84
Pujol, Dídac 192
pun 223
punctuation 23–24, 46, 108, 116, 119–121, 123–125, 128, 134, 142, 144, 176, 226
quality: assurance/assessment (QA) 141; control (QR) 36, 51, 141, 246, 248; controllers (QCers) 36, 248; management (QM) 141
Rajendran, Dhevi 170
Ranzato, Irene 184, 202–208, 218, 223
Raskin, Victor 218
rates 12, 43, 45, 51, 55–56, 60, 91, 101, 106–107, 110, 112–113, 149
readability 99–100, 118–119, 125, 142–144, 160, 172, 184–185, 195, 200
reading speed 14, 22, 24, 78, 103, 106–107, 110, 116, 136, 144, 147, 149, 169
realia 202
reception research 76, 84
reduction 23, 37, 45–46, 66, 105, 146–149, 157, 159, 169, 220
redundancy 71, 161–162, 182
reference: culture-bound 229, 237; ethnographic 203–204; extralinguistic 146, 202, 218; geographical 203–204; socio-political 203–204
register 70, 89, 99, 113, 137, 147, 154, 179–180, 184–185, 193, 221, 225
Reid, Helen 90, 172, 186
Reiss, Katharina 4–5
relevance (theory) 148
Remael, Aline 8, 13, 18, 23–24, 62, 65–67, 69, 142–143, 146, 161, 193, 203, 205–207, 223, 229
remuneration 56, 58, 142
respeaking 14, 23–24, 51, 62, 244
resynch 51
Reviers, Nina 13, 65, 69
reviser 37
revoicing 7, 16
rewriting 89, 147, 184–185, 238
rheme 157
rhyme 198–200
rhythm 10, 15, 25, 32, 49, 57, 73–74, 99, 105, 126, 134, 146, 150, 152, 172, 176–177, 186, 196, 198–200, 237, 248
Richardson, Kay 182
rights: broadcasting 58; distribution 58; economic 58; exploitation 58; moral 58–59
Robert, Isabelle 24, 142–143, 205
Robson, Gary D. 22, 114
Rodríguez, Ashley 54
Roettgers, Janko 57
Romaine, Suzanne 180
Romero-Fresco, Pablo 23–24, 55, 88, 107, 113, 143
rough translation 37
Roxborough, Scott 54
royalties 58
Rush, Jeff 66
safe area 4, 51, 94, 97–99, 105
Sandford, James 107
Sanz-Ortega, Elena 84
Sasamoto, Ryoko 18
Sawaf, Hassan 244
Schäffner, Christina 142
Schilling, Natalie 179, 180
Schröter, Thorsten 217, 222
Schwab, Klaus 2
screen translation 60, 91
screenplay: post-production 66; pre-production 66
screenwriting 66–67
script 4, 10, 39, 50–51, 68, 123, 149, 165, 193, 218, 220–221, 243
script opposition 218
SDH 6, 10, 12–16, 21, 22, 24, 27–28, 74, 83, 93, 103, 246
SECAM 30, 33, 101, 104, 106
segment(ing) 45, 47, 50, 169, 171
segmentation: lexico-syntactic 172; rhetorical 172, 175–176; visual 172
self-censorship 192, 238
semiotics 64, 70, 202, 223
sensory impaired 13
Şerban, Adriana 79
set top box 12
Short, M. 179
shot: alternating/shot-reverse-shot 73–74, 132; change(s) 30, 37, 49–51, 58, 103–104, 113–116, 115, 133, 172, 174, 243, 246–247
sign language interpretation 13
sign: iconographic 4; nonverbal 70; system 5, 64–65, 71; typographical 91, 97; verbal 85; visual 64
signing 13
simulation 36, 58
sing along 16
sing-a-long-a 16, 17
six-second rule 109, 174
slang 88, 178, 180–182, 184, 189, 202
Smith, Stephen 45
Snell-Hornby, Mary 5–6, 61
social semiotic theory 69
sociolect 180, 185
songs 4, 9, 12, 16, 33, 35, 85, 105, 126, 133–134, 195–199, 239
sound bridge 74, 115
sound wave 247
soundtrack 7–10, 12, 14–15, 20, 22, 33–34, 39, 43, 46–47, 50, 75–77, 81–82, 101, 103–104, 107, 113, 119, 123, 126, 135–136, 146–147, 161, 165, 168, 178, 190, 195, 197, 199, 205, 214, 220
spatial limitations/constraints 92, 212, 216, 237
speaker-independent respeaking software 23
SpeakUp 14
specification 57, 210
speech: community 179; linguistic features 175; marked 67, 178–180, 182; paralinguistic features 175; scripted 88–89; spontaneous 69, 88–89; unmarked 179
spotter 34, 37, 39, 103
spotting: list 34, 40, 43, 45; List Footages & Titles 40; rhetorical 175
stenocaptioner 22
stenographer/stenography 22
stenotype/stenotypist 10, 22
Steyn, Faans 150
streaming 3, 13, 16, 21, 27, 33–35, 38, 40, 44, 48, 53, 57, 93, 107
streaming platform/service 19, 52, 246
style guide 47, 60, 92, 120, 142
stylebook 47, 119–120
subordinate translation 158, 180
substitution 8, 207, 212, 217, 224, 234
Subtitle Edit 16
Subtitle Workshop 16
subtitle/subtitling: 3D 28; add-on 243; altruist 7; app 16; audio 9; background 97, 119; bilingual 19, 20, 93; block 25–26; burnt-in 85; closed 20, 26–27; colour 97, 136; company/companies 18, 36, 39, 47, 53, 58, 59, 93, 99, 193, 247; conventions 92, 95; cumulative 25, 25–26; diagonal 19; display rate 24, 35, 43, 106–107, 110, 140, 147; didactic 14; edited 24; editor 37, 48, 50, 247; electronic 28, 37, 136; file 20, 48; freeware 16, 52, 105; forced (narrative) 26, 136; genesis 43; guerrilla 7; guidelines 47, 89, 92–93, 118–120, 143; hard 26–27; immersive 95; integrated 10, 88, 95–96; interlingual 9, 11, 14–15, 20–21, 24, 27, 47, 51, 83, 93, 103, 130, 136, 142–144, 184, 243, 244; intralingual 6, 12, 14–18, 20, 27–28, 53, 136, 178, 246; laser 28, 36; layout 91–92, 95, 103; list 22, 48, 92, 169; live 24; master 40, 43, 45; mechanical 27; monolingual 81, 244; multilingual 20, 243; number 44, 104, 117; offline 22; online 22; open 14, 20, 27; optical 27; photochemical 27; placement 95, 100; platform 60, 96, 245; pop-up 10, 25; pop-on 25; positioning 28, 94–95, 99; pre-prepared 22–23, 25, 143; pre-rendered 27; presentation rate 107, 113; program 23, 37, 48–50, 94, 99, 101, 103–105, 107–108, 113, 116–117, 141; projection 19, 121; quality 13, 46, 76, 141–142; realtime 22; roll-up 26; scrolling 18; scroll-up 19; semi-live 10, 22; simultaneous 20, 22, 24, 247; software 35, 48, 50, 57, 97, 169; speed 107; standards 110, 120; style 45, 47, 89, 154; thermal 27; tone 193; track 19, 21; verbatim 24; visibility 96; workflow 37, 38, 46, 57, 244; workstation 48, 243
SUbtitling by MAchine Translation (SUMAT) 244
supertitles 9
surtitles/surtitling 22–23, 35
Susam-Saraeva, Şebnem 196
SVOD 36
swearwords/swearing 88, 146, 181, 189–193, 240
symbols 22, 40, 64, 108, 128–129, 240
synchronization (temporal) 101
synchrony/sync: lip 8, 37; linguistic 77; visual 73–74
Szarkowska, Agnieszka 24, 46, 77, 113–114, 149, 170, 176
taboo words 88, 181, 183–184, 189–192, 217, 238, 244
Takeda, Kayoko 83
Talaván, Noa 16, 184
Taylor, Christopher 61
technology: analogue 2, 48, 97; digital 2, 48, 52, 62, 76; text-to-speech 244
TED Talks 245
telecine 30, 33, 101
teletext 12, 27, 97
telop 18
template(s) 14, 34–35, 37, 39–40, 43–49, 53, 56, 66, 92–93, 112, 119–120, 123, 142, 221, 244, 246–247
templator 37, 39
temporal limitations/constraints 3, 24, 35, 37, 129, 146, 164, 196, 238
territories 7, 54
text on screen 26, 65, 75, 85, 87–88, 99, 105, 110, 119, 134, 197
theatrical: distribution 58; release 34, 36, 43, 55, 56, 94, 116
theme 69, 80, 157, 232
timecode 22, 27–28, 43, 50–51, 103–105, 104, 115
timed text style guide 92, 112
timing 10, 34–36, 39, 42–43, 48, 50–51, 54, 74, 101–103, 106, 108, 246
Titford, Christopher 6, 146
title cards 30
topnote 78, 212
Törnqvist, Egil 76
Trafilm 78
training 22, 24, 45, 49, 53, 58, 60–63, 141
transadaptation 6
transcreation 62
transcribe/transcription 9–10, 14, 16–19, 24, 33, 39–40, 42–43, 45, 83, 89, 190, 194–195, 208, 243–244, 246
transculturality 204, 214, 230
transfile 43
translation: brief 143–144, 196, 238; machine 51, 62, 243–244; memory 62, 243–244; priorities 222; resources 242; revision 142; shift 145, 155–156; strategy/strategies 7, 10, 189, 203–207, 217, 222, 232, 235, 239; tools 242, 244
transliteration 208
transposition 184, 207, 213–214, 232
Trudgill, Peter 179
Tseng, Chiao-I 70
tù cáo 18
TV 2–3, 6, 8–9, 12–13, 16, 18–19, 22, 24, 26, 29–30, 39–40, 42–43, 46, 52–54, 57, 59, 64, 76, 81, 84, 87, 88, 90, 94–95, 98, 100, 107, 127, 145, 149, 180, 182, 185, 190–193, 196, 202, 207–208, 211, 221, 230, 240
TV5MONDE 14
typography 138
Ultra HD 94
UNESCO 59–60, 79
United Nations 12
Van de Poel, Marijke 16
van Mulken, Margot 221
Van Rensbergen, Johan 113
Van Wert, William 30
Vandaele, Jeroen 219, 221
Vanderplank, Robert 15
Vanoye, Francis 67
VCD 2, 6, 13, 27, 29
velotype/velotypist 22–23
vendor 34, 39, 47, 50, 53, 57–59, 98, 105, 119–120, 128, 149, 242, 246
verbatim 146
VHS 2, 6, 27, 29, 34, 48, 191–192
video: file formats 34; games 2, 7, 55, 91; on demand (VOD) 13, 16, 27, 36, 43
viewing speed 107, 113
Viki 52
Villanueva Jordán, Iván Alexandro 180
Vimeo 245
Vinay, Jean-Paul 210
virtual: reality 7, 28; workspace 248
VOD 13, 19, 38, 43, 47–48, 52–54, 56–57, 95, 98, 100, 107, 136, 240
Voellmer, Elena 79
voice in off 136
voiceover 4, 7–9, 55, 61, 65, 88, 104, 133, 152, 243
Volk, Martin 243
VRT 217
vulnerability 76, 78
vulnerable translation 77, 82
Wahl, Chris 79–80
Wales, Katie 179
Wang, Wei 75
waveform 50, 103, 246
Web Video Text Tracks (Web-VTT) 35
Wilkinson-Jones, Phil 14
Williams, Gareth Ford 25
Williamson, Lee 218–219, 223
Wincaps 35, 51, 93–94, 96, 103, 108, 116, 169, 170
wordplay 184, 222, 224, 226, 235–237
words per minute (wpm) 106–107, 109–110, 149
YouTube 16, 52, 103, 243, 246
Zabalbeascoa, Patrick 47, 65, 79, 221–223
Zhang, Leticia Tian 19
Zhang, Xiaochun 18
ZOO Digital 245