Subtitling Today
Subtitling Today: Shapes and Their Meanings

Edited by
Elisa Perego and Silvia Bruti
Subtitling Today: Shapes and Their Meanings
Edited by Elisa Perego and Silvia Bruti

This book first published 2015

Cambridge Scholars Publishing
Lady Stephenson Library, Newcastle upon Tyne, NE6 2PA, UK

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Copyright © 2015 by Elisa Perego, Silvia Bruti and contributors

All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN (10): 1-4438-8035-3
ISBN (13): 978-1-4438-8035-0
CONTENTS
Introduction ........ vii

Chapter One ........ 1
Subtitling Today: Forms, Trends, Applications
Elisa Perego and Silvia Bruti

Chapter Two ........ 15
Audiovisual Translation and Sociolinguistic Adequacy
Gian Luigi De Rosa

Chapter Three ........ 33
Reading Cohesive Structures in Subtitled Films: A Pilot Study
Olli Philippe Lautenbacher

Chapter Four ........ 57
The Language of Inspector Montalbano: A Case of Irony in Translation
Mariagrazia De Meo

Chapter Five ........ 77
Cultural References in Fansubs: When Translating is a Job for Amateurs
Ornella Lepre

Chapter Six ........ 99
The Influence of Shot Changes on Reading Subtitles – A Preliminary Study
Agnieszka Szarkowska, Izabela Krejtz, Maria Łogińska, Łukasz Dutka and Krzysztof Krejtz

Chapter Seven ........ 119
Real Time Subtitling for the Deaf and Hard of Hearing: An Introduction to Conference Respeaking
Saveria Arma

Chapter Eight ........ 135
France’s National Quality Standard for Subtitling for the Deaf and Hard of Hearing: An Evaluation
Tia Muller

Chapter Nine ........ 171
Telop and Titles on the Japanese Small Screen
Claire Maree

Chapter Ten ........ 189
It Ain’t Over Till the Fat Lady Sings: Subtitling Operas and Operettas for the DVD Market
Adriana Tortoriello

Chapter Eleven ........ 203
Subtitling – From a Chinese Perspective
Dingkun Wang

Chapter Twelve ........ 221
Learner Corpus of Subtitles and Subtitler Training
Anna Bączkowska
INTRODUCTION

ELISA PEREGO AND SILVIA BRUTI
Subtitling, a well-known, established and widespread form of audiovisual translation, nowadays comes in many forms and accomplishes many purposes. It enables viewers to access, understand, enjoy, interpret and remember an audiovisual product, but it can at times guide or limit the viewer’s interpretative options or achieve a decorative function. It can be invisible and easing–as most manuals desire–or intrusive and taxing but still adored, especially by niche film fanatics.

Given its many forms, functions and audiences, subtitling has lately been studied from several evolving perspectives. This collection of contributions aims to show at least some of them, and it does so by assembling papers that analyse aspects of subtitling in several audiovisual genres (ranging from TV series and variety programmes to operas and operettas, including feature films and live conferences) and combinations of languages (Chinese, English, Finnish, French, Italian, Japanese and Polish), and by welcoming both traditional descriptive frameworks and novel methodological approaches in the field of Audiovisual Translation (AVT).

The volume includes papers reporting case studies on language transfer strategies in specific situations, e.g. when the source text is challenging because it relies heavily on multilingualism and requires the subtitler to tackle the translation of sociolinguistic varieties, or because the subtitler has to render irony, culture-specific references, comic nuances in peculiar genres, or source language specificities in very distant target languages. It includes descriptive papers that offer a state-of-the-art overview of cutting-edge subtitling methods, such as telop, real time subtitling for the deaf and hard of hearing, and subtitling status and policy in a given European and non-European country. It also includes papers reporting on empirical research assessing subtitle reading in particularly challenging situations or assessing the effect of specific subtitle features on comprehension. It tackles the issue of teaching translation for subtitling and the importance of corpus data to make the teaching process more focused and effective.

The empirical papers offer an invaluable perspective on the changes subtitling research is undergoing. They both resort to eye tracking methodology, thus following a research path that has recently flourished in
AVT and that is ever more interested in the viewer’s reaction to, and comprehension of, subtitling vs. the translator’s problem-solving processes. Such papers emphasise the lack of a solid empirical methodology in a discipline that has long been mainly descriptive and they raise methodological issues that need to be tackled in future empirical research in AVT.
Acknowledgments

This research was partially supported by the University of Trieste Research Fund FRA 2013 (“Towards an empirical evaluation of audiovisual translation: A new integrated approach”) awarded to Elisa Perego. The authors would like to thank Francesca Bozzao for her initial typesetting work, and Christopher Taylor and Serenella Zanotti for their support throughout the editing work.
CHAPTER ONE

SUBTITLING TODAY: FORMS, TRENDS, APPLICATIONS1

ELISA PEREGO2 AND SILVIA BRUTI3
1. Recent Developments in Subtitling

When subtitling began to be practiced (1909 at the cinema, and 1938 on TV, according to Ivarsson 2004 and Ivarsson and Carroll 1998), the main concern of practitioners was to convey the dialogue of the actors to the audience. Technical and translational problems–e.g. how to place subtitles on the distribution copies or how to distribute the same film in different languages–soon arose. This paved the way for scholars to tackle them in the attempt to offer solutions or to perform systematic analyses of the evolution of subtitles from intertitles, of their economics, distribution, principles and conventions. The first manuals for subtitlers started to appear much later (e.g., Ivarsson and Carroll 1998), along with in-house unpublished guides outlining specific rules and company policies.

The proliferation of audiovisual media, the need to access original versions of AV products as soon as possible and the newly-acquired flexibility of dubbing countries have recently led to an increase in the volume and the nature of such activity. Furthermore, the idea that subtitles accomplish a mere translational function and that they are simply “a translation [appearing] at the bottom of the screen during the scenes of a motion picture or television show in a foreign language” (Merriam-Webster Online) is nowadays outdated, or at least too restrictive. Currently, subtitles come in several forms and their applications are manifold, and they have further contributed to the recent growth in
1 Elisa Perego is the author of paragraph 1 and of the overview on the articles by Arma et al. in paragraph 2. Silvia Bruti is the author of the remainder of paragraph 2.
2 Università di Trieste, Italy. Email address: [email protected]
3 Università di Pisa, Italy. Email address: [email protected]
subtitling volume all over the globe. The papers collected in this special issue illustrate such a varied and fluid situation very well.

A first addendum to the traditional definition of subtitling, one which describes standard interlingual subtitling written in a language different from the language of the original audiovisual product, would highlight the presence of a parallel and equally common subtitling form, i.e. intralingual subtitling, written in the same language as the original audiovisual product. Traditionally, same language subtitling was thought of as a tool to enable deaf and hard of hearing viewers to access audiovisual products. Indeed, intralingual subtitling is able to render the dialogue in the same language along with additional information on the auditory elements of the soundtrack. As some of the papers show, however, besides being an invaluable accessible film service (Szarkowska et al. and Muller), nowadays intralingual subtitling takes different forms and labels, and it can accomplish several new functions. If produced live, on the fly, through respeaking techniques, it can be exploited to make AV products other than films, e.g. conferences (Arma), accessible. If superimposed in post-production, especially in Japanese variety programmes, it is known as telop (Maree) and it accomplishes a peculiar entertaining function for hearing viewers–it mainly highlights comic hints, but at the same time it contributes to manipulating the intended meaning of the source and to leading viewers towards a univocal interpretation of it.

Although very creative uses of intralingual subtitles are possible, this form is certainly best known for its socially relevant and didactic applications–or at least it has been until very recently. Not only do same language subtitles serve as an aid for deaf or hard of hearing people, but they can also have a major impact on literacy and reading abilities (making the reading practice an incidental, automatic, and subconscious part of popular TV entertainment) (Kothari 1998, 2000; Kothari and Takeda 2000), and on second language learning and acquisition (d’Ydewalle and Pavakanun 1995; d’Ydewalle and Van de Poel 1999; Kuppens 2010).

Standard subtitles are no less important: besides accomplishing their primary role (i.e., providing a written translation of the original dialogue), they can be exploited to “teach, revive and maintain minority languages” (Ivarsson and Carroll 1998, 7), to distribute art house films from small countries, and to make AV material accessible before official release. This latter aspect is very much appreciated in the Chinese world (Wang), which heavily relies on amateur subtitles (i.e. fansubs, “subtitles made for foreign audiovisual products in a non-professional environment”, Lepre) to overcome the media censorship often imposed by the government on dubbed productions.
Overall, subtitling has evolved and some of its forms have developed from necessary aids into extra layers added to the original AV product. In particular, telops (Maree), along with fansubbing (Lepre), are very particular forms of subtitling which show most clearly the extent of subtitling evolution in terms of functions and conventions. Telops stretch the traditional idea of same language subtitles to the extent that they no longer only render what is being said for a deaf audience, but reproduce part of the dialogue, disambiguating it for a hearing audience, emphasizing it and making it redundant thanks to the graphic conventions used. Telops do not enable viewers to access, enjoy and interpret an AV product; they are added to limit the viewer’s interpretative options by directing his/her attention and by ruling out ambiguity. They also achieve a decorative function, thus going against the chasteness which has always been typical of SDH (subtitling for the deaf and hard of hearing) and of standard subtitling, whose primary feature should be invisibility. Fansubs can be intrusive and taxing, and they often resort to unconventional stylistic features which would be unacceptable in professional practice, which is still more concerned with usability criteria.

Besides coming in different forms and accomplishing several functions, subtitles have recently been studied from different, new perspectives. Research on subtitling–just like its forms and uses–has evolved: from a purely technical and translational perspective and from in-depth linguistic studies, it nowadays includes more modern and interdisciplinary ways of approaching the subject. The different aspects of subtitling have been studied via eye tracking and empirical methods, and aspects of subtitling related to its reception, usability, and effectiveness have attracted the attention of several scholars. This special issue is representative also in this respect. It includes traditional descriptive papers reporting case studies on language transfer strategies in specific situations, e.g. when the source text is challenging because it relies heavily on multilingualism or the issue of translating sociolinguistic varieties (De Rosa) needs to be tackled, or because the subtitler has to deal with and render irony (De Meo), culture-specific items (Lepre), comic nuances in certain genres (opera and operetta, Tortoriello), or source language specificities in a very distant target language (Wang). It includes descriptive papers that offer a state-of-the-art overview of cutting-edge subtitling methods, e.g. telop (Maree), real time subtitling for the deaf and hard of hearing (Arma), and subtitling status and policy in a given European (Muller) and non-European (Wang) country. It tackles the importance of subtitling corpora to diagnose the competence of subtitle trainees and to prepare teaching materials (Bączkowska). It also includes papers reporting
on empirical research assessing subtitle reading in particularly challenging situations, e.g. when subtitles are displayed over shot changes, or assessing the effect of specific subtitle features, such as internal cohesive structure, on comprehension (Lautenbacher).

The empirical papers offer an invaluable perspective on the changes subtitling research is undergoing. They both focus on eye tracking methodology, thus following a research path that has recently flourished in AVT and that is increasingly concerned with the viewer’s reaction to, and comprehension of, subtitling vs. the translator’s problem-solving processes. Such papers emphasise the lack of a solid empirical methodology in a discipline that has long been mainly descriptive, and they raise methodological issues that need to be tackled in future empirical research in AVT.

To conclude, the papers cover a wide range of audiovisual genres and languages. AV genres range from TV series and variety programmes to operas and operettas, including feature films and live conferences. The languages examined, as either source or target language, are (in alphabetical order) Chinese, English, Finnish, French, Italian, Japanese and Polish.
2. The Contributions to the Present Volume

Given the variety of themes tackled by the authors, ordering the papers has not been easy. We have decided to arrange them according to the main themes they explore, starting from those that deal with more general matters in translating audiovisual texts and then delving into more specialised topics.

Gian Luigi De Rosa investigates one of the thorniest problems in translation, i.e. the rendering of different varieties of language and of socio-pragmatic elements. His specific focus of attention is on two Brazilian TV series, Mandrake (2005-2007) and FDP (2012). The former was broadcast on Sky TV in Italy in 2009 in its Italian dubbed version, and the latter was subtitled by the students of Portuguese in an MA course at the University of Salento. The two case studies offer ample material for extending the reflection to all the texts that share these properties. De Rosa aims to analyse a series of issues related to the audiovisual translation of mixed-language texts, i.e. texts that are strongly characterised by the presence of different varieties and of socio-pragmalinguistic elements. The investigation focuses particularly on the two aforementioned TV series, Mandrake and FDP. Both of them partly conform to the Brazilian Portuguese neo-standard, but they also contain a
“strongly connoted spoken language with elements from sub-standard varieties belonging to popular varieties of BP which are used in less monitored contexts and situations and in order to characterize characters”.
This is possible, the author contends, because TV language tends to use exaggerated sociolinguistic features in order to more precisely connote some of the characters. When marked texts need to be translated, a main strategy seems to emerge, especially in dubbing: choices that are marked in the original are turned into either a neutralised non-standard or, at best, into monitored informal speech. Markedness powerfully reflects the relativity of sociolinguistic situations that are couched in very different linguistic forms across different lingua-cultural pairs. In this regard, De Rosa shows that the main difficulty in translating the two Brazilian TV series revolves around the frequent overlap of the diastratic and diaphasic dimensions. Choosing popular varieties of Italian cannot be a palatable solution, in that popular varieties correspond to dialects or regional varieties of Italian and their use would thus be diatopically marked and, as such, unacceptably domesticating. A compromise solution needs to be pursued, i.e. one that combines both neutralisation of markedness, and consequently a standardisation of the text, and the dislocation of markedness from one dimension of variation to another or from one level of analysis to another.

Olli Philippe Lautenbacher discusses the results of a small scale pilot study to ascertain how comprehension takes place when watching a subtitled film, submitting the same sample with different types of subtitles (L1, L2 or no subtitles) to three different informant groups. The author aims to evaluate the combined impact of filmic cohesive structures and interlinguistic (L1) or intralinguistic (L2) subtitles on the comprehension process of the audience, presupposing, in line with the main literature on the topic, that bimodal input, i.e. input deriving from both dialogue and subtitles (but also from image and subtitles), is profitable for recollection and comprehension corroboration. The experiment was carried out with a short excerpt from a French film, which was shown to three groups of Finnish university students who were asked to answer limited response items in a questionnaire. Their viewing experience was also recorded by means of eye tracking measures. The best results were obtained by the students who watched the extract with Finnish subtitles, with almost 75% of the answers being satisfactory, against slightly less than 60% for those who watched the clip with French subtitles and some 40% for those who watched it with no subtitles. Differences in results between the groups that were shown intra- and interlinguistic subtitles are actually confined to just a few questions. Careful multimodal analysis of the various questions
suggested that the outcome of the experiment strongly depends on the kind of cohesive links that are exploited. The types of cohesion can thus be: minimal cohesion, in which subtitles support the dialogue alone; narrative cohesion, when several expressions in the excerpt develop the same theme and cohesion may thus take the form of more or less explicit redundancy; and multimodal cohesion, when dialogical elements are sustained by image and sound. In the first case subtitles in Finnish seem to be helpful, whereas in the case of narrative cohesion both types of subtitles produce comparable results. When multimodal cohesion is involved, a distinction needs to be drawn: when an expression is completely supported by the audiovisual mode, so much so that the images repeat the content of the utterance, both the L1 and L2 subtitles strongly support the viewer’s understanding; when, instead, cohesion is only partial, understanding hinges more on the linguistic utterance, so the students who could access the L1 subtitles had better results.

An equally challenging topic, the translation of irony, is the object of Mariagrazia De Meo’s contribution. Translating irony is always problematic in that it is a phenomenon that aims at producing an emotive response in the audience, who need to be actively involved. De Meo chooses an inductive and descriptive approach to investigate the translation of verbal irony in the English subtitles of the detective TV series Il Commissario Montalbano, a successful series based on the eponymous novels by the Sicilian author Andrea Camilleri. Much of the success of both the novels and the TV series rests on the protagonist, Montalbano, a fractious Sicilian detective who works in the police force of Vigata, an imaginary Sicilian town, lives a single life, is a gourmet, and is a long-distance swimmer with a wonderful ocean-front house. He is often confronted with puzzling crimes that require his wits, stamina and a special ability to deal with bureaucratic and political pressures that in most cases push him to close the case quickly, without stepping on the wrong toes. Salvo Montalbano’s speech, in line with Camilleri’s own literary jargon, is a mixture of standard Italian and Sicilian dialect, heavily imbued with ironic remarks and tones. After reviewing the main approaches to irony in translation studies, De Meo opts for a dynamic and pragmatic approach and underlines the double role of the translator as interpreter and ironist, whose main task is to recodify and re-contextualise the ironic triggers for the target audience. The analysis highlights the fact that both metafunctional and structural ironic triggers are generally translated, including in the case of echoic utterances. Repetitions, which are usually avoided in subtitles as redundant elements, are instead retained even when they are not the main ironic triggers, but
simple ironic cues in the unfolding of the dialogue. The subtitler takes great care to guarantee maximum effect with minimum effort, relying in part on the support of the paralinguistic and prosodic features of the text and on the audience’s familiarity with Montalbano’s patterns of behaviour (e.g. the targets of his ironic remarks are more or less always the same characters).

Specific types of subtitles and some related problems are investigated in the contributions by Lepre, Tortoriello and Maree. Lepre takes a close look at fansubs, an increasingly popular and globally spread type of amateur translation, and aims to observe whether the strategy adopted by fansubbers in translating cultural references differs from that of professionals; Tortoriello is concerned with DVD subtitles for operas and operettas, a genre which was in the past destined for connoisseurs but which, thanks to better availability, now reaches the more general public; and Maree reports on the use of telops in Japanese television.

Adriana Tortoriello aims at analysing the distinguishing features of a rather new product, i.e. operas that are filmed during a live performance in order to produce a DVD which is later subtitled, intra- and/or interlinguistically. This novel type of opera subtitling is distinct both from live opera surtitling and from more conventional types of subtitling for DVDs. The starting point of the contribution is a careful account of the nature of opera subtitles for the DVD: whereas opera surtitles are not available either before or after the opera itself is staged, DVD subtitles can rely on better viewing conditions and can be read several times, as the DVD can be stopped and replayed at the viewer’s liking. Furthermore, they seem to be more similar to fansubs than to either traditional subtitles or opera surtitles, given their length, the use of repetition and the tendency to reproduce the features of the source text, especially the rhyming pattern of the lyrics. Tortoriello also explores another sub-genre, the operetta, or light opera, which developed around the middle of the 19th century, with different themes and different language features. Operettas can be produced in two alternative ways, i.e. in the original language with surtitles, or by having the libretto translated and adapted and then producing the opera in the language of the audience, as happens with the English National Opera; DVD subtitles could be added as a third, compromise solution. They provide a written and thus more permanent text, but at the same time they allow the audience to follow the opera in the language in which it was written. In the case of operettas, whose content is light and includes humour and satire, longer subtitles are more helpful in making sure the audience captures the gist of the message.
The author offers examples of how an operetta, The Mikado (1885, Sullivan and Gilbert), was both intra- and interlingually subtitled for a DVD release based on its Australian staging. The various subtitlers (Tortoriello was responsible for the Italian subtitles) worked together to give the final product a certain consistency despite language differences. The analysis of various examples points to some characterising elements: first of all, DVD subtitles for the opera/operetta need to take into account the so-called “musical constraint”, entailing issues of rhyming and rhythm requirements; secondly, they are produced for a different kind of audience, with better viewing conditions and higher reading speed, and aim to render the text more closely; but, perhaps more importantly, they need to consider the presence of the theatre audience in the filmed text, which leaves traces in terms of reactions both to the performance and to the surtitles that were projected (to which the subtitler may also have no access). As for the content, the nature of the examples analysed proves the necessity of carefully considering the farcical nature of the plot, which needs to be understood, and the tendency to update content whenever possible but to leave out very specific cultural (Australian) references that, given the Japanese setting of the operetta, would probably puzzle the audience. Also of relevance are the metareferences to the act of translation, which testify to the rather “subversive” nature of these subtitles (Nornes 1999).

Still devoted to a form of subtitling that was, and still is, considered abusive and subversive is the contribution by Ornella Lepre, who deals with fansubbing. Fansubbing developed out of a practice that fans adopted to translate and spread Japanese anime, but it was soon extended to all genres (with a special preference for TV series, which are the object of investigation of this article) and soon acquired gigantic proportions. The growth of the phenomenon–Lepre argues–brought about a decisive improvement in the quality of translations, as communities of amateur subtitlers became more organised and devoted more attention to quality control. In particular, Lepre investigates, both quantitatively and qualitatively, how cultural references–adopting Pedersen’s (2005, 2) definition of “extralinguistic culture-bound reference” (ECR)–are rendered in both fansubs and official translations of two non-consecutive seasons (for a total of 43 episodes) of the US comedy series 30 Rock, a series which is particularly rich in cultural references (with an average of 45 per episode). Cultural references were first identified, then classified into eleven categories, drawing on previously proposed taxonomies; their translations were also classified as either source language- or target language-oriented and examined for frequency by using statistical instruments, e.g.
“regression models that estimate how one or more independent variables affect a dependent variable, by estimating the parameters that define their relationship”.
For example, the category of measure has the largest positive coefficient, which means that, quite expectedly, ECRs referring to this topic are usually adapted for the target language. Conversely, categories such as people, fiction and geo display the largest negative coefficients, which means that they are less likely to be adapted than other categories. In order to compare the translation strategies used by fansubbers with those adopted by professional translators, Lepre used the translation of the same ECRs in the dubbed version of the series, as it has not been subtitled in Italian. Results for dubbing are on the whole in line with those for fansubs, yet there is variation over time, from season 1 to season 3, in the direction of an evident decrease in target language procedures. Although more data are needed to corroborate the emerging trend, the research has highlighted that, as SL-oriented procedures appeared in fansubs first and only later in official translations, fansubs are very likely to provide guidelines and indicate preferable trends in audiovisual translation.

Another recently emerged form of subtitles, i.e. telops, is the object of Claire Maree’s paper. As the author makes clear, the heavy use of text and graphics in a variety of colours, sizes and fonts, which may slide into view diagonally or even pop up from a celebrity’s talking mouth, is a major characteristic of Japanese variety programming. Telops are thus a form of inscription of text onto the screen, whose name derives from the television opaque projector. Although applied linguistics research has shown that the text on screen common to Japanese TV serves the purpose of summarising, framing, highlighting or embellishing the action taking place, telops seem to have taken on the more specific functions of the tabloidisation of news broadcasts and of highlighting comic content in entertainment broadcasting, by selecting only some of the talk enunciated on screen. They tend in fact to capture and maintain the viewers’ attention by engaging their gaze. In the course of time they have expanded from sporadic text in white lettering with a black edge to more varied forms that exploit a vast array of animations, orthographic variants, colours, symbols, special effects and graphics, including pictures and emoticons. By analysing a set of examples from different genres, Maree shows how text-on-screen successfully constructs salient “media personas anchored in identifiable social identities” and gets rid of those which are not in line with the favoured ideological scheme. The exaggerated use of
non-standard orthography and non-normative speech forms is instrumental in depicting language ideologies.

The contribution by Wang also offers an interesting and up-to-date account of the fortunes of subtitling in China, a country where both the production of and the interest in audiovisual translation, and in subtitling in particular, are rocketing to meet the ever-increasing demand of the audience to access the latest productions of world cinema. Domestic productions have also increased dramatically, at the expense of Hollywood films, mainly because of the strict censorship the government imposes. Yet this strict control has largely favoured audiovisual piracy, both because it eschews the lengthy process of approval and because the products are much cheaper. The author reports that one of the most popular ways of accessing audiovisual material is downloading it at Internet cafés, which may or may not be legally authorised. Fansubbing was born in 2003, but it is by now a well-established phenomenon that often turns into a real business: YYeTs, the largest Chinese fansubbing group, is turning from a fansubbing group into a sanctioned provider of translation services. Before the growth of fansubbing, however, dubbing was also used, but contemporary viewers often find it at odds with the visuals and prefer subtitling, despite the many difficulties caused by linguistic fragmentation and the still high rate of illiteracy among the masses, especially in rural areas. Other complications arise from the Chinese writing system, in which each character is semantically independent but can also combine with other characters to form words and sentences. In order to widen the availability and improve the quality of subtitling, the author advocates more scholarly attention to audiovisual translation from Chinese researchers and possible contacts with foreign academics to establish common terminology and procedural rules, especially for subtitling.

Three contributions in the volume deal specifically with different aspects and applications of intralingual subtitling, and one deals with audio subtitling, i.e. all forms of accessible translation. In her paper, Saveria Arma gives a detailed account of quite a new and very interesting type of real time subtitling for the deaf and hard of hearing, i.e. conference respeaking generated by speech-to-text (STT) technology. STT refers to the translation of spoken words into text. The spoken words are those uttered by a conference participant or by a professional operator, the respeaker, who listens to the source text and re-narrates it, condensing and rephrasing it. Such vocal input is then transformed into written words which take the form of subtitles and make the event accessible to a specific audience (persons with hearing
impairments) which is also a very heterogeneous one in terms of background knowledge, education, reading behaviour, and language competence. The paper illustrates the technical, linguistic and professional aspects of respeaking-based live subtitling for the deaf and the hard of hearing in a conference setting–although this technique can be used in a wide array of different contexts–and it shows that the job of the respeaker shares a number of aspects with that of the interpreter and of the subtitler. However, it is quite particular and much more based on the ability to know and communicate with the target audience without patronising them.

Tia Muller offers a French perspective on subtitling for the deaf and hard of hearing. In particular, the author thoroughly describes a document (Charte relative à la qualité du sous-titrage à destination des personnes sourdes ou malentendantes, that is, in English, the Charter relating to the quality of subtitles addressed to the deaf or hard of hearing) and all the 16 rules for good subtitling it includes, and she evaluates them “in relation to SDH addressees’ opinions captured in a 2010 survey, other European guidelines, and empirical studies, in order to assess the validity of the components it sets out for all the stakeholders involved”.
Each of the Charte’s 16 rules is analysed resorting to Arnáiz-Uzquiza’s (2012) typology. On this basis, the author considers pragmatic parameters (which include the addressees’ characteristics, the SDH production’s aim, the production date, and its authoring), technical parameters, aesthetic-technical parameters (e.g. those which pertain to the visual aspects of subtitling and are a direct consequence of the production process and of the configuration of the finished product, such as reading speed and delay in live subtitling), purely aesthetic parameters (which refer to the purely visual aspects of subtitles, e.g. number of lines, subtitle placement, box usage, shot changes, font style and size, number of characters per line, subtitle justification, line spacing, synchrony with the image), linguistic parameters (editing and segmentation) and extralinguistic parameters (aspects that represent non-verbal information included in the audiovisual text, e.g. sound effects, music, paralinguistic elements and character identification). After this thorough analysis, which places France on the European audiovisual map, the author concludes that much work is still needed to improve the real effectiveness of the rules that govern French SDH, even after their implementation in 2011.

Szarkowska, I. Krejtz, Łogińska, Dutka, and K. Krejtz present a preliminary eye tracking study on the influence of shot changes (i.e. cuts) on the reading of subtitles. The aim of the study is to demonstrate empirically whether the presence of a shot change really forces viewers to re-read the
subtitles–as maintained in the literature–or not. The study was conducted in Poland with deaf, hard of hearing and hearing participants watching two-line subtitles for the deaf and hard of hearing (SDH). Participants were exposed to short excerpts from various audiovisual materials: a feature film (Love Actually, 2003, Richard Curtis) and two documentaries (Super Size Me, 2004, Morgan Spurlock, and Roman Polanski: Wanted and Desired, 2008, Marina Zenovich). The results, although preliminary, are counterintuitive and show that, overall, reading subtitles is effective even in challenging situations, and that the type of programme being watched and its structural complexity can influence the subtitle reading behaviour of viewers.

The paper by Bączkowska shows instead some applications of interlingual subtitling in language teaching. The work illustrates some of the results of the Learner Corpus of Subtitles (LeCoS) project developed by the author and some collaborators at Kazimierz Wielki University, Bydgoszcz, Poland, whose participants are students of English Philology (MA level) and of Modern Languages with an English major and a Russian minor (BA level). The project aims to identify the preliminary subtitling competence of modern language students, to prepare materials to be used to teach subtitling, and to develop a complete subtitling module for translation students. As the name of the project reveals, one of the steps is the compilation of a corpus of interlingual subtitles produced by Polish students of Modern Languages, which is structured as an expandable database. The contribution focuses, however, on one corpus component, namely a stand-alone subcorpus (Corpus B), which is qualitatively analysed. Discussion of the data provides ample evidence of overreliance on the source text, which often results in literal translations and calques, and of excessive lexical and syntactic precision. Students very often take little notice of the typical diamesic shifts required in subtitles and reproduce many features of orality, such as interjections, expletives and backchannel cues. Apart from interesting hints as to the areas where students need more guidance and training, the paper also shows that corpus data can provide valuable insight into the translation difficulties and pitfalls that subtitling trainees are likely to meet, and could thus prove to be beneficial for the design of subtitling courses.
References

d’Ydewalle, Géry, and Pavakanun, Ubolwanna. 1995. “Acquisition of a Second/Foreign Language by Viewing a Television Program.” In Psychology of Media in Europe: The State of the Art, Perspectives for the Future, edited by Peter Winterhoff-Spur, 51–64. Opladen, Germany: Westdeutscher Verlag GmbH.
d’Ydewalle, Géry, and Van de Poel, Marijke. 1999. “Incidental Foreign Language Acquisition by Children Watching Subtitled Television Programs.” Journal of Psycholinguistic Research, 28: 227–244.
Ivarsson, Jan. 2004. “A Short Technical History of Subtitles in Europe.” Retrieved from www.transedit.se
Ivarsson, Jan, and Carroll, Mary. 1998. Subtitling. Simrishamn: TransEdit.
Kothari, Brij. 1998. “Film Songs as Continuing Education: Same Language Subtitling for Literacy.” Economic and Political Weekly, 33: 2507–2510.
—. 2000. “Same Language Subtitling on Indian Television: Harnessing the Power of Popular Culture for Literacy.” In Redeveloping Communication for Social Change: Theory, Practice and Power, edited by Karin Wilkins, 135–146. New York: Rowman and Littlefield.
Kothari, Brij, and Takeda, Joe. 2000. “Same Language Subtitling for Literacy: Small Change for Colossal Gains.” In Information and Communication Technology in Development, edited by Subhash Bhatnagar and Robert Schware, 176–186. New Delhi: Sage.
Kuppens, An. 2010. “Incidental Foreign Language Acquisition from Media Exposure.” Learning, Media and Technology, 35: 65–86.
Nornes, Abé Mark. 2007. Cinema Babel: Translating Global Cinema. Minneapolis, MN: University of Minnesota Press.
Pedersen, Jan. 2005. “How is Culture Rendered in Subtitles?” In MuTra: Challenges of Multidimensional Translation, edited by Heidrun Gerzymisch-Arbogast and Sandra Nauert. Saarbrücken, 2–6 May 2005. Accessed 25 January 2013. www.euroconferences.info/proceedings/2005_Proceedings/2005_Pedersen_Jan.pdf
CHAPTER TWO

AUDIOVISUAL TRANSLATION AND SOCIOLINGUISTIC ADEQUACY

GIAN LUIGI DE ROSA1
1. Introduction

The language of TV fiction is created and organized (Nencioni 1983) through a process of reconstruction and representation of language in context. The language of TV fiction may be intended both as a reproduction of face-to-face dialogue (Bazzanella 1994, 2002) and as a variety transmitted with Multiple Senders – Heterogeneous Receivers (Pavesi 2005; Rossi 1999, 2002). The dialogues of contemporary Brazilian TV series show a number of features of Brazilian Portuguese (hence BP) which reveal, as can be seen from the structure of the BP language (Fig. 1-1), an advanced process of re-standardization (Bagno 2005, 2012, De Rosa 2011, 2012, Lucchesi 2004, Perini 2007, 2010). However, despite the presence of recurrent features, the language used in TV fiction–such as sit-coms, serials (including costume serials) and soap operas–proves to be less homogeneous and more varied than film language. This is due to the fact that each fictional subgenre has different communicative aims and textual and dialogic features. In fact, texts characterized by formal to highly formal registers with a tendency towards standard and/or neo-standard language (mainly in TV fiction) can be found together with texts which show (sometimes excessive) tendencies towards sub-standard varieties and features. On the basis of the verisimilitude agreement between sender and receiver, the process of writing dialogues should not be subject to external constraints with reference to the text or to the context, nor to external pressures. If this is only partially true
1 Università del Salento, Italy. Email address: [email protected]
Fig. 1-1 Architecture of Brazilian Portuguese
for the creation of film language, it is even less true for the language of TV fiction, where the excessively concentrated use of sub-standard (stigmatized or non-stigmatized) features is possible because of the hyper-connotation and hyper-characterization of fictional characters. Both TV series chosen for analysis use elements and oral varieties which tend towards neo-standard BP, but they also show a strongly connoted spoken language with elements from sub-standard varieties belonging to popular varieties of BP (hence PBP), which are used in less monitored contexts and situations and in order to characterize characters.

In the series Mandrake, the scriptwriters José Henrique Fonseca, Tony Bellotto and Felipe Braga have freely adapted the character created by the novelist Rubem Fonseca. The series focuses on Mandrake, a criminal lawyer from Rio de Janeiro, and most of the series’ characters speak a sub-standard variety which makes the original linguistic and stylistic choices of the writer even more characterized. The second series chosen for analysis, FDP, tells the vicissitudes of a soccer referee. The original screenplay was written by Adriano Civita, Francesco Civita and Giuliano Cedroni, and the series is directed by Katia Lund (among others), who is also co-director of Cidade de Deus (2002). The language used is a neo-standard variety of BP and the use of PBP varieties is only sporadic.

Despite the presence of some limits due to the strong characterization of characters, the language of Brazilian TV fiction makes use of neo-standard, non-standard and sub-standard varieties and elements with the aim of showing contemporary diamesic variation, which is, consequently, recognized by the audience as the reproduction of language in context. The features of the spoken language used can be distinguished according to those elements which show the existence of a grammar of spoken language that is different from the grammar of written language.2 However, text types, genres and the features of the target audience (which in the case of networks such as HBO–where the two TV series are broadcast–is large and heterogeneous) represent two further extra-linguistic variables which are to be added to the diamesic variation
2 Textual and interactional features of spoken language are frequent hesitations, interruptions, false starts, editing, repetitions, paraphrases, overlaps, etc. Syntactic features of spoken language are short sentences, juxtapositions, non-clausal units, ellipsis; discursive markers are: então, ora at the beginning of a turn or of an utterance; textual or pragmatic connectives such as isso, aí; attention getters such as olha. In the lexical domain, there is a wide use of slang and present-day slang (calão, gíria comum or gíria de grupo), and also of obscene and offensive words, which are frequent elements of colloquial language.
of language and account for the presence of highly marked elements from sub-standard language, as in the case of the detective fiction, Mandrake, or of the “mixed genre” TV series set in the world of soccer such as FDP.
2. Audiovisual Translation and Sociolinguistic Adequacy

Translation is usually defined in terms of transposition from a (variety of) standard language to another (variety of) standard language, and most of the problems which are the object of discussion among translation theorists presuppose this default situation (Berruto 2010, 899). However, if this default situation may be considered arguable in the domain of literary translation, because literary language is more and more often characterized by sociolinguistically marked elements and varieties, it is even more questionable in the domain of audiovisual translation (Díaz Cintas 2009, Díaz Cintas and Anderman 2009, Pavesi 2005, Perego 2005, Perego and Taylor 2012). In this domain, the transmitted language varieties, such as the spoken language used in Brazilian audiovisual products, show a high degree of diastratic and diaphasic markedness, which is even higher in TV products which tend to characterize and connote some of the characters sociolinguistically by factitiously exaggerating their linguistic and expressive features. In fact, in many Brazilian TV series (in this case, mainly in the series Mandrake) the tendency towards a factitious recreation of spontaneous spoken language is particularly visible in those characters who speak the PBP variety. The result is an excessive use of those features which usually have a lower frequency in spoken language.

From a translational point of view, problems related to the sociolinguistic adequacy of translation are due to the fact that in Brazilian TV fiction products many spoken varieties of BP are used and that sub-standard elements and/or diastratic and diaphasic varieties are used alternately with neo-standard3 language.

“Guardando alla traduzione dalla prospettiva sociolinguistica, il problema centrale è appunto quello dei testi sociolinguisticamente marcati per la compresenza di più varietà di lingua, ciascuna delle quali per definizione è portatrice di significati sociali intrinseci alla comunità linguistica della
3 The standard variety of BP is highly codified and acquired through formal teaching; it is used only in formal contexts and for some written genres (mainly academic writing). In practice, the diasystem of the BP has an overt prestige variety (standard variety), a covert prestige variety (neo-standard variety), and a sub-system of popular stigmatized varieties.
lingua di partenza. Si tratta quindi della traduzione del significato sociale associato agli elementi (forme, parole, costrutti) di una lingua che lo veicolano.” (Berruto, 2010, 900).
On the basis of what Berruto says, by sociolinguistic adequacy is meant the rendering of “the social meaning of linguistic signs” (2010, 900). The rendering of meaning implies the use of a number of different strategies, because sociolinguistic equivalence can hardly be achieved due to the differences in the structures of source and target languages (cf. Berruto 1987, 1995, 2006). Indeed, a sociolinguistically equivalent rendering of marked elements would imply the identification of a similarly marked translation equivalent along the diaphasic or diastratic continuum or in the dimension of diatopic variation. This process, which should tend towards the naturalization of the language of the sociolinguistically marked varieties–and not towards their neutralization–may lead, on the other hand, to the total neutralization of the degree and type of markedness of the translation equivalents, which will thus appear standardized in the target text. This latter solution is the one most frequently used in the Italian dubbing of TV series and results in a target language which may be seen as a type of dubbese tending towards a neo-standard or a monitored informal spoken language.

Besides the achievement of sociolinguistic equivalence and the neutralization of markedness, other compensatory translation strategies can be considered and applied in the domain of audiovisual translation with reference to sociolinguistic adequacy, such as those proposed by Berruto (2010, 902):

“[S]i può (bi) rendere un elemento marcato nella lingua di partenza per una certa dimensione di variazione con un elemento marcato nella lingua d’arrivo per un’altra dimensione di variazione, o, (bii) rendere un elemento marcato a un certo livello di analisi con un elemento marcato per un altro livello di analisi, o ancora, (biii), rendere l’elemento marcato nella forma neutra standard e tradurre in altro punto contiguo del testo un elemento non marcato nella lingua di partenza con un elemento marcato nella lingua d’arrivo; con eventuale somma o combinazione di (bi), (bii) e (biii): resa di un elemento di un certo livello di analisi marcato per una dimensione mediante un elemento di un altro livello di analisi marcato per un’altra dimensione di variazione e/o in un altro punto del testo.”
The translation strategies discussed by Berruto aim to create and maintain a variational opposition between the marked element and the rest of the text.4
2.1. Mandrake and FDP

The two TV series analyzed in this paper show a partially specialized language. In the Mandrake series, where the main character is a lawyer, the language of law is frequently used. In the FDP series the main character is a soccer referee and the use of the language of soccer is limited. Nevertheless, in both cases explanations and sometimes trivializing reformulations are used as a narrative strategy and in order to make the two specialized languages less opaque. For this reason, the analysis will focus particularly–both in the original and in the subtitled and dubbed versions–on a series of scenes and communicative situations from Mandrake and FDP where different varieties of language and situations of multilingualism with code-switching and code-mixing (BP/Spanish) phenomena are present.

In the first episode of Mandrake, titled A cidade não é aquilo que se vê do Pão de Açúcar (a TV transposition of O caso de F.A. by Rubem Fonseca5), different varieties of BP with a certain degree of diatopic, diastratic and diaphasic markedness are present together with code-switching and code-mixing phenomena, which are neutralized in the dubbed version. However, before considering the translation strategies used by the translators in the adaptation and dubbing of the first episode of Mandrake, a series of closer analyses is needed. Scenes should first be analyzed through a macroanalysis of TV fiction conversation, through an analysis of the historical and geographical context (diatopic variation), and through an analysis of a series of extralinguistic factors and their possible influence on characters, such as their level of education, job, social status, age and sex (diastratic variation). Scenes also need to be analyzed through a microanalysis of TV fiction conversation in order to identify details related to the situation where the interaction is taking place (socio-situational
4 The strategies indicated by Berruto can be related to Nida’s concept of dynamic equivalence (1964), even though they are oriented towards sociolinguistic adequacy.
5 O caso de F.A. is included in the collection of tales titled Lúcia McCartney (1967).
variation)6, which involves those pragmatic elements which occur in spoken language and generate contrasts such as proximity/distantiation, clarity/opaqueness, power/submission, together with other conversational strategies used by speakers during the dialogue (Cf. Orletti 2008, Preti 2004, 2008).

In the first segment analyzed, two lawyers are among the participants of the dialogic interaction, Mandrake and Wrexler, together with Jorge Fonseca, who is the promoter of the girls of the Sun Shine, the place where Mandrake goes to meet Pâmela, the girl one of his clients is in love with:

Mandrake – Original version
Mandrake: Como é que você me achou aqui?
Jorge Fonseca: O problema não é como é que te achei aqui, é o que é que eu tô fazen(d)o aqui!
Mandrake: É, o que é que cê tá fazendo aqui?
Jorge Fonseca: Cê tá a fim da Pâmela, não tá?
Mandrake: Tô!
Jorge Fonseca: Yo quiero la plata!!!
Mandrake: La plata é com esse aqui ó!
Wrexler: Aqui não tem plata nenhuma…mas o que é isso aí?
The first thing to be noticed is that, despite some differences in terms of the status and role of the participants, the interaction can be defined as symmetrical. The language spoken by Mandrake is gradually adapted to the language spoken by Jorginho through a series of conversational strategies, such as the informal pronoun of address VOCÊ being replaced by its aphaeretic form CÊ (which can be used only as a subject). The language spoken by Jorginho reveals his low-intermediate educational level, which reflects the diasystem of urban popular varieties. Furthermore, the presence of some slang words should be interpreted with reference to Jorginho’s job and the place where he works, and may be considered as a sort of sub-code or jargon. From a morphophonetic point of view, besides the use of the aphaeretic form CÊ, it is interesting to notice the non-articulation of the voiced dental occlusive /d/, which is agglutinated by the nasal consonant in the gerund forms (fazen(d)o vs. fazendo). Conversely, from a morphosyntactic point of view, the alternation between TU/(VO)CÊ in the
6 Relevant factors of the diaphasic variation are the level of formality or informality of the communicative situation and the speaker’s degree of attention and control in the linguistic production.
language spoken by Jorginho, the use of the clitic forms for the direct second person TU (“O problema não é como é que TE achei aqui…”) associated with (VO)CÊ may have the stylistic-expressive function of connoting the language spoken by Jorginho as sub-standard. However, the interpretation of this linguistic trait has radically changed over the last thirty years, thus contributing to the restructuring of the forms of the pronominal paradigm with the implementation of the grammaticalized pronominal forms VOCÊ/VOCÊS and A GENTE. This restructuring has brought about a number of grammatical implications at different levels. In fact, neo-standard BP shows the alternation of the two indirect pronouns of address TU/VOCÊ, but the pronoun TU with the verb in the third person is still considered a marked element. The alternation may occur both for the syntactic form of subject and object and for the other syntactic functions, with the unstressed forms and with those forms introduced by a preposition (for example: Eu queria levar VOCÊ no show/Eu queria TE levar no show; Isso é para TI!/Isso é para VOCÊ). The way and the frequency with which this alternation occurs in the language spoken by Jorginho, together with some lexical elements from the semantic field of sex (Preti 2010), are obviously to be considered as part of the process of characterization and connotation of the character through language. This process becomes more marked if the alternation is compared with the constant use of the pronominal form VOCÊ in the language spoken by Mandrake and Wrexler, and when, later in the episode, the alternation comes to include the subject pronoun: “Agora se VOCÊ quiser ficar 3 horas com ela dá pra fazer por 2.5. Mas o melhor mesmo é TU dormir com a mulher, entendeu? TU dorme com ela, 4 contos, passa a noite inteira CONTIGO, TE ama, TE dá beijo na boca, entendeu? Olha nos TEUS olhos e fala meu amor”. However, it needs to be said that although this alternation is considered marked, its markedness is now interpreted as a diaphasic variation and as a feature of informal BP. The situation radically changes in the Italian dubbed version, as is visible from the dialogue reported below:

Mandrake – Italian dubbed version
Mandrake: Come ha fatto a trovarmi?
Jorge Fonseca: La domanda non è come ti ho trovato, ma perché sono venuto qui!
Mandrake: Perché è venuto qui?
Jorge Fonseca: Stai cercando Pamela, giusto?
Mandrake: Giusto!
Jorge Fonseca: E io voglio la grana!!!
Mandrake: La grana ce l’ha lui!
Wrexler: Qui non c’è nessuna grana…e questo cos’è?
In the translated version, the translator-adaptor adopts a compensatory translation strategy by replacing the diatopic and diastratic markedness with diaphasic markedness, characterized by the use of a colloquial lexis and of present-day slang (such as “grana” referring to money) and by the pronoun of address. Indeed, the use of the Spanish language in the original version has the function of stressing the characterization and the connotation of Jorginho, whose language can be considered as slang (gíria de grupo) because of the criminal underworld he lives in. However, the language he speaks mainly shows colloquial features and features of present-day slang (gíria comum)7 which are, therefore, accessible to the Brazilian audience of the TV series. Furthermore, in the Italian version the interaction becomes asymmetric, because the lawyer keeps addressing Jorginho with the formal pronoun LEI in order to create distance from him, while Jorginho addresses the lawyer with the informal TU, thus creating informality/intimacy (they had previously met at the Sun Shine). In the preceding scene, where Jorginho arrives in Mandrake’s office and asks his secretary to tell the lawyer he is there, the translation problems are clearly related to the symmetric/asymmetric interaction and to the pronouns of address, as visible below:

Mandrake – original version
Jorge Fonseca: Ehm… desculpa a invasão é que eu tô queren(d)o levar um particular com você. Cê teve na Sun shine, tô queren(d)o te mostrá umas paradas!
Mandrake: Pode deixar, Dona Marisa, que eu cuido aqui do…
Jorge Fonseca: Jorge Fonseca, mas pode me chamar de Jorginho!
7
By gírias we mean slang and present-day slang varieties. We can distinguish between the gíria de grupo, which is accessible only to restricted communities and includes the historic slang, the transitory youth slang and the transitory military slang, and the gíria comum, which includes those slang and present-day slang elements which are part of the spoken language of most Brazilians. On a syntactic level, the gíria is quite similar to Portuguese; it is at the semantic level that changes occur and that resemanticization takes place. The gíria provides a sense of group belonging (for example the surfer language) and is accessible only to restricted communities. In this perspective, the gíria acts as a barrier between the dominant culture (white and rich) and the dominated culture (black, half-caste and South-American Indian) and is, consequently, widely used in the Brazilian suburbs, particularly in the favelas, where it is interpreted as a mechanism for tribal cohesion and for group self-defence and as an interactive code.
In the original version, the degree of diatopic and diastratic markedness is clearly visible from Jorginho’s first line. From a morphophonetic point of view, the markedness is realised through the aphaeretic forms of the pronoun of address (cê) and of the verb estar (tô), the apocope of the voiced dental occlusive /d/ in the gerund8 and the apocope of the final vibrant in the infinitive (mostrá). From a morphosyntactic point of view, there are no irregular concordances within noun phrases or between subject and verb, apart from the alternation between TU/VOCÊ. Jorginho’s attempt to reduce distance is clearly visible when Mandrake tries to make the interaction more formal, creating distance from his interlocutor through the use of a formal allocutive form, as in “que eu cuido aqui do…”, which is usually followed by O SENHOR. Conversely, Jorginho adopts an approaching strategy by moving the communicative exchange towards an informal level: “Jorge Fonseca, mas pode me chamar de Jorginho!”. In the Italian version, the process of translation and adaptation results in a total neutralization of markedness on all its dimensions.

Mandrake – Italian dubbed version
Jorge Fonseca: Ehm…, mi scusi se la disturbo, ma qui possiamo parlare in privato. Ieri al Sun shine volevo farle vedere una cosa.
Mandrake: Venga in ufficio, signor…
Jorge Fonseca: Jorge Fonseca, ma tutti mi chiamano Jorginho!
Furthermore, the translator-adaptor does not adopt any compensatory strategy in the translation of Jorginho’s lines and even makes explicit the address formula, which is omitted in the original: “Venga in ufficio, signor…”. This is an example of some of the choices the translator adopts in the dialogues which follow, where both the social status and the role of the participants are stressed, and Mandrake’s attempts to keep his distance from Jorginho are made clear through a certain level of formality in his wordings. In the dubbed version, the last two lines are of utmost importance; Jorginho starts off by using the courtesy pronoun LEI when addressing Mandrake: “mi scusi se la disturbo, ma qui possiamo parlare in privato”, but in the last line he adopts his approaching strategy: “Jorge Fonseca, ma tutti mi chiamano Jorginho!”.
8
“O gerúndio, no dialeto de São Paulo, perde o d nas desinências, ando......ano, endo......eno, indo......ino, ondo......ono: andano, veno, caíno, pôno. Este fato é atribuído por uns à influência africana, enquanto outros autores o aproximam do tupi” (Mendonça 2012, 86).
Conversely, Mandrake tries to keep a certain interpersonal distance and to protect his positive face. It may be assumed that in the Italian adaptation the translator compensates for the diastratic and diatopic markedness of Jorginho’s lines with an asymmetric interaction, on the basis of the audience’s expectations and of discursive routines in institutional communication. In terms of politeness, Jorginho’s repeated approaching attempts may be interpreted as face threatening acts (Cf. Brown and Gilman 1960; Brown and Levinson 1987; Goffman 1972; Holmes 1995). The second series chosen for analysis, FDP, shows some slightly different features in terms of sociolinguistic markedness along the diaphasic and diastratic variation axes and along the diatopic variation dimension. This may be due to the change of setting from Rio de Janeiro to São Paulo. However, as described above, this series is a mixed-genre series and the story focuses not only on the professional life of the main character (who is a soccer referee and a regular guest on soccer TV programmes) but also on his private and daily life: from his family relationships (mother, son and ex-wife) to his friends and acquaintances. From a linguistic point of view, in FDP as in Mandrake, there are different varieties of BP with a certain degree of diatopic, diastratic and diaphasic markedness, and also occurrences of code-mixing, particularly in the language spoken by Guzmán, a character from Argentina who speaks a sort of portunhol. Due to lack of space and time, the analysis will focus only on some problems related to the translation of diastratic and diaphasic markedness in a scene of the series’ second episode.

FDP – Original version
Guzmán: Quer um copo de leite?
Juarez: Quem é você?
Guzmán: Guzmán!
Juarez: Que que cê tá fazendo aqui?
Guzmán: Tchê… tomando um copo de leite.
Juarez: Não aqui na cozinha. Aqui na minha casa?
Guzmán: Ah... dormindo com a tua mãe.
Juarez: Como é que é?
Guzmán: Ehh... ela não te contou.
Guzmán: Dá para abaixar a tesoura? Está me dando um nervoso.
Juarez: Quer dizer que você é o...
Guzmán: Namorado.
Guzmán: Amante.
Guzmán: Parceiro sexual.
Guzmán: Bartolomeo Guzmán, prazer.
The multilingual and mixed-language reality is so evident that something similar in the dubbed version9 could be obtained by replacing the Portuguese of the source text with the language of the target audience, while preserving Guzmán’s typically Spanish suprasegmental features. In practice, the translation should aim to find a sociolinguistic equivalent of this diatopic markedness characterized by code-mixing, in order to give the audience the impression of listening to a non-native speaker who does not fully master the language, because his mother tongue is neither the source language (in this case Portuguese) nor the target language (in this case Italian). As mentioned above, the language spoken by Guzmán clearly shows a number of interferences with Spanish. Besides suprasegmental features generally related to his typical Argentinian accent, rhythm and intonation, there are some interferences and/or calques, such as the articulation of the voiceless dental occlusive, which coincides with the Spanish one: even though BP palatalizes the voiceless dental occlusive when it is followed by the palatal vowel /i/, realizing /t/+/i/ as a voiceless palatal or post-alveolar affricate, in Guzmán’s pronunciation this palatalization does not take place, the final vowel is not raised to [i], and the initial diphthong is monophthongized. Further proof of his nationality is the use of the typical Rio de la Plata expression tchê, which acts as a discursive marker and is used by the character with the metatextual function of focusing, in order to give emphasis to his action: “Tchê… tomando um copo de leite”. As anticipated above, the Hispano-Argentinian features of the language spoken by Guzmán may be re-used in the dubbed version, keeping the same degree and type of diatopic markedness. However, in the subtitled version the strategy of translating or compensating for the diatopic trait referring to the L3 may be considered redundant, because the subtitle is an indirect and “transparent” translation and appears simultaneously with the
9
Dubbing is a form of revoicing and consists in the substitution of the original audio track in the source language with a new audio track in the target language. Dubbing can be defined as: a) an isosemiotic interlinguistic translation from an oral code to another oral code; b) a translation which eliminates the dialogues in the source language; c) a translation which is based on the synchronization of the recording with the existing footage in order to give the audience the idea that the original performers are actually speaking the target language.
original audio track. In the subtitling process10, the source message, which is orally expressed, becomes a written message (with a transformation on the diamesic variation axis). Contents also partially change, because they may be reduced or adapted (undertranslation) or made explicit when they are disambiguated in a sort of overtranslation, by adopting a series of strategies that expand form and/or content.

FDP – Italian subtitled version
Guzmán: Vuoi un po’ di latte?
Juarez: Chi sei?
Guzmán: Guzmán!
Juarez: Che ci fai qui?
Guzmán: Bevo un po’ di latte.
Juarez: Non in cucina, in casa!
Guzmán: Vado a letto con tua madre.
Juarez: Che cosa?
Guzmán: Non ti ha detto niente.
Guzmán: Abbassa le forbici.
Guzmán: Mi mettono un'ansia.
Juarez: Saresti il...
Guzmán: Fidanzato.
Guzmán: Amante.
Guzmán: Partner sessuale.
Guzmán: Bartolomeo Guzmán, piacere.
As visible in the lines above, the subtitles cannot show the two dimensions of sociolinguistic markedness (both diatopic and diaphasic) which are present in the original version. Typical elements of colloquial BP, such as the aphaeretic form CÊ and the use of the interrogative “Quê que…?”, which both have a certain degree of diaphasic markedness, do not have an adequate translation equivalent in sociolinguistic terms. The translator-adaptor chooses to reproduce the diaphasic markedness by using a variety of colloquial, expressive Italian, as in the case of the semantic overtranslation of “dormindo com a tua mãe” with “vado a letto con tua madre”. Conversely, elements belonging to spoken language, such as the focusing marker tchê, lose their function in the subtitle. Furthermore, the scene reported above is also characterized by verbal and non-verbal humour (visual humour: Guzmán is completely naked).
10
Subtitling is a form of translation which implies a variation of communicative aims and contents (which are condensed and reduced) from the source text to the target text and which has as its main aim that of making the target text more comprehensible and accessible to the target culture.
For this reason, the translation needs to take into account the polysemiotic nature of the audiovisual text, which uses two channels to communicate a message and combines verbal and non-verbal semiotic elements. In conclusion, the subtitles reported above are the result of a dynamic equivalence (Nida 1964) and reproduce both the illocutionary and perlocutionary forces which are present in the lines of the dialogue, but they are also an example of undertranslation in terms of sociolinguistic adequacy.
3. Conclusion

In the light of the results of the analysis, it is clear that the presence of marked elements in the language of TV series is filtered and depends on economic and socio-cultural needs. The audiovisual translation of markedness also depends on external factors and on sociolinguistic and translation problems. As Berruto says (2010, 910), sociolinguistic and variety elements risk being, by definition, genuinely untranslatable because they reflect the strong relativity of sociolinguistic situations. However, it is also true that, very often, sociolinguistic adequacy cannot be obtained because the structures of the source and target languages have their own distinctive features (Bombi 2000, 152). In the specific case of Brazilian TV series, the difficulty in translating depends on the fact that the dimensions of diastratic and diaphasic variation often overlap and that the PBP is very often used to convey a higher level of markedness in TV series. In this perspective, translation strategies could not even aim at sociolinguistic equivalence, because popular varieties of Italian correspond to dialects or regional varieties of Italian and their use would, as a result, be diatopically marked, with a consequent unacceptable domestication of the text (apart from those cases where verbal humour needs to be communicated). This explains why the strategies which are mainly used are the neutralization of markedness, which leads to a standardization of the text, and the dislocation of markedness from one dimension of variation to another or from one level of analysis to another.
References Berruto, Gaetano. 2010. “Trasporre l’Intraducibile: il Sociolinguista e la Traduzione.” In Comparatistica e Intertestualità. Studi in Onore di Franco Marenco, Tomo II, edited by Giuseppe Sertoli, Carla Vaglio Marengo and Chiara Lombardi, 899-910. Alessandria: Edizioni dell’Orso.
Further Reading Agost, Rosa. 1999. “Traducción y Doblaje: Palabras, Voces, Imágenes.” Barcelona: Ariel. Bagno, Marcos. 2005. “Português ou Brasileiro?”. São Paulo: Parábola. —. 2012. “Gramática Pedagógica do Português Brasileiro.” São Paulo: Parábola. Bazzanella, Carla 1994. Le Facce del Parlare. Un Approccio Pragmatico all'Italiano Parlato. Firenze: La Nuova Italia. —. 2002. Sul Dialogo. Contesti e Forme di Interazione Verbale. Milano: Guerini. Berruto, Gaetano. 1987. Sociolinguistica dell’Italiano Contemporaneo. Roma: La Nuova Italia Scientifica. —. 1988. “Di Qualche Problema Sociolinguistico della Traduzione.” In Annali della Facoltà di Lettere dell'Università di Cagliari [Studi in memoria di A. Sanna], 8(45), 345-365. —. 1993. “Varietà Diamesiche, Diastratiche, Diafasiche.” In Introduzione all’Italiano Contemporaneo. La Variazione e gli Usi, edited by Alberto A. Sobrero, 37-92. Roma-Bari: Editori Laterza. —. 1995. Fondamenti di Sociolinguistica. Roma-Bari: Editori Laterza. Bombi, Raffaella. 2000. “Problemi Generali della Traduzione di Testi Plurilingui: il Caso del Pygmalion di George Bernard Shaw.” In Documenti Letterari del Plurilinguismo, edited by Vincenzo Orioles, 145-182. Roma: Il Calamo. Brown, Penelope and Levinson, Stephen. 1987. Politeness: Some Language Universals in Language Use. Cambridge: Cambridge University Press. Brown, Roger and Gilman, Albert. 1960. “The Pronouns of Power and Solidarity.” In Style in Language, edited by Sebeok Thomas Albert, 253-276. Cambridge: MIT Press. De Rosa, Gian Luigi. 2011. “Reflexos do Processo de Restandardização do PB no Falado Fílmico Brasileiro Contemporâneo.” In Línguas Pluricêntricas: Variação Linguística e Dimensões Sociocognitivas/
Pluricentric Languages: linguistic Variation and Sociognitive Dimensions, edited by Augusto S. Silva, Amadeu Torres and Miguel Gonçalves, 377-391. Braga: Aletheia, Publicações da Faculdade de Filosofia da Universidade Católica Portuguesa. De Rosa, Gian Luigi. 2012. Mondi Doppiati. Tradurre l'Audiovisivo dal Portoghese tra Variazione Linguistica e Problematiche Traduttive. Milano: Franco Angeli. Díaz Cintas, Jorge, ed. 2009. New Trends in Audiovisual Translation. Bristol: Multilingual Matters. Díaz Cintas, Jorge and Anderman, Gunilla, eds. 2009. Audiovisual Translation. Language Transfer on Screen. London: Palgrave Macmillan. Duro, Miguel. 2001. La Traducción para el Doblaje y la Subtitulación. Madrid: Cátedra. Giglioli, Pier Paolo and Fele, Giolo. 2000. Linguaggio e Contesto Sociale. Bologna: Il Mulino. Goffman, Erving. 1972. “On Face-Work: An Analysis of Ritual Elements in Social Interaction.” In Communication in Face-to-Face Interaction, edited by John Laver & Sandy Hutcheson, 179-196. Harmondsworth: Penguin. Holmes, Janet. 1995. Women, Men and Politeness. New York: Longman. Labov, William. 1972. Sociolinguistic Patterns. Philadelphia: University of Pennsylvania Press. Lucchesi, Dante. 2004. Sistema, Mudança e Linguagem. São Paulo: Parábola. Mendonça, Renato. 2012. A Influência Africana no Português do Brasil. Brasília: FUNAG. Nencioni, Giovanni. 1983 [1976]. “Parlato-Parlato, Parlato-Scritto, Parlato-Recitato.” In Di Scritto e di Parlato. Discorsi Linguistici. Bologna: Zanichelli. Nida, Eugene Albert. 1964. Toward a Science of Translating. Leiden: E. J. Brill. Orletti, Franca. 2008. La Conversazione Diseguale. Roma: Carocci. Pavesi, Maria. 2005. La Traduzione Filmica. Roma: Carocci. Perego, Elisa. 2005. La Traduzione Audiovisiva. Roma: Carocci. Perego, Elisa and Taylor, Christopher. 2012. Tradurre l’Audiovisivo. Roma: Carocci. Perini, Mário Alberto. 2007. Gramática Descritiva do Português. São Paulo: Editora Ática. —. 2010. Gramática do Português Brasileiro. São Paulo: Parábola. Preti, Dino. 2004. Estudos de Lingua Oral e Escrita. Rio de Janeiro: Editora Lucerna.
—. 2008. Cortesia Verbal. São Paulo: Editorial Humanitas/USP. Rossi, Fabio. 1999. Le Parole dello Schermo: Analisi Linguistica del Parlato di Sei Film dal 1948 al 1957. Roma: Bulzoni. Rossi, Fabio. 2002. “Il Dialogo nel Parlato Filmico.” In Sul Dialogo. Contesti e Forme di Interazione Verbale, edited by Carla Bazzanella, 161-175. Milano: Guerini. Toury, Gideon. 1995. Descriptive Translation Studies and Beyond. Amsterdam: John Benjamins. Weinreich Uriel, Labov William and Herzog, Marvin. I. 1975. “Empirical Foundations for a Theory of Language Change.” In Directions for historical Linguistics: A Symposium, edited by Winfred P. Lehmann & Yakov Malkiel, 95-195. Austin-London: Columbia University.
Abstract
Audiovisual translation and sociolinguistic adequacy
Keywords: Socio-pragmatics, marked varieties, TV language, dubbing, subtitling

This paper aims to analyze a series of issues related to the audiovisual translation of mixed-language texts, which are characterized by the presence of different language varieties and of socio-pragmalinguistic elements. The analysis will focus particularly on two Brazilian TV series: Mandrake (2005-2007) and FDP (2012). The former was broadcast on Sky TV in Italy in 2009 in its Italian dubbed version, and the latter was subtitled by students of Portuguese in an MA course at the University of Salento (Italy) within a project organized by the Departmental Group of Audiovisual Translation in November 2012.
CHAPTER THREE
READING COHESIVE STRUCTURES IN SUBTITLED FILMS: A PILOT STUDY
OLLI PHILIPPE LAUTENBACHER1
1. Introduction

This pilot study analyses, on the basis of a limited amount of data, how film comprehension takes place, by comparing informant groups viewing the same L2 feature film sequence in three different subtitling conditions. The basic idea is to describe the effects that interlinguistic L1 subtitles or intralinguistic L2 subtitles might have on the viewers’ general comprehension of an L2 film sequence, and more precisely on their perception of its constitutive cohesive structures. Looking at a film with or without subtitles changes the overall structures at play, since from the receivers’ point of view, the on-screen text enters the same–unified although multimodal–audiovisual whole they are watching (Lautenbacher 2010). The description will be done here through a threefold qualitative analysis based on: (a) a test consisting of limited response items to 13 dichotomously scored comprehension questions; (b) a multimodal analysis of the corresponding subparts of the sequence describing the actual on-screen sources of the information asked about in the test; and, to a lesser extent, (c) eye-tracking measures of the informants of each subtitle condition group watching these particular subparts.
1
University of Helsinki, Finland. Email address: [email protected]
1.1. Some Previous Findings

A few earlier studies support the usefulness of subtitling for the understanding of audiovisual documents. Markham, Peter and McCarthy (2001) showed that comprehension at a general level was made easier through subtitle reading, in a study involving the writing of a summary of their viewings by the informants. Some studies tend to agree that subtitled documents, both in first and second language (L1 and L2), lead to better results in video content comprehension than audiovisual programmes with no subtitling (Baltova 1999; Bianchi & Ciabattoni 2007). Many studies have discussed the importance of viewers’ language proficiency in subtitled programme reception, some of them suggesting that L2 subtitles are not necessarily suitable for low-level students/learners or that they should be adapted to a more adequate level (Danan 2004), others concluding that beginners are globally advantaged by L1 subtitling, whereas at more advanced levels of proficiency viewers seem to gain more from intralinguistic L2 subtitles (Bianchi & Ciabattoni 2007), thus confirming Markham, Peter & McCarthy’s (2001) general suggestion of a progressive implementation of subtitle configurations in language teaching, where lower-level students ought to begin with L1-subtitled films before progressing to L2 subtitles and finally to original documents without subtitles. Research on cohesive links between dialogue and its transcription through subtitle text with respect to sequence understanding has shown that this aural-visual combination of linguistic elements does not overload the receiver’s faculties. On the contrary, many arguments have been put forward that bimodal input is beneficial for recollection and comprehension reinforcement (Bird & Williams 2002; Danan 2004; Vanderplank 2010). The semantic relations between image and subtitled dialogue also seem to play an important part in content comprehension (Paivio 1986; Bianchi & Ciabattoni 2007), especially in cases of subtitles being “well adapted” to the audiovisual sequence (Caimi 2006). The cognitive effectiveness of subtitle processing was also confirmed by Perego, Del Missier, Porta and Mosconi (2010). Yet, contradictions still appear in this field, since some studies show no clear difference in comprehension results between programmes with high vs. low image-dialogue correlations (Markham 1999), while others are even more negative about the impact of subtitles on the processing of image content (Lavaur and Nava 2008). Obviously, more research has yet to be done on the exact nature of the relations between specific visual and textual elements before any generalization can be made about combined meaning-making mechanisms between image and text. An important paper pertaining to information processing and the role of redundancy is Lang (1995).
1.2. Information Redundancy as a Basis for Cohesive Structures

From the reception point of view, the more or less redundant nature of the information appears to be of utmost importance in grasping meaning. Multimodal analyses of AV-documents clearly show how a meaning suggested by the author(s) of an audiovisual programme is actually underlined in several ways (in dialogues, images, symbols, text, sounds, music, etc.) in order to make sure that the viewers grasp the gist of it (Baldry & Thibault 2006; Lautenbacher 2012). This can be done not only simultaneously but also sequentially, making it sometimes difficult to pinpoint the link between the interacting elements. In this paper, in contrast with what Lang (1995) suggested, we do not wish to separate audio/video redundancy from “within-channel redundancy (such as repetition or word frequency)”, because in a narrative film, similarly to what happens in novels, the dialogic audio channel alone, for instance, contains redundancies which have a part to play in the overall sequence understanding process. Furthermore, when the programme is subtitled, the viewer is dealing with an additional source of redundancy, because subtitles have an impact on cohesive structures not only if the viewer has some knowledge of the original soundtrack language, but simply because subtitles can compensate for many other elements as well in the AV-document (Lautenbacher 2010).
1.3. Cohesive Structure Types In multimodal documents, meaning can be transmitted, among other modes, by words or sentences in the soundtrack (dialogue or off-voices). In the case of subtitled foreign films, two configurations occur in this matter: (1) the film is in an unknown language and the subtitles will be the only source of information for the reception of linguistic meaning, thus almost totally compensating for the dialogue, although one is entitled to argue that facial expressions, gestures, voice tonality and volume, among other things of that nature, might be meaningful to viewers even when they do not understand the original language; (2) the film is in some L2 of the viewer (which is the case in the situation analysed in this paper), and then dialogue and subtitles enter into a common meaning-building structure, where viewers use both the L2 dialogue and the subtitle text to infer significance. This is what I call the Minimal Cohesive Structure of subtitled films, which is fundamentally linguistic and prototypically simultaneous, although twofold: (a) it can be intralinguistic, thus more redundant because of the morpho-semantic nature of the match; (b) it can be interlinguistic, i.e. less redundant because translation into another language occurs, thus limiting the match to its semantics, except
in cases of common vocabulary between languages. Still, one should not forget that the subtitling process also has to include all kinds of extralinguistic features that are significant, especially elements of cultural nature (such as knowledge shared only within a given cultural context). Hence, both intralinguistic and interlinguistic subtitles can be considered as “translations” rather than mere “transcriptions” of aural forms to written forms. When this minimal structure is combined with narrative redundancy, to my view, the subtitled film viewer is dealing with a Narrative Cohesive Structure. In this case, she/he must create semantic links between different segments of the dialogue (and their corresponding subtitles) at different spots of the viewed programme. A narrative cohesive structure takes shape with time, during the whole viewed sequence. An important thing to notice, as we shall see in the analysis, is that narrative cohesive structure can also be based on deduction rather than plain repetition, and, in that sense, can be seen as a “thematic redundancy”: a sentence heard at one stage can serve as a reference for a pronoun at a later point, and both of these in turn can be referents for yet another lexical designation. This narrative cohesive structure is similar to what was already described for written texts by Charolles (1978), who suggested four types of “meta-rules of coherence”, the first being a rule of “repetition” (or “thematic continuity” obtained by anaphoric linkage for instance between a pronoun and its referent) and the second a rule of “progression” (stating that this repetition needs to be completed by new information in order to create coherence).2 In the audiovisual context, a third important type of cohesive structure is the one that integrates images and sounds – which are semiotic features – to minimal cohesion, thus adding audiovisual redundancy to the reading process. This is a truly Multimodal Cohesive Structure. The clearest cases of such redundancy would be those in which dialogue sentences are preceded, accompanied or followed by pictures and sounds describing (or referring to) the linguistically depicted events.3 This relation between image, sound and dialogue or subtitles can also be of completive nature, rather than merely illustrative. One might say that cohesive structures are built upon redundancy, but the exact element that is redundant is not necessarily explicit.
2
The third and fourth meta-rules of Charolles (1978) are, respectively, the “non-contradiction” rule (stating that a sentence has to build on true premises given within the text) and the “relation” rule (defining that textual facts or events have to be plausible in the world they refer to).
3 This could also be described as a form of “repetition” in the terms suggested by Charolles (1978), although the communication medium is not the same.
Finally, a fourth kind of cohesive structure should also be postulated, although it will not be developed in this study, namely Contextual Cohesive Structure. This expression designates all possible links that are made in viewers’ minds between the programme being watched and their prior knowledge of the described state of affairs. In other words, to comprehend a film, its internal cohesive structures must always be integrated with the viewer’s mnemonic knowledge about the audiovisual genre and its commercial context, original language, intertextual links, cultural references, etc. For instance, viewers always have at least some kind of expectations concerning the film they are about to see and those expectations necessarily influence how the film is perceived and understood.
2. Experiment Outline

The pilot experiment described here was carried out with a total of 21 Finnish-speaking informants divided into three groups, and consisted of one single viewing of a short excerpt from a French film, followed by a written questionnaire about the sequence. The viewing was also recorded by means of eye tracking. All test questions (and answers to them) were in Finnish, as it was the mother tongue (L1) of the informants. No specific indication about the objectives of the testing was given to the informants beforehand: they knew they would see a French film sequence and have to answer questions about what they saw.
2.1. Informants

The participants were Finnish university-level students, but none of them could be considered specialized in French, even though they had all been studying French for several years at school (3-11 years). They were either technology students taking French lessons at the Aalto University language centre in Espoo or students in their first autumn term in the French section of the Modern Language Department of the University of Helsinki. No proficiency test was conducted for this particular pilot study, since all participants were students with an evaluated listening comprehension level of B1.2, following the Common European Framework of Reference for Languages: Learning, Teaching, and Assessment (CEFR)4. All the informants saw the excerpt with the original French (L2) soundtrack, the only variable that differentiated them being subtitle language. A first group of seven informants, SubFIN, saw the excerpt with Finnish
4
http://www.coe.int/t/dg4/linguistic/CADRE1_EN.asp
subtitles, i.e., in their L1; the second group, SubFRA, watched the same excerpt with French subtitles, thus being confronted to an intralinguistic subtitling in their L2; the third group, SubNO, looked at the sequence in its original format, i.e., with no subtitles at all.
2.2. Viewed Material The AV document that was used in the experiment was a 5’32’’-long excerpt from the DVD Un long dimanche de fiançailles (A very long engagement, Jean-Pierre Jeunet 2004) in its original French soundtrack. Finland being a subtitling country, this would also be the soundtrack the audience would have had on their cinema screens or television sets at home. The reason why a feature film was selected, rather than a documentary or any other type of production, was that it is the most likely type of programme to be seen in the original language version on Finnish television (and in cinemas), since news is naturally in Finnish, documentaries use Finnish voice-overs, cartoons are dubbed, and soap-operas or series are mainly in English. The subtitles presented in the viewings were from the DVDs sold on the Finnish and French markets (region code 2). The excerpt used in this experiment was taken from the middle of the film (55’04’’–60’36’’ out of a total duration of 127’35’’), and was carefully chosen for its internal thematic cohesion: the excerpt constitutes an autonomous “chapter”5 of the film, so that knowledge of other parts of the story was not necessary in order to answer the questionnaire. The rest of the film was not shown to the informants.6
2.3. Questionnaire

The post-viewing test consisted of three series of questions about general comprehension, recollection of specific linguistic items in L2 as well as different visual and audio items, but this study shall focus only on comprehension. The film sequence and the questionnaire were first given to three native French speakers, in order to check whether the questions were not too difficult for the originally intended audience. This proved not to be the case: the French viewers were able to answer all comprehension questions correctly.
5
Called “La femme prêtée” (“The lent woman”) in the chapter selection of the French DVD.
6 For an overview of the sequence with its English subtitles, see Appendix A.
In contrast with many earlier studies, this questionnaire did not consist of multiple choice answers, because I did not wish to give any clues to the informants (else than those that were implicated by the formulation of the questions), so that their answers would be as close as possible to an autonomous retrieval of what they truly had stored in their minds. Even if the multiple choice option would have been easier for collecting the correct/incorrect answers, it would not have given a precise idea of what was really understood by the viewers. Obviously though, this choice led to certain difficulties with some questions in deciding whether the given answers were to be accepted or not, and necessarily entailed a detailed examination of those answers.7 Since the questions called for open answers and these could be considered false or correct, but also incomplete in various ways, the answers were categorized as “good”, “partial”, “wrong” and “null” (in the case of no answer being given or a response of the type “cannot say”). For the most basic level of questionnaire analysis, “good” and “partial” answers can be joined as a set of satisfactory answers and treated as a whole, and, correspondingly, “wrong” answers and questions left unanswered can be considered as a block of unacceptable answers (as will be done in chapter 3, below). This dichotomy in answer analysis reveals interesting and quite strong tendencies distinguishing the different subtitle condition groups, yet not fundamentally changing the global results of the finer categorization (kept in chapter 4).
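The four-way categorization and its two-way collapse described above can also be expressed procedurally. The following sketch is purely illustrative and is not the instrument used in the study: the label names, data layout and example values are assumptions made only for the example.

```python
from collections import Counter

# Four answer categories used in the study, collapsed into the coarser
# "satisfactory" vs. "unacceptable" dichotomy used at the most basic level of analysis.
SATISFACTORY = {"good", "partial"}
UNACCEPTABLE = {"wrong", "null"}

def dichotomize(label):
    """Map one of the four answer categories onto the two-way categorization."""
    if label in SATISFACTORY:
        return "satisfactory"
    if label in UNACCEPTABLE:
        return "unacceptable"
    raise ValueError("unknown answer category: " + label)

def group_results(answer_labels):
    """answer_labels: the categorized answers of one subtitle-condition group
    (7 informants x 13 questions = 91 labels). Returns the fine-grained counts,
    the dichotomized counts and the proportion of satisfactory answers."""
    fine = Counter(answer_labels)
    coarse = Counter(dichotomize(a) for a in answer_labels)
    share = coarse["satisfactory"] / sum(coarse.values())
    return fine, coarse, share

# Invented example: one informant's 13 answers (not data from the study).
example = ["good", "partial", "wrong", "null", "good", "good", "partial",
           "wrong", "good", "partial", "null", "good", "good"]
print(group_results(example))
```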
3. Results

Overall, the figures show that the best results were obtained by the SubFIN group, with almost 75% of answers being satisfactory, against slightly less than 60% for SubFRA and some 40% for SubNO. The precise figures are given in Table 3-1.

Total Comprehension Results (7 informants per group; 13 questions)

Group     Satisfactory answers   Satisfactory answers / informant   Percentage
SubFIN    67                     9.57                               73.6%
SubFRA    52                     7.4                                57.1%
SubNO     35                     5.00                               38.5%

Table 3-1 Overall results per subtitle condition for general comprehension questions. The best results were obtained by the viewers who had subtitles in their mother tongue (L1).
7
For an English translation of the Comprehension test questionnaire, see Appendix B.
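As a quick arithmetical check of Table 3-1 above, each group could give at most 7 × 13 = 91 satisfactory answers, so the percentages and per-informant averages follow directly from the satisfactory-answer counts. The snippet below simply reproduces that calculation; the counts are those reported in the table.

```python
# 7 informants x 13 questions = 91 possible satisfactory answers per group
for group, satisfactory in {"SubFIN": 67, "SubFRA": 52, "SubNO": 35}.items():
    per_informant = satisfactory / 7
    percentage = satisfactory / 91
    print(f"{group}: {per_informant:.2f} per informant, {percentage:.1%} satisfactory")
# SubFIN: 9.57 per informant, 73.6% satisfactory
# SubFRA: 7.43 per informant, 57.1% satisfactory
# SubNO: 5.00 per informant, 38.5% satisfactory
```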
Nevertheless, the overall differences between the three groups observed in Table 3-1 are not so straightforward under more detailed scrutiny, as appears in Table 3-2.
Table 3-2 Number of satisfactory answers for each question (Q) per group. Each group consisted of 7 informants answering to the 13 questions8. Table 3-2 shows that the SubFIN condition group distinguished itself from the two other groups by obtaining clearly better scores for questions Q6, Q11 and Q13, with at least four satisfactory answers more.9 On the other hand, the chart also shows that the SubNO condition group got poorer scores than both other groups for questions Q7 and Q9 (with at least 3 satisfactory answers less) as well as for Q2, Q3 and Q10 with a less clear-cut difference (at least 2 satisfactory answers less). The remaining questions did not show significant differences between groups (Q4, Q5, Q8 and Q12). 8
See Appendix B. This was also the case for Q1 (“How many children did the woman have before she got married? What about her husband?”), but we shall leave that question aside in this paper, first of all because the question is twofold and would blur the results of the present analysis, and secondly simply because it concerns numbers, which might call for a separate analysis. Let us merely notice here that had we decided to select the simple mentioning of “5” as a good answer, the results of SubFRA would have reached the same score as SubFIN. 9
In other words, if the overall sequence understanding seems to be the strongest with L1 subtitles, as Table 3-1 suggested, it must be underlined that this is truly the case only with certain questions (Q6, Q11 and Q13). Table 3-1 also showed that the intralinguistic subtitles in L2 (SubFRA), in turn, generally led to significantly better results than the no-subtitle condition (SubNO). Perhaps more interestingly though, Table 3-2 reveals that another series of questions led to shared best scores by both SubFIN and SubFRA (Q2, Q3, Q7, Q9 and Q10). Hence, it seems important to determine why certain questions led to better results in SubFIN only, whereas others showed almost the same levels both in SubFIN and SubFRA as opposed to the weaker SubNO (see Table 3-3).
Table 3-3 Number of satisfactory answers: best relative SubFIN scores in Q6, Q11 and Q13 (first set of columns); best relative scores of both SubFIN and SubFRA in Q2, Q3, Q7, Q9 and Q10 (second set of columns).
4. Multimodal Analysis and Discussion

A closer multimodal analysis of the excerpt seems necessary to shed light on what exactly distinguishes the two series of questions mentioned above.
4.1. Better Scores with L1 Subtitles Only

In Q6 (see Table 3-4), the “good” answer was to be found in one sentence of the dialogue, when the husband says that “Over there [back on the
frontline], it is all we have to keep going” (56’52’’)10. As a “partial” answer, one could also accept the idea that he asks for more wine, which he does, by gesturing.

Q6: What does the man say to his wife about the wine in the dining room?
Group     Good answer   Partial answer   Wrong answer   No answer / null
SubNO     -             1                2              4
SubFRA    1             -                3              3
SubFIN    4             2                1              -
Table 3-4 Individual answers to question 6 But worthy of note in Q6 is that “asking for more wine” was the only “partial” answer given in SubNO, whereas in SubFIN, the accepted “partial” answers consisted of saying that “this is what keeps us/me going” (but not mentioning the battlefield or frontline meant by “over there”). In other words, the “partial” answer in the SubNO group was based on the picture, but with Finnish subtitles the answers were building upon the subtitled dialogue. Finally, in SubFRA, all “wrong” answers showed that the husband’s sentence was not fully understood, although the French subtitle had been read, as shown by the eye-tracking recording. The cohesion structure in this case remains weak because the only shared or redundant element between image and (subtitled) dialogue is the wine, but the picture (of the husband asking for more) and what is heard (his utterance about drinking back on the frontline) do not fully overlap, and redundancy is only partial. In Q11 (see Table 3-5), again, the answer is in the soundtrack, the woman saying she was told to “tie up a coloured cloth at the window, if she didn’t want the young man to come up [to her apartment]” (58’40’’). A red ribbon is then shown in close-up immediately after the sentence.
10
For all time frames and corresponding scene descriptions, see Appendix A.
Q11: Why did the woman carry a red cloth in her hand, when the young man arrived?
Group     Good answer   Partial answer   Wrong answer   No answer / null
SubNO     1             -                3              3
SubFRA    2             -                1              4
SubFIN    4             2                -              1
Table 3-5 Individual answers to question 11 Precisely as in Q6 earlier, there is in Q11 only one link between the subtitled audio and the events described on screen. Here it is the piece of cloth, and the answer to the question cannot be found solely by viewing the image. The “partial” answers in SubFIN stated that “the woman used it as a sign”, which is true, but incomplete, and also that “she didn’t know whether to put [the cloth] in the window or not”, which is already a psychological analysis of the character of the woman, rather than a description of her reasons. Conversely, the “wrong” answers were mere guesses of the type “It meant something to her”, “It belonged to her husband” (SubNO) or “It was somehow linked to Poland” (SubFRA). In this particular scene, every SubFIN informant hit all four dynamic areas of interest (AOI) that were defined for this sequence in the eye-tracking analysis11: (AOI-1) the young man waiting in the street behind the window; (AOI-2) the first subtitle mentioning the red cloth; (AOI3) the second subtitle saying “if [she] didn’t want him to come up”; and (AOI4) the ribbon itself, held by the woman (58’39’’–58’ 48’’). This was not the case in SubFRA: in the same order, only six informants’ gazes crossed the AOI-1 and AOI-2, five entered AOI-3, and seven fixed the red cloth of AOI-4. It is worth noting though that the corresponding French subtitles were divided in two, explaining perhaps why the second part of the sentence (which gave the answer to the question) was only viewed by five informants. Q13 (see Table 3-6) also shows the same characteristics as Q6 above. The young man suggests that “they [the woman and himself] should just tell her husband they did it, to calm him down” (59’58’’). At the same time, in the 11
An “area of interest” is a zone of the picture that is preselected by the researcher on the eye-tracking device, and which gives him all the information concerning the viewing of that particular area (i.e., scanpaths and points of fixation within that zone). For a film, the AOI has to be “dynamic”, because camera movements make it necessary to “follow” the area or picture item under scrutiny, which can be changing position on the screen during the scene.
picture, he prepares to leave, but finally ends up following the woman into the bedroom. The exact significance of the sentence is not fully supported by the visual context, which also carries other meanings: two wrong answers were “they should have a drink” (in SubNO and SubFRA) or suggestions like “they should tell him what they are doing” (SubFRA).

Q13: What does the young man suggest to the woman they should do?
Group     Good answer   Partial answer   Wrong answer   No answer / null
SubNO     1             1                1              4
SubFRA    2             -                4              1
SubFIN    6             -                -              1
Table 3-6 Individual answers to question 13 All of the filmic segments corresponding to questions Q6, Q11 and Q13 thus seem to combine one single linguistic sentence (containing the answer) with one or more visual clues that only very partially support the given linguistic meaning.
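The dynamic areas of interest used in the eye-tracking observations reported for Q11 above can be thought of as screen regions whose position and period of validity change over time, against which individual gaze fixations are tested. The sketch below is only a schematic illustration of such a hit test under that assumption; the coordinate format, timings and example values are invented and do not come from the study.

```python
from dataclasses import dataclass

@dataclass
class DynamicAOI:
    """A rectangular area of interest defined over one or more time spans.
    Each span is (start_s, end_s, x, y, width, height), in seconds and pixels."""
    name: str
    spans: list

    def contains(self, t, gaze_x, gaze_y):
        """True if a fixation at time t and position (gaze_x, gaze_y) falls inside the AOI."""
        for start, end, x, y, w, h in self.spans:
            if start <= t <= end and x <= gaze_x <= x + w and y <= gaze_y <= y + h:
                return True
        return False

# Invented example: a subtitle region at the bottom of a 1280x720 frame,
# displayed for a fraction of a second within the excerpt.
subtitle_aoi = DynamicAOI("subtitle mentioning the red cloth",
                          spans=[(58.0, 58.4, 200, 620, 880, 80)])

fixations = [(58.1, 600, 650), (58.3, 100, 100)]  # (time_s, x, y), invented
hits = [f for f in fixations if subtitle_aoi.contains(*f)]
print(len(hits))  # -> 1
```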
4.2. Better Scores both in L1 and L2 Subtitles

Now if we turn to those answers where SubFRA had as good or almost as good results as SubFIN, it would seem that at least items Q3, Q7, and Q9 are truly different from the abovementioned cases of Q6, Q11 and Q13, in which SubFIN was undoubtedly better (4.1). In Q3 (see Table 3-7), the fact that “the husband couldn’t have children” was clearly understood by both SubFRA and SubFIN, but also quite well by SubNO, even if the latter got to hear the repetition only in aural form, which might explain the slight difference.
Q3: What physiological problem did her husband have?
Group     Good answer   Partial answer   Wrong answer   No answer / null
SubNO     3             1                2              1
SubFRA    5             1                -              1
SubFIN    6             -                -              1
Table 3-7 Individual answers to question 3

In the excerpt, the right answer to Q3 is given by a sentence from the soundtrack, but crucially, the same information is given at several points of the sequence (55’50’’; 55’57’’; 56’09’’; 57’59’’; 58’24’’). Put another way, the information needed for answering Q3 comes from a larger cohesive structure of a redundant nature in the dialogue, or even plain repetition (as between 56’09’’ and 57’59’’). The two retained “partial” answers in SubNO and SubFRA added that the husband could not father children anymore, thus probably inferring from the overall war context that he might have been injured (which is not the case, since he had adopted all his children already before the war and there is no mention in the sequence about the cause of his physiological problem). The narrative cohesive structure is very strong in Q3. The same kind of semantic reiteration appears also, but in a different way, in Q9 (see Table 3-8). The husband asks his wife to have a sixth child with his friend (Bastoche).

Q9: What does the man ask his wife to do?
Group     Good answer   Partial answer   Wrong answer   No answer / null
SubNO     2             1                3              1
SubFRA    2             5                -              -
SubFIN    3             4                -              -
Table 3-8 Individual answers to question 9 Unlike Q3 though, there is for Q9 no explicit sentence in the dialogue that formulates that idea, which must therefore be deduced by the viewer. Nevertheless, it is part of the overall main thread of the sequence, and indirectly suggested on several occasions. More concretely, a necessary
connection had to be made between the three following dialogue segments (given in English here), if one was to answer Q9:

1) 57’50’’ (the husband): My only way out is to have a sixth child.
2) 58’11’’ (the husband): It’s not betrayal if I’m asking you.
3) 59’58’’ (the young man): Elodie, we’ll say we’ve done it.

The relation between these narrative segments is anaphoric in nature: to have a sixth child serves as a reference for “the thing” I’m asking you [to do] (revealed by the pronoun le in French), both of which in turn are referents for it (or the “thing we are supposed to have done”). Such a redundancy based on deduction needs a firmer understanding of its constitutive segments, which is in part given by subtitling, be it intralinguistic L2 or interlinguistic L1. Here again, the cohesion structure built by the narrative is what helps the viewer to grasp the expected answer. All the “partial” answers also mentioned having a sixth child, but without saying that it should be done with someone else. Of course, this does not mean that the informants who gave these answers did not realize that. Turning now to Q7 (see Table 3-9), one finds a case of redundancy of a somewhat different nature. The two main things the husband tells about the war are that “he saw soldiers whose cartridge belts exploded on them” (57’10’’) and “they once had to use the body of a friend as a shield” (57’20’’). Neither of these dialogue sentences is strictly speaking repeated, but both are made especially salient in the film because they are strongly supported by image and sound, as they are followed by visual scenes of those precise events, with very loud explosions (57’17’’ and 57’26’’ respectively). Hence, in these cases, the cohesion comes from a multimodal redundancy combining dialogue, subtitle, image and sound effects, and its strength comes from the fact that it is the entire content of the sentence (and not only one of its constitutive details) that is re-presented through the audiovisual mode.

Q7: Mention two things the man tells his family about the war.
Group     Good answer   Partial answer   Wrong answer   No answer / null
SubNO     -             3                2              2
SubFRA    5             1                -              1
SubFIN    5             2                -              -
Table 3-9 Individual answers to question 7. All “partial” answers consisted of only one of the two stories.
Let us finally turn briefly to Q2, which also appeared in the set of similar results between SubFIN and SubFRA, as we saw in Table 3-3. In this case, it appears that between the sentence from the soundtrack “…d’origine polonaise comme moi” and its English subtitle “She was Polish, like me” (55’56’’), there is nothing but a minimal cohesive relation. There are no sources of information other than this short bit of dialogue that can help to answer Q2. Even though the proportion of correct answers was high in none of the three groups for this particular question, one might notice that both SubFIN and SubFRA performed slightly better than SubNO, which might speak in favour of a certain usefulness of redundancy, even in its most basic form, i.e., intralinguistic or interlinguistic subtitling. Reading what is heard could indeed strengthen aural reception, as has been noted in the literature (see 1.1.). The same observation could be made concerning the results of Q10 (“Who is Kléber Bouquet?”), where it is the monologue only that gives the answer to the question, by declaring “[…] Kléber Bouquet, also known as Bastoche” (58’36’’). Notice that both in Q2 and Q10, the viewers are dealing with a mere detail with regard to the narrative.12
5. Conclusions

On the basis of these preliminary results, one is entitled to suggest that different subtitle conditions change the way in which viewers grasp the inherent cohesive structures of an L2 film. More specifically, it appears to be the relation between subtitles and redundancy that plays a key role with regard to comprehension. The Minimal Cohesion Structure, where subtitles support the dialogue alone, seems globally to help the non-native speaker spectator with respect to the L2 audio track, especially in pinpointing single specific lexical items (cases Q2 and Q10, in this particular study). At a more elaborate level of cohesion, namely Narrative Cohesion Structure, where several sentences scattered at different points of the film develop the same idea (often linked to the main thread), cohesion can appear as a redundancy/repetition of explicit linguistic items (as in Q3) or as a thematic redundancy based on a deduced implicit idea (Q9). In these cases, both L1 and L2 subtitles seem to help in understanding the described events. When Multimodal Cohesion Structure is at play, i.e., when dialogical elements are supported by image and sound, a distinction has to be made between two situations: either a sentence is totally supported by the
12 The same can be stated about Q4 and Q5 as well as Q12, all questions that do not really differentiate the groups from each other.
audiovisual mode (a case of strong multimodal redundancy), almost as if the picture “repeated” the content of the utterance in its entirety, in which case both L1 and L2 subtitles strongly support the viewer’s understanding (Q7); or, it is only a sub-part of a sentence that finds its audiovisual counterpart in the scene (a case of weak multimodal redundancy), which puts more weight to the understanding of the linguistic utterance, thus giving better comprehension results only with L1 subtitles (Q6, Q11 and Q13). Of course, this qualitative small scale pilot study will need to be checked through a larger quantitative experiment, using a bigger number of subjects and a stronger methodology, e.g. with pre-tests on language proficiency, a more thorough selection of pertinent questions (better distinguishing between recollection and comprehension questions, or between numeral and lexical questions, for instance) and statistical reliability checking. The main problem with large scale experiments, though, is probably that the use of open questions becomes difficult. As we noticed in the present study, all individual answers (sometimes especially the wrong ones) might in fact be very enlightening when it comes to measuring comprehension.
References Baldry, Anthony and Thibault, John Paul. 2006. Multimodal Transcription and Text Analysis. London: Equinox. Baltova, Iva. 1999. “Multisensory Language Teaching in a Multidimensional Curriculum: The Use of Authentic Bimodal Video in Core French.” The Canadian Modern Language Review/La Revue Canadienne des Langues Vivantes, 56(1): 31-48. Bianchi, Francesca and Ciabattoni, Tiziana. 2007. “Captions and Subtitles in EFL Learning: An Investigative Study in a Comprehensive Computer Environment.” In From Didactas to Ecolingua. An Ongoing Research Project on Translation and Corpus Linguistics, edited by Anthony Baldry, Maria Pavesi, Carol Taylor-Torsello and Christopher Taylor, 69-90. Trieste: Edizione Università di Trieste. Bird, Stephen A. and Williams, John N. 2002. “The Effect of Bimodal Input on Implicit and Explicit Memory: An Investigation into the Benefits of within-Language Subtitling.” Applied Psycholinguistics, 23: 509-533. Caimi, Annamaria. 2006. “Audiovisual Translation and Language Learning: The Promotion of Intralingual Subtitles.” The Journal of Specialized Translation, 6: 85-98. Charolles, Michel. 1978. “Introduction aux Problèmes de la Cohérence Textuelle.” Langue Française, 38 : 7-42.
Danan, Martine. 2004. “Captioning and Subtitling: Undervalued Language Learning Strategies.” Meta: Journal Des Traducteurs, 49(1): 67-77. Lang, Annie. 1995. “Defining Audio/Video Redundancy from a LimitedCapacity Information Processing perspective.” Communication Research, 22(1): 86-115. Lautenbacher, Olli Philippe. 2012. “From still Pictures to Moving Pictures – Eye Tracking Text and Image.” In Eye Tracking in Audiovisual Translation, edited by Elisa Perego, 135-155. Roma: Aracne Editrice. —. 2010. “Kompensaatio Elokuvatekstityksen Strategiana [Compensation as a Subtitling Macro-Strategy].” MikaEL:Electronic Proceedings of the KäTu Symposium on Translation and Interpreting Studies, Volume 4. Retrieved from: http://www.sktl.fi/@Bin/40728/Lautenbacher_MikaEL2010.pdf. Lavaur, Jean Marc and Nava, Sophie. 2008. “Interférences Liées au sousTitrage Intralangue sur le Traitement des Images d’une Séquence Filmée.” In Actes du Congrès National de la Société Française de Psychologie, edited by J.M. Hoc, & Y. Corson. pp. 59-64. Markham, Paul. 1999. “Captioned -Videotapes and Second-Language Listening Word Recognition.” Foreign Language Annals, 32(3): 321328. Markham Paul L., Peter Lizette A. and McCarthy, Teresa J. 2001. “The Effects of Native language vs. Target Language Captions on Foreign Language Students’ DVD video comprehension.” Foreign Language Annals, 34(5): 439-445. Paivio, Allan. 1986. “Mental Representations: A Dual Coding Approach.” New York: Oxford University Press. Perego Elisa, Del Missier Fabio, Porta Marco and Mosconi Mauro. 2010. “The Cognitive Effectiveness of Subtitle Processing.” Media Psychology, 13: 243-272. Vanderplank, Robert. 2010. “State-of-the-Art Article – Déjà vu? A Decade of Research on Language Laboratories, Television and Video in Language Learning.” Language Teaching, 43(1): 1–37.
Abstract

Reading cohesive structures in subtitled films. A pilot study

Key words: Reception, redundancy, comprehension, subtitles

The aim of this pilot study is to analyse on a small scale how comprehension takes place when watching a subtitled film, through a
comparison of three informant groups viewing the same L2 film sequence in different subtitle conditions: L1, L2 and no subtitles. The idea is to grasp the combined impact of filmic cohesive structures and interlinguistic (L1) or intralinguistic (L2) subtitles on the comprehension process of the viewers. The description is made through a threefold qualitative analysis based on a test consisting of limited response items to a questionnaire, a multimodal analysis describing the actual on-screen sources of the information asked in the test and, though to a lesser extent, eye-tracking measures. The results of this pilot study suggest that subtitles in L1 might be more effective for understanding narrative parts with weak multimodal cohesive structures, i.e., showing low text-image redundancy. On the contrary, in the case of strong narrative or multimodal cohesive structures (i.e., showing high redundancy between dialogue sentences or between utterance contents and image), intralinguistic L2 subtitles seem to lead to the same comprehension result levels as L1 subtitles.
Appendix A

Overview of the excerpt

Time | Scene description13 | English subtitles (dialogue / off-voice) | Sounds / speaker
55’04’’
Young woman lying in her bed in an old countryside house.
Sound of a man falling with his bicycle
55’12’’ 55’19’’
A postman on the ground in front of the house, picking up his letters. The young woman comes out from the front door.
55’20’’ 55’23’’
55’32’’
55’34’’
//- All for me?//
woman
//No. just this one.//
postman
//If I’m not picking up gravel, I’m picking up the postman!//
old man
An elderly couple enters the picture, helping the postman to get back on his feet. //The gravel’s gone? A fair fight.// //Fighting is never fair.//
postman
woman
13 Unfortunately, the Warner Bros. Clip and Still Licensing Dept. did not wish to license material from A Very Long Engagement for use in a scientific journal article. Consequently, only scene descriptions can be given here, instead of actual screenshots.
55’36’’ 55’41’’ 55’42’’
55’47’’
55’50’’ 55’51’’ 55’54’’ 55’56’’ 55’57’’
//- Can I offer you a short one?/ - Won’t say no.// The young woman reading a letter on a path outside the house + Superimposed picture of another woman writing a letter, sitting at a table. Four children posing at a photographer’s studio.
56’09’’ 56’12’’
56’18’’ 56’20’’
old woman / postman / Music begins / wife voice
//Miss, I beg you to keep my secret / to yourself.// … //When I met my husband, / he already had four children.//
//None were his.//
…
//Out of kindness, / he married a widow with TB.//
…
The young woman reading the letter, walking on seaside hills. //She was Polish, like me.// //He adopted her children / before her death.//
… …
//I was also an unmarried mother.//
56’00’’ 56’02’’ 56’04’’
Same as previous, from a different camera angle + Superimposed picture of a fountain pen writing.
//He then found himself / the father of five,//
…
//though he couldn’t have any / of his own.//
…
General view from Montmartre: a man is pushing children in a carriage downhill, towards the woman (the letter writer). The letter writer in close shot. //You’re going to run me over!//
Children laughing and screaming wife
//We had four years of tenderness.//
wife voice
//We had plans.//
…
56’24’’
Scenery: an old moving picture from a beach.
//We dreamt / of seeing the sea together.//
… Sounds from the seaside
56’26’’
The young woman reading the letter, walking on the seaside hills.
//Then the war came.//
…
//I thought Bastoche / would take care of him,//
…
56’23’’
56’31’’
56’35’’ 56’37’’
Family having supper in their dining room.
56’39’’ 56’41’’
56’52’’
The husband asks for more wine by gesturing. His wife picks up a new bottle and puts it on the table. The man fills up his glass while speaking
56’57’’
The wife in bed, in the dark, eyes wide open (close-up).
57’00’’
The husband on the other side of the bed, also unable to sleep.
57’05’’
View from under the Eiffel tower. The couple and one of their children are sitting on a bench. Children behind them are playing with fireworks.
57’10’’
57’14’’
//their being posted together.//
…
//But during his leave, / in September, 1915,//
…
//just after the Battle of Artois,//
…
//I knew / nothing would ever be the same.//
…
//Over there, / it’s all we have to keep going.//
//Once, in a burning field, / I saw some comrades…// //Their cartridge belts / exploded like fireworks.//
husband
Sounds of explosions husband
…
57’17’’
Picture from the war: cartridge belts exploding on the soldiers carrying them.
Sounds of explosions
57’20’’
Family walking towards the camera in an old style shopping mall. //Once we even had to use / the body of a friend// //as a shield.//
husband
The dead body of a soldier is being hit by bullets. The corpse is moved by someone hiding behind it.
Sounds of fire weapons and explosions
57’24’’ 57’26’’
…
57’36’’ 57’38’’
57’42’’
The young woman reading the letter while walking on the seaside + Superimposed picture of the fountain pen writing.
//There had been something / on his mind.//
wife voice
//He came out with it / on the second night.//
…
57’47’’
The couple lying side by side in bed. //If I desert, the gendarmes / will come and get me.//
husband
57’48’’
Old black and white film picture of a man being executed against a tree.
Sound of a firing squad
57’50’’
In their apartment, the husband (in the foreground) speaks to his wife (washing in the background)
//My only way out / is to have a sixth child.//
husband
//When you have six children, / they send you home.//
…
57’59’’
The young woman reading the letter, walking on the seaside hills.
//As I said, / he couldn’t have children.//
wife voice
58’02’’
The wife writing the letter, sitting at a table. //I didn’t dare think / what was on his mind,// //but he just wouldn’t let it go.//
…
View from inside a zoological museum, seen from a balcony. The family enters the picture. //It’s not betrayal if I’m asking you.// //Especially if it’s with Bastoche.// //Don’t touch!//
Music ends husband
57’55’’
58’04’’ 58’06’’ 58’11’’ 58’15’’ 58’18’’
58’24’’
58’27’’
The family walking hastily through the museum, followed by the camera. //The five aren’t mine, / so why not a sixth?// //You’d have nine months to wait. / The war will be over by then.//
…
… wife (to one of the children) husband
wife
//It will never be over. Never!//
58’31’’ 58’34’’ 58’36’’
husband
//They’d be nine months of hope. / Have you no heart?//
The young woman on the seaside (long shot). //And then one day, I found a note / from Kléber Bouquet,//
… wife voice
Sound of seagulls
58’39’’ 58’40’’
The wife standing at the apartment window, looking down into the street, where a young man is standing waiting. //known as Bastoche,// //saying to tie a coloured cloth at / the window if he wasn’t to come up.//
… …
Camera movement towards a red cloth held by the woman. From where he is standing, the young man cannot see it. Sound of knocking at the door 58’48’’
58’56’’
Inside the apartment, the woman opens the door, and the young man enters, greeting her.
58’57’’ 59’00’’
[Bonjour. – Élodie? – Oui…] (Untranslated passage in the English subtitles)
[wife – young man – wife]
//Coffee?//
wife
//I’d love some!//
young man
They sit at the table, she serves coffee. //- After you. / - Please.//
wife young man
59’09’’
//Thank you.//
young man
59’15’’
//Sugar?//
young man
59’16’’
//No, thank you.//
wife
59’24’’
//It’s mother of pearl?//
young man
59’25’’
//Yes, Benjamin made it.//
wife
59’34’’
//- It’s odd… / - How strange we never met.//
young man wife
59’39’’
//The children aren’t here?//
young man
59’40’’
//Yes, but…//
wife
59’42’’
//They’re playing outside.//
Sound of children playing
59’48’’
The woman stands up.
59’50’’
The woman standing, a bottle in her hand. //No, no… sorry.//
…
59’52’’
Reverse shot: the young man, still sitting, addresses the woman.
young man
59’54’’
(camera position as in 59’50’’) //Neither do I.//
wife
59’58’’
(camera position as in 59’52’’) //Elodie, we’ll say we’ve done it.//
young man
59’59’’
(camera position as in 59’50’’) //It’ll calm him down.//
…
60’02’’
(camera position as in 59’52’’) //it will make things easier, for you, for me…//
…
60’05’’
Camera behind the man: he stands up, ready to leave, walking towards the door (towards the camera). //for everybody.//
…
60’12’’
Reverse shot, camera close to the floor showing the skirt of the woman in foreground. The skirt falls down.
60’16’’
Reversed shot: the young man turns back to find out that the woman is undressing. She turns back and
60’25’’
//- A pick-me-up?/ - Please.//
//I don’t fancy one.//
wife young man
Music begins
disappears into the bedroom. The young man follows her. 60’31’’
Back to the woman by the seaside, still reading the letter.
60’36’’
End of excerpt
Appendix B

The comprehension test questionnaire (translated from Finnish)

Q1 - How many children did the woman have before she got married? What about her husband?
Q2 - What country was the letter writer from?
Q3 - What physiological problem did her husband have?
Q4 - Did the family ever go to the seaside?
Q5 - In what year did the man come home from the war for holidays?
Q6 - In the dining room, what does the man say to his wife about the wine?
Q7 - Mention two things the man tells his family about the war.
Q8 - What would happen to the man if he deserted?
Q9 - What does the man ask his wife to do?
Q10 - Who is Kléber Bouquet?
Q11 - Why did the woman carry a red cloth in her hand when the young man arrived?
Q12 - Where are the children of the woman?
Q13 - What does the young man suggest to the woman they should do?
CHAPTER FOUR

THE LANGUAGE OF INSPECTOR MONTALBANO: A CASE OF IRONY IN TRANSLATION

MARIAGRAZIA DE MEO1
1. Introduction

While translation has been traditionally concerned with the transfer of meaning from a source language to a target language text, the translation of irony has focused more on the transfer of its interpretation and effect; it is concerned with the said as much as with the unsaid (Hutcheon 1994), which, in order to be perceived as ironic, needs to remain embedded in the target text. In line with previous studies (Hutcheon 1994, Attardo 2007), De Wilde suggests that “irony happens rather than exists” (2010, 41), in the sense that it is triggered by the receiver’s active participation and to this end it aims at producing an emotive response. Irony is generally considered as saying one thing and meaning something else (Booth 1974, Muecke 1982, Barbe 1995); however, it is not appropriate to explain verbal irony in mere linguistic terms, since “irony is a pragmatic category which triggers an endless series of subversive interpretations” (Mateo 1994, 125). The aim of this paper is to examine the translation of verbal irony in the subtitles of the detective TV-series Inspector Montalbano through an inductive and descriptive approach. Irony is one of the most frequently used features of speech connotating the language devised by the Sicilian writer Andrea Camilleri2 for his multifaceted and realistic character. Translated into more than 35 languages, the novels achieved national and international acclaim that was sustained by the TV-series’ adaptation, broadcast in more than ten countries. The reasons behind this enthusiastic
1 Università di Salerno, Italy. Email address: [email protected]
2 Camilleri is both the TV-series screenwriter and the successful author of the novels that inspired it.
response are mainly due to Camilleri’s ability to convey a Mediterranean atmosphere through a strong sense of place and identity, which manages to be convincingly transferred on screen. Sicily already retained a popular fictional image, internationally known, mainly as a sexist and patriarchal society, as the homeland of the mafia that had exported its control worldwide, and also as the centre of a considerable diaspora that, from the end of the nineteenth century and for over a hundred years, saw the migration of thousands of people mainly to North America and Australia, exporting its traditions and cultural heritage. As maintained by Carroli (2010), a lecturer in Italian Studies in Australia, Camilleri is successful in reinvigorating a regional narrative3, seducing international audiences with a touch of exoticism mixed with realism, where the boundaries between fiction and reality are blurred and the characters start living independently of the author4. In addition to that, Montalbano’s language is characterized by a mixture of standard Italian, Sicilian dialect and invented words used with a persistent ironic tone, as a personal way of interpreting reality and as the expression of a passionate attitude towards a society full of poignant contradictions. If this “intertextual métissage of languages and styles have attracted the reader worldwide” (Carroli 2010, 157), this is also a consequence of the creative choices of daring translators such as Stephen Sartarelli and Serge Quadruppani, who translated the novels into English and French respectively, inserting local slang and invented formulas in the target language (De Santis 2011). To return to the similarly popular TV-series, it can be argued that this managed, despite some inevitable criticism, to engage audiences well beyond the national one and to drive them towards the reading of the novels. On screen, the outstanding baroque settings, the postcard shots of the Mediterranean blue sea and the constant presence of inviting food, while apparently feeding into the stereotypical image of a beautiful but backward Sicily, frame a complex reality of fierce criticism, encouraging interpretation to go beyond appearances. The analysis of verbal irony in translation implies consideration of the ironic triggers (Hutcheon 1994, Pelsmaekers and Van Besien 2002) through which it is conveyed in the source dialogue and then encoded in the subtitles. In this paper an analysis of these ironic signals will be carried
3 In the regional detective genre, namely the giallo, differently from his eminent Sicilian predecessor Leonardo Sciascia, Camilleri uses a light and sardonic tone when dealing with serious issues such as politics and mafia, in order to achieve the same strong criticism.
4 Among the many examples, the fictional town of Vigàta has several tourist websites and its name has been added to road signs and maps associated with the real town of Porto Empedocle, where the story is set.
out, setting off from the hypothesis that, in subtitling, the textual elements which function as ironic clues may be considered redundant features of discourse and therefore omitted in the recodification of the target text. Moreover, the paper will build on the principle that in subtitling irony, non-verbal elements such as prosody and paralinguistic codes remain visible and are an important support in the interpretation of ironic utterances. Although the main concern will be on the analysis of ironic triggers and on their recodification in the subtitles, elements such as intentionality and intertextuality (Hatim and Mason 1990, Hutcheon 1994) will also be addressed, as they are essential in the receiver’s interpretation, and may not share a common background in the two texts. In short, a holistic perspective will be adopted, in order to place equal importance on the three constituents of irony: the ironist, the interpreter and the context in which it unfolds (Hutcheon 1994). A qualitative analysis will be carried out with clear awareness of the elusiveness of irony and the importance of its evaluative attitude: the description of verbal irony in translation first depends on the researcher who, starting from an analysis of the source dialogue, will express an evaluative judgement on the translator’s choices and on his/her ability to turn into the ironist for the target audience. On the one hand, the researcher may fail to perceive irony in the source dialogue, or in the subtitles, in the same way as the translator and, on the other, he/she may also fail to consider the range of reasons that would account for the possible absence of irony in the subtitles. This might have been caused by the translator’s misreading but also by textual problems. Therefore, proceeding in this investigation also requires the acknowledgement of a number of variables that, in each case, will guide the empirical observations.
2. Framing Verbal Irony

In his analysis of the relevance theory applied to translation, Gutt (1991, 1998) underlined the importance of defining, first of all, what to translate. This preliminary step is essential when dealing with an elusive concept such as irony that is often presented as a form of humour (Zabalbescoa 2005), although not confined to it; actually humour, like criticism, is seen as an intended effect of irony (Nash 1985, Vandaele 2002, Pelsmaekers and Van Besien 2002). Irony has been described, at the textual level of the utterance, in terms of the speech act theory (Austin 1975, Searle 1969). As a locutionary act, the ironic utterance shows some ambiguity in its propositional content between what is said and the situation. As an illocutionary act, the
utterance’s conventional value of being a statement, promise, etc., is not fully valid since the ironic statement implies primarily an evaluative attitude, hence, its perlocutionary effect on the hearer is one of criticism with or without a humorous effect. Hutcheon (1994) developed a dynamic and pragmatic approach to irony defined as something whose “semantic and syntactic dimensions cannot be considered separately from the social, historical and cultural aspects of its context of development and attribution” (Hutcheon 1994: 17).
Not only does irony communicate meaning, but also an emotional response; therefore it is marked by an evaluative edge. The ironic setting is considered to be a social one in the sense that it implies the existence of a relation between the ironist and its interpreter, both operating within the circumstances in which irony is happening. It is also defined as political because, although it might produce humour, it always includes such concepts as hierarchy, subordination and judgement, which add affective charge to the utterance (Hutcheon 1994). Hence, even in cases of it being tactful and genteel (Holdcroft 1983, Berrendonner 1981) with the aim to produce a humorous effect in its victim, irony remains an expression of criticism whose effect can range from mild mockery to heavy embarrassment and humiliation. Furthermore, as already hinted at, the ironist’s intentionality can only be uncovered by the interpreters’ active participation and therefore both figures play an equally important role in making irony happen. Hence, Hutcheon (1994) argues that irony is essentially relational, as well as inclusive and differential. Its inclusiveness refers to the fact that the literal component cannot be entirely excluded in the interpretation of an ironic performance but it remains an essential part of the overall meaning, particularly in the case of irony triggered by exaggerations, understatements or quotations rather than contradictions (Sperber & Wilson 1992). As it will be later argued in this paper, the concept of inclusiveness considers from a different perspective the traditional two-stage approach (Grice 1975), in which the literal element is usually rejected in order to enhance irony’s essentially figurative nature. Ultimately, irony is defined as differential in the sense that it is triggered by the difference between the said and the hidden, whereas other tropes and rhetorical figures, such as metaphor and allegory usually function at the level of similarity. Hutcheon (1994) also refers to the concept of discursive community, which provides a context of shared assumptions between sender and receiver. Hence, verbal irony cannot function outside a context in which the relationship
between the said and the unsaid is understood as relevant, rather than generating mere confusion. Furthermore, a discursive community is also considered relevant as it accounts for the acknowledgement of meaningful relationships existing between former texts in order to understand their connotative dimension. Hatim and Mason (1990, 129) argue that intertextuality is “a signifying system which operates by connotation. It requires a social knowledge for it to be effective as a vehicle of signification”. In this perspective, the translator is seen as a mediator that intervenes to fill the distance between texts (de Beaugrande and Dressler 1981). Another important aspect to consider is that understanding irony is more dependent on belonging to the same discursive community rather than on the actual competence of the speakers. This implies a shift “from the notion of elitism toward an acceptance of the fact that anyone has different knowledge and belongs to different discursive communities” (Hutcheon 1994, 97); therefore, everyone is in the position to use and understand irony as a natural mechanism. The pragmatic account of irony (Hutcheon 1994, Attardo 2000, 2007, Chikhachiro 2009) is greatly influenced by Grice’s cooperative principle (1975), which has been considered by most scholars as a starting point for debate since the theory of conversational maxims helps to account for ambiguity in communication. Grice essentially defined an ironic utterance as flouting the maxim of quality, stating that the speaker needs to say what he believes to be true. In this two-stage approach to the construction of irony it is possible to distinguish two phases: a first moment, corresponding to literal meaning and a second stage of opposite and figurative meaning that discards the literal one. In opposition to this definition of irony as essentially based on the violation of the maxim of quality, Sperber and Wilson (1992) present irony as primarily based on echoic mention. The main ironic triggers are expressed less by the opposition between what is meant and the literal meaning than by the echoic repetition of a previous utterance, made in order to express the ironist’s attitude. Once more, the idea of irony as elitarian and dependent on the speaker’s and hearer’s competence is discarded, as this theory also presumes a form of spontaneity in the construction and understanding of irony that is therefore looked at as a natural device encountered in communication. However, not every repeated utterance can be ironic; that is, there are utterances that are overtly reported just with the intent of genuinely informing the hearer. Interpretative confusion is avoided because of the notion of relevance (Sperber and Wilson 1995), in relation to the fact that “echoic utterances are echoic interpretations of an attributed thought or utterance” (Sperber
and Wilson 1992, 65). The notion of relevance serves the purpose of rendering the interpretation of irony the most direct and natural one in a given text. Although recognising that irony is never an entirely overt process, thinking in terms of relevance and of its two constituents of effect and effort helps to deal with it. When an utterance is an expression of verbal irony, according to Sperber and Wilson (1992) and their one-stage theory, it usually offers one possible rational interpretation that is able to guarantee the ironist’s achievement of maximal effect with minimal effort on the part of the hearer, as long as they share a familiar context. The theories developed by Grice (1975) and Sperber and Wilson (1992, 1995), which have been referred to in terms of binary opposition, are both relevant to the pragmatic approach chosen in this descriptive analysis of verbal irony. The idea of an evaluative judgement expressed through echoic mention is not necessarily in contrast with Grice’s cooperative principle (Hatim and Mason 1990). For instance, an ironic understatement, while respecting the maxim of quality, may violate that of quantity, as the utterance would not be as informative as required. Attardo (2000) argues that the way forward is to add new considerations to Grice’s two-stage approach that continues to be valid. The violation of any of the four communication maxims or equally of none of them is able to trigger irony. On the other hand, developing the theory of echoic mention (Sperber and Wilson 1992), the co-presence of the literal and the implied meanings both contribute to the activation of verbal irony. Furthermore, irony is not necessarily echoic but should be dealt with mainly in terms of contextual inappropriateness. In other words, as the semantic model is not entirely appropriate for dealing with the interpretation of irony, Attardo (2000) maintains that irony operates at a pragmatic level, as its meaning is semantically inappropriate but contextually relevant, and it needs to be inferred because never overtly expressed. “It is possible to […] define as ironical an utterance that, while maintaining relevance, explicitly or implicitly violates the condition for textual appropriateness” (Attardo 2000: 817).
The cooperative principle and its maxims are still important, as the ironist’s purpose is to deviate from them as little as possible to make irony effective and intelligible; their violation operates in terms of “the smallest possible disruption” (Attardo 2000, 814). The conciliating position just presented tends to solve the binary opposition between the two-stage and the one-stage approach, which part of the literature on the analysis of irony had fostered (Giora 1997, Gibbs 1994, Gibbs and Colston 2007).
3. Irony and the Translator

To talk about translation in general and, for the specific purpose of this paper, to describe irony in translation requires focusing primarily on the translator performing the double role of interpreter and ironist, as his/her task is to re-codify and re-contextualize the ironic triggers for the target audience. As argued by Hatim and Mason (1990), translation is about making judgements, expressing one’s opinion on someone else’s intentions. Therefore, the translator’s interpretation is as important as the author’s intention and both components need to be taken into account in theoretical discussion. In Hutcheon’s (1994) dynamic approach to Translation Studies research, the analysis focuses on the adaptation of irony to different target contextual environments with particular attention to interpretative issues, addressed by de Wilde as a “translational interpretive oriented analysis” (2010, 28). The interpretation mechanism requires a degree of comparison between the source and target text, and establishing possible terms of comparison is problematic. “What markers are needed to ensure that irony happens?” (Hutcheon 1994, 178) and what are the textual signs that guide the interlocutor/interpreter? It is evident that no fixed taxonomy could ever be entirely reliable; a “repertory method” (Koster 2000, 98) that predetermines rigidly the elements of analysis would be “limited methodologically in that its ‘a priori’ nature pinpoints irony to concrete linguistic and textual manifestations without being able to account for more interpretative processes” (de Wilde 2010: 33).
The list of ironic markers would be too long and varied if aimed at including the variety of lexical, syntactic or textual elements that may trigger the plurality of functions and effects (Hutcheon 1994, Perlsmaekers and Van Besien 2002) embedded in an ironic utterance. On the other hand, since the ironist’s intention is often that of being recognised as such at least by one of his/her interlocutors or victims, it makes sense to refer to wide categories of ironic markers, although any comparative procedure will have to remain dynamic and flexible. At a textual level, Hutcheon distinguishes between two different types of ironic markers, as some of them “function meta-ironically” (1994, 154), that is, contextually they do not express incongruities between what is said and the situation but function as ironic cues (Pelsmaekers and Van Besien: 2002) to the presence of an ironic utterance, through the use of interjections, forms of address, incomplete sentences, etc. Other markers “function structurally” as they “structure the more specific context in
which the said can brush up against some unsaid in such a way that irony and its edge come into being” (Hutcheon 1994, 154). It is this latter range of signals that Hutcheon groups as follows: “(1) various changes of register; (2) exaggeration/understatement; (3) contradiction/incongruity; (4) literalization/simplification; (5) repetition/echoic mention” (1994, 156). Changes of register may be introduced, for instance, by the use of different forms of address, modifications of tone, style and register, while exaggeration and understatement by the use of figures of speech such as hyperboles or litotes. Contradiction and incongruity are activated when saying one thing that is contrary to what is happening, a trigger that in Gricean terms breaks the maxim of quality, whereas the fourth category of simplification usually happens when there is an approximation to wordplay. The last, and probably the most familiar category, entails echoic mention, repetition and mimicry. None the less “a successful ‘marker’ will always be dependent upon a discursive community to recognize it, in the first place, and then to activate an ironic interpretation in a particular shared context” (Hutcheon 1994: 159).
This account of elements that function as ironic markers does not leave out reference to paralinguistic signs such as gestures and facial expressions as well as prosodic features like the tone of voice, which, as exemplified in the description of the scenes considered for analysis, also offer an essential contribution to understanding irony since they remain available on screen for the target audience (Perlsmaekers and Van Besien 2002). Moving along these guidelines, the examples presented have been chosen with the purpose to give a descriptive account of the most recurrent patterns through which verbal irony is conveyed in the source dialogue and of those encountered in the subtitles. In view of the fact that translating irony implies primarily the transfer of an effect rather than just textual meaning, the purpose has been not only to consider the translation strategies (Mateo 1995) employed in the subtitles, but rather to place the focus on the translator’s attempt to maintain or to introduce new ironic markers that function as effective triggers for the target audience. The data analysed come from a corpus of eight episodes of the detective TV-series Inspector Montalbano, subtitled into English, however the samples were selected from a sub-corpus of three episodes. In addition to the reference to textual elements, other paratextual factors that trigger the audience’s expectations will also be addressed. These include the familiarity that the source and target audience may have developed with the situations in which irony happens and with the characters that are Montalbano’s usual victims. Nonetheless, as Delabastita (1990, 63) points out
“in the absence of any objective basis of assessment, target language reader reaction is no more predictable than source language reader’s reaction is measurable”.
Subtitling irony may present few formal problems, since ambiguity is not based on misuse or misinterpretation of linguistic elements, as in the case of puns and wordplay, but it is a cognitively complex process in that it depends on pragmatic elements that trigger a number of possible interpretations, in relation to a specific socio-cultural context (Chiaro 2005, Zabalbescoa 2005). Echoic mention and repetition are important cues that act both meta-ironically, in the case of interjections, forms of address, incomplete sentences, redundant explanations and repetitions, etc., and structurally as the most recurrent triggers of irony. Therefore, although they are generally considered redundant features of discourse that tend to be omitted from subtitling, their translation is often essential in conveying the ironic effect. Montalbano uses humour and irony mainly as a way to express indignation. Both the readers and the audience develop a sense of familiarity with the characters and are prepared for the humorous exchanges of ironic remarks between them. Although it is not really possible to distinguish good from bad irony as they both conceal a negative judgement, Montalbano’s irony can be looked at as going from polite mockery (Holdcroft 1983) to a more hostile form of criticism. On the one hand, Montalbano is ironic with his friends and colleagues in quite an affectionate tone, in particular when speaking to his competitive deputy Augello, or with detective Fazio. Among his favourite victims of a rather harmless form of irony and sarcasm are Pasquano and Jacomuzzi at Forensics, who are usually ready to reply with sardonic remarks. On the other hand, Montalbano’s irony becomes sharper when addressed to corrupt representatives of government, such as commissioners and judges, who are often depicted as using their positions of charge in order to disguise ineptitude, nepotism and the pursuit of personal interests. This difference is also marked by the fact that while friends usually recognize themselves as the victims and are able to answer back, engaging in a sort of fair exchange, the others do not, as they remain unaware. The scene in Table 4-1 presents a sample of irony, which is structurally triggered by a sudden change of register that frequently marks the conversations between Pasquano, the outspoken but conscientious doctor in charge of autopsies, and the inspector. Here Pasquano’s explicit swearing is contrasted by Montalbano’s polite and unexpected reply “la ringrazio della sua squisita cordialità” which is translated literally as “thanks for the exquisite cordiality”. The utterance structurally functions
as an ironic trigger in its contradiction to the situation. In this example the colloquialisms and swearing may be considered ironic cues preparing for the ironic utterance and they have also been translated literally, including the explicit interjection “che cazzo/what the fuck”, which marks the translator’s deliberate intent of maintaining Pasquano’s tone in order to emphasize the abrupt inversion in Montalbano’s reply.

Pasquano: Perché sente il bisogno di venire a rasparsi le corna con me? Ma che cazzo vuole? | Why do you feel the need to lock horns with me? What the fuck do you want?
Montalbano: Buongiorno dottore! Si è sfogato? | Hello? Is it all out?
Pasquano: Ancora no, grandissima rottura de’ gabbasisi. | Not yet. You, huge pain in the ass.
Montalbano: Uhm. Posso parlare? Intanto la ringrazio della sua squisita cordialità. | Can I talk? I’ll explain later, thanks for the exquisite cordiality.
Table 4-1: from Montalbano’s Croquettes In the second excerpt (see Table 4-2), irony functions at a textual level as an exaggeration. The example also shows the translator’s concern to maintain direct resemblance at word level to achieve a similar verbal effect. Montalbano is deliberately underestimating his investigation and emphasising the importance of Augello’s case. The audience’s interpretation needs to be backed up by awareness of the life-long friendship between the interlocutors. The deputy Mimì Augello embodies the stereotype of the narcissist Latin lover, extremely vulnerable to the charm of women, unfaithful, egocentric and ambitious, unable to control his feelings of competition towards Montalbano. This context sets the background for continuous exchanges of ironic comments about each other. Montalbano’s language reveals how the line between irony and sarcasm is not clearly marked (Muecke 1982, Haiman 1998; Mizzau 1984) and the use of an exaggerated tone of voice or gestures may contribute to the change of perception, making the message more explicit and therefore enhancing sarcasm (Montgomery et al. 2007). Irony and sarcasm are not considered as two separate features of discourse but as part of the same category, although sarcasm is recognized as “an overtly aggressive type of irony” (Attardo 2007, 137) where the victim and the hearer are both aware of the criticism implied. Moreover, sharing the same cultural perspective is essential to appreciating the ironic reference to the gesture of bowing at the prefect. The Italian audience is likely to understand and appreciate the overtly ironic tone of the last utterance, through this allusion to the rather common and despicable expectation of building careers on favouritism
rather than merit; hence, over-formality marks the hierarchical relation with people in powerful positions. This scene offers yet another case of literal translation at word level.

Augello: A te come va? | What about you?
Montalbano: Ah… roba da poco. Niente a che vedere con la tua inchiesta che rischia di avere delle ripercussioni internazionali. A proposito, com’è andata dal prefetto? Ti sei ricordato di fare l’inchino? | Not a lot. Unlike your case, which could have international repercussions. How did you get on with the prefect? Did you remember to bow?
Table 4-2: from The Potter’s Field In the excerpt that follows (see Table 4-3), irony is first signalled by echoic mention and repetition that function meta-ironically and can be considered as the most frequent ironic marker connoting Montalbano’s language, emphasized in this scene by marked intonation and gestures. If the audience is familiar with the series, they will probably be prepared for Montalbano’s habit of taking every occasion to mock Jacomuzzi less for a specific reason than for an ancestral rivalry between the police and other investigating forces. This probably has to do with the fact that rather than following the scientific approach of Forensics to establishing facts, the inspector prefers to rely on instinct and intuition for his investigations. Jacomuzzi from Forensics is reporting on the possible weapon used in a murder. What seems a rather innocent comment on a possible clue offers Montalbano a good opportunity to make an ironic remark. The echoic phrase “una squama de pesce/a fish scale?” repeated by both interlocutors functions meta-ironically and is an example of an ironic cue. The translator does not omit the source dialogue repetitions that act as echoic triggers, probably in order to maintain the maximal effect with minimal interpretative effort, with a view to leading into the next utterance “mi hai consegnato il colpevole/You’ve solved the case!” that functions structurally as an expression of contradiction and incongruity with the situation. This becomes more explicit in the rhetorical question that follows “Che tipo di pesce è/What kind of fish was it?” While, initially, the irony is not noticed by Jacomuzzi, he then becomes fully aware of being the victim of criticism: “perché te la prendi sempre con uno della scientifica/why have you got it in for us Forensics guys?” The utterance offers a sample of the fourth type of ironic trigger which is identified by Hutcheon (1994) as literalization, that is, the answer does not reply coherently to the semantic meaning of the previous question but comes as
a new question explicitly addressing Montalbano’s irony. Because of the need to condense, the subtitles somewhat mitigate the ironic effect through the omission of the periphrastic utterances “E si può sapere/is it possible to know?” and of Jacomuzzi’s enforcement in his reply “ma io voglio capire/but I would like to understand”, which help to stress the ironic tone in the dialogue. In any case the ironic force of the scene is also conveyed by marked intonation and gestures that compensate for the condensation of the subtitles.

Jacomuzzi: Coltello da cucina, molto usato. Manico di legno. A proposito, io ho trovato tra la lama e il manico una squama de pesce. | A kitchen knife, used a lot. Wooden handle. Between the blade and the handle I found a fish scale.
Montalbano: Una squama de pesce? | A fish scale?
Jacomuzzi: Eh! Una squama de pesce. | Yeah, a fish scale.
Montalbano: Jacomù, mi hai consegnato il colpevole! E si può sapere che tipo de pesce è? Sarago, orata, triglia? No, tu dimmelo perché se no mi lasci nell’ansia. | Jacomù, you’ve solved the case. What kind of fish was it? Bream, bass, mullet? Tell me, don’t leave me hanging.
Jacomuzzi: Ma io voglio capire Montalbano, perché te la prendi sempre con uno della scientifica? | Why have you got it in for us Forensics guys?
Table 4-3: from The Snack Thief In the third example (see Table 4-3) the translator’s intention to mark the utterances as ironic is also emphasized by the presence in the subtitles of typical features of oral discourse in the written form. Hence, the use of colloquialisms such as “yeah” and the shortening form for the proper name “Jacomù”, although being redundant features of spoken language that could have been easily omitted in translation, serve the purpose of signalling the sardonic tone of the dialogue. The scene continues, as shown in Table 4-4, with Montalbano’s explicitation of his criticism in an overtly sarcastic tone. Again at this point we find that some interjections have been omitted. This is the case of “putacaso/suppose”, of “ma dico potrebbe/I’m saying it might”, stressing the remoteness of the possibility that Jacomuzzi’s comment might be of any interest to the investigations, and further on of the rhetorical device added to the last question “ma mi vuoi dire/but do you want to tell me?” Once more, the translation of “che minchia” in “what the fuck” in the
subtitles marks the attack in a rather direct and explicit manner, to compensate for previous omissions. Its use is very common in informal spoken language and has a marked connotation, in particular in the mixture of Sicilian dialect and standard Italian that characterizes Montalbano’s language, but this often tends to be omitted in translation.
[The Italian dialogue and English subtitles in this table are not legible in this copy; only fragments survive, e.g. “putacaso”, “ma dico potrebbe”, “mi vuoi dire che minchia di significato ha”, “E gli altri 30 perché non lo mangiano?” and, in the subtitles, “what the fuck could it mean”.]
Table 4-4: from The Snack Thief
The examples that follow (see Tables 4-5, 4-6, 4-7) offer more cases of irony triggered by echoic mention as the most recurrent element in Montalbano’s construction of irony. In the example in Table 4-5, irony is structured by the echoic mention of the title “cavaliere del lavoro/order of merit for labour”, uttered the first time by Fazio for the informative purpose of reporting to the inspector the details of a person found dead. In the source dialogue, Montalbano repeats the prestigious title with the ironic purpose of showing a strong indignation for a person who, regardless of a public title, was involved with Mafia. The example shows that, although Grice’s maxim of quality is not flouted because the
statement is true, the ironic utterance is based on its echoic mention with a negative attitude. In the subtitle, the omission of the echoic mention stops the transfer of verbal irony and Montalbano’s question loses its negative evaluation.

Fazio: Pagnozzi Calogero, cavaliere del lavoro. Nato a Vigàta nel 1940 | So, Calogero Pagnozzi. Order of Merit for Labour. Born in Vigàta in 1940…
Montalbano: Senti facciamo prima. Ma nel tuo dossier c’è scritto pure che il cavaliere del lavoro Pagnozzi faceva affari con la Mafia? | Does your report mention that Pagnozzi did business with the Mafia?
Table 4-5: from Montalbano’s Croquettes

Camilleri’s writing is often the expression of indignation and reproach against State corruption. Missing the ironic trigger at a textual level, the target audience is unlikely to grasp the author’s intention of fierce criticism for a corrupt man who has achieved State recognition. After a few lines, in Table 4-6, there is another ironic comment along the same line. Here the translator maintains the repetition and therefore the ironic trigger represented by the use of a metaphor. However, the substitution of “chi di dovere/someone in charge” with the more general “someone” softens the bitter reference not only to the police but primarily to the corruption and weakness of the Italian government in prosecuting criminals.

Fazio: L’hanno fermato ultimamente per ubriachezza molesta e per guida pericolosa però poi se l’è sempre cavata, perché chi di dovere ha chiuso un occhio. | Stopped for being drunk and disorderly and for dangerous driving, always got off because someone turned a blind eye.
Montalbano: Stiamo sempre a chiudere un occhio. | We’re always turning a blind eye.
Table 4-6: from Montalbano’s Croquettes A further example of explicitation of this theme comes in Table 4-7, a scene in which Montalbano has a direct confrontation with his superiors, who are usually portrayed negatively. Hence, when irony is directed against arrogant representatives of the State, the victims usually remain unaware. Therefore, there is no possible complicity or negotiation to be
established between the ironist and his interlocutor. Here Montalbano is just listening to the incoherent conclusions formulated by the commissioner, who is more interested in closing the case as quickly as possible, using Albanian immigrants as scapegoats, than in discovering the truth. Montalbano, whose facial expression on screen betrays disapproval, limits his replies to echoic mentions of the commissioner’s utterances, which are literally translated.

Commissioner: Ha capito benissimo che sto parlando dell’uccisione del commendator Pagnozzi e di sua moglie. | You know very well that I mean the murder of Pagnozzi and his wife.
Montalbano: L’uccisione del commendator Pagnozzi e di sua moglie? | The murder of Pagnozzi and his wife?
Commissioner: È un’ipotesi condivisa anche dal giudice Scognamiglio. Ma non mi dica che non aveva fatto il collegamento? | Judge Scognamiglio shares the same theory. Don’t tell me you hadn’t made the connection too.
Montalbano: Eh signor questore, io il collegamento non l’avevo fatto. | No, sir. I hadn’t made the connection.
Table 4-7: from Montalbano’s Croquettes In the original dialogue the ironic function of the last utterance is further stressed by its syntactic construction which presents a case of left dislocation of the object “il collegamento/the connection” in order to further emphasize Montalbano’s criticism. Although the translation does not present the same type of syntactic construction, the repetition of the exact word order just uttered by the commissioner is still effective in triggering irony through echoic mention. In addition to verbal irony the scene is also marked by prosody and explicit body gestures. As outlined by Pelsmaekers and Van Besien (2002) in their quantitative account on subtitling irony, at a locutionary level to maintain ambiguity in the subtitles is rather straightforward and the majority of verbal cues are generally not omitted but might undergo some degree of modification and therefore produce a different effect. In most of the examples presented above the ironic triggers are maintained, so that at a textual level the ambiguity is also evident in the subtitles, in spite of the omission of some interjections and rhetorical devices. However, as argued in the construction of the theoretical framework of this paper, irony requires contextual interpretation in a discursive community (Hutcheon 1994). Montalbano’s audience may not be from Italy and not belong to the
same linguistic or regional background and they may already hold specific expectations due to a previously established stereotypical image of Sicily: nevertheless, they may still share common sensitivity in their interpretation. The realistic portrayal of Camilleri’s characters, although strongly localized, makes them universal and capable of inspiring participation and empathy, thanks to their strong sense of justice, their beliefs in the positive values related to preserving local traditions and their strong opposition against the corruption of contemporary society.
4. Conclusion

The translation of irony is a rather elusive phenomenon that has been addressed from a number of different angles. This paper has adopted a dynamic and pragmatic approach that places equal importance on the ironist and on the interpreter considered in a shared context. Through this approach a description has been carried out of what happens when irony is part of an audio-visual product and becomes therefore subtitled. In order to describe the mechanisms that guide the translation of verbal irony at a textual level, Hutcheon’s (1994) descriptive framework of ironic markers that can function both meta-ironically and structurally has been considered. The samples selected showed that ambiguity, expressed at a textual level through changes of register, exaggeration, understatement, contradiction and literalization, does not present particular problems in the subtitles, which usually present cases of literal translation, while there are no cases of adopting new ironic triggers that were not already present in the source dialogue. Moreover, the attention was focused on samples that presented cases of echoic mention and repetition, as the starting hypothesis was that they might have been omitted or reduced because redundant, whether functioning meta-ironically or structurally. Most of the examples show that repetitions are also maintained not only when they are the main ironic triggers but also when they are present in the source dialogue merely as ironic cues, in the form of interjections, colloquial forms of address, etc., hence, their omission would seem less relevant. The analysis shows the translator’s concern to emphasize ironic utterances in the subtitles, placing maximal effort in the re-codification of verbal ironic triggers in order to guarantee maximal effect and minimal effort on behalf of the target audience. Moreover, the existence of paralinguistic and prosodic features present on screen supports the target audience’s interpretation as well as the fact that Montalbano’s irony develops patterns of familiarity and expectation in the audience, as he tends to use it always with the same
characters. While acknowledging the translator’s active presence in the subtitles, as he/she manages to act as the ironist in the majority of situations, at this stage of the research not much can be added about the audience’s emotive response referred to throughout the paper; therefore, further investigation should be carried out, particularly on how their expectations may influence the perception of verbal irony.
References

Primary sources

Camilleri, Andrea & Sironi, Alberto. 2011. “The Potter’s Field.” In Inspector Montalbano, edited by Andrea Camilleri & Alberto Sironi. Aztec International, Australia: C. Degli Esposti.
Camilleri, Andrea & Sironi, Alberto. 2012. “Montalbano’s Croquettes.” In Inspector Montalbano, edited by Andrea Camilleri & Alberto Sironi. Acorn Media UK: C. Degli Esposti.
Camilleri, Andrea & Sironi, Alberto. 2012. “The Snack Thief.” In Inspector Montalbano, edited by Andrea Camilleri & Alberto Sironi. Acorn Media UK: C. Degli Esposti.
Secondary sources

Attardo, Salvatore. 2000. “Irony as Relevant Inappropriateness.” Journal of Pragmatics, 32: 793-826.
—. 2007. Irony in Language and Thought. New York: Taylor and Francis.
Austin, John Langshaw. 1975b. How to do Things with Words. Oxford: Oxford University Press.
Barbe, Katharina. 1995. Irony in Context. Amsterdam: John Benjamins.
Berrendonner, Alain. 1981. Éléments de Pragmatique Linguistique. Paris: Les éditions de Minuit.
Booth, Wayne. 1974. A Rhetoric of Irony. University of Chicago Press.
Carroli, Piera. 2010. “Camilleri’s Detective Narrative: The Global Triumph of a Sicilian Inspector.” In Sweet Lemons 2. International Writings with a Sicilian Accent, edited by Venera Fazio & Delia De Santis, 156-160. Ottawa: Legas.
Chiaro, Delia. 2005. “Verbally Expressed Humour and Translation: An Overview of a Neglected Field.” Humor. International Journal of Humor Research, 18(2): 135-145.
Chakhachiro, Raymond. 2009. “Analysing Irony for Translation.” Meta, 54(1): 32-48.
De Beaugrande, Robert-Alain & Dressler, Wolfgang. 1981. Introduction to Text Linguistics. London: Longman.
Delabastita, Dirk. 1990. “Translation and the Mass Media.” In Translation, History & Culture, edited by Susan Bassnett & Andre Lefevere, 97-109. New York: Cassell.
De Wilde, July. 2010. “The Analysis of Translated Literary Irony: Some Methodological Issues.” Linguistica Antverpiensia, 9: 25-44.
De Santis, Raffaella. 2011, March 11. “Tradurre Montalbano nello Slang del Bronx.” La Repubblica: 58.
Gibbs, Raymond W. 1994. The Poetics of Mind: Figurative Thought, Language and Understanding. Cambridge: Cambridge University Press.
Gibbs, Raymond W. & Colston, Herbert L. 2007. Irony in Language and Thought. A Cognitive Science Reader. New York: Taylor & Francis Group.
Giora, Rachel. 1997. “Understanding Figurative and Literal Language: The Graded Salience Hypothesis.” Cognitive Linguistics, 7: 183-206.
Grice, Herbert Paul. 1975. “Logic and Conversation.” In Syntax and Semantics: Speech Acts (Vol. 3), edited by Peter Cole & Jerry Morgan, 41-58. New York: Academic Press.
Gutt, Ernst-August. 1991. Translation and Relevance. Cognition and Context. Oxford: Blackwell.
—. 1998. “Pragmatic Aspects of Translation. Some Relevance-Theory Observations.” In The Pragmatics of Translation, edited by Leo Hickey, 41-53. Clevedon: Multilingual Matters.
Haiman, John. 1998. Talk is Cheap: Sarcasm, Alienation, and the Evolution of Language. Oxford: Oxford University Press.
Hatim, Basil & Mason, Ian. 1990. Discourse and the Translator. London: Longman.
Holdcroft, David. 1983. “Irony as a Trope, and Irony as Discourse.” Poetics Today, 4(3): 493-511.
Hutcheon, Linda. 1994. Irony’s Edge: The Theory and Politics of Irony. London: Routledge.
Koster, Cees. 2000. From World to World: An Armamentarium for the Study of Poetic Discourse in Translation. Amsterdam: Rodopi.
Mateo, Marta. 1995. “The Translation of Irony.” Meta, 40(1): 171-178.
Mizzau, Marina. 1984. L’Ironia. La Contraddizione Consentita. Milano: Feltrinelli.
Montgomery et al. 2007. Ways of Reading. London: Routledge.
Muecke, Douglas Colin. 1982b. Irony and the Ironic. London: Methuen.
Nash, Walter. 1985. The Language of Humour. London: Longman.
Pelsmaekers, Katja & Van Besien, Fred. 2002. “Subtitling Irony.
Blackadder in Dutch.” The Translator, 8(2): 241-266.
Searle, John. 1969. Speech Acts. Cambridge: Cambridge University Press.
Sperber, Dan & Wilson, Deirdre. 1992. “On Verbal Irony.” Lingua, 87(1): 53-76.
—. 1995b. Relevance: Communication and Cognition. Oxford: Blackwell.
Vandaele, Jeroen. 2002. “(Re)constructing Humour: Meaning and Means.” The Translator, 8(2): 149-172.
Zabalbescoa, Patrick. 2005. “Humour and Translation: An Interdiscipline.” Humour, International Journal of Humour Research, 18(2): 185-207.
Abstract

The language of Inspector Montalbano: a case of irony in translation

Key words: subtitling, verbal irony, ironic triggers

The translation of irony implies the recognition of the elements that produce it, as it is true for all the linguistic and rhetoric phenomena that connotate language, with the added difficulty that understanding irony presents the issue of interpretation and metalinguistic awareness, since “irony happens rather than exists” (De Wilde 2010: 41), in the sense that it can either be triggered by the receiver’s active participation or remain completely invisible and latent in the source text. Setting off from previous research concerning the translation of humour in the subtitles of the detective TV-series Inspector Montalbano, the aim of this paper is to focus more specifically on irony. From a methodological point of view, the issues concerning the translation of irony in subtitling will be considered adopting a dynamic methodological perspective (Hutcheon 1994), developing a descriptive framework of the ways in which irony is constructed in the source text and then carried across in the English subtitles. Through a range of examples, the focus will be placed on its heterogeneous functions at a linguistic, textual and pragmatic level. What happens when the author of the source text and the receiver do not share common background knowledge, since the interpretation of irony relies heavily on a set of shared assumptions in order to be carried across? And how is irony treated when this is a constitutive part of this source text and not just a marginal element? The translator, quite evidently, becomes the ironist, performing the double action of interpreting and reframing irony for the target audience.
CHAPTER FIVE
CULTURAL REFERENCES IN FANSUBS:
WHEN TRANSLATING IS A JOB FOR AMATEURS
ORNELLA LEPRE1
1. Fansubs: An Introduction

Over the last 25 years, a new type of subtitling, often referred to as fansubbing, has gained a relevant place in audiovisual translation. Although its rapid growth has stimulated the interest of researchers, a standardized nomenclature for the field is still missing. Bold (2011) reports a variety of expressions employed in academic studies to indicate both the activity (e.g., fansubbing, home-made subtitling, amateur subtitling, fan-based subtitling) and the translators who undertake such activity (e.g., fansubbers, internet subtitlers, amateur subtitlers, amateur subtitle producers). In this article, I refer interchangeably to "fansubs" or "amateur subtitles" to indicate subtitles made for foreign language audiovisual products in a non-professional environment.
The first fansubs were produced in the 1980s for Japanese cartoons (anime), as a spontaneous answer from groups of devoted fans to the dearth of commercial releases of anime in the US (Inëz Chambers 2012, Massidda 2012). During the early stages of fansubbing, the growth of the phenomenon was held back by the technical complexity of the subtitling process. Before they could subtitle a video, fans had to acquire the video in VHS format and rip it to a personal computer. Distribution was just as difficult, as it required posting VHS copies of the subtitled product around the world. The advent of the first Apple personal computer, with advanced but user-friendly video editing software, made the production of subtitles considerably easier; however, it was with the internet that fansubbing saw unprecedented growth.
1. Imperial College London, UK. Email address: [email protected]
In particular, the online sharing of videos made the process much faster. Anime fansubs are still produced today, along with fansubs for films, TV series and documentaries. Many fansubbing communities have sprung up in non-English speaking countries, with fans taking it upon themselves to subtitle English language films and TV series2. The rapid expansion of fansubbing attracted the attention of researchers, who analyzed how this new form of audiovisual translation evolved in specific countries such as Brazil (Bold 2011), Italy (Massidda 2012), and China (Chu 2012).
The growth in fansubbing arguably brought about an improvement in the quality of translations: as fansubbing groups became bigger and better organized, they could devote more time and resources to quality control. Many of these groups share a common structure (with slight variations, owing to the medium, the genre, and to group policies), where specific roles deal with different phases of the fansubbing process. In particular, the main tasks involved in the production of these subtitles are (1) acquisition of source material (with or without original language subtitles), (2) timing and translation, (3) editing, and (4) distribution of the product. The details of each phase vary across groups. For example, timing, spotting and translation may or may not be performed by the same person; the video file may be provided by members of the group who "cap" it from the original broadcast, or by external releasers; the final product distributed can be a video file with permanently embedded subtitles (hardsubs) or a text file that users must then attach to a video file obtained from other sources (softsubs). How the work is divided among fansubbers also depends both on the specific group and on the type of video subtitled. For example, Bold (2011) analyses a process that is not always present, called resync, where the timing of the fansubs is modified slightly, so that they are in sync with the audio of a different version of the video file; sometimes, a separate person, the resyncer, is employed for the task. Díaz-Cintas and Muñoz Sánchez (2006) point out that, when songs have a relevant place in anime, one of the fansubbers may be in charge of the "karaoke" parts. They also mention another role for hardsubs only, the encoder, who produces the subtitled program by merging the fansubs with the video.
By releasing subtitles as a text file rather than distributing a subtitled video, some fansubbing groups aim at avoiding copyright infringement. In fact, the legal status of fansubs is one of their most controversial aspects, and
2 While the phenomenon is not limited to English language products, these represent by far the largest share of fansubbed programs.
the focus of considerable academic interest (Kirkpatrick 2003, Lee 2010, Rembert-Lang 2012). Distributing fansubbed videos is undoubtedly illegal, but releasing text files made to be used in conjunction with illegally downloaded videos can hardly be claimed to be a lawful practice either. Still, some authors highlight how, irrespective of the reasons why fansubs are produced, they have contributed to the international success of many anime series and TV shows, creating a fan base for them outside their country of origin and often generating anticipation for an official release (Mäntylä 2010).
Despite the ethical and legal issues raised by fansubs, they have become a worldwide phenomenon, worthy of academic investigation for their magnitude and peculiar features. While no official data exist on the number of fansub users around the world, certain fansub websites provide useful indications on the matter. Itasa, Italian Subs Addicted, one of the main Italian fansubbing communities, has a message board with more than 300,000 registered users; Chu (2012) performed a survey on a Chinese community with around 700,000 members. The demand for fansubs can also be inferred by looking at the number of downloads, when such statistics are available: at the time of writing, the main page of Legendas.tv, a website providing Brazilian fansubs (also analyzed by Bold in her 2011 study), displays 70,000 downloads for the latest episode of the US TV series The Walking Dead—an impressive number reached in just three days, as the episode aired in the US on March 21, 2013, with the subtitles released only hours later.
1.1. Fansubbing in Italy

In recent years, due to the increasing popularity of fansubs, significant and rapid changes have occurred in the way TV series are enjoyed by the Italian public. Instead of watching TV programs when they are broadcast, usually with Italian dubbing, more and more people choose to download programs from the Internet and watch them in the original language with the help of fansubs. Currently, subtitles for approximately 80,000 episodes are downloaded from Itasa's website every week, on average. This number is indicative of the size of the phenomenon, and these downloads can be considered only the tip of the iceberg. The website only provides text files without the actual videos; however, a larger proportion of viewers prefer to download, through file-sharing software, the videos where other users have "pasted" the subtitles made by the fansubbers, and the figures mentioned above do not include these downloads.
The growth of fansubbing affects not only the viewership, but also the translators of foreign language programs. In 2009, the agency responsible
for the Italian version of the US TV comedy The Big Bang Theory was compelled to change its dubbing director and adaptors. The decision was motivated by complaints from Italian fans who were accustomed to watching the series in the original language with fansubs and were displeased with the drastic changes made in the dubbed version (Innocenti and Maestri 2010). Cultural adaptation is not a new phenomenon in Italian television (Tomaiuolo 2007); nevertheless, if The Big Bang Theory had aired with the same translation in 2005—when the practice of fansubbing was still in its early days—instead of 2009, the radical adaptation might have gone unnoticed.
The speed at which fansubs are released, usually less than three or four days after the original airing date of the program and often only hours later, has also had repercussions on the Italian networks' broadcasting schedules. In 2010, FOX Italia decided to air the final season of the cult US TV series Lost almost simultaneously with the US: each weekly episode aired the next day in English with Italian subtitles, followed by the dubbed version only one week later. The series finale aired at the same time as in the US, at 6 AM in Italy (three hours later, fansubs were available on the web), with the subtitled episode broadcast in the evening. Lost was undoubtedly a special case. Over the years, it had gained a vast following of fans who were not willing to wait long to see what would happen next in the story, and the series was partly responsible for the growth of the fansubbing phenomenon in Italy. Some of the founders of Itasa even declared that Lost was the main reason behind the community's birth in 2006, and Lost was arguably the reason why many Italian viewers discovered and turned to fansubs. However, the case of Lost, although unprecedented in Italy, did not remain an isolated one: the next year, similar airing policies were adopted for other US TV series such as Glee, No Ordinary Family and, in 2011, Falling Skies.
In view of the reach fansubs have gained among the Italian audience, this paper applies quantitative methods provided by statistical inference to study how fansubbers deal with culture-specific items. As a reference, translations made by professionals are also analyzed throughout the discussion.
2. Fansubs and Cultural References

2.1. Fansubs: Main Features and Technical Conventions

The subtitles analyzed in this study were created by Itasa. While most fansubs for Japanese anime are characterized by unconventional stylistic features (e.g., multiple colors and lines, special fonts, text that is not
always positioned in the lower portion of the screen; see Ferrer Simó 2005, and Pérez-González 2006, 2007), Itasa’s subtitles look much more like professional subtitles. Before examining the translations, it is important to ascertain what technical rules, if any, govern these subtitles, as technical constraints have the potential to determine the degree of text reduction and affect linguistic cohesion. Even layout and spatial issues, which may appear purely formal, are “inevitably linked to the distribution of text on the screen, and therefore to linguistic matters” (Díaz Cintas and Remael 2007, 81). Elements such as the number of lines in a subtitle, the structure of line breaks and the number of characters per line or per second can influence translation choices, by expanding or limiting the set of possible translations available. Itasa’s subtitles do not use multiple colors or fonts and have the “standard” fixed position in the lower part of the screen. The translators must follow precise guidelines when producing the subtitles3. Although they are not required to use specific software, most of them use a freeware software called VisualSubSync, as some members of the community have created a modified version that includes all the rules and parameter values to which their subtitlers must adhere. For example, Itasa’s “translator’s manual” specifies that subtitles can only have one or two lines; two lines are used when the first one exceeds 40 characters. A line can never have more than 45 characters. Lines with dialogue spoken by different characters are not allowed: if two characters speak in the same subtitle, each line can only be used for one of them. If a set of subtitles does not comply with these rules, a “reviser” will have to fix all the errors before releasing the subtitles to the public. Less specific instructions are given for line breaks, which are left to the common sense of the single translators; the community’s guidelines only advise splitting the text following syntactic as well as visual criteria, conforming to the view that linguistically coherent segmentation may improve readability and comprehension (Perego 2008; this view has been challenged by empirical studies which reached different conclusions on the effectiveness of subtitle processing in relation to different types of text segmentation - see Perego et al. 2010, and Rajendran et al. 2011). Other relevant parameters are the minimum (1 second) and maximum length (5 seconds) of each subtitle, the minimum gap between them (30 milliseconds) and, most importantly, the number of characters per second allowed. While this number depends on the length of each subtitle (the
3. This information comes from interviews with members of the community and from the author's own experience as a fansubber.
constraint being looser for longer subtitles), there is a target number, set at 30. Such a number is much higher than what is normally considered acceptable in the professional world, where the widespread "six-second rule" (Gielen and d'Ydewalle 1989) states that a viewer can read in six seconds a subtitle with two lines, each with 37 characters (resulting in an average of about 12 characters per second). While the reading speed required from viewers seems even more demanding in view of Italy's tradition as a "dubbing country," where a large portion of the audience is still not accustomed to watching subtitled programs, both fansubbers and fansub users are likely to have a higher reading speed than the average Italian TV viewer (due to their younger age and their familiarity with subtitles; see Massidda 2012). However, the higher number is also evidence of Itasa's negative attitude towards reduction, which is seen as a necessary evil but rejected wherever possible, disregarding readability in favor of "faithfulness." On the whole, these rules and conventions suggest that Itasa's translators might have more freedom than professionals in choosing their preferred translation procedures.
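To put these figures side by side, the short sketch below (Python) computes the reading speed implied by the six-second rule and checks a hypothetical two-line subtitle against the line-length, duration and speed limits described above; the helper function, the sample subtitle text and the timing value are all invented for illustration and are not part of Itasa's actual tooling.

```python
# Reading speed implied by the "six-second rule": two lines of 37 characters in 6 seconds.
six_second_cps = (2 * 37) / 6          # about 12.3 characters per second
itasa_target_cps = 30                  # target reading speed reported in Itasa's guidelines

def check_itasa_line_rules(lines, duration_s):
    """Rough check of the rules described above (illustrative only)."""
    if not 1 <= len(lines) <= 2:
        return False, "a subtitle can only have one or two lines"
    if any(len(line) > 45 for line in lines):
        return False, "no line may exceed 45 characters"
    if not 1.0 <= duration_s <= 5.0:
        return False, "duration must be between 1 and 5 seconds"
    cps = sum(len(line) for line in lines) / duration_s
    return cps <= itasa_target_cps, f"reading speed: {cps:.1f} cps (target {itasa_target_cps})"

ok, msg = check_itasa_line_rules(["Sono solo sottotitoli,", "ma qualcuno li deve pur fare."], 2.0)
print(f"six-second rule: about {six_second_cps:.1f} cps; {msg}; within target: {ok}")
```

On these illustrative numbers, the 30-cps ceiling permits roughly two and a half times the reading speed implied by the six-second rule, which is consistent with the community's reluctance to condense the dialogue.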
2.2. Translating Culture: Extralinguistic Culture-Bound References and Translation Procedures

This paper borrows Pedersen's definition of "extralinguistic culture-bound reference" (ECR), a "culture-bound linguistic expression which refers to an extralinguistic entity or process, and which is assumed to have a discourse referent that is identifiable to a relevant audience as this referent is within the encyclopedic knowledge of this audience" (Pedersen 2005, 2). La Forgia and Tonin (2009) also discussed the translation of cultural references in AVT, focusing on intertextual references and comparing the solutions adopted in the dubbed and subtitled versions of a TV series (Supernatural) in different languages and modalities, including Italian fansubs. Pedersen's definition includes intertextual references and covers a variety of other types, as long as they are expressed linguistically. On a similar note, Caffrey (2009) analyzed culturally marked non-verbal cues (CNVCs) in fansubs for Japanese anime. However, CNVCs differ from ECRs in that the former are not necessarily linked to a linguistic expression; as non-verbal items are often "translated" in anime fansubs with glosses on screen, this makes it possible to carry out a study of intersemiotic translation (Gottlieb 2005) which is not possible for all types of fansubs.
As a case study to see how fansubs deal with ECRs, I chose the US comedy series 30 Rock, a choice that was mainly determined by the huge number of cultural references it contains: the total number recorded by
viewing the 43 selected episodes and noting down the ECRs was 1953, for an average of 45 per episode. The 43 episodes make up the first and third seasons of the series, aired on NBC, respectively, from October 2006 to May 2007, and from October 2008 to May 2009. Each episode was translated by a team of 4 or 5 people; the composition of the team was mostly stable throughout each season and both seasons had the same reviser, a translator who was in charge of proofreading the team's work and ensuring linguistic and stylistic consistency in each episode and throughout the series. Seasons 1 and 3 were chosen, instead of two consecutive seasons, in order to better highlight possible changes in translation practices; although three years might not seem like a very long interval, such changes are not unlikely, as fansubs gained greatly in popularity in Italy during these years.
Although the only deadlines fansubbers have for the release of their subtitles are self-imposed, fansubbing groups often work with very strict time schedules. Itasa's subtitles for 30 Rock were usually released between 4 and 7 days after each episode had aired in the US. Compared to other series, for which fansubs are available the morning after the original airing date, 4–7 days is not a particularly rushed deadline for the translators. However, the speed is still considerable compared to the official dubbed version of 30 Rock, broadcast on Sky Italia's satellite channels beginning in January 2009, a delay of two and a half years from the show's US airing.
The first step in preparation for the analysis was to record all occurrences of ECRs, along with their Italian translations. The references were then classified by subject into eleven categories. This study adapts taxonomies proposed by Newmark (1988) and Espindola and Vasconcellos (2006) to define the cultural categories; the distribution of the cultural references is shown in Table 5-1 and examples clarify the content of each category4. Columns 4 and 5 include examples of translations from Itasa's fansubs and from the official dubbed version, in order to illustrate some possible translation procedures.
The references were then classified according to the translation procedures used. As the statistical model chosen for the empirical analysis (the probit model) requires all the variables to be expressed in binary form, each reference was labeled with a 1 if the translators chose a TL-oriented approach, or with a 0 if they opted for a SL-oriented approach. An initial look at the sample tells us that a variety of procedures was used in creating these fansubs. Table 5-2 illustrates the translation procedures considered relevant for this study, many of which were first proposed by Vinay and Darbelnet (1973), Newmark (1988) and Aixelá (1996).
4. In the tables, back-translation is provided in square brackets unless the Italian is a name or a literal translation of the original.
Number of occurrences | Subject | Examples | Translations (Itasa fansubs and official dubbing)
407 | People | David Blaine | David Blaine / un prestigiatore [an illusionist]
    |        | Mc Lyte | Mc Light / Paris Hilton
152 | Social culture / Current events | at a parade | a una parata / a una sagra [at a country festival]
    |        | Hurricane Katrina | l'uragano Katrina (both versions)
186 | Entertainment / Sports | the Oscars | gli Oscar (both versions)
    |        | the Super Bowl | al Super Bowl / la finale di football [the football final]
175 | Toponyms / places | to Midtown | a Midtown / al centro [to the centre]
    |        | He's from Bowie, Maryland | Viene da Bowie, Maryland / Viene dal Maryland [He's from Maryland]
176 | Food and drink | smells like corn chips | puzza di patatine al mais [smells like corn chips] / puzza di fritto [smells like fried food]
    |        | Mozzarella sticks | Mozzarelline impanate [fried baby mozzarellas] / Salatini [Crackers]
186 | Fiction (names of fictional characters, titles of fictional works) | The Hamburglar | Ronald McDonald / Ronald
    |        | My friend in accounting, Lando Calrissian | Il mio amico della contabilità, Han Solo / Un mio amico della contabilità, Lando Calrissian
46 | Institutions | the White House | la Casa Bianca (both versions)
    |        | 3:15. Time for Union break. | Facciamo tutti una pausa. [Let's all take a break.] / Facciamo una pausa. [Let's take a break.]
28 | Religion | Like Jesus in the wilderness | come Gesù nella foresta [like Jesus in the forest] / come Gesù nel deserto [like Jesus in the desert]
    |        | my whole church group | i miei amici [my friends] / il mio intero gruppo parrocchiale [my whole church group]
23 | Measuring system | 127 pounds. | 57 chili [57 kilos] / 127 libbre
    |        | 6'2" | 1 metro e 90 [190 cm] / 1 e 85 [185 cm]
154 | Other | R-rated | a luci rosse ["red lights", Italian expression that can be used as an adjective with a similar meaning to R-rated for sexual content] / vietata ai minori [forbidden to minors]
    |        | his ATM pin codes | i codici delle sue carte di credito [his credit card pin codes] / i codici di tutti i suoi bancomat [all his cash card pin codes]
Table 5-1. Examples of ECRs from episodes of the US TV series 30 Rock and their translation
Source Language oriented procedures (= 0)
Retention: If there's a red carpet → se c'è un red carpet
Specification: 1) Explicitation: AFI → American Film Institute; 2) Addition: At the Borgata → Al Borgata hotel; I'm gonna smash these barrels! → adesso rompo i barili come Mario in Donkey Kong! [I'm gonna smash the barrels like Mario in Donkey Kong!]
Calque: Spring break → pausa di primavera; A-plus → A-più
Linguistic Translation: 5 inches → 5 pollici

Target Language oriented procedures (= 1)
Generalization: David Blaine → un prestigiatore [an illusionist]; a Barnes and Noble → una libreria [a bookshop]
Limited Universalization: confederate flag → bandiera a stelle e strisce [stars and stripes flag]
Absolute Universalization: a pancake house → un negozio di dolci [a sweet shop]
Substitution: 1) Cultural substitution (or cultural equivalence): The Hamburglar → Ronald McDonald; Tom Brady → David Beckham; 2) Paraphrase: Scott Peterson → un condannato a morte [a man sentenced to death]
Omission: I studied voice at Northwestern → Ho studiato canto [I studied singing]
Table 5-2. Translation procedures observed in the sample
Omitting the ECR is considered here a TL-oriented approach, as it implies an extreme modification to the SL text in order to maximize the relevant information in the TL text and minimize the viewer’s effort. Using a preexisting “official equivalent” is considered neither TL- nor SL-oriented, as it can be argued that this choice is a “bureaucratic rather than linguistic process” (Pedersen 2005, 3). Examples of this procedure are the rendering of the cartoon family “the Jetsons” as “i Pronipoti” (the official Italian name of the series) and the use of “la Casa Bianca” for “the White House.”
3. Empirical Analysis – Probit Regression

The goal of this analysis is to study how selected factors affect the choice between SL-oriented and TL-oriented procedures for translating ECRs. Statistical inference provides a variety of instruments that can be profitably used in linguistics5. In particular, regression models estimate how one or more independent variables affect a dependent variable, by estimating the parameters that define their relationship. When the phenomenon under investigation can be expressed as a binary outcome (as it is in our case, where a TL-oriented procedure is either chosen or not), the relationship can be modeled using a probit model, which represents how changes in the independent variables affect the probability of a particular event occurring.
As mentioned in the previous section, many factors are likely to determine the choice of translation procedures. The analysis includes the cultural categories to which each ECR belongs and other constraints on its translation. The probit model can be expressed as follows:

p = Φ(α + β1 food + β2 geo + β3 social + β4 entertainment + β5 people + β6 fiction + β7 institutions + β8 religion + β9 measure + β10 other + β11 constrained + ε)

where TL, our dependent variable, is the probability that a TL-oriented procedure is used (measured by the probability that TL, a binary variable, has value 1); Φ is the Cumulative Distribution Function (CDF) of the standard normal distribution; α is a constant; ε is the error term. The explanatory variables are also dichotomous variables: they can only assume two values, 0 or 1, as illustrated in the examples in Table 5-3. If an ECR belongs to more than one cultural category, the explanatory variables for all its cultural categories will have value 1.
5. A comprehensive discussion of the possible applications of statistical methods in this field can be found in Baayen (2008), Johnson (2008) and Gries (2009).
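The chapter does not state which software was used to estimate the model, so the following is only a minimal sketch, assuming Python with the statsmodels package, of how a probit regression of this kind could be set up; the data frame, the subset of dummy variables and every value in it are invented for illustration.

```python
import pandas as pd
import statsmodels.api as sm

# Toy data: one row per ECR. The dummy columns mirror a few of the explanatory
# variables in the model; TL = 1 if a target-language-oriented procedure was chosen.
data = pd.DataFrame({
    "TL":          [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1],
    "people":      [0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0],
    "measure":     [1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0],
    "constrained": [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1],
})

exog = sm.add_constant(data[["people", "measure", "constrained"]])  # adds the constant α
result = sm.Probit(data["TL"], exog).fit(disp=False)
print(result.summary())   # estimated β coefficients, standard errors and significance levels
```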
The variable constrained is inserted to isolate the effect of specific translation constraints on the choice between SL-oriented and TL-oriented translations. First, this variable helps to control for the fact that additional constraints can derive from the particular nature of audiovisual translation. In episode 1.5, 30 Rock jokes about product integration, a marketing technique in which the advertised product is not only seen on the screen (the so-called "product placement"), but also integrated into the show. In a scene paid for by Snapple, a popular US brand of juice drinks, the dialogue in Example (1) takes place:

(1)
Liz: "I'm sorry, you're saying you want us to use the show to sell stuff?"
Jack: "Look, I know how this sounds."
Liz: "No, come on, Jack, we're not doing that. We're not compromising the integrity of the show to sell—"
Pete: "Wow. This is Diet Snapple?"
Liz: "I know, it tastes just like regular Snapple, doesn't it?"
Frank: "You should try 'Plum-a-Granate,' it's amazing…"
Cerie: "I only date guys who drink Snapple."
Jack: "Look, we all love Snapple—Lord knows I do—but focus, here."
As can be seen in Fig. 5-1, which shows four frames taken from this short scene, there are several visual references to the product, which would make it very difficult for the translators to substitute Snapple with a different brand of drinks, more familiar to the Italian audience.
Fig. 5-1. Constraints determined by visual references (a bottle of Snapple) in 30 Rock
In addition, some ECRs are used in puns. In episode 3.14, the characters are discussing a way to make it possible for Jenna to appear both in a TV show and in a movie without too much stress (see Example (2)). Jack suggests a solution:
(2) "…I'll scale back the movie, we could cut the lesbian scene."
Jenna, disappointed, replies:
(2) "But the Oscars love that kind of thing!"
which prompts puzzled looks from Jack and Liz, skeptical both about this supposedly well-known fact and about the possibility of Jenna being considered for an Academy Award. But Jenna clarifies: (2) “There’s two guys at my gym named Oscar.”
Here, the reference to the "Oscars" did not pose a challenge for the fansubbers, as the awards are familiar to the Italian public and Oscar is also a proper name in Italy. However, if "Oscar" were not immediately recognizable as a proper name, the translation of this ECR would have been more difficult, even if the "Oscars," intended as the Hollywood movie awards, are known to the Italian audience. Inserting the constrained variable helps to separate the effect these constraints have on the choice we intend to study from the effect of the cultural categories. For each observation, then, if the ECR clearly presents a characteristic that can limit the translators' options, constrained will have value 1. Its role in the choice between TL- and SL-oriented procedures is captured by the coefficient β11. Table 5-3 presents the results of the regression.
Explanatory variable | Estimated β coefficient (effect on the probability of TL-oriented procedures)
food | .1965132
geo | -.6965164
institutions | -.187561
entertainment | -.5477017
social | -.1042192
religion | -.4614795
constrained | .4262691
people | -1.209733
fiction | -.7278151
measure | .8594652
other | .3784236
constant | -.6419328
Note. β for institutions significant at 90%; all other βs significant at 95%
Table 5-3. Results of probit regression performed on fansubs

In column 2, the estimated coefficient represents the effect each explanatory variable has on the probability that a TL-oriented procedure is chosen (i.e., the probability that the dependent variable TL has value 1). A positive coefficient means that, all other things equal, an increase in the explanatory variable leads to an increase in the predicted probability. Since our explanatory variables can only assume two values, 0 and 1, a positive coefficient indicates that if a variable has value 1, as opposed to 0, the probability of seeing a TL-oriented translation will increase. As the magnitude of each coefficient indicates the strength of each variable's positive or negative effect on the predicted probability, the statistical analysis provides intuitive results for the variables representing the various cultural categories. For example, the variable measure, which has value 1 if the ECR pertains to a particular measuring system, has the largest positive coefficient. As can be expected, such ECRs are usually adapted for the target language; a measure expressed in inches will almost invariably be converted into its metric equivalent for the Italian viewers. In contrast, people, fiction and geo display the largest (in absolute value) negative coefficients; not surprisingly, all other things equal, proper names (which account for the largest part of these categories) are less likely to be heavily adapted than other ECRs. The coefficient associated with the constrained variable is not statistically significant. In this case, neither a positive nor a negative effect would have been the obvious result. There is no reason to believe that ECRs used, for example, to create wordplay should be more likely (or less likely) to be adapted for the target audience. Translators may be lucky and find an "easy" pun that allows them to keep both the wordplay and the reference (as in the case of the "Oscars" pun mentioned above), but other times a proper name, although familiar to the target audience, has to be replaced if translators want to reproduce a similar joke in the TL.
4. Fansubbers and Professionals: Moving in the Same Direction?

Although there are no official Italian subtitles for 30 Rock, we can check how professional translators dealt with the same ECRs in the dubbed version of the series. A probit analysis performed on the official translation shows that professional translators dealt with the different cultural categories in a way that mirrors very closely the decisions of fansubbers.
Explanatory variable | Estimated β coefficient (effect on probability of TL-oriented procedures)
food | -0.22549
geo | -0.50985
humor | 0.586575
people | -0.73546
fiction | 0.196423
institutions | 0.001397
entertainment | -0.11536
social | -0.01878
religion | 0.381314
other | -0.05713
constant | -0.47059
Note. All βs significant at 95%
Table 5-4. Results of probit regression performed on official translations

Again, the estimated coefficients from the regression are significant at a 95% level of confidence; hence, the relationship between our variables is not due to chance. Most estimated coefficients in Table 5-4 display the same sign (positive or negative) as those in the analysis on fansubs, and
their relative magnitude is also very similar, which indicates that the cultural categories that are most likely to be treated with a TL-oriented approach are the same for both groups. Even so, a difference can be observed if we adopt a diachronic perspective. As Fig. 5-2 shows, the official translations made for dubbing display a noticeable decrease in the use of TL-oriented procedures. As dubbing directors and dialogue adaptors were the same for both seasons, this finding suggests a possible change in translation strategies.

Fig. 5-2. Number of TL-oriented procedures used per episode by professional translators (y-axis: number of TL-oriented translations; x-axis: episodes 1–43)
This interpretation, however, is not obvious. The decrease might simply reflect a change in the type of references featured in the series: for example, if an episode only contained proper-name ECRs, that episode would likely feature a heavy use of retention as a translation procedure. A probit regression can once again shed light on the issue, allowing us to check whether this is the case, or whether we can instead hypothesize a change in translation strategies. As fansubs do not seem to display such temporal variation, we will use them as a benchmark. To perform the check, a new explanatory variable is added, named season3. Like all the other independent variables used in the model, season3 is a binary variable: it assumes value 1 if an observation is taken from a Season 3 episode and 0 otherwise.
With the inclusion of the new independent variable, the signs and relative magnitudes of the old coefficients do not change for either group of translators. However, an important difference emerges. For the official translation, the estimated coefficient for the new variable season3 is -0.32 (statistically significant at 95%). Its negative value means that, for a given observation, if season3 has value 1, the probability that the dependent variable TL also has value 1 decreases. In other words, if the ECR is taken from an episode from the third season of the series, it is less likely that a TL-oriented procedure is used. The magnitude of the coefficient indicates that this effect is quite strong; since probit coefficients are expressed on the scale of the standard normal distribution rather than as probabilities, an estimated value of -0.32 corresponds to a markedly lower predicted probability of observing a TL-oriented procedure than for ECRs taken from the first season, with the exact size of the drop depending on the values of the other variables. In contrast, there is no such change in fansubs between seasons 1 and 3, as the estimated coefficient for season3 in the fansubs sample is small and not significant. The probit model allowed us to take into account factors such as the cultural category to which each ECR belongs, as well as other constraints that affect the translation. The results indicate that the translators were noticeably more prone to opt for source-language oriented procedures when dealing with ECRs in the third season of the series, all other things being equal.
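Because probit coefficients live on the standard-normal scale, the implied change in probability depends on the baseline; the sketch below (Python, assuming scipy) illustrates this by combining the reported season3 coefficient with the constant term from Table 5-4, taken here only as a convenient baseline for a hypothetical season-1 ECR with all other dummies at zero.

```python
from scipy.stats import norm

beta_season3 = -0.32   # season3 coefficient reported for the official translations
baseline = -0.47       # constant term from Table 5-4, used as an illustrative baseline

p_season1 = norm.cdf(baseline)                   # predicted P(TL-oriented), season 1
p_season3 = norm.cdf(baseline + beta_season3)    # predicted P(TL-oriented), season 3

print(f"season 1: {p_season1:.2f}, season 3: {p_season3:.2f}, "
      f"drop: {p_season1 - p_season3:.2f}")
```

At this particular baseline the predicted probability falls from about 0.32 to about 0.21, a drop of roughly ten percentage points; with different values of the category dummies the same coefficient would translate into a larger or smaller difference.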
5. Conclusions

By looking at amateur subtitles for 30 Rock, a popular US TV comedy, this paper highlighted the many different translation procedures fansubbers used to deal with extralinguistic culture-bound items. A qualitative analysis of these subtitles found that a SL-oriented approach was adopted for a large number of culture-specific references. A statistical analysis was then used to investigate the factors motivating the choice between SL- and TL-oriented procedures, showing how the cultural category of each reference was an important factor in the decision. Further investigation is needed to see if these findings can be considered representative of fansubbing as a whole. Also, it could be useful to compare fansubs to official subtitles for the same series, an analysis that was not feasible for 30 Rock, as official Italian subtitles are not available. Nevertheless, thanks to the large number of cultural references in the series, some interesting results were obtained by comparing the fansubs with the professional translations of the same items made for its Italian dubbed version. In particular, despite a striking similarity in how the two groups of translators were guided by the cultural
categories in the choice between a SL- and a TL-oriented approach, professional translators displayed a stronger inclination than fansubbers towards TL-oriented procedures in the first season analyzed, followed by a sharp increase in the use of SL-oriented procedures, which brought them much closer to fansub translation trends. In 2010, when the third season of 30 Rock aired with its official Italian dubbing, fansubbing of TV series had become a much more widespread phenomenon in Italy than it was three years earlier. Therefore, we cannot discard the possibility that professional translators, being able to access the fansubs already released, would look at some of the translation solutions chosen by fansubbers and adopt similar techniques. However, it might also be easier for fansubbers than it is for professionals to adapt to changing tastes, as they do not have to conform to established conventions; this interpretation would explain why an emerging tendency towards SL-oriented procedures appears in fansubs first, and only later in official translations. In any case, the results suggest that fansubs might provide indications on the direction in which audiovisual translation is evolving.
References

Aixelá, Javier Franco. 1996. "Culture-Specific Items in Translation." In Translation, Power, Subversion, edited by Román Álvarez & Carmen-África Vidal, 52-78. Cleveland/Bristol/Adelaide: Multilingual Matters.
Baayen, R. Harald. 2008. Analyzing Linguistic Data: a Practical Introduction to Statistics Using R. Cambridge: Cambridge University Press.
Bold, Bianca. 2011. "The Power of Fan Communities: An Overview of Fansubbing in Brazil." Tradução em Revista, 11(2): 1-19.
Caffrey, Colm. 2009. "Relevant Abuse? Investigating the Effects of an Abusive Subtitling Procedure on the Perception of TV Anime using Eye Tracker and Questionnaire" (PhD Thesis). Dublin City University, Dublin, Ireland. doras.dcu.ie/14835/1/ColmPhDCorrections.pdf Accessed 7 June 2013.
Chu, Donna. 2010. "Fanatical Labor and Serious Leisure in the Internet Age: A Case of Fansubbing in China." In Frontiers in New Media Research, edited by Francis L. F. Lee, Louis Leung, Jack Linchuan Qiu & Donna S. C. Chu, 259-277. New York: Taylor & Francis/Routledge.
Díaz-Cintas, Jorge & Muñoz Sánchez, Pablo. 2006. "Fansubs: Audiovisual Translation in an Amateur Environment." The Journal of Specialised Translation, 6: 37-52. Retrieved from www.jostrans.org/issue06/issue06_toc.php
Díaz-Cintas, Jorge & Remael, Aline, eds. 2007. Audiovisual Translation: Subtitling. Manchester: St. Jerome Publishing.
D'Ydewalle, Géry, Van Rensbergen, Johan & Pollet, Joris. 1987. "Reading a Message when the Same Message is Available Auditorily in Another Language: The Case of Subtitling." In Eye Movements: From Physiology to Cognition, edited by J. K. O'Regan & A. Lévy-Schoen, 313-321. Amsterdam/New York: Elsevier Science Publishers.
Espindola, Elaine & Vasconcellos, Maria Lucía. 2006. "Two Facets in the Subtitling Process: Foreignisation and/or Domestication Procedures in Unequal Cultural Encounters." Fragmentos, 30: 43-66.
Ferrer Simó, María Rosario. 2005. "Fansubs y Scanlations: la Influencia del Aficionado en los Criterios Profesionales." Puentes, 6: 27-43.
Gottlieb, Henrik. 2005. "Multidimensional Translation: Semantics turned Semiotics." In MuTra: Challenges of Multidimensional Translation – Saarbrücken 2-6 May 2005, edited by Heidrun Gerzymisch-Arbogast & Sandra Nauert. www.euroconferences.info/proceedings/2005_Proceedings/2005_Gottlieb_Henrik.pdf. Accessed 7 June 2013.
Gries, Stefan Th. 2009. Statistics for Linguistics with R: a Practical Introduction. Berlin/New York: Mouton de Gruyter.
Innocenti, Veronica & Maestri, Alessandro. 2010. "Il Lavoro dei Fan. Il Fansubbing come Alternativa al Doppiaggio Ufficiale in The Big Bang Theory." In Le Frontiere del "Popolare" tra Vecchi e Nuovi Media. Atti del Convegno Internazionale di Studi sull'Audiovisivo Media Mutations MM2010, edited by Claudio Bisoni. Bologna: Alma Mater Studiorum Università di Bologna.
Inëz Chambers, Samantha Nicole. 2012. "Anime: from Cult Following to Pop Culture Phenomenon." The Elon Journal of Undergraduate Research in Communications, 3(2): 94-101.
Johnson, Keith. 2008. Quantitative Methods in Linguistics. Malden, MA: Blackwell Publishing.
Kirkpatrick, Sean. 2003. "Like Holding a Bird: What the Prevalence of Fansubbing can Teach us about the Use of Strategic Selective Copyright Enforcement." Temple Environmental Law and Technology Journal, 21: 131-134.
La Forgia, Francesca & Tonin, Raffaella. 2009. "In un Tranquillo Weekend di Paura, un Esorcista Volò sul Nido del… Un Case Study sui Rimandi Intertestuali nel Sottotitolaggio e Doppiaggio Italiano e Spagnolo della Serie Supernatural." inTRAlinea, 11. http://www.intralinea.org/archive/article/In_un_tranquillo_weekend_di_paura_un_Esorcista. Accessed 7 June 2013.
Lee, Hye-Kyung. 2010. "Cultural Consumers and Copyright: A Case Study of Anime Fansubbing." Creative Industries Journal, 3(3): 235-250.
Mäntylä, Teemu. 2010. "Piracy or Productivity: Unlawful Practices in Anime Fansubbing" (MA/MS thesis). Aalto University School of Science and Technology. Retrieved from http://lib.tkk.fi/Dipl/2010/urn100297.pdf
Massidda, Serenella. 2012. "The Lost World of Fansubbers, the other Translators." In L'altro: I Molteplici Volti di un'Ineludibile Presenza, edited by Simona Cocco, Massimo Dell'Utri & Simonetta Falchi, 193-209. Rome: Aracne Editrice. doi: 10.4399/978885485657814
Newmark, Peter. 1988. A Textbook of Translation. Hertfordshire: Prentice Hall.
Pedersen, Jan. 2005. "How is Culture rendered in Subtitles?" In MuTra: Challenges of Multidimensional Translation – Saarbrücken, 2-6 May 2005, edited by Heidrun Gerzymisch-Arbogast & Sandra Nauert. www.euroconferences.info/proceedings/2005_Proceedings/2005_Pedersen_Jan.pdf Accessed 25 January 2013.
Perego, Elisa. 2008. "What would we read best? Hypotheses and Suggestions for the Location of Line Breaks in Film Subtitles." The Sign Language Translator and Interpreter, 2(1): 35-63.
Perego, Elisa, Del Missier, Fabio, Porta, Marco & Mosconi, Mauro. 2010. "The Cognitive Effectiveness of Subtitle Processing." Media Psychology, 13(3): 243-272.
Pérez-González, Luis. 2006. "Fansubbing Anime: Insights into the 'Butterfly Effect' of Globalization on Audiovisual Translation." Perspectives: Studies in Translatology, 14(4). Special issue: Manga, Anime and Video Games: The Translator's Turn?: 260-277.
—. 2007. "Intervention in New Amateur Subtitling Cultures: A Multimodal Account." Linguistica Antverpiensia, 6: 67-80.
Rajendran et al. 2011. "Effects of Text Chunking on Subtitling: A Quantitative and Qualitative Examination." Perspectives: Studies in Translatology, 21(1): 5-21.
Rembert-Lang, Latoya D. 2010. "Reinforcing the Tower of Babel: The Impact of Copyright Law on Fansubbing." Intellectual Property Brief, 2(2): 21-33.
Tomaiuolo, Saverio. 2007. "Translating 'America's Most Nuclear Family' into Italian: Dubbing and Cultural Adaption in The Simpsons." Translation and Interpreting Studies, 2(2): 43-73.
Vinay, Jean Paul & Darbelnet, Jean. 1973. Stylistique Comparée du Français et de l'Anglais. Paris: Didier.
Abstract
Cultural references in fansubs: When translating is a job for amateurs
Keywords: fansubs, cultural references, TV series

In view of the recent popularity of fansubs, this paper studies how this new form of audiovisual translation deals with culture-specific items. In particular, it illustrates a variety of translation procedures "fansubbers" use when translating extralinguistic culture-bound references (Pedersen 2005). Then, a statistical analysis shows how different cultural categories guide the translators' choice between a source language- and a target language-oriented approach. Finally, the study looks at how the same cultural references were treated by professional translators in the dubbed version of the series. The comparison highlights a striking similarity in how the two groups of translators were guided by the cultural categories in choosing between SL- and TL-oriented procedures. In addition, while fansubs do not present significant changes in the relative frequency of the two approaches, the official translations display, at first, a stronger preference than fansubbers for TL-oriented procedures, followed by a sharp increase in the use of SL-oriented procedures, which brought them much closer to fansub translation trends.
CHAPTER SIX
THE INFLUENCE OF SHOT CHANGES ON READING SUBTITLES – A PRELIMINARY STUDY
AGNIESZKA SZARKOWSKA, IZABELA KREJTZ, MARIA ŁOGIŃSKA, ŁUKASZ DUTKA AND KRZYSZTOF KREJTZ1
1. Introduction

It has been widely accepted in the professional literature on subtitling that subtitles should not be displayed over shot changes (see for example Díaz Cintas and Remael 2007, ITC Guidance on Standards for Subtitling 1999, Williams 2009). The reason for this, as stated by Díaz Cintas and Remael (2007, 91), is that "studies in eye movements […] have shown that if a subtitle is kept on screen where there is a cut change, the viewer is led to believe that a change of subtitle has also taken place and starts rereading the same onscreen text". Such eyetracking studies, however, are quite difficult to come by. The only eyetracking study explicitly addressing the issue of subtitle reading and shot changes we have managed to find is the one by de Linde and Kay (1999).
De Linde and Kay (1999) analysed the influence of shot changes on the reading of subtitles in two short clips taken from the BBC: one with a small number of shot changes per subtitle (1.3 shot changes per subtitle) and the other with a high number (3.5 shot changes per subtitle). The authors report that the second clip had a significantly higher number of deflections, i.e. gaze shifts from the subtitle area to the image, than the clip with the smaller number of shot changes (de Linde and Kay 1999, 61).
1. A. Szarkowska, M. Łogińska and Ł. Dutka: University of Warsaw, Poland. I. Krejtz and K. Krejtz: University of Social Sciences and Humanities, Poland. Contact email address: [email protected]
They attribute this result to the high number of shot changes in the second clip. There are potentially two problems with this finding. First, the nature of the second clip in a way invited viewers to look at the screen frequently, as the programme was about how an advertisement was made. Second, the clip contained an unusually high number of shot changes per subtitle: one subtitle crossed as many as 9 shot changes, while another one crossed 5. Such subtitles are rarely encountered in professional subtitling, which makes them unrepresentative of subtitling in general.
If it is not possible to abide by the rule of not crossing shot changes, for instance owing to the fast editing of a film, another professional rule is that a subtitle should be anchored over a shot change for at least several frames in order to "allow the reader time to adjust to the new picture" (ITC Guidance on Standards for Subtitling 1999, 12). In this study, we only examine subtitles which are anchored over a shot change for at least 20 frames before the shot change and at least 20 frames after.
2. Previous Eyetracking Research on Subtitling

Eye tracking has been used extensively to research eye movements in reading static texts (for an overview see Duchowski 2002, Rayner 1998, Rayner et al. 2012). Research on reading subtitles in dynamic multimedia texts such as films, however, is still in its infancy (for an overview see Szarkowska et al. (2013) and Kruger and Steyn (2014)). Although the first pioneering studies using eye tracking to investigate the process of reading subtitles were carried out as early as the 1980s by Géry d'Ydewalle and his colleagues (see d'Ydewalle et al. 1985; d'Ydewalle et al. 1987; d'Ydewalle & van Rensbergen 1989), there are still a number of research areas and questions which remain unanswered.
D'Ydewalle et al. (1987) are widely quoted as having provided evidence for the famous "six-second rule", according to which a two-line subtitle should appear on screen for six seconds – not less, because viewers will not be able to read it, and not more, because they will re-read a longer subtitle. D'Ydewalle et al. (1991) found that the process of reading subtitles which appear in a film is a largely automatic behaviour. D'Ydewalle and his colleagues also investigated the influence of sound (d'Ydewalle et al. 1987; d'Ydewalle and Gielen 1992) and that of the number of lines on the subtitle reading process (d'Ydewalle & De Bruycker 2007).
In a series of studies, Jensema and colleagues examined subtitles for the deaf and hard of hearing for television broadcasts in the US, where such subtitles are referred to as closed captions (Jensema 1998; Jensema et al. 2000a; Jensema et al. 2000b).
First, they established the optimum speed of displaying subtitles, which for most viewers was 145 words per minute (Jensema 1998) and which was similar to the actual speed at which subtitles were displayed on American TV (141 wpm, see Jensema et al. 1996). Jensema (1998) further established that viewers could follow subtitles displayed at a speed of up to 170 wpm. A correlation between the reading speed and the time spent in the subtitle area was found: the higher the rate, the more time is spent in the caption area (Jensema et al. 2000b). Last but not least, they famously stated that "the addition of captions to a video resulted in major changes in eye movement patterns, with the viewing process becoming primarily a reading process" (Jensema et al. 2000a, 275).
Recent years have brought more emphasis on cognitive processing, with a number of studies using eye tracking to investigate the way viewers' attention is split between various information channels when reading subtitles (cf. de Linde and Kay 1999; Specker 2008; Perego et al. 2010; Perego & Ghia 2011; Caffrey 2012; Ghia 2012). Some studies were devoted to the reading process of edited vs. verbatim subtitles (see de Linde and Kay 1999; Szarkowska et al. 2011) and the influence of various linguistic variables, such as cohesion and word frequency (Moran 2012) and line segmentation (Perego et al. 2010; Rajendran et al. 2011), on the process of reading subtitles.
Eye tracking research has provided important insights into our knowledge of how people read subtitles and it continues to be a dynamically developing research area within audiovisual translation. The attention allocation to subtitles, subtitle reading speed and the educational applications of subtitling seem to be among the most promising research avenues. Many research questions regarding subtitling remain unresolved, and a number of professional subtitling practices still require evidence which would back them up or invalidate them. The present study aims to answer one such question by investigating the influence of shot changes on the subtitle reading process.
3. Research Questions and Hypotheses

In this preliminary study conducted in 2012 we report on selected results from a large-scale eyetracking study on deaf, hard of hearing and hearing participants watching subtitles for the deaf and hard of hearing (SDH) in Poland. Here, we only address the question of whether shot changes trigger the re-reading of subtitles. We examine three possible case scenarios when a viewer encounters a subtitle crossing a film cut:
1) deflecting to the image, i.e. the viewer moves the eyes from the subtitle to the image,
2) re-reading, i.e. the viewer goes back to the beginning of a subtitle and starts re-reading it,
3) no re-reading, i.e. the viewer continues reading or watching the image as if no shot change occurred.
In order to verify which of these scenarios took place most frequently in our study, we examine eyetracking data from two-line subtitles from a feature film and two documentaries.
4. Method

4.1 Participants

The total number of participants analysed in this study was 67, out of whom 23 were deaf, 15 were hard of hearing and 29 were hearing.
4.2 Materials

Participants saw clips subtitled at the speed of 15 characters per second (cps). The following subtitling settings were used:
- Maximum number of characters: 38
- Minimum interval between consecutive subtitles: 4 frames
- Minimum duration of a subtitle: 25 frames
- Maximum duration of a subtitle: 150 frames
- 3 frames before cut
- 3 frames after cut
Each viewer was shown a set of clips, together lasting nearly 13 minutes. In this preliminary study, we analyse seven two-line subtitles from a feature film (Love actually, 2003, Richard Curtis) and seven two-line subtitles from two documentaries (five subtitles from Super Size Me, 2004, Morgan Spurlock and two from Roman Polanski: Wanted and Desired, 2008, Marina Zenovich).
Film | No. of subtitles going over shot changes for at least 20 frames before and after a shot change | Total no. of subtitles in the clip | No. of two-liners
Love actually | 10 | 23 | 7
Super Size Me | 5 | 21 | 5
Roman Polanski: Wanted and Desired | 2 | 21 | 2
TOTAL | 17 | 65 | 14
Table 6-1. General statistics on clips with subtitles displayed over shot changes

For the purposes of this study, only subtitles displayed over shot changes were analysed. Among those, we only examined the ones anchored over a shot change for at least twenty frames before and after the shot change. Each subtitle analysed here was displayed over one shot change only. In Table 6-1 and Table 6-2 we present detailed characteristics of the clips and the subtitles.
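The selection criterion can be made explicit with a small sketch (Python); the data structures and the example frame numbers are invented for illustration. It keeps only subtitles that cross exactly one shot change and are anchored at least 20 frames before and after it.

```python
def eligible_subtitles(subtitles, shot_changes, margin=20):
    """Select subtitles crossing exactly one shot change, anchored at least `margin`
    frames before and after the cut. `subtitles` is a list of (in_frame, out_frame)
    pairs and `shot_changes` a list of cut frames; both are illustrative structures."""
    selected = []
    for in_f, out_f in subtitles:
        cuts = [c for c in shot_changes if in_f < c < out_f]
        if len(cuts) == 1 and cuts[0] - in_f >= margin and out_f - cuts[0] >= margin:
            selected.append((in_f, out_f, cuts[0]))
    return selected

# Example: the first subtitle qualifies (50 frames of margin on each side of the cut),
# the second is rejected because the cut falls only 5 frames after the subtitle appears.
print(eligible_subtitles([(100, 200), (300, 360)], [150, 305, 500]))
```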
Film | Clip duration (in seconds) | No. of shot changes in a clip | Average no. of words per subtitle | Average no. of characters with spaces per subtitle
Love actually | 70 | 27 | 8.4 | 49.3
Super Size Me | 65 | 14 | 7.4 | 49
Roman Polanski: Wanted and Desired | 69 | 9 | 10.5 | 68
AVERAGE | 68 seconds | 16.66 | 8.76 words | 55.43 characters
Table 6-2. Detailed statistics on subtitles displayed over shot changes

There are different types of shot changes in films: cut, dissolve, fade-in, etc. In the case of the material analysed in this study, only cuts were taken into consideration.
All subtitles displayed over shot changes in our study belonged to the same scene; that is to say, while some subtitles crossed film cuts within a scene, none of them crossed film cuts between scenes. In Love actually, the most frequent reason for film cuts is the editing technique known as
shot-reverse shot, whereby the camera alternates between the two characters addressing one another. In Super Size Me, fast film editing is combined with different shots complementing voiceover narration, whereas in Roman Polanski: Wanted and Desired there are cuts between 'talking heads' and some archive material, such as black and white pictures.
4.3 Procedure

Participants were informed that they were going to watch subtitled clips related to different topics. They were asked to watch the clips carefully as they would have to answer three comprehension questions after each clip. The real nature of the experiment, i.e. the analysis of the subtitle reading processes, was not revealed.
Participants were tested individually. First, they were asked to sign a written consent form to take part in the study. Then, they were seated in front of a monitor with an eyetracker, where 9-point calibration and validation were performed. The test began with a few questions eliciting personal information, such as age, degree and onset of hearing loss, the use of hearing aids and implants, language of everyday communication, type of school attended and proficiency in English. After viewing each clip, participants had to answer three closed-ended comprehension questions related to the content of the clip. Finally, the participants received promotion kits from the University of Warsaw – the institution under the auspices of which the study was conducted.
4.4 Eye Movement Recording

Participants' eye movements were recorded with an SMI RED eyetracking system with a sampling rate of 120 Hz. Participants sat in front of a 21-inch monitor at a distance of about 60 cm. The eyetracker manufacturer's software BeGaze was used in the data analysis. For statistical analysis and data preparation we used R and IBM SPSS Statistics version 20.
4.5 Analysis of Subtitle Reading Patterns

In order to examine the reading patterns of subtitles displayed over shot changes, we analysed the following sets of data:
4.5.1 Comparison of Deflections between Subtitles with Shot Changes and Subtitles without Shot Changes

Starting with the question whether subtitles with shot changes induce more deflections to the image than subtitles without shot changes, we drew areas of interest (AOIs) on entire subtitles (see Fig. 6-1) and compared glance counts (i.e. deflections from image to subtitle, measured as the number of saccades from outside of the subtitle AOI that ended with a fixation in this region) between the subtitle AOI and the image in the two types of subtitles. A higher number of deflections in the subtitles displayed over shot changes would corroborate the findings by de Linde and Kay (1999).
Fig. 6-1. An example of AOI used for the analysis of deflections
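As a rough illustration of how the glance-count measure could be computed outside the eyetracker software, the sketch below (Python, with an invented AOI rectangle and an invented fixation list) counts fixations that enter the subtitle AOI from outside it; in the study itself these counts come from BeGaze.

```python
def glance_count(fixations, aoi):
    """Count entries into the subtitle AOI: a fixation inside the AOI whose
    predecessor was outside it. `fixations` is a list of (x, y) positions and
    `aoi` is (left, top, right, bottom) in screen pixels (illustrative values)."""
    left, top, right, bottom = aoi
    inside = [left <= x <= right and top <= y <= bottom for x, y in fixations]
    return sum(1 for prev, cur in zip([False] + inside, inside) if cur and not prev)

subtitle_aoi = (0, 900, 1920, 1080)                 # bottom strip of a 1920x1080 frame
fixations = [(950, 400), (960, 950), (1100, 960),   # image -> subtitle (1st glance)
             (900, 420), (980, 955)]                # image -> subtitle (2nd glance)
print(glance_count(fixations, subtitle_aoi))        # prints 2
```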
4.5.2 Comparison of AOIs on Subtitle Beginning Before and After the Shot Change

As mentioned above, AOIs were drawn on subtitles which appear at least 20 frames before a shot change and remain on screen for at least 20 frames after the shot change. We marked AOIs at the beginning of subtitles, with AOIs taking up about one-third to one half of a subtitle, depending on its content and layout (see Fig. 6-2).
Fig. 6-2. AOI at the beginning of a subtitle crossing a shot change
We compared subject hit count (i.e. the percentage of participants, out of all selected participants, who entered the AOI), fixation count (i.e. the number of all fixations in a particular AOI) and first fixation duration (i.e. the duration of the first fixation that enters the AOI) in the AOIs before (AOI pre) and after the shot change (AOI post). Similarities in these measures would point to analogous reading patterns at the subtitle beginning before and after the shot change, suggesting possible re-reading. Significant differences between the two types of AOIs (AOI pre and AOI post) would indicate that subtitles were not re-read.

4.5.3 Transition Matrix Analysis After the Shot Change

In order to examine eye movement patterns in greater detail, we created AOIs for the transition matrix analysis. This tool enabled us to analyse which areas of the screen viewers moved their eyes to and from. We marked a set of AOIs after shot changes on the following areas of the screen: 1) the beginning of a subtitle, 2) the remaining part of the subtitle, and 3) the unsubtitled image (see Fig. 6-3).
Fig. 6-3. Example of AOIs on three areas: image, subtitle beginning and the rest of the subtitle after the shot change
As defined in the BeGaze Manual (2011, 181), a transition matrix contains information on the “number of consecutive fixation transitions inside and between selected AOIs for all selected trials”. Holmqvist et al. (2011, 190-191) argue that “a saccade within an AOI is not and should not be called a transition”. The authors believe the term ‘within-AOI transition’ is confusing, as ‘transition’ refers to a gaze shift from one AOI to another. They suggest that such a saccade should be called a ‘within-AOI saccade’ and reported as a structural zero in the transition matrix. In our study, however, we retain the term ‘within-AOI transitions’ in line with the manufacturer’s manual, because we are interested both in
the number of eye movements between AOIs and within an AOI, and as such we do not wish to treat within-AOI transitions as structural zeroes. The transition matrix analysis allowed us to see whether, on seeing a shot change, viewers continued reading the subtitle, looked at the image or started re-reading the subtitle from the beginning.
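To make the idea of a transition matrix concrete, the following minimal sketch (ours, not the BeGaze implementation; the AOI labels and toy trials are assumptions) counts consecutive fixation transitions within and between AOIs and row-normalises the counts into probabilities of moving from one AOI to another.

```python
from collections import defaultdict

# Illustrative AOI labels (hypothetical names, not taken from BeGaze output)
AOIS = ["subtitle_beginning", "rest_of_subtitle", "image"]

def transition_matrix(fixation_sequences, include_within=True):
    """Count consecutive fixation transitions between (and, optionally, within) AOIs.

    fixation_sequences: list of trials, each a list of AOI labels in fixation order.
    Returns a nested dict: counts[from_aoi][to_aoi].
    """
    counts = {a: defaultdict(int) for a in AOIS}
    for trial in fixation_sequences:
        for current, nxt in zip(trial, trial[1:]):
            if current == nxt and not include_within:
                continue  # treat within-AOI saccades as structural zeros
            counts[current][nxt] += 1
    return counts

def to_probabilities(counts):
    """Row-normalise counts so each row gives P(next AOI | current AOI)."""
    probs = {}
    for src, row in counts.items():
        total = sum(row.values())
        probs[src] = {dst: (row[dst] / total if total else 0.0) for dst in AOIS}
    return probs

# Toy example: two trials of fixation-by-fixation AOI labels
trials = [
    ["rest_of_subtitle", "rest_of_subtitle", "image", "image"],
    ["rest_of_subtitle", "subtitle_beginning", "rest_of_subtitle", "image"],
]
print(to_probabilities(transition_matrix(trials)))
```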
5. Results

All statistics are reported with Greenhouse-Geisser correction if the sphericity assumption was violated, and the post hoc comparisons of simple effects were calculated with Bonferroni correction for multiple comparisons.
5.1 Analysis of Deflections from the Subtitle to the Image

The analysis of deflections between subtitles displayed over shot changes and those which do not cross any shot changes can yield interesting results regarding subtitle reading behaviour. We hypothesised, after de Linde and Kay (1999), that shot changes can cause an increase in deflections from the subtitle to the image, measured here by the number of glance counts. In order to verify this hypothesis, we ran two separate two-way analyses of variance (ANOVA), one for the feature film and one for the documentary clips. The design of the analysis was 3x2, with group as the between-subject variable (deaf, hard of hearing, hearing) and subtitle type as the within-subject variable (shot change vs. no shot change). The analysis for the feature film did not yield any significant differences either for subtitle type or for group. This means that shot changes in the feature film did not modify the reading and viewing behaviour of our participants in terms of deflections to the subtitle region. The analysis for the documentary clips, however, presents a different story. We observed two significant main effects. First, all participants made significantly more glances to the subtitle area when there was a shot change (M = 1.66, SE = 0.06) compared to subtitles which did not cross a shot change (M = 1.34, SE = 0.04), F(1, 64) = 51.55, p < .001, η² = 0.446. Second, hearing participants made significantly fewer glances from the image to the subtitles (M = 1.35, SE = 0.07) than hard of hearing participants (M = 1.65, SE = 0.09). Deaf participants (M = 1.51, SE = 0.08) did not differ from the other two groups, F(2, 64) = 3.38, p < .05, η² = 0.096.
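The statistical design described above can be illustrated with a minimal sketch. The snippet below is not the authors’ actual SPSS/R analysis; it merely shows, with assumed column names and toy data, how a 3 (group) x 2 (subtitle type) mixed-design ANOVA on glance counts could be run with the third-party pingouin package.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x subtitle type;
# the dependent variable is the mean glance count into the subtitle area.
df = pd.DataFrame({
    "participant":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":         ["deaf"] * 4 + ["hard_of_hearing"] * 4 + ["hearing"] * 4,
    "subtitle_type": ["shot_change", "no_shot_change"] * 6,
    "glance_count":  [1.7, 1.4, 1.9, 1.3, 1.8, 1.5, 1.6, 1.6, 1.3, 1.2, 1.4, 1.3],
})

# 3 (group: between-subject) x 2 (subtitle type: within-subject) mixed-design ANOVA
aov = pg.mixed_anova(data=df, dv="glance_count",
                     within="subtitle_type", subject="participant",
                     between="group")
print(aov.round(3))
```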
5.2 Analysis of Differences in Reading the Beginning of Subtitles Before and After the Shot Change

5.2.1 Subject Hit Count (%)

A comparison of the percentage of subjects visiting the AOI marked as the beginning of the subtitle before and after shot changes can reveal possible differences in subtitle reading behaviour among the participants (see Table 6-3).

                 Subtitle beginning AOI hits     Subtitle beginning AOI hits
                 before the shot change          after the shot change
                 %          Count                %          Count
Feature film     68.90%     347                  27.34%     136
Documentary      76.35%     368                  30.85%     149
AVERAGE          72.63%                          29.10%

Table 6-3. The percentage of participants who at least once visited the subtitle beginning AOI before and after the shot change

The results demonstrate that while the beginning of the subtitle was fixated by almost three in four participants before the shot change, fewer than one in three participants looked at the same area after the shot change. It is also worth noting that about one in four people did not look at the subtitle beginning before the shot change at all.

5.2.2 Fixation Count

A 3 x 2 x 2 mixed design analysis of variance (ANOVA) was performed on fixation count, with viewers (hearing, hard of hearing, deaf) as a between-subject factor and two within-subject factors: subtitle presentation (pre- vs. post-shot change) and type of clip (feature film vs. documentary). No significant group differences were found, which means that all viewers, regardless of hearing status, had a similar number of fixations. However, we observed differences for the main effects of presentation type and clip type as well as an interaction of these two variables. In general, participants had fewer fixations on the subtitle beginning after the shot change (M = 0.54, SE = 0.45) than before the shot change (M = 1.80, SE = 0.12), F(1, 64) = 174.96, p < .001, η² = 0.732. Deaf and hard of hearing participants tended to have more fixations at the subtitle
beginning (M = 1.24, SE = 0.12 and M = 1.34, SE = 0.15, respectively) than hearing participants (M = 0.94, SE = 0.11); however, this effect only reached the level of a statistical tendency, F(2, 64) = 2.87, p = 0.06, ηp² = .082. A significant interaction between presentation type and clip type, F(1, 64) = 7.72, p < .01, ηp² = .108, revealed that after the shot change there is no significant difference in the average fixation count between the feature film and the documentary clips. The difference is significant at the beginning of subtitle presentation, where participants viewing the documentary clips made more fixations (M = 1.97, SE = 0.13) than those viewing the feature film (M = 0.56, SE = 0.05).

5.2.3 First Fixation Duration

To analyse the differences in first fixation duration, we selected participants who looked at least once at the subtitle beginning after the shot change. Otherwise, the average values for first fixation duration would be biased by zero data, resulting in unrealistically low values. A 3 x 2 x 2 mixed design analysis of variance (ANOVA) was performed on first fixation duration with viewers (hearing, hard of hearing, deaf) as a between-subject factor and two within-subject factors: subtitle presentation (pre vs. post shot change) and type of clip (feature film vs. documentary). The main effects of presentation type and group were significant. If participants looked at the subtitle beginning after the shot change, then their first fixation duration (M = 216.38, SE = 10.89) was significantly longer than the average first fixation duration just after the subtitles appeared on the screen (M = 193.08, SE = 5.99), F(1, 45) = 4.72, p < .05, η² = 0.095. Those deaf participants who looked at the beginning of subtitles after the shot change had longer first fixations (M = 227.93, SE = 10.87) than hearing participants (M = 180.02, SE = 11.49), whereas hard of hearing participants did not differ significantly from the other two groups (M = 206.23, SE = 13.67), F(2, 64) = 4.59, p < .05, ηp² = .170.
5.3 Transition Matrix Analysis

We also analysed the number of transitions, i.e. the number of consecutive fixation transitions both within an AOI and between selected AOIs. As mentioned before, we hypothesised that on seeing a shot change, viewers can: 1) look at the image (deflecting), 2) go back to the beginning of the subtitle (re-reading),
3) or continue reading the subtitle (no re-reading). Table 6-4 presents the total number of transitions which occurred within and between AOIs marked as the beginning of a subtitle, the rest of the subtitle and the image, as shown in Fig. 6-3.
Film type        Transitions from the rest     Transitions from the rest of the      Transitions within
                 of the subtitle to the image  subtitle to the beginning of the      the subtitle
                                               subtitle
Feature          257                           63                                    1044
Documentary      185                           67                                    541
Table 6-4. Transitions between different AOIs after the shot change

The results show that the largest number of consecutive fixation transitions was made within the AOI marked as the rest of the subtitle, demonstrating that, on seeing a shot change, most viewers continued reading the subtitle. There were also a number of transitions made from the subtitle area to the image (deflections). The number of transitions from the subtitle to the beginning of the subtitle area was the smallest, suggesting almost no re-reading. This pattern is similar in both types of films. The following matrix (see Fig. 6-4) illustrates the probability of fixating on a particular AOI depending on the position of the eyes at the moment a shot change occurred.
Fig. 6-4 Transition matrix analysis of AOIs after the shot change in all clips
If at the moment a shot change occurred the viewers were looking at the image, then they were most likely to continue looking at this area (72%). Likewise, if viewers were reading a subtitle, it is quite probable that they continued reading the subtitle regardless of the shot change (70%). Those viewers who moved their eyes from the rest of the subtitle to another AOI were more likely to look at the image (23%) than at the beginning of the subtitle (7%). As a result, the chance of a viewer making a transition from the subtitle area to the beginning of the subtitle, and thus re-reading the subtitle, was very small.
6. Discussion

6.1 Deflections to the Image

In contrast to the previously quoted study by de Linde and Kay (1999, 66), who found a significant main effect of the type of programme on the duration of deflections (the programme with a higher number of shot changes inducing more deflections), we found a higher number of deflections in the documentary clips, which in fact had fewer shot changes than the feature film clip in our study. It should also be noted that de Linde and Kay compared durations of deflections between different types of programmes, while in our study we examined differences in the number of deflections between different types of subtitles, both within the same clips and between the two genres. Differences in reading patterns between subtitles displayed over shot changes in the feature and documentary films can be attributed to the different nature of shot changes in the two kinds of clips. As mentioned before, the feature film consisted of shots alternating between the two speakers talking in the scene, whereas the documentaries contained archive material combined with shots of talking heads (in the Roman Polanski film) or different scenes illustrating the voiceover narration (Super Size Me). Therefore, the shot changes in the documentaries were less predictable than in the feature film. They also introduced new information, complementing the narration and adding to it, which made viewers look at the screen more to keep track of what was going on in the film. As stated in the Results section above, we found no significant differences in the number of deflections between subtitles with and without shot changes in the feature film. One reason for the lack of differences may be the very fast editing in the film. Some subtitles were displayed over more than one shot change (not analysed here) and the speech rate in the clip was very high. There were 27 shot changes in
the clip lasting 70 seconds, which means a shot change occurred about every 2.5 seconds on average. This type of editing, with images flashing before the viewers’ eyes, may have made them more “immune” to the shot changes.
6.2 Differences in Subtitle Reading Patterns Before and After the Shot Change

6.2.1 Subject Hit Count (%)

The analysis of the subject hit count has demonstrated that the subtitle beginning before the shot change was looked at by nearly 75% of participants, whereas the same area after the shot change was visited by only about 30%. This result may be taken to mean that re-reading of the subtitles after the shot change was minimal. We also observed a small difference between the two types of clips: slightly more viewers looked at the AOIs in the documentary clips than in the feature film. This may be attributed to the differences between the clips: the documentaries, especially the one about Roman Polanski, contained more words per subtitle (see Table 6-2) than the feature film, hence it took viewers more time to read them. Interestingly, about 25% of participants did not even look at the subtitle beginning before the shot change. Some participants probably skipped entire subtitles – for instance, one hearing person with a good knowledge of English (the original language of the clips) reported after the test that she did not look much at the subtitles as she could easily follow the dialogue. This, of course, points to an important limitation of this study, namely that we did not examine potential correlations between the participants’ declared English proficiency and the percentage of subtitles they skipped. We may pursue this line of research in further studies. Another reason why some viewers did not fixate the subtitle beginning may stem from the phenomenon known as word skipping (see Rayner et al. 2012, 114). It is a well-known fact in reading studies that short and more frequent words tend to be skipped more often than words that are long and rare. What is more, as noted by Rayner et al. (2012, 115), “readers often do not fixate either the first or last words of a line in text”.

6.2.2 Fixation Count

In all groups of participants the fixation count was significantly higher at the subtitle beginning before the shot change (with the mean value of
1.80) than after the shot change (0.54). This points to a more regular reading pattern when the subtitle first appeared on screen (i.e. before the shot change) than when it was displayed after the shot change. In the former case, participants most probably read the subtitle, while in the latter – if they looked at the subtitle at all – they possibly re-initiated reading, but quickly realised that the same text was still displayed on screen. The differences in the reading times of subtitles before and after the shot change between the feature film and the documentaries may be attributed to differences in their subtitle display times. The average duration of subtitles before and after the shot change differed between the two film types, as shown in Table 6-5. The average subtitle duration before a shot change in the documentaries was longer by 16 frames compared to the feature film.
                 Average subtitle duration            Average subtitle duration
                 before a shot change (in frames)     after a shot change (in frames)
Feature film     37                                   46
Documentary      53                                   34
Table 6-5. Average subtitle duration before and after a shot change (in frames) for the two types of clips

Another explanation for the longer reading time before the shot change in the documentary may come from the fact that the clip about Roman Polanski contained more text to be read (both in terms of the average number of words and the number of characters, see Table 6-2). So, with more text and a longer display time, the subtitles in the documentary naturally induced more fixations. When it comes to the differences in the number of fixations among the three groups of participants, there was only a tendency for deaf and hard of hearing viewers to have more fixations at the subtitle beginning compared to hearing viewers. This is in line with previous studies (see Szarkowska et al. 2011), where deaf and hard of hearing viewers had more fixations than hearing ones.

6.2.3 First Fixation Duration

First fixation duration is often “interpreted as reflecting the time taken for fast processes such as recognition and identification” (Holmqvist et al. 2011, 385). In our study this measure can be taken as an indication of recognition and identification of the subtitle after a shot has changed. A longer fixation, as noted by Holmqvist et al. (2011, 381), “is often
associated with a deeper and more effortful cognitive processing”. Such deeper processing may be observed in our participants: those who looked at the beginning of the subtitle after the shot change had a longer first fixation duration compared to the duration of the first fixation before the shot change. To interpret these results, we need to remember that participants generally looked much less at the beginning of a subtitle after the shot change than before it; however, once they did look at the subtitle after the shot change, they dwelled there for a longer period of time and had a longer first fixation duration than when they saw the subtitle for the first time. This might suggest some confusion and disturbance of their reading process: first, they tried to read the text, possibly thinking that a new subtitle had appeared on screen given that the shot had changed, but then they may have realised they were actually looking at the same subtitle. First fixation duration is also “a measure of cognitive activity as it reflects the decision when to move the eyes after first fixating a word” (Rayner et al. 2012, 136). Longer first fixation duration may be an indication of cognitive activity aimed at establishing whether to continue re-reading the beginning of the subtitle after the shot change, or to move the eyes to another area. The first fixation after the shot change thus appears to be of a different nature than the one before the shot change: in the latter case, the first fixation is simply the first of a series of fixations made when reading the subtitle from its onset, whereas in the former case the fixation serves to establish the further course of action: to recognise whether the same or different text is being displayed, to decide whether to read it or not, and finally to move to another area (this course of action being taken frequently, as demonstrated by the smaller number of fixations in the subtitle beginning after the shot change).
6.3 Transition Matrix

The transition matrix indicates how many times participants’ gaze shifted from one AOI to another and how many saccades were made within the same AOI. The transition matrix analysis in this study has demonstrated that shot changes do not seem to have a large influence on the subtitle reading and clip viewing process, because most participants continued to read the subtitle or to look at the image as if no shot change had occurred. On seeing a shot change, only a small number of participants (7%) moved their eyes to the subtitle beginning, which would suggest re-reading. Only about 23% of those who were reading the rest of the subtitle at the moment a shot change occurred were likely to deflect to the image
area. 70% of participants remained in the rest-of-the-subtitle AOI, which suggests they continued reading the subtitle when the shot change occurred. Last but not least, the probability of fixating particular AOIs does not seem to be influenced by AOI size. The AOI marked as ‘the image’ was the largest AOI by far, but it had measures similar to those of the AOI marked as ‘the rest of the subtitle’, which was relatively small.
7. Conclusion

The preliminary results of this research are interesting, as they challenge the widely held belief that shot changes trigger the re-reading of subtitles. In our study, only about one third of the participants moved their eyes to the beginning of the subtitles on seeing a shot change. The transition matrix analysis has shown that the probability of going back from the subtitle to the subtitle beginning – which would be indicative of re-reading – is rather small (7%). Those participants who moved their eyes back to the beginning of a subtitle after the shot change looked at this area less than when they saw it before the shot change, as indicated by the smaller number of fixations. Interestingly, their first fixation duration was significantly longer than before the shot change, pointing to more effortful cognitive processing, possibly related to an effort to establish whether the text they saw had already been read. We are aware of several limitations of this study, such as the small number of clips, subtitles, film genres and languages, which altogether limit the generalizability of the results. It is our intention to continue this line of research by examining more clips from other film genres and languages, subtitled at different reading speeds, and by analysing subtitles crossing more than one shot change.
Acknowledgements

We are indebted to Wojciech Figiel for his invaluable help and support at the early stages of this project. We are also grateful to our participants, who kindly devoted their time to our study. Special thanks go to the following Warsaw institutions: Instytut Głuchoniemych, Fundacja Echo, Ośrodek Szkolno-Wychowawczy dla Głuchych and Polski Związek Głuchych for helping us organize the eyetracking tests. Many thanks to Jan-Louis Kruger for his lucid comments on the methodological issues related to the eyetracking analysis of shot changes. This research is supported by research grant No. IP2011 053471 for “Subtitles for the deaf
and hard of hearing on digital television” from the Polish Ministry of Science and Higher Education for the years 2011-2013.
References

Baker, Robert G. 1982. Monitoring Eye-Movements while watching Subtitled Television Programmes. A Feasibility Study. London: Independent Broadcasting Authority.
Caffrey, Colm. 2012. “Eye Tracking Application for measuring the Effects of Experimental Subtitling Procedures on Viewer Perception of Subtitled AV Content.” In Eye Tracking in Audiovisual Translation, edited by Perego, Elisa, 225-260. Roma: Aracne Editrice.
d’Ydewalle, Géry, Muylle, Patrick & van Rensbergen, Johan. 1985. “Attention Shifts in partially Redundant Information Situations.” In Eye Movements and Human Information Processing, edited by R. Groner, G.W. McConkie, & C. Menz, 375–384. North-Holland: Elsevier.
d’Ydewalle, Géry, Van Rensbergen, Johan & Pollet, Joris. 1987. “Reading a Message when the Same Message is Available Auditorily in Another Language: The Case of Subtitling.” In Eye movements: From Physiology to Cognition, edited by J. K. O’Reagan & A. Lévy-Schoen, 313-321. Amsterdam / New York: Elsevier Science Publishers.
d’Ydewalle, Géry & Van Rensbergen, Johan. 1989. “Developmental Studies of Text-Picture Interactions in the Perception of Animated Cartoons with Text.” In Knowledge Acquisition from Text and Pictures, edited by Heinz Mandl & Joel R. Levin, 233–248. North-Holland: Elsevier.
d’Ydewalle et al. 1991. “Watching Subtitled Television: Automatic Reading Behaviour.” Communication Research, 18(5): 650-666.
d’Ydewalle, Géry & Gielen, Ingrid. 1992. “Attention Allocation with Overlapping Sound, Image, and Text.” In Eye Movements and Visual Cognition: Scene Perception and Reading, edited by Keith Rayner, 415-427. New York: Springer-Verlag.
d’Ydewalle, Géry & De Bruycker, Wim. 2007. “Eye Movements of Children and Adults while reading Television Subtitles.” European Psychologist, 12(3): 196-205.
De Linde, Zoé & Kay, Neil. 1999. The Semiotics of Subtitling. Manchester: St. Jerome.
Díaz Cintas, Jorge & Remael, Aline. 2007. Audiovisual Translation: Subtitling. Manchester: St. Jerome.
Duchowski, Andrew T. 2002. “A Breadth-First Survey of Eye Tracking Applications.” Behavior Research Methods, Instruments, and Computers: 1-16.
Ghia, Elisa. 2012. “The Impact of Translation Strategies on Subtitle Reading.” In Eye Tracking in Audiovisual Translation, edited by Perego, Elisa, 155-181. Roma: Aracne Editrice.
Holmqvist et al. 2011. Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford: Oxford University Press.
ITC Guidance on Standards for Subtitling, available at: http://www.ofcom.org.uk/static/archive/itc/itc_publications/codes_guidance/standards_for_subtitling/subtitling_1.asp.html
Jensema, Carl, McCann, Ralph & Ramsey, Scott. 1996. “Closed-Captioned Television Presentation Speed and Vocabulary.” American Annals of the Deaf, 141(4): 284-292.
Jensema, Carl. 1998. “Viewer Reaction to Different Television Captioning Speeds.” American Annals of the Deaf, 143: 318-324.
Jensema et al. 2000a. “Eye Movement Patterns of Captioned Television Viewers.” American Annals of the Deaf, 145(3): 275–285.
Jensema, Carl, Danturthi, Ramalinga Sarma & Burch, Robert. 2000b. “Time Spent viewing Captions on Television Programs.” American Annals of the Deaf, 145(5): 464-468.
Kruger, Jan-Louis & Steyn, Faans. 2014. “Subtitles and Eye Tracking: Reading and Performance.” Reading Research Quarterly, 49(1): 105–120.
Moran, Siobhan. 2012. “The Effect of Linguistic Variation on Subtitle Reception.” In Eye Tracking in Audiovisual Translation, edited by Perego, Elisa, 183-223. Roma: Aracne Editrice.
Perego, Elisa & Ghia, Elisa. 2011. “Subtitle Consumption according to Eye-Tracking Data: An Acquisitional Perspective.” In Audiovisual Translation Subtitles and Subtitling: Theory and Practice, edited by McLoughlin, Laura Incalcaterra, Biscio, Marie & Mhainnín, Máire Áine Ní, 177-195. Oxford: Peter Lang.
Perego, Elisa, Del Missier, Fabio, Porta, Marco & Mosconi, Mauro. 2010. “The Cognitive Effectiveness of Subtitle Processing.” Media Psychology, 13: 243–272.
Rajendran et al. 2012. “Effects of Text Chunking on Subtitling: a Quantitative and Qualitative Examination.” Perspectives: Studies in Translatology 21(1): 5-21.
Rayner et al. 2012. Psychology of Reading. Second edition. New York and London: Psychology Press.
Rayner, Keith. 1998. “Eye Movements in Reading and Information Processing: Twenty Years of Research.” Psychological Bulletin, 124: 372–422.
SMI Manual 2011. BeGaze Manual. Version 3.0.
Specker, Elizabeth A. 2008. “L1/L2 Eye Movement Reading of Closed Captioning: A Multimodal Analysis of Multimodal Use.” Unpublished PhD thesis. University of Arizona.
Szarkowska et al. 2011. “Verbatim, Standard, or Edited? Reading Patterns of Different Captioning Styles among Deaf, Hard of Hearing, and Hearing Viewers.” American Annals of the Deaf, 156 (4): 363-378.
Szarkowska et al. 2013. “Harnessing the Potential of Eyetracking for Media Accessibility.” In Translation Studies and Eye-Tracking Analysis, edited by Sambor Grucza, Monika Płużyczka & Justyna Zając, 153-183. Frankfurt am Main: Peter Lang. Available at: http://avt.ils.uw.edu.pl/en/publikacje/
Williams, Gareth Ford. 2009. “bbc.co.uk. Online Subtitling Editorial Guidelines V1.1.”
Abstract

The influence of shot changes on reading subtitles – A preliminary study

Key words: subtitling, subtitling for the deaf and hard of hearing, eyetracking, shot changes, reading

In this eyetracking study we examine the influence of shot changes on reading subtitles. Two-line subtitles from a feature film and two documentaries were analysed with a number of eyetracking measures. The results show that – contrary to the commonly held view – fewer than 30% of viewers look back at the beginning of the subtitle after the shot change.
CHAPTER SEVEN

REAL TIME SUBTITLING FOR THE DEAF AND HARD OF HEARING: AN INTRODUCTION TO CONFERENCE RESPEAKING

SAVERIA ARMA1
1. Introduction

Respeaking is a relatively recent technique consisting in the application of speech-to-text technology to the production of written texts (Keyes 2007, Lambourne 2007, Remael and Van der Veer 2007, Romero Fresco 2011)2. The result of this process is the translation of an oral text into a voice-based written product, which can be used for different purposes. These purposes are likely to affect the linguistic, syntactic and formal features of the final text. When respeaking also involves a respeaker, that is, a professional operator mediating between the source and the final text, more source texts and more emitters take part in this translation process. As a result, the target text is the product of the interrelation between the original text production of the speaker (first emitter) and the mediation of the respeaker (second emitter), who is called upon to match and combine a wide range of variables, mainly of a technical/technological, target-related and purpose-related nature. Moreover, the final text is the result of a source oral text turned into another oral (re-spoken) text transcribed into a written text (and, depending on the type of speech-to-text software used, likely to
be edited before final delivery). Based on this, respeaking can be seen as a complex activity from a number of different perspectives. This contribution is an introduction to respeaking-based live subtitling for the deaf and the hard of hearing in a conference setting. Because of the scarcity of statistical data, it is characterized by an empirical approach to the technical, linguistic and professional aspects of respeaking.

1 Artis Project and CulturAbile. Email addresses: [email protected] and [email protected]
2 As a definition of ‘text’, we hereby adopt the one provided by Gottlieb (2005: 3), that is, “any combination of sensory signs carrying communicative intention”.
2. A Definition of Respeaking

Respeaking involves the production of a written text by means of speaker-dependent speech recognition software3. Respeaking can be used in a wide variety of settings, from medical applications to the transcription of pre-recorded audio files to live subtitling for the deaf and the hard of hearing applied to TV, conferences, lectures and seminars, to mention just a few. For both medical dictation and off-line transcription purposes, respeaking can be considered a transcription technique that simply ‘exploits’ speech recognition technology for its advantages, among them the possibility of carrying out hands-free operations. When applied to real-time subtitling for the deaf and the hard of hearing, respeaking becomes a highly specialized activity that requires a skilled operator (called a respeaker) to re-speak the original oral text and to translate it into subtitles for an audience with special needs.
3 Speech recognition software can be speaker-dependent or speaker-independent. So far, only speaker-dependent acoustic models have proved to be fairly reliable for conference live subtitling. They require the software to ‘learn’ the specific features of the speaker’s voice. At the same time, the speaker can speak almost ‘naturally’ but must pay attention to the correct spelling and pronunciation of words, in order to avoid recognition and dictation errors. However, speaker-independent engines are being developed and improved by Apple, Google and Nuance, especially for automated telephone interfaces. Speaker-independent engines can be used by different speakers, with no need to train the system in advance. The good performance of these engines, however, relies on a combination of powerful processors, fast independent speech recognition engines, excellent microphones and sophisticated software noise elimination techniques. Nevertheless, this contribution will explore different aspects of conference respeaking involving the use of the speaker-dependent Nuance Dragon NaturallySpeaking software.
2.1. Respeaking from a Process/Product Perspective

Based on Gottlieb (2005) and Eugeni (2008), respeaking can be seen as a form of inspirational translation, since it does not rely on any codified norms. From the point of view of the process (time) which, as argued by Gottlieb (2005, 2), also includes “the semantics and temporal progression”, respeaking can be an intra- (inter-)linguistic isosemiotic translation. “Isosemiotic” refers to the fact that the re-spoken text is delivered orally and therefore uses the same channel as the source text. However, the simultaneous character attributed to the process of respeaking by Eugeni (2008) only applies when respeaking is used to transcribe or subtitle a real-time event. From the point of view of the process (space), which also includes “the semiotics and texture, or composition” (Gottlieb 2005, 2), respeaking as an audiovisual text per se can be defined as a non-synchronic, intra- (inter-)linguistic and diasemiotic translation, since the final text is always a written text. From the point of view of the source text, respeaking can involve one or more texts. For example, in a medical setting, the source text can be represented by structured or random (oral, pre-recorded or written) notes prepared by the doctor, or by no text at all. In the first case, the doctor dictates and transcribes his/her notes; in the second case, the doctor produces a new text from scratch. When it comes to the off-line transcription of an audio file, respeaking means listening to an existing audio file and transcribing it by means of speech recognition software. In both cases, however, the delivery of the final text might be delayed, since the doctor or the transcriber is still able to correct and edit the text before it finally reaches its end users4. With regard to subtitling for the deaf and the hard of hearing, the respeaker usually listens to an oral text delivered in a live setting and translates it into subtitles, with little or no time for editing, while respecting the linguistic and communication needs of the intended final audience. It can be easily understood that respeaking involves a number of different operations, depending not only on the channel, but also on the genre of both source and target texts, the setting, the delivery mode (real-time/offline) and the nature of the final (intended) audience. The genre(s) of source and target texts represent particularly important variables. The translation process triggered by respeaking can occur within the same genre or between different genres, as it happens for conference
live subtitling for the deaf and the hard of hearing. In this case, an oral text (mainly non-structured and non-planned, like a presentation or a public speech) is turned into subtitles for people who cannot hear. The respeaker/subtitler has to consider subtitling guidelines, the linguistic and communication needs of the final audience and a number of technical constraints (namely the décalage between dictation, editing and delivery times, reading speed, number of rows on a single page, visualization settings). As mentioned, respeaking-based subtitling can be applied to a number of different settings, from conferences to seminars, from sport to religious events, from TV to festivals, from lectures to web broadcasts. Each setting has its own peculiar features, which are likely to affect both the process and the product of respeaking. With regard to the final (intended) audience, it should be pointed out that respeaking is generally considered an activity that fulfils communicative purposes. When these purposes are meant to meet the special needs of the audience, the form of the final product is also widely affected. However, the characteristics of the audience are only prototypical. In this sense, the communicative purposes of respeaking refer to an intended model. For example, it is acknowledged that the community of hearing impaired persons (in almost all countries worldwide) is not homogeneous, especially in terms of degree of impairment, linguistic skills, reading speed and use of sign language. For this reason, respeakers have to consider all these variables and find the best possible balance to reach their communication goals. In the following paragraphs, we will describe in more detail respeaking-based conference subtitling for an audience of deaf and hearing impaired persons and will see how the afore-mentioned variables influence both the work of respeakers and the real-time delivery of subtitles.

4 From a translation-based perspective, the text to be edited can be seen either as the intersemiotic translation of an (audio) source file or as a secondary source text from which the final delivered text derives.
3. Respeaking-Based Live Subtitling

Live subtitling is one of the main applications of respeaking. Whereas it is widespread and well established in many European and non-European countries, in Italy, on the one hand, live subtitling is still relatively rare and limited to a low percentage of TV programmes5, and it is almost absent from other application fields such as web and digital TV broadcasting. On the
other, the main techniques used for real-time subtitling (stenotyping and respeaking) are applied to parliamentary reporting (at both the Chamber of Deputies and the Senate)6. When it comes to the accessibility of conferences via subtitling, Italy lags behind most countries, where it is generally acknowledged as a right of persons with hearing impairment7. Despite some progress, many deaf associations and organizations operating in the field of disability and sensory impairment choose not to subtitle their events or opt for sign language interpreting. In other words, they do not consider that both modalities (live subtitling and sign language) are equally important to provide full access to everyone, also in compliance with the principles of the UN Convention on the Rights of Persons with Disabilities8. However, in recent years, the number of live-subtitled conferences and events has slightly increased, as has awareness of the importance of subtitling and of accessibility as a general concern.

5 Respeaking-based live subtitling is used by RAI, the Italian public TV broadcaster, and by Canale 5. Stenotyping is also used at RAI to subtitle newscasts and other programmes live.
6 Both stenotyping and respeaking are used for parliamentary reporting. The first consists in the use of a special typing machine, while the second exploits speech recognition technologies (Arma 2007). Recent informal exchanges between the communities of stenotypists and respeakers (both belonging to the broader category of speech-to-text reporters) have pointed out that respeaking could soon become an interesting skill for traditional stenotypists. A more detailed analysis of the competences, skills and knowledge of respeakers and stenotypists can be found in Arma (2007), where a comparison is drawn between traditional (stenotype) and modern respeaking-based parliamentary reporting.
7 As an example, Section 504 of the Rehabilitation Act of 1973 in the USA establishes that any agency, school or institution receiving federal financial assistance must provide persons with disabilities with an opportunity to be fully integrated into the mainstream. The section also stresses that persons with disabilities are allowed placement in regular classrooms with support services, such as CART, to eradicate any barriers to the complete educational experience. In 1998, Congress amended the Rehabilitation Act to require federal agencies to make their electronic and information technology accessible to people with disabilities (Section 508). For further information on the American Rehabilitation Act, see http://www.hhs.gov/ocr/civilrights/resources/factsheets/504.pdf and http://deafness.about.com/gi/dynamic/offsite.htm?zi=1/XJandsdn=deafnessandzu=http%3A%2F%2Fwww.section508.gov%2F. Similarly, the ADA (Americans with Disabilities Act), enacted in July 1990, establishes that a qualified student or employee with a disability shall be able to participate or to perform essential functions. It underlines that it is the right of every deaf or hard-of-hearing individual to receive these services at school, in the workplace and in certain specified meeting places, unless the cost of providing such services is unduly burdensome. For further information see ADA 2010 (http://deafness.about.com/gi/dynamic/offsite.htm?zi=1/XJandsdn=deafnessandzu=http%3A%2F%2Fwww.usdoj.gov%2Fcrt%2Fada%2Fadahom1.htm).
8 The full version of the UN Convention is available at http://www.un.org/disabilities/documents/convention/convoptprot-e.pdf.
4. Conference Respeaking

A conference is generally understood as an event aimed at gathering several persons to discuss a particular topic9. Though its features are different from those of a seminar, a symposium, a meeting, a workshop or a round table, the term ‘conference’ will be used here to cover the general concept. Respeaking a conference means making the event accessible to the deaf audience by means of live subtitles produced with speech recognition software. From the point of view of a respeaker, conference respeaking usually involves the following phases: preparing for the conference, respeaking the conference, and carrying out post-conference operations.
4.1. Preparing for Conference Respeaking

For a professional respeaker, preparing for a conference means dealing with three main aspects: conference material, conference participants and conference settings. Unfortunately, it is not always possible to receive confirmation of an assignment in time for excellent preparation. More often than not, conference organizers do not provide working documents beforehand and limit themselves to the conference programme only. For this reason, it is important to check the programme carefully and make sure that the speakers’ names are spelled correctly. For each of the presentations, the respeaker ideally has to conduct brief research and assemble a list of unknown words and expressions with which to train the software. The names of the speakers and their organizations, companies or academic affiliations are also included in this list. As an alternative, all speech-to-text software packages allow the user to select a potentially unlimited number of documents and have the system automatically process them to find the unknown words. The respeaker has to train the software on the pronunciation and spelling of new words. The Internet also grants access to potentially unlimited video and audio resources. If there is enough time to prepare, the respeaker can search for old presentations and speeches given by conference participants on other
occasions and get acquainted with their pace of delivery and pronunciation. If the respeaker has often worked for the same organization or dealt with the same topics, these operations become quicker and easier to carry out. A combination of background and context-specific knowledge and vocabulary will boost the productivity of the respeaker. Talking with experts, exploring the topic(s) on the Internet and taking part in conversations and exchanges with specialists can also be of great help, especially when the respeaker has to anticipate words or correct and finish sentences. The disambiguation of homonyms and similar words is also a must: in this sense, a well-maintained personal speech profile and its related vocabulary are the most precious resources of a professional conference respeaker. Before leaving for the conference venue, the respeaker has to make sure that the speech recognition software, as well as the microphone headset, works properly. With regard to conference settings and delivery modes in particular, respeaking can be provided on site, that is, at the conference venue, or through a dedicated online platform (distant or remote respeaking). Various CART (Computer Assisted Real-time Transcription) and STTR (Speech-to-Text Reporting) systems can be adapted and used for respeaking as well. In the case of on-site delivery, conference organizers should provide a separate room or a dedicated space for the respeaker, who otherwise runs the risk of disturbing the audience with his/her own voice and, conversely, of being disturbed by the volume of the speakers’ microphones or by environmental noise. The ideal setting for on-site respeaking is a sound-proofed environment (an interpreting booth or a separate room with an audio/video connection to the conference hall); when this is not possible, a table-top interpreting booth or an adapted shield is enough. In order to isolate the respeaker’s voice, a useful instrument is the so-called stenomask, a sort of sound-proofed mask-like microphone (very popular among American court reporters) which stops external noises from disturbing the respeaker’s delivery. In the case of remote respeaking, the client is informed in advance about the online platform to connect to. Before the event starts, the respeaker generally ‘creates’ a virtual event to which a unique web link is assigned. This link can be shared with the conference organizers, who connect a PC to the Internet, plug in a projector (and a screen) and access the link. From his/her remote desk, the respeaker connects the speech recognition software to the platform, receives the audio from the conference hall and dictates the subtitles, which are automatically displayed on the web page and therefore become visible to the conference audience10.

9 Within the broader category of “conference”, many subcategories are to be found: academic conferences, business conferences, trade conferences, unconferences. In this contribution, we use the hypernym “conference” to encompass the other types of gatherings as well.
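As a rough illustration of the remote set-up described above, the sketch below shows one possible way of pushing recognised subtitle lines to connected browsers. It is a minimal sketch, not the platform used by the author; the third-party websockets package (a recent version supporting single-argument handlers), the port number and the sample lines are assumptions for illustration only.

```python
import asyncio
import websockets  # third-party package, assumed available

viewers = set()  # currently connected audience browsers

async def viewer_handler(websocket):
    """Register an audience connection and keep it open until the viewer leaves."""
    viewers.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        viewers.discard(websocket)

async def push_subtitle(line: str):
    """Send one recognised subtitle line to every connected viewer."""
    for ws in list(viewers):
        try:
            await ws.send(line)
        except websockets.ConnectionClosed:
            viewers.discard(ws)

async def main():
    # In a real set-up, lines would come from the speech recognition engine;
    # here they are simulated with a fixed list.
    async with websockets.serve(viewer_handler, "localhost", 8765):
        for line in ["Welcome to the conference.", "Our first speaker is..."]:
            await push_subtitle(line)
            await asyncio.sleep(5)  # pause between lines

asyncio.run(main())
```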
4.2. Respeaking the Conference

Today, speech recognition is still far from being an exact science. First, there is still no reliable speaker-independent software for conference respeaking on the market, which requires (and will continue to require in the near future) the presence of a skilled operator with correct pronunciation and technical, context-specific vocabulary. On the day of the event, the respeaker is called upon to check and fix the audio settings of the booth or the room where he/she is working. Audio interference, external noise or even the degree of sensitivity of personal computers can sometimes affect the performance of respeaking. It is important to carefully check input and output audio settings, since these parameters play a vital role in the performance of a respeaker. Too high or too low input (from the conference room) or output (of the respeaker’s microphone) volumes can make it difficult for the respeaker to dictate, edit, check and deliver subtitles. Before the conference starts, the respeaker usually checks the final conference programme again and takes note of possible last-minute changes. Similarly, if there is enough time before the proceedings start, the respeaker should train the software again and update his/her personal voice profile with the vocabulary of newly provided presentations or documents. Conference respeakers usually work in pairs or alone11. In the first case, each respeaker generally works for about thirty or forty-five minutes at a time; in the latter case, a single respeaker subtitles the whole event. When working in pairs with their own respeaking stations12, a ‘switch’ button might be necessary to quickly switch from one respeaker to the other. A hard copy of the material provided beforehand can be useful to double-check spelling and to carry out quick brainstorming. Having a notepad or simply a piece of paper available is also a good tip for any professional respeaker: as in conference interpreting, working
in pairs means being able to help when a colleague misses or misspells numbers, names or foreign words. In a conference respeaking setting, the respeaking ‘station’ is usually connected to a projector which displays subtitles on a wall or on a big screen for everybody to read. The type of font chosen can affect the readability of the subtitles themselves. ‘Arial’, ‘Tahoma’ and ‘Times New Roman’ are usually the preferred fonts. The font size may vary in relation to the size of the conference hall and the screen/wall, as well as to the distance between the subtitles and the place where deaf persons are sitting. Usually subtitles are displayed to the left or the right of the speakers. Less often, they are positioned above the speakers’ heads. The number of subtitle lines is not fixed. It may vary between five and ten lines if the screen is dedicated to subtitles only. If the screen is split and shared with presentations, videos or sign language interpreting, the number of lines can be decreased to four or even two. When sign language interpreters are also working, the video signals (the interpreter’s and the respeaker’s outputs) can be mixed and delivered on a single screen, usually with subtitles under the interpreter. In this case, the ideal respeaker is a person who also has a sufficient background in sign language, so as to avoid confusing the bilingual deaf audience. The reading times of deaf persons should be considered as well; generally, each row should be displayed for a minimum of five seconds13. The conference can be considered a genre with a relatively high degree of standardization. Usually, conferences are divided into the following parts: welcome address by the organizers and the authorities, presentation of the speakers, first (moderated) session, coffee break, second (moderated) session, lunch, afternoon session, closing remarks. One or more question and answer sessions might be scheduled at the end of the conference or at the end of each session. Each of the above-mentioned parts has standardized ‘markers’ which are very well known to respeakers. Awareness of the genre is important, as it allows the respeaker to anticipate some sentences, save time on reformulation and reduce subtitle delay; similarly, knowing the conference agenda and being able to spell the conference participants’ names and roles properly also makes the task less difficult. As an example, the introductory part of a conference, during which conference speakers, organizers and institutional parties are introduced, can be very fast and therefore very difficult to subtitle if the respeaker does not know how to dictate or write the participants’ names or their affiliations14. It should be pointed out that conference speakers are often unaware of the presence of a respeaker and of his/her role, that is, facilitating communication for an audience with special needs through subtitling. Too often, respeaking is widely misunderstood or underestimated. Many people still think it is a sort of quick typing, or an automatic speaker-independent transcription, or some combined form of automatic transcription and live ‘human’ editing. For this reason, speakers tend not to pay attention to the way they speak and do not realize that they make the respeaker’s job even harder. Presentations can be pre-prepared or based on impromptu speech. The language used in these contexts can be positioned on a continuum from formal/written language to informal/oral language, as pointed out by Arma (2007)15. Sometimes, written texts are not written to be read aloud, as they are formal papers or official documents simply read out by the speakers. In the first case, speakers tend to speak very quickly, but sentences are more organized and structured. In the second case, speakers tend to speak more slowly, but their sentences are less planned and structured, often unfinished and filled with repetitions and informal expressions. During question and answer sessions, respeaking times tend to be more relaxed, though other problems might occur, namely people speaking away from the microphone or overlapping voices.

10 Similarly, it is possible for end users to read subtitles on individual devices (i.e. smartphones or tablets) by connecting to a remote platform. These visualization instruments broaden the range of use of subtitles for a number of different applications, from universities to hospitals and banks.
11 When it comes to TV subtitling, respeakers usually work in pairs or even with a third person, whose role is to manage shift rotations from one respeaker to the other. In this case, each respeaker works for about five or ten minutes consecutively.
12 A ‘respeaking station’ for conference respeaking is a laptop with a professional version of a speech recognition software package and a personal headset with headphones and microphone.
13 Different display methods can apply, depending on the speech recognition software used, on the organizers’ requirements or on the preferences of the audience. Generally, when all lines are full, the first line disappears and the whole text scrolls up.
4.3. Linguistic Choices

Linguistic choices in respeaking deserve particular attention. As we have seen, conference respeaking means subtitling (live) an oral text for a special audience (persons with hearing impairments). However, this is nothing but an ‘intended’ audience: deaf and hearing impaired people do not share the same linguistic competence, especially when it comes to grammar and syntax. Generally speaking, persons who use sign language tend to reproduce its sentence structure in Italian, so they have more difficulties reading subtitles at conferences. In addition, the semantic field covered by signs is wide and the same sign can correspond to a range of different Italian words. At the opposite end of this continuum, we find deaf people who are perfectly acquainted with subtitles and written language
14
According to the type of speech-recognition software used, it is possible to create the so-called ‘macros’, so the respeaker can dictate the macro instead of the name associated to. 15 An extensive study of speech-to-text reporting through stenotyping and respeaking is provided by Arma (2007) with a full analysis of the continuum line between formal/written and informal/spoken language.
Real Time Subtitling for the Deaf and Hard of Hearing
129
and can read or speak almost normally. Good respeakers are aware of these differences and have a solid background of work with deaf people. This background is fundamental to ‘tailor’ subtitles during conferences. In conferences with a majority of signing deaf persons, subtitles tend to have a more simplified structure; the respeaker usually tends to insert more periods and to respect the subject + verb + object structure more; as a consequence, sentences tend to be shorter and the degree of reformulation of the original text is slightly higher. Especially when it comes to impromptu speech, this can be a very demanding task. With a majority of bilingual or oral deaf persons, the respeaker generally tends to respect the original sentence structure more and to dictate longer sentences, while simplifying the sentence structure less. However, the balance between reformulation, simplification and fidelity to the original text is not something easy to achieve and keep during the whole conference. Most deaf persons refuse a ‘patronizing’ attitude and prefer verbatim respeaking even if this means longer, quicker and more difficult sentences. Similarly, the balance between neutrality and interpretation in respeaking remains unexplored. Conference respeaking triggers a high quantity of unfinished sentences, colloquial and informal expressions, misspellings and mistakes (especially when it comes to names and dates). Respeakers must decide whether ‘correcting’ the speaker or leave the (un)wanted mistake for deaf persons to read. The most ‘purist’ translation fellows may argue that it is not the task of any translator/interpreter/respeaker to correct the original intent or the source text production; however, since the ultimate task of a respeaker is to mediate and facilitate the communication, non-invasive interventions may be of some help to the communication as a whole. Any professional respeaker shall be able to cope with a number of stressful situations, which can occur during conferences, especially with regard to the following: - Subtitling errors. These errors can originate from wrong dictation or wrong pronunciation of the respeaker16 (some causes can be miscomprehension of the source text, stress, coughing or distraction, bad or modified audio settings). - Fast delivery pace of speakers. Often speakers realize that they are speaking too fast; more often, somebody tell them to slow down. Less frequently, the respeaker communicates with speakers directly and asks them to speak more slowly or repeat (however, strings like ‘the microphone is not working’, ‘Please, talk louder’ are not
16 Hilarious situations can occur when the respeaker does not have enough time to correct his/her output and swear words are displayed because of incorrect recognition and transcription.
infrequent at all). These requests are dictated and are therefore visible to the whole audience17.
- Wrong audio setup. In this case, the best solution is to stop dictating and reset the audio from scratch. If this is not possible, the respeaker can choose to use the keyboard instead of voice, even though this means being relatively slower than dictating.
- Subtitling from an interpreted speech. Conferences are often interpreted from and into other languages. In this case, respeakers generally work in interpreting booths, listen to the main audio of the conference room or use relay interpreting. It should be pointed out that in Italy only a few informal tests of intra-lingual respeaking have been conducted and that almost all subtitled conferences provide Italian subtitles18. If a respeaker has to use relay interpreting, he/she will have to take into account the décalage between the delivery of the interpreter, the dictation to the speech recognition software and the final output on the screen. This décalage might lead to subtitles overlapping slides, videos and pictures and compromise the comprehension of the final audience.
4.4. Post-conference operations
The job of a respeaker does not end when the conference does. Post-conference operations can be as demanding as respeaking itself. Usually respeakers collect feedback from the audience and the organisers, especially in terms of display settings, pace of delivery, use of technical or difficult vocabulary and overall quality of the service provided. Respeakers are often asked to provide the subtitle file in order to produce a verbatim report of the event. It should be pointed out that the subtitle file cannot be considered a reliable verbatim report, since the main task of conference respeaking for the deaf and hard of hearing is not a mere transcription or a summary/full report but rather the communication of an oral content to persons who cannot hear. Respeaking can be applied
17 In this case, a respeaker generally uses bold type or a different colour. Alternatively, the sentence can be introduced and closed by a special symbol, such as '***'.
18 First, it would be very difficult for a non-native or non-bilingual respeaker to use speech recognition in a foreign language. For this reason, relay interpreting proves to be more practical. Secondly, since the majority of deaf persons attending a conference in Italy are Italian, subtitles in a foreign language are not required, though they would be of the utmost importance for a foreign audience. This is possibly one of the main future developments of remote respeaking.
to reporting as well, but this requires different skills and procedures. Despite this, the subtitle file can serve as a basis for the production of conference proceedings or reports, provided that an audio recording of the event is also available. In this case, the respeakers themselves or the conference organizers can edit the file and produce a final version for publication. The importance of this file is still underestimated. Its availability can be of great help for future use and applications in a number of different fields. First, the edited subtitle file can be used for online publication, improving the performance of the website where it is published in terms of indexing and ranking on search engines. Second, it can be used as a basis for producing off-line closed captions of videos (to be published on YouTube, on the website or even on a DVD). Third, it represents a valid and cost-effective means of advertising the proceedings of the conference and of disseminating information about the conference itself, while guaranteeing transparency and universal access. Last, since the subtitle file is generally a text file, it can easily be processed and read by speech synthesis tools and therefore used by blind or visually impaired people.
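Since the respeaking output is normally stored as a timed subtitle file, extracting its plain text for reports, web publication or speech synthesis is a simple operation. The snippet below is a minimal sketch of such a conversion, assuming the subtitles are exported in the common SRT format; the file name and function name are illustrative only and are not prescribed by this chapter.

```python
import re

def srt_to_plain_text(srt_path):
    """Strip cue numbers and timecodes from an SRT file, keeping only
    the subtitle text for reuse in reports or speech-synthesis tools."""
    timecode = re.compile(r"^\d{2}:\d{2}:\d{2},\d{3} --> ")
    lines = []
    with open(srt_path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            # Skip blank lines, cue numbers and timecode lines (a naive
            # filter: a subtitle consisting only of digits would also be dropped).
            if not line or line.isdigit() or timecode.match(line):
                continue
            lines.append(line)
    return " ".join(lines)

if __name__ == "__main__":
    # Hypothetical file name used only for illustration.
    print(srt_to_plain_text("conference_subtitles.srt"))
```

The resulting text can then be revised against the audio recording before being published as proceedings or passed to a screen reader.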
5. Conclusions
Respeaking is a technique that consists of the dictation and transcription of an oral or written content by means of speech recognition software that is trained on the voice of the user and automatically writes what he/she is saying. When aimed at making live events fully accessible to persons with a hearing impairment, respeaking becomes a live subtitling technique that can be used in a variety of settings, from TV to sports events, from schools to conferences. In this article we have sketched the features of conference respeaking, with a focus on the language used by the respeaker, as well as on technical and professional aspects of this relatively new profession, which is still not recognized in Italy but whose importance is crucial for, though not limited to, the community of deaf persons. As we have seen, when approaching conference respeaking, a respeaker has to carry out many different tasks before, during and after the event, from the preparation of technical instruments to the training of the software, and from sentence reformulation and simplification to the need to balance neutrality and interpretation while avoiding a patronizing attitude towards end users. Finally, we have seen that real respeaking-based subtitling is not a matter of transcription or reporting, but rather a matter of communicating to an audience with special needs in a form (subtitles) that has a strongly conventionalized structure. For this
reason, despite the similarities (in terms of cognitive processes) with interpreting19, intra-lingual respeaking remains an activity for which deep knowledge of the final audience and of subtitling conventions is a must.
References
American Rehabilitation Act of 1973. Retrieved from http://www.hhs.gov/ocr/civilrights/resources/factsheets/504.pdf and http://deafness.about.com/gi/dynamic/offsite.htm?zi=1/XJandsdn=deafnessandzu=http%3A%2F%2Fwww.section508.gov%2F
ADA Americans with Disabilities Act of 2010. Retrieved from http://deafness.about.com/gi/dynamic/offsite.htm?zi=1/XJandsdn=deafnessandzu=http%3A%2F%2Fwww.usdoj.gov%2Fcrt%2Fada%2Fadahom1.htm
Arma, Saveria. 2007. La Resocontazione Parlamentare tra Stenotipia e Riconoscimento del Parlato. Unpublished MA Dissertation. SSLMIT, Forlì.
Arumí Ribas, Marta & Romero Fresco, Pablo. 2008. "A Practical Proposal for the Training of Respeakers." JoSTrans - The Journal of Specialised Translation. Retrieved from http://www.jostrans.org/issue10/art_arumi.php
Eugeni, Carlo. 2008. La Sottotitolazione in Diretta TV. Analisi Strategica del Rispeakeraggio Verbatim di BBC News. Unpublished PhD thesis. University "Federico II", Naples.
Gottlieb, Henrik. 2005, May. "Multidimensional Translation: Semantics turned Semiotics." MuTra Conference Proceedings. Retrieved from http://www.euroconferences.info/proceedings/2005_Proceedings/2005_proceedings.html
Keyes, Bettye. 2007, July. "Realtime by Voice: Just what you need to know." Paper presented at the Intersteno Congress, Prague. Retrieved from http://www.intersteno.it/materiale/Praga2007/praga_conferences/BettyeKeyes.htm
Lambourne, Andrew. 2007, May. "Real-time Subtitling: Extreme Audiovisual Translation." Paper presented at the conference LSP Translation Scenarios, Vienna.
19 Useful insights into the education and training of respeakers, as well as into the professional and cognitive similarities between respeakers and simultaneous interpreters, are provided by Arumí Ribas and Romero Fresco (2008).
Remael, Aline & van der Veer, Bart. 2007, May. "Teaching Live Subtitling with Speech Recognition Technology: What are the Challenges?" Paper presented at the conference LSP Translation Scenarios, Vienna.
Romero-Fresco, Pablo. 2011. Subtitling through Speech Recognition: Respeaking. Manchester: St. Jerome Publishing.
UN Convention on the Rights of Persons with Disabilities of 2008. Retrieved from http://www.un.org/disabilities/documents/convention/convoptprote.pdf
Abstract
Real time subtitling for the deaf and hard of hearing: An introduction to conference respeaking
Key words: real time subtitling, deaf, conference respeaking
Subtitling through respeaking is a relatively well-known technique for live TV programmes and news. However, when it comes to conference respeaking, literature and practices are much less abundant. This contribution is an introduction to respeaking-based live subtitling for the deaf and the hard of hearing in a conference setting. In particular, the author focuses on linguistic choices and strategies, as well as on technical and professional aspects and post-conference operations, which are deeply influenced by the special type of 'event' and 'target audience'. Indeed, respeakers perform a number of different operations, from the preparation of technical instruments to the training of the software, and from sentence reformulation to simplification. Real respeaking-based subtitling appears not to be a matter of transcription or reporting, but rather a matter of communicating to an audience with special needs in a form (subtitles) that has a conventionalized structure. Given the scarcity of literature concerning conference respeaking, this contribution is strongly focused on empirical data.
CHAPTER EIGHT
FRANCE'S NATIONAL QUALITY STANDARD FOR SUBTITLING FOR THE DEAF AND HARD OF HEARING: AN EVALUATION
TIA MULLER1
1. Introduction
In France the Loi pour l'égalité des droits et des chances, la participation et la citoyenneté des personnes handicapées (the Equal Rights and Opportunities, Participation and Citizenship of People with Disabilities Act (No. 2005-102)), passed by the government in 2005, required all state-owned and private channels with a minimum annual audience share of 2.5% to use adapted subtitles to make 100% of their programming accessible to the deaf and hard-of-hearing (HoH) by 12 February 2010 (Muller 2012). Prior to the introduction of this law, French channels were under no obligation to provide subtitling for the deaf and HoH (SDH). However, scholars, professionals and associations (Remael 2007; Jullien, personal communication, 30 June 2010; Caasem 2010) lamented that regulations of this kind, which came into force across Europe around that time, promoted the rapid increase in the quantity of SDH to the detriment of quality. The ensuing discussions in France between associations, subtitlers and the Conseil Supérieur de l'Audiovisuel (CSA) informed a directive in the 2010 governmental Program, which entailed the creation of a reference document on minimum SDH requirements (Secrétariat d'État chargé de la Famille et de la Solidarité, 2010). This document, the Charte relative à
1
Universitat Autònoma, Spain. Email address: [email protected]
la qualité du sous-titrage à destination des personnes sourdes ou malentendantes (Charte),2 was signed by major SDH stakeholders and put into practice on 12 December 2011. Written by a consortium of interested parties and published by the CSA, the Charte reflects customary French SDH norms or, as is the case with rules seven and 11, homogenizes them. It does not introduce anything new. Although the Charte is not legally binding for its signatories, the CSA does, however, have the power to send a formal warning and later penalize those signatories who disregard it. This article describes the Charte and studies its 16 constituent rules by evaluating them in relation to SDH addressees’ opinions captured in a 2010 survey, other European guidelines, and empirical studies, in order to assess the validity of the components it sets out for all the stakeholders involved. The rules that make up the Charte correspond to what Hermans identifies as ‘strong, institutionalized norm(s)’ that have been ‘issued by an identifiable authority armed with the power to impose sanctions for non-compliance’ (1999, 82). Throughout this article the French critères has been translated as ‘rule(s)’, and ‘French set of rules’ is used to refer to the document that contains these rules—the Charte.
2. Methodology
Bartoll (2008) identified three subtitling parameters—pragmatic, linguistic and technical. Building on Bartoll's classification, Arnáiz-Uzquiza (2012) maintains the pragmatic and linguistic parameters, but subdivides the technical parameter into three—aesthetic, technical and aesthetic-technical—and also creates an additional SDH-specific parameter, the extralinguistic. Each of Arnáiz-Uzquiza's six parameters is defined by a number of characteristics that are, in turn, shaped by a range of 'variables'. For example, the linguistic parameter is defined by 'language' and 'density', and these two characteristics can be further shaped by an 'intralingual variable' or by the 'verbatim' or 'condensed' variables respectively. This paper associates each of the Charte's 16 rules with an SDH characteristic and then groups them using Arnáiz-Uzquiza's typology. Once grouped according to these parameters, each of the Charte's 16 rules is then evaluated, primarily in relation to three documents: a 2010 survey (French survey) that captured French deaf and HoH people's opinions on SDH norms on television (Muller, Forthcoming), and the
2
See Appendix A.
current guidelines used for SDH on television in two European countries, the UK and Spain (OFCOM 1999; AENOR 2012).3 The Union Nationale des Associations de Parents d'Enfants Déficients Auditifs, a deaf and HoH association, helped design the French survey.4 Its objective was to examine the participants’ opinions on SDH, focusing on the various techniques and methods employed on French television while also suggesting innovative approaches. The survey was posted on the association’s website, and responding to its 58 questions gave French SDH addressees their first opportunity ever to voice their opinion at a national level. Participation was voluntary and there were a total of 124 responses. This article draws on other fields of knowledge. Indeed, due to its complexity and its ‘functional nature’, SDH, and by extension its study, draws on many different disciplines and areas of research—including film studies, musicology, Deaf studies,5 linguistics, and within translation studies, interlingual subtitling, SDH theory and live subtitling—in order ‘to arrive at a better understanding of the whole’ (Neves 2005, 314). Finally, this article substantiates certain points by using material gathered by interviewing established SDH professionals in France. As research into SDH is in its infancy in France, insights from authorities in the field were of great value.
3. Pragmatic Parameter
Arnáiz-Uzquiza's (2012) pragmatic parameter includes addressees' characteristics, the aim of the SDH production, the production date, and its authoring. None of these elements are covered by the Charte's 16 rules. However, they are discernible in its title, introduction, layout and signatories. The Charte relative à la qualité du sous-titrage à destination des personnes sourdes ou malentendantes can be translated as 'Charter relating to the quality of subtitles addressed to the deaf or HoH'. It is rather unusual to use the term 'charter' as it refers to 'constitutional laws
3 Norms or standards that exist in the USA, Canada, South America or Australia are not used in this study as their political, cultural and educational contexts vary greatly from those in Europe.
4 See Appendix B.
5 Deaf with a capital letter refers socially to the Deaf community, for whom sign language is generally the mother tongue; deaf written with a lowercase d refers to the medical condition (Sacks 1990).
established by a sovereign’ (Robert, Rey and Rey-Debove 2002, 406).6 There is a possible semantic link between this title and the issue of human rights as it evokes the Charter of Fundamental Rights of the European Union (2000). Additionally, there seems to be a practical link between the two documents as the European document, like the French set of rules, refers to people with disabilities and their right to ‘measures designed to ensure their independence, social and occupational integration’ (European Parliament; European Commission; European Council, 2000, 14). The conjunction ‘or’ in the title is used between the two categories of addressees, yet researchers normally use the conjunction ‘and’ (the deaf ‘and’ HoH) thereby bringing the two distinct groups together. The physiological, psychological and social differences between the deaf and HoH have been discussed extensively by Audiovisual Translation scholars, such as de Linde and Kay (1999), Neves (2005), Díaz Cintas (2009), and Bartoll and Martínez Tejerina (2010). Further studies by Báez Montero and Fernández Soneira (2010) and Pereira (2010a, 2010b) have shown that due to the groups’ differing needs, separate guidelines that would ultimately lead to varying sets of televised SDH should be envisaged. The title of the French set of rules could lead the reader to believe that different sets of subtitles for the two groups are being put forward; however, this is not the case. Instead, the conjunction was chosen to highlight that a person with hearing loss is either deaf or HoH (Jullien, personal communication, 23 February 2013). The brief introduction to the six-page Charte contains an outline of the legal background (see 1. Introduction above), the scope of the rules of television, and reminds readers that each rule should be respected at all times when producing SDH. The main body of the document is divided into three sections that correspond to different types of programmes: all, pre-recorded and live. Under the ‘all programmes’ section, five rules outline issues such as subtitle editing and legibility. In the next section, ‘pre-recorded programmes not broadcast live’, nine rules cover subjects including reading speed and shot changes. There are then two final rules relating to ‘all live programmes broadcast live or subtitled in live conditions’ that deal with character identification and delays between speech and subtitles. Each of the 16 rules consists of up to two explanatory sentences. However, there are no accompanying examples, with the exception of two footnotes—the first illustrates the sound effect rule and
6
My translation.
the second the segmentation rule—and a detailed graphic that accompanies the point about required reading speed. The Charte ends with the date it was signed, the names of the representatives from the Ministère de la Culture et de la Communication, the Ministère des solidarités et de la cohésion sociale,7 and the CSA who acted as witnesses, and a list of 32 signatories and their organisational affiliations. The 32 signatories are grouped into three sub-categories: associations, agencies and broadcasting corporations. Distributed under these headings there are eight deaf and HoH national associations and one subtitlers’ association (Caasem); 13 subtitling agencies; and nine broadcasting corporations plus one media association. The nine broadcasting corporations represent the 26 state and privately-owned channels, which they own between them and that make up 100% of the digital terrestrial television (DTTV) operators in France, while the media association signed on behalf of an additional 33 cable, satellite or ADSL (via the Internet) television channels.
4. Technical Parameter
Referring to the characteristics that are least visible to addressees (Arnáiz-Uzquiza 2012), the technical parameter is dealt with in the Charte solely through the 'broadcasting norms' rule. Conforming to European regulations, the Charte stipulates that subtitles broadcast on DTTV must be displayed in accordance with the European standard for Digital Video Broadcasting (DVB) subtitling systems. First created in 1997 to homogenise subtitling display norms across European countries, the European Telecommunication Standard ETS 300 743 (European Broadcasting Union 1997) was updated to encompass new technologies in 2006, becoming the EN 300 743 standard (European Broadcasting Union, 2006). The original standard, along with any future updates, was ratified by the French government as a departmental order on 21 December 2001 (Fabius 2001). More flexible than the previous Teletext system, DTTV subtitles are bit-map images that make it possible to employ a greater range of colours, symbols, font styles and sizes when creating subtitles. Unlike with the old
7
The Department of Culture and Communication and the Department of Solidarity and Social Cohesion.
system, the viewer does not have to turn off the subtitles in order to change channel.
5. Aesthetic-Technical Parameter
Arnáiz-Uzquiza (2012, 118) points out that the aesthetic-technical parameter affects 'the subtitles' visual aspect' and that rather than being 'directly influenced by the subtitlers' choices, is a consequence of the production process and of the configuration of the finished product'.8 The Charte contains two elements that relate to this parameter—reading speed and delay in live subtitling.
5.1. Reading Speed
Rule six of the Charte stipulates that for all pre-recorded programmes the subtitle reading speed should be 12 characters per second (cps), 20 characters for two seconds, 36 characters for three seconds, and 60 characters for four seconds. It also specifies that there should be a 20% tolerance margin for these speeds. Reflecting what is currently typical in France, these reading speeds allow for limited subtitle editing while remaining readable for the average reader (Jullien, personal communication, 8 March 2013). Although slightly lower than the Spanish 15cps norm (AENOR 2012), the French recommendation is consistent with the British guidelines (OFCOM 1999). It is worth noting in relation to this that 70% of the deaf and 74% of the HoH respondents to the French survey stated that they found subtitles for pre-recorded programmes to be at the right speed for them to have time to read everything. However, this means that 30% and 26% respectively acknowledged that they have difficulties in reading subtitles. Deaf people are known to find reading skills difficult to master. For example, it is typical for deaf 18-year-olds to have a reading age and writing skills similar to those of a hearing nine- to 10-year-old (Lepot-Froment 2004). Further evidence in a French report notes that 54% of people with severe hearing loss aged between six and 25 state they have trouble reading, writing and counting, while the same is true for only 6% of their hearing counterparts (DREES 2007). It could, therefore, be argued that a slight reduction in reading
8
My translation.
speed might benefit all SDH viewers. However, this would require more extensive text editing—something that is not necessarily welcomed by deaf and HoH viewers (see 7.1. Editing section below) as it can make subtitles unreadable. This quandary leads to the conclusion that the current reading speed for pre-recorded programmes, although not satisfactory for all SDH viewers, is adequate for the majority of them. The Charte does not set a reading speed for live subtitling in France. As in the UK and Spain, this is currently dictated by how fast speakers talk. However, aiming to be exhaustive, the UK and Spanish guidelines dedicate several paragraphs to the matter, whereas the subject is not tackled in the French document. In the French survey 39% of deaf and 42% of HoH participants found live subtitles too fast, while 44% and 45% respectively found them to be just the right speed.9 These results indicate that further research on the average reading speed for live programmes is necessary to discern whether a maximum reading speed that maintains a minimum delay (see 5.2. Delay in live subtitling section below) should be set to improve accessibility for all.
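Rule six can be read as a small table mapping display time to a maximum number of characters (12 cps, with the listed values for one-, two-, three- and four-second subtitles) plus a 20% tolerance. The sketch below shows how such a check could be automated; the function names and the linear interpolation between (and extrapolation beyond) the listed durations are assumptions made for illustration, not provisions of the Charte.

```python
# Character limits listed in rule six: 12 cps, 20 characters for two
# seconds, 36 for three seconds and 60 for four seconds.
RULE_SIX_LIMITS = {1: 12, 2: 20, 3: 36, 4: 60}
TOLERANCE = 0.20  # the Charte allows a 20% margin on these speeds

def max_characters(duration, limits=RULE_SIX_LIMITS, tolerance=TOLERANCE):
    """Return the tolerated character limit for a subtitle displayed for
    `duration` seconds. Values between or beyond the listed durations are
    interpolated/extrapolated linearly (an assumption, not a rule)."""
    secs = sorted(limits)
    if duration <= secs[0]:
        base = limits[secs[0]] * duration / secs[0]
    elif duration >= secs[-1]:
        base = limits[secs[-1]] * duration / secs[-1]
    else:
        lo = max(s for s in secs if s <= duration)
        hi = min(s for s in secs if s >= duration)
        base = limits[lo] if lo == hi else (
            limits[lo] + (duration - lo) / (hi - lo) * (limits[hi] - limits[lo]))
    return base * (1 + tolerance)

def respects_rule_six(text, duration):
    """True if the subtitle text fits within the tolerated limit."""
    return len(text) <= max_characters(duration)

# A 36-character subtitle shown for three seconds falls within the rule.
print(respects_rule_six("Une phrase de trente-six caracteres.", 3.0))
```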
5.2. Delay in Live Subtitling
Rule 16 of the Charte stipulates that during live events the delay between speech and the corresponding subtitle should be less than 10 seconds. Live subtitling in France is mainly performed using speech recognition software10 or velotype keyboards.11 French is a particularly difficult language to write due to its complex spelling system, high number of homophones, and countless grammatical rules. Furthermore, perfect spelling is mandatory at all levels and across all facets of society, including television. Consequently, unlike in any other country, it is common in France for channels to have as many as four people working on the production of live subtitles in order to eliminate errors (Caschelin 2013). This emphasis on eliminating errors causes a great deal of delay. For example, based on a live subtitle quality control test of the two principal French channels (the privately-owned TF1 and the state-owned France 2) during the debate between the two final candidates for the
9 The remaining 17% and 13% respectively found live subtitles too slow.
10 See Arma's contribution in this volume.
11 A velotype keyboard requires the user to press several keys simultaneously and produces syllables rather than letters.
2012 presidential elections—the fifth most watched programme of the year (Médiamétrie 2013)—the CSA found that on average (55% of the time) the subtitles on France 2 were delayed by between 11 and 20 seconds, while on TF1 there was a five-to-10-second delay 37% of the time and an 11-to-20-second delay 28% of the time (CSA 2012). These results show that channels experience difficulties in achieving the delay of less than 10 seconds required by the Charte, which was already in effect at the time of this debate. Lambourne et al. (2004) note that when events on screen require synchrony between the image and the sound, subtitle delays of more than five or six seconds can make comprehension problematic for viewers. On this basis the current delay in live subtitling should be reduced in line with the Charte's rule (or further) to improve viewers' experience and comprehension of live events. One possible way to achieve this would be to stop prioritising perfect spelling over delay. Another would be to delay the broadcast of live events by a few seconds in order for subtitlers to produce the subtitles and release them simultaneously with the programme—the method used in the Netherlands (Romero-Fresco 2011). In this respect, it is significant that participants in the French survey rated 'a minimum delay between speech and the display of the subtitle' second, while 'few language mistakes' came fourth out of the five aspects of live subtitling they had to assess as most or least important to them.12 Arguably this could be seen to support the need to reconsider the current approach to live subtitling in France.
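Measuring the delay described above amounts to pairing each utterance with its corresponding subtitle and bucketing the time differences, much as the CSA did for the presidential debate. The sketch below illustrates that calculation; the data structure and bucket boundaries are assumptions chosen to mirror the ranges quoted in the study, and the sample timestamps are invented for illustration.

```python
def delay_buckets(pairs, bounds=(5, 10, 20)):
    """Given (speech_time, subtitle_time) pairs in seconds, return the
    share of subtitles whose delay falls into each range:
    0-5 s, 5-10 s, 10-20 s and over 20 s."""
    delays = [sub - speech for speech, sub in pairs]
    labels = ["0-5 s", "5-10 s", "10-20 s", "over 20 s"]
    counts = [0, 0, 0, 0]
    for d in delays:
        if d <= bounds[0]:
            counts[0] += 1
        elif d <= bounds[1]:
            counts[1] += 1
        elif d <= bounds[2]:
            counts[2] += 1
        else:
            counts[3] += 1
    total = len(delays) or 1
    return {label: count / total for label, count in zip(labels, counts)}

# Illustrative timestamps only, not data from the CSA study.
sample = [(0.0, 8.0), (12.0, 25.0), (30.0, 39.0), (45.0, 57.0)]
print(delay_buckets(sample))
```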
6. Aesthetic Parameter
This parameter refers to the visual aspects of subtitles (Arnáiz-Uzquiza 2012). The Charte covers four of these characteristics: number of lines, subtitle placement, box usage and shot changes. However, it fails to provide information on other aesthetic elements, such as font style and size, number of characters per line, subtitle justification, line spacing or synchrony with the image.
12
Respondents had to rank the five aspects of live subtitling they considered to be most or least important to them. The results show that a good position on screen came first; a minimum delay between speech and the display of the subtitles, second; an acceptable reading speed, third; few language mistakes, fourth; and subtitles that include everything that is said, fifth.
6.1. Number of Lines
The third rule of the Charte stipulates that there should be up to two lines of subtitles for pre-recorded programmes and three for live ones. The physical limitations of the size of the screen, the image itself and the subtitle reading time restrict the number of lines available for subtitles. For these reasons, most studies indicate that the maximum amount should be two full lines of text (Luyken, Herbst, Langham-Brown, Reid, and Spinhof 1991; Becquemont 1996; Ivarsson and Carroll 1998; Díaz Cintas and Remael 2007). However, depending on the type of programme, these researchers agree that three lines could occasionally be used. Like the Charte, the Spanish and British guidelines both recommend that three lines should only be used in exceptional circumstances and mostly for live programmes. As such, current research and European guidelines would seem to support the Charte's stance on the optimal number of lines.
6.2. Subtitle Placement
In France subtitles are usually placed at the bottom of the screen, but the Charte fails to endorse this norm as it does not specifically mention on-screen subtitle placement. However, part of the third rule of the Charte does suggest that, whenever possible, subtitles should not hide any on-screen information (names and titles of interviewees, definitions, opening or closing credits) or other important visual elements such as maps, graphs or speakers' mouths—which allows for lip reading. Indeed, subtitles should not obstruct important on-screen elements, because information is lost and the subtitle itself may become illegible. Not only do the British and Spanish guidelines support this approach but, when rating five different facets of live subtitles, participants in the French survey also deemed a good position on screen that would not hide any information to be most important. These factors suggest that, as indicated by the Charte, subtitles should not obstruct important on-screen elements.
6.3. Box
Rule five of the Charte stipulates that across all television networks DTTV subtitles should be displayed in a dark translucent box and that the letters should be outlined in black. Associated more with the Teletext system and rarely used with DTTV subtitles, this box creates a better contrast between the image and the subtitles, making the latter easier to read. The issue raised comments from a number
of the French survey participants, who noted that they would like it to be included at all times as they had experienced a decrease in legibility when digital subtitles were introduced. This has been corroborated by a later online survey conducted by the association Médias Sous-Titrés (Drouvroy-Simonnet 2011) in which 74% of participants voted in favour of the automatic inclusion of a box. The box is a measure recommended in the British and Spanish guidelines, and by including this rule the Charte supports the improvement of on-screen legibility.
6.4. Shot Changes
The Charte's rule 14 specifies that subtitles should remain discreet by respecting shot changes (i.e. they should not be displayed across these changes) and by following the rhythm of the programme as much as possible. Indeed, subtitles that are shown across shot changes are confusing as they 'cause the viewers to return to the beginning of a partially read subtitle and start re-reading' (de Linde and Kay 1999, 6). In practice, though, it is not always feasible to follow this rule. It is currently popular for films to feature rapid editing and a large number of shorter shots. Bordwell and Thompson (2008, 246) recently gave the example of The Bourne Supremacy, in which the average shot length is 'less than two seconds'. This fast pace makes it difficult for subtitlers to systematically respect shot changes while also respecting the rhythm of the film and the reading speeds. Moreover, usually added at the post-production stage, subtitles have the potential to disfigure images, which form the essence of audiovisual texts (Becquemont 1996). Although some degree of visual disruption is inevitable for deaf and HoH viewers, the Charte rightly suggests that subtitles should be as unobtrusive as possible, which, as Neves (2005, 130) has previously pointed out, reduces the viewer's processing load between images and subtitles, thereby easing interpretation and comprehension.
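In purely mechanical terms, checking whether a subtitle 'respects' shot changes means verifying that no cut falls inside its display interval. The following sketch illustrates that check under assumed data structures (cue in/out times and shot-change times in seconds); it is an illustration, not a description of any existing subtitling tool.

```python
def crosses_shot_change(cue_in, cue_out, shot_changes):
    """Return True if any shot change falls strictly inside the subtitle's
    display interval, i.e. the subtitle would stay on screen across a cut."""
    return any(cue_in < cut < cue_out for cut in shot_changes)

# Illustrative values: the cut at 12.4 s falls inside the second cue.
shot_cuts = [3.2, 7.9, 12.4, 15.0]
cues = [(1.0, 3.1), (10.5, 13.0), (13.1, 14.9)]
for cue_in, cue_out in cues:
    print(cue_in, cue_out, crosses_shot_change(cue_in, cue_out, shot_cuts))
```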
7. Linguistic Parameter
SDH consists, in part, of re-constructing the audio channel into written messages. The Charte covers two important elements of this process—editing and segmentation.
France’s National Quality Standard for Subtitling
145
7.1. Editing
The first and second rules of the Charte state that subtitles should not only respect the oral message but also French spelling, grammar and conjugations, thus pointing towards a preference for edited rather than verbatim SDH, which conveys everything that is said. Although there is a perception amongst some deaf and HoH people that verbatim subtitles are the best means of receiving the same amount of information as hearing viewers (Kyle 1992; Neves 2005), they can be extremely difficult to follow due to high speech rates. Analysing speech rates in live programmes on BBC channels, Romero-Fresco (2009) notes that sports coverage averages 160wpm and interviews 230wpm. These figures confirm that if subtitles were displayed verbatim they would be too quick for most readers, and SDH readers in particular. The French survey results also support this, with respondents classifying verbatim subtitles as the least important element and placing greater value on acceptable reading speeds and fewer language mistakes. This preference indicates that, as suggested by the Charte, there is a need for some degree of editing in SDH. However, editing is a complex exercise in SDH as subtitlers are forced to make 'selective judgements' (de Linde and Kay 1999, 17). They must be cautious when altering words or sentence structure because the intended meaning of the text has to be maintained. Editing methods such as omission, condensation and reformulation need to be used carefully in order to preserve visual cohesion and narrative coherence (Neves 2005). For example, omitting easily lip-read words could be extremely disconcerting for Deaf viewers, those with residual hearing or their hearing family members. Another example is markers of speech. Although not usually applicable to interlingual subtitles, including them in SDH could be beneficial as they often give an indication of a character's personality. However, while the Spanish and British guidelines dedicate four and three pages to the editing of subtitles respectively, including various examples, no editing methods are discussed in the Charte. The failure of the French set of rules to address the issue could lead to disorientating divergences for SDH readers across channels or programmes, as SDH subtitlers might choose differing editing techniques for similar situations.
7.2. Segmentation
In subtitling, segmentation is the division of the written text into sections or segments of syntactic units (Díaz Cintas and Remael 2007).
The Charte specifies in its 13th rule that, to facilitate overall understanding, segmentation within a subtitle (line breaks) and over several subtitles needs to respect these units. This is illustrated in a footnote by an incorrect (Il déteste les jeunes/filles.) and a correct example (Il déteste/les jeunes filles.). This characteristic is discussed in a similar manner in the Spanish and British guidelines. For readers to comprehend a written text, they need to decode it 'by accessing, identifying and holistically combining letters into words, words into phrases and phrases into sentences' (Perego 2008, 213). This process, known as parsing, is usually done at the level of the syntactic unit. In other words, readers do not read word by word but rather search for groups of words. Deaf readers seem to act similarly and 'seek the nucleus of syntactic units to create visual representations derived from the mental translation of the semiotic shape in sign language' (Virole and Martenot 2006, 467).13 As both hearing and Deaf viewers read texts at the syntactic unit level, it seems important that in relation to subtitles—another kind of text—the Charte should address optimal segmentation in an unambiguous manner. A recent experiment used a subtitled video excerpt to test cognitive processing and recognition in relation to subtitle segmentation. Although only hearing participants took part in the test, the researchers involved concluded that 'subtitle segmentation quality did not have a significant impact' on subtitle processing (Perego, Del Missier, Porta, & Mosconi 2010, 263). Further empirical research is needed to ascertain whether or not a similar conclusion would be reached for Deaf and HoH participants.
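In automated terms, respecting syntactic units means preferring break points that do not separate, for instance, a determiner or adjective from the noun it introduces. The deliberately naive sketch below illustrates the idea, assuming that the words which must not end a line are supplied by the caller; professional subtitling software would rely on proper syntactic analysis rather than a word list.

```python
def choose_line_break(words, max_line_length, do_not_end_line=()):
    """Pick the latest break point that keeps the first line within
    max_line_length characters and does not end that line on a word
    (such as a determiner or adjective) that belongs with the word
    following it. Returns the index at which the second line starts."""
    best = 1  # fall back to breaking after the first word
    length = 0
    for i, word in enumerate(words[:-1]):
        length += len(word) + (1 if i else 0)  # account for spaces
        if length > max_line_length:
            break
        if word.lower() not in do_not_end_line:
            best = i + 1
    return best

# The Charte's footnote example: avoid "Il déteste les jeunes / filles."
words = ["Il", "déteste", "les", "jeunes", "filles."]
split = choose_line_break(words, max_line_length=15,
                          do_not_end_line={"les", "jeunes"})
print(" ".join(words[:split]), "/", " ".join(words[split:]))
# -> Il déteste / les jeunes filles.
```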
8. Extralinguistic Parameter
The extralinguistic parameter encompasses aspects that represent non-verbal information present in the audiovisual text (Arnáiz-Uzquiza 2012); these include sound effects, music, paralinguistic elements and character identification.
8.1. Sound Effects
Focused on the matter of sound effects, the tenth rule of the Charte stipulates that they must always be displayed in red. Although it has been shown that this colour is difficult to read on screen (Baker et al. 1984),
13
My translation.
using a separate colour for sound effect subtitles could help to make them easily recognisable. The British and Spanish guidelines also recommend using a distinct colour for such effects, albeit not red but other colours or combination of colours (background and/or letters). The Charte further clarifies in a footnote that only those sound effects that are meaningful to the plot or cannot be deduced from the image should be described. This is supported by an explanation that in the case of an on-screen explosion it would be unnecessary to describe it with the word Explosion as this is already evident to the viewer. This approach, which reduces the decoding load for SDH readers, is recommended in various studies and guidelines (de Linde and Kay 1999; OFCOM 2009; Neves 2005; AENOR 2012).
8.2. Music and Songs
The Charte's tenth rule also states that music should be rendered in magenta. As with the colour red, research suggests that magenta should be avoided as it is considered difficult to read on screen (Baker et al. 1984). Although the British guidelines specify avoiding magenta, the colour did score well in the French survey, with 74% opting to describe it as 'satisfactory'. However, as magenta has commonly been used in France for three decades this choice might have more to do with SDH addressees' familiarity with the colour and how this helps them to understand certain types of subtitle, rather than how legible it is on screen. The Charte stipulates that for songs there should be a transcription of French and foreign lyrics or, failing that, an indication of the singer's name and the song's title. However, it fails to give further guidance with regards to music. Bordwell and Thompson (2008, 273) stress the important role musical scores (music) play, explaining that 'by reordering and varying musical motifs' filmmakers 'subtly compare scenes, trace patterns of development, and suggest implicit meanings'. This indicates that a lack of subtitles that interpret music would deprive viewers of aural cues that enrich narratives and aid comprehension. The fact that 81% of respondents to the French survey declared that they would like a description of musical scores suggests that this is an aspect that the Charte should have addressed. Explained in greater detail in the British and Spanish guidelines, the exercise of adapting acoustic messages into written language can be very difficult in practice. It requires SDH subtitlers to have an understanding of music's various functions within the narrative and therefore, as Neves (2008) points out, demands musical interpretation skills that they may not currently possess. Specific training
might be required for them to be able to interpret and translate musical scores into written text. Moreover, because these subtitles may require different actions and skills it would be helpful if documents like the Charte outlined the distinction between songs and music more clearly.
8.3. Paralinguistic Information
Paralanguage refers to the non-verbal signs contained within speech that modify meaning and which may convey emotion. These elements include timbre, resonance, loudness, tempo, pitch, intonation range, syllabic duration and rhythm (Poyatos 1993). Although the inclusion of 'paralinguistic information [in subtitles] may be considered redundant for hearers, it is fundamental for the deaf' (Neves 2005, 149), as these signs often accompany the communicative act but are usually not visually interpretable. The Charte (rules 11 and 12) requires that words be put in brackets when they are whispered or are uttered as an aside, and that when several people speak at once the text should appear in capital letters. However, the French set of rules does not mention how other paralinguistic elements should be rendered. This could lead to subtitlers in France using differing techniques, thus creating confusing dissimilarities for SDH viewers across channels, or could result in the failure to render these elements at all. The BBC guidelines dedicate three pages to the matter and give detailed explanations for sarcasm, irony, accents, stuttering, and silences (BBC 2009). Given that punctuation cannot fully translate all paralinguistic signs (Neves 2005, 148), it can be postulated that what Neves (2009, 161) calls 'an explicitation' of the elements is necessary. This technique, recommended by the BBC guidelines, consists of 'making explicit in the target text information that is implicit in the source text' (Klaudy 2008, 80), e.g. where relevant, explanatory adjectives such as Slurred or Ironic should be placed at the beginning of subtitles. The majority of the French survey participants stated that they would like paralinguistic signs to 'always' be included in the subtitles, supporting the argument that the Charte should have addressed a wider variety of these elements.
8.4. Character Identification and Location
When viewers do not have access to aural information, the easy identification and location of characters who are speaking is essential. As outlined in rules seven, eight, nine and 15, the Charte recommends that subtitlers should adhere to a combination of methods.
Firstly, it states that the colour code defined for SDH should be respected for all pre-recorded programmes. Recent research has revealed that this code, which is unique to France, was created between 1982 and 1984 by the National Institute of the Young Deaf of Paris in collaboration with a group of Deaf people (Constantinidis, personal communication, 12 December 2012). Defined for the purpose of character identification in pre-recorded subtitles, this code stipulates the use of white for all on-screen dialogues, whether or not the character's mouth is visible, and yellow for all off-screen dialogues. Cyan is used for characters' interior monologues, narrators, and voice-overs in news reporting and documentaries. Green is used to indicate that a character is speaking a foreign language. In these cases, the specific foreign language is either spelt out (He speaks English) or, provided that this information is given to the hearing audience, translated into French. By contrast, in the UK and Spain one colour is normally assigned to a character throughout a programme (OFCOM 1999; AENOR 2012). Secondly, the Charte stipulates that an en dash should be used to indicate every change of speaker and that the subtitle should be placed under the speaker. Although it has been in use for nearly thirty years and was created for and by the Deaf, it can be argued that this French colour code, along with the use of a dash and subtitle placement, is not always adequate for the identification and location of characters. For the colour code to be effective one first needs to know the meaning of each colour, yet 41% of the French survey participants answered that they did not know the colour code by heart. This is likely to make decoding the subtitles more difficult and to increase the overall reading time required. Furthermore, although the use of white, a dash and subtitle placement might help to locate speaking characters on screen, the task immediately becomes problematic if the camera position changes. Since free-ranging camera movements (orienting shots, crane shots, prolonged following shots, etc.) have come to constitute 'a default menu for shooting any scene' (Bordwell 2006, 136), a subtitle may make the character on the right look as if s/he is on the left, thus complicating their identification. Moreover, using the colour yellow for characters located off screen might become insufficient when there is a group of people talking off screen or when there are voices of unknown characters off screen. It is worth noting here that 45% of the French survey participants stated experiencing difficulties when identifying off-screen characters. Using the colour green should also perhaps be questioned in the context of multiple colours adding to the viewers' decoding effort (Neves 2005). Multilingualism is a recent growing trend as films 'increasingly star
foreign actors, and take place in foreign locations' (Mingant 2010, 713). The most straightforward approach may be to use words such as He speaks English to preface the subtitle for each utterance in a foreign language prior to the translation into French. It could be posited that a decrease in the number of colours from six to the three that are easiest to read on screen—white for character identification, yellow for sound effects and green for music—would improve legibility and therefore render unnecessary the dark translucent box currently used to make subtitles easier to read. The removal of this box would also minimise the impact on the original image. Thirdly, the Charte in rule 15 states that for all live programmes, a name tag should be placed at the beginning of the subtitle and the appropriate colour code should be used, particularly when the presence of several speakers might cause confusion. As they spell out characters' names, these tags can be deemed the most efficient way to identify characters in both live and pre-recorded programmes. A similar approach can be seen in the Spanish and British guidelines, which stipulate that name tags should be used whenever confusion around character identification is possible (OFCOM 1999, 14; AENOR 2012, 11). When asked which method they liked the most for pre-recorded programmes, an overwhelming 81% of the French survey participants found name tags satisfactory. This marked preference would seem to indicate that name tags, which are already the most accessible way to identify characters who are on or off screen in live programming (where the speech rate is faster and denser), should also be considered for pre-recorded programmes. Consequently, adding a colour to avoid confusion, as is currently requested for live subtitling, may be superfluous and add unnecessary complexity rather than clarification. Furthermore, it might slow down the subtitler's work and add to the deciphering effort for SDH viewers.
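The current French colour code discussed above can be summarised as a simple mapping from speech or sound situation to subtitle colour. The sketch below records that mapping as described in this chapter; the dictionary keys are descriptive labels chosen for illustration rather than wording taken from the Charte.

```python
# The six-colour French SDH code as described in this chapter.
FRENCH_SDH_COLOURS = {
    "on-screen speaker": "white",
    "off-screen speaker": "yellow",
    "interior monologue / narrator / voice-over": "cyan",
    "foreign language": "green",
    "sound effects": "red",
    "music and songs": "magenta",
}

def subtitle_colour(situation):
    """Look up the colour assigned to a speech or sound situation;
    white (the on-screen default) is returned for unlisted cases."""
    return FRENCH_SDH_COLOURS.get(situation, "white")

print(subtitle_colour("off-screen speaker"))  # -> yellow
```

A reduced three-colour scheme of the kind proposed in this chapter could be obtained simply by editing this mapping.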
9. Conclusion
Although SDH has existed for over 30 years in France, the Charte represents a first attempt at creating a national quality standard. It indicates a willingness to address concerns about the declining quality of SDH. However, the Charte remains more of a stepping stone than a set of definitive guidelines as it fails to address a range of elements. Aspects such as font, characters per line, synchrony with the image, and subtitle justification are missing, as are detailed descriptions of linguistic issues such as editing. Also absent are signatures from scholars working in the
field, the other principal French subtitlers’ association (Ataa) as well as relevant references and a bibliography. While aimed at experienced subtitlers and subtitling agencies, who are already familiar with industry jargon, and at broadcasters, who could then check that the pertinent rules have been applied, the Charte falls short of providing exhaustive guidance about how to resolve the issues that subtitlers face on a daily basis. The Spanish and British guidelines are more comprehensive; both explain most of the issues tackled in their French counterpart in greater detail and give explanatory examples. Pereira and Lorenzo (2005) argue that guidelines should not only outline generalities but also explain specific issues in detail and suggest clear strategies that can be used to solve them, thereby enabling those who are less experienced to use the same tactics when faced with similar problems. Based on this definition the Charte falls short of being an exhaustive guide. The inconsistent use of norms hinders deaf and HoH viewers, as adaptation time is then required each time they switch between channels, thereby hampering comprehension (Remael 2007). Had it been more precise and inclusive, the Charte may have gone some way to encouraging different subtitlers, subtitling agencies and broadcasters to use the same rules and to make similar linguistic choices, thereby improving consistency and aiding understanding. The findings of this study have a number of important implications for future practice in France. Firstly, live subtitling could benefit from a set maximum reading speed. Secondly, as channels may experience difficulties in keeping within the required maximum delay between speech and subtitle in live SDH, the prioritisation of perfect spelling could be reviewed. Thirdly, failing to distinguish between or explain songs and musical scores might hinder subtitlers and hamper comprehension. Fourthly, a wider variety of paralinguistic elements could be addressed along with how they should be displayed. Fifthly, the current six-colour code could be replaced by a simpler three-colour code: white (with name tags for character identification), yellow for sound effects and green for music subtitles. In turn, this would mean that the dark translucent box surrounding subtitles could be removed as legibility would be improved. Finally, this study also constitutes a call for further empirical research on several SDH variables as there are a number of generally applied rules of thumb that should be tested. The results of this research support the idea that the Charte could be expanded, and that some existing practices should be questioned based on further research at a national level.
Acknowledgements
The author is grateful to the studios Audioprojects for their help in carrying out this project and wishes to thank Verónica Arnáiz-Uzquiza for her comments and Jen Rutherford for her constructive commentaries and linguistic revisions.
Working document
CSA. (2011, December 12). Charte relative à la qualité du sous-titrage à destination des personnes sourdes ou malentendantes. Retrieved December 3, 2012, from Conseil Supérieur de l'Audiovisuel: http://www.csa.fr/Espace-juridique/Chartes/Charte-relative-a-la-qualite-du-sous-titrage-a-destination-des-personnes-sourdes-ou-malentendantes-Decembre-2011
References AENOR. 2012. Norma UNE 153010 Subtitulado para Personas Sordas y Personas con Discapacidad Auditiva. Madrid: AENOR. Arnáiz-Uzquiza, Verónica 2012. “Los Parámetros que Identifican el Subtitulado para Sordos. Análisis y Clasificación.” In Multidisciplinarity in Audiovisual Translation Vol 4, edited by Monti et al., 103-133. Alicante: Alicante Universidad. Báez Montero, Immaculada C. & Fernández Soneira, Ana Ma. 2010. “Spanish Deaf People as Recipients of Closed Captioning.” In Listening to Subtitles: Subtitles for the Deaf and Hard-of-Hearing, edited by Anna Matamala & Pilar Orero, 26-44. Bern: Peter Lang. Baker, Robert G., Lambourne, Andrew D. & Rowston, Guy. 1984. Handbook for Television Subtitlers. Winchester: Southampton University and Independent Broadcating Authority. Bartoll, Eduard. 2008. Paramètres per a una Taxonomia de la Subtitulació. (Unpublished doctoral thesis). Universitat Pompeu Fabra, Barcelona. Retrieved from http://www.tdx.cat/handle/10803/7572 Bartoll, Eduard & Martínez Tejerina, Anjana. 2010. “The Positioning of Subtitles for the Deaf and Hard of Hearing.” In Listening to Subtitles: Subtitles for the Deaf and Hard-of-Hearin, edited by Anna. Matamala & Pilar Orero, 69-86. Bern: Peter Lang. BBC. 2009, January 5. “Online Subtitling Editoria Guidelines V1.1.” Retrieved September 30, 2012, from BBC: http://www.bbc.co.uk/guidelines/futuremedia/accessibility/subtitling_g
France’s National Quality Standard for Subtitling
153
uides/online_sub_editorial_guidelines_vs1_1.pdf Becquemont, Daniel. 1996. “Le Sous-titrage Cinématographique : Contrainges, Sens, Servitudes.” In Les Transferts Linguistiques dans les Médias Audiovisuels, edited by Yves Gambier, 145-155. Villeneuve d'Ascq: Presses Universitaires du Septentrion. Bordwell, David. 2006. The Way Hollywood Tells It: Story and Style in Modern Movies. Berkely and Los Angeles: University of California Press. Bordwell, David & Thompson, Kristin. 2008. Film Art: An Introduction. New York: McGraw-Hill. Caasem. 2010, May 15. “Un Service Menacé.” Retrieved from Collectif des Adaptateurs de l'Audiovisuel pour les Sourds et les Malentendants: http://www.caasem.fr/category/un-service-menace Caschelin, Sylvain. 2013, January. “Respeaking Techniques in France.” JoSTrans. (Pablo Romero Fresco, Interviewer) Retrieved June 21, 2013, from http://www.jostrans.org/issue19/int_caschelin.php CSA. 2012, May. CSA. Étude Qualité du Sous-Titrage pour Sourds et Malentendants du Débat Présidentiel François Hollande / Nicolas Sarkozy du 02/05/2012. Retrieved December 13, 2012, from UNISDA: http://unisda.org/IMG/pdf/CSA_-_Etude_soustitrage_SM_debat_02052012_TF1-France2.pdf de Linde, Zoé & Kay, Neil. 1999. The Semiotics of Subtitling. Manchester: St Jerome. Díaz Cintas, Jorge & Remael, Aline, eds. 2007. Audiovisual Translation: Subtitling. Manchester: St Jerome. Díaz-Cintas, Jorge. 2009. “Por una Preparación de Calidad en Accesibilidad Audiovisual.” Trans: Revista de traductología, II, 45-59. DREES. 2007. “Handicap Auditif en France - Apports de l'Enquête HID 1998-1999.” Observatoire régional de santé des Pays-de-la-Loire. Paris: Direction de la recherche, des études, de l'évaluation et des statistiques. Drouvroy-Simonnet, Sophie. 2011, January 17. “Sondage Express sur le Sous-Titrage.” Retrieved December 08, 2012, from Médias Sous-titrés: http://www.medias-soustitres.com/television/actualites/Sondageexpress-sur-le-sous European Broadcasting Union. 1997. “Digital Video Broadcasting (DVB); Subtitling System: ETS 300 743.” Valbonne: ETSI. —. 2006. “Digital Video Broadcasting (DVB); Subtitling system: ETS 300 743.” Valbonne: ETSI. Fabius, Laurent. 2001. Arrêté du 24 décembre 2001 Relatif à la Télévision Numérique Hertzienne Terrestre Fixant les Caractéristiques des
154
Chapter Eight
Signaux Emis. Paris: Ministère de l'économie, des finances et de l'industrie. Hermans, Theo. 1999. Translation in Systems: Descriptive and Systemic Approaches Explained. Manchester: St Jerome. Ivarsson, Jan & Carroll, Mary. 1998, October 17. “Code of Good Subtitling Pratice.” Retrieved December 11, 2012, from ESIST: http://www.esist.org/ESIST%20Subtitling%20code_files/Code%20of %20Good%20Subtitling%20Practice_en.pdf Klaudy, Kinga. 2008. “Explicitation.” In Routledge Encyclopedia of Translation Studies, edited by Mona Baker, 80-84. New York: Routledge. Kyle, Jim. 1992. “Switched On: Deaf People's Views on Television Subtiting.” Bristol: Centre for Deaf Studies. Lambourne et al. 2004. “Speech-Based Real-Time Subtitling Services.” International Journal of Speech Technology, 7, 269-279. Lepot-Froment, Christiane. 2004. “La Conquête d'une Langue Orale et Ecrite.” In L'Enfant Sourd: Communication et Langage, edited by Christiane Lepot-Froment & Nadine Clerebaut, 83-163). Brussels: De Boeck. Luyken et al. 1991. Overcoming Language Barriers in Television: Dubbing and Subtitling for the European Audience. Manchester: The European Institute for the Media. Médiamétrie. 2013. Médiamat Annuel 2012. Paris: Médiamétrie. Mingant, Nolwenn. 2010. “Tarantino's Inglourious Basterds: A Blueprint for Dubbing Translators?” Meta: Tranlators' Journal, 55(4): 712-731. doi:10.7202/045687ar Muller, Tia. 2012. “Subtitles for Deaf and Hard-of-Hearing People on French Television.” In Translation across Europe: An Ever-changing Landscape Vol. 7, edited by Elena Di Giovanni & Silvia Bruti (Eds.), Audiovisual, 257-273. Berlin: Peter Lang. —. (Forthcoming). “Long Questionnaire in France.” In The Reception of SDH in Europe, edited by Pablo Romero-Fresco. Berlin: Peter Lang. Neves, Josélia. 2005. Audiovisual Translation: Subtitling for the Deaf and Hard-of-Hearing. (Unpublished doctoral thesis). Roehampton University, London. Retrieved January 07, 2013, from http://www.tandf.co.uk/journals/authors/style/reference/tf_A.pdf —. 2008. “Le Sous-Titrage pour Sourds et Malentendants : A la Recherche d'une Qualité Possible.” In La Traduction Audiovisuelle: Approche Interdisciplinaire du Sous-Titrage, edited by Jean-Marc Lavaur, & Adriana Serban, 43-54. Brussels: De Boeck. —. 2009. “Interlingual Subtitling for the Deaf and Hard-of-Hearing.” In
OFCOM. 1999. ITC Guidance on Standards for Subtitling. London: Office of Communication.
Perego, Elisa. 2008. "Subtitles and Line-Breaks: Towards Improved Readability." In Between Text and Image: Updating Research in Screen Translation, edited by Delia Chiaro, Christiane Heiss & Chiara Bucaria, 211-223. Amsterdam: John Benjamins Publishing Co.
Perego, Elisa, Del Missier, Fabio, Porta, Marco & Mosconi, Mauro. 2010. "The Cognitive Effectiveness of Subtitle Processing." Media Psychology, 13(3): 243-272. doi:10.1080/15213269.2010.502873
Pereira Rodríguez, Ana M. & Lorenzo García, Lourdes. 2005. "Evaluamos la Norma UNE 153010: Subtitulado para Personas Sordas y Personas con Discapacidad Auditiva. Subtitulado a Través del Teletexto." Puentes, 6: 21-26.
Pereira, Ana. 2010a. "Criteria for Elaborating Subtitles for Deaf and Hard of Hearing Adults in Spain." In Listening to Subtitles: Subtitles for the Deaf and Hard of Hearing, edited by Ana Matamala & Pilar Orero, 87-102. Bern: Peter Lang.
—. 2010b. "Including Spanish Sign Language in Subtitles for Deaf and Hard of Hearing." In Listening to Subtitles: Subtitles for the Deaf and Hard of Hearing, edited by Ana Matamala & Pilar Orero, 103-113. Bern: Peter Lang.
Poyatos, Fernando. 1993. Paralanguage: A Linguistic and Interdisciplinary Approach to Interactive Speech and Sound. Amsterdam: John Benjamins.
Remael, Aline. 2007. "Sampling Subtitling for Deaf and Hard of Hearing in Europe." In Media for All: Subtitling for the Deaf, Audio Description, and Sign Language, edited by Jorge Díaz Cintas, Pilar Orero & Aline Remael, 23-52. Amsterdam: Rodopi.
Robert, Paul, Rey, Alain & Rey-Debove, Josette. 2002. Le Petit Robert Dictionnaire de la Langue Française. Paris: Société du nouveau Littré, Le Robert.
Romero-Fresco, Pablo. 2009. "More Haste Less Speed: Edited versus Verbatim Respoken Subtitles." Vigo International Journal of Applied Linguistics, 6: 109-133. Retrieved December 16, 2012, from http://webs.uvigo.es/vialjournal/abstract_6_6.html
—. 2011. "Quality in Live Subtitling: The Reception of Respoken Subtitles in the UK." In Media for All 3. Audiovisual Translation and Media Accessibility at the Crossroads, edited by Aline Remael, Pilar Orero & Mary Carroll, 111-133. Amsterdam: Rodopi.
Sacks, Oliver. 1990. Seeing Voices: A Journey into the World of the Deaf. New York: Harper Perennial.
Secrétariat d'État chargé de la Famille et de la Solidarité. 2010. Plan 2010-2012 en Faveur des Personnes Sourdes ou Malentendantes. Paris: Ministère du travail, des relations sociales, de la famille, de la solidarité et de la ville.
Virole, Benoît & Martenot, Danielle. 2006. "Problèmes de Psychopédagogie." In Psychologie de la Surdité, edited by Benoît Virole, 453-472. Brussels: De Boeck.
Abstract

France's national quality standard for subtitling for the deaf and hard-of-hearing: An evaluation

Keywords: subtitling for the deaf and hard-of-hearing; norms; France

Published by the country's audiovisual regulatory body (the CSA), the Charte relative à la qualité du sous-titrage à destination des personnes sourdes ou malentendantes, a national quality standard consisting of a set of critères (rules) relating to television subtitles for the deaf and hard-of-hearing (HoH) in France, was signed and implemented in December 2011. The objective of the Charte was to establish minimum subtitling rules across television channels and programmes. This paper evaluates these rules in relation to other European guidelines, empirical research on subtitling for the deaf and HoH (SDH) and its addressees, the opinions of French deaf and HoH people captured in a 2010 survey, and the experiences of professionals working in the field of SDH in France. Using Arnáiz-Uzquiza's (2012) typology for SDH parameters, this assessment is structured according to pragmatic, technical, aesthetic-technical, aesthetic, linguistic and extralinguistic elements. The paper concludes with a call for more comprehensive guidelines relating to linguistic aspects, paralinguistic elements and music subtitles, as the Charte fails to provide adequate direction on these issues. It also suggests that the current colour code used to identify characters could be replaced by name tags.
Appendix A

Charte Relative à la Qualité du Sous-titrage à la Destination des Personnes Sourdes ou Malentendantes

Après l'application par les éditeurs de services de télévision des dispositions quantitatives découlant de la loi du 30 septembre 1986 relative à la liberté de communication, visant à rendre accessibles, à partir du 12 février 2010, les programmes aux personnes souffrant d'un handicap auditif, le Conseil supérieur de l'audiovisuel s'est attaché à mettre en œuvre la mesure 37 du plan handicap 2010-2012, relative à l'amélioration de la qualité du sous-titrage à la télévision. À cette fin, après concertation de l'ensemble des partenaires, a été élaborée la présente charte relative à la qualité du sous-titrage à destination des personnes sourdes ou malentendantes.

Le sous-titrage doit être réalisé spécifiquement pour l'usage des personnes sourdes ou malentendantes en respectant les 16 critères suivants.

Pour Tous Les Programmes
1 – Respect du sens du discours.
2 – Respect des règles d'orthographe, de grammaire et de conjugaison de la langue française.
3 – Respect de l'image. Le sous-titre, limité à deux lignes pour les programmes en différé et à trois lignes pour le direct, ne doit pas cacher, dans la mesure du possible, les informations textuelles incrustées14 ni les éléments importants de l'image15.
4 – Diffusion des sous-titres sur la TNT selon la norme DVB_Subtitling (EN 300 743), conformément à l'arrêté dit « signal » du 24 décembre 2001.
5 – Parfaite lisibilité. Il est recommandé que les sous-titres se présentent sur un bandeau noir translucide et si possible avec des lettres ayant un contour noir, quel que soit le réseau et notamment en TNT.
14 Présentations des intervenants, titres, définitions, génériques…
15 Les lèvres des locuteurs qui permettent la lecture labiale, les informations imagées comme les cartes géographiques ou schémas explicatifs, etc.
Pour Les Programmes De Stock Diffusés En Différé
6 – Temps de lecture approprié : 12 caractères pour une seconde, 20 caractères pour deux secondes, 36 caractères pour trois secondes, 60 caractères pour quatre secondes.16 Les laboratoires seront incités à respecter ces critères avec une tolérance de 20 %.
7 – Utilisation systématique du tiret pour indiquer le changement de locuteur.
8 – Placement du sous-titre au plus proche de la source sonore.
9 – Respect du code couleurs défini pour le sous-titrage :
• Blanc : locuteur visible à l'écran (même partiellement) ;
• Jaune : locuteur non visible à l'écran (hors champ) ;
• Rouge : indications sonores ;
• Magenta : indications musicales et paroles des chansons ;
• Cyan : pensées d'un personnage ou d'un narrateur dans une fiction, commentaires en voix hors champ dans les reportages ou les documentaires ;
• Vert : pour indiquer l'emploi d'une langue étrangère17.
• Particularité : les émissions (hors documentaires) intégralement doublées18 en français doivent être sous-titrées selon le code couleur approprié.
16 Une seconde étant composée de 25 images.
17 Si la transcription dans la langue concernée n'est pas possible, on place trois petits points verts à gauche de l'écran après avoir indiqué si possible de quelle langue il s'agit.
10 – Indication des informations sonores19 et musicales20.
11 – Utilisation des parenthèses pour indiquer les chuchotements et les propos tenus en aparté.
12 – Utilisation de majuscules lorsque le texte est dit par plusieurs personnes (un usage des majuscules pour toute autre raison est à proscrire sauf pour certains sigles et acronymes).
13 – Découpage phrastique sensé. Lorsqu'une phrase est retranscrite sur plusieurs sous-titres, son découpage doit respecter les unités de sens afin d'en faciliter sa compréhension globale21.
14 – Respect des changements de plans. Le sous-titrage doit se faire discret et respecter au mieux le rythme de montage du programme.

Pour Les Programmes Diffusés En Direct Ou Sous-Titrés Dans Les Conditions Du Direct
15 – Distinction des intervenants par l'indication de leur nom en début de prise de parole et l'usage de couleurs appropriées, notamment lorsque le programme fait intervenir plusieurs personnes dans un échange qui peut être confus.
16 – Réduction du temps de décalage entre le discours et le sous-titrage visant à ramener ce décalage en dessous de 10 secondes. Ne pas omettre une partie significative du discours sous prétexte de supprimer le décalage pris par rapport au direct, mais l'adapter éventuellement. Tous les propos porteurs de sens doivent être rapportés.
18 Les voix des comédiens lisant la traduction des propos des intervenants se superposent aux voix d'origine.
19 Description des bruits significatifs qui ne sont pas induits par l'image (il est inutile d'indiquer « explosion » si l'explosion se voit à l'écran).
20 Transcription des chansons françaises ou étrangères. Par défaut, indiquer le nom du chanteur et le titre.
21 Un découpage excessif ou inapproprié peut gravement compromettre la bonne compréhension du discours. À la place de « Il déteste les jeunes / filles. », on préférera « Il déteste / les jeunes filles ».
Fait à Paris
Le 12 décembre 2011
En présence de :
Le ministre de la culture et de la communication, Monsieur Frédéric MITTERRAND
La secrétaire d'État aux solidarités et à la cohésion sociale, Madame Marie-Anne MONTCHAMP
Le président du Conseil supérieur de l'audiovisuel, Michel BOYON

Les signataires :

Les associations :
Pour l'Union Nationale pour l'Insertion Sociale du Déficient Auditif (UNISDA), Monsieur Cédric LORANT, Président
Pour la Fédération Nationale des Sourds de France (FNSF), Monsieur Philippe BOYER, Président
Pour le Mouvement des Sourds de France (MDSF), Monsieur René BRUNEAU, Président
Pour le Bureau de Coordination des associations des devenus sourds et malentendants (BUCODES), Monsieur Richard DARBERA, Président
Pour Médias-soustitres, Madame Sophie DROUVROY, Responsable éditoriale
Pour l'Union Nationale des Associations de Parents d'Enfants Déficients Auditifs (UNAPEDA), Madame Nicole GARGAM, Présidente
Pour le Collectif des Adaptateurs de l'Audiovisuel pour les Sourds et Malentendants (CAASEM), Monsieur Denis POUDOU, Président
Pour l'Association Française pour l'Information et la Défense des sourds s'Exprimant Oralement (AFIDEO), Madame Clémentine VIE, Présidente
Pour l'Association Nationale de Parents d'Enfants Déficients Auditifs (ANPEDA), Monsieur Didier VOÏTA, Président

Les laboratoires :
Pour le laboratoire Red bee media, Monsieur Andrea GENTILI, Directeur
Pour les laboratoires Echo Live et Vectracom, Monsieur Gérard LETIENE, Directeur
Pour le laboratoire Teletota, Monsieur Thierry FORSANS, Directeur
Pour le laboratoire Dubbing Brothers, Monsieur Mathieu TAIEB, Directeur commercial
Pour les laboratoires Titra Film Paris et TVS, Madame Isabelle FRILLEY, Président-Directeur général
Pour le laboratoire Cinekita, Madame Madeleine KOUADIO-TIMMERMAN, Gérante
Pour le laboratoire Nice Fellow, Monsieur Stéphane BUHOT, Gérant
Pour le groupe LVT, Monsieur Claude DUPUY, Directeur
Pour le laboratoire Cinecim, Madame Catherine MERIC, Directrice
Pour le laboratoire Imagine, Monsieur Pierre-Yves COLLIGNON, Président
Pour le laboratoire Blue Elements, Monsieur Christophe LARTILLEUX, Président
Pour le laboratoire ST'501, Monsieur Dominique POUZET, Gérant
Pour Multimédia France Productions (MFP), Monsieur Martin AJDARI, Président-Directeur général

Les chaînes :
Pour TF1, Eurosport et LCI, Monsieur Nonce PAOLINI, Président-Directeur général
Pour TMC et NT1, Madame Caroline GOT, Directrice générale
Pour France Télévisions, Monsieur Rémy PFLIMLIN, Président-Directeur général
Pour le groupe Canal +, Monsieur Frédéric MION, Secrétaire général
Pour le groupe M6, Monsieur Nicolas de TAVERNOST, Président du Directoire
Pour NRJ 12, Monsieur Gérard BRICE-VIRET, Directeur délégué au pôle télévision
Pour Direct 8 et Direct Star, Monsieur Yannick BOLLORE, Directeur général de Bolloré Média
Pour BFM TV, Monsieur Alain WEILL, Président
Pour le groupe Lagardère Active, Monsieur Antoine VILLENEUVE, Directeur général des chaînes de télévision France et International
Pour l'ACCeS, Monsieur Xavier SPENDER, Président
Appendix B
French Survey to the Deaf and HoH

I. Vos habitudes télévisuelles
1.1. Combien d'heures par jour regardez-vous la télévision ?
0h / 1-2h / 2-3h / 3-4h / 4-5h / 5-6+h
1.2. Quel(s) type(s) d'émission(s) regardez-vous le plus souvent ?
Série / Actualités / Film / Sport / Jeu / Magazine / Documentaire / Divertissement
1.3. Quels sont les noms de vos 3 émissions préférées ?
1 / 2 / 3
1.4. Utilisez-vous les sous-titres lorsque vous regardez la télévision ?
Toujours / Plus de 75% du temps / 50-75% du temps / 25-50% du temps / 10-25% du temps / Moins de 10% du temps / Jamais
1.5. Comment savez-vous si une émission sera sous-titrée ou pas ?
Télétexte / Guides télé / Annonces télévision / Sites internet / Amis/Relations
1.6. Que pensez-vous des sous-titres télévisuels en général ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
II. Le code couleurs des sous-titres
2.1. Connaissez-vous le code couleurs des sous-titres par cœur ?
Oui / Non
2.2. Pensez-vous que l'usage des couleurs dans les sous-titres rend une émission facile à suivre ?
Toujours / Presque toujours / Parfois / Jamais
2.3. Que pensez-vous des couleurs utilisées ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.4. Pourquoi ?
2.5. Que pensez-vous de la couleur blanche pour les dialogues de personnes à l'écran ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.6. Savez-vous reconnaître qui parle lorsqu'un groupe de gens est à l'écran ?
Toujours / Presque toujours / Parfois / Jamais
2.7. Que pensez-vous de la couleur jaune pour les dialogues de personnes hors écran ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.8. Savez-vous reconnaître qui parle dans un groupe de gens hors écran ?
Toujours / Presque toujours / Parfois / Jamais
2.9. Que pensez-vous des tirets (-) en début de phrase pour identifier un personnage ? Par ex. - Je ne sais pas.
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.10. Que pensez-vous de l'utilisation de plusieurs points de ponctuation (!!) (!?) lorsqu'une personne parle fort ou est fâchée ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.11. Que pensez-vous de l'utilisation de majuscules lorsque plusieurs personnes disent la même chose en même temps, par ex. - AU REVOIR. ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.12. Pensez-vous que des sous-titres qui indiquent l'intonation seraient utiles, par ex. (Ironique) ?
Toujours / Presque toujours / Parfois / Jamais
2.13. Pensez-vous que des sous-titres précisant l'accent seraient utiles, par ex. (Accent américain) ?
Toujours / Presque toujours / Parfois / Jamais
2.14. Que pensez-vous de la couleur cyan (bleue) pour un narrateur ou les pensées d'un personnage ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.15. Que pensez-vous de la couleur verte pour signaler une langue étrangère ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.16. Que pensez-vous de la couleur rouge pour les effets sonores ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.17. Pour les effets sonores, que préférez-vous ?
Tous les sons doivent être dans les sous-titres / Seulement les sons nécessaires à la compréhension de l'émission
2.18. Comment préférez-vous que les sons soient décrits ?
Utilisation de mots qui reproduisent les sons (atchoum!) / Description de ce qu'est le son (Il éternue)
2.19. Souhaitez-vous de la ponctuation dans les sous-titres de sons et de musique ?
Une majuscule au début / Un point final / Des parenthèses entourant les sous-titres / Aucune ponctuation n'est nécessaire
2.20. Que pensez-vous de la couleur magenta pour les effets de musique ?
Satisfaisant / Bien / Mauvais / Très mauvais
2.21. Les sous-titres d'effets de musique vous semblent-ils ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.22. Souhaitez-vous que les titres, l'interprète et les paroles des chansons soient indiqués dans les sous-titres ?
Toujours / Presque toujours / Parfois / Jamais
2.23. Que pensez-vous des ellipses (…) qui indiquent qu'il n'y a aucun son pendant plus de 20 secondes ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
2.24. Lorsqu'il y a de la musique instrumentale ou de fond dans un film ou une série, que préférez-vous ?
Une indication du genre de musique, par ex. (Musique angoissante) / Une indication que c'est une musique de fond, par ex. (Musique de fond) / Un symbole indiquant qu'il y a de la musique, par ex. (…) / Aucune indication, c'est inutile
III. Les sous-titres des séries/films et des journaux
3.1. Que pensez-vous de la taille des lettres ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
3.2. Pourquoi ?
3.3. Que pensez-vous du type d'écriture utilisée (Police) ou (Police) ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
3.4. Pourquoi ?
3.5. Pour vous, les sous-titres de films/séries sont
Trop rapide, je ne peux pas les lire / Trop lent, je peux les relire plusieurs fois / J'ai le temps de tout lire
3.6. Selon vous, les sous-titres de films/séries doivent
Contenir tout ce qui est dit même si cela veut dire que les sous-titres resteront moins longtemps à l'écran / Contenir seulement les éléments essentiels à la compréhension du programme avec des sous-titres plus longtemps à l'écran
3.7. Pour vous, les sous-titres des journaux ou des évènements sportifs sont
Trop rapide, je ne peux pas les lire / Trop lent, je peux les relire plusieurs fois / J'ai le temps de tout lire
3.8. Où préférez-vous que les sous-titres soient positionnés à l'écran ?
En-bas de l'écran / En-haut de l'écran / En-haut et en-bas de l'écran / Au-dessus de tout commentaire
3.9. Selon vous, les sous-titres des informations doivent
Contenir tout ce qui est dit même si cela veut dire que les sous-titres resteront moins longtemps à l'écran / Contenir seulement les éléments essentiels à la compréhension du programme avec des sous-titres plus longtemps à l'écran
3.10. Que pensez-vous des vignettes noms pour l'identification de personnes, par ex. Bruce Toussaint :
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
3.11. Que pensez-vous d'un changement de couleur pour chaque nouvel interlocuteur ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
3.12. Pour les sous-titres sportifs ou des journaux, classez par ordre de préférence (1 à 5) les éléments suivants :
Un décalage minimum entre la parole et l'affichage des sous-titres / Les sous-titres comportent tout ce qui est dit / Une vitesse de lecture acceptable / Un bon positionnement à l'écran / Peu de fautes de Français
3.13. Comment préférez-vous l'affichage des sous-titres des journaux réalisés en direct ?
Mot à mot (les mots s'affichent les uns après les autres) / En bloc (plusieurs mots s'affichent d'un coup)
3.14. Que pensez-vous des sous-titres télévisuels en général ?
Satisfaisant / Bien / Insuffisant / Pas efficace du tout
IV. Vous-même
4.1. Êtes-vous :
Homme / Femme
4.2. Vous avez :
Moins de 20 ans / De 20 à 30 ans / De 30 à 40 ans / De 40 à 50 ans / De 50 à 60 ans / Plus de 60 ans
4.3. Votre niveau d'études :
CAP/BEP / BAC/BAC PRO/BT/BP / DEUG/Licence / Doctorat
4.4. Êtes-vous :
Sourd / Devenu Sourd / Malentendant / Entendant vivant avec des personnes sourdes et/ou malentendantes / Professionnel s'occupant de personnes sourdes et malentendantes
4.5. À quel âge a-t-on découvert votre surdité ou à quel âge êtes-vous devenu(e) sourd(e) ?
Naissance / Avant 2 ans / 2-4 ans / 5-19 ans / 20-29 ans / 30-49 ans / > 50 ans
4.6. Avez-vous un handicap associé à votre surdité ? Si oui, lequel ?
Oui / Non
4.7. Souffrez-vous de daltonisme ?
Oui / Non
4.8. Pour compenser votre surdité, utilisez-vous un dispositif de correction auditive ?
Contours d'oreille / Intra-auriculaires / Implant cochléaire / Aucun dispositif
4.9. Quel est votre mode de communication ?
Langue des Signes Française (LSF) / Français signé / Français oral avec LPC / Français oral / Bilingue - LSF + Français oral
4.10. Combien d'heures lisez-vous journaux, livres, … par semaine ?
0h / 1-2h / 2-3h / 3-4h / 4-5h / 5-6+h
4.11. Éprouvez-vous quelques difficultés à lire le Français ?
Oui / Non
4.12. Éprouvez-vous quelques difficultés à lire les sous-titres ?
Toujours / Souvent / Parfois / Jamais
4.13. Utilisez-vous une aide auditive pour regarder la télé ?
Boucle magnétique / Casques (ex. Sensheiser) / Je n'utilise rien / Je n'utilise rien ; je n'en ai pas besoin
4.14. Utilisez-vous une aide visuelle pour regarder la télé ?
Lentilles de contact / Lunettes / Je n'utilise rien / Je n'utilise rien ; je n'en ai pas besoin

V. Suggestions
Souhaitez-vous ajouter un commentaire, faire une remarque supplémentaire ?
CHAPTER NINE
TELOP AND TITLES ON THE JAPANESE SMALL SCREEN
CLAIRE MAREE1

1 University of Melbourne, Australia. Email address: [email protected] This research was supported in parts by grants from JSPS KAKENHI and the University of Melbourne.

1. Introduction

The heavy use of texts and graphics is a defining characteristic of Japanese variety programming (Gerow 2010, Hambleton 2011). On free-to-air TV, viewers are bombarded with text of a variety of colours, sizes and fonts which are placed as titles framing the screen, and which also appear simultaneously with dialogue and/or narration. Text may slide into view diagonally, or emerge from a celebrity's talking mouth. It may spin, glitter, sparkle or undulate. Generally referred to in Japanese as telop (from television opaque projector), but also known as open captions (Kawabata 2006), textual information (Shitara 2011a), impact captioning (Park 2009), open caption telop (O'Hagan 2010), the text and graphics superimposed in post-production are ubiquitous. Research in applied linguistics and translation studies indicates that text-on-screen common to Japanese TV (herein J-TV) functions to summarise, contextualise, recap, emphasise and decorate the action taking place (Suto, Kawakami and Katai 2009, Sakamoto 1999, Sakamoto 2000, Shiota 2005, Kimura 2011, Shitara 2006, 2009, 2011a). Telop contribute to the tabloidisation of news broadcasts (Kawabata 2006) and highlight "comic content" (O'Hagan 2010, 70) in entertainment broadcasting. By engaging and maintaining attention beyond the show to advertising, telop attempt to engage the commodity that is the viewers' gaze (Gerow 2010, 141-2). Although telop duplicate dialogue and narration enunciated on screen, they do not reproduce the entirety of the talk. Through this process
of selective textualisation, telop regiment language (Park 2009) by enforcing the producer’s interpretive account onto the broadcast product. This paper argues that the inscription of text onto the screen as telop is an act of media-talk translation and visualisation which relies on excessive use of subcultural and non-standard speech forms to effectively engage the gaze.
2. A Short History of Telop

Linguistic research into text used in television broadcasting is an emerging area. Linguist Shitara Kaoru's2 analysis of the historical shifts in text appearing in broadcasts by the Japanese national broadcaster, Japan Broadcasting Corporation (herein NHK), traces the movement from black and white handwritten text to colourful digital text-graphic combinations (Shitara 2006, 2009a, 2009b, 2011a, 2011b, 2013). Over the 40-year period of quiz and variety programming housed in the NHK Archives that Shitara has researched, not only has there been an increase in the volume of text inscribed into television programming, which peaks markedly from the 2000s, but there has been a shift in the types of textual information contained in variety programmes in general. The earliest examples of text on television were in the form of handwritten captions and subtitles predominantly used to identify people as they appeared in programmes, or to show the lyrics to songs being performed (Shitara 2011a). Textual information such as the title of the programme was written on props and sets in the 1960s. It was not until the Tokyo Olympics (1964) that text on screen was photo-typeset. From the 1970s, extra information, such as song titles and short descriptors, began to appear as text-on-screen. With the appearance of more variety programming later into the night, textual information also increased in the 1980s. Lyrics (and translations) were projected as well as short titles compartmentalising the action on screen into themes or segments. In 1988, display character generators came into use, and whilst the impact of new technology can be noted, the steep increase in text, in the form of intralingual subtitles that are most commonly known as telop, occurs in the 2000s. Text had mostly been a combination of white lettering with a black edge, but with the introduction of display character generation, graphics altered to become colourful and took on a decidedly more decorative function. It needs to be noted that in the period before the introduction of display character generation, text was being used not only to project information, but for additional effect (Shitara 2011a, 5). However, as

2 Names are written according to Japanese conventions: last name, first name.
Shitara (2011a) explains, the increased decorativeness of telop is a feature of 1980s NHK variety shows, and the synchronisation of talk and text which fosters a feeling of “being there” similar to “live running coverage” (2011a, 5-6) is a noticeable change from the 2000s. Anthropologist and communication studies scholar Kimura Daichi (2011) discusses telop in his work on citational practices in the media. He argues there is an overabundance of parentheses in media texts and academic writing which are used not to demarcate quotes from a known and/or identifiable source, but as projections of language from an unknown and/or unidentifiable position, signifying that it is “not the speaker here” (2011, 153) who is speaking. Kimura touches briefly on telop as “expressive parenthesis” and recounts a passage from Matsumoto (2005, 157-160 qtd in Kimura 2011, 63-64) describing how telop, which was initially used to illustrate difficult passages, quite by chance came to be seen as an interesting addition to entertainment TV. While early telop visualised segments of speech difficult to understand due to articulation or the use of dialect (Kimura 2011, O’Hagan 2010), the comic potential of visualising non-standard language was soon recognised and exploited. In its contemporary form, text-on-screen is multi-functional acting to provide both information about the scene unfolding and to enhance the entertaining aspects of the broadcast. Telop are reproductions of dialogue which are not aimed at the deaf and hard of hearing (see O’Hagan 2011), and have, therefore, been seen as superfluous additions cluttering the screen. Anecdotal evidence suggests that as the quantity of telop grew, viewers first deemed the imposition of interpretative meaning via telop as unnecessary and overbearing. However, Shitara’s (2009) survey on attitudes towards telop shows that hearing students and adults alike agree that telop are useful in instances where viewers seek clarification of confusing segments or compensation for noisy surroundings. Furthermore, the findings indicate that telop enable hearing viewers to better understand an interesting or critical part of a broadcast, or enjoy the program more. As telop have become ubiquitous in J-TV, it is this very repetitive and parenthetical nature which is manipulated to produce comic effect. Telop “hook” (Shitara 2011a, 7) the viewer. They create a broadcasting context in which “synchronicity and deferment coexist” (Kimura 2011, 63). Indeed, an experimental study investigating the influence of telop on viewers’ interpretation of quiz questions, (Suto et al. 2009, 677) suggests that visualizing an incorrect answer prompts the viewer to consider the “answerer is inferior to the average person”. Telop enable viewers to easily understand the producer’s interpretations, and this may lead to uniform
174
Chapter Nine
interpretations being made (Suto et al. 2009, 677). This underlines the influence speech selected-to-be-projected as text has on viewers' conception of the speaker themselves. Textualisation and visualisation facilitate another comic layer to variety programming. Gerow (2010, 133) notes that in this visual layering on contemporary J-TV the image is treated "not as a window onto the world but as a flat surface on which telop are to be overlaid". Telop are now an entrenched part of comic TV in much the same way as canned laughter (O'Hagan 2010). Textualisation plays the straight man (tsukkomi)3 "external to the performance, prodding and defining the clowns on screen" (Gerow 2010, 125). Use of colloquial speech and dialect is manipulated for comic effect, and the trivial is dramatised for similar purposes (O'Hagan 2010, 85). This aspect of telop is especially relevant to the present paper, as it points to the role language ideologies play in the use of non-standard orthography and linguistic manipulation of stereotypes. Contemporary telop are both visually complex and programme specific. They take on a variety of colours, fonts, sizes and animation (spinning, pulsating). Non-standard punctuation (for example: !!, !?), graphics and symbols found in keitai (cell phone) text-messaging and manga (for example: K; see Fig. 9-1) are notable features used to produce affect. Font colour, size and animation provide emphasis and add interpretative detail (Shiota 2005) to the action on screen. Sakamoto (2000) divides telop into two typologies of "function" and "form". In terms of function, telop give direct renderings of the dialogue to facilitate comprehension or bridge a scene change. In terms of form, telop make use of animation, orthography, colour, symbols, special effects and graphics (including pictures and emoticons). As well as colour, animation and linking with sound effects, the visualisation of selected spoken dialogue in telop relies heavily on normative tropes–especially those of femininity and masculinity–to produce styles that frame personalities as belonging to distinctive identity categories. These stereotypical speech styles could be envisioned as echoing the yakuwari (role) (Kinsui 2003, 2007, 2011) styles used as short-hand in manga, which conjure up a specific type of character, such as the 'eccentric professor,' or the 'high-society madam' and transfer extralingual information about the fictional character to the reader. Use of a
3 Boke (the clown) and tsukkomi (the straight person) are essential elements of Japanese stand-up comedy and television comedy programming. For a discussion of this in relation to telop see Gerow (2010).
dialect or interactional particles, for example, index a particular occupation, gender, age, education and so on. Through text-on-screen, the high-pitched cuteness of a young female pop-star, the handsome MC's sexy voice and the flamboyant drag queen's drawl, are not only audible, but visible. As Rowsell and Abrams (2011, 2) remind us "(t)echnology provides virtual spaces for experimentation with language, as well as with identity." As such, telop emerge as a site where language traverses from the spoken to the written whereby identity and the language ideologies (Irvine and Gal 2000, Kroskrity 2000, Silverstein 1979, Woolard and Schieffelin 1994)—that is, the attitudes and beliefs towards language and its users—that underpin the translation and editing processes are inscribed visually through the social act of writing (Sebba 2007).
3. Inscribing Ideologies onto the Screen

One of the characteristics of contemporary telop is heavy use of decorative punctuation and non-standard writing conventions. In a sample of programmes broadcast between January-March, 2006, Shitara (2006, 55) identifies multiple instances of exclamation marks showing not only "surprise" but "unpredictability" or emotional profundity. A doubling or trebling of marks increases the perceived intensity. Similarly, question marks indicate "doubt" as well as "bewilderment". Both exclamation marks and question marks are examples of punctuation and symbols integrated into non-standard contemporary Japanese writing. The use of these to emphasise the emotional stance of the locutor, or to impose a post-production interpretation, is what interests us here. Non-conventional writing is not a new phenomenon in Japan (Gottlieb 2010, Miller 2004, 2011, Tranter 2008). It has been described as "playful" (Gottlieb 2010), "subversive" (Miller 2004, 2011), "eccentric" (Nishimura 2003), and/or "deviant" (Okamoto 2006). These terms invoke the agency of the author who is doing the writing, thereby highlighting the role of the author—in this case the post-production team. The team involved in authoring the telop for a particular television broadcast may include the programme director, editor and station quality maintenance staff from the relevant TV stations, along with their assistants and/or teams. The process itself involves scripting, rough editing, and previewing, followed by re-editing and final reviews. It is time consuming, yet conducted within a strict timeframe to ensure timely delivery of content. The social act of writing text (Sebba 2007, 2009) into audiovisual media is a multi-party, multi-staged process in which the post-production team shape the broadcast into its final form.
In her analysis of humour and telop, O’Hagan (2010) identifies the use of Sakamoto’s (2000) functions and forms in a quiz show for comic effect. Humour is highlighted by, in particular, “direct renderings” of the dialogue, which “confirm the common ground between the viewer and the contestant” by indicating what it is they are supposed “to laugh about” (O'Hagan 2010, 81). These “direct renderings” are always already interpretative. The utterances have been subjectively selected in the postproduction process and the text selected may not represent all of the audible information. The form in which the utterance is reproduced—font, colour, punctuation, position etc.—is determined in line with the programme’s overall aesthetics and is arrived at by a complex editorial process of negotiation of digital tools by, and between, producer and editor. This complex process of translation (from audible to text) and visualisation (through the selection of font, color, and animation etc.) is informed by language ideologies which shape ideas of how an utterance should look. The following section will elaborate the use of non-standard orthography as it is deployed in these so-called direct renderings, which, as I will demonstrate, manipulate the contextual meaning of the selected text. The examples come from the variety lifestyle programme onêMANS4 (Nippon Television Network Corporation (NTV) 2008-2009; herein OM).
3.1. Framing the Humour

Hosted by popular boy band Tokio's5 bassist Yamaguchi Tatsuya with the aid of NTV announcer Seyama, OM featured a cast of experts who offered advice on culinary skills, fashion, make-up and interior styling. The experts were all well-known specialists in fields as diverse as Japanese floral art, make-up artistry, choreography, cooking and traditional dance. These experts were also assembled and labelled as onê (literally 'older sister'; a colloquial term for "queen" or "queer guy"). Onê personalities reemerged in the 2000s in Japanese popular culture as a visible media presence (Maree 2013, 2014)6. A symbol of the malleable body and the possibility of transformation through lifestyle make-overs,

4 The show first appeared in 2006 on Saturdays, and moved to a 7 pm Tuesday spot in October 2007. It achieved 18% ratings on March 18 2008, but was pulled off air in a final 2 hour special in March 2009.
5 Managed by mega-management company Johnny and Associates, Tokio is a multimedia group whose members, apart from playing concerts to packed houses, are also popular TV-show hosts, and screen actors.
6 The figure of the queer male has a long history in contemporary Japanese media (see Ishida and Murakami 2006).
177
the experts were a pivottal point in OM O as a succcessful formaat (Maree 2013). The programme title, t OnêMAN NS could be glossed as qu ueen-guy, fem-chap, faairy-male. Its complexity becomes b clearrer when brok ken down: o-nê-MANS.. The honoriific prefix o is written in hiragana (th he cursive syllabary). T The second coomponent in katakana (thee blocked sylllabary) is the nê whicch is derived from either the kanji chaaracter nê ጜ meaning ‘older sisterr’ or nê ጠ meeaning ‘womaan superior in age7.’ MANS S is taken directly from m English and is written in capitals. Thhis title plays with the rich palette of writing coonventions thaat inform Jappanese populaar culture. Japanese w writing entailss a conventionalised cho ice between multiple scripts and involves a coombination off at least kanjii (Chinese ch haracters), hiragana (thhe cursive sylllabary) and katakana k (the boxed syllab bary), and often Romann letters and Arabic A numeraals. In standaard Japanese, kanji is used for f many lexiccal items, hira agana for function words, and katakkana for onom matopoeia andd loan words. Deviation D from standaard writing prractices is creeatively usedd for stylistic effect in literature, addvertising andd print media. NHK’s Shinn yôji yôgo jiten (New Character aand Phraseoloogy Dictionarry) is used aas a guide to o writing practices inn broadcast media. m Operatting within thhese guidelin nes, postproduction tteams are ablee to work a greeat deal of creeative orthogrraphy into their telop pproduction. Ussing a small sample of exam mples from OM, O I will now investiggate how writting conventio ons are creativvely negotiated d in telop to manipulatte language iddeologies and stereotypes foor comic effecct.
Fig. 9-1 Deepp Kiss (oneeMA ANS NTV, 27/5//2008 On Air) 7
This may be used in gangs and a nightlife in ndustries.
Fig. 9-1 is from a two hour special edition of OM aired in May 2008. Entitled “Nittere ôdan o-bus8 geinôjin o kattte ni daihenshin supesharu” (Traversing NTV majorly transforming ugly personalities as we please special), the show features personalities being caught (with a special net9) within the NTV headquarters and brought into for a wardrobe check and total hair/fashion make-over. After successful transformation into a glamorous celebrity, one of the desires voiced during the elaborate restyling process is brought to reality in a highly staged “reveal” (Hill 2005)10. The screen shot (see Fig. 9-1) shows how a combination of text and image both frames and comments on the unfolding speech and action. In this example, a ‘candid camera’ scene is accentuated by the circular black border which frames the low shot in a peephole fashion. The comedians shown in the centre are as yet unaware of the camera, and are being enticed into believing a glamorous looking “Japanese-American model” printed on souvenir t-shirts from the Cannes film festival is a fan. They have eagerly accepted the shirts and hastily put them on. The “Japanese-American model” is in fact fellow comedian, Shizu-chan, who is seeking revenge on younger male comedians for not treating her as a woman. For this special edition of OM, she has undergone a dramatic make-over in fashion, hair-style and make-up being transformed from an ôgara (well-built/large) woman unable to get a boyfriend, to a fukkura (plump/fluffy) celebrity who could walk the red carpet of fame with the best of them. If she can trick the comedians into believing she is a glamorous foreign model, the last laugh will be on her. The busy screen shown in Fig. 9-1 is common to variety style television programmes circa 2008. The title of the segment is projected onto a white banner top left of the screen. It reads “Shizu-chan dokkiri sakusen chû // Yoshimoto geinin ni kirei to iwasetai!”11 (Shizu-chan (red 8
O-busu (honourable ugliness) is a term indicative of the linguistic creativity of the OM world. For a discussion of this see Maree (2013). 9 When firing at a target, a note in black text on a yellow banner appears on the lower left screen to stress that the net is being used safely. 10 Hill (2005: 30) situates the “‘reveal’ when ordinary people respond to the end results” as a crucial part of the make-over genre where programming follows a format of “informative address (the instructional part of the program) within the spectacle of ‘the reveal’ (the makeover part of the program)”. This is true, too of the OM format, however, the majority of those who are transformed are already celebrities or media personalities, and the people to whom they are “revealed” are more often celebrities themselves, too. 11 Underlining indicates the use of kanji (characters). Bold text indicates the use of katakana (boxed syllabary). Plain text indicates the use of hiragana (cursive
text) candid camera operation in progress (white text) wants to make Yoshimoto 12 comedians say she’s beautiful (blue text)). The “candid camera operation in progress” flashes as each new comic pair is brought into the room. This mimics the flashing of an ‘on air’ sign, and frames the segment as ‘live’ footage. Dokkiri (literally surprise; an abbreviation of the term dokkiri camera, ‘surprise camera’ i.e. candid camera) is written in katakana, as is kirei (beautiful) in the second line. The second line of blue text reads “wants to make Yoshimoto comedians say she’s beautiful!”. There is no grammatical particle linking the text “Shizu-chan (the comedian’s stage name comprised of a diminutive of her given name and the address form chan)” to the second line, however, these join to make the statement “Shizu-chan wants to make Yoshimoto comedians say she is pretty!” The exclamation mark adds visual force to the statement, representing the urgency and/or earnestness with which Shizu-chan has organised the hidden camera scene. On the bottom left there is an image of Shizu-chan in her made-over form as printed onto the fake souvenir t-shirt. The text “Nikkei moderu desu ([I am/this is a] Japanese-American model)” in quotation marks is projected over the image. The citation marks emphasises the deception at play; the person depicted is neither a model nor Japanese-American. The use of the copula in its formal desu form also implies a social distance. Without knowing the source of the citation, it is unsure whether it should be interpreted as “I am a Japanese-American model” or “this is a JapaneseAmerican model”; however, we can read this as a labelling by the external post-production voice. Shizu-chan is being invoked not only as a fashion model, but also as foreign, and of somewhat ‘exotic’ Japanese-American heritage, a position that has great cultural capital in Japanese popular culture. From the lower left of screen extends a line of text which reads “dîpu kisu (deep kiss)”. It is punctuated by a shining pink heart with white border. This telop is a visual representation of the reply one of the pair gives when asked by MC Yamaguchi what they would do if on a date with the model. The whole utterance “[indecipherable] hitome o kinisezu dîpu kisu (literally: without a care for people’s attention deep kiss; (I would) deep kiss without worrying what people thought)” is projected as two separate telop that occur one after the other on the screen. The first part of the phrase “without a care for people’s gaze” is projected from the same
syllabary). 12 Yoshimoto refers to Yoshimoto Kôgyô Co., a large production entity and comic talent management company in Japan.
Chapter Nine
180
position, using the same font style and colour – white with blue edge – but is of a fractionally smaller size. The "hai (yeah)" uttered by one of the other comedians on screen in the break between "without a care for" and "deep kiss" is not replicated in text. The font size is increased for the phrase "deep kiss". The statement "deep kiss without worrying what people thought" is met with hand clapping laughter from the others present. The comical force of the joke is amplified by a cross over to the studio where regular guests watching the video are shown laughing. The image then cuts to Shizu-chan in her everyday clothing. She is laughing as she watches the footage on a monitor. Canned laughter plays and gasps of surprise "eehhh" can be heard as well. The hidden camera segment has proved Shizu's make-over to be a success. She has forced her colleagues to express an attraction to her. As Fig. 9-1 illustrates, creative use of punctuation, syllabary and graphics combine with the overall framing of the scene to create a comic candid camera moment. The comic force is enhanced by selective reproduction of a supposedly ad lib utterance that in turn has been rendered into decorative telop in a complex post-production process. The common ground of comedy is fostered by the telop and referencing of extra-programme information (Gerow 2010). The taboo of deep kissing in public is played to full comic extent as a reaction to finding a previously unattractive peer desirable. The foreign exoticness of a Japanese-American model is also invoked. These combine to highlight the foolishness of finding a transformed Shizu-chan sexually attractive.
2a) volley of compliments
Fig. 9-2 Compliments (oneeMANS NTV 27/5/2008 On Air)
Telop andd Titles on the Jaapanese Small Screen 2b) render meen powerless
2c) devilish
2d) super beautiful, eh!
When the image returns to the candid camera scene, an orchestrated version of Ode to Joy plays as the comedian produces a "home kotoba renpatsu! (volley of compliments)" (Fig. 9-2a). Notice here that the candid camera lens effect has been suspended, perhaps to allow for the titling of the segment with the red text on lower right of the screen. Again, the title is an interesting mix of hiragana, katakana, kanji and punctuation. The first compliment "otoko o hone nuki ni suru (emasculate a man/render a man powerless)" is projected in a two part telop: otoko o; otoko o hone nuki ni suru. The break after the first "otoko o" visually stresses that it is men who will be rendered powerless. The obvious sexual overtones are also reflected in the next phrase which is projected: "mashôteki na (devilish-like)" (2c). Mashô no ("devilish, diabolic") when applied to women means a "temptress". The final statement "meccha kirei yan! (super beautiful, eh!)" drives home the wonder at the beauty of the model, and reinforces the success of the candid camera staged make-over reveal. The comedian is set up as the fool (boke), and the dialogue projected onto the screen is a large part of this comedy. As dialogue is a central method of manzai (stand-up) comedy in Japan, the visualisation reinforces the stupidity of the fool in not realising the model is in fact his well-built colleague, Shizu-chan. The audience is in on the candid camera set up, and part of the slap-stick humour emerging from it relies on the voyeuristic elements. In this segment, the voyeurism is layered. Viewers watch the in-studio regulars reacting not only to the post-production edited scene, but also to the image of Shizu-chan viewing the recordings as well. The visual layering relies on an understanding of a) the conventions of comedy in which the fool performs, b) the conventions of the candid camera segment, c) the relationship between male and female comedians within the comedy genre, d) the impossibility of a well-built woman being deep-kissable and sexually devilish, and e) the conventions of Japanese writing practices which enhance the comic force of the before. In this example, comedy which relies on alignment of the audience with stereotypes of bodily desirability is visualised through the audiovisual package, which combines
recorded image and sound overlayed with sound effects, music, text and graphics.
3.2. Beauty and Pain: Normative Femininity/Masculinity

Fig. 9-3 shows two screen shots from an OM "total coverage document!" of youth rejuvenation treatments offered by beauticians. Broadcast in December, 2008, this segment is shown over two editions of OM. Comedian Haruna Ai–a regular OM guest–is one of three personalities who have volunteered to undergo cosmetic facial treatments. One of the aims of this two part segment (the second part is aired in the following edition) is to identify and discuss the risks of non-invasive treatments. Once again, a complex combination of recorded image layered with sound, text and graphics enhances the comic force of the scene by indexing extra-programme knowledge and manipulating gendered stereotypes. Prior to the treatment scene, Haruna Ai has been accompanied by OM experts Uematsu Kôji (fashion consultant) and Hirasawa Takashi (hair stylist) to a consultation with a cosmetic surgeon. Ai,13 who has voiced concerns about the size of her face and the state of her upper arms, discusses the risks associated with treatments for these self-identified problem areas. Well known as a transgendered comedian, Ai also talks of the treatments she has had in the past to bring her closer to her feminine ideal. As she describes how much more squared her face was, OM regular fashion expert Uematsu Kôji remarks that she must have looked like an electric fan, and an image of one is projected above the first character of the line of text "senpûki mitai datta tte koto!? (so you looked like an electric fan!?)". The gag draws a laugh, and this shifts into a narrated section in which Ai is positioned as: "Ms Haruna, although having the body of a boy she was born with the heart of a girl". An image is then projected of a younger Haruna Ai, looking more boyish than her current girl-like persona. Lonesome piano music underplays the image which fills the whole screen. With a trill, the line "sukoshi demo onna no ko ni chikadukitai ('want to get even a little closer to women')" appears in red text enclosed in quotation marks. It is a key phrase of the gentle narration that continues. Haruna Ai's openness about her gender history forms part of the knowledge manipulated in the selection of telop used for her on-

13 Haruna Ai is referred to as Ai-chan, a combination of her first name and the informal address-form "chan" in OM. Hirasawa Takashi is known by his first name. Uematsu Koji is known both by his last name (usually with the formal address-form "san"), and also by his nickname Kô-chan which is a shortening of his first name and the informal address-form "chan".
screen utterances. Throughout the programme, pink font is used for renderings of her words and she plays heavily on her over-the-top girlish, idol-like media persona. However, at certain points through the series, colour and font choice are manipulated to reflect the indexing of a masculinity that undercuts her comic persona. The radio-frequency facial treatment Ai undergoes involves heating subcutaneous dermal tissue. Ai has been warned of the painful heat sensation it produces, and has willingly agreed to undergo it on camera. Similar to the candid camera scene discussed above, the camera is set as a voyeur. Visible to us is the doctor performing the treatment, and Seyama-announcer can be seen peeking around a door in the background. Fig. 9-3 shows the part of the scene where Ai endures the painful cosmetic procedure to slimline her face and remove her double chin. a) a~tsu
b) atsui
c) just like a male seal
Fig. 9-3. hoottttt, HOT (oneeMANS, NTV, 2/12/2008)
After a close up of Haruna Ai's face writhing in pain, the scene changes to a full shot of her on the treatment bed. From the sheets draped over the body emerges red text written in katakana. The text in 3a) expands and contracts twice, to be replaced by the phrase in 3b). We can see Haruna squirming under the sheets, and hear the scream a- a- atsui– which we could gloss as "ho-ho-hot!" The yells are loud and raw. In the world of normative femininity, this expression would not be heard. In 3a) a tilde is used to represent an elongation of the vowel sound, and the final katakana "tsu" truncates the word. In standard Japanese writing, vowel elongation within a word is marked by a "ー" when using the katakana script. In non-conventional writing, the tilde "~" is also used to indicate elongation that may be paralinguistic. Shitara (2011b, p. 51-52) notes that the tilde "~" is used in telop to emphasise the emotional state of the speaker, and/or to stress expressions and/or states being experienced on screen. Here, it emphasises the pain that Ai is enduring. The second telop (3b) forms the word "atsui (hot)" in its entirety. It is also written in the
boxed syllabary, katakana, and there is no elongation. The visualisation, then, is used not only to emphasise the expression of extreme pain, but also the force of the exclamation in somewhat rough language. The visualisation of Ai’s pain-induced groans takes on added meaning when situated in relation to the complexity of her media-persona; an openly MTF comedian who shot to fame for her comic imitations of popular teenage singing idols, Ai adopts a candy-cute voice to produce her most well-known catch phrase “iu yo ne (well you should talk)”. However, a perceived masculinity is always lurking within her stage persona, and is often invoked for comic purpose. In this segment as well, this perceived underlying maleness is refreshed in the memory of the viewers through the use of narration and an earlier image of Ai, as discussed above. Upon hearing her pain-filled screams emerging from the treatment room, fellow OM regular fashion consultant Uematsu Kôji loudly proclaims “maru de osu no azarashi yo ne (just like a male seal, I tell you, right)”. The telop for this is projected in the blue and white font coloration used for Uematsu’s utterances. The phrase “male seal” is larger than the surrounding text, with the key linguistic components projected in katakana in reversed colours. The textual representation highlights ‘maleness’ and ‘animal-ness’ through the script, font, colour, size and punctuation. The phrase itself as voiced by Uematsu aligns with the onê-kyara style – a campy style that makes use of stereotypical women’s language and a scathing wit (Maree 2013, 2014). The final particle “yo ne” is generally classified as belonging to stereotypical women’s language. This construction allows the speaker to assert an opinion whilst seeking affirmation. Ai’s positioning as a self-identified transgendered comedian is key in understanding the punch line here. The guttural sounds she emits as she undergoes the treatment are incongruent with her girlish, idol-like character, and Uematsu’s comments invoke the underlying “maleness” that prefaced the segment. Ai is open about the surgical procedures she has had on her body and is even prepared to undergo procedures on camera. Viewers will also be aware that sex-realignment surgery is legal in Japan, and it is possible for individuals who fulfil the set criteria to alter their legal sex14. Ai has chosen gender realignment surgery, and maintains her legal name; this reality is also often emphasised for comical effect in the program. By highlighting the complexity of Ai’s non-normative gender 14 The Act on Special Cases in Handling Gender for People with Gender Identity Disorder (2003/2004) stipulates the following requirements for an individual changing their sex on their registration papers; such individuals must: 1) be over 20 years of age; 2) be currently not married; 3) have no children under 20 years of age; 4) have the genital composition of the other sex.
identity through textualisation, and making it the source of comedy, alternative genders are at once made visual at the point they are erased. It is here that the work of telop in visualising and playing with language ideologies comes into play.
4. Conclusion

As both Park's (2009) work on the regimentation of language via subtitling on Korean television and Gerow's work on telop as a strategy to "commodify the gaze" (2010, 119) indicate, the inscription of text-on-screen is a multifarious media process that has implications for how we understand, negotiate and engage with language in the context of digital media. Language in the media is also intrinsically linked to linguistic identities: from the linguistic performance of celebrity, to the performance of self on television. Text-on-screen constructs salient media personas anchored in identifiable social identities (region, gender, age), and erases (Irvine and Gal 2000, Park and Bucholtz 2009) those "inconsistent with the ideological scheme" (Irvine and Gal 2000, 38). The use of text is an act of media-talk through translation and visualisation. It relies on the excessive use of non-standard orthography and non-normative speech forms that index language ideologies.
References

Gerow, Aaron. 2010. "Kind Participation: Postmodern Consumption and Capital with Japan's Telop TV." In Television, Japan and Globalization, edited by Mitsuhiro Yoshimoto, Eva Tsai, & Jung-Bong Choi, 117-150. Ann Arbor, MI: Center for Japanese Studies, The University of Michigan.
Gottlieb, Nanette. 2010. "Playing with Language in E-Japan: Old Wine in New Bottles." Journal of Japanese Studies, 30(3): 393-407.
Hambleton, Alexandra. 2011. "Reinforcing Identities? Non-Japanese Residents, Television and Cultural Nationalism in Japan." Contemporary Japan - Journal of the German Institute for Japanese Studies, Tokyo, 23(1): 27-47.
Hill, Annette. 2005. "Reality TV: Audiences and Popular Factual Television." London, England / New York, USA: Routledge.
Irvine, Judith T. & Gal, Susan. 2000. "Language Ideology and Linguistic Differentiation." In Regimes of Language, edited by Paul V. Kroskrity, 35-83. Santa Fe, NM: School of American Research Press.
Ishida, Hitoshi & Murakami, Takanori. 2006. "The Process of Divergence between 'Men who love Men' and 'Feminised Men' in Postwar Japanese Media." Intersections: Gender, History and Culture in the Asian Context, 12. Retrieved from http://intersections.anu.edu.au.
Kawabata, Miki. 2006. "Terebi nyûsu bangumi ni okeru keishikiteki gorakuka no genjô to sono mondai: Jimaku/teroppu o chûshin toshite" [Well-Guided or Misled?: Open Captions and the Tabloidization of the TV News Programs in Japan]. Sôgô Kagaku Kenkyû, 2: 209-219.
Kimura, Daiji. 2011. "Kakko no imiron" [Semantics of Parenthesis]. Tokyo, Japan: NTT Shuppan.
Kinsui, Satoshi. 2003. "Bâcharu Nihongo: Yakuwari-go no Nazo" [Virtual Japanese: The Mystery of Role Language]. Tokyo, Japan: Iwanami Shoten.
—. 2007. "Yakuwari-go kenkyû no chihei" [Frontiers of Role Language Research]. Tokyo, Japan: Kuroshio Shuppan.
—. 2011. "Yakuwari-go kenkyû no tenkai" [Developments of Role Language Research]. Tokyo, Japan: Kuroshio Shuppan.
Kroskrity, Paul V. 2000. "Regimes of Language: Ideologies, Polities, and Identities." Santa Fe, NM: School of American Research Press.
Maree, Claire. 2013. "Writing Onê: Deviant Orthography and Heteronormativity in Contemporary Japanese Lifestyle Culture." Media International Australia, 147: 98-110.
—. (Forthcoming 2014). "The Perils of Paisley and Weird Manwomen: Queer Crossings into Primetime J-TV." In Multiple Translation Communities in Contemporary Japan, edited by Beverly Curran, Nana Sato-Rossberg & Kikuko Tanabe. Manchester, UK: St. Jerome Publishing.
Miller, Laura. 2004. "Those Naughty Teenage Girls: Japanese Kogals, Slang, and Media Assessments." Journal of Linguistic Anthropology, 14(2): 225-247.
—. 2011. "Subversive Script and Novel Graphs in Japanese Girls' Culture." Language & Communication, 31(1): 16-26.
NHK Hôsô Bunka Kenkyû-jo. 2004. Shin yôji yôgo jiten [New Character and Phraseology Dictionary]. Tokyo, Japan: NHK Hôsô Bunka Kenkyû-jo.
Nishimura, Yukiko. 2003. "Linguistic Innovations and Interactional Features of Casual Online Communication in Japanese." JCMC, 9(1). Retrieved from http://jcmc.indiana.edu.
O'Hagan, Minako. 2010. "Framing Humour with Open Caption Telop." In Translation, Humour and the Media: Translation and Humour, Volume 2, edited by Delia Chiaro, 70-88. London, UK / New York, NY: Continuum.
Okamoto, Shigeco. 2006. "Perception of Hiniku and Oseji: How Hyperbole and Orthographically Deviant Styles Influence Irony-Related Perceptions in the Japanese Language." Discourse Processes, 41(1): 25-50.
Park, Joseph Sung-Yul. 2009. "Regimenting Languages on Korean Television: Subtitles and Institutional Authority." Text & Talk, 29(5): 547-570.
Rowsell, Jennifer & Abrams, Sandra Schamroth. 2011. "(Re)conceptualizing I/identity: An Introduction." National Society for the Study of Education Yearbook, 110(1): 1-16.
Sakamoto, M. 1999. "Hanran suru jimaku bangumi no kôzai" [The Merits and Demerits of Programs Overflowing with Subtitles]. GALAC, 25: 30-35.
Sakamoto, M. 2000. "Teroppu seitai-gaku: Terebi jimaku no kenkyû" [The Ecology of Telop: Research into TV Subtitles]. Retrieved from http://www.aa.alpha-net.ne.jp/mamos/lecture/jimaku99.html#top.
Sebba, Mark. 2007. "Spelling and Society: The Culture and Politics of Orthography around the World." Cambridge, UK / New York, NY: Cambridge University Press.
—. 2009. "Sociolinguistic Approaches to Writing Systems Research." Writing Systems Research, 1(1): 35-49.
Shiota, E. 2005. "Baraeti bangumi ni okeru teroppu no yakuwari: Hatsuwa rikai no shikumi o saguru" [The Role of Telop in Variety Shows: Investigating the Workings of Utterance Understanding]. In Media to kotoba 2: Kumikomareru ôdiensu [Media and Language 2: The Audience Installed], edited by K. Miyake, N. Okamoto, & A. Atsushi, 33-58. Tokyo, Japan: Hitsuji Shobo.
Shitara, Kaoru. 2006. "Terebi no tôku kônâ o yomu: Dôitsu hatsuwa o tomonawanai moji teroppu no jittai" [Reading TV Talk Segments: The State of Text Telop not Accompanied by Identical Speech]. Mukogawa Joshi Daigaku Gengo-bunka Kenkyûjo Nenpo, 18: 37-61.
—. 2009a. "Terebi shichôsha taido to moji teroppu: Gakusei to seijin no taihi" [Television Audience Attitudes to Text Telop: A Comparison of Students and Adults]. Mukogawa Joshi Daigaku Gengo-bunka Kenkyûjo Nenpo, 20: 29-55.
—. 2009b. "Kôhaku Uta Gassen ni miru 30-nenkan no moji teroppu: 1960-nendai kara 1980-nendai made" [30 Years of Text Telop in Kôhaku Uta Gassen: From the 1960s to the 1980s]. Mukogawa Joshi Daigaku Gengo-bunka Kenkyûjo Nenpo, 21: 47-58.
—. 2011a. "NHK kuizu bangumi ni miru moji jôhô no hensen" [Shifts in Textual Information in NHK Quiz Shows]. Gengo to Kôryû, 59: 90-103.
—. 2011b. "Baraeti bangumi ni okeru moji teroppu no kijutsuteki kenkyû: Hyôki kôka no kôzô bunseki" [A Descriptive Study of Text Telop in Variety Shows: An Analysis of the Structure of the Effects of Writing]. Doctoral thesis submitted to Mukogawa Women's University, Nishinomiya, Japan.
Silverstein, Michael. 1979. "Language Structure and Linguistic Ideology." In The Elements, edited by Paul R. Clyne, William F. Hanks & Carol L. Hofbauer, 193-247. Chicago, IL: Chicago Linguistic Society.
Suto, Hidetsugu, Kawakami, Hiroshi & Katai, Osamu. 2009. "Influences of Telops on Television Audiences' Interpretation." Lecture Notes in Computer Science, 5612: 670-678.
Tranter, Nicolas. 2008. "Nonconventional Script Choice in Japan." International Journal of the Sociology of Language, 192: 133-151.
Woolard, Kathryn A. & Schieffelin, Bambi B. 1994. "Language Ideology." Annual Review of Anthropology, 23: 55-82.
Abstract

Telop and titles on the Japanese small screen

Keywords: telop, media-talk, TV

The heavy use of texts and graphics is a defining characteristic of Japanese variety programming (Gerow 2010, Hambleton 2011). Telop (from television opaque projector) contribute to the tabloidisation of news broadcasts (Kawabata 2006) and highlight "comic content" (O'Hagan 2010, 70) in entertainment broadcasting. By engaging and maintaining attention beyond the show to advertising, telop function to engage the commodity that is the viewers' gaze (Gerow 2010, 141-2). Through a process of selective textualisation, telop regiment language (Park 2009) by enforcing the producer's interpretive account onto the broadcast product. This paper argues that the inscription of text onto the screen as telop is an act of media-talk translation and visualisation which relies on excessive use of subcultural and non-standard speech forms to effectively engage the gaze.
CHAPTER TEN

IT AIN'T OVER TILL THE FAT LADY SINGS: SUBTITLING OPERAS AND OPERETTAS FOR THE DVD MARKET

ADRIANA TORTORIELLO1
1. Introduction

Opera librettos have been translated, and their live performances have been surtitled intra- and interlingually, for quite some time now, and indeed, a fair amount of literature does exist on the subject (see, e.g., Burton 2009, and Desblache 2007 and 2009, amongst others). Operas have also been transposed for the big screen–i.e., effectively turned into films, scripted, and directed by film directors. In one recent (2006) example amongst many, Kenneth Branagh famously adapted and directed Mozart's Magic Flute to an English libretto, with dialogue written by Stephen Fry. These films would then be subtitled, inter- and/or intralingually, for DVD. An interesting analysis in this respect can be found in Citron (2000), who discusses the fate of operas when they are turned into films, and looks at "what happens when the camera becomes a major force in shaping narrative and representation" (2000, 69). However, a different, and hitherto relatively unexplored, phenomenon is that of filming the live performance in order to produce a DVD–which in turn gets intra- and/or interlingually subtitled. The aim of this article is therefore that of exploring this type of opera subtitling–defining its main characteristics, and attempting to identify the aspects that distinguish it both from live opera surtitling and from more conventional forms of subtitling for DVD: not quite the same as subtitling a film for DVD, and certainly very different from surtitling for a live production, subtitling an
1 University College London, UK. Email address: [email protected]
opera for DVD is a complex type of subtitling that deserves a chapter of its own. After a general introduction on the nature of DVD subtitling of operas, the article will focus on the added challenges of subtitling comic operas and operettas. This will be supported by the analysis of an English operetta, The Mikado, which was subtitled into a number of languages for the DVD market.
2. Source Text and Audience Design

In all forms of subtitling of audiovisual material, the source text, far from being constituted by 'words', is a semiotic complex made of a number of signs and codes that interact in the creation of the overall meaning of the text itself (Delabastita 1989, Chaume 2004a, 2004b). In a rather colourful metaphor, Gottlieb (1994, 101) compares subtitling to an amphibian, and argues that "it flows with the current of speech, defining the pace of reception; it jumps at regular intervals, allowing a new text chunk to be read; and flying over the audiovisual landscape, it does not mingle with the human voices of that landscape: instead it provides the audience with a bird's-eye view of the scenery" (my emphasis). And because of its nature, Gottlieb observes, borrowing House's terms, that subtitling is an overt, as opposed to a covert, type of translation (see House 1997, 2001). And indeed, unlike dubbing, where the original soundtrack is replaced, in the case of subtitling the original text (images and soundtrack) is always there. A direct consequence of this is the fact that the emperor is naked, so to speak, and that therefore, as Díaz Cintas (2003, 43-44) and Díaz Cintas and Remael (2007, 57) point out, this is a particularly vulnerable type of translation, because the target language audience has the possibility to access at the same time both the source language text and the translation. Coming to the specifics of opera subtitling, in operas "the verbal text has been matched to a melody or vice versa which results in an inseparable couple–and this is enacted in singing together with orchestration, acting and scenery" (Virkkunen 2004, 95). The operatic text, in other words, is yet another semiotic complex in which words, to put it with Desblache (2007, 155), "are multimedial in that they are, both as signified and signifiers, part of several interdependent elements necessary to the meaning of the overall lyrical form". Therefore, one factor that distinguishes operas from other audiovisual texts is yet another form of constraint, namely, what Desblache (2009, 79)
refers to as "music constraints": because of the specific nature of this text, it is very important for the language chosen to achieve a "musical quality". Another very important factor that distinguishes operas from, say, films, and makes them more akin to theatre plays, is also the fact that they come to life in a live performance. Hence, as in the case of theatre, the debate is open as to what exactly the source text is: is it the libretto, or is it the performance? Although librettos are indeed translated in order to be further adapted for live performances, I would argue that this is more akin to a literary translation and that in talking about opera sur/subtitling, the text we should be looking at is the actual performance. As pointed out by Virkkunen (2004, 95), "even the most close-to-libretto-as-possible surtitling strategy must take account of the opera's metamorphosis from dramatic text into stage interpretation". An interesting parallel could, of course, be drawn between this dichotomy and the one that exists between the translation of a written play and the translation for the stage, i.e., for the actual mise-en-scène (see, e.g., Bassnett 1985, 1991, and Tortoriello 1997). Further: if we concentrate on the subtitling of an opera for DVD release, the question arises as to whether, once a stage production is filmed, this changes the nature of the text itself and/or its fruition and hence generates further challenges for the subtitler. As remarked by Virkkunen (2004), the spectator, as addressee of the performed text, experiences it in the here-and-now, a situation in which it is not possible to go back and listen/read again if something was not clear or escaped one's attention the first time around. But this allows us to notice, in the case of the DVD, a crucial difference, the existence of a double layer, so to speak: the live performance, with its live audience, is filmed and thus becomes an audiovisual text whose audience are the viewers of the DVD. Moreover, the subtitler of the filmed performance deals with a text that might (or might not) have already been surtitled when live. In this 'Russian dolls' kind of situation, in order to define his/her priorities and strategies, the subtitler must take into account both the multisemiotic and multimodal text that is constituted by the live performance and the final, filmed text that somehow encapsulates it and yet takes on a different (multisemiotic and multimodal) identity. The above considerations come with a number of implications. First of all, scripta manent. Against the temporary nature of surtitles, which "cannot be read in advance or bought afterwards in a printed form" (Virkkunen 2004, 92), DVD subtitles are there to stay. They can be read over and over again, the video can be started, stopped, rewound, and so on. This entails a very different sort of audience design. If we accept that one always translates with a reader/spectator in mind (see, e.g., Vermeer
1989, Nord 1997), then the point of departure of a subtitler is very different from that of a surtitler. Characteristics, needs and expectations of the audience who buy a DVD to watch, and listen to, an opera, will differ in quite considerable ways from those of a live opera's audience. And I would argue that these differences are quite crucial in determining the above-mentioned subtitlers' priorities and strategies. Jonathan Burton (2009), surtitler with London's Royal Opera House, openly argues in favour of the subtitlers' invisibility, when he states that "the subtitler's aim should be transparency, or even invisibility" (2009, 63). To which end, the surtitler should aim at simplifying both vocabulary and grammar and at levelling out the most archaic nuances and the most colourful language present in the original. Judi Palmer, surtitle coordinator at the Royal Opera House, interviewed alongside Jonathan Burton in issue 10 of JosTrans (2008), states that good surtitling should consist in "short, concise, well-written English", and that to that end the surtitler should aim at keeping what is absolutely essential while omitting the rest: "if we remain invisible […] we've achieved our aims". This opinion is shared by Desblache (2007, 166), who states that "like interpreters, surtitlers aim for an invisibility which enhances the comprehension of the text without taking precedence over other theatrical components". Interestingly, however, both Burton and Desblache seem to agree on the fact that the audience now clamour for surtitles. "We are now a text-dominated society, and audiences expect to know in detail what words are being sung", says Burton (2008, 61), while Desblache (2007, 166) observes that having become used to their (less-than-invisible) presence, the audience are now "very nearly unanimously hungry for surtitlers". And she adds that this applies to both inter- and intralingual surtitles, noting that surtitlers are "popular even when there is no language transfer issue, i.e. when the text is sung in the native tongue of the country" (2007, 167). So what about the case of the DVD subtitles? The (in)visibility of subtitles and subtitlers has been a matter of discussion for some time now, at least since the advent of fansubbers who, with their experimental and bold approach, were happy to do away with the traditional attitude according to which subtitles were meant to be like Victorian children–the more discreet, the better; and this is not the place to further the discussion (for more on this, see e.g. Díaz Cintas and Muñoz Sánchez 2006 and, more recently, O'Sullivan 2011). However, in this context, I would tend to say that DVD subtitles do tend to be bolder and less 'invisible' than live surtitles. They tend to be longer, they use repetition to a certain extent, and
they try and recreate some of the stylistic features of the original text (especially as concerns the rhyming pattern of the lyrics). Once again, this difference might be accounted for in terms of audience design: the expected reading speed of a DVD audience is likely to be higher, because of a number of factors. The definition of a DVD image is usually very good, so the legibility of the subtitles is greater. The conditions of reception are likely to be very different as well: a DVD would normally be watched at home, in a small 'auditorium' that might be the viewer's living room, and probably in ideal watching conditions, while the audience in the theatre experience a live performance with all the less-than-perfect conditions that come with it–people shuffling in their seats, coughing etc. And of course this is compounded by the above-mentioned fact that live surtitles cannot be stopped, played again and so on while the DVD opera and its subtitles can be experienced over and over again and a tricky part can be rewound and watched again if need be. In trying to define the characteristics of DVD subtitling, one more consideration will be useful. Burton (2009, 67) warns against 'howlers' in surtitling, observing that if the surtitler were to create an unintentionally funny situation through a bad phrasing or a misunderstanding, the audience would be more than ready to laugh at it even though that particular point in the opera was not meant to be funny, thus infuriating the singers and creating a rather problematic situation. In the case of a filmed performance, once again, this makes us aware of the added vulnerability of the DVD subtitler, who would be faced with a bout of laughter the source of which would not necessarily be apparent.
3. Operettas

For the purposes of this article, a particularly interesting and fruitful genre (or sub-genre) is that of the operetta. The term operetta is often used as a synonym of light opera, or comic opera, and although it is sometimes possible to notice some degree of terminological disagreement in this respect, I believe it is useful to follow the perspective adopted by Desblache (2009, 73-74), who simply refers to light opera as opposed to opera seria, concentrating on the main distinctions between the two in terms of topics–historical and mythological vs. contemporary–and structure–presence or absence of spoken dialogue. Light operas became popular around the middle of the 19th century, and with a shift in content came also a shift in audience and a shift in language. No longer the exclusive remit of cultivated audiences who expected operas to be written in the traditional languages–mostly Italian
and French–these lighter versions started being written in a number of other languages, amongst which English, and thus spoke to wider audiences and dealt with more topical subjects. The translation of these more popular works might follow one of two paths. In other words, the choice is between producing them in the original language with surtitles, or, as is the case of the productions staged in England by the English National Opera, having the libretto translated and adapted and then producing the opera in the language of the local audience. I would argue that DVD subtitles are a third, middle-of-the-road solution: like a translation/adaptation of a libretto, they are more permanent–scripta manent, as remarked above–but at the same time, like surtitles, they allow the viewer to follow the opera whilst listening to it in the language in which it was written. And in the case of a light opera, they might be the better option: because of the characteristics discussed above, they allow themselves to be longer and thus less concise in content and closer to the original in form. Operettas–light operas–are characterised by the presence of spoken dialogue but also of humour and often satire, and this needs more than telegraphic subtitles that would only convey the gist of what is being said or sung.
4. The Mikado

The article will now move on to a case study that will analyse the subtitling for the DVD of an operetta by Arthur Sullivan and W. S. Gilbert, The Mikado (1885), one of the twelve operas that resulted from their collaboration and that came to be known as the Savoy Operas. Set in the Japanese town of Titipu, the operetta is in fact Gilbert's pretext to satirise quite openly the Victorian society in which the authors lived, with all its pompous morality and superficial, formal rules and regulations hardly sufficient to hide an ever-increasing level of duplicity and corruption. The Mikado's son, Nanki-Poo, who had fled the palace disguised as a humble minstrel to escape the advances of an elderly lady belonging to his father's court, Katisha, falls in love with Yum-Yum, who is betrothed to the ex-tailor and now Lord High Executioner, Ko-Ko. In a farcical crescendo, involving much corruption on the part of all the high officials involved and a lot of ingenuity on the part of the two youngsters, the two finally manage to get together while Ko-Ko ends up marrying Katisha.
5. Analysis

The DVD in question resulted from the filming of Opera Australia's 2011 production of The Mikado, conducted by Andrew Greene and directed by Christopher Renshaw (the DVD was released in 2012). The operetta was interlingually subtitled into Italian, French, Spanish and German, as well as intralingually subtitled into English. Since the Italian version was subtitled by the author of the article, this will be the main source of the analysis (so in part this might also be considered a 'self-analysis'). Since, however, we worked as a team when we subtitled the opera into the various languages, considerations coming from the collective discussions that took place during the subtitling work will partly come into the picture (while acknowledging that different languages might well have different priorities and different traditions, we were also aware that it is considered good practice, and rightly so, to ensure a certain degree of consistency across the languages that appear on the same DVD). The operetta is all in rhyme, and as mentioned before, it features a mixture of atemporal settings and costumes, and very topical and contemporary references and allusions. As is the case in most light operas, it contains many examples of humour and cultural references, and inevitably a lot of them are supported by the images and so become the subtitler's classic headache constituted by the verbal/visual puns, jokes etc. A number of examples will now illustrate the situation, along with the choice of strategies adopted by the subtitlers. The translated examples, as mentioned above, will be taken from the Italian version.
5.1. Examples

The first example concerns a case of semiotic reiteration between verbal code and kinesic code; but it also gives us an opportunity to see what happens to the rhyming pattern, as well as to the length of the subtitles. Nanki-Poo, the son of the Mikado disguised as a minstrel, is singing about a sailor coming home to his sweetheart (and sitting her on his knees); whilst singing it, he slaps his knee. As can be seen, in translation the rhyming pattern has been kept almost identical, and the slapping gesture has been made to coincide with the words that describe the action. Thus, the same degree of semiotic cohesion was achieved in the target text. The final rhyme has become a rhyming couplet, a strategy that has sometimes been implemented as visually it
works particularly well in subtitling: in reading subtitles, a lot of information needs to be processed and retained at the same time, and rhyming couplets are ideal in this respect as the viewer does not need to refer too far back in order to detect the rhyme. In terms of the length of the subtitles, it is worth noting that they are, indeed, fairly long–far from conveying the gist of the text, they follow it pretty closely; the reading speed of some of the subtitles reaches 171 wpm, against the recommended reading speed of 120 (see note 2), thus indicating that the priority here has been that of conveying the semantic content of the text (also in order to preserve the above-mentioned semiotic cohesion).

Original English text: But the happiest hour a sailor sees
Italian subtitles: Ma al marinaio nulla di più piacerà
Backtranslation: But the sailor won't like anything more

Original English text: Is when he's down / At an inland town
Italian subtitles: Di sbarcare finalmente in un paese fra la gente
Backtranslation: Than landing in a town amongst people

Original English text: With his Nancy on his knees, yeo-ho!
Italian subtitles: La pupa sulle ginocchia prenderà
Backtranslation: He'll put his sweetheart on his knees

Original English text: And his arm around her waist
Italian subtitles: E stretta stretta la terrà
Backtranslation: And hold her very tight
Example 10-1

The following example is a case of a pun which is primarily verbal. What is interesting here is that the reason why the pun must be recreated at all costs–above and beyond considerations of loyalty toward text, authors and audience–is what was referred to above as the added vulnerability of the DVD subtitler of a filmed live performance.
2
120 wpm was the reading speed 'suggested' by the client. Indeed, above and beyond the oft-quoted role of cultural mediators, translators (and subtitlers) often find themselves involved in another sort of mediation: that between themselves and the requirements of the clients. If a client has read somewhere that the 'recommended' reading speed for this sort of text is 120 wpm, then that is what they will demand. But since we are not machines, and neither are our audience, we know better than that–a number of other factors come into the equation to increase the readability and legibility of subtitles, amongst which good line breaks, correct use of punctuation, good collocational choices, and, in this case, also a good rhyming pattern. So what subtitlers end up doing more often than not is to try and educate the clients in the name of the best possible fruition on the part of the potential buyers of the DVDs in question. It is an argument that often works, at least in this field of work where, unlike that of Hollywood blockbusters, clients are still interested in quality.
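Although the chapter describes no tooling, the reading-speed figures quoted above are simple arithmetic: word count divided by time on screen. The short Python sketch below shows how such a figure might be checked against a client ceiling; it is an illustration only, and the function names, cue times and the 2.45-second duration are my own assumptions rather than anything used in the actual subtitling workflow.

```python
# Minimal sketch: compute the reading speed of one subtitle in words per minute
# and compare it with a client-imposed ceiling (e.g. the 120 wpm mentioned above).
# All names and timings here are illustrative assumptions.

CLIENT_CEILING_WPM = 120  # the reading speed 'suggested' by the client (note 2)

def words_per_minute(text: str, in_time: float, out_time: float) -> float:
    """Reading speed of a subtitle, given its cue-in and cue-out times in seconds."""
    duration = out_time - in_time
    if duration <= 0:
        raise ValueError("a subtitle must stay on screen for a positive duration")
    return len(text.split()) * 60.0 / duration

def exceeds_ceiling(text: str, in_time: float, out_time: float,
                    ceiling: float = CLIENT_CEILING_WPM) -> bool:
    return words_per_minute(text, in_time, out_time) > ceiling

# A seven-word line shown for 2.45 seconds reads at roughly 171 wpm,
# i.e. well above the 120 wpm ceiling, much like the subtitles discussed above.
line = "But the happiest hour a sailor sees"
print(round(words_per_minute(line, in_time=10.00, out_time=12.45)))  # 171
print(exceeds_ceiling(line, 10.00, 12.45))                           # True
```

Counting by words is only one convention; many subtitlers work in characters per second instead, but the comparison against a client ceiling works in the same way.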
Nanki-Poo is talking about the moment he fell in love with Yum-Yum, who sadly was promised to her guardian, a simple tailor (Ko-Ko). The pun is played on the double meaning of the noun 'suit'. The actor, who at this moment in time is speaking, not singing, stresses the word 'suit', then pauses for effect, then adds 'was hopeless'. This is of course followed by laughter from the audience–which is the ultimate clue for the DVD subtitler: much as when subtitling comedy programmes with canned laughter, at this point the subtitler must make sure s/he finds a solution that is as witty, or at least as funny.

Original English text: We loved each other at once, but she was betrothed to her guardian Ko-Ko,
Italian subtitles: Ci siamo innamorati all'istante, ma lei era promessa al suo custode, Ko-Ko,
Backtranslation: We fell in love straight away, but she was betrothed to her guardian, Ko-Ko,

Original English text: a cheap tailor, and I saw that my suit was hopeless. [actor pauses, audience laughs]
Italian subtitles: un volgare sarto... che mi dava del filo da torcere!
Backtranslation: A cheap tailor… who was giving me a hard time [lit., who was giving me some thread to twist, which in Italian means 'give someone a hard time']
Example 10-2

This is an example that will allow us to show what strategies were adopted in the part of the operetta that was distinctly topical and hence peppered with present-day cultural references. At this point, Ko-Ko, the former tailor now become Lord High Executioner, sings a rather long song in which he enumerates the characters that "would none of them be missed" if he were to execute them. The references here are directly related to the here-and-now of the performance, as opposed to the original libretto which featured such characters as "the banjo serenader, and the others of his race", and "that singular anomaly, the lady novelist"–as would befit a Victorian satirical operetta. When they were culture-specific as well as time-specific, the subtitled version chose to keep the modernity but not the geographical location of the references, so to speak. My Kitchen Rules is an Australian cooking game show (and the Frenchman hinted at is Manu Feildel, French celebrity chef and one of the judges of the game), while Gold Logies refers to the winners of the "Gold Logie Award for Most Popular Personality on Australian Television". In both instances, the strategy adopted was that of generalisation, since
retention, or even explicitation (in Pedersen's 2005 sense) of the references would have created a comprehension problem in the target audience. In the latter example, conveying the semantic meaning was particularly relevant since the utterance ends with 'kiss my fist', which of course is accompanied by a very theatrical gesture on the part of the actor. Other contemporary references, such as references to climate change, or to 'twitterists', of course did not pose the same sort of problem and were kept as such in translation. Once again, the subtitles are fairly 'substantial' and at times display a reading speed which is remarkably higher than the recommended 120 wpm (the last subtitle in the example has a reading speed of 164 wpm).

Original English text: That Frenchman and the other one who judge My Kitchen Rules
Italian subtitles: E la gara di cucina per casalinghe annoiate
Backtranslation: The cooking competition for bored housewives

Original English text: Who give new definition to the label kitchen tools
Italian subtitles: Se il giudice son io le mando a pelar patate
Backtranslation: If I was the judge, I'd send them to peel potatoes

Original English text: That morning television host who's funny as a cyst
Italian subtitles: Il presentatore del talk show mi ha nauseato già
Backtranslation: The talk show presenter has already made me sick

Original English text: Gold Logies he has kissed but it's time to kiss my fist
Italian subtitles: Ha incontrato gran personalità? Ora il mio pugno incontrerà!
Backtranslation: Has he met famous characters? Now he'll meet my fist
Example 10-3

The final example (see Example 10-4) illustrates a completely different and very interesting phenomenon. This kind of opera allows itself to become, in a way, the 'parody' of itself; and in an operatic attempt at breaking the fourth wall, so to speak, the actor uses this meta-comment, and the 'dialogue' that ensues with the surtitler, to draw the audience in and make them aware of the surtitling process. So this becomes the one and only case in which the audience of the DVD are also made aware of it, because without that surtitle (and, therefore, its subtitled translation) the text would have completely lost its cohesion. (Another interesting example along similar lines, that did not directly involve surtitles, but that still
involved the actor/character pointing to himself and thus making the audience aware of the process, so to speak, was the case in which the line sung by the same actor read "Or worst of all the actor who's an extra lyricist").

Original English text: That trendy thing in opera
Italian subtitles: Quella cosa di moda nell'opera
Backtranslation: That trendy thing in opera

Original English text: If the plot seems like a mess
Italian subtitles: Se la trama è un disastro
Backtranslation: If the plot is a disaster

Original English text: That nice surtitlist
Italian subtitles: Il sopratitolatore birbone
Backtranslation: The cheeky surtitler

Original English text: [This song's not on my list] (surtitle)
Italian subtitles: Non ho questa canzone
Backtranslation: I don't have this song

Original English text: What?
Italian subtitles: Come?
Backtranslation: What?

Original English text: [Normal transmission will resume shortly] (surtitle)
Italian subtitles: Il normale servizio riprenderà tra breve
Backtranslation: Normal service will resume shortly

Example 10-4
6. Conclusions

Through the analysis carried out in the course of this article we have been able to notice a number of things. First of all, the fact that in the interlingual sur/subtitling of operas, the 'diagonal translation' (Gottlieb 1994) involved is not only spoken ST to written TT but also, and above all, sung ST to written TT, and that this brings with it an additional constraint, the one that, as we have seen, Desblache (2009, 79) refers to as "music constraint". Issues of rhyming and rhythm, as well as the actual musicality of the language, will be added to the usual 'constraints' that characterise subtitling and will be therefore incorporated in the hierarchy of priorities of the sur/subtitler. Coming to the specific characteristics of subtitling operas and operettas for the DVD market, we have noticed a number of factors that distinguish it from the surtitling of live performances. Some important differences we remarked upon are as follows: DVD subtitles are there to stay, as opposed to the temporary nature of surtitles; a DVD can be stopped and started while a live performance cannot be rewound; the conditions of reception are better in the case of a home system and hence the average reading speed of
the DVD audience is higher. This is likely to determine a different set of priorities–the subtitlers are less likely to pursue the 'invisibility' sought after by surtitlers, and more likely to aim for a fuller transfer of both form and content of the ST in their subtitles. One rather important phenomenon that was observed in the case of the subtitling of filmed live performances is what was referred to as 'added vulnerability', deriving from the presence of the theatre audience in the filmed text, and hence from the presence, in the subtitler's source text, of the reactions of the audience to a) the performance, but also to b) the live surtitles, which might or might not be accessible to the subtitler him/herself. As far as operettas are concerned, it has been possible to observe that the nature of the subtitles is to a fair extent determined by the nature of light operas, i.e., specifically, by the fact that the sung parts are often alternated with spoken parts, as well as by the fact that, because of their farcical and/or satirical nature some parts are very topical (and constantly updated in order to remain so). In other words, here the semantic content is as important as the formal characteristics of the text, and I would add, out of content, rhythm and rhyme, content and rhyme are likely to be prioritised: content, in order to keep the topical (and usually humorous) nature of the text; and rhyme, because subtitles that keep the rhyming pattern of the original are visually quite powerful–indeed, the fact that in some cases the rhyming pattern was actually slightly modified in order to create some rhyming couplets, which are easier to process, would indicate that the subtitler did give this aspect a fair amount of consideration. In terms of content, and focusing on the more topical parts, it has been noted that the subtitler did follow the updating of the content, but chose to level out the more specifically Australian references by adopting a strategy of generalisation–keeping those references would have resulted in a 'double foreignisation' effect (Japan? Australia?) that would have left the audience totally mystified. Perhaps the most interesting point in the original text is represented by the meta-references such as the ones mentioned in the last example of the analysis. Although not necessarily posing particular problems in translation, they are very revelatory of the actual nature of the text in question and therefore should be borne in mind: this is a text that allows itself to foreground the process and hence play with the fourth wall, possibly subverting the traditional roles of actors and audiences. One cannot help but wonder whether this sort of situation might encourage some more 'subversive' (Nornes 1999 and O'Sullivan 2011) forms of subtitling…
References

Bassnett, Susan. 1985. "Ways through the Labyrinth. Strategies and Methods for Translating Theatre Texts." In The Manipulation of Literature. Studies in Literary Translation, edited by Theo Hermans, 87-102. London: Croom Helm.
—. 1991. "Translating for the Theatre: the Case against Performability." TTR, IV(1): 99-111.
Burton, Jonathan. 2009. "The Art and Craft of Opera Surtitling." In Audiovisual Translation: Language Transfer on Screen, edited by Jorge Díaz Cintas & Gunilla Anderman, 58-70. London: Palgrave Macmillan.
Burton, Jonathan & Palmer, Judi. 2008. "Interview on JosTrans with Lucile Desblache." Retrieved from: http://www.jostrans.org/issue10/int_burton_palmer.php.
Chaume, Frederic. 2004a. "Cine y Traducción." Madrid: Cátedra.
—. 2004b. "Film Studies and Translation Studies: Two Disciplines at Stake in Audiovisual Translation." Meta, 49(1): 12-24.
Citron, Marcia. 2000. Opera on Screen. New Haven and London: Yale University Press.
Delabastita, Dirk. 1989. "Translation and Mass Communication: Film and TV Translation as Evidence of Cultural Dynamics." Babel, 35(4): 193-218.
Desblache, Lucile. 2007. "Music to my Ears, but Words to my Eyes? Text, Opera and their Audiences." Linguistica Antverpiensia New Series, 6: 155-170.
—. 2009. "Challenges and Rewards of Libretto Adaptation." In Audiovisual Translation: Language Transfer on Screen, edited by Jorge Díaz Cintas & Gunilla Anderman, 71-82. London: Palgrave Macmillan.
Díaz Cintas, Jorge. 2003. "Teoría y Práctica de la Subtitulación: Inglés/Español." Barcelona: Ariel.
Díaz Cintas, Jorge & Muñoz Sánchez, Pablo. 2006. "Fansubs: Audiovisual Translation in an Amateur Environment." Journal of Specialised Translation, 6: 37-52. Retrieved from http://www.jostrans.org/issue06/art_diaz_munoz.php.
Díaz Cintas, Jorge & Remael, Aline. 2007. "Audiovisual Translation: Subtitling." Manchester: St Jerome.
Gottlieb, Henrik. 1994. "Subtitling: Diagonal Translation." Perspectives: Studies in Translatology, 2(1): 101-121.
House, Juliane. 1997. "A Model for Assessing Translation Quality." Meta, 22(2): 103-109.
—. 2001. "Translation Quality Assessment: Linguistic Description versus Social Evaluation." Meta, 46(2): 243-257.
Nord, Christiane. 1997. Translating as a Purposeful Activity. Manchester: St Jerome.
Nornes, Abe Mark. 1999. "For an Abusive Subtitling." In The Translation Studies Reader, second ed. (2004), edited by Lawrence Venuti, 447-469. New York and London: Routledge.
O'Sullivan, Carol. 2011. Translating Popular Film. London: Palgrave Macmillan.
Pedersen, Jan. 2005. "How is Culture Rendered in Subtitles?" In MuTra Conference Proceedings. Retrieved from http://www.euroconferences.info/proceedings/2005_Proceedings/2005_proceedings.html.
Tortoriello, Adriana. 1997. "Semiotica e Traduzione Teatrale: un'Ipotesi di Lavoro." In Traduzione, Società e Cultura 7, edited by Gabriella Di Mauro & Federica Scarpa, 59-115. Padova: Cleup.
Vermeer, Hans. 1989. "Skopos and Commission in Translational Action." In The Translation Studies Reader, second ed. (2004), edited by Lawrence Venuti, 226-238. New York and London: Routledge.
Virkkunen, Riitta. 2004. "The Source Text of Opera Surtitles." Meta, 49(1): 89-97.
Abstract

It ain't over till the fat lady sings. Subtitling operas and operettas for the DVD market

Keywords: subtitles, operas, operettas, DVD

The paper aims at analysing the distinguishing features of a rather new product, i.e. operas that are filmed during a live performance in order to produce a DVD which is later subtitled, intra- and/or interlingually. This novel type of opera subtitling is distinct both from live opera surtitling and from more conventional types of subtitling for DVD.
CHAPTER ELEVEN

SUBTITLING – FROM A CHINESE PERSPECTIVE

DINGKUN WANG1
1. Introduction

Literature on the topic of subtitling has largely neglected the Chinese context, though there has been a growth of interest in subtitling worldwide. This neglect is rather unfortunate, because the production of Chinese subtitling is soaring, in response to the increasing demand of the Chinese audience; hundreds of the latest products of world cinema and media are subtitled in Chinese every day. This paper is organized in five parts. First, it seeks to explain how subtitling is received in Mainland China. Secondly, it introduces the challenges presented by Chinese to subtitling. Thirdly, it outlines the pros and cons of current research on Chinese subtitling. Fourthly, it provides hints for future research in Chinese subtitling. Finally, it proposes ways to improve the current practice of subtitling and the reception of foreign audiovisual products in Mainland China. In this way, I will show that Chinese subtitling should be included as an essential component in AVT research.
2. Background: How Subtitling is Received in the Chinese Context

2.1. Some Changes and a Paradox

Contemporary China is in an era of expanding education and economic boom. New concepts, new styles and new values are emerging. They are spreading at great speed through different channels such as the Internet
1 The Australian National University (ANU), Australia. Email address: [email protected]
and trans-national communication. In this panorama a great number of foreign films and TV programs are reaching Chinese viewers every day, either through officially sanctioned distributions or through channels of piracy. This is, to some extent, associated with China's progress in international trade and economic growth. However, this situation also reveals a paradox between the control taken by the Chinese government over foreign cultural products and the accessibility of these materials to the Chinese audience. In addition, the Chinese government's initiative for establishing filmmaking as an industry has evident influence on the growth of domestic production in the Chinese film market. Domestic films grossed 8,273,190,000 RMB (approximately 1.27 billion AUD) in China in 2012. This is a significant growth, in comparison with the amount grossed by domestic films in the Chinese market in 2010, which was 5,733,520,000 RMB (0.88 billion AUD) (see note 2). According to a report by Reuters, China's domestic production "out-competed" imported films on the Chinese market in the middle of 2013, Hollywood films in particular. However, the intervention of Chinese authorities was essential to this outcome: they turned down a series of import plans in 2013, including blockbusters such as Despicable Me 2 and World War Z. Besides, although the film and media industries in China are acquiring considerable importance, censorship persists despite the changed political circumstances at different times, as the Chinese government still sees film and media industries as a powerful tool of propaganda and official ideology.
2.2. Censorship

As is well-known, censors edit films so as to remove unpalatable content. Sometimes censorship bans a film outright because it is difficult to "edit" it without jeopardizing its quality. In contrast to this regime of censorship, there is no efficient rating system in China to define the audience's age for a particular film. Most films released in China are censored to meet the standard of "General". Therefore, they must be accessible to all audiences. However, audiences are growing more and more sensitive to censorship, particularly when it interrupts the sequence of the narrative. As Gottlieb (1994, 268) states:
2
Relevant information (which is in Chinese) was found on the official website of the State Administration of Radio, Film and Television of China: http://www.chinasarft.gov.cn/catalogs/gldt/20070831155418550472.html.
The feedback-effect from the original–whether that consists of recognizable words, prosodic features, gestures, or background visuals–may be so strong that [...] the friction between original and subtitle causes noise, and the illusion of the translation as the alter ego of the original is broken.

More and more Chinese audiences are in the habit of viewing the subtitled version of an original. In their effort to understand the original, they no longer find the Chinese translation as indispensable as it used to be. Most of them can understand parts of English conversations, or are sensitive to a few words, particularly swear words. They voluntarily compare the subtitles with the Source Language (from now on SL) speech and are disappointed when the translation does not match the meaning of the SL speech faithfully. The Chinese government could exercise more power over the viewing habits of Chinese audiences by bringing previously banned films into the mainstream and applying stricter ratings to them rather than by trying to ban them outright. The firm stance of the Chinese authorities on the issue of censorship is at odds with recent innovations in communication technology, which have made it almost impossible to hide the content of an original from any audience. Censorship has simply encouraged the Chinese audience to access such films through other channels – unofficial and even illegal.
2.3. Piracy and Fansubbing

2.3.1. Audiovisual Piracy in China

Shujen Wang (2003) quotes the following journalistic comment in his book Framing Piracy–Globalization and Film Distribution in Greater China: "China in some ways represents a nightmare scenario for corporate America, a post-Napster Wild West chaos where any intellectual property can be illegally copied, and commonly is" (see note 3). In my opinion, audiovisual piracy is prevalent in China for at least two reasons: the speed at which pirated products can be distributed and the ease with which they can meet the demand of Chinese consumers at lower prices. First, piracy skips the lengthy bureaucratic process of approval and hence distributes home video products faster than the legitimate distributors. Piracy is deeply embedded
3
Lisa Movius, Salon.com, 8 July 2002.
in the domestic social context, technology and the authorities' control in modern China. As one of the largest DVD markets, in addition to its transnational economic and legal environment, China provides a space for pirated optical media production and its growing consumption. Secondly, pirated products offer more options with lower prices to Chinese consumers than officially sanctioned distributors. Bureaucratic intervention delays the official release of DVDs on the Chinese market and hence encourages piracy. Piracy also competes with legitimate distributors and erodes cinema profits. Although young people nowadays will queue in front of box offices for a few blockbusters, the majority of Chinese consumers will resort to piracy, which provides more options at a cheaper price. As censorship expands and intensifies, piracy finds an alternative and more "secure" channel to continue functioning–the Internet. Online piracy reaches an extremely large audience in China. Millions of Internet cafés in China, either licensed or unlicensed, provide facilities for illegal downloading. Zhang (2013) considered an alliance between the government and industries essential, and decisions made by Chinese authorities on banning and blocking websites as unwise. Nevertheless, Chinese audiences resort to illegal downloading for the payment-free resources provided by fansubbing groups, with exceptional efficiency and speed of release.

2.3.2. Fansubbing as a Catalyst of the Audience Reception of Subtitling

Fansubbing emerged in China in 2003, when broadband networks were established in every major city. Chinese websites provide peer-to-peer (P2P) services and BitTorrent downloads, and most foreign films and TV programs on these websites are fansubbed. At present, YYeTs is the largest Chinese fansubbing group. It is in the process of transformation from a fansubbing group to a sanctioned provider of translation services. This transformation includes developing working relationships with the Chinese cyber franchises Sohu (搜狐) and Youku (优酷); these online franchisees purchase the distribution rights of many foreign media products and hire YYeTs to produce subtitles. Though disputes still arise regarding the group's subtitling and resource-sharing activities, YYeTs sets an example for Chinese fansubbing with its working method and strategies for transformation. Despite the controversial aspects, the emergence and development of Chinese fansubbing correspond to the social, economic and political issues mentioned previously. Fansubbing groups make a significant contribution to the progress of Chinese subtitling by taking the Chinese audience
further into world media. Chinese fansubbing has brought world media and cinema into the daily life of Chinese people; Chinese fansubbing also brings Hollywood dominance into Chinese society, reaching the Chinese audience mostly outside authorized distribution. As Chinese fansubbing spreads, subtitling has become the preferred mode of AVT for Chinese audiences, at least for those living in major cities.
2.4. Subtitling as a Preferred Mode of AVT

Before Chinese fansubbing emerged, the practice of AVT in the Chinese context developed in spite of difficulties that arose from Chinese social, economic and political conditions and which still remain. Chinese audiences welcomed dubbing as a mode of translating foreign films when they were first exposed to them. They found dubbed versions to be more vivid than subtitled ones due to their insufficient knowledge of foreign languages and cultures (Qian 2004, 2009; Zhang 2004). However, the contemporary Chinese audience finds that dubbing often forces a contradiction between the linguistic and visual information in a foreign original. For example, in a dubbed Hollywood film, characters who are ethnically and physically non-Chinese speak Mandarin; these characters live in a fictional world based on a reality that is geographically and culturally different from that in China. This fault of dubbing becomes more and more obvious and unacceptable to the contemporary Chinese audience. Hence dubbing is losing its appeal with the audience in Mainland China, at least to those in major cities. Despite this, the use of subtitling as a single mode of AVT is still impractical in the Chinese context. At present, dubbing and voice-over are still used in TV broadcasts, while subtitling functions as an equal alternative means of translation, particularly in news programs. In Chinese cinemas, foreign films are subtitled and dubbed, and audiences decide which version they are going to watch.
2.5. Obstacles to the Use of Subtitling as a Single Mode of AVT

There are at least three obstacles to the use of subtitling as a single mode of AVT in Mainland China. Firstly, there are language barriers. Most of the population of Mainland China is weak at foreign languages, English in particular. Although English is a compulsory subject at every level of education, Chinese people still lack essential contact with authentic linguistic environments. For example, students majoring in foreign languages lack opportunities to communicate with foreigners, or
are unwilling to do so even if they have such opportunities (see note 4), though educational institutes make efforts to hire foreign teachers. Secondly, for many historical reasons, the Chinese population still suffers from a high rate of illiteracy. This is mainly associated with the inability to read Mandarin. Multilingualism exists among different ethnic minorities in China, which is the third obstacle. Although Mandarin is taught nation-wide, it is still difficult for most ethnic minorities to read Mandarin Chinese fluently. Meanwhile, disputes arise among members of those ethnic minorities over Mandarin being a compulsory course in China's national education system. Therefore, it is difficult for subtitling to function as a single and accessible mode of AVT in Chinese cinemas and TV broadcasts. It is particularly difficult for rural residents and ethnic minorities in China to read Chinese texts on the screen. However, young viewers from major cities, who form the majority of the audience for foreign audiovisual products, find subtitling more appealing and entertaining. This is also due to the fast and massive dissemination of foreign audiovisual products through the Internet and other distribution channels. To fulfill the demand of these audiences, subtitling is carried out in the Chinese context with incredible efficiency. Unfortunately, the practice of subtitling in the world's second largest film market has so far attracted little attention from research in AVT.
3. Challenges Presented by Chinese to Subtitling

3.1. Display Conventions

Traditionally, Chinese subtitles are centred at the bottom of the screen. They are usually white when the film is for public exhibition. For numbers, Arabic numerals are used instead of Chinese characters. Each line of a subtitle should stay on the screen for three to six seconds. The duration should not be shorter than one second if the subtitle contains only one word. The number of characters per line is fifteen or less. Chinese subtitling uses little punctuation: periods are not used, and quotation marks are used in a few cases, such as lyrics, letters and words from foreign languages, except English. A space is used when there are
4 This attitude of Chinese students is also apparent in those who study abroad. For example, most Chinese students in Australia stick with each other and with the local Chinese community (i.e. the environment that is familiar to them and easy for them to survive in), rather than reaching out to native English-speakers (and other cultural communities) in Australia.
two sentences in one exposure. Ellipses are inserted in a sentence to represent a false start, pause, slip of tongue, or interruption. Short dashes are used to show different turns of a dialogue, i.e. lines spoken by different characters.
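The conventions listed above are concrete enough to be checked mechanically. The following Python sketch (my own illustration, not something described in this chapter) expresses the line-length and duration rules as simple warnings; the data layout, and the assumption that a "one-word" Chinese subtitle is at most two characters long, are introduced only for the example.

```python
# Minimal sketch: flag violations of the display conventions described above
# (at most fifteen characters per line; three to six seconds on screen,
# but never less than one second for a single-word subtitle).
# The two-character test for a 'single word' is an assumption for illustration.

MAX_CHARS_PER_LINE = 15
MIN_DURATION = 3.0
MAX_DURATION = 6.0
MIN_DURATION_SINGLE_WORD = 1.0

def check_subtitle(lines, duration_seconds):
    """Return human-readable warnings for one subtitle event."""
    warnings = []
    for line in lines:
        if len(line) > MAX_CHARS_PER_LINE:
            warnings.append(f"line has {len(line)} characters (maximum {MAX_CHARS_PER_LINE}): {line}")
    single_word = len(lines) == 1 and len(lines[0]) <= 2
    minimum = MIN_DURATION_SINGLE_WORD if single_word else MIN_DURATION
    if duration_seconds < minimum:
        warnings.append(f"shown for only {duration_seconds:.2f} s (minimum {minimum} s)")
    if duration_seconds > MAX_DURATION:
        warnings.append(f"shown for {duration_seconds:.2f} s (maximum {MAX_DURATION} s)")
    return warnings

# An invented line (roughly "this subtitle is really too long and stays too briefly")
# triggers both checks; a short, well-timed single word passes.
print(check_subtitle(["这条字幕实在太长而且停留的时间也太短了"], 2.0))
print(check_subtitle(["出来"], 1.2))  # []: a single-word subtitle may stay as little as one second
```

Positioning and colour are not covered by the sketch, since those conventions concern rendering rather than the text itself.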
3.2. Linguistic Rules and Audiovisual Constraints

The unique characteristics of the Chinese language present a number of obstacles to subtitling. Firstly, each Chinese character is semantically independent but can also form words and sentences with other characters. Chinese translators often use plainer or even some dialectal terms in order to condense the target text (TT). For example, the northern Chinese dialectal term "啥 (shá)" can be used to represent the meaning of "what", instead of the more standard word "什么 (shénme)" (Shu 2009). A Chinese subtitle always appears in the most concise form, subject to the maintenance of the meaning of the original in the subtitled version. However, no matter which format a translator chooses, a Chinese character takes more space than an individual letter in, say, English. Secondly, subtitled versions in Chinese show alteration to the SL information, particularly when the original is in a European language. Such alteration may also correspond to the complex multimodal systems of audiovisual media. For example, if one character says "come out of the car!" to another, the Chinese subtitle can simply show "出来" (chu-lai) (i.e. out-come), which condenses the SL speech to "come out!". The word "出来" presents no difficulties to the audiences in recognizing to whom and where the speaker is speaking; the visual elements represent such information while the verbal information is delivered through the vocal channel in the original. However, by using "出来", the translation makes something implicit to the SL audiences explicit to the TL audiences. The character "来" (lai) specifies the direction in which the speaker wants the other person to move (e.g. to the point where the speaker is standing). The English sentence does not convey this information. A more convincing example is the following. Consider a scene in which two travellers coming out of a forest see a mountain. One says, "we need to go up there", while pointing at the mountain. This phrase could be translated in each of the following ways:
我们得上山 (we-need to-climb up-mountain)
我们得走到那上面去 (we-need to-walk to-that-upside-go)
我们得爬上那座山 (we-need to-climb-up-that-mountain)
However, a subtitler would most likely prefer the first, so as to match the manner and meaning of the speaker while ensuring the conciseness of the subtitle. The SL speech does not contain the meaning of "山" (i.e. mountain); the visuals deliver the actual reference to the mountain. The first translation does not make this reference fully explicit either, because it places neither the demonstrative "那" (i.e. that) nor the measure word "座" (i.e. the classifier used for large immobile objects such as buildings, hills and mountains) before the character "山"; only by using "那座山" (i.e. that mountain) would the Chinese sentence spell out the entire message that the speaker intends to convey. The first translation therefore still cooperates with the visuals, just as the SL speech does. In comparison, "我们得走到那上面去" is too long and does not conform to a Chinese speaker's phrasing in a comparable situation. As for "我们得爬上那座山", it represents an unnecessary effort on the translator's part to clarify everything the speaker means. This demonstrates the choices a subtitler must make when translating an audiovisual text, and also hints at the importance of research on Chinese subtitling within the wider research on AVT. Thirdly, it is often impossible for Chinese to imitate the stylistic effects obtained in the SL: the translation of wordplay is a case in point, as the following example shows:

Leonard: All right, well, let me see if I can explain your situation using physics/What would you be if you were attached to another object by an inclined plane, wrapped helically around an axis?
好，我尝试用物理学来解释现在的处境／如果被绑在…/…一个轴心成螺状的斜面上，下场会是什么？
Sheldon: Screwed
完蛋
(The Big Bang Theory Season 4 2011)
Leonard tries to explain to Sheldon the nature of Sheldon's relationship with Amy, a woman whom Sheldon considers a "friend but not a girlfriend". To show Sheldon that Amy is badgering him into improving their relationship, Leonard compares the situation to a physical movement
which eventually leads Sheldon to the word "screwed". Yet the literal meaning of "screwed" has no semantic similarity with the situation referred to by Sheldon. Although the pun created by the word "screwed" cannot be retained in translation, the word "完蛋 (wan-dan)" (done for; finished; ruined) does not reduce the humorous effect. It clarifies the secondary meaning of the pun, which still corresponds to the consequence of the situation described by Leonard, though the morphological design has to be changed. In many cases, however, Chinese translators seek to recreate wordplay in Chinese in the same place where it occurs in the SL. For example:

Charlie: You know what's a good book?/Under the Dining-Room Table by Richard Gobbler
知道一本好书吗？／姚⡭写的《餐桌下的秘密》
Alan: But—But it does not compare to…/Wait Till Your Liver Fails by Hope Udai
但是比不上…/…王死的《等爆肝》
(Two and a Half Men Season 5 2007)
"Richard Gobbler" is transformed into "姚⡭", which alludes to "咬⡭" (bite for arousing you). "姚", pronounced "yáo", is a Chinese surname; it sounds close to "咬" (yǎo), which means "bite". The translation corresponds to the meaning of "Under the Dining-Room Table"–rendered as "餐桌下的秘密" (secret under the dining-room table)–and reminds the audience of a previous encounter between Alan and his former wife. The same strategy applies to the translation of "Wait Till Your Liver Fails by Hope Udai". The name "Hope Udai" alludes to "hope you die", which is the meaning Alan intends to convey. The word "hope" can be translated as "望" (wàng), the pronunciation of which is close to the Chinese surname "王" (wáng). Thus "Hope Udai" is translated as "王死", which implies "望死". Fourthly, on the syntactic level, a Chinese translator prefers conciseness of translation over literal fidelity to the ST. Differences in the syntactic structures of Chinese and English often compel the translator to change the word order or even the sentence order of the ST. Thus, a Chinese audience may read the subtitle of one part of the SL sentence while hearing the other. The Chinese translation given below follows the SL word order, which makes it less intelligible: the TT consists of two separate parts that seem unconnected to each other, and the second part in particular is an incomplete sentence. The reason for this inconsistency is that the translator imposes the syntactic structure of the SL on the TT.
To me, it meant being somebody in a neighborhood full of nobodies ሩᡁᶕ䈤ᱟаᡀቡᝏ൘ањ┑ݵᰐሿংⲴ⧟ຳѝ (to me-is-a kind of-achievement in-a-full of-fameless-pawns-ofenvironment in) (Goodfellas 1990)
I would present the original SL information above with two turns of subtitles. Firstly, the meaning "to me" can be translated as below:

对我来说 (to-me-come to-say)
Secondly, the sentence "it meant…" can be translated as the sentence below:

这意味着能在这个草根社区里出人头地 (this-meaning-can-in-this-grass root-community-inside-being outstanding)
My translation changes the original word order and hence helps the Chinese audience recognize the connection between the two parts of the Chinese sentence. In addition, the words "草根" (grass root) and "出人头地" (being outstanding) convey the comparison between "somebody" and "nobody" in a fashion closer to the TL perspective. This example also demonstrates that English tends to use hypotaxis more than Chinese: English sentences can be organized and connected by cohesive ties such as coordinating, correlative and subordinating conjunctions, relative pronouns, adverbs and prepositions. Hence English sentences often display a complex structure consisting of a main clause and several subordinate clauses. In contrast, a Chinese sentence relies on word order and rarely uses a connective to maintain a logical relation with other sentences. So while English sentences are compact, stressing cohesion built on morphological and syntactic principles, Chinese sentences are diffuse, relying on context to maintain coherence between them. The issues mentioned above make Chinese subtitling a unique subject within subtitling research, yet one that Chinese academia has so far treated insufficiently.
4. Subtitling Research in the Chinese Context

The study of AVT has been widely acknowledged as a fully developed discipline in which subtitling attracts the most attention. However, among major researchers from Mainland China, only Shaochang Qian and Chunbai Zhang have had their work on AVT published in the Western academic journal Meta: Translators' Journal, alongside researchers such as Chapman Chen (Hong Kong) and Sheng-jie Chen (Taiwan). Díaz Cintas described this lack of communication with the wider world during an interview with Dong, who visited Imperial College London in 2012.
4.1. Some Researched Topics in the Chinese Context

Researchers such as Yunxing Li (李运兴), Zhengqi Ma (麻争旗), Shaochang Qian (钱绍昌) and Chunmei Zhao (赵春梅) introduced major issues regarding the technical constraints of AVT, including the unique characteristics and constraints of audiovisual media (i.e. how they differ from printed media), which force translators to think differently when seeking a TL solution. These authors and their works have attracted attention in the Chinese context and invited more scholarly research in AVT. More and more effort has been devoted to research into Chinese subtitling in recent years, owing to the increased popularity of the DVD among Chinese viewers and the progress of information technology. Up to the present, Haiya Dong (董海雅) is the only person who has completed doctoral research on subtitling (specifically, on subtitling humour) in Mainland China, according to the records of the China Doctoral Dissertation Full-text Database (中国博士学位论文全文数据库) on cnki.net (中国知网). Subtitling humour is one of the most frequently addressed topics in the Chinese context, and works on this subject overlap with those that focus on the translation of cultural references. Research on subtitling humour, and on Chinese subtitling in general, is mostly conducted through case studies rather than through state-of-the-art contributions. The majority of these works overlap considerably in their theoretical basis with more general works on translation; researchers in China mostly choose to base their discussions on theories such as Skopos theory (Huang 2006; Wang 2011; Tang 2012, etc.) and dynamic/functional equivalence (Su 2008; Peng 2012, etc.), which were primarily developed for textual translation, rather than drawing on more up-to-date theories directly related to subtitling.
Further research interest is given to fansubbing, with two major focuses: the socio-cultural dimension (Nie 2013; Ma 2013) and the addressees of translation (Ding 2013). In addition, Chinese scholars acknowledge the function of subtitling in foreign language teaching and learning. Researchers in Mainland China observe that Chinese students share the widespread assumption that English proficiency can be enhanced by watching American series. Some (e.g. Tan 2013; Xu 2013) provide guidelines for the appropriate use of TV series as supporting materials in language learning:

1. Choosing different types of subtitles (i.e. interlingual, intralingual and bilingual) in the viewing process;
2. Viewing the same material repeatedly so as to fully understand the dialogue;
3. Practising one's pronunciation by imitating the characters on screen.

Even though they are the main force in Chinese academia, scholars from Mainland China are constrained in addressing subjects related to subtitling and AVT in general. This is largely due to the lack of material and virtual resources (i.e. financial, technical and theoretical support) and of efficient communication with their peers abroad. It is apparent that more and more postgraduate students, those pursuing Master's degrees in particular, have chosen subtitling as their research topic. According to CNKI (i.e. 中国知网), 9,545 theses addressed subjects related to subtitles between 2007 and 2013, and 600 of them focused specifically on subtitle translation. These works have addressed a rather limited range of issues, such as subtitling humour and cultural references, and overlap with each other in theoretical basis and source-language materials. Hence, research in the Chinese context has yet to broaden its range of topics, and to introduce and apply relevant theories and findings accomplished elsewhere. Furthermore, completed works in Mainland China show some additional problems. First, most authors, including those mentioned above, have used fansubs as data without carefully examining the strategies and standards applied in fansubbing. Second, those who sought to assess the quality of fansubs were rather hasty in their evaluations and did not provide detailed analysis and explanation of their assessment criteria. As for the first problem, Chinese researchers may consider an explanation of their reasons for choosing fansubs irrelevant to their primary focus. More importantly, any attempt to elaborate on these issues leads to subjects related to government
censorship and piracy, which are too delicate to be addressed openly by researchers in Mainland China. As the majority of Chinese researchers avoid discussing such topics, a sort of self-censorship has arisen, at least in research on AVT.
4.2. The Self-Censorship Tendency

Self-censorship jeopardizes research efforts made in the Chinese context, adding to other obstacles such as the availability of source materials and the lack of peer-to-peer communication. Some researchers have sought to pursue their own line of research and hence diversify research subjects in AVT, either thanks to their expertise in other foreign languages or by focusing on translation from Chinese into English. For instance, Yue Li (ᵾᛖ) (2012) focused on the Chinese subtitling of Spanish films so as to make a state-of-the-art contribution to an even less studied subject. As for work on translation from Chinese into other languages, Min Zhao (䎥) (2011) conducted a comparative study of the English and Japanese subtitles of the Chinese film To Live (《活着》). However, even on these new research paths, some researchers still let self-censorship restrict their academic objectivity. Chinese researchers should seek to participate in new trends of research on the international scene, rather than confining themselves to limited subjects and to the resources at hand, or avoiding facts deemed sensitive. They must resolve to face challenges and address contemporary issues with an objective attitude and a holistic perspective. Only with such resolve can research in the Chinese context provide constructive insights into the practice of AVT, contribute to progress in a wider perspective and invite more scholarly attention to subjects concerning Chinese AVT, subtitling in particular.
5. Suggestions for Future Research

This section puts forward three areas for future research to consider. Firstly, effort in developing pedagogical support for the practice of subtitling in the Chinese context is still lacking. This absence of pedagogical guidelines for subtitling and AVT contrasts with China's economic status as the world's second largest film market. Hence, it is essential for future research to explore this area further, in addition to the use of subtitling in foreign language education. Secondly, future research needs to develop a set of standardized terminology to ensure the efficiency of communication among research
peers, domestically and overseas. This could be achieved through the development of institutional organizations, the benefits of which are commonly acknowledged by academic communities worldwide. Thirdly, creative Chinese subtitling should be a further point of interest. The practice of creative subtitling was pioneered, surprisingly, by non-profit fansubbing groups. Among them, the leading fansubbing group YYeTs produced the Chinese-subtitled version of Sherlock, which introduced the series into the Chinese context shortly after its UK premiere in 2010 and has since maintained great popularity among Chinese audiences. Creative subtitling in the Chinese context attracts little attention, partly because of the controversy regarding the legitimacy of fansubbing worldwide; more fundamentally, this lack of attention relates to the general neglect of topics connected with the Chinese context and the scarcity of scholarly attention to Chinese subtitling in the wider world context.
6. Ways to Improve the Practice of Subtitling in Mainland China

In the current circumstances, the audiovisual industries in Mainland China need more people endowed with both knowledge of translation and enthusiasm for AVT (subtitling in particular). For example, fansubbing groups such as YYeTs should be granted more opportunities to participate in legitimate audiovisual distribution channels. In addition, the government and the industry should make the reception of audiovisual products more affordable for the Chinese audience. This would draw more Chinese viewers into authorized venues, while discouraging them from using pirated audiovisual products. More liberal media industry policies, for foreign audiovisual products in particular, are equally essential for the control of piracy. One way to reach this goal is to adopt a more tolerant appraisal procedure, thereby allowing more high-quality audiovisual products to reach Chinese audiences through authorized channels. Such quality can be defined in terms of artistic or entertainment value, which would require the government to distinguish art and entertainment from ideology and politics. A more relaxed socio-political environment would allow the world's audience to better understand China and its people, and the Chinese audience to better understand the rest of the world; piracy would then also be more likely to be contained.
References Chen, Chapman. 2004. “On the Hong Kong Chinese Subtitling of English Swearwords.” Meta, 49(1): 135-147. Chen, Sheng-Jie. 2004. “Linguistic Dimensions of Subtitling. Perspectives from Taiwan.” Meta, 49(1): 115-124. Ding, Lingling ( б ⧢ ⧢ ). 2013. “On Characteristics of Fansubbing Strategies for American TV” ( 㖾 ᆇ ᒅ 㓴 ᖡ 㿶 㘫 䈁 ⢩ ⛩ ᷀ ). Overseas English (⎧ཆ㤡䈝), (1), 110㸫111. Dong, Haiya (㪓⎧䳵). 2007. A Panoramic Survey of the Translation of Situation Comedy: Communicating Humor across Cultures (ᛵᲟௌ ᒭ 唈 㘫 䈁 Ⲵ ཊ ݳ㿶 䀂 ). PhD dissertation, Shanghai International Studies University, Shanghai. Dong, Haiya, (㪓⎧䳵). 2012. “Research and Teaching in AVT - an Interview with Jorge Díaz-Cintas” (㬜ࣳਁኅⲴ㿶ੜ㘫䈁⹄ウ৺ᮉᆖ üüJorge Diaz Cintas 䇯䈸). Shanghai Journal of Translators᧤ୖᾏ ⩻幠), (4): 53-57. Gottlieb, Henrik. 1994. “Subtitling: Dialognal Translation.” Persective: Studies in Translatology, 2(1): 101-121. Huang, Chenqi (哴⩋⩚). 2007. Skopos of English-Chinese Subtitling (Ӿ ⴞⲴ䇪䀂ᓖሩ㤡≹⭥ᖡᆇᒅ㘫䈁Ⲵ䈅᧒ᙗ⹄ウ). Master's Thesis, Shaihai Jiao Tong University, Shanghai. Li, Yue (ᵾᛖ). 2012. Estudio sobre la Traducción de Subtitulos del Cine y Televisión del Español al Chino: Desde la Perspectiva Funcional (Ӿ 䈝䀰࣏㜭䀂ᓖ⍵䈸㾯≹ᖡ㿶ᆇᒅ㘫䈁). Master's Thesis, Shanghai International Studies University, Shanghai. Li, Yunxing (ᵾ䘀)ޤ. 2001. “Subtitling Strategies” (ᆇᒅ㘫䈁Ⲵㆆ⮕). Chinese Translators (ѝഭ㘫䈁) (4). Ma, Lihong (傜࡙㓒). 2013. “On American TV Fever” (Ā㖾✝ā⧠䊑 ᮷ॆ䀓䈫). New Media (ᯠჂփ), (1), 77㸫78㸪89. Ma, Zhengqi (哫ҹᰇ). 1997. “On General Principles in AVT” (䇪ᖡ㿶㘫 䈁Ⲵสᵜࡉ). Modern Communication and Media: a Journal of Beijing Broadcasting Institute (⧠ԓՐ˖ेӜᒯᆖ䲒ᆖᣕ), (5): 59-63. Nie, Yonghua (㙲≨ॾ). 2013. “Skopos in Subtitling Cultural References – with Examples from The Big Bang Theory” (ӾⴞⲴ䇪㿶䀂䈸᮷ॆ
ੁⲴ㘫䈁üüԕ㖾lj⭏⍫བྷ⠶⛨NJѪֻ). A Journal of Li Shui College (ѭ≤ᆖ䲒ᆖᣕ), 35(1): 44㸫50. Peng, Jie (ᖝ⌱). 2012. The Functional Equivalence in the Subtitles of Desperate Housewives (Frist Season) (࣏㜭ሩㅹ⨶䇪㿶䀂л㖾lj 㔍ᵋⲴѫྷNJ˄ㅜаᆓ˅Ⲵᆇᒅ㘫䈁). Master's Thesis, Southwest Petroleum University, Chengdu. Qian, Shaochang. 2004. “The Present Status of Screen Translation in China.” Meta, 49(1): 52-58. —. 2009. “Screen Translation in Mainland China.” In Dubbing and Subtitling in a World Context, edited by Gilbert Chee Fun Fong & Kenneth K. L. Au, 13-22. Hong Kong: The Chinese University Press. Shu, Kei. 2009. “Translation for the Hong Kong Audience: Limitations and Difficulties.” In Dubbing and Subtitling in a World Context, edited by Gilbert Chee Fun Fong & Kenneth K. L. Au, 213-220. Hong Kong: The Chinese University Press. Su, Dan (㣿ѩ). 2007. A Study of Functional Equivalence in Subtitle Translation (䇪ᆇᒅ㘫䈁ѝⲴ࣏㜭ሩㅹ). Master's Thesis, Shanghai International Studies Unviersity, Shanghai. Tan, Weilu (䉝⧞⫀). 2013. “Learning English by Viewing American TV Dramas: on Automonous Learning in English in University Contexts” ( Ā ⴻ 㖾 ᆖ 㤡 䈝 ā о ᖃ ԓ བྷ ᆖ ⭏ 㤡 䈝 㠚 ѫ ᆖ Ґ ). Magnificant Writing (ॾㄐ), (1): 186. Tang, Weiqing (ୀছᒶ). 2012. Influence of Skopos Theory and Recption Aesthetics on Subtitle Translation - A Case Study of THE SHAWSHANK REDEMPTION (ⴞⲴ䇪ਸ᧕ਇ㖾ᆖᤷሬлⲴ⭥ᖡᆇ ᒅ 㘫 䈁 ü ü lj 㛆 ⭣ Ⲵ ݻᮁ 䍾 NJ ). Master's Thesis, Shanghai International Studies University, Shanghai. Wang, Shujen. 2003. Framing Piracy: Globalization and Film Distribution in Greater China. Rowman & Littlefileld Publisher. Wangn, Yuxiao (ᰪⅢᲃ). 2011. Skopos of Subtitling (ӾĀⴞⲴ䇪ā䀂ᓖ ࠶᷀ᆇᒅ㘫䈁ㆆ⮕). Master's Thesis, Shanghai International Studies University, Shanghai. Xu, Shuyi (ᗀ㡂Ԛ). 2013. “Using American TV Sitcoms in University English-Language Teaching – a Case Study in the Application of Hanna Montana” (ሶ㖾䘀⭘ࡠབྷᆖ㤡䈝䈮าᮉᆖѝüüԕᛵᲟௌ lj≹၌g㫉ຄ၌NJѪֻ). Young Writers: Education Edition (䶂ᒤ ᮷ᆖᇦˉᮉ㛢䇪ы), 3: 90.
Zhang, Chunbai. 2004. “The Translating of Screenplays in the Mainland of China.” Meta, 49(1): 182-192. Zhang, Qiaoli (ᕐᐗ㦹). 2011. On Subtitling Humour in American TV Dramas (㖾ᆇᒅѝ䀰䈝ᒭ唈Ⲵ≹䈁⹄ウ). Master's Thesis, Central China Normal University, Wuhan. Zhang, Yin (ᕐ㧩). 2013. “Briefly on the Contemporary Dissemination of Films on the Internet” (⍵᷀ ᖃԓ ⭥ᖡⲴ 㖁㔌 Ր ). The Visual: Papers on Films and Literature (㿶㿹˖ᖡ㿶᮷ᆖ䇪ы). Zhao, Chunmei ( 䎥 ᱕ ẵ ). 2002a. “On Four Major Contrasts in Film Translation” (䇪䈁ࡦ⡷㘫䈁ѝⲴഋሩѫ㾱⸋). Chinese Translators (ѝഭ㘫䈁), (4): 49㸫51. —.2002b. “On the Objectives of Film Translation: Delivering the Style of the Original” (䈁ࡦ⡷Ⲵ䘭≲˖Ր䙂⡷Ⲵ仾Ṭ). In Research in Cross-Cultural Communication (䐘᮷ॆՐ᧒䇘о⹄ウ), edited by Zhang Huayong (䎥ॆࣷ). Beijing: People’s Literature Publishing. Zhao, Min (䎥). 2011. Rethinking Translation Strategies in Light of Culture – with a Point of Departure from the Skopos and Characteristics of Subtitling (Ӿ᮷ॆ㿶䀂ሩ㘫䈁ㆆ⮕࠶㊫Ⲵ䟽ᯠᇑ 㿶üüӾᆇᒅ㘫䈁ⲴⴞⲴ৺ަ⢩ᖱࠪਁ). Master's Thesis, Beijing Internation Studies University, Beijing.
Abstract

Subtitling – From a Chinese Perspective

Key words: subtitling, Chinese, current status

Literature on the topic of subtitling has largely neglected the Chinese context, even though there has been a growth of interest in subtitling worldwide. This neglect is unfortunate because the production of Chinese subtitling is soaring in response to the increasing demand of the Chinese audience; hundreds of the latest products of world cinema and media are subtitled in Chinese every day. Despite this, research has yet to address the practice of Chinese subtitling and its current status in detail. This paper attempts to address this weakness by proceeding as follows. First, it explains how subtitling is received in Mainland China, demonstrating how the practice of subtitling gives the audience in Mainland China access to foreign audiovisual products. Secondly, it introduces the challenges presented by the Chinese language
to the practice of subtitling. Such challenges fall into two groups: those associated with the audiovisual medium and those associated with the fundamental differences between Chinese and other languages (e.g. English). Thirdly, it outlines the development and shortcomings of current research on Chinese subtitling. Fourthly, it offers suggestions for future research in Chinese subtitling. Finally, it proposes ways to improve the current practice of subtitling and the reception of foreign audiovisual products in Mainland China. In this way, I intend to establish Chinese subtitling as an essential component of research in AVT.
CHAPTER TWELVE

LEARNER CORPUS OF SUBTITLES AND SUBTITLER TRAINING

ANNA BĄCZKOWSKA
1. Introduction Over twenty years have passed since in 1991 Henrik Gottlieb defended the study of subtitling as a subfield of Translation Studies and acknowledged it as a new university discipline. Today no one would dare to contest subtitling as a valid field of investigation. Moreover, it is no longer “on the fringe of translation studies” (Gottlieb 1991, 161). In fact, the nineties proved to be the “golden age” of audiovisual translation (AVT), which is now a “resolute and prominent area of academic research” (Díaz Cintas, 2009, 1), no longer the Cinderella of Translation Studies but a full-blown research field in its own right. Among the different AVT modes, subtitling in particular is gaining popularity and is growing exponentially and proportionally to the booming development of digital television on the one hand, and to EU directives concerning the role of subtitles in promoting foreign language learning on the other. Subtitling has always been the dominant type of AVT in the Scandinavian countries, while in many other European countries, dubbing and voice-over have enjoyed greater popularity. However, a growing interest in subtitling can be observed, both on the part of private TV channels and new generations of viewers (including university students). While TV channels take advantage of the cheapest form of screen translation, viewers appear to appreciate the full exposure to the original (English) soundtrack offered by subtitled films, and thus the possibility of learning or brushing up on their knowledge of a foreign language. In Poland, this still young and novel interest in subtitling has led to a growing number of university modules dedicated specifically to subtitling, as elective undergraduate or postgraduate university courses as well as standalone open courses. It is only natural to expect, therefore, that universities
will be bound to launch BA or MA courses of this kind as part of official university programmes in the near future if they wish to keep pace with global trends and grapple with growing competition on the national tertiary market. With the demand for professional subtitlers still far from satisfied, this specialism route, available as an elective part of what is generally known in Poland as an English Philology programme, is most likely to blossom. The aim of, and main motivation for, launching the Learner Corpus of Subtitles (LeCoS) project in 2010 was to offer a module on subtitling built into a general Translation Studies specialism route. The initial stage of the LeCoS project involves the compilation of subtitles written by students. The data are then used to prepare subtitling activities and tasks based on learner-driven mistranslation analysis. For over four years now, rich material has been gathered, and the data have already been successfully incorporated into subtitling classes in our current translation courses. The purpose of the present paper is to share some hands-on observations gleaned from the ongoing project and, in particular, to indicate some of the most tenacious translation problems typical of the students participating in the project. Naturally enough, as the project has not yet terminated, the qualitative investigation is by necessity fragmentary and tentative: the analysis is not flawless and conclusive but preliminary and suggestive. Despite these limitations, however, it appears that both the procedure used for corpus compilation and the corpus-based course implementation have some indicative power as regards the trainees' initial level of language competence and translation skills, as well as their progress over the span of the course. Corpus data are rich carriers of information from which suggestions may spring as to how future courses on subtitling can be designed and conducted.
2. The LeCoS Project

2.1 Participants and Project Aims

The LeCoS project is conducted at Kazimierz Wielki University, Bydgoszcz, Poland. The participants of the project are students of English Philology (MA level) and Modern Languages with an English major and a Russian minor (BA level). The overall number of the project participants is 120 students; however, for the current study, only BA students have been considered, with the number of attendees amounting to 102. The students of Modern Languages/English Philology are on average twenty-
one to twenty-four years of age, and are in the second year of their university BA or MA programme. They are at a higher intermediate/advanced level of English, oscillating between B2 and C1 of the Common European Framework of Reference for Languages in the case of BA students and between C1 and C2 in the case of MA students. At BA level, students of Modern Languages take a number of obligatory courses in linguistics (translation studies, applied linguistics, semantics, pragmatics, syntax, morphology, phonology, etc.) as well as in literary and cultural studies (history of British and American literature, theory of cultural studies, intercultural communication, etc.), which are more or less equally balanced. Unfortunately, even if students choose a linguistics-oriented specialism route (e.g. applied linguistics or translation studies) the literary-cultural load remains very high. The course in translation studies spans several semesters and starts in the second year of a three-year-long BA programme. Theoretical and practical aspects of translation and terminology are covered by several specialists within three or four semesters. All translation-related modules (e.g. introduction to TS; theories in TR; terminology; practical translation, etc.) are obligatory for BA students and elective for MA students. The second year BA students have so far subtitled one and the same film: Notting Hill, starring Julia Roberts (Anna Scott) and Hugh Grant (William Thacker), released in 1999 and directed by Roger Michell. It is a romantic comedy that tells a story of a love affair between Anna Scott, a famous and rich American actress, and William Thacker, a modest, unknown and poor travel bookshop owner. They meet one day in William’s bookshop when Anna visits London to film some shots in England for her new picture. As part of the translation course, MA students of Modern Languages are involved in subtitling another film, What Women Want, starring Mel Gibson (Nick Marshall) and Helen Hunt (Darcy McGuire), directed by Nancy Meyers and produced in 2000. Plans are afoot to continue the project with both BA and MA students in subsequent years with different films, to achieve greater variety of data and obtain more and varying examples illustrative of a wide array of subtitling strategies. When a subtitling module comes to fruition, students will be exposed to a selection of scenes and subtitles dedicated specifically to a given translation problem or a particular subtitling strategy. The aim of the LeCoS project is to compile a corpus of data comprising interlingual English into Polish subtitles created by BA and MA students of Modern Languages in order to (1) diagnose the initial stage of students’ translation skills and identify language-specific problem areas; (2) prepare a set of activities (drawing on corpus data) to facilitate the design of an introductory module in subtitling; and (3) prepare an
introductory university course in subtitling. Throughout the project, students (i) analyze their own renderings, (ii) compare their translations with professional translations, (iii) learn subtitling strategies, and (iv) employ these strategies in order to improve their subtitles. The project was launched in the winter semester (October 1) of 2010 and it started with a pilot study, which terminated at the end of January 2011. The data gathered during this study are labelled Corpus A. The compilation of Corpus B started in October, 2011; in parallel to the previous academic year, it lasted one semester, until the end of January 2012. The study dubbed Corpus C started in October 2012 and terminated at the end of January 2013. MA students’ subtitles based on What Women Want constitute Corpus P. Corpora A, C and P will not receive further attention in this paper, whose primary goal is to describe data retrieved from Corpus B. Details concerning project participants and project implementation are elaborated in the next section.
2.2. Course description The general course of Translation Studies in the BA programme at our university covers several semesters and usually starts in the second year. The course of Translation Studies alternates between (optional) lectures and (obligatory) tutorials, and it includes a number of milestone topics and major subfields, e.g. translation strategies, methods and procedures, the translation of proper names and language- and culture-specific notions, preliminary information on AVT as well as technical, legal, medical, business and community translation and interpreting. In the second year of BA programmes, lectures start in the winter semester and run in parallel to tutorials conducted in a computer laboratory wherein students prepare subtitles for Notting Hill. There are altogether fifteen hours of lectures. The first lectures (averaging two meetings, each lasting forty-five minutes) touch upon the topic of AVT and focus squarely on subtitling so that the core assumptions and constraints associated with subtitling receive due attention; covered as well are some examples of reduction strategies illustrated by a number of authentic examples. Theoretically, students should acquire the basics of the theory of Translation Studies by the end of the semester, which should also entail better practical skills in translation. This did not prove to be the case, however, because lectures are not obligatory and students may but do not always participate in the fifteen hours of the dedicated module. We can assume, therefore, that while the project participants (those who took part in lectures) should have some basic knowledge of subtitling and translation, a sound theoretical
knowledge is not equally acquired by all students, and the nuts and bolts of the business will be acquired on the job, i.e. during tutorials. As a result, trainees needed to be reminded of the possibility of reducing the original text and omitting some spoken language features (e.g. discourse markers and interjections) as well as toning down vulgar language wherever it is appropriate. The tutorials in subtitling span one semester, i.e. thirty contact hours, with two weekly classes lasting forty-five minutes each. As the subtitling classes proceed, distinct aspects of the translation process are focused upon. About eight meetings (fifteen hours) are taken up by practical subtitling conducted by groups of two to four students. With eight to ten minutes of the film being rendered in each class, roughly half of the film is covered during the whole term. The second part of the course (fifteen contact hours) is geared towards explicit teaching and conscious analysis of a selection of subtitling strategies on the basis of students’ own (anonymous) renditions. This gives the project participants the opportunity to compare their translations with those written by other groups, which stimulates reflection and critical thinking over subtitling strategies and subtitling quality standards. By presenting students with a number of translations (in total, nine versions of each subtitle in the case of Corpus B and nineteen in the case of Corpus C) and juxtaposing texts written by non-professional translators with the DVD professional translation, a great variety of options may be discussed.
2.3. The structure of the LeCoS corpus

The whole corpus contains over 450,000 running words (c. 640,000 tokens), of which circa 350,000 words (Corpus A, B, and C; see Table 12-1) constitute data based on translations of Notting Hill (NH Corpus) prepared by sophomore BA students, and circa 100,000 words (Corpus P) represent translations of What Women Want (WWW Corpus) written by sophomore MA students. As already mentioned above, Corpus P, gathered in the summer semester of 2012, is beyond the scope of this paper and will not be discussed here. Corpus A, compiled in 2010, is treated as a pilot study. This is mainly due to the unavailability of regular access to the computer laboratory1 (and the consequent difficulties in the systematic collection of students' output) as well as frequent students' absences
1
Students wrote their translations by hand and their subtitles were later keyed in by a colleague, Marek Kieś (Teacher Training College, Koszalin, Poland), who offered technical support for the project, and file-formatted by a Ph.D. researcher, Anna Klein, who joined the project in 2011.
towards the end of the winter term, which resulted in undesired but necessary regroupings that changed the ultimate number of groups and influenced students' output parameters. These were the main obstacles that hampered systematic data storage in electronic format and full control of project progression; hence, Corpus A is treated as a pilot LeCoS project and is used in qualitative analyses but not for the provision of statistical data. Corpus B contains a finished, stand-alone part of the data gathered during the winter term of 2011. The project participants had two computers at their disposal per group, each group consisting of two to three participants; one computer was used to type the students' subtitles (using Microsoft Word) while the other served as a reference tool, with easy and fast access to the Internet (e.g. Wikipedia, online dictionaries) and to the Oxford-PWN Polish-English Dictionary, which is installed on each workstation. Taking turns, one student would focus on typing while the other or others controlled the plot by viewing the scenes and assisted the typist in verifying the meanings of unknown lexical items or new culture- and language-specific notions. While rendering the soundtrack the tempo was relatively fast (on average, eight minutes of the film were translated in ninety minutes), but there was ample space to discuss options within groups. The same procedure was used with the students participating in the compilation of Corpus C in the subsequent academic year. The total number of words in Corpus C oscillates around 250,000; it is the most fruitful corpus compiled in the project so far in terms of the number of words gathered within one semester. This is because there were more second-year students in 2012 than in the previous years and only two students were included in each group (while in previous years there were occasionally groups of three or four).

Notting Hill Corpus        Total          Corpus A 2010   Corpus B 2011   Corpus C 2012
No. of students            102            34              28              40
No. of words and tokens    c. 350,000     28,000          c. 75,000       c. 250,000
                           (c. 500,000)   (c. 42,000)     (c. 90,000)     (c. 370,000)

Table 12-1. Number of students, words, and tokens in LeCoS (NH Corpus), Sub-corpora A, B, and C.

In 2011, 28 students took part in the project (26 females and 2 males). In Corpus A there were 34 participants in total (26 females and 8 males), and in Corpus C there were 40 participants (31 females and 9 males). The present study is based largely on Corpus B with only occasional digressions concerning Corpus A or C. This limitation considered, some
tentative conclusions are observable, and it is believed that, at least to some extent, they can be applied to students of subtitling courses in general. Naturally enough, the overall impression will be fuller, richer and more reliable once the whole project terminates, as then the identification of learner subtitling problem areas will predicate not only on the corpus of subtitles written for Notting Hill, but also for What Women Want and possibly for other films included in the project in the near future. For now, there are two main subcorpora at our disposal: one relying on Notting Hill (Corpus A, B, and C) and one based on What Women Want (Corpus P). The structure of the overall corpus is illustrated in Fig. 12-1.

[Fig. 12-1. Corpus structure. NH = Notting Hill; WWW = What Women Want. Left chart, LeCoS structure (Corpus NH and WWW): Corpus NH: A, B, C (78%); Corpus WWW: P (22%). Right chart, structure of Corpus NH (Corpora A, B, C): Corpus A (8.5%); Corpus C (70.5%).]
The whole corpus, i.e. subcorpora A, B, C, and P, contains circa 450,000 tokens, of which Corpus B constitutes circa 17% of all data available (see Fig. 12-2).

[Fig. 12-2. Corpus B as a share of the whole corpus (Corpora A, B, C, P): Corpus B (c. 17%).]
3. The LeCoS Project – Data Analysis

Considering the limited scope of data derivable for analysis from a still ongoing project, the corpus samples under discussion in the present paper shall largely revolve around deficiencies in translation skills discernible in
Corpus B (see Bączkowska, in press, for a quantitative analysis of data retrieved from Corpus B). The main unit of the corpus is a subtitle, and each subtitle is numbered for ease of reference. Our main source of examples comprises subtitles from number 602 to 1,462 (860 subtitles), which span roughly one hour of the film. Each original subtitle is represented by learner subtitles proposed by nine groups, which makes a total of 7,740 subtitles prepared by students in Corpus B. Judging by the material available in Corpus B, the main problem areas for sophomore students encompass proper names, expletives, the correlation between words and the on-screen image, and stylistic awkwardness. Selected examples will be discussed in the sections that follow. The students' subtitles are compared with the professional version available on DVD. The author of the professional version, Elżbieta Gałązka-Salomon, is an experienced and well-known Polish subtitler who works largely for Polish public television. In order to facilitate students' listening comprehension of the original soundtrack, which was seriously hampered by noise coming from outside the computer lab, students were provided with the original text, retrieved from the DVD subtitles for the deaf and hard of hearing and, whenever needed, completed with the missing elements. The instructor played the film throughout the class while students created their subtitles in Polish, pausing the film after each subtitle. Thus, the exposure to the original text was twofold: through the script and by listening to the original soundtrack.
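The organisation of the corpus described above — numbered subtitles, each paired with up to nine learner renderings and the DVD version — can be pictured with a small data-structure sketch. The Python below is purely illustrative: the class and field names are invented and do not reproduce the actual LeCoS file format.

```python
# A hypothetical sketch of how subtitles aligned across learner groups could
# be stored for the kind of comparison described above. All names and sample
# content are invented for illustration; this is not the real LeCoS layout.

from dataclasses import dataclass, field

@dataclass
class AlignedSubtitle:
    number: int                                    # subtitle number, e.g. 602
    source: str                                    # original English line
    versions: dict = field(default_factory=dict)   # group label -> Polish rendering

corpus_b = [
    AlignedSubtitle(
        number=602,
        source="Let's go in.",
        versions={"G6": "Wejdźmy tam.", "G7": "Wejdźmy.", "DVD": "Wejdźmy."},
    ),
]

# All renderings of subtitle 602 can then be juxtaposed in class:
for sub in corpus_b:
    if sub.number == 602:
        for group, rendering in sub.versions.items():
            print(group, rendering)
```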
3.1 Reinforcing Visually Available Information

Recently, an increasing interest in a multimodal approach to subtitling can be observed, which consists in paying special attention to the importance of nonverbal signals in audiovisual texts. In line with a multimodal approach to subtitles (see Delabastita 1989, de Linde & Kay 1999, Taylor 2003, Taylor 2004, Ventola, Charles & Kaltenbacher 2004, Gambier 2006, Díaz Cintas & Remael 2007, Gottlieb 2001a, Chuang 2009, Perego 2009, Gottlieb 2010, Tortoriello 2011, Bączkowska 2011, Bączkowska 2012, Bączkowska & Kieś 2012), some semiotic resources deployed in a scene, in particular sounds and visual signals, are treated on a par with words in the analysis of the overall meaning that emerges from them as a conversation pans out. As a consequence, the thesis that information coming from visual signs in a scene does not need to be repeated in a subtitle has received stronger theoretical grounding, and the use of reduction strategies (in particular condensation and deletion; see Gottlieb 1991) whenever there is a duplication of semiotic resources (e.g.
the same message is conveyed by picture and word) has been granted even stronger legitimization (Taylor, 2004, 161). The non-professional translations gathered through the LeCoS project clearly show cases of reduplication of semiotic resources as a result of ignoring information encoded by nonverbal signals. Some examples presented below have been extracted from Corpus B and contrasted with the professional DVD version. In one of the scenes (subtitle 602), the main characters, Anna and William, are walking at night after a party, passing by some private gardens in Notting Hill, a borough of London. Suddenly, they stop, decide to climb over a high fence surrounding one of the gardens and sneak inside. They manage to enter the garden, but not without difficulty. When the woman suggests climbing the fence by saying "Let's go in," the camera shows the characters talking in front of the fence, with the gardens clearly visible in the background. This visual support notwithstanding, in the translation process most groups decided to use even more words than the actors uttered, and thus they overemphasised what is visible on the screen by resorting to deixis. (Unless otherwise noted, the group numbers refer to Corpus B groups, G1–G9.) The deictic tam (there) was used by groups 6 and 9. This rendering contrasts with the choice made by group 7 and with the professional DVD version, where deixis is avoided.

[602] 00:44:31,154–00:44:33,713
Let's go in.
Wejdźmy tam. [G6; G9] (Let's get there.)
Wejdźmy. [G7; DVD] (Let's get in.)
Later in the same scene (subtitle 621), Anna decides to climb over the fence to the other side, while William attempts to discourage her by warning about the possible risk. Both characters are clearly visible and occupy the whole space on screen; moreover, there are no other characters participating in the scene and thus Anna and William are the only addressees of their utterances. The presence of visual signals and the lack of unratified participants2 in the situation (bystanders, eavesdroppers) or of ratified but unaddressed characters (side participants), legitimise the
2
Goffman (1981) distinguished between ratified and unratified participants of a communicative situation, wherein the speaker and the hearer represent ratified (sanctioned) participants while side participants (elsewhere also known as auditors) and overhearers, both intentional (eavesdroppers) and unintentional (bystanders), represent unratified participants.
omission of the appellative "Anna" in this scene. Interestingly, in the professional version the vocative is not mentioned. This was not the case in the learner translations: students repeated the name (albeit in the nominal rather than the vocative form) even though a mistake in character identification was out of the question. Time constraints also favour the omission of the vocative: there are roughly 4.5 seconds at the subtitler's disposal to render the original soundtrack in this scene. It will be remembered that the prescribed (for TV films) number of characters per line, within a maximum of six seconds of subtitle display (the six-second rule), is 37 characters, which makes 74 characters per two-liner (Tomaszkiewicz 2006, 127, Díaz Cintas & Remael 2007, 63, 89). In accordance with what is recommended by a number of scholars (d'Ydewalle, van Rensbergen, & Pollet 1987, Brondeel 1994, both cited in Díaz Cintas & Remael 2007, 96; Díaz Cintas & Remael 2007, 97-98) concerning the maximum six-second exposition for one subtitle, and with the average reading speed typical of Poles, established at the level of c. 145 words per minute (Tomaszkiewicz 2006), which makes 2.5 words per second or 12 characters per second (Gottlieb 2001b, 169), there are c. 56 characters available in this scene. Accordingly, it is highly recommended to squeeze the original dialogue into a manageable two-liner which would not exceed this number. The transcribed original soundtrack in the scene under inspection has 62 characters, whereas the professional DVD version reduced the original text to 38 characters. On the other hand, the rendition offered by one of the groups of students (G7) contains 54 characters, and that by group G8 as many as 58. Given the explicitness of the visual signals and the importance of the action that follows on screen, stripping the subtitle of a term of address would reduce the text and thus would certainly improve readability and facilitate the reception of the whole scene.

[621] 00:45:36,594–00:45:41,287
Anna, don't. It's harder than it . . ./No, it's not. It's easy.
Ania, nie.. to jest trudniejsze niż… To nie takie… To jest łatwe. [G7] (Anna, no… it is more difficult than . . . No, it's not. In fact, it is easy.)
Anna, nie. To trudniejsze, niż… / Nie, jednak nie. To całkiem proste. [G8] (Anna, no. It (is) more difficult than… / No, actually (it's) not. It (is) quite simple.)
To trudniejsze niż…/Nie, wcale nie. To łatwe. [DVD] (This is more difficult than… / No, not at all. It (is) easy.)
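The character-budget reasoning applied to subtitle 621 can be made explicit with a short worked calculation. The sketch below assumes the 12-characters-per-second reading speed and the 37-characters-per-line limit cited above; the function itself is an illustrative helper, not part of any subtitling software discussed here.

```python
# A worked version of the character-budget calculation described above. The
# constants come from the figures cited in the text; the function names are
# invented for illustration.

CHARS_PER_SECOND = 12      # c. 2.5 words per second (Gottlieb 2001b)
MAX_LINE = 37              # characters per line for TV subtitles
MAX_TWO_LINER = 2 * MAX_LINE

def character_budget(start: str, end: str) -> int:
    """Available characters for a subtitle displayed from start to end (hh:mm:ss,mmm)."""
    def to_seconds(t: str) -> float:
        hms, ms = t.split(",")
        h, m, s = (int(x) for x in hms.split(":"))
        return h * 3600 + m * 60 + s + int(ms) / 1000
    duration = to_seconds(end) - to_seconds(start)
    return min(int(duration * CHARS_PER_SECOND), MAX_TWO_LINER)

# Subtitle 621 is on screen for c. 4.7 seconds, which gives c. 56 characters:
print(character_budget("00:45:36,594", "00:45:41,287"))   # 56
```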
Similarly, in subtitle 779, there are several characters visible on the screen: Tessa, who has just arrived at a party; Bella, a woman in a wheelchair; and Bella’s husband, Max, who introduces Tessa to his wife.
As Bella is being introduced, her name is mentioned. The guest's name does not need to be reinforced by repeating it in the subtitle, inasmuch as Tessa is the center of attention in the scene and is clearly visible. Another reason for the omission of Tessa is the fact that seconds before, in the previous scene, she is briefly described by Max to his friends, with the last line being Her name is Tessa, which functions as an introduction of the character, although we only see her for the first time in the next scene. Thus we learn about Tessa before she is shown. Repeating the woman's name just seconds later therefore seems unnecessary, hence its omission in the professional version. Some students retained Tessa in the subtitle (G1, G4), whereas other groups resorted to the omission strategy and dropped the proper name (G6, G8, G9, G2, G3). Interestingly, group seven reduced the subtitle even more by eliminating the name Bella, which illustrates an extreme case of relying on visual signals.

[779] 00:59:10,874–00:59:13,353
Tessa, this is Bella, my wife./Hello.
To moja żona, Bella./— [G6, G8, G9] (This is my wife, Bella.)
To jest moja żona./Cześć [G7] (This is my wife./Hi.)
To Bella, moja żona./Cześć. [G3] (It's Bella, my wife./Hi.)
To moja żona Bella./Cześć. [G2] (This is my wife Bella./Hi.)
Tessa, moja żona Bella./Witam. [G4] (Tessa, my wife Bella./Welcome.)
Tessa, to moja żona Bella./Cześć. [G1] (Tessa, it's my wife Bella./Hi.)
Bella, moja żona./— [DVD] (Bella, my wife.)
Interestingly, some groups went one step further and also omitted Bella’s reply Hello. Let us note on the margin that this omission is justifiable and is based on the premises emanating from Conversation Analysis. As explicitly advocated by Tomaszkiewicz (2009, 25–27), typical greetings, to give a simple example, are often deleted in subtitles as they make reference to the viewer’s general knowledge of the world, in this case the structure of a conversation held by viewers from similar socio-cultural environments (e.g. Anglo-American or European). Predictability of social, often ritualised reactions warrants omission of, say, Hello, as it is only natural to utter (and then reply) it when one person is introduced to another one. Due to this highly ritualised and conventionalised talk, at least the reply may be easily dropped in a subtitle. Unlike in the previous scene, here there are more participants in the conversation; some of them are not active interlocutors and play the role of side participants (auditors) only, so they are ratified participants but unaddressed at this particular moment. For example, William (side participant) stands at a distance while Tessa (hearer and speaker) enters
the house, approaches Bella, and is introduced to her. The viewer is likely to adopt William’s perspective in this scene (William is in the foreground, closer to the viewer than Tessa, Bella and Max). As a result, there is no doubt as regards the addressee of This is Bella, my wife, and preserving the vocative in a subtitle does not seem to be justifiable.
3.2. Literal Rendition

One of the persistent problems in students' translations is that students tend to rely excessively on the original script and translate words literally. Rather than being precise and perspicuous, they often create unnatural texts, marked by superfluous wordiness, occasionally even comical or verging on incomprehensibility. Overprecision bespeaks an inexperienced subtitler and is often indicative of insufficient linguistic competence. In the samples discussed below, which illustrate lexical problems trainees need to grapple with, the key words for our analysis are "around the edges," which in the original were used in the following utterance:

[603] 00:44:33,714–00:44:36,593
Only the people living around the edges are allowed in.
The English phrase around the edges was rendered as: w okolicy [G7, G9] (in the surroundings), w pobliżu [G1, G5] (in the vicinity), na przedmieściach [G3, G4] (on the outskirts), na obrzeżach [G6, G8] (on the edges), or wzdłuż krańców [G2] (along the edges). Excerpts illustrating translations offered by G6, G8 and G2 are presented below.

[603] Tylko ludzie, którzy mieszkają na obrzeżach mogą tam wejść. (Only the people who live on the edges can get there.) [G6]
Wstęp mają tylko osoby żyjące na obrzeżach. (Entrance is only for persons living on the edges.) [G8]
Tylko ludzie, którzy żyją wzdłuż krańców./Mogą z nich korzystać. (Only the people who live along the edges can use them.) [G2]
While the rendering of on the edges in the first two sentences above may sound acceptable and logical, it is not appropriate stylistically: one senses the lack of a noun (which is expected in Polish) specifying or reinforcing the object introduced in the previous sentence. Were the expression followed by the equivalent of garden (na obrzeżach ogrodu), the rendition might be possible. Wzdłuż krańców in turn sounds rather
awkward due to the fact that kraniec (edge, border) collocates in Polish with objects of greater area, such as a forest or the world, and then takes the preposition on rather than along. Along could be used to talk about waterways (e.g. to swim along the edges of an island: płynąć wzdłuż krańców wyspy). It sounds odd to talk about kraniec in the context of a small garden situated in the middle of a buzzing city. Kraniec not only triggers associations with larger objects but also creates a conceptually different scene, in which the epistemic distance between the object and the conceptualiser is great enough to allow envisaging the area from a bird's eye view (e.g. na krańcu świata: on the edge of the world); the edge is slightly blurred and imprecise, and the object is metaphorical or abstract. Corpora of the Polish language (the National Corpus of Polish Language3, an online corpus of circa 400 million tokens, the Polish Web Corpus4 of c. 120 million words, and the PELCRA sampler corpus of 10 million words available on CD) display the following contexts co-occurring with kraniec: (as a modifier) world, solar system, Earth, bay, city, Europe; (in postposition) opposite, far, cosmic, up to, the other. A similar sense of great distance, yet within the borders of a city, is created by the expression on the outskirts (na przedmieściach). This translation makes little sense in the context of the scene, as the characters are walking in Notting Hill, a borough close to central London, well away from the outskirts of the city. Suggesting that the garden is open solely to those who live kilometers away from it creates a puzzling rule.

[603] Tylko ludzie mieszkający na przedmieściach mają do nich dostęp. (Only the people who live on the outskirts have access to them.) [G4]
Tylko ludzie mieszkający na przedmieściach mogą tu wejść. (Only the people living on the outskirts can get inside here.) [G3]
The versions offered by other groups, opting for the Polish equivalents of vicinity or surroundings, sound much more natural. In addition, they use fewer characters; still, compared with the professional rendition they are less economical.

[603] Tylko ludzie żyjący w pobliżu mogą tam wejść. (Only the people living in the vicinity can get inside there.) [G1]
3 Available at: www.nkjp.uni.lodz.pl.
4 Available at: the.sketchengine.co.uk.
Tylko ludzie, którzy mieszkają w pobliżu mogą tam wejść. (Only the people who live in the vicinity can get inside there.) [G5]
Tylko ludzie, którzy żyją w okolicy, mogą tam wejść. (Only the people who live in the neighbourhood can get in there.) [G7]
Tylko ludzie z okolicy mogą się tam dostać. (Only the people from the neighbourhood can get there.) [G9]
The professional subtitler has managed to sidestep all these problems by reducing the long utterance to a short statement, adhering thus to Gottlieb's (1991a) condensation strategy:

Tylko dla mieszkańców. [DVD] (Only for residents.)
In this way, the collocational problem was avoided, and a literal translation, which, full of unnecessary details, might overwhelm the viewers and blur the gist of the matter, was successfully evaded. In addition to the issues described above, lexically taxing problems were also discernible in the case of polysemous words, e.g. wholesome (subtitle 658), and culture-specific items, e.g. apricots soaked in honey (subtitle 162). Taken together, the above-mentioned problems stem both from the trainees' insufficient lexical-collocational competence and from their overreliance on the original soundtrack.
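The collocational evidence for kraniec cited above was drawn from large reference corpora; a rough, corpus-independent approximation of such a check can be sketched as follows. The snippet is illustrative only — it counts neighbouring words in a plain tokenised text and does not reproduce the query interface of the National Corpus of Polish or Sketch Engine.

```python
# An illustrative collocation count over a tokenised text: which words appear
# within a small window around a node word. The sample sentence and window
# size are invented for demonstration purposes.

from collections import Counter

def collocates(tokens: list[str], node: str, window: int = 2) -> Counter:
    """Count words occurring within +/- window positions of the node word."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == node:
            left = max(0, i - window)
            neighbours = tokens[left:i] + tokens[i + 1:i + 1 + window]
            counts.update(w.lower() for w in neighbours)
    return counts

sample = "na kraniec świata poleciał , na kraniec układu słonecznego".split()
print(collocates(sample, "kraniec").most_common(3))
```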
4. Repetitions and interjections

Another problem area identified in Corpus B is the excessive repetition of elements that are frequent in spoken discourse, especially in spontaneous dialogue, but uncommon in written discourse (including subtitles), which tends to be more formal. Subtitles should follow a stylistic adjustment known in audiovisual translation as the diamesic shift (Gottlieb 2001a). The concept refers to the adjustment that spoken language features (lexical, syntactic, pragmatic and stylistic) must undergo when the original dialogue is transferred into subtitles, which represent a specific subtype of written discourse. Speech tends to be hesitant, full of reformulations, false starts and filled pauses, to name the most frequent features, and is thus more fragmentary, chaotic and jerky; in other words, spoken language is full of intrasemiotic redundancies and frequently lacks the conciseness and precision typical of the written mode of expression. Being subject to rules of appropriateness hovering somewhere between spoken discourse (which already shuttles between authentic and scripted mode) and a written text, subtitles should not ignore properties inherent in both types of discourse organisation.
Among the features typical of spoken discourse, repetitions and interjections are some of the most frequently used in everyday language, yet they should be avoided in subtitles. To analyse the case of repetitions, let us discuss subtitle 684. It is symptomatic that three groups of students decided to keep repetitions in the subtitles; interestingly, two of them proposed three repetitions instead of the two present in the original, out of either meticulousness or carelessness.

[684] 00:51:09,674–00:51:12,553 No, no, leave it. Let's, you know...
Nie nie nie, odpuść sobie. Wiesz... [G6] (No, no, no, leave it. You know...)
Nie, nie. Daj spokój. No wiesz... [G3] (No, no. Leave it. You know...)
Nie, nie, nie, daj spokój. Wiesz... [G4] (No, no, no, leave it. You know...)
[DVD – text omitted]
Excessive repetition can also be observed in subtitles 811–812. The original Perfect. Absolutely perfect has been rendered as Idealna. Idealna w 100% by Group 2. Not only was the expression of admiration repeated but, strangely enough, some extra words were inserted to strengthen the emotional load of the utterance. Other groups resorted to alternative techniques, yet most students proposed repetition, either literally or by using a synonym of ideal in the second subtitle.

[811–812] 01:01:20,274–01:01:22,193 Perfect.//Absolutely perfect.
Idealna.//— [G6] (Ideal)
Idealna.//Absolutnie. [G7] (Ideal.//Absolutely.)
Idealna.//Zdecydowanie. [G8] (Ideal.//Definitely.)
Idealna.//Naprawdę doskonała. [G9] (Ideal.//Really perfect.)
Doskonale.//— [G1] (Perfect)
Idealna.//Idealna w 100%. [G2] (Ideal.//Ideal in 100%.)
Idealna.//Absolutnie idealna. [G3] (Ideal.//Absolutely ideal.)
Idealna.//Bez dwóch zdań. [G4] (Ideal.//Without question.)
Ideał. W każdym calu. [DVD] (Ideal. Every inch.)
Along with repetitions, inarticulate sounds expressing hesitation (e.g. mhm, uhm, er) or mild cries of surprise (e.g. oh, wow), also referred to by other scholars as interjections (see next section for comparison with expletives), should definitely be jettisoned in subtitles; not only because they can be heard in the original, are usually accompanied by revealing
non-verbal behaviour, and are often language independent, at least some of them across European cultures, but also because of the diamesic shift (Gottlieb 2010) discussed above. Likewise, articulate filled pauses (or verbal pauses), such as stallers (expressions that allow one to gain time, e.g. I mean), hedges (expressions encoding uncertainty or vagueness, or counteracting abruptness, e.g. sort of), and others (see Stenström 1990), are typically considered examples of unnecessary information.

Before moving on to the analysis of the students' texts, let us first focus on the notion of interjections and their formal classifications. Interjections are believed to originate in primitive physiological sounds or gestures (Wierzbicka 1992, 176; Gehweiler 2010, 317). They are spontaneous, emotional and onomatopoeic non-words (e.g. mhm, oh), which have no homonymous lexical substitutes (Gehweiler 2010, 317). From this it transpires that interjections are not meaningless and inert segments of speech but have a full semantic-pragmatic status; moreover, they are important bearers of illocutionary force (Wierzbicka 1992). The so-called primary interjections, which are of interest for this study, are monomorphemic and syntactically independent, and thus, unlike particles, they "are not fully integrated into the syntax of utterances" (Ameka 1992, 108). Moreover, they are holophrastic (Gehweiler 2010, 316), i.e. they express longer stretches of thought in a single segment, often substituting for a whole sentence. They are deictic and context-dependent, as they make reference to actual speech by dint of short indivisible segments (Wilkins 1992, 124; Ameka 1992, 108; Goddard 1998) that are typically set off by pauses. They should be distinguished from secondary interjections (God!, Jesus!, Fuck!), which are lexicalised and may consist of more than one morpheme or orthographic unit. Secondary interjections are analysed in the present paper under the rubric of expletives.

Typologically, interjections pose some problems for theorists, and so there is no theoretical consensus as to which discourse segments the category of interjections covers or which units should not be considered interjections. Importantly, the existing typologies are somewhat contradictory or prone to overlap, often leading to cross-classifications. Specifically, while, in line with Stenström (1990), some interjections may span filled pauses, which run the gamut of hesitations, stallers, backchannelling, hedges, etc., for Biber, Johansson, Leech, Conrad & Finegan (1999, 1089) hesitators (e.g. hum, er, uh) or backchannels (i.e. responses to assertions, e.g. uh huh, mhm) are not internal to filled pauses. Fraser (1999, 938), on the other hand, distinguishes a separate category of pause markers (hum, well), which, in line with Biber et al. (1999), he excludes from the category of discourse markers.
Pause markers are partially convergent with some of the discourse elements identified by Stenström (1990) as filled pauses. Fischer (2006, 427) in turn regards interjections and discourse markers as examples of one larger class of discourse particles, building her claim on the observation that they both perform the same function in discourse. Kryk (1992) views interjections as a subclass of a superordinate notion which conflates discourse markers and particles. This stance partially converges with Norrick's (2009, 866) contention that "interjections in everyday talk routinely function as pragmatic markers". Given the multiplicity of approaches to interjections, in our analysis we shall adhere to a more general notion of interjections sensu largo (including Stenström's filled pauses as well as the hesitators and backchannels proposed by Biber et al. 1999), with the proviso that they do not encompass expletives. Considering the communicative function interjections may play, most of the cases found in our data are illustrative of an expressive use, i.e. uses which disclose the "speaker's mental state", with the phatic function (focusing on establishing a relationship between speaker and addressee) and the conative function (focusing on the auditor) being marginalised.

Now, returning to subtitles, it can be a daunting task for a viewer to process minute features of orality (see the prefabricated orality proposed by Chaume 2004, 168–170), such as interjections, in a written text. Surprisingly, the students' pervasive overprecision led them to render even semantically implausible cases, which certainly hinders intelligibility. For example, in subtitle 704, only two groups (G7 and G1) discarded the translation of the paralinguistic signals expressing hesitation. The professional version leaves the recognition of hesitation to the scene itself, in particular to its aural and visual resources.

[704] 00:52:44,194–00:52:48,330 Baby, who is it?//Uh, it's, uh . . .
Och, to jest, och [G6] (Oh, this is, oh)
To . . . [G7, G1] (It . . .)
Em, to jest . . . [G3] (Uhm, it is . . .)
YYY, to . . . [G4] (Uhm, it . . .)
Kto to? [DVD] (Who is it?)
By way of summary, let us note that students have a strong penchant for literal translation, displaying great conscientiousness and care in maintaining exactness and close resemblance to the original text.
A profusion of non-propositional discourse markers, however, is not in line with subtitling conventions. Clearly, despite their presumed familiarity with these conventions as viewers of subtitled cinema productions and television films (recently growing rapidly in number), the students' intuitive judgments concerning subtitling standards, viewers' expectations and the quality of subtitling have not resulted in frequent use of the reduction techniques so widely employed in subtitling. Their pervasive overreliance on the original soundtrack leads to a level of precision which is unnecessary for the understanding of a scene and which often causes obscurity, awkwardness and unnaturalness in the target text. What surprises most is that the subtitles are not only similar to the source text but occasionally even longer than the original. Students seem to lack confidence both in their language competence in general (especially pragmatic competence) and in their translation competence in particular. This observation might imply that some issues (e.g. the subtitling of interjections, the diamesic shift) should receive much greater attention during general lectures on AVT. Presumably, the problem should be addressed in theoretical and practical classes in order to offset the imbalance between viewers' and experts' expectations on the one hand, and trainees' initial skills and intuitive judgments on the other.
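Problems of this kind can also be screened for automatically before detailed qualitative analysis. The following is a rough, hypothetical diagnostic sketch that flags immediate word repetitions (e.g. nie, nie, nie) and a small illustrative set of interjections in .srt subtitle files; the file handling and the interjection inventory are assumptions made for the example and do not reflect the actual LeCoS tooling.

    # Rough diagnostic sketch: flag immediate repetitions and a small,
    # purely illustrative list of interjections in .srt files. This is an
    # assumption-based example, not the tooling used in the LeCoS project.
    import re
    import sys

    INTERJECTIONS = {"och", "oh", "uh", "em", "yyy", "mhm", "hmm", "wow"}

    def flag_text(text):
        tokens = re.findall(r"\w+", text.lower())
        issues = [f"repetition: '{a} {a}'" for a, b in zip(tokens, tokens[1:]) if a == b]
        issues += [f"interjection: '{t}'" for t in tokens if t in INTERJECTIONS]
        return issues

    def check_srt(path):
        # Very simple .srt reader: skip cue numbers and time codes, keep text lines.
        with open(path, encoding="utf-8") as srt:
            for number, line in enumerate(srt, start=1):
                line = line.strip()
                if not line or line.isdigit() or "-->" in line:
                    continue
                for issue in flag_text(line):
                    print(f"{path}:{number}: {issue}")

    if __name__ == "__main__":
        for srt_file in sys.argv[1:]:
            check_srt(srt_file)

Run over a set of learner files, such a screen merely points the analyst to candidate lines; whether a repetition or interjection is actually redundant still requires the kind of contextual judgment discussed above.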
5. Expletives

Following Biber et al. (1999, 1094), the term expletive refers to inserts of exclamatory function that express speakers' emotions, e.g. annoyance, excitement or pain. Unlike primary interjections, they involve strong emotions, hence they are lexicalised through, for example, taboo or semi-taboo words as well as their euphemistic forms, which may be phonologically modified, yet potentially reconstruable, versions of the original taboo words they derive from. They are thus often offensive and have homonyms among other lexical items. This approach notwithstanding, in more traditional approaches (Greenbaum & Quirk 1985) expletives are viewed as a subgroup within interjections, along with discourse markers, hesitators, attention signals, etc. More recent typologies (e.g. Biber et al. 1999) exclude expletives from interjections and grant them an independent status. In our analysis, we follow the more recent approach to expletives and, as a result, expressions codifying emotions traditionally treated as interjections (oh, wow) sensu largo, including filled pauses (cf. Stenström above), are not analysed under the rubric of expletives but as interjections. We shall concentrate in this section on elements whereby stronger emotions are expressed than a mere oh conveying mild surprise (even if oh is an insert) or an uhm encoding hesitation.
Moreover, in the present paper both exclamatory expressions (including taboo words and their euphemistic forms) and non-exclamatory expressions (e.g. bloody, goddam) will be analysed under the heading of expletives.

Leaving aside the typological discrepancies concerning interjections and expletives, the main feature distinguishing them is that the former are largely inserts, viz. stand-alone discourse segments (mm, mhm, uh, etc.) which "do not enter into syntactic relations with other structures" (Biber et al. 1999, 1082), whereas the latter can be merged with a sentence. More specifically, while expletives are acknowledged to be lexicalised, detached, free-standing discourse segments which can constitute a self-contained turn, they can alternatively be integrated into a sentence, most commonly in utterance-initial position, only occasionally in intersentential position, and extremely rarely in utterance-final position (Biber et al. 1999, 1094). In subtitling, expletives may be translated but are usually toned down (Díaz Cintas and Remael 2007, 195). Interjections, on the other hand, are most often not expected to be translated (unless for purposeful style maintenance), if not downright eradicated.

In the example below, subtitle 608, William attempts to climb over the fence surrounding a private garden in Notting Hill (see the brief description of this scene above) and, upon falling back down to the pavement, he exclaims, Whoops-a-daisies! The expletive is both funny and anachronistic, and the ideal equivalent should render these features thoroughly. The DVD offers O psia kostka (dog's ankle), which is funny, hackneyed and rather old-fashioned, whereas the LeCoS data provide us with the following solutions: psia kość/kostka (dog's ankle or the diminutive of ankle), motyla noga (butterfly's leg), and kurcze/ę belek (rough equivalent of shoot). Psia kość and kurcze/ę belek are mild (euphemistic) expletives in frequent use, thus the only potentially marked form among these is motyla noga. The latter is funny but not in widespread use, being favoured rather by teenagers. The critical factor in rendering this expletive is not so much finding an 'equivalent' of a mild expletive as conveying its markedness, which stems from its old-fashioned lexis. While all these versions are mild exclamations, none seems to be particularly marked, although psia kostka does indeed sound a bit dated.

[608] 00:44:50,474–00:44:53,273 Whoops-a-daisies.
Psia kostka! [G6, G9, G2, G3, G4, G5]
Psia kość! [G7]
Motyla noga. [G8]
Kurczę belek! [G1]
O psia kostka! [DVD]
Students tended to retain the original emotional load encoded by expletives. In some cases they even made it stronger, as in subtitle 938, where the original, perfectly neutral irritating was replaced by the emotionally marked wkurzające (annoying), which is a mild (euphemistic) swear word in colloquial Polish.

[938] 01:10:49,794–01:10:51,713 The thing that is so irritating...
Chodzi o to, że to wkurzające... [G6] (The thing is that it is annoying...)
5.1 Nicknames and proper names

Creativity and invention, but at the same time vacillation, best characterise the proposals for translation equivalents of nicknames and proper names. For illustrative purposes, in this section we shall analyse William's nickname Flopsy (622) and the names of the English magazines Horse and Hound (306) and Hello (764).

[622] 00:45:48,354–00:45:50,273 Come on, Flopsy!
The English word flop means to fail or to fall down and hang loosely, e.g. when talking about a dog's ears or about hair (PWN/Oxford Dictionary). The Oxford English Corpus (OEC), which consists of c. 2.5 billion words, shows the following collocations for hair flopping: lankly (over fox's eye), into one's face, limply (in front of one's eyes or in his eyes), in the wind, over the face, on one's forehead, like a dry mop, around one's eyes and ears. The other meaning of flop, indicating failure through semantic prosody (Bublitz 1995, Lewandowska-Tomaszczyk 1996), is found in the OEC with the following collocates: miserably, spectacularly, horribly, badly, dramatically, ultimately, etc. The problem with Flopsy in 622 is that both meanings of flop converge with the visual information retrievable from the scene. On the one hand, we have the main character's hair, with a longish fringe falling down over the eyes and flopping around, and on the other we are exposed to a scene wherein the main character fails to climb over the high fence to get inside the garden, as opposed to the woman, who overcomes the obstacle with flying colours, thus causing embarrassment to the man.
There are more scenes that demonstrate William's failures, including his social clumsiness, 'bad luck' with women, and downturn in business. Depending on whether one relies more on the visual signals and the character's image (with flopping hair) or on the amusing sequence of actions revolving around the fence-climbing scene, the rendition may take different forms. Most students chose the second option, proposing such nouns (some functioning as nicknames) as Niezdaro (Oaf), Ciamajdo (Bungler) or Gapciu (Clumsy Oaf), while others went for the visual signals and proposed Dyndas (a derivative of the Polish word dyndać, Eng. dangle). Rather incomprehensibly, several groups came up with Klopsie (vocative for fatty) or, most surprisingly, Szczeniaczku (diminutive of puppy).

Likewise, proper names were treated rather inconsistently. While most groups borrowed the name of the magazine Horse and Hound without any change, thus making the Polish text sound exotic but at the same time congruent with the visual signals (a close-up of the magazine lying on a coffee table can be seen before this utterance), others resorted to literal translation (Koń i pies, English Horse and Dog) or to explication, retaining the original name completed with a literal and exact translation of the second word, i.e. specifying the hunting skills of hounds (Koń i pies myśliwski; Horse and Hunting Dog). One group changed the name into Koń i kot (Horse and Cat); this, however, cannot be treated as an attempt to domesticate the name, but rather as a mistake since, to the best of our knowledge, there is no such magazine on animals in Poland. Group G4 in turn resorted to explication of the proper name and added the word pismo (magazine). The name of the magazine causes a comic effect in this scene and the subsequent one, thus the translation of this proper name is highly recommended.

[306] 00:30:14,730–00:30:18,359 And you're from, uh, Horse and Hound.
Pan jest z Horse & Hound/Koń i Pies Myśliwski? [G1/A] (You are from Horse and Hound/Horse and Hunting Dog?)
Pan jest z... Horse & Hound? [G3/A] (You are from... Horse and Hound?)
Koń i Pies [G5/A] (Horse and Hound)
Pan jest z „Konia i Psa”? [DVD] (Are you from „Koń i Pies”?)

[1458] 01:51:36,514–01:51:40,524 The readers of Horse and Hound will be absolutely delighted.
Czytelnicy Horse & Hound byliby zachwyceni. [G5; G2; G5] (The readers of Horse and Hound would be delighted.)
Czytelnicy Konia i Psa będą zachwyceni. [G3; G8; G3] (The readers of Horse and Hound will be delighted.)
Czytelnicy Psów i Koni będą zachwyceni. [G6] (The readers of Hounds and Horses will be delighted.)
Czytelnicy Koni i Psów będą zachwyceni. [G1] (The readers of Horses and Hounds will be delighted.)
Czytelnicy Psa i Kota byliby zachwyceni. [G7] (The readers of Hound and Cat would be delighted.)
Czytelnicy pisma Koń i Pies będą zachwyceni. [G4] (The readers of Horse and Hound magazine will be delighted.)
Czytelnicy „Konia i Psa” będą wniebowzięci. [DVD] (The readers of „Koń i Pies” will be enraptured.)
As regards the example below, the British magazine Hello used to be available in Poland (and in Polish) about a decade ago, with the title retained in English. In the subtitles, the name of the magazine was left unchanged by most groups of students, while it was translated by the professional subtitler. Still, one translation tried to naturalise it by making reference to the popular Polish Internet gossip portal Pudelek, which is an interesting attempt at domestication of the original text (G8). Alternatively, students from G9 resorted to generalisation by classifying the paper as gutter press (prasa brukowa).

[764] 00:58:18,874–00:58:22,913 My whole life ruined because I don't read Hello magazine.
Całe moje życie legło w gruzach. Bo nie czytam Hello. [G1, G2, G3, G4, G5, G6, G7] (All my life ruined because I don't read Hello.)
Zrujnowałem sobie życie bo nie czytam Pudelka. [G8] (I ruined my life because I don't read Pudelek.)
Moje życie legło w gruzach, bo nie czytam brukowców. [G9] (My life ruined because I don't read gutter press.)
Moje życie leży w gruzach, bo nie czytam „Halo”. [DVD] (My life has fallen into ruins because I don't read „Halo”.)
6. Discussion and conclusions

Given the fact that our data are limited and the project is still in progress, at the risk of some simplification it may be asserted that a corpus-informed examination of learner subtitles allows us to observe two main tendencies. First, students rely excessively on the original text and tend to render each and every word, which betrays their language and translation diffidence. Second, there is a stylistic defect in the students' data that consists in the direct transposition of spoken discourse into written text (subtitles).

On a more specific note, inconsistency in rendering some lexical items (polysemous words, proper names, etc.) was observed, probably due to uncertainty concerning the degree of domestication students are allowed to rely on. This seems to be consequent upon a lack of background theoretical knowledge to which the students could resort. The corollary of this incompetence was that students' subtitles relied largely on intuitive and uninformed decisions, which triggered overreliance on the original soundtrack and led to overprecision on the lexical and syntactic levels, as well as to frequent literal translation. The resultant renditions are peppered with unnecessary repetitions (so typical of spoken discourse) or even with transliterations of paralinguistic signals, which is also partly connected with the absence of a diamesic shift in the students' subtitles. An area that definitely requires more attention is the rendition of expletives and interjections; Corpus B is overburdened with taboo words and marred by unnecessary vocables.

Interestingly, another issue that surfaced in the learner corpus is the importance of conventional dialogue structure when applying reduction strategies. The rules of Conversation Analysis may exempt a subtitler from rendering greetings, leave-takings, phatics and similar conversational routines, which often have little propositional meaning. It seems that this issue should receive more attention in subtitler training, as instances of disregard for dialogue structure are far too numerous in Corpus B. Finally, the correlation between a subtitle and the visual signals available on screen, in particular as regards proper names and forms of address, should be accentuated during both theoretical and practical classes. All these conclusions call on subtitler trainers to reconsider the initial problems faced by would-be subtitlers and to redesign courses in subtitling in the light of corpus-driven results.

These observations confirm some of the previous reports on non-professional subtitling. For example, excessive dependence on context was hinted at by Bogucki (2009) in his preliminary analysis of fansubs.
Wordiness and overly literal renditions in learner subtitles were also noted by Incalcaterra McLoughlin (2012) in her report on a project conducted with native speakers of Italian at the National University of Ireland. She reports trends similar to those observed in LeCoS, showing an avoidance of reduction techniques and a particularly strong reluctance to introduce even minimal cuts, even though the students had been familiarised with these techniques prior to the subtitling activities. As subtitling involves more than sheer translation skills, and as subtitler training may bring results counter to the instructor's expectations despite explicit teaching of the basics of practical subtitling, it appears legitimate to suggest that much more care and attention should be devoted to laying the foundations of subtitling norms (Pedersen 2007, 2011) from the very beginning, in both lectures and practical classes. Learning subtitling does not occur at the flick of a switch; gradual, developmental progress takes time and, as our data suggest, may span more than one semester.
References

Ameka, Felix. 1992. "Interjections: the Universal yet Neglected Part of Speech." Journal of Pragmatics, 18: 101-118.
Bączkowska, Anna. 2011. "Some Remarks on a Multimodal Approach to Subtitles." Linguistics Applied, 4: 47-65.
—. 2012. "Multimodal Analysis of Im/politeness in Film Subtitles." In Lingua: Nervus Rerum Humanarum. Essays in Honour of Professor Stanisław Puppel on the Occasion of his 65th Birthday, edited by Koszko, 61-78. Poznań: University of Adam Mickiewicz.
Bączkowska, Anna & Kieś, Marek. 2012. "Multimodal Subtitling of Direct Compliments in Polish, Italian and Swedish." In Cognitive Processes in Language, edited by Krzysztof Kosecki & Janusz Badio, 41-52. Frankfurt: Peter Lang.
Bączkowska, Anna. In press. "Quantitative Study of Learners' Subtitles and Corpus-Based Translator Training Implications."
Biber, Douglas, Johansson, Stig, Leech, Geoffrey, Conrad, Susan & Finegan, Edward. 1999. "Longman Grammar of Spoken and Written English." London: Pearson Education.
Bogucki, Łukasz. 2009. "Amateur Subtitling on the Internet." In Audiovisual Translation: Language Transfer on Screen, edited by Jorge Díaz Cintas & Gunilla Anderman, 49-57. Basingstoke: Palgrave Macmillan.
Brondeel, Herman. 1994. "Teaching Subtitling Routines." Meta, 34/1: 26-33.
Bublitz, Wolfram. 1995. "Semantic Prosody and Cohesive Company: Somewhat Predictable." Duisburg: L.A.U.D.
Chaume, Frederic. 2004. "Cine y Traducción." Madrid: Cátedra.
Chuang, Ying-Ting. 2009. "Subtitling as a Multi-Modal Translation." In Dubbing and Subtitling in a World Context, edited by Gilbert C.F. Fong & Kenneth K.L. Au, 79-90. Hong Kong: Chinese University Press.
Delabastita, Dirk. 1989. "Translation and Mass Communication: Film and T.V. Translation as Evidence of Cultural Dynamics." Babel, 35: 193-218.
d'Ydewalle, Géry, Van Rensbergen, Johan & Pollet, Joris. 1987. "Reading a Message when the Same Message is Available Auditorily in Another Language: The Case of Subtitling." In Eye Movements: From Physiology to Cognition, edited by J. Kevin O'Regan and Ariane Lévy-Schoen, 313-321. Amsterdam: North Holland.
de Linde, Zoé & Kay, Neil. 1999. "The Semiotics of Subtitling." Manchester: St. Jerome.
Díaz Cintas, Jorge. 2009. "Introduction – Audiovisual Translation: An Overview of its Potential." In New Trends in Audiovisual Translation, edited by Jorge Díaz Cintas, 1-18. Bristol: Multilingual Matters.
Díaz Cintas, Jorge & Remael, Aline. 2007. "Audiovisual Translation: Subtitling." Manchester: St. Jerome.
Fischer, Kerstin. 2006. "Frames, Constructions, and Invariant Meanings: the Functional Polysemy of Discourse Particles." In Approaches to Discourse Particles, edited by Kerstin Fischer, 427-448.
Fraser, Bruce. 1999. "What Are Discourse Markers?" Journal of Pragmatics, 31: 931-952.
Gambier, Yves. 2006. "Multimodality and Audiovisual Translation." MuTra 2006 – Audiovisual Translation Scenarios. http://www.euroconferences.info/proceedings/2006_Proceedings/2006_Gambier_Yves.pdf.
Gehweiler, Elke. 2010. "Interjections and Expletives." In Historical Pragmatics, edited by Andreas H. Jucker & Irma Taavitsainen, 315-350. Berlin: Mouton de Gruyter.
Goddard, Cliff. 1998. "Semantic Analysis. A Practical Introduction." Oxford: Oxford University Press.
Goffman, Erving. 1981. "Forms of Talk." Philadelphia: University of Pennsylvania Press.
Gottlieb, Henrik. 1991. "Subtitling – A New University Discipline." In Teaching Translation and Interpreting, edited by Anne Loddegaard & Cay Dollerup, 161-171. Amsterdam: John Benjamins.
Gottlieb, Henrik. 2001a. "Subtitling: Visualising Filmic Dialogue." In Traducción Subordinada (II). El Subtitulado (inglés-español/galego), edited by Lourdes Lorenzo García & Ana M. Pereira Rodríguez, 85-110. Vigo: Universidade de Vigo.
Gottlieb, Henrik. 2001b. "Subtitling." In Routledge Encyclopedia of Translation Studies, edited by Mona Baker, 244-248. London: Routledge.
—. 2010. "Multidimensional Translation." In Understanding Translation, edited by Anne Schjoldager, Henrik Gottlieb & Ida Klitgård, 39-66. Aarhus: Academica.
Greenbaum, Sidney & Quirk, Randolph. 1995. "A Student's Grammar of the English Language." London: Longman.
Incalcaterra McLoughlin, Laura. 2012. "Subtitling and the Didactics of Translation." In Global Trends in Translator and Interpreter Training: Mediation and Culture, edited by Séverine Hubscher-Davidson & Michal Borodo, 126-146. London: Continuum.
Kryk, Barbara. 1992. "The Pragmatics of Interjections: the Case of Polish 'no'." Journal of Pragmatics, 18: 193-207.
Lewandowska-Tomaszczyk, Barbara. 1996. "Cross-Linguistic and Language-Specific Aspects of Semantic Prosody." Language Sciences, 18: 153-178.
Norrick, Neal R. 2009. "Interjections as Pragmatic Markers." Journal of Pragmatics, 41: 866-891.
Pedersen, Jan. 2007. "Scandinavian Subtitles. A Comparative Study of Subtitling Norms in Sweden and Denmark with a Focus on Extralinguistic Cultural References." (Unpublished doctoral dissertation). University of Stockholm, Sweden.
—. 2011. "Subtitling Norms for Television: An Exploration Focussing on Extralinguistic Cultural References." Amsterdam: John Benjamins.
Perego, Elisa. 2009. "The Codification of Nonverbal Information in Subtitled Texts." In New Trends in Audiovisual Translation, edited by Jorge Díaz Cintas, 58-69. Bristol: Multilingual Matters.
Schjoldager, Anne, Gottlieb, Henrik & Klitgård, Ida. 2010. "Understanding Translation." Aarhus: Academica.
Stenström, Anna-Brita. 1990. "Lexical Items Peculiar to Spoken Discourse." In The London-Lund Corpus of Spoken English, edited by Jan Svartvik, 137-177. Lund: LUP.
Taylor, Christopher. 2003. "Multimodal Transcription in the Analysis, Translation and Subtitling of Italian Films." The Translator, 9: 191-205.
—. 2004. "Multimodal Text Analysis and Subtitling." In Perspectives on Multimodality, edited by Eija Ventola, Cassily Charles & Martin Kaltenbacher, 153-172. Amsterdam: John Benjamins.
Tomaszkiewicz, Teresa. 2006. "Przekład Audiowizualny" [Audiovisual Translation]. Warszawa: PWN.
—. 2009. "Linguistic and Semiotic Approaches to Audiovisual Translation." In Analysing Audiovisual Dialogue: Linguistic and Translational Insights, edited by Maria Freddi & Maria Pavesi, 19-30. Bologna: Clueb.
Tortoriello, Adriana. 2011. "Semiotic Cohesion in Subtitling: the Case of Explicitation." In Audiovisual Translation in Close-Up, edited by Adriana Şerban, Ana Matamala & Jean-Marc Lavaur, 61-75. Frankfurt: Peter Lang.
Ventola, Eija, Charles, Cassily & Kaltenbacher, Martin. 2004. "Perspectives on Multimodality." Amsterdam: John Benjamins.
Wierzbicka, Anna. 1992. "The Semantics of Interjections." Journal of Pragmatics, 18: 159-192.
Wilkins, David P. 1992. "Interjections as Deictics." Journal of Pragmatics, 18: 119-158.

Dictionary
PWN/Oxford Dictionary – Wielki słownik PWN/Oxford
Abstract

The purpose of this paper is to present some of the results of the Learner Corpus of Subtitles (LeCoS) project. The project has the following aims: diagnosing the initial subtitling competence of trainees studying modern languages, preparing teaching materials for subtitling instruction, and launching a subtitling module for translation students. The project encompasses the compilation of a corpus of interlingual subtitles produced by Polish students enrolled in a Modern Languages course with English as their main foreign language. While the overall size of the corpus is currently around 450,000 words, the present paper focuses on the qualitative analysis of a stand-alone subcorpus (Corpus B). The study shows that students rely excessively on the original text. This overreliance on the original soundtrack leads to frequent literal translation and to overprecision on the lexical and syntactic levels. Students do not take into account the diamesic shift typical of subtitles and, as a result, they tend to render features of spoken discourse, in particular interjections, expletives and backchannelling. On a more general note, the study shows that corpus data can provide useful insight into the translation difficulties encountered by subtitling trainees, and that the analysis of typical translation problems can prove valuable for the design of subtitling courses.