Efforts and Models in Interpreting and Translation Research
Benjamins Translation Library (BTL)
The BTL aims to stimulate research and training in translation and interpreting studies. The Library provides a forum for a variety of approaches (which may sometimes be conflicting) in a socio-cultural, historical, theoretical, applied and pedagogical context. The Library includes scholarly works, reference books, postgraduate text books and readers in the English language.

EST Subseries
The European Society for Translation Studies (EST) Subseries is a publication channel within the Library to optimize EST’s function as a forum for the translation and interpreting research community. It promotes new trends in research, gives more visibility to young scholars’ work, publicizes new research methods, makes available documents from EST, and reissues classical works in translation studies which do not exist in English or which are now out of print.
General Editor
Yves Gambier, University of Turku

Associate Editor
Miriam Shlesinger, Bar-Ilan University Israel

Honorary Editor
Gideon Toury, Tel Aviv University

Advisory Board
Rosemary Arrojo, Binghamton University
Michael Cronin, Dublin City University
Daniel Gile, Université Paris 3 - Sorbonne Nouvelle
Ulrich Heid, University of Stuttgart
Amparo Hurtado Albir, Universitat Autònoma de Barcelona
W. John Hutchins, University of East Anglia
Zuzana Jettmarová, Charles University of Prague
Werner Koller, Bergen University
Alet Kruger, UNISA, South Africa
José Lambert, Catholic University of Leuven
John Milton, University of São Paulo
Franz Pöchhacker, University of Vienna
Anthony Pym, Universitat Rovira i Virgili
Rosa Rabadán, University of León
Sherry Simon, Concordia University
Mary Snell-Hornby, University of Vienna
Sonja Tirkkonen-Condit, University of Joensuu
Maria Tymoczko, University of Massachusetts Amherst
Lawrence Venuti, Temple University
Volume 80

Efforts and Models in Interpreting and Translation Research. A tribute to Daniel Gile
Edited by Gyde Hansen, Andrew Chesterman and Heidrun Gerzymisch-Arbogast
Efforts and Models in Interpreting and Translation Research A tribute to Daniel Gile
Edited by
Gyde Hansen Copenhagen Business School
Andrew Chesterman University of Helsinki
Heidrun Gerzymisch-Arbogast Universität des Saarlandes
John Benjamins Publishing Company Amsterdam / Philadelphia
The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ANSI Z39.48-1984.
Library of Congress Cataloging-in-Publication Data

Efforts and models in interpreting and translation research : a tribute to Daniel Gile / edited by Gyde Hansen, Andrew Chesterman, and Heidrun Gerzymisch-Arbogast.
p. cm. (Benjamins Translation Library, ISSN 0929-7316 ; v. 80)
Includes bibliographical references and index.
1. Translating and interpreting--Research--Methodology. I. Gile, Daniel. II. Hansen, Gyde. III. Chesterman, Andrew. IV. Gerzymisch-Arbogast, Heidrun.
P306.5.E34 2008
418'.02072--dc22 2008035789
ISBN 978 90 272 1689 2 (Hb; alk. paper)
© 2008 – John Benjamins B.V. No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher.

John Benjamins Publishing Co. · P.O. Box 36224 · 1020 ME Amsterdam · The Netherlands
John Benjamins North America · P.O. Box 27519 · Philadelphia PA 19118-0519 · USA
Table of contents

Preface vii

Scientometrics and history
An author-centred scientometric analysis of Daniel Gile’s œuvre (Nadja Grbić & Sonja Pöllabauer) 3
The turns of Interpreting Studies (Franz Pöchhacker) 25

Conceptual analysis
The status of interpretive hypotheses (Andrew Chesterman) 49
Stratégies et tactiques en traduction et interprétation (Yves Gambier) 63
On omission in simultaneous interpreting: Risk analysis of a hidden effort (Anthony Pym) 83

Research skills
Doctoral training programmes: Research skills for the discipline or career management skills? (Christina Schäffner) 109
Getting started: Writing communicative abstracts (Heidrun Gerzymisch-Arbogast) 127
Construct-ing quality (Barbara Moser-Mercer) 143

Empirical studies
How do experts interpret? Implications from research in Interpreting Studies and cognitive science (Minhua Liu) 159
The impact of non-native English on students’ interpreting performance (Ingrid Kurz) 179
Evaluación de la calidad en interpretación simultánea: Contrastes de exposición e inferencias emocionales. Evaluación de la evaluación (Ángela Collados Aís) 193
Linguistic interference in simultaneous interpreting with text: A case study (Heike Lamberger-Felber & Julia Schneider) 215
Towards a definition of Interpretese: An intermodal, corpus-based study (Miriam Shlesinger) 237
The speck in your brother’s eye – the beam in your own: Quality management in translation and revision (Gyde Hansen) 255

Publications by Daniel Gile 281
Name index 295
Subject index 299
Preface

With this volume, colleagues and friends wish to honor Daniel Gile for his tireless efforts in Interpreting & Translation Research. It presents a selection of the kinds of research and models that he has inspired or promoted, or that are closely connected with some of his main research interests. As can be seen from the first two articles, which deal with Daniel Gile’s impact on Interpreting & Translation Research, and also from the impressive list of his publications at the end of the volume, Daniel Gile has been, and is still, a catalyst for a wide range of research in this field. This is also reflected in the other articles. His efforts, one might say, have been models for our own.

In their author-centred scientometric study, Nadja Grbić and Sonja Pöllabauer investigate (some of) Daniel Gile’s efforts for the scientific community. They apply mathematical and statistical methods to the range of his academic work in order to explore the development and thematic landscape of his publications and citations, and his network with co-authors – thus showing the considerable impact of his œuvre on the field of T&I Studies.

Franz Pöchhacker’s description of the turns in Interpreting Studies emphasizes the influence of some central personalities and their activities on the methodological and paradigmatic turns and shifts in the discipline, and outlines the roles played by precursors, pioneers and masters as well as their impact on the field. In a metascientific schema borrowed from Snell-Hornby, the work of Daniel Gile and its benefit for the scientific community is honored.

Interpretive hypotheses are the concern of Andrew Chesterman, who considers interpretive hypotheses to be essential conceptual tools for any research project, as observations, data and/or test results always have to be interpreted in some way. Various kinds of meaning and possible interpretation types are compared.
Interpretive hypotheses, which have their roots in hermeneutics, are defined in relation to the standard empirical types of hypotheses.

Having borrowed concepts and terms from everyday language and from other disciplines, TS lacks a consistent terminology. The effect can be confusion and a loss of rigor and transparency. This is demonstrated by Yves Gambier, with examples from TS where a number of key terms are discussed and illustrated in different taxonomies
of so-called “strategies”. Because of this diversity in the perception and use of concepts and terms, a stable meta-language in TS still remains to be desired.

Anthony Pym has a closer look at Gile’s cognitive Effort Models, arguing that they may underestimate the context-sensitive aspect of simultaneous interpreting. In particular, Pym raises the question of context-dependent strategies and interpreters’ awareness of variable levels of risks, especially with respect to omissions. Low-risk omissions, as distinct from high-risk ones, should perhaps not be classed automatically as errors.

Training students for a professional career as researchers is the issue addressed by Christina Schäffner, who sees this as a collective responsibility of universities. She outlines proposals for systematic doctoral training of a complex set of research skills and career management skills, and the importance of this training for future high quality research. Different requirements are compared and commented on with reference to actual research training in TS (especially with respect to the United Kingdom).

Getting started, i.e. writing an abstract according to what readers might expect – a difficult form of writing – is itself a research skill. In her article, Heidrun Gerzymisch-Arbogast presents the principle of the ‘four tongues’ of the speaker and the ‘four ears’ of the listener. A discussion of the different dimensions of an abstract and their interplay illustrates how one can get one’s ideas across effectively in an abstract.

According to Barbara Moser-Mercer, survey research in interpreting needs more methodological rigor, and also more comparable studies of the perception of quality by the users of conference interpreting. Expert professional performance deserves quality questionnaires based on valid, relevant questions and including the construction of categories which determine different aspects of the perception of the multi-dimensional construct “quality”.
Minhua Liu provides an overview of empirical research on expertise and effort in simultaneous interpreting. She surveys research dealing with the various challenges encountered by interpreters, and their successes and failures in dealing with these challenges. Using conceptual tools developed in cognitive psychology, she examines the differences between novice and expert interpreters, comparing different levels of skills, strategies, cognitive ability and performance.

Non-native speakers of English are often unaware of the extra cognitive load that their accent can impose on the interpreter. Reporting an empirical study with students of simultaneous interpreting (Kodrjna), Ingrid Kurz discusses a hypothesis derived from Gile’s Effort Models: that a higher processing capacity is required for
comprehension when the speaker has a strong foreign accent. The study clearly confirms this hypothesis.

In her article about emotional inferences and quality assessment in simultaneous interpreting, Ángela Collados Aís reports the results of an empirical investigation comparing the intonation of two simultaneous interpreters. The study shows the extent to which intonation differences – in this case monotonous or non-monotonous intonation – can affect the evaluation of the quality of the interpretation, as well as judgements of the personality of the speaker.

Heike Lamberger-Felber and Julia Schneider present results from an empirical analysis of linguistic interference in simultaneous interpreting under different working conditions. They use test data from a corpus of interpretations by 12 professional interpreters. The results highlight the specific problem of inter-subject variability in interpreting and its consequences for the use of statistics in empirical interpreting research.

In another empirical study on corpus analysis of tagged comparable inter-modal corpora obtained with professional translators and interpreters, Miriam Shlesinger follows Gile’s appeal to look at similarities between the different modes of translation: oral and written. This interdisciplinary study at the interface of corpus linguistics, statistics and Translation Studies aims to bring new insights to the special properties of interpreted discourse.

Quality management in professional translation, in particular the relationship between the translation competence and the revision competence of students and professional translators, is the subject of two empirical longitudinal studies by Gyde Hansen. The research question asked is whether good translators are also good revisers. Conceptual models illustrate self-revision and other-revision, and the differences between the competences needed in these processes.

The editors
Scientometrics and history
An author-centred scientometric analysis of Daniel Gile’s œuvre Nadja Grbić & Sonja Pöllabauer University of Graz, Austria
There are only two kinds of scholars: those who love ideas and those who hate them. (Émile-Auguste Chartier)
The article begins with a quantitative scientometric study of Daniel Gile’s published writings. The study focuses on the diachronic development of Gile’s writings and several other aspects of his scientific oeuvre such as types of publications; languages of publication; media of publication; topics; and co-authorships. The co-authorship data are then displayed by means of a co-authorship network and discussed within the framework of network analysis. In addition, a brief keyword analysis of the titles of the publications provides a first glimpse into the range of topics which are tackled in Daniel Gile’s publications. The paper then goes on to deepen the insights gleaned from the scientometric study in an analysis of citations of Gile’s writings by other authors. The citation analysis represents a first attempt to investigate Gile’s “impact” on the scientific community, even though it is inherent to the nature of citation analysis that the data obtained by such an approach can never be complete. We think that all three approaches can provide interesting insights into Gile’s oeuvre. Owing to his pioneering publications on scientometrics and citation analysis, Daniel Gile strikes us as a perfect “candidate” for such an author-centred scientometric approach.
Keywords: Interpreting Studies, scientometrics, network analysis, keyword analysis, citation analysis
1. Introduction

We had stumbled across scientometrics before (though at the time we did not really know what it was actually about). It was not until 2006, however, that it provoked our interest in earnest − not least because of Daniel Gile’s pioneering works in that field (Gile 2000, 2005, 2006). In our first attempts to tackle topics in Translation and Interpreting (T/I) Studies using scientometric and network analytical methods, we experimented with a range of different data (Grbić and Pöllabauer 2006; Pöllabauer 2006; Grbić 2007, 2008; Grbić and Pöllabauer forthcoming a; Grbić forthcoming; Pöllabauer forthcoming) and research instruments (e.g. publication counting, (key/title) word analysis, network analysis (co-authorship and co-word analysis)). As these approaches proved capable of yielding interesting insights into different topics of research, we thought it would be worthwhile to delve even deeper into the tangle of scientometric methods and try to sketch an overview of different methods and their possible uses in T/I Studies (Grbić and Pöllabauer forthcoming b).

Daniel Gile is one of the most productive authors in Interpreting Studies (IS) and, to quote Émile-Auguste Chartier, he certainly loves new ideas. In the first scientometric study on IS by Franz Pöchhacker (1995: 49), a comparison of two research periods, Daniel Gile topped the list of leading authors in both periods under study, from 1952 to 1988 (28 items) and from 1989 to 1994 (28 items).¹

In what follows, we focus on Daniel Gile’s academic work (his publications) and try to underline the “impact” he has made on T/I Studies. We first present a scientometric study of Gile’s writings on T/I to explore the diachronic development of Gile’s writings and several other aspects of his scientific work such as types of publications, languages of publication, media of publication, topics, and co-authorships.
His collaborations with co-authors are discussed within the framework of network analysis and displayed in a network map. The titles of his publications are the basis for a keyword analysis, which we carried out in order to obtain a glimpse into the range of topics he has focused on during his career. His “impact” will then be investigated further on the basis of a first brief analysis of citations of his works.
1. In our corpus, Daniel Gile has 71 publications in the first and 43 publications in the second period of Pöchhacker’s study, which is most probably a consequence of the increasingly rapid dissemination and availability of comprehensive databases and other resources.
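Computationally, the keyword analysis of publication titles mentioned above reduces to tokenizing titles and ranking word frequencies. The following is a minimal sketch, not the authors' actual procedure: the titles and the ad-hoc English stopword list are purely illustrative.

```python
from collections import Counter
import re

# Illustrative publication titles (stand-ins, not the actual corpus).
titles = [
    "Basic Concepts and Models for Interpreter and Translator Training",
    "Methodological Aspects of Interpretation Research",
    "Opening up in Interpretation Studies",
    "Conference Interpreting as a Cognitive Management Problem",
]

# Function words too common to be informative as topic indicators.
stopwords = {"a", "an", "and", "as", "for", "in", "of", "the", "up"}

def title_words(title):
    """Lowercase a title and keep only non-stopword tokens."""
    tokens = re.findall(r"[a-z]+", title.lower())
    return [t for t in tokens if t not in stopwords]

# Rank title words by frequency across the whole corpus.
counts = Counter(w for t in titles for w in title_words(t))
for word, n in counts.most_common(5):
    print(word, n)
```

A real study would also need tokenization rules for the French and German titles in a multilingual oeuvre, and would have to decide whether to conflate morphological variants such as "interpreting" and "interpretation".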
2. Treasure hunting – methods and corpus

Scientometrics is the application of mathematical and statistical methods to scientific output with the aim of achieving a better understanding of the mechanisms of scientific research as a social activity. Although scientometrics is most widely applied to the collection of information regarding the evolution, dynamics and trends of scholarly communities or academic disciplines, it can also be used to study research output at various levels, not only at a macro level (national output according to country, region, etc.), but also on a meso level (university, group), or even on a micro level (individual researcher) (cf. Gauthier 1998: 9). In this paper we will concentrate on the micro level of analysis and focus on Daniel Gile’s publication output and citations of his writings.

In this section, we will only present the methods of publication counting, keyword analysis and network analysis briefly, as we have discussed these methods in detail in another article (for a comprehensive description of the methods cf. Grbić and Pöllabauer forthcoming b). Citation analysis will be discussed in more detail here, as this is the first time we have used this method in a scientometric study.

2.1 Publication counting and content analysis
Although scientific output may be categorized in many different ways, e.g. publications, “ideas”, conference presentations, development of new methodologies, research projects, partnerships, transfer of knowledge and technology, new products, quality standards, education (Geisler 2000: 116-117, 122), publications still seem to be among the most prestigious formal channels of scholarly communication in scientific communities and are therefore used as an indicator of scientific activity. The quantitative analysis of publication output can provide insights into a range of scholarly processes, e.g. the growth (or decrease) of publication rates, the origin and evolution of disciplines, publication policy, interdisciplinarity, the relationships between researchers, etc. The most commonly used sources of scientometric data are the databases of the ISI Web of Science, which is in the hands of the company Thomson Scientific (cf. Harzing and van der Wal 2007: 1, see also 2.3.2). The indicators which are usually studied are: publication rates in a certain time period, authors, document types, language of publications, journals, publishing houses, etc. Despite the benefits of publication counting, it has to be borne in mind that there are also flaws, e.g. the unreliability of databases, the problem of counting methods, etc. (cf. Geisler 2000: 164f.).

In scientometrics, content analysis based on keywords, title words, words taken from abstracts, or classification codes is applied to scrutinize the “thematic landscape” of a corpus of publications. The results of such an analysis may be used to investigate the frequency and importance of topics at a certain time (ranking), to trace the evolution of topics or key concepts over time (time series) or to show relations between topics using semantic networks (cf. Stock 2001: 40). Methodologically, a difference is made between the use of title words on the one hand and words from abstracts and indexing words on the other. Whereas title or abstract words are chosen by the authors themselves and therefore represent an internal author-viewpoint, indexing words, which are chosen by indexers of bibliographic databases, represent an external viewpoint and may reflect a very subjective perspective on the content of a given publication (Braam, Moed and van Raan 1991: 253). As authors themselves might also employ misleading words in their titles, one should always bear in mind that such categorizations can never function as perfectly objective criteria for assessing the content of a given publication unless the classification is made on the basis of a thorough reading of the whole paper.

2.2 Network analysis
Social network analysis (NA) is an analytical instrument often used in scientometrics. It makes use of a set of tools and methods to explore social structures and the relationships between the different agents (individual people or groups of actors) or other entities (ideas, concepts) within these social networks (Trappmann, Hummell and Sodeur 2005: 14). A set of diverse, often highly differentiated methods and techniques for sampling, collecting, visualizing and mapping data are used in NA. Various mathematical and statistical routines may be used to analyze and visualize network data. There are a number of commercial and open-source programmes available for NA studies (ibid.: 221f.). For the purposes of this paper, we will use UCINET (Borgatti, Everett and Freeman 2002) and Pajek software (Batagelj and Mrvar 1996/2007).

Various bibliographic indicators have been used to generate and analyse network structures in scientometrics (Noyons 2004: 238): bibliographic records of publications (author(s), title, source in which the document is published, year of publication, abstracts) or additional specific information (author/editor keywords, database index terms, classification codes, citations) (ibid.: 239).

A network is formally defined as a set of agents that may or may not stand in relation to one another. The ties within a network are modelled using a set of nodes (representing e.g. agents, groups, etc.), which are portrayed as (labelled) dots with a set of links between these nodes. Social structures may be analysed on the level of (1) individual actors (ego or ego-centred networks), (2) total networks (the entirety of a group of agents), or (3) groups within groups (cf. Jansen 2003: 33). In this contribution, we present an ego-centred network with Daniel Gile in the centre of the network and his peers (co-authors) as his “alters” (“ego” stands for the focal node of the network and does not refer to egocentricity as a character trait, cf. White 2000: 477).

Co-authorship analyses may help to identify the relationships and ties between the agents in or across an academic specialty (cf. Gauthier 1998: 10). They are often viewed as important scientometric indicators of collaborations (cf. Bordons, Morillo and Gómez 2004: 201). Such studies explore the links between (co-)authors of publications, based on the assumption that “when two or more researchers jointly sign a paper, intellectual and/or social links” (Gauthier 1998: 13) will exist between them.

2.3 Citation analysis
Citation and co-citation analysis are two instruments which are used in scientometrics to trace, describe and visualize structures and relations within and between texts, disciplines, journals, and authors. Citation analysis uses quantitative techniques to explore, count, and describe connections between documents and to study the relations between cited and citing units (cf. Rousseau and Zuccala 2004: 513). The patterns and frequency of citations are used to assess the “impact” of certain publications or individual authors (or groups of authors) and to measure the “quality” of given publications.

In citation analysis, researchers basically count how often publications or authors are cited. Such analyses are based on the assumption that more influential works (or researchers) are cited more often (cf. ibid.: 1). As a consequence, results of such analyses are often used to evaluate the research output of specific fields and subsequently to allocate funding. To equate “citedness” with quality may, however, represent a fallacy. Meho (2007: 3), for instance, points to problems such as cronyism (colleagues citing each other reciprocally), self-citations (cf. also Hyland 2003), ceremonial citations (authors citing authorities in a field without having read the original work), or negative citation (relevant authors/works are not cited for a number of reasons, cf. Stock 2001: 32). It is nonetheless often assumed that such negative examples are relatively insignificant, and that citations can generally help to identify the key players in a given field and the literature pertaining to that field and thus provide access to “poorly disseminated, poorly indexed or uncited works” (Meho 2007: 3).

Co-citation analysis is a specific form of citation analysis, which explores the connections between two documents or units of analysis: two publications are linked, for instance, if they are both cited in a third article.
Co-citation analyses help to map scientific disciplines or the development of specialities and emerging areas of research (for more detailed reflections on citation analysis, citation behaviour and motives for citing see, for instance, Leydesdorff and Amsterdamska 1990).
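The counting at the heart of citation and co-citation analysis can be sketched in a few lines of Python. This is only an illustration of the underlying logic, not the tooling used in the study; the citing papers and cited works below are invented.

```python
from collections import Counter
from itertools import combinations

# Toy bibliographic data: each citing paper mapped to the works it references.
# All identifiers are invented for illustration.
papers = {
    "paper1": ["Gile 1995", "Shlesinger 1998", "Pochhacker 2004"],
    "paper2": ["Gile 1995", "Shlesinger 1998"],
    "paper3": ["Gile 1995", "Kurz 2001"],
}

# Citation analysis: how often is each work cited?
citations = Counter(ref for refs in papers.values() for ref in refs)

# Co-citation analysis: two works are linked whenever they are
# cited together in the same third paper.
cocitations = Counter()
for refs in papers.values():
    for pair in combinations(sorted(refs), 2):
        cocitations[pair] += 1

print(citations.most_common(1))    # the most-cited work
print(cocitations.most_common(1))  # the most frequently co-cited pair
```

The same two counters, scaled up to a full bibliography, yield the citation ranks and co-citation links that the visualization tools then map.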
For the purposes of this article, we will use citation analysis to explore the relationships between Daniel Gile and his peers in T/I Studies. Such author studies (McCain 1990: 195) use (co-)cited authors’ names as a unit of analysis. In network analytical terms, this would be called an “ego-centred” network (Jansen 2003: 33), with the oeuvre of a single selected author at the centre of investigation. The network illustrates the relationships between this author (ego) and his peers (alters) and can be visualised in network graphs (although, due to the scope and methodology of this paper, the citation network will not be visualised in this contribution).

As Borgman (2000: 148) quite rightly states, “tracing citations [in print form] is a laborious task”. The rapid expansion of new technological solutions and the World Wide Web, however, have made it possible to process citations in an electronic format. This development has also brought advantages for citation analysis, though Internet scientometrics as a field is still sometimes considered somewhat lowbrow (Fröhlich 2000: 13). Web-based scientometric analysis faces the same issues of reliability and validity as traditional scientometric methods (cf. Borgman 2000: 148), e.g. as to inconsistency of data, lack of data, faulty bibliographic data, subjective use of keywords, etc. Recently, Internet sources have also been used to process and retrieve (co-)citation data, an approach also referred to as cybermetrics or webometrics (cf. ibid.: 143).

Google Scholar (GS), for instance, has become one of the sources which can now be used to carry out citation analysis. For our analysis, we will rely on Publish or Perish (PP), a free online software programme which uses GS to retrieve data for citation analysis.
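As a data structure, the ego-centred network described here is simply the focal author plus a weighted tie to each alter. The following sketch uses a plain adjacency mapping (UCINET and Pajek, the tools actually used in the study, work on richer formats); the co-author names are hypothetical placeholders.

```python
# Ego-centred co-authorship network: "ego" is the focal author,
# "alters" are co-authors; tie weights count joint publications.
# Co-author names are invented placeholders, not real data.
ego = "Gile"
coauthored = [
    ("Gile", "AuthorA"),
    ("Gile", "AuthorB"),
    ("Gile", "AuthorA"),  # a second joint publication with AuthorA
]

# Build a weighted adjacency mapping for the ego's ties.
ties = {}
for a, b in coauthored:
    alter = b if a == ego else a
    ties[alter] = ties.get(alter, 0) + 1

# The ego's degree is the number of distinct alters;
# the heaviest tie marks the most frequent collaborator.
print("alters:", sorted(ties))
print("strongest tie:", max(ties, key=ties.get))
```

Exporting such a mapping to Pajek's `.net` format or a UCINET matrix is then a matter of serialization, after which the network can be laid out and visualised.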
We base our study on the assumption that what Harzing and van der Wal (2007: 1) state with regard to Strategy and International Business can also be assumed to be true for T/I: “[t]he use of GS particularly benefits academics publishing in sources that are not (well) covered in ISI.”

2.3.1 Publish or perish

Reading the heading of this subsection, one would not automatically think of a software programme but rather of a derogatory description of a certain kind of publication behaviour which is often said to govern publication strategies in some disciplines – sometimes, it is maintained, at the cost of quality. Nonetheless, PP here is the somewhat misleading name of a Web-based citation analysis software programme. PP is available for non-profit use courtesy of Anne-Wil Harzing. It was written by Tarma Software Research Pty. Ltd. (Melbourne, Australia) and first released (Version 1.0) on www.harzing.com in October 2006 (Tarma Software Research 2007). At the time of writing, Version 2.4 is available and will be used for our analysis.

PP allows for a detailed analysis of citations, including the calculation of a number of statistics, e.g. total number of papers, total number of citations, average number of citations per year. It also allows users to automatically calculate some of the more complex scientometric indices (e.g. Hirsch’s index, Egghe’s index) (cf. Harzing 2007a). What is novel about PP is that it retrieves citation data using GS rather than some of the more widely used, commercial databases.

2.3.2 ISI Web of Science vs. Google Scholar

Traditionally, citation studies have tended to use Thomson’s ISI Web of Science databases. To be able to access the Thomson databases, users have to maintain a (costly) subscription. Besides, it is also often maintained that the Social Sciences and Humanities are grossly underrepresented in the ISI databases (cf. Harzing 2007b). As Harzing (ibid.) points out, for some scholars, the Web of Science may also underestimate an individual’s citation impact, which is why sources such as GS might provide better results: “[Many] academics show a substantially higher number of citations in GS than in the Web of Science.”

The Web of Science also only includes citations from ISI-listed journals. This means that hardly any journals pertaining to T/I Studies are included in the ISI databases (cf. Grbić and Pöllabauer forthcoming b). Where citations are included in the ISI databases, only the first author is counted: in the case of co-authored papers this is a serious disadvantage for the co-authors (for details on the problems of counting authorships cf. Grbić and Pöllabauer forthcoming a). Another drawback of the ISI databases is the very strong emphasis on and predominant inclusion of English-language sources; authors writing in other languages are thus automatically at a disadvantage (cf. Harzing 2007b). The ISI databases do not generally include books or chapters in books. GS, on the other hand, does include non-English literature and also books or individual chapters in books (cf. ibid.).
One must nevertheless always bear in mind that databases only include a (subjective) choice of entries and are never complete or unflawed. (For critical appraisals of the ISI databases see also Stock 2001 and Hicks 2004.)

GS is a free Web engine that indexes scholarly literature across an array of sources (peer-reviewed papers, theses, books, abstracts and articles, from academic publishers, professional societies, preprint repositories, universities and other scholarly organizations) and disciplines. It allows users to search for papers, abstracts and citations. A beta version was released in 2004 (cf. Google 2008).

Like any other database, GS has its limitations: bibliographic citations are always prone to errors ranging from typographical errors, problems with names with diacritics or ligatures to incomplete references, missing relevant references, etc. GS, for instance, might also include non-scholarly citations; besides, not all relevant journals are indexed and it is not entirely clear upon which criteria the editors decide to include certain journals and exclude others (cf. Harzing 2007b). Not all specialties are covered to the same extent. As Harzing notes (ibid.), however, the Humanities and Social Sciences are included with a wide range of literature. Entries in GS are not cleaned manually and they are processed automatically, which can sometimes also lead to nonsensical results (cf. ibid.). One last drawback is that GS is not updated as often as, for instance, the ISI databases.

In spite of these shortcomings, however, Harzing points out that while “the output of Publish or Perish is only as good as its input, […] I do believe that in most cases GS presents a more complete picture of an academic’s impact than the Thomson ISI Web of Science” (Harzing 2007b). One can thus fairly assume that a high citation impact in PP suggests that the researcher in question probably does have a considerable impact on his/her field; “[h]owever the reverse is not necessarily true” (ibid.).

2.4 The corpus
The corpus of Daniel Gile's publications was compiled in October 2007 using several sources: the Bibliography of Interpreting and Translation (BITRA) of the Department of Translation and Interpreting of the University of Alicante, the Translation Studies Bibliography (TSB) published by John Benjamins, the LiDoc database of the Department of Translation Studies in Graz, and the list of publications posted by Daniel Gile on his website. As online bibliographic databases are a relatively new tool in T/I Studies and as experience has shown that databases are never complete, we decided to use more than one database to obtain the relevant data. When comparing the hits from the different databases, we found that only 40 publications were covered in all three T/I-specific databases – which once again confirms that most databases are incomplete.

Table 1. Databases

Type of database    Database              N
T/I Studies         Website Gile          188
                    BITRA                 135
                    LiDoc                 89
                    TSB                   78
Other               Google Scholar        129
                    MLA                   41
                    ISI Web of Science    7
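The overlap check described above (40 publications covered in all three T/I-specific databases) amounts to a set intersection over the merged records. A minimal sketch with hypothetical record identifiers; real matching would additionally require normalising spelling, diacritics and name order before comparing entries:

```python
# Hypothetical record IDs standing in for bibliographic entries
# after deduplication; the actual databases are BITRA, LiDoc and TSB.
bitra = {"pub01", "pub02", "pub03", "pub05"}
lidoc = {"pub01", "pub02", "pub04"}
tsb   = {"pub01", "pub02", "pub05", "pub06"}

covered_in_all = bitra & lidoc & tsb   # records found in every database
merged_corpus  = bitra | lidoc | tsb   # union used to build the corpus

print(len(covered_in_all), len(merged_corpus))
```

The asymmetry between intersection and union is precisely why the authors merged several databases rather than relying on any single one.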
Scientometric analysis of Daniel Gile’s œuvre
In addition to the T/I-specific databases, we used three further resources to explore the general "visibility" of an IS scholar: the ISI Web of Science, the database of the Modern Language Association (MLA), and GS. Interestingly, GS provided the highest number of hits, followed by the MLA database. The ISI Web of Science search yielded only 7 hits, which again shows that the ISI coverage of our discipline is far from representative. Table 1 provides an overview of the search results. All in all, we found that Daniel Gile's oeuvre amounts to an impressive total of 203 publications, the first published in 1979, the last in 2007. (As the list on Daniel Gile's website contained only 188 items, he will have to complete his own list after the publication of this volume.)

3. The treasure chest – results of the study

3.1 Lots of offspring – publication analysis
As mentioned above, Daniel Gile's first publication can be traced back to 1979: a paper on "Bilinguisme, interférence et traducteurs" published in Traduire. Figure 1 shows his overall production from 1979 to 2007.

Figure 1. Timeline of overall production (1979–2007)
Table 2. Type of document

Type of document                N      %
Article in journal              110    54.2
Article in collective volume    51     25.1
Review                          29     14.3
Collective volume               6      3.0
Monograph                       4      2.0
Dissertation                    2      1.0
Other                           1      0.5
Total                           203    100.0
An important indicator in publication counting is document type. Depending on the focus of a scientometric analysis and the discipline in question, the following categories can be included under "publication": publications available for sale, grey literature (i.e. university publications such as book series or working papers which are not available in bookshops), audio-visual media, articles in online journals, internet documents, university writings, and patents (cf. Stock 2001). The quantitative distribution of document types found in the corpus is shown in Table 2. The most prominent type of document is the category "article in journal", with 54.2 %. The data in the corpus are particularly interesting when compared to other studies handling large-scale IS corpora. Pöchhacker's scientometric study (1995: 25), based on a corpus of 627 publications on IS issued between 1989 and 1994, showed that 38.4 % of the corpus fell under the category of articles in journals; a study by Grbić and Pöllabauer (forthcoming a), who worked on a corpus of 595 publications on community interpreting in German-speaking countries published between 1979 and 2006, showed that 46 % of the writings under study were articles in journals. Although the studies cover neither the same periods nor the same types of interpreting, it can nonetheless be assumed on the basis of these figures that, compared with general publication practices in the field of IS, Daniel Gile's articles in journals represent a relatively high proportion of his oeuvre as a whole. It is also interesting to take a closer look at the journals themselves. Daniel Gile has published 110 articles and 29 reviews in various journals. Figure 2 shows the distribution of journal articles in descending order. Most of the journal articles were published in Meta between 1982 and 2005; 26 of these were written in French.
Ranks 2, 3 and 4 are occupied by journals published by translators' and/or interpreters' associations (Japanese, French and international). The relatively high proportion of practitioners' journals suggests that, parallel to his academic work, it is important to Gile to disseminate his research results among practitioners.
Figure 2. Articles in journals (>1 publication): Meta (26); JAT Bulletin (17); Traduire (10); Bulletin de l'AIIC (7); Target (6); Babel (4); Hermes (4); The Interpreters' Newsletter (4); Forum (3); Communicate (2); Conference Interpretation and Translation (2); Lebende Sprachen (2); Multilingua (2); Palimpsestes (2); TTR (1)
As for reviews, more than one was published in the following journals (number of reviews in parentheses): Target (9); The Interpreters' Newsletter (8); Meta (4); The Journal of Specialised Translation (3); and the Bulletin de l'AIIC (2). Another interesting indicator in scientometric research is the language of publication. As a consequence of the increasing internationalisation and globalisation of science and research, English has become the common language used at many conferences and meetings, in discussion forums, and in publications. It seems to have become a generally accepted fact that English is the lingua franca of academia, and it is often claimed that there is "no science beyond English". The data in our corpus showed the following distribution of language use in Daniel Gile's publications (Table 3):

Table 3. Language of publication

Language     N      %
English      112    55.2
French       83     40.9
Chinese      2      1.0
German       2      1.0
Galician     1      0.5
Hungarian    1      0.5
Russian      1      0.5
Spanish      1      0.5
Total        203    100.0
The corpus shows that most of Daniel Gile's publications have been in English. Nonetheless, French, his mother tongue, ranks second, which suggests that there is indeed a market for IS publications written in languages other than English. This is in line with insights from scientometric studies of other disciplines, which have shown that some language areas have markets large enough to sustain journals in those languages (van Leeuwen 2004: 377). In Sociology and the Humanities, there is another reason why many works are published in languages other than English: producers of scientific literature strive to incorporate the social context of their research in their work and thus demonstrate a strong national orientation, also with respect to language use (Hicks 2004: 484).

3.2 Lots of topics – content analysis
To explore the range of topics tackled in Daniel Gile's publications, we opted for a quantitative title-word analysis. Such an approach benefits from the internal author viewpoint on the content of the publications. It is a useful method, especially when it is not possible to gain access to the full texts before indexing them. In this case, we used WordSmith Tools (Scott 2004), published by Oxford University Press, to analyse the titles of the publications and provide some general information about word frequency. This allows us to present a basic overview of the "thematic landscape" of Daniel Gile's oeuvre. Our corpus (titles of publications) was read into WordSmith in ASCII format and run through the WordList tool to generate a frequency list of words used in the titles. High-frequency function words and proper names were deleted from the data (stoplist). The wordlist, which was generated automatically by the programme, was then lemmatised manually. As the titles were in various languages, we manually conflated words with the same or a similar meaning in different languages wherever such words appeared more than once. Table 4 presents the words with the highest frequency (all words that appeared 4 times or more). It is not surprising that "interpreting" (interpretation, interprétation, interpretación) (99) tops the list – it is the core subject of Daniel Gile's scientific and didactic work. (The Concord tool in WordSmith showed that "interpreting" was used only twice in combination with "studies".) The high occurrence rate of "translation" (traduction) (48) and "translator" (traducteur) (14) shows that Daniel Gile not only specialises in IS but also addresses translation issues in his work. As for the interpreting type, the words "conference" (conférence) (40) and "simultaneous" (simultanée, Simultandolmetschen) (21) confirm that his main interest lies in this field; "consecutive" appears only 6 times.
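The pipeline just described – tokenise the titles, drop stoplist words, conflate cross-language variants into one lemma, then count – can be sketched as follows. This is not the authors' actual setup (they used WordSmith Tools with manual lemmatisation); the titles, stoplist and conflation table below are hypothetical illustrations:

```python
from collections import Counter

# Hypothetical multilingual titles standing in for the real corpus.
titles = [
    "Conference interpreting research: a review",
    "La recherche en interprétation de conférence",
    "Teaching conference interpreting",
]

# Stoplist of high-frequency function words (illustrative only).
stoplist = {"a", "la", "en", "de"}

# Manual conflation table mapping language variants to one lemma,
# mimicking the cross-language merging described in the text.
conflate = {
    "interprétation": "interpreting",
    "recherche": "research",
    "conférence": "conference",
}

words = Counter()
for title in titles:
    for token in title.lower().replace(":", " ").split():
        if token in stoplist:
            continue
        words[conflate.get(token, token)] += 1

print(words.most_common(3))
```

Without the conflation step, "conférence" and "conference" would be counted as two unrelated words, which is exactly the distortion the manual lemmatisation avoids.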
The second most frequently occurring word is “research” (recherche), which appears 49 times. Other words
Table 4. Word frequency

Word                                                             Freq.
INTERPRETING; INTERPRÉTATION; INTERPRETATION; INTERPRETACIÓN    99
RESEARCH; RECHERCHE                                             49
TRANSLATION; TRADUCTION                                         48
CONFERENCE; CONFÉRENCE                                          40
TRAINING; FORMATION; ENSEIGNEMENT; TEACHING                     36
REVIEW; COMPTE RENDU                                            28
JAPANESE; JAPONAIS                                              23
SIMULTANEOUS; SIMULTANÉE; SIMULTANDOLMETSCHEN                   21
TRANSLATOR; TRADUCTEUR                                          14
STUDY; ÉTUDE                                                    13
MODULE                                                          12
INTERPRETER                                                     11
QUALITY; QUALITÉ; CALIDAD                                       10
LANGUAGE; LANGUE                                                9
ANALYSIS; ANALYSE                                               8
FRANÇAIS                                                        8
BOOK                                                            8
ISSUE                                                           8
METHODOLOGICAL; METHODOLOGIQUE                                  8
TECHNICAL; TECHNIQUE                                            8
ASPECT                                                          7
CONSECUTIVE; CONSÉCUTIVE                                        6
SCIENTIFIC; SCIENTIFIQUE                                        6
COGNITIVE                                                       5
ÉVALUATION; EVALUACIÓN                                          5
LEXICAL; LEXICALE                                               5
MODEL                                                           5
PART; PARTIE                                                    5
PROBLEM; PROBLÈME                                               5
EMPIRICAL; EMPIRIQUE                                            4
COMMUNICATION                                                   4
CONTRIBUTION                                                    4
EFFORT                                                          4
INTERDISCIPLINARITY; INTERDISCIPLINARITÉ                        4
INALCO                                                          4
LOGIC; LOGIQUE                                                  4
METHOD; MÉTHODE                                                 4
REFLECTIONS; REFLEXIONS                                         4
ROLE                                                            4
PERSPECTIVE                                                     3
pertaining to research as a practice include (with foreign-language equivalents and frequency in parentheses): "study" (étude) (13); "analysis" (analyse) (8); "methodological" (methodologique) (8); "scientific" (scientifique) (6); "empirical" (empirique) (4); "interdisciplinarity" (interdisciplinarité) (4); and "method" (méthode) (4). These results from the quantitative analysis clearly indicate Daniel Gile's concern with research at both the empirical and the meta-theoretical level. Another intense focus of his work is "training" (formation, enseignement, teaching) (36). Daniel Gile is also interested in issues of quality, as substantiated by his use of the words "quality" (qualité, calidad) (10) and "évaluation" (evaluación) (5). The languages which appear most often are "Japanese" (japonais) (23), followed by "français" (8).

3.3 Lots of friends – co-author analysis
As mentioned above, Daniel Gile has published 203 works. Fourteen of these were written with co-authors. Ten of the co-authored publications are the work of two or three authors (five were written by two authors and five by three); two publications were written by four authors each; and there are two articles on which six and seven authors respectively collaborated. If, instead of whole counting, a system of adjusted counting is applied, in which each co-author receives a fractional count, Daniel Gile has a score of 193.98. Daniel Gile's collaborations with his co-authors can be analysed from a network-analytical viewpoint (ego network). Ego networks can be defined as "networks consisting of a single actor (ego) together with the actors they are connected to (alters) and all the links among those actors" (Everett and Borgatti 2005: 31). An ego in an ego-centred network is the "focal" node. Basically, a network has as many egos as it has nodes. Egos may be persons, groups, organisations, etc. (cf. Hanneman and Riddle 2005, "Ego Networks"). The central node of our network is Daniel Gile. In what follows, we will therefore concentrate primarily on Gile's connections with his peers. There is a range of different properties of ego networks that can be analysed. Owing to the scope of this contribution, we will limit ourselves to a selection of some of the more basic NA indicators. The position of the nodes of a network and the links (length, position) between them are often irrelevant. The thickness of the links can, however, provide information about the intensity/frequency of relationships (cf. Jansen 2003: 92). The co-author network (see Figure 3) was visualised in Pajek using the Kamada-Kawai algorithm ("free"), which is often used for such graphs (cf. e.g. Leydesdorff 2004: 6); it displays a network as a system of dynamic springs and minimises the energy within that system (for details cf. Grbić and Pöllabauer forthcoming a and b).
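The adjusted (fractional) count mentioned above can be verified directly from the authorship distribution given in the text: each of the k co-authors of a publication receives a count of 1/k. A minimal sketch:

```python
# Distribution taken from the paper: 189 single-authored works
# (203 - 14), five papers with 2 authors, five with 3, two with 4,
# and one each with 6 and 7 authors.
authors_per_paper = {1: 189, 2: 5, 3: 5, 4: 2, 6: 1, 7: 1}

total_papers = sum(authors_per_paper.values())
# Fractional counting: n papers with k authors contribute n/k each.
score = sum(n / k for k, n in authors_per_paper.items())

print(total_papers, round(score, 2))  # 203 193.98
```

The result, 193.98, matches the figure reported above, confirming the internal consistency of the authorship data.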
The nodes in the network below were not rearranged manually, which is to
say that we did not deliberately place Gile at the centre of the network – given the nature of his ties with his co-authors, he automatically became the centre of the network. The stronger links indicate ties to co-authors with whom he cooperated more than once. The co-authorship network comprises a total of 30 nodes ("egos"); the maximum number of potential links between one agent and the others is therefore 29. As the graph displays all of Daniel Gile's co-authorships, Gile obviously has ties with all the others. A small number of authors (Dam, Hansen, Collados – stronger links in the graph) cooperated with Gile more than once (twice each); the rest of the authors in the network each cooperated with Gile on only a single joint publication. The number of connections of a node with other nodes in the network is called its degree. An agent's degree is thus a simple indicator of an agent's centrality in a network, i.e. how strongly s/he is embedded in the network (cf. Jansen 2003: 94). Agents with a high degree can be assumed to be more closely connected to their peers than those with a lower degree. Two nodes are unconnected (degree of 0) if they are not linked with each other. Agents with a degree of 0 can often be interpreted as 'outsiders' in a given network. Again, as the network is an ego-centred network with Gile as its central node, he obviously has the highest degree (32); the other authors are mainly connected through him. The degrees of his co-authors are as follows: 6 authors have a degree of 6 (1 joint publication: Anderman, Fraser, Newmark, Pearce, Rogers, Zlateva), 6 a degree of 5 (1 joint publication: Cenková, Dam, Kondo, Lambert S., Pöchhacker, Tommola), 7 a degree of 3 (1 joint publication: Gerzymisch-Arbogast, House, Rothkegel; Collados, Dubslaff, Hansen, Martinsen), 7 a degree of 2 (Gambier, Lambert J., Malmkjaer, Sánchez, Schjoldager, Snell-Hornby, Taylor) and 3 a degree of 1 (Alikina, DeDax, Kurz).
As all of the authors are connected via Gile, there are no outsiders (with a degree of 0) in this network. The density of a network is another calculable analytical indicator; it determines how fast information spreads and which agents obtain information earlier than others (by having more connections with alters). If all nodes (N) within a network are linked, the density ('delta') is 1. Delta is defined as the ratio of the number of all existing ties to the number of all potential ties. The number of all possible ties is calculated as N*(N-1) (cf. Jansen 2003: 94-95). The whole co-authorship network has 30 nodes in total (including Gile). This means a total of 870 possible links, of which 136 links between the different agents are realised (= a). The density of the whole network is therefore a/[N*(N-1)] = 0.1563, i.e. only 15.63% of all possible ties have actually been realised.
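The degree and density computations described above can be sketched on a toy co-authorship network. The five-author example below is hypothetical; following the convention used in the text, possible ties are counted as N*(N-1), so each undirected co-authorship link contributes two (directed) ties:

```python
from itertools import combinations

# Hypothetical papers: each paper is the set of its authors.
papers = [
    {"Ego", "A", "B"},   # one three-author paper
    {"Ego", "C"},        # two two-author papers
    {"Ego", "D"},
]

# Build the co-authorship graph: every pair of authors on the
# same paper is linked.
neighbours = {}
for authors in papers:
    for u, v in combinations(authors, 2):
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)

n = len(neighbours)                           # number of nodes
a = sum(len(s) for s in neighbours.values())  # directed ties (each edge twice)
density = a / (n * (n - 1))                   # delta = a / [N*(N-1)]

print(n, a, density)  # 5 10 0.5
```

Applying the same formula to the real network (a = 136, N = 30) yields 136/870 = 0.1563, the density reported above.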
Figure 3. Co-authorship network
Statistics for ego networks can be calculated using UCINET. If we take a closer look at Gile's ego network, we find that he has 72 ties with other writers. The number of ties is the number of connections among all the nodes in an ego network. The total of all possible ties he could have with his alters is 812 (the size of Gile's ego network is 29 (= n); the number of all possible ties is therefore N*(N-1) = 29*28). The density of the ego network centred around Gile is therefore 72/812 = 0.089, i.e. 8.87% of all possible ties are present.

3.4 Lots of fans – citation analysis
The most cited articles or authors in a field can easily be identified by means of a General Query in PP. As Gile is mostly associated with IS, we carried out a general query with "Interpreting Studies" as the search phrase. The query was not restricted to the Humanities and therefore also returned hits from other disciplines not connected to IS. If the results are sorted by "total cites", Gideon Toury ranks second (the first T/I scholar in the list; the first entry refers to neuroscience, not T/I) and Gile ranks twelfth, making him the T/I scholar with the second highest number of citations; positions 3 to 11 are hits that do not pertain to T/I Studies. If we sort the results by "cites per year", Toury ranks seventh and Gile nineteenth. GS does not differentiate between "translation" and "interpreting" studies, which is why Toury, who publishes primarily in the field of TS, is included in the hits. The fact that Gile nonetheless occupies second place is a first indication of his considerable impact on the field of T/I Studies. An Author Impact Analysis provides an even more precise indication of the impact of an author's publications. We searched for "D Gile" in the "author name" search field. No names were excluded (e.g. RD Gile) and the subject areas were not restricted (e.g. to the Humanities), as some of the entries in GS are faulty. The search result (230 hits) was then corrected manually, which left us with 129 hits for "our" Daniel Gile. Of these 129 publications, only the monograph Basic Concepts and Models is listed twice (once as having been cited 208 times and once as having been cited once – it is probably safe to assume that the second entry is faulty, involving, for example, a spelling mistake). As PP does not allow two entries to be merged, this faulty entry could not be corrected and was ignored.
Figure 4. Screenshot of Publish or Perish
As can be seen from the screenshot, PP also provides some basic statistics: the 129 publications were cited 648 times in total; the cites per year amount to 24; the average number of citations per publication is 5.02. Gile attracts 623.43 cites per author, the papers-per-author category shows a total of 120.34, and there are 1.21 authors per paper in the GS corpus. PP also calculates some of the more complex scientometric indices, such as the h-index (Hirsch's index) or the g-index (Egghe's index). The h-index is used to quantify a scholar's scientific output: it measures the cumulative impact of a researcher's output by looking at the number of times their work has been cited (cf. Tarma Software Research 2007). It was proposed by J.E. Hirsch in 2005, hence the name (Hirsch 2005). A scholar has an h-index "if his / her Np papers have at least h citations each, and the other (Np-h) papers have no more than h citations each" (Hirsch 2005: 1). Gile's h-index is 11. As a point of comparison, Toury has an h-index of 9. The g-index is an improvement on the h-index that gives more weight to highly cited articles (cf. ibid.). It was developed by Leo Egghe in 2006 (Egghe 2006) and is defined as follows: "[g]iven a set of articles ranked in decreasing order of the number of citations they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g² citations" (ibid.). Gile's g-index
is 22, Toury’s is 33. Though PP can provide more variants of these two indices, we will leave it at that. The data presented so far clearly indicate that Gile has left quite an impact, both on GS and on the world of T/I Studies. 4. Conclusion This paper was conceived as an attempt to use citation analysis in combination with basic scientometric tools in the context of T/I Studies. The scope of the paper limits it to being no more than a mere attempt – but it would certainly be worthwhile to experiment with more data and with different computer programmes. One could go into much more detail with respect to citation analysis and the different citation indices and so attain an even closer insight into the impact of an individual author. Basing our study on PP, we only used data gleaned from GS, along with some basic citation statistics. It would certainly be interesting to widen the corpus and to take a closer look at the publications themselves that comprise the corpus, in order to obtain a more thorough insight into the thematic landscape, to trace the evolution of ideas and also to collect data on occurrences of co-citation, which could then also be visualised. We hope that this paper, despite its limited scope, can help to show that such a combined approach would be an interesting way in which to obtain an overview of the research landscape in a given field, in our case T/I Studies. A note of caution is nonetheless essential: as the discrepancies we have presented demonstrate, evaluations based entirely on citation or scientometric analyses should always be taken with a proverbial pinch of salt. The results and indices can vary (sometimes quite considerably) depending not only on the data and the databases used, but even on considerations such as the age of the authors (how long someone has been in the academic publishing market). 
Such studies can thus offer an interesting and relevant glimpse into a field of study, but they are neither adequate nor sufficient means for determining the quality of publications or deciding on the allocation of funds.

References

Batagelj, V. and Mrvar, A. 1996/2007. Pajek. Program for Analysis and Visualization of Large Networks. Reference Manual. Version October 6, 2007. Ljubljana: University of Ljubljana. Online: http://vlado.fmf.uni-lj.si/pub/networks/pajek/ (accessed 10 January 2007).
Bordons, M., Morillo, F. and Gómez, I. 2004. "Analysis of cross-disciplinary research through bibliometric tools." In Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems, H.F. Moed, W. Glänzel and U. Schmoch (eds), 437–456. Dordrecht/Boston/London: Kluwer Academic Publishers.
Borgatti, S.P., Everett, M.G. and Freeman, L.C. 2002. Ucinet for Windows: Software for Social Network Analysis. Harvard, MA: Analytic Technologies.
Borgman, C.L. 2000. "Scholarly communication and bibliometrics revisited." In The Web of Knowledge. A Festschrift in Honor of Eugene Garfield [ASIS Monograph Series], B. Cronin and H. Barsky Atkins (eds), 143–162. Medford, NJ: Information Today.
Braam, R.R., Moed, H.F. and van Raan, A.F.J. 1991. "Mapping of science by combined co-citation and word analysis II: Dynamical aspects." Journal of the American Society for Information Science 42 (4): 252–266.
Egghe, L. 2006. "Theory and practice of the g-index." Scientometrics 69 (1): 131–152.
Everett, M. and Borgatti, S.P. 2005. "Ego network betweenness." Social Networks 27: 31–38.
Fröhlich, G. 2000. "Output-Indikatoren, Impact-Maße: Artefakte der Szientometrie? Das Messen des leicht Meßbaren." AGMB aktuell. Mitteilungsblatt der Arbeitsgemeinschaft für Medizinisches Bibliothekswesen 7: 13–17.
Gauthier, É. 1998. Bibliometric Analysis of Scientific and Technological Research: A User's Guide to the Methodology. Montreal: Université du Québec à Montréal, Observatoire des Sciences et des Technologies. Online: http://www.ost.uqam.ca/OSTE/pdf/rapports/1998/Bibliometric_analysis_scientific_research.pdf (accessed 11 July 2006).
Geisler, E. 2000. The Metrics of Science and Technology. Westport, Connecticut and London: Quorum Books.
Gile, D. 2000. "The history of research into conference interpreting: A scientometric approach." Target 12 (2): 297–321.
Gile, D. 2005. "Citation patterns in the T&I didactics literature." Forum 3 (2): 85–103. Also online: http://cirinandgile.com/05%20CitationTIdidacticsformatted.rtf (accessed 10 December 2007).
Gile, D. 2006. "L'interdisciplinarité en traductologie: Une optique scientométrique." In Interdisciplinarité en traduction – Interdisciplinarity in Translation. Vol. II, S. Öztürk Kasar (ed.), 23–37.
Istanbul: Isis. Also online: http://cirinandgile.com/0206interdiscmethodIstanb.doc (accessed 10 December 2007).
Google. 2008. "About Google Scholar." Online: http://scholar.google.com (accessed 15 January 2008).
Grbić, N. 2007. "Where do we come from? What are we? Where are we going? A bibliometrical analysis of writings and research on sign language interpreting." The Sign Language Translator & Interpreter 1 (1): 15–51.
Grbić, N. 2008. "Gebärdensprachdolmetschen im deutschsprachigen Raum. Szientometrische Befunde." In Translationswissenschaftliches Kolloquium I. Beiträge zur Übersetzungs- und Dolmetschwissenschaft (Köln-Germersheim), B. Ahrens, L. Černý, M. Krein-Kühle and M. Schreiber (eds). Frankfurt am Main/Berlin/Bern/Bruxelles/New York/Oxford/Wien: Peter Lang (forthcoming).
Grbić, N. [forthcoming]. "Wörter machen Leute. Eine Analyse von deutschsprachigen Publikationen zum Community Interpreting." In Migration als Transkulturation / Translation: Zwischen Fremdwahrnehmung und Selbstverantwortung, G. Vorderobermeier and M. Wolf (eds). Münster/Hamburg/Berlin/Wien/London: LIT Verlag.
Grbić, N. and Pöllabauer, S. 2006. "Forschung zum Community Interpreting im deutschsprachigen Raum: Entwicklung, Themen und Trends." In „Ich habe mich ganz peinlich gefühlt". Forschung zum Kommunaldolmetschen in Österreich: Problemstellungen, Perspektiven und
Potenziale, N. Grbić and S. Pöllabauer (eds), 11–36. Graz: Institut für Translationswissenschaft.
Grbić, N. and Pöllabauer, S. [forthcoming a]. "Counting what counts: Research on community interpreting in German-speaking countries – A scientometric study." Accepted for publication in Target.
Grbić, N. and Pöllabauer, S. [forthcoming b]. "To count or not to count: Scientometrics as a methodological tool for investigating research on translation and interpreting." Submitted for publication in Translation and Interpreting Studies.
Hanneman, R.A. and Riddle, M. 2005. Introduction to Social Network Methods. Riverside, CA: University of California. Online: http://faculty.ucr.edu/~hanneman/ (accessed 15 January 2008).
Harzing, A.-W. 2007a. "Publish or perish." Online: http://www.harzing.com/resources.htm (accessed 15 January 2008).
Harzing, A.-W. 2007b. "Google Scholar – a new data source for citation analysis." Online: http://www.harzing.com/resources.htm (accessed 15 January 2008).
Harzing, A.-W. and van der Wal, R. 2007. "Google Scholar: The democratization of citation analysis." Accepted for Ethics in Science and Environmental Politics. Online: http://www.vr.se/download/18.34261071168fe6a62080001557/harzing_vanderwal_2007.pdf (accessed 15 January 2008).
Hicks, D. 2004. "The four literatures of social science." In Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems, H. Moed, W. Glänzel and U. Schmoch (eds), 473–495. Boston and London: Kluwer Academic Publishers.
Hirsch, J.E. 2005. "An index to quantify an individual's scientific research output." arXiv:physics/0508025v5, 29 Sept. 2005. Online: http://arxiv.org/pdf/physics/0508025 (accessed 24 January 2008).
Hyland, K. 2003. "Self-citation and self-reference: Credibility and promotion in academic publication." Journal of the American Society for Information Science and Technology 54 (3): 251–259.
Jansen, D. 2003. Einführung in die Netzwerkanalyse. Grundlagen, Methoden, Anwendungen [UTB 2241]. Opladen: Leske + Budrich.
Leydesdorff, L. 2004. "The university-industry knowledge relationship: Analyzing patents and the science base of technology." Journal of the American Society for Information Science and Technology 55 (11): 991–1001. Online: http://users.fmg.uva.nl/lleydesdorff/HiddenWeb/index.htm (accessed 15 January 2008).
Leydesdorff, L. and Amsterdamska, O. 1990. "Dimensions of citation analysis." Science, Technology and Human Values 15: 305–335. Also online: http://users.fmg.uva.nl/lleydesdorff/sthv90/ (accessed 15 January 2008).
McCain, K.W. 1990. "Mapping authors in intellectual space: Population genetics in the 1980s." In Scholarly Communication and Bibliometrics, C.L. Borgman (ed.), 194–216. Newbury Park/London/New Delhi: Sage.
Meho, L.I. [2007]. "The rise and rise of citation analysis." Paper accepted for publication in Physics World. Online: http://eprints.rclis.org/archive/00008340/01/PhysicsWorld.pdf (accessed 15 January 2008).
Noyons, E.C.M. 2004. "Science maps within a science policy context." In Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies
of S&T Systems, H.F. Moed, W. Glänzel and U. Schmoch (eds), 237–255. Dordrecht, Boston and London: Kluwer Academic Publishers.
Pöchhacker, F. 1995. "Writings and research on interpreting. A bibliographic analysis." The Interpreters' Newsletter 6: 17–32.
Pöllabauer, S. 2006. "'During the interview, the interpreter will provide a faithful translation.' The potentials and pitfalls of researching interpreting in immigration, asylum, and police settings: Methodology and research paradigms." Linguistica Antverpiensia NS 5: 229–244.
Pöllabauer, S. [forthcoming]. "Forschung zum Dolmetschen im Asylverfahren: Interdisziplinarität und Netzwerke." Accepted for publication in Lebende Sprachen.
Rousseau, R. and Zuccala, A. 2004. "A classification of author co-citations: Definitions and search strategies." Journal of the American Society for Information Science and Technology 55 (6): 513–529.
Scott, M. 2004. WordSmith Tools version 4. Oxford: Oxford University Press.
Stock, W. 2001. Publikation und Zitat. Die problematische Basis empirischer Wissenschaftsforschung [Kölner Arbeitspapiere zur Bibliotheks- und Informationswissenschaft 29]. Cologne: University of Applied Sciences Cologne. Also online: http://www.fbi.fh-koeln.de/institut/papers/kabi/volltexte/band029.pdf (accessed 29 March 2007).
Tarma Software Research. 2007. Publish or Perish User's Manual. Online: http://www.harzing.com/resources.htm (accessed 15 January 2008).
Trappmann, M., Hummell, H.J. and Sodeur, W. 2005. Strukturanalyse sozialer Netzwerke. Konzepte, Modelle, Methoden. Wiesbaden: Verlag für Sozialwissenschaften.
van Leeuwen, T. 2004. "Descriptive versus evaluative bibliometrics. Monitoring and assessing of national R&D systems." In Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems, H. Moed, W. Glänzel and U. Schmoch (eds), 373–388. Boston and London: Kluwer Academic Publishers.
White, H.D. 2000.
“Toward ego-centered citation analysis.” In The Web of Knowledge. A Festschrift in Honor of Eugene Garfield [ASIS Monograph Series], B. Cronin and H. Barsky Atkins (eds), 475–496. Medford, NJ: Information Today.
The turns of Interpreting Studies

Franz Pöchhacker
University of Vienna, Austria

Borrowing from the title as well as relevant contents of Mary Snell-Hornby's latest book on Translation Studies, this paper reviews the development of Interpreting Studies as an academic (sub)discipline and examines it for shifts and milestones that might qualify as "turns" while probing the conceptual content of this popular label. In analogy to Snell-Hornby's attribution of the roles of precursor, pioneer, master and disciple to those creating and working within a particular "tradition", this metascientific scheme is applied to the development of interpreting research since the 1950s, with Daniel Gile portrayed as the tradition's master. Engaging with his groundbreaking historiography, the well-known four-period classification is extended, with particular emphasis on developments since the mid-1990s and newly influential memes and methods as well as disciplinary sources. The notion of "paradigm" is then taken up to discuss various research traditions in the discipline, viewing shifts from one paradigm to another as the "turns" at issue in this paper. Under this heading, the "social turn" and the "qualitative turn" in Interpreting Studies are discussed in terms of their theoretical, methodological and epistemological implications.
Keywords: Interpreting Studies, paradigms, tradition, shifts, empirical turn, social turn, qualitative turn
1. Introduction

The development of research on interpreting has been a dominant concern in Daniel Gile’s impressive career. Indeed, he has pioneered the historiography of the field, with a widely known four-period account that dates back to the 1980s. His “overview of conference interpretation research” presented at the 29th Annual Conference of the American Translators Association in Seattle happened to be my first encounter with his work – and with the man himself, obviously providing rich and lasting inspiration. His account of the evolution of interpreting research
helped me (and countless others) develop a sense of orientation as to who was who, what had been done, and where one might go in this field. What was less clear at the time was the position of interpreting research on the global map of scholarly pursuits. To which broader community, with what sort of attributes, would one belong if one engaged in the study of interpreting? Twenty years ago, it seemed like being an interpreting researcher typically meant being a psychologist – or a Parisian. The evolution and the status – as a field of study and in relation to other disciplines – of what we have come to call Interpreting Studies have since been the subject of reflection and analysis by both the scholar celebrated with this book and the colleague wishing to honor him with this contribution. In addition, though, the present paper also pays homage in another dimension, vicariously, as it were, by representing the like-minded endeavors of Daniel Gile’s equally inspiring and accomplished colleague, Mary Snell-Hornby. By borrowing from the title and contents of her latest book, The Turns of Translation Studies (Snell-Hornby 2006), to build on what Daniel Gile has created over the last two decades, I hope to pay due tribute to both of my mentors while highlighting their shared commitment to the fruitful development of the Translation (and Interpreting) Studies community. After all, both of them have fostered the vision of a broad and diverse academic discipline within which Interpreting Studies has its rightful place, each serving this community, among other things, as President of the European Society for Translation Studies. Sharing that vision and commitment, my aim with this paper is to enrich our view of Interpreting Studies by portraying its evolution and status as an increasingly autonomous (sub)discipline, while foregrounding concepts that have been used to analyze the development of Translation Studies as a parent discipline. 
These include Chesterman’s (1997) “memes” and Snell-Hornby’s “turns” as well as the notion of “paradigm”, used by Snell-Hornby with reference to Vermeer (1994) and given a prominent role also in my attempt some years ago to provide an account of research traditions and disciplinary orientations in Interpreting Studies (Pöchhacker 2004). Several conceptual relations will be explored: one is the connection between turns and paradigms, and between the latter and various breakthroughs, shifts and milestones; another is the relationship between periods and paradigms; and a third, which will be taken up first, is the way “turns” in the history of ideas and research are connected with sociological factors, ultimately emphasizing that the course of scientific development, not least in the field under study, is shaped as much by personalities and their activities as by new memes and theories.
2. Precursors and pioneers

The close association between ideas as such and the people by whom they are put forward is shown very clearly in Snell-Hornby’s (2006) initial chapter on the emergence of Translation Studies as a discipline. Drawing on Lefevere’s (1977) description of “traditions”, which is in turn based on the metascientific scheme proposed by Radnitzky (1970), Snell-Hornby characterizes the evolution of the field by assigning to influential scholars the roles of “precursor”, “pioneer”, “master” or “disciple”. Thus, Friedrich Schleiermacher is mentioned as one of the precursors of modern Translation Studies, whereas Roman Jakobson and especially James Holmes are referred to as pioneers, defined by Radnitzky (1970: 9) as those who formulate the tradition’s “raw program” and sometimes also its “manifesto”. Two other names in that category will be more familiar to specialists in interpreting – Otto Kade and Danica Seleskovitch. Since these are obviously scholars who would figure prominently also when the focus is narrowed to Interpreting Studies, they represent an ideal point of departure for a reflection on the precursors, pioneers, masters and disciples in the tradition of interpreting research, and a welcome link between that zoomed-in view and the broader picture of the discipline. Compared to the study of (written) translation and the more or less academic debate on the subject, which goes back at least to Ciceronian times, interpreting has a much more limited tradition, with systematic reflection on its practice – and essentially also the professionalization thereof – going back no further than the twentieth century. Interpreting of indisputably professional quality was seen during the Paris peace negotiations of 1919, and it was a young interpreter working there, Jean Herbert, who would probably fit the label of precursor for Interpreting Studies.
Though Jean Herbert’s academic pursuits were in the field of oriental studies, his Interpreter’s Handbook (Herbert 1952) contains many ideas – on the conference interpreter’s mission, personal qualities, audience orientation, etc. – that anticipate future lines of investigation in Interpreting Studies. Having identified Herbert, the pioneer interpreter, interpreting-service organizer, teacher and textbook author, as a precursor of the field, we can conveniently use the publication of his 1952 Handbook to establish a dateline for the development of systematic reflection on interpreting. A few years later, Viennese-born Eva Paneth followed suit with her “Investigation into Conference Interpreting”, submitted as a Master’s thesis at the University of London (Paneth 1957). Her work with existing published sources and especially her extensive fieldwork at interpreter training institutions on the Continent make this very first academic thesis on interpreting an important milestone in the development of Interpreting Studies. But even though Paneth continued to work and publish in the field, she would not generally be regarded as a pioneer. Sociological factors may account for that in
part, including the fact that her main affiliation was with modern languages rather than an interpreter training institution or the conference interpreting profession. The latter applied quite fully to Danica Seleskovitch, and there can be little doubt that she perfectly fits the model of a pioneer of Interpreting Studies. After all, Snell-Hornby acknowledges her – for the idea of interpreting based on non-verbal sense – as a pioneer, not only of Interpreting Studies but also of modern Translation Studies as a whole. The dual role of pioneer ascribed to Seleskovitch is indeed highly significant, as it suggests that in the 1960s and 1970s, the study of translation and of interpreting were still progressing in unison, if not in sync. This is exemplified by Otto Kade as one of the leading representatives of the so-called Leipzig School of Translation Studies. A pioneer of modern Translation Studies in Snell-Hornby’s account (and with ample justification), Kade wrote early papers on note-taking in consecutive interpreting (Kade 1963) and on simultaneous interpreting that included valuable insights into cognitive and psycholinguistic processes and raised communicative as well as ethical issues, setting the stage for subsequent work by such representatives of the German tradition of interpreting research as Hella Kirchhoff (e.g. 1976/2002) in West Germany and Heidemarie Salevsky (1987) in what was then still the GDR. While it is now quickly being forgotten, the East-West division constituted a formidable barrier to scholarly exchange and community-building, in Germany and in Europe, if not the world as a whole. It is against this backdrop that the role of Ghelly Chernov (e.g. 1979/2002) as yet another pioneer of Interpreting Studies must be seen. Though he worked at the UN in New York and had access to Western channels of publication, the influence of his work in the European heartland of interpreting was rather limited.
It would be fascinating to speculate on his role in the field if his 1978 monograph had been published not in Moscow (in Russian) but in English, in, say, London or Amsterdam, as it was more than a quarter-century later (Chernov 2004). What distinguished Chernov’s work, apart from his fundamental insight into probabilistic (prediction-based) comprehension as a cornerstone of the simultaneous interpreting process, was his pioneering interdisciplinary research in cooperation with experimental psychologists. In doing so, around 1970, he practiced what would be preached, not least by Daniel Gile, many years later, and pursued a line of (psycholinguistic-cognitive) research that, championed for many years by Barbara Moser-Mercer (e.g. 1997), yielded outstanding studies – most notably the PhD theses by Miriam Shlesinger (2000) and Minhua Liu (2001) – as recently as the turn of the millennium. Bringing together simultaneous interpreting and experimental psychology was also among the early achievements of Ingrid Kurz, née Pinter (1969), yet another pioneering figure in the field of conference interpreting research. She did
this in “personal union”, having trained as an interpreter and studied psychology at the same time. And by the criterion of a 1969 doctorate in psychology, Henri Barik would also have to be counted among the pioneers of Interpreting Studies. But this may well overextend the notion of a pioneer in the sense used here to account for the seminal publications and personalities that shaped the tradition of an academic field. Mindful that neither Chernov, Kade nor Seleskovitch, conference interpreters all of them, lived to celebrate the first 50 years of the field since Herbert’s Handbook, it seems appropriate and coherent to retain their names as those of the three pioneers of Interpreting Studies. Admittedly, the proposal to appoint one precursor and single out three pioneers only works well for the formative decades of interpreting research, until around 1980. Even so, it fails to give due credit to such key figures as David Gerver (e.g. 1976), and remains centered on Europe, though ground-breaking writings on interpreting were also produced in Japan in the 1960s. What is more, it does not do justice to the fact that interpreting is practiced beyond international conferences and organizations, and that it is not limited to the use of spoken languages. In the United States, at least, sign language interpreters have organized, or been organized, since the mid-1960s, and pioneering professionals and scholars such as Dennis Cokely (e.g. 1984) could well claim their rightful place as pioneers of sign language Interpreting Studies.
I would therefore go beyond the scheme of roles, which Radnitzky himself suggests are borne by “publications rather than persons” (1970: 9), and briefly reflect on various milestones in the emergence of our discipline before considering how these relate to the role of master and to expressions such as “new ideas” or “shifting points of view” which are associated in Snell-Hornby’s account with the notion of paradigms as well as with the “turns” that constitute the central theme of this paper.

3. Milestones and masters

With reference to Kuhn (1962) and Vermeer (1994), Snell-Hornby undertakes to review “the development of Translation Studies over the past thirty years” in terms of “paradigmatic changes or milestones” (2006: 159). It is not quite clear to what extent she wishes to equate “milestones” and “turns”; but if a “milestone” in her account is a paradigm shift in the Kuhnian sense, and a “turn” is presumably a shift from one paradigm to another, the two concepts indeed reflect considerable conceptual overlap, not least because they can both be mapped onto the same metaphorical domain, namely that of a pathway, course or road. The road metaphor, in particular, lends itself to the use of “milestones” and at the same time suggests the possibility of a point along the way where the road turns. Nevertheless, for all the
connotative potential of “milestones” and “turns”, I would suggest that one can also profitably make a distinction between a turning point or change of course in some forward movement, and an indicator or marker of how far along one has come; in other words, between a turn, or shift in direction, and a milestone or landmark along the way. In connection with the development of an academic discipline, the idea of milestones, or landmarks reached, is a useful and flexible descriptive scheme. Though it is not the main concern of this paper, I will propose a list of such milestones for Interpreting Studies, mainly in order to highlight that there are many different dimensions in which such milestones can be reached, and that all of these should probably be taken into account when considering the state of disciplinary development in terms of the complex notion of paradigm as discussed in more detail below. Indeed, the idea of the paradigm is also illustrated by Snell-Hornby (2006: ix) with the metaphor of breaking new ground in a given “territory”, which seems no less apt for the idea of innovation and new frontiers captured by the notion of milestones. To start at the very beginning, and indeed long before the conventional second-half-of-the-twentieth-century account of interpreting research, the interview study on the work and abilities of early conference interpreters by Jesús Sanz (1931), rediscovered thanks to his compatriot Jesús Baigorri Jalón (2000), surely deserves credit as the first milestone on the road to a full-fledged field of research. Following an extended barren stretch, the historical studies by Alfred Hermann (1956/2002) and Karl Thieme (Thieme et al. 1956) as well as the MA thesis by Paneth (1957) were landmark contributions to the literature in the 1950s, aside from Herbert’s (1952) Handbook already mentioned above. 
While these milestones represent “firsts” in the categories of empirical study, academic thesis, and monograph, others can be discerned with reference to methodological approaches. They include the first experimental study, by Oléron and Nanpon (1965/2002), devoted to simultaneous interpreting as such (rather than involving an interpreting task, as in the study by Anne Treisman, for example), and the first experimental study on consecutive interpreting, by Seleskovitch (1975). Fellow members of the Paris School (e.g. Lederer 1981) could also claim the distinction of having done the first analysis of a substantial corpus of authentic professional interpreting performance, or the first doctoral thesis devoted to court interpreting (Driesen 1985). The very first PhD thesis on interpreting as such was defended by Henri Barik (1969), but a review of doctoral theses on various types or professional domains of interpreting, and in various countries and languages, would yield a much longer roster of ground-breaking achievements. Considering the crucial importance of doctoral theses for progress in an academic discipline, the founding of the first doctoral studies program, by Seleskovitch, at the University of Paris/Sorbonne Nouvelle in 1974 must be singled out as
the crucial landmark in the emergence of the discipline. Seminal PhD research has of course been launched also from other disciplinary territory, but the existence of a dedicated PhD studies program, such as the Paris one in traductologie, would seem vital to creating a research environment that facilitates a coherent, incremental approach and consistent activity. A steady output of advanced research will in turn create the material needed to sustain institutionalized publication channels. In line with a broader translatological outlook, however, publishers covering translation and interpreting have tended to favor an inclusive policy, for book series as well as journals, rather than devoting an entire series or periodical to interpreting only. Books on interpreting, such as the first collective volume devoted to court interpreting (Roberts 1981), have thus been accommodated in such series as “Cahiers de traductologie”, founded in 1979 by Paris School disciple Jean Delisle, and the Paris-based “Collection Traductologie” (e.g. Seleskovitch and Lederer 1984, 1989). The same applies to book series in English, most notably the quintessential “Benjamins Translation Library” (BTL), the third volume of which was a collection of papers on interpreting research (Lambert and Moser-Mercer 1994). Monographs on interpreting have since found an inviting home in that series, as highlighted by the recent bumper year of BTL volumes on interpreting (Angelelli 2004a; Chernov 2004; Diriker 2004; Hale 2004; Sawyer 2004). Book series on interpreting are still rare; examples include the (sign language-based) “Studies in Interpretation” series started by Gallaudet University Press in 2003, and a German-language series edited by Dörte Andres, “InterPartes”, which in 2004 started publishing volumes based on MA theses. A BTL subseries of sorts devoted solely to interpreting is constituted by the volumes of selected papers from the Critical Link conferences (Carr et al. 1997; Roberts et al. 
2000; Brunette et al. 2003; Wadensjö et al. 2007), still the only institutionalized international conference series on interpreting (aside from the more regional and profession-oriented FIT Forum on court interpreting and legal translation). As milestones go, the first Critical Link Conference at Geneva Park (Toronto) in 1995 is surely among the key foundational events in the history of the discipline. The same must be said of the very first international – and interdisciplinary – gathering of researchers interested in interpreting, (co-)organized by David Gerver in Venice in 1977 (Gerver and Sinaiko 1978). Incidentally, subsequent conferences on interpreting held in Italy proved no less significant. Thus, the 1986 Trieste Symposium (Gran and Dodds 1989) is said to have ushered in the era that superseded the predominance of the Paris School; and at the close of the millennium, the international “Conference on Interpreting Studies” hosted at Forlì (Garzone and Viezzi 2002) for the first time showcased a variegated discipline incorporating multiple professional domains and methodological approaches,
completing a process of growth and maturation in the course of the 1990s of which the 1994 Turku Symposium (Gambier et al. 1997) had offered a first glimpse. Among the vital components of an academic discipline are scientific journals, ideally of the international, peer-reviewed kind. As mentioned above, interpreting scholars have had access, from the very beginning in the 1950s, to a number of journals publishing papers on translation and interpreting, such as Babel or Meta. The special issue of Target on “Interpreting Research”, guest-edited by Daniel Gile in 1995, is one of the most remarkable examples. Even so, The Interpreters’ Newsletter, founded as a vehicle for information exchange in the wake of the Trieste Symposium, soon outgrew its name and became the field’s first dedicated periodical for academic papers. A total of 13 (roughly annual) issues were published from 1988 to 2005. Midway into this significant effort by colleagues at the University of Trieste, Barbara Moser-Mercer sent an unmistakable signal of the field’s (inter)disciplinary aspirations by starting up Interpreting, the first international peer-reviewed journal of research on interpreting, together with a cognitive scientist as co-editor and five associate editors with a background in psychology or neuroscience. Aside from setting high standards for published research, a peer-reviewed international journal like Interpreting (whose boards were revamped when new editors took over in 2004) also has fundamental implications regarding language: Whereas French served as a commonly accepted working language of scholarship in interpreting until well into the 1980s, the tide had been turning in favor of English for some time, and over the 1990s, English became the undisputed lingua franca of Interpreting Studies (and most other fields). 
While some journals, such as Meta and Forum, commendably avoid such monolingualism, the operation of a reliable peer-review system in a relatively small and specialized but all the more international field of research ultimately mandates a shared language, if referees are to be selected on the basis of their relevant expertise rather than their linguistic background. Even more so than at the back end of the scientific publishing process, a language shared by all members of the discipline is an indispensable requirement for its input stage, the quality of which depends on the linguistic accessibility of the state of the art, in terms of both content and methods. Aside from the processing of relevant literature, this relates to the crucial importance of research training, in PhD Schools or similar courses. And while Daniel Gile has played a role in many of the milestones reviewed above, his untiring commitment to the dissemination of information and know-how, on the printed page and in the lecture room, makes his name inseparable from the research training that is so vital to progress in our discipline. Whether as sole editor of the IRTIN/CIRIN Bulletin since 1990; as the first interpreting scholar appointed to the CE(T)RA Chair, in 1993; as the co-organizer of the 1997 Aarhus Seminar on Interpreting Research and chief editor of
the resulting resource book (Gile et al. 2001); and of course as the author of numerous publications offering keen analysis and methodological guidance (e.g. 1990, 1991, 1995, 1998), Daniel Gile, more than any other member of the interpreting research community who has taken further what was built up by the pioneers, deserves to be addressed as the field’s master. In one way or another, he has provided inspiration and orientation to most, if not all interpreting researchers active today, making them, or us, disciples. (Incidentally, Mary Snell-Hornby, zooming in on Interpreting Studies, suggests that “former ‘disciples’ of Translation Studies such as Miriam Shlesinger and Franz Pöchhacker became pioneers of Interpreting Studies” (2006: 163) – a view with which I must humbly beg to differ.) In Radnitzky’s (1970: 9) description of the roles of master and disciple, the masters “carry out a part of the program and their work sets the standard by means of which the disciples measure their success.” Considering Daniel Gile’s equally supportive and uncompromising way of critiquing the work of newcomers and veterans alike, the focus on standard-setting and authoritative judgment seems particularly appropriate. Equally pertinent is the reference to carrying out “a part of the program”: the Effort Models (e.g. Gile 1997/2002) have provided a robust theoretical frame of reference for the study of cognitive processing in all modes of (conference) interpreting. But here precisely, in the parenthetical identification of conference interpreting as the default domain under consideration, lies the main problem with the above account of tradition shapers and movers in Interpreting Studies. 
As indicated toward the end of the previous section, the increasing diversification of interpreting research since the 1980s, and the substantial input from scholars entering the field via different disciplinary feeder roads, have tended to blur the outline of developmental stages in Interpreting Studies. Succinctly put, one could set the focus to interpreting in signed languages or community-based settings, and suggest incumbents for the role types in these traditions. For sign language interpreting, one might acknowledge pioneers like Robert Ingram (1978, 1985) and accord a similar status to Dennis Cokely and Cynthia Roy, if not classifying the latter or both of them as masters. By the same token, Ruth Morris, Holly Mikkelson and Susan Berk-Seligson would be leading protagonists in the domain of court interpreting, while such roles – of pioneers or masters – would be played by Brian Harris, Roda Roberts, Cecilia Wadensjö and Ian Mason in the tradition of community interpreting research. Again, though, the role-play should not be taken too far, lest the modestly sized Interpreting Studies community be presented as a sort of all-star team. In that metaphorical domain, the image of captains and team players, analogous to Radnitzky’s masters and disciples, would seem more appropriate, and also leaves
open the possibility that there may be several different, and perhaps even competing, teams on, or in, the field. Before moving on to the issue of competing teams, or paradigms, I would like to close these reflections on the master(s) – or captain(s) – by stressing that I take a master – such as Daniel Gile – to play a multi-faceted role, bringing and promoting innovation and progress not only by proposing a new idea or developing a new method, that is, by virtue of his or her intellectual output, but also by disseminating, networking, training, synthesizing and organizing. Through these activity types, which are a matter of sociology much more than of epistemology, the role of master in Radnitzky’s scheme, and his notion of “tradition” in general, steer us toward the Kuhnian concept of paradigm, which after all foregrounds the scientific community and what is shared by its members. In this light, masters can be characterized as paradigm builders, whereas pioneers are ‘turners’, bringing about a turn from one paradigm, or era, to another.

4. Periods and paradigms

Not very much has been written to date about eras or periods in the history of Interpreting Studies, Daniel Gile’s writings being the most notable exception. Some time before his first “overview of conference interpretation research and theory” (Gile 1988), Jennifer Mackintosh, speaking at the close of the 1986 Trieste Symposium, had discerned the beginning of “‘The Trieste Era’ in interpretation studies” (Gran and Dodds 1989: 268).
Indeed, having played an active part in that buoyant development, Gile (1988) made a two-fold distinction between a “first stage of interpretation research and theory (IRT)” in the 1960s and 1970s, which gave rise to “a general philosophy of interpretation as a communication activity rather than a linguistic transcoding operation” (1988: 364), and a “new start” driven by a growing consensus that “the time has come to go further than assertions and try to study in a more precise way specific issues beyond general theories and principles” (1988: 367). In view of early experimental work on such issues as time lag and pause patterns (Oléron and Nanpon 1965/2002; Goldman-Eisler 1967; Barik 1969), Gile’s initial contrast, between “general theories and models” and “proper research” aiming to “test particular hypotheses by experimental methods” (1988: 364), was elaborated on in his much-quoted four-period history of interpreting research (1994: 149ff), which comprised “first steps” in the 1950s; psychological experimenting in the 1960s and early 1970s; the practitioners’ period from the early 1970s to the mid-1980s; and the “Renaissance” ushered in by the Trieste Symposium. Although Gile’s account also mentions individual “theoreticians and researchers”, the
fundamental concern underlying his periodization seems to be methodological. In this respect, the “Renaissance” in Interpreting Studies can be understood as a revalidation of the scientific tradition started by the likes of Gerver and Goldman-Eisler in the late sixties and seventies, drawing on the disciplinary frameworks of cognitive psychology and psycholinguistics, with experimenting as the methodology of choice. The overall goal of elucidating the cognitive process of interpreting was arguably the same as that addressed by Seleskovitch with her deverbalization model and even her “psychological approach” (Seleskovitch 1976). Much work in the Trieste era, as published in successive issues of The Interpreters’ Newsletter, thus reflected the methodological (re)orientation of interpreting research, which was expressed also in the editors’ introduction to the Trieste Symposium proceedings: “A scientific approach means that an intuitive hypothesis is subjected to experimental testing” (Gran and Dodds 1989: 12). Alluding in no uncertain terms to the difference in approach between the Paris School and the Trieste era of interpreting research, Moser-Mercer (1994: 20) distinguished between a “liberal arts community” (e.g. Seleskovitch) and researchers (like Barik, Gerver, Gile, Mackintosh, Pinter, Stenzl and herself) working in what she called the “natural science paradigm”. With the focus clearly on the issue of methodological rigor, the programmatic displacement of the Paris School by “a scientific, interdisciplinary approach” (Gran and Dodds 1989: 14) largely fits the Kuhnian notion of a paradigm shift. Gile’s (1994) account of periods in Interpreting Studies, leading toward higher scientific aspirations, ends in the mid-1990s but is validated in a subsequent quantitative analysis of conference interpreting research (CIR).
Based on bibliographical data, and especially some sobering findings regarding interdisciplinarity, Gile sees a “contrast between the changing aspirations and the (relatively) static reality over the years” (2000: 315). Whether one shares this rather pessimistic view of a “static reality” or would rather draw hope from the various milestones recorded for the past decade of Interpreting Studies, it must be stressed that Gile’s analysis is limited to research on conference interpreting, through the 1990s. The latter is precisely the decade that saw momentous changes in the definition of the field of interpreting as such, which one could summarize as a period of diversification. As interpreting in community-based settings gained increasing recognition, as a professional practice as well as an object of research, the concept of interpreting came to be expanded, and with it the repertoire of models and methods for studying it. Most consequentially, the ground-breaking PhD work by Roy (1993/2002, 2000) and Wadensjö (1993/2002, 1998), from the perspectives of sociolinguistic discourse analysis (conversation analysis, pragmatics) and the sociology of interaction as well as a dialogic philosophy of language and communication, offered a research model that proved
particularly well suited to the analysis of dialog interpreting in institutional interaction. Shifting the focus from text production and information processing to co-constructed “talk” and discourse management in triadic encounters, and relying on the analysis of authentic discourse, this novel research approach to interpreting as discourse-based interaction can aptly be described as a new paradigm. Having listed numerous milestones and identified at least three different paradigms (out of the more detailed account in my 2004 textbook), we are ready now to link these to the idea of turns foregrounded in the title of this paper.

5. Turns, turns…

While the notion of turns is pivotal to Snell-Hornby’s (2006) overview of modern Translation Studies, not too much effort has been expended on clarifying what it is and signifies, in the context of Translation Studies or of other disciplines. The idea of a turn has obvious intuitive appeal, and its basic meaning of a change in direction or re-orientation is readily apparent. In connection with a branch of science or field of academic study, however, it would be relevant to specify just what aspect(s) of a disciplinary matrix (to use Kuhn’s terminological replacement for his initial choice of paradigm) must be affected by such change for there to be a turn in the development of the field as a whole. Recalling that Kuhn saw paradigms as constituted by shared basic assumptions, theories, models, values and standard methods, any of these – novel theories, methods, models or basic premises – might be the driving forces of a turn. Alternatively, a paradigm can also be characterized more holistically as a worldview, as a certain way of ‘seeing’, of conceptualizing and approaching the object of study in a scientific community.
It is in this broader sense, of a change in conceptual focus, that one can best relate to the turns that have been identified in various disciplines, including linguistics and literary studies as root disciplines of Translation Studies. Following the linguistic turn in twentieth-century Western philosophy, and the pragmatic turn in linguistics, a far-reaching ‘cultural turn’ has been identified in the humanities and the social sciences (cf. Bachmann-Medick 2006). Indeed, the cultural turn was what Snell-Hornby described so effectively for Translation Studies in the 1980s (Snell-Hornby 1986, 1988) and even more vividly in her 2006 update. In terms of its change in focus, that theoretical reorientation in translation theory might also have been characterized as a functional turn. After all, the rethink initiated by Hans Vermeer in Germany foregrounded the function, or purpose, of translation as much as its cultural embeddedness, and it is Vermeer’s (1994: 3) definition of paradigmatic change as “the straightforward leap to a new idea or point of view” that is cited by Snell-Hornby (2006: 2) as the chief inspiration
The turns of Interpreting Studies
on the topic of “paradigms and progress”. Nevertheless, it was the cultural dimension that was to prevail, not least because it linked up with the fundamental shift in the humanities at large from objectively described meaning to meaning as contextually constrained and subjectively construed.

5.1 The empirical turn
Against this background, it seems remarkable that, notwithstanding such shared pioneers as Kade and Seleskovitch, the cultural turn in Translation Studies apparently did not carry over to Interpreting Studies at the time. On the contrary, the years when the former was taking its cultural turn, in the mid- to late 1980s, were those dominated by the call for more stringent (objective, scientific) empirical research on interpreting, epitomized by Daniel Gile’s (1990) dichotomy between “speculative theorizing vs. empirical research”. The neurolinguistic paradigm launched at the University of Trieste chimed with this call most resoundingly, but so did the many experimental investigations in the classic information-processing tradition of Gerver. In the late 1980s, then, Translation Studies and Interpreting Studies were apparently no longer in sync, having veered off in different directions – in terms of conceptual focus and methodological approach. Interestingly, Snell-Hornby’s account of Translation Studies in the 1990s identifies a change of course for the discipline as a whole which she designates as “the empirical turn” (2006: 115ff). Underlying this development, in her view, were new departures in the subfield of Interpreting Studies, as championed by Daniel Gile with regard to methodology and interdisciplinary cooperation, and extended to other domains of interpreting by his (and her) disciples. Recalling my earlier review of milestones, which included reference to major observational (corpus-based) studies of interpreting carried out as early as the 1970s, not to mention psychologists’ experimental research in the preceding decade, I would suggest that the empirical turn in Interpreting Studies predates the 1990s, or even that there was never a need for such a turn in interpreting research, given its strong roots in experimental psychology rather than linguistic or literary theorizing.
Though expressed in the mantra of empirical research, the shift identified by Snell-Hornby may have had to do with a push for interdisciplinary scientific research, as promoted by Barbara Moser-Mercer and especially in Daniel Gile’s (1994) call for an “opening up” in Interpreting Studies, more so than with a fundamental change in methodological approach. If anything, one could speak of an empirical ‘re-turn’ following upon the Paris School’s emphasis on disseminating strongly held ideas and insights rather than questioning them through fresh empirical research. A number of indicators for an interdisciplinary turn in Interpreting Studies could be listed for the 1990s. These would include Gile’s (1994) plea at the Vienna
Translation Studies Congress in 1992 on the theme of “Translation studies – an interdiscipline?”; the choice of plenary speakers at the 1994 Turku Conference (Gambier et al. 1997); Barbara Moser-Mercer’s launch of the journal Interpreting in cooperation with cognitive scientist Dominic Massaro; and her Ascona Workshops bringing together leading representatives of the cognitive sciences with researchers in interpreting. As noted above, the results of such interdisciplinary endeavors in the 1990s were relatively modest (Gile 2000: 315), prompting even such key representatives of scientific interdisciplinarity as Laura Gran to claim for interpreting scholars a research paradigm of their own, suggesting that “we have to become more and more aware of the specificity of our discipline” (Fabbro and Gran 1997: 26).

5.2 The social turn
Whether or not there was an interdisciplinary turn in the 1990s, there is no doubt that Interpreting Studies – and Translation Studies as a whole – experienced a profound change resulting from the emergence of community-based interpreting. In her account of the “turns of the 1990s” in Translation Studies, Snell-Hornby gives ample space to “the discovery of new fields of interpreting”, which she welcomes as “perhaps the most exciting development of the decade” (2006: 116). The application of new technologies, mainly in various forms of remote interpreting, is appropriately singled out as one of two broad areas of expansion. There is mention of a “technological turn”, but Snell-Hornby rightly points out that interpreting can be said to have taken it early in the twentieth century with the invention of electroacoustic transmission systems for simultaneous interpreting. She then goes on to describe the field of community interpreting, illustrating its variety and complexity with reference to different institutional settings, cultural constellations and language modalities. Even so, she does not put a label on these developments but presents them under the heading of the empirical turn before moving on to discuss the “globalization turn” (2006: 128) as the second major reorientation of the decade in Translation Studies. If one narrows the focus to the (sub)discipline of Interpreting Studies, I would argue that the developments perceived by Snell-Hornby as highly remarkable even in the much bigger picture of Translation Studies as a whole do deserve special mention. Indeed, in my talk at the 10th Prague Conference on Translation and Interpreting in 2003 (Pym et al. 2006), exactly ten years after the term “Interpreting Studies” first appeared in the title of a paper (Salevsky 1993), it was suggested that the evolution of the field in the 1990s might be seen as a turn or paradigm shift best characterized as a process of “going social” (Pöchhacker 2006).
Based on a review of Interpreting Studies in terms of memes, models and methods, I have argued that the social turn in our discipline has been taken in several dimensions. At the heart of it is a shift from cognition to ‘interaction’ as the main conceptual point of reference on the map of ideas about interpreting (cf. Pöchhacker 2004: 60), the meme of interpreting as an interactive discourse process gaining prominence alongside, or at the cost of, interpreting as cognitive information processing. Rather than cognitive processes, the dynamics of interpersonal interaction were brought into view, giving renewed significance to efforts at modeling interpreting at the level of interaction (e.g. Anderson 1976/2002). Even so, this shift in memes and models of interpreting, which applies, in principle, to interpreting in conference settings as well as in face-to-face dialog, would not necessarily amount to a paradigm shift, as alluded to in Snell-Hornby’s (2006: 159) distinction between “new paradigms or shifting viewpoints”. In addition to the adoption of a different point of view, however, Interpreting Studies in the 1990s also experienced a reorientation toward previously distant disciplinary frameworks. On the one hand, the study of (interpreter-mediated) interaction was likely to benefit from such research traditions as interactional sociolinguistics and the sociology of interaction, as exemplified by Wadensjö’s adoption and elaboration of Goffman’s work. On the other hand, engaging with interpreting in community-based settings also brought into view the institutional contexts in which it takes place, and with it the particulars of various social institutions, be they judicial, medical, or educational. Thus, analysts seeking to understand the workings of social institutions had reason to take an interest in social theory as such, as done most prominently by Moira Inghilleri (e.g. 2003) in her work on interpreting in Britain’s asylum process. 
Whether centered on the (micro) sociology of interaction or the (macro) sociology of societal institutions, approaches to interpreting from the field of sociology nowadays hold great and obvious promise. One might note, however, that interpreting researchers have not really been oblivious to the dimension of interpreting as a profession in society, and to the tools furnished for its study by the sociology of professions. Kurz (1991), for one, conducted a survey to investigate conference interpreters’ occupational prestige, a topic also addressed by Feldweg’s (1996) study of the conference interpreting profession. The model in Joseph Tseng’s (1992) study on the professionalization of conference interpreting in Taiwan, in turn, has had productive repercussions on socio-professional analyses of signed-language and community-based interpreting, as carried out more recently by Mette Rudvin (2007). Unlike Snell-Hornby (2006), scholars such as Michaela Wolf (2007) and Erich Prunč have identified a “sociological turn in translation studies in the late 1990s” (Prunč 2007: 42). It seems too early to say whether this applies also to Interpreting Studies
as a whole. As long as social theory as such does not emerge more fully as a mainstream concern for interpreting researchers, it seems safer to subsume the distinctly sociological, and in particular the macro-social, orientation under the implications of the social turn. This seems appropriate also because the social turn in Interpreting Studies is associated not only with a disciplinary rapprochement with sociology but also with yet another methodological reorientation. This shift, or extension of the methodological repertoire to include social science methods, is easily identified by its very name as part of the social turn. And that label, while admittedly rather coarse, can in turn serve to reveal a shift of even more fundamental significance.

5.3 The qualitative turn
Social-science methods have played a part in the evolution of Interpreting Studies from its very beginning, from the interview study by Sanz (1931) to the ethnographic fieldwork by Paneth (1957) to the surveys by Bühler (1986) and Kurz (1993/2002), to name but a few. In the course of the 1990s, the use of questionnaires gained further ground, not least in relation to community interpreting, where service providers, interpreters and users of interpreting services have been questioned about such issues as interpreting needs, role expectations and satisfaction. Many of these studies have relied on quantification, but various service-related attitudes and perceptions informed by personal experience have increasingly been captured by the use of interviews and focus groups yielding qualitative data. In combination with various observational techniques, this has given a certain currency to the ethnographic approach in Interpreting Studies. Examples include Claudia Angelelli’s (2004b) work on medical interpreters and, for conference settings, the study by Ebru Diriker (2004), who has also taken up Michael Cronin’s (2002) plea for a “cultural turn in Interpreting Studies”.

Transcriptions of semi-structured or in-depth interviews and fieldnotes from participant observation constitute a rich source of qualitative data in their own right. Increasingly, however, such data are triangulated, where possible, with discourse records of authentic interpreter-mediated encounters, as featured in Wadensjö’s (1998) seminal work. And while such discourse data have been explored with the help of different analytical approaches – from conversation analysis to functional pragmatics to critical discourse analysis – the implication of using such qualitative data is ultimately the same: namely, the postmodern acknowledgment that data are not ‘there’ as a given but taken by the analyst, and interpreted in the light of his or her socio-cognitive background and orientation.
Thus, the critical engagement with various types of qualitative data has brought interpreting researchers closer to the non-essentialist epistemologies that inform what is described
most comprehensively as the qualitative research paradigm (Denzin and Lincoln 2000). Compared to the pioneering work of experimental psychologists in the 1960s, recent studies of interpreting from the perspective of critical science (e.g. Inghilleri 2005, Pöllabauer 2005) suggest that Interpreting Studies has clearly taken some sharp turns. But while there have indeed been some major shifts in theory, methodology and epistemology, as reviewed above, these must also be seen in the overall context of progress in scientific thinking. As an activity constrained and shaped by its broader environment, interpreting research, like interpreting as such, has changed with the times, undergoing the transformations that have affected late-twentieth-century ways of doing science in general, and taking up intellectual trends emerging in closely related fields and its root or parent disciplines.

6. Conclusion: Our turn

Interpreting Studies early in the twenty-first century is clearly different from what it was, say, two decades earlier, when it is said to have taken its empirical turn. The discipline has passed numerous milestones and undergone theoretical and methodological shifts, some of which have been so deep and consequential as to be identified here as paradigmatic shifts or turns. Given its status as a subdiscipline within Translation Studies, one could assume such shifts to have carried over from the parent discipline, and this is in part what motivated the analysis presented in this paper. Using Snell-Hornby’s (2006) account as a point of reference and departure, several turns have been described that, on the one hand, roughly correspond with shifts in Translation Studies and related fields, and, on the other, reflect a timing and manifestation of their own.
Limiting the focus to Interpreting Studies as such, the turns identified include a methodological reorientation in the late 1980s, a more far-reaching paradigm change toward the end of the century, and a related methodological shift implying also a challenge to the more empiricist epistemological foundations of a field once thought to be a mere testing ground for experimental psychologists. Indeed, this is where one has to identify a quintessential paradigm shift that would qualify as such even in the eyes of those who adopt a more conservative position on Kuhnian “scientific revolutions” (and might find the identification of several turns or paradigm shifts in a scarcely fifty-year-old field of research somewhat exaggerated). What really happened to interpreting research – whether in the early 1970s or 1990s or as late as the early 2000s – was its emergence as a disciplinary matrix to begin with, a “tradition” that, as Lefevere put it, is “consciously shaped and established by a number of people who share the same, or at least analogous, goals over
a number of years” (1977: 1). Among the people shaping that paradigm, Daniel Gile, taking the torch from Danica Seleskovitch, surely stands out as the most influential leader – the paradigm builder, or master, providing innovative ideas and methodological guidance as well as networking and training resources for the benefit of the community. It has been a privilege to share with Daniel Gile the same, or at least analogous, goals over twenty years, and to see a field take shape that some of us, years ago, could only dream of, as Daniel Gile once drew it in a dedication of his Regards sur la recherche. His masterful drawing is reproduced below, closing a chapter that is dedicated wholeheartedly to the master of the pen as well as the booth.
References

Anderson, R.B.W. 1976/2002. “Perspectives on the role of interpreter.” In Pöchhacker and Shlesinger (eds), 209–217.
Angelelli, C.V. 2004a. Revisiting the Interpreter’s Role: A Study of conference, court, and medical interpreters in Canada, Mexico, and the United States. Amsterdam/Philadelphia: John Benjamins.
Angelelli, C.V. 2004b. Medical Interpreting and Cross-Cultural Communication. Cambridge: Cambridge University Press.
Bachmann-Medick, D. 2006. Cultural Turns: Neuorientierungen in den Kulturwissenschaften. Reinbek bei Hamburg: Rowohlt.
Baigorri Jalón, J. 2000. La interpretación de conferencias: el nacimiento de una profesión. De París a Nuremberg. Granada: Comares.
Barik, H.C. 1969. A Study of Simultaneous Interpretation. PhD thesis, University of North Carolina at Chapel Hill.
Brislin, R.W. (ed.). 1976. Translation: Applications and Research. New York: Gardner Press.
Brunette, L., Bastin, G., Hemlin, I. and Clarke, H. (eds). 2003. The Critical Link 3: Interpreters in the Community. Amsterdam/Philadelphia: John Benjamins.
Bühler, H. 1986. “Linguistic (semantic) and extra-linguistic (pragmatic) criteria for the evaluation of conference interpretation and interpreters.” Multilingua 5 (4): 231–235.
Carr, S.E., Roberts, R., Dufour, A. and Steyn, D. (eds). 1997. The Critical Link: Interpreters in the Community. Papers from the First International Conference on Interpreting in Legal, Health, and Social Service Settings (Geneva Park, Canada, June 1–4, 1995). Amsterdam/Philadelphia: John Benjamins.
Chernov, G.V. 1978. Teoriya i praktika sinkhronnogo perevoda [Theory and Practice of Simultaneous Interpretation]. Moscow: Mezhdunarodnyye otnosheniya.
Chernov, G.V. 1979/2002. “Semantic aspects of psycholinguistic research in simultaneous interpretation.” In Pöchhacker and Shlesinger (eds), 99–109.
Chernov, G.V. 2004. Inference and Anticipation in Simultaneous Interpreting: A probability-prediction model. Amsterdam/Philadelphia: John Benjamins.
Chesterman, A. 1997. Memes of Translation: The Spread of Ideas in Translation Theory. Amsterdam/Philadelphia: John Benjamins.
Cokely, D. 1984. Towards a Sociolinguistic Model of the Interpreting Process: Focus on ASL and English. PhD thesis, Georgetown University.
Cronin, M. 2002. “The empire talks back: Orality, heteronomy and the cultural turn in Interpreting Studies.” In Pöchhacker and Shlesinger (eds), 387–397.
Denzin, N.K. and Lincoln, Y.S. 2000. Handbook of Qualitative Research, 2nd ed. Thousand Oaks/London/New Delhi: Sage.
Diriker, E. 2004. De-/Re-Contextualising Conference Interpreting: Interpreters in the Ivory Tower? Amsterdam/Philadelphia: John Benjamins.
Driesen, C.J. 1985. L’interprétation auprès des tribunaux pénaux de la République Fédérale d’Allemagne. Thèse de doctorat, Université de la Sorbonne Nouvelle.
Fabbro, F. and Gran, L. 1997. “Neurolinguistic research in simultaneous interpretation.” In Conference Interpreting: Current Trends in Research, Y. Gambier, D. Gile and C. Taylor (eds), 9–27. Amsterdam/Philadelphia: John Benjamins.
Feldweg, E. 1996. Der Konferenzdolmetscher im internationalen Kommunikationsprozeß. Heidelberg: Groos.
Gambier, Y., Gile, D. and Taylor, C. (eds). 1997. Conference Interpreting: Current Trends in Research. Amsterdam/Philadelphia: John Benjamins.
Garzone, G. and Viezzi, M. (eds). 2002. Interpreting in the 21st Century: Challenges and Opportunities. Amsterdam/Philadelphia: John Benjamins.
Gerver, D. 1976. “Empirical studies of simultaneous interpretation: A review and a model.” In Brislin (ed.), 165–207.
Gerver, D. and Sinaiko, H.W. (eds). 1978. Language Interpretation and Communication. Proceedings of the NATO Symposium, Venice, Italy, September 26–October 1, 1977. New York/London: Plenum Press.
Gile, D. 1988. “An overview of conference interpretation research and theory.” In Languages at Crossroads. Proceedings of the 29th Annual Conference of the American Translators Association, D.L. Hammond (ed.), 363–371. Medford, NJ: Learned Information.
Gile, D. 1990. “Scientific research vs. personal theories in the investigation of interpretation.” In Aspects of Applied and Experimental Research on Conference Interpretation, L. Gran and C. Taylor (eds), 28–41. Udine: Campanotto.
Gile, D. 1991. “The processing capacity issue in conference interpretation.” Babel 37 (1): 15–27.
Gile, D. 1994. “Opening up in interpretation studies.” In Snell-Hornby et al. (eds), 149–158.
Gile, D. 1995. Regards sur la recherche en interprétation de conférence. Lille: Presses Universitaires de Lille.
Gile, D. 1997/2002. “Conference interpreting as a cognitive management problem.” In Pöchhacker and Shlesinger (eds), 163–176.
Gile, D. 1998. “Observational studies and experimental studies in the investigation of conference interpreting.” Target 10 (1): 69–93.
Gile, D. 2000. “The history of research into conference interpreting: A scientometric approach.” Target 12 (2): 297–321.
Gile, D., Dam, H.V., Dubslaff, F., Martinsen, B. and Schjoldager, A. (eds). 2001. Getting Started in Interpreting Research: Methodological reflections, personal accounts and advice for beginners. Amsterdam/Philadelphia: John Benjamins.
Goldman-Eisler, F. 1967. “Sequential temporal patterns and cognitive processes in speech.” Language and Speech 10 (3): 122–132.
Gran, L. and Dodds, J. (eds). 1989. The Theoretical and Practical Aspects of Teaching Conference Interpretation. Udine: Campanotto.
Hale, S. 2004. The Discourse of Court Interpreting: Discourse practices of the law, the witness and the interpreter. Amsterdam/Philadelphia: John Benjamins.
Herbert, J. 1952. The Interpreter’s Handbook: How to become a conference interpreter. Geneva: Georg.
Hermann, A. 1956/2002. “Interpreting in Antiquity.” In Pöchhacker and Shlesinger (eds), 15–22.
Inghilleri, M. 2003. “Habitus, field and discourse: Interpreting as a socially situated activity.” Target 15 (2): 243–268.
Inghilleri, M. 2005. “Mediating zones of uncertainty: Interpreter agency, the interpreting habitus and political asylum adjudication.” The Translator 11 (1): 69–85.
Ingram, R.M. 1978. “Sign language interpretation and general theories of language, interpretation and communication.” In Gerver and Sinaiko (eds), 109–118.
Ingram, R.M. 1985. “Simultaneous interpretation of sign languages: Semiotic and psycholinguistic perspectives.” Multilingua 4 (2): 91–102.
Kade, O. 1963. “Der Dolmetschvorgang und die Notation.” Fremdsprachen 7 (1): 12–20.
Kirchhoff, H. 1976/2002. “Simultaneous interpreting: Interdependence of variables in the interpreting process, interpreting models and interpreting strategies.” In Pöchhacker and Shlesinger (eds), 111–119.
Kuhn, T.S. 1962/1996. The Structure of Scientific Revolutions, 3rd ed. Chicago/London: The University of Chicago Press.
Kurz, I. 1991. “Conference interpreting: Job satisfaction, occupational prestige and desirability.” In XIIth World Congress of FIT – Belgrade 1990. Proceedings, M. Jovanović (ed.), 363–376. Beograd: Prevodilac.
Kurz, I. 1993/2002. “Conference interpretation: Expectations of different user groups.” In Pöchhacker and Shlesinger (eds), 313–324.
Lambert, S. and Moser-Mercer, B. (eds). 1994. Bridging the Gap: Empirical Research in Simultaneous Interpretation. Amsterdam/Philadelphia: John Benjamins.
Lederer, M. 1981. La traduction simultanée – expérience et théorie. Paris: Minard Lettres modernes.
Lefevere, A. 1977. Translating Literature: The German Tradition from Luther to Rosenzweig. Assen: Van Gorcum.
Liu, M. 2001. Expertise in Simultaneous Interpreting: A Working Memory Analysis. PhD dissertation, University of Texas at Austin.
Moser-Mercer, B. 1994. “Paradigms gained or the art of productive disagreement.” In Lambert and Moser-Mercer (eds), 17–23.
Moser-Mercer, B. 1997/2002. “Process models in simultaneous interpretation.” In Pöchhacker and Shlesinger (eds), 149–161.
Oléron, P. and Nanpon, H. 1965/2002. “Research into simultaneous translation.” In Pöchhacker and Shlesinger (eds), 43–50.
Paneth, E. 1957/2002. “An investigation into conference interpreting.” In Pöchhacker and Shlesinger (eds), 31–40.
Pinter, I. 1969. Der Einfluß der Übung und Konzentration auf simultanes Sprechen und Hören. Dissertation, Universität Wien.
Pöchhacker, F. 2004. Introducing Interpreting Studies. London/New York: Routledge.
Pöchhacker, F. 2006. “‘Going social?’ On pathways and paradigms in Interpreting Studies.” In Pym et al. (eds), 215–232.
Pöchhacker, F. and Shlesinger, M. (eds). 2002. The Interpreting Studies Reader. London/New York: Routledge.
Pöllabauer, S. 2005. “I don’t understand your English, Miss.” Dolmetschen bei Asylanhörungen. Tübingen: Gunter Narr.
Prunč, E. 2007. “Priests, princes and pariahs: Constructing the professional field of translation.” In Wolf and Fukari (eds), 39–56.
Pym, A., Shlesinger, M. and Jettmarová, Z. (eds). 2006. Sociocultural Aspects of Translating and Interpreting. Amsterdam/Philadelphia: John Benjamins.
Radnitzky, G. 1970. Contemporary Schools of Metascience. Göteborg: Akademiförlaget.
Roberts, R.P. (ed.). 1981. L’interprétation auprès des tribunaux. Actes du mini-colloque tenu les 10 et 11 avril 1980 à l’Université d’Ottawa. Ottawa: University of Ottawa Press.
Roberts, R.P., Carr, S.E., Abraham, D. and Dufour, A. (eds). 2000. The Critical Link 2: Interpreters in the Community. Selected papers from the Second International Conference on Interpreting in Legal, Health and Social Service Settings, Vancouver, BC, Canada, 19–23 May 1998. Amsterdam/Philadelphia: John Benjamins.
Roy, C.B. 1993/2002. “The problem with definitions, descriptions and the role metaphors of interpreters.” In Pöchhacker and Shlesinger (eds), 345–353.
Roy, C.B. 2000. Interpreting as a Discourse Process. Oxford: Oxford University Press.
Rudvin, M. 2007. “Professionalism and ethics in community interpreting: The impact of individualist versus collective group identity.” Interpreting 9 (1): 47–69.
Salevsky, H. 1987. Probleme des Simultandolmetschens. Eine Studie zur Handlungsspezifik. Berlin: Akademie der Wissenschaften der DDR.
Salevsky, H. 1993. “The distinctive nature of Interpreting Studies.” Target 5 (2): 149–167.
Sanz, J. 1931. “Le travail et les aptitudes des interprètes parlementaires.” Anals d’Orientació Professional 4: 303–318.
Sawyer, D.B. 2004. Fundamental Aspects of Interpreter Education: Curriculum and Assessment. Amsterdam/Philadelphia: John Benjamins.
Seleskovitch, D. 1975. Langage, langues et mémoire. Étude de la prise de notes en interprétation consécutive. Paris: Minard Lettres modernes.
Seleskovitch, D. 1975/2002. “Language and memory: A study of note-taking in consecutive interpreting.” In Pöchhacker and Shlesinger (eds), 121–129.
Seleskovitch, D. 1976. “Interpretation, a psychological approach to translating.” In Brislin (ed.), 92–116.
Seleskovitch, D. and Lederer, M. 1984. Interpréter pour traduire. Paris: Didier Érudition.
Seleskovitch, D. and Lederer, M. 1989. Pédagogie raisonnée de l’interprétation. Paris/Brussels: Didier Érudition/OPOCE.
Shlesinger, M. 2000. Strategic Allocation of Working Memory and Other Attentional Resources. PhD dissertation, Bar-Ilan University.
Snell-Hornby, M. (ed.). 1986. Übersetzungswissenschaft – eine Neuorientierung. Zur Integrierung von Theorie und Praxis. Tübingen: Francke.
Snell-Hornby, M. 1988. Translation Studies. An Integrated Approach. Amsterdam/Philadelphia: John Benjamins.
Snell-Hornby, M. 2006. The Turns of Translation Studies. Amsterdam/Philadelphia: John Benjamins.
Snell-Hornby, M., Pöchhacker, F. and Kaindl, K. (eds). 1994. Translation Studies – An Interdiscipline. Amsterdam/Philadelphia: John Benjamins.
Thieme, K., Hermann, A. and Glässer, E. 1956. Beiträge zur Geschichte des Dolmetschens. Munich: Isar.
Tseng, J. 1992. Interpreting as an Emerging Profession in Taiwan – A Sociological Model. MA thesis, Fu Jen Catholic University, Taipei.
Vermeer, H.J. 1994. “Translation today: Old and new problems.” In Snell-Hornby et al. (eds), 3–16.
Wadensjö, C. 1993/2002. “The double role of a dialogue interpreter.” In Pöchhacker and Shlesinger (eds), 355–370.
Wadensjö, C. 1998. Interpreting as Interaction. London/New York: Longman.
Wadensjö, C., Englund Dimitrova, B. and Nilsson, A.-L. (eds). 2007. The Critical Link 4: Professionalisation of interpreting in the community. Amsterdam/Philadelphia: John Benjamins.
Wolf, M. 2007. “Introduction: The emergence of a sociology of translation.” In Wolf and Fukari (eds), 1–36.
Wolf, M. and Fukari, A. (eds). 2007. Constructing a Sociology of Translation. Amsterdam/Philadelphia: John Benjamins.
Conceptual analysis
The status of interpretive hypotheses

Andrew Chesterman
University of Helsinki, Finland

In the natural sciences, the task of the researcher is usually seen as the generation and testing of hypotheses. These hypotheses are taken to be possible answers to questions concerning the description, prediction, and explanation of natural phenomena. But there is also another kind of hypothesis, an interpretive hypothesis. The status of interpretive hypotheses is not as clear as that of descriptive, predictive or explanatory ones. This paper aims to clarify this status, showing the respects in which interpretive hypotheses are like other kinds, and the respects in which they are different. Hermeneutic research methods based on the generation and testing of interpretive hypotheses do not seem fundamentally different from those of traditional empirical sciences. Interpretive hypotheses simply apply to different kinds of data. They can be particularly relevant to the research goal of explanation.
Keywords: meaning, hypothesis, hermeneutics, method
1. Interpreting obscure meaning

Daniel Gile (2005a) has suggested that research in Translation Studies uses two main paradigms, one taken from the liberal arts tradition and the other from empirical science. I would like here to explore one sense in which these two paradigms may not be so different after all. In the natural sciences, and in the philosophy of natural science, the task of the researcher is usually seen as the generation and testing of hypotheses. In Popper’s terms, science proceeds by a process of conjectures and refutations, i.e. by developing hypotheses and then testing them, trying to falsify them (e.g. Popper 1963). These hypotheses are taken to be possible answers to questions concerning the description, prediction, and explanation of natural phenomena. But there is also another, conceptual kind of hypothesis, an interpretive hypothesis.
Interpretive hypotheses are conjectures about what something means. Their general relevance to translators and interpreters is neatly illustrated in Gile’s Sequential Model of Translation (e.g. Gile 2005b: 102), which shows how a translator seeks a meaning hypothesis for segments of the source text, and then checks it for plausibility. As a first formulation, we can state the basic form of these hypotheses as follows: the interpretation of X is hypothesized to be Y, or simply X is interpreted as Y. We shall revise this formulation in due course.

Such hypotheses have their roots in hermeneutics. The term “hermeneutics”, of course, takes us back to Hermes, the messenger god, who was also the god of translators. After all, translation involves interpretation. Many phenomena need interpreting: not just ancient or complex, difficult texts, as was often the case with the original use of the term “hermeneutics”, but any semiotic object or message, from works of art to dreams. However, I will use the term in a somewhat narrower sense here, drawing largely on Niiniluoto (1983: 165f). It is particularly in cases where conventional linguistic, community-based meanings do not suffice for understanding, but we nevertheless suspect some hidden significance, that we need a hermeneutic approach in this narrower sense. As Gadamer put it (Misgeld and Nicholson 1992: 69–70), “it [hermeneutics] is entrusted with all that is unfamiliar and strikes us as significant.”

It is customary to distinguish various types of obscure meaning as objects of hermeneutic research. The historical (intended) meaning of X is its original meaning, in the place and time that X was created. For instance, we might wish to know how the notion of translation was understood in Ancient India, and how that interpretation then changed (see e.g. Trivedi 2006). And any translator trying to understand a source text may be searching for its historical meaning, i.e.
what it meant to its original readers, particularly if the skopos involves pragmatic equivalence. The hidden meaning of X is the meaning of which the agent (here, the creator of X) may be unaware, such as the unconscious meaning of a dream. A curious example of hypotheses about hidden meaning in Translation Studies is Venuti’s article (2002) on the potential psychoanalytic significance of errors. Venuti argues that certain symptomatic errors (such as his own slip, translating Italian superstite as “superstitious” instead of “surviving”) reveal something about the translator’s unconscious attitude, e.g. towards the source text, or, in the case of another translation discussed by Venuti, towards a father-figure. The hidden meaning in the text is thus taken to point to something outside the text. Another example is a recent article by the psychoanalyst Adam Phillips (2007) on the new Penguin translations of Freud’s work. The publisher of the English “standard edition” of Freud (translated by Strachey) insisted that the new translation should not draw on existing translations, and clearly did not like the idea of the retranslation project. Phillips interprets this attitude in the light of Freud’s analysis of Moses and the rise of
The status of interpretive hypotheses
monotheism, resisting the challenge of other gods: the scholar here finds a kind of hidden significance via this comparison. Phillips then analyses translation in general as involving a kind of psychoanalytical transference. In psychoanalysis, transference works to bring about change. Retranslations can also bring about change, by introducing new voices (interpretations) into the conversation, providing more food for thought and scope for further understanding of the original; retranslations add to a text’s history of influence. Here too the argument is about a phenomenon’s hidden meaning. The internal meaning of X is the meaning that X (e.g. a work of art) has quite apart from its context of creation. These are the meanings of objects in Popper’s World 3, objective products of human minds. Such meanings may differ from those intended by their creators. And finally, the scholar’s meaning is the meaning of X to the researcher, the analyst, or the contemporary world: for instance the significance of X to some application of it. This is the kind of meaning that is involved when we wonder what lessons we might draw from history, or what the modern relevance of an ancient text or law might be. Under what circumstances are the prescriptive statements offered by the classical translators of the past still relevant today? This last kind of meaning leads to an important point concerning the relation between interpretive hypotheses and other kinds of hypothesis. It is widely assumed that the “hermeneutic method” as such is a method that is particularly, even exclusively, appropriate for certain of the human sciences, such as history, anthropology, literary theory, aesthetics, semiotics. However, interpretive hypotheses are also relevant to the natural sciences. In other words, it is not the case that interpretive hypotheses are somehow outside the methodology of the hard sciences. 
Consider for instance how traces of collisions between particles in the experiments at CERN may one day be interpreted as evidence of the existence of the mysterious Higgs boson particle. The traces in the collision chamber are read as obscure texts, which contain or suggest a meaning. In other words, the meaning of the observed traces is first of all the scholar’s / scientist’s meaning: what do these traces mean to us (in this example: physicists)? Scientific observations are always “interpreted” in the light of a theory.

2. Varieties of hermeneutic as

The fundamental methodological similarity between empirical and hermeneutic methods has been explored in particular by the Norwegian philosopher Dagfinn Føllesdal. He argues (1979) that the hermeneutic method is simply an application of the basic hypothetico-deductive method to a different kind of data – data that
are meaningful, that have to do with meaning. He defines the hypothetico-deductive method as follows ([1979] 1994: 234; emphasis original):

As the name indicates, it is an application of two operations: the formation of hypotheses and the deduction of consequences from them in order to arrive at beliefs which – though they are hypothetical – are well supported, through the way their deductive consequences fit with our experiences and with our other well-supported beliefs.
As an example, he discusses the meaning of the mysterious Passenger in Ibsen’s Peer Gynt, who makes a surprising appearance in Act V, first on the boat with the hero and again later. This figure has been interpreted by scholars in various ways: as representing fear, death, the devil, Ibsen himself, or Byron’s ghost. The research debate then concerns the assessment of these competing hypotheses. (I return to the crucial issue of hypothesis testing below.) Føllesdal points out that it may not be possible to arrive at a final conclusion, because the data in such a case are not exhaustive. So several interpretations are left hanging in the air together – perhaps as Ibsen intended. (See also Føllesdal et al. 1984, where other examples are also discussed, e.g. from historical research.) Note the formulation I just used: the strange Passenger is interpreted as something, or as representing something. The central notion of this “AS” in hermeneutics captures something of how we often conceptualize our ability to understand: we understand something unfamiliar as something more familiar, i.e. in terms of something that already exists in our conceptual repertoire. The actual term “the hermeneutic as” comes from Heidegger (see e.g. 1962: 186f), but the idea is much older. – Note further that the relevant verb before as need not be “interpret” or “understand”. We can also see / regard / consider / take / view / accept /... X as (or e.g.: in terms of) Y. These can all be variant expressions of different kinds of interpretive hypotheses. To make matters more complex, the hermeneutic as may not be explicitly expressed at all, and remain implicit. The formulation “in this paper, the term X refers to Y” can be explicated as “in this paper, I choose to interpret the term X as denoting Y”. I will now distinguish five kinds of hermeneutic as, corresponding to different forms of interpretive hypotheses. 
These do not correspond directly to the types of meaning outlined above, although some meaning types may be more frequently involved with some kinds of as than others. I will present the five kinds as separate categories, but I suspect that water-tight borderlines are hard to draw here, and my types are not all mutually exclusive. One reason for this is that different kinds of hypotheses, and different alternatives of the same kind of hypothesis, may be used in different contexts. My list is an attempt to conceptualize some of the rich variety
of interpretive hypotheses, and at the same time shed light on what an interpretive hypothesis is. I will start with the type of example discussed by Føllesdal, which is perhaps the prototypical type of interpretive hypothesis. I will call this the representing as, or the symbolic as. As in the Ibsen example, this as has to do with the interpretation of works of art, including texts and signs of all kinds, from ancient hieroglyphs to contemporary emoticons. The meanings thus represented may be hidden, as in the Venuti and Freud examples above. But they may also be internal. Consider for instance Christine Brooke-Rose’s experimental novel Between (1968). The heroine of this novel is an interpreter, who is forever moving between conferences, countries, languages and cultures, and seems to have lost her identity. This is symbolized in the fact that the novel contains not a single instance of the verb “to be”, in any form. This absence is also a sign, which we can easily interpret as representing the loss of identity: a hidden meaning, perhaps, but also an internal one, part of the artistic structure of the novel, intended and created by the novelist.

The other types I will suggest are not discussed by Føllesdal, but they seem to me to represent entirely plausible extensions of his basic insight. My second type of hermeneutic as is the metaphorical or analogical as. It signifies that X is being interpreted as being like something else, as being like Y. This similarity gives X some of its meaning; it helps us to understand X. The kinds of meaning involved in such comparisons seem to be partly scholar’s meaning, and partly perhaps also hidden meaning. For instance, translation itself has been seen in terms of a great many metaphors: a traditional one is that of the bridge across a cultural and linguistic border. (See Round 2005 for a recent survey.) Other examples can be taken from two recent proposals concerning our interpretation of the translator’s role.
Michael Cronin (2000) suggests that translators can be seen as nomads, metaphorically (or even concretely) travelling from place to place, perhaps rootless. This analogical as implicitly links Translation Studies to travel theory (see e.g. Polezzi 2006). Carol Maier (2006), on the other hand, suggests that the translator can also be seen as a “theoros”, someone who in Classical Greece travelled, looking with wonder, in search of wisdom. This metaphor, Maier argues, can shed light on our reading of the role of translators as characters in fiction. Thirdly, there is a classificatory as. Here, X is interpreted as a kind of Y. In other words, this as postulates a hypernym under which X can be classified, and thus suggests a particular perspective, revealing some facets of X and hiding others. This kind of as abounds in translation research. A well-known example is André Lefevere’s claim (1992) that translation can be seen as a form of rewriting. Rewriting is the cover-term, beneath which translation sits beside anthologizing, paraphrasing, summarizing, and so on. More recently, translation has been seen as a form of intervention (Munday 2007), a perspective which puts a rather different
light on translation, implicitly aligning it e.g. with political debate or activism, or perhaps medical treatment. Compare the earlier view of translation as a kind of manipulation, illustrated e.g. in Hermans (1985). Such analogies seem to reveal aspects of scholar’s meaning. Then there is a compositional as, which offers a way of conceptualizing a complex concept in terms of its possible subtypes (as I am doing now, with the concept “interpretive hypothesis”). This type is the converse of the classificatory as. If I define equivalence as consisting of two types, or five types, I am positing “equivalence” as a hypernym and the various types as its hyponyms. I am in effect proposing an interpretation about the composition of this hypernym. In such cases, the hermeneutic as is often only implicit. Formulations such as “there are four types of X” can thus often be explicated as “X can be interpreted as consisting of four types”. And finally there is a definitional as, which is used in many forms of definition. Consider, for instance, the many ways in which scholars have defined the meaning of equivalence: as identity, as similarity, as interpretive resemblance, as a mapping or matching, and so on. Or the competing definitions of translation itself. All definitions are ultimately interpretations. And like interpretations, they can vary; they can develop into more accurate or more comprehensive definitions. Consider the various definitions of the metre, for instance. Since 1983, this length has been defined as the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second, but it was originally interpreted as being one ten-millionth part of the quadrant of the Earth, and there have been several quite different definitions in between.1 The definitions have become increasingly precise.
But extra precision tends to reduce the extension of a term: witness the sad fate of the ex-planet Pluto, now no longer deemed to fall within the definition of a planet. On the other hand, in our own field, Toury’s norm-based definition of translation (1995) created a wider extension of the term. I claimed above that interpretive hypotheses are also relevant to the natural sciences. Analogies may be used in the preliminary conceptualization of the research problem or field, as when we see light as waves or as particles. Definitions and category classifications are used in data analysis. Test results need to be interpreted. In these respects, interpretive hypotheses are unavoidable, especially insofar as scientific research relies on natural language. We might compare some of these hypotheses to the core assumptions of a theory, which are not necessarily tested in a given research project. Interpretive hypotheses are thus essential conceptual tools, with their own functions in any research project. This section has aimed to extend the interpretation of what an interpretive hypothesis is, partly by offering a compositional as. The implication of this is that the definition of the notion can be wider than the one that seems to be implicitly assumed by Føllesdal. But what kind of empirical basis can be found for these interpretive hypotheses?

1. See .

3. Assessing interpretive hypotheses

Empirical hypotheses are testable. They are tested against evidence, against criteria of parsimony, logic and descriptive or explanatory power, and against alternative hypotheses. Popperians would add a further requirement: empirical scientific hypotheses should be falsifiable. What is the situation with interpretive hypotheses? Are they only tested “conceptually” (cf. Gile 2004a, Chesterman 2004)? Conceptual testing – perhaps a better term would be assessment – takes place by argument, by practical reason. One can argue about the logic and parsimony of an interpretive hypothesis; but one can also check how well it fits the evidence and how fruitful it turns out to be. In practice, this means that if a given definition or analogy etc. proves to lead to good research questions or analyses or empirical hypotheses, this counts as added value. Weaker alternatives gradually fade from use in the scholarly community. I agree with Gile that this process is partly Kuhnian (Kuhn 1970), in that it is influenced by social and convenience factors; but I think it is also Popperian, in that even conceptual testing can weed out weak hypotheses. If this is “Popper-inspired” assessment, rather than strictly Popperian testing (Gile 2004b: 125), so be it. Interpretive hypotheses are not falsifiable, then – they are not right or wrong: they are “revisable agreements” (Misgeld 1991: 177), better or worse than alternatives. Some may be very weak indeed, of course, and be rapidly overwhelmed by counter-evidence and/or argument.
With this in mind, we can refine the basic form of an interpretive hypothesis in terms of an underlying abductive inference, thus: it is hypothesized that if X is interpreted as Y, added value will ensue (the added value being that we will understand X better, be able to examine it fruitfully, derive further interesting research questions, solve a problem, improve a situation, and so on). This formulation is implicitly comparative. It implies that the hypothesized interpretation of X as Y is better (in some sense) than (a) no interpretation at all, and (b) alternative interpretations. The formulation is also predictive: if no added value ensues, we can dump the hypothesis... That said, it must be acknowledged that scholars often tend to be more concerned with inventing and propagating new interpretive hypotheses than with assessing or revising existing ones. In the studies referred to above, Cronin and Maier focus on the new light that their hypotheses can shed on the role and status of translators, on the benefits of adopting this particular perspective; they
thus seek to show its added value. The authors’ use of evidence is illustrative, to support the hypothesis under consideration, to justify the possibility of this particular interpretation. But it would be good to see more work which compares such hypotheses with others, or tests them against data which might cause difficulties for them. One example of this kind of testing is Pym’s (2007) assessment of some alternative definitions of translation. The tendency towards the generation and illustration of hypotheses rather than their testing is not inevitable. Let us return to Føllesdal’s original example, the strange Passenger. In his analysis of the way the competing interpretations have been defended, Føllesdal suggests a number of empirical factors that need to be considered. The most obvious one is the relation between the proposed hypothesis and the actual data that prompted it: this particular scene in the play (and later scenes in which the Passenger appears). How well does the interpretation fit this evidence? At least well enough for the evidence to act as an illustration of the hypothesis: a minimum requirement, for the interpretation must at least be possible. Second, there is the relation between the hypothesis and other evidence in the play: other scenes, where the Passenger does not appear. How well is the hypothesis supported by this additional evidence? Does a hypothesis concerning only a part of the play lead to a coherent and comprehensive understanding of the play as a whole, or fit with our preliminary understanding of this whole? Or is there counter-evidence elsewhere in the play? Third: what about quite different evidence, such as biographical evidence, data from Ibsen’s letters, interviews, his other work, and so on? (This looks exactly like the triangulation method used in some empirical research projects.) Does this external evidence also support the hypothesis? Again, what about counter-evidence? 
Fourth: how well does the hypothesis fare in competition with other hypotheses? Are other possibilities taken into account, and shown to be less adequate? And finally, what consequences does the hypothesis imply? In particular, what testable consequences? Føllesdal points out that these criteria are, in principle, largely the same as those used in testing empirical hypotheses in the natural sciences. Indeed, as I have said, Føllesdal’s basic argument is that there is no fundamental methodological difference between natural sciences and humanities using a hermeneutic method. A hypothesis is tested against primary evidence, against additional evidence, against alternative hypotheses and possible counter-evidence. The fact that scholars in the humanities do not always follow such a rigorous testing procedure in practice is another matter. Føllesdal shows that if this kind of testing is carried out, some of the proposed interpretations of the meaning of the Passenger look distinctly better than others. He also points out, however, as mentioned above, that it may be impossible to arrive at a conclusive solution because there is simply not enough evidence. This is not unusual when we are dealing with meaningful data.
A strict falsificationist position is thus inappropriate in research on meaningful data. An analogy, for instance, is not falsifiable. But strict falsificationism may not be a realistic position for hard scientists either, as argued by Lakatos (see Lakatos and Musgrave 1970), since falsification can seldom be absolute: after all, testing methods and decisions about operationalization rely on auxiliary hypotheses that are also fallible. Nevertheless, we can at least underline the importance of checking interpretive hypotheses against potential counter-examples and additional evidence, and against alternative interpretive hypotheses. Counter-evidence will not actually falsify an interpretive hypothesis, but it can provide a good reason for preferring some other alternative instead, when different hypotheses are weighed against each other. The assessment of interpretive hypotheses is thus relative, not absolute. To what extent do we find a given hypothesis convincing? (This is often the case for empirical hypotheses in the hard sciences as well, of course, e.g. when one assesses the relative probabilities of a given hypothesis vs. the null hypothesis.) Nevertheless, in a Popperian spirit, we can try to specify the kind of evidence that would indeed count against or weaken a given interpretive hypothesis. In his discussion of the Passenger in Peer Gynt, Føllesdal notes ([1979] 1994: 238) that in assessing competing hypotheses, one criterion is the degree of specificity of the evidence called upon: the more specific the evidence, the stronger the claim can be. If a hypothesis is based primarily on very general evidence and there is specific evidence that runs counter to the hypothesis, or if not all parts of the phenomenon are taken into consideration, then the hypothesis is much less convincing. And we can also note the importance of deriving testable consequences from interpretive hypotheses. 
If, for example, we were to propose that students will become better translators if they are taught to see their professional role as that of a theoros (this is not Maier’s claim, just an invented example), we could in principle test the claim with a comparative study based on two teaching methods with matched groups of students, one highlighting the theoros metaphor and the other, say, the translator’s role as bridge-building mediator. With data of the kind we are discussing here, meaningful data in a broad sense, one contribution of interpretive hypotheses can be their cumulative effect. Given that the assessment of these hypotheses will seldom be conclusive, a new interpretive hypothesis at least offers a new way of seeing, in addition to existing ones. This characteristic is particularly relevant for our understanding of works of art, of course, which may benefit from being amenable to a multiplicity of interpretations, offering a rich mixture of meanings. Take the metaphorical or analogical as, or the classificatory as, for instance: in theory, anything can be seen as similar in some respect to practically anything else, and can be classified in any number of ways, depending on the purpose, perspective, context etc. Every analogy and every classification highlights some aspects and obscures others. From this point of
view, the cumulative effect of interpretive hypotheses can enrich understanding. The added value of an interpretive hypothesis may indeed be literally “additional” depth of understanding or range of significance, rather than the defeat of an alternative hypothesis. And there may be no end to possible interpretations.

4. Interpretive hypotheses and explanations

I would finally like to compare interpretive hypotheses with the standard empirical types: predictive, descriptive, explanatory. Interpretive hypotheses are predictive in the implicit abductive sense mentioned above (if X is interpreted as Y, added value will ensue); but unlike empirical predictive hypotheses they are not falsifiable. Interpretive hypotheses are more closely related to descriptive ones, since descriptive hypotheses, with the basic logical form all X (of type T) have feature F, implicitly involve definitions and interpretations of X (and T) and F. But the closest similarity is with explanatory hypotheses. After all, an interpretation is already an explanation of a kind, since it is a way in which someone “makes sense” of the phenomenon in question. This aspect of interpretive hypotheses is especially evident in qualitative research, such as that carried out within a framework like Grounded Theory (Glaser and Strauss 1967). In such a framework, data analysis proceeds by a series of interpretive hypotheses which are subjected to continuous testing against further data, until one arrives at an explanatory interpretation that best accounts for the totality of the dataset. With respect to our example of Peer Gynt, for instance, one could argue that the play as a whole is better explained / interpreted as being about sin, love, identity and/or death than about Byron. This kind of interpretive explanation is largely a form of unification (see Salmon 1998).
Explanation-by-unification works by relating the explanandum to its wider context, making explicit its relations with a wider network, so that it is no longer an isolated phenomenon. If a strange event, for instance, is interpreted as being analogous to some other, more familiar event, or as being a hyponym of a more familiar type of event, the strange event itself seems less obscure. It is this sense of the strange event no longer being isolated that gives us the feeling that we understand it, to some extent at least. (For further analysis of varieties of explanation, with particular reference to Translation Studies, see Chesterman 2008.) In this context, it is interesting to see a similarity between the role of hermeneutic empathy and the role of inference by analogy (Niiniluoto 1983: 176). Classical arguments by analogy are based on the idea that if certain phenomena share a number of given features, we can infer that they may also share some other features. We can thus infer something about interpreting by studying translation (and
vice versa), because the two modes share many features. The role of empathy in hermeneutic research, as a source of understanding e.g. a literary work, is similar, in that it is based on the analogy between the scholar and the writer as human beings, with emotions, needs, imagination, etc. Both empathy and analogy can thus serve as aids in generating explanatory hypotheses: the inferences made can then be tested in the ways outlined above.

5. Conclusion

My conclusion concurs with that of Føllesdal: interpretive hypotheses are what we use when we try to understand meaningful yet obscure phenomena. The method of generating and testing such hypotheses does not significantly differ in principle from the standard hypothetico-deductive method used in the natural sciences, except for the point that interpretive hypotheses are not falsifiable (although they can certainly be unconvincing). To return to a hermeneutic perspective: one’s interpretive hypotheses help to constitute one’s horizon, in Gadamer’s terms. To the extent that this horizon is shared with other people, e.g. within a given research paradigm, it provides an initial framework for understanding (Gadamer’s Vorverstehen). To the extent that this horizon exists and is shared via discourse, interpretive hypotheses may appear to be more or less convincing partly according to the effectiveness of the rhetoric of this discourse itself, which may thus play a more influential role here than is the case with empirical hypotheses: another point of difference. Like all hypotheses, interpretive ones too are tools to be made use of in our quest for understanding. If a hypothesis turns out not to be so useful after all, or rather less useful than an alternative, we can refine it or drop it. On the other hand, if it leads to interesting new questions and new empirical hypotheses, let’s run with it as far as it takes us!2

2. Warm thanks to members of the MonAKO research seminar, and to Anthony Pym, for helpful comments on an early version of this paper.

References

Brooke-Rose, C. 1968. The Brooke-Rose Omnibus. Manchester: Carcanet Press.
Chesterman, A. 2004. “Paradigm Problems?” In Translation Research and Interpreting Research. Traditions, Gaps and Synergies, C. Schäffner (ed.), 52–56. Clevedon: Multilingual Matters.
Chesterman, A. 2008. “On explanation.” In Beyond Descriptive Translation Studies. Investigations in homage to Gideon Toury, A. Pym et al. (eds), 363–379. Amsterdam/Philadelphia: John Benjamins.
Cronin, M. 2000. Across the Lines: Travel, language and translation. Cork: Cork University Press.
Føllesdal, D. 1979. “Hermeneutics and the Hypothetico-Deductive Method.” Dialectica 33 (3–4): 319–336. Also in Readings in the Philosophy of Social Science, M. Martin and L.C. McIntyre (eds), 1994, 233–245. Cambridge, Mass.: MIT Press.
Føllesdal, D., Walløe, L. and Elster, J. 1984. Argumentasjonsteori, språk og vitenskapsfilosofi. Oslo: Universitetsforlaget.
Gile, D. 2004a. “Translation Research versus Interpreting Research: Kinship, difference and prospects for partnership.” In Translation Research and Interpreting Research. Traditions, Gaps and Synergies, C. Schäffner (ed.), 10–34. Clevedon: Multilingual Matters.
Gile, D. 2004b. “Response to the invited papers.” In Translation Research and Interpreting Research. Traditions, Gaps and Synergies, C. Schäffner (ed.), 124–127. Clevedon: Multilingual Matters.
Gile, D. 2005a. “The liberal arts paradigm and the empirical science paradigm.” Available at http://www.est-translationstudies.org/ > Research issues.
Gile, D. 2005b. La traduction. La comprendre, l’apprendre. Paris: Presses Universitaires.
Glaser, B.G. and Strauss, A.L. 1967. The Discovery of Grounded Theory. Strategies for Qualitative Research. New York: Aldine de Gruyter.
Heidegger, M. 1962. Being and Time. (Translated by John Macquarrie and Edward Robinson.) New York: Harper and Row.
Hermans, T. (ed.) 1985. The Manipulation of Literature. London: Croom Helm.
Kuhn, T. 1970. The Structure of Scientific Revolutions. 2nd edition. Chicago: University of Chicago Press.
Lakatos, I. and Musgrave, A. (eds) 1970. Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press.
Lefevere, A. 1992. Translation, Rewriting and the Manipulation of Literary Fame. London: Routledge.
Maier, C. 2006. “The translator as theoros.” In Translating Others, vol. 1, T. Hermans (ed.), 163–180. Manchester: St. Jerome.
Misgeld, D. 1991. “Modernity and hermeneutics: A critical-theoretical rejoinder.” In Gadamer and Hermeneutics, H.J. Silverman (ed.), 163–177. London: Routledge.
Misgeld, D. and Nicholson, G. 1992. “Writing and the Living Voice. Interview with Hans-Georg Gadamer.” In Hans-Georg Gadamer on Education, Poetry, and History, D. Misgeld and G. Nicholson (eds), 63–71. Albany, N.Y.: State University of New York Press.
Munday, J. (ed.) 2007. Translation as Intervention. London: Continuum.
Niiniluoto, I. 1983. Tieteellinen päättely ja selittäminen. Helsinki: Otava.
Phillips, A. 2007. “After Strachey.” London Review of Books, 4.10.2007, 36–38.
Polezzi, L. (ed.) 2006. Translation, Travel, Migration. Special issue of The Translator 12 (2).
Popper, K.R. 1963. Conjectures and Refutations: The Growth of Scientific Knowledge. London: Routledge and Kegan Paul.
Pym, A. 2007. “On history in formal conceptualizations of translation.” Across Languages and Cultures 8 (2): 153–166.
Round, N.G. 2005. “Translation and its metaphors: The (N+1) wise men and the elephant.” SKASE 1 (1): 47–69.
Salmon, W.C. 1998. Causality and Explanation. New York: Oxford University Press.
Toury, G. 1995. Descriptive Translation Studies and beyond. Amsterdam/Philadelphia: John Benjamins.
Trivedi, H. 2006. “In our own time, on our own terms: ‘Translation’ in India.” In Translating Others, vol. 1, T. Hermans (ed.), 102–119. Manchester: St. Jerome.
Venuti, L. 2002. “The difference that translation makes: The translator’s unconscious.” In Translation Studies: Perspectives on an emerging discipline, A. Riccardi (ed.), 214–241. Cambridge: Cambridge University Press.
Strategies and Tactics in Translation and Interpreting

Yves Gambier
University of Turku, Finland

Translation Studies (TS) has borrowed and still borrows concepts from different disciplines. These borrowings do not yet form a coherent system, hence the weakness of our metalanguage. The paper deals with the concept of strategy and its various terms, as used in the literature on translation and interpreting. To what extent do the different classifications overlap? Do translation and interpreting scholars define and apply similar types of strategies? I argue that we need the notion of tactics as well as that of strategy in order to better explain what is going on in translating and interpreting. The paper compares statements and claims by different authors, with the aim of strengthening the terminology of TS.
Keywords: cultural references, strategy, tactic, taxonomy, translation problem
1. Introduction

La traductologie a emprunté et continue d'emprunter nombre de ses concepts à diverses disciplines. Ces emprunts sont loin cependant d'être toujours justifiés et de constituer un ensemble cohérent, d'où la fragilité de son métalangage (Gambier & van Doorslaer, 2007) – fragilité préjudiciable à sa reconnaissance et à la formation des futurs traducteurs et interprètes. Les réflexions qui suivent relèvent de l'approche conceptuelle, tandis que bien des publications se contentent d'une définition de travail. Mais à force de multiplier de telles définitions contingentes, ad hoc, la traductologie perd en rigueur et en transparence. En confrontant les efforts de différents auteurs, nous visons à renforcer la terminologie propre de notre domaine d'activité et de recherche, pour permettre si possible une meilleure compréhension et une meilleure transmission de ce domaine.
2. Co-errance ou cohérence terminologique?

La maturité d'une discipline se mesure sans doute à la définition non ambiguë de son objet, aux méthodes qu'elle développe, à la puissance explicative de ses modèles, à la cohérence de son métalangage, au consensus généré entre ses spécialistes. La traductologie connaît des divergences quant à l'extension sémantique de son objet; elle a pris des tournants successifs (culturel, idéologique, cognitif, sociologique, etc.) qui l'ont poussée à des co-errances dans sa terminologie; certains de ses choix terminologiques ne passent pas toujours bien d'une langue à l'autre. Ainsi équivalence, norme, convention, fonction, pour ne prendre que quatre exemples, peuvent prêter plus à des malentendus qu'à des accords, un même terme référant à plusieurs concepts ou étant employé de manière variée. Sans parler des ambiguïtés suscitées par des termes comme par exemple similarité, système, adaptation, localisation, transfert, etc. Une telle confusion entrave le développement interdisciplinaire, les communications avec les milieux non universitaires (traducteurs professionnels, clients, donneurs d'ouvrage, décideurs, éditeurs) mais aussi au sein même de la discipline, en employant ou pas une lingua franca, en explorant les discours et pratiques d'ailleurs. Le concept de stratégie a d'abord été utilisé pour la chose militaire (ce que révèle son étymologie: stratos 'armée', agein 'commander'). Il dénote un plan général et un commandement, en vue d'atteindre un certain objectif explicite. Le stratège se doit d'anticiper tous les facteurs qui pourraient avoir un impact sur cet objectif visé. Pour arriver à ses fins, il réalise des actions coordonnées ou tactiques, adaptées à une situation donnée, modifiables, mais sans perdre de vue la stratégie décidée. Il n'y a donc pas en principe de stratégie sans tactiques ni de tactiques sans stratégie planifiée.
Le marketing, le management, la communication, les sports, l'apprentissage des langues, certaines analyses de discours, la politologie, la sociologie à la Bourdieu, la gestion des universités aujourd'hui… recourent au concept de stratégie, mais avec ou sans son corollaire (tactique). Les termes en italique ci-dessus indiquent les caractéristiques fondamentales de la stratégie qui n'est pas un finalisme idéaliste: toute stratégie tient compte des conditions (matérielles, topographiques, sociales, etc.) dans lesquelles elle cherche à se réaliser. Elle se distingue des objectifs et des tactiques qui lui sont néanmoins consubstantiels. On peut vouloir gagner une élection (objectif): il faut comprendre ce que veulent les électeurs (stratégie) et décider d'aller sur les marchés, de multiplier les réunions de quartier, d'ouvrir un blog, etc. (tactiques). Qu'en est-il en traductologie? Peut-on décrire les régularités du comportement du traducteur, les régularités ou normes dans les traductions sans consensus sur les définitions et les types de stratégie/tactique mis en œuvre? Entre Levý (1967), Popovič (1970, 1975), Hönig & Kussmaul (1982) et Wilss (1983), qui sont sans doute les premiers à avoir esquissé le rôle de la stratégie dans le processus de la
traduction comme suite de décisions, et les récentes publications sur l'intervention, l'activisme du traducteur appelé à prendre des positions éthiques et politiques, a-t-on affaire à un concept constant, non ambigu?

3. Vue d'ensemble en traductologie

C'est certainement avec le tournant psycholinguistique que la traductologie a commencé à utiliser assez systématiquement stratégie: que se passe-t-il en effet quand un traducteur passe d'un texte de départ à un texte d'arrivée? Comment réalise-t-il ce transfert? Comment traite-t-il sens, contenu, message, intentions de l'auteur? Il existe plusieurs termes en anglais pour dire ce passage (perpétuant la métaphore du déplacement en traduction): strategies, procedures, techniques, operations, changes, shifts, methods, replacements, trajections, adjustment techniques, etc. Ces termes apparemment synonymes réfèrent-ils à un même et unique concept ou à plusieurs concepts, peut-être hiérarchisés? Cette variation terminologique peut s'expliquer par les manières dont les auteurs abordent les «problèmes» de traduction, par les disciplines de référence qui ont marqué justement ces auteurs dans leur formation et/ou leur carrière (littérature comparée, stylistique, traduction biblique, psycholinguistique, analyse de discours, linguistique textuelle, etc.), ou encore par les objectifs de leur recherche (théorique, descriptif, explicatif, pédagogique). L'étendue des catégories et classifications des stratégies est tantôt ambitieuse, cherchant à couvrir toutes les possibilités, tantôt bornée à un genre ou type de texte à traduire ou à un «problème» particulier de traduction. Dans le cas des genres, on considère les textes littéraires, les pièces de théâtre, les livrets d'opéra, la poésie, les chansons, les bandes dessinées, les publicités, les livres pour enfants, etc. Parfois, on vise plutôt les types de texte (politiques, de vulgarisation, féministes, post-coloniaux, etc.).
Pour chaque genre ou type, plus ou moins circonscrit avec rigueur, on propose une liste de stratégies supposées appropriées, quelles que soient les langues en présence, le contexte et ses contraintes, la finalité et les fonctions de la traduction à achever. Ainsi, pour ne prendre qu'un seul exemple, Jones (1989) suggère les stratégies de transference, de convergence/divergence, d'improvisation, d'abandonment, d'estrangement pour rendre «la» poésie. Pour les autres genres et textes, sont récurrentes par exemple les stratégies de transcription, d'ajout, de compensation, de suppression ou de réduction, d'adaptation, d'équivalence, de changement syntaxique. On a donc d'une part des textes figés dans un genre ou type et d'autre part une panoplie de stratégies prises dans l'absolu, sans oublier le fait qu'un même terme ne recouvre pas nécessairement les mêmes phénomènes. Par exemple, ajout peut inclure une simple addition lexicale, et/ou une explicitation, une note de traducteur, une préface.
Dans le cas des «problèmes» de traduction, définis a priori indépendamment des lecteurs ciblés, de la visée de traduction, des priorités du traducteur, de ses conditions de travail, on propose aussi des listes de stratégies, comme si toponyme, métaphore, jeu de mot, juron, humour, forme de politesse, proverbe, élément culturel (cf. section 5), etc. posaient toujours et irrémédiablement un «problème» et appelaient la mise en oeuvre des mêmes stratégies. De fait, les stratégies suggérées sont souvent logiquement listées. Ainsi pour rendre un calembour, on peut opter en langue d’arrivée pour un calembour, un non-calembour, un autre procédé rhétorique comme la répétition, l’allitération ou une rime, l’omission, le calque, la compensation, l’addition (on introduit un jeu de mots là où il n’y en a pas dans le texte de départ), ou encore le commentaire, la note du traducteur. Dans la majorité des cas, la liste semble purement logique, ne tenant compte d’aucune contrainte contextuelle et textuelle, comme si telle décision stratégique était une application mécanique. Malgré tout, les auteurs n’offrent pas toujours le même nombre et les mêmes types de stratégies, même si on retrouve souvent un même noyau dur de six à huit stratégies (reproduction, adaptation, substitution ou paraphrase, modification, condensation, omission, explicitation). Par exemple, pour les noms propres, Vermes (2003) donne la transference, la substitution, la translation, la modification tandis que Särkkä (2007) indique l’importation, la modification, l’expansion (avec glose) et l’omission. Est-ce à dire que les stratégies ne sont pas aussi automatiquement définissables? 
En fait, les publications sont de deux sortes au moins: une majorité est prescriptive (quelles stratégies pour quel problème?), niant tous les efforts pour considérer la traduction comme un acte contextualisé de communication interculturelle; d'autres sont plutôt descriptives, cherchant à savoir, à travers des études de cas, comment tel phénomène a été traduit, dans un contexte donné. Par ailleurs, on notera dès à présent que souvent la stratégie porte sur des éléments ponctuels, plutôt que sur l'ensemble du texte à traduire, occultant la différence entre stratégie et tactique.

4. Différentes taxinomies de stratégies en traduction

Bien des chercheurs ont classé les stratégies – sans forcément que leur classification ait des justifications explicites, un niveau de généralisation identique. Certaines de ces classifications semblent porter à la fois sur des procédures (comment parvenir à un résultat désiré?) et sur le produit obtenu (comparé au texte de départ). Le tableau qui suit est un choix quelque peu arbitraire d'auteurs, souvent, semble-t-il, cités à propos des stratégies; d'autres taxinomies avec d'autres auteurs ou les mêmes auteurs avec d'autres références auraient pu être aussi sélectionnées. L'objectif ici n'est pas bien sûr d'être exhaustif mais d'approfondir la problématique
et notamment de voir jusqu'où ces propositions sont comparables, se chevauchent, se différencient, sans entrer dans les détails ni oser une comparaison entre toutes. Remarquons d'emblée qu'on va de quatre-cinq classes de stratégies à une douzaine ou même plus. On a gardé les appellations en anglais utilisées par les auteurs, présentés indépendamment les uns des autres dans le tableau synoptique. On peut constater qu'un même concept peut appeler divers termes: ainsi, le «sens expliqué» peut être désigné par paraphrase ou équivalent descriptif (Newmark), par description (Molina & Hurtado), par certaines stratégies pragmatiques (Chesterman), par explicitation, rephrasing, etc. Inversement, un même terme n'a pas forcément la même extension sémantique. Ainsi substitution, chez Molina & Hurtado, est une technique surtout chez les interprètes, qui consiste à changer un élément linguistique en un élément non linguistique (un geste par ex.) ou inversement (la main sur le cœur d'un Arabe pour signifier «merci»), tandis que, chez Malone, elle regroupe diverses solutions de traduction sans identité avec la source (manifestant donc l'absence de rapport direct entre les langues en présence). Chez Vinay & Darbelnet et Molina & Hurtado, l'adaptation est une proposition d'équivalence culturelle (par ex. cyclisme (France) = baseball (États-Unis) = cricket (Royaume-Uni)), alors que Newmark la considère comme la forme la plus libre de traduction, utilisée pour la poésie, le théâtre et appelant à une sorte de reformulation, de conversion culturelle du texte de départ – non sans analogie avec la notion de domestication chez Venuti (1995). La compensation chez Vinay & Darbelnet ainsi que chez Molina & Hurtado est l'omission d'un élément là où il apparaît dans le texte de départ pour le réintroduire ailleurs dans le texte d'arrivée. Pour Chesterman, elle est une motivation possible pour une stratégie mais n'est pas une stratégie en soi.
On notera avec ces exemples que stratégie s’applique tantôt au texte entier ou systématiquement à un même élément à travers tout le texte (par ex. les éléments dits culturels), tantôt à un point particulier, ponctuel (par ex. lexical). Ou encore qu’elle s’applique pendant le travail même du traducteur ou pour décrire le résultat de ce travail. Un des problèmes majeurs avec les taxinomies est l’absence de critères explicites qui les fondent. On semble s’attarder parfois sur le niveau linguistique, parfois sur la distance entre les langues (de la correspondance la plus formelle à l’équivalence très relative), parfois sur la quantité d’informations à transmettre, parfois sur des segments de traduction comme produit fini. En outre, certaines classifications sont établies à partir de catégories discrètes (Molina & Hurtado), ou selon une sorte de continuum, du plus littéral au plus libre (Vinay & Darbelnet, Newmark), ou encore selon des oppositions binaires (Catford). Comment dès lors comparer, confronter ces taxinomies qui exigent une réflexion contrastive, psycholinguistique ou cognitive (cf. section 7)?
Tableau synoptique: Vinay & Darbelnet, Nida, Catford, Malone et van Leuven-Zwart

Vinay & Darbelnet 1958/1995 (7 procédés/procedures/methods)
Direct translation:
– borrowing
– calque
– literal translation
Oblique translation:
– transposition
– modulation
– equivalence
– adaptation

Nida 1964 (5 techniques of adjustment)
– addition
– substitution
– alteration
– footnotes
– adjustments of language to experience

Catford 1965 (“translation shifts”)
– level shifts (shifts from grammar to lexis)
– category shifts: structure shifts, class shifts, unit shifts, intra-system shifts

Malone 1986, 1988 (5 trajections)
– matching: equation, substitution
– zigzagging: divergence, convergence
– recrescence: amplification, reduction
– repackaging: diffusion, condensation
– reordering

Van Leuven-Zwart 1989–1990 (3 categories of microstructural shifts)
– modulation: specific/generalization
– modification: semantic, stylistic and syntactic modifications
– mutation: addition, deletion, radical change of meaning
Newmark 1988 (8 methods and 15 procedures)
8 methods (relate to the whole text):
– word-for-word translation
– literal translation
– faithful translation
– semantic translation
– adaptation
– free translation
– idiomatic translation
– communicative translation
15 procedures (sentences and smaller units):
– transference
– cultural equivalent
– descriptive equivalent
– synonymy
– through-translation or calque, loan translation
– modulation
– compensation
– couplets
– naturalization
– functional equivalent
– componential analysis
– shifts or transpositions
– recognized translation
– paraphrase
– notes, glosses

Chesterman 1997 (30 strategies)
10 syntactic strategies:
– literal translation
– loan, calque
– transposition
– unit shift
– phrase structure change
– clause structure change
– sentence structure change
– cohesion change
– level shift
– rhetorical scheme change
10 semantic strategies:
– synonymy
– antonymy
– hyponymy/hyperonymy
– converses
– abstraction change
– emphasis change
– expanding, compressing
– paraphrase
– trope change
– other semantic changes
10 pragmatic strategies:
– cultural filtering
– addition, omission
– speech act change
– coherence change
– partial translation
– explicitation, implicitation
– interpersonal change
– visibility changes (notes, glosses, etc.)
– transediting
– other pragmatic changes

Molina & Hurtado 2002 (18 translation techniques)
– adaptation
– explicitation, addition
– compensation
– reduction
– generalization
– literal translation
– linguistic amplification
– calque
– discursive creation
– established equivalent
– particularization
– transposition
– compression
– borrowing
– description
– substitution
– modulation
– variation
5. Traduire les références culturelles

Considérons maintenant un «problème» particulier: celui des référents culturels, appelés aussi par une variété d'autres désignations (culture-specific items, realia, culturemes, cultural references, etc.) qui sont loin d'être synonymiques. Encore une fois, nous ne signalons que quelques auteurs tout en sachant que bien d'autres ont traité tel ou tel aspect dit culturel. La traduisibilité ou pas des spécificités culturelles a été l'objet de nombreuses polémiques, à partir très souvent d'une conception essentialiste, statique, homogène de la culture, supposée à l'abri de frontières bien délimitées et supposée englober des pratiques aux contours nets, quasi indépendantes des groupes qui introduisent des variations, sinon des contradictions. En traductologie, on se donne alors des catégories de phénomènes ou des listes de mots dits culturels, comme preuves de la culture (Nida 1945, 1964; Newmark 1988), comme si les délimitations lexicales coïncidaient absolument avec les délimitations culturelles, comme si par exemple banlieue fonctionnait dans tout (con)texte comme suburb, annulant les connotations et associations créées par les usages, alors que même en français, le terme est ambigu: en banlieue ne réfère pas au même style de vie, aux mêmes caractéristiques socio-démographiques que dans les banlieues. De telles catégories et listes s'épuisent à répertorier ce qui serait toujours problématique, touchant la faune, la flore, la géographie, les institutions, l'histoire, l'éducation, etc. – que ce soit pour traduire des textes de fiction, de spécialité ou des films (Nedergaard-Larsen 1993). Elles ont des limitations évidentes: d'une part en présentant des items prétendument culturels comme objectifs, auto-évidents, d'autre part en traitant ces items en dehors de toute contextualisation, sans fonction aucune.
Or, par exemple, l’Académie française n’est pas un problème en soi mais le devient dans certaines situations et selon les connaissances, la familiarité des récepteurs avec ce qui se passe, dans quelques milieux, à Paris, en France. Ce référent, cité dans un film, peut être difficile à rendre pour un public australien mais ne poser aucune difficulté quand on sous-titre pour des spectateurs espagnols qui ont une institution similaire à Madrid. De fait, ces éléments culturels – qu’ils soient un mot, un phénomène, un événement, un nom propre – sont problématiques non pas isolés, dans le vide ou dans l’absolu, mais quand il y a rapprochement avec d’autres éléments culturels d’ailleurs: quelle est alors la fonction de ces éléments? Quelles sont les cultures mises en présence – plus ou moins «distantes» l’une de l’autre? Quelle est l’évolution de ces éléments dans le temps? Si golf était mis en italique dans le sous-titrage espagnol de 1933 de The Maltese Falcon, il n’en était plus de même dans les versions de 1967 et de 1992, tant ce sport est devenu plus familier à un plus grand nombre.
Quand on considère les stratégies de traduction des références culturelles, de nouveau on constate des efforts de systématisation, de classification. Ainsi Ivir (1987) propose sept procédures (emprunt, définition, traduction littérale, substitution, création lexicale, omission, addition), tout en se défendant d'être prescriptif, à l'opposé de Newmark (1988: 81-93) qui établit une corrélation entre le type de référence et la stratégie de traduction (par ex. un mot référant à la vie politique/sociale est rendu soit littéralement, soit par un équivalent officiel, soit par une glose). D'autres auteurs se sont attachés à ordonner de telles stratégies pour ces référents culturels, trouvés dans les textes littéraires, la presse, etc. (Hervey & Higgins 1992, avec quatre procédures; Franco 1995 et 1996, avec onze; Mailhac 1996, avec douze; Mayoral 2000; Kwiecinski 2001; Leppihalme 1997 et 2001; Davies 2003, avec sept procédures; Khanjankhari & Khatib 2005). Il en est de même lorsqu'on regarde les écrits sur la traduction audiovisuelle (six procédures pour Nedergaard-Larsen 1993, trois stratégies générales et six procédures pour Tomaszkiewicz 2001, dix pour Pedersen 2007). Toutes ces classifications ont des hésitations – donnant parfois des exemples contextualisés, parfois des exemples abstraits, proposant des étiquettes doubles pour des stratégies voisines ou se chevauchant; et toutes peinent à justifier les catégories suggérées, même si en général elles tentent de les organiser en allant de la naturalisation/domestication extrême (paraphrase, substitution, explication) à la plus grande exoticisation (emprunt direct, transfert). Pour résumer, on dira que la plupart des études sur les stratégies, portant sur les référents culturels ou pas, offrent des classifications et des terminologies qui peuvent aider à déblayer les problématiques et à clarifier les définitions. Elles sont les premières étapes pour organiser le matériel à scruter plus méticuleusement et pour former les futurs traducteurs.
Néanmoins les limitations de ces classifications demeurent fortes:
– il y a ambiguïté dans les définitions et les étiquettes attribuées aux stratégies;
– il y a chevauchement dans les catégories d'un auteur à l'autre, chevauchement qui n'est pas toujours facile à démontrer tant les critères à la base de ces catégories ne sont pas explicites;
– il y a souvent focalisation sur le micro-niveau (lexical), comme si la traduction revenait à un mot-à-mot ou que l'unité de traduction était uniquement le mot;
– il y a des difficultés à ordonner les stratégies les unes par rapport aux autres, l'explicitation n'étant pas, par exemple, toujours plus proche de la langue de départ que la paraphrase, selon les effets qu'elle crée, déterminés par le contexte;
– il y a décontextualisation des problèmes et des stratégies supposées afférentes;
– il y a généralisation souvent hâtive alors que le nombre de cas traités est souvent réduit.
6. Stratégies en interprétation de conférence

La littérature sur les stratégies en interprétation de conférence est moins volumineuse que pour la traduction: le domaine abordé est certes plus limité – il s'agit surtout de la simultanée. Cependant on y retrouve certaines des mêmes ambiguïtés, en particulier entre la stratégie globale, anticipatrice et la stratégie portant sur un problème particulier, entre la stratégie intentionnelle, à visée délibérée et la stratégie quasi automatique, entre la stratégie choisie pendant le processus de l'interprétation et la stratégie considérée au vu des résultats. En présentant quelques travaux en interprétation de conférence, dans leur ordre chronologique, nous cherchons à mieux cerner le concept de stratégie, tel qu'il a été ou qu'il est utilisé. Remarquons que la place prise par l'interprétation dite de communauté, l'interprétation auprès des tribunaux, l'interprétation dans les médias (cf. par ex. Alexieva 1997), devrait susciter d'autres recherches, les contraintes situationnelles imposant certainement de nouvelles possibilités aux interprètes. Pourtant on notera que dans les quatre volumes de Critical Link (1997-2007), sur l'interprétation en milieux hospitalier, policier, judiciaire, postal, etc., aucune mention de stratégie n'est faite dans les index, sauf dans le premier volume pour renvoyer aux efforts afin de promouvoir l'accès aux services de santé dans certains États américains. Seleskovitch (1968) réfère à quatre techniques d'analyse, au moment de la saisie du discours original (faire appel à ses connaissances antérieures; mobiliser ses propres points de vue, ce qui peut faciliter le travail de la mémoire; visualiser le sens entendu; observer style et raisons d'être du discours en question). Elle n'envisage pas de stratégies au niveau de la reformulation, de la restitution par l'interprète. Gile (1995a et b) consacre des chapitres spécifiques (respectivement ch. 5 et ch.
8) aux stratégies et tactiques de l’interprète, opérant des distinctions en accord avec son modèle d’Efforts, entre la phase d’écoute mobilisant certaines capacités de traitement et d’analyse, la phase de production et l’effort de mémoire à court terme. D’où les différences entre les stratégies fondamentales de fidélité (où qualité de la performance et liberté de choix de l’interprète sont en jeu), les stratégies de préparation ad hoc (à la fois thématique et terminologique) qui devraient faciliter par exemple la compréhension en cabine, et les 19 tactiques en ligne (pour la simultanée), c’est-à-dire une fois que l’interprète doit combiner tous ses efforts pour accomplir son travail en direct. Soulignons que l’auteur est un des rares à associer stratégies ET tactiques (cf. section 2), tandis que la majorité s’en tient aux seules stratégies, mêlant alors niveau global et niveau local, décision d’ensemble et solution ponctuelle (cf. section 7). Parmi les tactiques suggérées dans le cours de l’interprétation, qui «s’appliquent chacune à une ou plusieurs catégories de déclencheurs ou de difficultés» (1995a: 130), on citera la reconstitution par le contexte, l’appel au collègue passif en cabine, l’option d’un hyperonyme, l’omission
(consciente), la paraphrase, la simplification, etc. Ces tactiques ont des coûts (en temps, en capacité de traitement, en perte d'information, en effets psychologiques sur les auditeurs); elles sont choisies en fonction des circonstances et en respect de certaines lois (id.: 137-140), comme celle du moindre effort, celle de la recherche du rendement informationnel maximum (voir aussi 1995b: 201-204). Gile précise en outre les quelques spécificités stratégiques et tactiques pour la consécutive et la traduction à vue (1995a: 140-141; 1995b: 204-206). Dans (1995b: 192-201), l'auteur réorganise ces stratégies et tactiques plus nettement selon les divers stades de compréhension (quatre tactiques), de prévention (quatre tactiques aptes à prévenir les risques d'échec) et de reformulation (13 tactiques). Kalina (1998), qui s'appuie sur une conception délibérée et orientée de la stratégie, propose, à partir surtout d'analyses de produits finis, outre des stratégies de préparation, deux catégories principales: les stratégies de compréhension et celles de production. Les premières incluent l'inférence (capacité de retrouver des informations grâce à ce qu'on a appréhendé du discours et grâce à ses propres connaissances), l'anticipation et la segmentation ou découpage sémantique (chunking). Les secondes portent sur l'énoncé de départ (on a alors affaire au transcodage et aux transformations syntaxiques) et sur l'énoncé d'arrivée (en jouant sur l'écart chronologique entre l'orateur et l'interprète ou EVS: ear-voice span, tantôt pour réduire l'effort de mémoire, tantôt pour en accroître le besoin afin de diminuer le risque d'erreur dans la reformulation; en condensant, en ajoutant des segments de discours, etc.). En plus, Kalina définit des stratégies stylistiques et de présentation pour améliorer la communication (plutôt que pour régler un problème particulier) – l'interprète s'efforçant alors de contrôler ses pauses, son intonation, son débit, sa prosodie, etc.
Parmi ces stratégies de production, l'auteure propose aussi des stratégies de secours quand les autres stratégies ont failli, telles que la compression, la neutralisation, l'omission, l'approximation et les diverses formes de réparation (repair). On admettra que ces dernières stratégies sont de nature variable: lexico-sémantique, pragmatique, sociolinguistique ou interactionnelle. Pour Setton (1999: 50-53), les quatre stratégies en ligne (dans le flux du travail en cabine) les plus fréquemment citées seraient: attendre, pour englober davantage d'input avant d'interpréter; temporiser (stalling) ou gagner du temps sans rendre les auditeurs mal à l'aise; découper l'énoncé en unités cohérentes (ou saucissonnage, réalisable à divers degrés selon les langues employées); et anticiper, sur la base de ce qu'on peut prédire du développement syntaxique, rhétorique du discours de départ et sur la base des informations, des arguments qui sont présentés. Malgré la fréquence d'analyse de certaines stratégies, comme l'anticipation et la compression (voir par exemple Chernov 2004), le statut théorique du concept de stratégie, dans une tâche cognitivo-linguistique comme la simultanée, demeure imprécis; en outre, son identification s'appuie sur des outils méthodologiques sujets à caution
(introspection, rétrospection, transcription). Dans ces conditions, l'apprentissage puis le passage de ces stratégies explicites à des compétences implicites, quasi automatiques, demandent encore largement à être validés (id.: 125). Pöchhacker (2004: 132-136) rattache les stratégies essentiellement au processus d'interprétation (on-line strategies) lié à l'attention, à la compréhension, aux structures des langues en présence – se focalisant par exemple sur les stratégies d'anticipation, de compression, en rapport avec le travail de la mémoire, la complexité et le débit du discours à interpréter. Mais ces stratégies doivent être complétées par les normes d'attente des auditeurs et les conventions de communication en culture d'arrivée qui vont orienter également certains choix de l'interprète – normes qui ne sont pas sans rappeler les lois de Gile et qui induisent les stratégies de condensation, d'adaptation, etc. (Schjoldager 1995). Pöchhacker reste plutôt elliptique sur les stratégies hors ligne (off-line strategies) qui précèdent le processus même de l'interprétation, c'est-à-dire les stratégies de préparation à une conférence (collecter la terminologie, annoter des documents, etc.). Sans décrire davantage d'autres références (comme par exemple Kirchhoff 1976, Gernsbacher et al. 1997, Al-Khanji et al. 2000, Napier 2004, Bartłomiejczyk 2006), on peut constater que les catégories de stratégies tendent à être labellisées de façon assez identique mais que, comme en traduction, ces catégories ne se recoupent que partiellement d'un auteur à l'autre, parfois sous la même étiquette (par ex. addition), parfois sous des étiquettes différentes. Comme stratégies d'ensemble, on a l'anticipation, la visualisation, l'association personnelle (en fonction de son propre bagage).
Comme stratégies de résolution de problème ou tactiques, on a l’addition (explicative), l’approximation, la restructuration des informations, la compression, l’inférence, la temporisation, le rendu littéral, l’omission (depuis l’abandon d’une répétition à la réduction de redondances, en passant par la suppression de certains segments), la paraphrase, les différentes formes de réparation, le transcodage, l’emprunt, la simplification, la généralisation, etc. Quelques-unes de ces options pourraient certainement être regroupées, comme stratégies de réduction (omission, interprétation littérale, simplification), ou comme stratégies de réalisation (approximation, addition, compression) mais les divers types d’omission sont aussi des réalisations. Il resterait non seulement, comme en traduction, à approfondir les critères de classification, les chevauchements entre les taxinomies proposées, mais aussi à confronter les stratégies communes ou du moins désignées par les mêmes termes entre traduction et interprétation.
7. Que signifie-t-on?

Majoritairement, les stratégies en traductologie apparaissent liées à un processus, attachées à un problème, orientées vers un but, analogues aux tactiques (cf. section 2). Qu'en est-il exactement? En nous appuyant sur les distinctions binaires de Chesterman (2005), tentons d'approfondir les éléments définitoires et les enjeux du concept. Il y a d'abord l'opposition entre processus et résultats. Si certains comme Nida (1964: 226-229) et Jääskeläinen (1993, 1999) insistent sur les opérations de résolution de problème, d'autres mettent l'accent sur les résultats obtenus, en comparant notamment le texte de départ (TD) et le texte d'arrivée (TA) et en assumant des degrés d'équivalence entre eux. C'est le cas de Vinay & Darbelnet (1958), de Catford (1965), de Delisle (1993), de van Leuven-Zwart (1989-1990) – d'où l'établissement possible de relations traductionnelles ou catégories de transformation (shifts), comme par exemple chez Schjoldager (1995), s'interrogeant sur l'interprétation simultanée et inspirée par Delabastita (1989): elle distingue alors entre répétition (relation formelle entre un élément de TD et un de TA), permutation (élément de TA placé à un autre endroit que dans TD), addition (le TA ajoute une information), omission et substitution. Cette dernière peut être équivalente (un élément du TD est rendu de manière fonctionnelle), paraphrastique, spécifiante (le TA explicite), généralisante (un élément du TA contient moins d'information que l'élément du TD correspondant), chevauchante (un autre point de vue est introduit en TA) ou peut être une substitution proprement dite (divergence entre TD et TA). Quelques-uns hésitent et proposent deux termes: c'est le cas de Molina & Hurtado (2002: 508-509), qui tentent de différencier entre techniques, comme résultats obtenus au niveau micro-textuel, et stratégies comme mécanismes pour trouver des solutions à des problèmes de traduction.
Clearly, the terms chosen are often ambiguous, denoting sometimes an action (e.g. compression, reduction), sometimes a result (e.g. calque, translator’s note). The second dichotomy bears on the text as a whole versus a particular problem (global vs. local: cf. Séguinot 1989, 1991). In the first case, decisions are to be related to the skopos, initial norms, laws (Toury 1995) and translation universals: choices, recommendations and directives are specified by publishers, commissioners and clients, for example between a rather literal, adequate translation and a rather free, acceptable, fluent one; between a complete and a partial translation; between exoticization and domestication or naturalization (Venuti 1995); between semantic and communicative translation (Newmark 1988); between documentary and instrumental translation (Nord 1991, 1997); between direct and indirect translation (Gutt 1991); between overt and covert translation (House 1997); between minimal and maximal translation (Hatim & Mason 1997). These global strategies or macro-strategies, socio-cultural in nature, which will affect choices at the textual and cognitive micro-level (local strategies, mini-strategies or, better, tactics) at different phases of the translation, produce, legitimize and justify a certain type of text in relation to a cultural identity to be constructed, preserved, reproduced, assimilated, transformed, dismantled, etc. The third opposition contrasts problem-solving with routine. The latter is said to have become intuitive, an automatism, whereas the former is conscious and deliberate (Lörscher 1991: 76). In both cases we are at the linguistic-cognitive level: thanks to their skills and experience, translators accumulate solutions to local problems. Two questions then arise: what should be understood by “problem”, and at what point in the translation process do such problems occur? We have seen (Section 3) that systematic strategies or solutions are often said to apply to recurrent (linguistic, pragmatic, textual, cultural) problems, out of context. But various moments in a translator’s work can be distinguished, and their problems are not exclusively linguistic. Hence the possibility of envisaging strategies of analysis (to understand the source document), strategies of documentary and terminological research, strategies of production (Chesterman 1997: ch. 4), including those of compensation or repair (in interpreting, cf. Section 6), strategies of organization (of one’s time, rates and working tools), strategies of training (to cope with technological change, for example), strategies of rereading and revision, and strategies of distancing (Chesterman & Wagner 2002: 68–72). Translation proper, in the narrow sense, is preceded and followed by efforts that require oriented choices and hence decisions of a strategic and tactical nature.
By dint of working on the same type of text or with the same commissioner, translators may turn some of their strategies into routines. But this does not imply that such ready-made solutions are forever unconscious: even where there is no apparent problem, a choice and then a decision must be made; the absence of a problem does not entail non-strategic behavior. There are degrees of awareness of the translation situation and of the strategies followed. If need be, for instance under pressure from a client’s request or from a researcher using concurrent verbalization (think-aloud protocol), translators may be led to justify what they did and why, even though during the work itself the decision (and its justification) never surfaced as a conscious dilemma. A course of conduct (automatic, habit-bound), a sign of professionalization, can always turn back into action (proceeding from an intention and depending on a declared, goal-directed project). By analogy, we drive a car almost automatically, without really being aware of all our movements and decisions, until the moment we have to answer a mobile phone or find ourselves with a broken arm: what was routine becomes deliberate again.
What about problems (Toury 2002)? Are they as objective, independent of the translator’s level of competence and working conditions, as Nord (1991: 151, 158–160) suggests? A problem may be defined as a difficulty for someone in a given situation, calling for one or more ways of finding appropriate solutions. This definition moves away from the absolute, permanent problem, as if a given linguistic feature or cultural element always posed a problem (cf. Sections 3 and 5). The “someone” in question may be the translator (novice or experienced), the intended receivers (readers or viewers with varying knowledge, expectations and needs), the client (publisher, film distributor, TV company, marketing manager, advertising agency, etc.)… or the researcher. What the translator anticipates as a problem for viewers is not necessarily a problem for the translator: he may, for example, easily recognize the stereotyped irony attached to Belgians in a line of a French film, and also know that in the United States those connotations will not be shared; hence the solution of translating with “Polish”, which is used in similar American jokes. The translation of a wordplay in a specific context may be carried out without difficulty and yet puzzle a translation scholar.

8. Proposal

From the foregoing, it appears that the concept of strategy usually operates in relation to that of process (a more or less controlled set of logical acts deployed to reach a goal), that of norm (orienting choices), that of situation or context (a set of conditions and constraints that makes it possible to exclude certain options), and that of problem. And yet many writings tend to obscure these conceptual relations and confine themselves to the strategy/problem pairing alone.
Moreover, certain distinctions, not always drawn, are operational and help to specify the concept of strategy (and of tactic):
– the level of intervention, which may be global (affecting the whole text to be translated) or local (the rhetorical, stylistic, lexical, semantic or pragmatic micro-level);
– the translator’s degree of awareness (the psycho-cognitive dimension): translators must first recognize, identify and specify the problem to be solved, verbalizing it up to a point (at least for themselves); they then look for solutions (preliminary, temporary, alternative, acceptable) before taking a decision, opting for a single, always justifiable solution (sometimes internalized, quasi-automatic, routine) (Lörscher 2002). Under the pressure of time and of their working conditions, they do not envisage or compare all possible solutions: their competence, prior experience, knowledge, ideology and ethics already help them select what will most probably be relevant. Between the strategy (or tactic) that has become routine, unconscious, repressed; the pre-conscious one (unnamed but potentially available); and the conscious one (weighed in its dimensions and possible effects), the translator never ceases to be active: no armchair strategist here.
– Another useful distinction: when a decision must be taken, do we consider the final solution, the result, or the intermediate solutions arrived at during the translation process itself, including for example documentary and terminological research, checking with a specialist, and anticipating the consequences of the choice to be made? We have seen (Section 7) that strategies may apply before, during and after the translation contract to be fulfilled. Yet they coincide neither with objectives (Section 2) nor with the effects they bring about: one may want to domesticate or exoticize one’s translation, but it will not necessarily be perceived as naturalized, target-oriented, ethnocentric, or as exocentric, source-oriented.
– A number of factors must be considered in grasping strategies and tactics, factors which allow or disallow certain options and which set priorities: borrowing and inversion, for instance, depend on the structures and norms of the languages brought into contact. Expectancy, accountability, communication and relation norms (Chesterman 1997: 76–77, 113) also help in selecting these strategies and tactics. Finally, translators’ strategic decisions do not depend on their goodwill alone (an illusion that sometimes colors translators’ discourse): they are also subject to the directives and choices imposed by the client, the author, the publisher, the journalist, the expert, the sound engineer, the sales manager, the advertising agent, etc. (Buzelin 2007).
These decisions appear as the final outcome of a rational activity, with its risks, costs, benefits, limitations and alternatives. We finally propose to differentiate between what happens at the level of the process and what happens at the level of the result. In the process dimension, a distinction can be drawn between strategies (global or macro-strategies) and tactics (micro-strategies, local strategies): the former are choices to be made in a given situation, or in view of certain probabilities, so as to reach a given goal (for example, making the translated text readable and fluent according to the norms and conventions of the target language/culture, for a well-defined audience, even if that means reorganizing the text and adapting several of its pieces of information). These strategies will orient, if not determine, the course and pace of the translating or interpreting operations, that is, the tactics deployed by the translator or interpreter (the cognitive, linguistic and behavioral aspects of their work). At the level of results, if we want to observe and describe the relations between ST and TT, we may speak of a solution (Zabalbeascoa 2000: 122), both as the resolution of a problem following a set of decisions and actions, and as the outcome of weighing various possibilities, leading to a relevant, acceptable, legitimate choice in a given situation. In sum, and compared for example with Chesterman’s (2005) proposals, we obtain three distinct terms corresponding to three neighboring concepts:

Proposal | Chesterman (2005)
Strategy (global/macro-strategy) | Method (general way of translating)
Tactic (local/micro-strategy): conscious | Strategy (problem-solving/cognitive way to solve a problem at different phases in the translating process)
Tactic (local/micro-strategy): automatized (routine) | Technique (textual/linguistic procedures)
Solution (for the differences and similarities between ST and TT) | Shifts (results of a procedure): “observable as kinds of differences between target and source”; cf. “equivalence” for similarities
The polysemic confusion surrounding “strategy”, as it manifests itself today in many writings, should be overcome by a more assured, more stable metalanguage. How far will this development be taken? What is at stake, among other things, is the communication of Translation Studies with other disciplines, and the training of future translators and interpreters able to refine their choices and justify their decisions.

References

Alexieva, B. 1997. “A typology of interpreter-mediated events.” The Translator 3 (2): 153–174.
Al-Khanji, R., El-Shiyab, S. and Hussein, R. 2000. “On the use of compensatory strategies in simultaneous interpretation.” Meta 45 (3): 540–557.
Bartłomiejczyk, M. 2006. “Strategies of simultaneous interpreting and directionality.” Interpreting 8 (2): 149–174.
Buzelin, H. 2007. “Translations ‘in the making’.” In Constructing a Sociology of Translation, M. Wolf and A. Fukari (eds), 135–169. Amsterdam/Philadelphia: John Benjamins.
Catford, J.C. 1965. A Linguistic Theory of Translation. Oxford: OUP.
Chernov, G.V. 2004. Inference and Anticipation in Simultaneous Interpreting. (Russian original: 1987.) Amsterdam/Philadelphia: John Benjamins.
Chesterman, A. 1997. Memes of Translation. Amsterdam/Philadelphia: John Benjamins.
Chesterman, A. 2005. “Problems with strategies.” In New Trends in Translation Studies. In Honour of Kinga Klaudy, K. Károly and Á. Fóris (eds), 17–28. Budapest: Akadémiai Kiadó.
Chesterman, A. and Wagner, E. 2002. Can Theory Help Translators? Manchester: St. Jerome.
Davies, E.E. 2003. “A goblin or a dirty nose?” The Translator 9 (1): 65–100.
Delabastita, D. 1989. “Translation and mass-communication: Film and TV translation as evidence of cultural dynamics.” Babel 35 (4): 193–218.
Delisle, J. 1993. La traduction raisonnée. Ottawa: Presses de l’Université d’Ottawa.
Franco Aixelá, J. 1995. “Specific cultural items and their translation.” In Translation and the Manipulation of Discourse, P. Jansen (ed.), 109–123. Leuven: CETRA.
Franco Aixelá, J. 1996. “Culture-specific items in translation.” In Translation, Power, Subversion, R. Álvarez and C. África Vidal (eds), 52–78. Clevedon: Multilingual Matters.
Gambier, Y. and Van Doorslaer, L. (eds). 2007. Metalanguage of Translation. Special issue of Target 19 (2).
Gernsbacher, M.A. and Shlesinger, M. 1997. “The proposed role of suppression in simultaneous interpreting.” Interpreting 2 (1): 119–140.
Gile, D. 1995a. Regards sur la recherche en interprétation de conférence. Lille: Presses Universitaires de Lille.
Gile, D. 1995b. Basic Concepts and Models for Interpreter and Translator Training. Amsterdam/Philadelphia: John Benjamins.
Gutt, E.-A. 1991. Translation and Relevance. Cognition and Context. Oxford: Blackwell.
Hatim, B. and Mason, I. 1990 (1997). Discourse and the Translator. London: Longman.
Hervey, S. and Higgins, I. 2002 (1992). Thinking French Translation. A Course in Translation Method: French into English. London: Routledge.
Hönig, H.G. and Kussmaul, P. 1982. Strategie der Übersetzung. Tübingen: Gunter Narr.
House, J. 1997. Translation Quality Assessment. A Model Revisited. Tübingen: Narr.
Ivir, V. 1987. “Procedures and strategies for the translation of culture.” Indian Journal of Applied Linguistics 13 (2): 35–46.
Jääskeläinen, R. 1993. “Investigating translation strategies.” In Recent Trends in Empirical Translation Research, S. Tirkkonen-Condit and J. Laffling (eds), 99–120. Joensuu: University of Joensuu.
Jääskeläinen, R. 1999.
Tapping the Process: An Exploratory Study of the Cognitive and Affective Factors Involved in Translating. Joensuu: University of Joensuu.
Jones, F. 1989. “On aboriginal sufferance: A process of poetic translating.” Target 1 (2): 183–189.
Kalina, S. 1998. Strategische Prozesse beim Dolmetschen: Theoretische Grundlagen, empirische Fallstudien, didaktische Konsequenzen. Tübingen: Gunter Narr.
Khanjankhani, M. and Mohammad, K. 2005. “On cultural distance and translation strategies.” Translation Studies 2 (11): 39–68.
Kirchhoff, H. 1976. “Simultaneous interpreting: Interdependence of variables in the interpreting process, interpreting models and interpreting strategies.” In Pöchhacker and Shlesinger (eds) 2002, 110–119.
Kwiecinski, P. 2001. Disturbing Strangeness. Foreignisation and Domestication in Translation Procedures in the Context of Cultural Asymmetry. Toruń: Wydawnictwo Edytor.
Leppihalme, R. 1997. Culture Bumps. An Empirical Approach to the Translation of Allusions. Clevedon: Multilingual Matters.
Leppihalme, R. 2001. “Translation strategies for realia.” In Mission, Vision, Strategies and Values. A Celebration of Translator Training and Translation Studies in Kouvola, P. Kukkonen and R. Hartama-Heinonen (eds), 139–148. Helsinki: Helsinki University Press.
van Leuven-Zwart, K.M. 1989/1990. “Translation and original. Similarities and dissimilarities, I and II.” Target 1 (2): 151–181 and 2 (1): 69–95.
Levý, J. 1967. “Translation as a decision process.” In To Honor Roman Jakobson for his 70th Birthday, Vol. 3, 1171–1182. The Hague: Mouton.
Lörscher, W. 1991. Translation Performance, Translation Process and Translation Strategies. A Psycholinguistic Investigation. Tübingen: Gunter Narr.
Lörscher, W. 2002. “A model for the analysis of translation processes within a framework of systemic linguistics.” Cadernos de Tradução 10 (2): 97–112. Also available at http://www.cadernos.ufsc.br/
Mailhac, J.-P. 1996. “The formulation of translation strategies for cultural references.” In Language, Culture and Communication in Contemporary Europe, C. Hoffmann (ed.), 132–151. Clevedon: Multilingual Matters.
Malone, J.L. 1986. “Trajectional analysis. Five cases in point.” Babel 1: 13–25.
Malone, J.L. 1988. The Science of Linguistics in the Art of Translation. New York: State University of New York Press.
Mayoral Asensio, R. 2000. “La traducción de referencias culturales.” Sendebar 10–11: 67–88.
Molina, L. and Hurtado Albir, A. 2002. “Translation techniques revisited. A dynamic and functionalist approach.” Meta 47 (4): 498–512.
Napier, J. 2004. “Interpreting omissions. A new perspective.” Interpreting 6 (2): 117–142.
Nedergaard-Larsen, B. 1993. “Culture-bound problems in subtitling.” Perspectives 1 (2): 207–241.
Newmark, P. 1995 (1988). A Textbook of Translation. London: Prentice Hall.
Nida, E. 1945. “Linguistics and ethnology in translation problems.” Word 1 (2): 194–208.
Nida, E. 1964. Toward a Science of Translating. Leiden: Brill.
Nord, C. 1991. Text Analysis in Translation. Theory, Methodology, and Didactic Application of a Model for Translation-Oriented Text Analysis. Amsterdam/Atlanta: Rodopi.
Nord, C. 1997. Translating as a Purposeful Activity. Manchester: St. Jerome.
Pedersen, J. 2007. “Cultural interchangeability. The effects of substituting cultural references in subtitling.” Perspectives 15 (1): 30–48.
Pöchhacker, F. 2004.
Introducing Interpreting Studies. London: Routledge.
Pöchhacker, F. and Shlesinger, M. (eds). 2002. The Interpreting Studies Reader. London: Routledge.
Popovič, A. 1970. “The concept of shift of expression in translation analysis.” In The Nature of Translation, J. Holmes et al. (eds), 78–87. The Hague: Mouton.
Popovič, A. 1975. Teória umeleckého prekladu. Bratislava: Tatran.
Särkkä, H. 2007. “Translation of proper names in non-fiction texts.” Translation Journal 11 (1), January 2007 (online).
Schjoldager, A. 1995. “An exploratory study of translational norms in simultaneous interpreting: Methodological reflections.” Hermes, Journal of Linguistics 14: 65–87.
Séguinot, C. 1989. The Translation Process. Toronto: HG Publications, York University.
Séguinot, C. 1991. “A study of student translation strategies.” In Empirical Research in Translation and Intercultural Studies, S. Tirkkonen-Condit (ed.), 79–88. Tübingen: Gunter Narr.
Seleskovitch, D. 1968. L’interprète dans les conférences internationales: problèmes de langage et de communication. Paris: Minard.
Setton, R. 1999. Simultaneous Interpreting. A Cognitive-Pragmatic Analysis. Amsterdam/Philadelphia: John Benjamins.
Tomaszkiewicz, T. 2001. “Transfer des références culturelles dans les sous-titres.” In (Multi)Media Translation. Concepts, Practices and Research, Y. Gambier and H. Gottlieb (eds), 237–247. Amsterdam/Philadelphia: John Benjamins.
Toury, G. 1995. Descriptive Translation Studies and Beyond. Amsterdam/Philadelphia: John Benjamins.
Toury, G. 2002. “What’s the problem with ‘translation problem’?” In Translating and Meaning, Part 6, B. Lewandowska-Tomaszczyk and M. Thelen (eds), 57–71. Maastricht: Hogeschool Zuyd, School of Translation and Interpreting.
Venuti, L. 1995. The Translator’s Invisibility. A History of Translation. London: Routledge.
Vermes, A.P. 2003. “Proper names in translation. An explanatory attempt.” Across Languages and Cultures 4 (1): 89–108.
Vinay, J.-P. and Darbelnet, J. 1958. Stylistique comparée du français et de l’anglais. Paris: Didier. Translated and edited by J.C. Sager and M.J. Hamel as Comparative Stylistics of French and English. A Methodology for Translation. Amsterdam/Philadelphia: John Benjamins, 1995.
Wilss, W. 1983. “Translation strategy, translation method and translation technique: Towards a clarification of three translational concepts.” Revue de phonétique appliquée 66 (8): 143–152.
Zabalbeascoa, P. 2000. “From techniques to types of solutions.” In Investigating Translation, A. Beeby et al. (eds), 117–127. Amsterdam/Philadelphia: John Benjamins.
On omission in simultaneous interpreting Risk analysis of a hidden effort Anthony Pym
Universitat Rovira i Virgili, Tarragona, Spain

One of the long-standing debates in studies on simultaneous interpreting would pit “contextualists”, who see interpreters’ performances as being conditioned by contextual determinants, against “cognitivists”, who analyze performances in terms of cognitive constraints that would be the same for all professionals, regardless of context. Gile’s Effort Models would seem to be very much in the cognitive camp. However, modeling of the resources used when interpreters make omissions suggests that cognitive management may actively respond to contextual factors such as the aims of the discourse, the strategies of the speakers, and the variable risks of the text items. Analysis of the data from one of Gile’s experiments indicates that the cognitive management of omissions is indeed highly variable. Omissions that are low-risk for the aims of the discourse occur in a constant background mode, almost without source-text stimuli, such that in repeat performances they are found with similar frequency but in different places. On the other hand, omissions that incur high levels of risk tend to be repaired in repeat performance. This suggests that simultaneous interpreters strive for non-omission only in the case of high-risk contextualization. Further, since their management skills must incorporate enough contextualization for the necessary risk analysis to take place, the cognitive strategies of interpreters should be modeled in the same terms as those of all other linguistic mediators.
Keywords: Translation Studies, Interpreting Studies, conference interpreting, communication strategies, risk analysis, omission
1. Cognition vs. context in Interpreting Studies

Perhaps the most cited part of Daniel Gile’s theoretical work is his modeling of the way different efforts are distributed in interpreting. Gile’s Effort Models portray
the key ways in which simultaneous interpreting is different from consecutive interpreting, and the ways both of those differ fundamentally from written translation. The models are also able to say things about what must be happening in the interpreter’s brain. Recent tendencies, however, emphasize the work of the interpreter as being grounded in a sociocultural context; it is claimed that much of what is done can only be understood within that context, rather than through cognitive modeling alone. More specifically, researchers working on context-dependent aspects of interpreting have sought to base their insights on appeals to the political and cultural importance of the settings (Cronin 2002), motivated shifts of footing (Diriker 2004), judicious omissions as ethically enhancing coherence (Viaggio 2002; Gumul and Łyda 2007), and surveys of user expectations (for example, Kurz 1993; Moser 1996; Pöchhacker 2002). That is the debate we would like to address here. Gile’s Effort Models seek to describe interpreting as an activity in itself, with its own principles and constraints. Contextualists are more prone to consider factors that influence all modes of translation, if not all communication, with little methodological concern for the specificity of interpreting. The result is more a difference in perspective than a full-frontal confrontation. For example, when interpreters shift their discursive footing (speaking momentarily in their own first person, for example), it is ostensibly due to the nature of the situation and not to any particular problem in the distribution of cognitive efforts (cf. Diriker 2004). Or again, when surveys indicate that “completeness” is not the highest concern for the users of interpreting services (who might be content enough with “essential information” and a “pleasant voice”), the finding would seem not to support cognitive models that would have all interpreters trying to be as complete as possible in their renditions.
Seen from this perspective, Gile’s models might seem to deny the context-sensitive nature of interpreting, particularly simultaneous interpreting, and instead present this professional activity as a mode of expertise that would essentially be the same no matter what the social context. Such would appear to be one of the most vital current debates in Interpreting Studies. We ask, quite simply, if simultaneous interpreting really is as independent as the Effort Models would suggest. We look on that debate from the outside, as observers who have spent considerable time analyzing written translations (towards the end of this paper we explain the reasons behind our interest in interpreting). In principle, we are quite used to seeing translation as a sociocultural phenomenon, and we should thus generally be on the side of the contextualists in Interpreting Studies. At the same time, however, we remain a little perturbed by what seems to be a dialogue of the deaf. On the one hand, most contextualists privilege the products of consecutive interpreting, dialogue interpreting of various kinds, and the politicized contexts of community interpreting, and if they do turn to conference interpreting it is to survey user expectations,
perceptions of quality as usability, and snippets of hors texte. On the other, research on simultaneous interpreting continues to work from products (measuring quality without reference to context) and from models of processes (using whatever insights can be gained from neurology and cognitive science), mostly without reference to variable settings. Researchers seem to be talking about quite different objects here, with neither camp really engaging in the territory of the other. We would like to situate that debate on some kind of common ground. We will thus take data from a simple experiment in which Gile claims to vindicate his Effort Models for simultaneous interpreting, and we will try to re-interpret the data in a context-sensitive way. Our hope is that this minor intervention will encourage others to think critically about what context is, and about the way it might interact with interpreting as a set of independent professional skills.

2. Simultaneous interpreting as a separate land

Gile (1995: 169) proposes that simultaneous interpreting (SI) requires the deployment of effort for four general purposes: Listening and Analysis (L), Short-term Memory (M), Speech Production (P), and Coordination of these Efforts (C). We thus get the equation:

SI = L + P + M + C

Gile then assumes that since the available cognitive processing capacity is limited, the sum of these Efforts must be less than the available processing resources (which logically means that no one Effort can be greater than the available resources). The task of the interpreter is partly to distribute resources in an efficient way across these four different types of Effort. The model is thus able to describe errors and infelicities in terms of processing resources that are either too limited or inefficiently distributed.
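Read as an inequality over a shared resource, the capacity constraint can be sketched in a few lines of Python. This is only an illustrative sketch: the function name, the numeric values and the 0-to-1 capacity scale are our own assumptions, not part of Gile’s formalism.

```python
# A minimal sketch of the capacity constraint behind Gile's Effort Model
# for simultaneous interpreting (SI = L + P + M + C). Names and numbers
# are illustrative assumptions, not Gile's own notation.

def is_saturated(efforts, capacity=1.0):
    """Return True when the summed Efforts exceed the available
    processing capacity, the condition under which the tightrope
    hypothesis predicts errors and omissions."""
    return sum(efforts.values()) > capacity

# Listening/Analysis (L), Production (P), Memory (M), Coordination (C)
comfortable = {"L": 0.35, "P": 0.30, "M": 0.25, "C": 0.05}  # sum = 0.95
print(is_saturated(comfortable))   # False: just under saturation

# An over-elegant reformulation raises the Production Effort, leaving
# too little capacity for Listening on the next incoming segment.
strained = dict(comfortable, P=0.45)                        # sum = 1.10
print(is_saturated(strained))      # True: the tightrope is crossed
```

The point of the sketch is simply that the constraint is global: any local increase in one Effort must be paid for by the others, which is what makes interpreters working near saturation vulnerable to small perturbations.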
For example: “the interpreter may try too hard to produce an elegant reformulation of segment A, and therefore not have enough capacity left to complete a Listening task on an incoming segment B” (1995: 171). The model is also able to describe the way in which simultaneous interpreting differs from consecutive interpreting, since the different time constraints make different types of processing capacity available. It is thus correct to talk about Gile’s Effort Models, in the plural. The theoretical usefulness of the Effort Models is certainly their potential to account for the way different kinds of errors seem to occur in the different modes of translation. Here we are nevertheless interested in the way they describe the specificity of simultaneous interpreting as an activity. The following features are assumed to typify the way simultaneous interpreters work:
1. Gile notes that in simultaneous interpreting, there is “the general rule that Efforts deal with different segments” (1995: 170). For example, “[...,] Production acts on speech segment A, while Memory acts on segment B which came after A, and Listening and Analysis acts on segment C, which came after B” (1995: 170). Evidence for this is found in the many studies that note the simultaneity of source-text input and target-text output in simultaneous interpreting, reaching as much as 80% according to some accounts (see the summary in Chernov 2004: 13-14). Nothing could be more specific to simultaneous interpreting than this simultaneity. 2. A consequence of simultaneity would be what Gile terms the “tightrope hypothesis”. This posits that interpreters work “close to processing capacity saturation, which makes them vulnerable to even small variations in the available processing capacity for each interpreting component” (1999: 153). The tightrope hypothesis is deemed necessary in order to explain “the high frequency of errors and omissions that can be observed in interpreting even when no particular technical or other difficulties can be identified in the source speech […]: if interpreters worked well below saturation level, errors and omissions should occur only when significant difficulties came up in the source speech” (Gile 1999: 159). This hypothesis is supported by the experiment that we will investigate below. 3. The image of simultaneous interpreters multitasking on a tightrope is enhanced by neurological evidence that the normal lateralization of language functions does not apply in simultaneous interpreting, where cerebral activity would appear to light up like Disneyland (our very loose paraphrase of Fabbro and Gran (1997: 21-24)). The special extent of this activity would support the notion that the interpreter is concerned first and foremost with managing the competing cognitive processes. 4. 
Gile invites the interesting assumption that conference interpreters do not distribute their resources in terms of the specific communication act as social discourse. Such would seem to be the implication of his statement that, “[w]hile they are interpreting, interpreters have to concentrate on everything the speaker says, whereas delegates can select the information they are interested in” (1995: 165). This is coherent with the assumption that “[t]he interpreters’ relevant extralinguistic knowledge, and sometimes the terminological part of their linguistic knowledge, are less comprehensive than the delegates’” (1995: 165). That could mean that interpreters are unable or unrequired to distribute their resources in terms of the context formed by the delegates’ wider knowledge. This would in turn explain features like the relative lack of explicitation in some simultaneous interpreting situations (see Shlesinger 1989, cited in Pym 2007).
5. Hence reference to the “norm of completeness” in simultaneous interpreting, which basically states that interpreters should attempt to render everything that is said. This “norm of completeness” and the assumption of simultaneity (our point 1 above) mutually reinforce each other, since if the interpreters were not trying to be complete, we would have limited reason to assume that they are listening while they are speaking.
6. As a consequence of this “norm of completeness”, and indeed of all the above assumptions, any omissions an interpreter makes can be seen as an indication of lower quality. Thus, in the study that we are going to look at, Gile assesses “errors and omissions” as one single category, which seems to imply that all omissions are to be seen on the same level as errors, i.e. as indicators of lesser quality. The assumptions of simultaneity and completeness, along with the other points, thus configure an image of simultaneous interpreting as an activity that is not context-dependent and can therefore be studied in terms of cognitive science.

3. The moot point of omission

All the above points would seem to concern, to varying degrees, the question of omission. From the ideal of non-omission it follows that interpreters have to deploy different Efforts on different segments, simultaneously: hence the tightrope, hence the complex cerebral activity, hence the Effort Models, hence the specificity, as we have seen. We have also noted that, for Gile, some degree of quality in simultaneous interpreting can be indicated by non-omission. Gile does of course have many more balanced and subtle things to say about quality and the way it is perceived, but let us push the point by staying with the general position outlined so far: good interpreters should not leave things out, not just in order to produce a tour de force with each rendition but because, in principle, they do not know enough about the context to make such decisions.
As is frequently said, they are experts in interpreting, not in all the various subject matters they have to deal with. Note that on this point (and indeed on many others) Gile exhibits a strong belief in the division of labor, such that one should not presume to act in an area in which one is not fully "expert". This ideological option contradicts the similarly strong ideal of the Renaissance scholar and citizen, able to understand the basics of many areas, and prepared to act with common sense and careful curiosity in numerous fields of communication. Gile's interpreters, it seems, are not part of that particular Renaissance.

The question of omission is of interest here for several reasons:

1. Written translators, who have relatively vast expanses of problem-solving time, are surely the ones who should not be allowed to omit, if indeed
Anthony Pym
processing capacity were the only criterion. It seems strange, if not paradoxical, to have non-omission used to characterize the specificity of simultaneous interpreting, where the time pressure should, if anything, condone a certain amount of omission, at least of a certain kind.

2. Conference interpreters do of course use omission. False starts, hesitations and unnecessary repetitions are routinely omitted, basically since such improvements in the quality of discourse are seen as part of the interpreter's service function. Such omissions are nevertheless considered trivial; they are certainly not of the kind that could be used to evaluate a rendition negatively. On the other hand, there is a rich range of compressions, generalizations and implicitations, both syntactic and semantic, by which interpreters habitually buy themselves free spaces. It is not altogether clear at which point those processes involve such a clear loss of semantic content that they should be called omissions. Different analysts draw the borderline differently. The moot point is the degree to which something in the source text might be considered implicit in the context, and thus dispensable. In this way, the question of legitimate omission is closely related to the role of context. If we are to evaluate omissions, the cognitive dimension requires the contextual.

3. Gile's models are of efforts, but his observations are only of the efforts embodied in products. One can see an input and suppose there is a comprehension effort corresponding to it; one can see an output and say that work has been done to produce it. But can one see the effort expended in omission? (The same simple problem appears when we ask how much effort is actually expended on Short-term Memory and Coordination, both of which Efforts are assumed to exist but neither of which can be measured directly.) If there is an Effort corresponding to omission, which of Gile's categories should it go under? How should we know?1
1. The epistemological problems of testing the existence, number and separation of the Efforts would appear to go well beyond the question of omission. For instance, the overlapping of efforts would actually appear to be difficult to measure, since there seems to be no direct way of simultaneously testing the type of the efforts that are presumed to be simultaneous. How can we really ascertain that cognitive resources are deployed on two or three tasks at the same time, rather than in quick succession? The problem is rather like the classical principle of uncertainty ensuing from Heisenberg's problems when observing sub-atomic particles, the speed and directionality of which could not be measured at the same time. Indirect measures such as EEG mapping, the tracing of "failure sequences", and a certain degree of introspection can indeed use triangulation to project the duration and nature of different cognitive activities, but not with any certainty. Some part of what we observe may still be wishful thinking. To this extent, Gile's models function as what Andrew Chesterman terms "interpretative hypotheses" (we prefer the term "model" for the same thing, as in Gile – hypotheses should be testable), which Chesterman also likens to the problems of observing sub-atomic particles (in this volume). We believe the principle of uncertainty applies to the observation of translation processes across the board, lending all these models an interpretative aspect.

4. Gile's models assume that when cognitive resources are invested in one task, there are fewer resources available for other tasks at that time. This makes sense. In terms of game theory, interpreting would involve zero-sum results: if Speech Production wins, Listening loses. Seen in this way, the model does not allow for particularly cooperative efforts, based on the idea that it is possible for all players to win. However, the effort invested in omission might be of precisely the latter kind, since it frees up space for the other Efforts.

5. For all these reasons, it is not immediately obvious that non-omission is always a virtue. Fabbro and Gran (1997: 24), among others, see expertise in terms of gaining a certain freedom from words: "students are afraid of missing part of the original message and stick to the superficial structure of discourse, while professionals are more familiar with language switching and are flexible, relaxed and detached enough to forget words and concentrate on meaning." The question, for us, is at which point the disposition to "forget words" becomes something more precise than "forgetting about words". At which point is omission a measure of valid effort?

There is some simmering disagreement in the research community on these points. Strangely, the arguments seem to be "for" vs. "against" omission, with very few attempts actually to answer the question of "valid" vs. "invalid" omission. Sergio Viaggio, for example, is eloquent in defending the need to omit redundant information, and in explaining how intelligent use of context allows this to be done. His attack is nevertheless on

[…] an underrating of the pragmatic aspect of communication in conference interpreting – an underrating that amounts to underrating relevance and, with it, acceptability itself. It is an unavoidable if regrettable fact that students (and more than a few veterans) do not take duly into account the social import of their job: they seem to switch on automatically into a default mode of interpretation in which texts, though oral, appear suspended in thin air, come from nowhere and no one in particular and going [sic] nowhere and to no one in particular. (2002: 229)

Fair enough. What we are dealing with is apparently justified context-sensitive omission vs. the "default norm" of non-omission. But is this really an affair of "delete as much as you can" vs. "delete as little as possible"?
4. Omission and risk

The question of omission intimately concerns the question of quality, as well as context. If an omission is considered unquestionably valid (as in the case of false starts, for example), then this is surely because "high quality" is not the same thing as rendering everything in the source text. Quality, in the broadest sense, must thus be a measure of the extent to which a communication act achieves its aims, and that is precisely the direction in which we would like to take our analysis. We do not accept, at least not a priori, that the use of omissions indicates a reduction in quality, since such an assumption would answer our questions before we look at any evidence.

Our interest in this question derives from slightly different concerns. In our work on the ethics of translation (cf. Pym 1997, 2004), we have proposed that the collective effort put into any ethical communication act must be of less value than the mutual benefits derived from that communication act. This is so as to achieve cooperation between the participants. That approach has enabled us to describe translation as a relatively high-effort mode of cross-cultural communication, ideally restricted to high-reward communication acts. In subsequent models, we have proposed that translators should be able to distribute their efforts in accordance with communicative risks. This means that they would ideally work hardest on problems involving the highest risk, where risk is defined as the probability of non-cooperation between the participants. In principle, we would thus hope to connect the analysis of actual translating (as a mode of risk-based effort distribution) with a wide-ranging ethics of cross-cultural communication (where the goal is cooperation). To do this, however, we have to know something about efforts and the ways in which they are actually distributed. Hence our interest in Gile's models.
We move now to the small experiment in which Gile attempted to substantiate one aspect of his Effort Models. The experiment was designed to test whether errors and omissions correspond to points of difficulty in the source speech, or whether they can be found “even when no particular technical or other difficulties can be identified in the source speech” (Gile 1989: 159). If the latter is true, it would justify the “tightrope hypothesis”, whereby interpreters work close to saturation level and thus experience problems purely as a result of the conflicting Efforts. This would in turn support the argument that interpreters’ performances are conditioned more directly by cognitive processing than by contextual factors. In Gile’s experiment, ten subjects listened to a question and interpreted the answer to that question, from English into French. Here we reproduce the beginning of the tandem (the rest of the interpreted answer follows a little later):
Question: You suggested that through Kodak you can manipulate technology and fit in with this information revolution. Can you be more specific about the kind of products that Kodak will eventually be able to produce?

Answer: I'm sure my... I don't even know these people yet but I know scientists and engineers well enough to know that they would not be very happy if I preannounced products, but since I don't know all about what the products are, I can speak loosely I guess.

Gile makes no further comment on the setting of these texts, nor on any instructions given to the interpreters. For our concerns, however, we must assess the nature of this discourse in terms of contextualization, aims, risks, and strategies, all of which can be done on the basis of the texts presented. We keep our analysis as brief as possible.

4.1 Contextualization
Contexts can be seen in many different ways, which is why there are many kinds of contextualists. Our analysis here uses the notion of the “contextualization cue”, which Gumperz defines as “any feature of linguistic form that contributes to the signalling of contextual presuppositions” (Gumperz 1982: 131, cf. Gumperz 1992). This approach means that certain features in the text allow us to presuppose or assume certain things about the way the text is being used. That interpretation then constitutes a context. Here we do not have information on the suprasegmental features that would have interested Gumperz (intonation can indicate which information is new, what missing information is required, and what might be ironic, for example). Nevertheless, a certain number of contextualization cues are certainly operative in the short piece of text that we have cited so far: one may assume that this is an interview; the interviewer seeks specifically new information about technology (no journalist needs the old); the interviewee has recently been employed by Kodak, has spoken previously, has referred to an information revolution in positive terms, and is assumed to be an expert in the field. Of course, there is no guarantee that these elements are true; they are interpretations. The world thus created can nevertheless frame a series of further specifications about communicative aims, risks and strategies, and hence about quality.
4.2 Communication aims
Given the above, one might then posit that the aim of the communication act is to establish cooperation between the two interlocutors. The interviewer ostensibly desires to obtain readily publishable knowledge that is new; the interviewee seeks to comply by suggesting new trends, although without revealing industrial secrets, and perhaps without telling lies. The interviewer should gain new text (but not too new); the interviewee should gain publicity (but only of the positive kind).

4.3 Communication risks
The risks incurred would then involve the possibility of non-cooperation. That is, the communication act would fail if industrial secrets were revealed, if the information about trends was old, boring, unlikely or false, if the interviewer were left with nothing to report, if the information were too technical to be understood by a general audience, or if the interviewee were to appear in a negative light.

4.4 Communication strategy
To manage these multiple risks, the interviewee chooses a communication strategy based on presenting personality along with only vague technical information. We thus find more enthusiasm than specific technical facts, possibly in the hope that this will distract from the basic contradiction: the interviewer fundamentally seeks information that the interviewee cannot provide. If this strategy works, the interviewer will be able to report on the person as well as the two technical indications, and multiple risks will thus be minimized.

In the absence of any specific information concerning the situation where interpreting is required, we must assume that the above analysis applies equally well to the French text to be produced.

Let us now review the items that the interpreters omitted. Here we represent the response discourses as a single, collective version, and we have struck through almost all the omissions that Gile notes as such:

I'm sure my... I don't even know these people yet but I know scientists and engineers well enough to know that they would not be very happy if I pre-announced products, but since I don't know all about what the products are, I can speak loosely I guess. I think when you look at the imaging side of Kodak, let's concentrate on that, and recognize that for the not, for the foreseeable future, as
far as capture goes, that the silver halide capture media is probably the most cost-effective, highest resolution means of capturing visual memories, or visual images, that one could ask for. So to me, you want to put that in the context of being a very effective way of getting the information to begin with, then you've got to talk about how you get that information into a digital form to use over information networks, I think you can begin to think of a whole array of possibilities. Once you start thinking in a broader context of Kodak's imaging business really being to preserve visual memories, and to communicate them, and to distribute them, in perhaps ways that are totally different than people envision today, then I'll let your imagination run off with you, cause mine sure does with me. I laid awake the last two nights thinking about those possibilities, and they're really exciting but ninety percent of my ideas may never work, but there's ten percent that will be killers.

Five omissions are not struck through here, for the simple reason that they seriously compromise coherence and/or contextual cues to the extent that the communication aim would be difficult to achieve. This concerns the phrase "can speak loosely", mention of "the imaging side of Kodak", the specification "as far as capture goes", the numerical percentages towards the end, and the reference to "killers" at the very end. Since those five elements seem important for the discourse to achieve the communicative aim, we initially see the corresponding omissions as incurring high risk. Almost all the other omissions, we claim, can be made without jeopardizing the fundamental aims of the communication act, and should thus be low-risk.2

2. There are several other ways of categorizing omissions, of course. Barik's classical distinctions were between "skipping" (a minor word omitted), "comprehension omission" (something not understood), "delay omission" (omission of a stretch of text because the interpreter has to catch up) and "compounding omission" (where the interpreter regroups elements) (Barik 1975/2002). This categorization mixes several criteria: what we can see, what we consider unimportant, and what the interpreter's reasons seem to have been. Our high/low risk categories, on the other hand, only consider the omission in relation to the communicative aim.

Here is what we find once we take out the low-risk omissions:

I don't even know these people but I know scientists would not be very happy if I pre-announced products, but I can speak loosely. When you look at the imaging side of Kodak, as far as capture goes, the silver halide capture media is probably the most cost-effective means of capturing visual memories, or visual images, that one could ask for. So to me, you want to put that in the context of
being a very effective way of getting the information to begin with, then you've got to talk about how you get that information into a digital form to use over information networks, I think you can begin to think of a whole array of possibilities. Once you start thinking in a broader context of Kodak's imaging business really being to preserve visual memories, and to communicate them, and to distribute them, in perhaps ways that are totally different than people envision today, then I'll let your imagination run off with you. I laid awake thinking about those possibilities, and ninety percent of my ideas may never work, but there's ten percent that will be killers.

The first thing to note here is that, collectively, the simultaneous interpreters are doing something quite similar to consecutive interpreters. These low-risk omissions have reduced the number of words by 23%, and they would not seem to have made the communicative aim any less attainable. The message of the Effort Models, however, is that individual interpreters do not omit to such an extent. They are collectively capable of doing it, as we have just seen, but something in their professionalism, or in their brain, keeps them from using quite so many omissions. We will now try to find out why.

5. What interpreters risk on the tightrope

The innovative part of Gile's research design was that the subjects repeated the same task later. Gile finds that "there were some new e/o's [errors and omissions] in the second version when the same interpreters had interpreted the same segments correctly in the first version" (1999: 153). We are then told that "[t]hese findings strengthen the Effort Models' 'tightrope hypothesis' that many e/o's are due not to the intrinsic difficulty of the corresponding source-speech segments, but to the interpreters working close to processing capacity saturation." Gile's argument here seems to rely on a relative absence of patterning.
If the errors and omissions do not correspond to source-text triggers (i.e. if there is no obvious causal patterning), then they must be the result of difficulties with processing capacity. This is because the experiment only envisages two kinds of causation: source text vs. things in the brain. Here we ask if there is not a third kind of causation, based on the need for risk management in the communication act. We thus seek to show that there is some degree of patterning corresponding to this third kind of cause. We do this by taking Gile’s data on omissions (not on “errors and omissions” as just one group) and categorizing them in terms of communicative risk. Gile
focuses on the performance of each individual subject, not in order to analyze interpreter styles but to emphasize that most of the errors and omissions are made by just a few interpreters, which would suggest that those subjects were weaker at managing Efforts. Our analysis, however, seeks the quantitative patterning of slightly larger numbers, so we look at the interpreters as a group. Also, since we are interested in comparing the earlier and later renditions, we have eliminated one subject for whom not all the material was available.

Here we accept Gile's identification of omissions, with a few modifications. For instance, one subject gave "chercheurs" for "scientists and engineers", which we find understandable and even elegant; Gile, however, is more demanding and classifies it as an error, along with "le monde scientifique" for the same phrase. We admit that these two renditions are not particularly exact (they were both corrected in the second version), but surely they are not out-and-out mistakes? In our analysis we have recognized that they omit one of the two terms given in English ("engineers"), albeit without major consequence, and we have therefore counted them as low-risk omissions.

Our hypotheses are then as follows:

H1: The segments that are most omitted in the first version tend to be low-risk. That is, the omissions are part of a general economy of time management, mostly as part of a general strategy of implicitation.

H2: The omissions that are restored in the second version tend to be high-risk. That is, since the second performance will allow more processing space (if only because the listening-comprehension tasks will be quicker), not so many omissions will be required. Further, the added capacity will be focused on solving the problems that are high-risk, as would seem to be rational.

H3: The new omissions introduced in the second version tend to be low-risk. That is, if the added capacity is used to produce new omissions, they will be of minor importance, as part of the general practice of implicitation.

Table 1 shows our list of omissions in the two versions, and our very broad categorization of the risks involved (see the Appendix for notes on the risk evaluations).
Table 1. Omissions and risks

Omitted segment (in bold where necessary) | Omissions in first version | Omitted in second version | Restored in second version | New omission in second version | Risk estimation
I don't even know these people yet | 1 | 0 | 1 | 2 | LOW
I don't even know these people yet | 2 | 2 | 0 | | LOW
scientists and engineers | 2 | 0 | 2 | | LOW
well enough | 1 | 1 | 0 | | LOW
well enough | 1 | 1 | 0 | | LOW
But since I don't know all about what the products are | 6 | 4 | 2 | | LOW
I can speak loosely | 2 | 0 | 2 | | HIGH
The imaging side of Kodak | 3 | 0 | 3 | | HIGH
let's concentrate on that | 2 | 1 | 1 | | LOW
For the foreseeable future | 2 | 1 | 1 | | LOW
as far as capture goes | 1 | 0 | 1 | | HIGH?
highest resolution | 7 | 5 | 2 | | LOW?
in perhaps ways that are | 2 | 3 | 2 | | LOW
cause mine sure does with me | 3 | 1 | 2 | | HIGH
The last two nights | 4 | 2 | 2 | | LOW
And they're really exciting | 3 | 2 | 1 | | LOW
ninety percent, ten percent | 1 | 0 | 1 | | HIGH
that will be killers | 7 | 4 | 3 | | HIGH?
TOTALS | 48 | 24 | 24 | 10 |
Table 2. Results

 | Omissions in first version | Omitted in second version | Restored in second version
Low-risk | 31 | 19 (61%) | 12 (39%)
High-risk | 17 | 5 (29%) | 12 (71%)
If we separate the omissions in terms of risk (ignoring the real problems of the question marks accompanying our estimations), the results are eloquent enough (see Table 2). We may now review our hypotheses:

H1: The segments that are most omitted in the first version tend to be low-risk. Yes indeed, 31 of the 48 omissions (64.5%) appear to be low-risk, as expected. They can be made without major negative effects on the quality of the communication act.

H2: The omissions that are restored in the second version tend to be high-risk. Yes, this is clearly the case: the high-risk restorations are 12 of the original 17 (71%), as opposed to 12 of the original 31 low-risk omissions (39%). This is also to be expected. The time savings due to the repeated nature of the task were invested in the high-risk areas of discourse.

H3: The new omissions introduced in the second version tend to be low-risk. Very much so – they all are. This may indeed suggest that they are not seen as serious shortcomings by the interpreters themselves.

In view of these results, we claim that the distribution of omissions is patterned in terms of high vs. low risk. That is, context-sensitive risk analysis may account for part of the decision-making processes of simultaneous interpreters.

This finding clearly does not imply that Gile's Effort Models are somehow inoperative. The explanatory power of those models remains intact. All we have done here is apply a model of the way interpreters might prioritize the problems they face. There can be no doubt that they then have to manage cognitive resources in order to solve those problems. There can be no doubt that decision-making requires both cognitive resources and contextualization. By the same token, application of the Effort Models should not imply that source-text difficulty is totally irrelevant. In our analysis, most omissions (7 each) correspond to two source-text items that would appear to present significant challenges.
The first is the phrase "highest resolution", which occurs in the stacked noun phrase "cost-effective highest-resolution", all of which demands considerable unpacking in French. The second is "that will be killers", which is an unpredictable colloquial expression requiring quick circumlocution of some kind.
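The arithmetic behind these results is simple enough to verify. The following sketch (ours, purely illustrative; the counts are those reported in the tables and prose above) recomputes the restoration percentages and the overall drop in omissions between the two versions:

```python
# Recompute the restoration percentages and the overall omission drop
# from the aggregate counts reported in this chapter (illustrative only).

first = {"low": 31, "high": 17}      # omissions in the first version, by risk
restored = {"low": 12, "high": 12}   # omissions restored in the second version
new_low = 10                         # new omissions in the second version (all low-risk)

# Omissions that remain omitted in the second version.
still_omitted = {k: first[k] - restored[k] for k in first}

# Restoration rates per risk category.
pct_restored = {k: round(100 * restored[k] / first[k]) for k in first}

# Overall totals: 48 omissions drop to 24 re-omissions + 10 new ones = 34.
total_first = sum(first.values())
total_second = sum(still_omitted.values()) + new_low
drop = round(100 * (total_first - total_second) / total_first)

# Low-risk omissions barely drop: 31 -> 19 + 10 = 29.
low_second = still_omitted["low"] + new_low
low_drop = round(100 * (first["low"] - low_second) / first["low"], 1)
```

On these figures the high-risk restoration rate (71%) is almost double the low-risk rate (39%), while the low-risk total falls by only 6.5%, which is the patterning the hypotheses above predict.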
6. Why is non-omission an ideal?

For Gile, the main usefulness of the Effort Models is to characterize the specificity of simultaneous interpreting. Our risk analysis of omissions might seem to challenge that purpose, since we have shown that the same logic that justifies omissions in consecutive interpreting can be found, albeit to a much lesser extent, in simultaneous interpreting. Our approach would thus tend to level out the land of translation, of which the modes of interpreting are only parts, different in degrees but not in nature.

Gile is nevertheless fundamentally right to characterize non-omission as an ideal of simultaneous interpreting, and we are not saying this just because we write in homage to Daniel Gile. He is right on at least two counts. First, as a professional conference interpreter using input from two other professional conference interpreters (this is how his identification of "errors and omissions" was verified), he was voicing an ideal from within the profession. To posit that omissions are in the same bag as errors is not evident to us, on the outside of the profession, speaking from a concern with communication acts in general, but it would seem to be transparently logical within the profession, and to that extent cannot be wrong. Second, the arithmetic clearly shows that, when simultaneous interpreters can avoid omissions, they tend to do so. That is, when the interpreters had added capacity (in the second version), they spent it on reducing the total number of omissions from 48 to 34, a drop of 29%. The interpreters thus displayed recognition of what they should be doing, even when they do not always do it (it is for this reason that repeat performances should become a basic research method for the identification of norms).

That said, we would like our risk analysis also to be partly right.
Although the interpreters invested their Efforts in restoring 71% of the high-risk omissions in the second version, as would appear rational, the same cannot be said of the low-risk omissions: there the total drops from 31 low-risk omissions in the first version to 19 in the second, to which we must add the 10 new low-risk omissions, giving a total drop from 31 to 29, only 6.5%, which is scarcely a plunge. That is, although simultaneous interpreters recognize that they should make as few high-risk omissions as possible, they also have a fairly constant background practice of low-risk omissions, presumably used as a time-saving strategy that is not dictated by the distribution of source-text forms (since we get new omissions the second time around). This, of course, is entirely compatible with the Effort Models. The interpreters use this background omission activity to release resources for other tasks, and not in order to represent or misrepresent a source text.

Why should simultaneous interpreters nevertheless strive for non-omission, on the level of ideology if nothing else? This is a question that the Effort Models
cannot answer, it seems. What those models characterize are the norms and ideals of different professional activities, not the reasons for those norms and ideals. It seems to us, however, that risk analysis can provide some kind of logic here. Here are two reasons:

1. For simultaneous interpreters, any significant gap in the output is likely to be high-risk, since the audience is thereby made aware that it is not receiving something, and this could undermine the relation of implicit trust between interpreter and audience. For a mediator of any kind, once you lose trust, you lose everything. Gaps are not as visible in the other modes of translation.

2. As a general supposition (made by Gile, as we have seen), the speaker knows more about the context than the interpreter, and is thus better able to judge the distribution of communicative risk. Thus, when in doubt, the interpreter should trust the speaker rather than the interpreter's own intuition about what is new and what is redundant.3 The interpreter should then logically include as much as possible of the input text. In essence, this is a strategy of risk transfer. After all, if the included segment goes wrong, the speaker took that risk first, and should thus suffer the primary consequences.

3. Of importance here is the constructive role played by repetitions in creating "involvement" in a conversation (after Gumperz 1982), highlighted by Tannen (1989/2007).

Both these reasons are basically compatible with the Effort Models (we are largely saying the same thing in slightly different words). More problematic, both are compatible with the general idea that mediators are risk-averse. Is that why interpreters, along with most mediators, do not often get the high rewards? Is that why they work for an hourly rate, without extras for exceptionally cooperative communication?

7. Postscript: Why Gile's basic model might apply to written translation

Gile asks whether his basic model might apply to written translation. In principle, the applicability is considered marginal, since "processing capacity requirements are much lower in written translation" (1995: 185); indeed, "time constraints in translation can be considered virtually nonexistent when compared to the time constraints of interpretation" (1995: 186). We accept that there is a difference in scale between time constraints in the two modes. However, contemporary translation practices are promoting more and more situations in which the translator's time-on-task is highly regulated, such that time is regularly assessed as a variable in the final quality equation (for example, a translation may be linguistically poor but economically acceptable because it is on-time). This is particularly true in the localization industry, where the uses of translation memories and the integration of machine translation produce huge time savings, making deadlines a key factor in any translation project. Hence our initial reluctance to accept the idea that written translation is somehow indifferent to time constraints.

Further, we suspect that something of the "overlapping of efforts" considered typical of simultaneous work might also apply in the case of written translation. In screen-recording data of students translating we observe documentation processes punctuated with reformulations and spelling corrections, or quick revisions occurring in the midst of what seems a text-comprehension phase. One kind of activity triggers solutions in another, as Robinson (1997) notes when considering the role of daydreaming and a whole range of other "subliminal skills" in the translation process. Simultaneity is not the exclusive preserve of simultaneous interpreting.

We are sure that conference interpreters would resist any suggestion that daydreaming be considered part of their efforts, just as they would logically resist "revision" and "documentation" being listed among the major cognitive activities involved in their task. At the same time, however, interpreters do effect repairs in their discourse (in high-risk situations), and they do carry out documentation in preparation for conferences (so as better to manage the risks), and they do (or should) have access to electronic memories in the booth (not all the eggs are in "short-term memory"). Should we radically discount such activities from future models of effort? So should interpreters and written translators really be at the extreme ends of future models? Or is it time to test, as best we can, the number and nature of the efforts that concern linguistic mediation as a whole?
Is it not time to work not in terms of two disciplines or sub-disciplines trying to cooperate with each other, but as one discipline, trying to solve a single set of basic problems? Our hope is that revisiting Gile’s Effort Models might produce something slightly larger, something suited to recognizing that all mediators, at one time or another, are taking risks on tightropes. May we not fall alone!
Risk analysis of a hidden effort
Appendix: Justification of risk estimations

Since our analysis depends on crudely labelling renditions as “low-risk” or “high-risk”, here we briefly give reasons for doing so in each case. This part of our approach clearly needs refinement, and we are far from satisfied with the hodgepodge nature of the reasons we enlist. We nevertheless feel that most readers would agree with most of our calls, and we trust that our problems here will stimulate others to find some subtle, formalized and convincing solutions.

Omitted segment: I don’t even know these people yet
Reasons for the risk estimation: “je ne les connais pas encore” – Gile asks “Who is ‘they’?” We believe the referent is likely to be recoverable from previous conversation, since the speaker uses the similarly deictic “these people”.
Risk estimation: LOW

Omitted segment: I don’t even know these people yet
Reasons for the risk estimation: “je ne connais pas ces gens” – Gile notes “Omission of the ‘yet’ idea.” The idea of “yet” is surely implicit in the fact that the speaker is going to work with these people?
Risk estimation: LOW

Omitted segment: scientists and engineers
Reasons for the risk estimation: “le monde scientifique”, “les chercheurs” – The reference is to “scientists and engineers” in general, so we believe these renditions to be low-risk generalizations.
Risk estimation: LOW

Omitted segment: Well enough
Reasons for the risk estimation: “je connais” – The idea of “well” is missing, but if the speaker knows them and knows that they would be unhappy, the knowledge stands a chance of being good enough, surely?
Risk estimation: LOW

Omitted segment: Well enough
Reasons for the risk estimation: “que je connais bien” – The idea of “enough” is missing, but if the speaker knows them well and states what for, the “enough” would seem implicit.
Risk estimation: LOW

Omitted segment: But since I don’t know all about what the products are
Reasons for the risk estimation: Complete omissions. The confession of partial ignorance might be an endearing humility trait, but an alternative and perfectly good reason for “speaking loosely” has already been given. One version gives “dont on ignore encore la nature...”, which seems more worthy of a red card, since the speaker clearly does know about the general nature of the products. This is uncorrected in the second version, contrary to what we would have predicted.
Risk estimation: LOW

Omitted segment: I can speak loosely
Reasons for the risk estimation: Complete omissions. The speaker has just indicated inability to “pre-announce products” and then starts to speak about new products. If there is no bridging reference to “speaking loosely”, the transition must appear to be a complete contradiction.
Risk estimation: HIGH

Omitted segment: The imaging side of Kodak
Reasons for the risk estimation: The one outright omission is not important (since the same phrase is picked up later in the text), but three versions have “l’image de Kodak”, which might be an omission of “side” but is more obviously completely misleading (the speaker’s reference is to a part of the company’s activities). One could obviously claim that this is a mistake rather than an omission.
Risk estimation: HIGH

Omitted segment: let’s concentrate on that
Reasons for the risk estimation: Omissions. The speaker clearly does concentrate on that, so there is no pressing need to announce it.
Risk estimation: LOW

Omitted segment: For the foreseeable future
Reasons for the risk estimation: Omissions. The speaker says that silver halide is currently cost-effective, and no alternatives are mentioned, so the present situation will presumably continue into the future. There is a clear loss of perspective, but without major consequence.
Risk estimation: LOW

Omitted segment: as far as capture goes
Reasons for the risk estimation: The word “capture” appears three times in the sentence, so omission of this one specification would not seem important. However, the speaker’s whole point is that cost-effectiveness is limited to capture and does not extend to “getting that information into digital form”, which is what most excites the speaker. The restriction is thus more important than it would appear at first sight.
Risk estimation: HIGH?

Omitted segment: highest resolution
Reasons for the risk estimation: Omission. But the phrase comes after “cost-effective” and specifies what the effect is. The specification should also be implicit in the nature of silver halide capture, if and when the audience knows what that is.
Risk estimation: LOW?

Omitted segment: in perhaps ways that are
Reasons for the risk estimation: Subjective assessment is already implicit in the initial framing of the discourse.
Risk estimation: LOW

Omitted segment: cause mine sure does with me
Reasons for the risk estimation: Omissions. The phrase is preceded by “I’ll let your imagination run off with you” and followed by “I laid awake the last two nights…”. Some transition is needed, if only to suggest that the speaker is really not sleepless because of the audience’s imagination.
Risk estimation: HIGH

Omitted segment: The last two nights
Reasons for the risk estimation: The number of nights is not important. We know already that the speaker has been lying awake because of ideas.
Risk estimation: LOW

Omitted segment: and they’re really exciting
Reasons for the risk estimation: If the ideas keep him awake, then they are likely to be exciting. No overwhelming need to underline the fact.
Risk estimation: LOW

Omitted segment: ninety percent, ten percent
Reasons for the risk estimation: Gile simply notes that both percentages have been omitted. What replaces them is not clear, but we fail to see how the last segment can make sense without percentages of some kind.
Risk estimation: HIGH

Omitted segment: that will be killers
Reasons for the risk estimation: Eight complete omissions. Part of the idea might be recoverable from the fact that 10% of the ideas are not in the “never work” category, but the speaker’s whole purpose is to suggest that you only need one or two brilliant ideas, and implicitly to suggest that the speaker and the company have a few of them.
Risk estimation: HIGH?
References

Barik, H.C. 1975/2002. “Simultaneous interpretation: Qualitative and linguistic data.” In The Interpreting Studies Reader, F. Pöchhacker and M. Shlesinger (eds), 79–91. London and New York: Routledge.
Chernov, G.V. 2004. Inference and Anticipation in Simultaneous Interpreting. Amsterdam/Philadelphia: John Benjamins.
Cronin, M. 2002. “The empire talks back: Orality, heteronomy and the cultural turn in Interpreting Studies.” In The Interpreting Studies Reader, F. Pöchhacker and M. Shlesinger (eds), 386–397. London and New York: Routledge.
Diriker, E. 2004. De-/Re-Contextualizing Conference Interpreting: Interpreters in the Ivory Tower? Amsterdam/Philadelphia: John Benjamins.
Fabbro, F. and Gran, L. 1997. “Neurolinguistic research in simultaneous interpretation.” In Conference Interpreting: Current Trends in Research, Y. Gambier, D. Gile and C. Taylor (eds), 9–27. Amsterdam/Philadelphia: John Benjamins.
Gile, D. 1995. Basic Concepts and Models for Interpreter and Translator Training. Amsterdam/Philadelphia: John Benjamins.
Gile, D. 1997. “Conference interpreting as a cognitive management problem.” In Cognitive Processes in Translation and Interpreting, J. Danks, G.M. Shreve, S.B. Fountain and M.K. McBeath (eds), 196–214. London: Sage Publications.
Gile, D. 1999. “Testing the Effort Models’ tightrope hypothesis in simultaneous interpreting – a contribution.” Hermes 23: 153–172.
Gumperz, J.J. 1982. Discourse Strategies. Cambridge: Cambridge University Press.
Gumperz, J.J. 1992. “Contextualization and understanding.” In Rethinking Context: Language as an Interactive Phenomenon, A. Duranti and C. Goodwin (eds), 229–252. Cambridge: Cambridge University Press.
Gumul, E. and Łyda, A. 2007. “The time constraint in conference interpreting: Simultaneous vs. consecutive.” Research in Language 5: 165–18.
Kurz, I. 1993/2002. “Conference interpretation: Expectations of different user groups.” In The Interpreting Studies Reader, F. Pöchhacker and M. Shlesinger (eds), 312–324. London and New York: Routledge.
Moser, P. 1996. “Expectations of users of conference interpretation.” Interpreting 1 (2): 145–178.
Pöchhacker, F. 2002. “Researching interpreting quality.” In Interpreting in the 21st Century: Challenges and Opportunities, G. Garzone and M. Viezzi (eds), 95–106. Amsterdam/Philadelphia: John Benjamins.
Pöchhacker, F. and Shlesinger, M. (eds). 2002. The Interpreting Studies Reader. London and New York: Routledge.
Pym, A. 1997. Pour une éthique du traducteur. Arras: Artois Presses Université / Ottawa: Presses de l’Université d’Ottawa.
Pym, A. 2004. “Propositions on cross-cultural communication and translation.” Target 16 (1): 1–28.
Pym, A. 2007. “On Shlesinger’s proposed equalizing universal for interpreting.” In Interpreting Studies and Beyond: A Tribute to Miriam Shlesinger, F. Pöchhacker, A.L. Jakobsen and I.M. Mees (eds), 175–190. Copenhagen: Samfundslitteratur Press.
Robinson, D. 1997. Becoming a Translator: An Accelerated Course. London and New York: Routledge.
Shlesinger, M. 1989. Simultaneous Interpretation as a Factor in Effecting Shifts in the Position of Texts on the Oral-Literate Continuum. MA thesis, Tel Aviv University.
Tannen, D. 1989/2007. Talking Voices: Repetition, Dialogue, and Imagery in Conversational Discourse. Second edition. Cambridge: Cambridge University Press.
Viaggio, S. 2002. “The quest for optimal relevance: The need to equip students with a pragmatic compass.” In Interpreting in the 21st Century: Challenges and Opportunities, G. Garzone and M. Viezzi (eds), 229–244. Amsterdam/Philadelphia: John Benjamins.
Research skills
Doctoral training programmes
Research skills for the discipline or career management skills?

Christina Schäffner
Aston University, Birmingham, England

As for any academic discipline, the future of Translation Studies depends on new generations of researchers. But new researchers need knowledge of their discipline as well as competence in research skills. This paper addresses the issue of skills training for doctoral students, mainly from the perspective of the United Kingdom. UK Research Councils expect doctoral students to be able to demonstrate research skills and techniques specific to their topic, but they also expect them to understand research funding procedures and to manage their career progression. The paper explores the extent to which such a complex set of skills can be achieved effectively in a doctoral training programme.
Keywords: doctoral training programmes, skills requirements for doctoral students, research management, career management
1. Introduction: research quality assessment

In a society increasingly concerned with social relevance and the economic impact of research, mechanisms of quality assessment have become common practice and are constantly subjected to monitoring and revision. Standards and benchmarks are made available against which the output of research can be measured. In England, for example, a Research Assessment Exercise (RAE) is conducted at regular intervals by the Higher Education Funding Council. The process is one of peer review, with departments or research groups making submissions to subject panels. Submissions come in the form of up to four pieces of work by each research-active scholar, which will be graded according to quality levels (from 4* to 1*). There is no specific panel for Translation Studies in the 2008 RAE (whose results will be published at the end of 2008), which means that Translation Studies
research will be judged by panels such as European Studies, English Language and Literature, Linguistics, and Communication, Cultural and Media Studies – to name just the most likely ones out of a total of 67 sub-panels. In the unit European Studies (where Translation Studies was included in the 2001 RAE), the quality levels are defined as:

4* Quality that is world-leading in terms of originality, significance and rigour.
3* Quality that is internationally excellent in terms of originality, significance and rigour but which nonetheless falls short of the highest standards of excellence.
2* Quality that is recognised internationally in terms of originality, significance and rigour.
1* Quality that is recognised nationally in terms of originality, significance and rigour.
Unclassified Quality that falls below the standard of nationally recognised work. (RAE 2006)

In addition to the four pieces of research output for each academic, departments have to submit a report on their research environment (which includes information about research students, research income, research structure, staffing policy and research strategy) and esteem indicators for individual scholars or groups (e.g. editorships, leading roles in academic associations). On the basis of all this information, an overall quality profile for the department will be calculated, which will then determine the level of government funding for the institution.

Doctoral research plays an important role in the report on the research environment, and it has already been announced that in future assessments of research excellence even more attention will be given to support for doctoral students, and in particular to the provision of doctoral training. Improvements in doctoral training are seen as essential for ensuring high-quality research in the future. It is now increasingly common for universities to run doctoral programmes in which research skills are taught in a more systematic way.
In designing such programmes, reference can be made to benchmarking documents and similar guidelines which address requirements for doctoral degrees. Here we examine the issue of skills training for doctoral students, commenting on the aims and content of such training, mainly from the perspective of the United Kingdom. UK Research Councils expect doctoral students to be able to demonstrate research skills and techniques that are specific to their topic, but they also expect them, for example, to understand the procedures of research funding, to be aware of the process of academic or commercial exploitation of research results, and to take ownership of and manage their career progression. Such a complex set of
skills goes beyond the quality of the PhD thesis itself as the immediate aim of doctoral research. This wider perspective also highlights the responsibilities of universities for preparing researchers whose continuing work will advance both their discipline and society as a whole.

2. Quality of doctoral research

Benchmarking is increasingly applied in the area of education, e.g. the framework of qualifications for the European Higher Education Area (often referred to as ‘the Bologna process’) for the first and second cycle of higher education. In this overall process of elaborating a framework of comparable and compatible qualifications for higher education systems, descriptors have now also been put forward for third-cycle qualifications, i.e. doctoral awards. In a report from a Joint Quality Initiative informal group (2004), the descriptors for doctoral awards have been specified as follows:

Qualifications that signify completion of the third cycle are awarded to students who:
– have demonstrated a systematic understanding of a field of study and mastery of the skills and methods of research associated with that field;
– have demonstrated the ability to conceive, design, implement and adapt a substantial process of research with scholarly integrity;
– have made a contribution through original research that extends the frontier of knowledge by developing a substantial body of work, some of which merits national or international refereed publication;
– are capable of critical analysis, evaluation and synthesis of new and complex ideas;
– can communicate with their peers, the larger scholarly community and with society in general about their areas of expertise;
– can be expected to be able to promote, within academic and professional contexts, technological, social or cultural advancement in a knowledge-based society. (Dublin descriptors 2004)
These descriptors (often referred to by the label “Dublin descriptors”) apply to the outcome of doctoral research. They do not refer exclusively to the PhD thesis as the final product of the research process, but also to knowledge and skills which are usually tested in a public defence, a disputation, or a viva voce examination. Having the PhD thesis accepted and being awarded a doctorate is the end of an important stage in the academic career of young scholars, but it is also the beginning of a new career in which ongoing research and publications are a normal component. However, it is assumed that the knowledge and skills which are
demonstrated in a PhD thesis and examination have been acquired in the course of doing doctoral research. Acquisition of knowledge and skills is increasingly seen as a joint activity: doctoral students have a duty to enhance their knowledge and skills, and universities in turn are expected to provide training programmes and other forms of support.

These requirements reflect the changed nature of doctoral research. In the past, it was quite normal for a student to experience doctoral research as a one-to-one relationship with the supervisor. Students would work independently on their topic of research, and meet their supervisor regularly to discuss problems and progress and to get advice. The supervisor would take the student by the hand, help and guide them in their academic research, provide moral support (which is reflected in the German word ‘Doktorvater’), and accompany them until they were ready to work on their own. Students might not even know whether there were other doctoral students in the same department or faculty (as I was surprised to find out when I recently visited a Translation Studies department at an established university in Central Europe). One-to-one interaction with a supervisor is of course a very important aspect, but the provision of research training is becoming more and more the collective responsibility of universities.

In recent years, a number of doctoral programmes have been set up in various countries. In fact, a Google search for “doctoral programme in translation” resulted in 161 hits (search done on 17 March 2008), listing programmes, among others, in Spain, Slovenia, Poland, and the UK. The very name ‘programme’ suggests a more structured approach: a programme offered by the university, for the students to follow. Such programmes are expected to provide training in discipline-specific knowledge and research methods, and also to develop transferable skills with a view to career prospects.
In 2001, the UK Research Councils produced a joint statement on skills training requirements for research students. In the introductory section of this statement the research councils re-emphasise their “belief that training in research skills and techniques is the key element in the development of a research student, and that PhD students are expected to make a substantial, original contribution to knowledge in their area, normally leading to published work” (Joint Statement 2001). The purpose of the document is presented as providing universities “with a clear and consistent message aimed at helping them to ensure that all research training is of the highest standard, across all disciplines” (ibid.). The skills presented in the document are the outcome of the training process. Students may already possess some of them at the beginning of their doctoral studies (in particular in the case of mature students, or as a result of research conducted for a Master’s dissertation); others can be explicitly taught or developed during the course of the research. The skills are arranged in seven groups: (A) Research skills and techniques, (B) Research environment, (C) Research
management, (D) Personal effectiveness, (E) Communication skills, (F) Networking and team-working, (G) Career management. This list reflects the changed nature of what is expected of a PhD today. Whereas in the past the focus was on producing a thesis, PhD candidates today are also expected to be trained researchers.

UK universities tend to regulate the amount of research training students have to undertake. For example, Aston University’s General Regulations for Students Registering for Higher Degrees by Research and Thesis (2004) specify a “minimum of 90 hours’ appropriate skills training, including conference sessions, between the research start and the submission of the thesis” (with the normal thesis period defined as three years). Universities are required to develop research skills training programmes and to demonstrate to the authorities (e.g. in institutional audits, in university-internal research provision reviews) that the most appropriate programme is in place and that students acquire the required skills (with completion rates being one significant performance indicator in this respect).

Some of the skills are of a more general nature and can be developed in training programmes at university level (e.g. IT training, Health and Safety training, Time Management). Others are more effectively developed at the level of the discipline (within one department, or subject-specific across universities), and skills specifically needed for the student’s research topic can also be developed in the meetings with the supervisor. Admittedly, completing a PhD in three years while attending sessions of a doctoral training programme is more easily achievable for the ‘prototypical’ full-time student.
But since the skills listed in the joint statement by the UK Research Councils give a good overview of competences and skills expected of graduates of doctoral programmes, I will now look at them in more detail and provide some comments with reference to research training in Translation Studies.

3. Skills requirements for doctoral students

3.1 Research skills and techniques
The first group of skills, Research skills and techniques, requires that students be able to demonstrate:

1. the ability to recognise and validate problems and to formulate and test hypotheses;
2. original, independent and critical thinking, and the ability to develop theoretical concepts;
3. knowledge of recent advances within one’s field and in related areas;
4. an understanding of relevant research methodologies and techniques and their appropriate application within one’s research field;
5. the ability to analyse critically and evaluate one’s findings and those of others;
6. an ability to summarise, document, report and reflect on progress. (Joint Statement 2001)

These aspects are already relevant when it comes to formulating a topic for one’s research. When students apply to do PhD research at a UK university, they are normally expected to come with a proposal for a topic, based on some preliminary research, and to be able to demonstrate that the chosen topic is a valid one. That is, doctoral candidates are expected to select a topic themselves, and a supervisor will be appointed on the basis of their own expertise. Supervisors will then provide guidance in refining the topic and the research questions to be addressed and/or the hypotheses to be formulated. Preparing an outline of the intended research is easier if the topic builds on the research pursued in the Master’s dissertation (provided a PhD applicant has completed a Master’s programme in Translation Studies). It is often in the course of a Master’s programme that interest in research is stimulated, for example when students realise that there is much more to be said about their topic than the word limit for the dissertation allows.

Acquiring knowledge of recent advances within one’s field is essential in order to contextualise one’s own research topic. However, recent advances also need to be related to earlier traditions and insights and to the history of the discipline. In the field of Translation Studies, we have seen a widening of perspectives and increased interdisciplinarity. Recent advances show increased investigation into agency, causation, and ideology, inspired by research in neighbouring disciplines, in particular postcolonial studies, sociology, and anthropology.
In a climate where research students are expected to complete their thesis within three years, prior knowledge of the main theoretical approaches and developments in the discipline is essential; i.e. it cannot be assumed that most of the first year of a doctoral programme can be devoted to doing basic reading. In November 2007, the UK Higher Education Academy published the results of its first national survey of postgraduate research students’ experiences (PRES 2007). Students were asked to rate (from 1 to 5, with 5 being the highest score) several aspects in terms of importance and satisfaction. Supervision had the highest mean agreement of 3.93, followed by skills development (3.86). 82.2% of the respondents agreed that their supervisors have the necessary skills and subject knowledge to adequately support their research. However, only 62.1% agreed with the statement “I have received good guidance in my literature search from my supervisors” (19.5% disagreed). This result also signals that doctoral students find it
difficult to decide which literature to read, in which sequence, and how much to read (often in order to fill gaps in their knowledge about the discipline). If students apply for PhD research after completing a Master’s programme in Translation Studies, they should have gained a good enough understanding of the development of the discipline to decide how their own topic relates to existing knowledge, and also be able to decide which literature they may have to re-read. It will then be possible to relate new advances more critically to existing knowledge. In fact, the Dublin descriptors, mentioned above, are stricter in that they speak of “a systematic understanding of a field of study”. This puts more emphasis on a wider knowledge of the discipline compared to the reference to “recent advances within one’s field” in the UK Research Councils’ statement. The Dublin descriptors also set higher expectations for research methodology when they speak of “mastery of the skills and methods of research associated with that field”, compared to “understanding of relevant research methodologies and techniques” in the UK statement.

For each specific topic, the most appropriate research methods will need to be selected. Daniel Gile has repeatedly criticised the way researchers have moved away “from their initial disciplines and embrace[d] research paradigms for which they are not prepared,” which results in an “overall methodological weakness” (Gile 2004: 29). For doctoral students, deciding on a relevant methodology and being trained in using it competently is essential. The individual sessions with the supervisor are often not sufficient in this respect, since supervisors themselves will not always be experts in the specific methodology required for the student’s project.
Doctoral programmes with large cohorts of students can offer specific seminars addressing different theories and approaches in Translation Studies and can also provide training in different research methods. Smaller departments with smaller numbers of students will find this more difficult to achieve. Training programmes jointly developed and delivered by several universities with Translation Studies programmes would be a way forward. Summer schools as well as national or international doctoral training programmes (e.g. the CETRA programme) can serve as models. An additional advantage of attending international summer schools is that doctoral students become familiar with research planning, policies, and procedures as they apply in other countries, thus allowing them to ‘escape’ (even if only virtually and temporarily) the specific institutional or national framework in which they conduct their own research and the conventions they need to follow for presenting the PhD thesis (a point highlighted by Gile in a talk at Salford University in February 2008).

An integral part of enhancing one’s knowledge about the discipline is the ability to analyse critically and evaluate one’s own findings and those of others (point A5 in the UK statement). This requires engaging with the original literature as far as possible. There are obviously constraints when it comes to literature written in
another language, but in my experience it happens too often that students rely on secondary (or even tertiary) sources. Part of understanding and analysing arguments by others is to see how they were developed in their own context and at their own time. Moreover, critical engagement must not be interpreted as having to criticise the work of others for the sake of criticism. Arguments and findings by scholars need to be related to the stated aims of the research. It can then be evaluated whether the claims are sufficiently backed up by evidence, whether methods and findings can be tested in another study, etc. Such skills can be developed in individual sessions with the supervisor, but this is probably done more effectively in a group of doctoral students. Taking one particular research paper or book and doing a thorough critical analysis (and/or writing a review, and/or a mock peer review) could be a good exercise. In this way, rigorous critical reading skills can be developed, which should also help avoid misperceptions of other authors’ statements and/or intentions.

3.2 Research environment
The next group of skills concerns the Research environment. More specifically, the skills listed here refer to the ability to:

1. show a broad understanding of the context, at the national and international level, in which research takes place;
2. demonstrate awareness of issues relating to the rights of other researchers, of research subjects, and of others who may be affected by the research, e.g. confidentiality, ethical issues, attribution, copyright, malpractice, ownership of data and the requirements of the Data Protection Act;
3. demonstrate appreciation of standards of good research practice in their institution and/or discipline;
4. understand relevant health and safety issues and demonstrate responsible working practices;
5. understand the processes for funding and evaluation of research;
6. justify the principles and experimental techniques used in one’s own research;
7. understand the process of academic or commercial exploitation of research results. (Joint Statement 2001)

With reference to point 1 above, doctoral students in the UK would need to be familiarised with the importance of the RAE in order to understand that the quality levels used by the RAE panels in evaluating research output are the levels they will have to aim at themselves. Most of the skills above are of a general nature and
concern the social, professional and ethical responsibilities of a researcher. Some of them (especially points 2 and 4) can probably best be dealt with at the institutional level, in specific seminars for all doctoral students across disciplines. Such an approach would also allow the development of an awareness of issues that apply to other disciplines, thus contributing to a wider understanding of the social role of research and of potential differences between the natural sciences and the humanities in this respect.

Points 5 and 7 go beyond the primary aim of doctoral research, i.e. producing a PhD thesis. The inclusion of these aspects also reflects the changed nature of research in institutions of higher education. Universities (at least in the UK) are seen more and more as businesses with a corporate identity and unique selling points, as providers of programmes to their (paying) customers. Income generation is increasingly one of the tasks of academics, and includes applying for funding to research councils, industry and other bodies. In the area of Translation Studies, and in the humanities more generally, it is more difficult to get funding from industry, or to work on a research topic which lends itself directly to commercial exploitation. In the UK, the main sources of external funding for research projects are the Research Councils (mainly the Arts and Humanities Research Council, AHRC, and the Economic and Social Research Council, ESRC), and funding is granted primarily for study leave to give an individual researcher time to complete a monograph. There are surely many more opportunities for conducting translation-related research for industry and public services than we are currently aware of. The sooner a doctoral student is made aware of the need to secure research funding, the better they can plan their own career.
The inclusion of these skills in the joint statement thus highlights the wider aims of doctoral research: research training cannot be limited to the specific needs of the thesis topic but is basically training for a professional career as a researcher. Admittedly, not all candidates who complete a PhD programme will stay at universities and embark on an academic career in which research is an essential part. Some will wish to work as professional translators or interpreters, or as managers in the translation industry, or in other related fields. Knowledge and skills in funding applications and exploitation of research results are useful for such work as well, but they are difficult to fit into a training programme. It is surely possible to give general overview talks in an institution-wide training programme, but different disciplines have not only different options for applying for funding, but also different requirements for formulating an application, not to mention differences in policies and procedures as they apply in individual countries. Completing the required papers for a funding application for research projects is a time-consuming task, and the evaluation process is very rigorous.
Christina Schäffner
What might be useful as part of a training programme is doing a mock review of an application for a research project, if possible on the basis of a previous (either successful or unsuccessful) application by a colleague from the same institution (which requires willingness on the part of colleagues to share their applications with doctoral students). During the three years of doctoral study, it is probably more realistic to have students complete applications for small-scale funding, e.g. for participation in a conference or a summer school. But since a job at a university is far from being guaranteed when the thesis nears completion, preparing an application for funding to conduct a new research project is becoming more and more a reality for today's doctoral students.

3.3 Research management
Research management (group C) is more directly related to carrying out the research for the PhD itself. The specific skills mentioned here are the ability to:
1. apply effective project management through the setting of research goals, intermediate milestones and prioritisation of activities;
2. design and execute systems for the acquisition and collation of information through the effective use of appropriate resources and equipment;
3. identify and access appropriate bibliographical resources, archives, and other sources of relevant information;
4. use information technology appropriately for database management, recording and presenting information. (Joint Statement 2001)
The supervisor will have an important role to play here in setting intermediate deadlines and guiding decisions about realistic timescales. In particular in the first year of doctoral research, it is essential for the supervisor to set deadlines for the completion of specific tasks, in discussion and agreement with the doctoral student. Such specific tasks to be achieved by an agreed deadline could be to read a particular book or article and write a short evaluative report about it, or to analyse a particular number of examples and report on patterns identified (see also the useful information in Williams and Chesterman 2002). Both supervisor and doctoral student will then be able to discuss whether the deadline was too tight or too generous, or whether it was realistic but the student nevertheless had problems completing the task on time. Should this be the case, more specific training in time management can be provided. As part of doctoral study, it is quite normal to set interim stages. At Aston University, for example, all doctoral students are initially enrolled as research students. Within one year (for full-time students), they have to submit a so-called qualifying report, which is a more substantial report about the aim of the research, the research method(s) employed, the data (to be) analysed and initial findings; it also includes a literature review and an outline of the next steps with a time-scale. On the basis of this report and an oral examination, conducted by the supervisor and at least one more academic, a decision will be taken as to the validity and feasibility of the research. Students will then be registered as PhD students, or, if the project is less original, as MPhil students, or they can be required to withdraw if there is no evidence of substantial research. At other universities, students are enrolled as MPhil students, with a similar report and examination to decide whether they can be upgraded to PhD. Preparing for this first milestone of the doctoral research is thus a natural opportunity to develop one's skills in project management and working to deadlines. Concerning points 2 and 3 above, the nature and amount of information to be acquired depends on the specific research topic. It is again the supervisor who can provide valuable advice to doctoral students in this respect. Moreover, most universities already provide training sessions on data collection, library use, Internet sources, etc., whether separately or as part of a structured doctoral training programme.

3.4 Personal effectiveness
The inclusion of a section on personal effectiveness in the UK Research Councils' statement may come as a surprise. The abilities listed here do not apply exclusively to doctoral students. In fact, they are equally applicable to established researchers and to the professions in general. This section refers to the ability to:
1. demonstrate a willingness and ability to learn and acquire knowledge;
2. be creative, innovative and original in one's approach to research;
3. demonstrate flexibility and open-mindedness;
4. demonstrate self-awareness and the ability to identify one's own training needs;
5. demonstrate self-discipline, motivation, and thoroughness;
6. recognise boundaries and draw upon sources of support as appropriate;
7. show initiative, work independently and be self-reliant. (Joint Statement 2001)
Students themselves often find it difficult to assess their own training needs. What works well is for departments to prepare a training needs analysis questionnaire which students can complete when they start their doctoral programme. In such a questionnaire they can identify their own degrees of ability (e.g. as basic, skilled, expert) in respect of the seven groups of skills listed in the joint statement by the UK research councils, and specify priorities for development (as high, medium or low). In conjunction with their supervisor, they can then agree which training sessions should be attended when, and whether training for specific needs can best be provided at university level, or by attending a workshop held somewhere else, or by arranging specific individual sessions with the supervisor. This questionnaire will then be completed again a year later, with the student registering progress and/or identifying new needs. Keeping research diaries may be one way to record perceived needs as they occur. Most of the skills listed here for personal effectiveness can probably not be taught explicitly in a structured programme. More informal settings are more conducive to their development, such as reading circles or postgraduate gatherings organised by the doctoral students themselves. Working for three years full time on a doctoral thesis is a demanding task, and it is quite common for students to experience phases of despair (probably more frequently than stages of enjoyment and satisfaction). Informal meetings with other doctoral students for exchanging experience and talking about problems can be very helpful in regaining confidence. Many universities or departments already provide a dedicated postgraduate common room for such social gatherings. In May 2003, the Translation Studies PhD students at Aston University organised a one-day PhD colloquium on the topic 'Research training in Translation Studies: Sharing good practice'. This was the first event that brought Translation Studies PhD students enrolled at UK universities together to talk about their experience as PhD students. The initial intention was to share experience and opinions about supervisory arrangements at the various universities, about training provision, institutional support, and questions of personal concern.
The students very much appreciated this opportunity, but it became clear that they also wanted to use this colloquium as a forum to speak about their own research topics and get feedback from fellow doctoral students. This links to the next section of the joint statement, on communication skills.

3.5 Communication skills
These refer to the ability to:
1. write clearly and in a style appropriate to purpose, e.g. progress reports, published documents, thesis;
2. construct coherent arguments and articulate ideas clearly for a range of audiences, formally and informally through a variety of techniques;
3. constructively defend research outcomes at seminars and the viva examination;
4. contribute to promoting the public understanding of one's research field;
5. effectively support the learning of others when involved in teaching, mentoring or demonstrating activities. (Joint Statement 2001)
The development of communication skills is essential, and as many opportunities as possible should be provided for doctoral students to present their research, orally or in writing, and get feedback. Postgraduate seminars and conferences are ideal in this respect since doctoral students will be presenting their research to other doctoral students, which means that all participants are at roughly the same level of their academic career; this should also reduce any potential worries and 'stage fright' about speaking in public. Having doctoral students themselves organise and run postgraduate conferences is also a useful contribution to the development of related skills. For example, writing a call for papers, assessing abstracts for acceptance, and chairing a session at a conference are good for developing communication and time management skills. Editing the proceedings of such a postgraduate conference (ideally with one or two other doctoral students as co-editors, and if possible with the co-editors working at another university) develops presentation skills, project and time management, and team work. Organizing a conference, chairing a session, and editing a collection of papers can also be practised as mock activities as part of a training programme, and for the editing task, advice can be gained from Gile and Hansen (2004). Another essential forum for developing communication skills is an active research environment in the department and the university. Regular lectures (including by guest speakers), workshops, research seminars, etc. will not only let doctoral students practise their own communication and presentation skills but will also let them experience the communication skills of others.
Moreover, a rich academic programme will also contribute to the development of knowledge about research in the discipline, and will show doctoral students which research methods have been used by other scholars. The 2007 Postgraduate Research Experience Survey mentioned above also requested comments on the doctoral students' experience of the infrastructure and the intellectual climate in their departments. The results showed that intellectual climate was a poorly rated area. More than a quarter of students (25.9%) said that the intellectual climate had failed to meet their expectations. Students had been asked whether they agreed that their department provides a good seminar programme for research degree students (57.2% agreed; 19.5% disagreed), whether they agreed that 'the research ambience in my department or faculty stimulates my work' (49.3% agreed), and whether they felt integrated into their department's community (49.0% agreed; 26.7% disagreed). The report on the survey says that
students from arts and humanities, social sciences […] were, on average, less likely than other disciplines to agree that they felt integrated into their department’s community and that the research ambience in their department or faculty stimulates their work. Social scientists were also least positive about their department providing a good seminar programme for research students. (PRES 2007)
These results can be interpreted as a clear signal to universities to provide more opportunities for intellectual exchange. Academics in the UK often complain that the requirements of teaching and especially administration are becoming ever more demanding, and that as a result, it is unfortunately the research which suffers. Despite these realities of the life of academics, every effort should be made to keep up a rich programme of lectures, workshops, etc. Another way of developing communication skills is being involved in teaching, which in UK universities is not an absolute must. The results of the Postgraduate Survey showed varied experience of teaching opportunities. 61.1% of the respondents said that their experience gained through teaching had been worthwhile (22.4% disagreed), but only 40.4% said that they had received adequate support and guidance in their teaching (32.5% disagreed). Teaching courses on translation theories, or on theoretical approaches, or on translation history will also be excellent opportunities to enhance doctoral students' knowledge about their discipline, and thus help them to put their own research topic in a wider perspective. Guidance for teaching can be provided by the supervisor and/or other colleagues in the department. Such guidance can come in the form of teaching material provision, team teaching, or peer observation of classes. Doctoral students who contribute to teaching should also be members of the teaching staff and participate in staff meetings. In this way, they will feel included in the community of teachers and researchers.

3.6 Networking and teamworking
Active participation in workshops and conferences is also a good opportunity to develop contacts, an issue which is addressed in the next section of the UK Research Councils' statement. Networking and team-working concerns the ability to:
1. develop and maintain co-operative networks and working relationships with supervisors, colleagues and peers, within the institution and the wider research community;
2. understand one's behaviours and impact on others when working in and contributing to the success of formal and informal teams;
3. listen, give and receive feedback and respond perceptively to others. (Joint Statement 2001)
These skills, too, are not specific to Translation Studies scholars, but apply to any profession. Giving feedback and reacting to feedback received can probably be incorporated relatively easily into a research skills training programme. Whenever such programmes include presentations by the doctoral students, sufficient time should be made available for discussing both the content of the presentation and also its style and effectiveness. Most of the doctoral research in Translation Studies is conducted as individual research (cf. the typical 'lone' researcher in the humanities), although in some cases doctoral students may be part of a team working on a specific topic (e.g. if the PhD research is embedded in a larger project for which funding may have been secured). Team activities can be built into a training programme as well, e.g. if a small group of doctoral students is asked to come up with a team report on a published article. International summer schools are essential for networking; the CETRA alumni are a good example. Networking and team work are also becoming more and more relevant in view of funding opportunities for projects which require collaboration across borders (e.g. EU funding).

3.7 Career management
The final section in the Joint Statement is devoted to career management and lists the following skills. Students should be able to:
1. appreciate the need for and show commitment to continued professional development;
2. take ownership for and manage their career progression, set realistic and achievable career goals, and identify and develop ways to improve employability;
3. demonstrate an insight into the transferable nature of research skills to other work environments and the range of career opportunities within and outside academia;
4. present their skills, personal attributes and experiences through effective CVs, applications and interviews. (Joint Statement 2001)
These skills definitely go beyond the aims of a doctoral training programme that is intended to develop knowledge and skills for producing a high-quality PhD thesis and defending it successfully. In view of the time available for completing the thesis, it may seem impossible to include sessions to develop the above skills as part of a structured doctoral programme. In fact, the UK Research Councils' statement says in the introduction that the "development of wider employment-related skills should not detract from that core objective." The Postgraduate Research Experience Survey, too, showed that the development of research skills was rated much higher by the respondents than was the development of transferable skills. 88.1% said that developing research skills was important, but only 67.6% rated the development of transferable skills as important. This result can be interpreted in two ways. On the one hand, it clearly indicates that doctoral students see training in the skills required for conducting their own specific research as being of immediate relevance to their short-term aim of completing the PhD. They do not (yet) think beyond the PhD as long as most (or all) of their time is devoted to doing the research and writing the thesis. On the other hand, the survey result also suggests that universities and departments do not yet succeed in raising an awareness of the long-term effects of skills training. If we agree that universities do have a responsibility to prepare their young doctoral students for a professional career, they should also give them training in the development of transferable skills. Writing a CV, writing a job application and practising job interviews (e.g. as mock interviews) can be taught explicitly and also very efficiently as part of a general training programme delivered to all doctoral students at a university.

4. Conclusion

As stated above, such a complex set of skills as presented in the joint statement by the UK Research Councils on skills training requirements for research students goes beyond training for conducting research and producing a PhD thesis as the immediate aim of doctoral research. Not all of these skills will need to be developed explicitly in a structured doctoral programme. Depending on the individual circumstances of the doctoral student (e.g. previous training, previous experience, full-time or part-time study), different mechanisms can be used to support their learning and skills development as appropriate. As indicated above, some of the skills listed in the joint statement can be taught in a structured programme (e.g.
CV writing, IT skills), those of a more subject-specific nature can partly be taught in a department or subject group, or more effectively at a doctoral summer school (e.g. evaluating findings by others, constructing coherent arguments in presenting one's own findings), whereas other skills will develop on their own given a supportive environment (e.g. recognizing boundaries, reflecting on progress). Training provision should allow for such flexibility, and doctoral students' attendance at workshops, summer schools and conferences should be recognized as valuable elements of a doctoral training programme. What is essential is to create the conditions in which doctoral students can experience and contribute to stimulating discussions about a variety of topics related to research in Translation Studies in particular, but also to issues of a more general nature in respect of research (e.g. ethics, funding). That is, the quality of the research environment is more important than designing a specific course with a specified number of hours. The question asked in the title of this paper (Doctoral training programmes: research skills for the discipline or career management skills?) should therefore not be answered with a preference for one component. That is, rather than speaking of 'or', we should speak of 'and'. Both research skills for the discipline and career management skills have their place in a doctoral training programme. As long as we agree on the knowledge and skills that are the desired outcome of the training, the actual training provision in terms of content and structure can be decided by the individual universities. That is, documents like the Dublin descriptors or the UK Research Councils' Statement can be used as guidelines for developing training programmes, but we should not attempt to regulate the actual structure, content, or hours to be spent on a particular topic. Daniel Gile has repeatedly claimed that weaknesses in research expertise in Translation Studies are the result of the lack of research training, the "lack of appropriate training in research principles and their concrete implementation" (Gile and Hansen 2004: 304), and the lack of quality control in the field. Providing more formal research training for doctoral students should thus also, in the long term, result in more consistent research quality within the discipline of Translation Studies.

References

Dublin descriptors. 2004. "Shared 'Dublin' descriptors for Short Cycle, First Cycle, Second Cycle and Third Cycle Awards." A report from a Joint Quality Initiative informal group. 18 October 2004. Available at http://www.jointquality.nl/ (Last accessed on 17 March 2008).
Gile, D. 2004. "Translation Research vs. Interpreting Research: kinship, differences and prospects for partnership." In Translation Research and Interpreting Research: Traditions, Gaps and Synergies, C. Schäffner (ed.), 10–34. Clevedon: Multilingual Matters.
Gile, D. 2008. "Ideology and intercultural encounters in research: the case of TS." Paper presented at the Symposium on Ideology and cross-cultural encounters – Research and Methodology in Translation and Interpreting, University of Salford, 27 February 2008.
Gile, D. and Hansen, G. 2004. "The editorial process through the looking glass." In Claims, Changes and Challenges in Translation Studies, G. Hansen, K. Malmkjær and D. Gile (eds), 297–306. Amsterdam/Philadelphia: John Benjamins.
Joint Statement. 2001. "Joint Statement of Skills Training Requirements of Research Postgraduates." Available at http://www.grad.ac.uk/cms/ShowPage/Home_page/Policy/National_policy/Research_Councils_training_requirements/p!eaLXeFl#Joint%20Statement%20of%20Skills%20Training%20Requirements%20of%20Research%20Postgraduates%20(2001. (Last accessed on 17 March 2008).
PRES. 2007. "Postgraduate Research Experience Survey." Available at http://www.heacademy.ac.uk/ourwork/research/surveys/pres. (Last accessed on 17 March 2008).
RAE. 2006. "RAE 2008. Panel criteria and working methods." Available at http://www.rae.ac.uk/pubs/2006/01/docs/lall.pdf. (Last accessed on 17 March 2008).
Williams, J. and Chesterman, A. 2002. The Map. A Beginner's Guide to Doing Research in Translation Studies. Manchester: St. Jerome.
Getting started
Writing communicative abstracts

Heidrun Gerzymisch-Arbogast
Saarland University, Germany
Even a journey of 1000 miles starts with a first step. Confucius
The article relates to a PhD School which Daniel Gile and I jointly taught within the framework of a European Marie Curie PhD training program on critical reading and writing of research papers on 29th April 2007 in Vienna. It presents an overview of formulae for writing abstracts as a basic research skill for young researchers when “getting started” in research, as they are offered by standardization institutes, universities and conference conveners. Reflecting Daniel Gile’s comments on the topic, it is suggested that writing abstracts for conferences needs to take into account more than just the factual dimension. On the basis of Schulz von Thun’s communication model (1981) several interrelated dimensions of writing abstracts for conferences are discussed and exemplified.
Keywords: abstracts, communicative meanings, research papers, PhD training
1. Introduction

Most people who want to share their work, ideas and findings with others at a conference or through a written paper are likely to have to go through the process of writing and submitting a summary or an abstract. How summaries differ from abstracts is a controversial subject much discussed in the research literature (cf. Section 3), the most popular distinction being that abstracts are "meta-texts", i.e. reporting on another text, and summaries are condensed forms of longer texts, with a variety of strategies suggested for text condensation (with reference to translation cf. Thome 2003).
In academic writing an abstract usually precedes a scholarly paper to allow readers a chance to see if the paper (or some section of the paper) is relevant to them before reading much of it. Most scholarly journals and academic conferences are constructed on the basis of peer reviews of abstracts submitted by prospective contributors which assess the potential added value of a paper: how much new data, new theoretical input, new methodology, new ideas and results which might interest the audience and stimulate research seem to be offered? In addition to these functions, titles and abstracts are filed electronically and their keywords are used in today's electronic information retrieval systems. These are but some of the reasons why abstract writing is an important basic research skill. This article will deal with academic abstracts from a communicative perspective. It suggests a communicative theoretical framework from which practical guidelines can be developed which hopefully will be useful for young researchers when "getting started" in writing abstracts for research papers or conferences.

2. Phenomenon and problem

Though abstracts are brief (perhaps because they are brief), they are a difficult form of writing. The standards vary by domain, communicative situation and purpose, and the format may depend, among other things, on whether an abstract relates to an oral presentation or a written paper. From a reader's perspective, written papers which present hitherto unpublished ideas or findings

[…] can be informationally dense, as interested readers can take their time to understand them and process all the information. Readers are also free to stop reading a paper when they think it is not relevant or interesting enough, or to skip those parts of a paper they feel are not relevant enough for them or too difficult to process for their apparent added value. (Gile 2007)
This is in contrast to oral presentations where (if they are informationally very dense) […] people in the audience may feel somewhat frustrated, put questions to the author, request clarification and hasten to read the paper when it is published. If [oral] papers are not dense enough, people in the audience, who are captive to the extent they are civilized and polite, can find it very frustrating to spend successive periods of 20 minutes listening to speakers who – they feel – have nothing worth listening to say. (Gile 2007)
While we may be painfully aware of this phenomenon when we ourselves are 'recipients', when we take on the role of speakers and authors we often forget to ask ourselves what in our (suggested) paper or presentation will be of interest to our assumed readers and hearers. Will our contribution 'fit' their expectations? Will they be familiar with the problem and questions raised in our contribution so that they can be interested in the solutions we present and how they are (logically) developed? Do they share our own theoretical background and methodological stance? Will they be willing and ready to get involved and discuss controversial issues? Or will they be more 'passive' readers and listeners, expecting much information of a general or introductory nature and reinforcement of their own ideas rather than challenging new ideas? And – very important – will our discourse 'style' be acceptable to or 'suit' the audience? We have to rely on our own hypotheses and judgments when trying to answer these questions. It is, however, worthwhile to devote enough time and planning to address these questions in abstract writing and to structure a scholarly contribution according to what we think our readers or hearers might expect. This is not only true for the content of the abstract but (maybe even more so) for the hidden interpersonal meaning dimensions that influence what the author says and how s/he says it. The appreciation of a written or oral paper, for which the abstract sets the stage, depends on the degree to which our own hypotheses about what we think our readers might expect meet or overlap with our readers' or listeners' actual expectations. A scholarly contribution is often not 'good' or 'bad' by itself but 'good' or 'bad' for a certain audience.
Therefore, when writing abstracts for articles or presentations, not only do we have to solve the problem of choosing what information is presented in what detail and in which way (as put forward by Grice's well-known general conversational maxims of quantity, quality, relevance and manner, Grice 1975), but we also have to consider how our contribution fits our readers' and audience's expectations. This problem is at the heart of writing abstracts for communicative purposes. Beyond Grice's general cooperative principle in communication, our specific question is whether we can identify dimensions that reflect varying readers' and writers' perspectives in written abstracts for papers or conferences. Before we examine this question, however, let us first look at what resources are available on abstract writing.

3. 'State of the art' & deficits: resources on abstracts

There are a number of resources offering information or guidance in abstract writing. Some of these are:
3.1 International and national standards organizations
They set rules for the content and form of abstracts. Examples are:
– ISO 214, International Organization for Standardization: Documentation: Abstracts for publication and documentation. 1976. Geneva;
– ANSI, The American National Standards Institute: American national standards for writing abstracts. ANSI Z39.14.1979. New York;
– DIN 1426: Normenausschuss Bibliotheks- und Dokumentationswesen (NARD) in DIN Deutsches Institut für Normung e.V.: Inhaltsangaben von Dokumenten. Kurzreferate, Literaturbericht. 1988. Berlin.

3.2 Research papers or books on writing abstracts
These address theoretical issues like the function, content or form of abstracts and discuss – among other things – their status as a text type and relative autonomy, their methodology and differentiation from summaries, including condensation strategies. Relative to their function, issues as to whether abstracts should be 'informative' or 'indicative' are discussed, i.e. whether abstracts should condense the information in the reference text or report on what the reference text is about. Relative to the question of content, there is discussion about the information that should be included, such as a problem statement, theoretical background, study method or research results, and whether concepts (keywords) should be listed. The discussion of the form of abstracts relates primarily to length, format and style. Well-known examples of research books and articles on abstracts are Borko/Bernier (1975); Kretzenbacher (1990); Oldenburg (1992); Endress-Niggemeyer (1985); Rothkegel (1990); Fluck (1988a) and Gnutzmann (1991). As different as the issues and approaches discussed may be, it can be said in general terms that researchers agree that the nature of abstracts is determined largely by the intended purpose and use of the abstract.

3.3 Linguistics associations
These provide practical guidelines on the contents and form of abstracts. Examples are:
– LAGB, Linguistics Association of Great Britain, Oxford University;
– The Linguistics Society of America.
Writing communicative abstracts
3.4 Universities
They set general academic and university-specific guidelines and some offer courses in abstract writing. Some examples which can be found on the internet are:
– University of Purdue at http://owl.english.purdue.edu/
– University of California at http://www.linguistics.ucsb.edu/
– University of Leeds at http://www.leedsmet.ac.uk/

3.5 Other information sources on abstracts
These are easily and abundantly available on the internet as conference and journal information or in abstract writing courses and workshops, such as for example John Benjamins' Submission Guidelines for Authors and Editors. We can safely generalize that while these typical guidelines provide helpful information about abstract writing, they
– focus on the factual dimension of writing abstracts, varying with respect to the content matter and degree of detail that should be included in an abstract (with some of them even contradicting each other);
– do not take the reader/listener perspective into account;
– do not all reflect general standards – some of them are only applicable to writing abstracts for a particular institution or conference.
These issues are at the heart of this article, which suggests four interrelating dimensions for writing abstracts for communicative purposes.

4. Writing communicative abstracts

4.1 Theoretical foundation
When considering abstracts from a communicative perspective, we need to proceed from a theoretical framework that adequately models the complexity of the parameters involved. One of the most differentiated models designed for this purpose is the Watzlawick-based communication square model (better known as the Four Tongues – Four Ears Model) by Schulz von Thun (1981). It applies to all communicative situations where factual and interpersonal dimensions interact and suggests that any communicative event has four dimensions (the following passage is an adaptation from the German website http://www.schulz-von-thun.de/modkomquad.html):
Whenever we communicate, four utterance dimensions and their interplay are activated. Anything we say – whether we realize it or not – simultaneously contains four types of messages:
– a factual message (i.e. what is spoken about);
– a self-indicative ("Selbstkundgabe") message about the speaker (i.e. what is revealed about the personality of the speaker);
– a relationship message (i.e. how the speaker relates to the hearer, what the speaker thinks of the hearer);
– an appellative message (what the speaker wants the hearer to do for him/her).
The four 'tongues' of the speaker/author are matched by four 'ears' of a hearer/listener. It can be said that all communicative partners speak with four tongues and listen with four ears, and the satisfaction (or lack of it) arising from the communication largely depends on the quality of the interplay of the four tongues and ears. The factual level of communication focuses on data, facts and results. Similar to Grice's maxims of quantity, quality, relevance and manner, the factual level is governed by the criterion of truth (i.e. is what is being said true or not?), the criterion of relevance (i.e. is what is being said relevant to the topic under discussion or not?), the criterion of quantity (i.e. are the facts presented sufficient for the discussion of a particular topic or do other facts need to be considered?) and the criterion of manner (i.e. are the facts presented clearly, briefly and in an orderly way?)1. A speaker/author will present facts in a way s/he thinks is clear and understandable. A recipient reads/listens to such facts and data, forms an opinion and/or may ask questions where clarification is needed. Every utterance also contains a "self-indicative message", a hidden message about how the speaker represents her/himself, what s/he stands for.
Whether we like it or not, when we speak we also reveal something about ourselves, how we position ourselves relative to a topic or a group, how we perceive our role in a community. The self-indicative message may be explicit (e.g. indicated by expressions like 'I think', 'I stand for', 'in my opinion') or implicit, which, according to Schulz von Thun (1981), makes any message a small sample of a speaker's personality – an idea that may be very disconcerting to all of us as speakers. While a speaker/author implicitly or explicitly gives some indication about him/herself, recipients acknowledge how speakers present themselves with their "self-indicative ears" and form their own opinion about the speaker, what kind of person s/he is, what the speaker/author's orientation or inclination or mood is. An utterance also reveals something about the relationship between speakers and hearers. The relationship message is implied in how communicative partners address each other, in the wording they use, in the emphasis they place on certain concepts or the intonation, and in the body language that accompanies a spoken message. Whether we do that consciously or not, we give ourselves away in what we think of the other, how we feel about the topic under discussion, how we appreciate the author/speaker. The relationship indication is a delicate and powerful dimension, for which our partners often have a very sensitive and sometimes even overly sensitive (relationship) ear. The relationship messages we send and receive – and of which we may not be aware – determine how we feel we are treated by the way others speak to us, what they think of us and how we stand with others. The quality of many factual messages depends on the quality of the relationship message we send or receive. The appellative message is an inherent part of any utterance as well. When we take the floor or communicate in writing, we will, as a rule, want to achieve something for our efforts, exert some influence on a state of affairs or a development. We do not just send out a neutral signal but also appeal to others to feel the same way as we do or to make others do what we want them to do. Overtly or covertly there are wishes, claims, advice, suggestions for effective action etc. implanted in our talk. The appellative ear is therefore particularly open to the question of what I should do, think or feel now.

1. For communication intentionally deviating from Grice's maxims cf. Gerzymisch-Arbogast (1988).

4.2 Four dimensions in abstracts or: writing abstracts with Four Tongues and Ears
4.2.1 The factual dimension
Practically all literature sources agree that in abstracts, no matter whether they are for conferences or academic papers, the content should be factual and the style understandable and concise, the specifics being subject to individual requirements by journal editors or conference conveners. An academic text as a rule provides new information on a research topic and follows the pattern of theoretical scientific writing in that it includes:
1. the topic of the paper;
2. a statement about the phenomenon and problem treated in the paper;
3. an overview of the literature that has been written on the topic;
4. the theoretical basis on which the author intends to solve the problem raised in 2.;
5. the method or methodology in developing a solution for the problem;
6. the proposed solution and/or results;
7. a proof of the explanatory 'adequacy' (to use Noam Chomsky's expression) of the theoretical solution proposed in 6., i.e. some evidence in favor of the solution to show that it can practically be applied to solve the problem stated in 2. (Mudersbach 1999: 316ff).
This information structure facilitates the scholarly writing process in that it controls the coherence of our own writing. At the same time it facilitates the reading process in that we can look for new information in expected places. We can therefore say that an abstract, reporting on the contents of a scholarly reference text, should factually include the same categories. This could mean that an abstract could be as short as six sentences (excluding the title, i.e. 1. above), stating the phenomenon and the related problem, positioning it against the existing literature, stating the theoretical basis and the methods used to develop a solution or carry out an investigation, and indicating the (expected) solution and explanatory evidence. When longer abstracts are requested or written, they need to incorporate the above points and may add information to the individual points which meets the criteria of
– truth
– relevance
– sufficiency.
When an abstract says the paper will report the findings of an empirical study whose results refute existing theories or findings, the 'truth' criterion is relatively easy to assess if a reader or hearer is familiar with the literature in the field. If in addition the data and facts provided are 'sufficient', then there are good reasons for expecting an excellent paper at a conference (on a factual basis). However, it is difficult to establish the relevance of the paper to the audience in session

[w]hen the abstract announces a discussion of an already popular topic (a few examples are interdisciplinarity in TS, translation competence, training curricula and school translation versus professional translation) or is too vague about the presentation's specific contribution, [...] (Gile 2007)

The relevance problem is further addressed when Gile observes with respect to abstracts handed in for the 5th EST Congress in Ljubljana that

[...] there are often lengthy general introductions, which turn out to be very similar to the introductions offered in the subsequent written papers, and just one or two sentences about the author's specific contribution. And yet, in most cases, these are the sentences on which referees will base their assessment. Perhaps authors could be bold enough to do without introductions designed to fill in the quota of a few hundred words for the abstract and offer a bit more information about their original input? [...]. (Gile 2007)
Much of this criticism is equally true for abstracts handed in for the MuTra Marie Curie Conference Series 2005–2007 (ample examples can be found at http://www.euroconferences.info). With respect to communicative abstracts, we can therefore summarize that while the factual dimension is a relatively constant component for research papers
and conference abstracts alike (provided the conference is thematically oriented and organized), the other three of Schulz von Thun's message dimensions are given less or no attention in the literature on abstract writing.

4.2.2 The self-indicative dimension
Although often forgotten or downplayed, the message of how an author sees and positions him/herself in the (scientific) community is implicitly noted by others. By using certain concepts in an abstract without explaining in detail what they mean, we identify ourselves with a certain 'school', which may appeal to people who share the same knowledge base. At the same time this may make people outside that school feel 'not addressed' and thus rejected or even intimidated. As a consequence they may not be inclined to accept a factual proposition, no matter how justified and convincing the argument is on a factual basis. The implied self-indicative dimension can therefore be said to interrelate with the factual dimension. It also interrelates with the appellative dimension and may compromise a speaker's implied appellative claim that others accept his/her ideas.

4.2.3 The relationship dimension
The same is true for the relationship dimension. We may find that our abstract is rejected because the organizers and evaluators belong to a different 'school' and therefore reject our proposal for precisely that reason, even if it is factually excellent. Before writing an abstract for a conference or a journal publication, it therefore saves time and effort to check what the conveners of a conference or the editors of a journal stand for – and, as a result, to refrain from even applying when it is clear that our own stance proposes a thesis that is outside the group's scope and positions.
Young researchers are mostly aware of and honor the relationship with their supervisor and/or host by explicitly mentioning the relationship; if this is done, it needs to be weighed against such factual factors as relevance and sufficiency. In addition, the relationship dimension may require some courtesy towards the host/editor by appreciating his/her alternative views. When invited to contribute to a journal or a conference, it goes without saying that we want to be aware of our host's/editor's research or other delegates'/authors' contributions on our topic and show enough courtesy to relate our own work to their position on the subject, the least gesture of politeness being to indicate implicitly that we have read our host's or the editor's work. Failures in respecting the relationship dimension may result, whether we think this is justified or not, in the rejection of an abstract. Although this may seem (and, in fact, may be) highly subjective, an author who feels that the revision asked for is not relevant or that his/her ideas are not welcome may want to save time and effort and withdraw the abstract. In this case, it may be better to move on than to try to comply with what is being asked for.
4.2.4 The appellative dimension
As a rule, we are all motivated by the wish to obtain acceptance for our ideas and propositions, and we need to feel that we are making a worthwhile contribution. All three dimensions discussed above interrelate in obtaining that objective, as they influence how our abstracts and subsequent presentations or papers are received and accepted. Acceptance levels may differ by individual person, by group or 'school', or by norm and convention. A paper which is not understood by some people may be excellent in the opinion of others. Intercultural discourse varies with respect to the value that is attributed to how 'understandable' a text or argument is vs. how deep or differentiated the discussion of a complex problem is. As a result, the same paper or its abstract may cause some people to feel frustrated (if they do not understand), flattered (if they do understand a complex argument), or downgraded and undervalued (if something is discussed in detail which they consider trivial). It is therefore advisable to structure an abstract according to what we think our readership or audience is willing to accept as far as knowledge, challenge, interest and commitment are concerned.

5. Examples
Following are four examples of abstracts submitted for the 2005–2007 MuTra Conference Series (available at www.euroconferences.info).
With the exception of Georgios Floros' abstract, which is presented first, the abstracts are too long for the purposes of this article and are therefore given only as excerpts to show the elements and the differing degrees to which they reflect the communicative dimensions discussed above (bold indicates the factual dimension, italics the self-indicative dimension, underline the relationship dimension and small capitals the appellative dimension):

Georgios Floros (University of Cyprus)
Text linguistic concepts in simultaneous interpretation
This paper aims at discussing the practical use of integrating text linguistic concepts in the training of conference interpreters. The motive for this discussion emerged in the framework of the Masters in Conference Interpreting, which has been offered at the University of Cyprus during the last three years. Text linguistic concepts such as theme-rheme-organization (TRO) and thematic progression have formed an integral part of the theoretical training of conference interpreters ever since the MA Program was first launched. The consideration of text linguistic aspects in the curriculum of
this Program generally aims at familiarizing interpreting trainees with the notion of textual organization, especially since these students come from quite different academic backgrounds and an acquaintance with texts as organized communicative entities could not be taken for granted. The examination of the practical use of text linguistic concepts in the training of interpreters should be conducted in connection with anticipation, a term usually reserved for the microstructural aspects of discourse, implying a "guessing" of syntactic components and positions or the semantic completion of units up to the sentence level (in simultaneous interpreting). This paper proposes an expansion of the notion of anticipation in order to cover the macrostructural aspect of texts as well. Furthermore, the paper argues that text linguistic concepts a) offer a practical and necessary training tool both for consecutive and for simultaneous interpreting and b) contribute to a better output by affecting anticipation.
MuTra 2007 LSP Translation Scenarios, Summary Abstracts
http://www.euroconferences.info/2007_abstracts.php

Mandana Taban (University of Vienna)
Language as a Means of Creating Identity in Films
Currently, Prof Dr Mary Snell-Hornby is conducting a project "Literary translation as multimedial communication" at the University of Vienna (funded by the Austrian Science Foundation… The topic of my dissertation developed in the course of my cooperation on this project as a research assistant. … by analysing the results of this empirical study I hope to find out more about the role of language in a film and, more importantly, in the audience participation of "The Other". Since this is work in progress, I cannot be more precise about the goals of my research.
MuTra 2005 Challenges of Multidimensional Translation, Summary Abstracts
http://www.euroconferences.info/2005_abstracts.php
Mary Carroll (Titelbild/Berlin)
Subtitling for the Deaf and Hearing-impaired in Germany: History and Status Quo
Some 19% of Germany's population of 80 million suffers from hearing loss. Though large in number, the deaf and hearing impaired in Germany have little access to news, information and entertainment… This paper will look at the development of subtitling services on television and in cinemas for the deaf and hard of hearing in Germany. It will examine some of the demands of hearing-impaired audiences, stressing the need for research in this field. … Presentation of these projects should be seen as a basis for discussion on subtitling football matches and sports events live and as a call for research into this area. What type of subtitles best meet the needs and interests of deaf and hard-of-hearing audiences? What type of subtitles optimise reception? The paper will be co-presented by Mary Carroll and Donya Zahireddini, Titelbild Subtitling and Translation GmbH, and Christiane Müller, ZDF.
MuTra 2006 Audiovisual Translation Scenarios, Summary Abstracts
http://www.euroconferences.info/2006_abstracts.php

Lew Zybatow (University of Innsbruck)
Multidimensionale Translation: metatheoretische Reflexionen
First introduced as my plenary title at the Leipzig LICTRA-Conference in 2001, this question is now gaining popularity in translation studies in general, as is shown by the latest EST-Newsletter's editorial. … This paper offers some considerations on how to avoid the danger of speculative theorizing and on providing a methodological basis and framework as a necessary precondition for the establishment of Translation Studies as a scientific discipline.
MuTra 2005 Challenges of Multidimensional Translation, Summary Abstracts
http://www.euroconferences.info/2005_abstracts.php
6. Concluding remarks
In summary we can say that when preparing an abstract, whether it precedes a paper or a conference presentation, we need to be aware of
a. the general purpose of the abstract and the related communicative setting, i.e. whether our abstract fits the purpose of the scholarly journal or conference. The standards for abstract writing may vary by editor or event and we need to make sure we conform to the requested standards;
b. the expectations of our prospective readers/listeners. For abstracts to be accepted by reviewers, they need to fit readers' expectations, not only with respect to the factual dimension and scholarly writing norms. Acceptance and/or rejection may often be motivated by the relationship or self-indicative dimensions and their interplay.
Finally, accepting the relationship and self-indicative limitations in others and in ourselves may lead us to be more "[…] skeptical towards our own criticism [when] expressing opinions or positions contrary to or different from the readers'" (Gile 2001: 25).

References
Borko, H. and Bernier, C. 1975. Abstracting Concepts and Methods. New York, San Francisco, London: Academic Press.
Carroll, M. 2005. "Subtitling for the deaf and hearing-impaired in Germany: History and status quo." Abstract. MuTra Marie Curie Euroconference Audiovisual Translation Scenarios. Programme. May 3rd 2005, available at www.euroconferences.info
Endress-Niggemeyer, B. 1985. "Referierregeln und Referate – Abstracting als regelgesteuerter Textverarbeitungsprozess." Nachrichten für Dokumentation 36 (1985). 110–123.
Floros, G. 2007. "Text linguistic concepts in the training of conference interpreters." Abstract. MuTra Marie Curie Euroconference LSP Translation Scenarios. Programme. May 2nd 2007, available at www.euroconferences.info
Fluck, H.R. 1988. "Zur Analyse und Vermittlung der Textsorte 'Abstract'." In Fachbezogener Fremdsprachenunterricht, C. Gnutzmann (ed.), 67–90. Tübingen: Narr.
Gerzymisch-Arbogast, H. 1988. "Das Absurde in den Dramen Harold Pinters. Versuch einer Erklärung aus linguistischer Sicht." Die Neueren Sprachen 4/88. 405–421.
Gile, D. 2001. "Critical reading in (interpretation) research." In Getting Started in Interpreting Research, D. Gile et al. (eds), 23–38. Amsterdam/Philadelphia: John Benjamins.
Gile, D. 2007. "Of congress abstracts and papers." Research Issues, February 7, 2007, available at www.est-translationstudies.org (last visited 10 February 2008).
Grice, H. P. 1975. "Logic and conversation." In Speech Acts (= Syntax and Semantics 3), Cole/Morgan (eds), 41–58. German: "Logik und Konversation." In Handlung, Kommunikation, Bedeutung, G. Meggle (ed.), 243–265. Frankfurt a.M.: Suhrkamp (1993) (stw 1083).
Gnutzmann, C. 1991a. "Abstracts und Zusammenfassungen im deutsch-englischen Vergleich: Das Passiv als interkulturelles und teiltextdifferenzierendes Signal." In Interkulturelle Wirtschaftskommunikation, B.-D. Müller (ed.), 363–378. München: Iudicium.
Kretzenbacher, H.L. 1990. Rekapitulation. Textstrategien der Zusammenfassung von wissenschaftlichen Fachtexten. Tübingen: Narr.
Mudersbach, K. 1999. "Richtlinien zum Schreiben von wissenschaftlichen Publikationen (Kurzfassung)." In Wege der Übersetzungs- und Dolmetschforschung, H. Gerzymisch-Arbogast, D. Gile, J. House und A. Rothkegel (eds), 316–319. Jahrbuch Übersetzen und Dolmetschen I. Tübingen: Narr. Anhang. (English version available as 'Guidelines for the Publication of Research Books or Papers' at www.translationconcepts.org/publications.htm).
Oldenburg, H. 1992. Angewandte Fachtextlinguistik. 'Conclusions' und Zusammenfassungen. Tübingen: Narr.
Rothkegel, A. 1995. "Abstracting from the perspective of text production." Information Processing & Management 31 (5): 777–784.
Schulz von Thun, F. 1981. Miteinander reden 1: Störungen und Klärungen. Allgemeine Psychologie der zwischenmenschlichen Kommunikation. Hamburg: Rowohlt.
Taban, M. 2006. "Language as a means of creating identity in films." Abstract. MuTra Marie Curie Euroconference Audiovisual Translation Scenarios. Programme. May 4th 2006, available at www.euroconferences.info
Thome, G. 2003. "Strategien der Textverkürzung bei der Übersetzung ins Deutsche." In Textologie und Translation, H. Gerzymisch-Arbogast et al. (eds), 305–330. Jahrbuch Übersetzen und Dolmetschen 4/II. Tübingen: Narr.
Zybatow, L. 2005. "Multidimensionale Translation: Metatheoretische Reflexionen." Abstract. MuTra Marie Curie Euroconference Audiovisual Translation Scenarios, available at http://euroconferences.info/2006_abstracts.php
Linguistic societies
LAGB, Linguistics Association of Great Britain, Oxford University: http://www.essex.ac.uk/linguistics/LAGB/Autumn03/1.html (last visited 10 February 2008).
Linguistics Society of America: http://lsadc.org/info/meet-annual08-abguide.cfm (last visited 10 February 2008).
Standardization information
ANSI, The American National Standards Institute: American national standards for writing abstracts. ANSI Z39.14.1979. New York.
DIN 1426: Normenausschuss Bibliotheks- und Dokumentationswesen (NARD) in DIN Deutsches Institut für Normung e.V.: Inhaltsangaben von Dokumenten. Kurzreferate, Literaturbericht. 1988. Berlin.
ISO 214 International Organization for Standardization: Documentation: Abstracts for publication and documentation. 1976. Geneva.
Web resources
EST website, February 2007: http://www.est-translationstudies.org/ (last visited 17 May 2008).
MuTra website: http://euroconferences.info/2007_abstracts.php?konferenz=2007&abstracts=1 (last visited 10 February 2008).
Prodainformad: http://www.translationconcepts.org/pdf/Publication_Guidelines.pdf (last visited 10 February 2008).
Schulz von Thun: http://www.schulz-von-thun.de/mod-komquad.html (last visited 10 February 2008), in German.
University of Purdue: http://owl.english.purdue.edu/ (last visited 8 April 2007).
University of California: http://www.linguistics.ucsb.edu/ (last visited 8 April 2007).
University of Leeds: http://www.leedsmet.ac.uk/ (last visited 8 February 2007).
Construct-ing quality
Barbara Moser-Mercer
Ecole de traduction et d'interprétation, Université de Genève, Switzerland

There is a large body of research devoted to exploring quality in interpreting, and various authors have identified the survey as one of the methodologies most frequently applied in this line of research. However, this line of research lacks fundamental and principled guidance regarding survey methodology, in spite of a fairly rich literature on survey design. This paper attempts to remedy this gap by developing a succinct yet comprehensive guide to questionnaire design for quality research in interpreting, covering important concepts such as validity, reliability, construct design and ethical dimensions.
Keywords: interpreting quality, questionnaire design, research methodology
1. Introduction
In her contribution to the topic of quality in conference interpreting, Kurz (2001: 397) writes that "questionnaires have been the most common means to determine user expectations and/or responses." Quoting Gile (1991: 163–64) she goes on to say that "they are the most straightforward scientific way of collecting data on actual quality perception by delegates". Kurz (2001) offers an overview of questionnaire-based studies starting with Bühler (1986), her own series of surveys (1989, 1993, 1994, 1996); Gile (1990); Meak (1990); Ng (1992); Marrone (1993); Vuorikoski (1993, 1998); Kopycynski (1994); Mack and Cattaruzza (1995); Moser (1995, 1996); Feldweg (1996); Collados Aís (1998) and Andres (2000). To this list we could surely add a number of survey studies carried out on quality in other forms of interpreting, such as those by Mesa (1997) and Pöchhacker (2000) in the field of health care interpreting, or by Kadric (2000) covering court interpreting. This author's contribution will be limited, however, to the issue of research on quality in conference interpreting. Pöchhacker (2001: 414) clearly states that "empirical studies on quality in interpreting have been carried out along various
methodological lines, the most popular and productive of which has been the survey.” His section on Methodological Approaches (ibid.: 423) attempts to identify different quality criteria across the studies he discusses, and he describes his article as a survey of “the state of the art in Interpreting Studies in search of conceptual and methodological tools for the empirical study and assessment of quality across the typological spectrum […] of interpreting”. Yet Pöchhacker is silent on whether the methodological tools used in these studies were appropriate to the research question(s) asked. Nor does his Models and Methods approach to researching interpreting quality (2002) conclude with specific methodological guidance on how to carry out survey research, which he considers “the most popular and productive” methodological approach to researching quality in interpreting (Pöchhacker 2002: 98). Kurz (2001: 403) raises the issue of lack of comparability of indicators across the survey studies she reviews and cites Mack and Cattaruzza (1995) as calling for better coordination and harmonization in the implementation of surveys, and Marrone (1993) for suggesting the development of a standard questionnaire that would be applicable to all interpreting situations. But she too is silent on whether the questionnaire designs employed by the various authors in their surveys of quality in interpreting were indeed appropriate for the research questions they asked and conformed to minimum standards of survey research. In short, the literature on quality research in interpreting is fairly rich in examples of survey-type studies, but lacks fundamental and principled guidance regarding survey methodology. This, however, is one of the most basic prerequisites for attaining the goal that Mack and Cattaruzza (1995) had in mind, that of comparability across studies in interpreting quality. 
In view of the extensive use of survey instruments in interpreting research and the relative casualness with which questionnaires are designed, this author would like to follow up on an earlier publication (Moser-Mercer 1996: 49) and contribute to methodological rigor in this very popular area of interpreting research.

2. Surveying quality – some preliminaries
To the extent that quality surveys probe respondents' attitudes towards a service, they go beyond the collection of purely factual information (such as gender, age, number of conferences attended in which interpretation was offered, etc.) and measure psychological characteristics. And this is where the following principles must be borne in mind when developing survey instruments:
– They should discriminate as widely as possible across the variety of human response and not just identify a few extreme individuals whilst showing no
difference between individuals clustered at the center of the scale. We call this the discriminatory power of the survey instrument.
– They should be highly reliable, i.e. they should produce the same results on different occasions. As quality questionnaires usually probe respondents' attitudes towards specific dimensions of the construct quality, the questionnaire as a whole must be internally consistent, meaning that people should answer related items (questions) in one and the same questionnaire in the same way. To ensure internal reliability the questionnaire designer can split the entire set of questions into two sets comprising half the complete questionnaire each (this is called the split-half method). If the questionnaire is internally reliable, respondents' scores on each half should be similar; this would be assessed using correlational statistics. Item analysis is another method. It involves taking each item in the questionnaire and calculating the correlation between the respondent's score on that item and the score for the questionnaire as a whole. External reliability can be checked through the test-retest procedure. This is, however, not very practical in quality surveys unless we decide to submit one and the same questionnaire on successive conference days (which is already fairly common in some larger scientific conferences where delegates are asked to fill in questionnaires for each session or conference day).
– They should be valid, i.e. they should be measuring what was originally intended (we will return to that in our discussion of constructs below). The most straightforward method for checking a questionnaire's validity is to inspect its contents to see whether it does indeed measure what it is supposed to. While this might be fairly obvious with some questionnaires, or parts of questionnaires, designed to collect more factual information, it is far more complex when dealing with multi-dimensional constructs such as quality.
Barbara Moser-Mercer

This is where an unequivocal definition of the quality that is being surveyed in a given questionnaire is fundamental to establishing face validity. In addition to checking for face validity, the content of the questionnaire needs to undergo a similar assessment: we must ensure that the content is representative of the area which the questionnaire is intended to cover. We also need to check construct validity – in other words, does the construct, in our case quality, even exist in the minds of the respondents, or only in the minds of the beholders (those who are engaged in researching it)? Are conference delegates actually thinking about quality? We may find, for example, that quality as a construct is too multi-dimensional, or that respondents don’t really think in terms of quality (e.g. “this is a good/bad interpretation”) but rather in terms of other constructs such as comprehension (“I don’t understand what the speaker is trying to say”). Last but not least, we need to check for population validity: it is unacceptable to make claims about the population as a whole if all one has surveyed is a small group of interpreting students. This dimension has certainly been considered by Kurz (1989, 1993, 1994, 1996) in her attempt to survey different user groups and establish commonalities and differences among them. It was also taken into account in Moser (1996). While descriptions of the samples used in survey studies on quality are often quite detailed, the conclusions drawn by the various authors are often more sweeping than the population sample justifies.

3. Construct-ing quality

We have concluded above that some facts can easily be measured directly, whereas others are more elusive and do not lend themselves to direct measurement. Quality is certainly one of those constructs that elude direct measurement (direct questions such as “What did you think of the quality of interpretation in this conference?” are a case in point) and hence need to be broken down into more tangible components. This is where many of the more theoretical writings on the study of quality in interpreting have attempted to make a contribution (Pöchhacker 2002; Garzone 2002; Kalina 2002; Shlesinger 1997); their results, however, are rarely reflected in the questionnaires designed to investigate quality in interpreting. After all, constructs represent the sum of attributes of a specific concept, in our case quality, which are defined by established theories. While we can experience speaking speed directly through our senses, we cannot experience quality; we infer it through experiencing interpreters at work whose output reflects attributes of quality. The first step in the measurement of quality is therefore to distinguish the construct from other similar constructs by defining it clearly, precisely and unambiguously. This can be accomplished by determining theoretically what specific characteristics are relevant to the given construct.
This must be followed by an operational definition of the construct quality in order to relate the construct we have defined theoretically to our observations in real life. We must therefore decide how we can elicit responses that will indicate the degree of presence or absence of a specific construct attribute in the minds of our respondents. I think it is fair to conclude that most of the survey research on quality lacks a rigorous description of the construct quality and that we continue to search for quality concepts that can be operationalized. Many studies on quality in interpreting are overly ambitious in the sense that the authors’ awareness of quality being multidimensional and complex leads to the construction of a questionnaire that tries to address too many attributes in a non-specific way; this is ultimately counter-productive as the results fail to address a more circumscribed quality construct. Quality in interpreting as such is simply too broad a construct, as it eludes theoretical
definition and cannot be fully operationalized. The distinction between quality as perceived by the user, the team-mate, the employer, etc. (Moser-Mercer 1996; Kurz 1989; Pöchhacker 2002), already delimits the construct and reduces the number of attributes which can then be explored in much greater detail. Employing the same approach to construct definition for all three of the above groups would then facilitate inter-group comparisons. This is where past research on what constitutes quality can be very useful, as we first might wish to develop a sort of model of quality that allows us to understand its individual dimensions. A brainstorming session with representatives of various user groups would be an excellent starting point. In response to the question “What is quality in interpreting for you?” we might obtain answers from different categories of users of interpretation such as “being able to understand what the speaker says”, “understanding the speaker’s intentions”, “understanding the dynamics in the conference room”, “allowing me to successfully conclude a business deal”, “being able to acquire information efficiently”, and we may also obtain negative answers such as “not having to listen to a high-pitched voice”, or “not having to infer technical terms”. We can then group these responses and establish categories (or criteria according to Kurz 2001). Thus, for example, the first two responses above could be regrouped into one category labeled comprehension of content, a component of the construct that we can then proceed to measure by using a Likert-type scale. While some headway has been made in trying to assemble a model for the construct quality (Pöchhacker 2002; Kalina 2002), quality attributes have usually been suggested by researchers and not by users. 
This renders some of the research on quality circular: the questionnaire design reflects merely the researcher’s perception of the construct quality, leaving the respondent with little choice but to react to that perception and offering little possibility for respondents to include their own perceptions. The issue is not whether an attribute is measurable, as there is great variety in the measurement tool box, but whether the attributes correspond to reality, i.e. to the way in which users perceive quality. Adding an open-ended question, such as Comments, is not an effective remedy, but might generate interesting information when used during the piloting phase of a questionnaire.

4. Measures and scales

Once we have established our construct components and categorized them we can begin to explore ways of measuring them. Measures are developed by giving operationalized constructs a dimension as they qualify and sometimes quantify a trait in a single dimension, such as presence or absence, or the amount, intensity, value, frequency of occurrence, or ranking or rating or some other form of comparative
valuation or quantification. Measures must be accurate, precise, valid, reliable, relevant, realistic, meaningful, and sensitive. We have already covered the notions of validity and reliability earlier in this article. For a measure to be relevant it must bear a relation to the construct component: returning to our example of content comprehension and situating it in a scientific conference, it might make little sense to ask respondents about their opinion of the interpreter’s style, as style might not be a terribly relevant measure of how well a user comprehends the original. The measure should indeed be realistic and practical; despite the fact that style is not an entirely irrelevant component of the construct quality, even in scientific conferences, it might be neither realistic nor practical to measure it as its contribution to comprehension would most likely be minimal – which brings us back to our first principle above, that of the discriminatory power of the survey instrument. A measure should be sensitive: it should detect the presence or absence of an attribute/construct component, and furthermore detect its intensity and changes in the levels of intensity with sufficient precision. In short, when using a scale we need to provide meaningful starting, interim, mid and end points. One of the most common scales used in questionnaires is the summated rating (Likert-type) scale, named after Likert (1932). In order to produce a Likert-type scale we need to write an equal number of favorable and unfavorable statements about the construct component, typically rated on a five-point scale ranging from strongly agree through agree, undecided and disagree to strongly disagree. It is important that all components of the construct be measured the same way, as otherwise comparability across construct components is compromised and checks for internal validity are no longer possible.
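The mechanics of a summated rating can be shown in a short sketch (item wordings and data are invented): the unfavorable statements are reverse-coded before summing, so that a high total always means a favorable attitude towards the construct component:

```python
# 5-point Likert scale: 1 = strongly disagree ... 5 = strongly agree.
# A balanced item set mixes favorable and unfavorable statements about
# the same construct component; unfavorable ones are reverse-scored.
items = [
    ("The interpreter's rendition was easy to follow.",       False),
    ("I often lost the thread of the speaker's argument.",    True),   # unfavorable
    ("Technical terms were rendered clearly.",                False),
    ("The interpretation obscured the speaker's intentions.", True),   # unfavorable
]

def summated_rating(ratings, items):
    """Sum a respondent's ratings, reverse-coding unfavorable items."""
    total = 0
    for rating, (_, reverse) in zip(ratings, items):
        total += (6 - rating) if reverse else rating  # 6 - x flips a 1-5 scale
    return total

# A respondent who agrees with the favorable items (5) and disagrees
# with the unfavorable ones (1) gets the maximum score of 20.
print(summated_rating([5, 1, 5, 1], items))  # → 20
```

Because every component is scored on the same 1–5 scale, totals for different construct components remain directly comparable, which is exactly the point made above.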
Using different scales for different questions is one of the most frequent mistakes in questionnaire design, and the literature on quality surveys in interpreting is no exception to this.

5. Questions – yours for the asking

Once we have defined our construct(s), developed our measurements, and studied the size and characteristics of our population, we can finally move on to asking our questions. In general it is wise to be economical: we should ask for the minimum of information required for the research purpose. Our respondents’ time is precious. Being parsimonious usually means that we limit the respondents’ effort required to complete the questionnaire, while at the same time focusing on those construct components that have a high level of predictability for influencing the population’s attitude to our construct. Gathering too much information is not useful; asking for information only because it seems interesting runs the risk of
negatively influencing our return rate as respondents give up half-way through the questionnaire and we are left with many missing values. We should also make sure that the questions can indeed be answered. Asking conference delegates how many times they have used interpreting over the past five years, for example, is likely to yield highly unreliable responses, the quality of the response therefore being too low to be considered. Or, worse still, the respondent skips the question. Questions can be categorized into fixed-choice questions and open-ended questions (the least structured). With the former, respondents select among a fixed set of answers, while the latter allow for additional, open choices, or are entirely unstructured, as is the case with comment-type questions. Fixed questions are easy to code and quantify, whereas open-ended questions are not. The advantage of open-ended questions is, however, that they deliver rich information, that respondents do not feel constrained by the pre-printed answers, that respondents can say what they think, and that the question is usually more realistic. When composing questions we also need to consider certain pitfalls such as complexity, technical terms, ambiguity, double-barrelled items, negatives, emotive language, leading questions and invasion of privacy. Complex questions cannot be answered simply and need to be broken up into logical components: “What importance do you attach to enunciation and prosody?” cannot be answered in one go and must be separated into two questions. Using the terms enunciation and prosody runs the risk of a non-response, as users may not know these technical terms; using general language such as speaking clearly and speech rhythm will remedy this mistake.
Asking two questions at once, which is similar to the problem of complex questions as described above, creates a dilemma for the respondent: he or she might well agree with one part of the question, but disagree with the other: “Is it important for you to understand the essentials of a speech and to hear the proper technical terms?” should either be separated into two questions to allow for a differentiated response, or set up as a more-or-less question: “Is it more important for you to understand the essentials of a speech or to hear the proper technical terms?”. In the interests of avoiding a situation where respondents generally tend to agree or disagree with every question put before them, called response set (see below), about half the items in a scale should be positive towards the object and about half negative. It is also confusing to answer a question with a double negative, even when one negative is camouflaged as in the following example, “It is not possible to misunderstand a speaker’s message because the interpreter failed to produce the correct technical term”. Leading questions are among the most frequent mistakes seen in questionnaires on quality in interpreting: the respondents are invited to agree or disagree.
This is the kind of question we tend to answer with “Yes, but”, or “No, but”, yet the response is coded as yes or no: “In scientific-technical conferences the reproduction of correct technical terminology is of overriding importance”. Such a statement is better phrased as a list item with other construct categories added for the respondent to tick or rate: “Rank the following five criteria in terms of their importance for you to follow successfully the interpretation in a scientific-technical conference:
– correct interpretation of acronyms
– completeness of rendition
– correct technical phraseology
– focus on essential information
– correct technical terminology.”
This example illustrates how a researcher interested in what determines delegates’ perceptions of quality in scientific-technical conferences would try to isolate those construct attributes that most realistically define the notion of quality in the minds of the delegates. Rather than inviting the respondents to simply say “yes” or “no”, the researcher asks them to consider which of the categories identified as defining the notion of quality in scientific-technical conferences are most important to them. Of course, this sample question would have matching control questions appearing later on in the questionnaire.
Examples of possible control questions would be: “To successfully understand the original in a scientific-technical conference the interpreter must use correct technical terminology.” “In a scientific-technical conference technical acronyms need not be transposed for the original to be entirely comprehensible.” “A complete rendition of the original speech in a scientific-technical conference is essential to full comprehension.” “In a scientific-technical conference the interpreter should above all render the essential information.” “The use of correct technical phraseology is not essential to understanding the interpretation of a presentation in a scientific-technical conference.” Each of these five control questions, which represent a mix of positive and negative statements to avoid response-set bias (see below), would be rated on a five-point Likert scale. Correlational analyses between responses to these five control questions and the ranking provided by respondents on the original question would allow the researcher to check for internal reliability on the one hand, and, on the other, to deliver a more fine-grained analysis of what each construct category contributes to the construct of quality in scientific-technical conferences as a whole. This will inform follow-up research: construct categories whose importance for the construct quality in scientific-technical conferences was not confirmed can be eliminated, while other categories will be investigated in more detail. The final assembly of questions in a questionnaire needs to conform to certain conventions in order to avoid the risk that respondents will answer automatically
with “agree” or “disagree” when faced with a long series of questions that invite them to agree, or disagree for that matter, or which reveal too much about the researcher’s intentions. To avoid a constant error from this effect, items need to be an unpredictable mixture of positive and negative statements about the construct. This has the effect of keeping the respondent thinking about each item. On the whole it is important to make it clear to the respondents that the questionnaire contains both positive and negative items, because questionnaires containing only positive or only negative items, some of which are bound to run contrary to the respondents’ feelings, set up strong negative emotions. If all items are in the same direction the respondent may also begin to interpret the objective of the research and then respond in compliance with or in defiance of it. With research on quality in conference interpreting being carried out almost exclusively by practicing conference interpreters, pleasing the researcher assumes disproportionate importance. If we consider this bias in conjunction with the fact that it is interpreting researchers who define the construct quality without much input from users, except in the case of the follow-up research mentioned above, we will have to call into question the validity of many of the survey results we take as evidence today.

6. Ensuring quality of quality surveys

Hardly any quality survey in interpreting has been piloted. Quality assurance in questionnaire design for studying interpreting quality should indeed become a priority, whether the research is carried out by a student with a limited sample in a university setting, or whether the researcher goes public and surveys conference delegates. Some quality assurance must be carried out during the design phase. It is then that the questionnaire should be pre-tested on selected persons who represent the range of persons who will make up the sample population.
The research design and the questionnaire should also be sent to expert colleagues, those who are familiar with researching quality in interpreting and the potential sample population. Pre-testing and expert review are among the best ways to ensure validity and reliability. By testing the questionnaire before it is widely distributed researchers can assess whether they are asking the right group of people the right questions, whether these questions are put in the recommended form, and whether the respondents are willing and able to give the researchers the information they need. If respondents in the pretest sample have difficulty with certain items it is likely that the ultimate sample population will experience similar problems. Among the problems encountered are those that relate to the form of questions (see above), the length of time it takes to read and understand each question and to fill in the questionnaire,
difficulties understanding questionnaire instructions, and uncompleted items. Once these problems have been addressed and a revised version of the questionnaire is available we still need to consider two issues: the instructions and the formatting. The first part of the questionnaire should present the introduction and instructions. These should state the purpose of the survey, explain who the data collector is, why the survey is conducted and why the respondents were selected. There should be sufficient information about how to complete the questionnaire, how long it will normally take to complete it, about how the data will be used and who will have access to the information. The introduction should conclude with assurances of confidentiality and anonymity of data; this is a legal requirement in many countries, usually within the purview of Institutional Review Boards for research on human subjects. While such institutional review of projects involving human subjects is required in psychology, medicine and the social sciences, schools of translation and interpreting have no such tradition. This raises issues in research ethics which will be addressed in the next section. Formatting the questionnaire should focus on good layout and composition, response space that is sufficient to record the answers, and readability that will ensure good response rates. The delivery-mode of the questionnaire has a decisive influence on response rate: the easier we make it for respondents to fill in our questionnaire, the higher the response rate. While some scenarios do not offer a lot of choice in terms of delivery-mode – surveying interpreting quality in a classroom setting will most likely involve paper and pencil – others offer the whole range from traditional paper and pencil to e-mail and web-based questionnaires. 
Googling “software” and “questionnaires” offers a wealth of responses, with many of the software packages offering turn-key solutions, from questionnaire design all the way through to e-delivery, statistical analysis and even web reporting. Many of these packages are simple to use, produce good descriptive statistics, including correlational statistics, and have excellent graphics capability. If investing in commercial software is not warranted, one of the GPL products such as Moodle (www.moodle.org) will certainly meet more than just basic needs. Moodle’s survey functionality is easy to use and produces a well-formatted web-based questionnaire. For statistical analysis one needs to export the data to a spreadsheet program and use that program’s statistics package. It is indeed of immense value to be able to compute statistics not only for meeting the objectives of the research project, but also in order to check for internal consistency and reliability of the questionnaire (see our basic principles of questionnaire design above). Knowing that quality is a complex construct with many attributes, it is of considerable importance to be able to compute relationships between these attributes in order to see their relative influence on the construct quality. While this is standard in survey studies in other disciplines,
research on interpreting quality has rarely gone beyond basic descriptive statistics, despite the fact that sample size would at times have allowed for more powerful statistics (e.g. Gile 1990; Kurz 1993; Mack & Cattaruzza 1995).

7. Good ethics is good science – good science is good ethics

This section is designed to highlight only those issues that are relevant to survey research. For a more complete treatment of research ethics in the social sciences see:
– the American Association for Public Opinion Research at http://www.aapor.org/bestpractices
– the Belmont Report at http://www.nuhresearch.nhg.com.sg/obr/Belmont%20Report.pdf
– the Declaration of Helsinki at www.cirp.org/library/ethics/helsinki/
– or the RESPECT report of the European Commission’s IST Program at http://www.respectproject.org/ethics/index.php
The most relevant issues for our purposes are informed consent, confidentiality, respect for human subjects, and justice. Informed consent implies that we provide sufficient information (subject matter, time commitment, benefits) about the survey to allow the respondents to determine whether they want to participate. Confidentiality protects the privacy of the respondents, as they expect the researcher not to share publicly the information they provide. In interpreting research, we often receive e-mail invitations to participate in a survey with the request to fill in the attached questionnaire and return it to the sender by a specific date. Guarantees regarding how the researcher will ensure confidentiality of the respondent’s data returned via e-mail are never included. It is the researcher’s obligation to assure respondents that only those who need to know their identities will have access to the information. Researchers are also responsible for protecting the autonomy and freedom of individuals while conducting their research.
Problems can easily arise when conducting classroom research, where students are implicitly expected to participate and the power relations between student and teacher (researcher) are such that even if a student preferred not to participate, he or she would often yield. As to the concept of justice in research, we should ask ourselves whether the research has real value to many people, or whether the data we are gathering are liable to benefit only a few. Studies on the impact of new technologies in interpreting are a case in point: some studies commissioned by institutional employers, or those carried out in collaboration with them, have gone to some length to ensure that the quality of interpretation was looked at by different constituencies (employers, interpreters, users). Given the highly charged nature of this topic for
the profession, this approach ensured objectivity of the data-gathering procedure and of the research results.

8. Conclusion

Quality in interpreting is one of the more frequently studied concepts in the interpreting research literature, but it is mostly neglected from a methodological point of view. Given the potential that well-designed research on quality in interpreting has for developing norms for different types of interpreting and different interpreting scenarios (for a discussion of the possibility of introducing the concept of norms as a heuristic instrument to account for variability in quality criteria and standards as perceived and practiced by interpreters and users see Garzone 2002:110ff), and for shaping interpreters’ working conditions, this author feels that this line of research demands greater methodological rigor. We live in an age of total quality management, and interpreters still have to fight, on an everyday basis, for the most fundamental ingredients of good interpreting performance, such as an advance copy of the speaker’s speech, a short briefing, an in-house terminology list or something as simple as the agenda, sufficient oxygen in the booth and a functioning table lamp. All of these factors influence quality of service, and we owe it to ourselves and our profession to produce quality studies of the highest methodological standard whose results will provide convincing evidence of what goes into expert professional performance and how it can be guaranteed. Hopefully, this article can make a modest contribution to improving the quality of quality surveys.

References

Andres, D. 2000. Konsekutivdolmetschen und Notizen. Empirische Untersuchung mentaler Prozesse bei Anfängern in der Dolmetscherausbildung und professionellen Dolmetschern. Unpublished PhD dissertation, University of Vienna.
Bachmann, L.F. 2003. Fundamental Considerations in Language Testing. Oxford: Oxford University Press.
Bühler, H. 1986. “Linguistic (semantic) and extra-linguistic (pragmatic) criteria for the evaluation of conference interpretation and interpreters.” Multilingua 5 (4): 231–235.
Collados Aís, A. 1998. La evaluación de la calidad en interpretación simultánea. La importancia de la comunicación no verbal. Granada: Editorial Comares.
Coolican, H. 1994. Research Methods and Statistics in Psychology. London: Hodder & Stoughton.
Feldweg, E. 1996. Der Konferenzdolmetscher im internationalen Kommunikationsprozess. Heidelberg: Julius Groos.
Garzone, G. 2002. “Quality and norms in interpretation.” In Interpreting in the 21st Century, G. Garzone and M. Viezzi (eds), 107–120. Amsterdam/Philadelphia: John Benjamins.
Gile, D. 1990. “L’évaluation de la qualité de l’interprétation par les délégués: une étude de cas.” The Interpreters’ Newsletter 3: 66–71.
Gile, D. 1991. “A communication-oriented analysis of quality.” In Translation: Theory and Practice. ATA Scholarly Monograph Series, M.L. Larson (ed.), 188–200. Binghamton, NY: SUNY.
Gile, D. 1995. “Fidelity assessment in consecutive interpretation: An experiment.” Target 7 (1): 151–164.
Hayes, N. 1997. Doing Qualitative Analysis in Psychology. Hove: Psychology Press.
Kadric, M. 2000. “Thoughts on the quality of interpretation.” Communicate 4. Available at www.aiic.net/ViewPage.cfm?page_id=197 (Last accessed March 2008).
Kalina, S. 2002. “Quality in interpreting and its prerequisites: A framework for a comprehensive view.” In Interpreting in the 21st Century, G. Garzone and M. Viezzi (eds), 121–132. Amsterdam/Philadelphia: John Benjamins.
Kopczynski, A. 1994. “Quality in conference interpreting: Some pragmatic problems.” In Translation Studies. An interdiscipline, M. Snell-Hornby, F. Pöchhacker and K. Kaindl (eds), 189–198. Amsterdam/Philadelphia: John Benjamins.
Kurz, I. 1989. “Conference interpreting: User expectations.” In Coming of Age. Proceedings of the 30th Annual Conference of the American Translators Association, D. Hammond (ed.), 143–148. Medford, NJ: Learned Information.
Kurz, I. 1993. “Conference interpretation: Expectations of different user groups.” The Interpreters’ Newsletter 5: 13–21.
Kurz, I. 1994. “What do different user groups expect from a conference interpreter?” The Jerome Quarterly 9 (2): 3–7.
Likert, R.A. 1932. “A technique for the measurement of attitudes.” Archives of Psychology 140: 1–55.
Mack, G. and L. Cattaruzza. 1995. “User surveys in simultaneous interpretation: A means of learning about quality and/or raising some reasonable doubts.” In Topics in Interpreting Research, J. Tommola (ed.), 51–68. Turku: University of Turku.
Marrone, S. 1993. “Quality: A shared objective.” The Interpreters’ Newsletter 5: 35–41.
Meak, L. 1990. “Interprétation simultanée et congrès médical: Attentes et commentaires.” The Interpreters’ Newsletter 3: 8–13.
Moser, P. 1996. “Expectations of users of conference interpretation.” Interpreting 1 (2): 145–178.
Moser-Mercer, B. 1996. “Quality in interpreting: Some methodological issues.” The Interpreters’ Newsletter 7: 43–55.
Ng, B.C. 1992. “End users’ subjective reaction to the performance of student interpreters.” The Interpreters’ Newsletter 1: 35–41.
Pöchhacker, F. 2001. “Quality assessment in conference and community interpreting.” Meta 46 (2): 410–425.
Pöchhacker, F. 2002. “Researching interpreting quality: Models and methods.” In Interpreting in the 21st Century, G. Garzone and M. Viezzi (eds), 95–106. Amsterdam/Philadelphia: John Benjamins.
Potter, W.J. 1996. An Analysis of Thinking and Research about Qualitative Methods. Mahwah, NJ: LEA.
Shlesinger, M. 1997. “Quality in simultaneous interpreting.” In Conference Interpreting: Current Trends in Research, Y. Gambier, D. Gile and C. Taylor (eds), 123–131. Amsterdam/Philadelphia: John Benjamins.
Todd, Z., Nerlich, B., McKeown, S. and Clarke, D.D. 2004. Mixing Methods in Psychology. Hove: Psychology Press.
United States General Accounting Office. 1993. Developing and Using Questionnaires. Washington, D.C.: GAO.
Vuorikoski, A.-R. 1993. “Simultaneous interpretation: User experience and expectation.” In Translation – the vital link. Proceedings of the XIIIth World Congress of FIT, C. Picken (ed.), 317–327. London: Institute of Translation and Interpreting.
Vuorikoski, A.-R. 1998. “User responses to simultaneous interpreting.” In Unity in Diversity? Current Trends in Translation Studies, L. Bowker, M. Cronin, D. Kenny and J. Pearson (eds), 184–197. Manchester: St. Jerome.
Empirical studies
How do experts interpret? Implications from research in Interpreting Studies and cognitive science

Minhua Liu
Fu Jen University, Taiwan

In this article, expertise in simultaneous interpreting is defined as the result of well-practiced strategies in each of the comprehension, translation, and production processes, and of the interaction among these processes, which are specific to the needs of the task of simultaneous interpreting. What allows the comprehension, translation, and production processes to act in sync is interpreters’ ability to manage their mental resources in an efficient manner, particularly in the way attention is managed. Expert-novice differences are examined by comparing skills and sub-skills, by analyzing the cognitive abilities underlying the act of simultaneous interpreting, and by providing evidence and counter-evidence from Interpreting Studies and cognitive science.
Keywords: attention, expert, expertise, novice, simultaneous interpreting
1. Introduction How expert interpreters go about their trade has long been a popular topic in the field of Interpreting Studies. Even though many studies have attempted to explore this topic from different perspectives (e.g. Barik 1973, 1975; Liu Schallert & Carroll 2004; Moser-Mercer, Frauenfelder, Casado & Künzli 2000; Padilla, Bajo, Cañas & Padilla 1995) and have helped paint a general picture of expertise in interpreting, we still do not know much about the types of processes and abilities involved. This is partly due to the fact that expertise in interpreting cannot be clearly defined since the task of interpreting does not have clearly set goals. The commonly mentioned goals of interpreting, e.g. to facilitate communication across languages and cultures, are too vague to guide research that can measure the achievement of such goals. Even the often-used criteria for judging the quality of interpreting, i.e.
accuracy, completeness, appropriate language use, and smooth delivery, lack agreed-upon and reliable methods of measurement to produce consistent findings. This situation is to a large extent due to the great variety of texts produced by interpreters, which makes generalization difficult. Owing to the lack of clearly defined objectives and consistently reliable measuring devices for performance, research in Interpreting Studies has often opted to compare expert and novice performance in order to determine if there are observable differences in behaviors or abilities that can be attributed to different stages of expertise development (e.g. Liu et al. 2004; Padilla et al. 1995). However, we have to note that expertise defined through this contrastive approach is rather relative, since any more skilled group can be considered the “experts” and a less skilled one the “novices”¹ (Chi 2006: 22). This relativity in defining experts and novices is another factor underlying the difficulty in comparing the results of different studies on interpreting expertise and in making generalizations across studies. However, this relative approach can illuminate our understanding of how experts become the way they are, so that novices can learn to become experts (Chi 2006: 23). Indeed, knowing how expert interpreters perform their craft differently from novices and how expertise progresses along a developmental course is crucial to the success and efficiency of interpretation training.

In this article, I will first examine the skills of interpreting. I will then discuss the sub-skills and cognitive abilities that may underlie the act of interpreting. In the next step, I will present what some researchers have proposed to account for expertise in interpreting by offering evidence and counter-evidence from Interpreting Studies and cognitive science.

1. In addition to “expert interpreters,” studies of interpreting have also used other terms, such as “professional interpreters,” “experienced interpreters,” etc. In this article, the term “expert interpreters” will be used unless the discussion requires otherwise. Likewise, the term “novice interpreters” will be used to include both interpreters still in training and people with no experience in interpreting, unless otherwise required in the context.

2. Skills in interpreting

Since the interpreting process itself cannot be directly studied, the interpretation output is often analyzed to provide insight into the component processes executed and thus the skills used to perform the task (Dillinger 1989). Studies on the quality of interpretation output have shown that expert interpreters are more accurate than novice interpreters. For example, when measured by the percentage of propositions correctly interpreted, expert interpreters accurately interpreted approximately 17% more of the content than bilingual non-interpreters (Dillinger 1989, 1990). Using a system that did not count propositions to categorize a meaning unit, Liu (2001) found that expert interpreters were more accurate (45%) in their performances than second-year interpretation students (33%) and first-year students (25%) (see also Liu et al. 2004). When the number of errors and omissions (combined to be termed “translation disruptions”) was used as a measure of quality in interpreting, expert interpreters committed fewer disruptions (about 5%) than both student interpreters (about 8%) and bilingual non-interpreters (about 10%) (Barik 1975). It is quite possible that expert interpreters’ more accurate output results from better comprehension during interpreting. It is also possible that different processes are at work when it comes to producing more accurate output. Interpreting involves three obvious processes and skills: comprehension, translation, and production. I will discuss each of these processes and skills individually as well as in combination, offering empirical evidence from the literature as to the role each of these skills plays in expert interpreting.²

2.1 The comprehension process and skill
Dillinger (1989) investigated whether comprehension during simultaneous interpreting was different from that during listening – in terms of syntactic processing, proposition generation, and frame-structure processing as shown in free recall. The results showed that in general, the two comprehension tasks were not different from each other in these aspects. Dillinger concluded that “comprehension in interpreting is not a specialized ability, but the application of an existing skill under more unusual circumstances” (1989: 97). If comprehension during simultaneous interpreting involves the same set of components as during listening, we would assume that expert and novice interpreters do not perform much differently in comprehending the input materials during simultaneous interpreting. In the same study, Dillinger (1989, 1990) also investigated the difference in the comprehension process between experts and bilingual non-interpreters as shown in accuracy in interpreting and in recall. The results showed that there was no significant difference in the two groups’ ability in syntactic processing. In terms of semantic processing, expert interpreters performed significantly better than non-interpreters in the category of directness of mapping (how closely the syntactic importance of the clause matches the semantic importance of the information), indicating more efficient proposition generation (Dillinger 1989). To be more exact, expert interpreters seemed to have learned “to be more selective in the surface information they will process semantically, as a function of the conceptual frame structure that is to be built with it” (1989: 86).

Despite Dillinger’s own conclusion that the expert interpreters in his study had not acquired any qualitatively different skills particular to simultaneous interpreting (1989, 1990), I would argue that being more selective in processing information and more sensitive to the conceptual frame structure of the source speech do constitute a qualitative difference between experts and novices. It is worth noting that many of the differences observed between expert interpreters and non-interpreters in Dillinger’s study were associated with the more difficult procedural text (versus the easier narrative text), thus suggesting that “any special comprehension abilities may only appear clearly with more difficult materials” (Dillinger 1989: 89). I would further argue that the procedural text in Dillinger’s study is closer to the speeches professional interpreters usually encounter in their work, while narrative discourses are much more uncommon. Therefore, the observed differences or lack thereof may be attributed to how close the experimental task is to the domain of interpreting. Indeed, studies have shown that the ecological validity of a task was a more important factor for experts than for novices (Hodges, Starkes & MacMahon 2006). Other studies using different investigative approaches have also shown selectivity by experts when processing information during simultaneous interpreting.

2. Most of the studies reported here use simultaneous interpreting as the mode of choice in the experiments. Therefore, the discussion in this article pertains to this mode of interpreting, unless otherwise specified.
For example, in determining the seriousness of different omissions made by his participants in simultaneous interpreting, Barik (1975) found that a substantially greater proportion of the omissions made by expert interpreters were of the minor type, while fewer than half of those made by novice interpreters (student interpreters and bilingual non-interpreters) were minor omissions. A similar conclusion was reached in a study that investigated whether expert and novice interpreters differed in their selection of more important or less important information when circumstances limited the possibility of full interpretation (Liu et al. 2004). The results showed that this was indeed the case. Expert interpreters demonstrated a greater ability in distinguishing the more essential meaning units from the more secondary ones, correctly interpreting 48% and 42% respectively. The second-year students were less selective in interpreting the more or less important meaning units (34% vs. 32% respectively in accuracy rates), and first-year students, in turn, did not seem to be able to discriminate the importance of the meaning units, with accuracy rates for essential and secondary meaning units both at 25%.

Also studying whether interpreters processed sentences the same way in simultaneous interpreting as in listening, Isham (1994) found that expert interpreters adopted two distinctive approaches when processing information during simultaneous interpreting. Some interpreters’ comprehension pattern resembled that of typical listeners in that their recall protocols reflected the effects of clause boundary and sentence boundary, while other interpreters performed differently, exhibiting their own pattern of information processing. Studies in brain research have shown that the comprehension process can take place at a conceptual level rather than at a sentential or discourse level, where no distinction is shown between sentence and discourse-level boundaries while context is being constructed (e.g. Salmon & Pratt 2002). This shows that the choice and use of strategy plays a more important role than expected in information processing. It also implies that the way interpreters process information may be shaped by the specific tasks involved in simultaneous interpreting.

In addition to the studies that examined interpreters’ output, several studies that investigated the processing behavior of experts and novices during simultaneous interpreting also provided evidence of expert interpreters’ more semantic-based approach to information processing. Using an error detection task in dichotic listening, these studies found that professional interpreters detected significantly more semantic errors while student interpreters detected significantly more syntactic ones (Fabbro, Gran & Gran 1991; Ilic 1990). Expert interpreters’ superiority in semantic processing was also shown in lexical processing. In performing a word categorization task, expert interpreters reacted faster than student interpreters and non-interpreters in categorizing non-typical words, possibly suggesting more efficient access to the semantic information of words (Bajo, Padilla & Padilla 2000).
The same study also showed that training in simultaneous interpreting seemed to contribute to the development of this ability, as student interpreters showed significant improvement in this task after one year of training in simultaneous interpreting while non-interpreters did not improve significantly. These results suggest that differences in syntactic processing cannot adequately explain the expert-novice difference in comprehension. The difference may lie more in processing efficiency and in different approaches to semantic processing. It is also possible that the difference between experts and novices lies in the production process, as Dillinger himself suggested (1989, 1990), or in the translation process, or both. The fact that Dillinger’s expert interpreters were more accurate than non-interpreters in their interpretation output but not in their recall seems to suggest this possibility, as interpretation output reflects the result of the interaction of different processes in simultaneous interpreting, while recall reflects the result of comprehension and the work of long-term memory.
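The accuracy and selectivity measures reported in these studies lend themselves to a simple formalization. The sketch below is illustrative only – the `MeaningUnit` fields and the two functions are my own labels for the general idea, not the actual coding scheme used by Liu (2001) or Liu et al. (2004):

```python
from dataclasses import dataclass

@dataclass
class MeaningUnit:
    essential: bool  # rated as essential (vs. secondary) information
    correct: bool    # judged to be accurately rendered in the output

def accuracy(units):
    """Proportion of meaning units interpreted correctly."""
    return sum(u.correct for u in units) / len(units)

def selectivity(units):
    """Accuracy on essential units minus accuracy on secondary units.
    A positive value suggests the interpreter prioritized essential
    content, as the expert group (48% vs. 42%) did."""
    essential = [u for u in units if u.essential]
    secondary = [u for u in units if not u.essential]
    return accuracy(essential) - accuracy(secondary)
```

On such a scheme, the first-year students described above (25% on both unit types) would show a selectivity of zero despite a nonzero overall accuracy, which is exactly the pattern the authors interpret as an inability to discriminate the importance of meaning units.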
2.2 The translation process and skill
In simultaneous interpreting, part of the translation process can be indirectly inferred from the way interpreters segment the source speech. The interpreters’ segmentation choices do not seem to be guided only by how the interpreters decode the input materials, as professional interpreters were found to either cut into the segmented chunks of the source speech, or combine two or more of the original chunks (Goldman-Eisler 1972). How interpreters segment the source speech can be influenced by the translation process itself, in that an equivalent in the target language must be identified (Goldman-Eisler 1972; Kirchhoff 1976/2002). This “equivalence relation” (Kirchhoff 1976/2002: 114) between the source language and the target language makes the process of simultaneous interpreting a dynamic one where the interpreter is constantly making decisions on how to segment the source speech (see also Oléron & Nanpon 1965/2002). This situation may partially explain the variation in the ear-voice span (EVS)³ observed in simultaneous interpreting of different language combinations (e.g. Goldman-Eisler 1972; Oléron & Nanpon 1965/2002). We may further argue that a critical aspect of expertise in simultaneous interpreting lies in the interpreter’s ability to recognize patterns in the equivalence relation between the two languages in question. If the equivalence relation determines the minimum size of a segment, it is the interpreter’s processing capacity that determines the maximum size of the segment during simultaneous interpreting (Kirchhoff 1976/2002). The translation process also takes effort. Translating words aloud takes about 50 percent more time than simply repeating the words (Oléron & Nanpon 1964, cited in Oléron & Nanpon 1965/2002). Cheung (2001) found that student interpreters who were allowed to use the English terms in the source speech performed better than those who were required to produce an all-Cantonese output in simultaneous interpreting.
The code mixing of English and Cantonese in the interpretation output, and the skipping of part of the translation process, helped reduce the mental resource requirements during simultaneous interpreting. Facing the extra task of translating during simultaneous interpreting, interpreters inevitably have to adopt strategies specific to this task. Studies have found that expert interpreters tend to process larger chunks of input (e.g. Davidson 1992; McDonald & Carpenter 1981). This may at least partially explain why expert interpreters often sound less literal in their interpretations than novices (Barik 1975; McDonald & Carpenter 1981; Sunnari 1995). This phenomenon may be attributed, on the one hand, to expert interpreters’ ability to resort to more semantic-based processing; it may also be a result of the way expert interpreters segment the source speech and how they plan to translate that segment into the target language.

3. The time lag between the moment the original message is heard and the moment the translated message is uttered.

2.3 The production process and skill
The interpreter’s output speech is often characterized by a ritardando and accelerando pattern and is generally less smooth than usual speech (Gerver 1969; Shlesinger 1994). When compared with the output in a shadowing task, the interpretation speech often contains more pauses (Gerver 1969). When the input rate increases, the interpreters may lag farther behind the speaker, speak less and pause more, but their own speech rates remain generally unchanged (Gerver 1969) and are usually maintained at 96-110 words per minute (wpm) (Gerver 1975). Research has shown that an individual’s overt and covert speech seems to have comparable speed limits (Landauer 1962, cited in Cowan 2000/2001). Interpreters’ normal speaking rate during simultaneous interpreting is comparable to the optimal input rate of 95-112 wpm for simultaneous interpreting (Gerver 1969), implying a limit on working memory that can be used for processing information during simultaneous interpreting, be it maintaining the source speech subvocally or uttering one’s own speech overtly. Considering the fact that the normal rate of spontaneous speaking is 160-180 wpm (Foulke & Sticht 1969, cited in Dillinger 1989), the empirical evidence mentioned above points to the obvious interference by the comprehension process (and the translation process) on the production process, and shows how interpreters manage their processing to overcome the interference by using strategies or control mechanisms (Dillinger 1989). One such strategy or control mechanism may be the interpreters’ specific use and control of their attention by sharing or switching back and forth between the different tasks, manifested in the more frequent pauses and the reduction of their own speaking rate. 
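The two quantities at issue here – ear-voice span and speaking rate over time actually spent speaking – are straightforward to compute once the recordings are time-aligned. A minimal sketch (the pair-list input format is an assumption for illustration; studies such as Gerver’s derived these values from tape recordings, not from this kind of data structure):

```python
def mean_evs(alignments):
    """Mean ear-voice span in seconds.
    alignments: (source_onset, target_onset) pairs for corresponding
    segments, both measured in seconds from the start of the recording."""
    lags = [target - source for source, target in alignments]
    return sum(lags) / len(lags)

def speech_rate_wpm(word_count, speaking_seconds):
    """Words per minute over the time actually spent speaking.
    Excluding pause time from speaking_seconds matters here: under a
    faster input, interpreters pause more but keep this rate roughly
    constant, which is the pattern Gerver (1969) reports."""
    return word_count * 60.0 / speaking_seconds
```

Note that when pauses lengthen under a fast input, total words per elapsed minute drops even though `speech_rate_wpm` over speaking time stays in the 96–110 wpm band cited above; keeping the two denominators apart is what makes the reported finding interpretable.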
Other strategies in the production process have also been observed. Chernov (1979), comparing the number of syllables in English input and Russian output texts in simultaneous interpreting, found that expert interpreters’ output contained fewer syllables than that of novice interpreters. Chernov explained that when the source speech was presented at a faster rate than the interpreters’ own speech rate, or when the target language translation was longer than the source language text, expert interpreters adopted strategies involving either lexical or syntactic compression in order not to lag too far behind the source speech. Other researchers (e.g. Sunnari 1995) have observed similar strategies used by expert interpreters, such as the deletion of superfluous or redundant words, and the choice of shorter sentences or shorter words in their output.
There have not been many studies that directly investigate expert-novice differences in interpretation delivery. In part of her study investigating the output of expert interpreters and of second-year and first-year interpretation students, Liu (2001) asked raters to listen to the output of simultaneous interpreting without listening to the source speeches. The results showed that expert interpreters’ output was considered much more meaningful and coherent, and sounded smoother and more natural, than that of the other two groups. In another study, it was found that novice interpreters had a tendency to pick out segments in a speech and link them arbitrarily, resulting in an output lacking coherence (Sunnari 1995). Also, beginner interpreters’ output showed high variations in speed and disruptions in the speech/pause ratio, while expert interpreters strove to maintain a steady output rate (Kirchhoff 1976/2002: 115). These results may naturally lead to the assumption that expert interpreters have better verbal skills than novices. Some studies have attempted to compare the verbal fluency of experts and novices by adopting tools from psychology. One study used verbal fluency tasks to compare expert and novice interpreters and found no difference between the two groups (Moser-Mercer et al. 2000). In her study on the development of expertise in consecutive interpreting, Cai (2001) found that interpreters’ language ability (as measured by the number of compound sentences or incomplete sentences produced) was the least important factor among all the variables that could distinguish professional interpreters, trained students and untrained students. If expert interpreters’ better-sounding output is not a result of a higher degree of verbal fluency, this expert-novice difference may lie in the different approaches or strategies adopted in the production process or in the interaction of different processes during simultaneous interpreting.
How interpreters monitor their output during simultaneous interpreting may be one such factor.

2.3.1 Output monitoring

We monitor our speech output when we talk. We often discover our speech mistakes and correct them, but sometimes fail to do so, possibly due to external (e.g. noise) or internal distractions (e.g. forming a thought). Interpreters also monitor their speech output during simultaneous interpreting, as evidenced by the fact that interpreters sometimes correct themselves during simultaneous interpreting (e.g. Gerver 1969). What makes output monitoring different in simultaneous interpreting, in addition to the extra cognitive load of continuing to process the incoming message, is that two streams of speech are received at the same time (Isham 1994). This seems to be an independent factor that causes interference with comprehension in spoken-language simultaneous interpreting, as spoken-language interpreters showed inferior recall of the source speech compared to sign language interpreters (Isham 1994; Isham & Lane 1993). This phenomenon can be explained by what has been observed in working memory research: stimuli interfere more with information of the same type (verbal or spatial) than with information of a different type (Baddeley 1986). The “phonological interference” (Isham 1994: 204) caused by two streams of speech interfering with each other is something interpreters have to learn to overcome in the development of expertise. Research has shown that the ability to be less affected by this effect is indeed one of the characteristics that differentiate expert from novice interpreters. Delayed auditory feedback, a measure often used to judge speech fluency, was adopted in several studies to tap possible expert-novice differences in interpretation output delivery (Fabbro & Darò 1995; Moser-Mercer et al. 2000; Spiller-Bosatra & Darò 1992). Combined, these studies showed that more experienced interpreters were less affected by delayed auditory feedback than less experienced interpreters. These authors attributed the lack of interfering effects under the condition of delayed auditory feedback to the interpreters’ acquired ability to pay less attention to their own output and thus more attention to comprehending the input. Research has shown that perception of speech stimuli was worse when participants engaged in flawless whispered reading than when participants switched their attention away from whispered reading (Cowan, Lichty & Grove 1990, cited in Cowan 2000/2001). In simultaneous interpreting, however, interpreters cannot afford to switch their attention away from the output, as it has to be checked against the input content for accuracy. This comprehension-production interaction seems to be possible only when the interpretation output is checked against some semantic representation of the input rather than its speech representation, the maintenance of which would undoubtedly take up space meant for temporarily storing the incoming information.
Therefore, rather than conceptualizing the output monitoring process as a process of comparing the output with the input, it may be more appropriate to view it as a quick checking mechanism, the extent of which is determined by the interpreter’s processing capacity at that moment. This strategic adjustment apparently does not, and cannot, happen only in the production process. As mentioned earlier, expert interpreters seem to adopt more semantic-based processing strategies in the comprehension and translation processes that allow them to free up some of their mental resources. One further beneficial effect of the ability to resort to more semantic-based processing is the interpreter’s ability to anticipate upcoming information based on the context that is provided. There is evidence from brain research that words correctly predicted from the context evoked a smaller N400 amplitude (the N400 being a component of the event-related potential, the brain’s electrical response to stimuli) than words not predicted, suggesting less processing effort by the brain (Salmon & Pratt 2002). A better ability to predict upcoming information may free up some of the mental resources of the interpreter during the production process, as the parallel comprehension process of the next information segment becomes a process of checking and confirming what has been predicted, rather than the usual, more effortful comprehension process.

3. Expert-novice differences in sub-skills of interpreting and cognitive abilities

After reviewing the evidence of the differences between expert and novice interpreters, our next question is: how do we make a link between these observed performance differences and the sub-domain cognitive abilities that may underlie them? While the aforementioned studies mostly examined the task of simultaneous interpreting and its product (i.e. output), other studies have investigated hypothesized underlying cognitive abilities that are thought to be related to expertise in interpreting. These include the ability to listen and speak at the same time (e.g. Chincotta & Underwood 1998) and working memory span (e.g. Köpke & Nespoulous 2006; Liu et al. 2004; Padilla et al. 1995).

3.1 Concurrent articulation and articulatory suppression
It is often thought that having to listen and speak at the same time is what causes difficulties in simultaneous interpreting. However, research has suggested that simultaneous listening and speaking per se may not be a difficult task to master and that it, by itself, does not represent a differentiating ability between expert and novice interpreters. For example, it was shown that expert interpreters did not perform better than student interpreters in the task of shadowing (Moser-Mercer et al. 2000). In further analysis, it was observed that expert interpreters made more substitution-type errors than student interpreters. This result seems to suggest a possible difficulty that expert interpreters faced in changing their processing strategies from simultaneous interpreting to shadowing (see also Sabatini 2000/2001). Studies have consistently found a longer EVS for simultaneous interpreting than for shadowing (e.g. Anderson 1994; Gerver 1969; Treisman 1965). The extra time required for the task of simultaneous interpreting is due to the different processes needed for this task, i.e. comprehension, translation, and speech planning. These extra processes require extra effort. Also, it was shown that significantly fewer words were correctly interpreted than were correctly shadowed as the input rate increased (Gerver 1969). The different performances in simultaneous interpreting and shadowing suggest that different strategies may be involved in performing these two tasks.
Research has shown that verbal stimuli are maintained through subvocal articulatory rehearsal (i.e. subvocalization) before they are further processed, and that subvocalization plays a key role when the order of the stimuli is important for processing or remembering the information, such as in the case of stimuli with complicated syntactic structure (Baddeley 1986). When the articulatory rehearsal mechanism is suppressed (i.e. articulatory suppression), recall of stimuli is affected (Baddeley 1986). During simultaneous interpreting, interpreters are constantly engaged in articulatory suppression as they continue to utter their output in the target language. Evidence of the negative effect of articulatory suppression on recall is also documented in the interpreting literature. It has been shown that interpreters’ recall of input material after simultaneous interpreting is not as good as after listening (Chincotta & Underwood 1998; Darò & Fabbro 1994; Gerver 1974; Isham 1994; Isham & Lane 1993; Lambert 1989). In addition to the fact that interpreters have to devote their limited mental resources to different tasks at the same time, the suppressed subvocalization mechanism also seems to be a major factor. However, in a study that investigated the effect of concurrent listening and speaking by measuring the accuracy of the interpretation output instead of recall, Gerver (1972) found that interpreters’ performance did not suffer (with over 85% of the output correctly interpreted) even though an average of 75% of the total time was spent listening and speaking at the same time (cited in Gerver 1976). The discrepancy between the quality of interpretation output and post-interpretation recall may be explained by the fact that, as mentioned earlier, the interpretation output directly reflects the ongoing interaction of different processes in simultaneous interpreting, while recall shows the effect of further processing of the information after it has entered long-term memory.
Research comparing more and less experienced interpreters has shown that while all interpreters’ recall of the input material is affected by articulatory suppression, the recall of more experienced interpreters is not as seriously affected as that of those who are less experienced (e.g. Chincotta & Underwood 1998; Pinter 1969, cited in Gerver 1976). Chincotta and Underwood (1998) suggest that extensive practice at listening and speaking at the same time may have partially released expert interpreters from the effect of articulatory suppression, thus allowing their attention to be directed to the input materials more efficiently than that of novice interpreters, with minimal monitoring of the spoken output. The results and their implications are very similar to those of the studies involving the effect of delayed auditory feedback mentioned earlier. It is quite possible that expert interpreters learn to take a “short-cut” in processing information by bypassing the mechanism of maintaining the input stimuli in a verbatim manner. The evidence provided by some studies discussed earlier, such as Isham (1994) and Salmon and Pratt (2002), shows that comprehension takes place without adhering to sentence boundaries, and that expert interpreters seem to adopt a more semantic-based processing strategy, implying that this “short-cut” approach may be used.

3.2 Working memory
While it is undisputed that our mental resources are limited in their capacity, there have been several past attempts to investigate individual differences in mental capacities that may account for performance differences in various cognitive tasks. Although it is long-term memory on which interpreters rely to store all kinds of knowledge and information (language and world knowledge), it is the interpreters’ working memory that is crucial in carrying out all the processes during interpreting. Different studies in interpreting have attempted to investigate this aspect of the interpreters’ ability. Most of them borrowed concepts and tools from cognitive psychology to explain and measure working memory span and its efficiency. Despite using the expert-novice paradigm and similar concepts and measuring tools, these studies have produced inconsistent results. Some studies showed that working memory span increased with experience in interpreting (Bajo et al. 2000; Darò & Fabbro 1994; Padilla et al. 1995), while others revealed no difference in working memory span among interpreters with different experience levels (Köpke & Nespoulous 2006; Liu et al. 2004). The studies by Bajo et al., Darò and Fabbro, and Padilla et al. used digit span tests to measure interpreters’ short-term memory capacity. All three studies found that interpreters with a higher level of expertise had a larger digit span. However, Köpke and Nespoulous (2006) found no difference between expert and novice interpreters in simple span tasks. Simultaneous interpreting involves continuous online processing of information and allocation of working memory resources to different concurrent tasks. It is unlikely that a mere large storage capacity can account for the successful management of the task.
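The scoring logic behind the span tasks these studies borrowed can be stated compactly. The following is a deliberately simplified scoring rule of my own for illustration; actual protocols (e.g. Daneman & Carpenter 1980 for the reading span test) use graded set sizes, several trials per size, and stricter pass criteria:

```python
def span_score(trials):
    """trials: (set_size, recalled_all) pairs, where recalled_all is True
    if every to-be-remembered item in that set (e.g. each sentence-final
    word in a reading span set) was recalled correctly.
    Returns the largest set size the participant handled successfully."""
    passed = [size for size, ok in trials if ok]
    return max(passed) if passed else 0
```

The difference between a digit span test and a reading or listening span test lies not in this scoring rule but in what the participant must do while holding the items: simple span tasks measure storage alone, whereas complex span tasks interleave a processing task (reading or listening to sentences) with storage, which is why the two kinds of test can dissociate in the studies reviewed here.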
The results of some of these studies also contradict what has been observed in other research: a lack of correlation between short-term memory span and higher-order language comprehension performance (Gathercole & Baddeley 1993). While simple digit or word span tests measure the holding capacity of working memory, more elaborate span tests, such as the reading span test, measure working memory’s efficiency at simultaneously maintaining and processing information (Daneman & Carpenter 1980). The Bajo et al. and Padilla et al. studies also used the reading span test and likewise found that interpreters with higher expertise levels had a larger span. Liu (2001) used the listening span test, similar in basic concept to the reading span test but different in presentation mode, and found no difference between expert interpreters and the two groups of novice interpreters
(see also Liu et al. 2004). Köpke and Nespoulous (2006), also using the listening span test, found significant differences, but with the novice interpreters outperforming the experts in working memory span. The mixed results of these studies suggest that interpreters’ working memory span does not fully account for expertise in interpreting; other mechanisms may be at work, or may even play a more important role.

3.3 Attention
Attention is often conceptualized as part of the human memory system or mental resources (e.g. Baddeley 1986; Cowan 2000/2001). It has been proposed that, when performing multiple tasks, one’s attention either has to be shared by the tasks or has to be switched back and forth between them. While the manner in which attention can be shared is unclear, there seems to be more empirical evidence for the latter view (Cowan 2000/2001).

Cowan (2000/2001) proposed two possible explanations for the function of attention during simultaneous interpreting. One calls for rapid attention-switching between the listening task and the speaking task; the other involves well-practiced listening and speaking skills that require less attention. Despite the lack of direct evidence, there seems to be indirect evidence to support both hypotheses. One observation that seems to support the attention-switching view is that interpreters pause more often and longer when the input rate increases (Gerver 1969). More frequent and longer pauses amid the production of output appear to be a way for interpreters to direct their attention to the more difficult comprehension task; Cowan’s second hypothesis cannot explain this phenomenon.

As mentioned earlier, novice interpreters’ output has been observed to be more fragmented and incoherent than that of experts (e.g. Liu 2001; Sunnari 1995). It is possible that novice interpreters have not yet acquired the ability to switch their attention between listening and speaking at the right time: they may pay too much attention to monitoring their own output and fail to catch the incoming message. According to Cowan (2001), “separate items or chunks can be combined into a single, larger chunk only if they can be present in the focus of attention at the same time” (cited in Cowan 2000/2001: 136).
During simultaneous interpreting, if chunks of information – whether from the just-translated segment, the currently stored new information, or the still-present abstract representation of previous information – are to be coherently linked together, the interpreter’s attention apparently has to be brought to these elements in order for them to form a larger, meaningful chunk of information. If any of these segments of information falls out of the focus of attention, the result may be fragmented and incoherent.
Minhua Liu
In a study eliciting learners’ own feedback on the process of acquiring expertise in simultaneous interpreting, beginning students cited “concentration” as the main difficulty in acquiring the skill (Moser-Mercer 2000). These student interpreters might have been referring to their difficulty in devoting more attention to comprehending the source speech. It is generally agreed in the literature on attention that switching attention takes time and effort, and that it occurs only when higher-priority stimuli appear in the other channel (Solso 1998). This implies that, in addition to managing their attention efficiently, interpreters also have to be efficient at judging the overall situation so that their attention can switch effectively between tasks.

Cowan’s second hypothesis points to well-practiced listening and speaking skills as contributing factors to expertise in simultaneous interpreting. Some observed strategies adopted by expert interpreters in the processes of comprehension, translation, and production, as mentioned earlier, may provide evidence for this view. For example, faster access to lexical information, selectivity in processing information, the use of bigger chunks as translation units, and the ability to pay less attention to their own output are strategies that seem to allow expert interpreters to carry out the simultaneous interpreting task with more efficiency and less effort.

It is also possible that both mechanisms depicted in Cowan’s hypotheses are at work in the development of expertise in simultaneous interpreting: expertise may involve each component process becoming less effortful and the attention mechanism becoming more efficient. Studies in brain research provide some evidence of expertise-related efficiency both in the components of a task and in the attentional control of the overall task.
For example, functional magnetic resonance imaging (fMRI) scans of a skilled portrait artist and of a non-artist were made as each drew a series of faces. The level of activation appeared lower in the expert than in the novice, suggesting that a skilled artist may process facial information more efficiently and with less effort (Solso 2001). More broadly, most studies involving the practice of cognitive tasks have shown practice-related decreases in brain activation in the areas involved in working memory and attentional control (Hill & Schneider 2006). In a skill-acquisition experiment examining the effect of practice on brain activation during a motor tracking task, fMRI revealed both quantitative and qualitative changes in brain activity as the skill was acquired. There was a general reduction of brain activation, but the changes differed substantially across areas: activation in the areas involved in working memory and attentional control either nearly dropped out or was reduced substantially, while areas involved in motor and perceptual processing remained active (Hill & Schneider 2006). These results seem to imply that as expertise is
acquired, the efficiency gained is more pronounced in the functions of attention and working memory than in the component processes of the task.

The above observation supports the fundamental premises of Gile’s Effort Models (1995): the processes and operations of interpreting take effort4, and the development of expertise in interpreting may result not in automatic processes, i.e. significantly decreased processing capacity requirements for these processes and components, but in better management of mental resources. That is to say, while the comprehension effort and the production effort may become less capacity-demanding as expertise develops, it is the increasingly efficient capacity management mechanism that contributes the most to the advancement of the skill of interpreting.

In Gile’s model, the coordination effort assumes the role of managing and coordinating the three basic efforts of comprehension, memory and production (Gile 1995: 169). In this respect, the coordination effort in Gile’s model is not unlike the attention mechanism in Cowan’s theory, while the memory effort in the Effort Models is viewed as more of a storage mechanism where information is temporarily kept before further processing takes place. Again, the essence of the Effort Models implies that it is not an increased capacity (i.e. bigger storage) of the interpreters’ memory but efficient resource management that contributes to the advancement of interpreting expertise. In this sense, the coordination effort seems to play the most crucial role.

4. Defining expertise in interpreting

What do all these studies on interpreting expertise tell us? What characteristics stand out to allow us to distinguish an expert interpreter from a novice? From the analysis of studies on interpreting expertise, we observe that expert interpreters’ performance is characterized by fewer errors, faster responses, and less effort.
Expert interpreters provide more accurate and complete interpretations and seem to be quicker at accessing lexical information, all with less effort. However, what makes expert interpreters different from novices goes beyond accuracy, speed and effort: expert interpreters also demonstrate qualitative differences in their interpretation processes and output. They may not differ from novice interpreters in the syntactic processing involved in comprehension, but they do differ in their ability to use more flexible semantic processing. One such semantic processing strategy is their ability to perceive and distinguish the importance of the input material and to pay more attention to the overall conceptual framework of the source speech. It is quite possible that experts learn to bypass the subvocalization of the source speech and take a “short-cut” in processing information. In addition, expert interpreters’ ability to quickly understand the overall structure of the source speech may contribute to their success in predicting upcoming information, which, in turn, helps them engage in a less effortful comprehension process, freeing up their mental resources for other processes.

Expert interpreters’ more semantic-based processing strategy and their ability to perceive the importance and overall structure of the source speech during comprehension may also contribute to their ability to segment the input material into bigger chunks during the process of translation. Through extended practice in simultaneous interpreting between two specific languages, expert interpreters may also develop the ability to recognize patterns in the equivalence relation between the two languages, allowing a faster transition from the source language to the target language. While expert interpreters and novices do not differ in their general verbal fluency, the former have learned to pay less attention to their own output; they may monitor their output by adopting a quick checking mechanism against the semantic representation of the input.

From the analysis above, we observe that expert interpreters seem to have developed well-practiced strategies in each of the comprehension, translation, and production processes. However, these strategies are developed and practiced as a result of the interaction among the comprehension, translation and production processes that is specific to the needs of the task of simultaneous interpreting.

4. The three basic components of interpreting, therefore, are termed the comprehension effort, the memory effort, and the production effort in the Effort Models (Gile 1995).
What allows the comprehension, translation, and production processes to act in sync is the interpreters’ ability to manage their mental resources efficiently. In particular, expert interpreters seem to have developed the ability to manage their attention so that it can be switched between processes according to the specific demand at any given moment during the task of simultaneous interpreting. Effective attention-switching, in turn, calls for good judgment of the overall situation during interpreting, so as to allow the interpreter’s attention to switch effectively between tasks.

We are just beginning to piece together evidence to create a more coherent picture of expertise in interpreting. Despite the complexity of the interpreting task, we are beginning to see that current knowledge and new findings in other fields, such as cognitive science, are quite compatible with findings in Interpreting Studies and with a model specific to the task of translation/interpreting, the Effort Models. Our challenge now is to produce more well-designed empirical studies on interpreting, guided by research questions relevant to the current understanding of human cognition.
How do experts interpret?
References

Anderson, L. 1994. “Simultaneous interpretation: Contextual and translation aspects.” In Bridging the Gap: Empirical Research in Simultaneous Interpretation, S. Lambert and B. Moser-Mercer (eds), 101–120. Amsterdam/Philadelphia: John Benjamins.
Baddeley, A.D. 1986. Working Memory. New York: Oxford University Press.
Bajo, M.T., Padilla, F. and Padilla, P. 2000. “Comprehension processes in simultaneous interpreting.” In Translation in Context: Selected Papers from the EST Congress, Granada, 1998, A. Chesterman, N. Gallardo San Salvador and Y. Gambier (eds), 127–142. Amsterdam/Philadelphia: John Benjamins.
Barik, H.C. 1973. “Simultaneous interpretation: Temporal and quantitative data.” Language and Speech 16 (3): 237–270.
Barik, H.C. 1975. “Simultaneous interpretation: Qualitative and linguistic data.” Language and Speech 18 (2): 272–297.
Cai, X. 2001. “The process of consecutive interpreting and competence development: An empirical study of consecutive interpreting by Chinese-French interpreters and students of interpreting.” (In Chinese). Xiandai Waiyu [Modern Foreign Languages] 2001 (3): 276–284.
Chernov, G.V. 1979. “Semantic aspects of psycholinguistic research in simultaneous interpretation.” Language and Speech 22: 277–295.
Cheung, A. 2001. “Code mixing and simultaneous interpretation training.” The Interpreters’ Newsletter 11: 57–62.
Chi, M.T.H. 2006. “Two approaches to the study of experts’ characteristics.” In The Cambridge Handbook of Expertise and Expert Performance, K.A. Ericsson, N. Charness, P.J. Feltovich and R.R. Hoffman (eds), 21–30. New York: Cambridge University Press.
Chincotta, D. and Underwood, G. 1998. “Simultaneous interpreters and the effect of concurrent articulation on immediate memory: A bilingual digit span study.” Interpreting 3 (1): 1–20.
Cowan, N. 2000/2001. “Processing limits of selective attention and working memory: Potential implications for interpreting.” Interpreting 5: 117–146.
Cowan, N. 2001. “The magical number 4 in short-term memory: A reconsideration of mental storage capacity.” Behavioral and Brain Sciences 24: 87–185.
Cowan, N., Lichty, W. and Grove, T.R. 1990. “Properties of memory for unattended spoken syllables.” Journal of Experimental Psychology: Learning, Memory, & Cognition 16: 258–269.
Daneman, M. and Carpenter, P.A. 1980. “Individual differences in working memory and reading.” Journal of Verbal Learning and Verbal Behavior 19: 450–466.
Darò, V. and Fabbro, F. 1994. “Verbal memory during simultaneous interpretation: Effects of phonological interference.” Applied Linguistics 15: 365–381.
Davidson, P.M. 1992. “Segmentation of Japanese source language discourse in simultaneous interpretation.” The Interpreters’ Newsletter, Special Issue 1: 2–11.
Dillinger, M. 1990. “Comprehension during interpreting: What do interpreters know that bilinguals don’t?” The Interpreters’ Newsletter 3: 41–58.
Dillinger, M.L. 1989. Component Processes of Simultaneous Interpreting. Unpublished doctoral dissertation, McGill University, Montreal.
Fabbro, F. and Darò, V. 1995. “Delayed auditory feedback in polyglot simultaneous interpreters.” Brain and Language 48: 309–319.
Fabbro, F., Gran, B. and Gran, L. 1991. “Hemispheric specialization for semantic and syntactic components of languages in simultaneous interpreters.” Brain and Language 41: 1–42.
Foulke, E. and Sticht, T. 1969. “Review of research in the intelligibility and comprehension of accelerated speech.” Psychological Bulletin 72 (1): 50–62.
Gathercole, S.E. and Baddeley, A. 1993. Working Memory and Language. Hillsdale, NJ: Erlbaum.
Gerver, D. 1969. “The effects of source language presentation rate on the performance of simultaneous conference interpreters.” In Proceedings of the Second Louisville Conference on Rate and/or Frequency-Controlled Speech, E. Foulke (ed.), 162–184. Louisville, KY: Center for Rate-Controlled Recordings, University of Louisville.
Gerver, D. 1972. Simultaneous and Consecutive Interpretation and Human Information Processing (Social Science Research Council Research Report, HR 566/1). London: Social Science Research Council.
Gerver, D. 1974. “Simultaneous listening and speaking and retention of prose.” Quarterly Journal of Experimental Psychology 26: 337–341.
Gerver, D. 1975. “A psychological approach to simultaneous interpretation.” Meta 20 (2): 119–128.
Gerver, D. 1976. “Empirical studies of simultaneous interpretation: A review and a model.” In Translation: Applications and Research, R. Brislin (ed.), 165–207. New York: Gardner.
Gile, D. 1995. Basic Concepts and Models for Interpreter and Translator Training. Amsterdam/Philadelphia: John Benjamins.
Goldman-Eisler, F. 1972. “Segmentation of input in simultaneous translation.” Journal of Psycholinguistic Research 1 (2): 127–140.
Hill, N.M. and Schneider, W. 2006. “Brain changes in the development of expertise: Neuroanatomical and neurophysiological evidence about skill-based adaptations.” In The Cambridge Handbook of Expertise and Expert Performance, K.A. Ericsson, N. Charness, P.J. Feltovich and R.R. Hoffman (eds), 653–682. New York: Cambridge University Press.
Hodges, N.J., Starkes, J.L. and MacMahon, C. 2006. “Expert performance in sport: A cognitive perspective.” In The Cambridge Handbook of Expertise and Expert Performance, K.A. Ericsson, N. Charness, P.J. Feltovich and R.R. Hoffman (eds), 471–488. New York: Cambridge University Press.
Ilic, I. 1990. “Cerebral lateralization for linguistic functions in professional interpreters.” In Aspects of Applied and Experimental Research on Conference Interpretation, L. Gran and C. Taylor (eds), 101–110. Udine: Campanotto.
Isham, W.P. 1994. “Memory for sentence form after simultaneous interpretation: Evidence both for and against deverbalization.” In Bridging the Gap: Empirical Research in Simultaneous Interpretation, S. Lambert and B. Moser-Mercer (eds), 191–211. Amsterdam/Philadelphia: John Benjamins.
Isham, W.P. and Lane, H. 1993. “Simultaneous interpretation and the recall of source-language sentences.” Language and Cognitive Processes 8: 241–264.
Kirchhoff, H. 1976/2002. “Simultaneous interpreting: Interdependence of variables in the interpreting process, interpreting models and interpreting strategies.” In The Interpreting Studies Reader, F. Pöchhacker and M. Shlesinger (eds), 111–119. London: Routledge.
Köpke, B. and Nespoulous, J.-L. 2006. “Working memory performance in expert and novice interpreters.” Interpreting 8 (1): 1–23.
Lambert, S. 1989. “Information processing among conference interpreters: A test of the depth-of-processing hypothesis.” In The Theoretical and Practical Aspects of Teaching Conference Interpretation, L. Gran and J. Dodds (eds), 83–91. Udine: Campanotto.
Landauer, T.K. 1962. “Rate of implicit speech.” Perceptual & Motor Skills 15: 646.
Liu, M. 2001. Expertise in Simultaneous Interpreting: A Working Memory Analysis. Unpublished doctoral dissertation, University of Texas at Austin.
Liu, M., Schallert, D.L. and Carroll, P.J. 2004. “Working memory and expertise in simultaneous interpreting.” Interpreting 6 (1): 19–42.
McDonald, J. and Carpenter, P. 1981. “Simultaneous translation: Idiom interpretation and parsing heuristics.” Journal of Verbal Learning and Verbal Behavior 20: 231–247.
Moser-Mercer, B. 2000. “The rocky road to expertise in interpreting: Eliciting knowledge from learners.” In Translationswissenschaft: Festschrift für Mary Snell-Hornby zum 60. Geburtstag, M. Kadric, K. Kaindl and F. Pöchhacker (eds), 339–352. Tübingen: Stauffenburg.
Moser-Mercer, B., Frauenfelder, U.H., Casado, B. and Künzli, A. 2000. “Searching to define expertise in interpreting.” In Language Processing and Simultaneous Interpreting, B.E. Dimitrova and K. Hyltenstam (eds), 107–131. Amsterdam/Philadelphia: John Benjamins.
Oléron, P. and Nanpon, H. 1964. “Recherches sur la répétition orale de mots présentés auditivement.” L’Année Psychologique 64: 397–410.
Oléron, P. and Nanpon, H. 1965/2002. “Research into simultaneous translation.” In The Interpreting Studies Reader, F. Pöchhacker and M. Shlesinger (eds), 43–50. London: Routledge.
Padilla, P., Bajo, M.T., Canas, J.J. and Padilla, F. 1995. “Cognitive processes of memory in simultaneous interpretation.” In Topics in Interpreting Research, J. Tommola (ed.), 61–71. Turku: Centre for Translation and Interpreting, University of Turku.
Pinter, I. 1969. Der Einfluß der Übung und Konzentration auf simultanes Sprechen und Hören. Unpublished doctoral dissertation, University of Vienna.
Sabatini, E. 2000/2001. “Listening comprehension, shadowing and simultaneous interpretation of two ‘non-standard’ English speeches.” Interpreting 5 (1): 25–48.
Salmon, N. and Pratt, H. 2002. “A comparison of sentence- and discourse-level semantic processing: An ERP study.” Brain and Language 83: 367–383.
Shlesinger, M. 1994. “Intonation in the production and perception of simultaneous interpretation.” In Bridging the Gap: Empirical Research in Simultaneous Interpretation, S. Lambert and B. Moser-Mercer (eds), 225–236. Amsterdam/Philadelphia: John Benjamins.
Solso, R.L. 1998. Cognitive Psychology (5th ed.). Needham Heights, MA: Allyn and Bacon.
Solso, R.L. 2001. “Brain activities in a skilled versus a novice artist: An fMRI study.” Leonardo 34 (1): 31–34. Available at http://muse.jhu.edu/demo/leonardo/v034/34.1solso.html (Last accessed 26 January 2008).
Spiller-Bosatra, E. and Darò, V. 1992. “Delayed auditory feedback effects on simultaneous interpreters.” The Interpreters’ Newsletter 4: 8–14.
Sunnari, M. 1995. “Processing strategies in simultaneous interpreting: ‘Saying it all’ versus synthesis.” In Topics in Interpreting Research, J. Tommola (ed.), 109–119. Turku: Centre for Translation and Interpreting, University of Turku.
Treisman, A. 1965. “The effect of redundancy and familiarity on translating and repeating back a foreign and a native language.” British Journal of Psychology 56: 369–379.
The impact of non-native English on students’ interpreting performance

Ingrid Kurz
University of Vienna, Austria

English has become the world’s lingua franca and dominant conference language. Consequently, interpreters are increasingly confronted with non-native speakers whose pronunciation differs from Standard English. Non-native source texts which deviate from familiar acoustic-phonetic patterns make perception more difficult for the interpreter, who, according to Gile’s Effort Models, is forced to devote a considerable part of his processing capacity to the Listening and Analysis Effort. For students and novices in the interpreting profession, such situations are particularly difficult to cope with. The paper describes some of the major findings of a study carried out by Dominika Kodrnja (2001) as a diploma thesis under the author’s supervision, demonstrating the detrimental effect of a strong non-native accent on students’ interpreting performance.
Keywords: processing capacity, listening and analysis effort, resource management and allocation, non-native accent
The Scholar Brassay – who passed away recently – was a professor at Cluj University in Hungary. He was an extremely ambitious scientist, almost a genius. He had a threefold right to eccentricities – as a learned man, as a professor and as a genius. This right, however, he used with modesty, his only creed being that teachers were superfluous, a principle of which he was living proof. He studied French and English from books without any practical help. He painted a keyboard on his writing desk – black and white – and practiced playing the piano to perfection on this mute surface. It so happened that Professor Brassay represented Hungary at a conference of philologists in London. He gave a two-hour lecture in English. The audience listened in silent awe. When Brassay had finished, the Chancellor of Cambridge University, who was also Chairman of the Conference, approached Brassay, shook his hand,
thanked him and said, “The most astounding revelation of your lecture was, dearest Brassay, that Hungarian is obviously so similar to English.” (Gregor von Rezzori 1956, translated by Elvira Basel)

1. Introduction

Although science and technology have succeeded in eliminating most of the barriers to communication, one hurdle remains – the language barrier. International organizations would be unable to function without the assistance of professional conference interpreters, who ensure communication among speakers and participants with different linguistic and cultural backgrounds. Since interpreting is not a mechanical but a highly complex cognitive operation, interpreters first of all must understand the message they receive: “To interpret one must first understand” (Seleskovitch 1978: 11). Interpretation is communication and thus involves the analysis of the original message and its conversion into a form appropriate to the listener.

English has become the dominant conference language and is increasingly being used by speakers with mother tongues other than English whose pronunciation deviates from Standard English. International English is often one of the standard varieties (such as British or American English), but need not be so, as there are many local forms which reflect features of the speakers’ mother tongues.

The kind of English used in a meeting between a Nigerian and a Japanese businessman, for example, will probably be very different from that used by an Arab and a Ugandan – though very little study has been made of the nature and extent of this kind of variation (Crystal 1992: 122).

According to Daniel Gile’s Effort Models (Gile 1995), a higher processing capacity is required for comprehension when the speaker has a strong foreign accent. The present paper provides empirical evidence for this hypothesis.
Following a brief discussion of simultaneous interpreting (SI) as a complex cognitive process, with the focus on Gile’s Effort Models, it describes a pilot study that was carried out at the University of Vienna. Ten students interpreted a recorded English source text, one part of which was read by a native speaker and the other part by a non-native speaker. The impact of the non-native accent on information transfer was measured, and students’ subjectively experienced difficulties with the two speakers were elicited by means of a questionnaire and follow-up interviews. Following the discussion of the findings, some didactic implications are mentioned.
2. Simultaneous interpreting as a complex cognitive activity

Laypersons tend to be primarily impressed by interpreters’ ability to listen and speak at the same time. As has been shown in empirical studies (Pinter 1969; Kurz 1996), however, simultaneous listening and speaking is definitely not the main difficulty in SI. SI is a highly complex cognitive activity that involves intensive information processing. During an average working day, depending on the speed of the spoken language, conference interpreters process and utter approx. 20,000 words, while translators translate between 2,000 and 3,000 words per day (according to the UN standard of between six and eight pages). This means that in one day an interpreter processes ten times as many words as a translator (Seleskovitch 1978: 122).

2.1 Management of processing capacity in SI
From the very beginning, much of the literature on conference interpreting has focused on the mental processes involved in SI. Early models were developed by Gerver (1976) and Moser (1978). As Massaro and Shlesinger (1997) point out, […] the human brain does not seem to have been programmed to process a text of any level of complexity between any two languages at any speed. When it comes to the composite skill of simultaneous interpreting (SI), there are inherent limitations to the capacity of the interpreter – no matter how expert and versatile – to perform this online interlingual conversion. (Massaro and Shlesinger 1997: 13)
It is not surprising, therefore, that much of the literature on conference interpreting focuses on the limits of interpreters’ processing capacity and the compensatory strategies they adopt. Only in SI does attention constantly have to be divided between comprehension of the input and production of the output. The proportions of attentional capacity required for the comprehension, on the one hand, and production, on the other, must continuously fluctuate. […] This unremitting and fluctuating capacity sharing will heavily tax the limited processing resources of the simultaneous interpreter. Orchestrating the portions of attention that must be assigned to the various ongoing activities may be the vulnerable spot in simultaneous interpretation (de Groot 1997: 27).
Since interpreters’ processing capacity is limited, it is essential that they are able to manage their cognitive resources efficiently. Gile’s Effort Models draw on the concept of finite processing capacity. The basic model builds on, and is highly compatible with, models relating to processing capacity, attention, and the allocation and management of cognitive resources developed by cognitive psychologists (e.g. Broadbent 1958; Kahneman 1973; Norman
and Bobrow 1975; Wessells 1982; Anderson 1990; Baddeley 2000). A description of the models developed by interpreters and cognitive scientists would go beyond the scope of this paper; for an overview see Massaro and Shlesinger (1997). A full-length discussion of Gile’s well-known Effort Models would mean carrying coals to Newcastle, so I can be brief on this point.

The Effort Models were developed initially as a conceptual framework for interpretation students. SI can be modeled as a process consisting of three main components or Efforts – a Listening and Analysis Effort, a Speech Production Effort, and a Short-term Memory Effort – plus a Coordination Effort, which is required to coordinate the other three Efforts. Gile speaks of “efforts” in order to underscore the non-automatic nature of these components (Gile 1995, 1997). At each point in time, each Effort has specific processing capacity requirements. In order for interpretation to proceed smoothly, total capacity requirements should not exceed the total available capacity, and “capacity available for each Effort should be sufficient to complete the task the Effort is engaged in” (Gile 1995: 171).

The Listening and Analysis Effort seems to be the most crucial. It is defined as

[…] consisting of all comprehension-oriented operations, from the analysis of the sound waves carrying the source language speech which reaches the interpreter’s ears, through the identification of words, to the final decisions about the “meaning” of the utterance. (Gile 1995: 162)
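The operational conditions just described can be stated compactly. The sketch below paraphrases Gile’s (1995) formulation, with L, P, M and C standing for the Listening and Analysis, Production, Memory and Coordination Efforts, R for the processing capacity requirements of an Effort, A for the capacity available to it, and T for totals; the notation is illustrative rather than a verbatim quotation:

```latex
% SI modeled as the sum of four Efforts:
%   SI = L + P + M + C
% Smooth interpretation requires the total and the per-Effort
% capacity conditions to hold simultaneously:
\begin{align*}
\mathrm{TR} &= \mathrm{LR} + \mathrm{PR} + \mathrm{MR} + \mathrm{CR} \le \mathrm{TA},\\
\mathrm{LR} &\le \mathrm{LA}, \qquad \mathrm{PR} \le \mathrm{PA}, \qquad
\mathrm{MR} \le \mathrm{MA}, \qquad \mathrm{CR} \le \mathrm{CA}.
\end{align*}
```

On this reading, a strong foreign accent inflates LR, so one or more of the inequalities may be violated and errors or omissions follow.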
Good understanding of a segment of speech leaves more capacity for the other requirements. If, however, the Listening and Analysis Effort is taxed by elements containing phonetic, lexical or syntactic errors, the basis for all other Efforts is severely eroded or even destroyed. Even experienced interpreters find it difficult to deliver a coherent target text in these circumstances; for students and novices in the interpreting profession, such situations often prove impossible to cope with. Massaro and Shlesinger (1997: 43) point out that little has been done to subject Gile’s Models to systematic testing. The pilot study described in section 3 is an attempt to provide empirical evidence for the negative impact of excessive listening and analysis requirements on information transfer in SI.

2.2 English – the No. 1 conference language
English has become the lingua franca worldwide: English […] has come to be spoken worldwide by a large and ever-increasing number of people – 800,000,000 by a conservative estimate, 1,500,000,000 by a liberal estimate. Some 350,000,000 use the language as a mother tongue, chiefly in the USA (c. 220 million), the UK (c. 55 million), Canada (c. 3 million), Ireland
The impact of non-native English
(c. 3.5 million), and South Africa (c. 2 million). A further 400 million use it as a second language, in such countries as Ghana, Nigeria, Tanzania, Pakistan, and the Philippines. It has official status in over 60 countries. Estimates also suggest that at least 150 million people use English fluently as a foreign language, and three or four times this number with some degree of competence. In India, China, and most of the countries of Western Europe, the presence of English is noticeable or rapidly growing. English is also the language of international air traffic control, and the chief language of world publishing, science and technology, conferencing, and computer storage. (Crystal 1992: 121)
English is the language most widely used in conference settings, even by speakers born into a different language family. Participants in international meetings who are non-native speakers and whose elocution differs from Standard English are usually unaware of how difficult they may be to interpret. Consequently, interpreters working from English are increasingly confronted with a diversity of accents, which makes their demanding work even more difficult. It has become more and more customary for a speaker invited to an international conference to hold his speech in a foreign language. Quite inevitably, speakers who do not master the language used often do not manage to present their speech in the most appropriate way from the point of view of linguistic production, thus impeding or constraining comprehension and, therefore, communication. (Mazzetti 1999: 125)
2.3 Non-native accent – an additional burden for interpreters
The interpreter translates a source text into a target language for participants who do not understand the speaker’s language. He receives the source text via headphones in the booth (in SI). Participants in the conference room also listen to the interpretation via headphones. The speaker is at the starting point of the chain of communication. He is the presenter of the source text and largely determines processing conditions. His voice has a particular timbre, he chooses the structure of his speech, the style of his presentation, his delivery speed, etc. Last but not least, the speaker is also defined by his pronunciation, which may be familiar to the interpreter or not. The more the speaker’s pronunciation deviates from what the interpreter is used to, the more difficult the task for the interpreter in the first processing phase, i.e. comprehension. In the worst-case scenario, communication may be constrained or impeded from the start (Kurz 1996; Kodrnja 2001; Pöchhacker 2004). This problem has been frequently referred to in the literature. Acoustic problems, poor sound quality, or a speaker with a strong accent do indeed constitute an additional cognitive load for the interpreter. “Strong accents […] also increase
Ingrid Kurz
the Listening and Analysis Effort’s processing capacity requirements” (Gile 1995: 173), and

[b]ad pronunciation by a non-native speaker forces the interpreter to devote much processing capacity to the Listening and Analysis Effort, and therefore slows down production. This in turn overloads the Memory Effort and results in loss of information from memory. Alternatively, memory is not overloaded, but production becomes very difficult because the interpreter has to accelerate in order to catch up with the speaker, resulting in deterioration of output quality or decreased availability of processing capacity for the Listening and Analysis Effort and the loss of a later segment. (Gile 1995: 176)
Stress studies among interpreters (Cooper et al. 1982: 104; AIIC 2002: 25) have shown that a high percentage of conference interpreters consider an unfamiliar foreign accent of a speaker to be a stress factor. In the AIIC Workload Study (2002), a representative sample of professional conference interpreters rated unfamiliar accent as the fourth most stressful factor (62%), behind speed (75%), speakers reading from notes (72%) and inadequate technical equipment (71%). 71% of the subjects confirmed that difficult accent was a type of stress “very frequently” encountered in professional assignments. There are only a few empirical studies, however, that have investigated the impact of a strong non-native accent on interpreters’ performance. First attempts in this regard were made at the University of Vienna between 2001 and 2002. Basel (2002) carried out a case study to determine whether interpreters who are familiar with the mother tongue of a non-native speaker who uses English for his presentation achieve better information transfer than interpreters who are unfamiliar with the speaker’s mother tongue. A similar study was carried out by Mazzetti (1999) in Trieste. The findings of both studies seem to indicate that a non-native speaker of English is less difficult to interpret for an interpreter who is familiar with the speaker’s mother tongue. An MA thesis written by Dominika Kodrnja (2001) under the author’s supervision at the University of Vienna compared students’ performance when interpreting an identical text presented by a native and a non-native speaker. A brief description of this study will be given in section 3 below.

3. Pilot study

3.1 Material
The material consisted of audio recordings of two versions of an English source text (kindly provided by Miriam Shlesinger), one of them read by a native speaker and
the other one by a non-native speaker with a strong foreign accent. The source text consisted of 591 words. The native speaker took 4 minutes 46 seconds to read the text, the non-native speaker 4 minutes 43 seconds; the difference in the delivery rate of the two speakers was thus negligible. Unfortunately, we have no information regarding the non-native speaker’s origin or mother tongue; Miriam Shlesinger could not enlighten us on that point either. Our impression was that the speaker’s mother tongue might be Arabic, but there is no confirmation of this. His presentation suffers from mispronunciations as well as major prosodic flaws (rhythm, intonation).

3.2 Subjects
The subjects were ten students from the Institute of Translation and Interpreting of the University of Vienna, all of whom had at least two semesters’ experience with simultaneous interpreting and attended an advanced English–German simultaneous interpreting class run by the author and supervisor of the study.

3.3 Purpose
The study set out to measure the impact of the presentation of an English source text by a non-native speaker with a strong accent on students’ performance. It was expected that, owing to the higher processing requirements for listening and analysis, there would be a higher information loss in the interpretations of the discourse presented by the non-native speaker. Another objective was to elicit students’ subjectively perceived difficulties. Unlike in Mazzetti’s study (1999), the two recordings differed only in pronunciation. Thus, comprehension difficulties that normally arise with non-native speakers as a result of syntactic errors and non-idiomatic usage could be excluded. Nevertheless, it was expected that students would probably have considerable difficulties, because they are not yet experts and have to cope with a number of problems even under less difficult conditions (Moser-Mercer 2000; Moser-Mercer et al. 2000).

3.4 Method
The ten subjects were divided into two groups of five persons each (Group A and Group B). An attempt was made to form two homogeneous groups based on the author’s assessment of the students. Each group was presented with an audio recording of both speakers. The source text was divided into two parts. Group A interpreted the native speaker part first and the non-native speaker part second, whereas Group B interpreted the non-native speaker first and the native speaker second.
Text read by:

           Group A              Group B
Part 1     native speaker       non-native speaker
Part 2     non-native speaker   native speaker
This experimental design permitted both an inter-group comparison and an intra-group comparison. Prior to the experiment, subjects were informed that they would be asked to interpret a five-minute speech on the topic “development of language” from English into German. They were told that there would be a change of speakers halfway through the text, but did not receive any additional information regarding the purpose of the study. The experiment was carried out in a classroom of the Institute of Translation and Interpreting, which is equipped with booths corresponding to ISO Standard 2603. Students’ interpretations were recorded and transcribed. Upon completion of the interpretation task, all subjects received a questionnaire asking them to assess the two speakers in terms of terminology, speed, accent, and pronunciation. In follow-up interviews subjects were asked about their subjective impressions and how they handled problems and difficulties during the interpretation of the non-native speaker.

3.5 Evaluation of students’ interpreting performance
For the purpose of evaluating students’ interpreting performance, the source text was divided into propositions. According to Alexieva (1999: 45), in order for a text to be comprehensible, propositions must form a coherent entity. In line with this, the text was largely segmented into individual sentences; some of the longer sentences were divided into sub-propositions. This resulted in a total of 29 propositions, the first part of the text comprising 14 and the second comprising 15 propositions. The transcribed interpretations were compared with the source-text propositions and evaluated on the basis of the criteria established by Moser-Mercer et al. (1998): faux-sens, contre-sens, omission, nuance. In order to determine whether the non-native speaker’s accent affected the information loss in students’ interpretations, an inter-group comparison was carried out first, i.e. a comparison of the performance of Groups A and B when interpreting the same source text, but different speakers.

3.5.1 Results of inter-group comparisons

A comparison was made of the performance of Groups A and B with regard to the individual propositions. Group A interpreted the native speaker in propositions
1–14, and Group B in propositions 15–29; Group A interpreted the non-native speaker in propositions 15–29, and Group B in propositions 1–14. In part 1, a total of 70 propositions were interpreted by each group (5 subjects x 14 propositions). A comparison of the interpretations of Groups A and B of propositions 1–14 shows that students in Group A, who interpreted the native speaker, succeeded in rendering 64.3% of the propositions correctly, which corresponds to an information loss of approx. 36%. By way of comparison, students who had to interpret the non-native speaker first (Group B) managed to interpret only 30.7% of the propositions correctly. Their performance was thus clearly inferior: almost 70% of the information contained in the source text was lost – an information loss about 33 percentage points higher than in Group A. In part 2, each group interpreted a total of 75 propositions (5 subjects x 15 propositions). With regard to propositions 15–29, students in Group A, who interpreted the non-native speaker, managed to interpret only 42.1% of the propositions correctly – a loss of information of approx. 58%. This is about 22 percentage points more than for students in Group B, who lost only about 36% of the information contained in the source text, rendering 63.7% of the propositions correctly. The inter-group performance differences in the interpretations of the native speaker were negligible: 64.3% correctly rendered propositions (Group A) vs. 63.7% (Group B).

3.5.2 Results of intra-group comparisons

The intra-group comparisons show the performance differences within each group, depending on the source-text speaker. The number of correctly interpreted propositions is shown below.

Group A   native speaker       45 propositions (64.3%)
          non-native speaker   31 propositions (42.1%)
Group B   native speaker       47.75 propositions (63.7%)
          non-native speaker   21.5 propositions (30.7%)
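The percentages above can be checked against the raw counts (70 propositions per group in part 1, 75 in part 2; the fractional counts presumably reflect partially credited propositions). A minimal sketch in Python – note that Group A’s non-native figure (42.1%) does not follow exactly from the count of 31 shown, so it is left out of the check:

```python
def pct_correct(correct: float, total: int) -> float:
    """Share of correctly rendered propositions, in percent (one decimal)."""
    return round(100 * correct / total, 1)

def info_loss(correct: float, total: int) -> float:
    """Information loss: the share of propositions NOT rendered correctly."""
    return round(100 - 100 * correct / total, 1)

# Part 1: 70 propositions per group (5 subjects x 14 propositions)
assert pct_correct(45, 70) == 64.3     # Group A, native speaker
assert pct_correct(21.5, 70) == 30.7   # Group B, non-native speaker

# Part 2: 75 propositions per group (5 subjects x 15 propositions)
assert pct_correct(47.75, 75) == 63.7  # Group B, native speaker

# Information-loss gap between the groups in part 1:
gap = round(info_loss(21.5, 70) - info_loss(45, 70), 1)
print(gap)  # 33.6 -> the roughly 33-point difference reported in the text
```
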
These figures show that both groups performed far better when interpreting the native speaker (64.3% and 63.7% correctly rendered propositions) than when interpreting the non-native speaker (42.1% and 30.7%). It is interesting to note, however, that students in Group A, who interpreted the native speaker first and the non-native speaker second, showed a smaller performance drop on the non-native speaker (about 22 percentage points) than students in Group B, who interpreted the non-native speaker first (33 percentage points). The higher information loss in Group B’s interpretations of the non-native speaker may be
due to the order of presentation. Subjects in Group B were confronted with the non-native speaker first, which means that over and above the difficulties they had with the speaker’s accent, they had no opportunity to “warm up”, i.e. to get acquainted with the topic. (However, it should also be borne in mind that the two groups may not have been completely homogeneous and that this may have influenced the difference in their performance results.)

3.5.3 Analysis of propositions in terms of number of errors

The study also looked at how many propositions were misinterpreted by a large majority of the subjects (defined as 4–5 incorrect renderings in a group of five subjects). In the interpretations of the native speaker only 3 propositions were rendered incorrectly by 4–5 out of five subjects, as compared with a total of 12 propositions in the interpretations of the non-native speaker.

3.6 Evaluation of the questionnaire
In order to get some feedback on students’ subjective impressions, a questionnaire in two copies was distributed upon completion of the interpretation task: one copy referred to the native speaker, the other to the non-native speaker. Subjects were asked to assess the difficulties they had with terminology, delivery rate and accent on a 1–5 scale (1 = easy, 2 = fairly easy, 3 = manageable, 4 = difficult, 5 = very difficult).

3.6.1 Subjective assessment of terminology

Regarding terminology, the part of the source text read by the native speaker was considered “fairly easy” by both groups, whereas the part read by the non-native speaker was considered “manageable”. Clearly, the non-native speaker’s accent affected students’ subjective perception of the difficulty of the text.

3.6.2 Subjective assessment of delivery speed

The delivery speed of the native speaker was considered “fairly easy” to cope with, while the speed of the non-native speaker received the score “manageable”. Measurements showed that in the first part of the text, the average speed of the non-native speaker was 3 syllables per minute higher than that of the native speaker.

Part 1:
Group A – native speaker at 217 syllables per minute; Group B – non-native speaker at 220 syllables per minute.
In part 2, the native speaker’s average delivery rate was 9 syllables per minute higher than that of the non-native speaker.
Part 2:
Group A – non-native speaker at 194 syllables per minute; Group B – native speaker at 203 syllables per minute.
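As a quick consistency check on the figures reported in sections 3.1 and 3.6.2 (text length, reading times and syllable rates as given above), the rate differences can be verified numerically – a sketch for illustration, not part of the original study:

```python
# Source-text length and reading times (section 3.1)
words = 591
native_secs = 4 * 60 + 46       # 4 min 46 s
non_native_secs = 4 * 60 + 43   # 4 min 43 s

# Words per minute: the overall delivery rates are nearly identical
wpm_native = round(words * 60 / native_secs, 1)
wpm_non_native = round(words * 60 / non_native_secs, 1)
print(wpm_native, wpm_non_native)  # 124.0 125.3

# Syllable rates per part (syllables per minute, section 3.6.2)
part1 = {"native": 217, "non-native": 220}
part2 = {"native": 203, "non-native": 194}
assert part1["non-native"] - part1["native"] == 3  # non-native 3 syll/min faster in part 1
assert part2["native"] - part2["non-native"] == 9  # native 9 syll/min faster in part 2
```
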
The non-native speaker’s delivery speed was thus considered more difficult (“manageable”) than that of the native speaker (“fairly easy” to cope with) not only in the first part of the text, where it was on average 3 syllables per minute higher, but also in the second part, where it was on average 9 syllables per minute lower than that of the native speaker. This shows that the non-native accent not only had an immediate, measurable impact on students’ performance (higher loss of information) but also gave rise to the subjective impression of a higher delivery speed.

3.6.3 Subjective assessment of pronunciation

When asked whether the speakers’ pronunciation was considered difficult to cope with, all subjects answered “no” for the native speaker and “yes” for the non-native speaker. Neither Group A nor Group B had particular problems with the speech rhythm of the native speaker, whereas four subjects in Group A and three subjects in Group B found the speech rhythm of the non-native speaker very difficult. All students felt that the strong accent of the non-native speaker greatly added to the difficulty of the interpreting task.

3.7 Evaluation of follow-up interviews
Following the interpretation of the source text and completion of the questionnaire, all subjects were asked to answer a number of questions orally in follow-up interviews: What was your first response to the speaker with the strong accent? How did you cope with the difficulties? What was your impression of the non-native speaker as compared with the native speaker? Did you develop a coping strategy for interpreting the non-native speaker? The answers again confirmed the hypothesis that most subjects would be irritated in one way or another by the strong non-native accent. Six of the ten subjects stated that they were shocked by the pronunciation of the non-native speaker, and two subjects initially wondered whether the language used by the speaker was actually English. Comments regarding the comprehensibility of the non-native speaker included statements such as: “I simply could not understand him”, “I completely failed to understand him”, “I did not really understand him”, “I only got part of it”. When asked how they had tried to cope with the difficulties and whether they had developed a strategy for the interpretation of the non-native speaker, three of the ten subjects stated that, at the beginning, they had no strategy at all because they were “shell-shocked”: they failed to understand the speaker and were unable to concentrate. However, the interviews revealed that ultimately these subjects, too, had
made a number of problem-solving attempts, e.g. turning up the volume, trying to catch individual words and make sense of them, keeping a shorter/longer time lag, trying to summarize and condense. All subjects, in fact, tried to apply one kind of coping strategy or another. The hypothesis that the subjective impression of the majority of subjects would be that the native speaker was much easier to interpret could also be confirmed. Seven of the ten subjects found that the native speaker’s speech, which – apart from pronunciation – was identical with that of the non-native speaker, was characterized by fairly simple terminology, lower delivery speed, and a logical structure (cf. Kodrnja 2001: 121–122).

4. Discussion

The pilot study described in this paper was triggered by the numerous references in the literature stating that a strong, unusual or unfamiliar accent constitutes a complicating factor for simultaneous interpreting. In practice, of course, non-native speech involves not only phonetic but also syntactic and idiomatic deviations.

[W]hat interpreters loosely refer to as ‘foreign accent’ goes far beyond the nonstandard pronunciation of individual phonemes and extends to deviations at supra-segmental as well as lexical and syntactic levels. (Pöchhacker 2004: 129)
According to Gile’s Effort Models, even experienced interpreters need to devote additional processing capacity to speech comprehension – or the listening and analysis phase – under these conditions. This may lead to capacity shortages in the other phases and an impaired interpreting performance. It was to be expected that students, who are much less skilled and experienced than experts, would have serious problems with the interpretation of a source text that differed from its “twin” on one criterion only – pronunciation/accent. As was expected, there was a markedly higher loss of information in the interpretations of the non-native speaker. Students obviously had great difficulties in managing and allocating their cognitive resources. Too much mental capacity was needed for comprehension (listening and analysis), so that the capacities required for speech processing and speech production were insufficient. No satisfactory interpreting performance was possible under these circumstances. These findings have a number of interesting didactic implications. Given the fact that English has become the dominant conference language, it is mandatory that student interpreters be given the opportunity to interpret speakers with a wide range of non-native accents in the course of their training (see also Kodrnja 2001:
124). An extremely valuable didactic aid in this regard is the SCIC speech repository, a collection of authentic speeches by native and non-native speakers. One of the aspects interpreter training programs have to focus on is listening comprehension, in order to prepare student interpreters to provide an acceptable performance even under adverse conditions (Kalina 1998: 268). Interpretation schools should also build up audio and video libraries consisting of original recordings of speeches given at multilingual/international conferences. Special attention should be paid to the handling of deficient source texts. Likewise, students should be encouraged to develop and practice adequate strategies to cope with these problems.

References

AIIC. 2002. Workload Study. Geneva: AIIC.
Alexieva, B. 1999. “Understanding the source language text in simultaneous interpreting.” The Interpreters’ Newsletter 9: 45–59.
Anderson, J.R. 1990. Cognitive Psychology and Its Implications (3rd ed.). New York: Freeman.
Baddeley, A. 2000. “Working memory and language processing.” In Language Processing and Simultaneous Interpreting, B. Englund Dimitrova and K. Hyltenstam (eds), 1–16. Amsterdam/Philadelphia: Benjamins.
Basel, E. 2002. English as Lingua Franca: Non-Native Elocution in International Communication. A Case Study of Information Transfer in Simultaneous Interpretation. Unpublished PhD thesis, University of Vienna.
Broadbent, D.E. 1958. Perception and Communication. New York: Pergamon Press.
Cooper, G., Davies, R. and Tung, R.L. 1982. “Interpreting stress: Sources of job stress among conference interpreters.” Multilingua 1 (2): 97–107.
Crystal, D. 1992. An Encyclopedic Dictionary of Language and Languages. Oxford: Blackwell.
Gerver, D. 1976. “Empirical studies of simultaneous interpretation: A review and a model.” In Translation: Applications and Research, R.W. Brislin (ed.), 165–207. New York: Gardner Press.
Gile, D. 1995. Basic Concepts and Models for Interpreter and Translator Training. Amsterdam/Philadelphia: Benjamins.
Gile, D. 1997. “Conference interpreting as a cognitive management problem.” In The Interpreting Studies Reader (2002), F. Pöchhacker and M. Shlesinger (eds), 163–176. London/New York: Routledge.
de Groot, A.M.B. 1997. “The cognitive study of translation and interpretation: Three approaches.” In Cognitive Processes in Translating and Interpreting, H.J. Danks, G.M. Shreve, S.B. Fountain and M.K. McBeath (eds), 25–56. Thousand Oaks/London/New Delhi: Sage Publications.
Kahneman, D. 1973. Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall.
Kalina, S. 1998. Strategische Prozesse beim Dolmetschen. Tübingen: Gunter Narr.
Kodrnja, D. 2001. Akzent und Dolmetschen. Informationsverlust beim Dolmetschen eines non-native speaker’s. Unpublished MA thesis, University of Vienna.
Kurz, I. 1996. Simultandolmetschen als Gegenstand der interdisziplinären Forschung. Vienna: WUV-Universitätsverlag.
Massaro, D. 1975. Experimental Psychology and Information Processing. Chicago: Rand McNally.
Massaro, D. and Shlesinger, M. 1997. “Information processing and a computational approach to the study of simultaneous interpretation.” Interpreting 2 (1/2): 13–53.
Mazzetti, A. 1999. “The influence of segmental and prosodic deviations on source-text comprehension in simultaneous interpretation.” The Interpreters’ Newsletter 9: 125–147.
Moser-Mercer, B. 1978. “Simultaneous interpretation: A hypothetical model and its practical application.” In Language Interpretation and Communication. Proceedings of the NATO Symposium, Venice, Italy, September 26 – October 1, 1977, D. Gerver and H.W. Sinaiko (eds), 353–368. New York/London: Plenum Press.
Moser-Mercer, B. 2000. “The rocky road to expertise in interpreting.” In Translationswissenschaft. Festschrift für Mary Snell-Hornby zum 60. Geburtstag, M. Kadric, K. Kaindl and F. Pöchhacker (eds), 339–352. Tübingen: Stauffenburg.
Moser-Mercer, B., Frauenfelder, U.H., Cassado, B. and Künzli, A. 2000. “Searching to define expertise in interpreting.” In Language Processing and Simultaneous Interpreting: Interdisciplinary Perspectives, B. Englund Dimitrova and K. Hyltenstam (eds), 107–131. Amsterdam/Philadelphia: Benjamins.
Norman, D.A. and Bobrow, D.G. 1975. “On data-limited and resource-limited processes.” Cognitive Psychology 7: 44–64.
Pinter, I. 1969. Der Einfluss der Übung und Konzentration auf simultanes Sprechen und Hören. Unpublished PhD thesis, University of Vienna.
Pöchhacker, F. 2004. Introducing Interpreting Studies. London/New York: Routledge.
Rezzori, G. von. 1956. Roda Rodas Geschichten. Hamburg: Rowohlt.
Seleskovitch, D. 1968. L’interprète dans les conférences internationales: problèmes de langage et de communication. Paris: Minard.
Wessells, M.G. 1982. Cognitive Psychology. New York: Harper & Row.
Evaluación de la calidad en interpretación simultánea. Contrastes de exposición e inferencias emocionales. Evaluación de la evaluación*
[Quality assessment in simultaneous interpreting: exposure contrasts and emotional inferences. Evaluating the evaluation]

Ángela Collados Aís
Universidad de Granada, Spain

This article describes an experimental study which measured the extent to which the monotonous intonation of an interpreter can cause receivers to negatively evaluate the resulting interpretation, particularly when compared to the non-monotonous interpretation of another interpreter. In addition, the study analyses the emotional inferences made by the receivers of the monotonous intonation, and its effect on their assessment of the interpretation. The article also describes the results of various discussion groups and focused interviews which formed part of the study. This type of qualitative research methodology was found to provide a better explanation of certain aspects of the complex assessment process and to offer a more accurate insight into the way receivers evaluate simultaneous interpretations.
Keywords: Simultaneous interpretation, quality assessment, experimental study, discussion groups, focused interviews
* This research, like many others, owes a great deal to the dedication, respect and encouragement that a figure of Daniel Gile’s stature has always offered to new researchers, many of them situated outside the dominant research core, geographically and linguistically as well. He has achieved, as no one before him and as few will after, a widening of the horizontal and vertical boundaries of the interpreting research community.
1. General overview¹

We present the results of a study designed with two basic initial objectives: to analyse the extent to which monotonous intonation on the part of the interpreter² may increase its negative effects on users’ evaluation of the interpretation when contrasted with non-monotonous intonation, and to analyse the emotional inferences users would draw from the monotony of the interpreter’s voice and the role those inferences play in that evaluation. Exploring the effects of monotonous intonation through the contrast a user may perceive when a simultaneous interpreter with “monotonous” intonation shares the booth with an interpreter with “non-monotonous” intonation could bring the experiment closer to the interpreter’s most common professional reality and, therefore, to the conditions under which a user actually carries out such an evaluation. Analysing the emotional inferences made by the user, in turn, would make it possible to study the pathways of evaluation more closely – specifically, the weight of emotional channels of influence as against the cognitive channels which, at least in principle, may be more present in the domain of quality expectations than in that of actual evaluation, which would explain certain user behaviours. In this respect, studies from Psychology (cf. Scherer 1995) and recent advances in Neuroscience (cf. Damasio 2006) appear to provide explanations that support this hypothesis. Tangentially, the study also aims to build on earlier experimental work on quality assessment and the impact of non-verbal parameters, specifically monotonous intonation (Collados Aís 1998, 2001, 2007).

The present study thus replicates earlier experiments while adding two new strands: on the one hand, the above-mentioned strand of intonation contrasts between interpreters and, on the other, a more qualitative approach which, through more open questionnaires, captures users’ positions on the interpretations evaluated. Although the results of the experiment, initially conceived as a pilot, seem to strengthen the evidence on the negative weight of monotonous intonation in evaluation and on the emotional inferences made, they also raise new questions. Hence, after analysing these results, new hypotheses were put forward that made it necessary to turn to a different methodological approach, one that would help us understand users’ overall behaviour when they evaluate, rather than merely pin that behaviour down. This new, qualitative route to users’ evaluation patterns consists of discussion groups and focused interviews. Discussion groups are recognized as a research technique that is particularly useful for studying behaviour and decision-making processes, and they have been widely used in market research (Valles 1997: 284). Combining them with other techniques makes it possible to corroborate results or to probe specific relationships suggested by more quantitative results (ibidem: 299), although their intrinsic potential as a “self-sufficient or self-contained method of social research” has also been stressed (Morgan 1988 apud Valles 1997). Running a first discussion group and analysing the data prompted a further redefinition of the methodological design, and we therefore introduced an additional research technique: the focused interview, aimed at eliciting the cognitive and emotional sources (Valles 1997: 184–185) of the evaluative behaviour of receivers of a simultaneous interpretation (SI). Although there is a certain similarity to the technique of Thinking-Aloud Protocols, applied both to translation (cf. Hurtado Albir 2001: 183–185; Toury 2004: 297–303) and to interpreting (cf. Kalina 1998: 151–159, 205–209), in this case the discussion groups and focused interviews centre on the subjects’ spontaneous reactions and their subsequent evaluation of their own evaluation. In short, the final objectives of the present study are to delimit the evaluative framework within which users operate and to explore other, less studied elements that more quantitative research techniques do not usually take into account.

1. This research was carried out within the framework of research projects HUM2007-62434/FILO (Ministerio de Ciencia y Tecnología, Spain) and P07-HUM-02730 (Junta de Andalucía, Spain).
2. Gendered forms are retained for the interpreter, as an exception, for ease of writing and reading.
The use of an experimental methodology brings obvious advantages when it comes to delimiting and controlling variables; however, its application, quite apart from other risks, imposes limits that make it necessary to approach the object of study through complementary routes which, via methodological triangulation (cf. Gile 2005: 165), can offer a more complete picture of that object.

2. State of the art

2.1 Studies from the field of Interpreting
Ángela Collados Aís

The interpreter's voice, as a crucial element in the evaluation of interpreting quality, has been regarded intuitively as essential since early on, whether because a pleasant voice almost always confers several points of advantage (Herbert 1970) or because it can help convince the listener of the quality of the idea being formulated (Gile 1991). Baigorri's words on the beginnings of interpreting, transferable to the present day, strike us as illustrative: "the way users perceived the quality of the interpreters' performance depended at every moment on their subjective criteria. […] the only ones we have for evaluating quality" (Baigorri 2000: 57). Presentation, and with it voice and intonation, has also been a concern of the professional associations. An early study carried out by AIIC (1986 apud Mazzetti 2005: 127) reported on the weight of intonation and body language in relation to comprehension in oral communication. Possibly because the focus of subsequent empirical research on interpreting quality lay on the quality expectations of users or interpreters (Bühler 1986; Kurz 1989, 1993; Marrone 1993, etc.), and because the results showed the absolute pre-eminence of content transmission over form, the issue of non-verbal aspects was sidelined in research.3 In 1990 expectations were contrasted for the first time with a real interpreting situation (Gile 1990), and in the mid-nineties the results of an experiment (Gile 1995) warned of users' difficulties in verifying their own quality expectations, specifically for the parameter they would consider essential: the correct transmission of information. In the late nineties another experiment (Collados Aís 1998) confirmed users' impaired evaluative capacity and a possible shift of priorities towards other, more "verifiable" parameters. In 1999 Gile again found that this impaired evaluative capacity held even under optimal evaluation conditions, and in 2007 earlier results were confirmed once more (Fernández Sánchez et al. 2007).
Since then, the various experimental studies on the influence of non-verbal elements on users' evaluation appear to have yielded clear evidence on certain points, such as the divergence between expectations and evaluation and the marked weight of non-verbal elements in the latter (cf. Pradas Macías 2003; Garzone 2003; Cheung 2003; Iglesias Fernández 2007; Stévaux 2007, etc.), although open questions remain, such as the failure of one of the most recent studies (Collados Aís 2007) to confirm the negative effects of monotonous intonation on users' evaluation. That recent research also raises further questions arising from the ratings that the intonation parameter receives in the control SIs, or in SIs manipulated for ten other parameters (cf. Collados Aís, Pradas Macías, Stévaux and García Becerra 2007).
3. Except in certain studies in which, owing to the population analysed, greater importance was attached to certain non-verbal elements, although the ranking of the basic results remained stable (Kurz and Pöchhacker 1995; Russo 2005).
If we turn to research that looks specifically at the interpreter's intonation, independently of its degree of monotony and of users' expectations and quality evaluation, we find evidence of peculiar features of interpreters' intonation and, more generally, of their vocal delivery: chiefly a marked increase in non-grammatical pauses in "unusual" positions, and a specific prosody that stresses elements which would not be stressed in spontaneous or read speech (Shlesinger 1994; Williams 1995). Indeed, Shlesinger (1994) describes this intonation as sui generis. Ahrens (2005) provides an exhaustive study of intonation in SI based on a large corpus. Her conclusions likewise point to specific features of the interpreter's discourse style (ibidem: 230), conditioned by the interpreting process itself and the specific communicative conditions under which it takes place. Thus the interpreter tends towards greater prosodic segmentation of the text, for strategic reasons of content segmentation (ibidem: 227). Nafá Waasaf (2005), however, in an extensive investigation devoted entirely to intonation in interpreting, based on a corpus drawn from the European Parliament, shows interpreters using appropriate intonational and rhetorical strategies during interpretation, although she qualifies this by noting that this communicatively acceptable intonation may be disturbed at certain moments by the cognitive demands of the interpreting process (ibidem: 678); unlike the preceding studies (cf. Shlesinger 1994; Williams 1995; Ahrens 2005), she thus finds the aforementioned sui generis intonation to be a specific exception in SI rather than the rule.

2.2 Studies from other disciplines
Studies from other disciplines, such as Psychology, offer evidence that vocal cues exert a strong influence on listeners' perceptions and that, in general, these responses rest on stereotypes associated with particular vocal qualities or intonations (Knapp 1988: 286-287). Studies of the communication of vocal emotional expression can be regarded as lagging chronologically behind, but parallel to, studies of the communication of facial emotional expression (Fernández Dols et al. 1990: 277), although it was only from the 1980s onwards that they acquired an integrative theoretical framework. That framework was provided by Scherer (1986) and his Component Process Model, which links emotional states to acoustic parameters (Jiménez Fernández 1986: 59). The model starts from the following assumptions: (a) certain emotions are recognised through the voice at rates well above chance; (b) there is great variability between individuals, and even within individuals, in the expression of emotions; and (c) despite this variability, there must be certain stable rules or parameters that make significant recognition of emotions possible. On the basis of his model, Scherer (1986: 149 ff.) predicts the vocal effects of specific emotions, with fundamental frequency (F0) emerging as the most widely used and reliable acoustic parameter for identifying acoustic indicators of emotion (Scherer 1986: 295; 1995: 239). As regards the emotions most directly related to our object of study, monotony and the inference of emotions such as boredom or indifference, the most important resulting indicators are essentially a low F0, low F0 variability and a slow delivery rate (1982: 300; 1995: 241). The effects of monotony have also been studied in terms of their impact on message comprehension (Mehrabian and Williams 1969). In this sense intonation makes it possible, while respecting the linguistic norms of a language, for the listener to receive adequately what a given speaker says, facilitating the processing of discourse structure (cf. Venditti and Hirschberg 2003), by virtue of its cohesive or integrating function, which "divides the phonic thread into parcels, so that the listener can perceive it like waves, facilitating comprehension" (Álvarez Muro 2001: 1), as well as its function of moving the information forward and giving prominence to given and new information (cf. Bolinger 1986).4 It is therefore hardly surprising that Brown's (1982) experimental results show that increased intonational variation makes speakers appear more competent, while reduced intonational variation produces the impression of lower competence.
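The three indicators just described (low F0, low F0 variability, slow delivery rate) lend themselves to a simple operationalisation. The following Python sketch is purely illustrative and is not part of the study's own instrumentation (the study used the Visi Pitch and Multi Speech programs); the function name, the sample F0 contours and the syllable counts are all hypothetical.

```python
import statistics

def vocal_indicators(f0_hz, n_syllables, duration_s):
    """Summarise the three vocal cues Scherer links to boredom/indifference:
    mean F0, F0 variability, and delivery rate (all expected to be low/slow
    in monotonous speech)."""
    return {
        "mean_f0": statistics.mean(f0_hz),        # low in monotonous delivery
        "f0_sd": statistics.pstdev(f0_hz),        # low variability = monotony
        "rate_syll_s": n_syllables / duration_s,  # slow in monotonous delivery
    }

# Hypothetical F0 samples (Hz) from a monotonous and a lively delivery
monotone = [181, 183, 180, 182, 184, 181]
lively = [180, 235, 200, 255, 190, 240]

m = vocal_indicators(monotone, n_syllables=10, duration_s=4.0)
v = vocal_indicators(lively, n_syllables=16, duration_s=4.0)
assert m["f0_sd"] < v["f0_sd"]  # the monotone sample shows less F0 variability
```

On such a sketch, a delivery would count as "monotonous" in Scherer's sense when all three values fall at the low end of a speaker's habitual range, not against any absolute threshold.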
As far as persuasion is concerned, the effects of monotony appear clearly negative, in the sense that credibility, as a fundamental component of persuasion, means that certain judgements about the speaker's personality, including judgements of truthfulness, dynamism, likeability or competence, are made on a vocal basis (cf. Knapp 1988; Brennan and Williams 1995). Recent advances in neuroscience study the routes of emotional inference as emotional processes linked to cognitive processes, thereby also explaining their involvement in decision-making. Indeed, the importance of emotions is explained from an evolutionary standpoint: the emotional subsystem would be activated before the cognitive one, consciously or unconsciously, through triggers established by evolution or by individual experience (Damasio 2006: 57). In this way, and given that the cognitive and emotional levels are continuously connected (ibidem: 72), the emotional system would select the range of possible decisions relevant to a given situation, in certain cases predetermining that decision-making and becoming a "mechanism" that assists rational processes (Simón 1997: 374).

4. For an extensive bibliography on linguistic contributions to intonation, see Laver (1994) and Hirschberg (2006). A broad review of the functions of intonation can be found in Nafá Waasaf (2006).

3. Objectives and hypotheses

The objectives of this study are to probe the quality evaluation that users make of a monotonous SI as against a non-monotonous one, as well as the repercussions of the contrasts between different interpretations that they may receive during one and the same communicative event. At the same time, the aim is to analyse the emotional routes by which such evaluations might be established. Since the results of the various previous studies carried out with the same material show a general tendency to detect and penalise monotonous intonation, but also a certain contradiction, a further, tangential objective was to add data on general behaviour towards a monotonous interpretation, irrespective of viewing order, and to approach the subjects' explanations through more open questionnaires than those used to date. The hypotheses derived from these objectives are as follows:

Hypothesis 1: In the evaluation of an SI, users penalise the interpreter's monotonous intonation regardless of compliance with the remaining quality parameters.

Hypothesis 2: The inference of negative emotional attitudes, drawn from the interpreter's monotonous intonation, may be linked to the negative evaluation of an SI.

Hypothesis 3: Where a non-monotonous interpretation contrasts with a monotonous one, users would penalise the monotonous SI more heavily and reward the non-monotonous one even more.
Furthermore, rethinking the investigation entailed broadening the objectives in order to study the evaluation patterns of SI recipients from a more global point of view. The aim would be to establish a hypothetical framework of quality evaluation in interpreting capable of explaining the moderate effect of monotonous intonation as a minimiser of SI quality, according to the following hypothesis:

Hypothesis 4: In evaluating an SI, users assume a certain degree of monotony as habitual in interpreters' intonation.
4. Material and method

4.1 Methodology

The methodological design comprises an experiment in which the subjects evaluate in two groups in order to assess two interpretations (see 4.3): an SI manipulated for monotonous intonation (ISM) and a control SI (ISC). While the first group watches and evaluates the ISM first and then the ISC (group MC), in the other group users watch and evaluate the ISC first and then the ISM (group CM). The modified and extended design includes several focus groups and focused interviews. The first focus group, "experts", comprises specialists in Psychology, Interpreting and Semiotics and is intended to debate the evaluation process itself from different perspectives so as to feed the subsequent groups. The second, the "subjects" group, splits in turn into two subgroups with compositions similar to those of the experiment. In addition, focused interviews are conducted with three subjects: a philologist, an advanced interpreting student and an expert. The organisation of the focus groups and focused interviews provides for the evaluation of six SIs, divided into three blocks, including both authentic and manipulated interpretations, so that the subjects start from a freshly activated evaluation process. Both the focus-group script and the number of participants are modified from the first session onwards, on the basis of successive partial analyses of the results obtained. The base script raises various topics relating to the evaluation of an interpretation, but is flexible so as to adapt to participants' reactions and interventions. From the second focus group onwards, participants are asked to offer their spontaneous opinions and comments from the very moment they begin listening to the first interpretation.
The evaluations are collected in a very brief questionnaire, and the interviews and focus-group discussions are audio-recorded and subsequently transcribed. The present investigation, from the pilot experiment to the completion of the focus groups, was carried out between May 2007 and January 2008.

4.2 Subjects
In total, eighteen subjects took part in the pilot experiment: ten lecturers from various Philology departments of the UGR (Universidad de Granada) who teach at the FTI (Facultad de Traducción e Interpretación), and eight final-year students of the Licenciatura in Translation and Interpreting at the FTI, specialising in conference interpreting, divided into two groups. Since the objectives of the investigation included an important qualitative, exploratory strand, the combination of language experts in general and quasi-interpreter subjects seemed to us an appropriate mix: such subjects are familiar with elements that go beyond the mere reception of an interpretation and could provide useful explanations to underpin later studies with more prototypical users. As for the focus groups, the first, the "experts" group, is made up of five female academics: two from the Faculty of Psychology specialising in speech, two interpreting lecturers from the FTI, and one researcher from the Faculty of Philosophy and Letters specialising in Semiotics. The "subjects" focus groups follow the composition of the pilot experiment, so that results can be compared, and comprise six subjects (spread over two sessions of two and four) and seven subjects, respectively. After the first focus group, the three focused individual interviews are carried out.

4.3 Material
For the pilot experiment, the material consists of two DVDs (ISM and ISC) showing the delivery of a German speech with the superimposed voice of an interpreter performing the SI into Spanish. As this is experimental material, it was manipulated by the author, the only difference between the ISM and the ISC lying in the intonation parameter (monotonous in the ISM, non-monotonous in the ISC). Both SIs were produced by the same interpreter, to avoid introducing vocal differences that might bias the results, and were preceded by several preliminary studies that adapted the content and oral expression of the interpretation to professional reality (cf. Collados Aís 1998). The manipulation of the intonation parameter was carried out in situ at the recording site (Centro de Instrumentación Científica of the UGR) with the help of a psychologist, who elicited in the interpreter a negative emotional state of low activation (cf. Scherer 1995). The ISC was intended to embody a possible ideal quality for an SI, while the ISM kept all quality parameters identical to the ISC except intonation. The two versions were subsequently tested through pilot studies and acoustic analyses (Visi Pitch software) and used in various investigations (cf. Collados Aís 1998, 2001, 2007). The subjects' evaluations are collected in two types of questionnaire.5 The first uses a five-point range for the closed questions on the overall rating of the interpretation (cf. Gile 1990) and on four quality parameters: cohesion, correct transmission of the original speech, pleasantness of voice and intonation, together with assessments of the professionalism and reliability the interpreter conveys. The second questionnaire opens with two open questions asking about the most positive and the most negative aspects of the interpretation just heard. Its second block of questions includes a numerical rating of the interpreter's attitude (again on a five-point range) and its characterisation according to options provided or added by the subjects. Finally, a third block is designed to capture the subjects' perceptions of the vocal intra-parameters normally associated with intonation: speed, volume and pitch. For the focus groups and focused interviews, the material consists of the script drawn up by the researcher, together with a brief questionnaire probing three aspects of the evaluation of the interpretation (overall rating, intonation and the interpreter's attitude) for the six SI versions in audio, included so that the subjects start from freshly activated evaluation processes. So that the evaluations are comparable with the experiment, the same SI versions (ISM and ISC) are included, together with four further authentic interpretations selected by the author from a large parallel corpus.6 Of these, two SIs were selected on perceptual and acoustic criteria (Multi Speech software) for homogeneity with the two manipulated interpretations: one monotonous interpretation and one melodious interpretation, equivalent in quality to the manipulated control interpretation. The other two interpretations present problems of fluency and diction and were included in order to diversify the subjects' evaluation process.

5. We refer here only to the sections of the questionnaires on which the present study is based.

5. Results and discussion

5.1 Pilot experiment
5.1.1 Overall rating and quality parameters
As Figure 1 shows, for group CM every item is rated higher in the ISC. The greatest differences occur in the intonation parameter, where the ISC exceeds the ISM by 3.11 points, and in pleasantness of voice (2.11) and overall rating (1.56). The smallest differences, by contrast, occur in correct transmission (0.17) and cohesion (0.44). In any event, the ISC receives its highest ratings for pleasantness of voice (4.77) and cohesion (4.55), although the remaining items also exceed a mean of 4 points. For the ISM, the lowest ratings go to intonation (1.33) and pleasantness of voice (2.66). The remaining items exceed a mean of 3, except overall rating, which approaches it at 2.88.

Figure 1. Ratings, group CM (overall rating, cohesion, correct transmission, voice, intonation; ISC vs. ISM, 0-5 scale)
Figure 2. Ratings, group MC (overall rating, cohesion, correct transmission, voice, intonation; ISC vs. ISM, 0-5 scale)

6. From a large multilingual corpus of original speeches and interpretations recorded via the EbS (Europe by Satellite) channel within the research projects mentioned above.

For group MC (see Figure 2), the ISC ratings exceed a mean of 4 points on every item, intonation and pleasantness of voice again being the most highly rated (4.77 for both). For the ISM, the lowest ratings again correspond to intonation and pleasantness of voice (1.44 and 3, respectively). The remaining ratings all exceed 3 points. As for the differences between the interpretations, the ISC surpasses the ISM on every item except correct transmission. The greatest differences occur in intonation (3.33) and pleasantness of voice (1.77). The content parameters either show no difference, as with correct transmission (4 in both cases), or only minimal ones, as with cohesion (+/-0.33).

5.1.2 Vocal intra-parameters
In experimental condition CM, the ISC is rated higher than the ISM on all intra-parameters. The differences range from 1.33 for pitch to 1 for volume. In condition MC, both pitch (1.07) and volume (1) are rated higher for the ISC than for the ISM. Speed shows no difference.

5.1.3 Attitude, professionalism and reliability of the interpreter
For group CM (see Figure 3), the interpreter's attitude scores 4.11 for the ISC and 2.88 for the ISM, a difference of 1.23 in favour of the former. For professionalism the difference is 1.66 (4.77 versus 3.11) and for reliability 1 point (4.44 versus 3.44).
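The differences reported above are simple subtractions of mean ratings on the five-point scale. As a consistency check on the group-CM figures for intonation, voice and overall rating, the reported ISM means plus the reported ISC-ISM differences should recover the ISC means; the sketch below does only that arithmetic (the English dictionary keys are ad-hoc labels, not the questionnaire's wording).

```python
# Group CM, 1-5 scale: ISM means and ISC-ISM differences as reported in 5.1.1
ism = {"intonation": 1.33, "voice": 2.66, "global": 2.88}
diff = {"intonation": 3.11, "voice": 2.11, "global": 1.56}

# The implied ISC means follow by simple addition
isc = {k: round(ism[k] + diff[k], 2) for k in ism}
assert isc["voice"] == 4.77  # matches the ISC voice mean reported in the text
```

The check holds: the implied ISC voice mean (4.77) coincides with the value reported directly, and intonation and overall rating both come out at 4.44.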
Figure 3. Ratings, group CM (professionalism, reliability, attitude; ISC vs. ISM, 0-5 scale)
Figure 4. Ratings, group MC (professionalism, reliability, attitude; ISC vs. ISM, 0-5 scale)
For group MC (see Figure 4), the ISC ratings surpass the ISM in attitude by 2 points (4.44 for the ISC versus 2.44 for the ISM); for reliability the difference is 1.22 (4.55 versus 3.33) and for professionalism 1 point (4.33 versus 3.33). For group CM, the attitude subjects infer in the interpreter for the ISC is neutrality in eight cases, interest in six, and joy and enthusiasm in one case each. For the ISM there are seven mentions each of boredom and dejection, followed by five of lack of interest, three of sadness and two of neutrality. In group MC, the ISM receives seven descriptions of boredom, six of lack of interest, and two each of dejection and neutrality. For the ISC, most descriptions fall under interest (nine mentions), followed by six of animation and three of enthusiasm. Neutrality receives no mention.

5.1.4 Open-ended assessments
In condition CM, the ISM receives twelve comments under 'the most positive' aspect of the interpretation. Of these, nine concern content, specifically cohesion and coherence (four comments) and fidelity (five). In every case the subjects qualified these comments with expressions such as "I suppose", "there seems to be" or "in general". The remaining comments concern linguistic production (one) and "reasonable" rhythm and fluency (two). As for 'the most negative', thirteen comments were made, three of which refer to the effects the interpretation has on the subjects themselves: "the lack of fluency makes the rhythm speed up, with the resulting loss of information", "the interpretation makes me nervous" or causes "discomfort". The remaining comments divide into six on intonation and five on fluency (hesitations and falterings, speeding up, pauses).
In the same condition, the ISC receives twenty-one comments in the answers on 'the most positive', eighteen of which concern non-verbal matters such as pleasantness of voice, intonation, the interpreter's confidence and command, the absence of hesitations and silences, high professionalism and good quality in general. Three comments concern content, specifically coherence and cohesion. Asked about 'the most negative', the subjects made six comments, which, however, state that nothing negative could be detected, except for one mentioning "the pauses, which produce slight lags". In experimental condition MC, 'the most positive' aspect of the ISM received thirteen mentions, distributed as follows: expression (two), content transmission (two), rhythm and fluency (four), and the calm, confidence or assurance the interpreter conveyed (three); two further comments stated that the most positive things were "the speaker's topic" and that the interpreter "seemed professional, though with room for improvement". In any case, most mentions were preceded by qualifications such as "it seems to me…" or "probably". As for 'the most negative', twelve mentions were made, four referring to fluency and eight to intonation, the latter branching into descriptions of monotony and of its effects on the listener: "the logic of the speech could not be followed properly", because the intonation "was not natural", "was robot-like, so it was boring and flattened the information", or "showed no interest on the interpreter's part in what she was saying". In this condition the ISC receives twenty-one mentions, fifteen of which concern non-verbal matters (intonation, voice, naturalness, rhythm…), three content transmission, two the attitude conveyed and the interest the interpreter puts "into her performance", and the last states that this would be "a perfect interpretation".
As for the most negative aspects, only four comments were made, three of which begin by noting that these are trivial matters. The comments refer to the occasional slip in expression and an occasional moment of hesitation on the interpreter's part.

5.1.5 Discussion
The results indicate that, once the interpreter's monotony is recognised, the ISM is rated below the ISC on practically every item and in either experimental condition. These findings would confirm earlier results, including the carry-over to another parameter closely linked to intonation, pleasantness of voice, these being the two worst-rated parameters (Collados Aís 1998). They would not, therefore, confirm the latest results obtained in the recent experiment with the same interpretations (Collados Aís 2007). Given the number of subjects in this pilot experiment, and the fact that they come from a speciality closer to languages, which may have given them a heightened sensitivity to non-verbal aspects (cf. Kurz 1993; Kurz and Pöchhacker 1995), the results can in no way be considered conclusive. We do consider, however, that they yield a number of interesting data and raise questions worth pursuing. Indeed, it is striking that the overall ratings of both interpretations, in either experimental condition, have essentially remained within the 3-to-4 band. Although the subjects detect the monotony of the interpretation (1.33 and 1.44, the intonation ratings in the respective experimental conditions), and although pleasantness of voice and the vocal intra-parameters are perceived in line with what vocal monotony would predict, as are the inferred attitudes, which are decidedly negative for the ISM (above all in condition CM), the carry-over of the "penalty" has stayed within limits that seem to treat the ISM as still falling within the range of an acceptable interpretation of average quality. This confirms the close relationship between intonation and pleasantness of voice (Collados Aís 1998, 2007; Iglesias Fernández 2007), and between both of these and emotional inference (Collados Aís 1998). As regards attitude, the exposure contrast is clear, especially if we look at the differences in the inferences drawn for the ISC according to experimental condition. The contrast with monotony leads subjects to infer more "active" and "positive" emotions, such as interest or enthusiasm, whereas without the contrast they perceived the interpreter's neutrality more (cf. Collados Aís 1998).
Nevertheless, the scores given to the various parameters considered remain practically stable; indeed, the ISC receives lower scores on the content parameters considered, and on professionalism, which raises the question of whether greater activation on the interpreter's part, and the linking of that greater activation to a positive rather than a neutral emotion, might actually be regarded by users as less professional, possibly in the sense of making the interpreter's intervention more visible (cf. Kopzcynski 1994). This hypothesis could be framed as follows: once a certain level of melodic intonation is perceived, heightened by contrast with a monotonous intonation, an 'inverted-U effect' might arise (cf. Collados Aís 1998). Curiously, for the ISM too, although the overall rating does drop by 0.78 on second viewing, it is the scores for cohesion and correct transmission which, together with reliability, increase, if only minimally.

5.2 Focus groups and focused interviews
5.2.1 Focused interview and 'experts' focus group
For the six SIs evaluated, the results show similar scores among the experts, at least as regards identifying the least and most adequate versions. Thus both the monotonous versions and the control versions, experimental and authentic alike, were recognised and rated accordingly, with no apparent differences detected with respect to that material. As for the debate, it opens with the question of a possible reluctance to use the highest or lowest ends of the rating range. It is unanimously agreed that this natural reluctance to rate at the extremes does exist, above all after a single listening, and that it concerns the lower extreme more than the upper one. It is qualified, however, by noting that this reluctance may not arise in a real situation, where no numerical rating is required and all that is judged, in general, is whether the interpretation is good or bad (cf. Gile 1990). When the interpreters' attitude is raised, the interventions seem to suggest that it is one of the aspects by which the participants were most guided, specifically by the satisfaction it produced in themselves as evaluating recipients, and that this satisfaction was carried over into the overall rating. That is, what would matter, in the words of one participant, corroborated by the rest, is "not the emotion the interpreter displays, but the emotion she produces in the recipient". The participants consider it logical that the non-verbal should affect evaluation, since form arrives "sooner and more strongly". It is suggested, however, that if the topic of the interpretation fell within the user's speciality, and was therefore of great interest to him or her, the recipient might try to concentrate more on content than on form; although it would also be possible for the non-verbal to affect such a user more, precisely because it would be more irritating given the greater interest in the content.
The participants noted that in all the interpretations their ratings of intonation were not especially high, even in those where the intonation was not monotonous. This prompted the spontaneous suggestion that the interpreter’s intonation may simply differ from what we are used to (“artificial or not spontaneous”). This different intonation was linked to the coding we generally have for certain genres: broadcasters, friends and so on, but not for interpreters. Neither the interpreter’s professional code nor the code held by the interpreter’s user would be clearly established. When the linguist in the discussion group asked the interpreters present what they would consider a good interpretation, they answered by pointing to the ‘spontaneity’ of the interpretation, which implies “that one should not notice that it is an interpretation”. This answer was corroborated by the other participants in their capacity as users of interpretations. The conclusion drawn was that “precisely the good interpretation would be the one that breaks the professional interpreter’s code, which does contain a specific intonation, distinct from that of other codes”. A good interpretation “would therefore be the antithesis of the interpreter’s professional code”. In any case, a distinction would have to be made between users who have no predefined code, for whom this would hold, and those who do have one. The latter possibly expect anything but overacting, yet would accept indicators of that code, specifically in its intonation.

Ángela Collados Aís

5.2.2 Focused interviews and discussion groups: ‘subjects’

In the ‘subjects’ group, two focused interviews were conducted, with two subjects per subgroup, together with two discussion groups. For reasons of space, the most salient results of the discussion groups and interviews are summarised jointly. The dynamics of these groups differed from those of the ‘experts’ group in that not all the evaluations of the interpretations were completed before the discussion began; instead, the general topic was proposed first and, when discussion flagged, the first evaluation began. Evaluations of interpretations were thus successively interleaved with comments and debate. Possibly one of the most interesting findings of the debate on the subjects’ evaluation process was their spontaneous focus, when evaluating the first interpretation (authentic ISC), on aspects of content, whereas in the second interpretation (authentic ISM) the focus fell on aspects deriving from the intonation, leaving content aside. Moreover, when evaluating and commenting on the fourth interpretation (manipulated ISM), they went back to the second version to differentiate the type of monotony, stating that while the second version did not let them “enter” the content, hindering comprehension of the message, the fourth did not hinder comprehension because, despite the monotony, it structured the discourse adequately through intonation.
When confronted at the end of the session with their successive ratings, the subjects, though initially denying it, came to realise that they had kept within certain limits, both at the maximum and at the minimum, except for one subject of German nationality, who commented on possible cultural influences and said that she personally had had no problem in that respect. Regarding monotonous intonation, several subjects acknowledged that they had not been aware of its influence on their ratings and that it had in fact not drastically affected the overall scores, since an interpreter’s intonation is usually characterised by a certain monotony and “oddness”, and because they had also borne in mind that interpreting is an extremely difficult activity. One subject said he consciously asked of an interpretation that it be “pleasant”, which covered mainly nonverbal aspects, and that he would be prepared to “sacrifice” certain elements of content. The remaining subjects unanimously endorsed this comment. There was also near unanimity that the order of the interpretations affected their ratings from the first hearing onwards.
The first rating was possibly influenced by the “ideal or model” of interpretation the subjects held on the basis of their previous experience with interpreting, while subsequent ratings, unless that ideal was very marked, were influenced more by the immediately preceding experiences.

5.2.3 Discussion

The various discussion groups organised, as well as the focused interviews conducted, yielded a good number of points of agreement among subjects and revealed fairly similar behavioural patterns of which the subjects themselves were at times unaware, at least when evaluating. One such pattern is the limits within which most subjects move when rating, above all in the lower band. These limits stem essentially from two elements: on the one hand, a fairly widespread justification based on the great difficulty of the interpreting task; on the other, the prior model of SI the subjects hold, which seems to include the interpreter’s peculiar intonation (cf. Shlesinger 1994; Williams 1995; Ahrens 2005), as well as a certain monotony. These results would partly explain the low scores that the interpreter’s intonation (when not manipulated towards monotony) has normally received in the various experiments carried out (cf. Collados Aís, Pradas Macías, Stévaux and García Becerra 2007), as well as its moderate carry-over into the overall ratings. The influence of nonverbal elements on evaluation is considered practically unanimous in view of the ratings given to the different versions of the interpretation, and it is striking that this influence is not always unconscious, as had been assumed (Collados Aís 1998); rather, it is a conscious demand of some subjects, who would be prepared to give up elements of content in favour of an interpretation that is “pleasant”.
Given that certain nonverbal performances at times prevent access to the content, this preference for the nonverbal over the verbal becomes even more understandable: in certain cases it is not, strictly speaking, a choice between verbal and nonverbal, but a matter of whether the verbal is accessible at all through the nonverbal. As far as intonation is concerned, this would be the case with intonations that directly hamper comprehension (cf. Álvarez Muro 2001; Venditti and Hirschberg 2003), as opposed to intonations that are merely annoying. Indeed, a monotonous intonation was even justified by the interpreter’s vast experience leading to routine, from which subjects deduced that the content was probably very good. If, on the contrary, the interpreter’s monotony is associated with insecurity, the opposite deduction is made (in this regard it would be interesting to verify whether factors such as the interpreter’s inferred age intervene in these deductions). In any case, according to the results, a particular attitude is inferred of the interpreter, and this attitude is inferred, not exclusively but primarily, from intonation, according to the experts (cf. Brown 1982; Ladd, Scherer and Silverman 1986; Scherer 1986, 1995; Wichmann 2000). As for the order of the interpretations, all the subjects consider that it has an influence, which would contradict the results obtained in the pilot experiment.

6. Conclusions

In our view, the most important conclusion for the evaluation of the evaluation of quality in SI is that studying it from complementary approaches (cf. Gile 2005: 165) can open up new perspectives on the reasons behind users’ behaviour, so that the explanations of certain patterns contribute to a definition or redefinition of the interpreter’s role. In this respect, we consider it advisable to extend the present research by conducting a discussion group of prototypical users, in order to corroborate certain aspects that seem to emerge from the results obtained here, in both strands of the study, and which we now summarise by way of conclusions. The first conclusion is the confirmation that receivers detect and penalise a monotonous SI (cf. Collados Aís 1998), even drawing inferences that go beyond the vocal intra-parameters established by technical-acoustic means, and to a greater extent than users’ prior quality expectations would suggest (cf. Kurz 2001, 2003; Pradas Macías 2003, 2004). It is indeed possible that the activation of emotional pathways plays a prominent role through the inferences made, qualified by previous experience, thus influencing the decision (cf. Damasio 2005) as to whether or not the interpretation was a good one.
However, the fact that users take the extreme difficulty of the interpreting task as their starting point and focus on what is “really important” through certain indicators, even though they cannot verify whether those indicators are met (Gile 1995, 1999; Collados Aís 1998; Fernández Sánchez et al. 2007), may on certain occasions justify a higher rating of the performance, even with monotonous intonation. Moreover, the existence of a prior model that includes an unusual but accepted code, at least as regards its sui generis intonation (Shlesinger 1994; Williams 1995; Ahrens 2005), may mean that a lower rating cannot be considered low in absolute terms but only in relative ones, by comparison with other, less monotonous interpretations. A further explanation can be added: the existence of rating limits within which users move, situated essentially in the lowest bands of the rating scales but also with an upper ceiling, possibly because the user who rates the intonation positively directs their attention preferentially to the analysis of content and expression, and these parameters then become the object of their criticism. These limits, together with the justification based on the difficulty of the interpreting task and the impossibility of ‘perfection’, possibly explain why no important differences arose in the experiment according to the order in which the interpretations were evaluated. As for the prior model, receivers do not always seem to have one, but when they do, it becomes their mediate reference; the immediate reference would be the other interpretations, which update that model. Intonation seems to be confirmed as a basic element in emotional inference in interpreting too, and appears to be one of the aspects most sensitive to the order of the interpretations: as in the present experiment, ratings can swing from a majority perception of neutrality for non-monotonous intonations to inferences of considerably more active attitudes on the interpreter’s part, if the rating takes place after a monotonous interpretation has been heard and rated. This fact strikes us as highly eloquent with regard to an additional explanation for the evaluative patterns of users, who start from the assignment of a barely visible role to the interpreter (cf. Kopczynski 1994) and rarely see the interpreter for what he or she really is: a basic decision-making agent. In this respect, we consider it important for interpreters to know the motivations that guide users in their evaluation, so as to incorporate elements that contribute to the success of their interpretation or, in other words, so that a quality interpretation is perceived as such by its users (Collados Aís 1998).

References

Ahrens, B. 2005. Prosodie beim Simultandolmetschen. Frankfurt: Lang.
AIIC. 2004. Practical Guide for Professional Conference Interpreters. www.aiic.net
Álvarez Muro, A. 2001. “Análisis de la oralidad: Una poética del habla cotidiana.” Estudios de Lingüística Española 15. http://elies.rediris.es/elies15/index.html#ind
Baigorri, J. 2000. La interpretación de conferencias: El nacimiento de una profesión. De París a Nuremberg. Granada: Comares.
Bolinger, D. 1986. Intonation and its Parts. Melody in Spoken English. Stanford: Stanford University Press.
Brennan, S.E. and Williams, M. 1995. “The feeling of another’s knowing: Prosody and filled pauses as cues to listeners about the metacognitive states of speakers.” Journal of Memory and Language 34: 383–398.
Brown, B.L. 1982. “Experimentelle Untersuchungen zur Personenwahrnehmung aufgrund vokaler Hinweisreize.” In Vokale Kommunikation: Nonverbale Aspekte des Sprachverhaltens, K.R. Scherer (ed.), 211–227. Weinheim/Basel: Beltz.
Bühler, H. 1986. “Linguistic (semantic) and extralinguistic (pragmatic) criteria for the evaluation of conference interpretation and interpreters.” Multilingua 5 (4): 231–235.
Cheung, A. 2003. “Does accent matter? The impact of accent in simultaneous interpretation into Mandarin and Cantonese on perceived performance quality and listener satisfaction level.” In Evaluación de la calidad en interpretación de conferencias: investigación, Á. Collados Aís, M.M. Fernández Sánchez and D. Gile (eds), 85–96. Granada: Comares.
Collados Aís, Á. 1998. La evaluación de la calidad en interpretación simultánea. La importancia de la comunicación no verbal. Granada: Comares.
Collados Aís, Á. 2001. “Efectos de la entonación monótona sobre la recuperación de la información en receptores de interpretación simultánea.” Trans 5: 103–110.
Collados Aís, Á. 2007. “La incidencia del parámetro entonación.” In Evaluación de la calidad en interpretación simultánea: parámetros de incidencia, Á. Collados Aís, E.M. Pradas Macías, E. Stévaux and O. García Becerra (eds), 159–174. Granada: Comares.
Collados Aís, Á., Pradas Macías, E.M., Stévaux, E. and García Becerra, O. (eds). 2007. Evaluación de la calidad en interpretación simultánea: parámetros de incidencia. Granada: Comares.
Damasio, A. 2006. En busca de Spinoza. Neurobiología de la emoción y los sentimientos. Madrid: Crítica.
Fernández Dols, J.M., Iglesias, J.M. and Carrera, J. 1990. “Comportamiento no verbal y emoción.” In Motivación y emoción, S. Palafox and J. Vila (eds), 255–307. Madrid: Alhambra Universidad.
Fernández Sánchez, M.M., Collados Aís, Á., Nobs Federer, M.L., Pradas Macías, E.M. and Stévaux, E. 2007. “La incidencia del parámetro transmisión correcta del discurso original.” In Evaluación de la calidad en interpretación simultánea: parámetros de incidencia, Á. Collados Aís, E.M. Pradas Macías, E. Stévaux and O. García Becerra (eds), 89–104. Granada: Comares.
Garzone, G. 2003. “Reliability of quality criteria evaluation in survey research.” In Evaluación de la calidad en interpretación de conferencias: investigación, Á. Collados Aís, M.M. Fernández Sánchez and D. Gile (eds), 23–30. Granada: Comares.
Gile, D. 1990. “L’évaluation de la qualité de l’interprétation par les délégués: une étude de cas.” The Interpreters’ Newsletter 3: 66–71.
Gile, D. 1991. “A communication-oriented analysis of quality in nonliterary translation and interpretation.” In Translation: Theory and Practice. Tension and Interdependence, M.L. Larson (ed.), 188–200. Binghamton NY: SUNY.
Gile, D. 1995. “Fidelity assessment in consecutive interpretation: An experiment.” Target 7 (1): 151–164.
Gile, D. 1999. “Variability in the perception of fidelity in simultaneous interpretation.” Hermes 22: 51–79.
Gile, D. 2005. “Empirical research into the role of knowledge in interpreting: Methodological aspects.” In Knowledge Systems and Translation, H. Dam, J. Engberg and H. Gerzymisch-Arbogast (eds), 149–171. Berlin/New York: Mouton de Gruyter.
Herbert, J. 1970. Manual del intérprete. Geneva: Librairie de l’Université Georg.
Hirschberg, J. 2006. “Some bibliographical references on intonation and intonational meaning.” http://arxiv.org/PS_cache/cmp-lg/pdf/9405/9405003.pdf
Hurtado Albir, A. 2001. Traducción y Traductología. Introducción a la Traductología. Madrid: Cátedra.
Iglesias Fernández, E. 2007. “La incidencia del parámetro agradabilidad de la voz.” In Evaluación de la calidad en interpretación simultánea: parámetros de incidencia, Á. Collados Aís, E.M. Pradas Macías, E. Stévaux and O. García Becerra (eds), 37–51. Granada: Comares.
Jiménez Fernández, A. 1985. Marcadores emocionales en la conducta vocal. Doctoral dissertation, Universidad de Madrid.
Kalina, S. 1998. Strategische Prozesse beim Dolmetschen. Tübingen: Gunter Narr.
Knapp, M.L. 1988. La comunicación no verbal. El cuerpo y el entorno. Barcelona: Paidós Comunicación.
Kopczynski, A. 1994. “Quality in conference interpreting: some pragmatic problems.” In Bridging the Gap, S. Lambert and B. Moser-Mercer (eds), 87–99. Amsterdam/Philadelphia: John Benjamins.
Kurz, I. 1989. “Conference interpreting user expectations.” In Coming of Age, D. Hammond (ed.), 143–148. Medford, NJ: Learned Information.
Kurz, I. 1993. “Conference interpretation: expectations of different user groups.” The Interpreters’ Newsletter 5: 13–21.
Kurz, I. 2003. “Quality from the user perspective.” In Evaluación de la calidad en interpretación de conferencias: investigación, Á. Collados Aís, M.M. Fernández Sánchez and D. Gile (eds), 3–22. Granada: Comares.
Kurz, I. and Pöchhacker, F. 1995. “Quality in TV interpreting.” Translatio. Nouvelles de la FIT/FIT Newsletter 15 (3/4): 350–358.
Ladd, R.D., Scherer, K.R. and Silverman, K. 1986. “An integrated approach to studying intonation and attitude.” In Intonation in Discourse, C. Lewis (ed.), 125–138. San Diego: College-Hill Press.
Laver, J. 1994. Principles of Phonetics. Cambridge: Cambridge University Press.
Marrone, S. 1993. “Quality: A shared objective.” The Interpreters’ Newsletter 5: 35–41.
Mazzetti, A. 2005. “The influence of segmental and prosodic deviations on source-text comprehension in simultaneous interpretation.” The Interpreters’ Newsletter 12: 125–147.
Mehrabian, A.H. and Williams, M. 1969. “Nonverbal concomitants of perceived and intended persuasiveness.” Journal of Personality and Social Psychology 13: 37–58.
Morgan, G. 1988. Beyond Method: Strategies for Social Research. London: Sage.
Nafá Waasaf, M.L. 2005. Análisis acústico-discursivo de la entonación en interpretación simultánea inglés británico-español peninsular. Aplicaciones a la didáctica y la investigación de lenguas. Doctoral dissertation, Universidad de Granada.
Pradas Macías, E.M. 2003. Repercusión del intraparámetro pausas silenciosas en la fluidez: influencia en las expectativas y en la evaluación de la calidad en interpretación simultánea. Doctoral dissertation, Universidad de Granada.
Pradas Macías, E.M. 2004. La fluidez y sus pausas: enfoque desde la interpretación de conferencias. Granada: Comares.
Russo, M. 2005. “Simultaneous film interpreting and users’ feedback.” Interpreting 7 (1): 1–26.
Scherer, K.R. 1982. “Stimme und Persönlichkeit – Ausdruck und Eindruck.” In Vokale Kommunikation: Nonverbale Aspekte des Sprachverhaltens, K.R. Scherer (ed.), 188–210. Weinheim/Basel: Beltz.
Scherer, K.R. 1986. “Vocal affect expression: A review and a model for future research.” Psychological Bulletin 99 (2): 143–165.
Scherer, K.R. 1995. “Expression of emotion in voice and music.” Journal of Voice 9 (3): 235–248.
Shlesinger, M. 1994. “Intonation in the production and perception of simultaneous interpretation.” In Bridging the Gap, S. Lambert and B. Moser-Mercer (eds), 225–236. Amsterdam/Philadelphia: John Benjamins.
Simón, V.M. 1997. “La participación emocional en la toma de decisiones.” Psicothema 9 (2): 365–376.
Stévaux, E. 2007. “La incidencia del parámetro acento.” In Evaluación de la calidad en interpretación simultánea: parámetros de incidencia, Á. Collados Aís, E.M. Pradas Macías, E. Stévaux and O. García Becerra (eds), 17–35. Granada: Comares.
Toury, G. 2004. Los estudios descriptivos de traducción y más allá. Metodología de la investigación en estudios de traducción. Translated and edited by R. Rabadán and R. Merino. Madrid: Cátedra.
Valles, M.S. 1997. Técnicas de investigación cualitativa. Reflexión metodológica y práctica profesional. Madrid: Síntesis.
Venditti, J.J. and Hirschberg, J. 2003. “Intonation and discourse processing.” Proceedings of the 15th International Congress of Phonetic Sciences, Barcelona, Spain. http://www1.cs.columbia.edu/nlp/papers2003/venditti_hischberg.03.pdf
Williams, S. 1995. “Observations on anomalous stress in interpreting.” The Translator 1 (1): 47–66.
Linguistic interference in simultaneous interpreting with text A case study Heike Lamberger-Felber & Julia Schneider University of Graz, Austria
Linguistic interference in simultaneous interpreting is among those phenomena that many authors have written about but few have actually investigated. Following Daniel Gile’s call for more empirical data, the authors analyse the frequency and types of interference in a corpus of 36 interpretations produced by twelve professional conference interpreters. The results indicate a high incidence of interference (INT) in professional interpreters’ output, as well as high variability in both the frequency and the type of INT among subjects. The lack of correlations between INT and the other parameters investigated seems to indicate that INT is to a certain extent independent of other output parameters (e.g. semantic deviations).
Keywords: simultaneous interpreting, interference, SI with text, performance variability in SI
1. Introduction

Linguistic interference in interpreting is a well-known phenomenon, and warnings against it are frequent in the literature. Most authors agree that interference is a problem to be avoided because of its negative impact on interpreting quality. However, to date few systematic empirical research projects have been carried out, and little is known about the types and actual occurrence of INT in interpretations, or about the influence of parameters such as language pair, A-B vs. B-A directionality, or beginners vs. professionals. This is surprising, considering the long tradition of error analysis in IS and the current interest among researchers in issues of interpreting quality.
The results of a pilot study carried out at the University of Graz (Schneider 2007) showed that INTs seem to be a characteristic feature of interpretations and, owing to their high frequency, have a strong impact on interpreter output. A larger case study, building on the pilot study and investigating further the frequency and types of INT observed in simultaneous interpretations, thus seemed a logical next step. The aim of the case study is twofold. Firstly, different interference “typologies” are discussed with regard to their relevance and practical usability as parameters in empirical SI (simultaneous interpreting) research, based on the Graz pilot study (Schneider 2007). Secondly, two hypotheses are tested. Hypothesis 1 is mentioned by various authors with regard to SI with text: owing to the double input (visual + auditory), interferences are suggested to be more frequent when interpreters use the written manuscript while simultaneously interpreting a read-out speech (e.g. Daniel Gile in his Effort Model for SI with text; Gile 1995). Hypothesis 2 states that, since interpreters are aware of the risk of interference, the possibility of preparing the speaker’s manuscript in advance reduces the frequency of interference.

2. Interference in different disciplines

Linguistic interference is generally defined as “[t]hose instances of deviation from the norms of either language which occur in the speech of bilinguals as a result of language contact” (Weinreich 1953: 1). While Weinreich was the first to study interference in bilinguals systematically, thus establishing the field of contact linguistics, interference has also been a subject of research in comparative linguistics, psychology and foreign language learning. It has to be pointed out, however, that in all these disciplines there is terminological confusion as to the use of the term interference and the notion of interference in general.
While the phenomenon is sometimes referred to in the literature as interference, the terms (negative) transfer, inhibition, transference, cross-linguistic influence, code switching and borrowing are also used to describe identical or at least very similar phenomena. The use of some of these terms can be explained by the fact that while some disciplines focus on parole-related interference (e.g. research on (foreign) language learning), others mainly deal with INT related to langue (e.g. comparative linguistics, contact linguistics), that is, originally parole-related INTs that are eventually integrated into a language system as the result of sustained language contact. In contact linguistics and comparative linguistics various attempts have been made to categorise different types of INT, sometimes resulting in very complex and detailed typologies that seem of limited use for the analysis of larger quantities of data (e.g. Carstensen 1968; Tesch 1978). The most frequent classification found in the literature is the basic distinction between INT related to phonology, lexicon and grammar. In any case, INTs are defined as errors or deviations from linguistic norms, as the negative results of the influence of one language on another.

2.1 Interference in Translation Studies
This negative view of INT is also predominant in TS: “translation interference spoils the target text and introduces elements alien to it” (Râbceva 1989: 88). Translation, which can be considered a case of direct language contact, seems particularly prone to INT (cf. Hansen 2002). Most texts about interference in translation are of a didactic nature. Interference typologies are less frequent than in linguistics, and contributions are often based on rather unsystematic descriptions of personal observations. Many authors refer to INT when discussing typical student errors such as the use of false friends. Literal translation and translationese are other problems associated with the presence of INTs in translations (Kussmaul 1995; Hönig 1997; Hansen 2002). INT is considered a typical feature of semi-professional translations, the general advice being that it should be avoided by applying appropriate translation techniques such as thorough semantic analysis of the source text and a concentration on top-down processes. Very few contributions address linguistic interference from a more theoretical point of view. Some authors discuss mainly the causes of interference phenomena and try to explain them within different frameworks, such as the theory of translatorial action (Holz-Mänttäri 1989), scenes-and-frames semantics (Schäffner 1989) or Descriptive Translation Studies (Toury 1995). In Translation Studies, INT can also be defined more broadly as a projection of characteristics of the source text into the target text resulting in a violation of parole-related target-text norms; on this view, interference can be lexical, thematic, micro- and macrotextual, situational and cultural (Kupsch-Losereit 1998). However, very few authors (e.g. Hansen 2002; Horn-Helf 2005) follow up on this broad definition of INT, which does not concentrate exclusively on linguistic aspects.

2.2 Interference in Interpreting Studies
“Interferences […] are a well-known form of target-text contamination with source-culture material” (Pöchhacker 1994b: 176). As in TS, linguistic interference in IS is mentioned mostly in didactic texts as a problem to be avoided, e.g. through deverbalisation (théorie du sens; Seleskovitch and Lederer 2002). General comments on INT and warnings against it are frequent (Pöchhacker 1994b; Gile 1995), but very few empirical case studies have begun to investigate the actual occurrence of INT in interpreting and the influence of parameters such as language pair, A-B vs. B-A directionality, or beginners vs. professionals on the frequency and type of interferences (e.g. Hack 1992; Kock 1993; Garwood 2004). Empirical data on the quantity and types of interferences in interpreting products are rare. Most empirical studies carried out in the field use a qualitative approach and are of an explorative nature. These studies focus mainly on student performance in the classroom or in exam situations (e.g. Hack 1992; Kock 1993; Russo and Sandrelli 2003; Ballardini 2004) and aim to describe different types of INT. The mode of interpreting that has received most attention so far is simultaneous interpreting. The use of interference typologies is quite disparate: some authors do not use interference typologies as such and limit their studies to the description of single types of INT (mostly more recent contributions such as Ballardini 2004 and Garwood 2004), while others arrive at elaborate and detailed typologies. Kock (1993), for example, uses a typology comprising phonological, syntactic, grammatical, lexical, cultural, pragmatic, textual, intralingual and “deliberately used” INTs, as well as a type of INT called the simultaneous short circuit. Hack (1992) differentiates between phonological, morphological, syntactic, lexical and semantic INT, as well as INT in grammatical agreement, false friends, blends, neologisms and slips of the tongue. However, all authors using such detailed typologies report difficulties in assigning individual interferences to the categories established, and some resorted to introducing specific categories for “borderline cases”; this calls into question the practical usability of these typologies and the comparability of the data obtained with them (e.g. Hack 1992: 67).
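The comparability problem described above is partly a schema-design issue: a small, closed set of categories can be enforced mechanically when a corpus is annotated. As a minimal illustration (the five category names follow the typology introduced in Section 3 below; the data structures and subject IDs are invented for the sketch, not taken from any of the studies cited):

```python
# Illustrative sketch: a closed annotation scheme for INT instances.
# Category names follow Schneider's (2007) typology (Table 1);
# everything else here is hypothetical.
from dataclasses import dataclass
from enum import Enum


class IntType(Enum):
    PHONOLOGICAL = "phonological"
    LEXICAL = "lexical"
    MORPHOSYNTACTIC = "morphosyntactic"
    SHORT_CIRCUIT = "simultaneous short circuit"
    ST_AGREEMENT = "grammatical agreement with ST elements"


@dataclass
class IntInstance:
    interpreter: str   # subject ID
    source: str        # triggering source-text element
    target: str        # deviant target-text element
    int_type: IntType  # must be one of the five closed categories


def annotate(interpreter: str, source: str, target: str, label: str) -> IntInstance:
    """Create an annotation; labels outside the closed set raise ValueError."""
    return IntInstance(interpreter, source, target, IntType(label))


ex = annotate("S01", "health insurance", "Gesundheitsversicherung", "lexical")
```

A label such as "prosodic" would be rejected outright, which is precisely what an open-ended typology with ad-hoc "borderline" categories cannot guarantee.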
Methodological problems in assigning INTs to different INT types are also mentioned by Stummer, who undertook the most exhaustive quantitative study to date on the occurrence of INT in simultaneous interpreting (Stummer 1992). However, Stummer’s most salient result, namely the high variability among interpreters in terms of the incidence of INT, was also confirmed both in Schneider’s pilot study and in the present case study.

3. An interference typology for empirical SI research

For the purpose of this study, INT is defined as the result of the auditory and/or visual influence of the source language (SL) or source text (ST) on structures or elements of the target text (TT) that results in a deviation from the norms of the target language (TL). In view of the difficulties encountered by authors in using their interference typologies for the analysis of interpretations in IS, Schneider (2007) developed a new classification with an intentionally limited number of clearly defined categories. Several types of INT mentioned in the literature were not included in the typology, either because they are not covered by the definition of INT used here (e.g. intralingual INT) or because they were irrelevant and/or methodologically difficult or even impossible to determine in the corpus (e.g. receptive/productive INT, or INTs deliberately used by the interpreter). The typology consists of two macro categories (INTs unrelated to SI and INTs specific to SI), which are further divided into a total of five types of INT (see Table 1).

Table 1. Interference typology

INT unrelated to the SI situation    SI-specific INT
Phonological INT                     Simultaneous “short circuit”
Lexical INT                          Grammatical agreement with ST elements
Morphosyntactic INT

The first category comprises types of INT that are not specific to SI and can also be observed in unmediated multilingual communication. The three types of INT in this category are those most often referred to in the literature: phonological, lexical and morphosyntactic INT. If the interpreter follows the phonological rules of the SL rather than those of the TL in pronouncing a TL element, causing a deviation from the norms of the TL, this deviation is considered a phonological INT.

(1) “English” pronunciation of German names/terms: Professor Mugler ['mәglә] – Professor Mugler ['mәglә]

Lexical INT is either the direct adoption of an SL element into the TT without translation, the use of a TL equivalent for an SL element with the wrong semantic, connotative or functional value, or the creation of a neologism on the model of the SL. In all three cases the norms of the TL are violated.
(1) health insurance – Gesundheitsversicherung
(2) SMEs – SMEs
(3) business environment – (wirtschaftliche) Umwelt
(4) technical assistance – technische Assistenz
(5) stem from the fact – stammen aus der Tatsache
Heike Lamberger-Felber & Julia Schneider

In the case of morphosyntactic INT, elements in the TL are arranged according to the syntactic rules of the SL, morphological structures of the SL are copied in the TL and/or function words are used following the example of the SL. The common denominator of these varying instances of INT is thus the violation of the morphosyntactic norms of the TL.
(1) small and medium-scale industries – klein- und mittelgroße Betriebe
(2) Use of prepositions:
a. the importance of small and medium-sized enterprises in their economic cooperation – die Bedeutung der Klein- und Mittelbetriebe in der Zusammenarbeit
b. benefit from – Nutzen ziehen von
c. in 1993 – in 1993
(3) Use of articles:
a. will benefit particularly from improvements in the business environment – … werden vor allem von Verbesserungen in dem geschäftlichen Umfeld
b. over the past decade – in dem letzten Jahrzehnt
c. in the form of – in der Form von
d. no article is used with “UNIDO” in the TT
(4) Constructions with of: 700 million dollars of investment – 700 Millionen Dollar von Investitionen
(5) Syntax:
a. This is why as early as 1983 the United Nations Economic Commission for Europe … – das ist der Grund, warum im Jahre 1983 die UN-ECE …
b. the effort of the self-proclaimed do-gooders has reached almost ridiculous proportions in the US – … haben nun die Anstrengungen der selbst erklärten Wohltäter lächerliche Ausmaße in den Vereinigten Staaten angenommen
(6) Transposed digits: eighty-three – achtunddreißig

The second category of INT comprises interferences that can be considered specific to SI. The type of INT known as “simultaneous short circuit” (“simultaner Kurzschluss”, a term coined by Kock 1993: 55) is defined by Lamberger-Felber (1998: 114) as a type of temporal INT that originates in the SI process and results in a wrong linking of information in the TT; new SL-information interferes with information that has already been processed but not yet verbalised by the interpreter.
(1) This fall after our meeting, your own US chapter will be meeting in San Diego, next year, in 1993, the ISBC will be meeting in Warsaw, the first time ever in a post-communist country. – Wir wissen, dass in San Diego zum
Beispiel eine ähnliche Konferenz abgehalten werden wird und wir werden uns auch hier mit den postkommunistischen Ländern beschaffen.
(2) to satisfy the need of virtually every citizen – um die virtuellen Bedürfnisse … zu erfüllen
The last type of INT is characterised by grammatical agreement of TL-elements with ST-elements in number, person or gender. While the outcome of this type of INT may look similar to morphosyntactic INT, the difference lies in the fact that the INT is not langue-related, but rather due to the direct acoustic and/or visual impact of a specific ST construction on TL production. As such, it is a type of INT that is very specific to the SI-situation.
(1) the role that the small business sector plays – welche Rolle die Kleinbetriebe in … spielt
The interference typology was tested in Schneider’s pilot study and proved useful for the present study since all INTs in the corpus could be categorised according to the typology. The main objective of the study being a quantitative analysis of INT, it was deemed more important to establish clearly defined and easily distinguishable categories than to provide a detailed qualitative analysis of INT types.

4. Case study

Building on the results of the pilot study, a case study was carried out on a larger corpus, using a slightly modified method and testing an additional hypothesis. Finally, the data were examined for possible correlations with other parameters such as errors, omissions, time lag and length of interpretations.

4.1 Corpus, method and scope of the study
The three input speeches I, II and III – all read out by the speaker using a written manuscript – were part of a larger conference corpus recorded and described on the basis of a text description model by Pöchhacker (1994a). Twelve conference interpreters with at least ten years of professional experience were asked to interpret the three audio-recorded speeches of 8-10 minutes length each from English into their A-language German. For the purpose of the study, the interpreters were divided into three groups A, B and C. Each group had to interpret one speech with the speaker’s text, which had been given to them with enough time to prepare it for use in the booth (PT, prepared text), one speech with the manuscript available for use in the booth but without preparation time (T), and the third speech without
Table 2. Experimental setup

           Speech I   Speech II   Speech III
Group A    PT         T           O
Group B    T          O           PT
Group C    O          PT          T
ever seeing the speaker’s text (O). Interpretations were recorded and transcribed. The data thus obtained were used by Lamberger-Felber (1998) to investigate other parameters as mentioned above. The pilot study focused on the interpretations of speech III only; the data obtained were incorporated into the present case study. Interferences in the target text were determined on the basis of the definitions in Schneider’s interference typology. In order to reduce subjectivity, both authors first determined INTs independently on the basis of both transcripts and recordings. As a second step, only those INTs were counted that both authors had marked and for which both authors agreed on the type of INT according to the typology used. Each individual instance of INT was counted as one unit, even if it appeared within a sequence already marked as INT (e.g. a lexical INT within a morphosyntactic INT). Results were then analysed to test two hypotheses on the influence of working conditions on SI with text:

Hypothesis 1: Owing to double input (auditive and visual), INTs are more frequent in SI with text.
Hypothesis 2: Preparation reduces the frequency of INT in SI with text.

On the basis of the data obtained, the frequency of different types of INT and possible correlations with other parameters were established. The aim of the study was to investigate the phenomenon of linguistic interference with regard to frequency and type of occurrence under different working conditions. While the underlying definition of “interference” may be considered quality-related from a linguistic viewpoint, the frequency of INT in an interpretation cannot be considered a quality parameter as such; no data are available as to the influence of INT on quality evaluations by different groups (interpreters, users, teachers etc.). The discussion of the results obtained from a quality perspective would have to be the subject of a further study.
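The two-rater counting procedure described above (independent marking by both authors, keeping only INTs on which both agreed in position and type, normalised per 100 source-text words) can be sketched as follows. All positions, types and word counts in this sketch are invented for illustration; they are not data from the study.

```python
# Sketch of the agreement-based counting procedure; hypothetical data.

def agreed_ints(rater_a, rater_b):
    """Keep only INTs that both raters marked at the same transcript
    position AND assigned to the same type of the typology."""
    return {pos: typ for pos, typ in rater_a.items()
            if rater_b.get(pos) == typ}

def ints_per_100_words(n_ints, st_word_count):
    """Normalise an INT count to a rate per 100 source-text words."""
    return 100 * n_ints / st_word_count

# Hypothetical markings: transcript position -> INT type.
a = {12: "lexical", 40: "morphosyntactic", 77: "lexical", 90: "phonological"}
b = {12: "lexical", 40: "lexical", 77: "lexical"}

kept = agreed_ints(a, b)                    # positions 12 and 77 survive
rate = ints_per_100_words(len(kept), 1450)  # 2 INTs in a 1450-word ST
print(len(kept), round(rate, 2))            # -> 2 0.14
```

Position 40 is dropped because the raters disagree on the type, and position 90 because only one rater marked it, mirroring the conservative counting rule used in the study.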
4.2 Results
4.2.1 Frequency of interference and influence of working conditions
As in the pilot study, INT proved to be a very frequent phenomenon; all interpreters produced INTs in all their interpretations, and the total number of INTs (240, 217 and 311 respectively in the interpretations of the three speeches) confirmed the relevance of the subject under study. A comparison between the three speeches shows an average total of 31.1 INTs per interpretation of speech I, 26.2 for speech II and 29.6 for speech III. In order to check the potential influence of the availability of the speaker’s text in the booth on interpretation output, the average number of INTs produced under each of the three working conditions was compared (see Figure 1). Results show that while interpreters working with a prepared text (PT) produced more INTs (an average of 2.7 per 100 words/ST) than those without (O; 2.3 INTs/100 words), there was almost no difference between working without (O) and working with an unprepared manuscript (T; 2.3 INTs/100 words). At the same time, variability among the subjects was high for all three working conditions: for interpreters working without text (O), the standard deviation (SD) was ±0.76 for an average of 2.29 INTs per 100 words/ST, for group PT the SD was ±0.75 for an average of 2.72 INTs per 100 words/ST, and for interpreters working with an unprepared text (T) the SD was as high as ±1.39 for an average of 2.28 INTs/100 words/ST. A comparison of the performances with/without text for each of the three speeches produced the following results:
Figure 1. Average number of INTs per 100 words/source text when working without text (O), with text (T) and with prepared text (PT)
Figure 2. Average number of INTs per 100 words/ST when working with/without manuscript
Figure 3. Average number of INTs per 100 words/ST under all working conditions
For two out of three speeches, the average number of INTs was higher for interpreters working without text (O) than for those working with text (T+PT). Only for speech I did the use of the speaker’s text increase the average number of INTs. In order to check the influence of text preparation, results of the two working conditions with text but without preparation (T) and with text and preparation (PT) were compared: The group using a prepared manuscript in the booth produced the highest average number of INTs for two out of three speeches (I and II). For speech III, interpreters working without text (O) showed the highest incidence of INT, followed by those working with text and preparation. For two out of three speeches the frequency of INT was lowest in the group working with an unprepared text (T). As a next step, individual performances of all twelve interpreters were checked for all three speeches (see Figures 4, 5 and 6).
Figure 4. Number of INTs per interpreter for speech I
Figure 5. Number of INTs per interpreter for speech II
For all three speeches, the variability among interpreters is considerable; the standard deviation for speech I is ±1.22 (average 2.59 INTs/100 words/ST), for II ±0.96 (average 2.19) and for III ±0.83 (average 2.47). Figure 7 gives an overview of interpreters’ performance for all three speeches:
Figure 6. Number of INTs per interpreter for speech III
Figure 7. Number of INTs/100 words/ST for all interpreters
The total number of INTs varies considerably among interpreters: for speech I between 9 (B4) and 45 (B1), for speech II between 3 (A3) and 29 (A2) and for speech III between 13 (C1) and 42 (B1). In two instances, the interpretations with the highest and the lowest number of INTs were produced under the same working condition (T). The average number of INTs each interpreter produced per speech varies between 15 (C1) and 36.33 (B1). As shown in Figure 7, many interpreters perform similarly in all three speeches (low SD, e.g. ±2.52 for A2 for an average of 28.67 INTs per speech) while others show major deviations between interpretations (SD ±12.5 for B1 for an average of 36.33 INTs per speech).
Figure 8. Average number of INTs per 100 words/ST with/without text
In order to see to what extent the availability of the speaker’s text had an influence on the frequency of INT, individual performances under both text conditions (T and PT) were compared with the same subject’s interpretation without text (O). The data show that for most interpreters, the use of the speaker’s manuscript does not have a strong impact on the frequency of INT. Nevertheless, nine out of twelve subjects (75%) produced on average fewer INTs when working without text than when working with text. At the same time, half of the subjects had their lowest INT rate (out of three) when working with text (see Figure 8). As to the influence of text preparation on the frequency of INT, the evaluation had to consider the subjects’ indications as to their de facto working condition; interpreter A1 did not use the text he/she was given immediately prior to the interpretation, thus de facto working without text (O) instead of with text (T); and interpreter B2 had not prepared the text he/she was to interpret under condition PT (text and preparation) and was thus working twice under the same condition T (text without preparation). The influence of preparation was therefore only compared for 10 interpreters, 6 of whom (60%) produced more interferences per 100 words of the source text when working with a prepared text, while for 4 (40%), preparation of the manuscript reduced the frequency of INT (see Figure 9).
Figure 9. Average number of INTs per 100 words/ST with/without preparation (PT/T)
4.2.2 Frequency of different types of INT While all INT types established in Schneider’s typology were found in the corpus, their frequency is very variable. In fact, 86% of INTs are either of lexical (45%) or morphosyntactic (41%) nature, followed by 9% INTs due to “simultaneous short circuit”, 4% to grammatical agreement of elements of the target text with elements of the source text and only 1% to phonological interference from the source text (Figure 10). Only 13% of all INTs found in the corpus are specific to the simultaneous interpretation situation (simultaneous short circuit and grammatical agreement).
Figure 10. Frequency of different types of INT
Figure 11. Frequency of different types of INT according to source text
In order to see whether this distribution varies depending on input, the frequency of different INT types was also calculated separately for each of the three source texts, showing considerable differences; while lexical and morphosyntactic INT account for the vast majority of INTs in interpretations of all three speeches, morphosyntactic interference alone accounts for 59.9% of all INTs for speech I as opposed to only 27.1% of INTs for speech III. For the interpretations of speech III, it is lexical INT that ranks highest, at 50.48% as opposed to 32.9% for speech I. For all three speeches, phonological interference is the least frequent type of INT, with not a single occurrence in interpretations of speech II (Figure 11). When comparing the frequency of different INT types among subjects, the result is similar; again, morphosyntactic and lexical INT account for the vast majority of INTs for all twelve subjects. But while for interpreter A4 the highest occurrence of INT is morphosyntactic, with 63.4% of all INTs produced (compared to 22.7% for interpreter B3), s/he ranks last for the relative frequency of lexical INT (26.8%, compared to 58.2% for C2). While grammatical agreement is present in eleven out of twelve subjects’ interpretations, only four out of twelve interpreters produce one or more phonological interferences. The incidence of INT produced by simultaneous short circuit varies from 1.4% (C4) to 17.8% (C1) (Figure 12).
Figure 12. Frequency of different types of INT per interpreter
4.2.3 Correlations with other parameters
Linguistic interference is only one parameter among many that can be isolated and quantified in interpreter output. Since most studies concentrate on one or very few such parameters, possible correlations which might help to further understand the process of interpretation are rarely investigated. The authors therefore decided to make use of the data already quantified for the same corpus (for methodology and results see Lamberger-Felber 1998) in order to look at possible correlations between interference and other output- or input-specific parameters.

4.2.3.1 Output-specific parameters
Using Spearman’s rank correlation, the performances of the twelve subjects were ranked for the following parameters: errors (E), omissions (O), time lag, length of interpretations and lexical variability (Table 3). The ranking reflects an interpreter’s position within the group of twelve for his/her overall interpretation performance (all three speeches). The method should show whether e.g. interpreters who produced relatively more INTs in their interpretations in comparison to other subjects also tended to produce more errors or omissions, and whether there is any systematic relation between INTs and time lag, length of interpretations and lexical variability. Spearman’s rank coefficient showed no statistically significant correlation between interference and any of the above-mentioned parameters. What is more, ρ = -0.08 for the correlation between INTs and semantic deviations (E+O), indicating complete independence of both parameters. Non-significant correlations were
Table 3. Ranking of interpreters for different parameters

      INTs   O      E     O+E   Time lag   Length   Lexical variability
A1    3      4      3     4     8          3        2
A2    2      10.5   10    11    11         4        4
A3    4      10.5   1     10    10         1        1
A4    11     5      5.5   6     6          8        5
B1    1      3      5.5   3     1          12       12
B2    7.5    8.5    9     8     9          2        6
B3    10     6      8     5     5          9        10
B4    12     7      12    9     4          7        9
C1    9      1      2     1     3          11       11
C2    7.5    8.5    7     7     7          5        7
C3    6      2      4     2     2          10       3
C4    5      12     11    12    12         6        8
found between time lag and INTs (ρ = -0.257; longer time lag – fewer INTs), between length of interpretations and INTs (ρ = 0.208; longer interpretations – more INTs) and between lexical variability and INTs (ρ = 0.285; less lexical variability – more INTs).

4.2.3.2 Correlation between interference and input-specific parameters
A comparison of each interpreter’s interference ranking for the three speeches shows a significant correlation between speeches I and II (p = 0.01) and between speeches I and III (p = 0.01). The correlation of INT frequency for each interpreter between speech II and speech III points in the same direction, but does not reach significance (p = 0.483). The three input speeches had been described previously by Pöchhacker, including numeric data for speed of presentation, intonation and dynamics of the speaker (Pöchhacker 1994a). A negative correlation was found between speed of presentation and INT; the number of INTs was highest for the “slowest” speech I, and lowest for the “fastest” speech II. No correlation was found between intonation/dynamics as quantified by Pöchhacker and the frequency of INT. In the experiment, interpreters were also asked to a) evaluate the difficulty of each of the three interpreted speeches and b) compare their performance for each of the three speeches. The frequency of interference was highest for speech I, which had been judged easiest by 100% of the subjects. At the same time, 80% of the subjects considered their interpretation of this same speech (I) to be the best of the three.
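The near-zero rank correlation between INTs and semantic deviations (E+O) reported in 4.2.3.1 can be reproduced from the rankings in Table 3. The sketch below computes Spearman’s ρ as the Pearson correlation of the (already tie-corrected) ranks; the small discrepancy against the -0.08 reported in the text presumably stems from differences in tie handling.

```python
from math import sqrt

# Rank data transcribed from Table 3 (interpreters A1..C4).
ranks_int = [3, 2, 4, 11, 1, 7.5, 10, 12, 9, 7.5, 6, 5]  # INTs
ranks_oe  = [4, 11, 10, 6, 3, 8, 5, 9, 1, 7, 2, 12]      # O+E

def spearman(x, y):
    """Spearman's rho as the Pearson correlation of rank values
    (the ranks above already carry tie-corrected entries like 7.5)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rho = spearman(ranks_int, ranks_oe)
print(round(rho, 2))  # -0.09, close to the -0.08 reported in the text
```

A ρ this close to zero with n = 12 is far from any conventional significance threshold, which is what licenses the paper’s conclusion that INT frequency and semantic deviations are independent in this corpus.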
4.3 Discussion
While no conclusions can be drawn as to the impact of INT on interpretation quality, the results of this study show linguistic interference to be a relevant subject of study in interpreting research. INT is very frequent in interpretations by professional conference interpreters and seems to occur with a certain regularity regardless of the source text. Since only three source texts were used, which had been checked for comparability, this would have to be confirmed using a variety of different input texts. All twelve interpreters produced interferences, albeit with considerably varying frequency. As to the influence of using a written manuscript in the booth on the incidence of INT, both hypotheses have to be rejected for this corpus.

4.3.1 Hypothesis 1: Owing to double input (auditive and visual), INTs are more frequent in SI with text
For two out of the three speeches, the group working without the speaker’s text (O) produced more INTs than those interpreters who did have access to the text (PT+T). At the same time, a comparison of individual performances shows that nine out of twelve interpreters produced fewer INTs on average when working without text. The overall average of INTs per 100 words of the source text, however, was highest when interpreters worked with a prepared text in the booth. This rather unclear picture therefore does not confirm the negative impact of interpreting from a written source on the frequency of INTs found by Agrifoglio in a comparison between SI, CI (consecutive interpreting) and sight translation (Agrifoglio 2004: 51ff.). A direct comparison between sight translation and SI with text could provide interesting insights in this respect.
The results obtained in the present study, however, seem to indicate that:
– variability among subjects is higher than the impact of the specific working conditions on the frequency of INT;
– performance varies considerably depending on whether interpreters worked with a prepared text or not, thus making it difficult to speak of “SI with text” as a single mode of interpretation;
– variability among interpreters is highest when working with an unprepared text in the booth, which might be an indication of the specific strategies required (and more or less successfully mastered) for this mode of interpretation;
– the small number of subjects in each of the three groups (four) and the possible influence of uncontrolled input-specific parameters (different results for different speeches) make it impossible to draw further conclusions;
– more data will be needed.
4.3.2 Hypothesis 2: Preparation reduces the frequency of INT in SI with text
As indicated above, there is indeed considerable difference between the two text conditions. What comes as a surprise, however, is the fact that the average frequency of INT per 100 words was higher when subjects interpreted using a prepared manuscript than when using an unprepared text. This result is confirmed for all three speeches, and the fact that 60% of the subjects produced more INTs when working with a prepared text seems to confirm it as well. While it seems unlikely that text preparation as such should increase the frequency of INT in SI, this may well be an indication of pragmatic choices made by interpreters; working with a prepared text may for example lead interpreters to work with a longer time lag, thus putting them under pressure when speakers accelerate. It may even cause them to be less attentive towards both input and output owing to the lack of the “surprise factor”. It would be most interesting to see to what extent different types of interference are considered acceptable strategic choices by professional interpreters, who may well consciously opt for a solution “closer to the original” in order to avoid semantic deviations.

4.3.3 Frequency of different types of INT
The vast majority of INTs, independent of source text and intersubject differences, are of a lexical or morphosyntactic nature and thus not exclusive to the simultaneous interpretation situation. This confirms that while the contact between two languages may be more immediate for the simultaneous interpreter than for other bilingual individuals, the (qualitative) impact of this contact is very similar for both groups. The prevalence of the two above-mentioned categories justifies specific attention in interpreter training.
In order to provide targeted input to students, however, a more detailed description of INT types such as the one provided by Hansen for written translations (Hansen 2006: 112ff.) would certainly also be useful for SI. The fact that the distribution of INTs among the two categories varies considerably for the three speeches could be seen as a possible indication of the influence of input on the frequency of different types of INT. Also, lexical INT and morphosyntactic INT seem to be inversely proportional; while the overall percentage of the two categories is similar for all three speeches, the incidence of lexical INT is lower when there are more morphosyntactic INTs, and vice versa. However, the very limited corpus of three speeches does not allow any further conclusions. At the same time, the considerable differences among subjects as to which of the two INT categories is more prevalent in their output would call for more detailed auto-evaluation among students in order to set individual training goals. The category of “simultaneous short circuit” seems to be a very individual phenomenon which hardly affects some interpreters while causing up to 20% of the total number of INTs for others. Both phonological interference and
grammatical agreement with source-text elements, quantitatively speaking, do not have a major impact on interpreters’ output.

4.3.4 Correlation with other parameters
The data of this corpus do not allow the authors to draw any conclusions as to the correlation between linguistic interference and other product-related parameters. This may well be due to the small number of subjects and the limited amount of data. It is, however, surprising that there should be complete independence of INT and semantic deviations in interpretations; both might be considered, to a certain extent, non-voluntary deviations from interpretation norms, and both have often been mentioned in the context of processing capacity problems. If this result were to be confirmed for a larger corpus, it would mean that
– interpreters whose output is linguistically closer to the original, thus containing more interference, do not at the same time produce more semantic errors in their interpretations, but
– this linguistic closeness to the source text does not help them to avoid such deviations either.
The correlation between interpreters’ performance for the different speeches confirms that the frequency of interference is indeed part of what characterizes individual interpreter performance. The fact that the interpretations were made under different working conditions for each of the speeches seems to indicate that these working conditions are less relevant for the frequency of INT than each individual interpreter’s “personal style”. The data seem to indicate that the frequency of INT decreases when the speaker’s speed of delivery increases. Since only three speeches with similar rates of presentation were used in the study, these findings would have to be confirmed for a larger corpus. Interestingly enough, it was the speech that all interpreters had considered easiest that caused the highest average number of INTs per 100 words/ST.
This might be a confirmation of Dam’s results obtained in a case study on the option between form-based and meaning-based interpreting of texts of varying difficulty (Dam 2001). At the same time, the fact that it was for this same speech that 80% thought they had produced their “best” interpretation shows that interferences, unlike semantic deviations (E+O), are not perceived as a problem during interpretation, or are at least considered less relevant for quality evaluation by the interpreters themselves.
5. Conclusion and perspectives

Because of the limited amount of data and the high variability among subjects’ interpretations, results from this case study can only offer tentative insight into the phenomenon of linguistic interference in SI with/without text. Within these limits, results seem to indicate that the mode of interpretation (with/without text) has less impact on frequency of INT than individual interpreting strategies applied by different professional conference interpreters with comparable qualification. The vast majority of INTs are either lexical or morphosyntactic interferences, with varying predominance of either category for the three speeches and among the twelve subjects. Moreover, INT seems to be independent of other output parameters such as semantic deviations, time lag, length of interpretations or lexical variability. In spite of the methodological limitations of this study, the sheer frequency of interference in interpreter output seems to warrant further investigation into the phenomenon of INT, using different input texts, language pairs etc. The impact, if any, of frequency and/or type of INT on quality evaluation by different groups also merits further attention.

References

Agrifoglio, M. 2004. “Sight translation and interpreting: A comparative analysis of constraints and failures.” Interpreting 6 (1): 43–67.
Ballardini, E. 2004. “Interferenze linguistiche nella traduzione a vista dal francese in italiano: appunti a margine di un corso di interpretazione di trattativa.” In Lingua, mediazione linguistica e interferenza, G. Garzone and A. Cardinaletti (eds), 273–285. Milano: Franco Angeli.
Carstensen, B. 1968. “Zur Systematik und Terminologie deutsch-englischer Lehnbeziehungen.” In Wortbildung, Syntax und Morphologie. Festschrift zum Geburtstag von Hans Marchand am 1. Oktober 1967, H.E. Brekle and L. Lipka (eds), 32–45. The Hague/Paris: Mouton.
Dam, H.V. 2001. “On the option between form-based and meaning-based interpreting: The effect of source text difficulty on lexical target text form in Simultaneous Interpreting.” The Interpreters’ Newsletter 11: 27–55.
Garwood, C. 2004. “L’interferenza nell’interpretazione simultanea: il caso della lingua inglese.” In Lingua, mediazione linguistica e interferenza, G. Garzone and A. Cardinaletti (eds), 303–323. Milano: Franco Angeli.
Gile, D. 1995. Basic Concepts and Models for Interpreter and Translator Training [Benjamins Translation Library 8]. Amsterdam/Philadelphia: John Benjamins.
Hack, A.-C. 1992. Interferenzen beim Simultandolmetschen. Versuch einer Erklärung auf der Grundlage der Erkenntnisse der Zweitsprachenerwerbsforschung und des Bilingualismus. Heidelberg: Diploma Thesis.
Hansen, G. 2002. “Interferenz bei Referenz im Übersetzungsprozess.” In Linguistics and Translation Studies. Translation Studies and Linguistics [Linguistica Antverpiensia], L. van Vaerenbergh (ed.), 303–326. Antwerpen: Hogeschool Antwerpen.
Hansen, G. 2006. Erfolgreich Übersetzen. Entdecken und Beheben von Störquellen [Translationswissenschaft Band 3]. Tübingen: Narr.
Holz-Mänttäri, J. 1989. “Interferenz als naturbedingtes Rezeptionsdefizit – ein Beitrag aus translatologischer Sicht.” In Interferenz in der Translation [Übersetzungswissenschaftliche Beiträge 12], H. Schmidt (ed.), 129–134. Leipzig: Verlag Enzyklopädie.
Hönig, H.G. ²1997. Konstruktives Übersetzen [Studien zur Translation 1]. Tübingen: Stauffenburg.
Horn-Helf, B. 2005. “Interferenzprobleme beim Übersetzen technischer Texte.” Fachsprache 27 (3–4): 139–158.
Kock, K. 1993. Die Rolle der Interferenz beim Simultandolmetschen. Handlungsbedingungen und Erscheinungsformen. Heidelberg: Diploma Thesis.
Kupsch-Losereit, S. 1998. “Interferenzen.” In Handbuch Translation, M. Snell-Hornby et al. (eds), 167–170. Tübingen: Stauffenburg.
Kußmaul, P. 1995. Training the Translator [Benjamins Translation Library 10]. Amsterdam/Philadelphia: John Benjamins.
Lamberger-Felber, H. 1998. Der Einfluss kontextueller Faktoren auf das Simultandolmetschen. Eine Fallstudie am Beispiel gelesener Reden. Graz: Doctoral Thesis.
Pöchhacker, F. 1994a. Simultandolmetschen als komplexes Handeln [LIP – Language in Performance 10]. Tübingen: Narr.
Pöchhacker, F. 1994b. “Simultaneous Interpretation: ‘Cultural Transfer’ or ‘Voice-Over Text’?” In Translation Studies. An Interdiscipline [Benjamins Translation Library 2], M. Snell-Hornby, F. Pöchhacker and K. Kaindl (eds), 169–178. Amsterdam/Philadelphia: John Benjamins.
Râbceva, N.K. 1989. “Conceptual Background for Interlinguistic Interference.” In Interferenz in der Translation [Übersetzungswissenschaftliche Beiträge 12], H. Schmidt (ed.), 88–92. Leipzig: Verlag Enzyklopädie.
Russo, M. and Sandrelli, A. 2003. “La direccionalidad en interpretación simultánea: un estudio sistemático sobre el tratamiento del verbo.” In La direccionalidad en traducción e interpretación: perspectivas teóricas, profesionales y didácticas, D. Kelly et al. (eds), 407–425. Granada: Atrio.
Schäffner, C. 1989. “An account of knowledge use in text comprehension as a basis for frame-based interference.” In Interferenz in der Translation [Übersetzungswissenschaftliche Beiträge 12], H. Schmidt (ed.), 65–72. Leipzig: Verlag Enzyklopädie.
Schneider, J. 2007. Die Quantifizierung von Interferenzen beim Simultandolmetschen mit Text: Eine Pilotstudie. Graz: Diploma Thesis.
Seleskovitch, D. and Lederer, M. ²2002. Pédagogie raisonnée de l’interprétation. Paris/Brussels: Didier Érudition.
Stummer, E. 1992. Interferenzen beim Simultandolmetschen. Eine empirische Untersuchung auf der Grundlage eines psycholinguistischen Modells. Heidelberg: Diploma Thesis.
Tesch, G. 1978. Linguale Interferenz: theoretische, terminologische und methodische Grundlagen zu ihrer Erforschung. Tübingen: Narr.
Toury, G. 1995. Descriptive Translation Studies and Beyond [Benjamins Translation Library 4]. Amsterdam/Philadelphia: John Benjamins.
Weinreich, U. 1953. Languages in Contact. Findings and Problems. The Hague/Paris/New York: Mouton.
Towards a definition of Interpretese
An intermodal, corpus-based study*

Miriam Shlesinger
Bar-Ilan University, Ramat Gan, Israel

Apart from its contribution to the analysis of translated discourse as such, corpus-based translation studies has often involved the comparison of translated corpora and comparable originals, in an attempt to isolate the features that typify translations, whether globally or in a more restricted set. The study reported here applied a similar methodology to the analysis of interpreted discourse, comparing it not to non-interpreted (spontaneous, original) spoken discourse but to its written (translated) counterpart. A computerized analysis of the interpreted outputs of six professional translator-interpreters rendering the same text from their second to their first language in both modalities revealed a set of marked differences between them in terms of richness (type-token ratio) and of a range of lexico-grammatical features. Despite its drawbacks in terms of ecological validity, the methodology used in this study is seen as a tool for extrapolating a set of stylistic and pragmatic features of interpreted – as opposed to translated – outputs, and may constitute an extension of the range of the paradigms available to corpus-based translation studies. A statistical analysis of the morphological data generated pointed to salient differences between the two corpora, and it is these differences that are at the core of the present study. The methodological implications and possible extensions are also discussed below.
Keywords: modality, corpus-based translation studies, intermodal, comparable corpora, tagger, Hebrew
* This research was partially supported by the Israel Science Foundation, grant no. 1180/06. I am very grateful to Noam Ordan of Bar-Ilan University and to Prof. Alon Itai and Dalia Bojan of the Knowledge Center for Processing Hebrew for their generous assistance in processing the data upon which this paper is based. It is thanks to their patience and their expertise that numerous queries could be conducted, using the special KWIC software for the retrieval of occurrences and statistics of automatically tagged Hebrew text: http://yeda.cs.technion.ac.il:8088/queryXML/
1. Introduction

Daniel Gile’s (incredibly prolific) writings have been an inspiration to anyone interested in gaining a better understanding of conference interpreting as a cognitive process, as a product of mental activity, as a skill and as a form of translation. He has broadened and deepened the discussion on each of these topics, and numerous others, and has forced us to step back and take another look (many other looks, in fact). In his sweeping discussion of Translation Research versus Interpreting Research, he speaks of the “scholarly dimensions of the two disciplines” (2004: 15) and of the need to build a scholarly tradition – one which requires, of course, a sustained, ongoing dedication to it on the part of scholars who regard it as a long-term activity, and a willingness to engage in ever-deeper explorations within their preferred paradigm. Indeed, the call for sustained empirical work has been a recurrent theme in Gile’s writings; another has been his appeal for translation research and interpreting research to work together, since “translation and interpreting share much, both as professional activities and as research activities [making them] natural partners in development” (2004: 30). It seems only right to conclude then, as he does, that those of us whose research touches upon both translation and interpreting1 would do well to develop reliable and replicable ways of looking for similarities, as well as differences, between the two modalities.

2. Corpus-based Interpreting Studies – methodological considerations

Corpus-based translation studies has used machine-readable corpora to arrive at generalizations about (translated) language in use rather than language systems in the abstract, and to discern features of translation, whether specific to or independent of any particular language pair, text type, individual translator, level of expertise or historical period.
A small body of studies has involved comparisons between modes of interpreting, in which the simultaneous mode of translation in the spoken modality was compared with sight translation (Agrifoglio 2004), with consecutive interpreting (Lambert 1988; Gile 2001) or with simultaneous interpreting in the signed modality (Russell 2002). Isham (1994, 1995), among others, also compared the spoken and the signed modalities of simultaneous interpreting.

1. Interpreting as used in the present study is confined to the simultaneous mode, the spoken modality and the conference setting. It goes without saying that studies of other modes (e.g. consecutive), of the signed modality and of different settings will further enrich our understanding of each of these, and our ability to isolate the mode-specific, the modality-specific and the setting-specific features of interpreting, as well as the properties common to all of them.
These studies were directed at different aims, and used different methodologies; none of them, however, used a large body of machine-readable data. Broadly speaking, in fact, few (computerized) corpus-based studies have attempted to discern features of the different modes or modalities of translation, or to pinpoint features of interpreted – as opposed to translated – texts, so as to refine our largely intuitive knowledge about the properties of interpreted outputs as such – and, by extension, to shed light on the properties of constrained spoken discourse.

While it has been suggested that the use of machine-readable databases may be “viable and revelatory not only for the study of interpreting, per se, but for translation studies as a whole” (Shlesinger 1998: 486), the method does have its drawbacks, such as the near impossibility of incorporating the full gamut of paralinguistic and prosodic features (Shlesinger 1994; Ahrens 2005), and the labor-intensiveness of the transcription process required (which accounts for the low proportion of transcribed spoken texts in large corpora such as the British National Corpus), though this deterrent may soon be offset in part by the application of speech recognition software. The norms of transcription must also be tailored to the goal at hand, to prevent unwarranted omissions and “corrections”. Borochovsky (2003) refers to the methodological advantages of comparing texts which exist in both written and oral form, such as an oral lecture that has been committed to writing for publication or a talk-show presented with closed captions, but draws attention to the changes effected by the transfer from one medium to the other, including the tendency to omit “superfluous” items and to “correct” register, as well as what is perceived (by the transcriber) as grammatical errors (number, gender, verb form, definite article) or “inappropriate” lexical choices.
Another drawback relates to the exceptionally high number of variables and the challenge of achieving a satisfactory degree of ecological validity (cf. Lindquist & Giambruno 2006; Jakobsen et al. 2007: 228) despite the artificiality and the “denaturing” effect of performing translational tasks under experimental conditions, with no one but the researcher as an audience. The study reported below, based on within-subject variance across the two modalities, is a case in point: it used the outputs of the same (six) participants, all of them professional translators and interpreters, interpreting and later translating the same non-domain-specific source text from their second language, English, into their first, Hebrew. It is uncertain whether the artificiality of the text, written expressly for experimental purposes, detracted in any way from the ecological validity of the study. As for the effect of task repetition: since the translation was performed more than three years after the interpreting task the participants were unlikely to recall either the text itself or the strategies they had used to render it; in any event, possible recall of the strategies used while interpreting would not have compromised the relevance of the written task. In short, the two sets of texts may
be seen as independently produced outputs based on the same input, and the design was one that may arguably provide sufficient grounds for a claim of “other things being equal”; i.e. for regarding modality (written vs. oral) as the main or only variable responsible for the observed differences.

The corpora analyzed were neither parallel (comparing originals and translations) nor comparable in the usual sense (comparing same-language original and translated texts). Rather, they were comparable intermodal – a proposed extension of the categorization proposed by Baker (1995). Baker defined parallel corpora as consisting of original, source-language texts in language A and their translated version in language B. Comparable corpora consist of separate collections of texts in the same language: one comprising original texts in the language in question, the other comprising translations in that language from a given source language or set of source languages. Comparable intermodal corpora, then, would be those consisting solely of translations, in different modalities or in different modes. Granted, comparable intermodal corpora based on the same source text, such as the ones used in the present study, are likely to be experimental rather than authentic; however, comparable intermodal corpora based on different source texts, particularly in the case of large corpora, are no less revealing. The types of corpora are thus as follows:

Table 1. Types of corpora

Category     Original   TT        Modality       Languages     Texts
Parallel     Written    Written   Translation    Bilingual     “Same”
Parallel     Oral       Oral      Interpreting   Bilingual     “Same”
Comparable   Written    Written   Translation    Monolingual   Different
Comparable   Oral       Oral      Interpreting   Monolingual   Different
In the present study, target texts produced in the first two categories were compared:

Table 1a. Types of corpora compared in the present study

Method       ST                                             TT 1   TT 2      Modality    Languages   Texts
Intermodal   Written/written-to-be-read                     Oral   Written   Different   Same        Same
             (not included in the study)
3. Previous corpus-based studies of interpreted versus translated outputs

Corpus-based research involving both written and oral translational outputs is still in its infancy. In one such study (Shlesinger & Malkiel 2005), the authors focused on the processing of cognates. Their findings were seen as providing empirical evidence in support of the claim that interpreters are likelier than translators to opt for the “default option”; i.e. while both translators and interpreters are assumed to begin with formal correspondence (Ivir 1981) and to make recourse to the Minimax strategy (Levý 1967), the latter were found to do so more often, notwithstanding a consciously cultivated strategy of interference avoidance (Gile 1987).

Russo et al. (2006) drew on Laviosa’s (1998) study of lexical density in written translation and used a part-of-speech (POS) analysis to produce a tagged and lemmatized transcript of European Parliament plenary sessions to study interpreted speeches. Noting the role of corpus linguistics in refining the insights gleaned from studies of lexical density, they observed that – contrary to Laviosa’s findings for written translation – the Spanish interpreted speeches under review had a slightly higher lexical density than that of the speeches originally delivered in Spanish. The authors were unable to determine whether this was attributable to typical features of interpreting (such as stylistic or semantic self-corrections or various forms of explicitation), but in accounting for their finding, they underlined the presence of a difference “concerning lexical density in two types of translational activity, written translation and simultaneous interpreting” (p. 247) and proposed extending the study to include consecutive interpreting as well.
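Lexical density, in the sense used by Laviosa and by Russo et al., is simply the share of content words (nouns, verbs, adjectives, adverbs) among all running words. The sketch below illustrates the calculation over toy POS-tagged tokens; the tag names and the example sentence are invented for illustration, not taken from any of the corpora discussed here.

```python
# Content-word tags in a (hypothetical) coarse tag set.
CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}

def lexical_density(tagged_tokens):
    """Share of content-word tokens among all tokens, as a fraction."""
    content = sum(1 for _, tag in tagged_tokens if tag in CONTENT_TAGS)
    return content / len(tagged_tokens)

# Toy tagged sentence: 5 content words out of 8 tokens.
tagged = [("the", "DET"), ("interpreter", "NOUN"), ("spoke", "VERB"),
          ("very", "ADV"), ("quickly", "ADV"), ("in", "ADP"),
          ("the", "DET"), ("booth", "NOUN")]
print(round(lexical_density(tagged), 3))  # → 0.625
```

In practice the counts would come from a tagger’s output over an entire corpus; the definition itself does not change.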
The higher lexical density of the interpreted outputs seems to run counter to the pattern suggested by Halliday (1989), but to be in keeping with that proposed by Shlesinger (1989), whereby interpreting exerts a leveling effect: oral texts become more literate, literate texts become more oral.

Starting from a vantage point similar to that of the Shlesinger & Malkiel study cited above, Jakobsen et al. (2007: 218) used idioms rather than cognates as their target strings, and calculated the frequency with which interpreters coined a cognate idiom. Sight translation rather than simultaneous interpreting was the task under review, and was compared to written translation, using micro-level, non-deliberate reflections of cognitive processes as a point of departure. The authors hypothesized that oral (sight) translators – performing at a self-paced but faster rate than written translators – would be likelier to resort to direct transfer (transcoding). However, while the interpreters used fewer non-cognate solutions, they were found – unlike in the Shlesinger & Malkiel study – to favor paraphrase as their strategy of choice.

In a more process-oriented intermodal study (again, sight translation vs. written translation), Dragsted and Hansen (2007) used yet another methodology –
triangulating eye tracking and keystroke logging – and posed a very practical question: is there any added value to the much longer time spent on the written modality, and could the use of speech recognition prove to be a viable alternative to the method usually applied in written translation?

The extensive body of literature centering on written vs. spoken language has pointed to numerous differences between the two. Having been produced under different circumstances, and subject to different discourse norms, they comprise two separate systems, each with its own set of rules (Lakoff 1982). However, they also display frequent overlaps, whereby each of these modes may be characterized by features more typical of the other (Stubbs 1986; Halliday 1989). Citing research on the salient differences between spoken and written outputs in general, Dragsted and Hansen (2007) stress the correlation between the level of lexical variety and the time available for task performance (“speakers must make such choices very quickly whereas writers have time to deliberate,” Chafe & Danielewicz 1987: 86). While the quality of the outputs produced by the translators in Dragsted and Hansen’s study was not appreciably better than that of the interpreters – an intriguing finding, given the fact that the former took more than 10 times longer – the lexical variety of the written outputs, measured in terms of type-token ratios, was indeed greater. (See Hönig 1998 for a detailed discussion of features of the target text in relation to its communicative effect.)

4. Hebrew as a target language

Until recently, the use of a quantitative paradigm, involving (labor-intensive) transcription, morphological tagging and statistical analysis of the results, has been impracticable – and in the case of some languages, impossible.
In the case of Hebrew, the target language examined here, this was due to (1) its use of an orthography that, until recently, could not be accommodated by commonly used software (e.g. WordSmith Tools); (2) its very high proportion of homographs, caused by the non-representation of most vowels, which drastically lowers the prospects of automatic disambiguation; and (3) its highly inflected morphology, including the affixation of prepositions and pronouns. It was only thanks to the recent development of a sophisticated tagger capable of overcoming these difficulties, whether fully or largely, that it has become possible to analyze Hebrew corpora, though some of the analysis (e.g. of names, foreign words and forms that the tagger does not “recognize” for various reasons) must still be done manually.
Thus, besides the aforementioned methodological hurdles of corpus-based Interpreting Studies in general – the “messiness” of both the process and the product,2 the need to reflect the paralinguistic dimension and the technical challenge of producing transcripts in machine-readable format – the present study involved the language-specific difficulties entailed in investigating outputs in a language which, until very recently, had defied most of the tools available for text analysis.

As explained by Itai et al. (2006) and Yona and Wintner (2006), Hebrew, like other Semitic languages, has a rich and complex morphology. The major word-formation machinery is root-and-pattern, where roots are sequences of (typically) three or more consonants, and patterns are sequences of vowels (sometimes also consonants) into which the root consonants are inserted. Its inflectional morphology is highly productive: the combination of a root with a pattern produces a base or a lexeme, which can be inflected for number and gender; nominals may be in either absolute or construct forms (status), often displaying the same surface form in both; they may also take pronominal suffixes – again inflected for number and gender – indicating possessives or direct objects. As mentioned above, the absence of most vowels in the written (surface) form results in an exceptionally high proportion of homographs, distinguishable only by syntactic and contextual analysis. Thus, even a three-letter form may have a wide range of homographic representations whose disambiguation lies in the (unwritten) vowels and in contextual information. The surface form s-f/p-r, for example, may have any of the following six interpretations (the homologous middle consonant may be read as either f or p; the vocalization would not normally appear, but is artificially inserted here, in italics): safar ‘he counted’; siper ‘(he) told’; supar ‘(it) was told’; sefer ‘a book’; sapar ‘a barber’; sfar ‘outlying regions’.
To compound the ambiguity, prefix and suffix particles may be added to open-class forms (and sometimes to closed-class categories, like prepositions). These include the definite article, prepositions such as “in”, “as”, “to” and “from”, sentence-initial interrogative markers and word-final directional markers, as well as subordinating conjunctions and relativizers and the coordinating conjunction “and”. In many cases, a complex morpho-syntactic and contextual disambiguation procedure is required in order to determine whether a letter belongs to the root or serves as an affix (indicated here in italics), and the number of potential interpretations is inordinately high.

2. “Interpretation is about as ‘messy’ as it gets. It involves all stages of language processing from low to high levels, and for processing both input and output. And if this were not complicated enough, the input and output processes involve different languages, which of course requires that one understand the nature of bilingualism (or multilingualism). Indeed, few questions in language processing research can claim to be more convoluted than interpreting” (Isham, cited in Gile 1997b: 114).

Thus, for example, the form b-g-d-a may be either
bagda ‘(she) betrayed’ or ba-gada ‘on the riverbank’; the form m-s-p-r may be either mesaper ‘(he) tells’ or mi-sefer ‘from a book’; the form l-v/b-n may be lavan ‘white’ or la-ben ‘to the son’. By the same token, the form h-r-k(h)-v-t may have any of the following six interpretations (the homologous middle consonant may be read either as k or as kh; the ha may represent either the definite article or a sentence-initial interrogative): hir-kav-ta ‘you (m.) have assembled’; hir-kavt ‘you (f.) have assembled’; har-ka-vat ‘the assembling of’; ha-ra-ke-vet ‘the train’; ha-rakhav-ta ‘have you (m.) ridden?’; ha-ra-khavt ‘have you (f.) ridden?’

5. Method

The methodology adopted in the present study would not have been possible without MorphTagger, a sophisticated morphological analyzer; without it, the study would have been limited to manual counts and intuitive judgments. As Itai et al. (2006) point out:

    Computational lexicons are among the most important resources for natural language processing (NLP). Their importance is even greater in languages with rich morphology, where the lexicon is expected to provide morphological analyzers with enough information to enable them to correctly process intricately inflected forms. (No page number)
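The lookup-based design that such analyzers rely on can be sketched in miniature: a lexicon of (precomputed) inflected forms is consulted both for the bare surface form and for the form that remains after a known prefix particle is peeled off. Everything below is an invented toy, using transliterated consonant skeletons and a four-entry “lexicon”; it is not MorphTagger’s actual data, coverage or interface, merely the shape of the idea behind the b-g-d-a and m-s-p-r examples above.

```python
# Toy "lexicon" of unvocalized consonant skeletons (invented entries).
LEXICON = {
    "bgda": "bagda -- '(she) betrayed'",
    "gda":  "gada -- 'riverbank'",
    "spr":  "sefer -- 'a book' (one of six possible readings)",
    "mspr": "mesaper -- '(he) tells'",
}
# A few single-letter prefix particles (a real analyzer handles many more).
PREFIXES = {"b": "in/on", "m": "from", "h": "the"}

def analyze(surface):
    """Return every (prefix, base, reading) segmentation the lexicon licenses."""
    analyses = []
    if surface in LEXICON:                      # bare form, no prefix peeled
        analyses.append(("", surface, LEXICON[surface]))
    for prefix in PREFIXES:                     # try peeling one prefix letter
        base = surface[len(prefix):]
        if surface.startswith(prefix) and base in LEXICON:
            analyses.append((prefix, base, LEXICON[base]))
    return analyses

# b-g-d-a is ambiguous: the bare verb bagda, or prefix b- plus the noun gada.
print(len(analyze("bgda")))  # → 2
```

Even this toy shows why disambiguation must be contextual: the lookup alone returns all licensed segmentations and cannot choose among them.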
This broad-coverage lexicon of Modern Hebrew, used as a research tool in Hebrew lexicography and lexical semantics, is open for browsing on the web and several search tools and interfaces now facilitate online access to its information. It comprises over 20,000 entries and supports morphological analyzers and a morphological generator. The analyzer interacts with, but is separate from the lexicon. It first generates all the inflected forms induced by the lexicon – 473,880 inflected forms before attaching prefixes – and then conducts a database lookup. In addition to inflected forms, the analyzer allows as many as 174 different sequences of prefix particles to be attached to words. It can currently analyze over 80 words per second (Yona and Wintner 2006), but its capabilities are more advanced morphologically than syntactically; thus, a particular word may be analyzed (morphologically) as belonging to the benoni (similar to the English participle) category, but its syntactic ambiguity – it may be a noun, an adjective or a verb – cannot yet be resolved by the analyzer (not unlike the –ing form in English, where the token “giving” may be interpreted as verb, noun or adjective, depending on its syntactic context). As noted above, the two corpora, both of them in Hebrew – an oral one comprising six oral outputs and a written one comprising six written outputs – are renderings of the same English source text by the same six professional translator-interpreters. (The source corpus has not been analyzed for the purposes
of this study.) These outputs were automatically tagged using MorphTagger (Bar-Haim et al. 2007), maintained by the Knowledge Center for Processing Hebrew at the Technion – Israel Institute of Technology. MorphTagger achieves an accuracy rate of approximately 92% and has gradually come to support a sizeable number of morphological markers. For the purposes of the present study it succeeded in analyzing 74% of the tokens in the written corpus and 72% of the tokens in the oral one. The main reason for unresolved tokens is that some items are not (yet) present in the lexicon on which the tagger is based. Thus, for example, the lexicon does not display most proper names or foreign words, and it is apparently this fact that accounts for its less robust (by 2 percentage points) handling of the oral texts, which feature more frequent transfer (or transcoding) of source-language words (cf. Dam 1998; Bartlomiejczyk 2006). The results below are based on the MorphTagger analysis of the two corpora. A discussion of their implications follows.

6. Results

6.1 Lexical variety – type-token ratio
The oral corpus consisted of 8,317 tokens comprising 5,493 types. The written corpus consisted of 8,968 tokens comprising 6,592 types. However, given the limitations of the tagger – especially its inability to analyze lexemes based on non-Hebrew radicals – it was able to recognize and analyze only 5,125 and 5,727 tokens, respectively. The ratio of types to tokens (TTR) is a well-known measure of linguistic richness; in the present study, not only was the TTR for the written corpus higher on average, it was also higher for each of the six subjects. The type-token ratio of the oral corpus as a whole (taking the six oral outputs as a single corpus) was found to be 0.655, whereas that of its written counterpart was 0.735.

Table 2. Type-token ratios

Participant   Oral    Written
1             0.65    0.72
2             0.66    0.72
3             0.66    0.73
4             0.67    0.74
5             0.66    0.74
6             0.63    0.76
Average       0.655   0.735
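The measure itself is straightforward to compute once an output has been tokenized. The sketch below uses an invented toy token list, not the study’s corpus; note also that raw TTR falls as corpus size grows, so comparisons such as the one above presuppose corpora of roughly similar length (here, 8,317 vs. 8,968 tokens).

```python
def type_token_ratio(tokens):
    """Distinct word forms (types) divided by running words (tokens)."""
    return len(set(tokens)) / len(tokens)

# Toy example: 9 tokens, 6 types ("the" occurs three times, "dog" twice).
tokens = "the dog saw the cat and the dog ran".split()
print(round(type_token_ratio(tokens), 3))  # → 0.667
```

For unequal-sized corpora, a standardized TTR (computed over fixed-size chunks and averaged) is the usual safeguard; the per-participant comparison above sidesteps the issue, since each pair of outputs renders the same source text.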
6.2 The verb system

Hebrew verbs are built by applying the production rule of a pattern (binyan) to a sequence of letters (the radicals). There are seven patterns, each with its own syntactic rules and semantic meaning. The most basic of these is pa’al (literally: ‘he/it did’), a paradigm borrowed by the early medieval Hebrew grammarians from Arabic (Chomsky 1982); this pattern, also known as kal ‘easy’, is considered the simplest. Its passive form is nif’al ‘he/it was done’ and its causative is hif’il ‘he/it caused to be done’. Thus, for example, the pa’al form of the verb a-x-l ‘to eat’ is axal ‘he/it ate’, the nif’al form is ne’exal ‘he/it is/was eaten’ and the hif’il form ‘to feed’ is he’exil ‘he/it fed’. Some verbs (11%) are used in only one of the seven patterns, while others are used in two (14.2%), three (15%), four (12.1%), five (14.9%), six (14.7%) or all seven (18.5%) patterns (Morgenbrod & Serifi 1978: vi).

While almost all of the patterns manifested themselves differently in the two modalities, the findings for two of the most common patterns, pa’al and nif’al, are striking. As noted above, pa’al is the basic, simple pattern, and it is this pattern that predominated in the oral outputs, whereas the share of the morpho-syntactically more complex nif’al was reduced by half (10% vs. 5%).

Table 3. Verb patterns (pa’al and nif’al)

           Written (n = 437)        Oral (n = 410)
Pattern    Instances    %           Instances    %
Pa’al      169          39          188          46
Nif’al     42           10          21           5

6.3 The definite article
Biblical Hebrew would render “this boy” as ha-yeled ha-zeh ‘the boy the this’, with the definite article ha affixed to each of the two words. In the later Mishnaic Hebrew, however, this form was reduced by discarding the definite article in both words, yielding the simpler form yeled zeh ‘boy this’ (Chomsky 1982: 164). In Modern Hebrew, the latter form is the one most often used in writing, whereas the spoken registers tend to use the more redundant Biblical one (Agmon-Fruchtman 1981: 12). Indeed, in the corpus examined here, the definite article was found to be considerably more common in the oral than in the written mode: where it was optional, it occurred 567 times out of 1,991 candidate slots (0.28) in the written translations, as opposed to 573 times out of 1,634 (0.35) in the oral ones.
Table 4. The definite article

                                Written (n = 1991)    Oral (n = 1634)
Instances of definite article   567                   573
%                               28                    35

6.4 Part-of-speech distribution
A part-of-speech (POS) analysis of the two corpora revealed clear differences in the syntactic breakdown of the tokens, as shown in Table 5, particularly as this pertains to pronouns (a finding that may be accounted for by the prevalence of analytic forms in the oral corpus) and adjectives (a finding attributable to an artifact of the source text, which involved particularly long and frequent strings of modifiers), but also as regards function words often associated with the explicitness or redundancy of spoken language: prepositions, conjunctions and copulas. As for the use of verbs: Behar (1985), who applied quantitative measures to Hebrew texts and found that children’s literature contained 30% more verbs than literature for adults, accounted for this on the premise that verb-based constructions are more accessible and more dynamic. In the present study, verb-based constructions were also found to be more common in oral than in written translations, but not significantly so.

Table 5. POS analysis (a partial list, arranged in descending order of frequency for each of the two modes)

Written (n = 5,727)                        Oral (n = 5,125)
Part of speech   Instances   %             Part of speech   Instances   %
Noun             1991        34.7          Noun             1634        31.8
Adjective        1097        19.1          Adjective        785         15.3
Verb             437         7.6           Verb             410         8.0
Preposition      401         7.0           Adverb           397         7.7
Conjunction      353         6.1           Preposition      385         7.5
Adverb           335         5.8           Conjunction      341         6.6
Participle       289         5.0           Pronoun          335         6.5
Pronoun          225         3.9           Participle       233         4.3
Negation         129         2.2           Copula           134         2.6
Copula           118         2.0           Negation         123         2.4
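Contrasts such as the pa’al/nif’al split in Table 3 can be checked for statistical reliability with a chi-square test of independence on the 2×2 table of counts. The choice of test here is mine, offered as an illustration rather than as a procedure reported in the study; the sketch uses only the standard library.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, exp in [(a, row1 * col1 / n), (b, row1 * col2 / n),
                     (c, row2 * col1 / n), (d, row2 * col2 / n)]:
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Pa'al vs. nif'al instances from Table 3: written (169, 42), oral (188, 21).
stat = chi_square_2x2(169, 42, 188, 21)
print(round(stat, 2), stat > 3.841)  # 3.841 = critical value at df = 1, p = .05
```

On these counts the statistic comfortably exceeds the 5% critical value, i.e. the written/oral difference in the relative share of the two patterns is unlikely to be chance; the same computation can be applied to the definite-article and possessive counts in Tables 4 and 6.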
6.5 Possessives

The concept of possession is expressed in Biblical Hebrew by means of pronominal suffixes and by modifications of the nominal; i.e. by a synthetic form. In Mishnaic Hebrew, on the other hand, an analytic form is more commonly used, with an independent pronoun, as in European languages, to express possession (Chomsky 1982: 165). In Modern Hebrew, it is the analytic form that dominates the spoken registers, whereas the synthetic form is more commonly found in writing (Dubnov 2000: 22). Thus, for example, Biblical Hebrew would represent the possessive of maxshev ‘computer’ synthetically, as maxshevi ‘my computer’, maxshevenu ‘our computer’ etc., whereas Mishnaic and Modern (spoken) Hebrew would use the analytic ha-maxshev sheli ‘the computer of mine’, ha-maxshev shelanu ‘the computer of ours’ etc. Of the 1,991 nouns in the written texts, 174 (8.7%) are inflected for possession (rather than retaining the pronoun as a separate lexical item), whereas only 80 (4.9%) of the 1,634 nouns in the oral outputs are similarly inflected. The fact that there are almost twice as many cases of inflection for possession in the written translations as in the oral ones points to a consistent pattern.

Table 6. Possessives – synthetic vs. analytic

                                       Written (n = 1991)    Oral (n = 1634)
Inflected for possession (synthetic)   174 (8.7%)            80 (4.9%)

6.6 Lexical choices
Studies of paradigmatic lexical choices in translation (e.g. Toury 1995: 206-220) revolve on the question of how the “meaning” of such items is to be determined. Studies of paradigmatic choices in interpreting are less ambitious, and have been largely confined to (1) discussions of modality-dependent retrieval mechanisms (e.g. de Groot 1997; Setton 2003); (2) cases “in which a single term in one language has multiple meanings in another and requires the interpreter or translator to choose” (e.g. Lindquist & Giambruno 2006: 118); and (3) register-related choices, particularly in the case of court interpreters, who tend to lower the register when interpreting for the non-English-speaking defendant and to raise it when addressing the judge (e.g. Hale 1997: 46). Studies of the relative frequency of “high-register” and “low-register” lexemes as a function of modality are less frequent. The examples cited here are a mere sampling of the striking differences that were found in terms of lexical choices, with the participants showing a clear preference for the
Towards a definition of Interpretese
Table 7. Lexical choices as features of register

       Lexical item                      Oral    Written
WE     Anaxnu (‘we’ – unmarked)           19        2
       Anu (‘we’ – formal)                 3       15
BUT    Aval (‘but’ – unmarked)            34       12
       Ulam / Akh (‘but’ – formal)         8       36
unmarked form when interpreting but a clear preference for a formal, marked alternative when translating. A discussion of the semantic properties of each of these lies beyond the scope of this paper; suffice it to say that paradigmatic choices among available patterns are among the key indicators of register in spontaneous speech, and it stands to reason that such choices apply to translations as well. Another salient lexical choice involves the higher frequency of non-Hebrew borrowings in the interpreted outputs. The search for this particular feature was performed manually, yielding examples such as the following:

Table 8. Lexical transcoding in interpreting

Original      Oral (transcoding)    Written (Hebrew)
Operations    Operatziyot           Mangenonim
Program       Programma             Toxnit
Adaptation    Adaptatziya           Ibud
Ambitious     Ambitziozi            Sha’aftani
Formative     Formativi             Me’atzvot
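The register preference reported in Table 7 can be made concrete with a few lines of code. This is an illustrative computation over the published counts only; it is not part of the study’s own method.

```python
# Share of the unmarked variant per modality, computed from the Table 7
# counts for the 'we' pair (anaxnu/anu) and the 'but' pair (aval/ulam-akh).
counts = {                       # (oral, written) occurrences
    "anaxnu (unmarked)": (19, 2),
    "anu (formal)":      (3, 15),
    "aval (unmarked)":   (34, 12),
    "ulam/akh (formal)": (8, 36),
}

def unmarked_share(unmarked, formal):
    """Proportion of the unmarked variant among all tokens of the pair,
    separately for the oral and the written outputs."""
    oral = unmarked[0] / (unmarked[0] + formal[0])
    written = unmarked[1] / (unmarked[1] + formal[1])
    return oral, written

we = unmarked_share(counts["anaxnu (unmarked)"], counts["anu (formal)"])
but = unmarked_share(counts["aval (unmarked)"], counts["ulam/akh (formal)"])
print(f"'we':  oral {we[0]:.0%} unmarked, written {we[1]:.0%}")
print(f"'but': oral {but[0]:.0%} unmarked, written {but[1]:.0%}")
```

For both pairs, the unmarked variant dominates the interpreted (oral) outputs (over 80%), while the formal variant dominates the translated (written) ones, which is the register flip the table illustrates.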
7. Discussion and conclusion

When it comes to the corpus-based analysis of translated – including interpreted – corpora, one must agree with Gile that its status will depend on the type of analysis and the inferencing done on the basis of its findings (Gile 2002: 362). If, as is claimed above, the method adopted in the present study is successful in isolating modality-dependent features, we may point to several findings which seem, prima facie, to typify the product of (simultaneous) interpreting, as distinct from translation.

Translation studies devotes considerable attention to the phenomenon known as translationese (Gellerstam 1986), the telltale indicators that one is reading a translation. Translation scholars even speak of it as a third code (Frawley 1984), and recent attempts, using support vector machines, have provided “clear
Miriam Shlesinger
evidence of the existence of translationese features even in high quality translations” (Baroni & Bernardini 2006: 260). For lack of wide-scale findings concerning Hebrew spontaneous speech vs. writing, it is not possible to determine whether the findings for interpreted outputs are entirely in keeping with those for spontaneous speech (vs. writing), though it would appear that the features that distinguish spoken Hebrew from written Hebrew are (even) more pronounced in interpreted and translated outputs, respectively. It stands to reason that many of these features will be found in the products of (simultaneous) interpreting as well, but it may also be argued that interpreted discourse displays features which set it apart; i.e. features of interpretese. While some of these features have been demonstrated in comparisons of interpreted to original spoken (or read-aloud) discourse, using both parallel and comparable corpora, there have been very few attempts to use machine-readable corpora for comparing interpreted to translated discourse.

It appears, moreover, that the imbalance in corpus-based translation studies will not be overcome without ongoing efforts to analyze large, multilingual corpora of interpreted discourse, and that the use of intermodal comparable corpora will promote our understanding of interpreting. Furthermore, this comparison with translated discourse will deepen our understanding of translation itself, both as a modality-specific (written) category and as a generic one. Further studies along the lines described here, involving a variety of language pairs, may eventually allow us to propose a set of potential I(nterpreting)-universals, by analogy with (or as a subcategory of) what Chesterman (2004: 8) refers to as potential T-universals (cf. Bernardini & Zanettin 2004: 58–60). The analysis presented here has focused on the translated and interpreted product, but may also be used to draw inferences about the process.
The list of patterns and features discussed in the present study is very far from exhaustive; it has focused on measurable small-scale units and has said little about broader patterns, and nothing about syntax, cohesion and much more. It has also said nothing about using the methodology to discern between-subject differences. Setton (forthcoming) offers the most comprehensive discussion to date of CIS (corpus-based Interpreting Studies) and points to its rich potential:

The prospects seem particularly exciting for the study of interpretation, where corpora are potentially richer by one or two dimensions than both monolingual and translation data. The peculiar conditions of production, and the possibility of tracking the intensive use of local context which interpreters need to manage these conditions, make interpreting corpora a rich undeveloped resource for the study of psycholinguistic and pragmatic processes. (No page number)
This rather optimistic prognosis is very much in keeping with that of another renowned interpreting scholar:
Since translation and interpreting share so much, the differences between them can help shed light on each, so that besides the autonomous investigation of their respective features, each step in the investigation of one can contribute valuable input towards investigation of the other (Gile 2004: 23).
References

Agmon-Fruchtman, M. 1981. “On the determination of the noun and on some stylistic implications.” Hebrew Computational Linguistics 18: 5–18. [Hebrew]
Agrifoglio, M. 2004. “Sight translation and interpreting: A comparative analysis of constraints and failures.” Interpreting 6 (1): 43–67.
Ahrens, B. 2005. “Prosodic phenomena in simultaneous interpreting: A conceptual approach and its practical application.” Interpreting 7 (1): 51–76.
Baker, M. 1995. “Corpora in translation studies: An overview and suggestions for future research.” Target 7 (2): 223–243.
Bar-Haim, R., Sima’an, K. and Winter, Y. Forthcoming. “Part-of-speech tagging of Modern Hebrew text.” To appear in Journal of Natural Language Engineering.
Baroni, M. and Bernardini, S. 2006. “A new approach to the study of translationese: Machine-learning the difference between original and translated text.” Literary and Linguistic Computing 21 (3): 259–274.
Bartlomiejczyk, M. 2006. “Lexical transfer in simultaneous interpreting.” Forum 4 (2): 1–23.
Behar, D. 1985. “Stylistic determinants.” Hebrew Computational Linguistics 23: 44–54. [Hebrew]
Bernardini, S. and Zanettin, F. 2004. “When is a universal not a universal? Some limits of current corpus-based methodologies for the investigation of translation universals.” In Translation Universals: Do they exist?, A. Mauranen and P. Kujamäki (eds), 51–62. Amsterdam/Philadelphia: John Benjamins.
Borochovsky-Bar-Abba, E. 2002. “What cannot be committed to writing – a study of parallel spoken and written texts.” In Speaking Hebrew: Studies in the Spoken Language and in Linguistic Variation in Israel, S. Izre’el (ed.), 353–374. Tel Aviv: Tel Aviv University. [Hebrew]
Chafe, W. and Danielewicz, J. 1987. “Properties of spoken and written language.” In Comprehending Oral and Written Language, R. Horowitz and S. J. Samuels (eds), 83–113. San Diego: Academic Press.
Chesterman, A. 2004. “Hypotheses about translation universals.” In Claims, Changes and Challenges in Translation Studies, G. Hansen, K. Malmkjaer and D. Gile (eds), 1–13. Amsterdam/Philadelphia: John Benjamins.
Chomsky, W. 1982. Hebrew: The Eternal Language. Philadelphia: Jewish Publication Society.
Dam, H. V. 1998. “Lexical similarity vs. lexical dissimilarity in consecutive interpreting: A product-oriented study of form-based vs. meaning-based interpreting.” The Translator 4 (1): 49–68.
Dragsted, B. and Hansen, I. G. 2007. “Speaking your translation: Exploiting synergies between translation and interpreting.” In Interpreting Studies and Beyond [Copenhagen Studies in Language], F. Pöchhacker, A. L. Jakobsen and I. M. Mees (eds), 251–274. Copenhagen: Samfundslitteratur.
Dubnov, K. 2000. “Synthetic and analytic possessive pronouns related to nouns in spoken Hebrew.” Hebrew Linguistics 47: 21–26. [Hebrew]
Frawley, W. 1984. “Prolegomenon to a theory of translation.” In Translation: Literary, Linguistic and Philosophical Perspectives, W. Frawley (ed.), 159–175. Newark: University of Delaware Press.
Gellerstam, M. 1986. “Translationese in Swedish novels translated from English.” In Translation Studies in Scandinavia: Proceedings from The Scandinavian Symposium on Translation Theory (SSOTT) II, Lund 14–15 June 1985, W. Wollin and H. Lindquist (eds), 88–95. Sweden: CWK Gleerup.
de Groot, A. M. B. 1997. “The cognitive study of translation and interpretation: Three approaches.” In Cognitive Processes in Translation and Interpreting, J. H. Danks, G. M. Shreve, S. B. Fountain and M. K. McBeath (eds), 25–56. Thousand Oaks/London/New Delhi: Sage.
Gile, D. 1987. “Les exercices d’interprétation et la dégradation du français.” Meta 32 (4): 420–428.
Gile, D. 2001. “Consecutive vs. simultaneous: Which is more accurate?” Interpreting Studies 1: 8–20.
Gile, D. 2002. “Corpus studies and other animals.” Target 14 (2): 361–363.
Gile, D. 2004. “Translation research versus interpreting research: Kinship, differences and prospects for partnership.” In Translation Research and Interpreting Research: Traditions, Gaps and Synergies, C. Schäffner (ed.), 10–34. Clevedon: Multilingual Matters.
Hale, S. 1997. “The treatment of register variation in court interpreting.” The Translator 3 (1): 39–54.
Halliday, M. A. K. 1989. Spoken and Written Language. Second edition. Oxford: Oxford University Press.
Hönig, H. G. 1998. “Positions, power and practice: Functionalist approaches and translation quality assessment.” In Translation and Quality, C. Schäffner (ed.), 6–34. Clevedon: Multilingual Matters.
Isham, W. P. 1994. “Memory for sentence form after simultaneous interpretation: Evidence both for and against verbalization.” In Bridging the Gap: Empirical Research in Simultaneous Interpretation, S. Lambert and B. Moser-Mercer (eds), 191–211. Amsterdam/Philadelphia: John Benjamins.
Isham, W. P. 1995. “On the relevance of signed languages to research in interpretation.” Target 7 (1): 135–149.
Itai, A., Wintner, S. and Yona, S. 2006. “A computational lexicon of contemporary Hebrew.” In Proceedings of LREC-2006, Genoa, Italy, May 2006. (unnumbered)
Ivir, V. 1981. “Formal correspondence vs. translation equivalence revisited.” In Theory of Translation and Intercultural Relations [= Poetics Today 2:4], I. Even-Zohar and G. Toury (eds), 51–59. Tel Aviv: The Porter Institute for Poetics and Semiotics, Tel Aviv University.
Jakobsen, A. L., Jensen, K. T. H. and Mees, I. M. 2007. “Comparing modalities: Idioms as a case in point.” In Interpreting Studies and Beyond [Copenhagen Studies in Language], F. Pöchhacker, A. L. Jakobsen and I. M. Mees (eds), 217–249. Copenhagen: Samfundslitteratur.
Lakoff, R. T. 1982. “Some of my favorite writers are literate: The mingling of oral and literate strategies in written communication.” In Spoken and Written Language: Exploring Orality and Literacy, D. Tannen (ed.), 239–247. Norwood: Ablex.
Lambert, S. 1988. “Information processing among conference interpreters: A test of the depth-of-processing hypothesis.” Meta 33 (3): 377–387.
Laviosa, S. 1998. “Core patterns of lexical use in a comparable corpus of English narrative prose.” Meta 43 (4): 557–570.
Levý, J. 1967. “Translation as a decision process.” In To Honor Roman Jakobson II, 1171–1182. The Hague: Mouton.
Lindquist, P. P. and Giambruno, C. 2006. “The MRC approach: Corpus-based techniques applied to interpreter performance analysis and instruction.” Forum 4 (1): 103–138.
Morgenbrod, H. and Serifi, E. 1978. “Computer-analysed aspects of Hebrew verbs: The binjanim structure.” Hebrew Computational Linguistics 14: v–xv.
Russell, D. 2002. Interpreting in Legal Contexts: Consecutive and Simultaneous Interpretation. Burtonsville, Md.: Linstok Press.
Russo, M., Bendazzoli, C. and Sandrelli, A. 2006. “Looking for lexical patterns in a trilingual corpus of source and interpreted speeches: Extended analysis of EPIC (European Parliament Interpreting Corpus).” Forum 4 (1): 221–249.
Setton, R. 2003. “Words and sense: Revisiting lexical processes in interpreting.” Forum 1 (1): 139–168.
Setton, R. Forthcoming. “Corpus-based interpretation studies (CIS): Reflections and prospects.” To appear in Corpus-based Translation Studies: Research and Applications (provisional title; St. Jerome). (Paper delivered at the Symposium on Corpus-based Translation Studies: Research and Applications, Pretoria, July 22–25, 2003.)
Shlesinger, M. 1989. Simultaneous Interpretation as a Factor in Effecting Shifts in the Position of Texts on the Oral-Literate Continuum. Unpublished M.A. thesis. Tel Aviv: Tel Aviv University.
Shlesinger, M. 1994. “Intonation in the production and perception of simultaneous interpretation.” In Bridging the Gap: Empirical Research in Simultaneous Interpretation, S. Lambert and B. Moser-Mercer (eds), 225–236. Amsterdam/Philadelphia: John Benjamins.
Shlesinger, M. 1995. “Shifts in cohesion in simultaneous interpreting.” The Translator 1 (2): 193–214.
Shlesinger, M. 1998. “Corpus-based interpreting studies as an offshoot of corpus-based translation studies.” Meta 43 (4): 486–493.
Shlesinger, M. and Malkiel, B. 2005. “Comparing modalities: Cognates as a case in point.” Across 6 (2): 173–193.
Stubbs, M. 1986. “Lexical density: A computational technique.” In Talking about Text [Discourse Analysis Monograph 13], 27–42. Birmingham: English Language Research, University of Birmingham.
Toury, G. 1995. Descriptive Translation Studies and Beyond. Amsterdam/Philadelphia: John Benjamins.
Yona, S. and Wintner, S. 2006. “A finite-state morphological grammar of Hebrew.” In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, 9–16.
The speck in your brother’s eye – the beam in your own
Quality management in translation and revision

Gyde Hansen
Copenhagen Business School (CBS), Denmark

Global and national changes have resulted in new requirements for quality management and quality control in translation. International standards like the recent European Quality Standard for Translation Services, EN 15038 (2006), have been developed in order to give clients an assurance that they are receiving high-quality translation work. According to some of these standards, target texts have to be revised at least twice or, ideally, three times by someone other than the translator him/herself. Revision and revision processes have also come more into focus in TS research. According to Gile (2005), who has developed a mathematical formula that defines quality as the balanced sum of quality parameters, revision tasks are usually carried out by experienced translators. In two empirical longitudinal studies at CBS, the relation between the translation competence and the revision competence of students and professional translators was investigated. The question posed was: “are the good translators also the good revisers?” In this article, quality parameters and revision processes are described and shown in models. The question is raised whether it would be an advantage to establish special training in revision, parallel to translator training.
Keywords: quality, revision competence, revision training, longitudinal study, professional translation, experience
1. Introduction

Daniel Gile was one of the first to work with quality assessment in translation and interpreting research, and he talked openly about errors and omissions, and about a distinction between “linguistic errors” and “translation errors” (Gile 1994: 46f). He also carried out empirical studies on the perception of errors (Gile 1985, 1995,
1999a, 1999b), especially with respect to interpreting. In Gile (1995: 31), he described the difficulties involved in defining the concept of “quality”, and (ibid.: 38) he gave examples of different perceptions of quality. In Gile (2005: 60), he proposes a mathematical formula that defines quality as a sum of individual, pragmatic, text-internal and text-external factors:

Q(T, c, e) = ∑[pi(c, e) éval(FTi) + p’j(c, e) éval(FETj)]

where
– Q(T, c, e) refers to the quality Q of the translation T as it is perceived by the evaluator e under conditions c;
– FTi, where i = 1, 2, 3…, are text-internal factors;
– FETj, where j = 1, 2, 3…, are text-external factors;
– pi and p’j are the respective relevance of the text-internal and the text-external factors in relation to the evaluator’s individual preferences in the given situation;
– “éval” means evaluation.

Using this formula, each of the factors can be evaluated as positive, negative or neutral. Additionally, Gile (ibid.: 62) presents a formula for an inter-subjective or collective assessment, which is the arithmetic average (“moyenne”) of the n individual evaluations:

Q moyenne = 1/n ∑ Q(T, c, e)

In Gile (2005: 66), he says however:

Enfin, chaque réviseur et chaque client peut avoir ses propres préférences textuelles. Il est donc difficile de parler avec précision d’une qualité de la traduction dans l’absolu. L’évaluation sera toujours en partie subjective.

My translation: Ultimately, every reviser and every client may have his/her special textual preferences. That is why judging the precise quality of a text in absolute terms is difficult. The evaluation will always be, in part, subjective.
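Gile’s formula lends itself to a direct computational reading. The sketch below is a minimal illustration: the factors, weights and evaluations are all invented for the example; only the formula’s structure (weighted sum of factor evaluations, then the arithmetic average over evaluators) comes from Gile (2005).

```python
# A minimal sketch of Gile's quality formula, with hypothetical factors.
# Each factor is a pair (weight p, evaluation), where the evaluation is
# -1 (negative), 0 (neutral) or +1 (positive) for the given evaluator.

def quality(text_internal, text_external):
    """Q(T, c, e): weighted sum of the evaluations of the text-internal
    factors FT_i and the text-external factors FET_j."""
    return (sum(p * ev for p, ev in text_internal)
            + sum(p * ev for p, ev in text_external))

def collective_quality(individual_scores):
    """Q_moyenne = (1/n) * sum of the n individual evaluators' Q values."""
    return sum(individual_scores) / len(individual_scores)

# Hypothetical evaluators weighting e.g. fidelity and style (text-internal)
# against terminology and deadline compliance (text-external):
q1 = quality([(0.5, +1), (0.2, -1)], [(0.2, +1), (0.1, 0)])   # ≈ 0.5
q2 = quality([(0.5, +1), (0.2, 0)], [(0.2, -1), (0.1, +1)])   # ≈ 0.4
print(collective_quality([q1, q2]))                            # ≈ 0.45
```

As the formula predicts, a factor evaluated as neutral contributes nothing, and two evaluators with different weightings can arrive at different Q values for the same translation, which the collective average then smooths out.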
According to Gile (2005: 53), the reviser is usually an experienced translator who reads, corrects and improves the translations. Gile (ibid.: 67) talks about translators revising each other’s translations, and he points out that the process of revision can be a source of frustration, especially in cases where the translator disagrees with the revisions. Actually, such frustrations led me to start teaching revision courses at the CBS in 1983. The original goal of this training was to protect future professional translators by strengthening their assertiveness. Some of the professionals I interviewed in the eighties had complained that they always lacked arguments to explain and justify changes when they had to revise the texts of their colleagues.
In the revision courses and exams at the CBS, which always took place in the final semester of the students’ Master’s degree course, the students were asked to revise defective, authentic, already published, non-literary target texts which are used in everyday life in Denmark (Hansen 1996). Criticism and revision of translated texts always constituted about 25% of the translator training.

1.1 Two longitudinal studies
In 2003, I began carrying out experiments with the students – the first, students’ longitudinal study – because I wondered why some of the good translators among them proved to be poor revisers and vice versa. I am currently working on a second, professionals’ longitudinal study, “From student to expert”, an empirical study with 40 former CBS students who are professionals today. The same sample group participated, 10 years ago, in an empirical project on profiles, translation processes and products, in which I investigated sources of disturbance in translation processes (Hansen 2006). In 2006/2007, I contacted them again and visited them at their workplaces. Their profiles, translation processes and translation products are now being investigated again. The objective of this study is to trace the development and improvement of quality standards after graduation from the CBS. The methods involve both within-subject comparison across situations (as students and as experts) and comparison of the subjects’ results with each other. According to the results of a questionnaire and interviews with all of them:

– 14 of the experts work today as professional translators in institutions, organizations, companies or translation agencies, or as freelancers, and 3 of these work mostly as professional revisers;
– 8 hold management positions;
– 13 work in marketing or consulting, or as personal assistants;
– 5 have become teachers.

So far, 28 of the 40 earlier subjects have participated in the new experiments – including 8 bilinguals. As the professional translators often have to revise each other’s work, their revision competence was also tested again.

2. Theories and revision research

Translators and revisers need grounding in translation theory. As to the evaluation of translations, Koller (1979: 216ff) points out that translation criticism and translation assessment should be carried out with the translator’s and the evaluator’s
theoretical orientation and translation norms in mind, and that norms and situations vary and change.

2.1 Translation theories and models
The essence of at least some complementary theories, or of different important theoretical approaches, is indispensable: these could, for example, include Koller’s theory of equivalence (1979, 2001) and the Skopos theory (Reiss/Vermeer 1984; Vermeer 1996). Being confronted with different positions and assumptions – especially the functional approach in relation to the theory of equivalence – raises awareness of different norms, expectations and quality criteria. Theory gives translators and revisers a basis for translation decisions and the terminology to argue for corrections and changes. According to the experts in the professionals’ longitudinal study, the theoretical discussions we had 10 years ago about issues like equivalence or adequacy, acceptability, grammaticality, functionality and skopos have proved to be very useful in professional life and, they say, have made them more flexible than they might otherwise have been. This is also the case for those professionals who do not translate today. However, professional translators profit not only from translation theory, but also from knowledge and terminology relating to linguistics, pragmatics and stylistics. Four of the 14 professional translators who work with revision said in the interviews that theory, and a professional classification of errors based on theory, are helpful in situations where revision becomes problematic because their colleagues, whose work they have to revise, are rather sensitive to corrections of their work. Models like the CBS process model (Hansen 2006: 270), Hönig’s Flussdiagramm (1995: 51) and Gile’s Sequential model (1995 and 2005: 102) are useful. The models of Hönig and Gile can be used to complement each other. They train students on the one hand to think prospectively and in a skopos-oriented way (Hönig’s Flussdiagramm), and on the other hand to be oriented retrospectively, monitoring their production (Gile’s Sequential model).
In the revision courses at the CBS, Gile’s Sequential model has recently proved to be especially useful for the revision of translations with TMS, when translators need to check whether already translated, proposed segments fit logically with the rest of the text. The CBS Model (see appendix), which is based on the theory of functional translation with the addition of important ideas from Koller (see Hansen 1995: 88ff; 2006: 270), has proved to be a useful guideline for the entire translation process, from the analysis of the translation brief and the ST to the evaluation and revision of the TT.
2.2 Revision research
Both Mossop (2007b) and Künzli (2007a: 116) give an overview of some recent empirical studies of revision. Brunette (2000) discusses key concepts specific to translation assessment and establishes (ibid.: 170ff) a terminology of translation quality assessment in which she defines five assessment procedures in relation to the purpose of the assessment. She distinguishes between Translation quality assessment (TQA), usually used to check the degree to which professional standards are met, without contact with the translator; Quality control, a monolingual or bilingual revision with contact with the translator on request; Pragmatic revision, where there is no contact between the translator and the reviser; Didactic revision, which is primarily intended to help translators hone their skills; and Fresh look, reading the target text as an independent text.

Mossop’s guidebook (2001/2007a) describes principles and procedures for editors and revisers of non-literary texts. Apart from a discussion of important questions and problems concerning revision tasks and processes, he provides a glossary of editing and revision terms. Empirical studies of revision processes have been carried out by Krings (2001) and Brunette et al. (2005). In both studies, unilingual and comparative revision were compared, with the result that comparative revision yielded a better target text. Krings (2001) used TAPs and video recordings of unilingual revisions of a machine translation. There was no access to the ST, and the study showed that without the possibility of going back to the ST, serious errors remain uncorrected. Brunette et al. (2005) compared the results of unilingual revisions with comparative revisions of the same translations, carried out a few days later by the same subjects, professional translators. They also conclude that comparative revision gave better results. Künzli (2007a, 2007b) has worked with many of the typical problems of revision.
He carried out empirical research with TAPs on revision processes, investigating “external revision”, i.e. the changes actually made, and “internal revision”, i.e. what the reviser formulates mentally during the revision process. In Künzli (2007b: 46f), he also used TAPs to analyze the ethical dilemmas and loyalty conflicts between the different parties involved in translation and revision projects, and especially the “conflict between the economic demand for speed and the ethical demand for thoroughness, reliability or quality”. In the German-speaking area of Translation Studies, much research has been done on quality management and on the classification and grading of errors. Some examples are Reiss (1971), House (1997), Stolze (1997), Nord (1998), Gerzymisch-Arbogast (2001), Schmitt (1998, 2001) and, recently, Mertin (2006), who discusses different criteria and classifications of errors and their application to professional translation in the business world.
3. Concepts of quality

Translation quality is obviously a central issue in the many national and international standards, norms and certificates issued by associations, governments and institutions addressing translation processes and products, such as the ISO, DIN, ASTM and SAE standards and, lately, the EN 15038 standard. The title of the recent CIUTI FORUM 2008 was “Enhancing Translation Quality: Ways, Means, Methods”. Informal definitions of the concept of quality vary, e.g.:

– Quality is a question of individual perception. Quality is defined according to idiosyncratic parameters or criteria. Everyone has his/her own definition of quality and the definitions can vary considerably.
– Quality is a cultural issue. Expectations as to quality can vary in different countries and cultures and can be seen as a question of social and political appropriateness.
– Quality is meeting the clients’ needs – it is the clients’ satisfaction. For many organizations and companies, for example the European Commission, the UN and Daimler (CIUTI-Forum 2008), a main indicator of quality is the client’s reaction, his/her degree of satisfaction and especially the number of complaints from clients. It is regarded as crucial to gain the confidence of the clients and to be aware of all the reactions in order to maintain a good reputation. The client-related concept of quality is followed up by regular evaluations by the service providers, with surveys showing the consumers’ perception of the quality of the translation services. A model of such a survey can be seen in, for example, Mertin (2006: 285).
– Quality is fulfilment of the skopos. Quality is seen as the fulfilment of the purpose of the translation. It is defined according to the function of the translation under defined pragmatic conditions.
– Quality is “fitness for use”. This is in line with the skopos interpretation. The idea is, however, that anything that goes beyond the clients’ needs is regarded as a waste.
– Quality is the degree of equivalence between ST and TT. In this case, quality is defined by the degree of conformity with the ST, and characterized as accuracy and consistency of, for example, the terminology.
– Quality is the result of a good process, where the concept of quality is seen as an aspiration. The idea is that high-quality processes produce high-quality translations and that cooperation between responsible colleagues during the processes creates good results and trust.
– Quality is also described as “not merely an absence of errors”.
4. Frequently mentioned problems with revision

In interviews with the professional translators who get their work revised, the problem most often mentioned is that of unnecessary changes or over-revision. A typical situation is that the reviser wants the TT to appear as if it had been translated by him/herself, or that the reviser does not demonstrate much tolerance for the translator’s original suggestions, even in cases where they are not obviously incorrect. In his study, Künzli (2007a: 124) observes a similar problem with a large number of unjustified changes and with revisers who “impose their own linguistic preferences at the expense of the translator’s decision”. A connected problem is frustration about messy revisions; messy because both important and unimportant changes are inserted in the TT without any attempt to grade or justify them. Mossop (2007a: 176) mentions this problem and proposes a visual distinction between necessary changes and mere suggestions.

Another problem is the use of evaluation forms, especially if they are used not only for quality control, but also for quality assessment, i.e. as a tool for hiring and firing translators. There seems to be a need for transparent forms for different purposes. Gile’s above-mentioned formula of quality (section 1) could be useful for such purposes because – at least theoretically – it also takes positive or neutral results into consideration.

Giving and taking criticism is problematic for some translators and revisers. The thought of being constantly monitored also seems to make translators particularly sensitive to criticism. Some translators contest the evaluation or do not like to hear about the revisions because they do not understand why so much has to be changed. Others like to get feedback and explanations about “why things have to be changed”. It is sometimes the case that revisers do not like to talk with the translators whose work they have revised.
As mentioned earlier, colleagues revising colleagues, a kind of peer reviewing, is frequently used in professional situations. This can also be problematic because some colleagues do not like to criticize their peers and this can give rise to conflicts of interest. Poor quality of the source text is a problem frequently mentioned by translators and revisers. Experts are not always good writers and drafts written by non-native speakers of the language can be unclear. The worst-case scenario is when the experts themselves are not even able to explain what their text actually means. Gile (1995: 118) mentions the problem of poor quality of the ST and pleads for the “author-is-no-fool” principle, which means that translators should work hard on comprehending the sense of the source text, “again and again until they reach a Meaning Hypothesis that makes sense, or finally come to the conclusion that the author actually did make an error”.
Gyde Hansen
As my interviews with the professional translators and revisers show, frustration at the constant cuts in the time and money provided for translation services is a problem for normally meticulous revisers. The dominant strategy in business is profit maximization: the aim is to achieve the highest possible quality while costs are continually minimized. For revisers who are accustomed not to let errors pass, it is nearly unbearable that the revision part of the process is sometimes first cut back and then cancelled altogether. Mossop (2007a: 114) calls this a "struggle between time (that is, money) and quality". It is a dilemma in which the professional reviser may be forced to give priority to quality. Künzli (2007b: 54) also mentions this problem. In his study, he observed that "[r]evisers need a revision brief, stating explicitly what is expected from them in terms of full or partial revision and what parameters of the draft translation they are supposed to check."

5. Translation and revision processes

The translation and revision processes are complicated because many brains, concepts and perceptions are involved. They are also complex processes of confronting meaning/sense on the one hand and confronting and/or keeping apart form/expression on the other. Keeping forms apart seems to be particularly important in translation between cognate languages like German and Danish, as the two languages often show unexpected differences. False friends constitute a large part of the errors made in this language pair (Hansen 2006: 115, 276 and 279). In order to reach a better understanding of the translation and revision processes, and to illustrate the relationship between expression and sense in a text during these processes, in Hansen (2008) I drew on classical semiotic theories and models by Hjelmslev (1943, 1966), Baldinger (1966) and Heger (1971).
Overall, I follow Bühler (1934, 1982), who regards signs as units of different dimensions, such as morpheme, word, phrase/clause, paragraph and even text. As signs they are used in actual situations, where we refer to phenomena, or in general statements, where we refer to classes of phenomena. Figure 1 shows a model of the translation process as it takes place in the brain of the translator in a communication situation. On the right side of the model, two separate lines lead to the phenomenon/class. This is meant to express that reference with the two signs is not expected to be totally equivalent.
Quality management in translation and revision
[Diagram: in the situation/occasion, the SL sign (SL content and SL form) and the TL sign (TL content and TL form) are linked via shared concepts; separate lines lead from each sign to the phenomenon/class.]

Figure 1. The translation process
If we also take the author's production process of the source text and the revision process into account, the model becomes more comprehensive. As can be seen in Figure 2, the participants/brains involved in the process from the ST to the revision of the TT now include the producer/author of the ST, the translator, and the reviser of the TT: three brains at work on the same text. Their concepts have to converge, but the forms have to be kept apart, at least during the process, in order to avoid interference and to keep a critical distance. In the case of self-revision, only two brains are involved, the author of the source text and a translator (see Figure 3). For the translator, however, there are two different processes, and they affect each other. Self-revision is a different process from the revision of other translators' target texts. One reason why self-revision is difficult is that people fall in love with their own formulations; the same myelin threads are used again and again. The interval between writing and revising the translation, looking at the task with "fresh eyes", plays an important role here.
[Diagram extending Figure 1 with three phases: (1) Production of the SL sign (SL content and SL form), (2) Translation into the TL sign (TL content and TL form), and (3) Revision of the TL sign; all three signs are linked via shared concepts in the situation/occasion and refer separately to the phenomenon/class.]

Figure 2. Source text, translation and revision
[Diagram analogous to Figure 2, but with two brains: (1) Production of the SL sign and (2) Translation into the TL sign, followed by Self-revision of the TL sign by the same translator.]

Figure 3. Self-revision
It is obvious that the process becomes even more complicated with the use of translation memory systems (TMS), where the translator, parallel to translating, has to check matches and to revise both the pre-translated sentences that appear on the screen and his/her own final translation. As Mossop (2007a: 115) puts it, "the translator has a mixed translation/revision job to perform".

In large organizations like the UN, according to the Languages and Conference Services Division, thousands of brains are involved in the translation and revision processes, and yet all the documents have to look as if they were written by one hand, by one brain. In addition to the responsible actors mentioned above, there are pre-editors who check the ST for the correctness of the information provided, monitor references and terminology, eliminate ambiguities, and ensure validation of terms and diplomatic and political inviolability. There are particular requirements for the concordance and synchronization of such processes. The more languages involved, the more diversity there is, and the more attention must be given to the recruitment of translators, training, content management, terminology control and planning in order to maintain quality. Discussions, coaching, contacts, quality circles, annual reports and meetings are used to achieve systematic and comprehensive quality control. As an example of practices in companies with many brains involved, Mertin (2006: 259ff), in her description of the development, realization and control of the complicated translation and revision processes at Mercedes-Benz, presents the methods and tools of project management and quality management, supported by workflow management systems.

6. Longitudinal studies at the CBS

The revision training I started in 1983 and, especially, the exams in translation and revision, in which students had to analyze the source text and revise the target text, identifying, classifying and correcting errors and arguing for their changes, showed that the students who were good translators were not necessarily also good revisers. This observation had to be investigated empirically. What could be the reasons? My assumption was that translation competence and revision competence, though closely connected, are different competences, and that not even experienced translators are automatically good revisers. In the two longitudinal studies, I investigated the relation between the two competences.
6.1 Experiments
6.1.1 The students' longitudinal study

From 2003 to 2007, I carried out small pilot experiments with students every year at the beginning of the last semester of their translator and revision training at the CBS. What was tested was their ability to produce an acceptable translation of an everyday text, and to find and correct the errors in a simple translated text. As only about 5.5 million people speak Danish, both directions of translation have always been weighted equally in translator training. For these experiments, the direction of both the translation and the revision task was from Danish into German. This means that some of the students had to translate into and revise their second language; usually between 25 and 30% of the students are to some degree bilingual.

The students received a translation brief and the ST. They were asked to translate the text without aids. They were allowed as much time as they needed, and most of them finished the translation task within 15 minutes. They then received the revision task, in which they were asked to revise the TT. It could be argued that these experiments primarily test linguistic competence; however, grammatical, lexical and idiomatic errors constitute an important part of revision processes, and I also tested pragmatic adequacy, accuracy, attentiveness, insertion of new errors, the ability to find alternatives and the degree of over-revision.

6.1.2 The professionals' longitudinal study

The 2007 experiments with the revision task were similar to the students' task, and the text to be revised was the same. The translation task was different, because with the professional subjects I did new experiments with Translog and retrospection with replay, as in my experiments in 1997. I had already carried out similar translation-revision tests with the same subjects in 1997 and was therefore able to compare all the results.

6.2 Evaluation and analysis of the results
The translation and the results of the revision task were analyzed and evaluated separately and anonymously. The TT of the translation task was evaluated according to criteria that had been spontaneously agreed on by a competent native speaker of German and myself (see Hansen 2007: 15). For the revision task, I also used native speakers of German. We agreed on 17 errors, and we then counted how many of these 17 errors the subjects had marked and corrected and how many they had ignored. These were the main criteria for the evaluation of the revision task. Wrong corrections and unnecessary changes were also registered but, as these numbers were small, we did not include them in the general evaluation.
However, as mentioned in section 4, over-revision is a frequently mentioned problem in professional revision, so I kept an eye on the subjects' unnecessary changes and wrong corrections; the results are given in section 6.3.2. The evaluation categories for both tasks were good, acceptable and poor. In order to compare the results of the two tasks, only the extreme results, poor and good, were compared; all the average results, i.e. those in the acceptable group, were ignored. For each course from 2003 to 2007, and also for the professionals' longitudinal study (students' and experts' experiments in 1997/2007), we counted only the subjects who were good in both tasks (GG) or poor in both tasks (PP), and those who were poor in one task and good in the other (PG or GP).

6.3 Results
The students' longitudinal study, for the courses from 2003 to 2007, shows that all four combinations are represented, apart from 2006, which was an "annus horribilis" with only poor results. It should also be mentioned that the curriculum at the CBS was changed between 2004 and 2005, which must have had an impact on the results. What is interesting, nevertheless, is that all four constellations are represented (see Table 1). As to wrong corrections and unnecessary changes, there were, on average, about 0.5 wrong corrections per student. Generally, there was, on average, 1 unnecessary change per student each year, apart from 2006, when there was an average of 3.2 unnecessary changes per person. In 2007, I did several control experiments with the students of the 2007 course, using other translation and revision tasks, and I got similar results, i.e. there were always some who were good at translation and poor at revising and vice versa.

Table 1. Results of the first longitudinal study: Students from 2003 to 2007

both tasks:   2003  2004  2005  2006  2007
GG              2     3     2     0     3
PP              2     3     6     8     7
GP              1     1     2     0     1
PG              1     2     2     0     2
Acceptable      2     8     5     3     5
total           8    17    17    11    18
Incidentally, the control experiments showed one interesting result in relation to one of Mossop's principles for correcting (2007a: 156): "don't retranslate!" The task of the experiment was a revision of a translation into German of an official website text about the Dansk Sprognævn (Danish Language Council). The text contains about 40 errors on two pages. A bilingual student with German as her main language did exactly what Mossop warns against: she retranslated the text. When she realized that she had misunderstood the task, in a second try a week later she revised the same text. The results are surprising: in the retranslated text, she corrected 29 of the 40 errors, retained 9 errors, and in 2 cases inserted a new error. In the revised text, she could only correct 12 errors, i.e. she ignored 26 and made 2 new errors. As a reviser, she was not attentive to the errors of the translator and/or could not distance herself from the translator's proposals (see Figure 2). This seems to indicate that translation and revision must draw, at least partly, on different skills and competences (see Figures 4 and 5).

All four constellations were also found in the professionals' longitudinal study, both in 1997 and again in 2007 (see Table 2).

Table 2. Results of the second longitudinal study: Students in 1997 as experts in 2007

both tasks:   1997  2007
GG              4     5
PP              3     4
GP              5     1
PG              1     3
acceptable     15    15
total          28    28
6.3.1 Some observations

– In the group of five subjects who in 2007 have good results in both tasks, there are three bilinguals and two native speakers of Danish who translated into their L2. For one of the bilinguals Danish is his L1 and for the other two German is their L1.
– Three of the five above-mentioned subjects, who in 2007 have good results in both tasks, also proved to be good in both tasks in the 1997 experiments. It is interesting that one of the three is not bilingual – it is her L2 she revises and translates into. Of the two bilinguals who are in this group, one has Danish and the other German as their L1.
– As we train translation in both directions, all subjects were tested again in 2007 for translation from German into Danish. This experiment showed that the five above-mentioned subjects with good results in both tasks were also good translators in the other direction, which for three of them meant working into their L1.
– As to their present professions, two of the five subjects with good results in both tasks have worked for 10 years as professional translators in large companies; they can definitely be called "experienced translators". The other three today hold management positions. One of them has worked as a professional translator for some years; the other two have only translated occasionally. With respect to the question of "experience", it is interesting that the two experienced translators were already good at both tasks in 1997. As they were already competent 10 years ago, one could ask: what has "experience" contributed to their revision competence?
– The results show that there are professional translators (with 10 years' experience) who have good results for translating but poor results for the revision task, and vice versa.
– Eight bilinguals participated in the experiments in 2007. Four of them had the result acceptable in the translation task and good (1), acceptable (2) or poor (1) in the revision task. One of the bilinguals had poor results in both tasks. None of them work as translators today.

6.3.2 Corrections, wrong corrections and unnecessary changes

The revision task of the 28 experts (8 bilingual, 20 native Danish) gave the following results:

– corrections: the bilinguals corrected on average 11.5 of the 17 errors (max 16, min 6), i.e. 2.5 more errors than the native speakers, who on average corrected 9 errors (max 14, min 5);
– wrong corrections: bilinguals 0.25 on average, native speakers 0.2;
– unnecessary changes: bilinguals 2.25 each on average (max 5, min 1), native speakers 1.25 (max 3, min 0).

In this connection, it can be mentioned that the bilingual who inserted the most "unnecessary changes" (5) was nevertheless "good" at both the translation and the revision task, because she identified the errors and corrected them properly. During the experiment, before she starts revising, she immediately expresses her doubts: "how much can I correct without embarrassing the translator?" She explains that both revising and receiving corrections are sources of conflict between some of her colleagues. My general impression is that students and professionals revise texts according to the way their teachers have revised (or marked) their written translation
tasks. However, bilinguals may have a tendency not just to correct the most obvious errors but also to improve the text.

7. Discussion of the results

The two studies should be regarded as pilot studies with general, not domain-specific, texts. The subjects should have been able to complete the tasks easily. What can be concluded from these first results is that:

– the relationships between bilingualism/non-bilingualism, translation competence when translating into L1 or L2, and revision competence are complicated and deserve further investigation;
– not even experienced translators are automatically good revisers, and vice versa.

One important aspect of the revision process was not tested explicitly in my experiments, namely the ability to explain, classify and justify changes. This ability seems crucial if we think of the frequently mentioned problems of revision processes and the need to give and take feedback, and, connected with this, conflicts between colleagues because of over-revision or unnecessary changes (see section 4 of this article). However, the ability to classify, describe and explain phenomena comes from knowing or being aware of their existence. Knowing what to look for presumably supports the process of identifying errors.

8. Describing, explaining, and justifying changes: what can be done?

Mossop (2007a: 9) suggests that revision training could wait until after university studies and that such training should preferably be part of a practicum in a workplace. The results of my professionals' longitudinal study (1997-2007), however, show something else. As can be concluded from the interviews in 2007 with the experts at their workplaces, systematic revision training as part of the university curriculum can improve and facilitate the revision processes considerably. Furthermore, being able to explain and argue for the necessity of changes seems to prevent frustration and conflicts.
In the questionnaire and interviews, 10 of the 14 translation experts point out that the revision course at the CBS was important for their profession. As two of them say, for example: ”Vi kan gå lidt mere professionelt til det, uden at være krænkende”. (We can revise more professionally – nobody feels offended.)
”Det, at argumentere for rettelserne, hjælper til at gøre revisionerne professionel – gør at personlige konflikter opstår sjældnere end hos andre kollegaer”. (The ability to argue for the changes adds professionalism to the revisions, with the effect that conflicts with our colleagues occur less often than between other groups [where colleagues do not have the necessary tools to argue].)

”På grund af undervisningen er det meget hurtigere at forklare “Fehlerbündel” præcist”. (Because of the training, it is much quicker to explain complex errors precisely.) (My translations)

For “Fehlerbündel”, i.e. complex errors, see Hansen (1996: 156f). It can be concluded that the CBS Model and the classification of errors which we developed over the years are still useful 10 years later in professional environments, in Denmark and Sweden, and even for quality control in large companies in Germany.

8.1 The CBS Classification of errors
Several aspects have an impact on the classification, evaluation and grading of errors:

– The need for and the purpose of the classification, and, especially, the purpose of the grading of the errors. The issue here is whether the revision and the grading are mostly text- and client/reader-oriented or business-oriented (with the purpose of hiring and firing translators), a distinction made by Mossop (2007a: 118).
– Traditions, ethical rules, norms and standards concerning translation in the translators' countries, cultures and languages, and additionally the curricula of translator training, including social, political and cultural aspects. Traditions of language acquisition and training in genres, registers and terminology are also important.
– The environment in which the translation and revision processes are carried out, e.g. international organization, company, translation agency, translation bureau, freelance translator, or students' translations, and the kind and purpose of the translation task. Not all texts need full revision; sometimes less than full revision is perfectly acceptable.
– Typical text types that have to be translated: legal, technical or marketing texts, with or without TMS.
– The languages and language pairs involved.
The classification of errors (see appendix) can be used for all kinds of texts. It is a very general classification, and the types of errors can cover several subtypes. Our training has always been text- and client/reader-oriented, but it is an open classification and, if necessary, it may even be supplemented with aspects that make it business-oriented, for example by including the assessment of aspects of the translator's service, such as keeping deadlines or following style sheets. What seems to be really difficult is describing and, especially, grading good translations. Perhaps workplace frustrations could be avoided and the profession could become more attractive if successes were mentioned more often.

As to the language pair involved, the classification reflects the typical errors between German and Danish and vice versa. An attempt was made by Pavlović (2007) to apply this classification to the language pair Croatian-English. It worked well, though the typical errors had to be related differently to the description levels and units. The CBS Classification of errors was also tested in an investigation of differences between errors produced by human translation and errors produced by translation supported by TMS. The classification proved to be useful even though the errors are different, especially on the text-linguistic level, where segmentation, reference, co-reference, inconsistent terminology and wrongly expressed directive speech acts play tricks. The proposed matches do not always fit the context. On the semantic level, the translated terms or expressions may, for example, be either too general or too specific, and this creates logical problems with respect to coherence.

8.2 Who is a good reviser?
With the exception of small countries, where translators tend to work in both directions, professional translators usually translate into their mother tongue. They are often bilinguals, and sometimes translation is even equated with bilingualism. Bilingual translators, and often also those who translate into their mother tongue, may well lack the conceptual tools needed to justify decisions or changes. They may be able to translate automatically with a perfect result. In order to argue for and justify their decisions, however, they would need translation theory, terminology, and some knowledge of linguistics and of the stylistics of genre and register. As described in Hansen (2003: 33ff.), in translation process research with think-aloud protocols or retrospection, it is much easier for subjects to comment on their translations into their foreign language than on their translations into their mother tongue. The reason seems to be that they have learnt the grammatical system of the foreign language consciously and, in doing so, have also acquired the terminology to describe potential problems, changes and errors. The ideal reviser seems to be a competent (bilingual) translator who, in relation to his/her main language or mother tongue, has the awareness, knowledge and
theoretical background comparable to that of a non-bilingual translator who had to learn the target language the hard way. Here we may have one explanation of why good translators are not always good revisers.

9. Improvement, training and experience

Correcting errors, unwarranted omissions and unclear passages is one part of the revision process; according to Gile (2005: 53), the reviser, an experienced translator, also improves the translations. But as mentioned in section 4, too much improvement, which is sometimes regarded as unnecessary, causes irritation. Some results from the professionals' longitudinal study show that striking a balance between correcting and improving texts may be a question of empathy and experience, but that it is also a question of resources. As one of the experts in translation and revision puts it during the interview:

Man denkt immer, es geht noch besser, ich muss die perfekte Formulierung finden, ich muss den Text verschönern, denn man sieht ja die ganzen Möglichkeiten, die man hat. Das Problem ist aber, dass man keine Zeit dazu hat. Mit den Jahren, mit der Erfahrung und Routine habe ich gelernt, dass man einfach weiter muss, dass man schnelle Lösungen finden muss, die in Ordnung sind, statt immer etwas Besseres finden zu wollen. Die Erfahrung hat mir da sehr weitergeholfen alle brauchbaren Lösungen, die lesbar sind, zu akzeptieren. Der Leser weiß ja nicht, dass dort auch etwas anderes hätte stehen können – wenn es nur nicht falsch ist.

My translation: It is always as if it could be done better, as if I should find the perfect formulation, as if I should polish the text, because I can see all the possibilities for improvement. But the problem is that there is no time for that. Through experience and practice I have learnt that I simply have to move on, to find quick solutions which are acceptable, instead of always trying to find something which could be better.
Experience has helped me considerably to accept all usable solutions as long as they are correct and readable. The recipient does not know that there could have been something else in the text – as long as what is written is correct.
Here we see the effect of experience. The improvements which are sometimes regarded as “unnecessary changes” need to be investigated further. Improving a text may not always be lucrative and translators may sometimes feel offended by improvements that at first sight seem unnecessary. However, this need not imply that all changes for the better that go beyond correcting obvious errors are superfluous or that they evoke negative reactions. Attitudes to improvements are closely related to the purpose and situation of the revision procedure. For many recipients of revision, not only corrections of errors but all changes for the better are highly welcome and appreciated.
[Diagram: attitudes, abilities and skills grouped around "Translation Competence": knowledge of terminology, languages and cultures; empathy and loyalty; creativity (style, variation, alternatives); overview (coherence, logic, style, reference); courage; ability to abstract (interference); attentiveness (errors); accuracy (semantics, consistency, grammar, idiomaticity, style, facts); translation theories (pragmatics, brief, function, information, register, style, involved partners' needs); revision skills (pragmatics, brief, process, decisions); ability to take decisions; ability to use aids and resource persons; translation technology.]

Figure 4. Translation Competence Model (based on Hansen 2006: 27)
10. Conclusion

The general interest in explicit standards and norms, quality assessment and revision can be interpreted as an indicator of some imbalance, especially as this enormous interest in quality is accompanied by a constant economization of time and money. Translation competence and experience cannot automatically be equated with revision competence, nor do they necessarily correlate with bilingualism or being a native speaker of the target language. In Hansen (2006: 27), I investigated and defined the relation between typical errors and the translator's attitudes, qualifications, abilities, skills and competences; see the Translation Competence Model, Figure 4. As also illustrated by the brain models in section 5, translation revision seems to require additional skills, abilities and attitudes, and/or enhanced levels of competence in certain areas. In the following Revision Competence Model (Figure 5), revision competence is shown to be closely related to translation competence, but partly different.
[Diagram: attitudes, abilities and skills grouped around "Revision Competence": knowledge of terminology, languages and cultures; fairness and tolerance; empathy and loyalty; creativity (style, variation, alternatives); attentiveness (changes, improvements, classifications, gradings; errors, omissions, inconsistency); overview (coherence, logic, style); courage; accuracy (semantics, consistency, grammar, idiomaticity, style, facts); translation theories (pragmatics, briefs, function, information, register, style, involved partners' needs); ability to take decisions (pragmatics, brief, process, decisions); ability to abstract (interferences, proposals); ability to explain (argumentation, justification, clarification of changes); ability to use aids and resource persons; translation technology.]

Figure 5. Revision Competence Model
With respect to the necessary prerequisites of revision, which are attentiveness to pragmatic, linguistic and stylistic phenomena and errors, the ability to abstract or distance oneself from one's own and others' previous formulations, fairness, and explaining and arguing, these can be trained at universities, in separate Master's courses on revision. As can be seen from the interviews with the professional translators who were trained in revision procedures ten years ago, being familiar with revision processes makes it easier to give and take (constructive) criticism. Being well grounded in the theories, tools and procedures of revision is a good starting point for the profession; the rest can and must then be left to experience and practice.

References

Baldinger, K. 1966. "Sémantique et structure conceptuelle." Clex 8, I.
Brunette, L. 2000. "Towards a terminology for translation quality assessment: A comparison of TQA practices." The Translator 6 (2): 169–182.
Brunette, L., Gagnon, C. and Hine, J. 2005. "The GREVIS project: Revise or court calamity." Across Languages and Cultures 6 (1): 25–45.
Bühler, K. 1934; 3rd ed. 1982. Sprachtheorie. Die Darstellungsfunktion der Sprache. Stuttgart: Fischer.
Gile, D. 1985. "La sensibilité aux écarts de langue et la sélection d'informateurs dans l'analyse d'erreurs: une expérience." The Incorporated Linguist 24 (1): 29–32.
Gile, D. 1994. "Methodological aspects of interpretation and translation research." In Bridging the Gap, S. Lambert and B. Moser-Mercer (eds), 39–56. Amsterdam: John Benjamins.
Gile, D. 1995a. "Fidelity assessment in consecutive interpretation: An experiment." Target 7 (1): 151–164.
Gile, D. 1995b. Basic Concepts and Models for Interpreter and Translator Training. Amsterdam/Philadelphia: John Benjamins.
Gile, D. 1999a. "Variability in the perception of fidelity in simultaneous interpretation." Hermes 22: 51–79.
Gile, D. 1999b. "Testing the Effort Models' tightrope hypothesis in simultaneous interpreting – a contribution." Hermes 23: 153–173.
Gile, D. 2005. La traduction. La comprendre, l'apprendre. Paris: Presses Universitaires de France.
Hansen, G. 1996. "Übersetzungskritik in der Übersetzerausbildung." In Übersetzerische Kompetenz, A. G. Kelletat (ed.), 151–164. Tübingen: Peter Lang.
Hansen, G. 1999. "Das kritische Bewusstsein beim Übersetzen." Copenhagen Studies in Language (CSL) 24: 43–66.
Hansen, G. 2003. "Controlling the process. Theoretical and methodological reflections on research in translation processes." In Triangulating Translation, F. Alves (ed.), 25–42. Amsterdam/Philadelphia: John Benjamins.
Hansen, G. 2006. Erfolgreich Übersetzen. Entdecken und Beheben von Störquellen. Tübingen: Gunter Narr.
Hansen, G. 2007. "Ein Fehler ist ein Fehler, oder …? Der Bewertungsprozess in der Übersetzungsprozessforschung." In Quo vadis Translatologie? G. Wotjak (ed.), 115–131. Berlin: Frank & Timme.
Hansen, G. 2008.
“Sense and stylistic sensibility in translation processes.” In Profession Traduc teur, D. Gile, C. Laplace and M. Lederer (eds). Paris: Lettres modernes minard. Forthcoming. Heger, K. 1971. Monem, Wort und Satz. Tübingen: Max Niemeyer. Hjelmslev, L. 1943, 19662. Omkring Sprogteoriens Grundlæggelse. København: Gyldendal. Gerzymisch-Arbogast, H. 2001. “Equivalence parameters and evaluation.” Meta 46 (2): 327–342. Hönig, H.G. Konstruktives Übersetzen. Tübingen: Stauffenburg. House, J. 1997. Translation Quality Assessment: A Model Revisited. Tübingen: Narr. Koller, W. (1979, 20016): Einführung in die Übersetzungswissenschaft. Heidelberg: Quelle & Meyer. Krings, H. 2001. Repairing Texts. Kent, Ohio: Kent State University Press. Künzli, A. 2007a. “Translation revision.” In Doubts and Directions in Translation Studies, Y.Gambier, M. Shlesinger and R. Stolze (eds), 115–126. Amsterdam/Philadelphia: John Benjamins. Künzli, A. 2007b. “The ethical dimension of translation revision. An empirical study.” JoSTrans 8: 42–56.
Quality management in translation and revision

Mertin, E. 2006. Prozessorientiertes Qualitätsmanagement im Dienstleistungsbereich Übersetzen. Leipzig: Peter Lang.
Mossop, B. 2007a². Editing and Revising for Translators. Manchester: St. Jerome.
Mossop, B. 2007b. "Empirical studies of revision: What we know and need to know." JoSTrans 8: 5–20.
Nord, C. 1998. "Transparenz der Korrektur." In Handbuch Translation, M. Snell-Hornby, H.G. Hönig, P. Kussmaul and P.A. Schmitt (eds), 374–387. Tübingen: Stauffenburg.
Pavlović, N. 2007. Directionality in Collaborative Translation Processes. A Study of Novice Translators. PhD thesis. Tarragona: Universitat Rovira i Virgili.
Reiss, K. Möglichkeiten und Grenzen der Übersetzungskritik. München: Hueber.
Reiss, K. and Vermeer, H.J. 1984. Grundlegung einer allgemeinen Translationstheorie. Tübingen: Niemeyer.
Schmitt, P.A. 1998. "Qualitätsmanagement." In Handbuch Translation, M. Snell-Hornby, H.G. Hönig, P. Kussmaul and P.A. Schmitt (eds), 394–399. Tübingen: Stauffenburg.
Schmitt, P.A. 2001. Evaluierungskriterien für Fachübersetzungsklausuren. Tübingen: Narr.
Stolze, R. 1997. "Bewertungskriterien für Übersetzungen – Praxis, Didaktik, Qualitätsmanagement." In Translationsdidaktik, E. Fleischmann, W. Kutz and P.A. Schmitt (eds), 593–602. Tübingen: Narr.
Vermeer, H.J. 1996. A Skopos Theory of Translation. Heidelberg: TEXTconTEXT.
Appendix

1. CBS Model

Analysis ST

Text-external
Situation
– time, place
– sender, receiver
– medium
– occasion
– text function

Text-internal
Content
– subject matter
– message
– meaning, sense

Form/style
– layout/illustrations
– composition
– syntax
– intonation, rhythm
– diction, wording
– stylistic devices

Conclusion ST:
What is said and how? Message and stylistic devices?

Strategies ST to TT

Situation of ST and TT identical or not identical? Are there differences
– as to the receiver(s)?
– as to the function?

ST – content: degree of speciality
– general phenomena?
– partly general phenomena?
– SL-specific phenomena?

TT: necessary changes?
– additional information needed?
– reduction possible/necessary?
– compensation possible/needed?

ST – form and style
– as to: word, phrase, sentence, passage, text, layout, illustrations

TT: necessary changes as to form?
– adaptation necessary, e.g.
– caused by norms and conventions?
– caused by formal/stylistic constraints?

Revision TT

Evaluation of TT
Adequacy as to:
– situation
– function
– TT receiver(s)
Adequacy as to:
– content, message
– form and style

Correction of errors
Explaining errors as to:
– pragmatics
– text-linguistics
– semantics
– lexicon
– terminology
– idiomatics
– style
– syntax/word order
– morphology
– facts
– orthography
– omissions, unwarranted
– interference
– …

Figure 6. CBS Model
2. CBS Classification of errors

Classification of errors in relation to the affected units and levels of linguistic and stylistic description

Pragmatic errors (pragm) – misinterpretation of the translation brief and/or the communication situation, e.g.:
– Misunderstanding of the translation brief: wrong translation type (e.g. documentary-informative translation instead of communicative-instrumental translation, often a deixis problem)
– Not adapting the TT to the target text receiver, the TT function and the communication situation: lack of important information, unwarranted omission of ST units (omis), or too much information in relation to the ST and/or the TT receiver's needs, e.g. dispensable explanations (disp).
– Disregarding norms and conventions, e.g. as to genre, style, register, abbreviations.

Text-linguistic errors – violation of the semantic, logical or stylistic coherence:
– Incoherent text: not semantically logical, often caused by wrong connectors or particles (sem.log)
– Wrong or vague reference to phenomena, e.g. wrong pronouns, or wrong usage of articles (ref)
– Temporal cohesion not clear (tense)
– Wrong category, e.g. indicative instead of subjunctive mood, active instead of passive voice (cat)
– Wrong modality, e.g. via inappropriate modal particles or negations (mod)
– Wrong information structure, often caused by word-order problems (word order)
– Unmotivated change of style (change of style).

Semantic (lexical) errors (sem): wrong choice of words or phrases.
Idiomatic errors (idiom): words and phrases that are semantically correct but would not be used in an analogous context in the TL.
Stylistic errors (style): wrong choice of stylistic level, stylistic elements and stylistic devices.
Morphological errors – also "morpho-syntactical errors" (msyn): wrong word structure, or wrong number, gender or case, etc.
Syntactical errors (syn): wrong sentence structure.
Facts wrong (facts): errors as to figures, dates, names, abbreviations, etc.

Classification of errors in relation to the cause "interference" or "false cognates"

Interference is regarded as a projection of unwanted features from one language to the other. These errors are based on an assumption of symmetry between the languages which holds in some cases, but not in the case in question. Several levels and units of linguistic and stylistic description are affected, and the errors can also be characterized as, for example, pragmatic, text-linguistic, lexical-semantic, syntactic or stylistic errors. For the language pair German and Danish, the following kinds of interference prevail:

Lexical interference (int-lex): words and phrases are transferred from SL to TL. This is especially often the case with prepositions.
Syntactic interference (int-str): the sentence structure or the word order is transferred.
Text-semantic interference (int-ref): the use of, for example, pronouns and articles is transferred.
Cultural interference (int-cult): culture-specific phenomena are transferred.
Publications by Daniel Gile

1979
« Bilinguisme interférence et traducteurs. » Traduire 98. (No page numbers.)

1981
« Les informateurs et la recherche. » Traduire 106 (I): 17–18.

1982
a. « L’enseignement de la traduction japonais-français à l’INALCO. » Traduire 109 (I): 23–25.
b. « Fidélité et littéralité dans la traduction: un modèle pédagogique. » Babel 28 (1): 34–36.
c. « Initiation à l’interprétation consécutive à l’INALCO. » Meta 27 (3): 347–351.

1983
a. « L’enseignement de l’interprétation: utilisation des exercices unilingues en début d’apprentissage. » Traduire 113 (I): 7–12.
b. « La traduction et l’interprétation en Océanie. » Meta 28 (1): 17–19.
c. « Initiation à la traduction scientifique et technique japonais-français à l’INALCO: la recherche d’une optimisation des méthodes. » Bulletin des anciens élèves de l’INALCO: 19–28.
d. « Aspects méthodologiques de l’évaluation de la qualité du travail en interprétation simultanée. » Meta 28 (3): 236–243.
e. « Les petits lexiques informatisés: quelques réflexions. » Bulletin de l’AIIC 11 (3): 56–59.
f. « Des difficultés de langue en interprétation simultanée. » Traduire 117: 2–8.

1984
a. « Des difficultés de la transmission informationnelle en interprétation simultanée. » Babel 30 (1): 18–25.
b. « Les noms propres en interprétation simultanée. » Multilingua 3 (2): 79–85.
c. La formation aux métiers de la traduction japonais-français: problèmes et méthodes. Doctoral dissertation, INALCO, Université Paris III.
d. « La recherche terminologique dans la traduction scientifique et technique japonais-français: une synthèse. » Meta 29 (3): 285–291.
Efforts and Models in Interpreting and Translation Research
e. « La recherche: pourquoi et comment. » Bulletin de l’AIIC 12 (2): 43–44.
f. « Le français des apprentis traducteurs: l’exemple des étudiants de l’INALCO. » Traduire 121: 5–11.
g. « La logique du japonais et la traduction: un exemple. » Contrastes 9: 63–77.

1985
a. « L’anticipation en interprétation simultanée. » Le Linguiste/de Taalkundige 1–2.
b. « La sensibilité aux écarts de langue et la sélection d’informateurs dans l’analyse d’erreurs: une expérience. » The Incorporated Linguist 24 (1): 29–32.
c. « Théorie, modélisation et recherche dans la formation aux métiers de la traduction. » Lebende Sprachen 24 (1): 15–19.
d. « La logique du japonais et la traduction des textes non littéraires: une présentation du problème. » Babel 31 (2): 86–93.
e. « Le modèle d’efforts et l’équilibre d’interprétation en interprétation simultanée. » Meta 30 (1) (special issue on conference interpreting): 44–48.
f. « De l’idée à l’énoncé: une expérience et son exploitation pédagogique dans la formation des traducteurs. » Meta 30 (2): 139–147.
g. “Goyakuakuyaku no byori (book review).” Meta 30 (2): 177–179.
h. « L’analyse dans la traduction humaine. » Proceedings of COGNITIVA 85, Paris: 63–67.
i. « Les termes techniques en interprétation simultanée. » Meta 30 (3): 199–210.
j. « Publications sur l’interprétation. » Bulletin de l’AIIC 13 (2): 18–19.
k. « L’interprétation de conférence et la connaissance des langues: quelques réflexions. » Meta 30 (4): 320–331.

1986
a. « L’exercice d’Interprétation-Démonstration de sensibilisation unilingue dans l’enseignement de l’interprétation consécutive. » Lebende Sprachen 31 (1): 16–18.
b. « L’anticipation en interprétation simultanée – première partie. » Traduire 127: 20–23.
c. « East is East and West is West? Impressions japonaises. » AIIC Newsletter 6: 16–18.
d. « La traduction médicale doit-elle être réservée aux seuls médecins? » Meta 31 (1): 26–30.
e. « Le travail terminologique en interprétation de conférence. » Multilingua 5 (1): 31–36.
f. « L’anticipation en interprétation simultanée – deuxième partie. » Traduire 128: 19–23.
g. « L’enseignement de la recherche terminologique dans la traduction japonais-français. » Dans TERMIA 84, Actes du colloque international de terminologie tenu à Luxembourg du 27 au 29 août 1984, G. Rondeau et J.C. Sager (dir), 177–182.
h. “JAT Language Status Questionnaire Report.” JAT Bulletin 17.
i. « Traduction et interprétation: deux facettes d’une même fonction. » The Linguist 3.
j. “A brief presentation of translation journals.” JAT Bulletin 18.
k. “Observations on the nature of professional translation as an act of communication (Module 1).” JAT Bulletin 20.
l. « La reconnaissance des kango dans la perception du discours japonais. » Lingua 70 (2–3): 171–189.
m. « La compréhension des énoncés spécialisés chez le traducteur: quelques réflexions. » Meta 31 (4): 363–369.
n. “The nature of quality in professional translation (Module 2).” JAT Bulletin 21.

1987
a. “Fidelity in the translation of informative texts (Module 3).” JAT Bulletin 22.
b. “An operational model of translation (Module 4).” JAT Bulletin 24.
c. “Understanding technical texts for translation purposes (Module 5).” JAT Bulletin 24.
d. « Der Einfluss der Übung und Konzentration auf simultanes Sprechen und Hören – une thèse scientifique sur l’interprétation (review). » Bulletin de l’AIIC XV (1): 21.
e. « Bibliographie de l’interprétation: compléments juin 85 – décembre 86. » Bulletin de l’AIIC XV (1): 40.
f. “Terminological sources in technical and scientific translation (Module 6).” JAT Bulletin 25.
g. “Learning technical translation (Module 7).” JAT Bulletin 26.
h. “Theory and research in translation (Module 8).” JAT Bulletin 27.
i. “Interpretation research and its contribution to translation research (Module 9).” JAT Bulletin 28.
j. « La terminotique en interprétation de conférence: un potentiel à exploiter. » Traduire 132: 25–30 & Meta 32 (2): 164–169.
k. “An overview of the characteristics of translation from Japanese into Western languages (Module 10).” JAT Bulletin 29.
l. “Language and message in Japanese – an example (Module 11).” JAT Bulletin 30.
m. “The ‘logic’ of Japanese – Words (Module 12).” JAT Bulletin 31.
n. “The block analysis method (Module 13).” JAT Bulletin 32.
o. “Advancing towards a good comprehension of Japanese: A long marathon (Module 14).” JAT Bulletin 33.
p. « Les exercices d’interprétation et la dégradation du français: une étude de cas. » Meta 32 (4): 420–428.

1988
a. La traduction et l’interprétation au Japon. Meta 33 (1) (numéro spécial dirigé par D. Gile).
b. « Observations sur l’enrichissement lexical dans la progression vers un japonais langue passive pour l’interprétation de conférence. » Meta 33 (1): 79–89.
c. « L’enseignement de la traduction japonais-français: une formation à l’analyse. » Meta 33 (1): 13–21.
d. « Les publications japonaises sur la traduction: un aperçu. » Meta 33 (1): 115–126.
e. « La connaissance des langues passives chez le traducteur scientifique et technique. » Traduire 136: 9–15.
f. « Relay Interpretation: An Exploratory Study – une étude de Jennifer Mackintosh (review). » Bulletin de l’AIIC XVI (2): 16–17.
g. « Le partage de l’attention et le ‘Modèle d’efforts’ en interprétation simultanée. » The Interpreters’ Newsletter 1 (Scuola Superiore di Lingue Moderne per Interpreti e Traduttori, Università degli studi di Trieste): 4–22.
h. “Tsuyaku: eikaiwa kara dojitsuyaku made (review).” The Interpreters’ Newsletter 1: 49–53.
i. “A lexical characterization of translators and interpreters (Part 1).” JAT Bulletin 42.
j. “A lexical characterization of translators and interpreters (Part 2).” JAT Bulletin 43.
k. “Japanese Logic and the training of translators.” In Languages at Crossroads. Proceedings of the 29th Annual Conference of the American Translators’ Association, October 12–16 (Seattle), D. Hammond (ed.), 257–264. Medford, NJ: Learned Information.
l. “An overview of Conference Interpretation Research and Theory.” In Languages at Crossroads. Proceedings of the 29th Annual Conference of the American Translators’ Association, October 12–16 (Seattle), D. Hammond (ed.), 363–372. Medford, NJ: Learned Information.

1989
a. « Perspectives de la recherche dans l’enseignement de l’interprétation. » In The Theoretical and Practical Aspects of Teaching Conference Interpretation, L. Gran and J. Dodds (eds), 27–33. Udine: Campanotto Editore.
b. La communication linguistique en réunion multilingue: les difficultés de la transmission informationnelle en interprétation simultanée. Doctoral dissertation, Université de la Sorbonne Nouvelle, Paris, 484 pages.
c. « Simultaneous Interpretation: Contextual and Translation Aspects – un travail expérimental de Linda Anderson (review). » The Interpreters’ Newsletter 2: 70–72.
d. « Bibliographie de l’interprétation auprès des tribunaux. » Parallèles 11: 105–111.
e. « Les flux d’information dans les réunions interlinguistiques et l’interprétation de conférence: premières observations. » Meta 34 (4): 649–660.

1990
a. “Scientific research vs. personal theories in the investigation of Interpretation.” In Aspects of Applied and Experimental Research on Conference Interpretation, L. Gran and C. Taylor (eds), 28–41. Udine: Campanotto.
b. “Interpretation research projects for interpreters.” In Aspects of Applied and Experimental Research on Conference Interpretation, L. Gran and C. Taylor (eds), 226–236. Udine: Campanotto.
c. « Les ordinateurs portatifs: situation et perspectives. » Bulletin de l’AIIC XVIII (1): 15–16.
d. « La traduction et l’interprétation comme révélateurs des mécanismes de production et de compréhension du discours. » Meta 35 (1): 20–30.
e. Basic Concepts and Models for Conference Interpretation Training. First version, unpublished monograph, 92 pages.
f. “Issues in the training of Japanese-French conference interpreters.” Forum 6: 5–7.
g. « L’évaluation de la qualité du travail par les délégués: une étude de cas. » The Interpreters’ Newsletter 3: 66–71.
h. “Miriam Shlesinger – Simultaneous Interpretation as a factor in effecting shifts in the position of texts on the oral-literate continuum (review).” The Interpreters’ Newsletter 3: 118–119.
i. « Les articles sur l’interprétation dans Parallèles (review). » The Interpreters’ Newsletter 3: 119–120.
j. « Actes du colloque de Trieste sur l’enseignement de l’interprétation (book review). » Meta 35 (4): 578–585.
k. “Review: The theoretical and practical aspects of teaching conference interpretation.” Meta 35 (4): 782–783.
1991
a. « Prise de notes et attention en début d’apprentissage de l’interprétation consécutive – une expérience-démonstration de sensibilisation. » Meta 36 (2–3): 431–439.
b. Guide de l’interprétation à l’usage des organisateurs de conférences. Brochure, Paris, Premier ministre, délégation générale à la langue française et Ministère des affaires étrangères – ministère de la francophonie, 24 pages.
c. « Enseigner l’interprétation à l’ISIT. » Revue de l’Institut Catholique de Paris 38: 139–147. (Co-author: Françoise de Dax.)
d. “The processing capacity issue in conference interpretation.” Babel 37 (1): 15–27.
e. « La radiodiffusion sur ondes courtes et l’interprète de conférence. » Meta 36 (4): 578–585.
f. “Compte rendu: ATA – Scholarly Monograph Series, Vol. IV.” Meta 36 (4): 662–663.
g. “A communication-oriented analysis of quality in non literary translation and interpretation.” In Translation: Theory and Practice. Tension and Interdependence, M.E. Larson (ed.), 188–200. Binghamton: American Translators Association Scholarly Monograph Series, Vol. V, State University of New York.
h. “Methodological aspects of interpretation (and translation) research.” Target 3 (2): 153–174.

1992
a. “Ruth Morris: The impact of interpretation on the role performance of participants in Legal Proceedings (review).” The Interpreters’ Newsletter 4: 87–89.
b. “Predictable sentence endings in Japanese and Conference Interpretation.” The Interpreters’ Newsletter, Special Issue 1: 12–23.
c. “Haruhiro Fukuii et Tasuka Asano, Eigotsuyaku no jissai: An English Interpreter’s Manual (book review).” The Interpreters’ Newsletter, Special Issue 1: 69–71.
d. “The Quarterly Journal of the Interpreting Research Association of Japan (book review).” The Interpreters’ Newsletter, Special Issue 1: 71–72.
e. « Les fautes de traduction: une analyse pédagogique. » Meta 37 (2): 251–262.
f. “Basic theoretical components in translator and interpreter training.” In Teaching Translation and Interpreting, C. Dollerup and A. Loddegaard (eds), 185–193. Amsterdam/Philadelphia: John Benjamins.
g. « Pour que les écoles de traduction universitaires soient vraiment utiles. » Turjuman (Tanger) 1 (1): 63–74, et Rivista internazionale di tecnica della traduzione, numero 0, SSLM, Campanotto Editore: 7–14.
h. « La transformation lexicale comme indicateur de l’analyse dans l’enseignement de la traduction du japonais. » Meta 37 (3): 397–407.
i. “Book review: Shoichi Watanabe – Research on Interpretation Training Methodology as a Part of Foreign Language Training. Report on a project funded by the Japanese Ministry of Education, published in March 1991 by the laboratory of Prof. Kazuyuki Matsuo, Faculty of Foreign Studies, Sophia University, Tokyo (book review in English, book in Japanese).” Parallèles 14: 117–120.
j. “Book review: Sonja Tirkkonen-Condit (ed.), Empirical Research in Translation and Intercultural Studies: Selected Papers of the TRANSIF Seminar, Savonlinna.” Target 4 (2): 250–253.

1993
a. “Translation/Interpretation and Knowledge.” In Translation and Knowledge, Y. Gambier and J. Tommola (eds), 67–86. SSOTT IV, University of Turku: Centre for Translation and Interpreting.
b. « Les sublaptops en cabine: l’exemple du Pocket PC. » Bulletin de l’AIIC 21 (3): 76–78.
c. « Compte rendu de la thèse de B. Strolz. » The Interpreters’ Newsletter 5: 107–109.
d. « Les outils documentaires du traducteur. » Palimpsestes 8: 73–89. (Paris: Presses de la Sorbonne Nouvelle.)
e. “Using the Effort Models of Conference Interpretation in the classroom.” Folia Translatologica 2: 135–144. Charles University of Prague. (Proceedings of the Prague Conference: Translation Strategies and Effects in Cross-Cultural Value Transfer and Shifts, Prague, 20–22 October 1992.)

1994
a. “Opening up in interpretation studies.” In Translation Studies: An Interdiscipline, M. Snell-Hornby, F. Pöchhacker and K. Kaindl (eds), 149–158. Amsterdam/Philadelphia: John Benjamins.
b. « La disponibilité lexicale et l’enseignement du vocabulaire japonais. » Cipango, numéro spécial: Mélanges offerts à René Sieffert, à l’occasion de son soixante-dixième anniversaire, Publications Langues ’O, Paris: 315–331.
c. “The process-oriented approach in translation training.” In Teaching Translation and Interpreting 2, C. Dollerup and A. Lindegaard (eds), 107–112. Amsterdam/Philadelphia: John Benjamins.
d. “Methodological aspects of interpretation and translation research.” In Bridging the Gap: Empirical Research in Simultaneous Interpretation, S. Lambert and B. Moser-Mercer (eds), 39–56. Amsterdam/Philadelphia: John Benjamins.
1995
a. Basic Concepts and Models for Interpreter and Translator Training. Amsterdam/Philadelphia: John Benjamins.
b. Regards sur la recherche en interprétation de conférence. Lille: Presses universitaires de Lille.
c. “Interpretation research: A new impetus?” Hermes 14: 15–29.
d. « La lecture critique en traductologie. » Meta 40 (1): 5–14.
e. « La recherche empirique sur l’interprétation de conférence: une analyse des tendances de fond. » TTR 8 (1): 201–228.
f. “Fidelity assessment in consecutive interpretation: An experiment.” Target 7 (1): 151–164.
g. Special issue: Interpreting Research. Target 7 (1).
h. “Review of Pöchhacker, Franz. Simultandolmetschen als komplexes Handeln.” Target 7 (1): 185–188.

1996
a. « La formation à la recherche traductologique et le concept CERA Chair. » Meta 41 (3): 486–490.

1997
a. “Conference Interpreting as a cognitive management problem.” In Cognitive Processes in Translation and Interpreting, J.E. Danks, G.M. Shreve, S.B. Fountain and M.K. McBeath (eds), 196–214. Thousand Oaks/London/New Delhi: Sage Publications.
b. Conference Interpreting: Current Trends in Research, Y. Gambier, D. Gile and Ch. Taylor (eds). Amsterdam/Philadelphia: John Benjamins.
c. “Methodology: Round table report.” In Y. Gambier, D. Gile and Ch. Taylor (eds), 109–122. Co-author: Ingrid Kurz.
d. “Postscript.” In Y. Gambier, D. Gile and Ch. Taylor (eds), 207–212.
e. “Interpretation research policy.” With F. Pöchhacker, J. Tommola, S. Lambert, I. Cenková and M. Kondo. In Y. Gambier, D. Gile and Ch. Taylor (eds), 69–88.
f. “Interpretation Research: Realistic expectations.” In Transferre Necesse Est. Proceedings of the 2nd International Conference on Current Trends in Studies of Translation and Interpreting, 5–7 September 1996, Budapest, Hungary, K. Klaudy and J. Kohn (eds), 43–51. Budapest: Scholastica.
g. “Realista elképzelések a tolmacsolas kutatasanak jövojerol (Hungarian translation of ‘Interpretation Research: Realistic Expectations’).” Modern Nyelvoktatas III (1–2): 65–75.
h. “EST Focus: Report on research training issues.” In Translation as Intercultural Communication, M. Snell-Hornby, Z. Jettmarová and K. Kaindl (eds), 339–350. Amsterdam/Philadelphia: John Benjamins.

1998
a. “Conference and Simultaneous Interpreting.” In Routledge Encyclopedia of Translation Studies, M. Baker (ed.), 40–45. London and New York: Routledge.
b. “Flexibility and modularity in translator and interpreter training.” In Language Facilitation and Development in Southern Africa. Papers presented at an International Forum for Language Workers on 6–7 June 1997, A. Kruger, K. Wallmach and M. Boers (eds), 81–86. Pretoria: South African Translators’ Institute.
c. “Norms in research on Conference Interpreting: A response to Theo Hermans and Gideon Toury.” Multilingual Matters. Current Issues in Language and Society 5 (1–2): 99–106.
d. “Funcions e rendementos dos modelos na investigation sobre interpretacions” (« Fonctions et performances des modèles dans la recherche sur l’interprétation »). Viceversa 4: 11–23.
e. “Observational studies and experimental studies in the investigation of Conference Interpreting.” Target 10 (1): 69–93.

1999
a. “Variability in the perception of fidelity in simultaneous interpretation.” Hermes 22: 51–79.
b. Wege der Übersetzungs- und Dolmetschforschung (co-editor with H. Gerzymisch-Arbogast, J. House and A. Rothkegel). Tübingen: Narr.
c. “Internationalization and institutionalization as promoters of interpretation research.” In Gerzymisch-Arbogast, Gile, House and Rothkegel (eds), 167–178.
d. “Prospects and challenges of interdisciplinarity in research on conference interpreting.” In Proceedings of the 2nd International Conference on Translation Studies, National Taiwan Normal University, 87–94. GITI.
e. “Testing the Effort Models’ tightrope hypothesis in simultaneous interpreting – a contribution.” Hermes 23: 153–172.
f. “Review of Kurz, Ingrid. 1996. Simultandolmetschen als Gegenstand der interdisziplinären Forschung. Wien: WUV-Universitätsverlag.” Target 11 (1): 175–178.
g. “Use and misuse of the literature in interpreting research.” The Interpreters’ Newsletter 9: 29–43.
h. “Doorstep interdisciplinarity in Conference Interpreting Research.” In Anovar/anosar. Estudios de traducción e interpretación, A. Alvarez Lugris and A. Fernandez Ocampo (eds), 41–52. Vigo: Universidade de Vigo.

2000
a. “The history of research into Conference Interpreting: A scientometric approach.” Target 12 (2): 299–323.
b. “Opportunities in Conference Interpreting Research.” In Investigating Translation. Selected Papers from the 4th International Congress on Translation, Barcelona 1998, A. Beeby, D. Ensinger and M. Presas (eds), 77–89. Amsterdam/Philadelphia: John Benjamins.
c. “Issues in interdisciplinary research into Conference Interpreting.” In Language Processing and Simultaneous Interpreting, B. Englund Dimitrova and K. Hyltenstam (eds), 89–106. Amsterdam/Philadelphia: John Benjamins.

2001
a. “Useful research for Students in T&I Institutions.” Hermes 26: 97–117.
b. “Interpreting Research: What you never wanted to ask but may like to know.” Communicate 11 (http://www.aiic.net/ViewPage.cfm).
c. “The role of consecutive in interpreter training: A cognitive view.” Communicate 14 (http://www.aiic.net/ViewPage.cfm).
d. « Les clichés et leurs cousins dans la formation des traducteurs. » Palimpsestes 13: 65–80.
e. Getting Started in Interpreting Research. Amsterdam/Philadelphia: John Benjamins (co-edited with H. Dam, F. Dubslaff, B. Martinsen and A. Schjoldager).
f. “Conclusion: Issues and prospects.” In Gile et al. (2001), Getting Started in Interpreting Research, 233–240.
g. “Selecting a topic for PhD research in interpreting.” In Gile et al. (2001), Getting Started in Interpreting Research, 1–21.
h. “Critical reading in (interpretation) research.” In Gile et al. (2001), Getting Started in Interpreting Research, 23–38.
i. « L’évaluation de la qualité de l’interprétation en cours de formation. » Meta 46 (2): 379–393.
j. “Consecutive vs. Simultaneous: Which is more accurate?” Tsuuyakukenkyuu – Interpretation Studies 1 (1): 8–20.
k. “Being constructive about shared ground.” Target 13 (1): 149–153.
l. “Review of Setton, Robin. 1999. Simultaneous Interpretation: A cognitive-pragmatic analysis. Amsterdam/Philadelphia: John Benjamins.” Target 13 (1): 177–183.
2002
a. “Decision-making in professional translation and interpreting.” In Current Status of Translation Education. International Conference on Translation Studies, 17–33. Seoul, Korea: Sookmyung Women’s University.
b. “Recent trends in research into Conference Interpreting.” In Multidisciplinary Aspects of Translation and Interpretation Studies. Proceedings of the 2nd International Conference on Translation and Interpretation Studies, 3–29. Seoul, Korea.
c. “Training and Research in Conference Interpreting.” Conference Interpretation and Translation 4 (1): 7–24.
d. “Interpreting: More than an intercultural communication phenomenon.” Rikkyo Journal of Intercultural Communication Studies 1: 61–73.
e. “The interpreter’s preparation for technical conferences: Methodological questions in investigating the topic.” Conference Interpretation and Translation 4 (2): 7–27.
f. “Corpus studies and other animals.” Target 14 (2): 361–363.
g. « La qualité de l’interprétation de conférence: une synthèse des travaux empiriques. » (Avec Á. Collados Aís.) In Recent Research into Interpreting: New Methods, Concepts and Trends (in Chinese), Cai ShiaoHong (ed.), 312–326. Hong Kong: Maison d’éditions Quaille.
h. “Conference Interpreting as a cognitive management problem.” In The Interpreting Studies Reader, F. Pöchhacker and M. Shlesinger (eds), 165–176. Amsterdam/Philadelphia: John Benjamins.

2003
a. “Cognitive investigation into conference interpreting: Features and trends.” In Avances en la investigación sobre interpretación, Á. Collados Aís and J.A. Sabio Pinilla (eds), 1–27. Granada: Comares.
b. Á. Collados Aís, M. Fernández Sánchez and D. Gile (eds). La evaluación de la calidad en interpretación: investigación. Granada: Comares.
c. “Quality assessment in conference interpreting: Methodological issues.” In Collados Aís, Fernández Sánchez and Gile (eds), 109–123.
d. “Justifying the deverbalization approach in the interpreting and translation classroom.” Forum 1 (2): 47–63.
e. “Review of Pöchhacker, Franz. 2000. Dolmetschen: Konzeptuelle Grundlagen und deskriptive Untersuchungen.” Target 15 (1): 169–172.

2004
a. With E. Alikina. “Trudnosti ustnogo posledovatel’nogo perevoda: kognitinyj aspect” (The difficulties of consecutive interpreting: Cognitive aspects). In Obuchenie inostrannym jazykam kak sredstvu mezhkulturnoj kommunikatsii i professional’noj dekatel’nosti (Learning foreign languages as a means of intercultural communication and professional activity). Proceedings of the international conference, T. Serova (ed.), 300–304. Perm: Perm State Technical University Press.
b. With G. Hansen and K. Malmkjær (eds). Claims, Changes and Challenges in Translation Studies. Amsterdam/Philadelphia: John Benjamins.
c. With G. Hansen. “The editorial process through the looking glass.” In Hansen, Malmkjær and Gile (eds), 297–306.
d. “Integrated Problem and Decision Reporting as a translator training tool.” The Journal of Specialised Translation 2: 2–20. (http://www.jostrans.org).
e. “Review of Zubaida Ibrahim’s doctoral dissertation Court Interpreting in Malaysia. 2002.” The Journal of Specialised Translation 2: 115–120. (http://www.jostrans.org).
f. “Translation Research versus Interpreting Research: Kinship, differences and prospects for partnership.” In Translation Research and Interpreting Research. Traditions, Gaps and Synergies, Ch. Schäffner (ed.), 10–34. Clevedon, Buffalo, Toronto: Multilingual Matters.
g. “Responses to the Invited Papers.” In Ch. Schäffner (ed.), 124–127.
h. With M. Rogers, P. Newmark, J. Fraser, A. Pearce, G. Anderman and P. Zlateva. “The Debate.” In Ch. Schäffner (ed.), 35–48.
i. “Issues in research into conference interpreting.” In Übersetzung Translation Traduction. Ein internationales Handbuch zur Übersetzungsforschung. An International Encyclopedia of Translation Studies. Encyclopédie internationale de la recherche sur la traduction, H. Kittel, A.P. Frank, N. Greiner, T. Hermans, W. Koller, J. Lambert and F. Paul (eds), 767–779. Berlin/New York: Walter de Gruyter.
j. “Review of Roberts et al. 2000. Critical Link 2.” Target 16 (1): 162–166.
k. “Review of Hatim, Basil. Teaching and Researching Translation.” Target 16 (1): 172–176.
l. “Review of Israël, Fortunato (ed.) 2002. Identité, altérité, équivalence? La traduction comme relation.” Target 16 (2): 379–383.
m. “Review of Garzone & Viezzi (eds) 2002. Interpreting in the 21st century: Challenges and opportunities.” Target 16 (2): 389–393.
n. “Review: The Interpreting Studies Reader.” JoSTrans 1 (2): 126–127.
o. “Review: Perspectives on Interpreting.” Interpreting 6 (2): 235–238.
2005 a. La traduction. La comprendre, l’apprendre. Paris: Presses Universitaires de France (PUF).
Publications by Daniel Gile
b. “Teaching conference interpreting: A contribution.” In Training for the New Millennium, M. Tennent (ed.), 127–151. Amsterdam/Philadelphia: John Benjamins.
c. « La recherche sur les processus traductionnels et la formation en interprétation de conférence. » Meta 50 (2): 713–726.
d. “Training students for quality: Ideas and methods.” In IV Jornadas sobre la formación y profesión del traductor e intérprete: Calidad y traducción. Perspectivas académicas y profesionales / IV Conference on Training and Career Development in Translation and Interpreting: Quality in translation. Academic and professional perspectives, M.E. García, A. González Rodríguez, C. Kunschak and P. Scarampi (eds), published on CD-ROM, ISBN 84–95433–13–3. Madrid: Universidad Europea de Madrid, Departamento de Traducción e Interpretación y Lenguas Aplicadas.
e. “Directionality in conference interpreting: A cognitive view.” In Directionality in Interpreting. The ‘Retour’ or the Native? R. Godijns and M. Hinderdael (eds), 9–26. Gent: Communication and Cognition.
f. « Traduction et interprétation: convergences et divergences cognitives. » Traduire 206: 65–83.
g. “Empirical research into the role of knowledge in interpreting: Methodological aspects.” In Knowledge Systems and Translation, H. Dam, J. Engberg and H. Gerzymisch-Arbogast (eds), 149–171. Berlin & New York: Mouton de Gruyter.
h. “Citation patterns in the T&I didactics literature.” Forum 3 (2): 85–103.
i. “Review: Interpreters at the United Nations: A history.” JoSTrans 2 (3): 109–110.
2006
a. “Conference Interpreting.” In Encyclopedia of Language and Linguistics, 2nd ed., Vol. 3, K. Brown (ed.), 9–23. Oxford: Elsevier.
b. “Interpreting Studies as an academic discipline: Sociological and scientific aspects.” In West and East: Developments in Translation Studies (in Chinese), GU Zhengkun and SHI Zhongyi (eds), 283–301. Tianjin, China: Baihua Literature and Art Publishing House.
c. “Fostering professionalism in New Conference Interpreting Markets: Reflections on the role of training.” In Professionalization in Interpreting: International Experience and Developments in China, Mingjiong Chai and Zhang Ailing (eds), 15–35. Shanghai: Shanghai Foreign Language Education Press.
d. « L’interdisciplinarité en traductologie: une optique scientométrique. » Dans Interdisciplinarité en traduction. Actes du 11e Colloque International sur la Traduction organisé par l’Université Technique de Yildiz, Öztürk Kasar (dir), 23–37. Istanbul: Isis.
e. « Regards sur l’interdisciplinarité en traductologie. » Dans Qu’est-ce que la traductologie? M. Ballard (dir), 107–117. Arras: Artois Presses Université.
2007
a. “Review/Compte rendu de Englund Dimitrova, Birgitta. 2005. Expertise and Explicitation in the Translation Process.” Hermes 38: 235–238.
b. « À la recherche de la complémentarité de la traduction et de l’interprétation en cours de formation à travers des modules théorico-méthodologiques. » Transversalités 102: 59–72. (Revue de l’Institut Catholique de Paris).
Name index A Agmon-Fruchtman 246, 251 Agrifoglio 232, 235, 238, 251 Ahrens 197, 209–210, 239, 251 Alexieva 72, 79, 186, 191 Al-Khanji 74, 79 Álvarez Muro 198, 209, 211 Anderson, J.R. 182, 191 Anderson, L. 168, 175, 285 Anderson, R.B.W. 39, 42 Andres 31, 143, 154 Angelelli 31, 40, 42 B Bachmann-Medick 36, 42 Baddeley 167, 169–171, 175–176, 182, 191 Baigorri 30, 42, 196, 211 Bajo 159, 163, 170, 175, 177 Baker 240, 251, 289 Baldinger 262, 275 Ballardini 218, 235 Bar-Haim 245, 251 Barik 29, 30, 34–35, 43, 93, 104, 159, 161–162, 164, 175 Baroni 250–251 Bartlomiejczyk 245, 251 Basel 180, 184, 191 Batagelj 6, 21 Behar 247, 251 Bernardini 250–251 Bernier 130, 139 Bobrow 182, 192 Bolinger 198, 211 Bordons 7, 21 Borgatti 6, 16, 22 Borgman 8, 22–23 Borko 130, 139 Borochovsky 239, 251 Braam 6, 22 Brennan 198, 211 Broadbent 181, 191 Brooke-Rose 53, 59 Brown 198, 210–211 Brunette 31, 43, 259, 275–276
Bühler, H. 40, 43, 143, 154, 196, 212 Bühler, K. 262, 276 Buzelin 78–9 C Cai 166, 175 Carpenter 164, 170, 175, 177 Carr 31, 43, 45 Carroll, M. 137–139 Carroll, P.J. 159, 177 Carstensen 216, 235 Catford 67–68, 75, 79 Cattaruzza 143–144, 153, 155 Chafe 242, 251 Chernov, Ch. 73, 79 Chernov, G.V. 28–29, 31, 43, 86, 104, 165, 175 Chesterman 26, 43, 55, 58–60, 67, 69, 75–76, 78–79, 88, 118, 126, 175, 250–251 Cheung 164, 175, 196, 212 Chi 160, 175 Chincotta 168–169, 175 Chomsky 133, 246, 248, 251 Cokely 29, 33, 43 Collados Ais 17, 143, 154, 194, 196, 201, 205–206, 209–212, 291 Cooper 184, 191 Cowan 165, 167, 171–173, 175 Cronin 40, 43, 53, 55, 60, 84, 104 Crystal 180, 183, 191 D Dam 17, 44, 234–235, 245, 251, 290, 293 Damasio 194, 198, 210, 212 Daneman 170, 175 Danielewicz 242, 251 Darbelnet 67–68, 75, 82 Darò 167, 169, 170, 175–177 Davidson 164, 175 Davies 71, 80 de Groot 181, 191, 248, 252 Delabastita 75, 80
Delisle 31, 75, 80 Denzin 41, 43 Dillinger 160–163, 165, 175 Diriker 31, 40, 43, 84, 104 Dodds 31, 34–35, 44, 284 Dragsted 241–242, 251 Driesen 30, 43 Dubnov 248, 252 E Egghe 9, 20, 22 Endress-Niggemeyer 130, 139 Everett 6, 16, 22 F Fabbro 38, 43, 86, 89, 104, 163, 167, 169, 170, 175–176 Feldweg 39, 43, 143, 154 Fernández Dols 197, 212 Fernández Sánchez 196, 210, 212, 291 Floros 136, 139 Fluck 130, 139 Foulke 165, 176 Franco Aixelà 71, 80 Frawley 249, 252 Freeman 6, 22 Fröhlich 8, 22 Føllesdal 51–53, 55–57, 59, 60 G Gadamer 50, 59, 60 Gambier 17, 32, 38, 43, 63, 80, 287–288 Garwood 218, 235 Garzone 31, 43, 146, 154–155, 196, 212, 292 Gathercole 170, 176 Gauthier 5, 7, 22 Geisler 5, 22 Gellerstam 249, 252 Gerver 29, 31, 35, 37, 43, 165–166, 168–9, 171, 176, 181, 191 Gerzymisch-Arbogast 17, 132, 139, 259, 276, 289, 293 Giambruno 239, 248, 253
Gile 3–6, 8, 10–14, 16, 17, 19–22, 25, 26, 28, 32–35, 37–38, 42–44, 49, 50, 55, 60, 72–74, 80, 83–92, 94–95, 97–99, 100–101, 103–104, 115, 121, 125, 127–128, 134, 139, 143, 153, 155, 173, 176, 179–182, 184, 190–191, 193, 195–196, 201, 207, 210, 212–213, 215–217, 235, 238, 241, 243, 249, 251–252, 255–256, 258, 261, 273, 276 Glaser 58, 60 Gnutzmann 130, 139–140 Goldman-Eisler 34–35, 164, 176 Gómez 7, 21 Gran 31, 34–35, 38, 44, 86, 89, 104, 163, 176, 284–285 Grbić 4–5, 9, 12, 16, 22–23 Grice 129, 132, 139 Gumperz 91, 99, 104 Gumul 84, 104 Gutt 75, 80 H Hack 218, 235 Hale 31, 44, 248, 252 Halliday 241–242, 252 Hanneman 16, 23 Hansen, G. 17, 121, 125, 217, 233, 235–236, 257–258, 262, 266, 271–272, 274, 276, 292 Hansen, I.G. 241–242, 251 Harzing 5, 8–10, 23 Hatim 75, 80, 292 Heger 262, 276 Heidegger 52, 60 Herbert 27, 29, 30, 44, 195, 212 Hermann 30, 44, 46 Hermans 54, 60–61, 289 Hervey 71, 80 Hicks 9, 14, 23 Higgins 71, 80 Hill 172–3, 176 Hirsch 9, 20, 23 Hirschberg 198, 209, 212, 214 Hjelmslev 262, 276 Hodges 162, 176 Holz-Mänttäri 217, 236 Hönig 64, 80, 217, 236, 242, 252, 258, 276 Horn-Helf 217, 236 House 17, 75, 80, 259, 276, 289 Hummell 6, 24
Hurtado Albir 67, 69, 75, 81, 195, 212 Hyland 7, 23 I Iglesias Fernández 196, 206, 212 Ilic 163, 176 Inghilleri 39, 41, 44 Ingram 33, 44 Isham 162, 166–167, 169, 170, 176, 238, 243, 252 Itai 237, 243–244, 252 Ivir 71, 80, 241, 252 J Jääskeläinen 75, 80 Jakobsen 239, 241, 252 Jansen 6, 8, 16–17, 23 Jones 65, 80 K Kade 27–29, 37, 44 Kadric 143, 155 Kahnemann 181, 191 Kalina 73, 80, 146–147, 155, 191, 195, 213 Kirchhoff 28, 44, 74, 80, 164, 166, 176 Knapp 197–198, 213 Kock 218, 220, 236 Kodrnja 179, 183–184, 190–191 Koller 257–258, 276 Kopczynski 155, 211, 213 Kopke 168, 170–171, 177 Kretzenbacher 130, 140 Krings 259, 276 Kuhn 29, 34–36, 41, 44, 55, 60 Künzli 159, 177, 259, 261–262, 276 Kupsch-Losereit 217, 236 Kurz 17, 28, 39, 40, 44, 84, 104, 143–144, 146–147, 153, 155, 181, 183, 192, 196, 206, 210, 213, 288–289 Kussmaul 64, 80, 217 Kwiecinski 71, 80 L Ladd 210, 213 Lakatos 57, 60 Lakoff 242, 253 Lamberger-Felber 220, 222, 230, 236 Lambert 17, 31, 45, 169, 177, 238, 253, 288, 292
Landauer 165, 177 Laver 198, 213 Lederer 30–31, 45–46, 217, 236 Lefevere 27, 41, 45, 53, 60 Leppihalme 71, 80 Levý 64, 81, 241, 253 Leydesdorff 7, 16, 23 Likert 147–8, 155 Lincoln 41, 43 Lindquist 239, 248, 253 Liu 28, 45, 159, 160–162, 166, 168, 170–171, 177 Lörscher 76–77, 81 M Mack 143–144, 153, 155 Maier 53, 55, 57, 60 Mailhac 71, 81 Malone 67–68, 81 Marrone 143–144, 155, 196, 213 Mason 33, 75, 80 Massaro 38, 181–182, 192 Mayoral 71, 81 Mazzetti 183–185, 192, 196, 213 McDonald 164, 177 Meak 143, 155 Meho 7, 23 Mehrabian 198, 213 Mertin 259–260, 265, 277 Misgeld 50, 55, 60 Molina 67, 69, 75, 81 Morgan 195, 213 Morgenbrod 246, 253 Morillo 7, 21 Moser, B. 181, 192 Moser, P. 84, 104, 143, 146, 155 Moser-Mercer 28, 31–32, 35, 37–38, 45, 144, 147, 159, 166–168, 172, 177, 181, 185–186, 192 Mossop 259, 261–262, 265, 268, 270–271, 277 Mrvar 6, 21 Mudersbach 133, 140 Munday 53, 60 Musgrave 57, 60 N Nafá Waasaf 197–198, 213 Nanpon 30, 34, 45, 164, 177 Napier 74, 81 Nedergaard-Larsen 70–71, 81 Nespoulous 168, 170–171, 177
Newmark 17, 67, 69, 70–71, 75, 81, 292 Ng 143, 155 Nicholson 50, 60 Nida 68, 70, 75, 81 Niiniluoto 50, 58, 60 Nord 75, 77, 81, 259, 277 Norman 181, 192 Noyons 6, 23 O Oldenburg 130, 140 Oléron 30, 34, 45, 164, 177 P Padilla 159, 160, 163, 168, 170, 175, 177 Paneth 27, 30, 40, 45 Pavlović 272, 277 Pedersen 71, 81 Phillips 50–51, 60 Pinter (Kurz) 28, 35, 45, 169, 177, 181, 192 Pöchhacker 4, 12, 17, 24, 26, 33, 38–39, 42, 45–46, 74, 81, 84, 105, 143–144, 146–147, 155, 183, 190, 192, 196, 206, 213, 217, 221, 231, 236, 288, 291 Polezzi 53, 60 Pöllabauer 4, 5, 9, 12, 16, 22–24, 41, 45 Popovič 64, 81 Popper 49, 51, 55, 57, 60 Pradas Macías 196, 209–210, 212–213 Pratt 163, 167, 170, 177 Prunč 39, 45 Pym 38, 45, 56, 59, 60, 86, 90, 105 R Râbceva 217, 236 Radnitzky 27, 29, 33–34, 45 Reiss 258–259, 277 Rezzori 180, 192 Riddle 16, 23 Roberts 31, 33, 43, 45, 292 Robinson 100, 105 Rothkegel 17, 130, 140, 289 Round 53, 61 Rousseau 7, 24 Roy 33, 35, 45 Rudvin 39, 45
Russell 238, 253 Russo 196, 213, 218, 236, 241, 253 S Sabatini 168, 177 Salevsky 28, 38, 45 Salmon, N. 163, 167, 170, 177 Salmon, W.C. 58, 61 Sandrelli 218, 236, 253 Sanz 30, 40, 45 Särkkä 66, 81 Sawyer 31, 46 Schäffner 125, 217, 236, 292 Scherer 194, 197–198, 201, 210, 213 Schjoldager 17, 44, 74–75, 81, 290 Schmitt 259, 277 Schneider, J. 216, 218, 221–222, 228, 236 Schneider, W. 172–173 Schulz von Thun 127, 131–132, 140–141 Scott 14, 24 Séguinot 75, 81 Seleskovitch 27–31, 35, 37, 42, 46, 72, 81, 180–181, 192, 217, 236 Serifi 246, 253 Setton 73, 81, 248, 250, 253, 290 Shlesinger 28, 33, 45–46, 80–81, 86, 105, 146, 156, 165, 177, 181–182, 184–185, 192, 197, 209, 210, 213, 239, 241, 253, 285 Silverman 210, 214 Simón 199, 214 Snell-Hornby 17, 25–30, 33, 36–39, 41, 46 Sodeur 6, 24 Solso 172, 177 Spiller-Bosatra 167, 177 Stévaux 196, 209, 212, 214 Sticht 165, 176 Stock 6, 7, 9, 12, 24 Stolze 259, 277 Strauss 58, 60 Stubbs 242, 253 Stummer 218, 236 Sunnari 164–6, 171, 177
T Taban 137, 140 Tannen 99, 105 Taylor 17, 43, 288 Tesch 216, 236 Thieme 30, 46 Thome 127, 140 Toury 19, 20–21, 54, 61, 75, 77, 82, 195, 214, 217, 236, 248, 253, 289 Trappmann 6, 24 Treisman 30, 177 Trivedi 50, 61 U Underwood 168–169, 175 V Valles 195, 214 van Doorslaer 63, 80 van Leeuwen 14, 24 van Leuven-Zwart 68, 75, 80 van Raan 6, 22 Venditti 198, 209, 214 Venuti 50, 53, 61, 67, 75, 82 Vermeer 26, 29, 36, 46, 258, 277 Vermes 66, 82 Viaggio 84, 89, 105 Viezzi 31, 43, 292 Vinay 67–68, 75, 82 Vuorikoski 143, 156 W Wadensjö 31, 33, 35, 39, 40, 46 Weinreich 216, 236 Wessells 182, 192 White 7, 24 Williams, J. 118, 126 Williams, M. 197–198, 209–11, 213–4 Wilss 64, 82 Wintner 243–244, 252–253 Wolf 39, 46 Y Yona 243–244, 252–253 Z Zabalbeascoa 79, 82 Zanettin 250–251 Zuccala 7, 24 Zybatow 138, 140
Subject index A ability 52, 111, 113–6, 118–120, 122, 161–164, 166–168, 170–174, 181, 238, 266, 270–271, 274–275 abstracts 6, 9, 121, 127–139 accuracy 161, 169, 260, 266, 274–275 accuracy rate 162, 245 allocation of working memory/cognitive resources 170, 181, 190 ambiguïté 71 analogy 57–59 analysis, analyses: author impact analysis 19 citation/co-citation analysis 3, 5, 7, 8, 19, 21 content analysis 5, 14 co-word analysis 4 item analysis 145 keyword analysis 3–5 network analysis 3–6, 16 part-of-speech analysis 241, 247 risk analysis 83, 97–99 appellative claim 135 appellative message 132, 133 see dimensions articulatory rehearsal/suppression 169 assessment 52, 55–57, 103, 109, 144, 188–189, 193, 255–259, 261, 272, 274 see quality assessment collective assessment 256 inter-subjective assessment 256 translation assessment 257, 259 assessment of interpretive hypotheses 57 attention 110, 135, 159, 165, 167, 169, 171–174, 181 attention-switching 171–2, 174 attentiveness 266, 274–275
B bibliographic data/indicators/ records 6, 8–10 see also citation bilingualism 270, 272, 274 BITRA 10 brain activation 172 C capacity 95, 98, 170, 181–182, 190 processing capacity 85–88, 94–95, 99, 164, 167, 173, 179–184, 234, 286 career management 109, 113, 117, 123, 125 career progression 110, 123 CBS Model 258, 271, 278 CETRA 115, 123 CIRIN 32 citation 7–9, 21 citedness, cited/citing units 7 classification codes 5, 6 classification of errors 258, 271–272, 278–279 co-authorship 3, 7, 17–18 cognition 39, 83 cognitive 40, 67, 77, 79, 83–84, 88–89, 97, 100, 159, 160, 168, 170, 172, 181, 183 cognitive process/processing 28, 33, 35, 39, 85–86, 90, 180, 238, 241 cognitive resources 88–89, 97, 181 cognitive science 38, 85, 87, 159–160, 174 coherence 69, 84, 93, 134, 166, 274–275, 279 cohérence terminologique 64 context-sensitivity 84–85, 89 see risks communication 15, 34–35, 66, 73–74, 78–79, 83–84, 86–87, 89, 90, 99, 127, 132, 159, 180, 183, 219
communication aims 92–94 see risks communication situation 128, 262, 278 communication skills 113, 120–122 communication strategy 83, 92 communicative 28, 69, 75, 129, 131–132, 136, 139, 242, 278 communicative setting 139 comparative linguistics 216 completeness 84, 87, 150, 160 comprehension/comprensión 28, 100, 145, 147–148, 150, 161–168, 180–183, 185, 190–191, 196, 198, 208–209 see process probabilistic comprehension 28 comprehension tasks 95, 161, 171 concentration 172 conceptual testing 55 confidentiality 116, 152–153 conflicts 259, 261, 269, 270–271 confusion polysémique 79 construct components 147–148 construct validity 145 context 51, 57–58, 83–91, 93, 99, 116, 163, 167, 234, 250, 272 context-dependent 84, 87 context-sensitive 84–85, 89, 97 contextualists 83–84, 91 contrastes entonativos 194 control 165, 207, 265 attentional control 172, 202, 196, 200 IS control 196, 200 see quality control control experiments 207, 267–268 control mechanisms 165 control questions 150 coping strategy 189–190 corpus-based studies 237–238, 243, 250
comparable corpora 237, 240, 250 intermodal corpora 240 parallel corpus 240 correction, corrections 239, 258, 266–7, 269, 273, 278 counter-evidence 55–57, 169–170 cronyism 7 see citation cultural references 70–71 culturemes 70–71 see références culturelles cumulative effect 57–58 cybermetrics 8 see citation/webometrics D delayed auditory feedback 167, 169 dimension, dimensions 30, 78, 128–129, 131–133, 135–136, 145–147, 250 appellative dimension 135–136 cognitive dimension 77, 88 contextual dimension 88 cultural dimension 37 ethical dimensions 143 factual dimension 127, 131, 133–136, 139 paralinguistic dimension 243 relationship dimension 135 scholarly dimensions 238 self-indicative dimension 135–136, 139 discriminatory power 145, 148 discursive footing 84 discussion groups 193 double input 216, 222, 232 E ecological validity 162, 239 efectos de la monotonía 198 efficiency 160, 163, 170, 172–173 effort, efforts 39, 63, 66, 71–72, 83–90, 95, 98, 100, 173, 182 cognitive efforts 84 collective effort 90 cooperative efforts 89 comprehension effort 88, 173 coordination effort 173, 182 listening and analysis effort 179, 182, 184
memory effort 173, 182, 184 processing effort 167 production effort 173, 182 short-term memory effort 182 speech production effort 182 Effort Models 33, 83–85, 87, 90, 94, 97–100, 173–174, 179–182, 190, 216, 282, 284, 287, 289 elementos no verbales 196, 207, 209 emociones, emocional 197–198, 201, 206, 211 enfoques complementarios 210 entonación/interpretación monótona 194, 196, 199–202, 206–211 equivalence 50, 54, 68, 79, 164, 174, 258, 260 error, errors 9, 50, 85–87, 90, 94–95, 98, 161, 163, 188, 215, 225, 230, 239, 255, 258, 260–261, 271–275, 278–279 experience 40, 68, 90, 120–124, 146, 151, 170, 255, 269, 273–275 experienced translator 256, 273 experimental study 30, 193 expertise 84, 89, 114, 125, 159–160, 164–168, 170–177, 238 expert-novice paradigm 170 explanation 49, 58, 193, 273 external reliability 145 F face validity 145 false friends 217–218, 262 see interference falsifiable 55, 57–59 falsification/falsificationism 57 feedback 120–123, 167, 172, 188, 261, 270 focal node 7, 16 focused interviews 193 foreign accent 180, 184–185, 190 Four Tongues-Four Ears Model 131–133 G g-index (Egghe’s Index) 20 Google Scholar, GS 8–11, 19–21 grading 259, 271–272 grammatical agreement 218–219, 221, 228–230, 234
Grice’s maxims 132 grupos de discusión 195, 200–202, 206, 208–209 H Hebrew 237, 239, 242–250 hermeneutic methods 51 hermeneutics 49, 50, 52 h-index (Hirsch’s Index) 20 historiography 25 hypothesis 35, 49, 50–59, 86, 171–172, 180, 189–190, 216, 221–222, 232–233 descriptive hypotheses 58 explanatory hypothesis 58–59 interpretive hypothesis 49, 53–55, 57 meaning hypothesis 50, 261 predictive hypothesis 58 tightrope hypothesis 86, 90, 94, 104, 289 hypothesized interpretation 55 hypothetico-deductive method 51–52, 59 I indexing words 6 inferencia emocional 198 information loss 185–187 information processing 36–37, 39, 163, 181 input material(s) 161, 164, 169, 174 input rate 165, 168, 171 input-specific parameters 230–232 INT types 218, 221, 228–229, 233, 279–280 interactional sociolinguistics 39 interference 167, 215–219, 228–235, 241, 263, 274, 278–280 see false friends intermodal 237, 240–241 intermodal comparable corpora 250 internal reliability 145, 150 international English 180 interpreting: community interpreting 12, 38, 40, 84 conference interpreting 27–28, 33, 35, 39, 83–84, 89, 143, 151, 181, 238
consecutive interpreting 28, 30, 84–85, 98, 166, 232, 238, 241 court interpreting 30–33, 143 modes of interpreting 98, 238 simultaneous interpretation 193, 228, 233 simultaneous interpreting 30, 38, 83–88, 98, 100, 159–174, 180–181, 185, 190, 215–216, 218, 238, 241, 249–250 interpreting expertise 160, 171, 173 interpreting performance 30, 154, 179, 186, 190, 230, 234 see performance investigación suplementaria 195 ISI Web of Science 5, 9–11
J Joint Quality Initiative 111
K key concepts 6, 259 L lexical density 241 lexical processing 163 LiDoc 10 longitudinal study 255, 257–258, 266–268, 270, 273 low-register lexemes 248 M master 25, 27, 29, 33–34, 42 meaning 14, 37, 49, 50–54 communicative meanings 127 hidden meaning 50–51, 53 historical meaning 50 internal meaning 51 see meaning hypothesis mental resource(s) 159, 164, 167–171, 173–174 métalangage 63–64, 79 method 34, 49, 51–52, 56, 59, 79, 98, 119, 130, 133, 145, 185, 221, 230, 239–240, 242, 249 milestones 25–26, 29, 30–32, 35–37, 41, 118 MLA 10–11 modality 237–238, 240, 242, 248–250, 279
modality-dependent features 249 monitoring 166–167, 169, 171, 258 MorphTagger 244–245
N network, networks 3–8, 16–19, 58, 122 co-authorship network 17–18 ego-centred network 6, 8, 16–17 semantic networks 6 networking 34, 42, 113, 122–123 niveau du processus/du résultat 78 non-native accent 179–180, 183–184, 189 norm of completeness 87
O omission 66, 69, 71, 74–75, 83, 87–90, 93, 96, 98, 101–103, 278 non-omission 83, 87–89, 98 unwarranted omission 239, 273, 278 operationalization 57 operations 52, 65, 182, 249
P Pajek software 6 paradigm, paradigms 25–26, 29, 30, 34–39, 41–42, 49, 115, 237–238, 242 paradigm shift 29, 35, 38–39, 41 paradigmatic lexical choices 248 parsimony 148 peer reviewing 261 performance 95, 154, 160, 168–170, 173, 179, 186–188, 191, 215, 225, 230–234 periods 4, 12, 26, 34–35 personal effectiveness 113, 119–120 persuasión 198 PhD training 127 Pioneer 25, 27–29 population validity 145 precursor 25, 27–29 problem-solving 87, 190 procedure, procedures 56, 65, 68–69, 79, 110, 115, 145, 154, 243, 259, 275
process, processes 85, 88, 116, 161, 167–169, 172–174, 243 automatic processes 173 cognitive processes 28, 39, 86, 241 component processes 160, 173 comprehension processes 161, 163, 165, 168, 174 decision-making processes 97 dimension processuelles 78 see dimensions documentation processes 100 revision processes 255, 259, 262, 265, 270–271, 273, 275 scholarly processes 5 translation processes 89, 257, 260 processing resources 85, 181 see processing capacity see processing effort production 11, 72–73, 76, 85–86, 89, 159, 171–174, 182–184, 246, 258, 263–264 see production effort production process 163, 165–166, 263 professional translation 134, 255, 259 proposition 67, 77, 79, 135, 161 psycholinguistic-cognitive research 28 publication counting 5, 12 Publish or Perish (PP) 8, 10, 20 Q qualitative research 41, 58, 193 quality 7, 15–16, 21, 27, 85, 87–88, 90, 97, 99, 110, 125, 154–155, 222, 242, 255–265 see assessment interpreting quality 143–144, 151–153, 215 see interpreting expertise translation quality 259–260 quality assessment 109, 193, 255, 259, 261, 274 quality assurance 151 quality control 125, 255, 259, 261, 265, 271 quality criteria 144, 154, 258 quality profile 110
quality research 110, 143–144 questionnaire design 143, 147–148, 151–2 R références culturelles 70–71 relationship 26, 112, 132–133, 135–136, 139, 262 research environment 110, 112, 116, 121, 125 research funding 109–110, 117 research management 109, 118, 173, 179 research methodology 115, 143, 193 research papers 127–130, 134 response rate 152 retrospection 266, 272 revision 100, 109, 135, 255 comparative revision 259 over-revision 261, 266–267, 270 self-revision 263–264 unilingual revision 259 revision competence 255, 257, 265, 269–270, 274–275 revision training 255, 265, 270 risk, risks 83, 90–101, 216 see risk analysis communicative risks 90, 92, 94, 99 context-sensitive risk 97 high-risk 83, 95, 97–100 low-risk 83, 93–95, 97–98, 101 S scientometric indicators 7 scientometrics/scientometric study 3–8, 12 internet scientometrics 8 segment 86, 96, 99, 164, 168, 171, 182, 184 segmentation 73, 164, 272 selectivity 162, 172 self-indicative 132, 135–136, 139 semantic processing 161, 163, 174 Sequential Model of Translation 50, 258
Shifts 25–26, 41, 65, 68–69, 79, 84 SI with text/without text 215–216, 222–224, 227, 232–233, 235 sight translation 232, 238, 241 simultaneity 86–7, 100 simultaneous short circuit 218, 220, 228–230, 233 situation 64, 76–79, 92, 219, 221, 228, 233, 262–264 skill, skills 161, 164–165, 172–173, 238, 268, 274 research skills 109–125, 127–128 skills requirements 109, 113 skills training requirements 112, 124 sociocultural context 84 sociology of interaction 35, 39 span-tests 170–171 split half method 145 standards 5, 109, 110, 116, 128, 130–131, 139–140, 144, 154, 255, 257, 259–260, 274 strategy, strategies 65, 69, 73–74, 79, 83, 91, 98–99, 110, 127, 130, 159, 164–168, 170, 172, 174, 181, 189, 232, 235, 239, 241, 262, 278 coping strategy 189, 190–191 stress studies 184 subvocalization 169, 174 survey studies 143–144, 146, 152 T tactic 63 tagger 237, 242, 245 taxonomy 63 technique, techniques 6, 7, 65–69, 75, 79 adjustment techniques 65 research techniques 109, 110–116 translation techniques 65, 68–69, 75, 217 terminology, terminologies 63, 71, 150, 188, 258–260, 265, 271–272
testable consequences 56–57 text preparation 224, 227, 233 tightrope hypothesis 86, 90, 94 time lag 34, 164, 190, 221, 230–231, 233, 235 title words 5–6 tradition, traditions 25, 27–29, 33–35, 37, 39, 41, 49, 114, 152, 215, 238, 271 traductologie 31, 63–65, 70, 75, 79 trajections 65, 68 transcoding 241, 245, 249 translation competence 134, 255, 265, 270, 274 translation criticism 257 see assessment/revision Translation Memory Systems 265 translation problem 63 translationese 217, 249–250 translator training 255, 257, 266, 271 triangulación metodológica 195 turn, turns 25 empirical turn 25, 37–38, 41 social turn 25, 38–40 qualitative turn 25, 40 TSB 10 U UCINET 6, 19 unification 58 unnecessary changes 261, 266–267, 269–270, 273 see revision V variability in SI 215 variabilidad 197–198 variation terminologique 65 W webometrics 8 see citation and cybermetrics within-subject variance 239, 257 word frequency 14–15 working memory 165, 167–173 written translators 87, 100, 241
Benjamins Translation Library A complete list of titles in this series can be found on www.benjamins.com 83 Torikai, Kumiko: Voices of the Invisible Presence. Diplomatic interpreters in post-World War II Japan. vii, 191 pp. + index. Expected February 2009 82 Beeby, Allison, Patricia Rodríguez Inés and Pilar Sánchez-Gijón (eds.): Corpus Use and Translating. Corpus use for learning to translate and learning corpus use to translate. x, 151 pp. + index. Expected January 2009 81 Milton, John and Paul Bandia (eds.): Agents of Translation. vi, 329 pp. + index. Expected January 2009 80 Hansen, Gyde, Andrew Chesterman and Heidrun Gerzymisch-Arbogast (eds.): Efforts and Models in Interpreting and Translation Research. A tribute to Daniel Gile. 2009. ix, 302pp. 79 Yuste Rodrigo, Elia (ed.): Topics in Language Resources for Translation and Localisation. 2008. xii, 220 pp. 78 Chiaro, Delia, Christine Heiss and Chiara Bucaria (eds.): Between Text and Image. Updating research in screen translation. 2008. x, 292 pp. 77 Díaz Cintas, Jorge (ed.): The Didactics of Audiovisual Translation. 2008. xii, 263 pp. (incl. CD-Rom). 76 Valero-Garcés, Carmen and Anne Martin (eds.): Crossing Borders in Community Interpreting. Definitions and dilemmas. 2008. xii, 291 pp. 75 Pym, Anthony, Miriam Shlesinger and Daniel Simeoni (eds.): Beyond Descriptive Translation Studies. Investigations in homage to Gideon Toury. 2008. xii, 417 pp. 74 Wolf, Michaela and Alexandra Fukari (eds.): Constructing a Sociology of Translation. 2007. vi, 226 pp. 73 Gouadec, Daniel: Translation as a Profession. 2007. xvi, 396 pp. 72 Gambier, Yves, Miriam Shlesinger and Radegundis Stolze (eds.): Doubts and Directions in Translation Studies. Selected contributions from the EST Congress, Lisbon 2004. 2007. xii, 362 pp. [EST Subseries 4] 71 St-Pierre, Paul and Prafulla C. Kar (eds.): In Translation – Reflections, Refractions, Transformations. 2007. xvi, 313 pp. 
70 Wadensjö, Cecilia, Birgitta Englund Dimitrova and Anna-Lena Nilsson (eds.): The Critical Link 4. Professionalisation of interpreting in the community. Selected papers from the 4th International Conference on Interpreting in Legal, Health and Social Service Settings, Stockholm, Sweden, 20-23 May 2004. 2007. x, 314 pp. 69 Delabastita, Dirk, Lieven D’hulst and Reine Meylaerts (eds.): Functional Approaches to Culture and Translation. Selected papers by José Lambert. 2006. xxviii, 226 pp. 68 Duarte, João Ferreira, Alexandra Assis Rosa and Teresa Seruya (eds.): Translation Studies at the Interface of Disciplines. 2006. vi, 207 pp. 67 Pym, Anthony, Miriam Shlesinger and Zuzana Jettmarová (eds.): Sociocultural Aspects of Translating and Interpreting. 2006. viii, 255 pp. 66 Snell-Hornby, Mary: The Turns of Translation Studies. New paradigms or shifting viewpoints? 2006. xi, 205 pp. 65 Doherty, Monika: Structural Propensities. Translating nominal word groups from English into German. 2006. xxii, 196 pp. 64 Englund Dimitrova, Birgitta: Expertise and Explicitation in the Translation Process. 2005. xx, 295 pp. 63 Janzen, Terry (ed.): Topics in Signed Language Interpreting. Theory and practice. 2005. xii, 362 pp. 62 Pokorn, Nike K.: Challenging the Traditional Axioms. Translation into a non-mother tongue. 2005. xii, 166 pp. [EST Subseries 3] 61 Hung, Eva (ed.): Translation and Cultural Change. Studies in history, norms and image-projection. 2005. xvi, 195 pp. 60 Tennent, Martha (ed.): Training for the New Millennium. Pedagogies for translation and interpreting. 2005. xxvi, 276 pp. 59 Malmkjær, Kirsten (ed.): Translation in Undergraduate Degree Programmes. 2004. vi, 202 pp. 58 Branchadell, Albert and Lovell Margaret West (eds.): Less Translated Languages. 2005. viii, 416 pp. 57 Chernov, Ghelly V.: Inference and Anticipation in Simultaneous Interpreting. A probability-prediction model. Edited with a critical foreword by Robin Setton and Adelina Hild. 2004. xxx, 268 pp. 
[EST Subseries 2] 56 Orero, Pilar (ed.): Topics in Audiovisual Translation. 2004. xiv, 227 pp.
55 Angelelli, Claudia V.: Revisiting the Interpreter’s Role. A study of conference, court, and medical interpreters in Canada, Mexico, and the United States. 2004. xvi, 127 pp. 54 González Davies, Maria: Multiple Voices in the Translation Classroom. Activities, tasks and projects. 2004. x, 262 pp. 53 Diriker, Ebru: De-/Re-Contextualizing Conference Interpreting. Interpreters in the Ivory Tower? 2004. x, 223 pp. 52 Hale, Sandra: The Discourse of Court Interpreting. Discourse practices of the law, the witness and the interpreter. 2004. xviii, 267 pp. 51 Chan, Leo Tak-hung: Twentieth-Century Chinese Translation Theory. Modes, issues and debates. 2004. xvi, 277 pp. 50 Hansen, Gyde, Kirsten Malmkjær and Daniel Gile (eds.): Claims, Changes and Challenges in Translation Studies. Selected contributions from the EST Congress, Copenhagen 2001. 2004. xiv, 320 pp. [EST Subseries 1] 49 Pym, Anthony: The Moving Text. Localization, translation, and distribution. 2004. xviii, 223 pp. 48 Mauranen, Anna and Pekka Kujamäki (eds.): Translation Universals. Do they exist? 2004. vi, 224 pp. 47 Sawyer, David B.: Fundamental Aspects of Interpreter Education. Curriculum and Assessment. 2004. xviii, 312 pp. 46 Brunette, Louise, Georges Bastin, Isabelle Hemlin and Heather Clarke (eds.): The Critical Link 3. Interpreters in the Community. Selected papers from the Third International Conference on Interpreting in Legal, Health and Social Service Settings, Montréal, Quebec, Canada 22–26 May 2001. 2003. xii, 359 pp. 45 Alves, Fabio (ed.): Triangulating Translation. Perspectives in process oriented research. 2003. x, 165 pp. 44 Singerman, Robert: Jewish Translation History. A bibliography of bibliographies and studies. With an introductory essay by Gideon Toury. 2002. xxxvi, 420 pp. 43 Garzone, Giuliana and Maurizio Viezzi (eds.): Interpreting in the 21st Century. Challenges and opportunities. 2002. x, 337 pp. 42 Hung, Eva (ed.): Teaching Translation and Interpreting 4. Building bridges. 2002. 
xii, 243 pp.
41 Nida, Eugene A.: Contexts in Translating. 2002. x, 127 pp.
40 Englund Dimitrova, Birgitta and Kenneth Hyltenstam (eds.): Language Processing and Simultaneous Interpreting. Interdisciplinary perspectives. 2000. xvi, 164 pp.
39 Chesterman, Andrew, Natividad Gallardo San Salvador and Yves Gambier (eds.): Translation in Context. Selected papers from the EST Congress, Granada 1998. 2000. x, 393 pp.
38 Schäffner, Christina and Beverly Adab (eds.): Developing Translation Competence. 2000. xvi, 244 pp.
37 Tirkkonen-Condit, Sonja and Riitta Jääskeläinen (eds.): Tapping and Mapping the Processes of Translation and Interpreting. Outlooks on empirical research. 2000. x, 176 pp.
36 Schmid, Monika S.: Translating the Elusive. Marked word order and subjectivity in English-German translation. 1999. xii, 174 pp.
35 Somers, Harold (ed.): Computers and Translation. A translator's guide. 2003. xvi, 351 pp.
34 Gambier, Yves and Henrik Gottlieb (eds.): (Multi) Media Translation. Concepts, practices, and research. 2001. xx, 300 pp.
33 Gile, Daniel, Helle V. Dam, Friedel Dubslaff, Bodil Martinsen and Anne Schjoldager (eds.): Getting Started in Interpreting Research. Methodological reflections, personal accounts and advice for beginners. 2001. xiv, 255 pp.
32 Beeby, Allison, Doris Ensinger and Marisa Presas (eds.): Investigating Translation. Selected papers from the 4th International Congress on Translation, Barcelona, 1998. 2000. xiv, 296 pp.
31 Roberts, Roda P., Silvana E. Carr, Diana Abraham and Aideen Dufour (eds.): The Critical Link 2: Interpreters in the Community. Selected papers from the Second International Conference on Interpreting in legal, health and social service settings, Vancouver, BC, Canada, 19–23 May 1998. 2000. vii, 316 pp.
30 Dollerup, Cay: Tales and Translation. The Grimm Tales from Pan-Germanic narratives to shared international fairytales. 1999. xiv, 384 pp.
29 Wilss, Wolfram: Translation and Interpreting in the 20th Century. Focus on German. 1999. xiii, 256 pp.
28 Setton, Robin: Simultaneous Interpretation. A cognitive-pragmatic analysis. 1999. xvi, 397 pp.
27 Beylard-Ozeroff, Ann, Jana Králová and Barbara Moser-Mercer (eds.): Translators' Strategies and Creativity. Selected papers from the 9th International Conference on Translation and Interpreting, Prague, September 1995. In honor of Jiří Levý and Anton Popovič. 1998. xiv, 230 pp.
26 Trosborg, Anna (ed.): Text Typology and Translation. 1997. xvi, 342 pp.
25 Pollard, David E. (ed.): Translation and Creation. Readings of Western Literature in Early Modern China, 1840–1918. 1998. vi, 336 pp.
24 Orero, Pilar and Juan C. Sager (eds.): The Translator's Dialogue. Giovanni Pontiero. 1997. xiv, 252 pp.
23 Gambier, Yves, Daniel Gile and Christopher Taylor (eds.): Conference Interpreting: Current Trends in Research. Proceedings of the International Conference on Interpreting: What do we know and how? 1997. iv, 246 pp.
22 Chesterman, Andrew: Memes of Translation. The spread of ideas in translation theory. 1997. vii, 219 pp.
21 Bush, Peter and Kirsten Malmkjær (eds.): Rimbaud's Rainbow. Literary translation in higher education. 1998. x, 200 pp.
20 Snell-Hornby, Mary, Zuzana Jettmarová and Klaus Kaindl (eds.): Translation as Intercultural Communication. Selected papers from the EST Congress, Prague 1995. 1997. x, 354 pp.
19 Carr, Silvana E., Roda P. Roberts, Aideen Dufour and Dini Steyn (eds.): The Critical Link: Interpreters in the Community. Papers from the First International Conference on Interpreting in legal, health and social service settings, Geneva Park, Canada, 1–4 June 1995. 1997. viii, 322 pp.
18 Somers, Harold (ed.): Terminology, LSP and Translation. Studies in language engineering in honour of Juan C. Sager. 1996. xii, 250 pp.
17 Poyatos, Fernando (ed.): Nonverbal Communication and Translation. New perspectives and challenges in literature, interpretation and the media. 1997. xii, 361 pp.
16 Dollerup, Cay and Vibeke Appel (eds.): Teaching Translation and Interpreting 3. New Horizons. Papers from the Third Language International Conference, Elsinore, Denmark, 1995. 1996. viii, 338 pp.
15 Wilss, Wolfram: Knowledge and Skills in Translator Behavior. 1996. xiii, 259 pp.
14 Melby, Alan K. and Terry Warner: The Possibility of Language. A discussion of the nature of language, with implications for human and machine translation. 1995. xxvi, 276 pp.
13 Delisle, Jean and Judith Woodsworth (eds.): Translators through History. 1995. xvi, 346 pp.
12 Bergenholtz, Henning and Sven Tarp (eds.): Manual of Specialised Lexicography. The preparation of specialised dictionaries. 1995. 256 pp.
11 Vinay, Jean-Paul and Jean Darbelnet: Comparative Stylistics of French and English. A methodology for translation. Translated and edited by Juan C. Sager and M.-J. Hamel. 1995. xx, 359 pp.
10 Kussmaul, Paul: Training the Translator. 1995. x, 178 pp.
9 Rey, Alain: Essays on Terminology. Translated by Juan C. Sager. With an introduction by Bruno de Bessé. 1995. xiv, 223 pp.
8 Gile, Daniel: Basic Concepts and Models for Interpreter and Translator Training. 1995. xvi, 278 pp.
7 Beaugrande, Robert de, Abdullah Shunnaq and Mohamed Helmy Heliel (eds.): Language, Discourse and Translation in the West and Middle East. 1994. xii, 256 pp.
6 Edwards, Alicia B.: The Practice of Court Interpreting. 1995. xiii, 192 pp.
5 Dollerup, Cay and Annette Lindegaard (eds.): Teaching Translation and Interpreting 2. Insights, aims and visions. Papers from the Second Language International Conference, Elsinore, 1993. 1994. viii, 358 pp.
4 Toury, Gideon: Descriptive Translation Studies – and beyond. 1995. viii, 312 pp.
3 Lambert, Sylvie and Barbara Moser-Mercer (eds.): Bridging the Gap. Empirical research in simultaneous interpretation. 1994. 362 pp.
2 Snell-Hornby, Mary, Franz Pöchhacker and Klaus Kaindl (eds.): Translation Studies: An Interdiscipline. Selected papers from the Translation Studies Congress, Vienna, 1992. 1994. xii, 438 pp.
1 Sager, Juan C.: Language Engineering and Translation. Consequences of automation. 1994. xx, 345 pp.