Technology-Enhanced Language Learning for Specialized Domains: Practical applications and mobility 9781138120433, 9781315651729



English · [309] pages · 2016


Table of contents:
Technology-Enhanced Language Learning for Specialized Domains- Front Cover
Technology-Enhanced Language Learning for Specialized Domains
Title Page
Copyright Page
Contents
List of figures
List of tables
Notes on contributors
Foreword
Acknowledgements
Introduction
References
PART 1: General issues about learning languages with computers
Chapter 1: Languages and literacies for digital lives
Introduction
Language-related literacies
Information-related literacies
Connection-related literacies
Redesign
Towards a mobile, critical, digitally literate future
References
Chapter 2: Promoting intercultural competence in culture and language studies: outcomes of an international collaborative project
Introduction
Rationale
Method
Results and discussion
Conclusions
Acknowledgements
References
Chapter 3: Return on investment: the future of evidence-based research on ICT-enhanced business English
The spread of ICT and EMI
Determining factors of evidence-based research on ICT-enhanced learning
Conclusion: beyond two-dimensional models for evidence-based ICT-enhanced research on ESP learning
References
Chapter 4: L2 English learning and performance through online activities: a case study
Introduction
UNED distance learning methodology
English grammar and e-learning
Task-based learning and technology
The present study
Discussion and concluding remarks
References
PART 2: Languages and technology-enhanced assessment
Chapter 5: Language testing in the digital era
Introduction
History and evolution of technology-enhanced assessment
Computer assisted language testing (CALT)
Future directions
References
Chapter 6: Synchronous computer-mediated communication in ILP research: a study based on the ESP context
Introduction
The speech act of advice
Retrospective verbal reports in ILP research
Methodology
Results
Conclusion
Acknowledgements
References
Appendix A
Chapter 7: The COdA scoring rubric: an attempt to facilitate assessment in the field of digital educational materials in higher education
Quality assessment tool (COdA): motivations
Antecedents
COdA: development
COdA: assessment
COdA: scoring rubric
Lines for future research
References
Appendix
Chapter 8: Enabling automatic, technology-enhanced assessment in language e-learning: using ontologies and linguistic annotation merge to improve accuracy
Introduction
OntoTag: an architecture and methodology for the joint annotation of (semantic) web pages
The linguistic ontologies
The experiment: development of OntoTagger
OntoTagger at work: an annotated example
Results
Conclusions
Acknowledgements
References
PART 3: Mobile-assisted language learning
Chapter 9: Challenges and opportunities in enacting MALL designs for LSP
Introduction
LSP and mobility
Mobile tools for vocabulary learning
Designing and enacting MALL
Discussion
References
Chapter 10: Designer learning: the teacher as designer of mobile-based classroom learning experiences
Brief literature review
The study
Discussion of the research question
Issues of concern
Future directions
Acknowledgments
References
Chapter 11: Mobile and massive language learning
Mobile-assisted language learning
Language MOOCs
MALMOOCs
Notes
References
PART 4: Language massive open online courses
Chapter 12: Academic writing in MOOC environments: challenges and rewards
Introduction: ESP, EAP, and academic writing
Academic writing and MOOCs
MOOC challenges to academic writing
MOOC benefits to academic writing
Conclusion
References
Chapter 13: Language MOOCs: better by design
Introduction
Affordances of the massive online format
Challenges of the massive online format
Recommendations
References
Chapter 14: Enhancing specialized vocabulary through social learning in language MOOCs
Introduction
Technology-enhanced language learning in specialized linguistic domains
Key issues in the acquisition of specialized vocabulary
Social language learning
MOOCs as an emerging option for specialized language learning: the case of “Professional English”
Conclusion
References
PART 5: Corpus-based approaches to specialized linguistic domains
Chapter 15: Corpus-based teaching in LSP
Introduction
Phraseology-centered materials
Register-centered materials
Conclusion
Acknowledgments
References
Chapter 16: Transcription and annotation of non-native spoken corpora
Introduction
Types of non-native speech data
Transcription
Annotation
Applications
Conclusions
References
Chapter 17: Using monolingual virtual corpora in public service legal translator training
Public service interpreting and translation as a new profession and academic discipline
Corpus-based translation studies (CTS) and their application to PSIT legal translation training
Conclusion
References
PART 6: Computer-assisted translation tools for language learning
Chapter 18: Computer-assisted translation tools as tools for language learning
Introduction
Features of CAT tools
Using CAT tools in language learning
Concluding remarks
References
Chapter 19: Applying corpora-based translation studies to the classroom: languages for specific purposes acquisition
Introduction
Background
Methodology, corpus design and compilation
Data extraction and analysis
Using corpora in translation: an example
Conclusions
References
Chapter 20: VISP: a MALL-based app using audio description techniques to improve B1 EFL students’ oral competence
Introduction
MALL apps: state of the art
Audio description (AD) and its application in the foreign language (FL) classroom
An ad-based MALL app – VISP: videos for speaking
A qualitative pilot case study
Conclusions and suggestions for future research
Note
References
Afterword: technology and beyond – enhancing language learning
References
Index

Technology-Enhanced Language Learning for Specialized Domains

Technology-Enhanced Language Learning for Specialized Domains provides an exploration of the latest developments in technology-enhanced learning and the processing of languages for specific purposes. It combines theoretical and applied research from an interdisciplinary angle, covering general issues related to learning languages with computers, assessment, mobile-assisted language learning, the new language massive open online courses, corpus-based research and computer-assisted aspects of translation.

The chapters in this collection include contributions from a number of international experts in the field with a wide range of experience in the use of technologies to enhance the language learning process. The chapters have been brought together precisely in recognition of the demand for this kind of specialized tuition, offering state-of-the-art technological and methodological innovation and practical applications.

The topics covered revolve around the practical consequences of the current possibilities of mobility for both learners and teachers, as well as the applicability of updated technological advances to language learning and teaching, particularly in specialized domains. This is achieved through the description and discussion of practical examples of those applications in a variety of educational contexts. At the beginning of each thematic section, readers will find an introductory chapter which contextualizes the topic and links the different examples discussed.

Drawing together rich primary research and empirical studies related to specialized tuition and the processing of languages, Technology-Enhanced Language Learning for Specialized Domains is an invaluable resource for academics, researchers and postgraduate students in the fields of education, computer-assisted language learning, languages and linguistics, and language teaching.
Elena Martín-Monje is Lecturer at UNED (Universidad Nacional de Educación a Distancia), Spain, where she teaches in the areas of English for Specific Purposes and Computer-Assisted Language Learning, which are also her fields of research.

Izaskun Elorza is Associate Professor in English Language and Linguistics at the University of Salamanca, Spain, where she teaches and researches in the areas of English language, grammar and corpus linguistics. In the area of applications of ICTs to language teaching and learning, she is particularly interested in corpus-based language modelling.

Blanca García Riaza is Lecturer at the School of Education and Tourism, University of Salamanca, Spain, where she teaches in the areas of English for Specific Purposes, Oral Communication and ICTs. Her research interests focus on corpus-based discourse analysis and mobile learning.

Routledge Research in Education

For a complete list of titles in this series, please visit www.routledge.com.

142 Using Narrative Inquiry for Educational Research in the Asia Pacific. Edited by Sheila Trahar and Wai Ming Yu
143 The Hidden Role of Software in Educational Research: Policy to Practice. Tom Liam Lynch
144 Education, Leadership and Islam: Theories, discourses and practices from an Islamic perspective. Saeeda Shah
145 English Language Teacher Education in Chile: A cultural historical activity theory perspective. Malba Barahona
146 Navigating Model Minority Stereotypes: Asian Indian Youth in South Asian Diaspora. Rupam Saran
147 Evidence-based Practice in Education: Functions of evidence and causal presuppositions. Tone Kvernbekk
148 A New Vision of Liberal Education: The good of the unexamined life. Alistair Miller
149 Transatlantic Reflections on the Practice-Based PhD in Fine Art. Jessica B. Schwarzenbach and Paul M. W. Hackett
150 Drama and Social Justice: Theory, research and practice in international contexts. Edited by Kelly Freebody and Michael Finneran
151 Education, Identity and Women Religious, 1800–1950: Convents, classrooms and colleges. Edited by Deirdre Raftery and Elizabeth Smyth
152 School Health Education in Changing Times: Curriculum, pedagogies and partnerships. Deana Leahy, Lisette Burrows, Louise McCuaig, Jan Wright and Dawn Penney
153 Progressive Sexuality Education: The Conceits of Secularism. Mary Lou Rasmussen
154 Collaboration and the Future of Education: Preserving the Right to Think and Teach Historically. Gordon Andrews, Warren J. Wilson, and James Cousins
155 Theorizing Pedagogical Interaction: Insights from Conversation Analysis. Hansun Zhang Waring
156 Interdisciplinary Approaches to Distance Teaching: Connected Classrooms in Theory and Practice. Alan Blackstock and Nathan Straight
157 How Arts Education Makes a Difference: Research examining successful classroom practice and pedagogy. Edited by Josephine Fleming, Robyn Gibson and Michael Anderson
158 Populism, Media and Education: Challenging discrimination in contemporary digital societies. Edited by Maria Ranieri
159 Imagination for Inclusion: Diverse contexts of educational practice. Edited by Derek Bland
160 Youth Voices, Public Spaces, and Civic Engagement. Edited by Stuart Greene, Kevin J. Burke, and Maria K. McKenna
161 Spirituality in Education in a Global, Pluralised World. Marian de Souza
162 Reconceptualising Agency and Childhood: New Perspectives in Childhood Studies. Edited by Florian Esser, Meike Baader, Tanja Betz, and Beatrice Hungerland
163 Technology-Enhanced Language Learning for Specialized Domains: Practical applications and mobility. Edited by Elena Martín-Monje, Izaskun Elorza and Blanca García Riaza
164 American Indian Workforce Education: Trends and Issues. Edited by Carsten Schmidtke
165 African American English and the Achievement Gap: The Role of Dialectal Codeswitching. Holly K. Craig
166 Intersections of Formal and Informal Science. Edited by Lucy Avraamidou and Wolff-Michael Roth


Technology-Enhanced Language Learning for Specialized Domains

Practical applications and mobility

Edited by Elena Martín-Monje, Izaskun Elorza and Blanca García Riaza

First published 2016 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2016 selection and editorial matter, Elena Martín-Monje, Izaskun Elorza and Blanca García Riaza; individual chapters, the contributors.

The right of Elena Martín-Monje, Izaskun Elorza and Blanca García Riaza to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
Names: Monje, Elena Martín, editor. | Elorza, Izaskun, editor. | García Riaza, Blanca, editor.
Title: Technology-enhanced language learning for specialized domains : practical applications and mobility / edited by Elena Martín Monje, Izaskun Elorza and Blanca García Riaza.
Description: Milton Park, Abingdon, Oxon ; New York, NY : Routledge, [2016]
Identifiers: LCCN 2015037006 | ISBN 9781138120433 | ISBN 9781315651729
Subjects: LCSH: Language and languages—Study and teaching—Technological innovations. | Computer-assisted instruction. | Educational technology. | Interdisciplinary approach in education.
Classification: LCC P53.855 .T44 2016 | DDC 418.0078—dc23
LC record available at http://lccn.loc.gov/2015037006

ISBN: 978-1-138-12043-3 (hbk)
ISBN: 978-1-315-65172-9 (ebk)

Typeset in Galliard by Swales & Willis Ltd, Exeter, Devon, UK

Contents

List of figures x
List of tables xi
Notes on contributors xii
Foreword xvii
Jozef Colpaert (Universiteit Antwerpen, Belgium)
Acknowledgements xxi
Introduction 1
Izaskun Elorza (University of Salamanca, Spain), Blanca García Riaza (University of Salamanca, Spain) and Elena Martín-Monje (Universidad Nacional de Educación a Distancia, Spain)

PART 1: General issues about learning languages with computers 7

1 Languages and literacies for digital lives 9
Mark Pegrum (University of Western Australia, Australia)

2 Promoting intercultural competence in culture and language studies: outcomes of an international collaborative project 23
Margarita Vinagre (Universidad Autónoma de Madrid, Spain)

3 Return on investment: the future of evidence-based research on ICT-enhanced business English 35
Antonio J. Jiménez-Muñoz (Universidad de Oviedo, Spain)

4 L2 English learning and performance through online activities: a case study 47
M. Ángeles Escobar (Universidad Nacional de Educación a Distancia, Spain)

PART 2: Languages and technology-enhanced assessment 59

5 Language testing in the digital era 61
Miguel Fernández Álvarez (Chicago State University, USA)

6 Synchronous computer-mediated communication in ILP research: a study based on the ESP context 73
Vicente Beltrán-Palanques (Universitat Jaume I, Spain)

7 The COdA scoring rubric: an attempt to facilitate assessment in the field of digital educational materials in higher education 86
Elena Domínguez Romero, Isabel de Armas Ranero and Ana Fernández-Pampillón Cesteros (Universidad Complutense de Madrid, Spain)

8 Enabling automatic, technology-enhanced assessment in language e-learning: using ontologies and linguistic annotation merge to improve accuracy 102
Antonio Pareja-Lora (Universidad Complutense de Madrid, Spain)

PART 3: Mobile-assisted language learning 127

9 Challenges and opportunities in enacting MALL designs for LSP 129
Joshua Underwood (London Knowledge Lab, UCL Institute of Education, United Kingdom)

10 Designer learning: the teacher as designer of mobile-based classroom learning experiences 140
Nicky Hockly (The Consultants-E, United Kingdom)

11 Mobile and massive language learning 151
Timothy Read, Elena Bárcena (Universidad Nacional de Educación a Distancia, Spain) and Agnes Kukulska-Hulme (The Open University, United Kingdom)

PART 4: Language massive open online courses 163

12 Academic writing in MOOC environments: challenges and rewards 165
Maggie Sokolik (University of California, Berkeley, USA)

13 Language MOOCs: better by design 177
Fernando Rubio (University of Utah, USA), Carolin Fuchs (Columbia University, USA) and Edward Dixon (The University of Pennsylvania, USA)

14 Enhancing specialized vocabulary through social learning in language MOOCs 189
Elena Martín-Monje and Patricia Ventura (Universidad Nacional de Educación a Distancia, Spain)

PART 5: Corpus-based approaches to specialized linguistic domains 201

15 Corpus-based teaching in LSP 203
Tony Berber Sardinha (São Paulo Catholic University, Brazil)

16 Transcription and annotation of non-native spoken corpora 216
Mario Carranza Díez (Universitat Autònoma de Barcelona, Spain)

17 Using monolingual virtual corpora in public service legal translator training 228
María del Mar Sánchez Ramos and Francisco J. Vigier Moreno (Universidad de Alcalá, Spain)

PART 6: Computer-assisted translation tools for language learning 241

18 Computer-assisted translation tools as tools for language learning 243
María Fernández-Parra (Swansea University, United Kingdom)

19 Applying corpora-based translation studies to the classroom: languages for specific purposes acquisition 255
Montserrat Bermúdez Bausela (Universidad Alfonso X el Sabio, Spain)

20 VISP: a MALL-based app using audio description techniques to improve B1 EFL students’ oral competence 266
Ana Ibáñez Moreno (Universidad Nacional de Educación a Distancia, Spain) and Anna Vermeulen (Universiteit Ghent, Belgium)

Afterword: technology and beyond – enhancing language learning 277
Index 282

Figures

1.1 A framework of digital literacies 11
3.1 Combined Learning Experiences: student time 41
4.1 Evaluation per group in all tasks 55
5.1 Selection of items in a CAT 65
8.1 OntoTag’s Experimentation: OntoTagger’s Architecture 106
8.2 Text (excerpt) to be annotated 107
8.3 The distilled text (excerpt) to be annotated 108
12.1 Threaded forums: WebCT 2002 vs. Coursera 2015 169
15.1 Use of Sketch-Diff for supply/remove in CAM 207
15.2 Chart view in COCA for past tense verb forms 212
17.1 Concordance function 235
18.1 SDL MultiTerm termbase exported into MS Word 251
19.1 Concordance lines for “cepas” with context word “lactoba*” 261
20.1 Introduction screen in VISP v1 270

Tables

2.1 Categorization of participants’ excerpts 27
6.1 Verbal probe questionnaire 77
8.1 The distilled text excerpt ‘Director: Peter Chelsom’, tagged by means of DataLexica 108
8.2 The distilled text excerpt ‘Director: Peter Chelsom’, tagged by means of FDG 110
8.3 The distilled text excerpt ‘Director: Peter Chelsom’, tagged by means of LACELL 111
8.4 The DataLexica-tagged text excerpt ‘Director: Peter Chelsom’, after the first standardization sub-phase 113
8.5 The FDG annotations for ‘Director’ after the first standardization sub-phase 115
8.6 The LACELL annotations for ‘Director’ after the first standardization sub-phase 116
8.7 The FDG lemma and category information for the word ‘Director’ included in its associated L+POS file 117
8.8 The FDG category and morphological (POS+M) information for the word ‘Director’ 117
8.9 The FDG syntactic (Syn) information for the word ‘Director’ 118
8.10 The FDG semantic (Sem) information for the word ‘Director’ 118
8.11 The DataLexica semantic (Sem) information for the word ‘Director’ 119
8.12 The L+POS combined annotations obtained for the chunk ‘Director: Peter Chelsom’ by means of OntoTagger in the combination sub-phase 121
8.13 The final integrated (and merged) annotation obtained by means of OntoTagger for the chunk ‘Director: Peter Chelsom’ 122
10.1 Mobile tasks 145
16.1 Examples of the encoding and annotation of mispronunciations with their expected pronunciation (transcribed in SAMPA) and as pronounced by the learner (transcribed in X-SAMPA) 224

Contributors

Isabel de Armas Ranero (Universidad Complutense de Madrid, Spain) has been working at the Complutense University of Madrid since 1980. Since 2000, she has participated intensively in the different Committees of the University Library for Information and Support for Teaching and Research. Her current focus is the development of e-learning courses, in collaboration with the Main Library of the UCM.

Elena Bárcena (Universidad Nacional de Educación a Distancia, Spain) is an associate professor at UNED, where she teaches Theoretical and Applied Linguistics. She is the founding director of the ATLAS (Applying Technologies to Languages) group and is currently working on MALL and MOOCs. She has the Certification for Professor from the Spanish Agency of Quality Evaluation and Certification.

Vicente Beltrán-Palanques (Universitat Jaume I, Spain) is pursuing a PhD in Applied Linguistics at Universitat Jaume I (Castellón, Spain), where he teaches English as a Foreign Language and English for Specific Purposes courses. He is a member of GRAPE, the Group for Research on Academic and Professional English. His research interests include interlanguage pragmatics, testing pragmatics, language assessment and multimodality.

Tony Berber Sardinha (São Paulo Catholic University, Brazil) is a professor with the Applied Linguistics Graduate Program and the Linguistics Department, São Paulo Catholic University, Brazil. His current interests are corpus linguistics, metaphor analysis, and applied linguistics, including the relationship between language and culture, and multi-dimensional approaches to language description.

Montserrat Bermúdez Bausela (Universidad Alfonso X el Sabio, Spain) is an associate professor in Translation and Interpreting at Universidad Alfonso X el Sabio (Madrid, Spain). She currently teaches Linguistics Applied to Translation, Theory and Practice of Translation, English for Translator Trainees, New Technologies Applied to Translation and Software Localization. Her research interests include Textual Linguistics, Discourse Analysis, the application of CAT tools to research in the field of Translation Studies, and English for Specific Purposes, among others.

Contributors  xiii Mario Carranza Díez (Universitat Autònoma de Barcelona, Spain) has researched on Spanish pronunciation acquisition by Japanese students, computer-assisted pronunciation training (CAPT) and mobile-assisted language learning (MALL). Presently, he works at the Spanish Department of the Autonomous University of Barcelona, while he finishes his PhD on nonnative spoken corpora for developing pronunciation training tools enhanced with speech technologies. Jozef Colpaert (Universiteit Antwerpen, Belgium) teaches Instructional Design, Educational Technology and Computer-Assisted Language Learning at the University of Antwerp, Belgium. He is editor of Computer-Assisted Language Learning (Taylor and Francis) and organizer of the International CALL Research Conferences. He is currently working on the empirical and theoretical validation of Educational Engineering, a novel instructional design and research method. Elena Domínguez Romero (Universidad Complutense de Madrid, Spain) lectures at the School of Humanities of the Complutense University of Madrid (UCM), where she has participated in several Teaching Innovation Projects intended to enhance the implementation of Campus Virtual practice in the official programs offered by the School. Edward Dixon (University of Pennsylvania, USA), PhD in German, is Technology Director of Penn Language Center at the University of Pennsylvania. He works in areas related to classroom instruction, research and online language education. In 2011, Ed received Penn’s affiliated faculty award for distinguished teaching in the College of Liberal and Professional Studies. Izaskun Elorza (University of Salamanca, Spain) is an associate professor at the University of Salamanca, where she teaches Corpus Linguistics and English Language and Grammar enhanced with ICTs for language learning and corpus processing. 
She convened the ‘Corpus Linguistics for 21st Century Language Learning’ Roundtable under the auspices of the Language Learning Roundtable Conference Grant Program (2011) and, as a member of ATLAS Research Group, her current research focuses on the development of apps for mobile language learning. M. Ángeles Escobar (Universidad Nacional de Educación a Distancia, Spain), PhD, is Associate Professor at the Universidad Nacional de Educación a Distancia (UNED, Spain). Her main areas of current research are Applied Linguistics, L2 Acquisition and Learning. She has published over twenty articles in specialized international journals, and authored or co-edited books in leading academic publishers. Miguel Fernández Álvarez (Chicago State University, USA) is an associate professor in the Bilingual Education Program at Chicago State University. He holds a PhD in English Philology (University of Granada, Spain) and two Master’s Degrees: MA in Education (University of Granada, Spain), and MA

xiv Contributors in Language Testing (Lancaster University, United Kingdom). His areas of interest include Bilingual and Multicultural Education, Second Language Acquisition and Language Testing. Ana Fernández-Pampillón Cesteros (Universidad Complutense de Madrid, Spain) is Lecturer and Researcher in the Faculty of Philology at the Universidad Complutense de Madrid (UCM), where she has participated in projects related to e-learning, b-learning, educational innovation and computational linguistics. Her main areas of research are computational lexicography (dictionaries, glossaries, thesauri and ontologies) and its application to virtual education (e-learning, semantic web and LMS). María Fernández-Parra (Swansea University, United Kingdom) completed her PhD in Formulaic Language and Computer-Assisted Translation in March 2012 at Swansea University, where she is currently Lecturer in Translation Studies. She continues her research into several aspects of formulaic language, focusing on the translation of formulaic expressions, and various topics in computer-assisted translation. Carolin Fuchs (Columbia University, USA), PhD, is Lecturer in the TESOL/ Applied Linguistics Program at Teachers College, Columbia University. Her research interests within technology-based language learning and teacher education include multiliteracies, language play, and Web 2.0 tools. She regularly conducts telecollaborations with countries such as China, England, Germany, Japan, Spain, Taiwan and Turkey. Blanca García Riaza (University of Salamanca, Spain) holds a PhD from the University of Salamanca. She is now Lecturer at the Department of English Studies at the Escuela de Educación y Turismo, University of Salamanca. Her research interests focus on discourse analysis and corpus linguistics within the systemic functional framework and mobile devices as learning tools. 
Nicky Hockly (The Consultants-E, United Kingdom) is Director of Pedagogy of The Consultants-E, an award-winning online training and development organization. She is a teacher, trainer and international plenary speaker, and has written several award-winning methodology books on technologies in ELT. Her most recent is Going Mobile (2014), co-authored with Gavin Dudeney. She lives in Barcelona and is a technophobe turned technophile. Ana Ibáñez Moreno (Universidad Nacional de Educación a Distancia, Spain) is Professor at the Faculty of Philology of the Spanish National University of Distance Education, UNED. She holds a PhD in English Linguistics. Her research focuses on the use of audio description as a didactic tool in the foreign language classroom and on the development of MALL applications based on audio description. Antonio J. Jiménez-Muñoz (Universidad de Oviedo, Spain) is Associate Professor of English at the University of Oviedo, Spain. Before his current

Contributors  xv position, he has taught Foreign Languages and Linguistics at the universities of Kent and Hull. His research analyses the impact of theoretical approaches upon learners’ evidence-based performance, contributing regularly to academic journals and edited collections. Agnes Kukulska-Hulme (The Open University, United Kingdom) is Professor of Learning Technology and Communication in the Institute of Educational Technology at The Open University and Past-President of the International Association for Mobile Learning. She has published widely on mobile learning research and practice and has a special interest in mobile language learning beyond the classroom. Elena Martín-Monje (Universidad Nacional de Educación a Distancia, Spain) is Lecturer at UNED (Spain), where she teaches mainly in the areas of English for Specific Purposes and CALL (Computer-Assisted Language Learning), also her fields of research as a member of ATLAS (http://atlas.uned.es). Both her research and teaching practice have received official recognition, with a Prize for Doctoral Excellence at UNED and a University Excellence in Teaching Award. Antonio Pareja-Lora (Universidad Complutense de Madrid, Spain) got a PhD in Computer Science and Artificial Intelligence from the Universidad Politécnica de Madrid (UPM) in 2012. Currently, he is a Senior Lecturer and Researcher at Universidad Complutense de Madrid (UCM), and a member of the ATLAS (UNED) and ILSA (UCM) research groups. He is also the Convenor of AENOR’s AEN/CTN 191 (Terminología) and quite an active expert within ISO/TC 37. Mark Pegrum (University of Western Australia, Australia) is an associate professor in the Faculty of Education at the University of Western Australia, where he specializes in mobile learning and, more broadly, e-learning. His current research focuses on mobile technologies and digital literacies. 
He is the author or co-author of four books, with his most recent publication being Mobile Learning: Languages, Literacies and Cultures (Palgrave Macmillan, 2014). Timothy Read (Universidad Nacional de Educación a Distancia, Spain) is a Senior Lecturer in Computer Science. He co-founded the ATLAS research group, researching in Educational Technology, MALL, OER/OEP and Massive Open Social Learning. He held several managerial posts including: Founding-Director of Open UNED, Associate Pro-Vice-chancellor of Emerging Technologies and Director General of the Centre for Technical Development. Fernando Rubio (University of Utah, USA) is Associate Professor of Spanish Linguistics at the University of Utah, where he is also Co-Director of the Second Language Teaching and Research Center. His research interests are in the areas of Applied Linguistics and Teaching Methodologies including technology-enhanced language learning and teaching.

xvi Contributors María del Mar Sánchez Ramos (Universidad de Alcalá, Spain) currently lectures Translation Technology in the Department of Modern Philology at the University of Alcalá (Madrid, Spain). Her research is focused on translator training, corpus-based translation studies, localization and translation technology. She is an active member on different national and international research projects. Maggie Sokolik (University of California, Berkeley, USA) holds a BA from Reed College, and an MA and PhD from UCLA. She developed and taught the MOOC “Principles of Written English,” which has enrolled approximately 400,000 students to date. She is Director of College Writing Programs and the University of California, Berkeley. Joshua Underwood (London Knowledge Lab, UCL Institute of Education, United Kingdom) is a teacher at the British Council in Bilbao and a visiting researcher at the LKL (Institute of Education, London). He is particularly interested in developing learner autonomy and learner-centred approaches and is involved in teacher training. He is an expert in learner-centred design methods and co-editor of the Routledge Handbook of Design in Educational Technology (2013). Patricia Ventura (Universidad Nacional de Educación a Distancia, Spain) holds an Honours Degree in English Studies and a Master’s Degree in ICT for Teaching and Processing Languages from UNED. She is a teacher and a virtual tutor at CUID (the University Centre for Distance Language Education at UNED). She is currently doing her PhD thesis on social media integration in the context of online and mobile professional English MOOCs and she is a member of the European project ECO “E-learning, Communication and Open Data.” Anna Vermeulen (Universiteit Ghent, Belgium) is an associate professor at the Department of Translation, Interpreting and Communication of the faculty of Arts and Philosophy at the Ghent University (Belgium). 
She teaches Spanish structures, translation from Spanish into Dutch and Audiovisual Translation, especially subtitling and audio description. Her research interests and publications focus on translation strategies, multilingualism and pragmatics in audiovisual translation. As a visiting professor she has given lectures and workshops at many universities in Spain and Latin America. Francisco J. Vigier Moreno (Universidad de Alcalá, Spain) is a lecturer in Translation Studies at the University of Alcalá (Madrid, Spain), where he specializes in legal translation in both undergraduate and postgraduate programmes. His main research fields include translator training, legal translation and interpreting, and translator accreditation. Margarita Vinagre (Universidad Autónoma de Madrid, Spain) holds an MPhil in Applied Linguistics from Trinity College Dublin and a PhD in English from the University of Seville. She currently lectures in English Linguistics at the Autónoma University of Madrid. Her main research interests focus on the use of new technologies in foreign language teaching.

Foreword Jozef Colpaert Universiteit Antwerpen, Belgium

This book reads like a kaleidoscopic snapshot of an erupting volcano. Since the early years of computer-assisted language learning (CALL), we have been privileged witnesses of a magnificent technological evolution, resulting in the recent explosion of Massive Open Online Courses (MOOCs), mobile apps, virtual environments, augmented reality and ambient intelligence. This unique episode in the history of mankind has gone hand in hand with a remarkable evolution in pedagogical approaches, from behaviorism and cognitivism fanning out into a wide variety of communicative, learner-centered, task-based, situational or sociocultural approaches. Several recent CALL books have attempted to stop the clock and to look back, mostly trying to link what we have learned so far with a specific pedagogical theory. This book focuses on how to better serve the specific needs of adult learners in specialized linguistic domains. It looks at technological advances from different angles or mirrors. I have tried to identify the mirrors which generate this kaleidoscopic view. The first mirror is the mirror of specialization. It shows the richness of specialized, professional domains such as academic, business, medical, technological or psychological language use, but it also reflects the fascination for the borderline between general and specialized vocabulary. Specialized words are not necessarily less frequent, highly technical words; they can be everyday words with a special meaning. The question is: what makes a word or multi-word unit ‘special’ or ‘specialized’? The second mirror shows the various approaches somewhere between specialized language and specific purposes.
When talking about language for specific purposes (LSP), the contributors to this book seem to focus more on skills (critical thinking, problem-solving, collaboration, autonomy and flexibility), literacies (multimodality, networking, participation) and competences (intercultural, telecommunicative or socio-pragmatic) than on specialized vocabulary. The third mirror would be the view taken on the required pedagogical-didactic approach. Some scholars state that there is no difference at all with general language learning. Others claim that a specific didactic approach is required: a student-centered approach based on Activity Theory, Socio-constructivism or Task-Based Language Learning; an approach which equips, empowers and enables adult students in their semi-autonomous learning process.

This book also looks at specialized domains through the mirror of technology. It shows a wide variety of quickly evolving mobile devices, learning environments and tools for language analysis, assessment and translation. As the technologies themselves are expected to have a short lifespan, the contributors to this volume do not focus intensively on their specifics, nor on their hypothetical added value, but rather highlight types of technology and their affordances. They emphasize sustainable aspects such as issues regarding availability of networks and devices, evidence-based design and integration, and effectiveness (return on investment). The fifth mirror is the mirror of the targeted audience. Many traditional LSP courses are face-to-face courses, often individual and private. The more specialized the domain, the more its students are isolated and spread out at the same time. Technology makes it possible to bring these students together, to interconnect them, to create a community and a learning environment, to teach them, and to have them execute real-world tasks and co-construct artefacts. The usefulness of MOOCs appears to have much more to do with these aspects than with trying to enroll as many students as possible. Technology can bring isolated students from all over the world together in a learning community, while providing the most appropriate pedagogical-didactic approach. This brings us to the sixth mirror. This pedagogical-didactic approach should not be uniform. It should offer some degrees of freedom in order to cater for personalization and contextualization: adaptation to personal learning styles, preferences, attitudes, needs, goals and perceptions, but also to the personal context with its limitations and opportunities. The learner is certainly not being overlooked in this volume.
A myriad of aspects is covered, including critical awareness, discovery learning, student achievement and performance, motivation and ICT proficiency, literacies and competences, attrition and plagiarism. The seventh mirror would be the mirror of language and linguistics. Language as learning content in terms of words, idioms and collocations (multi-word units), syntax, discourse, register and style. Language as learning object in terms of knowledge, skills, competencies, literacies, speech acts and tasks. Language as study object with word frequency lists, keyword lists, collocation tables, and n-gram lists. Linguistics as provider of routines for building concordancing, transcription and translation tools. The eighth mirror is the teacher mirror. The teacher as lecturer, personal guide, scaffolding coach, author of learning content, administrator of the learning process and confidant. As there is so little content (Open Educational Resources, textbooks) and there are so few teaching models available for each specific domain, teachers should more than ever become designers. Designers of their content, teaching model, learning model, evaluation model and technology, in other words . . . their learning environment. Language testing is a multifaceted mirror on its own. Testing, assessment, examination and evaluation each have their own connotations and collocate with terms such as placement, achievement, norm-referenced, criterion-referenced, formal, informal, diagnostic, formative, summative and adaptive. It is generally

being accepted that the content, methods and criteria for evaluating performance in specific domains are derived from an analysis of the target language use situation. The – by definition – small number of testees entails problems regarding the labour-intensiveness of this analysis on the one hand and regarding the norms and criteria to be applied across domains on the other. Finally, the research mirror reflects all possibilities: from Action Research through Design-Based Research to typical quantitative-qualitative experimental triangulation in an instruction-delivery-outcome model, but also original approaches where learner corpora can be used for language acquisition research purposes. All these mirrors freely gravitate around the main topic of this book, showing the richness and colorfulness of what is currently happening in the field. But at the same time, this book also shows the challenges technology-enhanced LSP is confronted with. Challenges we should continuously consider when talking or writing about specialized domains. We can summarize them into five glasses to wear when reading the following chapters: transdisciplinarity, ontologies, open content, the role of technology, and task design. The more fragmented the learning goals, learning content, student population, media and actors involved, the greater the need for a transdisciplinary approach. In order to achieve more progress, linguists, psychologists, pedagogues, subject matter specialists, educational designers, and technologists try to learn more from each other’s domains. Or form an interdisciplinary research team. Interesting in itself, but not necessarily a panacea. The real solution for advancing in the field is to build common concepts across disciplines, which makes the difference between an interdisciplinary and a transdisciplinary approach. The best way to formulate these common concepts is in terms of ontologies.
These ontologies are accurate specifications of concepts such as learning objects, linguistic routines, test constructs, learner analytics . . . formulated in such a way that all specialists involved can work with them and contribute to their further evolution. On top of that, any discipline that respects itself should work on a domain-specific terminology. LSP, strangely enough, could thus be considered a specific or professional domain on its own. If we wanted to make an LSP lexicon, which words would be on that list? In earlier publications and presentations, I have repeatedly warned against ‘blurred ontologies’: terms like flipped classrooms, blended learning, digital pedagogy, design-based research, serious games, affordances, digital natives, twenty-first century skills, big data and virtual learning environments. Each of these terms is pervasive, but persuasive at the same time. The problem is that no one remembers their originally intended meaning, and we have all started using these terms with our own meaning in mind. This ontological approach also applies to learning content. Publishers are reluctant to develop learning materials for specialized domains, given the limited market and labor-intensiveness of content development. Hence the need for making content more reusable, exchangeable, generic and open. Co-construction of open content, in an Open Educational Resources approach, is the way to go. But for that we need ontological object models and content structures.

Technology is not the starting point for design. Nor does it carry an inherent, measurable and generalizable effect on learning. The role and shape of required technology depend on the design of the learning environment for a specific context. The added value of a technology is the extent to which its affordances match the requirements of that learning environment. And we should remember that affordances, as ontologies, should be defined and specified as perceived new possibilities for realizing our goals on the one hand, and that they depend on the context on the other. So a particular technology might appear very useful in one domain, and not useful at all in another. Finally, it’s all about tasks and how to make them more meaningful, useful, enjoyable and authentic. Task design should be a priority research theme in the area of specialized domains. The more specialized the domain, the more effort should be put into the process of constructing tasks the learner can identify with. Using these five glasses, the reader will discover why this volume was needed, which exciting research opportunities are being offered and how it can contribute to further developments in the field.

Acknowledgements

The editors of this book are very grateful to the Spanish Ministry of Science and Innovation for funding the research activities (FFI2011-29829) undertaken by the ATLAS group that have given rise to this publication, and most particularly to Elena Bárcena, research leader of the group, for her insightful comments and suggestions as well as her continuous support.


Introduction Izaskun Elorza University of Salamanca, Spain

Blanca García Riaza University of Salamanca, Spain

Elena Martín-Monje Universidad Nacional de Educación a Distancia, Spain The last decades have witnessed rapid changes in the development and evolution of applications of Information and Communication Technologies (ICTs) to education. On the verge of the new millennium, the broad areas that were identified as key emerging technologies were the internet and the web, wireless technology, local power generation, speech recognition software and machine translation (Bates, 2001). Fifteen years later, we feel it is high time we turned our attention to the current development of technological applications, in order to become more aware of what we have achieved so far, as well as of the path we have followed to reach it, before reflecting on which guidelines should be set or can be expected for the near future. The emphasis here is not placed so much on how technology will develop in the coming years, but rather on the consequences that past changes are having for language teaching and learning and how these and the coming changes will affect them. This volume attempts to provide a representative sampling of how technological applications have contributed to enhancing language learning in a variety of ways in recent years. The contributors are representative of the field and present and discuss the most relevant aspects of how technology has enhanced language learning in different specialized domains. The issues treated range from the most general aspects, such as the question of the now-more-than-ever need for language learners to be digitally literate, to practical applications of different technologies in specific teaching and learning contexts. As a result, this collection of topics and perspectives constitutes an accurate overview of the most up-to-date research in the area and of representative examples of the current range of available technological applications.
In addition, it also constitutes an insightful picture of how ICTs have changed not only the possibilities but also the perspective on language learning, probably the most profound change being a shift from ‘the learner classroom’ perspective to ‘the learner community’ perspective. The topics covered in the volume revolve around five areas, which can be considered key in today’s technologically enhanced language teaching and learning in specialized domains, namely assessment, mobile-assisted language learning (MALL), language massive open online courses (LMOOCs), corpus linguistics methodology, and translation. Some of these areas are very close to the emerging

technologies described by Bates, such as speech recognition (cf. Carranza Díez in this volume). Mobile-assisted language learning is another example: Bates suggested in 2001 that over the following five years learners would have “great mobility and be able to access learning while travelling (on buses or planes, for example) or in a café without having to be hardwired” (p. 40). Another case is that of massive open online courses, which have many of the advantages already identified for online learning, such as their flexibility, which was envisaged at the time as “clearly of great value to many mature adults trying to balance work, family and study requirements”, or also “the opportunity to work collaboratively and closely with colleagues around the world and to have access not only to the course instructors, but to textbook authors and experts from other institutions” (p. 43). All these aspects are mostly related to the ‘communication’ dimension of ICTs, as they have mainly changed the number of ways that teachers and learners have available to communicate (or not) along the learning process so that learning may take place at all. However, in this overall picture one more aspect which deserves attention is corpus methodology. Corpus applications to language teaching and learning and to the assessment of language learning are treated in this volume as other key technologically related features which have come to the fore in the last few years in the context of education, and which were not mentioned by Bates, probably because they are more related to the ‘information’ dimension of ICTs. The exponential development of research and practical applications in the field of corpus linguistics is producing a steady spread not only of the use of attested models of language as the main input source for language teaching and learning but also, and most importantly, of a perspective on the need to use attested input.
Teaching and learning in the context of specialized linguistic domains have greatly benefitted from corpus linguistics and its methodology, and this explains the strong presence of this ‘informational’ dimension in the present volume. On the other hand, technology-enhanced assessment has developed greatly in these years. According to the UNESCO Institute for Information Technologies in Education (2012), assessment has developed in four main ways, depending on how ICT affordances are exploited: traditional assessment tests supported by computer, computer-enhanced item presentation and performance acquisition and/or computer-based adaptive tests, adaptive learning environments including an assessment module, and assessment 2.0 systems, such as Netfolio, WebPA or Open Mentor (Whitelock, 2011). Looking ahead, the latest issue of the Horizon Report (Johnson et al., 2015) – which identifies the key trends and emerging technologies for teaching and learning – advances the most relevant challenges for technology-enhanced learning in the coming years: (1) adequately defining and supporting digital literacy, (2) blending formal and informal learning, (3) complex thinking and communication, and (4) integrating personalized learning, among others. All these are aspects that permeate the different chapters in this volume, e.g. Pegrum’s chapter on digital literacies, the whole section devoted to MOOCs (blending formal

and informal learning), Vinagre’s chapter on intercultural projects (complex thinking and communication) or the outline of the progress made in MALL (personalized learning). The up-to-date treatment of technological applications for language learning, as well as the comprehensive perspective adopted, mirrored in the variety of practical areas presented, makes the volume relevant for a wide range of readers: not only teachers and learners of languages in specialized domains, translators and translator trainers, but also people with a more general interest in applications of technology to these domains, from academia and beyond. In this way, the volume aims to meet a real demand for this kind of specialized tuition and processing of languages, thus constituting a representative state of the art of technological and methodological innovation in these disciplines. The book as such is divided into six parts: (1) General issues about learning languages with computers; (2) Languages and technology-enhanced assessment; (3) Mobile-assisted language learning; (4) Language Massive Open Online Courses; (5) Corpus-based approaches to specialized linguistic domains; and (6) Computer-assisted translation tools for language learning. The volume editors are very grateful to Prof. Dr. Jozef Colpaert, who has kindly written the foreword to the book. The first part deals with broad aspects about computers as tools to assist language learning processes. In the first contribution, Mark Pegrum reflects upon the globalized world of digital literacies, whose current importance would have been almost inconceivable in pre-digital times. These digital literacies are considered as crucial elements for students to successfully acquire effective communication skills in a digital era.
The second contributor to this part is Margarita Vinagre, who presents the findings on intercultural competence of a three-month online collaborative project between undergraduate students of British Civilization and Culture in a Spanish university and undergraduate students of Spanish as a Foreign Language in a British university. The third contributor, Antonio J. Jimenez-Muñoz, introduces a discussion of the aspects required in order to carry out sound evidence-based research in the field of Business English at tertiary level in a way that is flexible, granular and abstract enough to encompass present and future changes of technologies and educational materials. The second part in this volume is devoted to languages and technology-enhanced assessment. Its first contributor, Miguel Fernández Álvarez, focuses his chapter on Computer-Assisted Language Testing (CALT) as a field and presents the advantages and drawbacks associated with it, showing the current state of the art and exploring the future of technology-enhanced language testing, based on existing practices and methodologies. The second contributor, Vicente Beltrán Palanques, investigates the performance of the speech act of advice by means of interactive discourse completion tasks/tests (IDCTs) and retrospective verbal reports (RVRs) in the domain of English for Psychology. The third contributors, Elena Domínguez Romero, Isabel de Armas Ranero and Ana Fernández-Pampillón Cesteros, present the COdA tool for the evaluation

of digital educational materials at university level, and the development process behind the COdA scoring rubric. The fourth contributor is Antonio Pareja Lora, who presents an annotation architecture and methodology, together with a prototype that has been built in order to reduce the rate of errors in POS (Part of Speech) tagging. Part Three deals with mobile-assisted language learning (MALL) as particularly appropriate for learning Languages for Specific Purposes (LSPs). Its first contributor, Nicky Hockly, describes a small-scale classroom-based action research project carried out with two different levels of international EFL (English as a Foreign Language) students in the UK, over a two-week period, proposing six parameters as key to the effective design and sequencing of mobile-based tasks for the communicative language classroom. The second contributors in this part, Timothy Read, Elena Bárcena and Agnes Kukulska-Hulme, argue that mobile devices are particularly potent tools for students in Language MOOCs (Massive Open Online Courses) since they complement the learning experience by providing three affordances: portable course clients, mobile sensor-enabled devices, and powerful small handheld computers. The third contributor to this section is Joshua Underwood, who analyses requirements for vocabulary learning tools, technology affordances that can be exploited for vocabulary learning, and mobile learning designs, and then identifies opportunities to better support enactment of MALL designs. Part Four focuses on Language Massive Open Online Courses (LMOOCs). Its first contributor, Maggie Sokolik, reports on outcomes from an ongoing academic writing MOOC taught through the University of California, Berkeley and edX.org, outlining suggestions for the teaching of writing-focused MOOCs.
The second contributors to this part are Fernando Rubio, Carolin Fuchs and Edward Dixon, who provide recommendations for designers and teachers on how to take advantage of the affordances of the massive online medium and overcome its challenges, arguing that language MOOCs (LMOOCs) would benefit from combining some of the features of the connectivist and instructivist approaches to provide the best possible conditions for language learning. The third contributors, Elena Martín-Monje and Patricia Ventura, present the emerging model of LMOOCs as a convenient vehicle for reaching professionals who need to learn specialized vocabulary in foreign languages, placing an emphasis on the use of social learning through Web 2.0 tools such as Facebook. They also address issues of technology-enhanced language learning in specialized domains and potential problems in the acquisition of professional vocabulary, and showcase the second edition of the “Professional English” LMOOC as empirical evidence of the progress made in this field. Part Five shifts the attention to corpus-based approaches to specialized linguistic domains. Tony Berber Sardinha presents an overview of how corpus linguistics can be explored in the LSP classroom, illustrating it with analyses of two corpora: commercial aviation maintenance manuals and research papers. Mario Carranza Díez addresses the process of transcribing and annotating

Introduction 5 spontaneous non-native spoken corpora for the empirical study of second language pronunciation acquisition and the development of computer-assisted pronunciation training applications. The last contribution in this part, by María del Mar Sánchez Ramos and Francisco J. Vigier Moreno showcases the use of monolingual virtual corpora in public service legal translator training, claiming that corpus management tools can be used to help trainees acquire expertise in this specific language domain. Finally, Part Six, which gives an outline of computer-assisted translation tools for language learning, opens with a reflection on how computer-assisted translation tools can be used for this purpose, by María Fernández Parra. Montserrat Bermúdez Bausela discusses the use of corpora in translation studies and the application of “ad hoc corpora” in LSP and lastly, Ana Ibáñez Moreno and Anna Vermeulen provide empirical evidence on how a MALL-based app incorporating audio description techniques can improve students’ oral competence. These six parts of the volume follow a similar pattern: The first contribution in each part has a more theoretical nature and the other chapters provide empirical evidence of the progress made in the field. Furthermore, they are closely interconnected, revealing all those subjacent themes that generate the kaleidoscopic view which Jozef Colpaert refers to in his foreword. There is an emphasis on the increasing relevance of learning communities – explicitly stated in Pegrum’s, Martín-Monje and Ventura’s or Sokolik’s contributions – and the importance of learning and task design is stressed (Colpaert; Domínguez Romero, de Armas Ranero & Fernández-Pampillón; Rubio, Fuchs & Dixon as well as Underwood). 
Moreover, assessment proves to be key in language learning and permeates virtually all sections: Fernández Álvarez and Underwood discuss the affordances of assessment through mobile devices; Rubio, Fuchs and Dixon devote a whole section to assessment in MOOCs; the links between linguistic annotation in corpora and technology-enhanced assessment are addressed by Carranza Díez and Pareja-Lora; and in the last section Ibáñez Moreno and Vermeulen contemplate assessment and self-assessment in audio-visual translation. This volume also demonstrates how closely related some of the fields are. There is a natural bond between corpus-based approaches to specialized linguistic domains and computer-assisted translation tools for language learning, as evidenced in some shared research interests: Bermúdez Bausela shows how corpora can be successfully used for translation studies and provides practical ideas for the language instructor, and some more useful suggestions can be found in Ibáñez Moreno and Vermeulen’s case study or the opening chapters to Parts Five and Six, by Berber Sardinha and Fernández Parra respectively. Another natural partnership is that of MALL and LMOOCs, as demonstrated by Read, Bárcena and Kukulska-Hulme as they ponder mobile and massive language learning. In sum, Technology-Enhanced Language Learning for Specialized Domains: Practical Applications and Mobility encompasses the main fields in which LSP and technological applications are making progress hand in hand, for the time being and most likely for the near future.


References
Bates, T. (2001). The continuing evolution of ICT capacity: The implications for education. In Glenn M. Farrell (Ed.), The changing faces of virtual education (pp. 29–46). Vancouver: The Commonwealth of Learning.
Johnson, L., Adams Becker, S., Estrada, V. & Freeman, A. (2015). NMC Horizon Report: 2015 Higher Education Edition. Austin, TX: The New Media Consortium.
UNESCO Institute for Information Technologies in Education (2012). Technology-enhanced assessment in education. Policy Brief (January 2012). Moscow: UNESCO.
Whitelock, D. (2011). Activating assessment for learning: Are we on the way with Web 2.0? In Mark J. W. Lee & Catherine McLoughlin (Eds.), Web 2.0-Based E-learning (pp. 319–341). New York: International Science Reference.

Part 1

General issues about learning languages with computers


1 Languages and literacies for digital lives Mark Pegrum University of Western Australia, Australia

Introduction In a world woven ever more tightly together through physical travel and migration networks, complemented by virtual communication networks, the need to operate in more than one language is greater than ever. But language alone is not enough. A knowledge of how to negotiate between diverse cultures, and a facility with digital literacies, are equally in demand on our shrinking planet. Thus, at the very moment when automated translation technologies are eroding some basic language learning needs for some individuals (Orsini, 2015; Pegrum, 2014a), language teachers find themselves with an ever more diverse portfolio of responsibilities for teaching language(s), culture(s) and literacies. For some time now, language teachers have been going beyond the teaching of language to emphasize culture and, especially, intercultural communicative competence (Byram, 1997). In a digitized era, it is equally necessary to go beyond the teaching of traditional literacies to emphasize digital literacies (Dudeney, Hockly & Pegrum, 2013). Digital literacies are part of the broad set of 21st-century skills increasingly seen as essential for global workplaces, as well as for our personal lives, social lives and civic lives in local, national and regional communities, and ultimately the global community. The 21st-century skills include creativity and innovation, often linked to entrepreneurship (Zhao, 2012), along with critical thinking and problem-solving, collaboration and teamwork, and autonomy and flexibility (Mishra & Kereluik, 2011; P21, n.d.; Pegrum, 2014a); all of these are underpinned by digital communication tools and the literacies necessary to use them effectively. In short, students need to learn how to interpret the messages that reach them through digital channels, and how to express their own messages digitally. Digital literacies must be taught alongside language and more traditional literacy skills. 
This new emphasis on digital literacies builds on decades of growing emphasis on multiple literacies. These have included, notably, visual literacy and multimodal (or multimedia) literacy, reflecting the shift from a word-centric to a visual culture (Frawley & Dyson, 2014); and information literacy, reflecting the shift from a culture of limited publication channels to an online culture with few gatekeepers (Dudeney et al., 2013). Thus, the concept of multiple literacies – or even

multiliteracies (Cope & Kalantzis, 2000; Kalantzis & Cope, 2012) – which are fostered and facilitated by digital technologies is not new, though arguably there is still a lack of appreciation of the significance of the digital literacies skillset. Meanwhile, today’s digital communication networks are continuing to build the importance of some of these literacies, as well as highlighting literacies of which we previously had little awareness, and introducing literacies which could not have existed prior to the digital era (Pegrum, 2014b). Keeping up with new literacies is a pressing issue for educators. It has been suggested that there is a “continuous evolution of literate practice that occurs with each new round of ICTs” (Haythornthwaite, 2013, p. 56), linked to the “‘perpetual beta’ that is today’s learning and literacy environment” (ibid., p. 63). Notwithstanding recent initiatives to include a greater focus on digital tools in national curricula, teacher standards and teacher development programmes (Pegrum, 2014a), instances of effective engagement with digital literacies and digital practices are still limited, perhaps most notably at tertiary level (Johnson et al., 2014; Selwyn, 2014), where there has traditionally been a lesser focus on teaching development. As the pressure grows for educators to become designers of customized learning environments and tailored learning experiences for their students (Laurillard, 2012; Pegrum, 2014a), digital literacies will need to become a core consideration, not just in terms of student learning outcomes, but in terms of educator development and learning design. Drawing on numerous preceding discussions, Dudeney et al.’s (2013) framework of digital literacies (see Figure 1.1) provides an overview of the key points of emphasis that language teachers and students need to consider within the landscape of digital literacies.
These are divided loosely into four focus areas, though of course neither the focus areas nor the individual literacies are mutually exclusive, but rather intersect in multiple ways. In this chapter, we will examine illustrative examples of key literacies from each focus area, investigating how they support language comprehension and production. Along the way, we will consider newly emerging literacies not included in the original framework. We will also make frequent reference to mobile literacy, one of several macroliteracies indicated in bold in the framework, given that it is now evolving into perhaps the most significant literacy skillset of our time – one that, like all macroliteracies, pulls together the other literacies.

Language-related literacies

The first focus area is related to language, and the communication of meaning in channels which complement or supplement language. While it may seem strange to start a catalogue of digital literacies with print literacy, it remains core to communication online (Pegrum, 2011), and can be practised and honed on a plethora of social media platforms. That said, it is important to remember that print literacy takes on new inflections online; we read and write differently in digital channels compared to paper channels (Baron, 2008; Coiro, 2011), and there may be disadvantages to reading on small screens which impede our overview of the shape of a text (Jabr, 2013; Greenfield, 2014), or reading hypertext peppered with links that reduce our focus on the content at hand (Carr, 2010; Greenfield, 2014).

[Figure 1.1: A framework of digital literacies, organized into four focus areas of increasing complexity – first focus, language (print, texting, hypertext, multimedia, gaming, mobile and code literacy); second focus, information (tagging, search, information and filtering literacy); third focus, connections (personal, network, participatory and intercultural literacy); fourth focus, (re-)design (remix literacy) – with macroliteracies indicated in bold in the original. Source: Dudeney et al. (2013), Table 1.1, p. 6, reproduced by permission.]

Multimodal literacy

But language alone is no longer regarded as sufficient to carry meaning in a culture that has shifted from “telling the world” to “showing the world” (Kress, 2003), a process facilitated by digital tools that make it easy to create and share multimedia artefacts (Takayoshi & Selfe, 2007). Multimodal, or multimedia, literacy, which involves drawing on different semiotic systems (Bull & Anstey, 2010) to interpret and express meaning in formats ranging from word clouds through infographics to digital stories (Pegrum, 2014b), is crucial to support both language comprehension and production online. With the help of teachers, students need to develop the ability to choose appropriate representational modes or mixtures of modes, grounded in a solid understanding of their respective advantages: “When is text the best way to make a point? When is the moving image? Or photos, manipulations, data visualizations? Each is useful for some types of thinking and awkward for others” (Thompson, 2013, Kindle location 1666).

Communication of meaning, whether in on-the-job, personal or public settings, increasingly requires a sophisticated consideration of audience, purpose, genre, form and context (Conference on English Education Position Statement, 2005, cited in Miller & McVee, 2012, Kindle location 193). In essence, students need to become designers of meaning:

    Facility with design – the process of orchestrating representational modes and their interconnection – is . . . vital for composing a text that can meet the communication demands of new and future multimodal environments. (Miller & McVee, 2012, Kindle location 177; italics in original)

If students need to become designers, they also need to become disseminators of meaning through their designs, engaging in the social literacy practices that have become a fundamental part of everyday communication (Jenkins, Ford & Green, 2013; Miller & McVee, 2012). In this way, multimodal literacy flows into network literacy.

Current and emerging technological developments point towards a growing role for multimodality in meaning-making. With the spread of mobile smart devices, real-time multimedia recording, on-the-fly editing and near instantaneous dissemination are becoming an intrinsic part of constructing, reflecting on and sharing our experiences, opinions and learning; while augmented reality apps can already layer textual and multimedia information over real-world environments, effectively interweaving multimodal representations of reality with reality itself (Pegrum, 2014b). Thus, multimodal literacy also feeds into mobile literacy. In other developments, 3D printers will soon make their way into our lives and our classrooms, raising new questions: “What literacy will 3-D printing offer? How will it help us think in new ways? By making the physical world plastic, it could usher in a new phase in design thinking” (Thompson, 2013, Kindle location 1688). Communicative design will thus expand into three dimensions, while work currently underway on devices that can simulate smell and taste (Woodill & Udell, 2015) will expand multimodality itself far beyond our present understandings. In the process, multimodal literacy will become an ever more important complement to language comprehension and production.

Code literacy

There is little doubt that multimodal literacy can be enhanced by a degree of code literacy, that is, the ability to read and write computer language. Without coding skills, our digital communications are restricted to the templates provided by commercial organizations and the channels sanctioned by political institutions (Pegrum, 2014b). A familiarity with code changes this. “It doesn’t take long to become literate enough to understand what most basic bits of code are doing” (Pariser, 2011, Kindle location 3076) and, from there, to begin tweaking and modifying our digital communications to express our messages as precisely as we wish, and disseminate them as widely as we wish. If students are to take full control as digital designers of meaning, they must simultaneously learn to be designers of the channels through which meaning is communicated.
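As a purely hypothetical illustration of the kind of “basic bit of code” Pariser has in mind – one that a learner with no programming background could plausibly read, understand and tweak – consider the following short Python sketch (all names and data here are invented for illustration):

```python
# A hypothetical "basic bit of code" of the sort a digitally literate
# reader might encounter and modify: it filters a list of blog posts
# by tag and prints the matching titles.

posts = [
    {"title": "My first remix", "tags": ["remix", "video"]},
    {"title": "Coding for linguists", "tags": ["code"]},
    {"title": "Mobile field notes", "tags": ["mobile", "remix"]},
]

def posts_with_tag(posts, tag):
    """Return the titles of all posts carrying the given tag."""
    return [post["title"] for post in posts if tag in post["tags"]]

for title in posts_with_tag(posts, "remix"):
    print(title)
```

Even a beginner can see that changing `"remix"` to `"code"` would filter for different posts – a small act of the kind of tweaking and modifying described above.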

Awareness of code literacy has recently been boosted by recognition from senior politicians, including U.S. President Barack Obama’s promotion of Code.org’s Hour of Code campaign (Finley, 2014), UK Prime Minister David Cameron’s support for the same campaign and the introduction of coding into the National Curriculum (Gov.uk, 2014), and Singaporean Prime Minister Lee Hsien Loong’s endorsement of coding at the launch of Singapore’s Smart Nation vision (Lim, 2014). Such projects follow the lead established by countries like Israel and Estonia (The Economist, 2014; Olson, 2012), and dovetail with wider international initiatives like Codecademy (www.codecademy.com) and Mozilla Webmaker (webmaker.org). Moreover, they fit with a trend towards setting up self-directed makerspaces in libraries, colleges and schools (ELI, 2013; Johnson et al., 2014); here, the tools are made available to create with technology, where ‘creating’ can range from designing apps (an increasingly critical skill in our mobile era, and an increasingly critical component of mobile literacy) through to building machinery.

“Do you speak languages or do you code languages?” asks a recent British Council poster. The implication is clear: it is possible, and even necessary, to do both. Speaking human languages is important for communicating precisely and widely in a globalized era; coding computer languages is important for communicating precisely and widely in a digitized era.

Information-related literacies

The second focus area is related to finding, evaluating and cataloguing information, crucial skills in a world where information is ubiquitous, but where its quality and value must be established by its end users.

Information literacy

According to the ‘extended mind’ theory of cognition, we have always outsourced elements of our cognition in order to scaffold our thinking (Thompson, 2013), with printed books being perhaps the most obvious example of external memory devices. Printed materials already demand that we bring information literacy skills to bear on the content we read:

    Books do not make you smart all by themselves. In fact, they can make you stupid if you believe everything they say or if you only read books that contain viewpoints you already believe in. (Gee, 2013, Kindle location 2990)

Yet the need for information literacy skills is now exponentially greater thanks to the advent of the internet, where almost anyone can publish almost anything at any time. Critically evaluating content is thus an integral part of digital reading comprehension; and this may include bringing a critical eye to bear on potentially deceptive or distracting multimedia elements (requiring multimodal literacy); on the underlying structure of the communication (requiring code literacy); and on apps that interweave geotagged information – that is, information with a geographical address (Pegrum, 2014b) – with the real-world settings to which it refers (thus feeding into mobile literacy).

Despite the protestations of some technological enthusiasts, the possibility of looking up almost any fact online does not amount to a valid argument against learning facts (Pegrum, 2011). After all, the ability to critique information is dependent on a prior baseline of knowledge or, perhaps more accurately, an existing conceptual framework based on prior knowledge. This is what enables us, and of course our students, to ask appropriate critical questions and, having satisfied ourselves of the reliability of newly encountered facts or ideas, to connect them to our existing understanding. In the end, it is the ability to make connections that is essential:

    Facts on their own are not enough! While collecting information is gathering dots, knowledge is joining them up, seeing one thing in terms of another and thereby understanding each component as part of a whole. (Greenfield, 2014, Kindle location 3673)

Indeed, if used appropriately, our new technological tools may facilitate this process of connection-making, or meaning-making through connections, since they “make it easier for us to find connections – between ideas, pictures, people, bits of news – that were previously invisible” (Thompson, 2013, Kindle location 156). At this point, information literacy begins to blur into network literacy.

Data literacy

Data literacy, a new skillset not included in Figure 1.1, is in some ways a specialized extension of information literacy. It is only emerging as our digital technologies are starting to allow us to generate, capture, analyse and display ever more extensive and complex data sets (Thompson, 2013), otherwise known as big data (Feinleib, 2013; Mayer-Schönberger & Cukier, 2013). The phenomenon of the quantified self, a term attributed to Kevin Kelly and Gary Wolf (Feinleib, 2013; Havens, 2014), is based on the tracking of daily data via mobile and especially wearable devices, and allows individuals to take an informed approach to managing their health, fitness, diet and other elements of their lifestyles, often through app-based dashboards which display relevant metrics (Havens, 2014; Johnson et al., 2014).

In coming years, the quantified self will intersect with learning analytics (Johnson et al., 2014), where algorithms are applied to big data in an educational context in order to identify patterns and make predictions (U.S. Dept of Education, 2012). The insights obtained “can be used to improve learning design, strengthen student retention, provide early warning signals concerning individual students and help to personalize the learner’s experience” (de Freitas et al., 2014, p. 1), thus benefiting whole learning communities as well as individual learners. More user-friendly analytics tools and display formats are beginning to unlock the appeal of big data, whose analysis can be difficult for non-specialists to understand (Daniel, 2014). Nevertheless, while ordinary users of data may not wish to delve too deeply into the underpinning algorithms, a responsible approach to the use of our growing data stores demands a general technological literacy, ideally incorporating some code literacy, to help us and our students to ask the right critical questions (boyd & Crawford, 2012; Wallach, 2014). This is especially important for teachers if, as some claim, learning analytics has the potential to impact educational design by instigating data-driven approaches (Grant, 2013; Johnson et al., 2014) that, by implication, would temper some of the more ideologically based elements of traditional education.

At the same time, the ability to critique data visualizations (of the kind that may influence user or learner activity) and infographics (of the kind that may influence community opinion), or to generate our own, demands multimodal literacy skills. Naturally, data literacy is also intimately bound up in mobile literacy, given the centrality of mobile collection and display devices. Indeed, it is at the intersection of mobility and multimodality that students might first start to acquire data literacy, pulling together text and visuals to display and critique the data generated with their mobile devices.

Connection-related literacies

The third focus area is related to establishing a personal online presence through which to connect with others as we immerse ourselves in digital information flows, interact across geographical, linguistic and cultural boundaries, and take part in online initiatives that often spill over into our offline lives.

Network literacy

With the gradual shift away from traditional community ties and towards networked social structures (Rainie & Wellman, 2012), mirrored in and supported by the rise of the internet (Pegrum, 2014b), we find ourselves living in a network society “constructed around personal and organizational networks powered by digital networks and communicated by the Internet and other computer networks” (Castells, 2013, Kindle location 399). It is, moreover, a society of “permanent connectivity” underpinned by ubiquitous mobile devices (ibid., Kindle location 465).

In this context, where our knowledge, our relationships, and even our actions are intimately interwoven with the digital networks that support them, it is necessary for students to develop the literacy to make effective use of their personal and professional networks. In a time when “knowledge is more social than ever” (Cope & Kalantzis, 2013, p. 329), this means being able to draw on the knowledge that resides in a network of connections since, as Weinberger (2011) points out, “knowledge is becoming inextricable from – literally unthinkable without – the network that enables it” (Kindle location 145). Beyond this, network literacy also entails knowing how to use networks to obtain support, build collaboration, spread ideas and influence, and accomplish both online and offline goals (Dudeney et al., 2013; Pegrum, 2014b).

Telecollaboration projects, especially those that leverage social networking and social media platforms, provide an ideal training ground where students can practise language and hone intercultural competence while building multilingual, multicultural networks of connections that they can continue to draw on and contribute to in later life (Pegrum, 2014a). In the process, they are likely to find themselves further developing their multimodal literacy and information literacy skills. What is more, given that social media are increasingly mobile – accessed on mobile devices by users on the move, who hook into their online networks to support and share their real-world experiences – network literacy is also a crucial component of mobile literacy (Pegrum, 2014b).

Learning, it is clear, can be supported by networks. Formal learning within the four walls of a classroom can be supported by digital networks of information and contacts accessed on computers. Situated learning outside the classroom can be supported by digital networks accessed on mobile devices. In fact, George Siemens’ theory of connectivism, proposed as “a model of learning in an age defined by networks”, posits that “knowledge and cognition are distributed across networks of people and technology and learning is the process of connecting, growing, and navigating those networks” (Siemens & Tittenberger, 2009, p. 11). But learning may involve networks on an even more basic level. It would seem that learning and intelligence are fundamentally about the formation of connections, or networks, in the human brain (Greenfield, 2014; Siemens & Tittenberger, 2009). There may even be synergies, as yet little explored, between the neural networking of the human brain and the digital networking of the planet (Castells, 2013; Greenfield, 2014). Network literacy, then, may have ramifications for education which go far beyond those that we have considered to date.

Participatory literacy

Network literacy shades into participatory literacy as students become more active contributors to our networked culture, in which people are not “simply consumers of preconstructed messages” but are involved in “shaping, sharing, reframing, and remixing media content in ways which might not have been previously imagined” (Jenkins et al., 2013, Kindle location 189). Every time students tweet or blog, upload podcasts or videos, post to forums or add to Wikipedia articles, and remix or recirculate media artefacts, they are practising a range of digital literacies as well as developing a sense of participatory literacy – in other words, a sense of their ability, their right, and even their responsibility, to play an active role in a shared global digital culture. As Rheingold (2013) puts it:

    The more people who know how to use participatory media to learn, inform, persuade, investigate, reveal, advocate, and organize, the more likely the future infosphere will allow, enable, and encourage liberty and participation. (Kindle location 5881)

Research suggests that those who are social and active online are also social and active offline (Castells, 2013; Thompson, 2013). More than this, our virtual networks are coming to function as springboards for real-world social, civil and political action, most strikingly realized in the protest movements that have rocked the world in the wake of the Arab Spring. Speaking of such movements, Castells (2012) argues that:

    the fundamental power struggle is the battle for the construction of meaning in the minds of the people. Humans create meaning by interacting with their natural and social environment, by networking their neural networks with the networks of nature and with social networks. This networking is operated by the act of communication. (Kindle location 206)

As our environment has shifted to one characterized by “horizontal networks of multimodal communication” (Castells, 2013, Kindle location 312), these have facilitated new flows of meaning largely beyond political or commercial control, thus supporting the rise of autonomous, self-organizing, networked movements, and spreading interest in and sympathy for those movements. The effects of new kinds of networking, and the real-world collective actions they facilitate, may be far-reaching: “the transformation of communication affects everything in human life, and maybe (just maybe) induces changes in the wiring of our brains over time” (ibid., Kindle location 375). It is imperative for students to understand that their participation in our networked culture is more than digital; it can have many kinds of real-world consequences, most of them much less unsettling than popular revolutions.
It has been suggested that educators should instigate a “participative pedagogy, assisted by digital media and networked publics, which focuses on catalyzing, inspiring, nourishing, facilitating, and guiding literacies essential to individual and collective life in the 21st century” (Rheingold, 2013, Kindle location 5833). Referred to by others as participatory pedagogy, this might involve students engaging with real-world issues, supported by digital networks that facilitate interaction with wider communities and allow the dissemination of students’ learning to those communities:

    At the heart of the idea is to allow students to participate in knowledge-creating activities around shared objects and to share their efforts with the wider community for further knowledge building that is a legitimate part of civilization (Scardamalia & Bereiter, 2006). (Vartiainen, 2014, p. 109)

Students do not always have to change the wider world through their learning, but they have to know that they can do so. They need to understand how to leverage digital networks and digital literacies for this purpose, whether their aim is to foster cross-cultural appreciation by sharing media artefacts; or to raise public awareness by lending their voices to environmental campaigns; or to generate collective action on pressing issues in their local communities.

Redesign

It has been argued that literacy can be seen as a process of design, or redesign, connected to identity and agency (Kalantzis & Cope, 2012; Kress, 2010). The fourth focus area is related specifically to redesigning meaning, and circulating and responding to redesigned meanings within digital networks. It comprises one major literacy, which stands somewhat apart from other literacies.

Remix literacy

Remix, which involves reworking existing cultural artefacts to express new meanings, is a hallmark of digital culture (Gibson, 2005). Often taking the form of internet memes (that is, ideas or concepts that spread virally through digital networks), it is a chosen mode of self-expression for many young people (boyd, 2014; Lessig, 2007). More than this, it is becoming a common way for all of us to tailor meanings to our needs and wishes: “As [media] material spreads, it gets remade: either literally, through various forms of sampling and remixing, or figuratively, via its insertion into ongoing conversations and across various platforms” (Jenkins et al., 2013, Kindle location 631). As Hunter Walk, a former product development manager at Google, puts it: “everybody is in this duality of being a creator and a consumer. So you’re essentially riffing on everybody else’s work” (cited in Thompson, 2013, Kindle location 1564).

Remix involves responding critically and creatively to others, with each response serving as a base for further critiques and creativity; this is a culture where, much as Cope and Kalantzis (2013) were seen to say about knowledge, “[r]eading and writing have never been so intrinsically social” (Belshaw, 2014, unpag.; italics in original). Remix, Belshaw (2014) goes on to say, is “the heart of digital literacies” and, consequently, “[d]igital literacies can be developed by remixing other people’s work”.

Remix literacy is, indeed, a macroliteracy which draws on all of the other literacies discussed in this chapter. It typically involves the critical interpretation of meaning (information literacy), the reworking of texts and artefacts (print and multimodal literacy), and the dissemination of the remixed material (network literacy). At its most technologically sophisticated, it may involve the tailoring of digital templates and channels (code literacy).
At its most effective, it is likely to be located at the sharp end of participatory literacy (Dudeney et al., 2013), as it seeks to persuade, reveal, advocate and organize, to borrow a handful of verbs from Rheingold’s (2013) participatory media list. And it intersects squarely with mobile literacy, since mobile devices are playing an ever greater role in the capturing, reworking and sharing of media, most notably via the mobile social media platforms where remixes are often expressly designed to be showcased and circulated.

Given the cultural relevance of remix, and especially its resonance among young people, it is worthy of a place in any language teaching repertoire, where it can form part of a strategic approach to engaging students in contemporary cultural discussions while simultaneously developing their language and digital literacies.

Towards a mobile, critical, digitally literate future

“We’re in a period where the cutting edge of change has moved from the technology to the literacies made possible by the technology”, suggests Rheingold (2012, Kindle location 132). This is good news for language teachers, whose stock-in-trade is language and literacy education. But the advent of the digital era means that traditional language and literacy skills must be complemented by digital literacies. As the initial desktop and laptop phase of the digital era morphs into a second, mobile phase, our digital literacies will be largely mediated by mobile smart devices, so that in due course mobile literacy is likely to become the principal macroliteracy of our time. As we have seen, all other digital literacies feed into mobile literacy in one way or another, with some of them taking on added importance and some taking on new inflections as they ‘go mobile’.

Of course, as mobile devices and mobile literacy become more interwoven into our daily lives, it will be important for us – and our students – to build on the critical elements inherent in all digital literacies and ask probing questions about the social, cultural, political, economic, commercial, legal, ethical, health and environmental impacts of our new devices and our new lifestyles; a critical mobile literacy, in other words, will be vital (Pegrum, 2014a, 2014b).

No matter how our devices and their usage patterns may be changing, digital literacies are here to stay. Effective language use is dependent on literacy skills, and that increasingly means digital literacy skills. Only by integrating digital literacies with language, culture and more traditional literacies can language teachers ensure that today’s learners are fully prepared as maximally effective communicators, ready for their lives in a globalized, digitized era.

References

Baron, N. S. (2008). Always on: Language in an online and mobile world. New York: Oxford University Press.
Belshaw, D. (2014). The essential elements of digital literacies. http://digitalliteraci.es/.
boyd, d. (2014). It’s complicated: The social lives of networked teens. New Haven, CT: Yale University Press. http://www.danah.org/books/ItsComplicated.pdf.
boyd, d. & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.
Bull, G. & Anstey, M. (2010). Evolving pedagogies: Reading and writing in a multimodal world. Carlton South, VIC: Curriculum Press.
Byram, M. (1997). Teaching and assessing intercultural communicative competence. Clevedon, Somerset: Multilingual Matters.

Carr, N. (2010). The shallows: What the internet is doing to our brains. New York: W. W. Norton.
Castells, M. (2012). Networks of outrage and hope: Social movements in the internet age. Cambridge: Polity.
Castells, M. (2013). Communication power (2nd ed.). Oxford: Oxford University Press.
Coiro, J. (2011). Predicting reading comprehension on the internet: Contributions of offline reading skills, online reading skills, and prior knowledge. Journal of Literacy Research, 43(4), 352–392.
Cope, B. & Kalantzis, M. (Eds.). (2000). Multiliteracies: Literacy learning and the design of social futures. London: Routledge.
Cope, B. & Kalantzis, M. (2013). Introduction: New media, new learning and new assessments. E-learning and Digital Media, 10(4), 328–331. http://www.wwwords.co.uk/pdf/freetoview.asp?j=elea&vol=10&issue=4&year=2013&article=1_Introduction_ELEA_10_4_web.
Daniel, B. (2014). Big data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology [Early view], 1–17.
de Freitas, S., Gibson, D., Du Plessis, C., Halloran, P., Williams, E., Ambrose, M., Dunwell, I. & Arnab, S. (2014). Foundations of dynamic learning analytics: Using university student data to increase retention. British Journal of Educational Technology [Early view], 1–14.
Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital literacies. Harlow, Essex: Pearson.
The Economist. (2014, Apr. 26). A is for algorithm. The Economist. http://www.economist.com/news/international/21601250-global-push-more-computer-science-classrooms-starting-bear-fruit.
ELI [EDUCAUSE Learning Initiative]. (2013). 7 things you should know about . . . makerspaces. EDUCAUSE. http://net.educause.edu/ir/library/pdf/eli7095.pdf.
Feinleib, D. (2013). Big data demystified: How big data is changing the way we live, love and learn. San Francisco, CA: The Big Data Group.
Finley, K. (2014, Dec. 8). Obama becomes first president to write a computer program. Wired. http://www.wired.com/2014/12/obama-becomes-first-president-write-computer-program/.
Frawley, J. K. & Dyson, L. E. (2014). Mobile literacies: Navigating multimodality, informal learning contexts and personal ICT ecologies. In M. Kalz, Y. Bayyurt & M. Specht (Eds.), Mobile as mainstream: Towards future challenges in mobile learning. 13th World Conference on Mobile and Contextual Learning, mLearn 2014, Istanbul, Turkey, November 3–5, 2014, Proceedings (pp. 377–390). http://www.academia.edu/9001887/.
Gee, J. P. (2013). The anti-education era: Creating smarter students through digital learning. New York: Palgrave Macmillan.
Gibson, W. (2005, Jul.). God’s little toys. Wired, 13(7). http://archive.wired.com/wired/archive/13.07/gibson.html.
Gov.uk. (2014, Dec. 8). Maths and science must be the top priority in our schools, says Prime Minister [Press release]. Gov.uk. https://www.gov.uk/government/news/maths-and-science-must-be-the-top-priority-in-our-schools-says-prime-minister.
Grant, R. (2013, Dec. 4). How data is driving the biggest revolution in education since the Middle Ages. VentureBeat. http://venturebeat.com/2013/12/04/how-data-is-driving-the-biggest-revolution-in-education-since-the-middle-ages/.

Greenfield, S. (2014). Mind change: How digital technologies are leaving their mark on our brains. London: Rider.
Havens, J. C. (2014). Hacking h(app)iness: Why your personal data counts and how tracking it can change the world. New York: Jeremy P. Tarcher.
Haythornthwaite, C. (2013). Emergent practices for literacy, e-learners, and the digital university. In R. Goodfellow & M. R. Lea (Eds.), Literacy in the digital university: Critical perspectives on learning, scholarship, and technology (pp. 56–66). London: Routledge.
Jabr, F. (2013, Apr. 11). The reading brain in the digital age: The science of paper versus screens. Scientific American. http://www.scientificamerican.com/article.cfm?id=reading-paper-screens.
Jenkins, H., Ford, S. & Green, J. (2013). Spreadable media: Creating value and meaning in a networked culture. New York: New York University Press.
Johnson, L., Adams Becker, S., Estrada, V. & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition. Austin, TX: The New Media Consortium. http://www.nmc.org/pdf/2014-nmc-horizon-report-he-EN.pdf.
Kalantzis, M. & Cope, B. (2012). Literacies. Port Melbourne, VIC: Cambridge University Press.
Kress, G. (2003). Literacy in the new media age. London: Routledge.
Kress, G. (2010). Multimodality: A social semiotic approach to contemporary communication. London: Routledge.
Laurillard, D. (2012). Teaching as a design science: Building pedagogical patterns for learning and technology. New York: Routledge.
Lessig, L. (2007, Mar.). Laws that choke creativity. TED. http://www.ted.com/talks/larry_lessig_says_the_law_is_strangling_creativity.
Lim, Y. L. (2014, Nov. 25). Equip students with skills to create future tech: PM. The Straits Times. http://goo.gl/KppxTu.
Mayer-Schönberger, V. & Cukier, K. (2013). Big data: A revolution that will transform how we live, work and think. London: John Murray.
Miller, S. M. & McVee, M. B. (2012). Multimodal composing: The essential 21st century literacy. In S. M. Miller & M. B. McVee (Eds.), Multimodal composing in classrooms: Learning and teaching for the digital world. New York: Routledge.
Mishra, P. & Kereluik, K. (2011). What is 21st century learning? A review and synthesis. Presented at SITE 2011, Nashville, USA, Mar. 7–11. http://punya.educ.msu.edu/presentations/site2011/SITE_2011_21st_Century.pdf.
Olson, P. (2012, Sep. 6). Why Estonia has started teaching its first-graders to code. Forbes. http://www.forbes.com/sites/parmyolson/2012/09/06/why-estonia-has-started-teaching-its-first-graders-to-code/.
Orsini, L. (2015, Jan. 12). Microsoft, Google aim to break the language barrier. ReadWrite. http://readwrite.com/2015/01/12/google-microsoft-language-translation.
P21 [Partnership for 21st Century Skills]. (n.d.). Framework for 21st century learning. http://www.p21.org/about-us/p21-framework.
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. London: Viking.
Pegrum, M. (2011). Modified, multiplied and (re-)mixed: Social media and digital literacies. In M. Thomas (Ed.), Digital education: Opportunities for social collaboration (pp. 9–35). New York: Palgrave Macmillan.
Pegrum, M. (2014a). Mobile learning: Languages, literacies and cultures. Basingstoke, Hampshire: Palgrave Macmillan.

22  Mark Pegrum Pegrum, M. (2014b). Mobile literacy: Navigating new learning opportunities and obligations. In W. C.-C. Chu, H.-C. Chao & L. C. Jain (Eds.), Proceedings of [the] International Computer Symposium (ICS) 2014, vol. 1 (pp. 660–667). Taichung: Dept of Computer Science, Tunghai University. Rainie, L. & Wellman, B. (2012). Networked: The new social operating system. Cambridge, MA: MIT Press. Rheingold, H. (2012). Net smart: How to thrive online. Cambridge, MA: MIT Press. Rheingold, H. (2013). Participative pedagogy for a literacy of literacies. In A. Delwiche & J. Jacobs Henderson (Eds.), The participatory cultures handbook. New York: Routledge. Selwyn, N. (2014). Digital technology and the contemporary university: Degrees of digitization. London: Routledge/SRHE. Siemens, G. & Tittenberger, P. (2009). Handbook of emerging technologies for learning. http://elearnspace.org/Articles/HETL.pdf. Takayoshi, P. & Selfe, C. L. (2007). Thinking about multimodality. In C. L. Selfe (Ed.), Multimodal composition: Resources for teachers (pp. 1–12). Cresskill, NJ: Hampton Press. Thompson, C. (2013). Smarter than you think: How technology is changing our minds for the better. London: William Collins. U.S. Dept of Education. (2012). Enhancing teaching and learning through educational data mining and learning analytics: An issue brief. Washington, DC: U.S. Dept of Education. http://tech.ed.gov/wp-content/uploads/2014/03/edm-la-brief.pdf. Vartiainen, H. (2014). Designing participatory learning. In P. Kommers, T. Issa, T. Issa, D.-F. Chang & P. Isaías (Eds.), Proceedings of the International Conferences on Educational Technologies (ICEduTech 2014) and Sustainability, Technology and Education 2014 (STE 2014), New Taipei City, Taiwan, 10–12 December, 2014 (pp. 105–112). IADIS Press. Wallach, H. (2014, Dec. 19). Big data, machine learning, and the social sciences: Fairness, accountability, and transparency. M. 
https://medium.com/@hannawal lach/big-data-machine-learning-and-the-social-sciences-927a8e20460d. Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. New York: Basic Books. Woodill, G. & Udell, C. (2015). Enterprise mobile learning: A primer. In C. Udell & G. Woodill (Eds.), Mastering mobile learning: tips and techniques for success. Hoboken, NJ: Wiley. Zhao, Y. (2012). World class learners: Educating creative and entrepreneurial students. Thousand Oaks, CA: Corwin/NAESP.

2  Promoting intercultural competence in culture and language studies
Outcomes of an international collaborative project

Margarita Vinagre
Universidad Autónoma de Madrid, Spain

Introduction

The ability to integrate new technologies in the foreign language classroom has become an essential part of learning in the 21st century. One of the basic activities that facilitate this process is undoubtedly telecollaboration. This term refers to the application of online communication tools to bring together classes of language learners in geographically distant locations with the aim of developing their foreign language skills and intercultural competence through collaborative tasks and project work. In these projects, “internationally-dispersed learners in parallel language classes use Internet communication tools such as email or synchronous chat in order to support social interaction, dialogue, debate, and intercultural exchange with expert speakers of the respective language under study” (Belz, 2004). Research in telecollaboration has shown its potential to stimulate participants’ intercultural competence (Liaw, 2006; Vogt, 2006; Vinagre, 2010), and authors such as Schulz, Lalande, Dykstra, Zimmer and James (2005) have suggested that it is through intercultural competence that individuals manage to function effectively and successfully within their own culture and within other cultures. However, despite its relevance, the European Commission’s Report on Education and Training (2010) identifies intercultural awareness as one of the least assessed key competences for lifelong learning.

Intercultural communicative competence

Definitions of intercultural communicative competence are diverse (Wiseman, 2001), and its assessment is difficult and complex (Byrnes, 2008). In this study we have followed Byram’s (1997) model, which defines it as “the ability to relate to and communicate with people who speak a different language and live in a different cultural context” (p. 1). This author suggests that successful communication depends on the ability to see and manage relationships with others. Therefore, becoming an intercultural communicator does not only entail acquiring knowledge about the foreign culture, but also requires developing the skills and attitudes that are necessary to understand and relate to people from other countries. These skills, attitudes and knowledge shape Byram’s model of Intercultural Communicative Competence (ICC), which consists of five interdependent principles: (a) attitudes, (b) knowledge, (c) skills of discovery and interaction and (d) skills of interpreting and relating. The interplay of these four principles should lead to the fifth, namely (e) critical cultural awareness. This final component underlies all the others, since it focuses on comparison and evaluation, key abilities for any learner who is to become a truly intercultural communicator. The first principle, attitudes (of curiosity and openness), refers to the ability to relativize oneself and value others. Knowledge (of social groups and their products and practices in one’s own and in one’s interlocutor’s country) refers to knowledge of the rules that moderate individual and social interaction (p. 58). Skills of discovery and interaction are defined as the “ability to acquire new knowledge of a culture and cultural practices and the ability to operate knowledge, attitudes and skills under the constraints of real-time communication and interaction” (p. 61). Skills of interpreting and relating describe a speaker’s ability “to interpret a document or event from another culture, to explain it and relate it to documents or events from one’s own” (p. 61). Finally, the last component in Byram’s model is critical cultural awareness, which refers to the capacity to critically evaluate perspectives and products in one’s own and others’ cultures (p. 101). Within this model, specific objectives and guidelines are described to assess the intercultural experience.

Developing intercultural competence through telecollaboration

Many studies have demonstrated the potential of telecollaborative exchanges for developing intercultural competence. Authors such as Ware and Kramsch (2005) analysed how to improve intercultural competence through the analysis of critical incidents. Others, such as Liaw (2006), Vinagre (2010) and more recently Schenker (2012), mentioned in their studies that participants in telecollaborative exchanges discovered and reflected on their own behaviours and cultural beliefs, which led to an increase in their awareness and understanding of their partners’ culture. Along the same lines, Schuetze (2008) mentioned that participants in a telecollaborative exchange increased their motivation, which, in turn, improved their attitudes and fostered intercultural competence. Other studies have focused on the tools that have been developed and used in recent years to assess intercultural competence. Sinicrope, Norris and Watanabe (2007) offered a summary of these tools and suggested that a combination of instruments is the most effective way to assess this competence. Most authors agree with this statement and have recommended the use of portfolios, learning diaries, and interviews or questionnaires, together with samples of reflective writing (Godwin-Jones, 2013), for its assessment. However, although these seem adequate to evaluate the knowledge and skills components in Byram’s ICC model, the assessment of attitudes is far more complex and, as authors such as Vogt (2006) suggest, it is not possible to assess them in telecollaborative exchanges, even when a combination of instruments is used. Despite this difficulty, this author suggests that implementing these exchanges is very useful for teachers, since they can trace the development of these attitudes.

In this study, we were interested in discovering whether participants in a telecollaborative exchange developed their intercultural competence and, if so, what traces were found of Byram’s ICC components. Unlike in previous studies, in which email and e-forums were used as tools for the participants’ interaction, we used a wiki as a mediating tool. We set out to find answers to the following research questions:

RQ1) What traces of the different objectives of Byram’s (1997; 2000) model of intercultural communicative competence (ICC) were found in an online exchange among university students who collaborated in a wiki?

RQ2) Did the exchange have an effect on the students’ perceptions of the foreign culture?

Rationale

Project description

We organized a telecollaborative exchange between twenty undergraduate students of British Civilization and Culture at a Spanish university and ten undergraduate students of Spanish at a British university for a period of eight weeks. The participants’ profiles were very different: the students in the UK were mature students learning at a distance, whereas the Spanish students were first-year students learning in a traditional face-to-face setting. All participants were studying the foreign language at an advanced level (B2/C1) of the Common European Framework of Reference for Languages. Before the project started, all students attended an induction session in which the objectives, tasks and general guidelines of the project were explained. Participants were then organized into small groups of three, and a wiki page was created for each group with the aim of completing a series of writing tasks. On the wiki’s home page the teachers uploaded relevant information concerning the project, the tasks to complete and a timeline. We also included two links (one at the beginning and another at the end of the project) to the pre- and post-project questionnaires.

Tasks

The students were asked to carry out four tasks with their group members, writing in the foreign language. The tasks were negotiated and designed jointly by the teachers involved in the project and were based on O’Dowd and Ware’s (2009) classification. Each task involved information exchange, comparison and analysis, and was to be carried out collaboratively in the wiki. Task #1, “General introductions”, encouraged students to write a brief introduction about themselves in English or Spanish (or both), including where they lived, their hobbies, studies, reasons for studying a foreign language, etc. In Task #2, “Visits to your partner’s country”, students were asked to relate, in the foreign language, their experiences whilst visiting their partner’s country. In Task #3, “Comparisons between Spain and the UK’s tourist websites”, students were asked to look at two tourist websites corresponding to the UK and Spain and write their impressions of the main aspects of the countries that were highlighted by these websites. In Task #4, “Generalizations versus stereotypes”, students had to write a list of some of the stereotypes foreigners mention about their own culture, and stereotypes about their partner’s culture. Then they had to look at these together and rewrite them as generalizations. A final requirement of this task was to visit the workspaces of other students and, using the Discussion facility, leave comments on their stereotypes and generalizations.

Method

In order to find answers to the first research question, we analysed the content (final product) of the ten wiki pages created by each of the groups, together with the discussion pages (process and reflection) linked to them. As has already been mentioned, we adopted Byram’s (1997; 2000) objectives and guidelines for the assessment of the intercultural experience as parameters for data analysis. Wiki transcripts were tagged manually by the author and an independent researcher. When an example could be included in more than one category, both researchers decided jointly which category best described it. Examples of entries in Byram’s five categories can be seen in Table 2.1; excerpts originally in Spanish have been translated by the author. The names of all participants have been changed, and the labels indicate the initial of the participant’s name together with their wiki number.

In order to find answers to the second research question, we analysed the participants’ answers to the two online questionnaires. The pre-project questionnaire included 18 closed questions, organized on a five-point Likert scale, on the participants’ knowledge and use of intercultural skills in cultural experiences prior to the project. The post-project questionnaire included the same closed questions as the pre-project questionnaire, but it also included two open questions for feedback on the project.

Results and discussion

RQ1) What traces of the different objectives of Byram’s (1997; 2000) model of intercultural communicative competence (ICC) were found in an online exchange among university students who collaborated in a wiki?

After analysing the wiki pages we found traces of all the ICC objectives in all groups’ contributions, although we shall only provide some illustrative examples in each of the components.

Table 2.1  Categorization of participants’ excerpts

Objectives
a  Attitudes (of curiosity and openness)
b  Knowledge (of social groups and their products)
c  Skills of discovery and interaction
d  Skills of interpreting and relating
e  Critical cultural awareness

Examples
“Sorry if it is too long, but I love sharing my experiences with other people” (SW2)
“I’m interested to know what you think about the Scottish and whether you think they’re different from the English. The English think that they have a very dry and ironic sense of humour” (DW2)
“The first time I spent a week with a friend in Andalucía. We flew to Malaga and then we took a bus to Granada. I was very surprised to see a gypsy woman with a baby shouting and begging for money in the street and the houses built in caves” (RW8)
“As regards our task I think that today Spanish stereotypes are different. Thirty years ago English people used to think that Spain was a beautiful country but poor with a very repressive culture where many people did not care about the time. Many English people also thought that Spaniards liked bullfighting, flamenco and wine. However now the stereotypes have changed and we think they are people who know how to enjoy themselves – sometimes a bit too much – who have some economic problems but with a lot more political and social freedom. We are aware of the massive changes Spanish society has undergone in the last thirty years because a lot of English people visit Spain and have the opportunity to see for themselves” (JW1)
“In reality I think that tourists live in a parallel universe and don’t have a lot of contact with the inhabitants. For example, when we went from Salamanca to France by bus we got a puncture on the road very close to a small village that is not visited by tourists. It was very interesting because with our teachers’ help we managed to talk to the villagers and that was a lot more fascinating than the touristic villages which all sell the same souvenirs” (DW2)
“I think that we English have a good sense of humour although sometimes we find other cultures’ sense of humour difficult to understand. This happens because we fail to understand some cultural references for example” (GW6)

Objective 1: attitudes of curiosity and openness

According to Byram (1997, pp. 33–34), attitudes of curiosity and openness are preconditions and anticipated outcomes of intercultural competence, since the success of intercultural communication depends on establishing and maintaining good social relationships. In the content from the wiki pages we found many examples of this desire to establish reciprocal relationships with their partners in comments such as the following: “Well, that is all for now, if you want to know more about me just ask!” (SW2) and “if you have any doubt of Spanish language or Spanish culture, please do not hesitate to contact me” (MW3). Other comments reflect those attitudes of curiosity and openness mentioned by Byram (1997; 2000): “Why do you want to study at the University of Edinburgh? I’m curious” (DW2). “It is curious that, after reading what we have written about these stereotypes a few times, one begins to understand how shameful they are and how they can really hurt people from other countries” (JW1). An interest in discovering a different perspective on one’s own and others’ cultures is also reflected in the numerous and detailed questions participants asked their partners, which focused mostly on the analysis and comparison of specific aspects of both cultures. Other indicators of this interest can be observed in the following comments: “Hi, I’d like to know what you think about this topic. Here you have an example of stereotype and I’d love to know whether you think it’s acceptable or not” (PW2). “If you wanna add something or discuss my stereotypes please write, I would be glad to know your opinion about them :)” (LW9). The ability to relativize oneself and value others and the capacity to “suspend disbelief about other cultures and belief about one’s own” are also crucial attitudes that help minimize the negative effects that stereotypes and preconceived ideas may have on our perceptions of others.
In this respect, comments such as those by Faye show her eagerness to question fixed beliefs regarding her own and her partners’ cultures: I think that there are stereotypes in all cultures and that there are people who really think that Spanish people sleep a siesta every day, go for tapas and party all the time and that they are very religious and stuff like that . . . Usually those people have never visited the country or, if they have, they had their eyes closed. Clearly these traditions are a part of Spanish culture, however there are many ways of being Spanish; I’m sure that the Catalans think that their way of life is very different from those of the Madrilenians or Basque. It’s impossible to describe all Spanish people or all foreigners belonging to a specific culture with a few adjectives, that’s absurd. They’re caricatures that can make you laugh for a few minutes but shouldn’t be taken seriously. (FW7) This type of reflection is essential among participants in telecollaborative projects since effective learning depends largely on “strategic conflict avoidance and strategic construction of co-operative social interaction” (Vinagre, 2008, p. 1031).

Objective 2: knowledge of social groups and their products

Byram (1997) distinguishes between declarative and procedural knowledge “of social groups and their products and practices in one’s own and in one’s interlocutor’s country and of the general processes of societal and individual interaction” (p. 58). Declarative knowledge refers to facts and information relating to those social groups, whereas procedural knowledge refers to information about processes of interaction. We found examples of declarative knowledge in comments such as the one by Robert, in which he shows his knowledge of the historical memories of both cultures: A long time ago Franquist Spain despised the United Nations and this was made manifest in bitter humorous slogans such as the following: “they have ONU (one) but we have two!” Nowadays Spain values its protagonism in the European Union, United Nations etc. This is reflected on the Spanish web page where we can find a reference to the already mentioned section on the UNESCO’s index on the main headlines. England, a lot more sceptical in its islander isolation, ignores such acknowledgements, not because English places have been excluded – there are over twenty in the index – but because of pride. Is it that we see ourselves as self-sufficient and snub recommendations and appreciation from others? (RW9) Other traces of declarative knowledge can be observed in comments such as the one by Marta, in which she shows her knowledge of the causes of possible misunderstandings between both cultures: I have visited both (web) pages, the Spanish and the British one. I must say both have pretty good designs with plenty of pictures that call your attention at first sight. To what extent do these pages represent the countries? Well, obviously when a foreigner visits these pages you want to call his or her attention so that this person may think “Let’s go and visit this country, it seems interesting”.
However not everything is fun and beaches and holidays. (MW3) The students also showed their procedural knowledge in their ability to establish and maintain relationships with speakers from other cultures. Although the exchange was organized by the teacher, the participants established first contact and also maintained communication until the end of the exchange. Furthermore, some students included comments such as the following, which also reflect this aspect: “Finally, last year I went to Ireland due to a scholarship, to work there 3 months. I stayed in Killarney, a lovely small town in the South of Ireland, and Cork. I made very good friends there and I am still in contact with all of them” (SW2). “In the end I’ll go to Madrid on my way to Oviedo where my cousin from Australia will attend a conference at the university. If you’d like to have a drink, please use my wife’s email address” (CW1).

Objective 3: skills of interpreting and relating

Byram (1997) defines these skills as “the ability to interpret a document or event from another culture, to explain it and relate it to documents and events from one’s own” (p. 61). In this component students should be able to identify areas of dysfunction and misunderstanding in an interaction. This aspect becomes apparent in Diana’s comment: I’m looking forward to reading your impressions. However, I must let you know that I’ll be away on holidays from April 20th to May 7th and I won’t be able to comment on your thoughts. I’m really sorry, I’ll try to catch up when I get back. (DW2) Her concern and apology for not being able to read her partners’ comments and contribute to the discussion show that she is aware of the importance of answering messages promptly in order to avoid conflicts in communication. These skills of interpreting and relating also refer to the ability to mediate in conflicting interpretations. This aspect can be seen in comments such as the one by Marta: “Maybe you do not agree, in that case I would be happy to listen to your opinion, as it must show the reality more than mine” (MW3).

Objective 4: skills of discovery and interaction

Byram (1997) defines these skills as the “ability to acquire new knowledge of a culture and cultural practices and the ability to operate knowledge, attitudes and skills under the constraints of real time communication and interaction” (p. 61). Students included in their correspondence explicit references to shared meanings and values that are specific to the partner’s culture: There, in London I stayed for two weeks and two days of those weeks I took time to visit Oxford. I liked both cities, but London much more. Moreover, I liked London because of the cosmopolitan atmosphere of the city, there were plenty of different cultures: English, Indians, Americans . . . That really impressed me because I could feel more free and independent. In Madrid there are many cultures as well but not so well-integrated. On the contrary, London’s mixture of cultures are pretty well-integrated. (MW3) Another objective within this component refers to the ability to use the language in an appropriate manner in a variety of contexts. This aspect can be observed in comments such as Robert’s, in which he shows his knowledge of the communicative differences between both languages: Hi N and L. I’ve been thinking for a while about how to correct each other’s written texts. I have chosen as a method focusing on the sentences and expressions that seem to belong to the category “things that a native English speaker wouldn’t say” . . . My intention is not to criticise the way you express yourselves . . . but rather offer you some examples that perhaps will help you identify the subtle differences that many times make us, foreign language students, trip over. (RW9) Finally, the capacity to identify the relations between both cultures can be observed in comments such as the following: At the end of the day, Spanish people’s concerns and longings are very similar to those of people in other countries. People are more similar than different and usually those differences are superficial. It’s more interesting to find things that help us understand each other and laugh together at our peculiarities. (FW7)

Objective 5: critical cultural awareness

The last objective in Byram’s model of ICC is critical cultural awareness. This author defines it as the ability to evaluate products, perspectives and practices in one’s own culture and other cultures. More specifically, he highlights the ability to identify and interpret values in documents and events, to carry out evaluative analyses of documents and events, and to interact and mediate in intercultural exchanges whilst being aware of the differences between both belief systems. We found traces of this ability in all participants’ contributions, but especially in those by two students of the Spanish group, one from Britain and the other from Australia, both bilingual speakers living in Spain: Having experience of both cultures, I understand both points of view: When Spanish people think about England the first things that come to their mind are rainy weather, tea at 5 o’clock in the afternoon, people that won’t try to understand foreigners if they don’t have a good accent, bull fighting . . . When some English people come to Spain they celebrate parties or weddings by drinking all night long, and they think this is what Spanish people do all the time. Many Spanish people have “siesta”, specially in the south, where it’s hotter. This allows people to work till late in the evening. In fact Spain is one of the hardest working countries. (KW4)

RQ2) Did the exchange have an effect on the students’ perceptions of the foreign culture?

In order to answer the second research question, we carried out a paired t-test to compare the students’ answers to the pre- and post-project questionnaires. We did not find significant changes regarding their opinions and ideas about the foreign culture and country. In this respect, in response to the open-ended question #20 (“Think of your opinions about the people and culture of your partner’s country before taking part in this project. Has this exchange project changed any of those ideas you had about the people and culture of your partner’s country, or not? Please explain.”), most students mentioned that they had already visited the foreign country many times, that they knew the culture well and that, for this reason, they had not changed their opinions. However, we did find a significant change in participants’ perceptions regarding their cultural knowledge. Bearing in mind that students expressed strong agreement with 1 and strong disagreement with 4, students rated their knowledge of the foreign culture and of intercultural processes significantly lower after the exchange (M = 2.73, SD = 0.21) than before (M = 2.90, SD = 0.21), t(17) = 2.56, p < .03. This mean change indicates that students perceived that their knowledge and cultural skills had increased over the course of the exchange.
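The paired comparison reported above follows the standard paired t-test, t = d̄ / (s_d / √n) with df = n − 1. A minimal sketch of that computation; the 18 paired scores below are invented for illustration and are not the study’s data:

```python
# Paired t-test on pre- vs post-project self-ratings (1 = strong agreement,
# 4 = strong disagreement). These paired scores are hypothetical, NOT the
# study's data; they only illustrate the shape of the analysis.
from math import sqrt
from statistics import mean, stdev

pre  = [3.0, 2.8, 3.1, 2.9, 3.0, 2.7, 3.2, 2.9, 2.8, 3.0,
        2.9, 3.1, 2.8, 3.0, 2.9, 2.7, 3.1, 2.9]
post = [2.7, 2.6, 2.9, 2.7, 2.8, 2.5, 2.9, 2.7, 2.6, 2.8,
        2.7, 2.8, 2.6, 2.8, 2.7, 2.5, 2.9, 2.7]

diffs = [a - b for a, b in zip(pre, post)]   # positive = lower post-test score
n = len(diffs)                               # 18 paired responses, so df = 17
t = mean(diffs) / (stdev(diffs) / sqrt(n))   # paired t statistic

print(f"t({n - 1}) = {t:.2f}")
```

On this scale a lower post-project mean signals stronger agreement, i.e. higher self-rated knowledge, which is why the reported drop from 2.90 to 2.73 is interpreted as an increase in perceived cultural knowledge.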

Limitations of this study

The small number of participants in this project and the limited amount of data analysed mean that the findings cannot be generalized, but they offer directions for future research. All the participants in this study had already been to their partners’ country at least once, and most of them had spent a considerable amount of time there, which meant that they had experienced the foreign culture before the exchange. We believe that this prior experience may have had an impact on the findings, facilitating the presence of traces of all the objectives of Byram’s ICC (1997; 2000). It is likely that, without this previously acquired knowledge and these attitudes towards the foreign culture, the presence of traces might have been smaller, especially with regard to those aspects concerning the ability to change perspective. In this respect, authors such as Liaw (2006) and Vinagre (2010) have mentioned the difficulties that participants in telecollaborative exchanges face when having to decentre and show empathy for others. In this study we believe that this aspect was facilitated by the positive (in some cases enthusiastic) view that participants had of the foreign culture. Another limitation of this study concerns the subjective nature of the data gathered through the self-evaluation questionnaires. Although it is essential in these exchanges to consider the students’ perceptions and assessment of their own learning, it is possible that an alternative study – one in which self-evaluation questionnaires were used together with other assessment tools (essays, interviews or learning diaries) – might show different results.

Conclusions

The main objective of this study was to foster the development of intercultural competence in a group of Spanish and English students who participated in a telecollaborative project in a wiki. In order to assess this development, we searched for traces of the objectives mentioned by Byram (1997; 2000) for the assessment of ICC. After carrying out a qualitative analysis of the content from the wiki pages and discussion comments, we found evidence of all the objectives suggested by this author. The quantitative and qualitative analysis of the participants’ responses to the self-evaluation questionnaires showed that students perceived an improvement in their knowledge of the foreign culture and intercultural communication. However, according to the students, the exchange did not have any effect on the opinions they already held regarding the foreign culture. This could be due to the fact that students had already visited their partners’ countries and had already experienced the foreign culture before the exchange. Another plausible explanation could be the superficial way in which students discussed the most controversial cultural topics. This superficiality may have been caused by the participants’ fear of upsetting or offending their partners if they expressed opinions that opposed or conflicted with those of their partners. Their desire to defend their own culture could also explain this lack of critical reflection on their own and their partners’ cultures. In order to improve this aspect in future exchanges, we suggest integrating them into the foreign language classroom so that teachers can clarify misunderstandings, foster the formulation of hypotheses that try to explain the cultural differences students have observed, and encourage reflection and informed discussion of the topics addressed during the exchange.

Acknowledgements

I would like to thank the book editors and the three anonymous reviewers for their insightful comments. This chapter has benefitted greatly from their feedback and suggestions.

References

Belz, J. (2004). Telecollaborative language study: A personal overview of praxis and research. Selected papers from the 2004 NFLRC symposium. Retrieved 24 July 2014 from http://nflrc.hawaii.edu/NetWorks/NW44/belz.htm
Byram, M. (1997). Teaching and assessing intercultural communicative competence. Clevedon: Multilingual Matters.
Byram, M. (2000). Assessing intercultural competence in language teaching. Sprogforum, 18(6), 8–13.
Byrnes, H. (2008). Articulating a foreign language sequence through content: A look at the culture standards. Language Teaching, 41(1), 103–118.
European Commission (2010). New skills for new jobs: Action now. Retrieved 24 July 2014 from http://ec.europa.eu/social/main.jsp?catId=568&langId=en&eventsId=232&furtherEvents=yes
Godwin-Jones, R. (2013). Integrating intercultural competence into language learning through technology. Language Learning & Technology, 17(2), 1–11.
Liaw, M. (2006). E-learning and the development of intercultural competence. Language Learning & Technology, 10(3), 49–64.
O’Dowd, R. & Ware, P. (2009). Critical issues in telecollaborative task design. Computer Assisted Language Learning, 22(2), 173–188.
Schenker, T. (2012). Intercultural competence and cultural learning through telecollaboration. CALICO Journal, 29(3), 449–470.
Schuetze, U. (2008). Exchanging second language messages online: Developing an intercultural communicative competence? Foreign Language Annals, 41(4), 660–673.

34  Margarita Vinagre

Schulz, R., Lalande, J., Dykstra, P., Zimmer, H. & James, C. (2005). In pursuit of cultural competence in the German language classroom. Teaching German, 38(2), 172–181.
Sinicrope, C., Norris, J. & Watanabe, Y. (2007). Understanding and assessing intercultural competence: A summary of theory, research, and practice. Second Language Studies, 26(1), 1–58.
Vinagre, M. (2008). Politeness strategies in collaborative e-mail exchanges. Computers & Education, 50(3), 1022–1036.
Vinagre, M. (2010). Intercultural learning in asynchronous telecollaborative exchanges: A case study. Eurocall Review, 17. Retrieved 24 July 2014 from http://www.eurocall-languages.org/review/17/index.html#vinagre
Vogt, K. (2006). Can you measure attitudinal factors in intercultural communication? Tracing the development of attitudes in e-mail projects. ReCALL, 18(2), 153–173.
Ware, P. & Kramsch, C. (2005). Toward an intercultural stance: Teaching German and English through telecollaboration. The Modern Language Journal, 89(2), 190–205.
Wiseman, R. L. (2001). Intercultural communication competence. In W. Gudykunst & B. Mody (Eds.), Handbook of international and intercultural communication (pp. 207–224). Thousand Oaks, CA: Sage Publications.

3 Return on investment
The future of evidence-based research on ICT-enhanced business English

Antonio J. Jiménez-Muñoz
Universidad de Oviedo, Spain

The spread of ICT and EMI

The prevalence of information and communication technology (ICT) in modern life is undeniable. The advancement of technology in the last thirty years has been accompanied by a parallel increase in the use of digital devices and methodologies in every aspect of our lives. In the last decade, more and more online systems track user actions, thus offering relevant data that businesses exploit to make informed decisions, gauge success and design action. However, the introduction of ICT into education has not run parallel. There has been a proliferation of research on ICT usage and its impact on learning, but ICT-enhanced learning has been constantly criticized as a low-yield investment. Technology in education has often been chided as “running just to keep in place” with social expectations (Boody, 2001, p. 5), and said to be of less use than anticipated and a general “waste of money” (Apple, 2004, p. 514). Figures seem to back up those views. In the UK alone, allocations towards ICT provisions, including those for language learning, have exceeded $9.5 billion in the last decade (BESA, 2014, p. 6), but with disappointing results, particularly in language skills, where ICT has reportedly had a “significant negative effect” (Burge et al., 2013, p. 158). However, pupils are exposed to ICT for around half of teaching-learning time, which is significantly longer than in higher education (HE) language learning (BESA, 2014, pp. 40–41). These findings are consistent with longitudinal studies in Europe (Korte & Husing, 2006; European Commission, 2013) and the USA (Office of Educational Technology, 2004). However, in most theoretical literature the view is that ICT

has the potential to promote higher-order thinking skills [. . .] It has the potential to a) engage students in authentic learning contexts; b) offer for students a rich, effective and efficient learning environment which improves their performance and learning; and c) impact student achievement positively. (Ghamrawi, 2011, p. 17)

For many, “much of what has been said revolves around what ICTs supposedly can do, not on evidence of what they actually do” (Shields, 2013). On the one hand, higher budgets for ICT do not ensure a more widespread use of technology, let

alone better learning of something as complex as a specialized foreign language. If anything, some studies on the use of ICT in specialized domains have shown little to be gained from the instant use of ICT in classrooms (Balanskat, Blamire, & Kefala, 2006; Cox & Marshall, 2007) in comparison with more traditional forms of instruction (Hattie, Biggs, & Purdie, 1996; Marzano, 1998). However, there is an opposing view stating that key language learner skills such as technical writing and listening improve with the use of technology (Goldberg et al., 2003), as do reading comprehension and general cognitive retention in English-for-Specific-Purposes (ESP) contexts (Wong, Quek, Divaharan, Liu, Peer, & Williams, 2006; Ghamrawi, 2011). Some more evidence-based studies have found ICT-enhanced learning to improve general academic performance (Fuchs & Woessmann, 2004; Shaikh, 2009), but very little research linking ICT to ESP skills has been offered. In ESP tertiary education there is comparatively less research based on evidence (Youssef & Dahmani, 2008; Youssef, Dahmani, & Omrani, 2010; Venkatesh, Croteau, & Rabah, 2014), and particularly fewer all-encompassing horizontal studies (Wastiau et al., 2013), than in secondary or primary education, but it is precisely in HE where the return on the investment in ICT provisions has been most controversial. There is a similar divide between the grand expectations of theory (Fallows & Bhanot, 2005; Kruss, 2006) and results, prompting severe criticism of the use of technology in HE (Justel et al., 2004; Selwyn, 2007). The widespread adoption of English as the Medium of Instruction (EMI) in universities around the world has run a parallel course. Predictions that English would become the language of HE (Coleman, 2006) have quickly come true, particularly for inherently global qualifications such as Business-related degrees.
Some countries such as Germany, Sweden, the Netherlands, Finland, the United Arab Emirates, Korea, mainland China and Hong Kong (Wächter & Maiworm, 2008) have a strong EMI tradition, while other nations such as Spain, France, Turkey or Poland have moved to EMI recently due to wider and tougher competition for their Business-related graduates within a globalized economy (Doiz, Lasagabaster, & Sierra, 2011; Smit & Dafouz, 2012). Just like ICT, ESP immersion provisions require considerable effort from institutions and non-native participants in terms of training, budget and learning curves. ESP instructors have often complained about the lack of training to solve “language-related issues” (Airey, 2013, p. 64), the uncertainty over which standard of English to adopt (Jenkins, Cogo, & Dewey, 2011), and the need to water down content to make it comprehensible to students (Costa & Coleman, 2010). Within ESP contexts, code-switching between native and foreign language requires effort from both lecturer and learner. Students in Business-related degrees need much more than mere exposure to specialized language: they need to know about cultural systems and factors, and require the ability to adapt to rapid language change. The reliance on specialized word lists has been pervasive, disregarding at times the difference between English for Academic Purposes (EAP) and ESP, on the one hand, and between lexical frequency and pedagogy, on the other (Simpson-Vlach & Ellis, 2010). The impact of such outcomes on employability is beyond calculation. Although English is central to their future success, most students focus on content subjects and

devote few hours to ESP, and they do not achieve an advanced level of English before pursuing employment. Many new graduates in Business-related degrees cannot use English in face-to-face situations, and job interviews “reveal that their actual business-English competence is far from adequate, and many entrepreneurs have to devote time and money in order to organize in-company English courses” (Fortanet-Gomez & Räisänen, 2008, p. 154). As a consequence, undergraduates aspiring to a successful career must work on their language skills independently of the language provision in their programmes, for which they generally use ICT (Kučírková, Kučera, & Vostrá Vydrová, 2012; Macho-Barés & Llurda, 2013). At the same time, the widespread inclusion of multimedia in EMI/ESP lessons to provide realia and a native model, together with the rise of blended instructional modes, has made ICT crucial to the success of instruction in those degrees where English is a specialized domain.

Determining factors of evidence-based research on ICT-enhanced learning

Both EMI and ICT-enhanced teaching and learning have been integrated into tertiary programmes, often at a high human or monetary cost, with a view to providing higher employability for new graduates. However, the “added value” of EMI to teach content and language as “two for the price of one” (Bonnet, 2012, p. 66) and the instrumentality of ICT towards this goal must be supported by evidence to justify investment costs and to identify those cost- and time-effective practices yielding better results for ESP graduates. Theoretical expectations and evidence-based research carried out in the last decade seem to be at odds, exemplifying the gap between the potential advantages of ICT and its actual usage. It is often neglected that design and implementation come from the ESP instructor, who needs to integrate such technology with no distinctions between ICT-mediated and other mediated means of teaching and learning (Chan et al., 2006), and for whom further training is lacking (BESA, 2014, pp. 43–45). The impact of ICT on specialized language learning may remain inconclusive unless the ICT proficiencies of teachers and learners are analyzed and aggregated as determining factors along with pedagogical design. An inescapable issue in the return on investment in ICT is the impact on student achievement. Much academic research has tried to analyze learning outcomes, but two main challenges remain: the definition of student performance (in terms of grades and skills, and whether these can be transferred to other contexts) and the difficulty of isolating the impact of ICT from its environment of application. These are particularly complex for ESP degrees, since the lack of cohesion between outcomes, skills and language-specific issues at universities around the globe complicates comparison.
In addition, English level, depth of sophistication and degree of technicality vary greatly among cohorts in degrees with a distinct sub-set of ESP lexicon and skills, such as Economics, Business Administration, Finance or Accounting. Even at a given faculty, the differing approaches and expectations of general Economics, Applied Economics, Econometrics, Marketing,

Business, Finance or other realms of Economics would create problems for horizontal research across these areas. However, it is perhaps in this context where attention to pedagogical changes is most needed. HE is going through a period of renovation due to the introduction of methodologies, often through ICT, which deviate from the traditional lecture as the main medium of content delivery, as well as the implicit internationalization through EMI. These factors have brought about significant changes in pedagogical design: lecturers are becoming learning facilitators who explain core content in class, while students play a remarkably more active role in the construction of their ESP skills through interaction with rich, ICT-mediated materials outside the classroom (Garrison & Vaughan, 2011; Picciano et al., 2013). This rise of blended modes requires ESP lecturers to support this new learning perspective while not deviating from the degree-specific curriculum. These factors have resulted in greater attention to, and understanding of, the importance of ICT in educational contexts. In particular, several studies have pointed out that ICT has become an essential tool for HE, as the use of ICT as a learning tool on the part of teachers and lecturers has improved over time (Moens, Broerse, Gast, & Bunders, 2010; Peeraer & Van Petegem, 2012). As best practices in ICT-enhanced specialized language learning are identified and disseminated, the divide between the theoretical potential of ICT in education and professional practice seems to narrow. However, best practices also show that two instructors using in-class Twitter to enhance class participation in an Economics bilingual lesson (Rodriguez, 2010; López-Zapico & Tascón-Fernández, 2013) may do so in different successful ways, even with strikingly similar cohorts.
Hence, the common factor of using a social network shows scant relevant detail unless the piece of research observes pedagogical variables and links those to academic results. Otherwise, evidence-based research will always risk being conjectural or incidental. Consequently, influential institutional reports (Scheuermann & Pedró, 2009) have continued to show a traditional distrust of technology’s capacity to make an impact on education, stressing that the impact of technology on student performance is inconclusive or unmeasurable. However, the difficulties of ICT ventures in educational contexts where the introduction of ICT is poorly resourced, implemented without preparation or unaccompanied by teacher and learner proficiency in the use of technology have been extensively documented (Pittard, Bannister, & Dunn, 2003; Underwood et al., 2005; Youssef & Dahmani, 2008; Livingstone, 2012). These same studies also hint at positive effects in contexts where these issues have been addressed beforehand. Aspects such as access to ICT, quality of connections and equipment, teacher training, ICT proficiency on the part of teachers and students, their expectations and previous experiences, intrinsic and extrinsic motivation, and other general background information have not been addressed sufficiently, despite the fact that they have been extensively analyzed in the literature, particularly in studies centred upon developing or emerging countries (Tiene, 2004; Yalin et al., 2007; Richardson, 2011; Peeraer & Van Petegem, 2012). These particulars are difficult and costly to obtain, and are often unreliable or subjective either on

the part of the informant or the evaluating instructor, whose opinion is often institution-mediated (Perrotta, 2013). A suggested solution to language description and evaluation comes directly from the adoption of a qualitative framework such as the Common European Framework of Reference for Languages: Learning, Teaching and Assessment (CEFR) in combination with quantification methods (Jimenez-Muñoz, 2014). This targets the issue of what to measure from a linguistic point of view, but it needs to be complemented with what to measure in ICT-enhanced learning. One of the difficulties is that student performance by ESP students is evidenced at a given moment but is differentiated by the particulars of the skill in use (whether productive or receptive) as well as by the time of impact of such technology on student performance (whether ICT is used as a medium of instruction or as a medium of production) in the particular specialized task. To measure the impact of ICT on a productive language skill, such as speaking, students need to be recorded to allow a linguistics-focused analysis as differentiated from task completion and grade. This raises questions about synchronous and asynchronous learning which, as shown below, escape a two-dimensional research model. Some models have been outlined to cater for the impact of ICT on student performance, from landmark research designs (Wagner et al., 2005) to policymaking tools (Plomp et al., 2009) and heavily data-based research designs (Song & Kang, 2012). Many research groups, such as the World Bank’s Working Group on ICT Statistics in Education (WISE) or the Institute for Statistics (UIS) at the United Nations Educational, Scientific and Cultural Organization (UNESCO), are trying to gather more data on ICT use, but their efforts in using a number of standardized indicators (UNESCO Institute for Statistics, 2009) have avoided instructional aspects.
Typically, these target aspects of investment such as “learners-to-computer ratio”, “proportion of learners who have access to the Internet at school” and “proportion of ICT-qualified teachers” (Trucano, 2012, p. 105), but do not offer any pedagogical detail about the impact of a given technology on learning. Disregarding pedagogy in the analysis of the impact of ICT complicates granularity and refinement and makes it harder to draw relevant conclusions. Most studies are inconclusive or difficult to transfer because they centre upon the technological side or the possibilities of a specific mobile device such as tablets (Beauchamp & Hillier, 2014), focusing on student-machine interaction without coherent integration with the curriculum and learner skills. A description of how technology is used to learn within a specialized domain is often absent, and further considerations about the usefulness of such specific technology jeopardize transferability to other contexts. Although they “hold great potential for increased efficiency in ICT and for improving their educational outputs and outcomes” (Aristovnik, 2012, p. 150), most non-parametric studies deriving from both descriptive and inferential statistics find that there is still room for improvement in the return on ICT investment by both instructors and learners. Other research groups have outlined methods for analyzing the role of ICT in student achievement, but these have either focused on the quantity and variety of the use of ICT as

a means of specialized language communication and for searching for specialized information, with no reference to learning (Usluel et al., 2008; OFSTED, 2011); have been based on opinions on the wider use of ICT in the learning process, and not on its results (Hales & Fura, 2013); or have been centred upon personal variables (Iniesta-Bonillo et al., 2013). A few research projects have attempted to link student use of ICT to learning and attainment (Song & Kang, 2012) but have avoided including the pedagogical aspects of technology in classrooms, or have designed a complex research model that focuses more on the evolution of technology than on that of students (Johnson & Adams, 2011).

Conclusion: beyond two-dimensional models for evidence-based ICT-enhanced research on ESP learning

These studies have their intrinsic merits, but they remain consistently two-dimensional. Because of their inherent limitations, they have failed to account for the interaction between the method of delivery, student work and results in order to cater for the needs of a context as complex as ESP. At the core of these studies lies the need to assess not technology but teaching innovation. In this new century, because of the shifts in HE contexts and the possibilities of ICT for language learning, classrooms are progressing towards a model of student-centred pedagogies that personalize learning and empower students in their independent learning. Particularly in vibrant ESP contexts such as Business English, instruction is extended beyond the classroom and curriculum to address real-life use of skills, and new technology is integrated into teaching and learning objectives. These aspects of pedagogical design can be related to particular effects on student performance: how, in a given context, technology and participants act towards curricular achievement as measured by well-designed tests within the specialized linguistic domain. To carry out adequate evidence-based research in the field of ICT-enhanced language learning, a framework is needed in order to situate both methodological and quantitative aspects coherently. Although language performance can be translated into quantity via the CEFR, data concerning ICT provisions, participants and pedagogical methods will not be quantitative only, as most of these two-dimensional studies assume.
By taking advantage of some models put forward for the analysis of qualitative data in educational contexts (Taylor-Powell & Renner, 2003; Castellan, 2010), it is possible to create a simple conceptual model to include these qualitative dimensions as required by ESP: the location where learning takes place (the classroom, an ICT room, at home, etc.), the delivery medium (the teacher, the language assistant, a mobile device, a computer, etc.), the type of instruction (from theory-based to hands-on or practice-based) and time (synchronous or asynchronous learner progression within the group). However, some of these dimensions depend heavily on the instructional design at hand, which determines the interplay of these factors in each instructional subtask, as well as other quantifiable factors such as ICT availability and learner ICT proficiency. ESP instructors in such contexts make their technological choices

Return on investment 41 almost subconsciously, and unearthing these through surveys is undesirable. As a consequence, some directly observable but non-pedagogical variables should be included as aggregations to other dimensions rather than being an integral part of the research model. Similarly, the categorization of external factors such as previous experience or motivational aspects would work as variables affecting other dimensions qualitatively or as clustering. The impact of learner motivation, for example, will not be a general factor taking the same effect across every dimension in the model, but a composite of the different degrees of impact as related to personal factors as well as the activity to be carried out by the student (Giesbers et al., 2013) as designed by the instructor. Margulieux et  al. have recently shown a simple but useful two-dimensional model that accommodates a higher number of real-life factors in order to differentiate between the inconsistent use of hybrid, blended, flipped, and inverted models of instruction (Margulieux, Bujak, McCracken, & Majerich, 2014). Such a scaffold can be part of a flexible approach to situate the practice-delivery continuum which takes place in ESP learning as mediated by ICT; adding measured proportions of student time (ST) to these limits would easily allow to link quantitative data to qualitative aspects of methodology for each of the language-based activities. This improved model still remains two-dimensional (instruction and delivery), which in research terms would stay at statistical or descriptive level, thus complicating its usefulness for measuring the impact on grades and skills attainment. 
However, if applied to research geared towards backing up good uses of ICT in language learning or analyzing larger case studies where both quantitative and qualitative data were collected, it would allow situating them pedagogically in a standardized way, in turn allowing a tri-dimensional model to be

Figure 3.1  Combined Learning Experiences: student time

[Figure: a two-dimensional grid after Margulieux et al. (2014). The delivery axis runs from delivery via instructor (instructor-transmitted F2F; instructor-mediated) to delivery via technology (technology-transmitted; technology-mediated); the instruction axis runs from information transmission to praxis. Categories: Mixed (76–100% ST instructor, 0–24% technology); Lecture hybrid (76–100% ST information, 0–24% praxis); Blended (25–50% ST instructor, 25–50% technology; 25–50% ST information, 25–50% praxis); Practice hybrid (76–100% ST praxis, 0–24% information); Online mixed (76–100% ST technology, 0–24% instructor).]
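The student-time bands of this taxonomy lend themselves to straightforward operationalization. As a minimal sketch (my own illustration, not code from the chapter; the function names and the handling of intermediate proportions are assumptions), measured ST proportions for a language-learning activity could be mapped onto the delivery and instruction bands as follows:

```python
# Illustrative classifier for the student-time (ST) bands of Margulieux
# et al.'s (2014) two-dimensional taxonomy. The 76-100% cut-offs come from
# Figure 3.1; everything else here is a hypothetical sketch.

def classify_delivery(pct_st_with_instructor: float) -> str:
    """Delivery axis: % of ST spent with the instructor (rest with technology)."""
    pct_st_with_technology = 100 - pct_st_with_instructor
    if pct_st_with_instructor >= 76:
        return "Mixed (76-100% ST instructor)"
    if pct_st_with_technology >= 76:
        return "Online mixed (76-100% ST technology)"
    return "Blended"

def classify_instruction(pct_st_information: float) -> str:
    """Instruction axis: % of ST on information transmission (rest on praxis)."""
    pct_st_praxis = 100 - pct_st_information
    if pct_st_information >= 76:
        return "Lecture hybrid (76-100% ST information)"
    if pct_st_praxis >= 76:
        return "Practice hybrid (76-100% ST praxis)"
    return "Blended"

# A flipped-style ESP lesson logging 30% ST with the instructor and 20% ST
# on information transmission falls into:
mode = (classify_delivery(30), classify_instruction(20))
```

Such a classifier is only the descriptive first step: linking the resulting categories to grades and CEFR-based measures is what would move the model beyond two dimensions.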

constructed to accommodate student outcomes and a number of other dimensions. Nowadays, multi-dimensional databases and online analytical processing make it possible to organize such complex data into n-dimensional hypercubes that accommodate further factors (motivation, previous experience, ICT access, instructor ICT proficiency, etc.) as particular added dimensions relevant to the educational context. This would transfer to education analysis approaches which are common practice within Business Intelligence, where they are used to scrutinize yield and make informed decisions in real-life contexts. Situating the use of ICT in such a manner allows a continuum between the factors measured while also remaining understandable and granular, as each of the tri-dimensional intersections of instruction, delivery and outcomes can change or move freely (as pedagogical techniques in language-related tasks do) without their data being misinterpreted or decontextualized. This would entail a quantum leap from the evidence-based research currently being carried out in ESP and EMI contexts. Taking such an approach to educational analyses would exponentially increase the chances of finding relevant patterns of successful pedagogical ICT usage, and it would allow studies on any particular dimension to interact with each other, in turn casting some light on real-life practice, its impact and transferability beyond the specificity of each specialised domain. Only in such a way can the manifold of technologies, resources and mixed methods of teaching and learning within ESP interact and be accountable in future research.
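The roll-up operations that such an n-dimensional hypercube affords can be illustrated with a few lines of standard-library Python. This is a hedged sketch, not the author's implementation: the record fields mirror the qualitative dimensions proposed in the text (location, delivery, instruction, timing), while the field names and scores are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Each record carries the pedagogical dimensions of one ESP activity plus a
# CEFR-derived numeric score (all values hypothetical).
records = [
    {"location": "classroom", "delivery": "instructor", "instruction": "theory",
     "time": "synchronous", "score": 62},
    {"location": "home", "delivery": "mobile", "instruction": "practice",
     "time": "asynchronous", "score": 71},
    {"location": "home", "delivery": "computer", "instruction": "practice",
     "time": "asynchronous", "score": 68},
]

def roll_up(records, *dims):
    """OLAP-style roll-up: average scores over any chosen subset of dimensions."""
    cube = defaultdict(list)
    for r in records:
        cube[tuple(r[d] for d in dims)].append(r["score"])
    return {key: mean(scores) for key, scores in cube.items()}

# Compare synchronous vs asynchronous learning, collapsing all other dimensions:
by_time = roll_up(records, "time")
```

Because any combination of dimensions can be passed to `roll_up`, extra factors (motivation, ICT proficiency, etc.) can be added as new record fields without changing the analysis code, which is precisely the flexibility the hypercube approach is meant to provide.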

References

Airey, J. (2013). “I don’t teach language.” The linguistic attitudes of physics lecturers in Sweden. AILA Review, 25, 64–79.
Apple, M. (2004). Are we wasting money on computers in schools? Educational Policy, 18(3), 513–522.
Aristovnik, A. (2012). The impact of ICT on educational performance and its efficiency in selected EU and OECD countries: A non-parametric analysis. The Turkish Online Journal of Educational Technology, 11(3), 144–152.
Balanskat, A., Blamire, R., & Kefala, S. (2006). The ICT Impact Report: A Review of Studies of ICT Impact on Schools in Europe. Brussels: European Schoolnet.
Beauchamp, G., & Hillier, E. (2014). An Evaluation of iPad Implementation across a Network of Primary Schools in Cardiff. Cardiff: Cardiff School of Education.
BESA. (2014, September 30). British Educational Suppliers Association. Retrieved October 15, 2014, from http://www.besa.org.uk/library/besa-research-reportsict-uk-state-schools-2014-volume-i
Bonnet, A. (2012). Towards an evidence base for CLIL: How to integrate qualitative and quantitative as well as process, product and participant perspectives in CLIL research. International CLIL Research Journal, 1(4), 65–78.
Boody, R. (2001). On the relationships of education and technology. In R. Muffoletto (Ed.), Education and Technology: Critical and Reflective Practices (pp. 5–22). Cresskill, NJ: Hampton Press.
Burge, B., Ager, R., Cook, R., Cunningham, R., Morrison, J., & Weaving, H. (2013). European Survey on Language Competences: Language Proficiency in England. London: Department for Education.
Castellan, C. (2010). Quantitative and qualitative research: A view for clarity. International Journal of Education, 2(2), 1–14.
Chan, T. W., Roschelle, J., Hsi, S., Kinshuk, M., Brown, T., & Patton, C. (2006). One-to-one technology-enhanced learning: An opportunity for global research collaboration. Research and Practice in Technology Enhanced Learning, 1(1), 3–29.
Coleman, J. A. (2006). English-medium teaching in European higher education. Language Teaching, 39, 1–14.
Costa, F., & Coleman, J. A. (2010). Integrating content and language in higher education in Italy: Ongoing research. International CLIL Research Journal, 1(3), 19–29.
Cox, M. J., & Marshall, G. M. (2007). Effects of ICT: Do we know what we should know? Education and Information Technologies, 12, 59–70.
Doiz, A., Lasagabaster, D., & Sierra, J. M. (2011). Internationalisation, multilingualism and English-medium instruction. World Englishes, 30(3), 345–359.
European Commission. (2013). Survey of Schools: ICT in Education. Benchmarking Access, Use and Attitudes to Technology in Europe’s Schools. Luxembourg: Publications Office of the European Union.
Fallows, S. J., & Bhanot, R. (2005). Quality Issues in ICT-Based Higher Education. London: Routledge.
Fortanet-Gomez, I., & Räisänen, C. (2008). ESP in European Higher Education: Integrating Language and Content. Amsterdam: John Benjamins.
Fuchs, T., & Woessmann, L. (2004). Computers and student learning: Bivariate and multivariate evidence on the availability and use of computers at home and at school. CESifo Working Paper, 1321, 1–34.
Garrison, D., & Vaughan, N. (2011). Blended Learning in Higher Education: Framework, Principles, and Guidelines. New York: John Wiley & Sons.
Ghamrawi, N. (2011). Trust me: Your school can be better – a message from teachers to principals. Educational Management Administration & Leadership, 39(3), 333–348.
Giesbers, B., Rienties, B., Tempelaar, D., & Gijselaers, W. (2013). Investigating the relations between motivation, tool use, participation, and performance in an e-learning course using web-videoconferencing. Computers in Human Behavior, 29(1), 285–292.
Goldberg, A., Russell, M., & Cook, A. (2003). The effect of computers on student writing: A meta-analysis of studies from 1992 to 2002. Journal of Technology, Learning, and Assessment, 2(1), 1–52.
Hales, C., & Fura, B. (2013). Investigating opinions on ICT use in higher education. Business Informatics, 2(28), 37–55.
Hattie, J., Biggs, J., & Purdie, N. (1996). Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research, 66(2), 99–136.
Iniesta-Bonillo, M., Sánchez-Fernández, R., & Schlesinger, W. (2013). Investigating factors that influence on ICT usage in higher education: A descriptive analysis. International Review on Public and Nonprofit Marketing, 10(2), 163–174.
Jenkins, J., Cogo, A., & Dewey, M. (2011). Review of developments in research into English as a Lingua Franca. Language Teaching, 44(3), 281–315.
Jimenez-Muñoz, A. (2014). Measuring the impact of CLIL on language skills: A CEFR-based approach for Higher Education. Language Value, 6(1), 28–50.
Johnson, L., & Adams, S. (2011). Technology Outlook for UK Tertiary Education 2011–2016: An NMC Horizon Report Regional Analysis. Austin, TX: The New Media Consortium.
Justel, A., Lado, N., & Martos, M. (2004). Barriers to university on-line education: Towards a better understanding from a marketing view. The International Review on Public and Nonprofit Marketing, 1(2), 29–42.
Korte, W., & Husing, T. (2006). Benchmarking Access and Use of ICT in European Schools. Bonn: Empirica.
Kruss, G. (2006). Working Partnerships in Higher Education, Industry and Innovation: Creating Knowledge Networks. Cape Town: Human Sciences Research Council.
Kučírková, L., Kučera, P., & Vostrá Vydrová, H. (2012). Study results and questionnaire survey of students in the lessons of business English e-learning course in comparison with face-to-face teaching. Journal on Efficiency and Responsibility in Education and Science, 5(3), 173–184.
Livingstone, S. (2012). Critical reflections on the benefits of ICT in education. Oxford Review of Education, 38(1), 9–24.
López-Zapico, M., & Tascón-Fernández, J. (2013). El uso de Twitter como herramienta para la enseñanza universitaria en el ámbito de las ciencias sociales: Un estudio de caso desde la Historia económica [The use of Twitter as a tool for university teaching in the social sciences: A case study from economic history]. Teoría de la Educación: Educación y Cultura en la Sociedad de la Información, 14(2), 316–345.
Macho-Barés, G., & Llurda, E. (2013). Internationalization of business English communication at university: A threefold needs analysis. Ibérica, 26, 151–170.
Margulieux, L. E., Bujak, K. R., McCracken, W. M., & Majerich, D. M. (2014). Hybrid, blended, flipped, and inverted: Defining terms in a two-dimensional taxonomy. Proceedings of the 12th Hawaii International Conference on Education, 12, 2394–2402.
Marzano, R. J. (1998). A Theory-Based Meta-Analysis of Research on Instruction. Aurora, CO: Mid-continent Regional Educational Laboratory.
Moens, N., Broerse, J., Gast, L., & Bunders, J. (2010). A constructive technology assessment approach to ICT planning in developing countries: Evaluating the first phase. Information Technology for Development, 16(1), 34–61.
Office of Educational Technology. (2004). Toward a New Golden Age in American Education. Washington, DC: U.S. Department of Education.
OFSTED. (2011). ICT in Schools 2008–11: An Evaluation of Information and Communication Technology Education in Schools in England 2008–11. London: Office for Standards in Education, Children’s Services and Skills.
Peeraer, J., & Van Petegem, P. (2012). Information and communication technology in teacher education in Vietnam: From policy to practice. Educational Research for Policy and Practice, 11(2), 89–103.
Peeraer, J., & Van Petegem, P. (2012). The limits of programmed professional development on integration of information and communication technology in education. Australasian Journal of Educational Technology, 28(6), 1039–1056.
Perrotta, C. (2013). Do school-level factors influence the educational benefits of digital technology? A critical analysis of teachers’ perceptions. British Journal of Educational Technology, 44, 314–327.
Picciano, A., Dziuban, C., & Graham, C. (2013). Research Perspectives in Blended Learning: Research Perspectives vol. 2. London: Routledge.
Pittard, V., Bannister, P., & Dunn, J. (2003). The Big Picture: The Impact of ICT on Attainment, Motivation and Learning. London: DfES Publications.
Plomp, T., Anderson, R., Law, N., & Quale, A. (2009). Cross-National Information and Communication Technology Policies and Practices in Education (2nd ed.). Amsterdam: Information Age Publishing.
Richardson, J. W. (2011). Challenges of adopting technology for education in less developed countries: The case of Cambodia. Comparative Education Review, 55(1), 8–29.
Rodriguez, J. (2010). Social media use in higher education: Key areas to consider for educators. MERLOT Journal of Online Learning and Teaching, 7(4), 1–12.
Scheuermann, F., & Pedró, F. (2009). Assessing the Effects of ICT in Education: Indicators, Criteria and Benchmarks for International Comparisons. Luxembourg: Publications Office of the European Union.
Selwyn, N. (2007). The use of computer technology in university teaching and learning: A critical perspective. Journal of Computer Assisted Learning, 23(2), 83–94.
Shaikh, Z. A. (2009). Usage, acceptance, adoption, and diffusion of information and communication technologies in higher education: A measurement of critical factors. Journal of Information Technology Impact, 9(2), 63–80.
Shields, R. (2013). Globalization and International Education. London: Bloomsbury.
Simpson-Vlach, R., & Ellis, N. C. (2010). An academic formulas list: New methods in phraseology research. Applied Linguistics, 31(4), 487–512.
Smit, U., & Dafouz, E. (2012). Integrating content and language in higher education: An introduction to English-medium policies, conceptual issues and research practices across Europe. AILA Review, Special issue, 1–12.
Song, H., & Kang, T. (2012). Evaluating the impacts of ICT use: A multi-level analysis with hierarchical linear modeling. Turkish Online Journal of Educational Technology, 11(4), 132–140.
Taylor-Powell, E., & Renner, M. (2003). Analyzing Qualitative Data. Madison, WI: University of Wisconsin.
Tiene, D. (2004). Bridging the digital divide in the schools of developing countries.
International Journal of Instructional Media, 31(1), 89–98. Trucano, M. (2012). Information and communication technologies. In H. Patrinos (Ed.), Strengthening Education Quality in East Asia (pp. 101–108). Montreal: UNESCO. Underwood, J., Ault, A., Banyard, P., Bird, K., Dillon, G., & Hayes, M., et al. (2005). Impact of Broadband in Schools. Nottingham: Nottingham Trent University. UNESCO Institute for Statistics. (2009). Guide to Measuring Information and Communication Technologies (ICT) in Education. Montreal: UNESCO Institute for Statistics. Usluel, Y., As,kar, P., & Bas,, T. (2008). A structural equation model for ICT usage in higher education. Educational Technology & Society, 11(2), 262–273. Venkatesh, V., Croteau, A., & Rabah, J. (2014). Perceptions of effectiveness of instructional uses of technology in higher education in an era of Web 2.0. System Sciences, 47, 110–119. Wächter, B., & Maiworm, F. (2008). English-Taught Programmes in European Higher Education: The Picture in 2007. Bonn: Lemmens. Wagner, D., Day, B., James, T., Kozma, R., Miller, J., & Unwin, T. (2005). Monitoring and Evaluation of ICT in Education Projects. Washington, DC: World Bank. Wastiau, P., Blamire, R. K., Quittre, V., Van de Gaer, E., & Monseur, C. (2013). The use of ICT in education: A survey of schools in Europe. European Journal of Education, 48, 11–27.

46  Antonio J. Jiménez-Muñoz Wong, A. F., Quek, C. L., Divaharan, S., Liu, W. C., Peer, J., & Williaams, M. D. (2006). Singapore students’ and teachers’ perceptions of computer-supported project work classroom learning environments. Journal of Research in Technology in Education, 38, 449–479. Yalin, H. I., Karadeniz, S., & Sahin, S. (2007). Barriers to ICT integration into elementary schools in Turkey. Journal of Applied Science, 7(24), 4036–4039. Youssef, B., & Dahmani, M. (2008). The impact of ICT on student performance in Higher Education: Direct effects, indirect effects and organisational change. Universities and Knowledge Society Journal, 5(1), 45–56. Youssef, B., Dahmani, M., & Omrani, N. (2010). Students’ e-skills, organizational change and diversity of learning process: Evidence from French universities in 2010. ZEW Discussion Papers, 12(31), 2–31.

4 L2 English learning and performance through online activities: a case study

M. Ángeles Escobar
Universidad Nacional de Educación a Distancia, Spain

Introduction

Today one can access cloud-based digital libraries from any electronic device, ranging from mobile phones to highly sophisticated tablets, as predicted by Akst (2003). These digital libraries offer not only knowledge to any reader but also a digital platform: an open, collaborative learning space where students can personalize their selections for study according to their self-determined learning skills, following Blaschke (2013). In addition, students can find innovative methods in higher education programmes that help them study subjects as difficult to learn as, for example, the grammar of a second language. In fact, many researchers assume that most adults never master a foreign grammar. Moreover, foreign language instruction in distance contexts, where learners can hardly be exposed to the second language in a natural environment and where communicative skills are not acquired face to face, requires an appropriate learning context. Recent research on e-learning shows that online education based on successful methodology surpasses traditional face-to-face teaching and learning (Hamilton & Feenberg, 2007; Gee, 2013; Bichsel, 2013). Lehmann and Chamberlin's (2009) approach also suggests that differentiation and specific needs should be present in learning activities. This means that a successful methodology should pursue areas of personal interest. Furthermore, learning activities need to be fairly structured without compromising flexibility. Online learning has to be adapted to successful methods, and relevant components are required, such as the teacher's constructed learning environment and students' specific actions related to learning tasks, as shown in the conversational framework put forward by Laurillard (1993, 2002). In any case, second language teachers should check whether their methods are in fact able to improve language accuracy.
Here we present a case study of L2 English acquisition by adult learners through a series of online activities. The grammatical contents selected for this study involve, among other things, word order configurations where English and Spanish diverge, making use of a task-oriented approach.

The main aim of this chapter is to test whether or not online activities have a positive effect on learners' performance and accuracy in the foreign language. In doing so, we also discuss the types of task that can be decisive in a virtual course for the acquisition of a specific subject, L2 English for Specific Purposes (ESP), in a distance learning context. This should encourage teachers, in designing their L2 English courses, to pay attention to their students' motivation through online tasks.

UNED distance learning methodology

We live in the digital era, and technology is changing our lives at such a fast pace that it can be overwhelming for university students to decide which technology can best enhance their study habits, especially if they are enrolled in a distance college. The learning methodology of the Spanish University of Distance Education (UNED) is based on the hypothesis that students who make the most of the improvements which technology brings to their lives will also improve academically if their study materials are related to the routine tasks they perform on their laptops, smartphones or tablets. In fact, most online courses at the UNED make use of electronic or digital materials that can not only be downloaded easily but also read offline, provided that the right software is installed on one's computer, laptop or smartphone. Besides, these online courses have many other advantages over traditional face-to-face lectures, such as the immediacy of the Internet: there is no need to wait to receive anything, and study materials can be downloaded without delay. The UNED platform ALF aims to allow students to: (i) access study materials; (ii) manage and share documents; (iii) upload timed online activities; and (iv) maintain fluid communication with their teachers and classmates. As at any standard university, undergraduate students at the UNED are examined through a final in-situ exam at a registered centre. This is in fact a compulsory requirement to complete their degree subjects.

English grammar and e-learning

People sometimes describe grammar as the rules of a language, but in fact no language has rules in that sense. Using the word "rules" suggests that somebody created the rules first and then spoke the language, like a new game. Languages did not start like that: they began with people making sounds which evolved into words, phrases and sentences. No commonly spoken language is fixed; all languages change over time. What we call grammar is simply a reflection of a language at a particular time. English language learners, for example, are aware that the grammar of their mother tongue is very different from the grammar of the target language.

One may argue that one does not have to study grammar to learn a language, since most people in the world speak their native language without having studied its grammar. However, native speakers of English cannot answer learners' questions such as Why do genitives use 's in English? or Why can't you omit subjects in English?, and this type of information is very useful for learners, since it gives them a key to understanding the target language and using it for communication purposes. Furthermore, it is clear that the explicit study of some constructions that form the grammar of the target language can support the learning of that language in a quicker and more efficient way. In this sense, it is important to think of grammar as something that can help, like any tool, rather than something that has to be memorized. When one understands the grammar (or the particular constructions) of a language, one can immediately apply this explicit knowledge to other related linguistic facts without having to ask a teacher or look in a book, which is also essential, for example, in the case of self-study. In effect, most online English courses contain boxes with explicit drills which feature in the language-in-use sections elsewhere. However, the grammar points covered in these sections are not meant to form a comprehensive grammatical syllabus; they are there to revise and consolidate what students already know and will need to know to succeed in particular grammar cloze tests at the end of the course. Not surprisingly, the descriptive rules that also appear in online courses are often forgotten and, what is more important, they do not really help to raise students' level of accuracy in order to perform well in the writing paper or in the speaking part of the exam.
One of the goals of the learning tasks contained in our online course at the UNED is to develop, improve and practise the knowledge of English grammar required for the practice of all language skills, through real professional communication in both oral and written tasks.

Task-based learning and technology

It is well known that task-based language teaching (TBLT), or task-based instruction (TBI), attempts to engage students in meaningful tasks using the target language, cf. Long (1991), Ellis (2003), and Gass and Selinker (2008), among many others. Nowadays, interdisciplinarity and the task-based approach are the terms appearing in new L2 teaching curricula across the board, since there is a general need to teach languages through topical content. Moreover, recent literature presents relevant conditions for L2 learning with an emphasis on technology. Zhao and Lai (2007) identify conditions for L2 learning that hold regardless of whether it is enriched by technology; however, they cite research showing that technology-enhanced L2 instruction is often better able to fulfil these conditions. Our online English course attempts to support an active, autonomous attitude to language learning. The students develop small learning tasks, which focus on their own language learning process. Moreover, they have to be curious and reflect on the way they act and solve problems. Such an approach fits into a professional environment. Through the learning tasks, they are also expected to build a solid language background for their daily practice, since most of the exercises are task oriented. Therefore, following a traditional task-based approach and incorporating technology, we designed the online exercises throughout the course to make learners:

1 Understand each task, i.e. reading through the input material and seeing what is required by it.
2 Select ideas, i.e. deciding what specific information is needed from the input material, taking care to avoid lifting phrases from the texts.
3 Take notes, i.e. identifying who the target reader or audience for the writing task is and what register is most appropriate.
4 Plan a final answer, i.e. deciding on the outline for the task and how to structure it, thinking about paragraphs and using linking devices.

In considering learning from tasks in the context of foreign language grammar, motivation is also required. The grammar constructions included in the study materials therefore contain real professional examples. According to Hutchinson and Waters' (1987) ESP approach, "the language and content focused are drawn from the input [. . .] in order to do the task" (p. 109). In fact, there is also abundant literature on the importance of motivation in the learning of foreign languages. Ellis (1994), in exploring motivation, emphasizes that motivation makes learners aware of their own learning process. Krashen's Affective Filter hypothesis (Krashen, 1982) holds that learners with high motivation, self-confidence, a good self-image and a low level of anxiety are better equipped for success in second language acquisition.
In contrast, low motivation, low self-esteem and debilitating anxiety may form a mental block that prevents comprehensible input from being used for acquisition. According to Dörnyei (2001), teacher behaviour is a prevailing motivational tool: "The teacher stimulus is diverse in that it ranges from the empathy with student-teacher attitudes that conquer students to engross in undertakings" (p. 120).

The present study

Objective of the study

The main goal of the study was to find out whether students who participated in different tasks throughout the online course obtained better academic results than students who did not participate in any of the tasks and only took the offline exam at the end of the course.

Materials

Three main tasks were proposed during the online course to a total of 194 students of Tourism at the Faculty of Economics, Spanish University of Distance Education (UNED): two optional online tasks throughout the course, and one offline written test at the end of the course. The online tasks consisted of: (a) an essay on the advantages and disadvantages of print textbooks vs. ebooks; and (b) an online oral test with an ESP teacher. The offline task consisted of a compulsory written exam with a reading comprehension part, a grammar section and a writing exercise. All the students used the coursebook English Grammar and Learning Tasks for Tourism Studies (Escobar Alvarez, 2011). This textbook focuses on different topics related to English for Tourism, for example tour operators, hotels, travelling, transport, excursions, visits to museums or historical cities, and customer complaints. Both the print and ebook formats contain: (1) self-assessment grammar exercises; (2) a key to all learning tasks found in each chapter; and (3) tips for writing and speaking, with a glossary of useful terms for the tourism industry. In each case, students are invited not only to work with grammatical points in real situations but also to use the most frequent English constructions, applying the grammatical notions with the help of specific ESP learning tasks. As for the L2 English grammar contained in the book, the main aim was to offer students a reasonably detailed account of some of the most frequent parts of English grammar involved in real communication situations, which have to be practised in order to use the foreign language properly for specific purposes. The coursebook, in either print or ebook form, was designed to provide a solid grammatical syllabus for undergraduate students taking English as a foreign language within Tourism studies.

Method

The experimental tasks incorporated in the online course, different from those found in the coursebook, attempted to further motivate students to meet their learning needs. This is common practice in most online courses on UNED's platform, as discussed above. During the virtual course, among other activities, the students were invited to send the teaching team an essay on the pros and cons of ebooks in contrast to print textbooks; secondly, they were asked to take, on a voluntary basis, an oral test about a topic related to their own field of study, using the new ESP words and constructions studied in the coursebook. In this way we could incorporate our two experimental tasks in a natural way. The students who performed these tasks showed an explicit interest in expressing their preferences for print or ebooks and in practising the grammatical constructions studied in the coursebook.

One of the goals of our online course was to let our students practise their L2 in tourism-oriented situations. On the assumption that learners in Tourism degrees usually have a better command of the foreign language, since their oral communication competence is key to their future career, we let our students develop, improve and practise their knowledge of English grammar in real situations, with special emphasis on professional communication. This strategy was originally explored by García Laborda (2009), who noted that

the communication in English for Tourism in professional environments will tend to be specialized, functional and adequate to the professional context. Thus, it is the function of English for Tourism instructors to promote oral interactions to foresee and practice situations that students will probably face in travel agencies, hotels or tourism establishments. (García Laborda, 2009, p. 263)

Assessment

In our experimental tasks, students were expected to make use of content-based vocabulary and structures (as provided in the course textbook) to give real information and solutions to problem-solving situations (travel schedules, bookings, accommodation complaints, trip information) in a communicative and realistic way. The feedback provided by the teacher depended on the students' ability to: (i) communicate opinions and information on everyday topics in the tourism industry and answer a range of questions; (ii) speak at length on a given topic using appropriate language; (iii) organize ideas coherently; (iv) express and justify personal opinions; and (v) analyse, discuss and speculate about relevant issues. The remaining activities in the online course, which were not part of our study, included self-assessment tests on the contents of the textbook and general forum discussion of the textbook exercises. Finally, all students had to take an offline exam, regardless of whether they had participated in the online tasks described above. This examination paper contained different sections dealing with four linguistic skills: (i) reading comprehension; (ii) vocabulary; (iii) a grammar test; and (iv) writing on a specific topic in the tourism industry addressed in the course textbook. The points to be assessed in the grammar test included: (i) verb tense selection; (ii) prepositions; (iii) connectors; and (iv) grammatical error analysis.

Participants

The students in our study were enrolled in the second year of Tourism at the Faculty of Economics at the UNED; in all, 194 second-year students participated. They all had an intermediate or upper-intermediate level, but L2 English Grammar for Specific Purposes was new to them at the beginning of the course.

The method described above allowed us to split our students into different groups according to their participation in the online course tasks, over and above their participation in the final written examination paper.

Preliminary results

All the students took the final offline written test, since this was a requirement to complete the course, but not all of them did the online tasks. Nine students freely emailed an essay on their study material preferences (print or ebook) and participated in the other online assessment tasks; 112 students took the oral test and participated in the self-assessment tests in the online course; and 73 students only took the final exam, without participating in any of the previous online assessment tasks.
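
The three-way grouping described here is simply a partition of the cohort by which optional tasks each student completed. A minimal sketch of that partition, using hypothetical participation records (the real student roster is not published in the chapter, so the identifiers and the `assign_group` helper are assumptions for illustration), might look like this:

```python
# Hypothetical participation records: student id -> set of completed tasks.
# Every student took the compulsory offline exam; the essay and the oral
# test were optional online tasks.
participation = {
    "s01": {"essay", "oral_test", "exam"},
    "s02": {"oral_test", "exam"},
    "s03": {"exam"},
    "s04": {"oral_test", "exam"},
}

def assign_group(tasks):
    """Group 1: essay writers; Group 2: oral test takers; Group 3: exam only."""
    if "essay" in tasks:
        return 1
    if "oral_test" in tasks:
        return 2
    return 3

groups = {}
for student, tasks in participation.items():
    groups.setdefault(assign_group(tasks), []).append(student)

print({g: len(members) for g, members in sorted(groups.items())})
```

With the actual 194 records, the same tally would reproduce the 9/112/73 split reported above.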

Qualitative study results

As mentioned above, 9 students freely emailed an essay on their preferences regarding their study materials. Surprisingly, just 1 student found more advantages than disadvantages in using an ebook in his studies, crucially because, as he argued, he is really keen on technology. In contrast, most students preferred print textbooks for preparing their academic subjects. They argued that they find print textbooks easier and quicker to read; even though they accepted that ebooks are user-friendly, they do not have much experience with them. They seemed to be discouraged by ebooks because they think they need intricate e-reading devices and expensive additional software. Moreover, they like to make notes next to the text or underline important words or concepts, and they believe they cannot make this kind of annotation in an ebook. They also felt they would have to spend extra time learning how to manage those devices before they could benefit from them. In sum, despite the ebook advantages discussed above, most students who participated in our study seemed not to be aware of them. Four students pointed out that a device can run out of battery, which would leave them unable to work for a few hours. One student argued that a virus can also cause a lot of damage to some operating systems. Three other students pointed out that studying with an ebook can be very stressful, since they would have to start all over again if they stopped working. For other students, reading in a second language on screen can cause eyestrain, even if screen resolution and lighting have improved. They also argued that they have difficulty finding their grammatical mistakes on screen; therefore, they always prefer to print their essays, which reinforces their idea that it is easier to read what is written on paper.
For most students, it is very appealing to go into a bookshop and, for example, spend time reading the reviews by experts on back covers. Although paper books are more expensive, they can be borrowed from a library and can even be sold second-hand. Moreover, students think they do not need to invest in any reading device or pay a monthly fee for Internet access, and there is no possibility of content damage. In short, most students in this study show that they are not completely computer literate, so they cannot enjoy the vast improvements which technology brings to their studies, and ebooks are therefore not part of their study materials. In fact, the student who chose ebooks over print textbooks argued that ebooks are closely related to his routine tasks on his laptop, smartphone or tablet.

Quantitative study results

ONLINE TASKS

As mentioned above, there were other tasks that students could complete during the course, namely an oral test and a series of online self-assessment tests. According to our results, 122 students, including the 9 students who wrote the above-mentioned essay, completed both tasks, whereas 73 students did not do any of the previous tasks and only took the offline exam. First, we wanted to check whether the 9 students who participated in the extra condition of writing an essay on the pros and cons of ebooks also did well in their final exams. Interestingly, most of them did well in their oral tests, and likewise in their final written exams. This seems to suggest that these students really knew how to carry out a communicative task in their foreign language, although they still made some grammatical mistakes. These results are in line with what the Common European Framework of Reference for Languages (CEFR) requires for the B2 (upper-intermediate) level, matching the performance of most students in the study.

FINAL TEST

The final test consisted of a controlled written exam taken under exam conditions at a UNED registered centre. Like any regular ESP exam, this examination assesses the student's reading and writing skills in the field under study (English for Tourism). As mentioned above, the exam paper contained a grammar section that we used to assess the grammar development of our students. This grammar section included 17 multiple-choice questions on: (i) verb tense selection; (ii) prepositions; (iii) connectors; and (iv) error analysis. According to our results on correct responses, condition ii (prepositions) and condition iii (connectors) were easier for our students than condition i (verb tense selection) and condition iv (error analysis), cf. 66.28% and 60.26% vs. 32.82% and 32.72%, respectively.

TOTAL RESULTS

Considering their participation in the virtual course, the students fell into three groups, split on the basis of whether or not they took part in the online assessment tasks: Group 1, 9 students who emailed the essay; Group 2, 112 students who participated in the online oral task; and Group 3, 73 students who only took the offline exam.

Figure 4.1  Evaluation per group in all tasks

Figure 4.1 depicts the number of students in each group who participated in each of the course tasks (essay, oral test, final test), together with the number of students who passed or failed the offline exam. Interestingly, we found significant differences among the three groups with respect to the grade in their final exam. In particular, most students in Groups 1 and 2, who took part in the online tasks, passed the final exam (around 60% and 61%, respectively), in contrast to students in Group 3, who only took the final exam: only around 20% of these students passed it. Moreover, if the grade needed to pass the final exam is lowered to 3.0, the difference between the groups is even bigger. This also means that the lowest grades (0–3) were scored by the students in Group 3, who did not participate in any of the tasks in the online course.
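
The between-group comparison reported here can be framed as a test of independence between online-task participation and pass/fail outcome on the final exam. The sketch below uses illustrative pass/fail counts reconstructed from the approximate pass rates in the text (roughly 60%, 61% and 20% for Groups 1 to 3); the exact counts are not published, so both the numbers and the `chi_square_independence` helper are assumptions for illustration:

```python
import math

# Illustrative pass/fail counts derived from the approximate rates in the
# text (the exact raw counts are not published): rows are Groups 1-3,
# columns are (pass, fail) on the final offline exam.
observed = [
    (5, 4),     # Group 1: essay + oral test + exam (9 students, ~60% pass)
    (68, 44),   # Group 2: oral test + exam (112 students, ~61% pass)
    (15, 58),   # Group 3: exam only (73 students, ~20% pass)
]

def chi_square_independence(table):
    """Pearson chi-square test of independence for an r x 2 table.

    Returns (statistic, p_value). With (3 - 1) * (2 - 1) = 2 degrees of
    freedom, the chi-square survival function is exactly exp(-x / 2).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    assert dof == 2, "closed-form p-value below assumes 2 degrees of freedom"
    p_value = math.exp(-stat / 2)
    return stat, p_value

stat, p = chi_square_independence(observed)
print(f"chi2 = {stat:.2f}, p = {p:.4g}")
```

For a 3 x 2 table the test has two degrees of freedom, so the p-value has the closed form exp(-x/2); with larger tables one would instead use a library routine such as scipy.stats.chi2_contingency.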

Discussion and concluding remarks

It is well known that adults have more difficulty learning a second language than children, who simply absorb grammar rules and exceptions and learn from them. Therefore, grammar rules need to be well explained and supported by realistic, motivating professional tasks that can help learners carry out their profession in the second language. Less motivated adult learners show a decline in their foreign language performance in contrast to more motivated learners. In this study we observed evidence of successful performance in the development of grammar by adult learners acquiring English as an L2, driven by their personal need to carry out different tasks online.

56  M. Ángeles Escobar In fact, according to our findings, students did well in their essays when asked about their own opinions on learning materials and when they took the oral test carrying out a particular communicative task in the tourist industry. The difficulty in the grammar of the foreign language found in the written test of the students in our study contrasts to their ability in performing the different tasks in the online exercises in the other online tasks. This clearly implies that our learners can express or decode meaning in the target language, although they have not acquired the appropriate systematic target language grammar yet. In fact we assume that the acquisition of the grammar of the foreign language is a long process and obviously takes much longer than two university years. Furthermore, the learning of word order in the foreign language is particularly difficult for adult learners whose mother tongue exhibits a rather different one. This explains why the online tasks provided to our students, all Spanish speakers, highlighted practice on word order in L2 English since their L1 word order is rather different. In our study we have also assumed that students who freely write a voluntary essay clearly indicate a certain grade of motivation. In this way, we not only measure their personal taste with respect to online learning materials, a trendy topic in distance education, but also their motivation to be explicitly active in their own learning process. In fact, all the students who participated in this first activity also took part in the online oral test which was also presented as an optional activity. Our findings serve as replicating previous studies, discussed in the above section, showing that motivated students obtain better results in foreign language testing. Clearly, there are four positive points about learning English grammar through online learning materials. First, grammar points can be checked more quickly. 
Second, grammatical error analysis allows students to check their own grammar development. Third, grammar can be studied from anywhere, improving accuracy. Fourth, L2 performance is enhanced when students complete motivating tasks in which writing accuracy is supported. Overall, our results on the acquisition of L2 English show that students did not do well in the written grammar test, especially when explicitly asked to render grammaticality judgements. Yet the students who participated in the online tasks during the virtual course obtained better academic results than those who only took the final exam, who obtained the lowest scores. Finally, our study leads us to conclude that moving to e-learning requires giving learners explicit evidence alongside online resources through a task-based methodology. Other factors that may contribute to L2 development are left for further research on e-learning.



Part 2

Languages and technology-enhanced assessment


5 Language testing in the digital era
Miguel Fernández Álvarez
Chicago State University, USA

Introduction

The field of education has evolved in many ways since technology was adopted as a tool to facilitate teaching and learning. Over recent decades, instructors, administrators, students, parents, and textbook editors and writers have experienced the greatest revolution the classroom has ever seen, driven largely by the introduction of the computer into the teaching and learning process. Learning became significantly richer as students gained access to new and different types of information, manipulated it on the computer through graphic displays or controlled experiments in ways never before possible, and became able to communicate their results and conclusions in a variety of media to their teachers, to students in the next classroom, or to students around the world. Assessment, as an integral, ongoing part of this process, has also been affected by the integration of technology. The aim of this chapter is to present the history and evolution of technology-enhanced assessment in language testing, from the introduction of computer-based testing (CBT) in the mid-1980s to the most recent advances in the field, focusing on the advantages and disadvantages of each development. The second part of the chapter focuses on computer-assisted language testing (CALT) as a field and the benefits and drawbacks associated with it. The ultimate goal of this chapter is to present the current state of the art and to explore the future of technology-enhanced language testing on the basis of existing practices and methodologies.

History and evolution of technology-enhanced assessment

In 2000, Alderson published a state-of-the-art review of technology in testing in which he analyzed the trends of the time and considered what technology would offer the field of language testing in the future. In Alderson's words (2000, p. 593), at the turn of the century "[a]s developments in information technology ha[d] moved apace, and both hardware and software ha[d] become more powerful and cheaper, the long-prophesied use of IT for language testing [was] finally coming about." However, the idea of the integration

of technology with language testing had long been considered in expert circles. As early as 1985, the International Language Testing Association (ILTA) chose that theme for its annual conference, the Language Testing Research Colloquium (LTRC). In that international forum, language assessment researchers, scholars, and practitioners from many parts of the world met to discuss what the general measurement profession called computer adaptive testing (CAT) and to start exploring a new concept that was evolving in the field: computer-based testing (CBT). One year later, Stansfield (1986) published the conference proceedings under the title Technology and Language Testing. The same concept was again the topic of discussion at the 2001 LTRC meeting. Since then, technology has been an integral component of the field of language testing. It is important to emphasize that there has always been a strong connection between technology and the teaching and assessment of languages. It is worth noting, however, that technology advances so rapidly that today's most basic technology was considered leading-edge a few decades ago. With that in mind, in this section I present a brief description of the history and evolution of technology-enhanced assessment in the field of language testing since its origins, focusing on important trends that have changed the way languages are assessed nowadays.

Computer-based testing (CBT)

The field of language testing evolved considerably in the late '70s and early '80s, when computers started being used as an alternative way to measure the language proficiency of test-takers. Computer-based tests (CBTs) began to be developed and implemented with the aim of simplifying test administration and scoring. While paper-and-pencil tests traditionally had to be administered to a large group of test-takers at the same time, CBTs introduced a new delivery method in which each candidate could take the test on an individual computer or over a closed network. As Fulcher (2000, p. 96) points out, "the first computer-based tests were simply paper and pencil tests that had been designed, constructed and piloted using the tools of [Classical Test Theory] CTT." Since then, multiple studies have compared paper-and-pencil tests and CBTs with the aim of identifying the elements that make each type of test distinct (Choi & Tinkler, 2002; Fernández Álvarez & Sanz Sainz, 2004; Kim & Huynh, 2007; Noyes & Garland, 2008; Kingston, 2009; Hosseini, Zainol Abidin, & Baghdarnia, 2014). One finding is that, as expected, students who are familiar with computers prefer CBTs (O'Malley et al., 2005; Poggio, Glasnapp, Yang, & Poggio, 2005). However, this preference does not mean that test-takers perform better on CBTs, as confirmed by Hosseini, Zainol Abidin and Baghdarnia (2014). In fact, the results of

their study revealed that candidates performed better on paper-and-pencil tests than on CBTs. One of the most attractive advantages of CBTs was that results were available upon completion of the test, a very important characteristic for pedagogical purposes (Bennett, 1999). This caught the attention of language testers, since it was something paper-and-pencil tests could not offer. However, a few years after CBTs came into use, Fulcher (2000, p. 94) reminded us that the scorability of tests by machines "[was] still an area of concern and active research in the current generation of computer-based tests (CBTs). This is true for constructed responses as well as responses to objective items, for the same reasons of cost and efficiency." While how items are scored seems to be an issue for some, others believe that identifying the test construct is an equally important problem. One of the concerns expressed by Fulcher in 2000 (p. 96) was that "[t]he introduction of multimedia to a listening test may change the nature of the construct being measured." He further explained that "[i]t is possible that video content changes the process of comprehending listening texts in ways that we do not yet fully understand," and noted the lack of research in that area. A few years later, Ockey (2007) discussed the construct implications of including still images or video in computer-based listening tests, concluding that test-takers engage differently when input is provided through different modes of delivery. Thus, he suggests that "the utilization of video in such computer-based tests may require a rethinking of the listening construct." This issue is still under research, as evidenced by recently published studies (Ockey, 2007; Batty, 2015), and further exploration is needed.
Besides the weaknesses researchers have found in the use of CBTs, there are also benefits associated with them:

• Efficient administration: CBTs can be administered to individuals or small groups of students in classrooms or computer labs, eliminating the timing issues caused by the need to administer paper-and-pencil tests to large groups in single sittings. Different students can take different tests simultaneously in the same room.
• Immediate results: One of the major drawbacks of testing on paper has been the long wait for results, caused by the need to distribute, collect, and then scan test booklets and answer forms, and to hand-score open-response items and essays. The results of computer-based tests, by contrast, can be available immediately, providing schools with diagnostic tools for improved instruction.
• Efficient item development: As computer-based testing becomes more advanced, item development will become more efficient, higher quality, and less expensive (National Governors Association, 2002). Bennett (1998) believes that at some point items might be generated electronically, matched to particular specifications at the moment of administration.
• Increased authenticity: Computers allow for increased use of "authentic assessments" – responses can be open-ended rather than relying only on multiple choice.

In looking ahead, one concern with CBTs is security. Al-Saleem and Ullah (2014) explore security considerations in computer-based and online testing and propose authentication systems based on palm biometrics in conjunction with video-capture techniques. While their work builds on recent studies (Anusha, Soujanya, & Vasavi, 2012; Oluwatosin & Samson, 2013), more research is needed on the practical aspects of security protocols and authentication features.

Computer adaptive testing (CAT)

The term computer adaptive testing (CAT) first appeared in the early '70s (Dunkel, 1999; Gruba & Corbel, 1999). However, Fulcher (2000) reminds us that "it is not until recently that they have come into widespread use, generating much research and comment." One reason CATs were not at first prevalent in the L2 field is that teachers, testers and practitioners were more concerned with performance-based assessment, which is more difficult to attain in a computerized test (Chalhoub-Deville, 2001). Dunkel's contributions (1991; 1997; 1999) have been fundamental to the establishment and development of second/foreign language CATs. In these tests, the computer makes the calculations needed to estimate a person's proficiency and to choose the questions to present. Based on the test-taker's responses, the computer selects items of appropriate difficulty, avoiding items that are too difficult or too easy and instead delivering more items at the test-taker's level of ability than a non-adaptive test could include (Alderson, Clapham, & Wall, 1995; McNamara, 1996; Fernández Álvarez & García Rico, 2006). Thus, computer adaptive testing "offers a potentially more efficient way of collecting information on people's ability" (Hughes, 2003, p. 23). Figure 5.1 shows in detail the way CATs work. Adaptivity and security seem to be the two main advantages of CATs. On the one hand, the test adapts to the level of the test-taker, which makes the process somewhat similar to assessment by a real person who selects questions based on the candidate's performance. On the other hand, because items are chosen in a different order, each test-taker is essentially doing a different test. Two candidates can be doing the same exam at the same time and there is a good chance that they will have to respond

Figure 5.1  Selection of items in a CAT (the item bank is ordered by difficulty: the test starts with an intermediate item; a correct response leads to a more difficult item, an incorrect response to an easier one, until the level is established). Source: UCLES, 2001, p. 4

to different questions. In the words of Davies et al. (1999, p. 29), CATs are "claimed to facilitate greater accuracy of measurement as a result of presenting candidates with items which are at the maximum discrimination level, i.e., are more or less at the candidate's level of ability, items of this type providing more information about the candidate than items which are too easy or too difficult." CATs show other advantages as well. In fact, CATs were developed to eliminate the time-consuming traditional test that presents easy questions to high-ability test-takers and excessively difficult questions to low-ability ones. Dunkel (1999) identifies the following advantages in their use:

• Self-pacing: CATs allow test-takers to work at their own pace. The speed of examinee responses could be used as additional information in assessing proficiency, if desired and warranted.
• Challenge: Test-takers are challenged by test items at an appropriate level; they are not discouraged or annoyed by items far above or below their ability level.
• Immediate feedback: The test can be scored immediately, providing instantaneous feedback for the examinees.
• Multimedia presentation: Tests can include text, graphics, photographs, and even full-motion video clips.
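Davies et al.'s point – that items at or near the candidate's ability level provide the most information – can be illustrated with a minimal sketch of the Rasch model (the ability and difficulty values below are invented for illustration):

```python
import math

def rasch_p(ability, difficulty):
    """Rasch-model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def item_information(ability, difficulty):
    """Fisher information of a dichotomous Rasch item, p * (1 - p);
    it peaks when the item difficulty matches the candidate's ability."""
    p = rasch_p(ability, difficulty)
    return p * (1.0 - p)

theta = 0.5  # hypothetical candidate ability
# An easy item, an on-level item, and a hard item:
infos = {b: item_information(theta, b) for b in (-2.0, 0.5, 3.0)}
best = max(infos, key=infos.get)  # the on-level item, b = 0.5
```

The on-level item yields information 0.25 (the maximum possible for a Rasch item), while items far above or below the candidate's level yield much less – which is why an adaptive test converges on items near the estimated ability.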

While most CATs are commercial tests, some free tests are available. Two of the most common are Concerto (The Psychometrics Centre, 2013) and SLUPE (Saint Louis University, 2014). Concerto is an online R-based adaptive testing platform that can be used in any field; it requires, though, the services of a programmer expert in R and someone with knowledge of statistical analysis. SLUPE, on the other hand, does not require any programming or statistical expertise. It is a much more user-friendly system, which the test administrator uses to create his or her own database. This has drastically changed how CATs are perceived and used, as now "it is definitely feasible for language teachers without computer programming skills to create reliable computer-adaptive tests" (Burston & Neophytou, 2014). Although we now see further advantages in their use and an increased interest in security, reliability and maintainability, issues related to other aspects such as presentation, functionality, feedback, contracts and licensing are still usually ignored (Economides & Roupas, 2007), an indication that CATs are still in their development phase.

Web-based testing (WBT)

It was at the turn of the century that researchers started exploring other uses of CBTs and began developing tests that could be delivered online, written in the "language" of the Internet (HTML) and possibly enhanced by scripts. In just a few years there were a large number of projects, presentations and publications devoted to WBTs: the DIALANG project (Alderson, 2001), a paper (Roever, 2000), several in-progress reports (Malone, Carpenter, Winke, & Kenyon, 2001; Sawaki, 2001; Wang et al., 2000), poster sessions (Carr, Green, Vongpumivitch, & Xi, 2001; Bachman et al., 2000), and a few publications (Douglas, 2000; Roever, 2001). In WBTs, the test is located as a website on the tester's server, where it can be accessed by the test-taker's computer. The client's browser software displays the test, the test-taker completes it and – if so desired – sends his or her answers back to the server, from which the tester can download and score them. If the test consists of dichotomous items (true/false or multiple choice), it can be made self-scoring by using scripts. Even though there are still several drawbacks to the use of WBTs (i.e. security, or technical problems such as server failure and browser incompatibilities), there are clearly several advantages linked to them. Roever (2001) distinguishes three main advantages:

• Flexibility in time and space: This is probably the biggest advantage of a WBT. All that is required to take a WBT is a computer with a Web browser and an Internet connection (or the test on disk). Test-takers can take the WBT whenever and wherever it is convenient, and test designers can share their test with colleagues all over the world and receive feedback. Even though security is a big issue in the use of WBTs, there are still advantages to delivering the test via the Web: no specialized software is necessary, and existing facilities like computer labs can be used as testing centers.
• Ease of writing: Whereas producing traditional CBTs requires a high degree of programming expertise and the use of specially designed, non-portable delivery platforms, WBTs are comparatively easy to write and require only a free, standard browser for their display. In fact, anybody with a computer and an introductory HTML handbook can write a WBT without too much effort, and anybody with a computer and a browser can take the test; language testers do not have to be computer programmers to write a WBT. This is largely because HTML is not a true programming language but only a set of formatting commands that instruct the client's Web browser how to display content. In addition, HTML contains elements that support the construction of common item types, such as radio buttons for multiple-choice items, input boxes for short-response items, and text areas for extended-response items (essays or dictations).
• Affordability: A WBT is very inexpensive for all parties concerned. Testers can write the test by hand or with a free editor program without incurring any production costs except the time it takes to write the test.

Once a test is written, it can be uploaded to a server provided by the tester's institution or to one of the many commercial servers that offer several megabytes of free web space. Since WBTs tend to be small files of no more than a few kilobytes, space on a free server is usually more than sufficient for a test. The use of images, sound, or video can enlarge the test considerably, however, and may require the simultaneous use of several servers or the purchase of more space. CATs are possible on the Web and do not pose many technical problems beyond those encountered in linear tests, but it cannot be emphasized enough that the design of a sophisticated WBT is a very complex undertaking that requires considerable expertise in Item Response Theory (IRT).
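The self-scoring of dichotomous items mentioned above amounts to comparing each response with an answer key. A minimal server-side sketch (the item names and key below are hypothetical) might look like this:

```python
def score_dichotomous(key, responses):
    """Score dichotomous (true/false or multiple-choice) items by
    comparing each submitted response with the answer key, as a
    self-scoring WBT script would."""
    item_scores = {item: int(responses.get(item) == answer)
                   for item, answer in key.items()}
    return {"item_scores": item_scores,
            "total": sum(item_scores.values()),
            "max": len(key)}

# Hypothetical three-item key and one test-taker's submission:
key = {"q1": "b", "q2": True, "q3": "a"}
submission = {"q1": "b", "q2": False, "q3": "a"}
result = score_dichotomous(key, submission)  # 2 of 3 items correct
```

In a browser-based test the same comparison would run in a client-side script; running it on the server instead keeps the answer key out of the test-taker's reach.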

Computer-assisted language testing (CALT)

The rising use of computers in testing, and the creation and adaptation of the many different types of tests mentioned above, led to the establishment of a new discipline called computer-assisted language testing (CALT), sometimes also referred to as computer-assisted language assessment. CALT, according to Noijons (1994, p. 38), can be defined as "an integrated procedure in which language performance is elicited and assessed with the help of a computer." Nowadays, however, this term can be redefined to include other instruments. The field has evolved steadily since computers were first used for scoring test items in 1935 in the United States (Fulcher, 2000, p. 93); today not only computers are used but also other electronic devices such as smartphones and tablets, tools that have made test delivery easier and simpler. Pathan (2012, p. 31) identifies three domains in CALT: (1) the use of computers for generating tests automatically, (2) the interaction between test-takers and the computer (in the form of online interaction), and (3) the use of computers for the evaluation of test-takers' responses. It is this third aspect that gives birth to CALT as a discipline which, like any other field, presents benefits and weaknesses. It is evident, however, that despite the possible drawbacks associated with CALT (which I present below), there are many scholars who have advocated the use

of computer technology in the field of language testing (Madsen, 1986; Stansfield, 1986; Dandonoli, 1989; Chapelle & Douglas, 2006). Some of the most frequently cited advantages of CALT, identified by Pathan (2012), include the following:

• Administrative and logistic issues are much easier to overcome (Roever, 2001).
• Consistency and uniformity are offered.
• Authenticity and greater interaction are enhanced.
• Insight into test-takers' routes and strategies is offered (Alderson, 1990).
• Tests are individualized.
• Self-pacing is provided.
• Immediate test results and feedback are offered.
• More accurate assessment of the test-taker's language ability is provided.
• Less time is required in the administration process (Dandonoli, 1989).
• A more positive attitude toward tests is created (Madsen, 1986).
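Pathan's first domain – the use of computers for generating tests automatically – can be sketched with a toy template-based item generator. The template and word lists below are invented for illustration; operational systems of the kind Bennett (1998) envisages would work from much richer item specifications.

```python
import random

def generate_items(template, slots, n, seed=0):
    """Toy automatic item generation: fill a cloze-style template
    with values sampled from the slot lists, producing n item
    variants from one specification."""
    rng = random.Random(seed)  # seeded so test forms are reproducible
    items = []
    for _ in range(n):
        items.append(template.format(verb=rng.choice(slots["verb"]),
                                     noun=rng.choice(slots["noun"])))
    return items

# Hypothetical specification for a simple grammar item:
template = "She ___ ({verb}) the {noun} yesterday."
slots = {"verb": ["book", "take", "write"],
         "noun": ["exam", "ticket", "letter"]}
items = generate_items(template, slots, n=3)
```

Because the generator is seeded, the same specification reproduces the same form – a property test developers need when equating forms across administrations.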

On the other hand, there are issues associated with CALT, and security is perhaps the first to highlight. Although there have been exceptional improvements in verifying test-takers' identity, this still seems to be one of the areas in which more research is needed, as in many cases it is difficult to make sure that the test-taker is the same person for whom the score is reported. That is one reason why many agencies are reluctant to use CALT for high-stakes tests. Apart from that, there are also technical issues that need to be taken into consideration at different levels (test construction, technical expertise and costs, to name a few). These technical issues lead us to the negative effects that the delivery mode can have on the quality of the test. The size of the screen, for instance, can interfere with reading tasks if the text the candidates have to read is too long. Furthermore, automatic response scoring may be inaccurate: while scores are expected to be as accurate and reliable as possible, "computer-assisted response scoring may fail to assign credit to the qualities of a response that are relevant to the construct which the test is intended to measure" (Pathan, 2012, p. 41). Finally, we cannot forget the possible negative washback effect of CALT, which is partly associated with technological costs and with the anxiety that taking a test on a computer can cause in test-takers.

Future directions

As we have seen, technology nowadays plays a very important role in the development of testing methods. The many kinds of computerized assessment, such as CBTs, CATs and WBTs, have offered great advantages to assessment research: immediate feedback, ways to store test results for further analysis, storage of large numbers of items, grading objectivity, multimedia presentation, test-takers' ability to self-pace, and so on. However, there are still areas in the digital era that need further research. Construct studies, as Alderson predicted more than a decade ago, are essential and needed.

When looking to the future, we need to pay close attention to how technology has evolved in the last few years and consider whether CALT can better meet the needs of new generations who are most familiar with portable electronic devices. To what degree will a test taken on a tablet or cell phone be as valid as a CBT or a paper-and-pencil version of the same test? Will it be feasible to invest resources and time in this kind of research? Pathan (2012, p. 42) claims we need to "make testing practice[s] more flexible, innovative, dynamic, efficient and individualized." But are we ready for such a shift? This change will have enormous implications not only for education but for society as a whole, as there will be agencies, test companies and many other stakeholders involved and affected. What consequences, negative or positive, will test-takers' level of computer literacy have on their test performance? There are many ethical aspects to contemplate, especially when we consider that "[c]omputers lack human intelligence to assess [some skills, such as] direct speaking ability or free written compositions" (Fernández Álvarez & García Rico, 2006, p. 20). One of the major concerns in CALT nowadays is security, identity and authentication, a question that has already been addressed but needs further exploration. Many testing agencies require test-takers to go to a testing center and be proctored, especially for high-stakes tests. Will we come to a time when test-takers can take the test from the comfort of their homes or offices and we can still be assured that results are 100% valid and reliable? Much research remains to be done before we reach that moment.

References

Alderson, C. J. (1990). Learner-centered testing through computers: Institutional issues in individual assessment. In J. de Jong & D. K. Stevenson (Eds.), Individualizing the assessment of language abilities (pp. 20–27). Clevedon: Multilingual Matters.
Alderson, C. J. (2000). Technology in testing: The present and the future. System, 28, 593–603.
Alderson, C. J. (Organizer). (2001, March). Learning-centred assessment using information technology. Symposium conducted at the 23rd Annual Language Testing Research Colloquium, St. Louis, MO.
Alderson, C. J., Clapham, C., & Wall, D. (1995). Language test construction and evaluation. Cambridge: Cambridge University Press.
Al-Saleem, S. M., & Ullah, H. (2014). Security considerations and recommendations in computer-based testing. Scientific World Journal. doi: 10.1155/2014/562787
Anusha, N. S., Soujanya, T. S., & Vasavi, D. S. (2012). Study on techniques for providing enhanced security during online exams. International Journal of Engineering Inventions, 1(1), 32–37.
Bachman, L. F., Carr, N., Kamei, G., Kim, M., Llosa, L., Sawaki, Y., Shin, S., Sohn, S-O., Vongpumivitch, V., Wang, L., Xi, X., & Yessis, D. (2000, March). Developing a web-based language placement examination system. Poster session presented at the 22nd Annual Language Testing Research Colloquium, Vancouver, BC, Canada.
Batty, A. O. (2015). A comparison of video- and audio-mediated listening tests with many-facet Rasch modeling and differential distractor functioning. Language Testing, 32(1), 3–20.
Bennett, R. E. (1998). Reinventing assessment: Speculations on the future of large-scale educational testing. Princeton, NJ: Policy Information Center, Educational Testing Service.
Bennett, R. E. (1999). Using new technology to improve assessment. Educational Measurement: Issues and Practice, 18(3), 5–12.
Burston, J., & Neophytou, M. (2014). Lessons learned in designing and implementing a computer-adaptive test for English. The EUROCALL Review, 22(2), 19–25.
Carr, N., Green, B., Vongpumivitch, V., & Xi, X. (2001, March). Development and initial validation of a web-based ESL placement test. Poster session presented at the 23rd Annual Language Testing Research Colloquium, St. Louis, MO.
Chalhoub-Deville, M. (2001). Language testing and technology: Past and future. Language Learning & Technology, 5(2), 95–98.
Chapelle, C. A., & Douglas, D. (2006). Assessing language through computer technology. Cambridge: Cambridge University Press.
Choi, S. W., & Tinkler, T. (2002). Evaluating comparability of paper-and-pencil and computer-based assessment in a K-12 setting. Paper presented at the 2002 annual meeting of the National Council on Measurement in Education.
Dandonoli, P. (1989). The ACTFL computerized adaptive test of foreign language reading proficiency. In F. Smith (Ed.), Modern technology in foreign language education: Applications and projects. Lincolnwood, IL: National Textbook.
Davies, A., Brown, A., Elder, C., Hill, K., Lumley, T., & McNamara, T. (1999). A dictionary of language testing. Cambridge: Cambridge University Press.
Douglas, D. (2000). Assessing languages for specific purposes. New York: Cambridge University Press.
Dunkel, P. (1991). Computer assisted language learning and testing: Research issues and practice. New York: Newbury House.
Dunkel, P. (1997). Computer-adaptive testing of listening comprehension: A blueprint for CAT development. The Language Teacher Online, 21(10), 1–8.
Dunkel, P. (1999). Considerations in developing or using second/foreign language proficiency computer-adaptive tests. Language Learning & Technology, 2(2), 77–93.
Economides, A. A., & Roupas, C. (2007). Evaluation of computer adaptive testing systems. International Journal of Web-Based Learning and Teaching Technologies, 2(1), 70–87.
Fernández Álvarez, M., & García Rico, J. (2006). Web-based tests in second/foreign language self-assessment. In M. Simonson & M. Crawford (Eds.), Selected papers on the practice of educational communications and technology presented at the 2006 annual convention of the Association for Educational Communications and Technology (Vol. 2, pp. 13–21). North Miami Beach: Nova Southeastern University.
Fernández Álvarez, M., & Sanz Sainz, I. (2004). Computer adaptive tests (CATs) and their paper and pen versions: A case study. Language Testing Update, 36, 109–113.
Fulcher, G. (2000). Computers in language testing. In P. Brett & G. Motteram (Eds.), A special interest in computers: Learning and teaching with information and communications technologies (pp. 93–107). Manchester: IATEFL Publications.
Gruba, P., & Corbel, C. (1999). Computer-based testing. In C. Clapham & D. Corson (Eds.), Encyclopedia of language and education: Vol. 7. Language testing and assessment (pp. 141–150). Dordrecht: Kluwer Academic Publishers.
Hosseini, M., Zainol Abidin, M. J., & Baghdarnia, M. (2014). Comparability of test results of computer based tests (CBT) and paper and pencil tests (PPT) among English language learners in Iran. Procedia – Social and Behavioral Sciences, 98, 659–667.
Hughes, A. (2003). Testing for language teachers. Cambridge: Cambridge University Press.
Kim, D. H., & Huynh, H. (2007). Comparability of computer and paper-and-pencil versions of Algebra and Biology assessments. Journal of Technology, Learning and Assessment, 6(4), 1–35.
Kingston, N. M. (2009). Comparability of computer- and paper-administered multiple-choice tests for K-12 populations: A synthesis. Applied Measurement in Education, 22(1), 22–37.
Madsen, H. S. (1986). Evaluating a computer adaptive ESL placement test. CALICO Journal, 4, 41–50.
Malone, M., Carpenter, H., Winke, P., & Kenyon, D. (2001, March). Development of a web-based listening and reading test for less commonly taught languages. Work in progress session presented at the 23rd Annual Language Testing Research Colloquium, St. Louis, MO.
McNamara, T. (1996). Measuring language performance. New York: Longman.
National Governors Association. (2002). Using electronic assessment to measure student performance. Education Policy Studies Division. Retrieved November 2014 from http://www.nga.org/cda/files/ELECTRONICAASSESSMENT.pdf
Noijons, J. (1994). Testing computer assisted language tests: Towards a checklist for CALT. CALICO Journal, 12(1), 37–58.
Noyes, J. M., & Garland, K. J. (2008). Computer- vs. paper-based tasks: Are they equivalent? Ergonomics, 51(9), 1352–1375.
Ockey, G. J. (2007). Construct implications of including still image or video in computer-based listening tests. Language Testing, 24(4), 517–537.
Oluwatosin, O. T., & Samson, D. D. (2013). Computer-based test security and result integrity. International Journal of Computer and Information Technology, 2(2), 324–329.
O'Malley, K. J., Kirkpatrick, R., Sherwood, W., Burdick, H. J., Hsieh, M. C., & Sanford, E. E. (2005, April). Comparability of a paper based and computer based reading test in early elementary grades. Paper presented at the AERA Division D Graduate Student Seminar, Montreal, Canada.
Pathan, M. M. (2012). Computer assisted language testing [CALT]: Advantages, implications and limitations. Research Vistas, 1(4), 30–45.
Poggio, J., Glasnapp, D., Yang, X., & Poggio, A. (2005). A comparative evaluation of score results from computerised and paper & pencil mathematics testing in a large scale state assessment program. The Journal of Technology, Learning and Assessment, 3(6), 5–30.
Roever, C. (2000, March). Web-based language testing: Opportunities and challenges. Paper presented at the 22nd Annual Language Testing Research Colloquium, Vancouver, BC, Canada.
Roever, C. (2001). Web-based language testing. Language Learning & Technology, 5(2), 84–94.

72  Miguel Fernández Álvarez Saint Louis University. (2014). SLUPE: Language placement testing free Web 2.0 application. Retrieved December 10, 2014, from http://phrants.net/pt/pt.html. Sawaki, Y. (2001, March). How examinees take conventional versus web-based Japanese reading tests. Work in progress session presented at the 23rd Annual Language Testing Research Colloquium, St. Louis, MO. Stanfield, C. (1986). Technology and language testing. Washington, DC: TESOL. The Psychometrics Centre. (2013). Concerto platform for the development of on-line adaptive tests. Retrieved January 2, 2015, from http://www.psychometrics.cam. ac.uk/newconcerto. Wang, L., Bachman, L. F., Carr, N., Kamei, G., Kim, M., Llosa, L., Sawaki, Y., Shin, S., Vongpumivitch, V., Xi, X., & Yessis, D. (2000, March). A cognitive-psychometric approach to construct validation of web-based language assessment. Work-in-progress report presented at the 22nd Annual Language Testing Research Colloquium, Vancouver, BC, Canada.

6 Synchronous computer-mediated communication in ILP research: a study based on the ESP context

Vicente Beltrán-Palanques
Universitat Jaume I, Spain

Introduction

It is commonly accepted that learners need to attain communicative competence to use the second/foreign language (SL/FL) successfully. Reaching that end, however, is not an easy task, as learners need to develop many different competencies, such as pragmatic competence. Broadly speaking, pragmatic competence refers to the ability to use different linguistic resources in a particular way in a given social encounter. Learners should therefore master two different types of pragmatic knowledge, i.e. pragmalinguistic knowledge and sociopragmatic knowledge (Leech, 1983; Thomas, 1983). The former involves the linguistic resources that are required to utter a particular communicative act. The latter refers to the appropriate use of those linguistic resources, which is affected by conditions such as participants' roles, social status, social distance and degree of imposition (Brown & Levinson, 1987). Pragmatic competence should be integrated into the language classroom to provide learners with opportunities to develop their communicative competence, and this becomes especially important in the English for Specific Purposes (ESP) courses taught at university, which "cater to the specific situations or purposes for which the language may be needed" (Räisänen & Fortanet-Gómez, 2008: p. 12). Developing learners' pragmatic competence within ESP courses is crucial because learners need this knowledge to interact successfully in the professional environment. Teaching and assessing pragmatics in the ESP context involves designing specific teaching approaches and creating assessment instruments to test pragmatic knowledge appropriately. Pragmatic competence can be fostered using computer-mediated communication (CMC), as it provides learners with a space for communicating and working collaboratively (Chapelle, 2003; González-Lloret, 2008).
Warschauer (2001) distinguishes two types of CMC: synchronous computer-mediated communication (SCMC) and asynchronous computer-mediated communication (ACMC). The former refers to communication that occurs in real time, while the latter involves people communicating in a delayed fashion. SCMC, which is the mode selected for this study, allows learners to interact with other participants, collaborate in the interaction and negotiate meaning (González-Lloret, 2008), thereby promoting the development of pragmatic competence (Sykes, 2005; González-Lloret, 2008; Eslami et al., 2015). Hence, it seems that technology can be used to promote language learning, although, as Peterson (2010) argues, the benefit to language learning comes from combining appropriately designed tasks with technology. In addition, Taguchi and Sykes (2013) argue that technology has (1) expanded the understanding of the construct of pragmatic competence, since it facilitates data collection; (2) provided opportunities for digitalizing audio recordings and distributing tasks and instruments to record data from large groups; (3) expanded the context of analysis; and (4) automated computer-based techniques. In the present study, information and communication technology (ICT) tools, more specifically SCMC, are used to design the interactive discourse completion tasks/tests (IDCTs) and to conduct the retrospective verbal reports (RVRs), in order to gather speech act data and verbal report data.

The speech act of advice

The pragmatic feature addressed in this study is the speech act of advice, which is classified within the category of directives (Searle, 1976). Wierzbicka (1987) argues that the speech act of advice is usually performed by professionals, or by people occupying specific positions. Similarly, Mulholland (2002) suggests that advice is used "to give counsel, offer an opinion to, to indicate that something is good or bad" (p. 148). The author also states that advisers assume that the hearer should be advised about something and that the advisers are in a position to advise. Martínez-Flor (2003) proposes a taxonomy of the speech act of advice based on Alcón and Safont (2001) and Hinkel (1997). It consists of indirect, conventionally indirect and direct strategies, plus an extra group of other types of strategies. Indirect advice involves hints in which the speaker's intentions are not made explicit (e.g. You want to pass, don't you?). Conventionally indirect strategies consist of: conditional (e.g. If I were you . . .); probability (e.g. It might be better for you . . .); and specific formulae (e.g. Why don't you . . . ?). Direct strategies are termed "pragmatically transparent expressions" (Martínez-Flor, 2003: p. 144) and involve four sub-strategies: imperative (e.g. Be careful!); negative imperative (e.g. Don't worry!); declarative (e.g. You should . . .); and performative (e.g. I advise you to . . .). The last group covers other types of strategies, in which alternatives to those categorized above might be identified (e.g. You have to). The author also suggests that, because of the face-threatening nature of this speech act, speakers tend to accompany it with peripheral modification devices to mitigate the force or threat to the hearer's face. This is a clear classification of the different strategies that can be used to identify and categorize the speech act of advice. However, this taxonomy should be seen as open, because the pragmalinguistic realization is determined by the sociopragmatic conditions of the situation and the speakers' perceptions.
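Purely as an illustration (no such tool was used in the study, and the cue phrases and function names below are our own simplifications), the strategy types and example formulae of the taxonomy can be sketched as a rule-based matcher. Any real coding of advice strategies requires human judgment of context, which such surface heuristics cannot replace:

```python
import re

# Martínez-Flor's (2003) strategy types, keyed by illustrative cue phrases
# only. Dictionary order matters: more specific cues are checked first
# (e.g. "why don't you" before the bare negative imperative "don't").
TAXONOMY = {
    "conventionally indirect: conditional": r"\bif i were you\b",
    "conventionally indirect: probability": r"\bit might be better\b",
    "conventionally indirect: specific formula": r"\bwhy don'?t you\b",
    "direct: declarative": r"\byou should\b",
    "direct: performative": r"\bi advise you\b",
    "direct: negative imperative": r"\bdon'?t\b",
    "other: obligation": r"\byou have to\b",
}

def classify_advice(utterance: str) -> str:
    """Return the first strategy whose cue phrase matches, else treat the
    utterance as indirect/unclassified (hints carry no explicit cue)."""
    lowered = utterance.lower()
    for strategy, pattern in TAXONOMY.items():
        if re.search(pattern, lowered):
            return strategy
    return "indirect/unclassified"

print(classify_advice("If I were you I would relax a bit."))
# conventionally indirect: conditional
print(classify_advice("Why don't you try to relax?"))
```

Note that such a matcher can only approximate the taxonomy: an indirect hint like "You want to pass, don't you?" would be misread as a negative imperative, which is precisely why the taxonomy must remain open to sociopragmatic context.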


Retrospective verbal reports in ILP research

Verbal reports can be used to access learners' cognitive processes and pragmatic knowledge (Félix-Brasdefer, 2010). Two types can be identified: concurrent verbal reports (CVRs), conducted while on task, and retrospective verbal reports (RVRs), performed after the completion of the task. These tools have been used in interlanguage pragmatics (ILP) research in combination with other instruments, such as role-play tasks and DCTs, to gain information about participants' cognitive processes and pragmatic knowledge that could not be obtained by other means (Félix-Brasdefer, 2010; Beltrán-Palanques, 2014). In this study, a particular type of DCT, specifically one taking an interactive approach, is used to gather speech act data. DCTs are widely employed in ILP research to collect written speech act data (Roever, 2010) and are traditionally paper-and-pencil tests in which usually only one turn is taken. However, more advanced versions can be found in the literature, such as the multimedia elicitation task (MET) developed by Schauer (2009) and DCTs that promote participants' interaction (Grabowski, 2007; Martínez-Flor & Usó-Juan, 2011; Beltrán-Palanques, 2013; Usó-Juan & Martínez-Flor, 2014).

Robinson (1992) explored refusals elicited by 12 Japanese learners in the US, combining DCTs with CVRs and RVRs. The verbal reports provided specific information about the planning process of refusal strategies, evaluation of utterances, pragmatic and linguistic difficulties, and knowledge sources. Woodfield (2010) examined the cognitive processes and pragmatic knowledge of six pairs of advanced learners of English on DCTs eliciting requests, using CVRs and RVRs. The RVRs, conducted immediately after the completion of the task, revealed insights into learners' perceptions of the sociocultural features of the context, the reasoning behind their pragmalinguistic realizations, and the influence of sociocultural transfer in some learners. Beltrán-Palanques (2013) conducted a study exploring the cognitive processes, attended aspects and pragmatic knowledge of 23 students in their second academic year of English Studies using IDCTs eliciting apologies. The RVRs, conducted immediately after the completion of each task, revealed information regarding the aspects participants attended to while on task and how the contextual features affected their pragmalinguistic choices. Finally, Ren (2014) investigated, by means of RVRs, the cognitive processes of 20 advanced learners of English in a MET eliciting refusals during the participants' study abroad. The RVRs were instrumental in revealing participants' sociopragmatic knowledge and the effect of the stay abroad on learners' perception of the conditions affecting their pragmalinguistic realizations.

Methodology

This study was based on the need to assess pragmatic knowledge in the ESP context using IDCTs and RVRs by means of SCMC, in order to obtain speech act data and retrospective verbal data as regards participants' attended aspects, pragmatic knowledge and task design. To that end, the following research questions were investigated:

• To what extent do learners pay attention to the intralinguistic variables (i.e. grammar and vocabulary) when planning and executing the speech act of advice?
• To what extent are learners aware of the sociopragmatic norms when eliciting the speech act of advice?
• To what extent do learners react positively to the use of SCMC to complete the IDCTs?

Participants

This study initially involved 22 students (16 female and 6 male), but finally only 12 female students took part in it. All the participants were enrolled in the ESP course English for Psychology, which involves theoretical, practical and seminar/laboratory sessions as well as tutorials; students are divided into small groups for the practice sessions. One of the subgroups was selected for this study. Gender and proficiency level were controlled to obtain a homogeneous group. The UCLES Quick Placement Test (Oxford University Press) was used to examine learners' proficiency level. Results indicated that, according to the Common European Framework of Reference for Languages: Learning, teaching and assessment (CEFR), 12 female students obtained a B1 level, 4 female students an A2 level, 4 male students an A2 level, and 2 male students a B2 level. In light of these results, data for the study were gathered only from the group of 12 female students with a B1 level, whose ages ranged from 19 to 21. The remaining learners participated in the study, although their production was not considered. A background questionnaire, adapted from Beltrán-Palanques (2013), was administered using Google Forms to gather basic demographic information, FL learning experience and experience abroad. Results revealed that all participants were Spanish, bilingual (Catalan-Spanish), had similar sociocultural backgrounds and FL learning experience, and that none of them had been to an English-speaking country. The participants did not receive instruction on the speech act of advice before the study.

Instruments and procedures

Two ICT tools allowing SCMC, Google Docs and Google Hangouts, were chosen. These tools were selected because they allow SCMC, are easy to use, can store written and oral data, and prevent anyone external to the task from entering and participating. Moreover, our home university uses Google Apps and every student has an account. Google Docs is a web-based word processor that allows people to create and edit texts online collaboratively; this app was used to implement the IDCTs. Google Hangouts, which allows video chat, was employed to conduct the RVRs.

Four different scenarios were devised to elicit the speech act of advice by means of online IDCTs (Appendix 1). The roles given in the IDCTs were those of a psychologist and a patient, and the scenarios were designed taking into account the sociopragmatic features of social status and social distance (Brown & Levinson, 1987). Two levels of social status (i.e. high and low) and of social distance (i.e. stranger and acquaintance) were chosen. The situations were somewhat familiar to the participants, as they were studying for a degree in Psychology. The scenarios were first piloted with a group of four native speakers of English and four advanced FL learners to ensure that the situations were properly designed. In addition, two English language teachers of ESP were asked to examine the scenarios. The feedback provided was used to improve the scenarios. The IDCTs were embedded in Google Docs so that learners could work with them online in the language laboratories where the study was carried out. To improve the validity and reliability of the RVRs, the following aspects were considered: (1) RVRs were conducted immediately after the completion of the task (Gass & Mackey, 2000; Beltrán-Palanques, 2013; Ren, 2014); and (2) participants could use the language they felt most confident with, i.e. Catalan, Spanish and/or English, to avoid the influence of FL proficiency (Gass & Mackey, 2000). The verbal probe questionnaire was adapted from Félix-Brasdefer (2008) and consisted of the questions shown in Table 6.1:

Table 6.1  Verbal probe questionnaire

Attended aspects: What were you paying attention to when you gave advice?
Pragmatic knowledge: Do you think that the context of the situation affected the way you interacted?
Task design: To what extent do you think the use of SCMC was useful for completing the IDCT?
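Crossing the two sociopragmatic variables yields the four scenario conditions; as a purely illustrative sketch (not taken from the study materials, and with variable names of our own choosing), the design matrix can be generated as follows:

```python
from itertools import product

# The two sociopragmatic variables of the IDCT design, each with two levels
# (Brown & Levinson, 1987): the 2 x 2 crossing gives the four scenarios.
social_status = ["high", "low"]
social_distance = ["stranger", "acquaintance"]

scenarios = [
    {"scenario": i + 1, "status": status, "distance": distance}
    for i, (status, distance) in enumerate(product(social_status, social_distance))
]

for sc in scenarios:
    print(sc)
```

Which of the four cells corresponds to which of the study's concrete psychologist-patient situations is specified in Appendix 1, not here.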

The study was conducted in two sessions of an hour and a half each. Four teachers were needed to carry it out: the two teachers in charge of the course plus two other teachers. The first session was devoted to explaining the purpose of the task, examining proficiency levels, completing the background questionnaire and training learners in the use of the two ICT tools. The data collected in that session were analysed to organize the sample. The second session was devoted to conducting the study. The participants were divided into two groups, 11 learners performing the role of a psychologist and 11 learners performing the role of a patient, each group in one language laboratory. The participants were allocated time slots of 15 minutes. However, as mentioned above, we only considered the data elicited by the 12 female students whose proficiency level was B1. Teachers were distributed as follows: the two teachers of the course were each assigned to one language laboratory, while the other two teachers were in their offices connected via Google Hangouts with the participants, one teacher with the learners performing the role of a psychologist and the other with the learners performing the role of a patient. This was done purposefully to conduct the RVRs, which took place immediately after the completion of the IDCT. Once the participants were in the corresponding language laboratory, they were provided with instructions to complete the IDCTs. A time restriction was also imposed: 10 minutes for the written part and 5 minutes for the RVRs. The four teachers participating in this study had access to the Google Docs of each pair, since the two teachers of the course needed the data to work on and provide feedback in class, and the other two teachers needed it to conduct the RVRs. During the RVRs, participants were given the opportunity to read back their production. Headphones were used for the spoken interaction.

Results

This section reports on the results derived from the RVRs of the six participants performing the speech act of advice. An in-depth analysis of the speech act of advice is not presented, as it is beyond the scope of the study. However, it is worth mentioning that the participants' speech act production was consistent with the taxonomy advanced by Martínez-Flor (2003), including examples of indirect, conventionally indirect and direct strategies, as well as other types of strategies. Grammar mistakes found in the IDCTs and RVRs were not revised, in order to present the data as elicited. Translations are provided where the RVRs were not elicited in English, since three participants reported in Spanish and one in Catalan.

The first research question focused on the aspects attended to while planning and executing the speech act. The RVRs revealed information regarding learners' intralinguistic variables (i.e. grammar and vocabulary) when planning and executing the speech act of advice. Four out of six participants indicated that they focused on the grammatical construction of the utterances, and all the participants indicated that they paid attention to vocabulary, particularly with reference to the limitations they faced. Example 1 illustrates this.

Example 1. Pair 6: participant 11 and participant 12
Written data of the IDCT, scenario 3.
Participant 11. Turn 2: I've failed 1 exam and the others marks are not quite good either.
Participant 12. Turn 3: Did you study?
Participant 11. Turn 4: Yeah I did! A lot!
Participant 12. Turn 5: It was difficult to do it?
Participant 11. Turn 6: No, I don't know why.
Participant 12. Turn 7: You don't have to be worried.
Participant 11. Turn 8: I try to get good grades, because I want to go on an erasmus.
Participant 12. Turn 9: You don't have to get nervous if you have studied!
Participant 11. Turn 10: I know but, the one that I failed was the first one I did.
Participant 12. Turn 11: You will have other exam that you will pass.
Participant 11. Turn 12: I did but with low marks.
Participant 12. Turn 13: It is your first year, your first semester. You don't have to panic.
RVRs: Participant 11
Participant 11: Estaba un poco nerviosa porque estaba pensando todo el rato como decir las cosas en inglés, la gramática y el vocabulario que sería mejor para decir lo que quería.
Translation: I was a bit nervous because I was thinking all the time about how to say things in English, the grammar and vocabulary that would be better to say what I wanted.

All the participants also indicated that they paid attention to the role they were playing in each situation. The participants reported that they advised on the problem the patients had because they were in a position to give counsel. Hence, it seems that they saw themselves as having the right to give advice, as they perceived their role as that of a professional (Wierzbicka, 1987; Mulholland, 2002). Example 2 illustrates this.

Example 2. Pair 1: participant 1 and participant 2
Written data of the IDCT, scenario 2.
Participant 2. Turn 4: I suffer from anxiety when I must speak in public. Next week I have to present a project.
Participant 1. Turn 5: Anxiety is a mental product caused by negative ways of thinking. Be positive and trust yourself.
Participant 2. Turn 6: I understand, but I will have too many eyes looking at me.
Participant 1. Turn 7: Okay, you should practise in front of your mirror. This will help you.
RVRs: Participant 1
Participant 1: Creo que desde mi posición puedo aconsejar al paciente lo que creo que es mejor porque yo soy la psicóloga y por lo tanto sé que es bueno para ella.
Translation: I think that from the position I hold I can advise the patient what I think is best for her, because I am the psychologist and I know what is good for her.

In light of these results, it can be claimed that the first research question was confirmed, since the RVRs were instrumental in revealing the aspects the participants attended to, that is, intralinguistic aspects of language (i.e. grammar and vocabulary).

The second research question that guided this study focused on the participants' sociopragmatic knowledge. The RVRs indicated that the participants seemed to take into account how the sociopragmatic conditions affected their pragmalinguistic choices. This aspect was also noted to some extent in the first verbal probe, as the participants reported paying attention to the role they played. In this case, the RVRs evidenced that the perceived social status affected the way they addressed their interlocutors. The participants expressed that the way of addressing the interlocutor was determined by the role played, and that they felt entitled to decide how to counsel and to control the situation. Example 3 illustrates this.

Example 3. Pair 3: participant 5 and participant 6
Written data of the IDCT, scenario 1.
Participant 6. Turn 8: I don't know what to do with my future.
Participant 5. Turn 9: What do you means with future? What's the problem?
Participant 6. Turn 10: I'm very stressed and I can't sleep. I have a lot of work.
Participant 5. Turn 11: Why don't you try to relax? I think you should relax and think positive.
Participant 6. Turn 12: But I have to study and work at the same time.
Participant 5. Turn 13: If I were you I would relax a bit and take less courses this year.
Participant 6. Turn 14: Maybe but I have to study a lot.
Participant 5. Turn 15: I know but you have to relax and you will see that everything will be ok.
RVRs: Participant 5
Participant 5: I think it wasn't difficult because I was the psychologist and I had to decide what to say. I had to give her an advice. I can control what I say and she should believe in me because I know she has a problem. She's the patient and I'm the psychologist.
The other aspect noted was the social distance between the interlocutors. The participants indicated that the relationship with their interlocutors influenced the way they addressed them and provided advice. Specifically, four out of six participants pointed out that knowing the other interlocutor (i.e. an acquaintance relationship) implied a closer relationship, and that the way of speaking to and approaching the other person could therefore vary. In other words, the level of directness or indirectness could be altered depending on social distance (i.e. stranger or acquaintance), as this could affect their interpersonal proximity. Example 4 illustrates this.

Example 4. Pair 4: participant 7 and participant 8
Written data of the IDCT, scenario 4.
Participant 8. Turn 4: I have problems with my former boyfriend.
Participant 7. Turn 5: Tell me please.
Participant 8. Turn 6: I can't forget him and I want to meet with him again.
Participant 7. Turn 7: Be careful! You shouldn't meet him again because the last time you suffered a lot.
Participant 8. Turn 8: I don't know what to do.
Participant 7. Turn 9: Don't do it! He is not good for you. You have listen to me. Don't meet him.
RVRs: Participant 7
Participant 7: She is one of my patients and I know her and I know the problem that she has with her ex-boyfriend. Then, I know I have to be very direct because she has problems and I'm the psychologist.

The second research question was also confirmed, since the RVRs were useful for obtaining information about the participants' pragmatic knowledge. The RVRs revealed that the participants focused on how the context of the situation, the type of relationship and the role they played affected their choice of pragmalinguistic realizations.

Finally, the last research question of this study attempted to examine the participants' perceptions of the task design. The RVRs indicated that the participants felt comfortable with the format of the task because they were accustomed to using SCMC in written form by means of instant messaging, especially on their smartphones.
They also revealed that the task was easy to complete and interesting, because they could see simultaneously what the other person was typing and because it was similar to other social networks they used. Four out of six also stated that it was the first time they had done such a task and that they found it useful for practising the FL and working collaboratively. Hence, the third research question was also confirmed.

The results of this study are in line with previous research combining DCTs and RVRs, since the RVRs were instrumental in providing information about the planning and execution of the speech act selected (Robinson, 1992; Woodfield, 2010; Beltrán-Palanques, 2013), the attended aspects and the difficulties faced (Robinson, 1992; Woodfield, 2010; Beltrán-Palanques, 2013; Ren, 2014), and pragmatic knowledge (Robinson, 1992; Woodfield, 2010; Beltrán-Palanques, 2013; Ren, 2014). The RVRs indicated that the participants seemed to be influenced by the role they were asked to perform, as they revealed that their professional position appeared to entitle them to advise their interlocutors. The participants also revealed that they tended to focus on grammar and vocabulary when elaborating their written production, although they also paid attention to pragmatic features. Finally, it is important to note that the participants reacted positively to the design and implementation of the task, as they were already somewhat familiar with these types of ICT tools.

Conclusion

This study aimed to examine, by means of RVRs, participants' attended aspects and pragmatic knowledge in IDCTs embedded in SCMC, as well as issues of task design. Hence, SCMC was used both to create the environment for performing the IDCTs and to conduct the RVRs. The study was conducted in the specialized domain of English for Psychology, and the pragmatic feature selected was the speech act of advice. The IDCTs were designed taking into account the field of expertise of the learners of the ESP course and the real pragmatic competence needs they might have in their professional context. There are, however, some limitations to this study that should be acknowledged. First, the number of participants was small, and the sample did not include different proficiency levels or an equal number of females and males for comparison. Second, there was no comparison between face-to-face interaction and written interaction, which would also have allowed a multimodal analysis and an examination of task effects. For further research, it would be interesting to explore the perception of the interlocutor receiving the speech act of advice. Ultimately, it is worth mentioning that the ICT tools were useful for distributing the IDCTs and promoting interaction. They also permitted conducting the RVRs and storing the data relatively quickly. Further research should be carried out integrating SCMC to collect speech act data from an interactive perspective together with retrospective verbal data. Furthermore, it would be necessary to further explore the real needs of ESP learners in terms of speech act performance and formulaic language, in order to provide them with learning opportunities based on language use and communication for real purposes using SCMC in the ESP context.

Acknowledgements The research conducted in this article is part of the Education and Innovation Research Project: Proyecto de Innovación Educativa Universitat Jaume I 2779/13 “Parámetros de aproximación a la evaluación de las destrezas orales en lengua inglesa: Tipología, diseño de test y criterios de validación”.




Appendix A: Scenarios for the IDCTs

Scenario 1: Problems sleeping
Role-play A (High and stranger): You are a psychologist and today a new patient is coming to your office. Listen to him/her and give some advice.
Role-play B (Low and stranger): You have an appointment with a psychologist because you have problems sleeping properly. It's the first time you have visited this psychologist. Explain your problem.

Scenario 2: Public speaking
Role-play A (High and acquaintance): You are a private practice psychologist. One of your regular patients has an appointment with you today. You have been treating him/her for 6 months and, as you know, he/she suffers from anxiety when speaking in public. Listen to him/her and give some advice.
Role-play B (Low and acquaintance): You suffer from anxiety when speaking in public. You are now studying at the university and you have to present a project next week in front of your classmates. You go to your regular psychologist to talk about this problem. Explain your problem.

Scenario 3: Academic results
Role-play A (High and stranger): You work as a psychologist at the university. Today, a student you don't know is coming to your office to talk about his/her academic results. Listen to his/her problem and give some advice.
Role-play B (Low and stranger): You are a student at the university. You're starting your second semester and your exam results from the previous semester weren't good. You have an appointment with your university's psychologist to talk about it. It's the first time you have visited this psychologist. Explain your problem.

Scenario 4: Relationship
Role-play A (High and acquaintance): You are a private practice psychologist. One of your regular patients has an appointment with you today. You have been treating him/her for 6 months because he/she has emotional problems. Listen to him/her and give some advice.
Role-play B (Low and acquaintance): You have an appointment with your regular psychologist. You want to talk about the problems you're having with your former partner. Explain your problem.

7 The COdA scoring rubric

An attempt to facilitate assessment in the field of digital educational materials in higher education

Elena Domínguez Romero
Universidad Complutense de Madrid, Spain

Isabel De Armas Ranero
Universidad Complutense de Madrid, Spain

Ana Fernández-Pampillón Cesteros
Universidad Complutense de Madrid, Spain

Quality assessment tool (COdA): motivations

Following the 1999 Bologna Process, countries belonging to the European Union had to adapt their higher education teaching and learning strategies to certain common parameters. The aim was to reach common academic degree and quality assurance standards across European countries by 2010, by which time students were expected to enjoy a more autonomous learning process and teachers were asked to move away from teacher-centered lectures. Instead, they would be expected to design new digital learning methods in order to improve their teaching quality (Drennan & Beck, 2001; Felder & Brent, 1999). Within a decade, university teachers found themselves forced to become both developers and users of their own educational materials, even if they lacked the theoretical and technological knowledge to build Learning Objects (hereafter LOs). Unfortunately, this is still a problem in some Spanish universities and schools (Uceda & Barro, 2010).

Educational technology models first started to include learning tools under the framework of the European Space for Higher Education (ESHE), which sought to harmonize qualifications throughout Europe so as to improve services and competence. One of these tools had appeared back in the early '90s under the label "Learning Objects." More specifically, Wayne Hodgins (2002) introduced the concept of the "Reusable Learning Object" (RLO) in 1992, which has since been a major focus of attention and debate in the educational community and has been inseparably present in e-learning and, by extension, in b-learning. In 1998, the Institute of Electrical and Electronics Engineers (IEEE) created the Learning Technology Standards Committee (LTSC), which adopted the term "Learning Objects" and defined these "objects" as "any entity, digital or non-digital, that can be used, reused or referenced during a learning process

mediated by technology." Two years later, David Wiley (2000) proposed a more precise definition of the term: "any digital resource that can be reused to facilitate learning." Such resources include images, video, audio or web applications.

The need to produce quality specialized digital academic materials in university contexts with a poor technical background, like that of the Complutense School of Humanities, led us to develop and implement a new, straightforward strategy for the creation and evaluation of quality LOs with minimal or no technical support: COdA (Fernández-Pampillón et al., 2011, p. 5). As explained in the subsequent sections of this chapter, COdA has been tested for reliability and usability in order to make the LO evaluation process as easy as possible. In light of the test results, the aim now is to organize the COdA criteria and subcriteria into the resulting COdA scoring rubric presented in the section headed "COdA: scoring rubric."

Antecedents

Quality is hard to define, dependent as it is on the multiple views and needs derived from the different sectors, roles and subjects taking part in the creation, usage and evaluation of LOs (Dondi & Moretti, 2007). This is probably why researchers have so far mostly approached LO quality evaluation from one specific criterion only. This is the case of the reusability of the materials (Convertini et al., 2006; Cochrane, 2005; Del Moral & Cernea, 2005), quality standards (McGreal et al., 2004; Nesbit & Belfer, 2002; Williams, 2005), design (Bradley & Boyle, 2004; Buzetto-More & Pinhey, 2006; Deaudelin et al., 2003; Del Moral & Cernea, 2005; Gadanidis et al., 2004; Kay & Knaack, 2005; Koohang & Du Plessis, 2004; Krauss & Ally, 2005; Lin & Gregor, 2006; Maslowski et al., 2000), and usability (Bradley & Boyle, 2004; Buzetto-More & Pinhey, 2006; Concannon et al., 2005; Haughey & Muirhead, 2005; Kay & Knaack, 2005; Koohang & Du Plessis, 2002; Lin & Gregor, 2006; Nesbit et al., 2002; Nielsen, 1995; Schell & Burns, 2002; Schoner et al., 2005), but also of content (Haughey & Muirhead, 2005; Macdonald et al., 2005; Nesbit et al., 2002; Schell & Burns, 2002), motivation (Brown & Voltz, 2005; Buzetto-More & Pinhey, 2006; Gadanidis et al., 2004; Haughey & Muirhead, 2005; Kay & Knaack, 2005; Koohang & Du Plessis, 2004; Lin & Gregor, 2006; Nesbit et al., 2002; Nielsen, 1995; Oliver & McLoughlin, 1999; Reimer & Moyer, 2005; Van Zele et al., 2003), and interactivity (Akpinar & Bal, 2006; Baser, 2006; Cochrane, 2005; Convertini et al., 2005; Deaudelin et al., 2003; Gadanidis et al., 2004; Koohang & Du Plessis, 2004; Lim et al., 2006; Lin & Gregor, 2006; Metros, 2005; Nielsen, 1995; Ohl, 2001; Oliver & McLoughlin, 1999; Van Merriënboer & Ayres, 2005). This is not the case in Spain.
COdA, currently being refined by the AENOR working group "Standard for the Assessment of Quality in Digital Educational Materials" (PNE 71362), includes ten criteria, five of them pedagogical and the other five technical. HEODAR (2008), designed by a group of experts from the

University of Salamanca, is possibly the most complete model developed in Spain after COdA. It covers pedagogical aspects, subdivided into psycho-pedagogical and didactic-curricular criteria, as well as technical criteria such as design and usability, which also appear in COdA. In addition, HEODAR includes a user-friendly scoring template, and it also considers the important factor of the estimated time needed to do the activities. Third, the quality evaluation proposal developed by the Spanish Open University (hereafter UNED) (2012) is also worth mentioning, although its division of criteria is far less clear than that established by HEODAR and COdA. Still, the three Spanish models share key criteria such as motivation, didactic coherence, interactivity and adaptability. It needs to be said, though, that both HEODAR (with design and usability as its only technical criteria) and UNED lack important technical criteria that COdA shares with other evaluation models such as Becta (2007), DESIRE (2000), Kurilovas and Dagiene (2009), Leacock and Nesbit (2007), LOEM (2008), LORI (2003), MELT (2008), Paulsson and Naeve (2006) or Q4R (2007). COdA also shares pedagogical criteria with LORI. Following LORI and Q4R, COdA makes special reference to the pedagogical goals, which in most cases are assimilated into more general criteria such as the quality of the contents with regard to the students' previous knowledge or the curriculum. In line with LORI, COdA divides each of the ten criteria into a set of subcriteria, all of which need to be met in order to reach the highest quality score (5). For the sake of accuracy, the score for each of the ten criteria depends strictly on the number of subcriteria fulfilled, leaving no room for interpretation on the part of the evaluator.
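The idea that a criterion's score depends strictly on the number of subcriteria fulfilled, with the top score reserved for materials meeting all of them, can be sketched as follows. This is a hypothetical illustration only: COdA's actual score bands are defined per criterion in its user manual, and the uniform mapping used here is our own assumption.

```python
def criterion_score(fulfilled: int, total: int) -> int:
    """Map the number of fulfilled subcriteria to a 1-5 score.

    Hypothetical banding: the top score requires *all* subcriteria,
    as COdA stipulates; the intermediate bands are illustrative only.
    """
    if total <= 0 or not 0 <= fulfilled <= total:
        raise ValueError("fulfilled must lie between 0 and total")
    if fulfilled == total:
        return 5
    # Scale the remaining fractions onto scores 1-4.
    return 1 + round(3 * fulfilled / total)

# Example: 4 of 5 subcriteria met falls short of the maximum.
print(criterion_score(4, 5))  # prints 3
```

Because the mapping is purely mechanical, two evaluators who agree on which subcriteria are fulfilled necessarily produce the same score, which is the point of the design.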

COdA: development

COdA draws mainly on the Learning Object Review Instrument (LORI) (Nesbit et al., 2003), a widely used model that nevertheless proved inadequate for guiding faculty with a poor technical background in adapting existing teaching materials into LOs, because (1) it is a tool for the evaluation of materials that are already LOs, (2) it has a broad scope – being aimed at any teaching level, national or international – which does not help to guide instructors through the process of building LOs specifically aimed at higher education, and (3) its use requires computer skills that most faculty at the School of Humanities do not have. Following LORI, COdA was thus developed to help our faculty develop digital teaching materials out of their own existing ones by prioritizing a set of criteria relating to academic profitability in higher education: (1) effectiveness in teaching future specialists, (2) reusability, including interoperability, durability and extensibility, and (3) accessibility to all users, including those with any kind of disability. As previously mentioned, the result is a tool which consists of a questionnaire for the evaluation of ten quality criteria and a manual with instructions to help complete the questionnaire. With this evaluation tool, authors, users and

potential external reviewers alike can evaluate digital educational materials by rating them according to the ten criteria below (five pedagogical, five technical):

1 Objectives and didactical coherence evaluates whether the LO is accompanied by a clear and coherent description (metadata) of its didactic goal and its target learners, as well as by suggestions for its didactic implementation (instructions for educators and students).
2 Content quality focuses on the formal quality of the content: e.g., balance and clarity of ideas, updated content, absence of ideological bias, or respect for property rights.
3 Capacity to generate learning assesses whether the LO stimulates reflection, critical thought and ideas and/or techniques to solve problems and address tasks.
4 Adaptability and interactivity evaluates whether the content suits the background knowledge and the needs of the students. In addition, it assesses whether the LO facilitates the students' control of their own learning process.
5 Motivation considers whether the LO is able to attract and retain the students' attention.
6 Format and design analyzes whether the LO's design is well organized, clear and concise, and whether it includes multimodal formats aimed at supporting the comprehension and assimilation of content.
7 Usability evaluates the ease of interaction with the LO.
8 Accessibility evaluates whether the LO is accessible to individuals with disabilities and/or can be used by them with the help of assistive devices.
9 Reusability measures the possibilities of reusing the LO or its components.
Three types of reusability are considered: (i) the reusability of the content (all or part of the content can be reused to create other LOs), (ii) the reusability of the educational context (all or part of the LO can be used in more than one discipline or with more than one group of students), and (iii) the reusability of the environment (all or part of the LO can be used in different learning environments: in-person, virtual or blended).
10 Interoperability measures the extent to which the LO can be used in different computer systems and environments.

COdA thus supports three modes of LO evaluation, each with its own purpose: self-evaluation, peer/collaborative evaluation and end-user evaluation. Self-evaluation aims to improve an LO's quality during the creation process; it is carried out by the author as s/he attempts to fulfil as many criteria as possible. Peer/collaborative evaluation focuses on the potential quality of an LO, that is, before the LO reaches the end user; this evaluation is performed among colleagues, much as in the peer review of research. Finally, the end user's evaluation can give developers an idea of the quality of the final teaching/learning product.
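A completed COdA questionnaire could be summarised along the pedagogical/technical split described above, for instance as in the sketch below. The criterion names follow the list in this chapter, but the function, its name, and the use of simple averages for aggregation are our own assumptions, not part of the COdA specification.

```python
from statistics import mean

# The five pedagogical and five technical criteria listed in the chapter.
PEDAGOGICAL = ["objectives_and_didactical_coherence", "content_quality",
               "capacity_to_generate_learning", "adaptability_and_interactivity",
               "motivation"]
TECHNICAL = ["format_and_design", "usability", "accessibility",
             "reusability", "interoperability"]

def summarise(ratings: dict[str, int]) -> dict[str, float]:
    """Aggregate a completed questionnaire (criterion -> 1-5 score)."""
    missing = set(PEDAGOGICAL + TECHNICAL) - ratings.keys()
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return {
        "pedagogical": mean(ratings[c] for c in PEDAGOGICAL),
        "technical": mean(ratings[c] for c in TECHNICAL),
        "overall": mean(ratings.values()),
    }
```

The same structure would serve any of the three evaluation modes: the author, a colleague, or an end user fills in the ratings, and the summary makes the pedagogical/technical balance of the LO visible at a glance.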


COdA: assessment

COdA has been evaluated and corrected for usability (Fernández-Pampillón et al., 2011, p. 8). During the development phase, it was experimentally applied to the assessment of a collection of LOs in a non-technological area of knowledge, the Humanities; both the usability of the tool and the quality of the resulting materials were thus assessed. Reliability has likewise been tested by means of an experiment (Domínguez Romero et al., 2012) designed to show whether each criterion was described precisely enough to yield ratings independent of the evaluator. The conclusions derived from that study prompted the development of the COdA scoring rubric presented in the following section.

COdA: scoring rubric

The results derived from the evaluation of COdA suggested the development of a scoring rubric that would improve the accuracy of the tool while reducing application time. We thus present a COdA rubric for (a) inductively adapting metadata standards; (b) adapting LO quality assessment to the needs of instructors; and (c) building a customized repository to satisfy the needs of those instructors. In doing so, our updated model aims to be a good example of an academic rubric put to the service of education. The rubric defines the requirements for each of the possible scores on each of the criteria in the tabular sections of the final Appendix.

Lines for future research

This COdA scoring rubric is currently being tested again, in the form of a checklist still under development, to serve as the basis for a future UNE standard by AENOR, the Spanish Association for Standardization and Certification, which should be completed by the end of 2016.

References

Adesope, O. O., & Nesbit, J. C. (2012). Verbal redundancy in multimedia learning environments: A meta-analysis. Journal of Educational Psychology, 104(1), 250–263.
Akpinar, Y., & Bal, V. (2006). Student tools supported by collaboratively authored tasks: The case of work learning unit. Journal of Interactive Learning Research, 17(2), 101–119.
Anon. (2007). Elearning quality in European universities: Different approaches for different purposes. UNIQUE. Retrieved October 10, 2014 from http://efquel.org/wp-content/uploads/2012/03/eLearning-quality-approaches.pdf.
Baser, M. (2006). Promoting conceptual change through active learning using open source software for physics simulations. Australasian Journal of Educational Technology, 22(3), 336–354.

BECTA. (2007). Becta quality principles for digital learning resources. Retrieved October 10, 2014 from http://www.teachfind.com/becta/bectaschools-resources-digital-resources-quality-principles-digital-learning-resources-0.
Bradley, C., & Boyle, T. (2004). The design, development, and use of multimedia learning objects. Journal of Educational Multimedia and Hypermedia, 13(4), 371–379.
Brown, A. R., & Voltz, B. D. (2005). Elements of effective e-learning design. The International Review of Research in Open and Distance Learning, 6(1). Retrieved October 10, 2014 from http://www.irrodl.org/index.php/irrodl/article/view/217.
Buzzetto-More, N., & Pinhey, K. (2006). Guidelines and standards for the development of fully online learning objects. Interdisciplinary Journal of E-Learning and Learning Objects, 2(1), 95–104.
Cochrane, T. (2005). Interactive QuickTime: Developing and evaluating multimedia learning objects to enhance both face-to-face and distance e-learning environments. Interdisciplinary Journal of Knowledge and Learning Objects, 1, 33–54.
Concannon, F., Flynn, A., & Campbell, M. (2005). What campus-based students think about the quality and benefits of e-learning. British Journal of Educational Technology, 36(3), 501–512.
Convertini, V. N., Albanese, D., & Scalera, M. (2006). The OSEL taxonomy for the classification of learning objects. Interdisciplinary Journal of Knowledge and Learning Objects, 2, 155–138.
Deaudelin, C., Dussault, M., & Brodeur, M. (2003). Human-computer interaction: A review of the research on its affective and social aspects. Canadian Journal of Learning and Technology, 29(1). Retrieved January 10, 2015 from http://www.cjlt.ca/index.php/cjlt/article/view/34/31.
Del Moral, E., & Cernea, D. A. (2005). Design and evaluate learning objects in the new framework of the semantic web. In A. Mendez-Vila, B. Gonzalez-Pereira, J. Mesa Gonzalez, & J. A. Mesa Gonzalez (Eds.), Recent research developments in learning technologies. Badajoz: Formatex.
Desire Project Team (Ed.). (2000). DESIRE information gateways handbook. Retrieved January 10, 2015 from http://cuc.carnet.hr/cuc2000/handbook/index.html.
Domínguez Romero, E., Fernández-Pampillón Cesteros, A., & Armas Ranero, I. (2012). COdA, una herramienta experimentada para la evaluación de la calidad didáctica y tecnológica de los materiales didácticos digitales. RELADA-Revista Electrónica de ADA-Madrid, 6(4). Retrieved October 10, 2014 from http://polired.upm.es/index.php/relada/article/view/1925.
Dondi, C., & Moretti, M. (2007). A methodological proposal for learning games selection and quality assessment. British Journal of Educational Technology, 38(3), 502–512. doi:10.1111/j.1467-8535.2007.00713.x.
Drennan, L. T., & Beck, M. (2001). Teaching quality performance indicators – key influences on the UK universities' scores. Quality Assurance in Education, 9(2), 92–102. doi:10.1108/09684880110389663.
Felder, R. M., & Brent, R. (1999). FAQs. Responses to the questions "Can I use active learning exercises in my classes and still cover the syllabus?" and "Do active learning methods work in large classes?" Chemical Engineering Education, 33(4), 276–277.
Fernández-Pampillón Cesteros, A., Domínguez, E., & De Armas, I. (2011). Herramienta para la revisión de la calidad de objetos de aprendizaje universitarios (COdA): Guía del usuario, v.1.1. Retrieved October 10, 2014 from http://eprints.ucm.es/12533/.

Gadanidis, G., Sedig, K., & Liang, H.-N. (2004). Designing online mathematical investigation. Journal of Computers in Mathematics and Science Teaching, 23(3), 275–298. Retrieved October 10, 2014 from http://www.editlib.org/p/4731/.
Haughey, M., & Muirhead, B. (2005). Evaluating learning objects for schools. E-Journal of Instructional Science and Technology, 8(1). Retrieved October 10, 2014 from http://www.ascilite.org.au/ajet/e-jist/docs/vol8_no1/fullpapers/eval_learnobjects_school.htm.
Hodgins, H. W. (2002). The future of learning objects. In D. A. Wiley (Ed.), The instructional use of learning objects (pp. 281–298). Bloomington: Agency for Instructional Technology/Association for Educational Communications & Technology.
Kay, R. H., & Knaack, L. (2008). A multi-component model for assessing learning objects: The learning object evaluation metric (LOEM). Australasian Journal of Educational Technology, 24(5), 574–591.
Kay, R., & Knaack, L. (2005). Developing learning objects for secondary school students: A multi-component model. Interdisciplinary Journal of Knowledge and Learning Objects, 1(1), 229–254.
Koohang, A., & du Plessis, J. (2004). Architecting usability properties in the e-learning instructional design process. International Journal on E-Learning, 3(3), 38–44.
Krauss, F., & Ally, M. (2005). A study of the design and evaluation of a learning object and implications for content development. Interdisciplinary Journal of E-Learning and Learning Objects, 1(1), 1–22.
Kurilovas, E., & Dagiene, V. (2009). Multiple criteria comparative evaluation of e-learning systems and components. Informatica, 20(4), 499–518.
Leacock, T. L., & Nesbit, J. C. (2007). A framework for evaluating the quality of multimedia learning resources. Educational Technology & Society, 10(2), 44–59.
Lim, C. P., Lee, S. L., & Richards, C. (2006). Developing interactive learning objects for a computing mathematics models. International Journal on E-Learning (IJEL), 5(2), 221–224.
Lin, A. C. H., & Gregor, S. D. (2006). Designing websites for learning and enjoyment: A study of museum experiences. The International Review of Research in Open and Distance Learning, 7(3), 1–21. Retrieved October 10, 2014 from http://www.irrodl.org/index.php/irrodl/article/view/364.
MacDonald, C. J., Stodel, E., Thompson, T. L., Muirhead, B., Hinton, C., & Carson, B. (2005). Addressing the eLearning contradiction: A collaborative approach for developing a conceptual framework learning object. Interdisciplinary Journal of Knowledge and Learning Objects, 1, 79–98.
Maslowski, R., Visscher, A. J., Collis, B. A., & Bloemen, P. P. M. (2000). The formative evaluation of a web-based course-management system within a university setting. Educational Technology, 40(3), 5–20.
McGreal, R. T., Downes, S., Friesen, N., & Harrigan, K. (2004). EduSource: Canada's learning object repository network. International Journal of Instructional Technology and Distance Learning. Retrieved January 10, 2015 from http://auspace.athabascau.ca:8080/bitstream/2149/743/1/edusource_canada's_learning.pdf.
MELT. (2009). Metadata Ecology for Learning and Teaching project website. Retrieved October 10, 2014 from http://info.melt-project.eu/ww/en/pub/melt_project/welcome.htm.

Metros, S. (2005). Visualizing knowledge in new educational environments: A course on learning objects. Open Learning, 20(1), 93–102. doi:10.1080/02680510420​00322122.
Morgado, E. M., Aguilar, D. A. G., & Peñalvo, F. J. G. (2008). HEODAR: Herramienta para la evaluación de objetos didácticos de aprendizaje reutilizables. In A. B. Gil González, J. A. Velázquez Iturbide, & J. García Peñalvo (Eds.), Simposio Internacional de Informática Educativa SIIE (pp. 181–186). Salamanca: Ediciones Universidad de Salamanca.
Nesbit, J., Belfer, K., & Leacock, T. (2003). Learning object review instrument (LORI). User manual version 1.5. Retrieved October 10, 2014 from http://www.elera.net/eLera/Home/Articles/LORI%201.5.pdf.
Nesbit, J., Belfer, K., & Vargo, J. (2002). A convergent participation model for evaluation of learning objects. Canadian Journal of Learning and Technology, 28(3). Retrieved January 10, 2015 from http://www.cjlt.ca/index.php/cjlt/article/view/110/103.
Nielsen, J. (1995). How to conduct a heuristic evaluation. Retrieved October 10, 2014 from Nielsen Norman Group website: http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/.
Nurmi, S., & Jaakkola, T. (2005). Problems underlying the learning object approach. International Journal of Instructional Technology & Distance Learning, 2(11), 61–66. Retrieved October 10, 2014 from http://itdl.org/journal/nov_05/Nov_05.pdf#page=65.
Ohl, T. M. (2001). An interaction-centric learning model. Journal of Educational Multimedia and Hypermedia, 10(4), 311–332. Retrieved October 10, 2014 from http://www.editlib.org/p/8438/.
Oliver, R., & McLoughlin, C. (1999). Curriculum and learning-resources issues arising from the use of web-based course support systems. International Journal of Educational Telecommunications, 5(4), 419–435. Retrieved October 10, 2014 from http://www.editlib.org/p/8840/.
Paulsson, F., & Naeve, A. (2006). Establishing technical quality criteria for learning objects. In P. Cunningham & M. Cunningham (Eds.), Exploiting the knowledge economy: Issues, applications, case studies, 3 (pp. 1431–1439). Amsterdam: IOS Press.
Q4R. (2007). Q4R | Quality for Reuse: Quality assurance strategies and best practices for LO repositories. Retrieved October 10, 2014 from http://www.q4r.org/.
Reimer, K., & Moyer-Packenham, P. (2005). Third graders learn about fractions using virtual manipulatives: A classroom study. Journal of Computers in Mathematics and Science Teaching, 24(1), 5–25. Retrieved October 10, 2014 from http://digitalcommons.usu.edu/teal_facpub/40.
Schell, G. P., & Burns, M. (2002). Merlot: A repository of e-learning objects for higher education. e-Service Journal, 1(2), 53–64. doi:10.1353/esj.2002.0004.
Schoner, V., Buzza, D., Harrigan, K., & Strampel, K. (2005). Learning objects in use: 'Lite' assessment for field studies. Journal of Online Learning and Teaching, 1(1), 1–18. Retrieved October 10, 2014 from http://jolt.merlot.org/vol1_no1_schoner.htm.
Uceda Martín, J., & Barro Ameneiro, S. (2010). Evolución de las TIC en el sistema universitario español. CRUE. Retrieved October 10, 2014 from http://www.crue.org/Publicaciones/Documents/Universitic/2010.pdf.

Van Merriënboer, J. J., & Ayres, P. (2005). Research on cognitive load theory and its design implications for e-learning. Educational Technology Research and Development, 53(3), 5–13. Retrieved October 10, 2014 from http://link.springer.com/article/10.1007/BF02504793.
Van Zele, E., Vandaele, P., Botteldooren, D., & Lenaerts, J. (2003). Implementation and evaluation of a course concept based on reusable learning objects. Journal of Educational Computing Research, 28(4), 355–372.
Wiley, D. A. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. Retrieved January 10, 2014 from http://wesrac.usc.edu/wired/bldg-7_file/wiley.pdf.
Williams, D. D. (2005). Evaluation of learning objects and instruction using learning objects. Retrieved January 10, 2015 from NMC Learning Object Initiative website: http://archive2.nmc.org/projects/lo/sap_effect_williams.shtml.

Appendix

COdA scoring rubric

1 Objectives and didactical coherence
1 Learning objectives are either missing or not clear enough. Contents are hard to justify with regards to learning objectives.
2 Learning objectives are either not clear enough or not consistent with the contents.
3 Learning objectives are clear, contents are appropriate enough to the learning objectives; yet the relation of objectives, skills and target students is either not clear or not consistent enough.
4 Learning objectives are clear; objectives, skills and target students are consistently related, and contents are appropriate enough to the objectives, skills and target students; yet implementation instructions and/or suggestions are either missing or not clear enough.
5 Learning objectives are clear; objectives, skills and target students are consistently related; implementation instructions and/or suggestions for teachers and/or students are provided.

2 Content quality
1 The activities' presentation and instructions are not clear enough and content is not updated enough and/or does not respect copyright in all the cases and/or is biased.
2 Content is adequate to the level of the target students as well as consistent with the objectives; yet the number and distribution of concepts and ideas is not balanced. The activities' presentation and instructions are not clear enough and content is not updated enough and/or does not respect copyright in all the cases and/or is biased.
3 Content is adequate to the level of the target students as well as consistent with the objectives; yet the number and distribution of concepts and ideas is not balanced. Content is not biased and respects copyright yet it is not updated enough. The activities' presentation is clear but instructions are either missing or not clear enough.
4 Content is adequate to the level of the target users as well as consistent with the objectives, skills and target users; yet the number and distribution of concepts and ideas is not balanced. Content is not biased and respects copyright though it is not updated enough. The activities' presentation is clear but instructions are either missing or not clear enough.
5 Content is balanced: adequate to the level of the target users as well as consistent with the objectives, skills and target users; the number and distribution of concepts and ideas is balanced. Content is updated, not biased, and respects copyright. The activities' presentation and instructions are clear enough.

3 Capacity to generate learning
1 Contents are clearly not aimed at the achievement of the learning objectives. Learning is not meaningful because there is not a clear relation between old and new knowledge. Reflection, critical thinking, or the development of new ideas and/or procedures/methods/techniques to solve problems and tasks are not enhanced.
2 Contents allow the achievement of objectives because the old/new knowledge relation is clear enough, although neither reflection nor critical thinking, nor the development of new ideas and/or procedures/methods/techniques to solve problems and tasks, are enhanced.
3 Contents allow for the achievement of learning objectives because the old/new knowledge relation is clear and reflection is promoted, though critical thinking and the development of new ideas and/or procedures/methods/techniques to solve problems and tasks are not enhanced.
5 Contents allow for the achievement of learning objectives because the old/new knowledge relation is clear; reflection, critical thinking and the development of new ideas and/or procedures/methods/techniques to solve problems and tasks are enhanced.

4 Adaptability and interactivity
1 Content does not fit the students' previous knowledge or needs. Different contents/activities for each type of learner or skill level are not provided. Contents and activities cannot be used independently of teaching/learning methods. Autonomous learning is not facilitated: students do not have the choice to control and guide their own learning process by selecting contents or activities depending on their capacity of response. The presentation of content does not take into consideration the students' previous actions.
2 Content fits the students' previous knowledge yet the students' needs are not considered. Different contents/activities for each type of learner or skill level are not provided. Contents and activities cannot be used independently of teaching/learning methods. Interaction is missing because autonomous learning is not facilitated: students do not have the choice to control and guide their own learning process by selecting contents or activities depending on their capacity of response. The presentation of content does not take into consideration the students' previous actions.
3 Content fits the students' previous knowledge yet the students' needs are not considered. Different contents/activities for each type of learner or skill level are provided to be used independently of teaching/learning methods. Interaction is missing because autonomous learning is not facilitated: students do not have the choice to control and guide their own learning process by selecting contents or activities depending on their capacity of response. The presentation of content does not take into consideration the students' previous actions.
4 Content fits the students' previous knowledge and needs. Different contents/activities for each type of learner or skill level are provided to be used independently of teaching/learning methods. Interaction is missing because autonomous learning is not facilitated: students do not have the choice to control and guide their own learning process by selecting contents or activities depending on their capacity of response. The presentation of content does not take into consideration the students' previous actions.
5 Content fits the students' previous knowledge and needs, learning styles or skill level and can be used independently of teaching/learning methods. It is interactive: autonomous learning is facilitated so that students have the choice to control and guide their own learning process by selecting contents or activities depending on their capacity of response. The presentation of content takes into consideration the students' previous actions.

5 Motivation
1 No direct references to the usefulness of the pedagogical material in the students' real life are made, so that target users do not feel learning to be relevant enough for their professional and/or social contexts. Pedagogical contents and procedures are not presented in an innovative or appealing manner. The previous criteria 2 (content quality), 3 (learning generation) and 4 (adaptability and interactivity) have an average score below 3.
2 Direct references to the usefulness of the pedagogical material in the students' real life are made, yet target users do not feel learning to be relevant enough for their professional and/or social contexts. Pedagogical contents and procedures are not presented in an innovative or appealing manner. The previous criteria 2 (content quality), 3 (learning generation) and 4 (adaptability and interactivity) have an average score below 3.
3 Direct references to the usefulness of the pedagogical material in the students' real life are made, yet target users do not feel learning to be relevant enough for their professional and/or social contexts. Pedagogical contents and procedures are not presented in an innovative or appealing manner. The previous criteria 2 (content quality), 3 (learning generation) and 4 (adaptability and interactivity) have an average score of at least 3.
4 Direct references to the usefulness of the pedagogical material in the students' real life are made and target users do feel learning to be relevant enough for their professional and/or social contexts. Pedagogical contents and procedures are not presented in an innovative or appealing manner. The previous criteria 2 (content quality), 3 (learning generation) and 4 (adaptability and interactivity) have an average score of at least 3.
5 Direct references to the usefulness of the pedagogical material in the students' real life are made and target users do feel learning to be relevant enough for their professional and/or social contexts. Pedagogical contents and procedures are presented in an innovative or appealing manner. The previous criteria 2 (content quality), 3 (learning generation) and 4 (adaptability and interactivity) have a minimum score of 4.

6 Format and design
1 Design is not organized, clear and concise enough and does not facilitate content understanding and acquisition. Contents cannot be accessed because of the bad quality of texts, images and/or audios.
2 Design organization is not clear and concise enough and does not always facilitate content understanding and acquisition. The quality of texts, images and/or audios is not always good enough to allow content access.
3 Design is organized, clear and concise, and facilitates content understanding and acquisition, yet it does not include a multimodal format and it is not aesthetically adequate for acquisition and reflection. The quality of texts, images and/or audios is not always good enough to allow content access.
4 Design is organized, clear and concise, and facilitates content understanding and acquisition. A multimodal format (text, image, audio and/or video) is included. Nevertheless, it is not aesthetically adequate for acquisition and reflection (with an excess of colours and disturbing audios). High-quality texts, images and audios.
5 Design is organized, clear and concise, and facilitates content understanding and acquisition. A multimodal format (text, image, audio and/or video) is included and the design is aesthetically adequate for acquisition and reflection (with no excess of colours and/or disturbing audios). High-quality texts, images and audios.

7 Usability
1 Not all the contents can be accessed because they are difficult to use and no clear instructions are provided. Some of the links do not work properly, preventing access to relevant content.
2 Contents are hard to locate because the interface is not intuitive and there are no clear instructions, which are, however, necessary. Some links do not work properly, thus preventing access to relevant content.
3 Contents can be located, though the interface is not always intuitive and instructions are necessary. Some links do not work properly, but this does not prevent access to relevant contents.
4 Contents can be easily located. The interface is not always intuitive enough, but there are clear instructions. All the links work properly.
5 Contents can be easily located. The interface is intuitive enough. All the links work properly.

8 Accessibility
1 Material is not adapted to target users with visual, auditory or motor disabilities and no previous information about this issue is provided.
2 Material is not adapted to target users with visual, auditory or motor disabilities, yet previous information about this issue is provided.
3 Material is not totally adapted to target users with visual, auditory or motor disabilities, though it complies with points 1 to 12 of the web accessibility criteria. In any case, information is provided about those points of the accessibility chart for which accessibility cannot be guaranteed.
4 Material is adapted to target users with visual, auditory or motor disabilities. It complies with the web accessibility criteria as much as with the multimedia content accessibility criteria. In case accessibility cannot be guaranteed at any point, target users are properly informed.
5 Material is adapted to target users with visual, auditory or motor disabilities. It complies with the web accessibility criteria as much as with the multimedia content accessibility criteria.

9 Reusability
1 Material is not organized into different modules.
2 Material is organized into different modules, yet the parts cannot be used to develop other materials facilitating the development and updating of the contents. Modules cannot be used in more than one teaching discipline, group of students or teaching–learning context (in-person, e-learning or b-learning).
3 Material is organized into different modules: all or some of its parts can be reused to develop other materials, facilitating the development and updating of the contents. Nevertheless, modules cannot be used in more than one teaching discipline, group of students or teaching–learning context (in-person, e-learning or b-learning).
4 Material is organized into different modules: all or some of its parts can be reused to develop other materials, facilitating the updating and development of the contents. The material, or some of its parts, can be used in different teaching–learning contexts (in-person, e-learning or b-learning), though it is hard to use in more than one discipline.
5 Material is organized into different modules: all or some of its parts can be reused to develop other materials, facilitating the updating and development of the contents. The material, or some of its parts, can be implemented in more than one teaching discipline and in different teaching–learning contexts (in-person, e-learning or b-learning).

10 Interoperability
1 Content can be used in a limited number of computers only. Minimum requirements for usage are not clearly described.
2 Content has not been developed to be used on most computers. Nevertheless, minimum requirements for usage are clearly described. The necessary software is not provided, though.
3 Content has been developed to be used on most computers, though not on all of them. Minimum requirements for usage are not clearly described. The metadata file includes didactic goals, learners, skills and instructions for implementation. Information concerning accessibility is not provided and/or does not meet international standards, though.
4 Content has been developed according to general standards (text (txt), word, pdf, wav, mp3, mp4, flash, jpeg, gif, etc.) and can be used on any computer. Otherwise, minimum requirements for usage are clearly described in the metadata file, equally defined according to international standards. This includes usage requirements, didactic goals, learners, skills and instructions for implementation. Information about accessibility issues is equally facilitated in the metadata file.
5 Content has been developed according to general standards (text (txt), word, pdf, wav, mp3, mp4, flash, jpeg, gif, etc.) and can be used on any computer. Otherwise, minimum requirements for usage are clearly described in the metadata file, equally defined according to international standards. This includes usage requirements, didactic goals, learners, skills and instructions for implementation. Information about accessibility issues is equally facilitated in the metadata file. Content is packaged ready to be exported and imported to/from any computer, tool or website.

8 Enabling automatic, technology-enhanced assessment in language e-learning
Using ontologies and linguistic annotation merge to improve accuracy

Antonio Pareja-Lora
Universidad Complutense de Madrid, Spain

Introduction

Assessment is one of the main obstacles that might prevent the new approaches to language e-learning (e.g., language blended learning, language autonomous learning or mobile-assisted language learning) from being truly successful. Language e-learners need to know somehow that they are improving their knowledge of the languages they learn. Whereas oral skills can be more easily (self-)assessed when put into practice with other people who can speak the language, the written skills and written proficiency of a language require continuous and progressive assessment and correction. Language e-students need to know that the way they write in the foreign language is grammatically and/or discursively correct or, at least, that they are making some progress.

This issue (written skill assessment) gets even more complicated when autonomous or semi-autonomous learning is involved. In this case, some form of autonomous or semi-autonomous assessment should be provided as well. One way to achieve this is to make some kind of (semi-)automatic (that is, technologically enhanced) assessment tool available to (semi-)autonomous e-learners. Thus, some automatic functions for error correction (for instance, in exercises) will have to be included in the long run in the corresponding environments and/or applications for language e-learning. A possible way to achieve this is to use some Natural Language Processing (NLP) functions within language e-learning applications (cf. Urbano-Mendaña et al., 2013). These functions should be based on some truly reliable and wide-coverage linguistic annotation tools (e.g. a POS [Part of Speech] tagger, a syntactic parser and/or a semantic tagger).
However, linguistic annotation tools still have some limitations, which can be summarized as follows (Pareja-Lora, 2012):

1 Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2 They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10% up to 50% of the units annotated for unrestricted, general texts.

The interoperation and the integration of several linguistic tools into an appropriate software architecture that provides a multilevel but integrated annotation should most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate can also minimize the limitation stated in (2), as shown in Pareja-Lora and Aguado-de Cea (2010).

In this chapter, we present an annotation architecture and methodology, as well as a first prototype implementing it, which (a) unifies the annotation schemas of different linguistic annotation tools or, more generally speaking, makes a set of linguistic tools (as well as their annotations) interoperate; and (b) helps correct or, at least, reduce the errors and inaccuracies of these tools. For clarity, we have included in the chapter several annotation examples, in order to illustrate how the methodology and the prototype carry out these interoperation and accuracy improvement tasks.

We also present the ontologies (Gruber, 1993; Borst, 1997) developed to solve this interoperability problem. As with many other interoperability problems, they have really helped integrate the different tools and improve the overall performance of the resulting NLP module. In particular, we will show how we used these ontologies to interlink several POS taggers together within the prototype, in order to produce a combined POS tagging that outperformed all the tools interlinked. The error rate of the combined POS tagging was around 6%, whereas the error rate of each of the tools interlinked was around 10–15%.
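The intuition behind interlinking several POS taggers can be sketched as a simple per-token majority vote. This is a toy illustration only: OntoTagger's actual combination rules were derived empirically from a development sub-corpus, and the function and tag names below are invented for the example.

```python
from collections import Counter

def combine_pos(annotations):
    """Combine per-token POS tags from several taggers by majority vote.

    `annotations` maps tool names to equal-length lists of POS tags,
    one tag per token.  Ties are broken by tool order (the first tool
    wins), a crude stand-in for OntoTagger's empirically derived rules.
    """
    tools = list(annotations)
    n_tokens = len(annotations[tools[0]])
    combined = []
    for i in range(n_tokens):
        votes = [annotations[tool][i] for tool in tools]
        # Counter.most_common sorts by count, then by insertion order,
        # so the first-listed tool's tag wins ties.
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined
```

For instance, if one tool mis-tags 'Director' as an adjective while the other two call it a noun, the vote recovers the noun reading, which is how a combination of individually 10–15%-error taggers can fall below each tool's own error rate.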

OntoTag: an architecture and methodology for the joint annotation of (semantic) web pages

The annotation architecture presented here belongs to OntoTag's annotation model. This model aimed at specifying a hybrid (that is, linguistically motivated and ontology-based) type of annotation suitable for the Semantic Web. Hence, OntoTag's tags had to (1) represent linguistic concepts (or linguistic categories, as they are termed within ISO TC 37), in order for this model to be linguistically motivated (see http://www.iso.org/iso/standards_development/technical_committees/other_bodies/iso_technical_committee.htm?commid=48104, and also http://www.isocat.org); (2) be ontological terms (i.e. use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web.

Besides, as discussed above, it should be able to join (merge) the annotations of several tools, in order to POS tag texts more accurately (in terms of precision and recall) than some tools available (e.g., Connexor's FDG, Bitext's DataLexica). Thus, OntoTag's annotation architecture is, in fact, the methodology we propose to merge several linguistic annotations towards the ends mentioned above. This annotation architecture consists of several phases of processing, which are used to annotate each input document incrementally. Its final aim is to offer automatic, standardized, high-quality annotations.

Briefly, the five different phases of OntoTag's annotation architecture and methodology are

1 distillation, where the input file (e.g., an HTML, Word or PDF file) is distilled (extracted) before using it as input for an already existing linguistic annotation tool (which usually can process only text files);
2 tagging, in which the clean text document produced in the distillation phase is inputted to the different annotation tools assembled into the architecture;
3 standardization, responsible for mapping the annotations obtained in the previous phase onto a standard or guideline-compliant – that is, standardized – type of annotation; as shown below, this standardized format of annotation helps comparing, combining, interlinking and integrating all of them afterwards;
4 decanting, in charge of separating the annotations coming from the same tool but pertaining to a different level, in a way that
  a the process of the remaining phases is not complicated,
  b the comparison, evaluation and mutual supplement of the results offered at the same level by different tools is simplified,
  c the different decanted results can be easily re-combined, after they have been subsequently processed;
5 merging, whose application must produce a unique, combined and multilevel (or multi-layered) annotation for the original input document; this is the most complex part of the architecture and, hence, it has been sub-divided into two intertwined sub-phases: combination, or intra-level merging, and integration, or inter-level merging.

These phases and sub-phases are further described below, where it is shown how a text was annotated according to OntoTag's annotation model and/or methodology.
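The data flow through the five phases can be read as a simple function pipeline. The sketch below only mirrors that flow; the function bodies, the toy markup-stripping distiller and the toy tool are invented for illustration and are far simpler than the real phases.

```python
import re

def distil(html):
    # Distillation: strip markup and keep only the plain text (toy version).
    return re.sub(r"<[^>]+>", " ", html).split()

def tag(tokens, tools):
    # Tagging: run every assembled tool on the clean token list.
    return {name: tool(tokens) for name, tool in tools.items()}

def standardize(raw):
    # Standardization: map each tool's own tagset onto a common one.
    mapping = {"n": "NOUN", "N": "NOUN"}
    return {name: [mapping.get(t, t) for t in tags] for name, tags in raw.items()}

def decant(std):
    # Decanting: separate levels (only one level here, for brevity).
    return {"morphosyntax": std}

def merge(levels):
    # Merging: combine the annotations within each level
    # (here, trivially, the first tool's output wins).
    return {level: next(iter(per_tool.values())) for level, per_tool in levels.items()}

def annotate(html, tools):
    # The whole incremental pipeline, phase by phase.
    return merge(decant(standardize(tag(distil(html), tools))))
```

A call such as `annotate("<p>Director Peter</p>", tools)` then walks an input web page through all five phases in order.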

The linguistic ontologies

OntoTag's ontologies play a crucial role, e.g., in OntoTag's standardization phase. They incorporate the knowledge included so far in the different standards and recommendations regarding, directly or indirectly, morphosyntactic, syntactic and semantic annotation (not discussed here for the sake of space; for further information, see Pareja-Lora (2012)). Accordingly, annotating with reference to OntoTag's ontologies produces a result that uses a standardized type of tagset.

The elements involved in linguistic annotation were formalized in a set (or network) of ontologies (OntoTag's linguistic ontologies). OntoTag's network of ontologies consists of

1 the Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalization of the different types of linguistic elements (i.e. units) identifiable in a written text (across levels and layers);
2 the Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalization of the different types of features that characterize the linguistic units included in the LUO;
3 the Linguistic Value Ontology (LVO), which includes the corresponding formalization of the different values that the attributes in the LAO can take;
4 the OIO (OntoTag's Integration Ontology), which (A) includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO; and (B) can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of (linguistic) annotation.

Figure 8.1 shows in detail the phases of the OntoTag methodology in which we applied these ontologies to build our first OntoTag-compliant prototype. The prototype itself is described in the following section.
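Since OntoTag structures its tags as collections of ontology-based triples, one token's annotation over this network can be sketched as below. This is a hedged illustration: the prefixes, property names and class names are invented for the example and are not OntoTag's actual vocabulary.

```python
# Hypothetical namespace prefixes standing in for the real ontology URIs.
LUO, LAO, LVO = "luo:", "lao:", "lvo:"

def annotation_triples(token_id, unit, attrs):
    """Render one token's annotation as (subject, predicate, object)
    triples, in the spirit of OntoTag's ontology-based model: the unit
    type comes from the LUO, each attribute from the LAO, and each
    attribute value from the LVO."""
    triples = [(token_id, "rdf:type", LUO + unit)]
    for attribute, value in attrs.items():
        triples.append((token_id, LAO + attribute, LVO + value))
    return triples

# One reading of 'Director' as a masculine singular common noun:
triples = annotation_triples(
    "t:9_1_1",  # an invented token identifier for the example
    "CommonNoun",
    {"Gender": "Masculine", "Number": "Singular", "Lemma": "director"},
)
```

The point of the triple layout is that the same subject (the token) can accumulate statements from several levels and several tools, which is what makes the later combination and integration steps possible.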

The experiment: development of OntoTagger

We built a small corpus of HTML web pages (10 pages, around 500 words each) from the domain of cinema reviews. This corpus was POS tagged automatically, and its POS tags were manually checked afterwards. Thus, we had a gold standard with which we could compare the test results. Then, we used two of these ten pages to determine the rules that had to be implemented in the combination module of the prototype, following the methodology described in Pareja-Lora and Aguado-de Cea (2010).

Eventually, we implemented in a prototype (called OntoTagger) the architecture described above, in order to merge the annotations of three different tools (see Figure 8.1). These three tools are Connexor's FDG Parser (henceforth FDG, http://www.connexor.com/nlplib/?q=demo/syntax), a POS tagger from the LACELL research group (henceforth LACELL, https://www.um.es/grupos/grupo-lacell/index.php), and Bitext's DataLexica (henceforth DataLexica, http://www.bitext.com/whatwedo/components/com_datalexica.html). The prototype was then tested on the remaining eight HTML pages of the corpus.

OntoTagger at work: an annotated example

This section further details the different stages of annotation that texts have to go through, according to OntoTag's annotation methodology. They are shown by means of the annotations of a particular sample text of the corpus obtained with OntoTagger. All the annotations of the text relating to a given OntoTag annotation phase have been conveniently grouped and included in a dedicated subsection (that is, (1) distillation, (2) tagging, (3) standardization, (4) decanting, and (5) merging).

OntoTagger's distillation phase

Figure 8.1 OntoTag's Experimentation: OntoTagger's Architecture. (The diagram shows the data flow: a web document is distilled; the distilled document is tagged in parallel by the FDG Parser, LACELL's tagger and DataLexica; each output is standardized against OntoTag's ontologies; the standardized annotations are decanted into lemma + POS, morphological, syntactic and semantic layers; and these layers are finally combined within each level and integrated across levels into a single, hybridly annotated document.)

Figure 8.2 includes a screenshot with a recreation (made by the author of this chapter) of one of the original web pages whose text was annotated by means of OntoTagger (consisting mainly of film reviews). In particular, the original web page (from La Guía del Ocio, http://www.guiadelocio.com/) provided information about the film Serendipity, which was being shown in cinemas when the corpus was compiled.

As most web pages do, the web page for Serendipity contains some elements, such as images and HTML labels (for instance, the hyperlink for 'Página Web' » 'Web page'), that need to be removed before the text is inputted to a linguistic tool for tagging. Thus, all these elements were removed within the first phase of OntoTagger, as recommended by the OntoTag model and methodology, in

order to extract only the textual information from the web page and annotate it afterwards.

Figure 8.2 Text (excerpt) to be annotated

The resulting (distilled) text (see Figure 8.3) contained complete and well-formed sentences, but also sub-sentential components. In these text fragments, the main verb and some punctuation were often missing. For example, full stops were frequently omitted in the credits for the films (as in 'Director: Peter Chelsom') or after the first commentary and/or synopsis of the film. As shown below, this complicated the annotation of these particular text fragments.

OntoTagger's tagging phase

The distilled text was then tagged by each of the tools incorporated into OntoTagger's architecture, namely DataLexica, FDG and LACELL. The results of this tagging phase for the text excerpt 'Director: Peter Chelsom' are shown, respectively, in Table 8.1, Table 8.2 and Table 8.3. Each of them is discussed below.


SERENDIPITY
Menuda palabrita
John Cusack y Kate Beckinsale protagonizan esta historia romántica sobre la fuerza del destino y las casualidades
Título de la película: Serendipity
Título original: Serendipity
Director: Peter Chelsom
Intérpretes: John Cusack y Kate Beckinsale
Nacionalidad: Estadounidense
Duración: 90 min.
Estreno: 25 de enero de 2002
Página Web
Jonathan Trager (John Cusack, 'La pareja del año') y Sara Thomas (Kate Beckinsale, 'Pearl Harbour') se conocen en 1990 en Nueva York mientras compran los regalos navideños para sus respectivas parejas. El flechazo que sienten es instantáneo, y tanto como para quedar enganchados el uno con el otro durante el resto del día y que todo apunte al comienzo de un romance. Pero Sara no está segura de la casualidad y cree que si el destino ha pensado unirles, habrá que ponerlo a prueba, y forzarlo.

Figure 8.3 The distilled text (excerpt) to be annotated

Table 8.1 The distilled text excerpt 'Director: Peter Chelsom', tagged by means of DataLexica

……………………………………………….
Director:director+r#1#01#0#/director+r#2#01#0#
::0
Peter:0
Chelsom:0
\n
……………………………………………….

Thus, Table 8.1 includes the tagged text provided by DataLexica. This tool behaves in fact as a lexical database, and can be queried just on a word-by-word basis. That is, DataLexica accepts only one word at a time as input, and then returns all the possible morpho-syntactic analyses for that word (out of context). Therefore, a dedicated wrapper was built for this tool, which was in charge of (i) tokenizing the text; (ii) extracting the words it contained; (iii) using each of these words to query DataLexica; (iv) tagging the punctuation marks separately; and (v) storing all the results in the right order in a text file, as shown in the table.

Accordingly, the result of tagging 'Director' by means of DataLexica includes the two possible morpho-syntactic analyses of this word in Spanish. Both analyses contain (a) the lemma of the word, according to that possible analysis ('Director' in both cases); (b) the separating character '+'; and (c) its corresponding morpho-syntactic analysis. The first morpho-syntactic analysis ('r#1#01#0#') stands for non-inflected form ('r'), common noun ('1'), masculine gender and singular number ('01'), and no derivational attributes ('0'). The second one ('r#2#01#0#') accounts for the fact that 'director' (at least in Spanish) functions in some contexts as an adjective ('2') whose gender is masculine and whose number is singular ('01').

Table 8.1 also shows that punctuation marks and proper nouns are not tagged at all by means of DataLexica (as indicated by the '0' tag for these tokens), since these types of elements are not included in this tool's database. These are the most prominent drawbacks of this tool, namely (a) its annotations (when provided) lack any kind of disambiguation; and (b) it does not tag a lot of important tokens in our input texts (film reviews), like proper names and punctuation marks. However, the main contribution of this tool to OntoTagger's overall performance is not insignificant: its annotations are most accurate, once the right one has been discerned from the spurious ones in a given context, due to the huge dimension of its in-built lexicon.

The case of FDG is completely different (see Table 8.2). Since it is statistically based, it can tag all the tokens in the input text, including punctuation marks and proper nouns, on the basis of the statistical information about the token context that it compiles.
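The DataLexica analysis strings described above can be decoded mechanically. The sketch below maps only the field codes explained in the text ('r', '1', '2', '01', '0'); any other code is passed through untouched, since its meaning is not given here.

```python
def decode_datalexica(entry):
    """Decode one DataLexica analysis string, e.g. 'director+r#1#01#0#'.

    Field meanings follow the description in the text: inflection flag,
    category code, gender/number code, derivational attributes.  Only
    the codes explained there are mapped; everything else is returned
    verbatim as an unknown code.
    """
    lemma, analysis = entry.split("+", 1)
    fields = analysis.split("#")
    categories = {"1": "common noun", "2": "adjective"}
    gender_number = {"01": "masculine singular"}
    return {
        "lemma": lemma,
        "inflection": "non-inflected" if fields[0] == "r" else fields[0],
        "category": categories.get(fields[1], fields[1]),
        "gender_number": gender_number.get(fields[2], fields[2]),
        "derivation": "none" if fields[3] == "0" else fields[3],
    }

# The two (undisambiguated) analyses DataLexica returns for 'Director':
analyses = [decode_datalexica(a)
            for a in "director+r#1#01#0#/director+r#2#01#0#".split("/")]
```

Decoding both alternatives like this makes DataLexica's main drawback concrete: the tool hands back the noun and the adjective readings side by side, and it is left to the combination phase to pick the right one in context.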
Besides, it provides not only morpho-syntactic tags for the input text, but also syntactic function tags and syntactic dependency and/or semantic role tags for most of the tokens. As shown in Table 8.2, FDG tags every token with (a) its lemma; (b) optionally, a syntactic/semantic dependency function (such as 'main' for 'Director'); (c) its surface syntactic function in the text (e.g. '&NH' = nominal head, for 'Director'); and (d) a quite readable POS label (for instance, 'N MSC SG' = (common) noun, masculine, singular, in the case of 'Director'). Proper nouns are marked with an extra POS tag (see, for example, the annotation of 'Peter'). A further additional tag (also included among the POS tags for 'Peter') makes explicit that the tool 'guessed' (deduced) the category and the morphological information of the token by means of some heuristic (contextual and/or statistical) procedure.

Thus, the main contributions of this tool to OntoTagger's overall performance are (i) the annotation of syntactic dependencies and other syntactic elements; and (ii) its reliability and/or robustness at annotating infrequent tokens (such as proper nouns, abbreviations, neologisms or loan words). Nevertheless, an important drawback of FDG is the high degree of ambiguity that it introduces in its annotations. This degree of ambiguity affects the annotation

Table 8.2 The distilled text excerpt 'Director: Peter Chelsom', tagged by means of FDG

……………………………………………….
Director   director   main:   &NH   N MSC SG
:          :
Peter      peter      ada:    &A>   N SG
Chelsom    chelsom    ada:    &A>   N SG
……………………………………………….
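Each FDG row in the (simplified) table above packs the token, its lemma, an optional dependency function, a surface syntactic function and the POS labels into one line. A small parser for this simplified columnar layout, with the column conventions assumed from the table itself:

```python
def parse_fdg_row(row):
    """Split one simplified FDG row into its parts: token, lemma,
    an optional dependency function (written with a trailing ':'),
    an optional surface syntactic function (starting with '&'),
    and whatever POS labels remain."""
    parts = row.split()
    token, lemma, rest = parts[0], parts[1], parts[2:]
    dependency = rest.pop(0).rstrip(":") if rest and rest[0].endswith(":") else None
    surface = rest.pop(0) if rest and rest[0].startswith("&") else None
    return {"token": token, "lemma": lemma, "dependency": dependency,
            "surface_function": surface, "pos": " ".join(rest)}

parsed = parse_fdg_row("Director director main: &NH N MSC SG")
```

The optional columns are why such a parser has to test for the ':' and '&' markers rather than rely on fixed positions: the punctuation row ': :' carries neither a dependency nor a surface function.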

both of the morpho-syntactic category of some tokens and (on certain occasions) of some morphological attributes. According to FDG's user's manual, no more than 3% of the tokens should be ambiguously POS-tagged by the tool. However, after manually checking its annotations on the sample sub-corpus, it was observed that around 19% of the tokens had an ambiguous grammatical category tag (that is, more than one category tag for a given token). Besides, approximately 14% of the overall morpho-syntactic attributes had been ambiguously

Enabling automatic, technology-enhanced assessment 111 annotated as well. This degree of ambiguity reduces to some extent the reliability of FDG’s annotations. Finally, Table 8.3 contains the results of tagging the text excerpt ‘Director: Peter Chelsom’ by means of LACELL (or, equivalently, the UMurcia tool). The most prominent drawback of this tool is that, for several input tokens, it does not annotate the value of their morphological attributes (mostly labelled

Table 8.3 The distilled text excerpt ‘Director: Peter Chelsom’, tagged by means of LACELL

……………………………………………….
Director   director   Nombre Común ; MS
:          :          Puntuación Dos Puntos ; Ø
Peter      Peter      Nombre Propio ; Ø
Chelsom    Chelsom    Nombre Propio ; Ø
……………………………………………….

with the empty set symbol ‘Ø’). In addition, it does not recognize the ends of paragraphs (whilst the other two tools do). However, the main contribution of LACELL’s POS tagger to OntoTagger’s overall performance is that the morpho-syntactic annotation of this tool for a token is never ambiguous. That is, though it can be wrong, it returns exactly one POS tag for each input token. Moreover, its error rate in this respect is rather low. In addition, it uses a finer-grained taxonomy of morpho-syntactic categories to tag tokens (including a semantic classification for adverbs) than the other tools; that is, it can be more precise at POS tagging than FDG and DataLexica.
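As an illustration of how such ambiguity figures (e.g. the 19% of ambiguously POS-tagged tokens observed for FDG) might be measured, the following sketch counts tokens that received more than one candidate category tag. The token records and identifiers are hypothetical; the chapter does not publish the evaluation code.

```python
# Sketch (not the chapter's actual code): measuring the POS-ambiguity rate
# of a tool's output. A token counts as "ambiguous" when the tool returns
# more than one candidate category tag for it.

def ambiguity_rate(annotations):
    """Fraction of tokens that received more than one category tag."""
    ambiguous = sum(1 for tags in annotations.values() if len(tags) > 1)
    return ambiguous / len(annotations)

# Toy annotated tokens in an FDG-like shape, with one token left ambiguous.
fdg_output = {
    "t:9_1_1": {"N"},          # Director
    "t:9_1_2": {"PU"},         # :
    "t:9_1_3": {"N"},          # Peter
    "t:9_1_4": {"N"},          # Chelsom
    "t:9_2_1": {"N", "AJ"},    # an ambiguously tagged token
}

print(f"{ambiguity_rate(fdg_output):.0%}")  # fraction of ambiguous tokens
```

Run over the whole sample sub-corpus, a check of this kind yields the percentages reported above.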

OntoTagger’s standardization phase

The high level of heterogeneity of the tagged examples included above clearly shows the need for a standardization process prior to annotation comparison and merging. Thus, as stated above, in order for the annotations coming from the different linguistic tagging tools to be conveniently compared and combined, they should first be mapped onto a standard or guideline-compliant – that is, standardized – type of annotation. Consequently, in this phase, (i) the annotations from each and every tool are first mapped onto the terms of OntoTag’s ontologies; then, (ii) a preliminary renumbering of the tokens tagged by each of the tools is performed. This renumbering (a) enables the different annotations to be aligned later on; and (b) provides a systematic and consistent way to refer to paragraphs and sentences in the identification of tokens and/or words. For instance, the token identifier “t:9_1_1” in Table 8.4, read from right to left, designates the first token within the first sentence of the ninth paragraph. Altogether, (i) and (ii) greatly simplify the comparison of annotations that has to be performed in the following phases. One of the main problems that this phase has to solve is that morpho-syntactic annotations are often compressed and/or abbreviated somehow. Hence,

(1) the information about grammatical attributes (such as gender or number) is left implicit; and (2) the information about the values of these attributes (such as masculine or singular, respectively) may or may not be made explicit. Yet, (3) when morpho-syntactic attribute values are mentioned explicitly, they may be expressed in different orders and/or according to different formats. These three standardization processes (namely mapping to OntoTag’s ontologies, renumbering tokens and/or words, and making grammatical information explicit in a consistent way) are encapsulated together in a so-called first standardization sub-phase within OntoTagger. This is because they can be performed on a word-by-word basis, quite independently. A second standardization sub-phase, which requires using more contextual information about some tokens, is performed afterwards. The second standardization sub-phase aims at normalizing the annotation of some rather language-specific (here, Spanish) phenomena, such as contractions (‘al’ = ‘a’ + ‘el’; and ‘del’ = ‘de’ + ‘el’) or deictic pronouns, when they are orthographically attached to verbs (e.g. ‘ponerlo’ = ‘poner’ + ‘lo’ ≈ ‘put it’, and ‘forzarlo’ = ‘forzar’ + ‘lo’ ≈ ‘force it (to happen)’ – see Figure 8.3). This second standardization sub-phase sometimes entails a new renumbering of some tokens and/or words, since it in fact standardizes the tokenization (that is, the segmentation into tokens) of the input. Table 8.4 shows the results of OntoTagger’s first standardization sub-phase for DataLexica’s annotation of the text ‘Director: Peter Chelsom’.
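The two standardization steps just described (systematic renumbering, and the re-tokenization of contractions in the second sub-phase) can be sketched as follows. The data structures and the CONTRACTIONS rule table are assumptions for illustration, not OntoTagger's actual code; clitic pronouns attached to verbs (e.g. 'ponerlo') would need analogous, morphology-aware rules.

```python
# Sketch of the standardization steps, under assumed data structures. Token
# identifiers follow the "t:<paragraph>_<sentence>_<token>" scheme described
# above, so "t:9_1_1" names the first token of the first sentence of the
# ninth paragraph.

def token_id(paragraph, sentence, token):
    return f"t:{paragraph}_{sentence}_{token}"

# Hypothetical rule table for the Spanish contractions mentioned in the text.
CONTRACTIONS = {"al": ["a", "el"], "del": ["de", "el"]}

def retokenize(tokens):
    """Split contractions into their component words (re-tokenization)."""
    out = []
    for tok in tokens:
        out.extend(CONTRACTIONS.get(tok.lower(), [tok]))
    return out

def renumber(tokens, paragraph=9, sentence=1):
    """Attach systematic identifiers after (re-)tokenization."""
    return [(token_id(paragraph, sentence, i + 1), tok)
            for i, tok in enumerate(tokens)]

words = retokenize(["Vamos", "al", "cine"])
print(renumber(words))
# [('t:9_1_1', 'Vamos'), ('t:9_1_2', 'a'), ('t:9_1_3', 'el'), ('t:9_1_4', 'cine')]
```

Note how splitting 'al' forces the renumbering of every following token, which is why the second sub-phase may entail a new renumbering, as stated above.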

Table 8.4 The DataLexica-tagged text excerpt ‘Director: Peter Chelsom’, after the first standardization sub-phase

……………………………………………….
Director   director   M S N 0
Director   director   M S P N 0   [A|P]
:          :
Peter      peter      [M|F|N] [S|P]
Chelsom    chelsom    [M|F|N] [S|P]
……………………………………………….

Table 8.5 and Table 8.6, in turn, show the result of applying this first standardization sub-phase to the tags provided for the text ‘Director’ (for brevity) by FDG and LACELL, respectively. This first standardization sub-phase of OntoTagger is inspired to some extent by EAGLES (1996). The annotations of this example after the second standardization sub-phase are quite similar: they differ essentially only in the numbering of tokens and/or words. Thus, they are not included here for the sake of space.

Table 8.5 The FDG annotations for ‘Director’ after the first standardization sub-phase

……………………………………………….
Director   director   N H N main   M S N 0
……………………………………………….

All the examples of standardized annotations in Table 8.4, Table 8.5 and Table 8.6 together show:

1 how paragraphs, sentences, tokens and words are systematically renumbered and identified (see, for instance, the renumbering of the paragraph, the sentence, the token and the word associated with ‘Director’);
2 how this systematic way to identify the elements in the annotation helps manage ambiguity. See in Table 8.4, for example, the double annotation of the ambiguous token ‘Director’ (according to DataLexica). This theoretical (i.e. out-of-context) ambiguity is expressed by means of two word elements, whose identifiers are exactly the same, but whose grammatical categories differ (i.e. common noun (‘NC’) vs. adjective (‘AJ’));
3 how the ambiguity in the annotation of morphological attribute values is managed by means of a regular-expression-based notation. For example, DataLexica does not provide a gender annotation for ‘Peter’. Hence, a priori, its gender could be masculine (‘M’), feminine (‘F’) or neuter (‘N’). This can be annotated using the (standardized) regular expression shown in Table 8.4, that is, ‘[M|F|N]’;
4 how the information about each grammatical attribute (e.g. lemma, gender, or number) is separated from the rest and designated explicitly in the annotations, in order to reduce their ambiguity and improve their readability and comparability;
5 how grammatical categories are also referred to in a systematic way. For instance (see Table 8.4 and Table 8.6):
a ‘NC’ is used to refer to the grammatical category of ‘Director’ (noun-common);
b ‘TS’ stands for ‘token-simple’, as opposed to ‘token-multiple’, which refers to multiword tokens, such as contractions;
c ‘RU’ refers to ‘residual-unclassified’, a category usually attached to tokens whose grammatical category is not provided by a given tool, as is the case of ‘Peter’, not tagged by DataLexica.
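A minimal sketch of how the ‘[M|F|N]’ notation for ambiguous attribute values can be parsed and combined is given below. The helper functions are hypothetical, not OntoTagger's actual API.

```python
# Sketch of the regular-expression-based notation for ambiguous attribute
# values described above: '[M|F|N]' encodes a set of candidate values, while
# a plain value such as 'M' encodes an unambiguous annotation.

def parse_value(value):
    """'[M|F|N]' -> {'M', 'F', 'N'}; a plain value like 'M' -> {'M'}."""
    if value.startswith("[") and value.endswith("]"):
        return set(value[1:-1].split("|"))
    return {value}

def format_value(values):
    """Render a set of candidates back into the bracketed notation."""
    ordered = sorted(values)
    return ordered[0] if len(ordered) == 1 else "[" + "|".join(ordered) + "]"

# Combining two tools' gender annotations for 'Peter': one tool leaves the
# gender fully ambiguous, another (hypothetically) narrows it to masculine.
combined = parse_value("[M|F|N]") & parse_value("M")
print(format_value(combined))  # -> M
```

Representing ambiguous values as sets makes later comparison and combination a simple intersection, which is one reason such an explicit notation pays off in the following phases.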

Table 8.6 The LACELL annotations for ‘Director’ after the first standardization sub-phase

……………………………………………….
Director   director   M S N 0
……………………………………………….

OntoTagger’s decanting phase

Within the decanting phase of OntoTagger, each file containing the standardized annotations of a particular tool is split into several decanted files. Each of these decanted files includes only one type of annotation, that is, either (i) lemmas and grammatical categories (L+POS files); (ii) grammatical categories and morphological annotations (POS+M files); (iii) syntactic annotations (Syn files); or (iv) semantic annotations (Sem files). Table 8.7, Table 8.8, Table 8.9 and Table 8.10 show, respectively, the decanted L+POS, POS+M, Syn

Table 8.7 The FDG lemma and category information for the word ‘Director’ included in its associated L+POS file

……………………………………………….
Director   director
……………………………………………….

Table 8.8 The FDG category and morphological (POS+M) information for the word ‘Director’

……………………………………………….
Director   M S N 0
……………………………………………….

Table 8.9 The FDG syntactic (Syn) information for the word ‘Director’

……………………………………………….
Director   N H N main
……………………………………………….

Table 8.10  The FDG semantic (Sem) information for the word ‘Director’

……………………………………………….

Director

……………………………………………….

and Sem annotations for the token ‘Director’ corresponding to FDG. In addition, since FDG does not provide any semantic information for this token, the decanted Sem annotations for ‘Director’ obtained from DataLexica have been included in Table 8.11. As shown in Table 8.4, DataLexica’s morpho-syntactic annotation for this token is ambiguous, since (out of context) it could be either a noun or an adjective. Whereas the annotation of ‘Director’ as a noun does not provide any semantic information to be decanted, its annotation as an adjective does (as shown in Table 8.11).
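The decanting step can be sketched as a simple projection of each standardized token record into the four single-purpose views (L+POS, POS+M, Syn and Sem). The field names are assumptions for illustration, since the chapter does not specify OntoTagger's internal record format.

```python
# Sketch of the decanting phase: one standardized record per token is split
# into four views, each holding only one type of annotation.

def decant(record):
    return {
        "L+POS": {k: record[k] for k in ("id", "form", "lemma", "pos")},
        "POS+M": {k: record[k] for k in ("id", "form", "pos", "morph")},
        "Syn":   {k: record[k] for k in ("id", "form", "syntax")},
        "Sem":   {k: record[k] for k in ("id", "form", "sem")},
    }

# Hypothetical standardized record for 'Director' as annotated by FDG.
director = {
    "id": "t:9_1_1", "form": "Director", "lemma": "director",
    "pos": "NC", "morph": {"gender": "M", "number": "S"},
    "syntax": {"function": "main", "surface": "&NH"},
    "sem": None,  # FDG provides no semantic information for this token
}

views = decant(director)
print(sorted(views))  # ['L+POS', 'POS+M', 'Sem', 'Syn']
```

Keeping the token identifier and form in every view is what later allows the separate, single-level annotations to be re-aligned in the merging phase.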

Table 8.11 The DataLexica semantic (Sem) information for the word ‘Director’

……………………………………………….
Director   Director   [A|P]
……………………………………………….

This semantic annotation describes ‘Director’ as a particular (semantic) property (‘R’) of a given entity, which in Spanish can be attributed (‘A’ = attributive) or predicated (‘P’ = predicative). Since no further information about this potential property is provided, an ambiguous value for the @use attribute was already included in the standardized annotation of this token (that is, ‘[A|P]’).

OntoTagger’s merging phase

The first sub-phase of the merging phase is the combination sub-phase, which attaches an unambiguous, correct and maximally precise annotation to each linguistic item (token, word, grammatical attribute, etc.) of the text being annotated (whenever possible). In OntoTagger, this aim is accomplished by means of a production rule system, whose set of rules was determined empirically, after manually correcting the annotated sub-corpus (the gold standard used for this experiment). Table 8.12 includes the result of applying the combination rules to the L+POS annotations of DataLexica, FDG and LACELL (called UMurcia in this table) for the input chunk ‘Director: Peter Chelsom’. A similar example of the rest of the combined annotations (POS+M, Syn and Sem) is not presented here for brevity. As shown in the table, the combined morpho-syntactic category tag for ‘Director’ is ‘NC’, that is, common noun (which is clearly correct). The set of possible tags included ‘NC’ (supported by the annotations of FDG, LACELL and one of the ambiguous annotations of DataLexica) and ‘AJ’ (that is, adjective, only

supported by the other ambiguous annotation of DataLexica) – see Table 8.4, Table 8.5 and Table 8.6. In this case, the rule applied implements a simple (weighted) voting mechanism: majority wins. The same applies to the annotation of the lemma, since all the tools provided the same, correct one, that is, ‘Director’; and this is also the case for the combined morpho-syntactic category and lemma computed for the colon (‘:’) after ‘Director’ and for the proper nouns ‘Peter’ and ‘Chelsom’. For traceability reasons, the identifier of each token within the corresponding standardized files is also included in the combined files, together with the combined identifier for that token. In our gold standard, when these identifiers differ, the one provided by the DataLexica wrapper is quite often the most accurate. Hence, the rule to solve identifier conflicts basically takes the DataLexica identifier as the combined identifier for the token. The different files resulting from the combination sub-phase contain separate annotations that belong to different levels and layers of linguistic description. So, finally, in order to provide an overall annotation of the input text, all these separate annotations can be interconnected and put together in one (merged) annotation file. The new annotation file includes an accurate, standardized and merged annotation of the input text. This is the main output of OntoTagger for the input text, and it includes references to the particular OntoTag ontology from which the different tags used in its annotations come. These references are represented by means of prefixes that are added to the different items of the annotations. These prefixes are defined by means of namespace declarations that are included at the beginning of the merged annotation file. There are as many prefixes (that is, as many namespace declarations) as there are OntoTag ontologies.
An example of these merged annotations, obtained by means of OntoTagger, is shown in Table 8.13 (the merged annotation for the chunk ‘Director: Peter Chelsom’).
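Under the assumption that each tool's wrapper delivers a set of candidate tags per token, the 'majority wins' rule described above can be sketched as follows; the actual production-rule system and its empirically determined rules are not reproduced here.

```python
# Sketch of the combination sub-phase's simple voting rule ("majority wins"),
# with ambiguous tools contributing all of their candidate tags.

from collections import Counter

def combine_pos(candidates_per_tool):
    """candidates_per_tool: one set of candidate tags per tool."""
    votes = Counter(tag for tags in candidates_per_tool for tag in tags)
    # Majority wins; a tie would need a further, tool-specific rule.
    return votes.most_common(1)[0][0]

# 'Director': FDG and LACELL say common noun; DataLexica is ambiguous
# between common noun ('NC') and adjective ('AJ').
print(combine_pos([{"NC"}, {"NC"}, {"NC", "AJ"}]))  # -> NC
```

With three tools voting, a single tool's ambiguous or erroneous annotation is outweighed by the other two, which is how the combined tag for 'Director' resolves to 'NC'.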

Results

In the annotation of the corpus mentioned above, in terms of precision, the OntoTagger prototype (93.81%) clearly outperformed DataLexica (83.82%), which actually does not provide POS tagging disambiguation; significantly improved on the results of LACELL (85.68% – OntoTagger is more precise in around 8% of cases); and slightly surpassed the results of FDG (FDG yielded a precision of 92.23%, which indicates that OntoTagger outperformed FDG in around 1.50% of cases). In terms of recall, two different groups of statistical indicators were devised. First, a group of indicators was calculated simply to show the difference in the average number of tokens which were assigned a more specific morpho-syntactic tag by each tool being compared. For this purpose, for instance, the tags “NC” (Noun, Common) and “NP” (Noun, Proper) should be regarded as more specific than “N” (Noun).

Table 8.12 The L+POS combined annotations obtained for the chunk ‘Director: Peter Chelsom’ by means of OntoTagger in the combination sub-phase

……………………………………………….
Director   director
:          :
Peter      Peter
Chelsom    Chelsom
……………………………………………….

Table 8.13 The final integrated (and merged) annotation obtained by means of OntoTagger for the chunk ‘Director: Peter Chelsom’

……………………………………………….

Director director
<lao:surface_tag>lvo:N</lao:surface_tag>
<lao:phrase_function>lvo:H</lao:phrase_function>
<lao:morpho-syntactic_function>lvo:N</lao:morpho-syntactic_function>
main
lvo:M lvo:S lvo:N 0
lvo:[A|P]

: :
<llo:morpho luo:category="luo:PU05"/>

Peter Peter
lvo:A lvo:R lvo:N
ada
lvo:[M|F|N] lvo:S lvo:N 0

Chelsom Chelsom
lvo:A lvo:R lvo:N
ada
lvo:[M|F|N] lvo:S lvo:N 0

……………………………………………….

Regarding the values of the indicators in this first group, OntoTagger clearly outperformed DataLexica in 11.55% of cases, and FDG in 8.97% of cases. However, the third value of this comparative indicator shows that OntoTagger and LACELL are similarly accurate. This is because LACELL’s morpho-syntactic tags, when correct, are the most accurate of the three output by the input tools. Hence, its recall can be considered the upper bound for this value, which is somehow inherited by OntoTagger. On the other hand, a second group of indicators was calculated in order to characterize the first one. Indeed, it measured the average number of tokens which are assigned a more specific tag by a given tool than by the others, but only in some particular cases. In these cases, the tools agreed in the assignment of the higher-level part of the morpho-syntactic tag, but they did not agree in the assignment of its most specific parts. A typical example is that some tool(s) would annotate a token as “NC” (Noun, Common), whereas (an)other one(s) would annotate it as “NP” (Noun, Proper). Both “NC” and “NP” share the higher-level part of the morpho-syntactic tag (“N” = Noun), but not their most specific parts (respectively, “C” = Common, and “P” = Proper). Regarding the values of the indicators in this second group, OntoTagger outperformed DataLexica in 27.32% of cases, and FDG in 12.34% of cases. However, once again, the third value of this comparative indicator shows that OntoTagger and LACELL are similarly accurate, for the same reasons described above.
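The tag-specificity comparisons behind these recall indicators can be illustrated with a small sketch. Treating tag refinement as string-prefix extension ('NC' extends 'N') is an assumption for illustration; a real tagset may require an explicit hierarchy rather than prefix matching.

```python
# Sketch of the two specificity relations used by the recall indicators:
# (1) one tag refines another (e.g. 'NC' is more specific than 'N');
# (2) two tags agree on the higher-level part but differ in the most
#     specific part (e.g. 'NC' vs. 'NP', both nouns).

def more_specific(tag_a, tag_b):
    """True if tag_a refines tag_b (e.g. 'NC' refines 'N')."""
    return tag_a != tag_b and tag_a.startswith(tag_b)

def same_family(tag_a, tag_b, level=1):
    """True if two tags agree on the higher-level part ('NC' vs. 'NP')."""
    return tag_a[:level] == tag_b[:level]

print(more_specific("NC", "N"), same_family("NC", "NP"))  # True True
```

The first group of indicators counts tokens satisfying the first relation across tools; the second group restricts the count to tokens where the second relation also holds.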

To sum up, OntoTagger’s results were better in terms of precision than the annotations provided by any of the tools included in the experiment (only around 6% of tokens were wrongly tagged), and it did not perform worse than any of them in terms of recall (indeed, it outperformed most of them).

Conclusions

In this chapter, we have presented an annotation architecture, a methodology and a prototype implementing them both, which have helped us (a) make a set of linguistic tools (as well as their annotations) interoperate; and (b) reduce the POS tagging error rate and/or inaccuracy of these tools. We have also briefly presented the ontologies developed to solve this interoperability problem, and shown how they were used to interlink several POS taggers, in order to attain the goals previously mentioned. As a result, the error rate of the combined POS tagging was around 6%, whereas the error rates of the interlinked tools were in the range 10–15%. Unlike the individual annotation tools assembled (with much higher annotation error rates), the error rate of our prototype makes it possible to include this type of technology in language e-learning applications and environments (e.g. mobile-assisted language learning) to automatically correct the exercises and/or the errors of e-learners. This is a clear advance in the area of technology-enhanced assessment, which should help enhance these language e-learning scenarios and make them more powerful, effective and successful.

Acknowledgements

I would like to thank the ATLAS (UNED) research group for their constant inspiration, encouragement and support, as well as Guadalupe Aguado de Cea and Javier Arrizabalaga, without whom this research would have never been completed.

References

Borst, W. N. (1997). Construction of engineering ontologies (PhD thesis). Enschede, The Netherlands: University of Twente.
EAGLES Consortium (1996). EAGLES: Recommendations for the morphosyntactic annotation of corpora. European project deliverable EAG–TCWG–MAC/R. Available online at http://www.ilc.cnr.it/EAGLES96/annotate/annotate.html (visited on 14/10/2014).
Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2): 199–220.
Pareja-Lora, A. (2012). Providing linked linguistic and semantic web annotations – the OntoTag hybrid annotation model. Saarbrücken: LAP – LAMBERT Academic Publishing.
Pareja-Lora, A. & Aguado de Cea, G. (2010). Ontology-based interoperation of linguistic tools for an improved lemma annotation in Spanish. In Proceedings of the 7th Conference on Language Resources and Evaluation (LREC 2010) (pp. 1476–1482). Valletta, Malta: ELDA.
Urbano-Mendaña, M., Corpas-Pastor, G. & Mitkov, R. (2013). NLP-enhanced self-study learning materials for quality healthcare in Europe. In Proceedings of the Workshop on Optimizing Understanding in Multilingual Hospital Encounters, 10th International Conference on Terminology and Artificial Intelligence (TIA 2013). Paris, France: Laboratoire d’Informatique de Paris Nord (LIPN).

Part 3

Mobile-assisted language learning


9 Challenges and opportunities in enacting MALL designs for LSP

Joshua Underwood
London Knowledge Lab, UCL Institute of Education, United Kingdom

Introduction

This chapter examines designs for Mobile-Assisted Language Learning (MALL) appropriate for learning Language for Specific Purposes (LSP), and identifies challenges in delivering these MALL designs. Amongst other things, MALL designs may: help learners connect episodes of different kinds of learning activity; provide access to help on demand; enable anywhere, anytime study; increase opportunities for target language communication; facilitate sharing of content; support collaboration with peers and teachers; and encourage more, and more distributed, study. Such MALL designs both exploit and stretch the affordances of mobile technology. In this chapter, analysis of requirements for vocabulary learning tools, technology affordances that can be exploited for vocabulary learning, and mobile learning designs leads to the identification of opportunities to better support enactment of MALL designs. These are of particular relevance to LSP, as their exploitation allows closer attention to the requirements of specific language domains and learners. The chapter first discusses why mobile learning is of particular relevance to LSP teaching and learning. Then I derive requirements for mobile apps that emulate and improve on three traditional tools for vocabulary learning: notebooks, dictionaries, and flashcards. Next, I describe how other affordances of mobile devices can be exploited for vocabulary learning. I then outline guidelines for MALL designs and challenges in enacting mobile learning designs in projects I have been involved in. Finally, I use requirements for vocabulary learning tools, features of MALL designs, and challenges in enacting these to identify opportunities to better support vocabulary learning.

LSP and mobility

LSP “should be centered on learner need, specificity of activities and materials, and teacher and learner profiles” (Arnó-Macià, 2012, p. 89). Two ways in which technology has already transformed LSP teaching and learning are by improving access to discipline-specific materials and to the discourse communities that use and create LSP (Arnó-Macià, 2012). Technology also offers opportunities to go

beyond access and enable data-driven approaches to learning through the use of corpus linguistics tools. While these are clearly useful to LSP teachers, the most lasting impact comes from equipping learners to exploit such tools for themselves (Bernardini, 2004). For LSP teachers, fostering learner autonomy (Arnó-Macià, 2012) through guided use of technology is particularly important. In order to foster autonomy, activity designs should involve learners in making informed choices about what and how to learn, and hence aim to empower learners to create their own personalized and effective learning practices (Kern, 2013). Such tasks might, for example, ask students to collect discipline-specific resources (not intended for language learning) and share analyses of these with respect to how they may be exploited for language learning (Barahona & Arnó, 2001). Personal mobile devices can greatly facilitate capture, sharing, and collaborative analysis of content used for such tasks and, as we shall see, successful MALL task designs are often rather similar. These activities may initially be modelled by teachers, but responsibility for setting objectives, finding relevant material, exploiting this for language learning, and selecting which tools to use should gradually transition to learners. When modelling such activities, teachers should take care to exploit the technologies learners use, or are likely to use, within the professional or other language communities in which they are operating. Many typical learners of languages for specific purposes, such as professionals and academics, are highly mobile (Read & Barcena, this publication). Mobile devices and communication channels are an increasingly essential component of the ecology of resources that support interactions and relationships in such language communities. Furthermore, many LSP learners will not have time to learn unfamiliar tools in order to support their language learning.
Exploiting the devices and other technologies these people use to get things done in their day-to-day professional lives clearly makes sense. In fact, support for language learning that cannot easily be accessed on personal mobile devices is increasingly likely to be considered inconvenient and inaccessible. Even in domains of LSP in which mobile communication competence is not yet essential, mobile tools offer substantial advantages. LSP is often highly situated (Kern, 2013) and used in conjunction with elements of physical environments or social settings to make meaning; for example, when using a target language manual to work with machinery. Current research in digitally augmenting manufacturing environments to support learning (Wild et al., 2014) indicates various ways in which such environments can be enhanced using mobile technology to support ‘on the job’ and ‘just-in-time’ learning. However, personal mobile devices also offer simpler ways for LSP learners to exploit such situatedness, for example by capturing and sharing images, sound recordings, or video of language encountered, and accessing specialist help either at the time or later. Yet another appeal of MALL, particularly for professional LSP learners, is the promise of anytime, anywhere learning. LSP learners are very frequently under severe time constraints. Flexibility, remote access to teachers and learning resources, and the

use of ‘downtime’ for study are all appealing options well supported by mobile devices, as demonstrated in projects such as English for taxi drivers (Kern, 2011). However, while personal mobile devices can provide flexibility and independence, learners will typically require help orienting their mobile-assisted activity towards effective learning processes (Gimeno, 2014). Indeed, the need to help learners develop and adopt personally effective learning strategies is widely recognized (Wong & Nunan, 2011). Learners need to be involved not only in deciding what is to be learnt but also in developing personally relevant tasks, selecting the tools that may be useful in enacting these tasks, and shaping and adapting their learning experiences over time and beyond the end of formal courses. Many see increasingly pervasive access to technology, often through mobile devices, as stimulating and facilitating the kinds of teacher–student collaboration necessary to achieve this transition (Fullan & Langworthy, 2014). In short, MALL appears to offer opportunities to change learners’ relationships with LSP texts, language communities, and teachers in ways that should be beneficial. Furthermore, mobile devices are increasingly likely to be part of the repertoire of communication tools used by any LSP community and, for many learners, the tool of choice for accessing information, communicating, and getting stuff done in both personal and professional spheres. Nevertheless, many learners and teachers will require help in identifying and realizing opportunities to make effective use of mobile devices for language learning. The next section looks specifically at using mobile devices to support vocabulary learning.

Mobile tools for vocabulary learning

Vocabulary has often been a focus both in LSP (Kern, 2014) and in MALL research (Burston, 2015). Although Burston’s meta-review of MALL finds very limited empirical evidence of impact on vocabulary learning, he concludes that:

There is every reason to expect that MALL can make significant contributions to improving language learning, most particularly by increasing time spent on language acquisition out of class, by exploiting mobile multimedia facilities to complete task-based activities, and by using the communication affordances of mobile devices to promote collaborative interaction in the L2. (Burston, 2015, p. 17)

These three affordances of mobile devices have all been exploited in various ways for mobile-assisted vocabulary learning. Learning vocabulary involves integrating various kinds of receptive and productive knowledge; for example, recognizing and being able to produce the spoken and written forms, knowing in which circumstances a word is appropriate, etc. (Nation, 2001). These kinds of knowledge typically develop across different episodes of activity (e.g. situated encounters with language in professional life, more abstract analysis in a formal class, look-up in a dictionary, vocabulary review, etc.). Vocabulary learning is then not accomplished in a single episode, or through a

single activity (Nation, 2001). Rather, word knowledge develops cumulatively over episodes of learning. These connected episodes can be described as learning trajectories (Kerawalla, Littleton, Scanlon et al., 2013). Amongst the tools language learners and teachers have traditionally exploited to help sustain and develop vocabulary knowledge over such learning trajectories are dictionaries, vocabulary notebooks, and flashcards. Digital mobile versions of these offer certain advantages. Criteria for good vocabulary notebooks can be drawn from research (Schmitt & Schmitt, 1995). They should: be personal word stores centred on individual needs; encourage deep processing and the creation of associations (e.g. through inclusion of example sentences, use of drawings, etc.); prompt cumulative development of different aspects of vocabulary knowledge (e.g. through sections for inclusion of information about usage, pronunciation, collocates, derived forms, etc.); and be shared with teachers for help with prioritization, learning strategies, and correction (Schmitt & Schmitt, 1995). Mobile digital versions of vocabulary notebooks should make meeting such criteria much easier, more flexible, and more efficient (e.g. by assisting retrieval of relevant information about meaning and usage, enabling inclusion of multimedia, and facilitating sharing of content). Flashcards are typically used for self-directed out-of-class study to assess and reinforce vocabulary knowledge. Nakata (2011) derives various requirements for digital flashcard systems from vocabulary research. These include: test learners on both productive and receptive competence; test both recognition and recall; present hard and unknown items more frequently than easier and known items; increase the difficulty of tests as performance improves; vary the texts and contexts in which vocabulary items are presented; promote spaced practice; and support integration of multimedia.
Self-created multimedia and vocabulary records are likely more effective for long-term retention than material provided by others (Hasegawa, Ishikawa, Shinagawa et al., 2011). Clearly mobile flashcard systems fulfilling the above requirements offer opportunities to review and test vocabulary knowledge in a wider variety of times and settings. Dictionaries are typically used in a ‘just-in-time’ fashion to support comprehension and incidental learning rather than for deliberate vocabulary learning. Use of digital dictionaries is probably no more effective for vocabulary retention than use of paper-based dictionaries (Chen, 2010). However, dictionary apps offer many advantages including: greater portability; speed and ease of use; use of multimedia; and logging of items looked up. As voice recognition and optical character recognition improve, word look up and translation of both written and spoken language has become almost instantaneous and ubiquitously available through apps. While the value for retention of effortless look up is questionable, there is potential to log words and phrases looked up and exploit this knowledge for later and more deliberate learning activity. For example, frequently looked up words can be automatically added to vocabulary notebooks or flashcard sets, or shared with teachers. Mobile apps can then offer improved functionality and convenience compared to paper-based vocabulary notebooks, flashcards, and dictionaries. For language

learning, the opportunity to better integrate episodes of distinct kinds of incidental and deliberate learning activity conducted with the help of different tools is of particular interest and as yet underexploited. The most fundamental uses of current mobile devices are for communication and capture of image and sound. Various MALL applications go some way towards integrating tools for communication and capture with tools for deliberate vocabulary learning (e.g. Pemberton & Winter, 2012). In communities of LSP learners and teachers such tools can support the creation and curation of shared corpora of relevant language and situations. Nevertheless, the need for teachers to facilitate this process is often highlighted (Procter-Legg, Cacchione & Petersen, 2012). Apps can also provide learners with ‘just-in-time’ communication help (e.g. Google Translate). Such help can be adapted to context; for example, VocabNomad (Demmans Epp, 2013) offers potentially useful vocabulary related to the user’s current location. Clearly, for LSP settings corpora related to the specific domain and likely settings of use might be used similarly to provide more specific ‘just-in-time’ communicative assistance. An alternative approach is to make it easy and fast for learners to request and receive crowd-sourced help as they encounter communicative needs (Chang et al., 2013). For LSP such help might be sourced through the community of competent LSP users and/or teachers. Even when it is not practical to act on ‘just-in-time’ communication help, logging the need for communicative help in a particular situation might usefully inform later deliberate study. However, this requires the ability to easily share this information with teachers or apps that can support such study. Another way communication functions of mobile devices are exploited for language learning is messaging (e.g. SMS, WhatsApp).
Messages, whether sent by teachers or automated, can be effective at prompting study, though researchers note the need for persuasive message design and sensitivity about the frequency and volume of messages (Kennedy & Levy, 2008). Often the intention is to promote study spaced over time, which is known to be more effective. Interestingly, characteristics of mobile devices can also be used to promote study spaced across locations, with possible benefits for transfer. MicroMandarin (Edge et al., 2011) used positioning to prompt study related to location and, compared to traditional flashcard implementations, this resulted in a greater number of shorter study sessions in more locations. Vocabulary Wallpaper also detects location and then displays related target language words on the phone’s wallpaper (Dearman & Truong, 2012). The intention is to promote situated incidental learning. Other ways to promote incidental learning include augmenting physical interactions and environments with audio and/or visual presentation of target language so as to contextualize this in day-to-day activity (e.g. Intille, Lee & Pinhanez, 2003). Such contextualized language presentation and practice can be integrated within task-based language-learning activities conducted in augmented workplace environments (e.g. Seedhouse et al., 2013). It is also possible to promote exposure to contextualized target language within day-to-day activity conducted in the first language. For example, target language translations can be embedded in the digital texts people read anyway as

part of their daily routine (Trusty & Truong, 2011) and in messaging tools used in the first language while people wait for replies (Cai et al., 2014). In some LSP settings this may mean learners can start to ‘incidentally’ acquire target language while performing domain-related activities in their first language. Again, such incidental exposure and learning may be enhanced through sharing data with tools that support deliberate vocabulary learning. In summary, mobile technology can enhance many activities likely to contribute to learning vocabulary and improve on tools traditionally used for vocabulary learning. However, the most significant opportunity lies in helping learners and teachers coherently connect different kinds of learning activity, typically supported by different tools and conducted at different times and in different locations, across episodes for effective learning trajectories. The utility of mobile technology in connecting learning across settings is widely acknowledged (Sharples et al., 2013). However, this connection-making is not always straightforward. The various software and other tools we use to support learning do not necessarily share information easily. In LSP settings this is further complicated as it likely becomes useful to share such information across the range of software and other tools used professionally and for learning. Furthermore, many teachers and learners require help in realizing and acting on the possibly unfamiliar opportunities for learning afforded by mobile technologies. The next section summarizes guidelines for the design of MALL activities and identifies some challenges in enacting mobile learning experiences.
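The embedding of target-language items into first-language texts described above (Trusty & Truong, 2011) can be sketched very simply: substitute glossary words in an everyday L1 text with their L2 equivalents. The Python fragment below is a hypothetical illustration only; the glossary pairs and function names are mine, not those of any cited system.

```python
import re

# Hypothetical English-to-Spanish glossary for a health-care LSP learner.
GLOSSARY = {"injection": "inyección", "ward": "sala", "discharge": "alta"}

def embed_translations(text: str, glossary: dict[str, str]) -> str:
    """Swap glossary words in an L1 text for target-language equivalents,
    keeping the original in brackets, for incidental exposure."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return f"{glossary[word.lower()]} [{word}]"
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, glossary)) + r")\b", re.IGNORECASE)
    return pattern.sub(swap, text)

print(embed_translations("The patient asked about discharge from the ward.", GLOSSARY))
# → The patient asked about alta [discharge] from the sala [ward].
```

In a real deployment the glossary would come from the learner's own notebook or a domain corpus, and the substitution rate would need careful control so that the text remains readable.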

Designing and enacting MALL

Palalas (2012) provides ten pedagogic criteria to guide design of mobile-enhanced language learning courses. Amongst these are the need to include real-life communicative tasks, both individual and collaborative work, the creation of learner-generated linguistic artefacts, and the provision of preparation, help, and feedback. Wong (2013) highlights the need to seamlessly connect individual learner-directed activity, collaborative activity and teacher-directed activity both within and outside formal settings. Stockwell and Hubbard (2013) acknowledge the utility of using mobile messaging to push learners into action but highlight the need to do this respectfully and to accommodate diversity by allowing for choice. Burston (2015) suggests promising MALL designs employ learner-centred, constructivist, collaborative methodologies and points to Tai (2012) as an example. In Tai’s MALL design, communicative use of mobile devices is embedded in a series of collaborative tasks contextualized in a game-like role-play experience during a field trip to a historic mansion and garden. This is followed by classroom review. The use of mobile devices is seen as integral in connecting ‘in the field’ and ‘in the classroom’ phases (Tai, 2012). Wong (2013) describes a similar use of mobile devices to connect tasks that include in-class presentation and preparation, individual out-of-class use and artefact creation, and later collaborative reflection in class. One useful pattern for MALL designs appears to be: supported preparation for meaningful communicative tasks, performance

of tasks and production of related language artefacts in authentic or relevant settings, collaborative analysis of artefacts and reflection in a later episode. However, enacting such designs is not always straightforward. In the next section, I identify challenges in enacting similar designs through reflection on mobile learning projects. In various inquiry learning projects for secondary science we used mobile devices to support capture and sharing of experimental data (Underwood et al., 2008; Smith et al., 2008; Wyeth et al., 2008; Underwood et al., 2007). While participants generally found inquiry activities engaging and used mobile devices effectively to collect and share data, it was more challenging to sustain engagement through data analysis, reporting, presentation of findings, and reflection. Possibilities we explored included motivating presentation by providing an audience – for example external experts or the public – and facilitating analysis through novel and semi-automated representations, e.g. automated generation of photo-stories of experimental activity for annotation by participants (Rowland et al., 2010). We also found mobile messaging useful for prompting and guiding learners during enactment of inquiry tasks and for enhancing teachers’ awareness of activity. Messages on mobile devices used by primary children at school and home were similarly useful for maintaining parental awareness of current learning objectives and suggesting ways parents could support their children (Kerawalla et al., 2007). In a project that developed designs for improving self-directed mobile vocabulary learning, grounded in reflection on participants’ authentically motivated uses over several months, a common feeling was the need to do more to sustain engagement with new vocabulary beyond initial capture and inquiries into the meaning of new vocabulary (Underwood et al., 2014).
Here again, system- or teacher-generated messages may be helpful in nudging learners to revisit and deepen their understanding of new vocabulary. Also, semi-automated production and analysis of vocabulary artefacts generated by learners through their use of language tools may help reduce the organizational burden on learners and help them prioritize and organize deliberate learning. In summary, mobile technology makes data capture, sharing and initial inquiry easy. Messaging can prompt and guide learner activity and support others’ understanding of this, hence facilitating help giving. Sustaining engagement through analysis and reflection and over the longer term, as may be required for deeper lasting vocabulary knowledge when this is not frequently encountered in daily activity, can be more challenging but may be facilitated through appropriate representations of activity and reminders. In some situations it may be that teachers take on responsibility for motivating and organizing sustained learning trajectories. However, there are certainly opportunities for mobile apps to do more to help learners organize, prioritize and sustain their own learning.
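One simple form of such semi-automated analysis, logging the items a learner looks up or captures and surfacing frequently recurring ones as priorities for deliberate study, could be sketched as follows. This is an illustrative assumption, not the design of any system discussed here; all names are hypothetical.

```python
from collections import Counter

class LookupLog:
    """Records look-ups and flags items recurring often enough to merit
    deliberate study (e.g. promotion to a notebook or flashcard set)."""

    def __init__(self, threshold: int = 3):
        self.counts = Counter()
        self.threshold = threshold

    def record(self, word: str) -> None:
        self.counts[word.lower()] += 1

    def study_candidates(self) -> list[str]:
        # Items looked up at least `threshold` times, most frequent first.
        return [w for w, n in self.counts.most_common() if n >= self.threshold]

log = LookupLog(threshold=2)
for w in ["tort", "lien", "tort", "escrow", "tort", "lien"]:
    log.record(w)
# "tort" (3 look-ups) and "lien" (2) qualify; "escrow" (1) does not.
```

A teacher's dashboard or a flashcard app could consume the output of study_candidates() to reduce the organizational burden on the learner and help with prioritization.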

Discussion

In this chapter, after highlighting the relevance of MALL to LSP, I have outlined criteria for effective vocabulary learning tools and illustrated how affordances of

mobile technology can be used to support vocabulary learning. I have also summarized criteria for effective MALL designs and identified some challenges in enacting mobile learning interventions. In this final section I discuss opportunities to exploit affordances of mobile technology to better support enactment of MALL designs. Mobile technology makes it easier to capture and share language as we encounter it and this can be particularly useful for LSP tasks that aim to build autonomy. However, access to technologies that are always on and always to hand may well mean we end up capturing and sharing more language data than is practical to cope with. Both learners and teachers are likely to find automated analysis, highlighting of significant expressions, and help with prioritization useful. Integration of mobile tools for capturing and sharing language encounters amongst LSP communities with tools for building and analysing corpora offers opportunities to help learners and teachers prioritize, organize, and sustain deliberate learning. Messaging to nudge learners to engage in activity when they do not appear to be sustaining learning trajectories can also be useful. However, while it is useful for apps to provide reminders, learners need to be in control. The models and rules that govern when and how to remind need to be accessible and adaptable. Apps should make it easy for learners to determine when, where, how often, and how they want to be pushed to learn. Learners also need help in assessing their learning. Most flashcard apps currently rely on self-assessment. Apps need to employ a broader range of techniques to help learners assess both receptive and productive competence. As voice recognition improves this is practical for both written and spoken interaction. Again, sharing data between apps that support different kinds of activities can also help.
For example, use of vocabulary that is being studied deliberately in one app might be monitored in apps used for communication. This kind of data sharing may also support more dynamic assessment. Use of vocabulary and structures being studied deliberately might be prompted in context, in apps used for communication, through highlighted inclusion in auto-correct and predictive text suggestions. In summary, much vocabulary knowledge may develop incidentally through use but deliberate activity can accelerate learning (Hulstijn, 2012). By sharing data, apps that support communicative and professional activities such as reading, messaging colleagues, etc., and apps that support deliberate learning, such as vocabulary notebooks or flashcards, can be mutually enhanced. For example, employing emerging technologies like mobile eye-tracking, information about words I struggle with while reading an LSP text (e.g. Kunze et al., 2013) might be fed to apps that tutor my understanding of these words, or shared with teachers for later support. Sharing activity from outside the classroom can help teachers make classroom learning more relevant to learners’ interests and needs. In the opposite direction, it also allows teachers to help learners prioritize and assess self-directed learning. And apps might do more to support this, for example words and phrases identified as meriting deliberate study might be highlighted in the

texts I read in the target language and/or embedded in related first-language texts for contextualized incidental micro-learning opportunities. Clearly if the objective of MALL designs is to co-ordinate and connect vocabulary learning across many different activities, then easy exchange of data and actions on data between the apps, people, and other resources that support these activities is required. This kind of close integration and data exchange has not always been well supported or easy. However, it is increasingly possible and common for specific actions offered by any particular app to be made available in a timely way within other apps, for example a contextual button to send selected text within a web page straight to Twitter. In the future, we can expect similar integration of actions offered by tools useful for language learning within the tools used for professional and personal communication and other activities. It should also be possible to chain such actions, for example allowing one tap within an email to provide a definition and/or translation, send the selected text to my vocabulary notebook, retrieve examples of use and information about frequency, schedule a study reminder, and message my language teacher.
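The one-tap chain imagined here can be modelled as a pipeline of small composable actions. In the sketch below, every function is a hypothetical stand-in for a real service (dictionary, notebook, scheduler, messaging); none of these APIs exists as named.

```python
# Hypothetical stand-ins for real services; each action takes the selected
# text and reports what it did.
def define(text: str) -> str:
    return f"definition retrieved for '{text}'"

def save_to_notebook(text: str) -> str:
    return f"'{text}' added to vocabulary notebook"

def schedule_reminder(text: str) -> str:
    return f"study reminder scheduled for '{text}'"

def message_teacher(text: str) -> str:
    return f"teacher notified about '{text}'"

def one_tap(text, actions):
    """Run a learner-configured chain of actions on one selected phrase."""
    return [action(text) for action in actions]

results = one_tap("liquidated damages",
                  [define, save_to_notebook, schedule_reminder, message_teacher])
```

Because the chain is just a list, learners (or teachers) could reorder, add, or drop actions without touching the individual services, keeping the learner in control of how data flows between tools.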

References

Arnó-Macià, E. (2012). The role of technology in teaching languages for specific purposes courses. The Modern Language Journal, 96, 89–104. doi:10.1111/j.1540-4781.2012.01299.x.
Barahona, C. & Arnó, E. (2001). English online: A virtual EAP course@university. In S. Posteguillo, I. Fortanet & J.C. Palmer (Eds.), Methodology and new technologies in LSP (pp. 181–194). Castelló de la Plana: Publicacions de la Universitat Jaume I.
Bernardini, S. (2004). Corpora in the classroom. In How to use corpora in language teaching (pp. 15–36).
Burston, J. (2015). Twenty years of MALL project implementation: A meta-analysis of learning outcomes. ReCALL, 1–17. doi:10.1017/S0958344014000159.
Cai, C.J., Miller, R.C., Guo, P.J. & Glass, J. (2014). Wait-learning: Leveraging conversational dead time for second language education. In Proc. CHI 2014. ACM Press, NY.
Chang, Y-J., Li, L., Chou, S-H., Liu, M-C. & Ruan, S. (2013). Xpress: Crowdsourcing native speakers to learn colloquial expressions. In Proc. CHI 2013, p. 2555. ACM Press, NY.
Chen, Y. (2010). Dictionary use and EFL learning. A contrastive study of pocket electronic dictionaries and paper dictionaries. International Journal of Lexicography, ecq013.
Dearman, D. & Truong, K. (2012). Evaluating the implicit acquisition of second language vocabulary using a live wallpaper. In Proc. CHI 2012, p. 1391. ACM Press, NY.
Demmans Epp, C. (2013). Mobile adaptive communication support for vocabulary acquisition. In Proceedings of AIED 2013, pp. 876–879. Springer, Berlin.
Edge, D., Searle, E., Chiu, K., Zhao, J. & Landay, J. A. (2011). MicroMandarin. In Proc. CHI 2011, p. 3169. ACM Press, NY.
Fullan, M. & Langworthy, M. (2014). A rich seam: How new pedagogies find deep learning. London: Pearson. Available at http://www.michaelfullan.ca/wp-content/uploads/2014/01/3897.Rich_Seam_web.pdf.

Gimeno, A. (2014). Fostering learner autonomy in technology-enhanced ESP courses. In Bárcena et al. (Eds.), Languages for specific purposes in the digital era (pp. 27–44). Springer, Switzerland.
Hasegawa, K., Ishikawa, M., Shinagawa, N., Kaneko, K. & Mikakoda, H. (2008). Learning effects of self-made vocabulary learning materials. In Proc. CELDA 2008, pp. 153–158.
Hulstijn, J. (2012). Incidental learning in second language acquisition. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics. Wiley-Blackwell, Malden, MA.
Intille, S., Lee, V. & Pinhanez, C. (2003). Ubiquitous computing in the living room: Concept sketches and an implementation of a persistent user interface. In Proc. UBICOMP 2003.
Kennedy, C. & Levy, M. (2008). L’italiano al telefonino: Using SMS to support beginners’ language learning. ReCALL, 20(3), 315–330.
Kerawalla, L., O’Connor, J., Underwood, J., du Boulay, B., Holmberg, J., Luckin, R. & Tunley, H. (2007). Exploring the potential of the homework system and tablet PCs to support continuity of numeracy practices between home and primary school. Educational Media International, 44(4), 289–303.
Kerawalla, L., Littleton, K., Scanlon, E., Jones, A., Gaved, M., Collins, T. & Petrou, M. (2013). Personal inquiry learning trajectories in geography: Technological support across contexts. Interactive Learning Environments, 21(6), 497–515.
Kern, N. (2011). Tools for taxi drivers. English Teaching Professional, 77, 56–58.
Kern, N. (2013). Technology-integrated English for specific purposes lessons: Real-life language, tasks, and tools for professionals. In G. Motteram (Ed.), Innovations in learning technologies for English language teaching. Available from http://www.teachingenglish.org.uk/article/innovations-learning-technologies-english-language-teaching.
Kunze, K., Kawaichi, H., Yoshimura, K. & Kise, K. (2013). Towards inferring language expertise using eye tracking. In Extended Abstracts CHI 2013, p. 217. doi:10.1145/2468356.2468396.
Nakata, T. (2011). Computer-assisted second language vocabulary learning in a paired-associate paradigm: A critical investigation of flashcard software. Computer Assisted Language Learning, 24(1), 17–38.
Nation, I. S. P. (2001). Learning vocabulary in another language. Cambridge: Cambridge University Press.
Palalas, A. (2012). Design guidelines for a mobile-enabled language learning system supporting the development of ESP listening skills. Thesis.
Pemberton, L. & Winter, M. (2011). SIMOLA: Helping language learners bridge the gap. In Proceedings of the 4th International Conference on ICT for Language Learning. Florence, Italy.
Procter-Legg, E., Cacchione, A. & Petersen, S. (2012). LingoBee and social media: Mobile language learners as social networkers. In Proc. CELDA 2012, pp. 115–122.
Rowland, D., Thumb, G., Gibson, M., Walker, K., Underwood, J., Luckin, R. & Good, J. (2010). Sequential art for science and CHI. In Proc. CHI 2010, pp. 2651–2660.
Schmitt, N. & Schmitt, D. (1995). Vocabulary notebooks: Theoretical underpinnings and practical suggestions. ELT Journal, 49(2), 133–143.
Seedhouse, P., Preston, A., Olivier, P., Jackson, D., Heslop, P., Plötz, T., et al. (2013). The French digital kitchen. International Journal of Computer-Assisted Language Learning and Teaching, 3(1), 50–72.

Sharples, M., McAndrew, P., Weller, M., Ferguson, R., Fitzgerald, E., Hirst, T. & Gaved, M. (2013). Innovating Pedagogy 2013. Open University Innovation Report. The Open University, UK.
Smith, H., Underwood, J., Walker, K., Fitzpatrick, G., Benford, S., Good, J. & Rowland, D. (2008). e-Science for learning: Crossing the boundaries between school science and research science.
Stockwell, G. & Hubbard, P. (2013). Some emerging principles for mobile-assisted language learning (pp. 1–14). TIRF. Available from http://www.tirfonline.org.
Tai, Y. (2012). Contextualizing a MALL: Practice design and evaluation. Educational Technology & Society, 15(2), 220–230.
Trusty, A. & Truong, K. N. (2011). Augmenting the web for second language vocabulary learning. In Proc. CHI 2011, p. 3179. ACM Press, NY.
Underwood, J., Smith, H., Luckin, R. & du Boulay, B. (2007). Logic gates and feedback: Towards knowing what’s going on in the electronics lab. In Proceedings of the AIED 2007 Workshop on Emerging Technologies for Inquiry-Based Learning in Science (p. 25).
Underwood, J., Smith, H., Luckin, R. & Fitzpatrick, G. (2008). E-Science in the classroom – Towards viability. Computers & Education, 50(2), 535–546.
Underwood, J., Luckin, R. & Winters, N. (2014). MALL in the wild: Learners’ designs for scaffolding vocabulary learning trajectories. In CALL design: Principles and practice. Proceedings of the 2014 EUROCALL Conference, Groningen, The Netherlands (p. 391). Research-publishing.net.
Wild, F., Perey, C., Helin, H., Davies, P. & Ryan, P. (2014). Advanced manufacturing with augmented reality. In Proc. of the 1st AMAR Workshop, Munich, Germany, 8 September 2014.
Wong, L. H. (2013). Analysis of students’ after-school mobile-assisted artifact creation processes in a seamless language learning environment. Journal of Educational Technology & Society, 16(2), 198–211.
Wong, L. L. C. & Nunan, D. (2011). The learning styles and strategies of effective language learners. System, 39(2), 144–163. doi:10.1016/j.system.2011.05.004.
Wyeth, P., Smith, H., Ng, K. H., Fitzpatrick, G., Luckin, R., Walker, K., . . . & Benford, S. (2008, April). Learning through treasure hunting: The role of mobile devices. In Proceedings of the International Conference on Mobile Learning 2008 (pp. 27–34).

10 Designer learning
The teacher as designer of mobile-based classroom learning experiences
Nicky Hockly
The Consultants-E, United Kingdom

Brief literature review

If researchers are in agreement about one thing, it is that defining exactly what constitutes ‘mobile learning’ is difficult (Kukulska-Hulme, 2009; Traxler, 2009). The concept of mobility itself is problematic within any definition of mobile learning. For example, is it the mobility of the learners – the possibility that they can learn anywhere, any time by using portable devices – that is important? Or is it the mobility/portability of the devices themselves that is important (a more technocentric view)? Clearly both of these aspects are important, and current definitions also stress the importance of context, where mobile learning can take place in both formal classroom settings and in informal settings, across myriad devices, in a variety of physical and temporal arenas (Sharples et al., 2009; Kukulska-Hulme et al., 2009). Pegrum (2014) provides a helpful way of conceptualizing these interrelated aspects of mobile learning. The researcher suggests that the use of mobile devices in education frequently falls into one of three categories, corresponding to the emphasis on devices/learners/context mentioned above:

•• when the devices are mobile;
•• when the learners are mobile;
•• when the learning experience itself is mobile.

The first category – when the devices are mobile – is typical of what Pegrum describes as “connected classrooms,” where students use their own devices, or class sets of devices, to access the Internet, create content, etc. In this instance, the learners work within the confines of the classroom walls (or at home in a flipped learning model), so they are not physically mobile. In addition, the singular affordances of mobile devices (what the devices can actually do – such as geolocation) are not exploited so the learning experience itself is not mobile. Pegrum’s second category – when the learners are mobile – describes scenarios where learners may be moving around the classroom or the school premises while learning. Or students may be using commuting time or waiting time to access

short chunks of content to reinforce learning in self-study mode. Reviewing vocabulary via mobile flashcard apps might be a typical self-study activity that learners can do while on the move. However, the learning experience remains fundamentally the same, wherever the learners may physically be at the time of using the devices. The third category – when the learning experience is mobile – refers to learners using devices across a range of real-world contexts to access information needed at that moment, or to create multimedia records of their learning wherever they may be at that moment. Tasks relying on geolocation are clear examples of situational mobile learning. It is this third category of mobile learning which is arguably the most disruptive, and which relies on the specific affordances of networked smart devices. Although there are multiple devices which can be deployed in mobile learning (such as MP3 and MP4 players, gaming consoles, or e-readers), there is a trend towards convergence in devices such as smartphones and tablets in developed countries, and more basic or feature phones, or devices like XO laptops (as part of the One Laptop Per Child initiative), in developing countries. A look at recent and current international mLearning projects being carried out attests to this trend. (See for example UNESCO’s reports on mobile projects: http://www.unesco.org/new/en/unesco/themes/icts/m4ed/mobile-learning-resources/unescomobilelearningseries/). Looking at the literature documenting mobile learning initiatives within the field of English language learning, we can identify three different project approaches with significantly different levels of access to funding, different scalability, and different timeframes:

•• large-scale mLearning projects, particularly in developing countries, jointly funded by nongovernmental organizations, Ministries of Education, hardware and/or software providers, mobile telephone companies, and educational institutions such as the British Council or universities (see Pegrum, 2014 for a discussion of example projects);
•• smaller institutions or universities, in both developed and developing countries, carrying out the strategic implementation of mobile devices to support language learning. Small individual language schools such as the Anglo European School of English in Bournemouth, and larger institutions such as the Cultura Inglesa in Brazil, the Casa Thomas Jefferson also in Brazil, or the British Council in Hong Kong, are examples of good practice in this respect; and
•• individual teachers who are early adopters of technology, and who experiment on an ad hoc basis with small groups of students, sometimes with little or no support from their institutions. The work of Paul Driver in Portugal, Anne Fox in Denmark, and Karin Tiraşim and Çigdem Ugur in Turkey provides examples of these (see Hockly, 2012 for a brief discussion of some of these studies).

mLearning for language learning (or MALL – Mobile-Assisted Language Learning) is a relatively new field within CALL and e-learning, and as such, there is still little reliable research available. Even the term ‘MALL’ has come under scrutiny (Jarvis & Achilleos, 2013), with alternatives such as MALU (Mobile-Assisted Language Use) being proposed as a more accurate reflection of how mobile devices can be used for learning. Longitudinal research studies are challenging to carry out because mobile devices are evolving so quickly (Pachler, 2009). What may be the latest mobile technology at the start of a three-year mobile learning project may start to seem very limited by the end. In addition, like CALL in general, MALL suffers from a lack of a single unifying theoretical framework against which to evaluate its efficacy, and this can lead to a confusing array of anecdotal case studies that do little to contribute to a sound research base (Egbert & Petrie, 2005; Levy, forthcoming 2016). For example, CALL (and by extension MALL) researchers may decide to use an interactionist second language acquisition framework, a sociocultural perspective, a systemic functional linguistics perspective, an intercultural perspective, a situated learning perspective, a design-based research perspective, and so on. Whatever theory is chosen to underpin a research study, the researcher needs to make it salient for the reader, and to be aware of what a particular focus might leave uncovered (Egbert & Petrie, 2005). With these points in mind, we turn to a brief discussion of this small-scale action research project. Given the limitations of space in this chapter, what follows is necessarily a brief summary.

The study

This classroom-based action research project was carried out with two consecutive small groups of international EFL (English as a Foreign Language) learners studying at a private language school in Cambridge, UK, over a period of two weeks in July 2013. The first group (week 1) consisted of very low-proficiency learners (A1 level in the Common European Framework of Reference for Languages, or CEFR) with a total of 12 learners. The vast majority were Arabic speakers, with one Chinese speaker and one Russian speaker. Half the class were adolescents (16 years old), with adult learners ranging from 20 to 45 years of age. The second group was of low-intermediate level (B1 in the CEFR) with a mix of nationalities among the eight learners (Kuwaiti, Italian, Brazilian, Turkish, Argentinian, and Chinese), and ages ranged from 16 to 27 years old. Each group received three hours of EFL instruction with me in the mornings, and a further one and a half hours of EFL instruction in the afternoons with a different teacher, who did not use much technology beyond occasional use of the IWB (interactive whiteboard). The aim of the study was to use the experience of teaching in a real classroom context to explore how learners’ own mobile devices might be integrated into a course book-driven approach (set by the school) to supplement and enhance communicative tasks, and what learners’ expectations and reactions to this use of their mobile devices as part of their learning might be. The overall aim was

to generate theory from practice, in an attempt to create a practical framework for designing and implementing mobile-based communicative tasks in the language classroom. That is, based on the experience of designing and carrying out classroom tasks for mobile devices in this particular context, it was expected that theoretical principles or considerations would emerge. As such, the approach attempts to generate a “mobile-specific” theory (Vavoula & Sharples, 2009; Viberg & Grönlund, 2012), which may have wider repercussions for task design beyond the language classroom. A BYOD (bring your own device) approach was chosen because the likelihood of most – if not all – learners owning smartphones or tablet computers was very high. This assumption proved to be the case. Private language schools in the UK tend to attract students who can afford these devices, with professional adults attending, and adolescent learners coming from relatively wealthy backgrounds. In addition, the school had Wi-Fi connectivity: Having reliable connectivity when implementing mobile-based activities is clearly a key consideration. In this situation, there were weekly ongoing enrolments typical of a private language school in the summer in the UK, where the members of a class (and sometimes teachers) change on a weekly basis. As a result, it was impossible to know much about the learners in advance, so some planned activities had to be altered once I started working with the groups. The research approach was developed as each course progressed, and focused on task design and sequencing based on contextual factors as they arose. The key research question that emerged was: What pedagogical models of task design and sequencing facilitate learning with mobile devices in the classroom and enhance its benefits?

Initial class survey

In the first class of the week, both groups completed an online survey designed to check what learning experiences they may already have had using mobile devices, what devices and connectivity they had with them in the UK, and to gauge their attitudes to the idea of working with their devices during the coming week. The results of this initial survey were very similar for both groups, and affected the subsequent task design and sequencing of mobile-based activities during the week:

• Although all the learners had smartphones, not all of them had 3G connectivity, and had to rely on Wi-Fi connections either in the school or (some) at home. This meant that any activities to be carried out outside of the school (for example, on the move or at home) could not rely on an internet connection.
• All the learners regularly used bilingual dictionary or translator apps on their mobile phones in class. None of the learners had ever used their mobile phones for any other language-related activities. This point suggested that the introduction of mobile-based language learning activities needed to be gradual and staged, so that learners could start off in familiar territory.
• All of the learners in both groups agreed that they would like to use their devices to help them learn English. Although the learners had clearly not had any experience of doing so in the past, this result did show that 100% of learners in both groups were positively disposed towards trying out mobile-based learning activities.

144  Nicky Hockly

Pedagogical implementation

Given that the learners of both groups were unfamiliar with using their mobile devices for language learning, beyond the use of their ubiquitous dictionary and translator apps, a staged approach, moving from simpler activities towards more complex activities during the week, seemed appropriate. It was also important that activities – all of which were designed to develop the ability to communicate in English, and focused primarily on language production – were related to the course book syllabus, and were appropriate to the linguistic levels of the learners. Most of the activities were open ended, encouraging the learners to produce language in spoken and/or written form, so it was possible to use some of the same or similar activities with both groups, as learners were able to produce language at their current linguistic level. Tasks were developed prior to each class, and aligned to the course book syllabus and content where appropriate. This enabled an approach in which learners' feedback and our experiences with one day's tasks informed the design and development of the next day's tasks. The course book and syllabus provided the content framework, so lesson content was not as off-the-cuff as it might seem. Rather, this approach allowed for introducing more (or less) challenging activities depending on how the learners were progressing. Table 10.1 summarizes the mobile-related tasks carried out with each group. A detailed description of each task is beyond the scope of this chapter, but hyperlinks to task descriptions are provided for interested readers. The activities listed in Table 10.1 were not the only tasks carried out with the class each day. Rather, they were integrated into a range of other language learning activities, many of which were related to, or directly taken from, the course book.
The school policy required teachers to use a pre-set course book with learners, so for this research project, any mobile activities had to be fitted around this requirement. In fact, the majority of teachers around the world are usually given a syllabus to work from, whether this is imposed by the course book, the Ministry of Education, or the institution itself. In this respect, it was a useful exercise to have to ensure that the activities using learners’ mobile devices were integrated into an externally imposed syllabus, and were congruent with the curriculum as much as possible in terms of language content and topics.

Learner feedback

An exit survey found that the majority of learners in both groups enjoyed using mobile devices and would like to continue to do so in the future. What is especially

Table 10.1  Mobile tasks (tasks listed in the order carried out, Monday to Friday; each task name is followed by a brief description)

Beginner group (A1 level):
• *Letter dictation: Reviewing questions
• Online mobile use survey: Described above
• *We've got it: Sharing personal photos
• QR codes in class: Reviewing questions
• *My mobile: Text reconstruction
• *Mobile English: Sharing photos of English found around the school
• QR code treasure hunt: Review & integrated skills
• *Cambridge Guide: Audio recording in Woices app

Intermediate group (B1 level):
• Online mobile use survey: Described above
• QR codes in class: Reviewing questions
• *We've got it: Sharing personal photos
• Water photos: Collecting photos with phones, related to the coursebook topic of 'water'
• *Water interviews: Recording narrative interviews in pairs
• Bombay TV: Viewing learner-created subtitled videos
• *16-18-20h selfies: Sharing personal photos
• QR code treasure hunt: Review & integrated skills

*Described in Hockly, N., & Dudeney, G. (2014)

interesting about the responses to the survey, though, is that one learner clearly felt that there were few benefits to the mobile tasks, with comments such as "useless" and "it doesn't work." From discussions with previous teachers of the group, it had already been conveyed to me that this learner was reluctant to take part in communicative activities, and preferred very structured written grammar practice in class. This reluctance appeared to relate to personal learning style and expectations about learning; it is also a clear case of the need for learner training in the benefits not only of using mobile devices, but also of the communicative approach in language learning. The implications of this learner's resistance to mobile-based activities are discussed below. A fuller description of the class surveys, and a discussion of the affective factors that these surveys addressed, can be found on my blog at http://www.emoderationskills.com/?p=1188.

Discussion of the research question

The initial online class survey (see above) conducted with learners was instrumental in laying out several parameters for task design. For example, key elements were hardware (whether learners had access to devices, and what type) and connectivity (whether learners had access to Wi-Fi and 3G outside of class). It was clear from the survey results that most mobile-based communicative tasks

would need to take place within the classroom and the school grounds, and that any mobile-based tasks assigned for homework could not rely on connectivity. According to Pegrum's (2014) three categories of mobility described earlier (whether the devices, learners or learning experience are mobile), the survey showed that any attempt to include tasks in all three of these categories would be limited to the geographical location of the school itself, as it was the only place where all learners had Wi-Fi access. This limitation had a marked effect on mobile task design. In addition, the affordances of the mobile devices owned by the learners – in other words, what the devices could do – were clarified in the initial survey. Because all learners owned smartphones (a majority of iPhones, two Android phones, and one BlackBerry), tasks that leveraged the affordances of smartphones (e.g., audio, video, access to apps, and geolocation capabilities) could be included. Devices' features such as screen size are also important for task design. Having learners read or produce long texts on smartphones is not ideal, and in this context, using capabilities such as taking photos, or recording audio and video fitted better with the affordances of the learners' smartphones. The fact that none of the learners had any previous experience of using mobile devices in their language learning (apart from translation/dictionary apps) suggested that beginning with a low level of technological complexity would allow them to work within their comfort zones, and not overwhelm them with complicated apps or tasks too early. On the first day of using their mobile devices, both groups did a dictation task that required them to use their smartphone note app (the letter dictation task), and a task that required them to access the internet (the online survey).
In addition, the intermediate week-two group carried out the QR (Quick Response) code integrated-skills question review task, as they were clearly proficient at handling their own devices, and were quickly and easily able to download a QR code reader (new to all of them) via the school Wi-Fi. The decision to include this activity at the end of the first day with this group was made on the spot, and replaced a planned course book related activity. It was clear that this group had not just the technological competence to carry out the task, but the necessary linguistic level to quickly grasp the instructions and successfully complete the activity. This finding suggests two more parameters to keep in mind in mobile task design for the communicative classroom: technological complexity and linguistic/communicative competence. It makes sense to ensure that the task does not have a high level of technological and linguistic complexity at the same time. To have learners struggling with both the technology and the task content makes the task harder to complete successfully. In the case of the QR codes task with the intermediate group, the level of technological complexity is not particularly high once one understands what QR codes are and how a QR code reader works. The task itself encouraged learners to share information about their hobbies in written (and then in spoken) form, and was not too demanding linguistically for this level.

Four types of MALL are suggested by Pegrum (2014), each focused primarily on one of these areas:

• content MALL: for example, self-study content such as listening to podcasts or reading e-books;
• tutorial MALL: behaviorist activities, such as vocabulary flashcard apps, pronunciation/repetition apps, quizzes, and games;
• creation MALL: activities including the creation of text, images, audio and/or video; and
• communication MALL: for example, the sharing of created digital artifacts via mobile devices, either locally and/or internationally via networked groups.

The first two types of MALL, content and tutorial, fit with a behaviorist theory of learning, in which learners consume content, and may reproduce it in very controlled contexts. The second two types, creation and communication, clearly sit more comfortably with a communicative or task-based approach to teaching and learning. These four types of MALL are not mutually exclusive, and it is possible to have several types appearing within the same activity. What is clear, however, is that creation and communication MALL require the guidance of a teacher. Therefore, they are arguably more suited to classroom-based tasks (or to e-learning contexts) where teachers are able to provide guidance and feedback. Thus we have a fifth parameter to keep in mind in our design of mobile-based communicative classroom tasks: To what extent does the mobile-based task allow for creation and communication, or to what extent does it rely on content or tutorial approaches? In the communicative classroom, I would argue that although all four approaches may be present, they should be significantly weighted towards creation and communication MALL. However, for learners new to using their mobile devices for language learning, or from educational contexts in which behaviorist approaches are preferred or are the norm, it may make sense to spend some time initially on content and tutorial MALL tasks. Some of these activities may take place during class, and others outside of class (e.g., for homework), but both would come before introducing more communicative MALL tasks. The one learner in my intermediate week-two group who found it difficult to relate to and approach the tasks may have benefited from this sort of staged approach, along with focused learner training. And thus we come to a sixth parameter for effective communicative mobile task design: educational/learning context.
In monolingual contexts, this decision is much easier to make, and an appropriate type of MALL can be introduced at the initial stages. Based on the ideas discussed above, we see six key parameters emerging for the design of communicative tasks using mobile devices in the classroom:

• hardware (device affordances, including features and connectivity capabilities);
• mobility (devices, learners, or learning experience);
• technological complexity (related to learners' technological competence);
• linguistic/communicative competence;
• content, tutorial, creation, or communication MALL; and
• educational/learning context (related to learners' expectations and preferred learning styles).

By keeping these six parameters in mind, and by ensuring a fit with the syllabus, effective mobile-based communicative classroom tasks can be designed and sequenced.

Issues of concern

There are a number of caveats connected to this research project, which need to be borne in mind when assessing the results. As noted earlier, this research study was very small and was carried out with only two groups of EFL students, with class sizes of 12 and eight students. In addition, I was only able to work for one week with each group, for a total of 15 hours. It is unlikely that the learners continued to use mobile devices in and/or out of class after the project, as other teachers in the institution were not part of it. The rapid turnover of learners and regularly changing teachers typical of the context of a UK language school in summer meant that working with a number of teachers on a longer-term project exploring the ongoing implementation of mobile devices was simply not possible. In addition, working with an international group of adult learners (aged 16+) in a multilingual context is in many ways atypical of much EFL teaching around the world, which tends to take place in monolingual contexts. The multilingual context of the study meant that interesting challenges arose, such as a reluctance to take part in communicative tasks and the related need for learner training. Although such obstacles affected only one student, they do highlight the importance of the educational/learning context when it comes to implementing certain types of tasks and approaches. Working with monolingual groups in a number of different contexts would allow for more context-specific decisions to be made about mobile task design, and especially sequencing. Furthermore, in this particular study, given the low language proficiency of both groups, it was difficult to solicit detailed feedback from the learners about their experiences in English.
When the researcher speaks the L1 (first language) in a monolingual context, learners with low proficiency in English can provide much more complex and nuanced reactions to the use of mobile devices, as they are able to express themselves in their L1s. But perhaps most importantly, a major drawback of this study was its ad hoc nature. If the use of mobile devices is to be well integrated into learning, and if students are to fully reap the benefits, there needs to be institutional support. A teacher's work in the classroom should form part of a wider mobile strategy within an institution's educational plan. More rigorous and longitudinal research can then be carried out in this particular context over time, and the learners' experiences of mobile device use are less disjointed. However, it is hoped that this study – with its limitations kept firmly in mind – has helped foreground some of

the key parameters involved in designing and sequencing classroom-based communicative tasks for mobile and handheld devices.

Future directions

Given that MALL/MALU research is still in its infancy, one potential area of future research might involve investigating frameworks for the design of mobile-based tasks in education. Keeping in mind the six parameters discussed earlier may help educators decide on the most effective tasks for any given context, and help with the sequencing of these tasks. Additional context-specific parameters may be relevant in other fields of education. However, for educators to be able to implement meaningful and communicative mobile-based tasks with learners, they need to first ensure that they have the technical and technological competence needed to work with mobile devices of any kind. Practitioners must also stay current with future developments in devices. Teacher training programs need to ensure that the 'technological competence' described in Mishra and Koehler's (2006) TPACK model is given equal weight alongside content and pedagogical knowledge. Teachers need to be able to work not just with language content; they need to be (co-)designers of effective learning experiences for their learners, whether using technology or not (Laurillard, 2012). In the words of a whitepaper from the STELLAR (Sustaining Technology Enhanced Learning at a LARge scale) project:

The challenge of education is no longer about delivery of knowledge: it is about designing environments, tools and activities for learners to construct knowledge. In order for educators to effectively orchestrate learning within this landscape they need to perceive themselves, and indeed to be perceived by society, as techno-pedagogical designers. (Mwanza-Simwami et al., 2011, p. 5, as cited in Pegrum, 2014)

In addition, teachers need to feel comfortable with a wide range of digital literacies, and know how to leverage these in the classroom (Dudeney, Hockly, & Pegrum, 2013). And an increasingly key digital literacy is mobile literacy. As Parry (2011) notes, “The future our students will inherit is one that will be mediated and stitched together by the mobile web” (p. 16). If training programs are not equipping educators to deal with the future’s demands, then we do a disservice not just to teachers, but also to learners.

Acknowledgments

This chapter is reprinted with the permission of TIRF, The International Research Foundation for English Language Education. For further information, please visit www.tirfonline.org. The original paper can be found at http://www.tirfonline.org/english-in-the-workforce/mobile-assisted-language-learning/designer-learning-the-teacher-as-designer-of-mobile-based-classroom-learning-experiences/.


References

Dudeney, G., Hockly, N., & Pegrum, M. (2013). Digital literacies. London: Routledge.

Egbert, J. L., & Petrie, G. M. (Eds.). (2005). CALL research perspectives. Mahwah, NJ: Lawrence Erlbaum Associates.

Hockly, N. (2012). Tech-savvy teaching: BYOD. Modern English Teacher, 21(4), 44–45.

Hockly, N., & Dudeney, G. (2014). Going mobile: Teaching and learning with handheld devices. London: Delta Publishing.

Jarvis, H., & Achilleos, M. (2013). From computer assisted language learning (CALL) to mobile assisted language use. TESL-EJ, 16(4), 1–18. Retrieved 12 August, 2013, from http://goo.gl/Uuq5dr.

Kukulska-Hulme, A. (2009). Will mobile learning change language learning? ReCALL, 21(2), 157–165. Retrieved 12 August, 2013, from http://goo.gl/Pbv5n.

Kukulska-Hulme, A., Sharples, M., Milrad, M., Arnedillo-Sánchez, I., & Vavoula, G. (2009). Innovation in mobile learning: A European perspective. International Journal of Mobile and Blended Learning, 1(1), 13–35.

Laurillard, D. (2012). Teaching as a design science: Building pedagogical patterns for learning and technology. New York, NY: Routledge.

Levy, M. (forthcoming 2016). Researching in language learning and technology. In F. Farr & L. Murray (Eds.), Routledge handbook of language learning and technology. New York, NY: Routledge.

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Mwanza-Simwami, D., Kukulska-Hulme, A., Clough, G., Whitelock, D., Ferguson, R., & Sharples, M. (2011). Methods and models of next-generation technology enhanced learning. In Alpine Rendezvous, March 28–29, La Clusaz, France. Retrieved 12 August, 2013, from http://goo.gl/G5bLod.

Pachler, N. (2009). Research methods in mobile and informal learning: Some issues. In G. Vavoula, N. Pachler, & A. Kukulska-Hulme (Eds.), Researching mobile learning: Frameworks, tools, and research designs. New York, NY: Peter Lang.

Parry, D. (2011). Mobile perspectives on teaching: Mobile literacy. EDUCAUSE Review, 46(2), 14–18. Retrieved 12 August, 2013, from http://goo.gl/AKizd.

Pegrum, M. (2014). Mobile learning: Languages, literacies & cultures. Basingstoke & London, UK: Palgrave Macmillan.

Sharples, M., Milrad, M., Arnedillo-Sánchez, I., & Vavoula, G. (2009). Mobile learning: Small devices, big issues. In N. Balacheff, S. Ludvigsen, T. de Jong, A. Lazonder, S. Barnes, & L. Montandon (Eds.), Technology enhanced learning: Principles and products (pp. 233–249). Dordrecht, The Netherlands: Springer.

Traxler, J. (2009). Learning in a mobile age. International Journal of Mobile and Blended Learning, 1(1), 1–12.

Vavoula, G., & Sharples, M. (2009). Meeting the challenges in evaluating mobile learning: A 3-level evaluation framework. International Journal of Mobile and Blended Learning, 1(2), 54–75.

Viberg, O., & Grönlund, A. (2012). Mobile assisted language learning: A literature review. In M. Specht, M. Sharples, & J. Multisilta (Eds.), mLearn 2012: Proceedings of the 11th International Conference on Mobile and Contextual Learning 2012, Helsinki, Finland, October 16–18 (pp. 9–16). Retrieved 12 August, 2013, from http://goo.gl/mPOFs2.

11 Mobile and massive language learning

Timothy Read
Universidad Nacional de Educación a Distancia, Spain

Elena Bárcena
Universidad Nacional de Educación a Distancia, Spain

Agnes Kukulska-Hulme
The Open University, United Kingdom

Mobile-assisted language learning

As Evans (2013) notes, "The world is mobile!", and data he presents show that over 1,250 million smartphones and tablets were sold in 2013 (compared with just over 250 million desktop computers). These handheld devices run small programs, called apps, that are typically downloaded from online stores. There have been over 6 billion such downloads for the two principal operating systems, iOS and Android. Mobile devices, like desktop computers, when used online, can arguably be said to serve two functions: access to information resources and shortening distances between people. Over 50 social messaging apps have had more than a million downloads on Google Play, and over 14 billion messages are sent on WhatsApp a day (Evans, 2013). Mobile devices represent a step toward ubiquitous information access and what will increasingly become "wearable technology." Early examples include smart watches and headsets that provide information to users without the need to take out a smartphone or tablet. Unlike previous educational technology, such as learning management platforms, it is not necessary to persuade and cajole students into using these devices. The great majority already have smartphones and even tablets, so they are already online using Web 2.0 environments from their mobile devices. Beyond the use of communication apps, the majority of activity on mobile devices revolves around social networks or social media. Kaplan (2012) classifies such tools and sites into four types: Space-timers (for location-sensitive messages, e.g., Facebook Places or Foursquare); Space-locators (for messages that can be read later by others at a given location, e.g., Yelp or Qype); Quick-timers (for time-sensitive messages, e.g., Twitter or Facebook updates); and Slow-timers (traditional social media content, e.g., YouTube or Wikipedia).
Given the presence of students in these networks and as consumers/producers of such media, it is clear that if a user is already studying something or on a course, then they would likely try to carry on their studies from their mobile devices (cf. Kukulska-Hulme, Traxler, & Pettit, 2007 and Pettit & Kukulska-Hulme,

2007). "Mobile learning" (henceforth ML) has been defined by Crompton et al. (cited in Crompton, 2013, p. 82) as "learning across multiple contexts, through social and content interactions, using personal electronic devices." Earlier definitions were less elaborate, reflecting a time when the field was in its infancy; for example, Geddes (2004) identifies it with learning "anywhere, anytime," making use of the tools that mobile devices have; Traxler (2005, p. 262) defines it as "any educational provision where the sole or dominant technologies are handheld or palmtop devices," while Kukulska-Hulme and Shield (2008, p. 273) define it as "learning mediated via handheld devices," in contrast to computer-assisted language learning taking place on desktop computers. In one sense, ML is a natural extension of e-Learning, where essentially face-to-face courses were moved from classroom settings to learning management systems that students could access from any networked computer, such as ones they have at work or at home. It is not suggested that students use their mobile devices in these courses as a substitute for standard computers, whose keyboards and large screens make them ideal workstations for many types of learning. However, what handheld technology offers is a degree of immediacy: if a student is participating in an online debate related to the course, or waiting for a new online resource that a teacher has promised would be available, then the student can connect to the course during his/her day, whenever some free time presents itself, and is not restricted to being in front of a computer. Within ML there has been a consolidated effort toward the use of this technology for language learning, giving rise to Mobile-Assisted Language Learning (henceforth MALL) (Kukulska-Hulme & Shield, 2008). Here the students are able to access and interact with second language materials and communicate from their smartphones or tablet devices.
According to Burston (2013), 345 publications on different applications of mobile devices to language learning were released over a period of more than twenty years. While a great deal has been learned about their suitability for language learning, they have not been adopted in a widespread fashion for this task; there is no single "killer MALL app" that is widely used. This is argued to be due to the difficulties in finding apps that can run on a range of mobile devices and can be used for different types of language learning activities. However, over the past decade or so, mobile devices have become more powerful and sophisticated, and therefore potentially valuable for second language learning. Furthermore, an improvement in network bandwidth for such devices has also opened the door for them to be used as clients for online courses, social networks and social media, thereby enhancing the mobility that they offer to language learners.

Language MOOCs

Alongside mobile learning, MOOCs have also been argued to hold great potential for languages (Bárcena, Read, & Jordano, 2013; Ventura, Bárcena, & Martín-Monje, 2013; Beaven, Codreanu, & Creuzé, 2014). There are different types of courses that are loosely referred to as MOOCs (cf. Clark, 2013; Watters,

2012). The most widely accepted difference is between xMOOCs and cMOOCs, where the former are like standard online courses with more students (leading to a diverse student population), and the latter (connectivist MOOCs) revolve around large distributed learning communities sharing content and constructing knowledge (Morrison, 2013). Language MOOCs (henceforth, LMOOCs) can be seen as the application of the MOOC framework to foreign language learning, including elements that are essential for effective development, namely: structured educational content together with activities, resources, and appropriate social media tools and technologies. A limitation of face-to-face (henceforth, F2F) classrooms and most e-Learning courses held on closed-access institutional platforms for language learning is that there are few opportunities for interaction in the target language. We argue that if LMOOCs are correctly structured and managed, they can represent a bridge between formal and informal learning to assist the development of second language competences, particularly the productive and interactive ones. These courses are not without their critics. Bárcena and Martín-Monje (2014) and Read (2014) highlight difficulties related to the changing role of teachers in LMOOCs (changing from the 'sage on the stage' to the 'guide on the side,' and not typically able to interact directly with the students). New communication, problem-solving, and motivational strategies are required to provide effective feedback given such unbalanced teacher–student ratios. A further difficulty present in such large online courses is the very heterogeneous nature of the student group and the different levels of language communicative competences in such groups. The tasks that some students will find extremely challenging will provoke boredom in others.
Some authors go even further and question the very nature and suitability of MOOCs for second language learning. Romeo (2012, p. 2), for example, argues that such self-directed study cannot be effective because it does not make students proactive, and furthermore, such courses provide few opportunities for actual communication with native speakers. However, given that language learning combines both the acquisition of theoretical knowledge and the development of practical and skill-based competences, it falls into the middle of a potential scale of "intrinsic MOOC suitability" (cf. Bárcena & Martín-Monje, 2014). As Bárcena (2009) notes, a range of competences, skills and data need to be finely intertwined as learning progresses. This requires both cognitive involvement (using high-order mental skills) and social interaction (with competent speakers of the target language) (Read, Bárcena, & Rodrigo, 2010). Romeo (2012, p. 2) is correct in his assumption that a student would advance more quickly if s/he had access to native speakers to undertake relevant second language learning activities. However, very few have access to such rich and controlled learning scenarios. For students actually living in countries where the target language is used, such interaction happens on a daily basis, in many cases as part of personal and professional life. In other countries and contexts such interaction is all but impossible, unless paid for as a service (so-called conversational classes).

The majority of language teaching/learning takes place in small F2F classroom situations or online distance-education courses. Such learning is instructional, with few opportunities for interaction in the target language. Furthermore, many people wanting to improve their foreign language competences do not even have access to any kind of language course. LMOOCs can be seen to represent a real opportunity for learning, both for students in other language courses (since they complement them and offer possibilities for interaction not present in most closed courses) and also for people not able to participate in other courses. We argue that such MOOCs promote student interaction and communication in the target language (including with non-natives), and enable the same (meta)cognitive strategies to be deployed as would be used in authentic communicative situations (e.g., reasoning, contrasting, enquiring, justifying, reflecting, etc.). However, for this to happen, careful course design needs to be employed so that students do not internalize erroneous language, while at the same time being provided with activities containing some degree of flexibility and adaptability (e.g., letting the student decide what s/he wants to learn, provided a minimum of 80% of the activities are done and passed, for certification purposes). The degree to which a MOOC platform can be accessed, and its resources and tools used, from mobile devices varies between different course providers. There are two different ways to achieve this end: by modifying the course Web pages and resources so that they can be accessed from mobile devices (ensuring that they are legible on small screens), or by developing specific apps that encapsulate the access and interaction process, thereby ensuring that device-specific limitations are overcome (and also, in some cases, providing offline access to course resources).
Examples of the former include edX and Khan Academy, which have resources within their courses that can be accessed from mobile devices; FutureLearn offers complete courses that can be undertaken from a mobile device. An example of the latter is Coursera's app, which enables students to participate in courses and also to stream or download the course videos; however, a computer is recommended for the activities and homework. Furthermore, new collaborative projects are beginning to appear that specifically target mobile devices, for example, the edX partnership with Facebook for African mobile course access. The Indian My Open Courses initiative was developed to be mobile friendly from the very start.

Questions regarding mobile access to a language MOOC can be seen as part of a larger question of heterogeneous course access across different devices and contexts. As Evans (2013) noted, life is mobile, but this applies not only to people moving around with their devices; it also refers to the desire to use different computers and devices in different locations to access the same networks, media and courses, as if no device change had taken place. Such continuity of informational access across different devices and contexts is referred to as seamlessness in the area of ubiquitous and pervasive computing (Chalmers & MacColl, 2003). In the context of online courses, a student can start, for example, to read an online document or undertake a given activity from a computer at home and

Mobile and massive language learning 155

then, later on at work, connect again from a different computer to carry on with the work, or even to check whether a question they raised has already been answered. The presence of mobile devices complicates this example. The student might start off at home, as above, but then use his/her smartphone to access the course to check for updates to a question (while on public transport to/from work), and connect back later from a tablet (maybe from the sofa in the evening) to continue working in the course. The potential difficulties that such access and work patterns create are not only an issue for technologists, responsible for correct and seamless functionality across different devices, but also for course developers. It is argued that when developing a MOOC that will be accessed from different platforms, including mobile devices, there are three options available: the course is designed so that it cannot be used from such devices, and the students are warned to that effect a priori; nothing is said, and the students can explore what they can and cannot do from such devices; or the course is designed specifically for heterogeneous access, taking into account screen size, file formats, activity structure, etc. Such course design and tool selection is still very much an activity for future research.

In part, the actual type and focus of a given MOOC influences how effectively mobile technology can be used. In second language learning, it is not just a case of acquiring new knowledge but, more specifically, of developing and refining a series of related language competences that are used to comprehend and produce meaning orally and textually in the target language. As such, the communicative and social parts of LMOOCs are not a means to an end, as in MOOCs in general, but actually a central part of the second language learning (2LL) process, directly reinforcing competences involved in comprehension and production.
Hence, mobile devices, originally developed as communication tools, offer many affordances for such courses. They may not be suitable for all activities (for example, reading a long document) but may be ideal for certain tasks, which can be undertaken even more easily than from desktop computers. It would seem logical, therefore, to consider a priori a mobile device as a tool that can complement computers for task types that profit from mobility and context, and/or increase the frequency with which a student is connected.

MALMOOCs

We argue that mobile devices are particularly potent tools for students on LMOOCs, since they complement the learning experience by providing three affordances, namely as: portable course clients, mobile sensor-enabled devices, and powerful small handheld computers.

Firstly, as portable course clients, they offer anytime-anywhere access. Students can continue to participate in their courses, making the most of the time they have as they move around every day. This would generally lead to more frequent interaction within the course, thereby extending, and hopefully improving, communication and collaboration. Such interaction has been argued by Bárcena and Martín-Monje (2014) to be essential

for second language learning, since it would take place in the target language, enabling valuable application of what has been learnt during the course. It is not just a question of frequency but also of “fluidity”: since the time between connections to the course would arguably decrease, the actual practice would be more continuous and gradual, providing a more fluid learning experience. Part of this experience comes from the continued use of students' own devices, referred to as BYOD (bring your own device): a person is more familiar with his/her own device (and the apps on it) than with one provided by an educational institution, and hence the sense of ownership extends from the device to the content used on it (de Waard, 2013).

Secondly, as mobile sensor-enabled devices, modern smartphones enable students to interact with the world around them, taking photos, making recordings, and obtaining geographical data and other location-specific information. Such activity can enrich and complement standard online learning activities (e.g., take a photo of a specific type of object, label its parts and upload it to the course for fellow students to work with). Furthermore, the mobile device can also form part of immersive augmented reality learning scenarios, where context-specific language scaffolding (Bárcena & Read, 2004) can be provided as required. The results can be logged on the mobile device for later analysis in subsequent learning activities. This is an important complement to any online course, since it enables students to “blur” the edges between online learning and real-world activities, thereby giving rise to what can be referred to as “generalized digital living,” where informational ubiquity overlaps into a continual learning process. Kilickaya (2004) argues that second language learning activities should be authentic, or at the very least realistic, based on real-world situations and scenarios.
A smartphone or tablet can be considered the digital equivalent of a Swiss army knife, containing functions that are particularly useful for immersive 2LL experiences. If a student is in a country where the target language is used, then his/her second language comprehension/production can be scaffolded in a just-in-time fashion. If students are in their own countries, such activities can carry the learning from the online course into the real-world context. For example, students can search for specific objects in their environments and then photograph them for later use in the LMOOC, as part of role-playing activities or as a way of comparing and contrasting cultural differences in a given area of life. Data gathering undertaken with mobile devices does not have to be limited to online courses but can form part of any classroom-based language activity. However, the advantage it offers for LMOOCs is that it represents a bridge between the digital and the real world, extending the scope of learning from the online course to the everyday events of the students' lives. Extending online learning activities to include real-world tasks using mobile devices encourages students to dedicate more time to the course, with an increase in associated second language usage and a corresponding improvement in related competences.

Thirdly and finally, smartphones and tablets are powerful handheld computers containing apps that provide general tools to complement

second language competences and online course activities. There are already a considerable number of MALL apps for developing language competences that can be used as part of LMOOCs (Godwin-Jones, 2011). They include apps for training in vocabulary, sentence structure, pronunciation, etc. There are also other apps that, while not specifically designed for second language learning, can be used to support it: for example, apps that permit the manipulation of sensor data (photos, audio and video recordings, GPS locations), basic office-type apps (text processors, spreadsheets, etc.), miscellaneous multimedia streaming and playback apps, games, social media apps, and so on. Such apps can be used in an LMOOC to complement learning but cannot be made mandatory for the course, since not all students will have the corresponding mobile device.

This difference in access to mobile devices represents an added degree of complexity for a language teacher when designing the course. How should the apps be used in the course? What can students who do not have access to such devices do to compensate? However, such questions may become less important as time goes on since, as Evans (2013) notes, it is estimated that over 4 billion people will be connected to the Internet by smartphones by 2020 (cf. over 7 billion people alive on the planet). Therefore, developing LMOOCs that are principally intended for deployment on mobile devices may become more of a reality, since the number of students unable to participate because they do not have a smartphone or tablet is likely to be very low. Such Mobile-Assisted LMOOCs (henceforth MALMOOCs) are a real possibility, and one that could improve the language learning experience significantly by extending the time and activities that a student undertakes into his/her everyday life.
It should be noted that, as has previously been the case with all technological advances applied to learning, methodological considerations will be key for MALMOOCs to be effective. In the authors' experience, a scaffolded spiral approach is most effective for second language learning involving the use of technology (Bárcena, 2004). Such an approach moves students from teacher-led instruction to self-directed learning, and back again, in a circular fashion, combining an instructivist stage with subsequent social-constructivist ones. LMOOCs are different from standard online courses in that the teachers are not typically present once the course starts. Hence, the course structure must take into account difficulties that can arise at different points, and possible learning paths need to be identified in order to provide adequate and relevant scaffolding.

The way in which mobile devices can be used in a MALMOOC can be illustrated by an example that goes beyond merely accessing the course. The LMOOC in question is on Professional English (B1 level). In one section, the importance of intercultural factors in job adverts is studied. The students are provided with videos (with written scripts for scaffolding purposes) and examples that illustrate and explain how job adverts are typically structured and what type of sublanguage is used. They also undertake both individual and collaborative activities. The former are closed activities whose objective is to help the students

internalize basic concepts; in the latter, they are asked to prepare a job advert, working in small groups. For this specific activity the following description is presented:

In the job advertisements in some countries it is common to find references to what could be considered ‘personal’ traits, requiring candidates to be of a specific sex, or within a certain age range, have certain physical characteristics, follow a given religion, etc.

The students are provided with some open reflective questions to help them structure their participation in the peer interaction activity: “What do you think of this? Do you consider this procedure discriminatory or not? Can you identify any circumstances (types of jobs, etc.) where these personal requirements could make sense and be appropriate?” The students are then instructed to use employment websites and social media to locate evidence that supports their ideas, opinions and arguments. This can be done from both desktop computers and mobile devices. Furthermore, specific online job hunting services like LinkUp and Monster Jobs and their corresponding mobile apps (the LinkUp app1 or Monster app2) can be used for this purpose. As the task advances, the students can share their opinions in the forum and/or in small groups. Such collaboration can help the students refine concepts and terms that are relevant when searching for evidence, for example: “sexist job adverts,” “job adverts and age discrimination,” “religious job adverts,” etc. Subsequently, the activity can be extended beyond the online course, and the students can be instructed to look in their daily life for job adverts which do or do not include physical characteristics, preferably in the target language (in this case English). Gamification can be applied to the task by quantifying the number of different jobs found, or identifying the oddest, funniest, etc.
Mobile devices can be used here to take photographs of examples that are subsequently shared online. Even if such examples are not in the target language, they can still be translated before being shared in the MALMOOC. Finally, the students can work in small groups to generate some conclusions on the reflective questions presented earlier and prepare a final written summary that can be compared and contrasted with what other groups have done. If each student reviews the work of two other groups, then some peer feedback will be available. The students can then demonstrate that they have understood the relevant intercultural competences and related terminology by undertaking a closed test previously elaborated by the course designer.

This example combines activities that can be undertaken from desktop computers and from a mobile device and, as such, is not a pure MALMOOC, but it does illustrate how different types of device can be used. However, the use of smartphones or tablets while the students are undertaking their daily activities (for example, to record job card summaries found in the windows of employment agencies or on supermarket noticeboards) can help extend and consolidate different conceptual and skill-based language competences. This kind of overlap

between online learning and real-world activities can provide students with access to situated and task-based learning that goes beyond what is contained in the online course. In general terms, any exposure to the target language is beneficial to students. Since the use of mobile devices provides students with an opportunity to integrate learning activities with their current use of communication and social media, they have more opportunities to internalize language structures and practice what they see and hear. Furthermore, as was noted above, mobile phones were originally conceived as communication devices before their increasing sophistication converted them into digital Swiss army knives. They are therefore useful for students who want to go beyond standard textual production and record audio/video that can be used as part of an activity within the MALMOOC. Such audio recording, or similarly conferencing (if the students connect in real time), has been shown to be effective for second language learning (e.g., Hampel & Hauck, 2004).

Since these language courses have high student numbers, there are arguably many opportunities for students to interact in the target language via their mobile devices. Often the distributed nature of the student population in distance learning makes it hard for students to take part in synchronous communication and collaboration; here, given the large number of participants, it is more likely that at any given time people are available online. By extending the course into everyday activity, when students find themselves with some unexpected free time they can pick up their smartphones or tablets and begin to interact with their peers.
The mobility factor present here offers, amongst other things, the advantages of increased access time, richer learning scenarios involving real-world situations and objects, and greater opportunities for students to interact and communicate with their peers. Further research is needed to explore the way in which students actually use their mobile devices in relation to digital media and online courses. The results will help language teachers to map out effective learning routes for students participating in MALMOOCs.

Notes

1 http://www.linkup.com/mobile
2 http://career-services.monster.com/mobile-apps/home.aspx

References

Bárcena, E., & Read, T. (2004). The role of scaffolding in a learner-centered tutoring system for business English at a distance. In Proceedings of the Third EDEN Research Workshop, University of Oldenburg, Germany.
Bárcena, E. (2009). Designing a framework for professional English distance learning. In I. K. Brady (Ed.), Helping people to learn foreign languages: Teach-niques and teach-nologies (pp. 89–103). Murcia: Quaderna Editorial.
Bárcena, E., Read, T., & Jordano, M. (2013). Enhancing social interaction in massive professional English courses. The Online Journal of Distance Education and E-Learning, 1(1), 14–19.

Bárcena, E., & Martín-Monje, E. (2014). Language MOOCs: An emerging field. In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 1–10). Warsaw: Mouton de Gruyter.
Beaven, T., Codreanu, T., & Creuzé, A. (2014). Motivation in a language MOOC: Issues for course designers. In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries. Warsaw: Mouton de Gruyter.
British Council (2014). Understanding India: The future of higher education and opportunities for international cooperation. http://www.britishcouncil.org/sites/britishcouncil.uk2/files/understanding_india_report.pdf.
Burston, J. (2013). Mobile-assisted language learning: A selected annotated bibliography of implementation studies 1994–2012. Language Learning & Technology, 17(3), 157–224.
Chalmers, M., & MacColl, I. (2003, October). Seamful and seamless design in ubiquitous computing. In Workshop at the Crossroads: The Interaction of HCI and Systems Issues in UbiComp (p. 17).
Clark, D. (2013). Taxonomy of 8 types of MOOC. FORUM TIC EDUCATION. Retrieved June 5, 2014, from http://ticeduforum.akendewa.net/actualites/donald-clark-taxonomy-of-8-types-of-mooc.
Cormier, D., & Siemens, G. (2010). Through the open door: Open courses as research, learning, and engagement. EDUCAUSE Review, 45(4), 30–39.
Crompton, H. (2013). A historical overview of mobile learning: Toward learner-centered education. In Handbook of mobile learning. Florence, KY: Routledge.
Daniel, J. (2012). Making sense of MOOCs: Musings in a maze of myth, paradox and possibility. Journal of Interactive Media in Education, 3. Retrieved June 4, 2014, from http://jime.open.ac.uk/jime/article/view/2012-18.
de Waard, I. (2013). 20 strategies for learner interactions in mobile #MOOC. Retrieved June 4, 2014, from http://ignatiawebs.blogspot.com.es/2013/03/mooc-research-20-strategies-to-increase.html.
Downes, S. (2008). Places to go: Connectivism and connective knowledge. Innovate: Journal of Online Education, 5(1).
Downes, S. (2012a). Half an hour: Creating the connectivist course. Retrieved June 4, 2014, from http://halfanhour.blogspot.pt/2012/01/creating-connectivist-course.html.
Downes, S. (2012b). Half an hour: What a MOOC does – #Change11. Retrieved June 4, 2014, from http://halfanhour.blogspot.com.es/2012/03/what-mooc-does-change11.html.
Evans, B. (2013). Mobile is eating the world. Benedict Evans Blog. Retrieved June 4, 2014, from http://ben-evans.com/benedictevans/2013/11/5/mobile-is-eating-the-world-autumn-2013-edition.
Evans, B. (2015). Twitter post. pic.twitter.com/FJKP6XylLA.
Geddes, B. J. (2004). Mobile learning in the 21st century: Benefit for learners. Knowledge Tree E-Journal, 6.
Godwin-Jones, R. (2011). Emerging technologies: Mobile apps for language learning. Language Learning & Technology, 15(2), 2–11.
Hampel, R., & Hauck, M. (2004). Towards an effective use of audio conferencing in distance language courses. Language Learning & Technology, 8(1), 66–82.
Kaplan, A. M. (2012). If you love something, let it go mobile: Mobile marketing and mobile social media 4x4. Business Horizons, 55(2), 129–139.

Kilickaya, F. (2004). Authentic materials and cultural content in EFL classrooms. The Internet TESL Journal, 10(7), 1–6.
Kukulska-Hulme, A., Traxler, J., & Pettit, J. (2007). Designed and user-generated activity in the mobile age. Journal of Learning Design, 2(1), 52–65.
Kukulska-Hulme, A., & Shield, L. (2008). An overview of mobile assisted language learning: From content delivery to supported collaboration and interaction. ReCALL, 20(3), 271–289.
Morrison, D. (2013). The ultimate student guide to xMOOCs and cMOOCs. Retrieved June 4, 2014, from http://moocnewsandreviews.com/ultimate-guide-to-xmoocs-and-cmoocso.
Pettit, J., & Kukulska-Hulme, A. (2007). Going with the grain: Mobile devices in practice. Australasian Journal of Educational Technology, 23(1), 17.
Read, T. (2014). The architecture of language MOOCs. In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries. Warsaw: Mouton de Gruyter.
Read, T., Bárcena, E., & Rodrigo, C. (2010). Modelling ubiquity for second language learning. International Journal of Mobile Learning and Organisation, 4(2), 130–149.
Romeo, K. (2012). Language learning MOOCs? Hive Talkin. Retrieved June 4, 2014, from https://www.stanford.edu/group/ats/cgi-bin/hivetalkin/?p=3011.
Traxler, J. (2005). Defining mobile learning. In IADIS International Conference Mobile Learning. Suomen Kuntaliitto, 251–266.
Ventura, P., Bárcena, E., & Martín-Monje, E. (2013). Analysis of the impact of social feedback on written production and student engagement in language MOOCs. Procedia – Social and Behavioural Sciences, 141, 512–517.
Watters, A. (2012). Top ed-tech trends of 2012: MOOCs. Retrieved June 4, 2014, from http://hackeducation.com/2012/12/03/top-ed-tech-trends-of-2012-moocs/.


Part 4

Language massive open online courses


12 Academic writing in MOOC environments
Challenges and rewards

Maggie Sokolik
University of California, Berkeley, USA

Introduction: ESP, EAP, and academic writing

English for Specific Purposes (ESP), as a sub-branch of LSP, grew out of a perceived need for English language instruction that was instrumental in its approach. In other words, rather than arising from a specific learning theory or system of knowledge about the best way to learn a language, ESP arose from the idea of serving the specific needs of adult learners. One of the characteristics of ESP is that it is “not taught according to any pre-ordained methodology” (Dudley-Evans & St John, as cited in Johns & Price, 2014, p. 472). Belcher states, “ESP assumes that the problems are unique to specific learners in specific contexts and thus must be carefully delineated and addressed with tailored-to-fit instruction” (2006, p. 135). Of course, this view of ESP as merely comprising needs analysis and tailored content has been further complicated by more recent research into the role of community in the co-construction of knowledge. These complications have, in large part, been stirred by the position of English for Academic Purposes (EAP).

The broad concept of Academic English (AE) as a specialized register used in educational settings developed from research conducted in the late 1970s and early 1980s (DiCerbo, 2014). While the full history of EAP is not necessary here, it is logical that the subfield of Academic Writing has grown out of EAP. Research into Academic Writing has focused both on the writing tasks expected by professors across the curriculum at the academy (e.g., Horowitz, 1986) and on what students perceive their own needs to be (e.g., Leki & Carson, 1994). In addition, it has looked beyond the instrumental role of EAP to include a focus on critical thinking and the “heuristic functions of writing” (Leki & Carson, 1994, p. 91). More recently, the constructivist concept of the “co-construction” of knowledge in learner communities has played an important role in educational philosophy (cf. Gibbons, 2006).
It is this focus on learner communities, coupled with the growth of online affordances such as Web 2.0 tools, social media, and learning management systems (LMS), that has led us to the current state of Academic Writing in a technological environment: the growth of LMS use, as well as Massive Open

Online Courses (MOOCs), especially language MOOCs (LMOOCs). As Groves and Mundt state, “Technological advances have always played a significant role in second language teaching and acquisition, and they have generally been accepted as valuable tools in the classroom, for autonomous practice, for tutor–student communication and for research” (p. 113). These advances have led us from small online courses and blended classes to the current possibility of teaching Academic Writing in massive environments.

Academic writing and MOOCs

The humanities in general, and academic writing in particular, have been latecomers to the MOOC scene. When the first MOOCs were offered in 2008, most were focused on artificial intelligence, biology, and computer programming. It wasn't until 2012, “The Year of the MOOC” (Pappano, 2012), with the emergence of more platforms, such as Coursera, edX, FutureLearn, Open2Study, Instreamia, and iversity, among others, that we saw more humanities-oriented and writing MOOCs offered.

One obvious reason for this is the challenge to the way writing courses are typically conducted. A face-to-face writing class may have fewer than thirty students and incorporate drafting and instructor feedback, as well as extensive discussion of individual writing problems. By contrast and by definition, MOOCs cannot fulfill these same characteristics. As Balfour (2013, p. 40) states:

Some MOOCs have enrolled more than 150,000 students . . . consequently, the time an instructor spends teaching and evaluating work per student is very low in high enrollment MOOCs. MOOCs use computers to score and provide feedback on student activities and assessment and thus rely heavily on multiple choice questions, formulaic problems with correct answers, logical proofs, computer code, and vocabulary activities.

Given these limitations, writing MOOCs have been slow to develop, even now. However, they are offered, and are successfully using a variety of techniques to help learners from around the world improve their writing skills.

Survey of writing MOOCs

As of March 13, 2015, out of the roughly 2,000 courses listed on class-central.com, including past, current, and future MOOCs, 19 course offerings had a clearly identifiable writing focus. Eleven of them featured academic writing of some type, while the others dealt with journalism or creative writing. While this list is constantly changing, at the time of this writing College Writing 2x is, to my knowledge, the only MOOC whose focus is on writing for English language learners.

The College Writing 2x MOOC

In the fall of 2013, the first 5-week segment of a 15-week course, “Principles of Written English,” was launched. It was created on the edX.org platform and is supported materially by UC Berkeley, as well as by the U.S. Department of State through its recruitment of online moderators, some of whom conducted face-to-face meetings with students to support their learning in the MOOC. These meetings took place in different countries, often through the American Corner function of the U.S. Embassy, or at universities or other locations.

The initial enrollment of College Writing 2x exceeded 60,000 students. The second and third segments of the course experienced lower enrollment (roughly 30,000), but the second offering of the course, beginning in 2014–15, has seen enrollment grow to over 70,000 per five-week segment. The completion rate has been average, at around 10% (see Sokolik, 2014 for an interpretation of completion rates).

The course uses a combination of short videos, written materials, machine-scored quizzes, short peer-evaluated written assignments, forum discussions, and peer- and self-evaluated essays or other longer pieces of writing. It also has an active Facebook group page and uses the Twitter hashtag #cwp2x to communicate among students, moderators, and the instructor. From the sum of these experiences, we have gained insight into many of the successes, attitudes, and challenges that face teaching academic writing at scale.

MOOC challenges to academic writing

Although size and attrition are often seen as the main challenges to learning in MOOCs, in fact the more serious challenge to the teaching of Academic Writing comes from the functions of the available instructional tools and their attendant methodologies. The main tools used by designers – videos and text as content, discussion forums, assessment tools – and those that may be used by students – spelling and grammar checkers, Google Translate, and online resources – can create barriers to optimal collaboration and learning. The challenges listed below pertain primarily to xMOOCs, which have become the dominant way of delivering courses (cf. Siemens, 2013, for a thorough discussion of the distinction between xMOOCs and cMOOCs).

Content delivery

As has been pointed out by many scholars, the role of the instructor in a MOOC has changed from that of the typical classroom instructor (cf. Castrillo de Larreta-Azelain, 2014). The MOOC instructor is responsible for designing the learning experience via the course materials, from the content itself to the assessment of that content. Course material delivery in writing MOOCs can be offered through both video and written materials. While the classroom instructor may

rely on a textbook, conduct face-to-face discussions, and, in some ways, co-create the course materials by gauging student intake on a regular basis, most MOOCs do not allow for this type of flexible response to student learning. Because courses are designed in advance, with the software platform facilitating the release of materials (typically on a weekly schedule), there is little room for “customizing” the learning environment based on learner experience. Instead, the instructor must anticipate the needs of a learner group that could include high school students as well as seniors in their retirement. Adaptive learning technologies, for the most part, are still not part of the MOOC experience. All students, regardless of experience or proficiency, choose a MOOC and learn from the same materials at approximately the same pace.

Discussion forums

In addition to the content a course may have, most MOOCs offer forums for discussing the course materials, getting help with course assignments, or just socializing with classmates. According to Boettcher and Conrad, “[D]iscussion boards [are] the ‘campfire’ around which course community and bonding occur at the same time that content processing and knowledge development are happening” (2010, p. 85). That said, threaded discussion forum technology within LMS platforms bears a striking resemblance to the design of discussion forums of more than ten years ago. In many ways, these forums present a barrier to effective course communication among participants. For example, WebCT in the 1990s and early 2000s used a threaded structure that would seem familiar to most MOOC students today (see Figure 12.1). Topics typically appear in a list that can be sorted by date and marked as read or unread, and in more recent discussion lists, posts can be starred or followed.

In response to the frustration often felt at being overwhelmed by the discussions, we see students self-organize into groups that move to more familiar social media platforms, such as Facebook and Twitter, where popular features, such as being able to follow particular participants (not just certain posts), make it easier to form smaller communities within the large course. Furthermore, MOOC platform FAQs and course guides give students tips and guidance on how to make unmanageable discussion boards more manageable. However, it seems the burden should be on platform developers to create discussion spaces that leverage the affordances of current, more user-friendly social media.

A larger issue concerns the educational value of discussion forums. As Lander states:

[W]hile their potential is widely recognised, the implementation of online discussions remains problematic . . . and claims about their effectiveness largely unsubstantiated. . . .
Educators reported that these discussions were not meeting expectations in two main respects: students’ failure to reach higher levels of knowledge construction and their sometimes negative emotional reactions to participation. (2014, p. 42)

Academic writing in MOOC environments 169

Figure 12.1  Threaded forums: WebCT 2002 vs. Coursera 2015

Of course, in light of Lander’s research, one could abandon or minimize the use of discussion forums within the MOOC platform. However, while some MOOCs could be effective with minimal interaction in student forums, I would argue that discussion forums are at the heart of Academic Writing MOOCs, standing in for class discussion and, more importantly, serving as a writing task in and of themselves.

Assessment tools

Because of the size of most MOOCs, as well as their “community structure,” instructor assessment of student writing is neither feasible nor commonly used. If instructors assess student work at all, it is often done anonymously through the peer system, and that assessment does not receive more weight than the other types of assessment available. At present, the most commonly used assessment types in a MOOC environment are self-assessment, peer assessment, and machine scoring. Each of these types is examined here.

Peer assessment

Peer assessment is commonly used in MOOCs where open-ended responses are required, whether for short answers or longer essay-length writing. It is often met with mixed feelings from both instructors and students, as Liu and Sadler (2003, p. 194) observe:

[T]he biggest concern second language (L2) writing teachers have expressed about using peer review activities is whether peer comments in fact help students write better papers. This concern raises two fundamental questions: First, do L2 writing students have sufficient linguistic, content, and rhetorical knowledge to give their peers constructive feedback on their drafts? Second, are these students able to modify their texts based on their peers’ comments?

While students also express these concerns, whether in face-to-face classes or in MOOCs, some see the obvious benefit to their writing development. A student’s blog from College Writing 2.2x (the second 5-week part of the course) reports:

Part of the course consists of writing exercises and an essay. These are partly assessed by other students taking part in the class. For me, this was a refreshing experience. Though my scores remained near to perfect, my peers have learned me a lot on this course. [sic] On the other hand, I also had to assess other students’ work. In doing so, I became more aware of the things I knew myself, and I felt happy to be able to assist others in their learning. (Hendrikx, 2015)

Whether students see its value or not, peer assessment remains one of very few options for assessing writing in MOOC environments. The use of detailed rubrics, as well as student “training” in how to conduct peer assessments, is a common feature in MOOCs using this technique.
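The mechanics of rubric-based peer assessment can be sketched briefly. Platforms typically collect several independent peer ratings for each submission and combine them into a single score; the sketch below (in Python) takes the median for each rubric criterion so that a single overly harsh or generous rater cannot dominate the result. The rubric categories, point values, and the median rule here are illustrative assumptions, not the scheme of any particular platform or course.

```python
from statistics import median

# Hypothetical rubric: criterion name -> maximum points (illustrative only).
RUBRIC = {"thesis": 3, "organization": 3, "evidence": 2, "mechanics": 2}

def aggregate_peer_scores(ratings):
    """Combine several peer ratings into one score per criterion.

    `ratings` is a list of dicts, one per peer reviewer, mapping each
    rubric criterion to the points that reviewer awarded. The median is
    used so that one outlier reviewer cannot dominate the final score.
    """
    combined = {}
    for criterion in RUBRIC:
        combined[criterion] = median(r[criterion] for r in ratings)
    combined["total"] = sum(combined[criterion] for criterion in RUBRIC)
    return combined

# Three hypothetical peer reviews of the same essay.
peers = [
    {"thesis": 3, "organization": 2, "evidence": 2, "mechanics": 1},
    {"thesis": 2, "organization": 2, "evidence": 1, "mechanics": 2},
    {"thesis": 3, "organization": 3, "evidence": 2, "mechanics": 2},
]
print(aggregate_peer_scores(peers))
# medians: thesis 3, organization 2, evidence 2, mechanics 2 -> total 9
```

The median-per-criterion rule is one common design choice for resisting unreliable raters; detailed rubrics like the one sketched above also serve the “training” function mentioned earlier, since they tell novice reviewers exactly what to look for.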

Self-assessment

Self-assessment, when using a detailed rubric – as is done in peer assessment – can be a rewarding experience as well. Harris (1997, p. 19) states:

[S]ystematic self-assessment provides an ideal springboard for other learner development activities: organizing and planning learning, thinking about learning styles, discussion of learning and communication strategies. Alternatively, self-assessment can itself provide a central core of learner development in settings where full-blown learner training . . . or explicit learner strategy instruction . . . is felt to be too time-consuming.

In addition, as Kulkarni et al. (2015) maintain, “Peer and self assessment is a promising alternative, with potential additional benefits. It not only provides grades, it also importantly helps students see work from an assessor’s perspective” (p. 133).

Machine assessment

A final option, though one used less often than self- and peer assessment, is the machine scoring of essays (edX has had this option in the past, but at this writing it is not available to course designers). This technology is increasingly used in large-scale writing tests, using proprietary systems. Not surprisingly, the machine scoring of essays is not without controversy. Condon, one critic of such systems, writes:

None of the features [of AES] move beyond the syntactic – some of their claims (organization and style, for example) seem to, but what the computer counts (sentence length, variety of vocabulary) does not. No AES system can achieve the kind of understanding necessary to evaluate writing on the semantic level – on the level of meaning, let alone the level of awareness of occasion, purpose, and audience demanded by any form of real-world writing, whether within the academy or in the workplace. (2013, p. 102)

For language learners, the use of this technology is even more problematic. As Weigle points out, most automated essay scoring (AES) systems “were designed with a primarily native English speaker/writer population and an AW [assessment of writing] in mind” (2013, p. 90). While some have seen the potential of this technology for massive writing courses, to date it has not been implemented in any significant way.
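Condon’s point that AES systems count surface features rather than meaning can be made concrete. The short sketch below (in Python, purely illustrative and not modeled on any actual AES product) computes the two features his quote names, sentence length and variety of vocabulary; note that a meaning-free, repetitive text still yields perfectly well-defined counts.

```python
import re

def surface_features(text):
    """Compute the kind of shallow features Condon describes AES systems
    counting: average sentence length and vocabulary variety (type-token
    ratio). Nothing here registers meaning, purpose, or audience.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    avg_sentence_len = len(words) / len(sentences) if sentences else 0
    type_token_ratio = len(set(words)) / len(words) if words else 0
    return {"avg_sentence_length": avg_sentence_len,
            "type_token_ratio": type_token_ratio}

essay = "The cat sat. The cat sat. The cat sat."
print(surface_features(essay))
# Repetitive prose: 3 unique words over 9 tokens, 3 words per sentence.
```

The repetitive sample text receives a low type-token ratio, but nothing in these counts registers that the text says nothing at all, which is precisely the limitation Condon describes.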

Plagiarism and translation plagiarism

Standard “copy and paste” plagiarism has always been a double-edged sword: the tools that students use to find a text to plagiarize can also be used by instructors to locate the same text, whether through a web search or by using any of the available free or commercial plagiarism-checking services, such as turnitin.com. However, when teaching Academic Writing at scale, instructors must rely on peer reviewers to detect any cases of plagiarized writing, since plagiarism detection software is not built into the current MOOC platforms. Although honor codes exist, and verified (that is, paid) registration in a MOOC sometimes involves ID checks and web cameras to verify the identity of the student doing the work, routine plagiarism checking is not performed.

However, even if it were, machine translation technology has introduced a new challenge to Academic Writing: translation plagiarism. Translation plagiarism is the translation of text from a source language into another language, with the result being a text that uses almost exactly the same words, in translated form, as the original author used in the original language. The most popular plagiarism-checking software programs do not check for this type of plagiarism, although the technology is being developed (Potthast et al., 2011).
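One simple way to approximate cross-language detection can be sketched as follows: machine-translate the suspected source into the language of the suspect text, then measure ordinary monolingual overlap. The sketch below (in Python) uses word n-gram overlap as the similarity measure; the `machine_translate` parameter is a hypothetical placeholder for any MT service, and actual research systems of the kind surveyed by Potthast et al. rely on considerably richer cross-language retrieval models.

```python
def ngram_overlap(text_a, text_b, n=3):
    """Jaccard overlap of word n-grams: a crude monolingual similarity."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    a, b = ngrams(text_a), ngrams(text_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def translation_plagiarism_score(suspect, source, machine_translate):
    """Translate the source into the suspect's language, then compare.

    `machine_translate` is a placeholder for any MT service; a high
    overlap score flags the pair for human review rather than proving
    plagiarism outright.
    """
    return ngram_overlap(suspect, machine_translate(source))
```

A score near 1.0 means the suspect text reads almost word-for-word like a machine translation of the source; a score near 0.0 means little shared phrasing. The weakness of this naive approach is that different MT systems produce different wordings, which is one reason real detectors work with retrieval models rather than exact n-gram matching.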

Attrition

The final challenge to Academic Writing MOOCs is the so-called attrition of students. Although I have dealt with this issue previously (Sokolik, 2014), it has special implications within an Academic Writing course, primarily with respect to peer assessment. Although the majority of students who do not finish a course typically never start it, it can generally be expected that fewer students will be available to conduct peer assessment as the course progresses. In contrast to the familiar criticism that MOOCs have completion rates of around 10%, findings from the MOOC Research Initiative also examined student participation, reporting that in a remedial English writing MOOC, “23 percent of students engaged meaningfully with the content even though, by traditional metrics, only 8 percent of the 48,174 enrolled actually passed the course” (Strausheim, 2014).

Summary

Academic Writing within a MOOC environment faces several challenges: methods of content delivery, limitations of discussion forums, methods of assessment, plagiarism, and attrition. In spite of these limitations, writing MOOCs continue to be popular, judging by their registration numbers and by the enthusiastic sentiments expressed by many participants. The next sections look at these benefits.

MOOC benefits to academic writing

Communities of discourse

For “non-traditional” students, finding communities of writers, as well as expert instructors, can be difficult and/or expensive. Academic Writing MOOCs address this issue. Clearly, those students who sign up and actively participate in MOOCs possess a certain amount of self-motivation and interest in learning. This behavior fits into Scardamalia and Bereiter’s (2010, p. 2) definitions of knowledge building:

Intentionality. Most of learning is unconscious, and a constructivist view of learning does not alter this fact. However, people engaged in Knowledge Building know they are doing it and advances are purposeful.

Community & knowledge. Learning is a personal matter, but Knowledge Building is done for the benefit of the community.

MOOCs present the opportunity for both personal learning and community involvement, as further elaborated by McLoughlin (2013, p. 268):

With the emergence and growth of online learning and concomitant emphasis on knowledge creation and collaboration, the metaphor of the knowledge age has become a reality. . . . Online collaborative learning theory (OCL) focuses [on how to support] learners as knowledge builders and active learners. . . . Technology facilitates dialogue and exchange, and contributes to the occurrence of knowledge building discourse.

Thus, in spite of an ever-changing landscape of technological tools and platforms, MOOCs provide language learners, and academic writers in particular, a way to build communities of practice through collaboration and exchange with other like-minded learners.

Affective factors

Dulay and Burt (1977) posited the idea of the “affective filter,” roughly described as a student’s lowered ability to acquire a language when under duress or stress. This concept was further expanded upon by Krashen (1982), who recommended that language teachers create classrooms in which the language-learning atmosphere is as stress-free as possible. Cognitive science also tells us that lower stress can often improve learning. More recently, Picard et al. stated: “Among educators and educational researchers, there is a growing recognition that interest and active participation are important factors in the learning process” (2004, p. 254).

As I have argued previously (Sokolik, 2014), writing for an audience without the pressure of required exams, grade point averages, or time-to-graduation reduces the pressure on students. Students generally join MOOCs of their own free will, and may drop out without repercussion if a course does not suit their taste or schedule. MOOCs provide a mostly stress-free environment for learning (no learning environment should be completely stress-free), with students in control of their own progress.

Access

Access to online courses will probably never be universal, due to economic and Internet availability issues in different parts of the world. However, one can still argue that MOOCs, being both open and free, offer unprecedented access to courses offered by the world’s universities. Access does not just refer to the ability of a person in a remote village somewhere to take a course from Harvard. It also means access to subject matter that may not normally be available: Academic Writing is not universally taught – many countries do not include it in a national curriculum, and many colleges and universities do not offer it as a stand-alone course.

“Big data”

Finally, teaching at scale offers opportunities to acquire data on student behavior and understanding on a very large scale. Within Academic Writing, it is often difficult to access a large number of data points outside the context of standardized written examinations, such as the essays written for the SAT or TOEFL. While challenges exist regarding the format of and access to the data, MOOCs offer significant promise to both research on and understanding of how academic writers learn to write outside the pressures of high-stakes examinations.

Summary

In spite of the challenges to Academic Writing in a MOOC environment, the benefits to students and researchers alike are clear. Academic writers can build communities of discourse and access courses and colleagues not locally available, all in a relatively low-stress environment. Researchers gain access to large groups of learners and large databases of writing samples that could be used to develop better models of the process of learning to write.

Conclusion

Academic Writing has always, to some extent, depended upon the available technology – from slates to parchment to manual typewriters to modern word processing on laptops and tablets. We learn to write, and we write, with the tools that we have. While the last twenty years have witnessed amazing technological advances, the tools we use in educational technology settings may seem less exciting. The LMSs of the past, such as WebCT, were remarkably innovative; the LMSs of the present look too much like the LMSs of the past, with essentially the same functionality. However, the servers, Internet connectivity, and LMS platforms now allow massive courses to be offered. Along with this, we can now offer hundreds of thousands of students globally not only the opportunity to learn the concepts of Academic Writing, but also access to the ideas and opinions of diverse classmates and experiences.

The challenges faced are significant: how we present course materials, assess learning, engage students in meaningful discussion, and ensure ethical behavior all require rethinking. The methods we use in face-to-face writing classes are, in large part, not practical or applicable when teaching at scale. However, there is much to learn from the MOOC experience: in particular, refining what we know about peer assessment and understanding what motivates students to learn on their own will be especially beneficial to our understanding of Academic Writing, both inside and outside of MOOC environments. The experience of learning to write, of finding things to write about, and of sharing one’s writing with others has always transcended the tools with which the writing is done. MOOCs, like all technology, will evolve, and writing instructors will find ways to leverage whatever new technology appears to engage learners in the process of finding their voices.


References

Balfour, S. P. (2013). Assessing writing in MOOCs: Automated essay scoring and Calibrated Peer Review™. Research and Practice in Assessment, 8, 40–48.
Belcher, D. (2006). English for specific purposes: Teaching to perceived needs and imagined futures in worlds of work, study, and everyday life. TESOL Quarterly, 40(1), 133–156.
Boettcher, J. V., & Conrad, R.-M. (2010). The online teaching survival guide: Simple and practical pedagogical tips. San Francisco: Jossey-Bass.
Castrillo de Larreta-Azelain, M. D. (2014). Language teaching in MOOCs: The integral role of the instructor. In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 67–90). Warsaw/Berlin: De Gruyter Open.
Class Central. (n.d.). Retrieved March 13, 2015, from https://www.class-central.com/.
Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing, 18(1), 100–108.
DiCerbo, P. A., Anstrom, K. A., Baker, L. L., & Rivera, C. (2014). A review of the literature on teaching Academic English to English language learners. Review of Educational Research, 84(3), 446–482.
Dulay, H., & Burt, M. (1977). Remarks on creativity in language acquisition. In M. Burt, H. Dulay, & M. Finnochiaro (Eds.), Viewpoints on English as a second language (pp. 95–126). New York: Regents.
Gibbons, P. (2006). Bridging discourses in the ESL classroom. London: Continuum.
Groves, M., & Mundt, K. (2015). Friend or foe? Google Translate in language for academic purposes. English for Specific Purposes, 37, 112–121.
Harris, M. (1997). Self-assessment of language learning in formal settings. ELT Journal, 51(1), 12–20.
Hendrikx, D. (2015). Doing a MOOC, a personal experience. Retrieved March 15, 2015, from https://medium.com/@littlebelgianwriter/doing-a-mooc-a-personal-experience-8ce0d4cbd698.
Horowitz, D. M. (1986). What professors actually require: Academic tasks for the ESL classroom. TESOL Quarterly, 20, 445–462.
Johns, A. M., & Price, D. (2014). English for specific purposes: International in scope, specific in purpose. In M. Celce-Murcia, D. M. Brinton, & M. A. Snow (Eds.), Teaching English as a second or foreign language (4th ed., pp. 471–487). Boston: National Geographic Learning.
Krashen, S. (1982/2009). Principles and practice in second language acquisition. Oxford: Pergamon Press. Internet edition (2009) retrieved March 1, 2015, from http://www.sdkrashen.com/content/books/principles_and_practice.pdf.
Kulkarni, C., Wei, K. P., Le, H., Chia, D., Papadopoulos, K., Cheng, J., Koller, D., & Klemmer, S. R. (2015). Peer and self assessment in massive online classes. In H. Plattner, C. Meinel, & L. Leifer (Eds.), Design thinking research: Building innovators (pp. 131–168). Springer. Retrieved March 14, 2015, from http://link.springer.com/chapter/10.1007/978-3-319-06823-7_9#.
Lander, J. (2014). Conversations or virtual IREs? Unpacking asynchronous online discussions using exchange structure analysis. Linguistics and Education, 28, 41–53.
Leki, I., & Carson, J. G. (1994). Students’ perceptions of EAP writing instruction and writing needs across the disciplines. TESOL Quarterly, 28(1), 81–101.
Liu, J., & Sadler, R. W. (2003). The effect and affect of peer review in electronic versus traditional modes on L2 writing. Journal of English for Academic Purposes, 2, 193–227.
McLoughlin, C. E. (2013, June). The pedagogy of personalised learning: Exemplars, MOOCs and related learning theories. World Conference on Educational Multimedia, Hypermedia and Telecommunications 2013, 2013(1), 266–270.
Pappano, L. (2012, Nov. 2). The year of the MOOC. The New York Times, Education Life. Retrieved March 14, 2015, from http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-multiplying-at-a-rapid-pace.html.
Picard, R. W., Papert, S., Bender, W., Blumberg, B., Breazeal, C., Cavallo, D., Machover, T., Resnick, M., Roy, D., & Strohecker, C. (2004). Affective learning – a manifesto. BT Technology Journal, 22(4), 253–269.
Potthast, M., Barrón-Cedeño, A., Stein, B., & Rosso, P. (2011). Cross-language plagiarism detection. Language Resources and Evaluation, 45(1), 45–62.
Scardamalia, M., & Bereiter, C. (2010). A brief history of knowledge building. Canadian Journal of Learning and Technology/La revue canadienne de l’apprentissage et de la technologie, 36(1). Retrieved March 1, 2015, from http://www.cjlt.ca/index.php/cjlt/article/view/574.
Siemens, G. (2013). Massive open online courses: Innovation in education? In R. McGreal, W. Kinuthia, & S. Marshall (Eds.), Open educational resources: Innovation, research and practice (pp. 5–15). Commonwealth of Learning, Athabasca University. Retrieved November 1, 2014, from https://oerknowledgecloud.org/sites/oerknowledgecloud.org/files/pub_PS_OER-IRP_CH1.pdf.
Sokolik, M. (2014). What constitutes an effective language MOOC? In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 16–32). Warsaw/Berlin: De Gruyter Open.
Sokolik, M. (2014a, Sept. 14). Learning without pressure: English writing MOOCs for an international audience. The Evolllution: Illuminating the Life Long Learning Movement. Retrieved September 14, 2014, from http://www.evolllution.com/distance_online_learning/learning-pressure-english-writing-moocs-international-audience/.
Strausheim, C. (2014). Data, data everywhere. Inside Higher Ed. Retrieved March 15, 2015, from https://www.insidehighered.com/news/2014/06/10/after-grappling-data-mooc-research-initiative-participants-release-results.
Weigle, S. C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing, 18(1), 85–99.

13 Language MOOCs: better by design

Fernando Rubio, University of Utah, USA

Carolin Fuchs, Columbia University, USA

Edward Dixon, The University of Pennsylvania, USA

Introduction

The field of language teaching has been somewhat of a latecomer to online education, and in particular to MOOCs, because of the assumption that languages are not amenable to distance, asynchronous learning. From the sidelines, we have seen other fields experiment with and embrace technology-enhanced teaching as research demonstrating its effectiveness has become more and more frequent. In 2010, the U.S. Department of Education released the results of a meta-analysis of 45 studies of online and blended courses, all published after 2004, and found that, on average, “students in online learning conditions performed modestly better than those receiving F2F instruction” (p. ix). Yet, even today, many foreign language practitioners still question the extent to which online language instruction can deliver the type and amount of interaction that is necessary to foster language acquisition. The resistance is even more pronounced against language MOOCs (LMOOCs), which, in addition to lacking F2F contact, violate another long-held belief: that structured language acquisition requires small groups of learners.

There is plenty of literature suggesting that technology does create the necessary conditions to facilitate second language acquisition (SLA) when some or all F2F contact is removed (e.g., Egbert, Hanson-Smith, & Chao, 2007). Research on the effectiveness of blended courses (Chenoweth, Ushida, & Murday, 2006; Rubio, 2012; Thoms, 2012) shows that this format is just as effective at promoting proficiency as traditional brick-and-mortar courses. Similar results are reported for fully online courses (Blake et al., 2008; Chenoweth & Murday, 2003).
Because of the relatively young history of LMOOCs, and the small number of them that have been offered – a total of 26 as of late 2014 according to Bárcena and Martín-Monje (2014) – there is still only limited empirical research on their effectiveness, although preliminary findings also indicate that the massive nature of LMOOCs does not seem to affect their effectiveness when compared to

traditional courses (Rubio, 2014). Despite these findings, there are a number of reasons that prevent us from making any strong claims regarding comparability. The first limitation is the small number of empirical studies published to date and the fact that they have typically been small in scale and have varied widely in research methodology and design and in the type of tools used to assess the effectiveness of the courses studied. A second issue is the difficulty of dissociating course format (online vs. traditional) and size (small vs. massive) from other variables that may affect learners’ proficiency gains, including teacher effectiveness, student self-selection, familiarity with technology, etc. Interestingly, Garrett (1991) pointed out some of these limitations more than two decades ago.

But rather than worry about the generalizability of studies that show comparable learning outcomes, perhaps we should ask ourselves whether comparability is, in fact, a desired outcome for online courses in general and for LMOOCs in particular. Research that compares F2F with online language learning follows a long tradition of media comparison studies (MCS) that started when distance learning became popular in the early 20th century. In a summary of MCS research done over the course of several decades, Russell (2001) stresses that there seems to be no significant difference in learning between distance and traditional course delivery. This also seems to be the general conclusion of most comparison studies published in our field to date. Those who have tried to defend online courses have used the “No Significant Difference” phenomenon to prove that technology does not impede acquisition. Those in the opposite camp see it as proof that technology does not facilitate or speed up language learning. The same is true when it comes to size.
Opponents of massive courses see their size as an impediment to acquisition, but research on the effectiveness of LMOOCs indicates that their size does not negatively affect their success. In either case, if there is no significant difference, one could conclude that online technology and larger enrollments may not be obstacles, but neither do they particularly support language learning. However, the view that we express in this chapter is that an adequate course design that takes advantage of the affordances of the online medium and the opportunities opened up by massive enrollments can make LMOOCs an effective learning context in ways that would be difficult to replicate in a traditional F2F context.

One approach to bridging the gap between camps would be to view LMOOCs as different in that their unique feature – a massive number of students enrolled in a free, online course – is unprecedented in language teaching. According to Godwin-Jones (2012, p. 9),

[m]ost of the recent courses are in computer science related topics and do not feature the kind of interactivity and peer-to-peer collaboration needed for language learning. Having such large numbers in an online language course would be a daunting task indeed. However, opening up a language course to participation from students outside the institution (in limited numbers) would be of interest from a cultural perspective.

An option that would follow logically from Godwin-Jones’ observation would be to consider LMOOCs as the natural bridge between what have emerged as two distinct practices/philosophies of MOOCs: the cMOOC and the xMOOC. In connectivist MOOCs, or cMOOCs, learners organize themselves around social media platforms to communicate and share content and ideas. The emphasis is on knowledge that is generated organically, and there are no set roles for participants. There is also no central platform in which content is presented and shared, no formal assessment, and no specific learning path or goals. In contrast, instructivist MOOCs, or xMOOCs, are much more similar to traditional courses in the sense that participants have clearly defined roles, the transmission of knowledge is distinctly unidirectional (instructor to students), there is a central platform in which the course is delivered, and they follow a structured syllabus. The criticism usually leveled against xMOOCs is that they are an inferior form of delivery because they limit the possibility of interaction and underutilize the affordances of the massive quality of the course.

The social nature of cMOOCs obviously presents invaluable opportunities for language learning seen from an interactionist or sociocultural perspective. However, as Sokolik (2014) points out, in addition to the challenge of navigating a loosely organized “course,” the difficulty of cMOOCs for language learning is that “students are using the medium of instruction as the medium of communication” (p. 18), which adds a layer of complexity to any task. xMOOCs, on the other hand, provide opportunities for the type of guidance, scaffolding and feedback that also seem to be trademarks of effective language learning experiences.
Rather than accepting these two philosophies of massive online education as mutually exclusive, we propose that LMOOCs can and should take advantage of design features of both models to facilitate language acquisition. This unique nature of LMOOCs as hybrids provides a number of affordances and challenges, which will be explored further in the following sections.

Affordances of the massive online format

One of the distinctive affordances of MOOCs is the size and breadth of their learning communities. Participants enroll across linguistic, cultural and geographic boundaries. Furthermore, the demographic makeup of the participants is not limited by age, education or professional background. MOOCs afford students opportunities to engage with participants from around the world in multilingual, multicultural, multiethnic and multiracial communities that clearly exceed the affordances of the traditional classroom setting. The reasons and motivation of the participants for gaining membership in these communities will vary. Most are non-traditional students who are not learning a language to fulfill a university requirement but rather may be looking for personal enrichment or ways to develop their professional career skills.

The size and breadth of these communities are frequently looked upon as a challenge to learning. However, designers of LMOOCs can take the opposite view: the unusual complexity of these large learning environments is not an obstacle but can instead benefit the learner and language instruction. MOOCs are opening the door to new learning situations in which students are able to engage with, learn from and communicate with learners from around the world in ways that were previously not available to them. White (2015) points out the potential positive effects of language communities in virtual environments and examines them within the context of the American Council on the Teaching of Foreign Languages (ACTFL) World-Readiness Standards for Learning Languages (National Standards, 2015): Communication, Cultures, Connections, Comparisons and Communities. She cites the lack of research and curricula devoted to the fifth standard, Communities, or what many scholars refer to as “the lost C.” White then cites the research of scholars such as Magnan (2008), who questions the hierarchical ordering of the standards that places Communication at the top and Communities at the bottom. By turning the standards upside-down, Magnan draws attention to the fundamental importance of communities, which can potentially affect the realization and goals of the other standards.

Learner engagement with global learners in MOOCs may evoke new learning behaviors that impact motivation and produce increased social interaction, which in Sociocultural Theory is at the core of learning (Christensen, 2015). This is also in line with the growing body of research on telecollaboration, which has highlighted the affordance of online communities of practice for the purpose of language study and intercultural learning (e.g., Belz & Thorne, 2005; Dooly, 2008; Dooly & Saddler, 2013; Furstenberg & Levet, 2001). However, existing reviews of LMOOCs show that most are content-based or task-based models, not connectivist and network-based (Beaven et al., 2014, p. 33).
While it does not seem surprising that most LMOOCs sit at the instructivist/content-transmission or task-completion ends of the continuum, those MOOCs fail to take advantage of the affordance of a multitude of speakers, native and non-native, enrolled in a course. The community and feedback aspects of LMOOCs may be more easily realized at the intermediate and/or advanced levels, since beginning-level students may lack the proficiency to participate actively and provide substantive peer feedback. As a consequence, courses with a heavier focus on content (e.g., English for Academic Purposes, or English for Specific/Occupational Purposes such as English for Lawyers or English for Nurses) may be good candidates for a cMOOC approach, while those in which the proficiency level of participants is not very high, or where there is a mix of levels, may lend themselves better to an xMOOC format, because learner–content and/or learner–instructor interaction may prove more conducive to success than learner–learner interaction (Rubio, 2015). This begs the question of how LMOOCs – especially beginning-level LMOOCs and those that emphasize linguistic development – can be approached from a constructivist perspective in order to facilitate negotiation of meaning and interaction with an authentic audience, which are crucial aspects of L2 learning (Egbert et al., 2007). We provide specific recommendations later in this chapter.


Challenges of the massive online format

In addition to having an unpredictably high number of students, LMOOC instructors will also face social presence issues when trying to tailor the course to students' needs. Social presence can be differentiated based on how many different media channels are available, and on how well these channels transmit various social and non-verbal cues such as facial expression, eye contact, clothes, etc. Since online communication tends to have fewer cues than face-to-face communication, the former can result in a feeling of reduced social presence among participants, according to Short, Williams, and Christie (as cited in Hesse, Garsoffky, & Hron, 1997). MOOCs have been described as impersonal, and suggestions to bring in the human element include tailoring content to students' needs and incorporating audio/video and other social networking tools (e.g., Kilgore & Lowenthal, 2015). While these seem like viable options in non-LMOOCs, these modes of communication may not be easily accessible to lower-level learners in beginning LMOOCs. Another inherent challenge of the massive online format is the incorporation of rich and frequent opportunities to engage in synchronous interpersonal communication. Although the technology exists to facilitate these interactions via audio or video, it is still difficult to deal with some of the issues that are simply taken for granted in the physical classroom. Assigning pair and group activities when students work on their own schedules is a major challenge, and so is the issue of accountability. An added problem is that the assessment of interpersonal spoken communication is particularly difficult when there is no F2F contact. These types of activities are common and easy to implement in the physical classroom, and they provide opportunities to assess students in a low-stakes, formative manner.
Instructors can easily incorporate on-the-spot feedback and guidance and turn their observations into formative assessment data. However, we believe that formative assessment of synchronous oral communication, in the context of a well-designed LMOOC, can be as effective and conducive to acquisition as it is in a physical classroom. The next section of this chapter provides specific recommendations for course designers.

Recommendations

Colpaert (2014) claims that "the eventual effect of the learning environment will be proportional to its design, meaning the extent to which it has been designed in a methodological and justifiable way" (p. 167). He argues that instead of focusing on LMOOCs as products, we should focus our attention on the design process. With that in mind, we want to offer the following recommendations for course designers.

Community

We have argued that communities, not necessarily small groups, need to be at the center of language instruction. In order to facilitate this goal, a carefully designed

a priori needs analysis (e.g., González-Lloret, 2014) may help identify students' proficiency levels and needs, and funnel them into appropriate groups (rather than asking students to self-organize, as is the case in many content-based MOOCs). In order to move LMOOCs closer to network-based MOOCs, there needs to be a shift towards connectivist, socially constructed knowledge and community creation, foregrounding "communities" (the "lost C" in the Standards). Moreover, the principles of reciprocity and authentic tasks and audiences (Egbert et al., 2007) need to be addressed. This also entails a careful definition of instructor and student roles in that, just as in F2F courses, teachers need to move from content-transmitters to facilitators and moderators in LMOOCs. To this end, we may see challenges similar to those that have been documented in cross-institutional collaborative projects. If teachers need to possess organizational, pedagogical, electronic literacy, intercultural, and socio-affective competences in order to conduct telecollaborative projects (O'Dowd, 2011), this role may become amplified in a non-institutionalized learning environment due to the overwhelming number of students. In planning LMOOCs, designers should take into account the agency of students to set their own goals for language learning and to apply technologies from their extracurricular lives for self-learning, connecting with native speakers and producing L2 texts (Case, 2015). Case notes the capabilities of even beginning language students to self-select technologies and engage in online activities that do not require the help of a teacher and go beyond traditional course requirements. That is not to say that we should leave students to their own devices, but that we should take advantage of the learner's personal learning environment as a source for enhancing the structure and feedback that we provide in LMOOCs.
Since we presume that community and interaction are of central importance for language instruction and acquisition, designers might consider including tasks that allow learners to use Internet tools that they are familiar with in their everyday lives to make connections and develop their interpersonal skills with other speakers of the target language. As evidence of completing the task, learners can report on their interactions in an online log. Giving students the opportunity to choose technologies from their personal learning environments will furthermore provide them with the means to continue their learning relationships beyond the duration of the course. Godwin-Jones (2014) provides several examples of LMOOCs that have been particularly successful at exploiting the affordances of a community of learners. One example is "Spoken Communication: English/Spanish in Tandem," a course offered by a consortium of Spanish universities in which native speakers of Spanish collaborate with speakers of English while both learn each other's language. Although not a language MOOC, another excellent example mentioned by Godwin-Jones is "DS106 Digital Storytelling," offered by the University of Mary Washington, in which learners can create new assignments and contribute them to an assignment bank. Those assignments are then used in subsequent offerings of the course. MOOCs generally focus on high-quality content delivered by some of the world's leading authorities in their fields. From these MOOCs we have learned

that providing learners with interesting, informative and relevant content is essential to their success. In designing content for LMOOCs, designers should consider the median profile of students who are enrolled in them: these tend to be professionals with a median age of 24. However, students in Humanities MOOCs are four times as likely to be 40 years old or older (Newman & Oh, 2014). For example, of the 15,000 students enrolled in Auf Deutsch, a beginner-level German MOOC (Auf Deutsch: Communicating in German across Cultures, https://www.coursera.org/course/deutsch) offered by Coursera and the University of Pennsylvania, 22% are between the ages of 25 and 34, followed by 11% between the ages of 35 and 44. Hence, designers need to take into consideration the incorporation of content that is suitable and interesting for the adult professional. To provide learners with age-appropriate information about culture in the target language, designers may need to rely on more than just traditional textbooks. They may need to author their own content or didacticize authentic content from online music, TV programs, films, newspapers and other information from the Internet. Content should not only be interesting and informative but, as Batardière (2015) points out in her discussion of Beatty and Nunan's and Helm's research, topics need to provide meaningful learning opportunities that are capable of stimulating truly intercultural exchanges between learners and even of changing one's own cultural viewpoint. In order to attract and satisfy the interests of a diverse audience in MOOCs, developers need to consider content that is relevant to the learner and appeals to different ethnic, gender, racial and other socially diverse groups.

Social presence

Most MOOCs incorporate a flipped classroom model in which instructors are filmed lecturing on topics related to their subject and discipline. Using this model, learners are passive receivers of instructor-generated information. Frequently, these lectures are followed by short quizzes with which learners are able to reflect on and engage with the content. This model, however, is not especially conducive to developing the learners' interactive and oral communicative skills, which are an essential part of language learning. It also misses the opportunity to expose students to actual communicative situations between speakers of the target language that can illustrate important pragmatic and sociolinguistic features. The instructional videos in MOOCs can simulate interaction between the instructor and the learner and provide opportunities for the learner to practice the language beyond the completion of multiple-choice checks that drill their comprehension of grammar and vocabulary. This can be accomplished through video presentations in which instructors engage the students in a conversational-style dialogue on topics such as learning each other's names, occupations, likes and dislikes, etc. Students can respond orally to the questions posed by the instructor in the video by using applications such as VoiceThread (voicethread.com). In this way the mode of communication used by the instructor to post questions corresponds to the mode of communication used by the learners in their answers. Moreover, learners begin

to experience language in a MOOC as a means of communication, which they can further explore later with their peers in actual synchronous dialogues using social media from their personal learning environments such as Skype or Google chats and Hangouts. Although most MOOCs offer only asynchronous written interactions, language educators and course designers can use these written forums as a starting point for scaffolding, structuring and initiating interactive language learning activities. Through these forums, students become acquainted with each other and can begin developing relationships based on common interests, similarities and differences. Students can continue to build upon their exchanges from the asynchronous forums by using the more synchronous tools from their personal learning environments. This progression of learning activities – for exchanging linguistic and intercultural knowledge – that begins with a structured task in the MOOC and proceeds to more self-initiated tasks that students accomplish in their personal learning environments requires thoughtful and carefully planned tasks. In her analysis of advanced learners' interactions with native speakers in asynchronous online written forums, Batardière (2015) contends that instructor facilitation is more important at the design level than in the implementation phase, where interaction begins. Her suggestion that instructors spend more time preparing asynchronous discussions "rather than being active within them" is especially appropriate for designers of MOOCs, which afford students only limited synchronous contact with the instructor. Hence, in MOOCs, interesting topics combined with prompts that keep students on task are essential for favorably affecting the students' continued engagement with each other, especially in their personal learning environments, where learners are on their own and the instructor's direct intervention is missing.

Feedback and assessment

One unique challenge is that of providing linguistic feedback and error correction in LMOOCs to facilitate language learning. The massive nature of the course makes it necessary to combine a variety of strategies and formats that include computer-, self-, peer-, and instructor-generated feedback. Computer-generated feedback can be used with tasks in which students are simply checking their knowledge of certain features, but it is obviously of limited use in truly communicative practice. Instructor-generated feedback is a very important component, but it is time-consuming and an unrealistic expectation in massive courses. The reality, then, is that LMOOCs need to incorporate a variety of forms of feedback and assessment. Instructor assessment, error correction and structuring peer feedback pose unique challenges in LMOOCs, especially at the lower levels. Because of time constraints, the instructor cannot be the sole or primary assessor when a course enrolls a very high number of students. The burden of assessment and, to some extent, feedback as well, needs to be shared with learners. This is perfectly congruent with the philosophy of a cMOOC in which the roles of teachers and

learners are blurred and overlap. A possible option that has been implemented successfully (cf. the WebCEF project, www.webcef.eu) is the idea of "training" learners to become assessors. Using Can-Do statements such as the ones created by the Association of Language Testers in Europe (ALTE) or by ACTFL and the National Council of State Supervisors for Languages (NCSSFL) in the United States, learners can be guided to assess themselves and their peers frequently and systematically. These Can-Do statements can be easily modified to reflect the objectives of specific tasks, so that when students are assigned pair or small group synchronous interactions, they can apply a specific rubric to assess themselves and each other. A similar rubric can be applied to oral activities that involve presentational communication, be it audio- or video-recorded. Instructors can gather the data provided by self- and peer-assessments to close the assessment loop and take appropriate pedagogical actions.
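The aggregation step described above, gathering self- and peer-ratings against a task-specific Can-Do rubric and flagging cases for instructor follow-up, can be sketched in a few lines of code. The record format, the 1–4 rating scale and the discrepancy threshold below are our own illustrative assumptions, not features of WebCEF or of any LMOOC platform:

```python
# Hypothetical sketch: aggregating Can-Do self- and peer-ratings for one task.
# Ratings use an assumed 1-4 scale ("cannot do yet" to "can do with ease").
from statistics import mean

ratings = [
    # (learner, rater, score) -- rater == learner marks a self-assessment
    ("ana", "ana", 4), ("ana", "ben", 2), ("ana", "chen", 3),
    ("ben", "ben", 3), ("ben", "ana", 3), ("ben", "chen", 3),
]

def review_queue(ratings, gap=1.0):
    """Return learners whose self-rating differs from the mean peer rating
    by more than `gap`; these are the cases the instructor should review."""
    flagged = []
    for learner in sorted({lrn for lrn, _, _ in ratings}):
        self_scores = [s for lrn, rater, s in ratings
                       if lrn == learner and rater == learner]
        peer_scores = [s for lrn, rater, s in ratings
                       if lrn == learner and rater != learner]
        if self_scores and peer_scores:
            if abs(self_scores[0] - mean(peer_scores)) > gap:
                flagged.append(learner)
    return flagged

print(review_queue(ratings))  # prints ['ana']: self-rating 4 vs peer mean 2.5
```

Routing only the discrepant cases to the instructor is one way of sharing the assessment burden with learners while still letting the instructor "close the loop" on the cases that need attention.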

Accountability

Taking advantage of a large community of learners and the potential for interaction requires devising ways to hold students accountable for those interactions. This can be achieved through blog posts and responses at the end of the written assignments of each unit, since embedding social networking tools such as Facebook and Twitter is an option offered by many platforms. In an attempt to optimize peer feedback, interaction, reciprocity and collaboration among students, each student could be required to post questions and to answer questions posed by others on Facebook. In this way, students begin to connect and to practice the language with each other in small clusters that continue during and beyond the course. A similar goal can be achieved by including student "followers" on Twitter. Students may gather around common interests in course topics and choose to follow a student that they connect with on a regular basis. Reporting on a student could be part of their final project. Instructor and peer assessments of these posts and responses would need to be built into the grading system in order to hold students accountable. In addition to having students report on their interactions with each other in their personal learning environments, designers of LMOOCs can further hold their students accountable for their out-of-course engagement through surveys. In her evaluation of the MOOC Auf Deutsch, the reviewer Junko Hondo (personal communication) recommended that students be asked as part of an online assessment (1) whether they communicated with a partner outside of class, (2) what tools they used, (3) how often and how long they communicated and (4) how many partners they communicated with. Students might also be asked the reasons for selecting their partners and what topics they discussed. Besides using surveys to hold students accountable for their exchanges, we also suggest using surveys to gather data for improving instruction.
These surveys would take the form of exit interviews at the end of every chapter and as a requirement for completing the course. In these surveys, students could respond to multiple-choice questions such as those used by the survey tool Qualtrics or

comment in social media platforms about their learning. Regardless of the format, such questionnaires and surveys can be used to gather data about (1) the students' personal learning experiences in the MOOC, (2) their reflections on how and what they learned, and (3) where they experienced difficulties. With this information, designers and instructors can decide where changes are necessary to improve content, create more effective instruction and help students maximize language learning, therefore closing the assessment loop.

References

Bárcena, E., & Martín-Monje, E. (2014). Introduction. In E. Martín-Monje, & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 1–15). Berlin: De Gruyter Open.
Batardière, M.-T. (2015). Examining cognitive presence in students' asynchronous online discussions. In E. Dixon, & M. Thomas (Eds.), Researching language learner interaction online: From social media to MOOCs. CALICO Monograph 2015.
Beatty, K., & Nunan, D. (2004). Computer-mediated collaborative learning. System, 32, 165–183.
Beaven, T., Hauck, M., Comas-Quinn, A., Lewis, T., & de los Arcos, B. (2014). MOOCs: Striking the right balance between facilitation and self-determination. MERLOT Journal of Online Learning and Teaching, 10(1), 31–43.
Belz, J. A., & Thorne, S. L. (Eds.). (2005). Internet-mediated intercultural foreign language education. Boston: Heinle & Heinle.
Blake, R., Wilson, N., Pardo Ballester, C., & Cetto, M. (2008). Measuring oral proficiency in distance, face-to-face, and blended classrooms. Language Learning & Technology, 12, 114–127.
Case, M. (2015). Language students' personal learning environments through an activity theory lens. In E. Dixon, & M. Thomas (Eds.), Researching language learner interaction online: From social media to MOOCs. CALICO Monograph 2015.
Chenoweth, N. A., & Murday, K. (2003). Measuring student learning in an online French course. CALICO Journal, 20, 284–314.
Chenoweth, N. A., Ushida, E., & Murday, K. (2006). Student learning in hybrid French and Spanish courses: An overview of language online. CALICO Journal, 24, 115–145.
Christensen, M. (2015). Language learner interaction in social network site virtual worlds. In E. Dixon, & M. Thomas (Eds.), Researching language learner interaction online: From social media to MOOCs. CALICO Monograph 2015.
Colpaert, J. (2014). Conclusion. Reflections on present and future: Towards an ontological approach to LMOOCs. In E. Martín-Monje, & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 161–172). Berlin: De Gruyter Open.
Dooly, M. (Ed.). (2008). Telecollaborative language learning: A guidebook to moderating intercultural collaboration online. New York: Peter Lang.
Dooly, M., & Sadler, R. (2013). Filling in the gaps: Linking theory and practice through telecollaboration in teacher education. ReCALL, 25, 4–29.
Egbert, J., Hanson-Smith, E., & Chao, C. (2007). Foundations for teaching and learning. In E. Hanson-Smith (Ed.), CALL environments: Research, practice, and critical issues (2nd ed.) (pp. 1–14). Alexandria, VA: TESOL.
Furstenberg, G., & Levet, S. (2001). Giving a virtual voice to the silent language of culture: The Cultura Project. Language Learning & Technology, 5(1), 55–102.
Garrett, N. (1991). Technology in the service of language learning: Trends and issues. Modern Language Journal, 75, 74–101.
Godwin-Jones, R. (2012). Emerging technologies: Challenging hegemonies in online learning. Language Learning & Technology, 16(2), 4–13.
Godwin-Jones, R. (2014). Global reach and local practice: The promise of MOOCs. Language Learning & Technology, 18(3), 5–15. Retrieved from http://llt.msu.edu/issues/october2014/emerging.pdf.
González-Lloret, M. (2014). The need for needs analysis in technology-mediated TBLT. In M. González-Lloret, & L. Ortega (Eds.), Technology-mediated TBLT (pp. 23–50). Philadelphia/Amsterdam: John Benjamins.
Hesse, F. W., Garsoffky, B., & Hron, A. (1997). Interface-Design für computergestütztes kooperatives Lernen. In L. J. Issing, & P. Klimsa (Eds.), Information und Lernen mit Multimedia (pp. 252–267). Weinheim: Beltz.
Kilgore, W., & Lowenthal, P. R. (2015). The Human Element MOOC. In R. D. Wright (Ed.), Student-teacher interaction in online learning environments (pp. 373–391). Hershey, PA: IGI Global.
Magnan, S. S. (2008). Reexamining the priorities of the National Standards for Foreign Language Education. Language Teaching, 41(3), 249–366. doi: 10.1017/S0261444808005041.
National Standards in Foreign Language Education Project (NSFLEP). (2015). World-Readiness Standards for Learning Languages (W-RSLL). Retrieved from http://www.actfl.org/publications/all/world-readiness-standards-learning-languages.
Newman, J., & Oh, S. (2014). 8 things you should know about MOOCs. Retrieved from http://chronicle.com/article/8-Things-You-Should-Know-About/146901/.
O'Dowd, R. (2011). Online foreign language interaction: Moving from the periphery to the core of foreign language education? Language Teaching, 44(3), 368–380.
Rubio, F. (2012). The effects of blended learning on second language fluency and proficiency. In F. Rubio, & J. Thoms (Eds.), Hybrid language teaching and learning: Exploring theoretical, pedagogical and curricular issues (pp. 137–159). Boston: Heinle and Heinle.
Rubio, F. (2014). Teaching pronunciation and comprehensibility in a language MOOC. In E. Martín-Monje, & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 143–160). Berlin: De Gruyter Open.
Rubio, F. (2015). The role of interaction in MOOCs and traditional technology-enhanced language. In E. Dixon, & M. Thomas (Eds.), Researching language learner interaction online: From social media to MOOCs. CALICO Monograph 2015.
Russell, T. L. (2001). The no significant difference phenomenon: A comparative research annotated bibliography on technology for distance education. Montgomery, AL: IDECC.
Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. Chichester: John Wiley.
Sokolik, M. (2014). What constitutes an effective language MOOC? In E. Martín-Monje, & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 16–29). Berlin: De Gruyter Open.
Thoms, J. (2012). Analyzing linguistic outcomes of L2 learners: Hybrid vs. traditional course contexts. In F. Rubio, & J. Thoms (Eds.), Hybrid language teaching and learning: Exploring theoretical, pedagogical and curricular issues (pp. 177–194). Boston: Heinle and Heinle.
U.S. Department of Education. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Retrieved from https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf.
White, M. (2015). Orientations and access to German-speaking communities in virtual environments. In E. Dixon, & M. Thomas (Eds.), Researching language learner interaction online: From social media to MOOCs. CALICO Monograph 2015.

14 Enhancing specialized vocabulary through social learning in language MOOCs

Elena Martín-Monje
Universidad Nacional de Educación a Distancia, Spain

Patricia Ventura
Universidad Nacional de Educación a Distancia, Spain

Introduction

This chapter provides an overview of the possibilities that Massive Open Online Courses (MOOCs) can offer for specialized language learning and the acquisition of vocabulary. It is widely accepted that vocabulary is an essential component of language teaching and learning (Perea-Barberá & Bocanegra-Valle, 2014; Schmitt, 2010), and it has attracted the attention of practitioners and researchers involved in technology-enhanced language learning who try to make its acquisition a more interactive process in online tuition (Stockwell, 2007). MOOCs, the latest model of online education, are under constant revision in their endeavour to cater for the specific needs of the various disciplines offered. Languages, as stated further on in this chapter, are one of the most complex fields to adapt to the MOOC format, due to their skill-based nature. This piece of research looks at the defining features of language instruction when linked to social learning in online contexts, and provides an insight into possible configurations of MOOC courses in specialized linguistic domains so that they can reinforce vocabulary acquisition, a pivotal aspect of the language learning scenario.

Technology-enhanced language learning in specialized linguistic domains

Languages for specific purposes (LSP) have undergone a profound transformation since their defining features were established by Dudley-Evans and St. John (1998). These authors identified key concepts in LSP, previously contemplated by Grosse and Voght (1991), such as: (a) context; (b) authentic materials and situations; (c) needs analysis; (d) the role of the LSP teacher, who is also a course designer, materials developer, collaborator, researcher and evaluator (Dudley-Evans & St. John, 1998); and (e) the cross-cultural and interdisciplinary dimensions of specialized linguistic domains, one of the areas which have historically received least coverage (Grosse & Voght, 1991). These considerations, made over two decades ago, have not become obsolete, but need to be revised and expanded so

that they account for present-day demands, in which the pervasive use of technology and the social interest in language training are proof of an increasingly complex and demanding professional and academic context (Arnó et al., 2006; Bárcena et al., 2014). LSP learners are characterized by a lack of time, since it is not uncommon for these language users to combine work with lifelong learning and training (Oblinger, 2006). That means that such training needs to be redesigned according to the circumstances of these professionals, creating innovative educational solutions which make the most of technological advances. There is clear academic interest in the field, evidenced by numerous research articles describing the successful integration of Information and Communication Technology (ICT) and LSP, focusing on various aspects of CALL: computer-mediated communication (Appel & Gilabert, 2006), blended learning proposals (Martín-Monje & Talaván, 2014), learner autonomy in CALL courses (Gimeno, 2014), digital literacy (Stapleton & Helms-Park, 2006), Web 2.0 applications (Kuteeva, 2011) and even mobile-assisted language learning (Stockwell, 2013). Other publications include special issues of scientific journals (such as Ibérica), monographic volumes devoted to the impact of ICT on the development of LSP (see, for instance, Arnó et al., 2006 or Bárcena et al., 2014) and specific panels at LSP academic events, such as the annual conference organized by AELFE (European Association of Languages for Specific Purposes). Moreover, authors such as Grosse and Voght (2012) or Arnó-Macià (2012) have convincingly stressed the role that technology has played in the growth of LSP throughout these years, "giving learners instant access to current information about target languages and cultures and facilitating the formation of 'communities of practice'" (Grosse & Voght, 2012, p. 191).
According to Arnó-Macià (2012, 2013), LSP has drawn on the developments of Computer-Assisted Language Learning (CALL) and followed a parallel evolution. There was first an important transition from the role of the computer as 'tutor' to that of a 'tool' (Levy, 1997), placing the language learner in a central position, and at the turn of the century a 'second wave of online learning' took place, triggered by the spread of collaboration and an intensified focus on cultural aspects, making technology-enhanced language learning more interactive and motivating. Consequently, the need has emerged for students to acquire digital literacies in addition to language competence (Dudeney, Hockly & Pegrum, 2013), so that language learners are able to make effective use of the technologies at their disposal and can discriminate among the myriad of authentic resources offered by the Internet. Another related issue in which LSP teaching and learning and CALL have followed a similar path is the increasing urge for what Bax calls 'normalization', the "stage when the technology becomes invisible, embedded in everyday practice and hence, 'normalized'" (2003, p. 23). In this sense, authors such as Godwin-Jones (2014) predict that the relationship between LSP and ICT will, in the foreseeable future, be linked to the new pedagogical model represented by MOOCs (Massive Open Online Courses), which constitutes the focus of this chapter. In his view, the combination of MOOCs and LSP can be a very fruitful one.

This will be discussed in more depth further on, in a section especially devoted to MOOCs and LSP. Now we will turn to some specific problems that arise in the teaching and learning of specialized vocabulary and how technology can aid in this important component of foreign language learning.

Key issues in the acquisition of specialized vocabulary

There are several classifications and definitions of specialized vocabulary (SV henceforth) (Coxhead, 2013), such as 'receptive' vs. 'productive' vocabulary, according to the mode of delivery (Graves, 2006), or 'basic', 'technical' and 'sub-technical', according to use (Nation, 2001), each with its corresponding teaching and learning techniques. According to Coxhead, SV is just one of the different names that LSP vocabulary receives from one research study to another, mostly "referring to the vocabulary of a particular area of study or professional use." However, other authors (Graves, 2006) consider that SV is contained in academic and professional language and as such "may be classified into technical, sub-technical or semi-technical, and general" (Perea-Barberá & Bocanegra-Valle, 2014). Furthermore, Hirsh's (2010) reflection on academic vocabulary also helps to clarify what SV is (and is not): academic vocabulary (1) does not occur frequently enough in non-academic writing; (2) is not sufficiently associated with a subject area to be considered terminology; and (3) appears frequently enough in academic texts across subject areas to be considered part of the core vocabulary. Following this clarification, we focus on different key issues in the acquisition of SV, such as range of topic and range of word. These are fundamental issues, since restricting the former is essential to create a particular type of SV (e.g., medical, business), and the latter is necessary in order to establish differences between the meaning of a word in 'general' vocabulary and its specialized or specific meaning. Specialized words are not only highly technical words; they can also be everyday words, such as 'market' (Camiciottoli, 2007), which is used in business studies.
Following Nation (2008, p. 17), there are two ways of making special vocabularies: some are made "by doing frequency counts using a specialized corpus", while "others are made by experts in the field gathering what they consider relevant vocabulary". Another key aspect in the study of SV is how to approach its teaching and learning process. Nation (2001) suggests teaching and studying the words in varied ways; for instance, teachers should support learners by helping them see the connections and differences between high-frequency and specialized meanings and also by training them in strategies which will help them understand and remember the words. Furthermore, Coxhead (2013) provides other techniques, such as discussing and exploring misconceptions about words with learners or breaking down words in order to show their constituent word parts. To the challenges and difficulties presented above, Camiciottoli (2007) added the change and evolution of SV "according to changing interests within communities of practice", which results in SV being in constant transformation and evolution. All these hint at the possibility of adopting a social approach to the

192  Elena Martín-Monje and Patricia Ventura field, which may contribute to a more effective process, keeping up with the latest variations in meaning and use. This feature is explored in detail in the following section.
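Nation's first route to a specialized word list (frequency counts over a specialized corpus, filtered against a general-purpose corpus) can be sketched in a few lines of Python. The two-sentence toy corpora, the threshold of 2 and all names below are our own illustrative choices, not part of any published procedure:

```python
from collections import Counter
import re

def word_frequencies(texts):
    """Tokenize naively on letter runs and count lowercase word forms."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower()))
    return counts

# Toy specialized (business English) corpus and general reference corpus.
specialized = [
    "The market rallied after the merger, and shareholders approved the takeover bid.",
    "Analysts expect the market to price in the merger premium for shareholders.",
]
general = [
    "She walked to the market to buy fruit and talked about the weather.",
    "The weather was fine, and the children played in the park.",
]

spec_freq = word_frequencies(specialized)
gen_freq = word_frequencies(general)

# Candidate specialized vocabulary: frequent in the specialized corpus,
# absent from the general one (a crude version of Nation's first route).
candidates = [w for w, f in spec_freq.most_common()
              if f >= 2 and gen_freq[w] == 0]
print(candidates)  # → ['merger', 'shareholders']
```

Note that 'market' is filtered out because it also occurs in the general corpus, which echoes the point above that everyday words like 'market' carry specialized meanings that a pure frequency filter cannot detect.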

Social language learning

Socio-cultural theories developed from the work of Vygotsky (1978, 1987) have a straightforward application in language learning nowadays through what is called the 'social web', which provides an easy way for teachers to promote social language learning among their students and for learners to make their language learning process more social. Web 2.0 applications, such as blogs and wikis, allow users to generate their own content and share it with others, who can in turn adapt and re-use it. In the same way, mobile technologies facilitate interaction with others in the learning process, since they support mobile applications such as instant messaging services, social networking tools, educational applications and games applied to learning activities. Among all these, social networking is capturing most of the interest in second language learning these days (Demaizière & Zourou, 2010; Lamy & Zourou, 2013; Brick, 2013). According to Conole and Alevizou's (2010) classification of Web 2.0 tools, social networking is defined as "websites that structure social interaction between members who form subgroups of 'friends'" (p. 11). Examples of these social networks today are Facebook (1 billion users) or Twitter (271 million users) (McCarthy, 2014). Similarly, over the last few years specific social networks or communities designed for language learning, such as Livemocha (http://livemocha.com/), Busuu (https://www.busuu.com/) or SharedTalk (http://www.sharedtalk.com/index.aspx), have occupied a relevant place in online language learning, either used by language teachers for telecollaborative projects (Blattner & Lomicka, 2012, in Zourou, 2012) or as a complement to the language classroom (Blattner & Fiori, 2009), with positive results pointing to a sense of community and an impact on the development of socio-pragmatic competence.
Nonetheless, social networks have also encountered some criticism in education, mainly because of students' misuse of them (especially concerning privacy issues) and because of the unproven relationship between the use of social networks in education and student results. This last aspect has been a main focus of our study. Next, we deal with the most recent methodological proposal for social learning, MOOCs, as applied to LSP.

MOOCs as an emerging option for specialized language learning: the case of "Professional English"

In this section we look into the new methodological model of MOOCs and how it has been adapted for language learning, presenting the case of "Professional English", the first LSP MOOC in Spain. While the importance of ICT for LSP is well documented (see previous sections), there is not much literature devoted to language MOOCs (LMOOCs). This model of online instruction has the goal of promoting learning for a vast number of people and represents the natural evolution of Open Educational Resources (Colpaert, 2014; Martín-Monje & Bárcena, 2014; Pantò & Comas-Quinn, 2013). The first MOOC was offered by Siemens, Downes and Cormier at the University of Manitoba, Canada, in 2008 (Daniel, 2012; Downes, 2012; Watters, 2012) and was based on a connectivist theoretical framework, which envisages learning as a process of growing and navigating across networks of people and technology (Downes, 2012; EDUCAUSE, 2011, 2013; Siemens, 2012). MOOCs have experienced staggering growth in a short span of time, both in the number of courses offered and in the scholarly research devoted to them (see for instance Haggard et al., 2013). However, this has not been the case with MOOCs in the Humanities, and more specifically those dealing with foreign language learning (Sokolik, 2014). The number of MOOCs created in 2014 reached almost 4,000 (www.moocs.co), whereas there are only around 30 LMOOCs on offer to date (Bárcena & Martín-Monje, 2014). This early stage of development in LMOOCs is confirmed by the scarcity of related scholarly work, whether of an empirical or a theoretical nature. The literature review carried out by one of the authors of this chapter revealed a surprisingly low number of articles on this topic published in scientific journals: Godwin-Jones (2012, 2014), Schulze and Smith (2013), Stevens (2013a, 2013b) and Winkler (2013) are the only six contributions so far (Bárcena & Martín-Monje, 2014). All these data show an incipient but steady scholarly interest in the field of LMOOCs, whose research agenda should include a study of the remarkable disparity between the number of MOOCs in other disciplines and the small number of LMOOCs.
Various authors point to the particular features of language learning which make it difficult to create a purely connectivist MOOC (Martín-Monje & Bárcena, 2014): language is not only knowledge-based, in the sense that it requires the assimilation of vocabulary items and grammar rules, but is mainly skill-based, since it involves putting into practice a complex range of receptive, productive and interactive functional capabilities. Colpaert (2014) and Godwin-Jones (2014) have also attempted to provide paths for future development. Colpaert indicates that "an LMOOC should look like it has been well designed in a methodological and justifiable way, with its eventual features depending on local context and learning goal" (2014, p. 169) and proposes four issues as the suggested path for LMOOCs: modularity, specialization, adaptation and co-construction. The second issue, specialization, is particularly relevant in this chapter, since LMOOCs should not merely try to cater for as many language users as possible, but should also attract those participants who have specific LSP needs. Godwin-Jones (2014) ventures to look ahead and enumerates seven possible directions: (1) more options for credentialing from completion of MOOCs; (2) growth in learning analytics applied to MOOCs; (3) more involvement in planning and teaching by information specialists, especially subject librarians; (4) more openness in MOOC content; (5) greater modularity in MOOC structure; (6) increased adaptation of MOOCs to mobile environments; and (7) more language MOOCs in targeted areas: English as a foreign language, less commonly taught languages and LSP.

194  Elena Martín-Monje and Patricia Ventura

Next we will focus on the specific case of the LMOOC "Professional English" and describe the context in which this empirical study was carried out.

Context

The "Professional English" LMOOC offered by UNED was, as already mentioned, the first ESP MOOC in Spain. In its first edition the course was launched simultaneously on two platforms, MiríadaX and UNED's own open MOOC platform, Aprendo, reaching nearly 50,000 users across the two. In its second edition the MOOC was launched only on Aprendo, with more than 8,000 registered users. The course ran for 12 weeks and contained six modules designed to be studied every two weeks, although this schedule was offered only as guidance: registration remained open throughout the course, as did all the contents and activities, with the aim of providing students with a flexible and open methodology that allowed them to work at their own pace. Research undertaken by the authors throughout the two editions of "Professional English" has followed an action-research methodology (Lewin, 1946), focusing on the importance of interaction and collaboration among the students enrolled in the course. Accordingly, the authors led a pedagogical intervention in the first edition of the course and analysed the results in order to evaluate its success. Analysis of the results was followed by a reflection period during which they devised a new strategy oriented towards improving results in the second edition. The implementation of this new strategy was undertaken as a side-project experiment which started in the middle of the second edition of "Professional English". During the second edition it was noted that the participants were not making use of the SV that is inherent to any effective LSP course; together with a lack of interaction and collaboration, this became another cause for concern. Social networks were seen as a new (but, for many, well-known) learning environment which could foster collaboration and interaction among students and address fundamental issues in the course development, such as the students' lack of SV.
Facebook was selected as the social network for the experiment, since it is one of the most popular and offers the possibility of creating a group environment. Accordingly, a Facebook group (henceforth FG) was created by the "Professional English" MOOC curator with the aim of attracting students to join it, thus promoting collaboration and engagement with the course through a different environment. The experiment ran for eight full weeks, during which different topics were presented by the instructor following the MOOC contents (job offers, communication in business, meetings, negotiation), focusing on specific vocabulary and expressions such as collocations, word forms and phrasal verbs. Following a mixed-method approach, the main strategies adopted to collect data from the experiment were observation of participants' interaction and two questionnaires, issued before and after the experiment. The pre-experiment questionnaire was addressed to the students who joined the FG and gathered information about their use of technology and social networks in language learning prior to their registration in the MOOC. The post-experiment questionnaire, in turn, aimed to capture the students' experience after taking part in the FG. In the following section the data gathered from these instruments are analysed and presented.

Data analysis and discussion

The second edition of "Professional English" was launched on UNED's MOOC platform (Aprendo) and ran for 12 weeks, from November 2013 to January 2014. Approximately 8,500 students enrolled in the MOOC and around 6,400 actually started it, which shows the lack of true motivation or interest of nearly a quarter of potential students. Activity decreased dramatically from Module 1 (6,400 students) to Module 2 (2,500 students) and Module 3 (1,450 students), but after that numbers stabilized at around 1,000 students in Modules 4, 5 and 6. Concerning success and completion rates, roughly 600 students passed the course, which means that almost 10% of those who actually started were able to obtain a credential (a type of online certificate students can request by paying a small fee and taking an online test). Of that group, approximately half (326) completed the whole course, a completion rate of 5%, in line with the average MOOC completion rate according to recent research (Jordan, 2013). Since forum activity mostly centred on technical issues with the platform and other difficulties unrelated to the course contents, the course curator decided to open a FG, as explained in the previous section, after week 4 of the course, in order to promote social learning and interaction in the target language, focusing on language learning rather than on technical issues. With the implementation of this FG as a side project of the MOOC, the curator also wanted to increase student motivation, one of the key issues addressed in MOOC research owing to the low retention rates these courses show (Beaven, Codreanu & Creuzé, 2014).
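The reported rates follow directly from the approximate figures given in the text; a short calculation makes the funnel explicit (the counts below are the rounded values from the text, not exact enrolment data):

```python
# Approximate figures reported for the second edition of "Professional English".
enrolled = 8500    # registered on Aprendo
started = 6400     # began Module 1
passed = 600       # eligible for a credential
completed = 326    # finished the whole course

print(f"drop-out before starting: {1 - started / enrolled:.0%}")  # ≈ 25%, 'nearly a quarter'
print(f"credential rate:          {passed / started:.1%}")        # ≈ 9.4%, 'almost 10%'
print(f"completion rate:          {completed / started:.1%}")     # ≈ 5.1%, the reported 5%
```

Note that both rates are computed against those who started (6,400), not against total enrolment, which is the convention the chapter follows.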
When asked in the post-experiment questionnaire about their expectations before joining the FG, 79% of the "Professional English" MOOC students answered that they wanted to improve their language skills, but a considerable number (22 out of 66 students) also said they wanted to supplement the MOOC. This may account for students who perhaps already had a good knowledge of professional English and found the course contents, at least up to week 4, insufficient. This need or wish to supplement the MOOC leads us to the students' knowledge of SV before joining the FG. To the question "How would you rate your vocabulary knowledge on the subject (Professional English) before joining the group?", 58% of the students answered that it was 'basic', although a considerable number (25 out of 66) rated it as 'wide'. Only 5% of the students considered that their knowledge of SV was 'poor'. Since these data are based only on the students' perception – not on a diagnostic vocabulary test prior to joining the FG – their perception may not coincide with their actual knowledge of SV. A merely basic knowledge of SV may affect reading comprehension and consequently hinder overall success in the LMOOC. As for the 38% who rated their knowledge of vocabulary as 'wide', these may be the students who wanted to supplement the MOOC. Although an intermediate level of English (B1 according to the CEFR) is the recommended level for following the course with ease, the course level actually lies between A2 and B1, and it was probably the B1 students who had a wider knowledge of SV. Furthermore, some students who answered the post-experiment questionnaire suggested creating a new 'advanced' course ("I think you will have to offer a higher level course", "It would be a brilliant idea to create a new professional English course [higher level], keep the group permanently open and keep on posting new vocabulary") and even continuing work in the FG with a greater degree of specialization ("I would love to continue learning with the Facebook Group. Would be great to have a course or lessons dedicated to English in engineering. Thanks!!!"). Concerning the students' perception of the feedback exchanged among their course mates in the FG, 58% thought it was "good, they answered my questions and solved my doubts". At the beginning of the experiment, guidelines and instructions were given to the LMOOC students who wished to participate in the FG on how to use the feedback mechanisms inherent to this social network, such as the "Like" button, mentions and comments. Students were thus highly encouraged to comment on other course mates' replies in order to help them, ask for further explanations or even make corrections, creating an enriching social learning experience.
Comparing interaction in the FG with interaction in the course forums, we can see that, although activity decreased during the course in both learning environments, students were more active and willing to participate in the social network than in the course forums – even if participation sometimes consisted only of hitting a button. Since written interaction in the FG was strictly in English, regular participation could provide students with opportunities to practise the SV they were learning and also to practise general conversational English with their course mates and the instructor. Finally, two more aspects have been taken into account from the students' perspective after participating in the experiment: improvement of SV knowledge and overall perception of the combination of an LMOOC and a Facebook group. Almost 40% of the students stated that their vocabulary knowledge had 'really improved' after joining the FG, whereas 56% considered that it had 'slightly improved'. Similarly, when asked about their overall perception of the FG experience, 57% of the students rated it as 'good', with 20% rating it 'excellent' and another 20% 'OK'. The answers to these questions show a positive and rather optimistic view from the students' perspective. Although this research relies mostly on qualitative work, the results – considering both the students' perception of their vocabulary learning experience and observation from the MOOC curator and FG administrator and instructor – lead us to believe that social networks have the potential to engage students and foster interaction in LMOOCs, as well as to increase the acquisition of SV. Further research in this area should examine in more detail the potential impact of implementing social networks in LMOOCs on students' results, taking quantitative data into consideration as well.

Conclusion

This chapter has attempted to shed light on the acquisition of SV in MOOCs, paying special attention to how social learning can be enhanced by attaching a social network such as Facebook to these online courses. The data gathered show that genuinely interested students appreciate this educational intervention, since those who joined the FG had a higher completion rate than the rest. Their perception of progress is encouraging, and the majority were satisfied with the positive impact of the FG on their SV learning, but the results are far from conclusive, since completion rates in LMOOCs are still rather low. This has led the authors of this chapter to conclude that further research is desirable: launching a third iteration of the MOOC "Professional English" in which sound quantitative information is gathered and triangulated with the qualitative data, so as to obtain a clearer, more detailed view of how social learning can make a significant impact on SV acquisition in LSP.

References

Appel, C. & Gilabert, R. (2006). Finding common ground in LSP: A computer-mediated communication project. In E. Arnó, A. Soler & C. Rueda (Eds.), Information technology in languages for specific purposes: Issues and prospects (pp. 75–90). New York: Springer.
Arnó, E., Soler, A. & Rueda, C. (Eds.). (2006). Information technology in languages for specific purposes. New York: Springer.
Arnó-Macià, E. (2012). The role of technology in teaching LSP courses. The Modern Language Journal, 96, 89–104.
Arnó-Macià, E. (2014). Information technology and languages for specific purposes in the EHEA: Options and challenges for the knowledge society. In E. Bárcena, T. Read & J. Arús (Eds.), Languages for specific purposes in the digital era (pp. 3–25). Bern: Springer.
Bárcena, E. & Martín-Monje, E. (2014). Introduction. Language MOOCs: An emerging field. In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 1–15). Berlin: De Gruyter Open.
Bárcena, E., Read, T. & Arús, J. (Eds.). (2014). Languages for specific purposes in the digital era. Bern: Springer.
Bax, S. (2003). CALL – Past, present and future. System, 31, 13–28.
Beaven, T., Codreanu, T. & Creuzé, A. (2014). Motivation in a language MOOC: Issues for course designers. In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 48–66). Berlin: De Gruyter Open.
Blattner, G. & Fiori, M. (2009). Facebook in the language classroom: Promises and possibilities. International Journal of Instructional Technology and Distance Learning, 6(1), 17–28. Retrieved from http://www.itdl.org/journal/jan_09/article02.htm.

Blattner, G. & Lomicka, L. (2012). Facebook-ing and the social generation: A new era of language learning. Apprentissage des Langues et Systèmes d'Information et de Communication, 15(1). Retrieved from http://alsic.revues.org/2413.
Brick, B. (2013). Evaluating social networking sites (SNSs) for language learning: An inquiry-based student project. [PDF document]. Retrieved from Coventry University website: http://www.coventry.ac.uk/Global/BES/Active%20Learning/Billy%20Brick.pdf.
Camiciottoli, B. (2007). The language of business studies lectures. Amsterdam: John Benjamins.
Colpaert, J. (2014). Conclusion. Reflections on present and future: Towards an ontological approach to LMOOCs. In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 161–172). Berlin: De Gruyter Open.
Conole, G. & Alevizou, P. (2010). A literature review of the use of Web 2.0 tools in higher education. [PDF document]. Retrieved from the Higher Education Academy website: https://www.heacademy.ac.uk/sites/default/files/Conole_Alevizou_2010.pdf.
Coxhead, A. (2013). Vocabulary and ESP. In B. Paltridge & S. Starfield (Eds.), The handbook of English for specific purposes (pp. 115–132). Boston: Wiley-Blackwell.
Daniel, J. (2012). Making sense of MOOCs: Musings in a maze of myth, paradox and possibility. Journal of Interactive Media in Education, 3(1). Retrieved from http://www-jime.open.ac.uk/jime/article/view/2012-18.
Demaizière, F. & Zourou, K. (2010). Social media and language learning: (R)evolution? Apprentissage des Langues et Systèmes d'Information et de Communication, 13(1). Retrieved from http://alsic.revues.org/1695.
Downes, S. (2012, January 6). Creating the connectivist course. [Blog post]. Retrieved from http://halfanhour.blogspot.pt/2012/01/creating-connectivist-course.html.
Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital literacies. Harlow: Pearson.
Dudley-Evans, T. & St John, M. (1998). Developments in ESP: A multi-disciplinary approach. Cambridge: Cambridge University Press.
EDUCAUSE (2011). 7 things you should know about . . . MOOCs. Retrieved from http://wwwcdn.educause.edu/ir/library/pdf/ELI7078.pdf.
EDUCAUSE (2013). 7 things you should know about . . . MOOCs II. Retrieved from http://wwwcdn.educause.edu/ir/library/pdf/ELI7097.pdf.
Gimeno, A. (2014). Fostering learner autonomy in technology-enhanced ESP courses. In E. Bárcena, T. Read & J. Arús (Eds.), Languages for specific purposes in the digital era (pp. 27–44). Bern: Springer.
Global social networks ranked by number of users. (2014). Retrieved 20 November, 2014, from http://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/.
Godwin-Jones, R. (2012). Emerging technologies: Challenging hegemonies in online learning. Language Learning & Technology, 16(2), 4–13. Retrieved from http://llt.msu.edu/issues/june2012/emerging.pdf.
Godwin-Jones, R. (2014). Global reach and local practice: The promise of MOOCs. Language Learning & Technology, 18(3), 5–15. Retrieved from http://llt.msu.edu/issues/october2014/emerging.pdf.
Graves, M. F. (2006). The vocabulary book: Learning & instruction. New York, NY: Teachers College Press.

Grosse, C. U. & Voght, G. M. (1991). The evolution of languages for specific purposes in the United States. Modern Language Journal, 75, 181–195.
Grosse, C. U. & Voght, G. M. (2012). Update to "The evolution of languages for specific purposes". Modern Language Journal, 95(Focus Issue), 189–202.
Haggard, S., Brown, S., Mills, R., Tait, A., Warburton, S., Lawton, W. & Angulo, T. (2013). The maturing of the MOOC: Literature review of massive open online courses and other forms of online distance learning. Research paper number 130. London: Department for Business, Innovation and Skills.
Hirsh, D. (2010). Academic vocabulary in context. Bern: Peter Lang.
Jordan, K. (2013). Emerging and potential learning analytics from MOOCs. [PowerPoint slides]. Retrieved from http://www.academia.edu/3264990/Emerging_and_potential_learning_analytics_from_MOOCs.
Kuteeva, M. (2011). Wikis and academic writing: Changing the writer–reader relationship. English for Specific Purposes, 30, 44–57.
Lamy, M. & Zourou, K. (Eds.). (2013). Social networking for language education. UK: Palgrave Macmillan.
Levy, M. (1997). Computer-assisted language learning: Context and conceptualization. Oxford: Oxford University Press.
Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2(4), 34–46.
Martín-Monje, E. & Bárcena, E. (2014). Language MOOCs: Providing learning, transcending boundaries. Berlin: De Gruyter Open.
Martín-Monje, E. & Talaván, N. (2014). The I-AGENT project: Blended learning proposal for professional English integrating MOODLE with classroom work for the practice of oral skills. In E. Bárcena, T. Read & J. Arús (Eds.), Languages for specific purposes in the digital era (pp. 45–67). Bern: Springer.
McCarthy, N. (2014). Twitter versus Facebook. Retrieved from http://www.statista.com/chart/2835/twitter-versus-facebook/.
Nation, P. (2001). Learning vocabulary in another language. Cambridge: Cambridge University Press.
Nation, P. (2008). Teaching vocabulary: Strategies and techniques. Boston, MA: Heinle.
Oblinger, D. G. (2006). Space as a change agent. In D. G. Oblinger (Ed.), Learning spaces (pp. 1–3). EDUCAUSE e-book. Retrieved from http://www.educause.edu/research-and-publications/books/learning-spaces.
Pantò, E. & Comas-Quinn, A. (2013). The challenge of open education. Journal of e-Learning and Knowledge Society, 9(1), 11–22.
Perea-Barberá, M. D. & Bocanegra-Valle, A. (2014). Promoting specialised vocabulary learning through computer-assisted instruction. In E. Bárcena, T. Read & J. Arús (Eds.), Languages for specific purposes in the digital era (pp. 141–154). Bern: Springer.
Schmitt, N. (2010). Key issues in teaching and learning vocabulary. In R. Chacón-Beltrán, C. Abelló-Contesse & M. M. Torreblanca-López (Eds.), Insights into non-native vocabulary teaching and learning (pp. 28–40). Bristol: Multilingual Matters.
Schulze, M. & Smith, B. (2013). Computer-assisted language learning – the times they are a-changin' (Editorial). CALICO Journal, 30(3), i–iii.
Siemens, G. (2012, June 3). What is the theory that underpins our MOOCs? [Blog post]. Retrieved from http://www.elearnspace.org/blog/2012/06/03/what-is-the-theory-that-underpins-our-moocs/.

Sokolik, M. (2014). What constitutes an effective language MOOC? In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing learning, transcending boundaries (pp. 16–32). Berlin: De Gruyter Open.
Stapleton, P. & Helms-Park, R. (2006). Evaluating Web sources in an EAP course: Introducing a multi-trait instrument for feedback and assessment. English for Specific Purposes, 25, 438–455.
Stevens, V. (2013a). What's with the MOOCs? TESL-EJ: Teaching English as a Second or Foreign Language, 16(4). Retrieved from http://www.tesl-ej.org/wordpress/issues/volume16/ej64/ej64int/.
Stevens, V. (2013b). LTMOOC and Instreamia. TESL-EJ: Teaching English as a Second or Foreign Language, 17(1). Retrieved from http://www.tesl-ej.org/wordpress/issues/volume17/ej65/ej65int/.
Stockwell, G. (2007). A review of technology choice for teaching language skills and areas in the CALL literature. ReCALL, 19, 105–120.
Stockwell, G. (2013). Tracking learner usage of mobile phones for language learning outside of the classroom. In P. Hubbard, M. Schulz & B. Smith (Eds.), Learner–computer interaction in language education: A festschrift in honor of Robert Fischer (pp. 118–136). San Marcos, TX: CALICO.
Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Vygotsky, L. (1987). Problems of general psychology. In R. W. Rieber & A. S. Carton (Eds.), The collected works of L. S. Vygotsky. New York: Plenum Press.
Watters, A. (2012, December 3). Top ed-tech trends of 2012: MOOCs. [Blog post]. Retrieved from http://hackeducation.com/2012/12/03/top-ed-tech-trends-of-2012-moocs.
Winkler, K. (2013). Where there's a MOOC. The Linguist, 52(3), 16–17.
Zourou, K. (2012). On the attractiveness of social media for language learning: A look at the state of the art. Apprentissage des Langues et Systèmes d'Information et de Communication, 15(1). Retrieved from http://alsic.revues.org/2436.

Part 5

Corpus-based approaches to specialized linguistic domains


15 Corpus-based teaching in LSP
Tony Berber Sardinha
São Paulo Catholic University, Brazil

Introduction

Corpora (principled collections of spoken and written text, stored in electronic form) have had an impact on language teaching for some time through the incorporation of computer tools, techniques, theoretical constructs and research findings from corpus linguistic research (Berber Sardinha, 2013; Berber Sardinha et al., 2013; O'Keeffe, McCarthy, & Carter, 2007; Reppen, 2010). Among the contributions that corpus linguists have made to our knowledge of how language operates in reality (rather than how we imagine it does), two are particularly important: the notion that language users employ words in generally predictable sequences (which largely forms the basis of John Sinclair's linguistics) and the notion that language use is inherently variable and shaped by the situations in which individuals speak and write (which generally underlies the work of Douglas Biber). Although different thinkers have proposed such ideas over the years, Sinclair and Biber provided the necessary evidence that each of these ideas reflects reality and, more importantly, demonstrated the extent of their systematicity. Sinclair's work led him to "discover" collocation, as Hoey (2009, p. 34) pointed out, by providing the evidence for an idea that Firth (1957) had proposed earlier. Based on corpus research, Sinclair introduced the idiom principle, according to which language users draw on semi-preconstructed word combinations (collocations) to produce and understand language (Sinclair, 1991), rather than assembling and decoding utterances by focusing on individual words. The idea that words bring along with them a "memory" of their preferred uses has been referred to as phraseology (or the phraseological principle), which Hunston (2002, p. 137) defined as "the tendency of words to occur, not randomly, or even in accordance with grammatical rules only, but in preferred sequences." In the classroom, the pioneering work of Tim Johns (1991) with classroom concordancing and data-driven learning in the 1980s and 1990s incorporates these ideas and still provides the basis of contemporary corpus-based teaching in general, and in language for specific purposes (LSP) in particular. He advocated the use of concordances in the classroom and encouraged students to build hypotheses about how words were used and test

them against the evidence provided by concordances. His work was influenced by John Sinclair's corpus-based lexicography, carried out from the late 1960s onwards and in the Cobuild project led by Sinclair in the 1980s and 1990s. Biber's work, in turn, has shown how registers are systematically distinct from each other; they carry built-in linguistic profiles, nuanced 'linguistic fingerprints'. These profiles are based on sets of probabilities that form large underlying parameters called dimensions, which can be turned into a set of 'yardsticks' with which we can measure the distance from one register to another (Biber, 1988). Dimensions are identified by means of detailed linguistic and statistical analyses. Biber's work questions the "language as a whole" approach (cf. Berber Sardinha & Veirano Pinto, 2014) to language description and teaching. If registers are not considered in the classroom, students might not develop the crucial skill of learning to express themselves according to the conventions and expectations of particular registers. Fortunately, LSP teaching naturally invites a register-based approach, as the idea of focusing on particular text varieties (genres, registers, text types, etc.) is often a necessity in content and/or occupational settings (cf. Celani et al., 1988). In this chapter, I distinguish between two broad types of corpus-based approaches to LSP material development and classroom teaching: phraseology-centered and register-centered. In the phraseology-centered approach, the focus is on lexical patterning and collocations, and the lexicographical work by Sinclair and subsequent researchers is influential. Students are often expected to do lexicography-style activities, going through concordances and noting down the frequent lexical patterns and their meanings.
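The concordances at the heart of such activities are key-word-in-context (KWIC) displays: every occurrence of a node word shown with a fixed span of left and right context. A minimal sketch follows (the two-sentence example corpus, the `kwic` function and the span width are our own illustrative choices; classroom tools such as AntConc or SketchEngine offer far richer concordancers):

```python
import re

def kwic(corpus, node, width=30):
    """Return key-word-in-context lines for a node word (case-insensitive)."""
    lines = []
    for m in re.finditer(rf"\b{re.escape(node)}\b", corpus, flags=re.IGNORECASE):
        left = corpus[max(0, m.start() - width):m.start()]
        right = corpus[m.end():m.end() + width]
        # Pad left and right contexts so the node word lines up down the page.
        lines.append(f"{left:>{width}} [{m.group()}] {right:<{width}}")
    return lines

corpus = ("The board will hold a meeting on Friday. Minutes of the last meeting "
          "were approved, and the meeting was adjourned at noon.")

for line in kwic(corpus, "meeting"):
    print(line)
```

Aligning the node word in a centre column is what lets students scan vertically for recurring left and right collocates, the hypothesis-building activity Johns advocated.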
In contrast, the register-centered approach focuses on the salient features that distinguish registers; the major goal is to familiarize students with such structures by comparing and contrasting different registers. Biber's work motivates such materials, with its emphasis on the clusters of linguistic characteristics that mark individual registers. These two perspectives have often been incorporated into classroom teaching in some form. This merging is beneficial as it enables students to both take a detailed look at the phraseology of the texts and examine how the texts vary in relation to one another or to other registers.

In what follows, ideas about how to use online corpora to explore both phraseology and register variation are presented. The goal is to demonstrate some of the ways in which corpora can be analyzed and the findings from corpus explorations brought to the classroom. The examples are from English corpora, but the general principles of phraseology and register variation apply to all languages, and it is possible to use corpora from other languages with most of the tools discussed here.

The chapter uses online corpus analysis portals to illustrate how phraseology and register variation can be explored in the LSP classroom. Many of the online analyses discussed here – especially KWIC/concordancing, word frequency, and keyword listing – can be replicated with desktop corpus-processing tools such as WordSmith Tools (Scott, 1996/2015) and AntConc (Anthony, 2015). However, online corpus portals were preferred for a number of reasons. In terms of power, they are engineered to handle both small and very large data sets – in the order

Corpus-based teaching in LSP 205

of billions of words. They can perform searches that would take hours (with a desktop program) in a matter of seconds (or less). They offer sophistication by incorporating powerful functions, such as tagging, statistical collocate listing, and term extraction, which are not normally available in desktop corpus analysis packages. They further provide built-in resources, making many different ready-to-use corpora available. Some online corpus portals (like SketchEngine) enable users to upload their own corpora and make use of the same tools used with the larger built-in corpora. Furthermore, as the corpora and tools are online, group work is facilitated; individuals can work on the same data from different places, and there is no need to install the same piece of software on individual machines. And because the tools are online, different operating systems can access the same data and see the results in (relatively) the same way. Finally, they provide portability and can be accessed on most devices, including mobile ones.

However, online corpora also have specific disadvantages. For example, they generally offer restricted access to the full texts for copyright reasons; they are susceptible to security vulnerabilities; they can be shut down temporarily or permanently, beyond the users' control; and they require an Internet connection to operate. Online corpora do not necessarily ensure continuity; saving work so that users can pick up where they stopped at some other time might not be possible. Additionally, online corpus sites can charge an access fee. Finally, portability is an issue, as transferring output to another program (e.g., Microsoft Word) can be challenging. Naturally, it is possible to combine desktop and online resources for analysis and teaching (e.g., running concordances on a small local corpus using a desktop program and searching for the same words in a large online corpus).
The online corpora used to illustrate the analyses in this chapter are SketchEngine (www.sketchengine.co.uk) and the BYU corpora (corpora.byu.edu); although neither provides LSP corpora per se, they offer most of the advantages and avoid most of the disadvantages mentioned.

Phraseology-centered materials

The stock-in-trade of phraseology-centered materials often consists of word frequency lists, keyword lists, concordances, collocation tables, and n-gram lists (also known as clusters, multi-word units, and bundles). Users can either build LSP-like corpora on the fly or import their own texts into SketchEngine. To illustrate these functions, two LSP corpora will be mentioned: the Corpus of Aircraft Manuals (CAM; Zuppardo, 2014) and the Corpus of English Research Articles (CERA; Ramos Filho, 2015). These corpora were imported into SketchEngine and are not accessible to users other than their owners.

One way in which a word frequency list is useful in corpus-based classroom teaching is as a tool for taking a bird's-eye view of the corpus (what kind of texts are in the corpus, what topics are talked about, what words are most frequent, etc.). The word frequency list can also help guide decisions regarding which words should be focused on in a lesson, as a frequency-ordered list will show which words are likely to appear more often in the target texts, meaning the students will likely see that word many more

times inside and outside the classroom. To get started, teachers usually look for content words in the word frequency list, and in this case, candidate words would include task (both in all capital letters and in lower case), maintenance, and aircraft. Alternatively, the teacher might let the students work on the list themselves, either on the actual SketchEngine screen or on printouts of the list.

The next step is usually searching for the target words in the corpus and displaying the hits in keyword in context (KWIC) layouts or concordances. This step can also be carried out by the teacher in advance or be left for the students to do in a lab or with printouts. In analyzing concordances, teachers and students are generally concerned with questions such as:

•• What words typically occur to the left and to the right of the word?
•• Do these neighboring words form repeated groups (patterns) around the word?
•• Are fixed word sequences formed with that particular word?
•• What is the meaning of the word in each pattern and/or sequence?
•• What is the word used for in the text?
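The two steps just described, building a frequency list and then displaying hits in KWIC layout, can be sketched in a few lines of Python. The tiny corpus and the `kwic` helper below are illustrative stand-ins, not SketchEngine code:

```python
from collections import Counter

# A toy stand-in for an LSP corpus; a real corpus would be read from files.
corpus = (
    "TASK: Supply electrical power. Do this task before maintenance. "
    "TASK: Remove electrical power. Do this task after the maintenance check."
)

tokens = corpus.lower().replace(":", "").replace(".", "").split()

# 1. Word frequency list: a bird's-eye view of the corpus.
freq = Counter(tokens)
print(freq.most_common(3))  # [('task', 4), ('electrical', 2), ('power', 2)]

# 2. KWIC (keyword in context) concordance for a node word.
def kwic(tokens, node, span=3):
    """Return each occurrence of `node` with `span` words of context."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - span):i])
            right = " ".join(tokens[i + 1:i + 1 + span])
            lines.append(f"{left:>30} | {node} | {right}")
    return lines

for line in kwic(tokens, "task"):
    print(line)
```

Aligning the node word in a central column, as SketchEngine and desktop concordancers do, is what makes the left- and right-hand patterns visible at a glance.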

In the case of task, a concordance for that word in CAM (on SketchEngine) shows a predominant pattern, do this task, often followed by either an imperative verb phrase (supply electrical power, remove electrical power) or a noun–noun sequence (ELMS Power Management Panel, APU Starting and Operation, Backup Generator Oil Level Check). The meaning of the keyword can be arrived at by guessing from the context, using previous knowledge, inferring from the first language, or using other such strategies. As for what the word is used for in the text, in this corpus, task is predominantly employed to signal the beginning of the description of a particular operation. This is indicated typographically by the frequent use of colons (task:), followed by a description of the operations that need to be performed on the airplane.

It is sometimes necessary to see more context, which can be accomplished by clicking the label on the left-hand side of each concordance line (e.g., file1961565); this brings up a small window at the bottom of the screen where the whole text file can be seen (this will not work if the files submitted to the corpus are in zip format). It is important to bear in mind that the original layout of the document is often lost in online corpora, which occasionally makes it harder for students to see the role the word is playing in the text organization.

An important issue regarding the use of concordances in the classroom is that the level of detail of the words and patterns to be taught should be tailored to the level of the students. If the LSP students are at an elementary level of English, then perhaps studying the immediate verb phrase (do this task) would suffice. For intermediate/advanced students, it might be a good idea to explore both the verb and noun phrases, which form patterns based on their grammatical equivalence, not their lexical similarity.
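Computationally, a fixed sequence such as do this task is simply a frequent n-gram containing the node word. A minimal sketch (toy tokens and a hypothetical `ngrams_with` helper, not a SketchEngine function):

```python
from collections import Counter

# Toy token stream echoing the CAM pattern around "task".
tokens = ("do this task supply electrical power "
          "do this task remove electrical power "
          "do this task before the check").split()

def ngrams_with(tokens, node, n=3):
    """Count n-grams that contain the node word."""
    grams = Counter()
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        if node in gram:
            grams[gram] += 1
    return grams

top = ngrams_with(tokens, "task").most_common(1)
print(top)  # [(('do', 'this', 'task'), 3)]
```

Sorting such counts puts the recurrent cluster (do this task) at the top, which is exactly what an n-gram or cluster list in a concordancer surfaces for students.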
LSP students normally have questions about the words in concordances, and the teacher can use a number of different techniques to deal with such situations, including asking the students to run concordances

for the unknown words in the same or different corpora and having the students try to guess the meaning of the words.

The verb phrases following the word task present an interesting case of opposition, supply versus remove electrical power, thereby giving the students an opportunity to work on lexical sets – that is, words related by various semantic relations (synonymy, antonymy, meronymy, etc.). SketchEngine offers Sketch-Diff, a very interesting tool to explore such relations, as part of the Word Sketch tool. To illustrate, supply will be compared to remove; through Sketch-Diff, users can determine the collocates most frequently occurring with either of these focus words and compare the sets of collocates. An example of such an analysis appears in Figure 15.1; the display is color-coded to show which collocates occur more frequently with each word.
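Sketch-Diff itself works on grammatical relations and SketchEngine's own statistics; the underlying idea of contrasting two words' collocate sets can nevertheless be illustrated with a plain co-occurrence window (toy sentences echoing the CAM examples; the helper is hypothetical):

```python
from collections import Counter

sentences = [
    "supply electrical power to the unit",
    "supply a ground to the relay",
    "remove and retain the four screws",
    "remove and discard the packing",
    "remove the fastener from the panel",
]

def window_collocates(sentences, node, span=2):
    """Collect words occurring within `span` positions of the node word."""
    coll = Counter()
    for s in sentences:
        toks = s.split()
        for i, t in enumerate(toks):
            if t == node:
                for j in range(max(0, i - span), min(len(toks), i + span + 1)):
                    if j != i:
                        coll[toks[j]] += 1
    return coll

supply = window_collocates(sentences, "supply")
remove = window_collocates(sentences, "remove")

# Sketch-Diff style: collocates exclusive to each word, and shared ones.
only_supply = set(supply) - set(remove)
only_remove = set(remove) - set(supply)
shared = set(supply) & set(remove)
print(sorted(only_remove))
```

Set operations over the two collocate inventories give the same three-way split the color-coded display conveys: collocates preferring one word, the other, or both.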

Figure 15.1  Use of Sketch-Diff for supply/remove in CAM

According to Figure 15.1, for the collocate category and/or, supply is used with close (e.g., After ten seconds, a switch closes and supplies a ground to the dim relay), whereas remove occurs with many different verbs, like retain (e.g., Remove and retain the four screws and washers), add (Add or remove shims between the alignment fitting), and discard (Remove and discard the packing). With the object category, marked differences occur between the two words as well, as supply prefers source, output, and current, whereas remove attracts piece, fastener, and particle. Interestingly, power is not marked for either supply or remove; thus, it is shown in white.

The Sketch-Diff feature can be explored on its own, with students running their own comparisons and checking the collocate sets through the concordances (by simply clicking on the frequencies in the tables). It can also be used in conjunction with some form of note-taking or vocabulary book activity, where the students can group the words into sets and record them for later reference or study. For instance, with remove, the object category gives a wide range of machine parts that are removed in the course of aircraft maintenance, such as bolts, nuts, screw, and cover. These words can then be labeled as a parts set and entered on cue cards, in a note-taking app, or even in a computer database.

The SketchEngine thesaurus tool can also be used to help develop awareness of word sets by showing which words in the corpus have similar collocational sets. The user enters a focus word and selects the part of speech; the tool then brings up a list of related words. Unlike a regular thesaurus, the resulting list is not strictly arranged by any one particular semantic relation; therefore, students must not confuse this with a list of synonyms or antonyms. By clicking a word on the thesaurus list, the user sees a Sketch-Diff display with the preferences of each word broken down into the usual SketchEngine categories (and/or, object, subject, etc.). As the thesaurus list can be quite long, it seems prudent to have students choose a few words to look at in more detail. The list itself can be another source for word set work, with students mining the entries for potential related words, such as connect and disconnect, hold and keep, and turn and tighten. The thesaurus tool also provides a word cloud, which conveniently displays the most significant words in large type; when clicked, the word cloud leads the user to the phraseology of each word via the Sketch-Diff table.

A popular tool in corpus-based LSP teaching is keyword lists – lists of words whose frequencies are statistically higher in the corpus under study (the focus corpus) than in a comparison/reference corpus. SketchEngine computes keywords through its Word List option. To illustrate, the keywords for CAM were computed using as reference corpus enTenTen12, a corpus of online texts with nearly 12 billion words, available in SketchEngine as both a stand-alone and a reference corpus. The general rules of thumb with keywords are that the reference corpus should not contain the focus corpus and that it should be larger than the focus corpus (Berber Sardinha, 2004), unless there are good reasons to do otherwise. When keywords are run on a very specialized corpus such as CAM, it is common for technical words to pop up to the top of the list because they tend to occur much more frequently in that particular field of expertise than in regular

language use. The discrepancy in frequency between the two domains is expressed in the frequency-per-million count in the table. A keyword like subtask occurs more than 6,000 times per million words in the aircraft manuals but very rarely in general English. Clicking the count under one of the corpora runs a concordance for that particular corpus: The concordance shows that subtask is so frequent in the manuals because it is used in section headings that divide the whole document into different parts. In enTenTen12, the concordance shows that subtask is used differently, as part of regular sentences, such as that subtask can be implemented as a function, each subtask can only ever be partially satisfied, and this subtask entails monitoring. Despite this difference in usage, the texts in which this word occurs in enTenTen12 are largely as technical as the aviation maintenance manuals (some are procedural texts, like computer and financial reports, for instance). Advanced students are more likely to benefit from this feature, as they might encounter uses of the word in the reference corpora that require extra vocabulary exposure to understand.
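SketchEngine computes keyness with its own metric, but a common alternative in corpus work is Dunning's log-likelihood statistic over the same frequency-per-million comparison. A sketch with hypothetical counts (the subtask figures below are invented to mirror the 6,000-per-million order of magnitude, not the actual CAM data):

```python
import math

# Hypothetical counts: word frequency and total size of each corpus.
focus = {"subtask": 4200, "the": 40000}      # e.g., a small specialized corpus
focus_size = 700_000
ref = {"subtask": 120, "the": 600_000_000}   # e.g., a large web reference corpus
ref_size = 12_000_000_000

def log_likelihood(f1, n1, f2, n2):
    """Dunning's log-likelihood keyness statistic for one word."""
    e1 = n1 * (f1 + f2) / (n1 + n2)   # expected frequency in focus corpus
    e2 = n2 * (f1 + f2) / (n1 + n2)   # expected frequency in reference corpus
    ll = 0.0
    for f, e in ((f1, e1), (f2, e2)):
        if f > 0:
            ll += f * math.log(f / e)
    return 2 * ll

for word in focus:
    ll = log_likelihood(focus[word], focus_size, ref.get(word, 0), ref_size)
    pm_focus = focus[word] / focus_size * 1_000_000
    pm_ref = ref.get(word, 0) / ref_size * 1_000_000
    print(f"{word}: {pm_focus:.0f} vs {pm_ref:.2f} per million, LL = {ll:.1f}")
```

A domain word like subtask gets a very large score because its observed focus-corpus frequency dwarfs its expected one, whereas a word like the, frequent everywhere, scores low; ranking by this score is what pushes technical vocabulary to the top of a keyword list.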

Register-centered materials

As previously mentioned, in so-called register-centered materials the focus is on exploring the similarities and differences among registers. This approach can be put into practice in at least three ways: (1) by looking at the major characteristics of individual registers; (2) by comparing the salient characteristics of a register to other registers; and (3) by comparing individual texts of a particular register to corpora of that same register. These techniques can be applied in isolation or in combination. To illustrate them, academic writing will be used. CERA is a balanced 4.9-million-word corpus of academic writing (more specifically, research articles) from 10 fields, published by authors from nine countries; a subsection of it will be used here (the chemistry subcorpus, with 90 texts totaling 677,000 words).

One way to get started with register comparisons is by resorting to previous multidimensional (MD) analyses. MD analyses depend on sophisticated statistical techniques, and therefore, in classroom teaching it is generally advisable to keep the technical side of the analysis in the background, unless required. One of the most recurring themes of MD studies is the existence of a major dimension of register variation that accounts for a basic distinction (in English and other languages as well) between texts with a verbal style and texts with a nominal style. Texts with a verbal style are often spoken, oral, and interactive, whereas texts with a nominal style are generally written, literate, and informative. Scholars have used labels like involved versus informational and oral versus literate to capture these characteristics. The exact linguistic features constituting each end of the dimension are provided in the publications by Biber and other MD analysts (Berber Sardinha & Veirano Pinto, 2014; Conrad & Biber, 2001). Due to space constraints, only a few of these features will be explored here.
A useful starting point for LSP teaching is the linguistic features that statistically mark the register under study. A first step would be to choose the words or

grammatical structures from among those that load on a particular dimension of register variation in English (Biber, 1988). As Biber (1988) has shown, English academic writing is highly informational (on Biber's dimension 1) and, as such, has high frequencies of attributive adjectives, prepositions, nouns, and agentless passives, among other characteristics. Any of these linguistic characteristics would be a suitable starting point for a class on academic writing; to illustrate, nouns will be used.

A corpus tagged for part of speech is needed to carry out a search for nouns; fortunately, in SketchEngine, the corpora have already been tagged. To run a concordance search for nouns, the user has to write the search term using corpus query language (CQL), which has a special syntax for representing part-of-speech tags. The search string for (common) nouns is [tag="NN."]. This search retrieved more than 139,000 hits, or 205,000 words per million, in the chemistry subcorpus of CERA, indicating that one in every 4.8 words is a noun. Upon inspection, it was noted that the tagger treated letters used in formulae (of which there are many in chemistry research articles) as nouns; therefore, the actual noun count is not exactly what was reported (not just because some non-nouns were counted as nouns, but also because some nouns were not tagged as such). It is advisable to warn students that every part-of-speech tagger makes mistakes, but reliable taggers achieve upwards of 95% accuracy (TreeTagger, the tagger used by SketchEngine in this illustration, reports such accuracy). This should give students the confidence to evaluate by themselves some of the decisions the tagger made, spot errors, and, when appropriate, disregard them.

As a concordance for a part of speech is likely to throw up many different node/keywords, it is necessary to sort the concordance by the node word so that similar node/keywords occur next to each other in the display.
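The tag query and the per-million arithmetic can be mimicked outside SketchEngine. This sketch matches tags with a regular expression over a toy tagged corpus (the tag labels follow common English tagset conventions) and then reproduces the chapter's normalization for the 139,000-hit figure:

```python
import re

# A toy POS-tagged corpus as (word, tag) pairs; real corpora are tagger output.
tagged = [("the", "DT"), ("aqueous", "JJ"), ("solution", "NN"),
          ("was", "VBD"), ("purified", "VBN"), ("by", "IN"),
          ("column", "NN"), ("chromatography", "NN")]

# CQL-style tag query: match common-noun tags (NN, NNS, ...).
pattern = re.compile(r"NN.?$")
nouns = [w for w, t in tagged if pattern.match(t)]
print(nouns)  # ['solution', 'column', 'chromatography']

def per_million(hits, corpus_size):
    """Normalize a raw count to occurrences per million words."""
    return hits / corpus_size * 1_000_000

# The chapter's figures: 139,000 noun hits in the 677,000-word subcorpus.
rate = per_million(139_000, 677_000)
print(round(rate))                  # 205318
print(round(1_000_000 / rate, 1))   # 4.9, i.e., roughly one word in five
```

Normalizing to a per-million rate is what makes counts from corpora of different sizes directly comparable, the same logic that underpins the keyword and compare-corpora displays.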
After that, students can browse the concordance to select nouns to look at in detail, conducting all the different kinds of analyses discussed in the previous section.

In academic writing in general, and in technical writing in particular, nouns tend to be part of the terminology of the field. SketchEngine offers a facility to extract terms, which are generally multi-word items occurring frequently in the corpus. Technically, the output presents term candidates (not necessarily true terms). The term extraction algorithm in SketchEngine compares the frequency of common word sequences in the focus corpus with the counts of the same sequences in a reference corpus (in this case, the reference corpus was enTenTen12). Frequent term candidates in the corpus include column chromatography, aqueous solution, colorless oil, and crude product. The term column chromatography occurs 79 times in chemistry CERA and 297 times in the much larger enTenTen12; as the difference in frequency (297/79 = 3.8) is much lower than the difference in corpus size alone would predict (12 billion/677,000 = 17,725), that sequence is flagged as a potential term. Being aware of the role of proportions in the extraction algorithms for terms and keywords might help students make sense of the fact that there are often more occurrences of a particular term candidate or keyword in the reference corpus than in the focus corpus. By clicking the counts under F or RefF, a concordance will be produced for the occurrences of that term in either

the focus corpus (F) or the reference corpus (RefF). Once the concordance is displayed, the collocations of that word or term can be computed. In SketchEngine, the collocations option brings up a table of collocates – that is, a list of the words that most typically accompany the search term. For column chromatography, the most significant statistical collocates include several passive forms, which are also significant linguistic characteristics of academic writing in general, according to Biber (2006). As with other SketchEngine displays, the collocates table enables users to generate concordances on the fly, and two kinds of filtering are available. The user can click on either P, to get the actual occurrences of the search term (column chromatography) with the collocate (e.g., purified), or N, to obtain a concordance of the search term without that collocate.

Another major technique for register-centered teaching focuses on register comparisons, which can be implemented in SketchEngine using the keywords procedure (shown above) or through the compare corpora function, for instance. With this function, the user chooses at least two corpora, and the tool provides a measure of similarity for each pair of corpora. To illustrate, CERA was compared to the London English corpus, a spoken corpus comprising interviews and self-recordings. The compare corpora function yields a score that indicates the degree of similarity/dissimilarity between the two corpora, which in this case was high for dissimilarity (9.79, signaled by a dark color). Similarity is measured by comparing the relative frequencies of the words in the corpora (as in keywords). A table similar to the keywords table is presented, having one of the corpora as the focus corpus and the other as the reference.
In this case, the comparison showed that the lexical constitution of the two corpora is very different indeed: The interviews have a large number of personal pronouns (you, she, he, etc.), communication or public verbs (say, said), discourse particles (yeah, right), contractions (n't, gonna), hedges (like, bit), wh-words (what, who, why), and indefinite pronouns (something), which are all characteristic of oral discourse (Biber, 1988, p. 105). It is possible to switch the corpora around and have CERA as the focus corpus, which in turn produces a table showing a large number of abbreviations (H, CO, mg, etc.), past tense verb forms (obtained, observed), nouns (solution, surface, mixture, etc.), and "long" words (concentration, temperature, measurements, etc.); as previously discussed, these characteristics reflect the nominal, informational nature of research articles.

The output of the compare corpora function can be used in the classroom as a point of discussion about the importance of knowing how language is 'fine-tuned' to different registers, which in turn can help students become aware of the systematic differences underlying register variation. In this way, they can build realistic expectations about what they are likely to encounter or produce in particular texts, once they know the register from which a text comes. The students can also explore the phraseology of individual words in the compare corpora output list and select some of them for detailed study. Again, it is generally productive to have students group the words into sets or families and explore these, rather than focus on random words from the list. For instance, students could classify words into categories used in MD analyses and then select one or more categories to explore.
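SketchEngine's compare corpora score is computed with its own method (where a higher number signals greater dissimilarity). As a classroom illustration of the general idea of comparing relative word frequencies, one can compute a cosine similarity between two per-million frequency profiles; the profiles below are hypothetical, loosely modeled on the interview-versus-research-article contrast:

```python
import math

# Hypothetical per-million frequency profiles for a few diagnostic words.
spoken = {"you": 20000, "yeah": 5000, "solution": 10, "obtained": 5}
academic = {"you": 300, "yeah": 1, "solution": 900, "obtained": 700}

def cosine_similarity(p, q):
    """Cosine similarity between two word-frequency profiles (1 = identical)."""
    words = set(p) | set(q)
    dot = sum(p.get(w, 0) * q.get(w, 0) for w in words)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm

sim = cosine_similarity(spoken, academic)
print(round(sim, 3))  # a low value for two very different registers
```

A profile compared with itself scores 1; the oral and written profiles above score far lower, which is the numerical counterpart of the lexical differences listed in the paragraph above.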

Another way in which registers can be compared is by using a multi-register corpus like the Corpus of Contemporary American English (COCA), which provides access to some of the major registers of English (spoken – which in reality is mostly TV programs – fiction, magazine, newspaper, and academic). As COCA does not permit uploading one's own corpora, the idea here is to use it in conjunction with SketchEngine, which enables this facility.

To illustrate, students can use the compare corpora function with corpora representing two very different registers (such as CERA and the London English Corpus) and select a particularly salient linguistic category to compare in COCA. A linguistic feature that marks academic writing is agentless passives (Biber, 1988), and the compare corpora tool shows a number of past tense forms that (are likely to) perform this function, such as obtained, observed, reported, and shown. To search for these forms in COCA, the search string would be obtained|observed|reported|shown and the type of search would be chart. The result appears in Figure 15.2, which confirms that such forms are most typical of academic prose (761.95 times per million words); they are next most commonly found in the press (magazine and newspaper, 242.55 and 226.25 times per million words, respectively). Curiously, they are found more often in spoken than in fiction, which reflects the fact that the spoken subcorpus of COCA contains TV commentary, which tends to be more informational than literary fiction. Students generally like working with visual displays like the COCA bar charts, as these present the information in a user-friendly manner.
Finally, the third manner in which register-centered LSP can be implemented is by comparing a single text to a corpus of texts from the same register; this kind of classroom activity enables a switch from a corpus focus to a text focus, which is important given that outside the classroom students are far more likely to encounter language in texts (written or spoken) than in corpus analysis displays. A useful tool is WordandPhrase.info, which is part of the BYU family of online corpora. There are actually two versions of this tool: one that utilizes academic texts and another that uses a combination of different registers (called all genres); on both, users can run searches based on either words or phrases. The academic version enables users to compare individual texts (up to 1,000 words) to a list of academic words and phrases (drawn from corpora). The all genres version, in contrast, compares the text to a frequency list of general English.

Figure 15.2  Chart view in COCA for past tense verb forms

Section:   All      Spoken   Fiction  Magazine  Newspaper  Academic
Freq:      133,062  12,900   6,845    23,178    20,751     69,388
Per mil:   286.56   134.99   75.69    242.55    226.25     761.95

The academic tool shows the English academic prose frequency band of each word in the text; these bands are based on an academic vocabulary list (Gardner & Davies, 2013) that classifies words according to their frequency range – namely, 1–500 (that is, the words ranked from the 1st most common in academic writing down to the 500th) and 501–3,000 (words ranked from the 501st down to the 3,000th in the academic list). It also shows the technical words in the text – that is, words typically found in texts of specific disciplines (e.g., education, science, philosophy, business).

The bands can be used as a means for teachers to identify the vocabulary most appropriate to their students. The words in the first band are suited to all students, including those who are beginning their academic studies; words in the second band are usually better suited to more advanced students. The difficulty associated with technical words varies: students will usually not struggle with words that reflect specialized terms from their own field of expertise, as is often the case with technical words.

As the display is "live," students can click on individual words to see a sketch – that is, a concordance with collocates coded in the colors representing the frequency bands in the academic word list. For instance, clicking the word provide (from the first band) in the sentence Thermodenuding particles can provide insights into aerosol composition brings up a concordance for provide with a number of lemma citations (from the academic corpus made available by the tool), such as better signaling is thought to provide a cognitive boost, a surface plate is used to provide a dead-accurate reference plane for critical dimensions, and the Internet provides a functionally infinite number of IP addresses. A synonym list is also presented; in this case, words such as offer, present, and grant are given as potential synonyms of provide.
Finally, a total count of each band is included in the output. In this particular text, 16% of the word tokens fall within the first band, 6% in the second, and 14% in technical (characterizing the text as quite technical).
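The band percentages reported by the tool amount to simple coverage counts: classify each token by band membership and divide by the text length. A sketch with hypothetical band lists (the real bands come from Gardner and Davies's academic vocabulary list, and the real technical list is discipline-specific):

```python
# Hypothetical band lists standing in for the academic vocabulary list bands.
band1 = {"provide", "result", "process"}        # ranks 1-500
band2 = {"aerosol", "composition"}              # ranks 501-3000
technical = {"thermodenuding", "particles"}     # discipline-specific words

text = "thermodenuding particles can provide insights into aerosol composition".split()

def band_coverage(tokens, *bands):
    """Percentage of tokens falling in each band."""
    return [round(100 * sum(t in b for t in tokens) / len(tokens))
            for b in bands]

print(band_coverage(text, band1, band2, technical))  # [12, 25, 25]
```

A comparatively high technical percentage, as in the chapter's 14% example, is what characterizes a text as technical; the remaining tokens are function words and general vocabulary outside all three lists.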

Conclusion

This chapter presented what were called the phraseology-centered and the register-centered perspectives for carrying out corpus-based analysis as a means of preparing activities for LSP classroom teaching. Each perspective feeds off the other, as the sample analyses intended to show. A phraseology-based exploration presupposes a focus on either 'language as a whole' or particular registers or genres, while a register-based analysis naturally invites a focus on the phraseology of the words and structures marking the register/genre. The general goal was to illustrate these perspectives through resources that are user-friendly yet powerful, hence the choice of SketchEngine and the BYU tools (COCA and the academic version of WordandPhrase.info). These represent cutting-edge corpus technology designed with general users rather than computer experts in mind. The explorations shown here rely on very sophisticated programs that work in the background while the user interacts with the tools through uncomplicated point-and-click online interfaces. Some of the ideas discussed can be implemented with tools not shown here. Offering ready-made materials was not the intention, since

these should preferably reflect the goals of individual classrooms, which is not within the scope of this chapter. It is important to note that genuine corpus-based teaching and learning essentially comprise discovery work, where teachers and learners give themselves the opportunity to learn with the corpus. It is hoped that the ideas presented here help put that general principle into practice.

Acknowledgments I am grateful to the editors for their continued support, and to the anonymous reviewer for comments. I’m also grateful to both CNPq (Brasília, DF, Brazil) and Fapesp (São Paulo, SP, Brazil) for funding my research, from which this chapter derives.

References

Anthony, L. (2015). AntConc [computer software]. Tokyo: Waseda University.
Berber Sardinha, T. (2004). Linguística de corpus [Corpus linguistics]. São Paulo, Brazil: Manole.
Berber Sardinha, T. (2013). Teaching grammar and corpora. In C. Chapelle (Ed.), The encyclopedia of applied linguistics (pp. 5578–5584). Malden, MA: Wiley-Blackwell.
Berber Sardinha, T., Shepherd, T. M. G., Delegá-Lúcio, D., & São Bento Ferreira, T. (2013). Tecnologias e mídias no ensino-aprendizagem de inglês [Technologies and media in English language teaching]. São Paulo, Brazil: Macmillan.
Berber Sardinha, T., & Veirano Pinto, M. (Eds.). (2014). Multi-dimensional analysis, 25 years on: A tribute to Douglas Biber. Amsterdam/Philadelphia, PA: John Benjamins.
Biber, D. (1988). Variation across speech and writing. Cambridge: Cambridge University Press.
Biber, D. (2006). University language: A corpus-based study of spoken and written registers. Amsterdam/Philadelphia, PA: John Benjamins.
Celani, M. A. A., Holmes, J. L., Guerra Ramos, R., & Scott, M. (1988). The Brazilian ESP project – An evaluation. São Paulo, Brazil: Educ.
Conrad, S., & Biber, D. (2001). Variation in English: Multi-dimensional studies. Harlow: Longman.
Firth, J. R. (1957). Papers in linguistics 1934–1951. Oxford: Oxford University Press.
Gardner, D., & Davies, M. (2013). A new academic vocabulary list. Applied Linguistics, 35(3), 305–327.
Hoey, M. (2009). Corpus-driven approaches to grammar: The search for common ground. In R. Schulze & U. Römer (Eds.), Exploring the lexis-grammar interface (pp. 34–48). Amsterdam: John Benjamins.
Hunston, S. (2002). Corpora in applied linguistics. Cambridge: Cambridge University Press.
Johns, T. (1991). Should you be persuaded: Two examples of data-driven learning. ELR Journal, 4, 1–16.
O'Keeffe, A., McCarthy, M., & Carter, R. (2007). From corpus to classroom: Language use and language teaching. Cambridge: Cambridge University Press.

Corpus-based teaching in LSP 215 Ramos Filho, E. (2015). Artigos acadêmicos em língua inglesa: Uma abordagem multidimensional [Academic articles in English: A multidimensional approach] (Unpublished PhD dissertation). São Paulo Catholic University, São Paulo, Brazil. Reppen, R. (2010). Using corpora in the language classroom. New York: Cambridge University Press. Scott, M. (1996/2015). WordSmith tools [computer software]. Oxford/Stroud: Oxford University Press/Lexical Analysis Software. Sinclair, J. M. (1991). Corpus, concordance, collocation. Oxford, New York: Oxford University Press. Zuppardo, M. C. (2014). Dimensões de variação em manuais aeeronáuticos: Um estudo baseado na análise multidimensional [Dimensions of variation in aviation manuals: A mutli-dimensional study] (Unpublished MA thesis). São Paulo Catholic University, São Paulo, Brazil.

16 Transcription and annotation of non-native spoken corpora

Mario Carranza Díez
Universitat Autònoma de Barcelona, Spain

Introduction

In recent years, foreign language acquisition studies and research on Information and Communication Technologies (ICT) have turned their attention to the compilation of learner corpora, both because of the need for empirical data on learners' proficiency in the foreign language and because such databases can be incorporated into the development of computer-assisted language learning (CALL) applications. In the case of the acquisition of a foreign language sound system and its phonological rules, learners' utterances must be represented by means of a symbolic system in order to show how words were actually pronounced. In written corpora, a common distinction is drawn between the source data, the text, and other pieces of information included to describe the original text, i.e., the annotations. In spoken corpora, by contrast, the linguistic content of the data is not directly accessible; a representation of the speech in symbolic form, a transcription, is therefore needed to process and analyze the data. Transcriptions can thus be considered linguistic annotations needed in spoken corpora to represent speech in an abstract way, but they must not be confused with the original speech data. The transcription is sometimes treated as the speech itself, a misconception that can lead to overgeneralizations about the data (Cucchiarini, 1993; Gibbon, Moore, & Winski, 1998). Although the process of translating the speech signal into an abstract representation is commonly carried out by specialists (e.g. experts in phonetics or language instructors), the high degree of variability and the subjectivity inherent in individual auditory judgments of speech can lead to errors.
In order to reduce this subjectivity, there have been attempts to obtain automatic or semi-automatic phonological transcriptions of foreign speech by adapting automatic speech recognition (ASR) and automatic phoneme recognition technologies (Strik & Cucchiarini, 2014). Despite recent improvements in the automatic phonological transcription of speech corpora, manual transcription, although time-consuming and costly, remains the method that yields the most accurate representation of speech, especially if the speech is unprepared or spontaneous and no reference orthographic transcription is available.

In this chapter we discuss the characteristics of spontaneous non-native speech and the process of manual transcription and annotation at different levels of representation, depending on the purpose for which the corpus has been compiled. We offer an overview of standards for transcribing and annotating speech at the segmental level, and we exemplify this methodology with samples from a spoken corpus of Spanish as a second language (L2) produced by Japanese speakers, transcribed and annotated to train a system for the automatic detection of pronunciation errors.

Types of non-native speech data

Non-native speech data can easily be obtained by recording students' oral productions; however, the situation in which the recordings take place and the type of task that the student is asked to perform determine the speaking style and, thereby, the degree of control and awareness that the foreign speaker has over their pronunciation. Most non-native speech corpora use tasks designed to prompt different speaking styles from students (Cylwik, Wagner, & Demenko, 2009; Ballier & Martin, 2013; Detey et al., 2014). Non-spontaneous speech – usually referred to as read speech – shows the highest control over pronunciation. Since the speaker does not need to construct the message while speaking, all attention can be focused on the formal aspects of the utterance, among them correct pronunciation. The data obtained from non-spontaneous speech are close to "clean data": as the content is already given, the non-linguistic phenomena frequent in spontaneous speech are not commonly found. Non-spontaneous speech can be elicited through controlled activities that prompt the speaker to read words or sentences aloud, or to repeat the desired input after hearing an example from a native speaker, as in shadowing and mirroring techniques. An advantage of this speaking style is that both the recordings and the orthographic transcriptions of the prompts can be obtained, and the latter can be used to generate (semi-)automatic phonological transcriptions of the speech. When the student is not provided with textual or oral cues but the content of the speech is not improvised either, semi-spontaneous speech is obtained. Semi-spontaneous speech differs from spontaneous speech in the degree of attention paid to form, as part of the speaker's attention is directed to remembering the prepared text.
The non-linguistic phenomena mentioned above are not rare in semi-spontaneous speech, and the orthographic transcription can also be obtained if the original text is provided. Finally, spontaneous – or extemporaneous – speech is obtained in unprepared situations and can be recorded from conversations, role-plays or free-talking activities. Spontaneous speech is characterized by the frequent presence of non-linguistic phenomena and can be affected by uncontrolled events. Despite being harder to analyze, owing to the high degree of noise in the signal, an unprepared situation can trigger specific pronunciation errors (Cylwik et al., 2009), and the student's oral performance is captured in communicative situations closer to real language use; this style is therefore preferred when naturalness is pursued.


Transcription

Transcribing non-native spontaneous speech is a complex activity, owing to the high degree of variability in the speech signal, transfer from the L1, and the constant presence of vocalizations and other extra-linguistic phenomena. For this reason, transcribers should follow a set of rules for interpreting and representing speech, aimed at maintaining consistency across all levels of transcription (Bonaventura, Howarth, & Menzel, 2000; Cucchiarini, 1993; Detey et al., 2014). Variability in manual transcription arises from a certain degree of subjectivity, since transcription is based on individual perception, and it also depends on factors such as the accuracy required of the transcriptions, the training and expertise of the annotator, and the intelligibility of the speech data. As far as possible, all the criteria adopted for the written representation of speech should be specified in transcription guidelines, which can serve as a benchmark for the different annotators. Adopting this methodology does not eliminate variability altogether, but it can help reduce the degree of subjectivity in the transcriptions (Cucchiarini, 1993; Racine et al., 2011).

Orthographical transcription

The initial level of representation is achieved through the orthographical transcription of the speech, viz. a graphical representation in standard written form. The difficulty of this task increases as the speech shows more characteristics of spontaneity and more features of orality. A verbatim transcription – one that reproduces all the linguistic content present in the speech, including vocalizations, repetitions, truncations, and other non-linguistic phenomena – is much more time-consuming than a clean transcription, in which all these phenomena are omitted. The degree of fidelity to the speech therefore needs to be carefully agreed among the transcribers, and should respond to the aim for which the corpus is compiled, considering the difficulty and time a verbatim transcription requires. An intermediate solution is the use of labels for tagging the non-linguistic content. Given that a standardized mark-up language can be read and processed by the majority of corpus processors and search engines, a clean transcription can always be derived automatically from the verbatim transcription by eliminating the content labeled as non-linguistic. Standardized forms of representing non-linguistic phenomena in the target language should be agreed on by transcribers and incorporated into the transcription guidelines (TEI Consortium, 2014) (see the section 'Non-linguistic and extra-linguistic annotation').
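As an illustration of deriving a clean transcription from a verbatim one, the following sketch assumes a hypothetical label convention (inline tags such as [laugh] or [cough], and a trailing hyphen for truncated words); the chapter does not prescribe any particular convention.

```python
import re

# Hypothetical conventions (illustrative only): non-linguistic events are
# tagged inline as [laugh], [cough], etc., and truncated words end in "-".
NON_LING = re.compile(r"\[(?:laugh|cough|breath|noise)\]")
TRUNCATED = re.compile(r"\b\w+-(?=\s|$)")

def clean_transcription(verbatim: str) -> str:
    """Derive a clean transcription by stripping labeled non-linguistic
    content and truncated word fragments from a verbatim transcription."""
    text = NON_LING.sub(" ", verbatim)
    text = TRUNCATED.sub(" ", text)
    return " ".join(text.split())  # normalize whitespace

print(clean_transcription("quie- [laugh] quiero ir [cough] a casa"))
# -> "quiero ir a casa"
```

Because the labels are machine-readable, the verbatim version remains the master copy and the clean version can always be regenerated from it.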

Broad phonological transcription

The phonological transcription of speech, an abstract representation of the pronunciation of words, is particularly useful if the corpus is intended for the development of speech technologies or for research on the acquisition of L2 pronunciation rules. A reference – canonical – representation includes the expected pronunciation of each word as recorded in dictionaries, i.e., pronounced in a standard manner without individual or dialectal features. The canonical phonological transcription serves as a reference, a benchmark against which other transcriptions can be validated. The variant considered standard in the target language should be adopted as the pronunciation reference; for languages with more than one standard, such as English or Spanish, the choice can be motivated by the variant used in the classroom. This is a crucial decision, because pronunciation errors will be judged against the reference pronunciation represented at this level; if the foreign student speaks in a different variant or speaking style, certain pronunciation features might be evaluated as errors. The actual pronunciation of students is then matched against the adopted pronunciation standard, and discrepancies between the reference and the learners' realizations can be identified. The broad phonological transcription can be automatically generated from the orthographic transcription by a grapheme-to-phoneme conversion algorithm, or by substituting each written word with its phonological representation taken from a pronunciation dictionary (Strik & Cucchiarini, 2014). At this level, the transcription shows the phonological representation of words as pronounced in isolation; thus, no allophones or diacritics marking coarticulation processes are included in the phone inventory. However, possible correct pronunciations of the same word – pronunciation variants – can be incorporated if the pronunciation dictionary provides this information. A broad phonological transcription can also be converted into an allophonic transcription that represents the actual pronunciation of the speaker, by extending it with new symbols for allophones.
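The dictionary-substitution route to a canonical transcription can be sketched as follows; the lexicon entries and their SAMPA forms are illustrative toy examples, not taken from any of the corpora discussed, and a real system would use a full lexicon or a grapheme-to-phoneme converter.

```python
# Toy pronunciation dictionary: Spanish words mapped to SAMPA phone
# strings (stress omitted; entries are illustrative).
PRON_DICT = {
    "para": "p a 4 a",
    "poder": "p o D e 4",
    "mirar": "m i 4 a 4",
}

def broad_transcription(orthographic: str) -> list[str]:
    """Replace each written word by its canonical phonological form;
    out-of-vocabulary words are flagged for manual transcription."""
    return [PRON_DICT.get(w.lower(), f"<OOV:{w}>")
            for w in orthographic.split()]

print(broad_transcription("para poder mirar"))
# -> ['p a 4 a', 'p o D e 4', 'm i 4 a 4']
```

The OOV marker makes explicit which words still need a grapheme-to-phoneme pass or manual attention, rather than silently guessing a pronunciation.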
The International Phonetic Association has developed an inventory of phonetic symbols (IPA, 1999) for transcribing the world's languages. However, if the corpus is intended to be processed automatically, it should be written in a computer-readable notation system, namely plain ASCII text without formatting and in a standard character encoding. Many standards have been proposed for adapting a phone inventory to ASCII characters; among the most common are SAMPA (Wells, 1997), Arpabet (Shoup, 1980), and TIMITBET (Seneff & Zue, 1988). Worldbet (Hieronymus, 1994) was created as an extension of Arpabet, adapted for transcribing languages other than American English. Some of these alphabets have also been used for annotating non-native speech: Goronzy (2002) used SAMPA for transcribing German-accented L2 English speech, and Gruhn, Minker and Nakamura (2011) used the TIMITBET phone labels for non-native English.
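The conversion into such an ASCII notation amounts to a symbol-by-symbol mapping, which can be sketched as a lookup table; the (partial) IPA-to-X-SAMPA pairs below follow Wells's published correspondences, shown here for a handful of Spanish-relevant phones only.

```python
# Partial IPA -> X-SAMPA mapping (Wells's correspondences); any real
# converter would cover the full inventory of the target language.
IPA_TO_XSAMPA = {
    "ɾ": "4", "r": "r", "ɣ": "G", "ð": "D", "β": "B",
    "ɲ": "J", "ʝ": "j\\", "tʃ": "tS", "x": "x", "ɯ": "M",
}

def to_xsampa(ipa_phones: list[str]) -> str:
    """Convert a list of IPA phone symbols to an X-SAMPA phone string;
    symbols shared by both notations fall through unchanged."""
    return " ".join(IPA_TO_XSAMPA.get(p, p) for p in ipa_phones)

print(to_xsampa(["p", "a", "ɾ", "a"]))  # -> "p a 4 a"
```

Working at the level of phone symbols (rather than raw characters) avoids ambiguity with multi-character X-SAMPA codes such as tS.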

Narrow phonetic transcription

A narrow phonetic transcription implies a careful analysis of the speech signal in order to represent accurately the foreign speaker's actual realization of the target language. The degree of detail of a narrow phonetic transcription depends on the aim for which the corpus is transcribed. For multipurpose corpora of spoken language, a general broad phonological transcription is commonly preferred, whereas research on the foreign realization of target sounds requires a more detailed analysis. Coarticulation processes, such as nasalization, palatalization, devoicing and aspiration, as well as other phenomena characteristic of spontaneous speech, like breathy voice and creaky voice, can interfere in the generation of acoustic models; they should therefore be annotated systematically. A set of diacritics may be added to the phone inventory used in the phonological transcription to account for these phenomena and for the characteristics of non-native pronunciation that deviate from native pronunciation of the target language. Narrow phonetic transcription cannot be obtained directly by automatic methods, but it can be semi-automatically generated by automatic phonological transcription software if the orthographic transcription is provided and the system is optimized for this specific task (Strik & Cucchiarini, 2014). X-SAMPA, an extension of SAMPA proposed by Wells (1994), accounts for all the symbols of the IPA, making it possible to represent virtually any language with machine-readable characters. Worldbet can also be adopted for narrow phonetic transcription, as it provides symbols for all IPA phones and diacritics. One possibility for representing the pronunciation of L2 speech is to combine the phonemic inventories of the target and the source language. The annotators can then choose between the L1 and the L2 phone symbols according to the degree of non-nativeness perceived. This method requires doubling the original phonemic inventory and adding more than one symbol for similar sounds in both languages (Bonaventura et al., 2000), which increases the range of transcription possibilities and can occasionally lead to biased decisions when evaluating ambiguous realizations.
Annotators must therefore judge each phone unit present in the speech carefully, which makes transcription a very difficult and time-consuming task, prone to perception errors that ultimately affect transcription quality. The difficulty increases when the target sound is realized neither as expected in the target language nor as in the source language, but somewhere in between. This is known as an intermediate realization, and it presents features of both the target and the source language. The phenomenon is very common in non-native speech, given that the acquisition of new sounds and phonological contrasts in the L2 is gradual. Intermediate realizations thus exemplify the in-between state on the way from the L1 sound to the target L2 sound. In Carranza (2013) we discussed this issue and proposed a method based on decision trees, one for each intermediate realization, as guidance for manual transcription.
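One possible way to implement the combined L1/L2 inventory described above (a sketch, not the scheme of any cited corpus) is to prefix each phone symbol with a language code, so that similar L1 and L2 sounds remain distinct labels for the annotator.

```python
# Illustrative partial inventories in SAMPA-style symbols; the language
# prefixes ("es:", "ja:") are an assumed convention, not a standard.
L2_SPANISH = {"a", "e", "i", "o", "u", "4", "G", "D"}
L1_JAPANESE = {"a", "e", "i", "o", "M", "4", "F"}

def combined_inventory(l2: set[str], l1: set[str]) -> set[str]:
    """Merge two phone inventories, keeping same-named symbols apart
    by tagging each with its language of origin."""
    return {f"es:{p}" for p in l2} | {f"ja:{p}" for p in l1}

inv = combined_inventory(L2_SPANISH, L1_JAPANESE)
# "es:u" and "ja:M" are now separate labels, so a transcriber can record
# whether a vowel was realized as Spanish /u/ or as Japanese /ɯ/.
```

The cost the chapter notes is visible here: the label set roughly doubles, and every ambiguous realization forces a choice between two (or more) candidate symbols.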

Annotation

Annotation of speech data is a time-consuming, costly task that is highly prone to human error; for this reason, it is advisable to automate every repetitive task where possible. In non-native speech, a high number of non-linguistic vocalizations (coughs, laughs, interjections) and disfluencies (repetitions, truncations, hesitations) are likely to appear. Moreover, the speech signal can also be corrupted by extraneous noises (doors slamming, buzz from the recorder, ringing phones, chimes, overlapping speech of the interviewer, etc.). All these events must be marked using a standardized notation system that guarantees maximal compatibility with existing corpus-retrieval software and responds to the needs of the transcriber. It is important, then, to find a balance between the sophistication of the annotation system and its usability for the transcriber. A detailed review of the annotation schemes and software used for transcribing non-native speech can be found in Ballier and Martin (2013). In the case of speech corpora, the labels and annotations must be time-aligned with the signal if the corpus is to serve as a resource for the development of speech technologies. This can be achieved with general-purpose speech-analysis software such as Praat (Boersma & Weenink, 2015), but more specific tools for the annotation of non-native speech have recently been developed, among them MAT (Ai & Charfuelan, 2014) and DOLMEN (Eychenne & Paternostro, forthcoming). The Text Encoding Initiative (TEI Consortium, 2014) adopts a mark-up scheme based on XML (eXtensible Markup Language) for annotating speech; this standard is widely adopted and is used in the most consolidated spoken corpus projects, such as the Spoken Dutch Corpus (Oostdijk, 2000), and in some non-native speech corpora, such as ISLE (Menzel et al., 2000). One of the main benefits of adopting XML for annotation is its high degree of usability and compatibility with the majority of search engines and corpus processors, since XML is independent of any software or hardware system. Besides, the set of tags is not fixed and can be extended for special needs, which offers annotators a high degree of flexibility.
On the drawback side, the high formality of the XML mark-up scheme creates a steep learning curve and can be a source of annotation errors. The TalkBank project (MacWhinney, 2015) is a consolidated working platform for corpus creation that offers its own repository, with an independent section for learner corpora, SLABank, on the same web site. The project also provides its own mark-up and transcription standards, together with computational tools for speech annotation (PHON) and for corpus processing (CLAN) (MacWhinney, 2014). CHAT, the annotation scheme developed in the TalkBank project, establishes a common method for compiling speech databases, including non-native, bilingual, pathological and child speech corpora. This multi-task platform facilitates the elaboration of speech corpora and the inter-usability of databases in the CHAT format but, by contrast, restricts their compatibility outside the project. Several L2 corpora have adopted the CHAT format for annotation and transcription, such as SPLLOC (Mitchell et al., 2008) and Language Use In Spanish (Díaz, 2007). The CHAT standard proposes solid protocols for metadata documentation, the labeling of disfluencies and pronunciation errors, and other annotations related to visual information, useful when the goal is the annotation of multimodal data. Besides, its mark-up scheme is simpler than XML; the final enriched text is therefore more readable, and the annotation process less prone to mistakes.
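The time alignment described above (as stored, for instance, by Praat) can be illustrated with a minimal writer for Praat's short TextGrid text format; in practice one would use Praat itself or an existing TextGrid library rather than this sketch.

```python
# Minimal writer for a one-tier Praat short-format TextGrid, to show how
# labels are time-aligned with the signal. A sketch only; it assumes the
# intervals are contiguous and sorted.
def write_textgrid(intervals, tier_name="phones"):
    """intervals: list of (xmin, xmax, label) tuples, in seconds."""
    xmin, xmax = intervals[0][0], intervals[-1][1]
    lines = ['File type = "ooTextFile"', 'Object class = "TextGrid"', "",
             str(xmin), str(xmax), "<exists>", "1",
             '"IntervalTier"', f'"{tier_name}"',
             str(xmin), str(xmax), str(len(intervals))]
    for start, end, label in intervals:
        lines += [str(start), str(end), f'"{label}"']
    return "\n".join(lines) + "\n"

# Three phone intervals for a short stretch of speech.
tg = write_textgrid([(0.0, 0.12, "p"), (0.12, 0.21, "a"), (0.21, 0.26, "4")])
```

Because every label carries explicit start and end times, the same tier structure can hold orthographic, phonological and event annotations in parallel, all anchored to the one recording.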

Non-linguistic and extra-linguistic annotation

The types of non-linguistic and extra-linguistic phenomena that can appear in spontaneous non-native speech are varied in nature and sometimes difficult to evaluate. For this reason, it is helpful to establish a protocol that describes all the phenomena under consideration, together with the annotation standards and marking procedures adopted. First, a distinction should be drawn between vocalizations uttered by the student (non-linguistic content) and external incidents that interfere with the signal (extra-linguistic content). The TEI (2014) further divides vocalizations into non-lexical phenomena, which include all non-linguistic sounds made by the speaker (yawns, laughs, sneezes, snorts, etc.), and semi-lexical phenomena, viz. filled pauses or hesitations. The extent to which vocalizations and incidents are encoded in the transcription depends entirely on the purpose for which the corpus is being compiled. Depending on the intended application, vocalizations and incidents can be tagged within the orthographic transcription or in independent tiers aligned with the speech signal. Tagging these phenomena in independent tiers makes it possible to mark the extent of the whole phenomenon, which very often overlaps with speech, by specifying its beginning and ending times. This is a great advantage if the corpus is intended as a resource for the development of speech technologies, where noisy signals must be excluded from training. An example of a subset of XML tags for annotating non-linguistic and extra-linguistic phenomena in an L2 Spanish corpus produced by Japanese speakers can be found in Carranza (2014).
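The kind of XML annotation described above can be sketched as follows; the element and attribute names (vocal, incident, start/end times in seconds) are illustrative assumptions, not the actual scheme of Carranza (2014) or of the TEI guidelines.

```python
import xml.etree.ElementTree as ET

# Build a small stand-off annotation fragment: one speaker vocalization
# and one external incident, each carrying its own time span so that
# overlap with speech can be represented.
ann = ET.Element("annotations")
ET.SubElement(ann, "vocal",
              {"type": "laugh", "start": "3.41", "end": "3.97"})
ET.SubElement(ann, "incident",
              {"type": "door-slam", "start": "5.02", "end": "5.30"})

xml_str = ET.tostring(ann, encoding="unicode")
```

Keeping these events in their own elements, rather than inline in the orthographic text, is what allows noisy stretches to be located and excluded automatically when the corpus is reused for acoustic model training.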

Error annotation

One of the main objectives in compiling a non-native speech corpus is to collect empirical data on the most frequent mispronunciations. This information provides insight into the oral proficiency of L2 learners and can ultimately establish the types of errors that pronunciation training courses should address. With this purpose in mind, pronunciation errors should be tagged and annotated following systematic criteria, which will then be used in automatic processing and in further statistical analyses. The information included in the error annotation depends on the purpose for which the corpus was compiled. Some corpora have been designed to test a specific pronunciation characteristic for learners with a specific L1, such as the study of nasal vowel realization in L2 French in the IPFC corpus (Detey et al., 2014). In such cases, there is no need to include any information other than the variables related to that type of error. However, if the corpus was not designed for studying a specific error, the tagging must be as specific as possible (Burgos et al., 2014). It is advisable to include, at least, information about the type of mispronunciation (substitution, deletion or insertion), the affected phone and, if possible, the context in which the error appeared. In Carranza (2014) and Carranza, Cucchiarini, Llisterri, Machuca, and Ríos (2014b) an encoding system was proposed for including all this information in a simple alphanumeric code; some examples of errors annotated following this system are shown in Table 16.1. In the CHAT standard, pronunciation errors are labeled by means of a systematic notation protocol that includes information about the type of error, a phonological transcription of the speech signal, and the expected realization, annotated and transcribed using IPA symbols. Machine-readable corpora must instead use an ASCII encoding for phone annotation: SAMPA, X-SAMPA or a similar alphabet, or a numeric code. By tagging the errors and time-aligning them with the speech signal and with the reference and phonetic transcriptions, pronunciation errors can be automatically identified and processed by learning algorithms, which compare both transcriptions and map the discrepancies between the two levels (Strik & Cucchiarini, 2014). This information can then be fed into the development of computer-assisted pronunciation tools enhanced with automatic speech recognition technology.
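The comparison between the canonical and the realized transcriptions can be sketched with a standard sequence alignment; here Python's difflib stands in for the alignment machinery of a real error-detection system, and the error-type labels mirror those used in Table 16.1.

```python
from difflib import SequenceMatcher

def detect_errors(expected: list[str], pronounced: list[str]):
    """Align two phone sequences and report substitutions, deletions
    and insertions as (type, expected phones, pronounced phones)."""
    errors = []
    matcher = SequenceMatcher(None, expected, pronounced)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            errors.append(("subs.", expected[i1:i2], pronounced[j1:j2]))
        elif op == "delete":
            errors.append(("del.", expected[i1:i2], []))
        elif op == "insert":
            errors.append(("insert.", [], pronounced[j1:j2]))
    return errors

# "para" (SAMPA p a 4 a) realized as "pala": [ɾ] substituted by [l].
print(detect_errors(list("pa4a"), list("pala")))
# -> [('subs.', ['4'], ['l'])]
```

Run over a whole corpus, the output of such an alignment is exactly the raw material for the frequency counts of error types and contexts that the chapter describes.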

Metadata

The information that describes the data content and the compilation process is encoded in the metadata. Metadata provide a systematic documentation method for enriching text, by indexing it and implementing rules that search engines use to access segments of text from the corpus. Once properly labeled, the subset of the corpus that is relevant for specific characteristics can be accessed directly, without the system having to read the whole database again. Standards for describing corpus metadata have been compiled and established by, among others, the EAGLES Group (Gibbon et al., 1998), the Corpus Encoding Standard (CES) (Ide, 1998), the TEI guidelines (TEI Consortium, 2014), and the Meta-Share project, part of the Meta-Net network (European Commission, 2010); finally, an approach focusing on speech data has been put forward by Oostdijk (2000). Metadata function as pointers to the information present in the corpus, and likewise divide the data into the units used for exploring the corpus as a resource. The header collects documentation about the speech data, such as recording time and date, student ID, interviewer ID, type of task, speaking style, proficiency level of the student, conditions and quality of the recording, and other dimensions underlying the variation that can be observed in language use. The documentation belonging to the body of the corpus needs to be optimally represented by adopting mark-up schemes, such as text headers and standardized labeling, to mark the samples that meet the requirements relevant to the subsequent analysis of the corpus.
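Metadata-driven selection of corpus subsets can be sketched as follows; the header field names and values are illustrative, not a standard from any of the schemes cited.

```python
# Toy corpus headers: one descriptor dict per recording (field names are
# illustrative assumptions).
RECORDINGS = [
    {"id": "a32", "task": "read", "level": "B1", "style": "non-spontaneous"},
    {"id": "b08", "task": "role-play", "level": "B1", "style": "spontaneous"},
    {"id": "c35", "task": "interview", "level": "A2", "style": "spontaneous"},
]

def select(corpus, **criteria):
    """Return the recordings whose header matches every given field."""
    return [r for r in corpus
            if all(r.get(k) == v for k, v in criteria.items())]

spontaneous_b1 = select(RECORDINGS, style="spontaneous", level="B1")
# Only the recording with id "b08" satisfies both criteria.
```

This is the operation the headers make cheap: the relevant subset is retrieved from the descriptors alone, without re-reading the transcriptions or the audio.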

Applications

Research on foreign language acquisition

The majority of non-native spoken corpora have been collected for the purpose of carrying out research on the acquisition of the phonological contrasts and the

new sounds of an L2. The ISLE corpus of spoken English by German and Italian speakers (Menzel et al., 2000), the Interphonologie du Français Contemporain (IPFC) corpus (Detey et al., 2014), the C-ORAL-ROM corpora (Cresti & Moneglia, 2005) and the Japanese English Learner Corpus (NICT JLE) (Izumi, Uchimoto, & Isahara, 2004) are examples of corpora compiled for analyzing L2 pronunciation acquisition. The focus of research can be at the segmental (phone) level, the suprasegmental (prosodic) level, or both. Compiling large language resources such as non-native corpora is an enormous task; the purposes for which the corpus is compiled should therefore be well defined and established from the beginning. If the purpose is to quantify and analyze learners' proficiency, at least the orthographic and phonological transcriptions are required. With these two transcription levels, errors can be automatically detected by comparing the transcriptions (see the section 'Broad phonological transcription'). Read speech is the most appropriate for obtaining good-quality speech data and, if the script is provided, the canonical phonological transcription can also be obtained automatically by means of grapheme-to-phoneme conversion algorithms; in addition, the transcription reflecting the actual pronunciation can be prepared manually or obtained by means of automatic speech recognition. Spontaneous speech, by contrast, results in a poorer automatic transcription, which has to be checked manually; on the other hand, it provides a more realistic record of the student's pronunciation in communicative situations closer to real life.

Table 16.1 Examples of the encoding and annotation of mispronunciations, with the expected pronunciation (transcribed in SAMPA) and the learner's actual pronunciation (transcribed in X-SAMPA)

Error tag  | Explanation                     | Expected | Pronounced | Type
a32#01_01  | [ɾ] -> [l] between [a], [a]     | para     | pala       | subs.
a05#28_23  | [u] -> [ɯ] between [ɣ̞], [s]     | Gusta    | GMsta      | subs.
b08#32_00  | [ɯ] after [ɾ] in final position | poder    | poDerM     | insert.
b39#16_06  | [k] -> [kj] before [ɣ]          | ki_^ero  | k_ji_^ero  | insert.
c32#01_00  | [ɾ] -> # in final position      | mirar    | mira       | del.
c35#01_00  | [n] -> # in final position      | ban      | ba_~       | del.

Development of Computer-Assisted Pronunciation Training (CAPT) systems

Several studies have pointed out that ASR systems can be efficiently adapted to the pronunciation assessment of non-native speech (Neri, Cucchiarini, & Strik, 2003; Van Doremalen, 2014), provided that the technology's limitations are compensated for by well-designed language learning activities and feedback, and that repair strategies are included to safeguard against recognition errors. An ASR system can be adapted into an automatic pronunciation error detection system by training it with non-native speech data, which generates new acoustic models for the non-native realizations of L2 phones, and by systematizing L1-specific common errors by means of rules (Goronzy, 2002; Burgos et al., 2014; Carranza et al., 2014a). To do so, a large quantity of transcribed speech data is needed; however, manual transcription of non-native speech is a time-consuming and costly task, and current automatic transcription systems are not accurate enough to produce a narrow phonetic transcription.

Conclusions

In this chapter we have introduced a framework for compiling, transcribing and annotating a non-native speech corpus with two objectives in mind: as a linguistic resource for empirical studies on L2 pronunciation acquisition, and as a database for the development of CAPT software. Both applications require the corpus to be transcribed at various levels of representation: orthographic, canonical phonemic and, at the very least, a level where the speaker's actual pronunciation is transcribed phonologically or phonetically. Frequency counts of the most common errors and their phonemic contexts can be obtained by explicitly tagging the mispronounced segments. This type of information is extremely useful for modeling the pronunciation variants of non-native speech and, if the annotations are time-aligned with the speech signal, the corpus can additionally be adapted as a training database for automatic speech recognition systems. Furthermore, non-native spontaneous speech is characterized by frequent disfluencies and a high degree of variability within and between speakers, which make transcription a highly subjective task and can ultimately affect the accuracy and reliability of transcriptions. That is why the adoption of standardized conventions is of crucial importance in this particular case: not only does following a protocol facilitate the process of transcribing and annotating speech, it also ensures the compatibility and future reusability of the corpus.

References

Ai, R., & Charfuelan, M. (2014). MAT: A tool for L2 pronunciation errors annotation. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC). Reykjavik, May 26–31, 2014.
Ballier, N., & Martin, P. (2013). Developing corpus interoperability for phonetic investigation of learner corpora. In A. Díaz-Negrillo, N. Ballier, & P. Thompson (Eds.), Automatic treatment and analysis of learner corpus data (pp. 33–64). Amsterdam/Philadelphia, PA: John Benjamins.
Binnenpoorte, D., Cucchiarini, C., Strik, H., & Boves, L. (2004). Improving automatic phonetic transcription of spontaneous speech through variant-based pronunciation variation modeling. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC). Lisbon (Portugal), May 26–28, 2004 (pp. 1–4).
Boersma, P., & Weenink, D. (2015). Praat: Doing phonetics by computer [Computer program]. Version 5.4.06. Retrieved February 21, 2015, from www.praat.org.

Bonaventura, P., Howarth, P., & Menzel, W. (2000). Phonetic annotation of a non-native speech corpus. In Proceedings of InSTIL (Integrating Speech Technology in Language Learning). Dundee (Scotland), August 31–September 2, 2000 (pp. 10–17).
Burgos, P., Cucchiarini, C., van Hout, R., & Strik, H. (2014). Phonology acquisition in Spanish learners of Dutch: Error patterns in pronunciation. Language Sciences, 41, 129–142.
Carranza, M. (2013). Intermediate phonetic realizations in a Japanese accented L2 Spanish corpus. In P. Badin, T. Hueber, G. Bailly, D. Demolin, & F. Raby (Eds.), Proceedings of SLaTE 2013, Interspeech 2013 Satellite Workshop on Speech and Language Technology in Education. Grenoble (France), August 30–September 1, 2013 (pp. 168–171).
Carranza, M. (2014). Transcription and annotation of a Japanese accented spoken corpus of L2 Spanish for the development of CAPT applications. In II International Workshop on Technological Innovation for Specialized Linguistic Domains (TISLID). Ávila (Spain), May 7–9, 2014.
Carranza, M., Cucchiarini, C., Burgos, P., & Strik, H. (2014a). Non-native speech corpora for the development of computer assisted pronunciation training systems. In Edulearn 2014 Proceedings [CD], Barcelona (Spain), July 7–9, 2014 (pp. 3624–3633).
Carranza, M., Cucchiarini, C., Llisterri, J., Machuca, M. J., & Ríos, A. (2014b). A corpus-based study of Spanish L2 mispronunciations by Japanese speakers. In Edulearn 2014 Proceedings [CD], Barcelona (Spain), July 7–9, 2014 (pp. 3696–3705).
Cresti, E., & Moneglia, M. (Eds.). (2005). C-Oral-Rom: Integrated reference corpora for spoken Romance languages. Amsterdam: John Benjamins.
Cucchiarini, C. (1993). Phonetic transcription: A methodological and empirical study (PhD dissertation). Radboud Universiteit Nijmegen.
Cylwik, N., Wagner, A., & Demenko, G. (2009). The EURONOUNCE corpus of non-native Polish for ASR-based pronunciation tutoring system. In SLaTE 2009, ISCA Workshop on Speech and Language Technology in Education. Warwickshire (England), September 3–5, 2009.
Detey, S., Racine, I., Eychenne, J., & Kawaguchi, Y. (2014). Corpus-based L2 phonological data and semi-automatic perceptual analysis: The case of nasal vowels produced by beginner Japanese learners of French. In Proceedings of Interspeech 2014, Singapore, May 29–30, 2014 (pp. 539–544).
Díaz Rodríguez, L. (2007). Interlengua española: Estudio de casos. Barcelona: Regael.
European Commission (2010). Meta-data schema. META: A Network of Excellence forging the Multilingual Europe Technology Alliance. Retrieved February 21, 2015, from http://www.meta-net.eu/meta-share/metadata-schema.
Eychenne, J., & Paternostro, R. (forthcoming). Analyzing transcribed speech with Dolmen. In S. Detey, J. Durand, B. Laks, & C. Lyche (Eds.), Varieties of spoken French: A source book. Oxford: Oxford University Press.
Gibbon, D., Moore, R., & Winski, R. (1998). Spoken language system and corpus design. Berlin: Mouton de Gruyter.
Goronzy, S. (2002). Robust adaptation to non-native accents in automatic speech recognition. Berlin: Springer.
Gruhn, R. E., Minker, W., & Nakamura, S. (2011). Statistical pronunciation modeling for non-native speech processing. Berlin: Springer.
Hieronymus, J. L. (1994). ASCII phonetic symbols for the world's languages: Worldbet. AT&T Bell Laboratories, Technical Memo. Retrieved February 21, 2015, from http://www.ling.ohio-state.edu/~edwards/WorldBet/worldbet.pdf.

Transcription and annotation 227 Ide, N. (1998). Corpus Encoding Standard: SGML Guidelines for encoding linguistic corpora. In Proceedings of the First International Language Resources and Evaluation Conference (LREC). Granada (Spain), May 28–30, 1998 (pp. 463–470). International Phonetic Association (1999). Handbook of the International Phonetic Association: A guide to the use of the International Phonetic Alphabet. Cambridge: Cambridge University Press. Izumi, E., Uchimoto, K., & Isahara, H. (2004). The NICT JLE Corpus. Exploiting the language learners’ speech database for research and education. International Journal of the Computer, the Internet and Management, 12(2), 119–125. MacWhinney, B. (2014). The CHILDES project: Tools for analyzing talk. Electronic Edition. Part 1: The CHAT transcription format. Retrieved February 21, 2015, from http://childes.psy.cmu.edu/manuals/CHAT.pdf. MacWhinney, B. (2015, February 21). Talkbank. Retrieved February 21, 2015, from http://talkbank.org/. Menzel, W., Atwell, E., Bonaventura, P., Herron, D., Howarth, P., Morton, R., & Souter, C. (2000). The ISLE corpus of non-native spoken English. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC). Athens (Greece), May 31st–June 2nd, 2000. Mitchell, R., Domínguez, L., Arche, M. J., Myles, F., & Marsden, E. (2008). SPLLOC: A new database for Spanish second language acquisition research. EUROSLA Yearbook, 8, 287–304. Neri, A., Cucchiarini, C., & Strik, H. (2003). Automatic Speech Recognition for second language learning: How and why it actually works. In Proceedings of the 15th International Congress of Phonetic Sciences ICPhS 03, Barcelona (Spain), August 3–9, 2003 (pp. 1157–1160). Oostdijk, N. (2000). Meta-data in the Spoken Dutch Corpus project. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC). Athens (Greece), May 31st–June 2nd, 2000. Racine, I., Zay, F., Detey, S., & Kawaguchi, Y. (2011). 
De la transcription de corpus à l’analyse interphonologique: Enjeux méthodologiques en FLE. In G. Col & S. N. Osu (Eds.), Transcrire, écrire, formaliser (1) (pp. 13–30). Rennes: PUR. Travaux Linguistiques du CerLiCO, 24. Seneff, S., & Zue, V. W. (1988). Transcription and alignment of the TIMIT database. DARPA TIMIT CD-ROM documentation. Retrieved February 21, 2015, from https://perso.limsi.fr/lamel/TIMIT_NISTIR4930.pdf. Shoup, J. E. (1980). Phonological aspects of speech recognition. In W. A. Lea (Ed.), Trends in speech recognition (pp. 125–138). New York: Prentice-Hall. Strik, H., & Cucchiarini, C. (2014). On automatic phonological transcription of speech corpora. In J. Durand, U. Gut, & G. Kristofferson (Eds.), Oxford handbook of corpus phonology (pp. 89–109). Oxford, UK: Oxford University Press. TEI Consortium (2014). 8 Transcription of speech. TEI P5: Guidelines for electronic text encoding and interchange. Retrieved February 21, 2015, from http://www. tei-c.org/release/doc/tei-p5-doc/en/html/TS.html. Van Doremalen, J. (2014). Developing automatic speech recognition-enabled language learning applications. PhD Dissertation, Center for Language and Speech Technologies, Radboud University Nijmegen. Wells, J. C. (1994). Computer-coding the IPA: A proposed extension of SAMPA. Speech, Hearing and Language, Work in Progress, 8, 271–289. Wells, J. C. (1997). Speech Assessment Methods Phonetic Alphabet. Retrieved February 21, 2015, from http://www.phon.ucl.ac.uk/home/sampa/.

17 Using monolingual virtual corpora in public service legal translator training

María Del Mar Sánchez Ramos, Universidad de Alcalá, Spain
Francisco J. Vigier Moreno, Universidad de Alcalá, Spain

Public service interpreting and translation as a new profession and academic discipline

The ever-increasing mobility of people across boundaries, be it for economic, political or educational reasons, has led to the creation of multilingual and multicultural societies where the need for language and cultural mediation is also ever growing. Even if this is a worldwide phenomenon, it is most conspicuous in countries which have traditionally been considered countries of emigration and have become countries of immigration in the last 20 years, thus evolving into complex multilingual and multicultural societies. This is also the case of Spain, a country where the high influx of immigrants and tourists poses challenges which require adequate responses to ensure a balanced coexistence (Valero-Garcés, 2006, p. 36). This need for translators and interpreters is even greater in public services such as schools, hospitals, police stations and courts, where users who do not command the official language of the institution must be catered for. As argued by Corsellis (2008, p. 2), “it benefits no one if a proportion of the population suffers increased infant mortality rates, miscarriages of justice, substandard housing, education and social care [due to] barriers caused by lack of language and related skills”. The emergence of multicultural and multilingual societies has therefore fostered the need for professionals who can effectively remove such barriers and guarantee successful communication between public service providers, on the one hand, and public service users, on the other. Subsequently, the study of this professional sphere has led to the development of a new academic branch within Translation Studies, commonly referred to as Public Service Interpreting and Translation (PSIT), with a very wide scope, since it includes healthcare, educational, administrative and legal settings, to name but the most prominent. 
In order to equip trainees with the skills required of PSIT professionals, many training programmes have been specifically devised and are currently offered worldwide. This is the case of the Master’s Degree in Intercultural Communication, Public Service Interpreting and Translation offered by the

University of Alcalá (http://www3.uah.es/master-tisp-uah/), which is part of the European Commission’s European Master’s in Translation network (http://ec.europa.eu/dgs/translation/programmes/emt/network/). This programme, which is offered in a wide variety of language pairs, namely Spanish and English, French, Arabic, Chinese, Russian or Romanian, comprises specific modules on intercultural communication, healthcare translation, healthcare interpreting and legal and administrative translation and interpreting.

Legal translator training within PSIT

Within PSIT, legal translation deals basically with the communication undertaken between justice authorities and citizens, especially in relation to the legal process. According to the typology of legal genres set forth by Borja (2000, pp. 133–134), the texts which are prototypical of a public service translator’s practice include court documentation such as claim forms, writs of summons, judgments, appeals, writs, orders, injunctions, informations, warrants and so forth. Since it is most frequently in criminal cases that legal-aid translation and interpreting services are provided (Aldea Sánchez et al., 2004, p. 89), documents related to criminal proceedings (namely summonses, indictments and judgments) are of paramount importance for the public service translator. Unlike other types of legal translation in which the translated document is to be regarded as law itself (for example in multilingual jurisdictions or international law instruments), the type of legal translation most frequently found in PSIT chiefly serves an informative purpose and the translated texts are mostly descriptive, as “the translations of such documents are used by clients [public service users] who do not speak the language of the court [. . .] or by lawyers and courts who otherwise may not be able to access the originals” (Cao, 2007, p. 12). Therefore, this way of translating legal documentation, where communicative purposes prevail, is clearly orientated towards the target-text receiver, rather than focusing prominently on accuracy and faithfulness with respect to the source text. In other words, “the translator’s first consideration is no longer fidelity to the source text but rather fidelity to the uniform intent of the single instrument, i.e. what the legislator [. . .] intended to say” (Šarčević, 1997, p. 112). 
In view of this, PSIT legal translators should, foremost, adhere to the purpose of their translated text and concentrate their efforts on producing a target text that conveys the same meaning as the source text, meets the acceptability requirements of the expected target text and fulfils the conventions of the target-text receiver. In line with so-called competence-based training (Hurtado Albir, 2007), the module on legal translation of the aforesaid master’s degree offered by the University of Alcalá includes training in translation into both L1 and L2, as a response to current market demands. This module is mainly intended to equip students with the skills, abilities, knowledge and values required of a competent public service translator working in the legal field, which are encapsulated in the notion of legal translation competence. Based on previous multicomponent models such as those of PACTE (2000), Kelly (2002) and the EMT model

(EMT Expert Group, 2009), as well as on his own extensive professional experience as a legal translator, Prieto Ramos (2011, pp. 11–13) offers a very interesting model of legal translation competence. This model breaks down into the following sub-competences: strategic or methodological competence (which controls the application of all other sub-competences and includes, among others, the identification of translation problems and the implementation of translation strategies and procedures); communicative and textual competence (linguistic knowledge, including variants, registers and genre conventions); thematic and cultural competence (including but not limited to knowledge of law and awareness of legal asymmetry between source and target legal systems); instrumental competence (mainly concerning documentation skills and use of technology); and interpersonal and professional competence (for instance, social skills such as teamwork, knowledge of the profession and ethical issues).

Corpus-based translation studies (CTS) and their application to PSIT legal translation training

The initial steps of corpus linguistics can be dated back to the pre-Chomskian period (McEnery, Xiao & Tono, 2006, p. 3), when followers of the structuralist tradition used a corpus-based methodology to provide empirical results based on observed data. This area of research, defined as “the study of language based on examples of ‘real life’ language use” (McEnery & Wilson, 2004, p. 1), has opened many possibilities for the study of language in context. Similarly, corpora have attracted increasing attention in Translation Studies in recent years. Since Mona Baker (1993, 1995) published her seminal papers on the use of corpora in translation studies, other scholars have followed in her footsteps. Laviosa (2003, p. 45) defines CTS as “the branch of the discipline that uses corpora of original and/or translated text for the empirical study of the product and process of translation, the elaboration of theoretical constructs, and the training of translators”. Depending on the nature of the work carried out, researchers have used corpora to investigate different translation issues, from the features of translated texts (Baker, 1995, 1996; Kenny, 2001; Saldanha, 2004) to the possibilities of using corpora as translation and terminology resources (Bowker, 2003; Zanettin et al., 2003; Zhu & Wang, 2011). CTS have therefore proved to be a reliable way of collecting data with which to generalize about so-called translated linguistic features, or universals of translation (Baker, 1993). In terms of text corpora typology, there is a general distinction between three major corpus types (Zanettin et al., 2003, p. 6): monolingual corpora, comparable bilingual corpora and parallel corpora. Monolingual corpora usually contain texts in the target language. 
They can be efficient linguistic instruments offering relevant information on idiomatic use, collocations, syntactic constructions or genre conventions (Borja, 2007, p. 3). Comparable corpora consist of original source and target language texts with similar content. They can be divided into different subtypes according to the intended purpose or function of texts (Borja, 2007, p. 11). Parallel corpora contain source language texts and their translations

(Baker, 1995, p. 230). Among translation scholars, attention is turning towards small corpora for the translation of specialized texts. Such corpora are known as virtual corpora, also called ad hoc corpora or disposable corpora (Varantola, 2003, p. 55). A virtual corpus is a collection of texts developed from electronic resources by the translator and compiled “for the sole purpose of providing information – either factual, linguistic or field-specific – for use in completing a translation task” (Sánchez Gijón, 2009, p. 115). Zanettin (2012, p. 64) also highlights the advantages of virtual corpora, which are created in response to specific translation problems. Because they answer a specific information need, these corpora are usually very precise and content-specific, and can be extended at any time. The benefits and pedagogical implications of using corpora within Translation Studies have also been shown by various researchers (Bowker & Pearson, 2002, p. 10; Corpas Pastor & Seghiri, 2009, p. 102; Lee & Swales, 2006, p. 74). Some of the main advantages identified are related to the development of the instrumental sub-competence (PACTE, 2003, p. 53) or so-called information mining competence (EMT Expert Group, 2009). The need to know and use different electronic corpora and concordancing tools is also illustrated by Rodríguez Inés (2010, p. 253), who identifies a further sub-competence within the instrumental sub-competence of the PACTE model, namely “the ability to meet a number of learning outcomes: identifying the principles that lie at the basis of the use of corpora; creating corpora; using corpus-related software; and solving translation problems by using corpora” (Rodríguez Inés, 2010, p. 253). 
Corpora of so-called parallel texts are also very valuable tools for prose translation, or translation into L2, as translators can verify lexical, phraseological and textual patterns beyond their intuition or previous knowledge and thus make more informed, justified translation choices (Neunzig, 2003; Rodríguez Inés, 2008). The use of corpora in training students to translate into their L2 also boosts trainees’ self-confidence and autonomous learning (Ulrych, 2000). Using corpora may therefore be part of the daily practice of professional translation, especially ad hoc corpora, since they are typically collected to accomplish a single translation assignment or to satisfy a transitory need (Varantola, 2003, p. 55). Nevertheless, in contrast with other domains, corpus methodology has been applied far less frequently to research on legal translation, let alone to legal translator training, despite the fact that corpora can provide useful linguistic information, such as terminology, phraseology and textual features (Borja, 2007, p. 13), in the legal domain as well. Biel (2010, p. 4) explains that the major limitations in the application of legal corpora lie in availability and confidentiality issues, particularly where private legal documents are concerned, which results in legal corpora being rather small. However, this author also points out the potential of CTS when applied to legal translation research, practice and pedagogy, namely, among others, to double-check dictionaries, to review phraseological units in legal discourse, to raise awareness of target-language conventions, to study the translation techniques previously utilized to translate a problematic unit and to prompt data-driven learning (Biel, 2010). Monzó Nebot (2008) also claims that corpus-based tools can improve legal

translators’ competence, as they help increase productivity without jeopardizing the quality of their products. She identifies so many functions for corpora in legal translator training that she goes on to say that “students must be trained in the mechanisms and practicalities of corpus compilation, so that they are able to address their changing translation needs by building ad hoc corpora” (Monzó Nebot, 2008, pp. 224–225).

Compiling a virtual corpus for Public Service Translation Training

As previously mentioned, the development of instrumental competence, including the use of documentation sources and electronic tools, is particularly relevant in PSIT, where translators need to manage different information sources in order to acquire sufficient understanding of the subject of a text and thus enable the accurate transfer of information. At the same time, according to our experience in PSIT legal translation training, it is precisely in thematic and cultural competence (scarce knowledge of legal notions and systems) and in communicative and textual competence (especially as regards the terminological and phraseological use of legal discourse in the target language, chiefly when translating into their non-mother tongue, in our case, English) that many of our trainees show weaknesses. As to thematic competence, even if legal translators must have essential legal knowledge so as to understand the context of a given document, to comprehend the legal effects and notions contained in it and to compare legal systems and conventions (Prieto Ramos, 2011, p. 13), we firmly agree with the view that “being able to translate highly specialized documents is becoming less a question of knowledge and more one of having the right tools” (Martin, 2011, p. 5). Hence, taking into account that instrumental competence is part and parcel of legal translator competence, given the importance of documentation in PSIT training to ensure the production of a functionally adequate and acceptable target-language text, and in an attempt to make our students “move beyond their passive knowledge of basic legal phraseology and terminology and take a more proactive stance in the development of their legal language proficiency” (Monzó Nebot, 2008, p. 
224), we designed an activity based on the compilation of a monolingual virtual corpus with a twofold objective: first, to make our students aware of the usefulness of computer tools, when applied to legal translation, for overcoming many of the shortcomings they face when translating legal texts; and secondly, to help our trainees develop their instrumental competence. Of the different major corpus types (Zanettin et al., 2003, p. 6), we found monolingual corpora especially useful for our task, as students were assigned to compile a corpus containing texts produced in the target language. The final monolingual corpus would thus provide them with information about the idiomatic use of specific terms, collocations, formulaic language, phraseological units and genre conventions (Monzó Nebot, 2008, p. 230), which they can successfully apply when rendering their target texts, thus using legal discourse that is as natural as possible in the target language (in this case, English).

Our students attended two training sessions of six hours in total. In the first session, they were introduced to the main theoretical concepts in CTS, the main documentation resources for PSIT (lexicographical databases, specialized lexicographical resources and specialized portals) and the different word search strategies needed to take advantage of search engines and Boolean operators. This introductory session was relevant because searching for keywords would allow students to move into more specific areas or domains. In the second session, they learned the differences between the so-called Web for Corpus (WfC) approach – where the web is used as a source of texts in digital format for the subsequent implementation of an offline corpus – and the Web as Corpus (WaC) approach, which focuses on the potential of the Web as a corpus in itself (De Schryver, 2002, p. 272). The students were also introduced to specific computer software and were shown how to use information retrieval software such as SketchEngine (Kilgarriff, 2014) and AntConc (Anthony, 2014). They also learned the basic functions of both programs (generating and sorting concordances, identifying language patterns, retrieving collocations and collocation clusters, etc.). After this training session, the students were each asked to compile a monolingual corpus (British English) as part of their module on Legal Translation (within their Master’s Degree in Intercultural Communication, Public Service Interpreting and Translation) in order to translate into (British) English a judgment following an appeal to the Spanish Supreme Court in a criminal case of drug trafficking. 
We chose this legal genre not only because it is typically translated within the realm of PSIT but, more significantly, because court rulings are a matter of public record and can be easily accessed online, thus overcoming the aforementioned problems related to confidentiality and availability in the compilation of legal corpora. The students were also asked to investigate genre and lexical conventions and to use their virtual corpus to solve terminological and phraseological problems when translating. At this point, it is important to mention that criteria (or a protocol) for selection and inclusion need to be determined in advance when designing and compiling a corpus. Our methodology was divided into three stages: (1) source-text documentation, (2) the compilation process and (3) corpus analysis. In the first stage, source-text documentation, we encouraged our students to read texts similar to the source text (in this case, a judgment passed by Spain’s Supreme Court), which we provided to help them learn about the nature of this type of text and to familiarize them with its main linguistic and genre conventions. As part of this first stage, the students needed to be able to locate different Internet-based texts to be included in their own corpus. To do so, they needed to put into practice what they had learned about Boolean operators in previous sessions, that is, to search for information using keywords (e.g. “judgement”, “appeal”, “drug trafficking”, “constitutional rights”, etc.). It is of paramount importance at this stage to use very precise keywords – seed words – as filters in order to exclude irrelevant information or ‘noise’. For instance, they used the site operator, which restricts the search to a specific website or domain. The search string court site:bailii.org (with no space after the colon) would only yield results from the website http://www.bailii.org (British and Irish Legal

Information Institute), which provides access to a very comprehensive set of British and Irish legislation and case law, whereas the search string court site:.gov would restrict the search to websites under the domain .gov, that is, governmental sites. In the present case, the institutional web page of the Supreme Court of the United Kingdom (http://supremecourt.uk/decided-cases/index.html) can be used to download and save complete UK Supreme Court judgments, which would be the comparable genre of our source text in the target legal culture. Once our students had chosen the appropriate material for their corpus, they needed to download it (compilation stage). Students were also encouraged to use free software (e.g. HTTrack, GNU Wget or JDownloader) to download websites and web pages so that they could automate the downloading process. Once the documents had been found and downloaded, the texts had to be converted to .txt files in order to be processed by corpus analysis software like AntConc. This task is especially necessary in the case of texts retrieved in .pdf format. For this purpose, students used HTML as Text for html files and a pdf-to-text converter for pdf files. Finally, all documents were stored and the students were able to initiate the corpus analysis stage.
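The compilation stage described above – downloading web pages and reducing them to plain .txt files for a tool such as AntConc – can be sketched in a few lines of Python. This is only an illustrative sketch, not part of any of the tools mentioned in this chapter: the function names and the crude regular-expression approach to tag stripping are our own simplifications.

```python
import re
import urllib.request

def html_to_text(html):
    """Strip markup from an HTML page, keeping only the running text."""
    html = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)  # drop scripts and styles
    text = re.sub(r"(?s)<[^>]+>", " ", html)                   # drop remaining tags
    return re.sub(r"\s+", " ", text).strip()                   # normalize whitespace

def fetch_plain_text(url):
    """Download one web page and return its plain-text content for the corpus."""
    with urllib.request.urlopen(url) as response:
        return html_to_text(response.read().decode("utf-8", errors="replace"))

# Hypothetical usage: save a downloaded judgment as a .txt file ready for AntConc.
# text = fetch_plain_text("http://www.bailii.org/uk/cases/UKSC/...")
# with open("corpus/judgment_001.txt", "w", encoding="utf-8") as f:
#     f.write(text)
```

In practice a dedicated downloader (HTTrack, GNU Wget) and a proper HTML or PDF converter are more robust; the sketch merely shows why the conversion step exists at all.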

Outcomes of the virtual corpus compilation

As described by Alcaraz and Hughes (2002, p. 5), translators have to face “some quite daunting linguistic tasks in preparing their versions of legal originals” due to the specificities of legal discourse, such as – to name but a few – the use of Latinisms, archaisms, fixed formulae, formal register patterns, redundancy elements, long and complex sentence structure, syntactical intricacies and abundant subordination. In this sense, the virtual corpora compiled by our students were highly useful in terms of all the terminological, idiomatic and textual information they offered to aid the completion of the translation task. Our students carried out different tasks in order to use their corpus as a lexicographic resource to translate their source text. Regarding terminology and phraseology, using an ad hoc corpus proved useful for becoming familiar with the specialized lexis. First, our students built a word list with the most frequent words in their corpora. They also created what is known as a stop list or negative list in order to filter out words such as articles, adverbs and prepositions, so that irrelevant words or ‘noise’ could be avoided. In terms of phraseology, searching for collocations and groups of words (collocation clusters) was a valuable way to identify the frequency of appearance of more specific lexical patterns. Collocations are sequences of words that occur together more often than we would expect. AntConc offers the possibility of sorting the context words to the right and/or the left of the keyword. It also gives the option to extract all n-grams – sequences of words of a certain length – from the text. For instance, the students used the collocation and collocation cluster function to identify the frequency of appearance of “direct evidence” or “direct proof” for the translation of “prueba indiciaria”. 
They also corroborated that an adequate solution for “delito de tráfico de drogas” would be “drug trafficking offence”.
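The word-list, stop-list and cluster searches described above all rest on simple frequency counting over word sequences. The following Python sketch illustrates the underlying idea (it is our own minimal illustration of the logic behind such functions, not AntConc itself, and the tiny stop list is purely illustrative):

```python
import re
from collections import Counter

# Illustrative stop list only; a real one would be much longer.
STOP_LIST = {"the", "of", "to", "and", "a", "an", "in", "that", "is", "for"}

def tokenize(text):
    """Lowercase word tokens (a deliberately crude tokenizer)."""
    return re.findall(r"[a-záéíóúñ]+", text.lower())

def word_list(tokens):
    """Frequency list with stop-list items filtered out."""
    return Counter(t for t in tokens if t not in STOP_LIST)

def clusters(tokens, n):
    """Frequency of all n-grams (fixed-length word sequences) in the token stream."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = tokenize("The court found a drug trafficking offence; the drug trafficking charge stood.")
print(word_list(tokens).most_common(3))
print(clusters(tokens, 2)[("drug", "trafficking")])  # the bigram occurs twice
```

A student checking whether “drug trafficking offence” is an established pattern would, in effect, be running the cluster count over a corpus of judgments rather than over one toy sentence.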

The Clusters/N-Grams function was particularly useful when students were faced with more problematic translation units, such as Noun + Preposition (e.g. judgment on/in), where students positively evaluated the contextual information their ad hoc corpora offered. Another problem that can arise when translating is how to render specific set expressions or fixed formulae, which are very common in legal language. The Concordance function was also a very attractive resource in this situation. A simple query generated concordance lines listed in keyword-in-context (KWIC) format, in which the software displays all occurrences of the searched word (or pattern) in the corpus. The most outstanding feature is that the concordance function presents the search pattern surrounded by its immediate context. For instance, students looked up the appropriate English term for “infracción de precepto constitucional”. The virtual corpus they had compiled offered a number of alternatives, such as “breach”, “infringement” and “violation”, with “violation” being the most frequent. Apart from the extraction of specific terms, AntConc offers direct access to the individual corpus files, so students can also investigate the searched pattern in its context in more detail. This possibility is displayed by the Concordance Plot function, a barcode representation of how the search word is distributed over the text. As previously mentioned, legal language is characterized by formulaic expressions. Accordingly, the ad hoc corpus was also a documentation tool for finding typical linguistic formulas. For example, the source text contains the

Figure 17.1  Concordance function

typical formula used by Spain’s Supreme Court judges to express that they allow the relevant appeal (to wit, “Que debemos declarar y declaramos haber lugar al recurso de casación”). In the target text, a literal translation of this formula would not be as functionally adequate as replicating the usual expression used in comparable target-language texts (“we agree that the appeal should be allowed”), which they found by analysing their ad hoc corpus of UK Supreme Court judgments. In general terms, the students’ feedback was largely positive. They appreciated the usefulness of the different functions that a software tool such as AntConc can provide (i.e. frequency lists, collocates, clusters/n-grams, concordancer). Compiling and using corpora made the students feel more confident in their technical skills and translation solutions. Altogether, corpus use was evaluated by our students as a valuable means of developing their instrumental competence. Nevertheless, as otherwise expected, students also highlighted that their corpora were of little use to them when it came to the translation of markedly system-bound legal terms, such as “Roj” – which stands for Repertorio Oficial de Jurisprudencia – or “Audiencia Provincial de Vitoria”. In these cases, as normally happens in legal translation, “«neutral» renderings, borrowings and solutions combining literal formulations with lexical expansions are commonplace in order to accurately convey the specificity of source system-bound concepts, while allowing for target text comprehension” (Prieto Ramos, 2011, p. 16). It is no wonder, then, that some of the solutions for these translation problems include, on the one hand, “Official Case Law Catalogue” or “Official Case Repertory” and, on the other hand, “Provincial Court of Vitoria” or “Provincial Criminal Court of Vitoria”. 
For the solution of these problematic items, students mentioned dictionaries (both monolingual and bilingual), multilingual databases and discussion forums as their principal reference sources.
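The keyword-in-context display described above is straightforward to reproduce in code. The following is a minimal Python sketch of the KWIC idea only (the function name and column widths are our own choices, not AntConc’s):

```python
import re

def kwic(text, keyword, width=30):
    """Return one line per occurrence of keyword, KWIC-style:
    left context | keyword | right context."""
    lines = []
    for m in re.finditer(r"\b" + re.escape(keyword) + r"\b", text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        lines.append(f"{left:>{width}} | {m.group()} | {right}")
    return lines

sample = ("The court held that no violation of a constitutional right had occurred, "
          "and that any alleged violation must be substantiated.")
for line in kwic(sample, "violation", width=25):
    print(line)
```

Run over a corpus of judgments instead of one sentence, such a listing is what lets a student see at a glance that, say, “violation” collocates with “of a constitutional right” more often than “breach” or “infringement” does.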

Conclusion

We have shown how we developed and exploited a monolingual virtual corpus as a resource in a PSIT training environment. As has been highlighted, there are multiple advantages to applying ad hoc monolingual corpora in legal translation training, particularly for the retrieval of very useful information about the use of terms, collocations, formulaic language, phraseological units and genre conventions in legal discourse. Thus, a corpus has proven to be a valuable aid for our specialized translation students, especially when they are assigned to produce a specialized translation (in this case, a legal translation) into their L2, since they can consult the corpus to acquire essential linguistic knowledge, including information about appropriate (and inappropriate) terminology, collocations, phraseology, style and register, but also to consolidate and activate their thematic competence. We have demonstrated, however, that training is essential when compiling and using corpora, as this requires a variety of competences, not only linguistic and technological but also thematic (one student compiled a corpus made up of judgments passed by the Court of Appeal, a lower court

Using monolingual virtual corpora in public service 237 than the Supreme Court and hence not the highest appellate court in the UK, which in turn renders that corpus invalid for the set objective). As we have seen, well-planned training on corpora and compiling methodology can contribute to the development of these competences. Bringing together corpus analysis software and legal translation made our students develop both instrumental and linguistic competences and take better translation decisions with greater self-confidence and in a much more informed manner. This ideal scenario fits perfectly with the general demand of today’s professional practice and therefore must be considered in specialized translator training if we aim to equip our students with the skills that will be required in an increasingly demanding and competitive translation market.

References

Alcaraz, E. & Hughes, B. (2002). Legal translation explained. Manchester: St. Jerome.
Aldea Sánchez, P., Arróniz de Opacua, P., Ortega Herráez, J. M. & Plaza Blázquez, S. (2002). Situación actual de la práctica de la traducción y de la interpretación en la Administración de Justicia. In S. Cruces & A. Luna (Eds.), La traducción en el ámbito institucional: autonómico, estatal y europeo (pp. 85–126). Vigo: Universidade de Vigo.
Anthony, L. (2014). AntConc (Version 3.4.3) [Computer software]. Tokyo, Japan: Waseda University. Available from http://www.laurenceanthony.net/.
Baker, M. (1993). Corpus linguistics and translation studies – Implications and applications. In M. Baker, G. Francis & E. Tognini-Bonelli (Eds.), Text and technology: In honour of John Sinclair (pp. 233–253). Amsterdam/Philadelphia: John Benjamins.
Baker, M. (1995). Corpora in translation studies: An overview and some suggestions for future research. Target, 7(2), 223–243.
Baker, M. (1996). Corpus-based translation studies: The challenges that lie ahead. In H. Somers (Ed.), Terminology, LSP, and translation: Studies in language engineering in honour of Juan C. Sager (pp. 175–186). Amsterdam/Philadelphia: John Benjamins.
Biel, Ł. (2010). Corpus-based studies of legal language for translation purposes: Methodological and practical potential. In C. Heine & J. Engberg (Eds.), Reconceptualizing LSP. Online proceedings of the XVII European LSP Symposium 2009. Retrieved January 25, 2014, from http://bcom.au.dk/fileadmin/www.asb.dk/isek/biel.pdf.
Borja Albi, A. (2000). El texto jurídico inglés y su traducción al español. Barcelona: Ariel.
Borja Albi, A. (2007). Corpora for translators in Spain: The CDJ-GITRAD Corpus and the GENTT Project. In G. Anderman & M. Rogers (Eds.), Incorporating corpora – The linguist and the translator (pp. 243–265). Clevedon: Multilingual Matters.
Bowker, L. & Pearson, J. (2002). Working with specialized language: A practical guide to using corpora. London: Routledge.
Bowker, L. (2003). Corpus-based applications for translator training: Exploring possibilities. In S. Granger, J. Lerot & S. Petch-Tyson (Eds.), Corpus-based approaches to contrastive linguistics and translation studies (pp. 161–183). Amsterdam/Philadelphia: Rodopi.
Cao, D. (2007). Translating law. London: Multilingual Matters.
Corsellis, A. (2008). Public service interpreting: The first steps. Basingstoke: Palgrave Macmillan.

Corpas Pastor, G. & Seghiri, M. (2009). Virtual corpora as documentation resources: Translating travel insurance documents (English–Spanish). In A. Beeby, P. Rodríguez Inés & P. Sánchez-Gijón (Eds.), Corpus use and translating: Corpus use for learning to translate and learning corpus use to translate (pp. 75–107). Amsterdam: John Benjamins.
De Schryver, G. (2002). Web for/as corpus: A perspective for the African languages. Nordic Journal of African Studies, 11(2), 266–282.
EMT Expert Group. (2009). Competences for professional translators, experts in multilingual and multimedia communication. Retrieved March 28, 2014 from the European Master's in Translation website of the DG Translation of the European Commission: http://goo.gl/OajvWP.
Hurtado Albir, A. (2007). Competence-based curriculum design for training translators. The Interpreter and Translator Trainer, 1(2), 163–195.
Kenny, D. (2001). Lexis and creativity in translation: A corpus-based study. Manchester: St. Jerome.
Kilgarriff, A., et al. (2014). Sketch Engine: Ten years on. Lexicography, 1–30. Retrieved from the Sketch Engine website: http://www.sketchengine.co.uk/.
Laviosa, S. (2003). Corpora and translation studies. In S. Granger, J. Lerot & S. Petch-Tyson (Eds.), Corpus-based approaches to contrastive linguistics and translation studies (pp. 45–54). Amsterdam/New York: Rodopi.
Lee, D. & Swales, J. (2006). A corpus-based EAP course for NNS doctoral students: Moving from available specialized corpora to self-compiled corpora. English for Specific Purposes, 25, 56–75.
Martin, C. (2011). Specialization in translation – myths and realities. Translation Journal. Retrieved March 15, 2014 from http://www.bokorlang.com/journal/56specialist.htm.
McEnery, T. & Wilson, A. (2004). Corpus linguistics. Edinburgh: Edinburgh University Press.
McEnery, T., Xiao, R. & Tono, Y. (2006). Corpus-based language studies: An advanced resource book. London and New York: Routledge.
Monzó Nebot, E. (2008). Corpus-based activities in legal translator training. The Interpreter and Translator Trainer, 2(2), 221–252.
Neunzig, W. (2003). Tecnologías de la información y traducción especializada inversa. In D. Kelly et al. (Eds.), La direccionalidad en Traducción e Interpretación: Perspectivas teóricas, profesionales y didácticas (pp. 189–205). Granada: Atrio.
PACTE. (2000). Acquiring translation competence: Hypotheses and methodological problems of a research project. In A. Beeby, D. Ensinger & M. Presas (Eds.), Investigating translation (pp. 99–106). Amsterdam: John Benjamins.
PACTE Group. (2003). Building a translation competence model. In F. Alves (Ed.), Triangulating translation: Perspectives in process oriented research (pp. 43–66). Amsterdam: John Benjamins.
Prieto Ramos, F. (2011). Developing legal translation competence: An integrative process-oriented approach. Comparative Legilinguistics – International Journal for Legal Communication, 5, 7–21.
Rodríguez Inés, P. (2008). Translation into L2 in education and in real life, and how electronic corpora can help. Paper presented at the Eighth Translation Conference: The Changing Face of Translation, University of Portsmouth. Retrieved March 20, 2014, from http://goo.gl/82GtoS.

Rodríguez Inés, P. (2010). Electronic corpora and other information and communication technology tools: An integrated approach to translation teaching. The Interpreter and Translator Trainer, 4(2), 251–282.
Saldanha, G. (2004). Accounting for the exception to the norm: A study of split infinitives in translated English. Language Matters, Studies in the Languages of Africa, 35(1), 39–53.
Sánchez Gijón, P. (2009). Developing documentation skills to build do-it-yourself corpora in the specialized translation course. In A. Beeby, P. Rodríguez-Inés & P. Sánchez-Gijón (Eds.), Corpus use and translating: Corpus use for learning to translate and learning corpus use to translate (pp. 109–127). Amsterdam: John Benjamins.
Šarčević, S. (1997). New approach to legal translation. The Hague: Kluwer Law International.
Ulrych, M. (2000). Teaching translation into L2 with the aid of multilingual parallel corpora: Issues and trends. Miscellanea, 4, 58–80.
Valero-Garcés, C. (2006). Formas de mediación intercultural: Traducción e Interpretación en los Servicios Públicos. Granada: Comares.
Varantola, K. (2003). Translators and disposable corpora. In F. Zanettin, S. Bernardini & D. Stewart (Eds.), Corpora in translator education (pp. 55–70). Manchester: St. Jerome.
Zanettin, F., Bernardini, S. & Stewart, D. (Eds.). (2003). Corpora in translator education. Manchester: St. Jerome.
Zanettin, F. (2012). Translation-driven corpora. Manchester: St. Jerome.
Zhu, Ch. & Wang, H. (2011). A corpus-based, machine-aided mode of translator training. The Interpreter and Translator Trainer, 5(2), 269–291.


Part 6

Computer-assisted translation tools for language learning


18 Computer-assisted translation tools as tools for language learning

María Fernández-Parra
Swansea University, United Kingdom

Introduction

In professional translation, Computer-Assisted Translation (CAT) tools have gradually become a staple. This is increasingly reflected in translator training programmes across universities and schools (e.g. DeCesaris, 1996; Bohm, 1997; Olohan, 2011), often at both undergraduate and postgraduate level. In recent years we have seen the proliferation of these tools, which can be described as a single integrated system made up of various translation tools and resources, allowing for a more efficient and consistent translation process (cf. Quah, 2006).

Despite their usefulness for translation, CAT tools are seldom used in the context of learning a language. This is because a good command of a language is usually needed before starting to translate, and CAT tools are designed to facilitate the translation process rather than language learning. However, translator training programmes are often delivered in universities or schools where language learning programmes exist alongside them. Indeed, students who enrol on translator training programmes are often required to attend language classes too. Therefore, this chapter explores the possibilities of expanding the usefulness of CAT tools from the translation curriculum into the learning of a specialized language. After providing an overview of the main features of CAT tools, this chapter maps how some of the main components can be used to support and improve a number of skills in the learning of specialized language.

Features of CAT tools

CAT tools vary in the functionality they provide, but at a basic level they offer at least Translation Memory tools (including alignment tools) or Terminology Management tools, or both. At a more advanced level, both the architecture and functionality of these tools are increased (cf. Fernández-Parra, 2014). The following sections review the main features of CAT tools and the theoretical motivations for considering them as feasible tools for learning or honing specialized language skills.

Translation Memory (TM) and alignment tools

A translation memory consists of a database of texts and their corresponding translation(s), divided into segments, often at sentence level, for future reference or reuse (cf. Austermühl, 2001). The main advantage of a TM is that "it allows translators to reuse previous translations" (Bowker, 2002, p. 111) quickly and efficiently. TMs are therefore typically used in the translation of technical, scientific and legal genres, because documents in these genres tend to contain repetition, and TMs can help translators automate much of the repetitive translation work.

TMs work by comparing the current source text to previously translated documents. They are sophisticated tools in that they can retrieve not only identical segments but also similar ones, using the technology known as fuzzy matching (cf. Esselink, 2000). This means that the program will search for any text in the TM similar to the current source text. If similar text is found, some editing may be needed in order to adjust the text to be a translation of the current source text. Users can set the level of similarity as a percentage. A 100% match is called an exact match, whereas any match less than 100% similar, whether linguistically or in terms of formatting, is called a fuzzy match. The fuzzy match threshold set by translators typically varies between 70% and 80%.

However, in order to use a TM, texts need to be entered into it first, i.e. TMs come empty to start with. This means that some previous training may be necessary if using the more sophisticated CAT tools. One method of creating a TM is by aligning a source text with its translation. Alignment is the process of comparing both texts and matching the corresponding sentences, which will become segments, known as translation units, in the TM. In many CAT tools, alignment is carried out automatically by the software.
In this case, it is almost inevitable that some of the segments will be misaligned (e.g. Bowker, 2002) but some alignment tools cater for this possibility by allowing manual post-editing of the results of the alignment process, such as the alignment components in SDL Trados and Déjà Vu X2.
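The exact/fuzzy distinction can be illustrated with a simple character-similarity measure. The sketch below uses Python's difflib ratio as a stand-in for the proprietary, tool-specific matching algorithms, with a 70% threshold; the TM contents are invented examples:

```python
from difflib import SequenceMatcher

def lookup(segment, memory, threshold=0.70):
    """Return (stored_source, stored_target, score, kind) for the best match
    in the TM, or None if no stored segment reaches the fuzzy threshold."""
    best, best_score = None, 0.0
    for stored_source, stored_target in memory.items():
        score = SequenceMatcher(None, segment, stored_source).ratio()
        if score > best_score:
            best, best_score = (stored_source, stored_target), score
    if best is None or best_score < threshold:
        return None
    kind = "exact" if best_score == 1.0 else "fuzzy"
    return best[0], best[1], round(best_score * 100), kind

tm = {
    "The printer supports high-resolution photo printing.":
        "La impresora admite la impresión fotográfica de alta resolución.",
}
# Differs from the stored segment only by a hyphen, so it scores just below 100%.
hit = lookup("The printer supports high resolution photo printing.", tm)
```

Commercial tools weight formatting, numbers and word order differently, but the principle is the same: rank stored segments by similarity and suggest the best one above the threshold.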

Terminology management

Along with the TM, the terminology database, or termbase, is an essential component of CAT tools, as terminology work is a crucial part of technical translation (cf. Bowker, 2002). The termbase is a repository of specialized language and terminology to use in future translations. A termbase is a database, but it differs from a TM in that it is used to store and retrieve segments at term level, e.g. phrases and single words, whereas the TM is typically used for sentences. Depending on the level of sophistication of the CAT tool, the termbase can also be used to store and retrieve various kinds of information about the term, such as gender, definition, part of speech, usage, subject field, etc. In addition, the termbases in some CAT tools allow the storage and retrieval of multimedia files, e.g. graphics, sound or video files. Storage and retrieval of such files is much quicker and more efficient than with spreadsheet software such as MS Excel. Further, termbases can allow for a hierarchical organization of the information.
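A termbase entry can be thought of as a record holding the two terms for one concept plus term-level fields. The following sketch uses invented field names (not any tool's actual schema) to show the kind of structure involved, including a slot for multimedia files:

```python
from dataclasses import dataclass, field

@dataclass
class TermEntry:
    """One concept in a bilingual termbase; field names are illustrative only."""
    subject: str
    source_term: str
    target_term: str
    part_of_speech: str = ""
    target_gender: str = ""
    definition: str = ""
    media: list = field(default_factory=list)  # e.g. paths to images or sound files

termbase = {}

def add_entry(entry):
    # Index by the source term for quick retrieval; a real termbase also
    # indexes the target side so searches work in both directions.
    termbase[entry.source_term.lower()] = entry

add_entry(TermEntry(subject="Law", source_term="legal aid",
                    target_term="asistencia letrada",
                    part_of_speech="noun phrase", target_gender="f"))
```

Storing structured records rather than flat spreadsheet rows is what makes the hierarchical display and selective retrieval described above possible.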


Using CAT tools in language learning

The idea of using computers for language learning is not new, and there is a wealth of literature covering many aspects of CALL (computer-assisted language learning). However, there is little research into using CAT tools specifically for learning specialized language. Kenny (1999) had already pointed out that the integration of CAT tools into university curricula could open up new areas of research and pedagogy, but there has been little research since then on how CAT tools in particular might be used as tools to hone students' specialized language skills.

As Rogers (1996) points out, language learners and translators "have a good deal in common when it comes to dealing with words: each must identify new words, record them, learn them, recall them, work out their relationships with other words and with the real world" (Rogers, 1996, p. 69). This chapter is based on this common ground between the tasks performed by translators and those performed by language learners when it comes to specialized language. Thus, this chapter aims to 'recycle' the main components of CAT tools, such as the TM and the termbase, in order to provide continued support to the language learning process.

There is much literature on the skills needed in language learning, but a number of foundational skills are generally well established in language pedagogy, such as speaking, listening, reading, writing, grammar and vocabulary (e.g. Hinkel, 2011; Widdowson, 2013). In the following sections, an overview is provided of how the TM and the termbase might support various foundational language learning skills. However, CAT tools work with written text, so skills such as listening and speaking fall outside the scope of this chapter. Such skills will nevertheless be hinted at when discussing the use of multimedia files in the termbase.
Translation skills have been added to the list of foundational skills, as translation is clearly another skill that CAT tools can contribute to. The CAT tools explored in this chapter are the SDL Trados Studio 2011 suite (Trados for short), which includes WinAlign and SDL MultiTerm, and Déjà Vu X2 (henceforth DVX2), not only because these are two important CAT tools in the translation industry, and therefore widely taught in (at least UK) universities, but also because their components that are useful for language learning can be accessed as standalone features, without the need to launch the rest of the software, as some other CAT tools require. The usefulness of Trados and DVX2 is compared in the following sections.

TM, alignment and specialized language learning

Since the TM deals mainly with segments at sentence level, it is particularly suited to supporting skills such as reading, writing and translation, which are often employed at textual or sentential level. For the same reason, more advanced language learners would particularly benefit from using a TM as an additional tool. An example of a TM that can be used for language learning is that of DVX2. A populated TM becomes a readily available corpus of source texts and their translations. The lecturer can use this corpus to produce a variety of CALL exercises for students to benefit from additional support to improve their knowledge of the foreign language. The exercises with the TM can range from substitution and gap-filling exercises (whether single words or complete phrases) to all kinds of text manipulation exercises, such as partial or complete text reconstruction, re-ordering words in a sentence, unscrambling, and much more. The exercises can be in the source language, the target language, or both. For other types of CALL exercises, see Blake (2013).

One kind of exercise where TMs can help more advanced students is writing in the foreign language, e.g. by helping students to structure their writing and use fluent, natural ways of expressing themselves. The left half of the screen of a TM is the usual place to display the source text. Instead of the source text, the lecturer can create a list of headings structuring the essay in a particular way, e.g. Introduction, Aims, etc. Students can use these headings to guide them through the structuring of their essay in the foreign language. There can be different templates for different tasks, and students need not adhere to the templates very strictly. Once students open the list of headings in the CAT tool, they can search the TM for accepted, conventional phrases to start paragraphs. To do this, students would simply right-click a word such as Introduction in the source text column in order to search for that word in the TM. This brings up a bilingual dialog box with the results. In this case, the dialog box contains a couple of examples of good ways to start a paragraph in Spanish. Students would select the phrase they would like to use by double-clicking it, thus copying it directly into the Spanish column. Students can then finish the sentence with their own input. If necessary or desired, students could also type the selected phrase manually.
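Gap-filling exercises of the kind mentioned above can be generated mechanically from TM segments. A minimal sketch, where the segment and the blanking strategy are invented for illustration:

```python
import random

def gap_fill(segment, n_gaps=1, seed=None):
    """Blank out n_gaps randomly chosen words in a TM segment and return
    the exercise text together with the hidden answers."""
    rng = random.Random(seed)
    words = segment.split()
    positions = sorted(rng.sample(range(len(words)), n_gaps))
    answers = [words[i] for i in positions]
    for i in positions:
        words[i] = "______"
    return " ".join(words), answers

exercise, answers = gap_fill(
    "This essay is an attempt to answer a fundamental question", n_gaps=2, seed=1)
```

Running such a generator over every segment in a populated TM would give the lecturer a ready-made bank of exercises in either language.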
Of course, this scenario requires a certain amount of preparation of the TM contents for the exercise, but in some cases students could be asked to create the amended TM themselves. The involvement of the lecturer would be to oversee the accuracy of the TM contents before students use it for the writing task. Every English segment in the TM that can be re-used in the future as an introductory phrase, for example, should be amended to include the label INTRODUCTION at the start. This would remind students at the time of the exercise that the phrase can be used as an introductory phrase to start a paragraph. The use of uppercase is deliberate, to distinguish the label from the actual phrase. Similarly, other labels could be PRESENTING AIMS, CONCLUSION, INTRODUCING AN OPPOSED VIEW, etc.

The TM in Trados can also be used in this way. However, the display of segments in Trados is less user-friendly for language learning exercises. Placing the label INTRODUCTION at the start of a segment is purely for indexing and displaying purposes when the software searches for matches within the TM. As it is assumed that students have already familiarized themselves with the software, the automatic changes to the source text should not come as a surprise or hindrance to students. Provided that the label INTRODUCTION remains placed at the start of the segment, the Spanish segment will remain unchanged and students will be able to decide which Spanish introductory phrase to use in their writing task.
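The labelling convention can be simulated with a plain dictionary standing in for the TM: segments prefixed with a function label are retrieved by that label. The segments and their Spanish renderings below are invented examples:

```python
# Source segments carry an uppercase function label, as described above.
tm = {
    "INTRODUCTION This essay is an attempt to answer a fundamental question":
        "Este ensayo intenta responder a una pregunta fundamental",
    "INTRODUCTION This paper examines a long-standing debate":
        "Este trabajo examina un debate de larga tradición",
    "CONCLUSION To sum up, it has been shown that":
        "En resumen, se ha demostrado que",
}

def phrases_for(label):
    """Return the target-language phrases filed under a given label."""
    return [target for source, target in tm.items()
            if source.startswith(label + " ")]

openers = phrases_for("INTRODUCTION")
```

This is essentially what the right-click search achieves inside the CAT tool: the label narrows the match list to phrases with the desired rhetorical function.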

This writing activity can be integrated into the classroom, but it also lends itself well to self-study. The lecturer can even set students the task of creating content in advance by asking them to create a bilingual TM themselves using alignment. Students would simply need to create a separate file for each language and run both files through an alignment program. One such program is SDL WinAlign (henceforth WinAlign). Initially, the program will attempt to match the relevant phrases in each language and will display them joined with a dotted line. Dotted lines indicate that human intervention is needed to confirm that the segments have been appropriately matched. Dotted lines joining segments can easily be deleted by selecting the relevant option after right-clicking the icon of a particular segment. Suggestions can also be accepted by right-clicking and selecting the relevant option. Once a segment has been amended, the dotted lines become continuous lines to indicate that the segment match is confirmed. If necessary, the match can become unconfirmed again by selecting the relevant option when right-clicking on the segment, thus allowing the user to re-match the segments as required.

An alignment task such as this can be used to help improve students' skills in the foreign language at both sentential and textual level. The role of the lecturer could be to provide the source and target files to be aligned, having amended the files so that the order of the sentences is different in each language. When opening the files in WinAlign, students should first disconnect all dotted lines, and then attempt to make the appropriate connections. Thus far we have assumed a one-to-one correspondence of segments, but it may be that one segment matches two; in such cases, both segments correspond to one segment in English and can easily be joined by right-clicking the relevant option.
Following the same procedure, a segment could be split if required. The source segments discussed thus far are phrases, but the same exercise could be carried out with complete sentences, where students would not only have to match the right sentences, but also order them to create a coherent text. Once all the segments are matched, students can export the results as a newly created TM, which they can then retrieve for future reference or reuse, either in Trados or in other CAT tools.

DVX2 also has an alignment component. Its approach, however, differs considerably from the drag-and-drop approach in WinAlign. Instead of dotted lines, the alignment procedure in DVX2 consists of placing the cursor on a segment and clicking the relevant button at the bottom of the screen to join it to another segment, split it, move it or delete it. In order to align an English segment to its corresponding Spanish one, students would need to place the cursor on the Spanish segment, for example, and keep clicking the "Move Up" button under the right pane until the Spanish segment is at the top, next to the English one. Alternatively, they could place the cursor on the English segment and keep clicking the "Move Down" button under the right pane until the English segment is level with the Spanish one. Each language has its own set of buttons for flexibility.

In language learning, an alignment exercise such as this could be used at all levels, as the level will largely depend on the contents to be aligned. Alignment tasks can be performed with single words, phrases, sentences or even complete paragraphs. A special feature in DVX2 allows users to discern the type of segment they are dealing with by allowing them to export the segments to either the termbase or a TM. Although in translation alignment is typically used to create TMs, which often contain segments at sentence level, DVX2 also allows users to highlight any word, phrase or term they find in each language within a sentence and send it to the termbase, which often contains segments at word or term level, by clicking the "Add to TB" button. Thus, translators could export the sentence This range of printers provide high resolution photo printing (4800 dpi maximum) and its translation to a TM, whereas the terms printer and high resolution, for example, could be highlighted in each language and added directly to a termbase from the alignment screen by clicking "Add to TB". Similarly, language learners could export a phrase such as This essay is an attempt to answer a fundamental question to a TM, while highlighting essay and adding it to a termbase, each with their corresponding translations. It should be noted that in DVX2 terms can be added to the termbase while the alignment results are being checked by the users, whereas exporting to a TM is usually performed once the alignment is complete and confirmed. DVX2 allows users to export the data to a TM in a variety of formats, such as TMX, which allows the data to be used in the TMs of other CAT tools, and TXT and MS Excel, among others, which allow users to carry the data over to other applications. In WinAlign, the alignment data can also be exported to TMX, but there is no dedicated facility to send data to the termbase directly from the alignment screen.
However, alignment data in WinAlign can reach a termbase via another SDL component, SDL MultiTerm Extract, with which users can extract terms automatically and then send them to a termbase. For a fuller account of the term extraction process with MultiTerm Extract, see Fernández-Parra and ten Hacken (2008).
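TMX itself is a small XML vocabulary: each translation unit (tu) holds one variant (tuv) per language, each containing a seg element. The sketch below emits a minimal TMX-like document with Python's standard library; real exports from CAT tools add fuller header metadata, and the example sentence pair is invented:

```python
import xml.etree.ElementTree as ET

def to_tmx(units, srclang="en-GB", tgtlang="es-ES"):
    """Serialize aligned (source, target) pairs as a minimal TMX document."""
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", srclang=srclang, segtype="sentence",
                  datatype="plaintext", adminlang="en")
    body = ET.SubElement(tmx, "body")
    for source, target in units:
        tu = ET.SubElement(body, "tu")
        for lang, text in ((srclang, source), (tgtlang, target)):
            tuv = ET.SubElement(tu, "tuv")
            tuv.set("xml:lang", lang)  # TMX marks the language on each <tuv>
            ET.SubElement(tuv, "seg").text = text
    return ET.tostring(tmx, encoding="unicode")

tmx_doc = to_tmx([("This essay is an attempt to answer a fundamental question",
                   "Este ensayo intenta responder a una pregunta fundamental")])
```

Because the format is plain XML, a TM exported this way can be opened, inspected or post-processed outside any particular CAT tool, which is precisely what makes TMX useful for exchange.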

The termbase and language learning

Terminology is a crucial aspect of the translation process, and many CAT tools are equipped to deal with a variety of terminological tasks efficiently and consistently. In language learning, the termbase can support a crucial aspect, that of vocabulary acquisition. Since CAT tools are typically used for technical translations, this section explores the possibilities of using the terminology function of CAT tools as a tool for learning specialized language.

Termbase facilities can vary widely from one CAT tool to another, but one good example of a useful terminological facility for language learning is that of SDL MultiTerm (henceforth MultiTerm). The MultiTerm termbase is a concept-oriented database, which means that each entry contains data (i.e. the source and target terms and information about them) relating to a single concept. The screen is divided into a customizable number of panes. Usually, the top pane displays information that applies to the whole entry, such as "Subject" and the "Entry number", the latter automatically assigned by the software. The pane in the middle displays the English (source) term and any associated information about it, such as "Definition", "Part of speech", "Context", etc. Similarly, the bottom pane contains all the information about the Spanish (target) term. In translation, such a hierarchical arrangement of the information is important because it helps translators record information that may only be pertinent to the term in one language but not in the other, e.g. geographical variations, standardized forms of terms, customer-specific or project-specific terminology, etc.

A hierarchically organized termbase is equally beneficial for language learning, as there is much overlap between the terminological information needed by translators and the linguistic information needed by language learners. For instance, it is important both for language learners and professional translators to learn the standardized forms of terms. Learners may also benefit from fields such as grammatical information, e.g. regular/irregular verb, collocational restrictions, etc. The organization of the information may be a typical one in translation, but the visual, hierarchical display can enhance vocabulary-building tasks and memorization in language learning. In addition to flexibility of content, the MultiTerm termbase offers a variety of configuration and display options, so that, for example, users can choose to view only the terms, without any associated information, or re-arrange the configuration so that graphics are at the bottom of the entry rather than at the top, etc. Thus, when using the termbase for learning specialized language, students will be able to select either a full view with the complete termbase contents, or a display of just the source and target languages as a simple bilingual glossary.
Students will also be able to toggle between the different views and layouts depending on the specific linguistic task they may be carrying out. Further, the MultiTerm termbase is reversible, that is, it can be searched in either direction between the languages: English into Spanish or Spanish into English, in this case. Another advantage of the MultiTerm termbase over others, such as that of DVX2, is that it allows the inclusion of multimedia information such as graphics, videos and sound files. Collecting such information for future translations on topics such as medicine or science may be obvious but, for specialized language learning, sound files and videos in particular can be usefully exploited in knowledge-building tasks. Furthermore, the MultiTerm termbase supports the inclusion of live (i.e. clickable) hyperlinks, which allow students to navigate to further relevant pages on the Internet directly from the termbase. The main difference in functionality and usefulness of the DVX2 termbase for language learning, compared to the MultiTerm termbase, is that neither multimedia information nor live hyperlinks can be included in a DVX2 termbase, at least in the version studied here.

As with TMs, termbases come empty to start with, and it is up to the user to decide whether a termbase should be bilingual or multilingual, which fields to include in each entry and which hierarchy of fields to establish. When creating a MultiTerm termbase for the first time, users can choose to create their own layout manually, field by field, or to select from a range of predetermined templates offered by the software. These templates are designed to save time and effort when creating termbases, but they also help maintain consistency of presentation if other termbases are created at a later date. Consistency is an important part of translation work, for example if working regularly for a client that requires translators to provide termbases with a specific layout. However, consistency of layout and presentation can also be of benefit when using the termbase for specialized language learning.

Once the termbase is set up and populated, information can be easily retrieved from it. In the default screen, a dedicated pane lists all the entries in the termbase. Thus, users can simply click the relevant entry to view its full contents. For users who prefer a less cluttered screen, the list of terms can be minimized with one click. Another method of retrieving information from the termbase is to use its search functionality. The MultiTerm termbase can not only be searched in the normal way ("Normal Search"), i.e. searching for a term or terms which begin with a particular character or characters, but also offers a "Full Text Search" and a "Fuzzy Search". A "Full Text Search" searches the text in all the fields of the entry; as such, it constitutes a more thorough search than the "Normal Search". Although it is possible to use wildcards in the MultiTerm termbase (i.e. the symbol * to represent multiple unknown characters or the symbol ? for a single character), users can also perform a so-called "Fuzzy Search", which finds text that is either identical or similar to the search value. As with fuzzy matches in the TM, users can set the fuzzy threshold value they wish to use for their search in the termbase or use the default value. Further, users can resort to so-called "Filters" to perform more advanced or selective searches.
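Both search modes have close standard-library analogues in Python: fnmatch implements * and ? wildcard patterns, and difflib returns similarity-ranked matches comparable to a fuzzy search with a threshold. The term list is an invented example:

```python
import fnmatch
from difflib import get_close_matches

terms = ["legal aid", "legal advice", "police statement", "forest biomass"]

# Wildcard search: * matches any run of characters, ? exactly one.
wildcard_hits = fnmatch.filter(terms, "legal a*")

# Fuzzy search: terms whose similarity to the (misspelled) query reaches
# the cutoff, analogous to setting the fuzzy threshold in a termbase search.
fuzzy_hits = get_close_matches("legal ade", terms, n=3, cutoff=0.7)
```

The practical difference mirrors the one described above: wildcards require the user to know part of the spelling, whereas a fuzzy search tolerates typing errors and minor variants.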
Such advanced search functionality allows users to store large volumes of information in the termbase and retrieve it quickly and efficiently. The work carried out on a MultiTerm termbase can be shared through a server, if the specific institution has a server in place. Where possible or affordable, this solution would allow students to work in groups and, for example, share vocabulary building tasks.

Another useful feature of the MultiTerm termbase for language learning is its portability, that is, the data collected in the termbase can be exported at any time into other applications, and there is a wide range of export options for the user. Of particular interest for language learners, for example, is the fact that the data can be exported directly to an MS Word .rtf file, where the entries are automatically arranged in a dictionary fashion by the software, as shown in Figure 18.1, which illustrates how the information contained in MultiTerm can be automatically arranged in a user-friendly, visual way. The terms in the source language are displayed in colour and bold, those in the target language in black and bold. The labels of the different fields are shown in italics, e.g. Domain, Definition, etc. The bilingual glossary shown in Figure 18.1 displays terms from two different domains, but glossaries can also be multilingual and be focused on one domain only. In order to edit the contents of the glossary, users should work from within MultiTerm and re-export the selected termbase or termbases into Word. In other

A
asistencia letrada   EN-GB legal aid
PoS: noun phrase. Gender: f. Definition: Payment from public funds allowed, in cases of need, to help pay for legal advice or proceedings. Source: http://www.oxforddictionaries.com/definition/english/legal-aid.

atestado policial   EN-GB police statement
PoS: noun phrase. Gender: m. Definition: Written summary of police investigation. Source: http://www.wordreference.com/es/translation.asp?tranword=police+report.

B
biomasa forestal   EN-GB forest biomass
PoS: noun phrase. Gender: f. Definition: The by-products of current forest management activities, current forest protection treatments, or the by-products of forest health treatment prescribed or permitted under forest health law.

C
caza furtiva   EN-GB poaching
PoS: noun phrase. Gender: f. Definition: The illegal practice of trespassing on another’s property to hunt or steal game without the landowner’s permission. See also cazador furtivo.

cazador furtivo   EN-GB poacher
PoS: noun phrase. Gender: m. Definition: A person who illegally hunts birds, animals or fish on somebody else’s property. Example: According to the Wildlife Protection Society of India, 14 tigers have been killed by poachers in India so far this year – one more than for all of 2011. See also caza furtiva.

cordillera   EN-GB mountain range
PoS: n. Gender: f. Definition: A mountain range is a group or chain of mountains that are close together. A mountain system or system of mountain ranges is sometimes used to combine several geographical features that are geographically (regionally) related. Source: http://resources.woodlands-junior.kent.sch.uk/homework/mountains/ranges.htm.

[graphic]

D
deforestación   EN-GB deforestation
PoS: noun. Gender: f. Definition: The clearing or thinning of forests, the cause of which is implied to be human activity.

E
empresa maderera   EN-GB logging company
PoS: noun phrase. Gender: f. Definition: A company that fells trees and sells timber.

eslabón   EN-GB link
PoS: n. Gender: m. Definition: A relationship between two things or situations, especially where one affects the other. Source: Oxford Online English Dictionary. Example: “a commission to investigate a link between pollution and forest decline”.

H
hacer constar   EN-GB record officially
PoS: vb. Source: Diccionario de términos jurídicos 1999, Teddington: Peter Collin.

M
Ministerio del Interior   EN-GB (Spanish version of) Home Office
PoS: noun phrase. Gender: m. Source: https://www.gov.uk/government/organisations/home-office/about.

P
prueba pericial   EN-GB forensic evidence
PoS: noun phrase. Gender: f. Definition: Forensic evidence is evidence obtained by scientific methods such as ballistics, blood tests, and DNA tests and used in court. Forensic evidence often helps to establish the guilt or innocence of possible suspects. Source: http://definitions.uslegal.com/f/forensic-evidence/.

Figure 18.1  SDL MultiTerm termbase exported into MS Word

T
tala industrial   EN-GB industrial logging
PoS: noun phrase. Gender: f. Definition: The business of cutting down trees and transporting the logs to sawmills for varied industrial uses, as in the fabrication of telegraph poles and railroad ties, and in building construction, shipbuilding, and furniture manufacture.

testigo   EN-GB witness
PoS: n. Gender: m. Definition 1: A person who sees an event, typically a crime or accident, take place. Definition 2: A person giving sworn testimony to a court of law or the police. Source: http://www.oxforddictionaries.com/definition/english/witness. Spanish synonym: deponente.

Figure 18.1  (continued)

words, MultiTerm users need never concern themselves with the task of adding the formatting shown in Figure 18.1 to the resulting Word glossary – this is done automatically by the software each time a termbase is exported. Users can choose to export the whole termbase or a single entry. However, it should be noted that, in the 2011 version of the software, multimedia files such as graphics are not exported and need to be added manually to the resulting .rtf Word file, but this may change in future versions of the software. The extent to which the MultiTerm termbase is integrated with MS Word is such that termbases can be accessed directly from MS Word by means of an add-in on the MS Word ribbon. In addition, ad-hoc entries can be added from the MS Word ribbon directly to the MultiTerm termbase while students are working on files in MS Word. Termbase data can also be exported to other formats, such as .txt, which allows users to export their data into a wide range of applications, including MS Excel.

Termbases can be provided by lecturers for use in the classroom, for example in specialized translation classes, but they can also be created by students themselves. In this case, much of the learning can take place when students collect the information needed and carry out the necessary research work themselves to include items in the termbase. The learning of specialized language, at all levels, is further reinforced by consulting the termbase in subsequent tasks.

The MultiTerm termbase was designed with a view to improving quality and efficiency in the terminology tasks that translators perform. However, its combination of powerful options in terms of ease of creation, flexibility of layout and display, ease of lookup and shareability suggests that there is scope for considering such components of CAT tools as additional viable tools for specialized language learning, not just for translation.
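MultiTerm's export definitions are configured within the software itself, but the underlying idea of flattening termbase entries into a delimited text file that spreadsheet software can open can be sketched as follows. The entry structure below is invented for illustration (the field names are modelled loosely on the glossary in Figure 18.1) and does not reflect MultiTerm's actual data model.

```python
import csv
import io

# Hypothetical in-memory termbase entries (illustrative only).
entries = [
    {"es": "caza furtiva", "en": "poaching", "pos": "noun phrase", "gender": "f"},
    {"es": "deforestación", "en": "deforestation", "pos": "noun", "gender": "f"},
]

def export_tab_separated(entries, fileobj):
    """Flatten entries into tab-separated lines that MS Excel can open."""
    writer = csv.DictWriter(fileobj, fieldnames=["es", "en", "pos", "gender"],
                            delimiter="\t")
    writer.writeheader()
    writer.writerows(entries)

buf = io.StringIO()
export_tab_separated(entries, buf)
print(buf.getvalue())
```

Opening such a file in a spreadsheet gives students a sortable, filterable glossary outside the CAT tool, which is the portability argument made above in miniature.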

Concluding remarks

This chapter explored the possibilities of considering CAT tools as additional language learning tools, especially in universities, departments or schools where CAT tools are already part of the translator training curriculum, as both students and staff may already be familiar with the software. Some knowledge of the software

Computer-assisted translation tools 253

has therefore been assumed, although an effort has been made to select CAT tools which have components that can be accessed relatively quickly as standalone components, rather than having to launch the whole program in order to access certain components, as is the case with some CAT tools.

Three main components of CAT tools were explored: the translation memory (TM), the alignment task and the terminological database (termbase for short). Each of these components can support a variety of language learning tasks. Through the TM and the alignment task, students can improve their reading, writing and translation skills, both at sentential and textual level, whereas the termbase can support a wide range of specialized vocabulary acquisition tasks. By contrast with the TM, the termbase contains items at word or phrase level. The languages used in this chapter were English and Spanish, but the results reported here can be applied to many other language combinations.

The two CAT tools explored in this chapter were SDL Trados Studio 2011 and DVX2. The results of these explorations suggest that the TM, the alignment task and the termbase can be productively used in each tool for language learning, not only for the translation tasks they were designed for. However, each tool presented slight differences in approach and sophistication. The integration of the MultiTerm termbase with word-processing software such as MS Word, through add-ins and export options, can be particularly useful for students to continue improving their knowledge of specialized vocabulary. Since the students this chapter is aimed at are trainee translators, using these tools for language learning outside their translation tasks can help them become even more proficient in their use of CAT tools. The examples of activities shown here can be used in a classroom setting, but they also lend themselves well to self-study.
The involvement of the lecturer in the suggested activities can vary from minimal (by setting students the task of creating the materials which they will then use for other activities, checking the accuracy of the students’ work, etc.) to substantial (by creating the materials for students, using them in class, etc.) as required. Encouraging students to use CAT tools for language learning can facilitate more active, hands-on learning.

Two important advantages of each tool are portability and shareability. They are portable because the various export options offered allow users to take the data to other CAT tools and other applications. They are shareable because both tools offer server options to allow fast, efficient teamwork, where acquiring and installing the server versions of these tools is possible and affordable. However, exploring the full possibilities of the server versions of these tools for language learning falls outside the scope of this chapter.

Another advantage of using CAT tools over other types of computerized tools for language learning is that of reuse. This chapter hopes to have shown that the original idea of the suitability of CAT tools for technical translation, due to the repetition of lexical items, has a further application in specialized language learning, where the repetition of such items reinforces learning and promotes the continued honing of knowledge and language skills. While CAT tools were not designed to be used for language learning and will never replace

other methods of language learning, this chapter hopes to have shown that CAT tools can nevertheless co-exist with such methods and contribute to enhancing the language learning experience.

References

Austermühl, F. (2001). Electronic tools for translators. Manchester: St. Jerome Publishing.

Blake, R. J. (2013). Brave new digital classroom: Technology and foreign language learning. Georgetown: Georgetown University Press.

Bohm, E. (1997). A translation memory system in the university context: Practical applications and didactic implications. In E. Fleischmann et al. (Eds.), Translationsdidaktik (pp. 361–367). Tübingen: Narr.

Bowker, L. (2002). Computer-aided translation technology: A practical introduction. Ottawa: University of Ottawa Press.

DeCesaris, J. A. (1996). Computerized translation managers as teaching aids. In C. Dollerup & V. Appel (Eds.), Teaching translation and interpreting 3: New horizons (pp. 263–270). Amsterdam: John Benjamins.

Déjà Vu (2014). Retrieved April 22, 2014, from www.atril.com.

Esselink, B. (2000). A practical guide to localization. Amsterdam & Philadelphia: John Benjamins.

Fernández-Parra, M. (2014). Formulaic expressions in computer-assisted translation. Saarbrücken: Scholars’ Press.

Fernández-Parra, M. & ten Hacken, P. (2008). Beyond terms: Multi-word units in MultiTerm Extract. Proceedings of the 30th International Conference on Translating and the Computer. London: ASLIB.

Hinkel, E. (Ed.). (2011). Handbook of research in second language teaching and learning. Volume II. Abingdon: Routledge.

Kenny, D. (1999). CAT tools in an academic environment: What are they good for? Target, 11, 65–82.

Olohan, M. (2011). Translators and translation technology: The dance of agency. Translation Studies, 4(3), 342–357.

Quah, C. K. (2006). Translation and technology. Basingstoke: Palgrave Macmillan.

Rogers, M. (1996). Beyond the dictionary: The translator, the L2 learner and the computer. In G. Anderman & M. Rogers (Eds.), Words, words, words: The translator and the language learner (pp. 69–95). Clevedon: Multilingual Matters.

SDL Trados (2014). Retrieved April 14, 2014, from http://www.sdl.com/products/sdl-multiterm/desktop.html.
Widdowson, H. (2013). Skills and knowledge in language learning. In M. Byram & A. Hu (Eds.), Routledge encyclopedia of language teaching and learning (p. 631). Abingdon: Routledge.

19 Applying corpora-based translation studies to the classroom

Languages for specific purposes acquisition

Montserrat Bermúdez Bausela
Universidad Alfonso X el Sabio, Spain

Introduction

This chapter shows how the compilation of an ad hoc corpus, and the use of corpus analysis tools applied to it, can help us with the translation of a specialized text in English. This text could be sent by the client or it might be the text used by the teacher in the classroom. The corpus used for the present study is a bilingual (English and Spanish) comparable specialized corpus consisting of texts from the field of microbiology. Once our corpus is ready to be exploited using corpus processing tools, our aim is to study conceptual, terminological, phraseological and textual patterns in both the English and the Spanish corpus, to help us make better informed decisions as to the most appropriate natural equivalents in the target language (TL) in the translation process (cf. Bowker & Pearson, 2002; Philip, 2009). We intend to do so by means of word lists and concordance, collocate and cluster searches, all of which are provided by the lexicographical tool WordSmith Tools.

Background

As Bowker and Pearson (2002, p. 9) underscored, corpora are large collections of authentic texts, as opposed to “ready-made” texts; they are in electronic form, which allows us to enrich them as we carry out further research; and they respond to a specific set of criteria depending on the goals of the research in mind. For the purposes of this chapter, we have considered “parallel corpora” as compilations of original texts in one language and their translations into a different one (Baker, 1995, p. 230; Šarčević, 1997, pp. 20–21), while we have used the term “comparable corpora” to describe original texts in two or more languages, following Johansson (2003, p. 136) and Bowker and Pearson (2002, p. 12).

There are many fields of study in which linguistic corpora are useful, such as lexicography, language teaching and learning, contrastive studies, sociolinguistics and translation, to name a few. In García-Izquierdo and Conde’s words (2012, p. 131), “In any event, regardless of their area of activity, most subjects feel the need for a specialized corpus combining formal, terminological-lexical, macrostructural and conceptual aspects, as well as contextual information”. The

use of linguistic corpora is closely linked to the need to learn languages for specific purposes (LSPs). In this regard, translators are among the groups who need to learn and use an LSP, since they are non-experts in the specific field they are translating and need to acquire both linguistic and conceptual knowledge in order to do so. From the observation of specialized corpora, it is possible to identify specific patterns, phraseology, terminological variants, cohesive features and so forth. Access to this information will allow the translator to produce quality texts.

In recent years, many important voices in the literature have stressed the importance of applying corpus methodology to research in translation studies. One of the first was Baker (1995), who highlighted developments in areas such as terminology extraction and machine translation. Laviosa (1998) also discussed the corpus-based approach in translation studies and even considered it the “new paradigm” in the discipline. This same idea was emphasized by other authors such as Olohan (2004, p. 17), who claimed that “it provides a method for the description of language use in translation”. Likewise, Zanettin (2013, p. 21) carried out an overview of how corpus-based techniques have been used in descriptive translation research. Accordingly, this chapter tries to contribute to the acknowledgement of the potential of corpus studies in translation.

Methodology, corpus design and compilation

There are some specific issues that need to be considered when designing and compiling a corpus for the translator. One of them is the issue of “representativeness”, as stated by Baker (1995, p. 225), Kennedy (1998, p. 62) and Olohan (2004, p. 47), among others: Will our data be representative enough for what we aim to study? The size of the corpus is another issue to bear in mind. In this regard, a corpus does not necessarily have to be a “large” collection of texts (Hunston, 2002, p. 26), since a small corpus might fit our needs more precisely. Another important aspect is the nature of the texts that constitute the corpus. In the case of the translator’s corpus, it is very likely that the corpus is compiled to study terminological, phraseological and textual aspects. Therefore, we would be talking about specialized texts. In this sense, Cabré (2007) stressed, among the most relevant criteria: topic, level of specialization, languages, type of text, textual genre, and sources.

We will summarize here some of the general characteristics of our corpus while dealing with some of the issues mentioned above. The corpus we have built is an ad hoc corpus, with a specific goal in mind: the translation brief. It is a written corpus, as opposed to an oral one. It is a specialized corpus, not a general one. All of the texts deal with the topic of microbiology and have been produced in a professional and academic context. In addition, both the macrostructure and the microstructure of the texts are unmistakable signs of their specialized nature, as is the relevant number of lexical units present in them.

It is a comparable bilingual corpus in English and Spanish. “Text originality” is a controversial issue when talking about comparable corpora, especially regarding the English texts, since English is the lingua franca of scientific communication and the most frequent language of scholarly scientific articles published on the Internet. How can we be sure that all the texts that make up our corpus were originally written in English? From our point of view, this should not be a problem because, even if these texts were covert translations (House, 2006), they are presented to the scientific community as originals, and they are totally acceptable and functional in their target system.

Another important issue for the compilation of our corpus was the accuracy and reliability of the consulted sources. Thus, all the chosen texts had passed a strict quality control, since they were published in journals with a peer-review process. All texts (in both the English corpus and the Spanish corpus) belonged to the same text type: they were mainly expository and partially argumentative, according to Hatim and Mason’s (1990, p. 149) terminology and classification. Regarding textual genre, all the results obtained for the English corpus were scientific research articles. In the Spanish corpus compilation, however, there was a wider variety of textual genres in the output. The result of our search included, in a large number of cases, PhD theses and final year dissertations, as well as scientific research articles, which considerably enlarged the size of the Spanish corpus compared to the English one. The English corpus consisted of 29 files, 67,844 tokens (running words in the corpus) and 6,466 types (different words), while the Spanish corpus consisted of 27 files, a total of 363,424 tokens and 18,994 types.
We could have taken into consideration only scientific research articles in Spanish, but it was very important for us to be faithful to the same search criteria in both the English and the Spanish corpora. It was because of this that we accepted the output including the wider variety of genres in Spanish. As Hunston (2002, p. 26) stated, “someone aiming to build a balanced corpus may restrict the amount of data from one source in order to match data from another source”, while she expressed her own view that “it is preferable to select from a large amount of data than to restrict the amount of data available”, in agreement with other authors such as Sinclair (1992). We should add that it is a multi-author corpus because one of our purposes was to learn the features that characterize a particular LSP. Also, we tried to aim at contemporary texts since we wanted to investigate the current state of a particular subject or field. Finally, the degree of reusability of our corpus is very high since it was created with the aim of being further enlarged and enriched. Regarding the mechanics of the corpus compilation, the whole process began by choosing a specialized text in the source language (SL). It could be the text that the teacher and the students are working with in the classroom, or the actual text sent by the client to be translated. It could belong to any field: scientific, technical, legal, business, etc. In our particular case, we took as our source text (ST) the article entitled “Antibacterial activity of Lactobacillus sake isolated from meat” by U. Schillinger and F. K. Lücke, published in the journal Applied and

Environmental Microbiology, August 1989, 55(8), 1901–1906. We thought it was a good example of a highly specialized text, scientific in this case, confirmed not only by its specialized terminology, but also by its macrostructure. It belonged to an academic and professional type of discourse in which both the sender and the recipient are experts, and it corresponded to an expository type of text and, to a certain extent, an argumentative one.

What we first needed to confirm was the field of study and the level of specialization of the ST. With this aim in mind, we generated a word list (using the software WordList, provided by WordSmith Tools) of the most frequent words in the text. We only took into account lexical (or content) words and left aside grammatical (or function) words with the help of a stop list. Stop lists are files containing all the words that we are not interested in, i.e., words that are not going to add to the specificity of the text and are not terms. The program includes some stop lists, and others are easily created. This “filtered” frequency list provided us with part of the specific terminology of the text (bacteriocin, strain, culture, agar, bacteria, plasmid, supernatant, etc.).

In order to start building our corpus, we searched on the Internet for texts that included a number of the above-mentioned terms. Each text was saved individually in TXT format (the format supported by WordSmith Tools). Once the English corpus was ready, we started building the Spanish corpus by searching for texts in Spanish that included the Spanish equivalents of some of the most frequent and representative terms in the English ST. Accordingly, we searched for texts that included “bacteriocina”, “cepa”, “cultivo”, “agar”, “bacteria”, “plásmido” or “sobrenadante”, among other terms.
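Inside WordSmith Tools this filtering is done by WordList together with a stop list file; purely as an illustration of the same step, here is a minimal Python sketch over an invented snippet of text, with a deliberately tiny stop list (not the one used in the study).

```python
import re
from collections import Counter

# An invented snippet standing in for the source text (illustration only).
text = ("The bacteriocin produced by the strain inhibited the growth of "
        "the indicator strain in the culture.")

# A deliberately tiny stop list of function words; WordSmith Tools
# reads these from a stop list file instead.
stop_list = {"the", "by", "of", "in", "a", "an", "and"}

# Tokenize, lowercase, and count only the words not on the stop list.
tokens = re.findall(r"[a-z]+", text.lower())
freq = Counter(t for t in tokens if t not in stop_list)

print(freq.most_common(3))  # [('strain', 2), ('bacteriocin', 1), ('produced', 1)]
```

The surviving high-frequency items are precisely the candidate terms that would then seed the web searches described above.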

Data extraction and analysis

The most common lexical word in the ST was bacteriocin, with a frequency of 0.98%. A corpus can help us identify terms shown in context, and the most frequent patterns of use. From the different concordance lines, collocates and clusters (retrieved thanks to the software Concord, a functionality provided by WordSmith Tools), we obtained relevant grammatical and lexicographical information. We entered the search pattern using a wildcard, namely bacterio*. Concordancers frequently provide different options to limit or widen a search by using wildcards. The asterisk, in particular, is a wildcard that substitutes an unlimited number of characters. Using this, we were able to rule out the incorrect equivalents and check the different variants of the term.

It is important to understand the meaning behind the term and learn something about the subject. In this context, corpora are of great importance, since we can search the corpus to find this kind of information. We could, of course, use some reference material on the topic, but sometimes we do not have immediate access to it, whereas we do have direct access to our corpus. This idea is reinforced by authors such as Bowker and Pearson (2002, p. 38) and Corpas-Pastor (2004, pp. 152–153). In this sense, from the concordance lines of bacteriocin(s) we

learnt that bacteriocin(s) are a type of activity; they are produced by bacteria or by strains; and they have a negative effect on other bacteria.

One way of studying grammatical and lexicographical patterns is through clusters. Clusters are groups of a specific number of words that appear together on a number of occasions. In this case, we believe that through clusters we can also study colligates. Colligation was defined by Hunston (2002, p. 12) as “the collocation between a lexical word and a grammatical one”. Another way of learning about the words that tend to co-occur is through collocates. Collocates are words that appear with a certain frequency close to the term we are studying and, along with clusters, help us identify the most frequent patterns of use.

We conducted the same type of study in both corpora. Examining the results of the concordances, collocates and clusters in English and in Spanish, we drew some conclusions:

1 The most frequent variant in English of the search pattern bacterio* was bacteriocin(s). Likewise, in Spanish, from the thorough study of the occurrences, we concluded that the most common terminological Spanish variant was “bacteriocina(s)”.

2 Both corpora shared the pattern that included producing and produced in English, and “producción” and “producidas” in Spanish. Syntactically speaking, there were two main patterns in English: the participial construction (a bacteriocin produced by) and the term bacteriocin as the agent (a bacteriocin producing). In Spanish, the most common syntactical constructions were the noun phrase (“la producción de bacteriocinas”) and the participial construction (“las bacteriocinas producidas”).
3 Both bacteriocin(s) in English and “bacteriocina(s)” in Spanish shared some of their most frequent collocates: producing, production, produced, lactic, activity and strains in English; and “producida(s)”, “producción”, “lácticas”, “actividad” and “cepas” in Spanish.

4 In English, it was common to find the combination [bacteriocin + noun] (bacteriocin inhibition), while in Spanish the structure [noun + “bacteriocinas”] (“inhibición de las bacteriocinas”) was more frequent.

All this information is of utmost importance for the translation of the text. A corpus can help us reflect the most natural style in our target text (TT), giving priority to the textual norms of the TL.
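Concord computes collocates and clusters for the user; the basic idea of counting the words that fall within a horizon of the node word can be sketched as follows. The three Spanish lines are invented examples, not data from the corpus described here.

```python
import re
from collections import Counter

# Three invented Spanish lines standing in for corpus data.
corpus = [
    "la producción de bacteriocinas por cepas lácticas",
    "las bacteriocinas producidas por bacterias lácticas",
    "la actividad de las bacteriocinas frente a otras cepas",
]

def collocates(corpus, node, horizon=3):
    """Count the words occurring within `horizon` tokens of the node word."""
    counts = Counter()
    for line in corpus:
        tokens = re.findall(r"\w+", line.lower())
        for i, token in enumerate(tokens):
            if token == node:
                window = (tokens[max(0, i - horizon):i]
                          + tokens[i + 1:i + 1 + horizon])
                counts.update(window)
    return counts

print(collocates(corpus, "bacteriocinas").most_common(3))
```

Ranking the counts surfaces patterns like “producidas por” around the node, which is the kind of evidence cited in conclusions 2 and 3 above.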

Using corpora in translation: an example

We would like to show an example of the direct contribution of corpora to translation practice. It is with this purpose that we want to use our ad hoc corpus: to check for terminology and collocates, to learn about the subject, to study phraseology, to identify textual conventions, to validate (or not) intuitions, and to learn about the style of a specific textual genre. We are going to do so through the translation of part of the abstract of the article we have used as our ST.

Sentence 1

A total of 221 strains of Lactobacillus isolated from meat and meat products were screened for antagonistic activities under conditions that eliminated the effects of organic acids and hydrogen peroxide.

In order to perform the translation of the previous segment, we looked up this information in our corpus:

• Lactobacillus: ¿lactobacilo? ¿Lactobacillus? We needed to perform a search of the term using a wildcard to make sure we had found the correct equivalent (see pattern 1).

• were screened for: in order to find the equivalent of screen we needed to search for possible collocates of strains of Lactobacillus together with antagonistic activities. Also, concerning style, we wanted to know if we should keep the passive voice or not (see pattern 2).

Pattern 1: search word: “cepas”; context word: “lactobac*”

The first thing we did was conduct a concordance search using “cepas” as our search word (equivalent of strains) and including a context word with an asterisk: “lactobac*”. A context word is used to check whether a word typically occurs in the vicinity of the search word, within a specified horizon to the right and left of it. We also used a wildcard, the asterisk, in order to find all the words that begin with “lactobac”, whatever their ending. Figure 19.1 shows part of the results of this search.

From the resulting concordance lines, we could see that both “cepas de Lactobacillus” and “cepas de lactobacilos” were possible equivalents. However, their frequency was quite different: we obtained 47 entries for “Lactobacillus” and 25 entries for “lactobacilos”. We could also see that whenever the Latin word Lactobacillus was used, it was used as a proper name: either to mention the genus (“del género Lactobacillus”) or when accompanied by another Latin name (“cepas de Lactobacillus plantarum”). We therefore opted for the Latin version: “cepas de Lactobacillus”.
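A search word combined with a wildcarded context word, as in the pattern above, can be illustrated with a short Python sketch; the corpus lines are invented, and fnmatch merely approximates Concord's wildcard handling.

```python
import re
from fnmatch import fnmatch

# Invented corpus lines for illustration; not actual corpus data.
corpus = [
    "se aislaron cepas de Lactobacillus plantarum del queso",
    "las cepas de lactobacilos mostraron actividad antagonista",
    "otras cepas de levaduras no produjeron bacteriocinas",
]

def concordance(corpus, search_word, context_pattern, horizon=5):
    """Keep lines where a word matching `context_pattern` (wildcards * and ?)
    occurs within `horizon` tokens of `search_word`."""
    hits = []
    for line in corpus:
        tokens = re.findall(r"\w+", line.lower())
        for i, token in enumerate(tokens):
            if token != search_word:
                continue
            window = tokens[max(0, i - horizon):i + 1 + horizon]
            if any(fnmatch(w, context_pattern) for w in window):
                hits.append(line)
                break
    return hits

for line in concordance(corpus, "cepas", "lactobac*"):
    print(line)
```

Here the first two lines survive the filter (both contain a word beginning with “lactobac” near “cepas”), while the third is ruled out, mirroring how the context word narrows the concordance.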

Pattern 2: search word: “cepas”; context word: “antagonist*”

Our next step was to conduct a concordance search for “cepas”, this time changing the context word to “antagonist*”. We wanted to know the specific equivalent for antagonistic and find possible collocates so that we knew how to translate were screened for. We found in our corpus that in Spanish it was quite common to use the pattern “cepas con actividad antagonista”.

As mentioned previously, specialized translation is not only about terminology, but also about style. Our translation should resemble other texts produced within that particular LSP. It must be stylistically appropriate as well as terminologically accurate. Thus, we checked that the much-used passive voice


Figure 19.1  Concordance lines for “cepas” with context word “lactobac*”

in English had been substituted by what in Spanish is called “pasiva refleja”. By reading all concordance lines for “cepas de Lactobacillus” in connection with “actividad antagonista”, we decided to render the passive voice were screened for as “se determinó”. Another possibility would have been “se probó”. Also, while conducting other searches, we came across the following concordance: “Se realizó un primer ‘screening’ utilizando el método del ‘spot test’ descrito por [. . .]”.

Suggested translation: “Se determinó la actividad antagonista de un total de 221 cepas de Lactobacillus aisladas de carne y productos cárnicos en condiciones que eliminaron los efectos de los ácidos orgánicos y el peróxido de hidrógeno.”

Sentence 2

Cell-free supernatants from 6 of the 19 strains of L. sake exhibited inhibitory activity against indicator organisms.

In this fragment, we were especially interested in learning how to translate:

1 The compound noun cell-free supernatants (see pattern 1).
2 The phrase inhibitory activity against (in particular, the preposition against) (see pattern 2).

262  Montserrat Bermúdez Bausela

Pattern 1: search word: “sobrenadante*” (equivalent of supernatant); context word: “célula*”

The results left no doubt as to how cell-free supernatants could be translated, as all 28 concordance lines yielded the same result: “sobrenadante libre de células”. We also noticed from the results that in Spanish it was always used in the singular: “el sobrenadante”. This would be considered a question of style.

Pattern 2: search word: “actividad inhibi*”; context word: “indicador*”

With this search pattern we were looking for the right preposition (“contra”? “frente”? “respecto”?). We found that the most frequent one was “frente”, although it was not the only one: the preposition “contra” was also used, but in far fewer cases.

Suggested translation: “El sobrenadante libre de células de 6 de las 19 cepas de L. sake presentó actividad inhibitoria frente a los organismos indicadores.”

Sentence 3

In mixed culture, the bacteriocin-sensitive organisms were killed after the bacteriocin-producing strain reached maximal cell density, whereas there was no decrease in cell number in the presence of the bacteriocin-negative variant.

Certain issues caught our attention, such as how to translate the following complex noun phrases. These structures are particularly difficult to translate since in English all the components are juxtaposed without prepositions, and it is somewhat arduous to identify their semantic connections:

•• bacteriocin-sensitive organisms (see pattern 1);
•• bacteriocin-negative variant (see pattern 2);
•• bacteriocin-producing strain (see pattern 3).

Pattern 1: search word: “sensible*”; context word: “bacteriocina*”

The first thing we did was conduct a concordance search in the Spanish corpus using “sensible*” as our search word and including the context word “bacteriocina*”. Again, we used the asterisk wildcard in order to retrieve all the possible variants. We conducted this search because the different parts of the noun phrase had quite direct equivalents in Spanish (“bacteriocina”, “sensible” and “organismos”), but we needed to check, first, whether our intuition was right and, second, how the three words combined in Spanish. We obtained 10 concordance lines, from which we could deduce that the most frequent expression in Spanish was “organismos sensibles a las bacteriocinas”.

Pattern 2: search word: “bacteriocina*”; context word: “negativ*”

We conducted a concordance search using “bacteriocina*” as our search word and included the context word “negativ*”. As in the previous search pattern, we knew the equivalents of the separate words (“bacteriocina”, “negativa”, “variante”), but we wanted to check any possible variants and how they combined to form a natural structure in scientific Spanish. Among the results, we observed the concordance line “variante negativa para bacteriocina”.

Pattern 3: (1) search word: “bacteriocina*”; context word: “produc*”; (2) search word: “cepa*”; context word: “produc*”

Firstly, we looked for the search word “bacteriocina*” along with the context word “produc*”. As the original noun phrase included the structure bacteriocin-producing, we needed to identify the possible collocates of “bacterio*” in Spanish related to the English form producing (these were “producción”, “producidas” and “productoras”). We obtained different concordance lines in which the textual patterns “producción de bacteriocinas”, “productoras de bacteriocinas”, “produciendo bacteriocinas”, etc. were common. Secondly, we conducted another search with “cepa*” as search word and “produc*” as context word. Combining both results, we could see that the most repeated pattern in Spanish was the noun phrase “cepa(s) productora(s) de bacteriocina(s)”.

Regarding style, we came across a difficulty in the translation of the bacteriocin-sensitive organisms were killed. We were not able to find in our corpus any concordance examples of “organismos eliminados” or “fueron eliminados”. It seems we had found the appropriate collocate but not the appropriate style: the verb “eliminar” in the Spanish corpus follows the grammar pattern verb + object (“eliminar microorganismos”) and, in a large number of cases, the noun “eliminación” is used.
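The claim that “cepa(s) productora(s) de bacteriocina(s)” is the most repeated pattern rests on comparing collocate frequencies. A toy collocate counter along those lines might look as follows; the three-line corpus and the window size are invented, and in this naive sketch overlapping windows may count the same occurrence more than once.

```python
import re
from collections import Counter

def collocates(tokens, node_pattern, context_pattern, horizon=5):
    """Count tokens matching `context_pattern` that occur within `horizon`
    tokens of any token matching `node_pattern` (trailing-* wildcards)."""
    to_rx = lambda p: re.compile(p.lower().replace("*", r"\w*") + "$")
    node_rx, ctx_rx = to_rx(node_pattern), to_rx(context_pattern)
    counts = Counter()
    for i, tok in enumerate(tokens):
        if not node_rx.match(tok.lower()):
            continue
        # Naive windowing: overlapping windows may recount an occurrence.
        for w in tokens[max(0, i - horizon): i + horizon + 1]:
            if ctx_rx.match(w.lower()):
                counts[w.lower()] += 1
    return counts

# Invented mini-corpus for illustration:
corpus = ("las cepas productoras de bacteriocinas crecieron ; "
          "la producción de bacteriocinas por estas cepas ; "
          "otras cepas productoras de bacteriocinas").split()
print(collocates(corpus, "cepa*", "produc*").most_common())
```

Ranking the counts surfaces “productoras” as the dominant collocate of “cepa*”, mirroring the conclusion drawn from the two manual searches.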
Suggested translation: “En un cultivo mezclado, la eliminación de los organismos sensibles a la bacteriocina se produjo después de que la cepa productora de bacteriocina alcanzara la máxima densidad celular, mientras que no hubo disminución en el número de células en presencia de la variante negativa para bacteriocina.”

With the translation of this part of the abstract, our intention has been to provide a small sample of how corpora can be applied to the translation process to help obtain the most natural results. In the same way, the translator could carry out a similar study for each translation commission; the teacher could use it as a classroom methodology; the student, to increase his or her own competence; or the researcher, to carry out a specific project. The advantages of applying a corpus-based methodology, as we have seen, are numerous. Moreover, the model we have suggested is general enough to be applicable to any pair of languages and to texts belonging to any specialized field of study. Ours was an ad hoc comparable bilingual corpus, but an ad hoc corpus could also be monolingual (consisting only of texts in the TL) or parallel (originals and their

translations). They all have clear benefits; choosing one or another depends on the purpose of the study. For classroom purposes, it would probably be enough to build a monolingual corpus of texts in the TL as similar as possible to the ST in field of study, level of specialization, textual genre, text type and function. The student could use it to understand the meaning of a specific term, to learn the contexts in which a key term is used, to study collocates and the most frequent textual patterns, etc. However, we feel that a comparable corpus would allow us to perform other types of procedure, such as a contrastive study of the terminological possibilities, style and textual conventions of the SL and the TL, as well as trying to provide solutions to translation-specific problems. In the case of compiling a parallel corpus, the focus would be on terminology and translation strategies; we recommend using the websites of international organizations, such as the European Union, to compile the corpus and produce bilingual concordances, after first aligning the texts.

As we mentioned before, translators are a particular type of LSP users. A corpus can help us acquire the knowledge necessary to translate a text that belongs to a particular LSP: especially linguistic knowledge, but also conceptual knowledge. LSPs are also crucial for learners of a particular language, and for native speakers, to be able to communicate with professionals of the same discipline. Terminologists may also benefit from corpora by using term extraction tools. This is just a small sample of what corpora can do (see Hunston, 2002, for a detailed description of the different areas in which the application of a corpus can be extremely useful).

Conclusions

The development of specific software, along with the enormous amount of resources and documentation available on the Internet, is a valuable advantage that facilitates the compilation of an ad hoc corpus. When compiling a translator’s corpus, the user has to consider carefully the criteria that will serve his or her purposes. When working with texts from the Internet, there are also certain issues to bear in mind, such as trustworthiness. Nor should we forget the investment of money and time involved in learning to use specific software tools, such as concordancers. We think, however, that all this effort is worthwhile and pays off in the long run: once the translator knows how to extract information from his or her corpus, the framework is established and can be enlarged and enriched in future projects.

There are a number of ways in which specialized corpora can help the translator. We can generate word lists to identify the field and level of specialization of the ST. We can use corpora to learn about the subject we are translating, and about the most common lexical and grammatical patterns, through the retrieval of concordances, collocates and clusters. Furthermore, a corpus is an invaluable source regarding style: choosing the appropriate textual conventions and norms that the recipient of the TT expects to find reflected in the text is a guarantee that the text will have a high degree of acceptability. The use of ad hoc corpora involves a great

development in the documentary sources available to the translator, as Corpas-Pastor (2004, p. 155) points out, apart from being an excellent working methodology in the classroom. We believe that corpora help students acquire and develop their own translation competence, and that their use suits the specialized translator’s needs perfectly.
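The word lists mentioned in the conclusions above can be generated with very little code. The sketch below is illustrative only: the stoplist and the sample sentence are invented, and a real study would use the full corpus and a proper general-language stoplist.

```python
import re
from collections import Counter

def word_list(text, stopwords=frozenset()):
    """Frequency-ranked word list; once a general-language stoplist is
    filtered out, the specialized vocabulary of the ST rises to the top."""
    words = re.findall(r"[a-záéíóúüñ]+", text.lower())
    return Counter(w for w in words if w not in stopwords).most_common()

# Invented stoplist and sample text for illustration:
stop = {"de", "la", "las", "el", "los", "se", "en", "y", "a", "que", "con"}
sample = ("Se determinó la actividad antagonista de las cepas de "
          "Lactobacillus frente a los organismos indicadores y las "
          "cepas productoras de bacteriocinas")
print(word_list(sample, stop)[:5])
```

With function words removed, terms such as “cepas” and “bacteriocinas” dominate the list, which is precisely the signal used to gauge the field and level of specialization of a text.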

References

Baker, M. (1995). Corpora in translation studies: An overview and suggestions for future research. Target, 7(2), 223–243.
Bowker, L. & Pearson, J. (2002). Working with specialized language: A practical guide to using corpora. London: Routledge.
Cabré, M. T. (2007). Constituir un corpus de textos de especialidad: Condiciones y posibilidades. In M. Ballard & C. Pineira-Tresmontant (Eds.), Les corpus en linguistique et en traductologie (pp. 89–106). Arras: Artois Presses Université.
Corpas-Pastor, G. (2004). La traducción de textos médicos especializados a través de recursos electrónicos y corpus virtuales. El español, lengua de traducción. Retrieved from the Congreso Internacional de ESLETRA website: http://cvc.cervantes.es/lengua/esletra/pdf/02/017_corpas.pdf
García-Izquierdo, I. & Conde, T. (2012). Investigating specialized translators: Corpus and documentary sources. Ibérica, 23, 131–156.
Hatim, B. & Mason, I. (1990). Discourse and the translator. London/New York: Longman.
House, J. (2006). Covert translation, language contact, variation and change. SYNAPS, 19, 25–47.
Hunston, S. (2002). Corpora in applied linguistics. Cambridge: Cambridge University Press.
Johansson, S. (2003). Reflections on corpora and their uses in cross-linguistic research. In F. Zanettin, S. Bernardini & D. Stewart (Eds.), Corpora in translator education (pp. 133–144). Manchester: St Jerome.
Kennedy, G. (1998). An introduction to corpus linguistics. Amsterdam: Rodopi.
Laviosa, S. (1998). The corpus-based approach: A new paradigm in translation studies. META, 43(4), 474–479.
Olohan, M. (2004). Introducing corpora in translation studies. London: Routledge.
Philip, G. (2009). Arriving at equivalence: Making a case for comparable general reference corpora in translation studies. In A. Beeby, P. Rodríguez-Inés & P. Sánchez-Gijón (Eds.), Corpus use for learning to translate and learning corpus use to translate (pp. 59–73). Amsterdam/Philadelphia: John Benjamins.
Šarčević, S. (1997). New approach to legal translation. The Hague/London/Boston: Kluwer Law International.
Sinclair, J. M. (1992). The automatic analysis of corpora. In J. Svartvik (Ed.), Directions in corpus linguistics: Proceedings of Nobel Symposium 82, Stockholm, 4–8 August 1991 (pp. 379–397). Berlin: Mouton de Gruyter.
Zanettin, F. (2013). Corpus methods for descriptive translation studies. Procedia – Social and Behavioral Sciences, 95, 20–32.

20 VISP: A MALL-based app using audio description techniques to improve B1 EFL students’ oral competence

Ana Ibáñez Moreno
Universidad Nacional de Educación a Distancia, Spain

Anna Vermeulen Universiteit Ghent, Belgium

Introduction1

From the early 1980s, audiovisual products such as films, theatre and media events began to be provided with oral descriptions in the gaps between dialogues in order to make them accessible to visually impaired people. Schmeidler and Kirchner (2001), Bourne and Jiménez Hurtado (2007) and Snyder (2011) have shown that this new mode of AVT, called audio description (AD), increases the accessibility of audiovisual products not only for visually impaired people, but also for children, elderly people and immigrants. Additionally, it has recently been applied in the foreign language (FL) classroom (Clouet, 2005; Ibáñez Moreno & Vermeulen, 2013, 2014). Along these lines, we applied AD to mobile learning (m-learning) and developed a mobile application (app) for EFL students to promote oral skills. We present here the first version of VISP (VIdeos for SPeaking), which contains a short clip (31 seconds) of the film Moulin Rouge (Luhrmann, 2001) that has to be audio described. The process of audio describing the video clip comprises several steps: (1) filling in an online pre-test/questionnaire, (2) viewing the clip several times, (3) drafting a short AD script (in writing, if they wish), (4) recording the AD and sending it, and (5) filling in a final test/questionnaire containing formal and functional questions related to the task, where users can assess themselves by comparing their AD to the original AD. This first prototype was tested with two distance education students of EFL who were studying to reach level B2 (following the Common European Framework of Reference for Languages, CEFR, 2001). In what follows, a brief state of the art regarding the fields of mobile-assisted language learning (MALL) (p. 267) and AD as a tool in the FL classroom is presented (pp. 267–268); then, the data obtained from the qualitative pilot study are shown and analysed (pp. 268–272). Finally, we discuss the results and the comments of the two students who tested the
Finally, we discuss the results and the comments of the two students who tested the

app (pp. 272–275). It is suggested that more research is necessary in order to assess its validity.

MALL apps: state of the art

MALL has been defined by Kukulska-Hulme (2013, p. 2) from the perspective of the connection between language learning and the new mobile technologies: “mobile technologies in language learning, especially in situations where device portability offers specific advantages”. This concept is intimately linked to the development that mobile technology has experienced over the last decade, as well as to the variety of emerging mobile devices. Despite the wide range of apps that help improve FL skills, not many have been developed within the academic world. As Rodríguez Arancón, Arús-Hita and Calle-Martínez (2013), Calle-Martínez, Rodríguez Arancón and Arús-Hita (2014), and Martín-Monje, Arús-Hita, Rodríguez Arancón and Calle-Martínez (2014) remark, there is still a need for empirical studies based on testing the available apps, especially with regard to oral competences. Some pioneering work is being undertaken by the research group ATLAS. ATLAS (http://atlas.uned.es) stands for ‘Applying Technology to LAnguageS’ and is a UNED-based research group (UNED Ref. 87H31) working on MALL, CALL and MOOCs, among other technological areas for linguistic applications. Its members have designed MALL apps to promote oral skills by enhancing the communicative competences: ANT (Audio News Trainer; Pareja-Lora, Arús-Hita, Martín-Monje, Read, Pomposo Yanes, Rodríguez Arancón & Bárcena Madera, 2013), FAN CLUB (Friends of Audiobook Network), VIOLIN (Videos for LIsteniNg; Elorza, Castrillo, Ávila-Cabrera & Talaván Zanón, 2014), BUSINESS APP, EATING OUT, and MARLUC. Given that this field is still under development, our aim is to contribute to it, in line with the ATLAS research group, by proposing a MALL app that is academically grounded and pedagogically solid, as well as motivating and stimulating for EFL students.

Audio description (AD) and its application in the foreign language (FL) classroom

The concept of providing oral descriptions in the gaps between the dialogues for visually impaired people was first developed in the US in the 1970s (Snyder, 2005), and introduced in Europe at the beginning of the 1980s (Benecke, 2007). The so-called AD turns images into speech in order to make audiovisual material (films, theatre, opera, etc.) accessible to the blind and partially sighted. AD has been studied from several points of view. From the translation point of view, Matamala (2006) and Díaz Cintas (2007), among others, have dealt with the competences and skills of the good audio describer, who has to be aware of his/her role as a social intermediary. Regarding the use of AD as a didactic tool in FL learning, the multimodal nature of audiovisual texts, which combine the verbal sign with images and sound, represents a stimulating background for

language learners. In line with the success of applying subtitling and revoicing to language learning and teaching (Díaz Cintas, 2008; Talaván Zanón, 2013; etc.), AD proves to be beneficial as a didactic tool in FL learning. Clouet (2005) first proposed its use as a didactic tool to promote writing skills in English as an FL. The EU-funded project ClipFlair, which promotes multilingualism in Europe and provides activities based on AVT to foster FL learning, also includes AD-based activities. In the same vein, the ARDELE project (in Spanish, Audiodescripción como Recurso Didáctico en la Enseñanza de Lenguas Extranjeras), at the University of Ghent (Belgium), explores the benefits and the limitations of the use of AD in the Spanish as an FL classroom (Ibáñez & Vermeulen, 2013, 2014).

An AD-based MALL app – VISP: videos for speaking

Once the validity of AD as a didactic tool in the teaching and learning of FL in formal settings had been assessed, we took it out of the classroom by designing an AD-based app and testing it. This app is presented below.

Preliminaries: theoretical framework and pedagogical premises

VISP (which stands for VIdeos for SPeaking) is a mobile app conceived within the framework of MALL for the promotion of oral production skills. The reason is that even though ubiquitous learning environments have multiplied and new technologies have been developed to adapt to the new learning styles (Jones & Jo, 2004), the average user, in an average ubiquitous context, has few chances to practise oral production. This first version (VISP v1) has been conceived for students of EFL who possess at least a B1 or B2 level (according to the CEFR, 2001) and who are interested in practising their oral skills while improving their lexical competence, which is considered by many to be the primary aspect of Second Language Acquisition (Tight, 2010). The app has been developed from a number of theoretical premises. First, from the language learning perspective, lexical and phraseological competence is more important for reaching proficiency levels than other aspects of language learning (Tight, 2010): lexical errors are perceived by native speakers as more serious than grammatical errors (Schmitt, 2000). Moreover, phraseological competence is the most difficult competence to acquire, and it does not come implicitly, that is, naturally (Laufer, 1997, p. 25). The number of exposures necessary for both the recognition and the production of vocabulary is significantly lower than the number of exposures needed to learn words in context (Nation, 2001, pp. 289–299, in Tight, 2010). Second, from the perspective of mobile learning, VISP v1 is conceived as a ubiquitous learning tool, that is, a personalized learning tool in which learning is based on user-generated contexts (Traxler, 2011). The user only needs a few minutes to accomplish the task and can pause the application and continue or repeat the activities whenever he/she wants.
The activities of one session, altogether, take an average of 30 minutes.

Third, from a pedagogical perspective, VISP v1 is framed within the currently accepted and widely used language teaching approach: the communicative approach. Within it, VISP follows the transferable skills approach (Talaván Zanón, 2010), which is necessary in ubiquitous learning to support the idea of using mobile devices for language learning, and the task-based approach (Willis & Willis, 2007), in the sense that it consists of communicative tasks whose goal is to achieve a specific learning objective (Ellis, 2003). In both approaches the user develops his/her communicative competence: when assuming the role of an audio describer, the learner must take the recipient into account so as not to contaminate the objective idea that the recipient has to receive. In the next section VISP is briefly described.

Description of VISP v1

For this pilot study, we selected a clip from the film Moulin Rouge (Luhrmann, 2001) lasting only 31 seconds, as opposed to the clips used when working with AD in the classroom (Ibáñez Moreno & Vermeulen, 2013, 2014), which last around three minutes. Following the guidelines (Benecke, 2007; Matamala & Orero, 2007), an audio describer is allowed to use 180 words per minute; in this particular case, the user will employ around 55 to 60 words. First, the clip was chosen because it contains a description of the character, both physical and emotional, and several actions take place. The transcription of the AD of the clip is provided below:

A handsome man, Christian, in his twenties, with dark hair and beard, takes a new line on his typewriter. He puts his hand to his forehead. Through his open window lies Paris at night. Tearfully, he stares out to the window, at the Moulin Rouge. He turns back to the typewriter. The Paris city scene.

Example 1. Transcription of the AD in the clip of Moulin Rouge

Second, the instructions for the use of VISP v1 were designed. They consist of several steps, beginning with a very brief introduction to AD, shown in Figure 20.1. As can be observed, only the most essential information about AD was included. This page has two buttons. The upper one (Sample Audio Description) directs the user to a real AD sample: when the user clicks on that button, he/she gets access to a clip extracted from Memoirs of a Geisha, with AD. In this short clip of 4 seconds the user listens to a real AD, as a warm-up listening task. Finally, the last button, at the bottom of the screen, directs users to a Google Drive questionnaire, where they fill in their personal data and complete a short test, which includes language content that will appear in the AD task and therefore in the AD script (ADS). This questionnaire also includes a link to the YouTube trailer of the movie from which the clip is taken: Moulin Rouge.
This video is also meant as part of the warm-up phase, in the sense that users can start familiarizing themselves with the audiovisual material they will encounter immediately afterwards in the activity.


Figure 20.1  Introduction screen in VISP v1

Once users have been introduced to AD, seen and listened to an example, and filled in their data, they can continue to the next step by clicking the Instructions button in the main menu, which takes them to a screen with the following text:

Watch this clip from Moulin Rouge as many times as you want. Audio describe it following the example you saw in the introduction. You can record your voice while you play it.
Useful tips: You can first prepare a written script of the audio description.
Remember: use short action verbs in the present tense; do NOT audio describe recognizable sounds.

Record your AD as much as you want. When you feel satisfied, continue to the next step to submit your task.

Example 2. Instructions to do an AD in VISP v1

These instructions are very simple, brief and direct, so as to keep the user’s attention and interest. They contain tips and basic rules for audio describing. Once users know what they have to do, the next step is the task itself, on the Practice screen, where they practise audio describing as much as they want, recording and listening to themselves, and repeating and changing their AD. Once they are satisfied with their performance, they click on the Finish button. This button directs them to a screen in which users fill in their name and send their recordings to an e-mail account. This screen also includes a self-evaluation section, with a final questionnaire conceived to obtain data on the user’s perception of the application and on the idea of using AD as a learning tool, and to help users assess their own performance. As a first step, users can watch the original clip with AD and read the original written ADS, so they can start comparing their own AD with the professional one. A second step consists of more specific self-assessment, in which reflection on their performance is elicited by comparing their own ADS to the several parts of the original ADS. For this, several questions are included. They are outlined below:

2 Now, let’s compare it to your AD:
a Which words/expressions are different from yours?
b In any case, any interesting differences on what you said?
c Did you use expressions such as “We see, we observe . . . ” in your AD?
d Did you use adjectives or adverbs in your AD other than the ones in the original AD?
Example 3. Section 2 of the post-questionnaire

(2.a) is a multiple-choice question. The user has to tick the options that he/she considers adequate, that is, all the expressions that share fewer than three words with the expressions used in the original AD. We cannot expect learners to produce an AD totally similar to the original, because different people see and formulate what they see in many different ways, and users may be discouraged if they are expected to produce an AD exactly the same as the AD in the clip. Therefore, we think that requiring at least three similar words is an appropriate threshold for evaluating their own performance. (2.b) is an open question, where users can write their particular observations on the differences between their AD and the original. This question is meant to leave learners some freedom and autonomy to analyse their own performance without instructions. Finally, (2.c) and (2.d) are Yes/No questions. Below them, the following notes can be read, respectively:

2.c. The key issue here is: Do you think it is polite to remind a visually impaired person what we can see? Besides, we have limited time, and these expressions just steal it from us! Think about that :-)
2.d. As you may have noticed, in the original AD there are indeed adverbial expressions such as “tearfully”. Now: compare “tearfully” to saying “he looks sad”. The adverb is just describing what appears on the screen, while the sentence is more subjective, isn’t it?

Example 4. Part of question 2 of the post-questionnaire

These remarks are meant as guided reflection, to make users take their (supposed) audience into consideration: visually impaired persons. These two questions are therefore meant to elicit interpersonal and intercultural competence, in the sense that the audience is taken into account and the user is encouraged to realize how important it is to communicate a message adequately: both what we communicate and how we communicate it matter. Finally, section 3 contains several grid questions, where users must evaluate (on a scale of 1 to 5, where 1 means “I totally disagree” and 5 means “I completely agree”) the use of AD for language learning purposes. This section contains four questions, reproduced below:

a This AD task has made me think about my own language learning.
b This AD task has made me reflect on how what we communicate is strongly influenced by our particular way of looking at things.
c AD has helped me observe the importance and difficulty of using accurate vocabulary.
d This task has been useful for me to observe how important it is to make all types of audiovisual material accessible to visually impaired persons, by using the language in such a way that the recipient is taken into account.

Example 5. Section 3 of the post-questionnaire

These questions are intended to make users reflect on AD as a technique to improve certain areas of their vocabulary in English, as well as on their own communicative competence in general.

A qualitative pilot case study

In order to test VISP v1, a first pilot study was conducted during the second term of the 2014–2015 academic year with two students – who will be called student A and student B – from the Degree in Tourism of UNED (Universidad Nacional de Educación a Distancia), the Spanish national university of distance education. These two female students were taking the course Inglés II para Turismo (English for Tourism Studies II), which corresponds to a B2 level (CEFR, 2001).

They volunteered to participate in a project which consisted of trying out the app. The data gathered therefore derive from the pre- and post-questionnaires, as well as from their recordings.

Sample and preliminary data: results of the pre-questionnaire

The pre-questionnaire is meant to gather data on the users, such as their linguistic background and previous knowledge of AD, and includes a test, in order to assess the rate of improvement after using the app. In this case, it is remarkable that both volunteer students were bilingual: student A in Spanish and Galician, and student B in Spanish and Romanian. Both stated that they used English as an FL quite often, and student B pointed out that she also used a third language: French. Regarding previous knowledge of AD, student B had already heard of it, whereas student A had not. Students who declare not to know what AD is are invited to guess what they think it is; in this particular case, student A said that she thought it was a mobile app with videos to learn a language. Regarding the test, in which they had to translate words (from English to Spanish or vice versa), both provided correct answers, although student A left four of the ten terms untranslated.

Results of the recording

Users are not obliged to send their recording; they can keep it for themselves and practise AD as much as they want. Student A finished the task by filling in the post-questionnaire, but decided not to send her recording. In what follows, the transcribed recording sent by student B is presented and analysed:

Dark side of the Moulin Rouge at night, followed by a young black haired man, sitting in a dark room, in front of a typewriter, crying, with his hand in his forehead. The words on the paper come to life: “The woman I loved . . . ” He makes a pause, and looks out of the window across the street, while the words start coming, and so he begins his story: “I first came to Paris, one year ago”.

Example 6. Transcription of the AD submitted by student B

Linguistically, the recording in (6) is impeccable. The only errors we find are related to AD techniques themselves: she used too many words (76) for a clip of 31 seconds, because she also included the protagonist’s narration in her AD. Users are told not to describe sounds or what can already be heard, but this mistake is still very common among students, as shown by Ibáñez Moreno and Vermeulen (2013, 2014). As for the rest, we can say that the task performed by this student was very good, as the student herself points out in the post-questionnaire.
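Student B’s 76-word AD can be checked against the 180-words-per-minute guideline cited in the description of VISP v1: for a 31-second clip the guideline allows roughly 93 words, so her recording stays under that hard ceiling while exceeding the 55–60-word target mentioned earlier. A hypothetical helper makes the arithmetic explicit:

```python
def ad_word_budget(duration_s, wpm=180):
    """Maximum AD length under the 180-words-per-minute guideline."""
    return round(wpm * duration_s / 60)

clip_seconds = 31
student_b_words = 76  # word count of student B's recording

print(ad_word_budget(clip_seconds))                      # 93-word ceiling for this clip
print(student_b_words <= ad_word_budget(clip_seconds))   # True: under the guideline ceiling
print(student_b_words <= 60)                             # False: over the 55-60-word target
```
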

274  Ana Ibáñez Moreno and Anna Vermeulen

Results of the post-questionnaire

Both students filled in the post-questionnaire. The answers to section 2.a, where they had to tick the expressions that were completely different from the ones they had used, revealed that student A used all expressions differently except one (she ticked five boxes out of six). Student B ticked three boxes. These results are confirmed by the students’ answers to section 2.b (Any differences with what you said?), provided below:

Student A: “My sentences are shorter than these ones.”
Student B: “I think I described almost the same, just in a different way, with my own words, though I didn’t know it was in Paris (though I should’ve figured it out easily) and I also didn’t know his name was Christian.”

Example 7. Some results from the post-questionnaire

Regarding student A, we cannot compare her statement to her actual performance, because she did not send her recording; as for student B, who states that she described almost the same, we can see that this is true. We can state, then, that student B shows high metacognitive skills. Also, her remark that she could not have known the proper names is correct, and has been used to improve the post-questionnaire for future editions. Both students stated that they did not use expressions such as we observe, we see . . . and that they used adjectives and adverbs that were different from the ones in the original AD. Regarding the students’ answers to section 3 (Reflections on AD, see Example 5 above), student A answered 4 or 5, and student B answered 5 in all cases, so we can state that both students strongly believed that the tasks with AD helped them to become aware of the others in communication, to try to find accurate words and expressions, and to reflect upon their own language learning.

Conclusions and suggestions for future research

In this work we have presented the methodological steps taken to develop a MALL app that aims to help B1/B2 English language learners to work on their oral production skills. It was developed on the idea that the use of AD, which conveys orally the same information that is accessible visually, can create an effective and motivating multimodal learning environment. VISP v1, as a pilot version, was tested on two distance learning students of English. The results show the positive and promising potential of applying AD to a mobile app aimed at promoting oral skills in the FL, although there is still room for improvement. We are already working on a second version, which will include more clips. In the near future we expect VISP to become a pioneering MALL app that uses AD as a technique to practise oral production skills in the FL. Another line of future research is the correlation between the user’s linguistic and

cultural background and the successful implementation of VISP as a MALL app. For now, we can only say that student B – who spoke more languages, came from a multicultural environment (she was both Spanish and Romanian) and knew what AD was – was significantly more motivated from the beginning and expressed her satisfaction with the app in several emails sent to the researchers. More data would help shed light on this topic.

Note

1 The research presented in this chapter has been carried out in the wider context of the SO-CALL-ME project, funded by the Spanish Ministry of Science and Innovation (ref. no. FFI2011-29829).

References

Benecke, B. (2004). Audio-Description. Meta, 49(1), 78–80.
Bourne, J. & Jiménez Hurtado, C. (2007). From the visual to the verbal in two languages: A contrastive analysis of the audio description of The Hours in English and Spanish. In J. Díaz Cintas et al. (Eds.), Media for all: Subtitling for the deaf, audio description and sign language (pp. 175–187). Amsterdam: Rodopi.
Calle-Martínez, C., Rodríguez-Arancón, P. & Arús-Hita, J. (2014). A scrutiny of the educational value of EFL mobile learning applications. Cypriot Journal of Educational Sciences, 9(3), 137–146.
Clouet, R. (2005). Estrategia y propuestas para promover y practicar la escritura creativa en una clase de inglés para traductores. Actas del IX Simposio Internacional de la Sociedad Española de Didáctica de la Lengua y la Literatura. Retrieved December 20, 2012, from http://sedll.org/es/congresos_actas_interior.php?cod=33.
Díaz Cintas, J. (2007). Por una preparación de calidad en accesibilidad audiovisual. Trans, 11, 45–60.
Díaz Cintas, J. (2008). The didactics of audiovisual translation. Amsterdam: John Benjamins.
Elorza, I., Castrillo, M. D., Ávila-Cabrera, J. J. & Talaván Zanón, N. (2014). Implementing situated m-learning activities for receptive oral skills. Presented at the International TISLID ’14 Conference, Ávila, May 7–9.
European Council (2001). Common European Framework of Reference for Languages: Learning, teaching, and assessment. Cambridge: Cambridge University Press and the Council of Europe.
Ibáñez Moreno, A. & Vermeulen, A. (2013). Audio description as a tool to improve lexical and phraseological competence in foreign language learning. In D. Tsagari & G. Floros (Eds.), Translation in language teaching and assessment (pp. 45–61). Newcastle upon Tyne: Cambridge Scholars Publishing.
Ibáñez Moreno, A. & Vermeulen, A. (2014). La audiodescripción como recurso didáctico en el aula de ELE para promover el desarrollo integrado de competencias.
In R. Orozco (Ed.), New directions in Hispanic linguistics (pp. 263–292). Baton Rouge: Cambridge Scholars Publishing.
Jones, V. & Jo, J. H. (2004). Ubiquitous learning environment: An adaptive teaching system using ubiquitous technology. Proceedings of the annual conference of the Australian Association for Computers in Learning in Tertiary Education. Retrieved

March 3, 2014, from http://www.ascilite.org.au/conferences/perth04/procs/pdf/jones.pdf.
Kukulska-Hulme, A. (2013). Some emerging principles for mobile-assisted language learning. The International Research Foundation for English Language Education. Retrieved April 17, 2014, from http://www.tirfonline.org/wp-content/uploads/2013/11/TIRF_MALL_Papers_StockwellHubbard.pdf.
Laufer, B. (1997). What’s in a word that makes it hard or easy: Some intralexical factors that affect the learning of words. In N. Schmitt & M. McCarthy (Eds.), Vocabulary: Description, acquisition and pedagogy (pp. 140–155). Cambridge: Cambridge University Press.
Luhrmann, B. (Director). (2001). Moulin Rouge! [Motion picture].
Martín-Monje, E., Arús-Hita, J., Rodríguez-Arancón, P. & Calle-Martínez, C. (2014). REALL: Rubric for the evaluation of apps in language learning. Proceedings of Jornadas Internacionales Tecnología Móvil e Innovación en el Aula: Nuevos Retos y Realidades Educativas. Retrieved January 2, 2015, from http://eprints.ucm.es/25096/.
Matamala, A. (2006). La accesibilidad en los medios: Aspectos lingüísticos y retos de formación. In R. Amat & A. Pérez-Ugena (Eds.), Sociedad, integración y televisión en España (pp. 293–306). Madrid: Laberinto.
Matamala, A. & Orero, P. (2007). Designing a course on audio description and defining the main competences of the future professional. Linguistica Antverpiensia, New Series, 6, 329–343.
Pareja-Lora, A., Arús-Hita, J., Martín-Monje, E., Read, T., Pomposo Yanes, L., Rodríguez-Arancón, P. & Bárcena Madera, E. (2013). Toward mobile assisted language learning apps for professionals that integrate learning into the daily routine. In L. Bradley & S. Thouësny (Eds.), 20 years of EUROCALL: Learning from the past, looking to the future. Proceedings of the 2013 EUROCALL Conference, Évora, Portugal (pp. 206–210). Dublin, Ireland: Research-publishing.net. doi: 10.14705/rpnet.2013.000162.
Rodríguez-Arancón, P., Arús-Hita, J.
& Calle-Martínez, C. (2013). The use of current mobile learning applications in EFL. Procedia – Social and Behavioral Sciences, 103, 1189–1193.
Schmeidler, E. & Kirchner, C. (2001). Adding audio description: Does it make a difference? Journal of Visual Impairment and Blindness, 95(4), 197–213.
Schmitt, N. (2000). Vocabulary in language teaching. Cambridge: Cambridge University Press.
Snyder, J. (2005). Audio description: The visual made verbal. International Congress Series, 1282, 935–939.
Snyder, J. (2011). Audio description: An aid to literacy. Revista Brasileira de Tradução Visual, 6(3), 19–22.
Talaván Zanón, N. (2010). Subtitling as a task and subtitles as support: Pedagogical applications. Approaches to Translation Studies, 32, 285–299.
Talaván Zanón, N. (2013). La subtitulación en el aprendizaje de las lenguas extranjeras. Madrid: Ediciones Octaedro.
Tight, D. (2010). Perceptual learning style matching and L2 vocabulary acquisition. Language Learning, 60(4), 792–833.
Traxler, J. (2011). Research essay: Mobile learning. International Journal of Mobile and Blended Learning, 3(2). Retrieved April 29, 2014, from doi: 10.4018/jmbl.2011040105.
Willis, D. & Willis, J. (2007). Doing task-based teaching. Oxford: Oxford University Press.

Afterword

Technology and beyond – enhancing language learning

In the introduction to this volume we anticipated that these pages would provide a representative sampling of how technological applications have contributed to enhancing language learning in a variety of ways in recent years. The volume is justified by its thorough presentation of state-of-the-art technological and methodological innovations in specialized linguistic areas through the contributions of its authors, organized in six sections, each revolving around a key aspect: (1) General issues about learning languages with computers; (2) Languages and technology-enhanced assessment; (3) Mobile-assisted language learning; (4) Language Massive Open Online Courses; (5) Corpus-based approaches to specialized linguistic domains; and (6) Computer-assisted translation tools for language learning. It is our conviction that the sections included in this volume have not only offered valuable insights into the promised state of the art but have also presented a number of approaches that will open the way for future research, teaching and learning in the field.

The unifying message of this volume is that mobile technologies have provided both a new perspective and a new methodology with which to approach language learning, shifting the focus of attention from the classroom environment towards the individual needs of the learner. This perspective is embodied in the aforementioned sections, purposefully selected to provide readers with a panoramic view of the most recent research. The sections have been presented and arranged in such a way that the reader is progressively introduced to a net of interconnected findings and data that give the volume its comprehensive character.
The first section of the volume posits the idea that digital literacy should be taken as a contemporary must for acquiring communication skills, a very relevant perspective with which to open the volume. The section becomes progressively more focused as it then deals with intercultural competence and its acquisition through technological tools, and ends with the proposal to encompass both the evolution of technologies and that of educational materials when presenting evidence-based research in specific language domains. The initial section thus constitutes a relevant and cohesive start for the volume, closely related to the second section, where technology-enhanced assessment in languages is addressed through the presentation of four experiences that complement each

other and are based on the use of Computer-Assisted Language Testing (CALT) systems. These are introduced in the first chapter of the section, where the author provides a detailed and relevant analysis of the current state, advantages and disadvantages of this assessment methodology, ending with future perspectives for continuing the research. Very much in line with what is put forward in the first chapter, and delving deeper into the assessment process, the second chapter proposes the use of Interactive Discourse Completion Tasks (IDCTs) and Retrospective Verbal Reports (RVRs) as tools to assess the performance of speech acts by students at university level. This second section also introduces tools to evaluate materials and linguistic processes in discourse analysis: on the one hand, a tool and a scoring rubric for the evaluation of digital educational materials are presented as a reference framework for the selection of the most suitable class information; on the other, the methodology and architecture behind a tool to detect errors in Part-of-Speech tags are conscientiously described.

Part Three displays advances in mobile-assisted language learning processes. It first focuses on how mobile-based tasks presented to students can enhance communicative competence in the classroom, even when students show different levels in learning English as a Foreign Language, and then moves a step forward to offer the perspective of mobile devices as key elements in the design of Language Massive Open Online Courses (LMOOCs), since they enable students to overcome the constraints of face-to-face classes and to benefit from the use of specialized devices: portable course clients, mobile sensor-enabled devices and handheld computers.
Finally, this section complements the vision provided so far with a proposal to use technologies as vocabulary learning tools, as well as putting forward the use of these technologies to improve the performance of MALL designs.

Part Four provides new insights into the design and implementation of language MOOCs, such as the presentation of writing-focused MOOCs, courses designed for an academic context in which students deal with a specific skill, also bringing suggestions for teachers. Complementing the pedagogical guide to MOOC design provided in this section, the combination of Connectivist and Instructivist approaches is portrayed as an ideal model for both teachers and designers of LMOOCs. As a third contribution to this section, LMOOCs are presented as a vehicle to reach professionals who need to learn the vocabulary of a specific language field; this contribution also highlights the importance of social learning in a 2.0 society and envisages possible problems drawn from a real experience in a Professional English Language MOOC.

Part Five complements the vision provided in the last part of the previous section, as it draws attention to the use of corpus inspection and analysis tools as a combination that leads to a better acquisition of expertise by trainees in a specific field. Along this line, the first contribution focuses on the benefits that corpus linguistics brings to the LSP class, providing relevant examples from a real experience using commercial aviation maintenance manuals and research papers as study corpora in class. Moreover, a computer-assisted pronunciation training app is presented as the result of an annotation process in

non-native spoken corpora, used as a tool to improve learners’ pronunciation. As a further link to the acquisition of linguistic competences at a professional level, monolingual virtual corpora are introduced as a tool for the training of public service legal translators.

Contributions in Part Six establish interconnected links around the notion of using computer-assisted translation tools as learning tools, first drawing attention to the use of specific ad hoc corpora in LSP classes and then moving on to present a MALL-based app as a technological resource to effectively improve learners’ oral competence.

However, the distinction among these parts is clear-cut only on a formal level. Themes and sub-themes blend and overlap across parts and chapters, weaving a net of shared interests that underlies all contributions: issues of task design, literacy, intercultural aspects, assessment, usability, mobility, scalability to reach massive cohorts, the build-up of a learning community, innovative and practical ideas for language teachers . . . all relevant to linguists, language teachers and researchers.

♣♣♣♣♣♣

By way of general conclusion, we would now like to turn to the actors, both teachers and learners, who are the real driving force behind the changes taking place within the great variety of language teaching and learning situations that can be encountered in specialized domains, and who have made it possible to reveal other relevant facets of technology-enhanced language learning through the situations discussed in the contributions to this volume. The initial development of educational technology can be dated to the early 1970s, the years when the term started to spread and when instructional media started to have some impact on educational practices (Reiser, 2001, p. 59); looking back from today, deep changes at all levels can be observed.
Nowadays teachers and learners in specialized domains such as the ones described in this volume use technology as part of their normal behaviour to experience the world and to interact with other people, including experiences and interactions whose purpose is teaching or learning a language. Although this may seem a truism, the point to be made here is that, compared to the educational practices of fifty years ago, technology-enhanced language education has been revealed to be an irreversible process, and indeed a highly dynamic one. A symptom of this irreversibility may be observed in the evolution, over these years, of the methods and procedures used by teachers to teach and by learners to learn. If we consider the dialectical relationship between theoretical and applied linguistics, we can explain this evolution by looking at how linguistic and cognitive theories and descriptions are evolving, the effects of which include changes in methodologies. The constructivist approach is now part of a socio-constructivist paradigm in which learning involves “the co-construction and sharing of knowledge, with the concept of ‘collective intelligence’ becoming central” (Arnó-Macià, 2014, p. 14). The reason can be found in the fact that, as Arnó-Macià

highlights, “in the last decade our society has undergone radical transformations in forms of communication, the access to and management of information, the creation of virtual communities, and immediate availability through portable devices” (2014, p. 5). One of the consequences of this is a shift in the perspective on the role of computers in language education, which are crucially contributing to bridging the gap between foreign language learning and acquisition (Gimeno Sanz, 2014, p. 43). Another consequence refers to the still ill-defined full implications of the introduction of tablets and other handheld devices into the panoply of technology-based educational devices, with the possibility of adding mobility to learning and thus making us reinterpret the conditions of time and space for learning. From a different perspective, another aspect we can take into consideration to explain methodological and procedural evolution is the gradual change in the degree of learners’ dependency on their teachers along their individual learning processes, as the Common European Framework of Reference for Languages explicitly acknowledges by describing learners who have reached the highest levels as ‘independent users’ of the language. However, the explanation of why technology-based language education has evolved as an irreversible process would not be complete without also considering its crucial dependency on the feasibility, or lack thereof, of autonomous learning, patently materialized in the (un)availability of resources that make autonomous and independent learning (im)possible at all.
This volume, with its kaleidoscopic view, as Colpaert has described it in his Foreword, offers a taster of the possibilities currently available for facilitating a gradual shift for teachers and learners towards unparalleled opportunities for both autonomous and independent learning, a shift that also involves a change in teachers’ roles within the learning process towards empowering learners. The unprecedented wealth of applications currently available and being put into practice and/or used by language teachers and learners on a daily basis, part of which has been developed and described by the contributors to this volume, shows that the dynamic process of technology-enhanced language learning has so far reached its maximum entropy. The chances that a randomly selected language learning situation in a specialized domain relies on no practical application of technology whatsoever are now practically nil. The question is not whether a course or a learning situation is technology-enhanced or not, but rather to what extent it is. The answer to this question is very much related to the extent to which teachers and learners are technology-literate in their daily lives. According to Arús, Bárcena and Read (2014), the areas of educational technology which have recently started to receive attention in relation to language learning in specialized domains include, apart from the Web 2.0, open educational resources and practices such as Massive Open Online Courses (MOOCs), gamification, and mobile and augmented reality (AR) devices and applications. Although the situation may have changed by the time these pages reach readers, the two areas that have already permeated teachers’ and learners’ daily experience are most probably MOOCs and Mobile Assisted

Afterword 281 Language Learning (MALL), particularly in terms of mobile applications. In the educational arena, MOOCs have experienced a record growth and popularity in a very short time, reaching all layers of society, and the potential as well as the practical applications of mobile phones and other handheld devices have also gained full attention in the academia in a very short time, the birth of the EUROCALL Special Interest Group on MALL in 2014 being a healthy symptom of this (http://www.eurocall-languages.org/sigs/mall-sig). The rest of the areas discussed in this volume focus on topics of more concern for teachers, but which also affect the way present and future language learners in specialized domains are likely to encounter right now or in a very short time.

References

Arnó-Macià, E. (2014). Information technology and languages for specific purposes in the EHEA: Options and challenges for the Knowledge Society. In E. Bárcena, T. Read & J. Arús (Eds.), Languages for specific purposes in the digital era (pp. 3–25). Bern: Springer.
Arús, J., Bárcena, E. & Read, T. (2014). Reflections on the future of technology-mediated LSP research and education. In E. Bárcena, T. Read & J. Arús (Eds.), Languages for specific purposes in the digital era (pp. 345–348). Bern: Springer.
Gimeno Sanz, A. M. (2014). Fostering learner autonomy in technology-enhanced LSP courses. In E. Bárcena, T. Read & J. Arús (Eds.), Languages for specific purposes in the digital era (pp. 27–44). Bern: Springer.
Reiser, R. A. (2001). A history of instructional design and technology: Part I: A history of instructional media. Educational Technology Research and Development, 49(1), 53–64.

Index

academic writing 165–76, 191, 209–13 accessibility 88–9, 100, 266 accuracy 47–9, 56, 102–26 acquisition 47, 48, 50, 56, 99, 277; language acquisition 131, 142, 166, 177, 179, 223, 268; pronunciation acquisition 224, 225; vocabulary acquisition 189, 248, 253 action research 142, 194 ad hoc corpora 231–5, 264 adult learners xvii, 47, 55, 56, 142, 148, 165 advanced students 206, 209, 213, 246 affective filter 50, 173 alignment 208, 244, 245, 247–8; components 244; process 244; screen 248; task 247, 253; tools 243, 244 annotation 102–26, 135, 216–27 AntConc 204, 233–6 app 14, 76, 136–7, 146, 152, 154, 266–76 assessment 23–6, 52, 61–72, 86–94, 102–26, 166; formal assessment 179; formative assessment 181; instructor assessment 169, 184; machine assessment 171; online assessment 53, 55; peer-assessment 170, 172; self-assessment 51, 52, 53, 54, 136, 170–1, 271; tools 32, 73, 167, 184 asynchronous computer-mediated communication 73; learning 39, 177; written interactions 184 attrition 164, 172 audio description 266–76

autonomous attitude 49; learning 86, 96, 102, 231, 280; practice 166 autonomy 9, 130, 136, 271 big data 14, 174 blended classes 166; courses 177; learning 102, 190; learning environments 89; modes 37–8 blog 16, 170, 185 BYOD (bring your own device) 143, 156 Byram’s model 24, 31 CALL (computer-assisted language learning) 142, 190, 216, 245–6, 267 CALT (computer-assisted language testing) 61–72 CAT (computer adaptive testing) 64–5 CAT (computer-assisted translation) tools 243–54 CBT (computer-based testing) 61–72 CEFR (Common European Framework of Reference for Languages) 39, 40, 54, 76, 196, 266, 268, 272 cMOOC (Connectivist Massive Open Online Course) 179, 180, 184 COCA (Corpus of Contemporary American English) 212–13 collaboration 15, 129, 131, 155, 158, 167, 173, 178, 194 collaborative learning 47, 173; project 23–34; task 23, 134 collocates 132, 207, 211, 213, 236, 258–65 collocation 203, 205, 259; cluster 234

communicative approach 145, 269 communicative competence 73, 146, 148, 269, 272; intercultural communicative competence 9, 23–34 communicative skills 47, 183 community 9, 15, 17, 86, 165, 172, 179, 182; language community 130–1; learner community 165; learning community 14, 153, 179, 279; LSP community 131–6; virtual community completion rate 167, 172, 195, 197 concordance 203–15, 255, 260; function 235; line 206, 235, 258–63 concordancer 236, 258, 264 connectivism 16; connectivist course 153, 179; connectivist framework 193 connectivity 15, 143, 145–7 corpora 133, 204; learner corpora 216, 221; LSP corpora 205; spoken corpora 216–27 corpus analysis 204, 212, 233–7, 255; linguistics 130, 230, 278 creativity 9, 18 critical cultural awareness 24, 27, 31 critical thinking 9, 96 culture 9, 23–34; digital culture 16, 18 DCT (discourse completion task) 73–85 decanting 104–6, 117 digital literacies 9–22, 149, 190 distance learning 48, 159, 178, 274 EAP (English for Academic Purposes) 165–76 EMI (English as a Medium of Instruction) 35–46 engagement 10, 135, 180, 184, 194 error correction 102, 184 feedback 26, 52, 66, 68, 77, 134, 147, 166, 179, 224, 236; and assessment 184–5; constructive feedback 179, 180; detailed feedback 148; effective feedback 153; immediate feedback

65, 68; instructor feedback 166; learner feedback 144; on-the-spot feedback 181; peer feedback 158, 180, 184, 185; perception of the feedback 196 flipped classroom xix, 183; model 140 forums 16, 168, 184, 196; discussion forums 167, 168–9, 172, 236 geolocation 140, 141, 146 Google 18; Apps 76; Docs 76, 77, 78; Drive 269; Forms 76; Hangouts 76, 77, 184; Play 151; Translate 133, 167 grammar 47, 50, 55, 56; checker 167; English grammar 48–9, 51, 52; exercises 51; practice 145; rules 55, 193; section 54; test 49, 52; and vocabulary 76, 78, 79, 80, 82, 183 higher education 47, 73, 86, 88 ICT: and EMI 35–46; and LSP 189–91; tools 74, 76, 77, 82 incidental learning 132, 133, 134, 137 information literacy 11, 13, 14, 16, 18 instructional design 40 interaction 17, 29, 68, 82, 89, 97, 153–5, 177, 192; and collaboration 194; participants’ interaction 75; social interaction 23, 24, 28, 153, 180, 192; spoken interaction 78, 136; student-machine interaction 39, 67; written interaction 82, 196 interoperability 103, 125 interviews 24, 32, 37, 145, 185, 211 key competences 28 language testing 56, 61–72 learning object 86–94 legal discourse 231, 232, 234, 236 legal translator training 228–39 literacy: code literacy 11–8; multimedia literacy 9, 11; multimodal literacy 11–12, 15, 16, 18; network literacy 11–18; print literacy 10; remix

literacy 11, 18; social literacy 12; visual literacy 9 LMS (learning management system) 165, 168, 174 MALL (mobile-assisted language learning) 129–39, 140–50, 150–61, 266–76 misunderstandings 29, 33 MOOCs (Massive Open Online Courses) 153; language MOOCs 152–5, 165–76, 177–88, 189–200; MALMOOCs (Mobile-Assisted MOOCs) 155–9 morpho-syntactic annotation 112, 118 motivation 24, 41, 48, 50, 56, 87–9, 98, 179, 180, 195 multiliteracies 10 multimedia 37, 63, 132, 157; content 100; elicitation task 75; facilities 131; files 244, 245, 252; information 12, 249; presentation 65, 68 online activities 47–57; communication 23, 181; courses 48–51, 152–9, 173, 178; education 177; exchange 25, 26; networks 16; resources 56, 167, 205; tasks 48, 51, 52, 56 ontologies xix, xx, 102–26 OntoTag 102–26 pedagogy 36, 39, 231, 245; participative pedagogy 17 peer review 89, 170, 257 phraseology 203, 208, 211, 232, 234, 236, 256, 259; -centered approach 204, 205, 213; and register variation 204 plagiarism 171–2 podcasts 16, 147 POS (Part-of-Speech) tagger 103–25 pragmalinguistic 73–81 pragmatic competence 73, 74, 82, 192 procedural knowledge 29 professional English 157, 192, 194, 195, 196

proficiency 64, 76, 168, 177, 180, 268; ICT proficiency 37, 38, 40, 42; language proficiency 62, 77, 148; learner proficiency 38, 76, 178, 216; level 77, 180, 182, 223, 224, 232, 268; low proficiency 142, 148; oral proficiency 222; written proficiency 102 profile 25, 139, 183, 204 project work 23 PSIT (Public Service Interpreting and Translation) 228–39 qualitative analysis 32; study 53, 196, 266, 272 quality assessment 86, 90 quantitative analysis 32; study 40, 54 questionnaires 24, 25, 26, 31, 32, 186, 194 reflection 26, 28, 89, 96, 135, 186, 194; critical reflection 33 registers 204, 209, 211–13 repository 90, 221, 244 rubric 170; scoring rubric 86–95 RVRs (retrospective verbal reports) 74–82 scaffolding 156, 157, 179, 184 self-directed learning 136, 157 self-evaluation 32, 89, 271 self-study 49, 141, 147, 247, 253 semantic annotation 104, 117, 119 semantic web 103 situated learning 16, 142 social learning 189–200 sociopragmatic conditions 74, 80; knowledge 73, 80 speech 52, 208; act 74; nonspontaneous speech 217; semi-spontaneous speech 217; spontaneous speech 217, 218, 220, 225 standardisation 103–15 student: perception 25, 31, 32, 195, 196; performance 37, 38, 39, 40 surveys 41, 143, 144, 145, 146, 185, 186

tagging 11, 102, 104, 218, 222, 223, 225 task-based approach 49, 50, 147, 269; learning 49, 133, 159 tasks 23, 25, 48, 74, 96, 103, 130, 144, 156, 182, 274; learning tasks 47, 49, 51, 253; online assessment tasks 53, 55 taxonomy 74, 78, 112 telecollaboration 16, 23, 24, 180 termbase 245, 248–53 textual genre 256, 257, 259, 264 tokens 103–26, 213, 257 tourism 51, 52, 54, 272 training 16, 36, 190, 191, 218, 222, 233; learner training 185; pronunciation training 222, 224; teacher training 38, 149; translator training 228–39, 243, 252 transcription: and annotation 216–27; manual transcription 217, 218, 220, 225; narrow phonetic transcription 219–20, 223, 225; orthographic

transcription 216, 217, 219, 220, 222; phonological transcription 216–24 translation: specialized translation 236, 252, 260 translation process 243, 248 usability 87, 89, 90, 99, 221, 269 videos 16, 145, 154, 157, 167, 183, 249, 266–76 vocabulary 52, 76, 80, 131, 268, 272; knowledge 132, 196; learning 129, 131–4, 196; ontological vocabulary 103; specialised vocabulary 189–200 WBT (web-based testing) 66–7, 68 Wikipedia 16, 151 word lists 36, 205, 255, 264 WordSmith Tools 204, 255, 258 xMOOC (eXtended Massive Open Online Course) 153, 167, 179, 180