Springer Texts in Education
Noriko Nagai · Gregory C. Birch · Jack V. Bower · Maria Gabriela Schmidt
CEFR-informed Learning, Teaching and Assessment: A Practical Guide
Springer Texts in Education
Springer Texts in Education delivers high-quality instructional content for graduates and advanced graduates in all areas of Education and Educational Research. The textbook series is comprised of self-contained books with a broad and comprehensive coverage that are suitable for class as well as for individual self-study. All texts are authored by established experts in their fields and offer a solid methodological background, accompanied by pedagogical materials to serve students such as practical examples, exercises, case studies etc. Textbooks published in the Springer Texts in Education series are addressed to graduate and advanced graduate students, but also to researchers as important resources for their education, knowledge and teaching. Please contact Natalie Rieborn at textbooks. [email protected] for queries or to submit your book proposal.
More information about this series at http://www.springer.com/series/13812
Noriko Nagai Ibaraki University Mito, Japan
Gregory C. Birch Seisen Jogakuin College Nagano, Japan
Jack V. Bower Tezukayama University Nara, Japan
Maria Gabriela Schmidt Nihon University Tokyo, Japan
ISSN 2366-7672 ISSN 2366-7680 (electronic)
Springer Texts in Education
ISBN 978-981-15-5893-1 ISBN 978-981-15-5894-8 (eBook)
https://doi.org/10.1007/978-981-15-5894-8

© Springer Nature Singapore Pte Ltd. 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Preface
This book is a practical guide to the CEFR (Council of Europe 2001) and the newly developed CEFR Companion Volume (CEFR/CV, Council of Europe 2018), which have increasingly been used to inform language policies and teaching practices of countries within and outside of Europe. The development of the CEFR Companion Volume reflects the evolution of a paradigm shift in language teaching which began in the 1970s. The shift was marked by the publication of the Threshold Level in 1975 and accelerated with the introduction of the CEFR: objectives came to be defined in terms of performance standards rather than content specifications (e.g., notions and functions), and teacher-centered language teaching gave way to learner-centered language learning promoted through self-reflection, collaboration, and mediation among learners and between learners and teachers. This shift is occurring most visibly in Europe but can also be seen in countries beyond Europe.

Countries in Asia are now keen to adopt and implement the CEFR. The Ministry of Education in Japan, for instance, proposed in 2011 that concrete objectives of English curricula at the secondary education level ought to be stated using the 'Can Do' schemata of the CEFR to improve learners' English proficiency. The CEFR-J, a contextualized version of the CEFR, was then published in 2012. In 2014, the Ministry announced a new English examination system for college enrollment to be launched in 2020, using a number of English proficiency tests developed by private testing organizations and firms, the scores of which will be linked to the six levels of the CEFR. In the People's Republic of China, the China Standards of English (CSE) was developed in reference to the CEFR by the National Education Examinations Authority under the auspices of the Ministry of Education. In Taiwan and Vietnam, the situation is similar: each government is pushing the adoption of the CEFR to set standards of English proficiency and hopes to rejuvenate its English education system accordingly, which places new challenges and demands on teachers.

These top-down attempts to implement the CEFR, however, have caused great confusion among practitioners and led to misconceptualizations of the CEFR. The philosophy and core ideas of the CEFR and the principles for utilizing it are neither widely understood nor shared by these practitioners. Furthermore, the wide variety of useful CEFR-related documents already available through the Council of Europe (COE), the European Centre of Modern Languages (ECML) and other
organizations are not presented systematically according to themes or topics in one source, and are difficult to identify and evaluate. This book attempts to help practitioners (i) grasp essential and core concepts of the CEFR, (ii) identify parts of the CEFR and the CEFR/CV as well as other CEFR-related resources and documents which are relevant for readers' different purposes, and (iii) utilize the resources for their own needs.

The book consists of six chapters. Chapter 1 explains the philosophy and core ideas behind the CEFR and discusses its impact on language education, while acknowledging some critical views on the CEFR. The chapter also lays out major CEFR-related resources and categorizes them based on five themes, each of which will be explained in detail in the subsequent five chapters. Chapter 2 focuses on curriculum and course design; it raises key issues for considering and making decisions about learners' needs, and elucidates CEFR descriptors, which are utilized as a medium to articulate those needs. Chapter 3 focuses on how to design, implement, and evaluate CEFR-informed assessments. The chapter also clarifies important concepts in assessment such as summative and formative assessment, CEFR-informed assessment rubrics, and rater training and learner self-assessment training, which are vital for understanding and performing CEFR-informed assessment. Chapter 4 focuses on learner autonomy and the European Language Portfolio. The CEFR regards learners as social agents responsible for their language learning development, and this chapter demonstrates how the European Language Portfolio promotes learners' reflective attitudes toward their learning. Chapter 5 aims to integrate the topics discussed in the previous three chapters (course design, assessment, and learner autonomy) with teaching, giving CEFR-informed tasks a central role in doing so. Finally, Chap. 6 focuses on teacher autonomy, which has evolved independently of the CEFR, but which is widely discussed in line with learner autonomy and regarded as a prerequisite not only for the development of learner autonomy but also for CEFR-informed language teaching innovation.

Chapters 2 through 6 are laid out in a similar format, so that readers can easily follow each chapter and develop an understanding of how to use and contextualize the most relevant CEFR and CEFR/CV documents for their own purposes. First, each chapter elucidates the parts of the CEFR, the CEFR/CV, and CEFR-related information most relevant for its theme. Second, step-by-step processes to utilize them are demonstrated in illustrative diagrams and explanations. Finally, exercises concerning the usage of the CEFR and CEFR-related information for different purposes, samples of case studies, and further reading are provided. Each chapter is written as an independent unit on its specific topic, while at the same time being interrelated with the other chapters. Therefore, the book may be read cover to cover for a thorough introduction to implementing the CEFR and its related resources, or readers may simply refer to individual chapters as the need arises.

Chapter 2, for instance, explains the most relevant parts of the CEFR and the CEFR/CV for curriculum and course design: the global scale and self-assessment grid for the former, and illustrative scaled descriptors for communicative activities and language competence for the latter. It then demonstrates what constitutes CEFR descriptors by decomposing them into different descriptive roles: for instance, types
of tasks and activities, manner, as well as conditions and constraints on task performance. After a detailed analysis of CEFR descriptors, the chapter shows how to modify them to articulate overall objectives for curriculum design and concrete learning outcomes for course design. The process is demonstrated step by step, providing numerous examples, including new descriptors from the CEFR/CV. Finally, exercises concerning how to modify CEFR descriptors, case studies from Europe and beyond, and further reading are provided.

The purpose of this book is to provide a hands-on guide, a tool-kit to navigate through the complexity of the CEFR and the huge amount of available resources. It will help novice users of the CEFR grasp and implement key aspects of the framework, and advanced users reflect on their practice. The information and resources presented may at times overlap and be repeated to illustrate application in different contexts; at other times they are simplified and mentioned only sparingly so as not to overload readers. It is written by practitioners for practitioners. We hope it will help many educators to use the CEFR and its tools effectively and promote critical reflection in language learning and teaching.

Finally, this whole project would not have been realized without the very generous support of the JSPS Grant-in-Aid research funds, projects no. 16K02835 and 16K02834. While writing this book, we presented parts of it at several conferences, including the 2018 CercleS International Conference XVI in Poznan, Poland, the JALT 44th Annual International Conference on Language Teaching and Learning in Shizuoka, Japan, and the JALT PAN-SIG 2018 Conference in Tokyo, Japan. We are very grateful to the participants of these conferences and to the members of the JALT CEFR & LP SIG who provided comments on our work and encouragement. Our gratitude also goes to Morten Hunke and Naoyuki Naganuma, who are part of the research team and helped develop ideas for the book. We are most grateful to Fergus O'Dwyer and Judith Runnels, who gave valuable and insightful comments. Last but not least, we are thankful to Theron Muller, who painstakingly copy-edited this volume.

Mito, Japan          Noriko Nagai
Nagano, Japan        Gregory C. Birch
Nara, Japan          Jack V. Bower
Tokyo, Japan         Maria Gabriela Schmidt
Contents

1 The CEFR and Practical Resources
  1.1 Overview of the CEFR
    1.1.1 Aims of the COE, CEFR and ELP
    1.1.2 Key Aspects of the CEFR
  1.2 The Descriptive Scheme and the Common Reference Levels
    1.2.1 Four Modes of Communication: Reception, Production, Interaction, and Mediation
    1.2.2 Competence: General, Communicative Language, Plurilingual, and Pluricultural
    1.2.3 The Common Reference Levels
  1.3 History and Criticism of the CEFR
    1.3.1 CEFR History and Development
    1.3.2 ELP History and Development
    1.3.3 Criticisms of the CEFR
  1.4 The CEFR in and Beyond Europe
    1.4.1 The CEFR in Europe
    1.4.2 The CEFR Beyond Europe
  1.5 Practical Resources for the CEFR
    1.5.1 Resources by Institution
    1.5.2 Resources Related to Chapters
  References

2 Curriculum and Course Design
  2.1 The Role of the CEFR in Curriculum and Course Design
  2.2 Types of Descriptor Scales
    2.2.1 Common Reference Levels for Curriculum Design
    2.2.2 Illustrative Descriptor Scales for Course Design
  2.3 Application of the CEFR: How to Utilize CEFR Descriptors for Curriculum and Course Design
    2.3.1 Curriculum Design: Using Common Reference Levels
    2.3.2 Course Design: Using Illustrative Descriptor Scales
  2.4 Exercises
    2.4.1 Exercise 1: Aligning a Course to CEFR Descriptors
    2.4.2 Exercise 2: Creating Illustrative Descriptors Between Criterial Levels
  2.5 Case Studies and Further Reading
    2.5.1 Case Study 1: Using Illustrative Descriptors for an Academic Writing Course at the A2+ Level
    2.5.2 Case Study 2: Using Illustrative Descriptors for a General English Course
    2.5.3 Further Reading
  Appendix 1
  References

3 Assessment
  3.1 The Role of Assessment in the CEFR
  3.2 Some Important Concepts in Assessment
    3.2.1 Testing, Assessment, and Evaluation
    3.2.2 Assessing Achievement and Proficiency Through Criterion-Referenced and Norm-Referenced Tests
    3.2.3 Reliability and Dependability
    3.2.4 Summative and Formative Assessment
    3.2.5 Validity Theories
    3.2.6 The Assessment Development Cycle
    3.2.7 Assessment Specifications
    3.2.8 Selected Response Items
    3.2.9 CEFR-Informed Assessment Rubrics
    3.2.10 Rater Training and Learner Self-assessment Training
  3.3 Types of CEFR-Informed Assessments
    3.3.1 The CEFR for Self-assessment
    3.3.2 Teacher Assessment of CEFR Levels
    3.3.3 CEFR-Informed Portfolio Assessments
    3.3.4 Available CEFR Level Placement Tests
    3.3.5 CEFR-Informed Speaking Assessments
    3.3.6 CEFR-Informed Writing Assessments
    3.3.7 CEFR-Informed Reading and Listening Assessments
    3.3.8 CEFR-Informed Vocabulary and Grammar Assessments
  3.4 Exercises
    3.4.1 Exercise 1: Plan a Course Assessment Breakdown
    3.4.2 Exercise 2: Brainstorm Assessment Context and Use Specifications
    3.4.3 Exercise 3: Brainstorm Task Specifications
    3.4.4 Exercise 4: Make Productive Language Assessment Rubrics
  3.5 Case Study and Further Reading
    3.5.1 Case Study: CEFR-Based Assessments in an English Language Course
    3.5.2 Further Reading
  References

4 Learner Autonomy and the European Language Portfolio
  4.1 The Role of the CEFR and ELP in Autonomous Learning
    4.1.1 Learner Autonomy and the CEFR
    4.1.2 Learner Autonomy
  4.2 European Language Portfolio
    4.2.1 ELP: Overview
    4.2.2 ELP: Functions, Types, Components
    4.2.3 ELP Implementation Guidelines
    4.2.4 e-Portfolios
  4.3 Application of the CEFR: Creating a Language Portfolio for Your Own Course
    4.3.1 Validated ELPs to Serve as a Model
    4.3.2 Ensuring the Quality of the New Model
    4.3.3 Steps to Be Taken When Compiling a New ELP Model
    4.3.4 Relevant Components and Available Templates
  4.4 Exercises
    4.4.1 Exercise 1: Language Passport
    4.4.2 Exercise 2: Language Biography
    4.4.3 Exercise 3: Dossier
  4.5 Case Studies and Further Reading
    4.5.1 Case Study 1: EPOS
    4.5.2 Case Study 2: Digital Materials and Self-directed Learning
    4.5.3 Case Study 3: Developing Intercultural Competence
    4.5.4 Further Reading
  Appendix 1 Application for Validation and Accreditation of an ELP Model
  References

5 Integrating Learning, Teaching, and Assessment
  5.1 The Role of the CEFR in Integrating Learning, Teaching, and Assessment
    5.1.1 Action-Oriented Approach
    5.1.2 Tasks as Integrative Tools
  5.2 CEFR as an Integrative Tool
    5.2.1 Aligning a Curriculum to the CEFR
    5.2.2 Linking 'Can Do' Descriptors to Language
    5.2.3 Task-Based Language Teaching
  5.3 Application of the CEFR: Integrating Learning, Teaching, and Assessment
    5.3.1 Learning Outcome Statements
    5.3.2 Checklist
  5.4 Exercises
    5.4.1 Exercise 1: Defining Curriculum and Class Goals
    5.4.2 Exercise 2: Curriculum Design Process
    5.4.3 Exercise 3: Involving Students and Teachers
  5.5 Case Studies and Further Reading
    5.5.1 Case Study 1: Connecting TBLT, CEFR, and Learning-Oriented Assessment Through Cyclical Learning
    5.5.2 Case Study 2: Evaluating Cultural Exchanges Using Communicative Language and Intercultural Competence
    5.5.3 Further Reading
  References

6 Teacher Autonomy
  6.1 The CEFR and Teacher Autonomy
    6.1.1 Teacher Autonomy Redefined
    6.1.2 How to Develop Teacher Autonomy
    6.1.3 The Three-Stage Model of Reflective Practice
  6.2 Tools for Reflective Practice
    6.2.1 EPOSTL
    6.2.2 EPG
  6.3 Application of Tools for Self-reflection
  6.4 Exercises
    6.4.1 Exercise 1: Self-reflection
    6.4.2 Exercise 2: Critical Inquiry
    6.4.3 Exercise 3: Action
  6.5 Further Reading
  References

Author Index
Subject Index
Abbreviations

ACTFL: American Council for the Teaching of Foreign Languages
AfL: Assessment for Learning
AIE: The Autobiography of Intercultural Encounters
AIOLE: An Interactive Online Learning Environment
ALTE: Association of Language Testers in Europe
AOA: Action-Oriented Approach
AUA: Assessment Use Argument
CEFR: Common European Framework of Reference for Languages
CEFR/CV: Common European Framework of Reference for Languages, Companion Volume
CEFR-J: Adapted Japanese version of the CEFR
CercleS: European Confederation of Language Centres in Higher Education
CLIL: Content and Language Integrated Learning
CLT: Communicative Language Teaching
COE: Council of Europe
DIALANG: Diagnostic online tool for assessing languages
EALTA: European Association for Language Testing and Assessment
EAQUALS: Evaluation and Accreditation of Quality Language Services
ECML: European Centre of Modern Languages
eELP: Electronic European Language Portfolio
ELC/CEL: European Language Council
EMI: English as Medium of Instruction
EPG: European Profiling Grid
ePOS, EPOS: Electronic European Portfolio of Languages
EPOSTL: European Portfolio for Student Teachers of Languages
GE: General English
IELTS: International English Language Testing System
IUA: Interpretation and Use Argument
JSPS: Japan Society for the Promotion of Science
KET: Cambridge English, Key English Test
LA: Learner Autonomy
LLHE: Language Learning in Higher Education (journal)
LOA: Learning-Oriented Assessment
MLA: Modern Language Association
OOPT: Oxford Online Placement Test
PET: Cambridge English, Preliminary English Test
PPP: Presentation, Practice, and Production
RLD: Reference Level Descriptions
SAC: Self-Access Center
SALC: Self-Access Learner Center
SDL: Self-Directed Learning
SLA: Second Language Acquisition
TALE: Teacher Assessment Literacy Education
TBL: Task-Based Learning
TBLT: Task-Based Language Teaching
TLU: Target Language Use
TOEFL iBT: Test of English as a Foreign Language, Internet-based test
TOEFL: Test of English as a Foreign Language
TOEIC: Test of English for International Communication
1 The CEFR and Practical Resources
The purpose of this chapter is to introduce the Council of Europe (COE)’s Common European Framework of Reference for Languages (CEFR) (2001) and the European Language Portfolio (ELP) (COE 2019c) through key resources, literature, and implementation guidelines related to these documents. In Sects. 1.1 and 1.2 readers are provided with an overview of the CEFR and the ELP, including key aspects that serve as their foundation. These include CEFR’s descriptive scheme of language use and competence, which informs implementation and practice. This overview is followed by an account of the historical events that explain and justify its emergence, reasons for its widespread adoption, and refinement as evidenced by the publication of the Companion Volume (CEFR/CV) (COE 2018). Criticisms of the CEFR are addressed next. The current state of CEFR adoption in Europe will be explored later in the chapter as well as its use outside of Europe and the flexibility stakeholders exercised when adapting the CEFR to local conditions and needs. The chapter will conclude with a summary of available practical resources which have been organized thematically. In subsequent chapters, more detailed information will be provided concerning curriculum and course design (Chap. 2), assessment (Chap. 3), ELP and learner autonomy (Chap. 4), integration of learning, teaching and assessment through Task-Based Language Teaching (Chap. 5), and teacher autonomy (Chap. 6) as well as exercises to guide readers in applying this information for their individual purposes and contexts. Various aspects of the CEFR will be discussed throughout the book, building on the foundation of background knowledge laid out in this chapter.
1.1 Overview of the CEFR
The Common European Framework of Reference for Languages: Learning, teaching, assessment (CEFR) was published by the COE's Language Policy Division in English and French in 2001 (COE 2001). Since then, it has been
translated into 40 languages (COE 2019a), informing language standards, curricula and education reform both inside and outside of Europe. Since 2001, the CEFR and its use have been thoroughly researched (e.g., Martyniuk and Noijons 2007; Language Learning in Higher Education 2011 Special Issue; Byram and Parmenter 2012b; Kühn and Perez Cavana 2012), leading to the publication of the CEFR Companion Volume (CEFR/CV) (COE 2018) that complements and expands upon the original volume.

The main functions of the CEFR remain the same as in the 2001 publication: "(a) to provide a metalanguage for discussing the complexity of language proficiency and for reflecting on and communicating decisions on learning objectives and outcomes that are coherent and transparent, and (b) to provide inspiration for curriculum development and teacher education" (COE 2018: 22). This metalanguage is provided in the 2001 CEFR guide and 2018 CEFR/CV. It primarily takes the form of descriptions of language use, users, and competences that are at the core of the CEFR. Learning, teaching, and assessment can be discussed and reflected upon using the Common Reference Levels (COE 2001, Chap. 3; COE 2018, Appendix 2), six broad bands of proficiency covering the four modes of communication (receptive, productive, interactive, and mediation skills). These skills are articulated in CEFR Illustrative Descriptor Scales containing detailed descriptions of language use and strategies according to real-world tasks (COE 2001, Chap. 4; COE 2018: 54–129), along with the competences necessary to realize these goals (COE 2001, Chap. 5; COE 2018: 130–144). The justification for defining learning objectives in terms of performance standards is explained in the CEFR (COE 2001, Chaps. 1 and 2; COE 2018: 25–44) and includes "the promotion of the positive formulation of educational aims and outcomes at all levels", which in turn "inform curriculum reform and pedagogy" and "provide transparency and clear reference points for assessment purposes" (COE 2018: 25). Implementation is then discussed in broad terms in relation to language learning and teaching methodologies, pedagogic tasks, linguistic diversity, and assessment (COE 2001, Chaps. 6–9, respectively). The original volume of the CEFR (COE 2001) has been updated in the CEFR/CV (COE 2018), with new descriptors for language activities (e.g., written online production) and competences (e.g., phonology) (see COE 2018: 50–51 for a summary).

The CEFR is also complemented by the European Language Portfolio (ELP) (COE 2019c), both of which were conceived at an intergovernmental symposium hosted by the Federal Swiss authorities at Rüschlikon in 1991 and introduced together in 2001. The ELP is a concrete tool encouraging language users to monitor and document their progress in relation to the Common Reference Levels and illustrative scales, enabling them to take responsibility for their language learning.
1.1.1 Aims of the COE, CEFR and ELP

1.1.1.1 Council of Europe
The Council of Europe was founded after World War II to increase international understanding, protect human rights, and avoid future war. Through its Language Policy Unit and Education Department, the COE is now involved with all levels of language education, from primary school to adult learners, and the promotion of plurilingual and intercultural education, with projects organized primarily through the council's European Centre of Modern Languages (COE 2019b). The coherence and continuity of these projects is due to adherence to three basic principles governing the Council for Cultural Co-operation of the COE:

• that the rich heritage of diverse languages and cultures in Europe is a valuable common resource to be protected and developed, and that a major educational effort is needed to convert that diversity from a barrier to communication into a source of mutual enrichment and understanding;
• that it is only through a better knowledge of European modern languages that it will be possible to facilitate communication and interaction among Europeans of different mother tongues in order to promote European mobility, mutual understanding and cooperation, and overcome prejudice and discrimination;
• that member states, when adopting or developing national policies in the field of modern language learning and teaching, may achieve greater convergence at the European level by means of appropriate arrangements for ongoing cooperation and coordination of policies (COE 2001: 2).
1.1.1.2 The Common European Framework of Reference for Languages
The original aims of the CEFR (COE 2001) were "to act as a stimulus for reflection on current practice and on the other hand to provide a common reference point for the elaboration of language syllabuses, curriculum guidelines, examinations, and textbooks across Europe" (North 2014: 9). The common reference levels and accompanying illustrative scales were designed to communicate goals for language learning at different levels in a transparent and coherent manner, with the rationale and implementation guidelines drawing upon the applied linguistics literature. These aims are realized through:

• A taxonomic descriptive scheme, covering domains of language use, communicative language activities and strategies plus the competences that the learner as a language user needs for such activities (CEFR Chaps. 4 and 5).
• A set of Common Reference Levels (A1, A2, B1, B2, C1, C2), defining proficiency in a number of scales of illustrative descriptors (CEFR Chap. 3, plus scales in Chaps. 4 and 5) (North 2014: 9, italics in original).
1.1.1.3 European Language Portfolio
The ELP (COE 2019c) is composed of three parts, which usually appear in the following order:

Language Passport: an overview of the learner's current level in relation to the Common Reference Levels (i.e., CEFR Table 1: Global scale and CEFR Table 2: Self-assessment grid).
Language Biography: facilitates the learner's involvement in planning, reflecting upon and assessing their learning process and progress.
Dossier: a collection of materials to document and illustrate the learner's achievements and experiences.

ELP functions
The ELP has two functions (Little and Perclová 2001: 3). The reporting function, primarily fulfilled in the Language Passport and Dossier sections, displays what the learner is capable of in a foreign language. The second function is pedagogic in nature and is fulfilled primarily in the Biography section, where the learner develops his/her ability to reflect upon and assess language learning, with the ultimate goal of becoming an autonomous learner.
1.1.2 Key Aspects of the CEFR
Realizing the potential of the CEFR involves more than the use of the 'Can Do' descriptors. Underlying the CEFR is a set of principles that not only provides the foundation upon which the framework itself is built, but also guides practice to effectively develop proficiency in a foreign language. The key aspects of the CEFR are discussed in this section, but at its core are the Action-oriented Approach and a view of Learners as Social Agents.
1.1.2.1 Action-Oriented Approach and Learners as Social Agents
The action-oriented approach "views users and learners of a language primarily as 'social agents', i.e., members of society who have tasks (not exclusively language related) to accomplish in a given set of circumstances, in a specific environment and within a particular field of action" (COE 2001: 9). In other words, the emphasis is on what the learners can do with the language (action oriented) as opposed to what the learners should know about the language (knowledge oriented). The action-oriented approach of the CEFR envisions curricula and courses based on real-world communicative needs rather than a deficiency perspective that focuses on what learners have not yet acquired. These are organized around real-life tasks and accompanied by 'Can Do' descriptors that communicate the aims to the
learners, rather than syllabuses based on a linear progression through language structures, or a predetermined set of notions and functions (COE 2018: 26).[1]

The CEFR also "presents the language user/learner as a 'social agent,' acting in the social world and exerting agency in the learning process" (COE 2018: 26). Learners not only use language for social purposes, but they are encouraged and expected to take responsibility for their learning through such measures as goal setting and reflecting on language learning progress and process. Furthermore, learners are seen as "plurilingual, pluricultural[2] beings (which) means allowing them to use all their linguistic resources when necessary, encouraging them to see similarities and regularities as well as differences between languages and cultures" (COE 2018: 26). For an updated perspective on the action-oriented approach, see Piccardo and North (2019).

[1] In the CEFR (COE 2001), the term 'action-oriented approach' is used exclusively for the descriptive scheme. In the Companion Volume (COE 2018), 'action oriented' appears to be used for teaching approaches, but without acknowledging that this is a new development.
[2] It was acknowledged in a Council of Europe project, Languages for Education, Languages in Education, that 'pluricultural' is a problematic concept in educational contexts and 'intercultural' is now used instead (see Sect. 1.1.2.4 of this volume for more information).
1.1.2.2 Backward Design and Needs Analysis

Backward Design
Curriculum development based on the CEFR and an action-oriented approach starts with the specification of learning outcomes in terms of language use and then proceeds to identify the content, methodology, activity types, and assessment tools most appropriate for realizing these goals. This is known as Backward Design (see Richards 2013), since the starting point has traditionally been content specification (e.g., grammar and vocabulary), moving then to methodology and assessment, or Forward Design. Curriculum development within the Task-Based Language Teaching literature, on the other hand, tends to prioritize the process of teaching and learning, or Central Design (see Richards 2013).

In short, the essence of backward design is that, like the CEFR, it sees the learner as a language user and not as a language student. It is interested in what the learner will be able to do after the course, and what they will need to learn or become accustomed to in order to reach their goals. Hence, it tends to focus on real-world outcomes. These may be encapsulated in a series of needs-based tasks … but it is convenient to express them in CEFR-style can-do descriptors. (North et al. 2018: 22)
Assessment tasks linked to 'Can Do' descriptors also have the potential to reinforce use of the action-oriented approach through a positive washback effect on classroom practice, as teachers are more likely to employ tasks in their lessons if their students will be assessed using similar tasks. For more information, see Chap. 3.

Needs Analysis
When learning outcomes are articulated using 'Can Do' descriptors, determining the most appropriate objectives involves a needs analysis, which refers to "the process of gathering information before or during a course to determine objectives that can then be analyzed in order to create an inventory of aims and suitable activities for that course" (North et al. 2018: 47). At the macro-level, this involves the development of a language curriculum for an institution or department, and at the micro-level, a teacher finalizes a course within the curriculum. The main advantage of using CEFR descriptor scales at the macro-level is that stakeholders, including (a) local business people known to the school, (b) parents/guardians, (c) experienced teachers, and (d) advanced learners, can help identify the important target situations, activities, and possible levels of each activity (North et al. 2018: 53). At the micro-level, "learners enrolled in a course should feel as though they are treated as individuals, and that the teacher cares about their own specific needs… Nowadays, … can-do checklists related to curriculum aims can be used to make a negotiated group syllabus, supplemented by self-study for individual aims, a reality for general language learners" (North et al. 2018: 17). Chapter 2 guides readers through the process of choosing and modifying the most relevant CEFR descriptors, and Chap. 5 discusses the role of tasks for developing communicative ability in relation to these descriptors, along with resources for identifying the language (e.g., grammar) needed to fulfill these tasks. (See North et al. 2018: Chap. 4 for concrete examples/procedures for needs analysis.)
1.1.2.3 Comprehensive, Transparent, Coherent, and Neutral
As described earlier, one of the main functions of the CEFR is "to provide a metalanguage for discussing the complexity of language proficiency and for reflecting on and communicating decisions on learning objectives and outcomes" (COE 2018: 22). To achieve this, it is necessary for the CEFR to be comprehensive, transparent, and coherent. The CEFR is comprehensive as it attempts to "specify as full a range of language knowledge, skills and use as possible" (COE 2001: 7). This is accomplished through a taxonomic descriptive scheme covering domains of language use along with communicative language activities, strategies and competences. The Common Reference Levels (A1, A2, B1, B2, C1, C2) define proficiency levels in the various illustrative scales (the descriptive scheme is explained in detail in Sect. 1.2). For the CEFR to serve as a metalanguage, the information must also be transparent, or "clearly formulated and explicit, available and readily comprehensible to users", and coherent, or "free from internal contradictions" due to the "harmonious relationships" between the different components of the CEFR (COE 2001: 7). In the words of North et al. (2018: 15) (emphasis added):

A reference framework like the CEFR can be a source for the formulation of standards. It provides descriptors in differing degrees of detail and with different emphases, and so can help to provide a transparent, coherent alignment between the overall curriculum aims, the detailed objectives teachers use to implement the curriculum, and the assessment of achievement in relation to them.
It is also important to point out that the CEFR is neutral in that it does not prescribe any particular pedagogical approach (COE 2018: 27). However, there is an unmistakable tension between the CEFR’s refusal to advocate a particular
pedagogical approach on the one hand and its promotion of tasks (COE 2001: Chap. 7) and the task-based tendency of its 'Can Do' descriptors on the other. Decisions concerning pedagogy must incorporate the underlying principle that "language learning should be directed towards enabling learners to act in real-life situations, expressing themselves and accomplishing tasks of different natures" (COE 2018: 27), and this is given priority in curriculum development (e.g., Backward Design), enacted in the classroom through the use of purposeful, collaborative tasks (CEFR 2001: Chap. 7) and reinforced with assessment tasks linked to 'Can Do' descriptors (CEFR 2001: Chap. 9) (e.g., Assessment for Learning).

1.1.2.4 Plurilingualism and Pluriculturalism
Within the CEFR, a distinction is made between Multilingualism,[3] which can refer to a knowledge of a number of languages or the coexistence of different languages in a society, and Plurilingualism, or "the dynamic and developing linguistic repertoire of an individual user/learner" (COE 2018: 28). Within the CEFR, it is stressed that the language user "does not keep these languages and cultures in strictly separated mental compartments, but rather builds up a communicative competence to which all knowledge and experience of language contributes and in which languages interrelate and interact" (COE 2001: 4). According to Beacco et al. (2016: 20), plurilingual competence is "the ability to use a plural repertoire of linguistic and cultural resources to meet communication needs or interact with people from other backgrounds and contexts, and enrich that repertoire while doing so". They note that the plurilingual approach to learning "would be incomplete without its pluricultural and intercultural dimensions" (20).

"Neither pluriculturalism nor the notion of intercultural competence … are greatly developed in the CEFR book" (COE 2018: 29). Within the CEFR/CV (COE 2018), it appears that the intercultural dimension is subsidiary to the pluricultural dimension. For example, the relevant category is listed as Plurilingual and Pluricultural Competence. The related COE website (2019d), guide (Beacco et al. 2016) and European Centre for Modern Languages (ECML) homepage (COE 2019d), on the other hand, promote Plurilingual and Intercultural Education and Competence. Distinguishing between pluriculturality and interculturality will help us understand the significance of this difference. According to Byram (2009: 6):

Pluriculturality refers to the capacity to identify with and participate in multiple cultures. Interculturality refers to the capacity to experience and analyze cultural otherness, and to use this experience to reflect on matters that are usually taken for granted within one's own culture and environment. Interculturality involves being open to, interested in, curious about and empathetic towards people from other cultures, and using this heightened awareness of otherness to engage and interact with others and, potentially, to act together for common purposes. Interculturality, finally, involves evaluating one's own everyday patterns of perception, thought, feeling, and behavior in order to develop greater self-knowledge and self-understanding. Interculturality thus enables people to act as mediators among people of different cultures, to explain and interpret different perspectives. Interculturality does not involve identifying with another cultural group or adopting the cultural practices of the other group.

[3] The inclusion of this definition of (individual) multilingualism is an indirect criticism of the traditional practice of teaching languages in isolation from one another.
In short, interculturality is much more comprehensive, and is not limited to identifying with, participating in, or adopting practices from a different culture. Within the CEFR/CV, the distinction between these two concepts is not made, and although Beacco et al. (2016) and the ECML website are referred to, it is unclear to what degree they inform the scales in the CEFR/CV. Readers interested in this area should look to the above resources for more detailed information.
1.1.2.5 A Profile of Needs and Proficiency
The descriptive scheme includes a range of illustrative scales across different language activities (e.g., productive speaking skills and receptive reading skills) and levels, which can be used to describe the learning objectives for a particular group of learners (a needs profile) as well as their current level of proficiency (a proficiency profile). Furthermore, within these profiles, it is possible to account for variation. For example, a student's proficiency profile will vary depending on the language activity (e.g., a learner will read at a higher level than he or she writes at), and how well he or she communicates (e.g., one's grammatical accuracy may exceed one's control of vocabulary). Likewise, a needs profile may acknowledge that learners' receptive abilities are more developed than their productive or interactive abilities. The various illustrative scales and levels (especially since the introduction of the CEFR/CV) allow for a complex and varied profile of needs and proficiency (see COE 2018: 36–40 for a detailed explanation).
1.2 The Descriptive Scheme and the Common Reference Levels
At the heart of the CEFR is the descriptive scheme (Fig. 1.1) (COE 2018: 30, taken from Piccardo et al. 2011: 55) and the Common Reference Levels (Sect. 1.2.3), describing communicative language activities and the competences and strategies required to perform these activities at different levels of proficiency. This section provides an overview[4] of this scheme (see also COE 2018: 29–34). A detailed introduction to the illustrative scales and reference levels is included in Chap. 2. This overview of language proficiency outlines how:

in any communicative situation, general competences (e.g., knowledge of the world, socio-cultural competence, intercultural competence, professional experience if any: CEFR Sect. 5.1) are always combined with communicative language competences (linguistic, sociolinguistic and pragmatic competences: Sect. 5.2), and strategies (some general, some communicative language strategies) in order to complete a task (CEFR Chapter 7). (COE 2018: 29)

[4] It is also possible to include plurilingual and pluricultural competences, as shown in Sect. 2.2.2.
[Figure not reproduced. Fig. 1.1 Structure of the CEFR descriptive scheme © Council of Europe: overall language proficiency comprises general competences (savoir/declarative knowledge, savoir-faire/skills and know-how, savoir-être/'existential' competence, savoir-apprendre/ability to learn), communicative language competences (linguistic, sociolinguistic, pragmatic), communicative language activities (reception, production, interaction, mediation), and communicative language strategies (reception, production, interaction, mediation).]
Of course, completing a task that involves communicating with others requires language use, which is defined in the CEFR (2001: 9) as follows: Language use, embracing language learning, comprises the actions performed by persons who as individuals and as social agents develop a range of competences, both general and in particular communicative language competences. They draw on the competences at their disposal in various contexts under various conditions and under various constraints to engage in language activities involving language processes to produce and/or receive texts in relation to themes in specific domains, activating those strategies which seem most appropriate for carrying out the tasks to be accomplished. The monitoring of these actions by the participants leads to the reinforcement or modification of their competences.
Before going further, it might be useful to review the terms used in the above definition as described in CEFR Sect. 4.1 (The context of language use). Readers are also guided to the relevant sections within the CEFR (COE 2001) and this book where detailed information about these topics can be found.

Domains: Language use is set in the context of a particular situation within the personal, public, occupational or educational domain (defined in Sect. 2.3.2.1).
Situations: the external situations which arise in each domain. See CEFR Table 5: External context of use: descriptive categories (COE 2001: 48–49), which includes examples for each domain in terms of such aspects as the location and times in which language use occurs, the institutions or persons involved, etc.
Conditions and Constraints: the external conditions (e.g., physical and social conditions) under which communication occurs and the various constraints (e.g., time pressure) that language users face.
Themes (CEFR Sect. 4.2): the focus of attention in communication acts (e.g., the subjects of discourse, conversation, reflection or composition).
Tasks (CEFR Sect. 4.3): the communication acts a language user undertakes in pursuance of his or her needs in a given situation. (Focused on in greater detail in Chap. 5 and CEFR Chap. 7.)
Language activities and strategies (CEFR Sect. 4.4): the illustrative scales for reception, production, interaction, and mediation,[5] and strategies covering preplanning, execution, monitoring, and repair action for language activities (described in Chap. 2).
Communicative language processes (CEFR Sect. 4.5): the planning, execution, and monitoring of language use.
Texts (CEFR Sect. 4.6): any piece of language, spoken or written, which language users receive, produce, or exchange.

[5] Illustrative Scales for Mediation were added to the CEFR/CV (COE 2018) and did not appear in the original volume (COE 2001).

1.2.1 Four Modes of Communication: Reception, Production, Interaction, and Mediation
Central to the CEFR definition of language use is the idea that real-life language use "is grounded in interaction in which meaning is co-constructed" (COE 2018: 30). Therefore, language activities are presented under four modes of communication: reception, production, interaction and mediation, as the traditional model of four skills (listening, speaking, reading, writing) "does not lend itself to any consideration of purpose or macro-function" (COE 2018: 30) (see COE 2018: 31 for a detailed list of advantages this revised model offers). As there are spoken and written forms of reception, production, interaction and mediation, reference is made to differences in media for receptive listening (e.g., audio-visual: watching TV, film, or video), medium for interaction (e.g., online interaction) and focus for mediation (i.e., text, concepts, or communication).

For ease of reference and to aid readers new to the CEFR, definitions for the four modes are provided below, along with the corresponding illustrative scales and one example of a 'Can Do' descriptor from the B1 level. Detailed descriptions of these and their requisite strategies and competences are covered in Chap. 2.

Reception
According to the COE (2018: 54):
Reception involves receiving and processing input, activating what are thought to be appropriate schemata in order to build up a representation of the meaning being expressed and a hypothesis as to the communicative intention behind it. Incoming co-textual and contextual cues are checked to see if they ‘fit’ the activated schema—or suggest that an alternative hypothesis is necessary.
Reception activities include:

• Listening Comprehension (Overall Listening Comprehension, B1.1[6]): Can understand the main points of clear standard speech on familiar matters regularly encountered in work, school, leisure, etc., including short narratives. (55)
• Reading Comprehension (Overall Reading Comprehension, B1.1): Can read straightforward factual texts on subjects related to his/her field and interests with a satisfactory level of comprehension. (60)
• Audio-visual Comprehension (Watching TV, film, and video, B1.1): Can follow many films in which visuals and action carry much of the storyline, and which are delivered clearly in straightforward language. (66)

[6] Within the CEFR/CV (COE 2018), some levels are divided into criterion levels (e.g., B1.1 or B1) and slightly more advanced plus levels (e.g., B1.2 or B1+).

Production
According to the COE (2018: 68):

Production includes both speaking and writing activities. Spoken production is a 'long turn,' which may involve a short description or anecdote, or may imply a longer, more formal presentation. Productive activities, spoken and written, have an important function in many academic and professional fields (oral presentations, written studies and reports) and particular social value is attached to them.
Production activities include:

• Spoken Production (Overall Spoken Production, B1): Can reasonably fluently sustain a straightforward description of one of a variety of subjects within his/her field of interest, presenting it as a linear sequence of points. (69)
• Written Production (Overall Written Production, B1): Can write straightforward connected texts on a range of familiar subjects within his/her field of interest, by linking a series of shorter discrete elements into a linear sequence. (75)

Interaction
According to the COE (2018: 81):

Interaction, which involves two or more parties co-constructing discourse, is central in the CEFR scheme of language use …. Spoken interaction is considered to be the origin of language, with interpersonal, collaborative and transactional functions.
Interaction activities include:

• Spoken Interaction (Overall Spoken Interaction, B1.1): Can exploit a wide range of simple language to deal with most situations likely to arise whilst travelling. Can enter unprepared into conversation on familiar topics, express
Within the CEFR/CV (COE 2018), some levels are divided into criterion levels (e.g., B1.1 or B1) and slightly more advanced plus levels (e.g., B1.2 or B1+).
12
1 The CEFR and Practical Resources
personal opinions and exchange information on topics that are familiar, of personal interest or pertinent to everyday life (e.g. family, hobbies, work, travel and current events). (83) • Written Interaction (Overall Written Interaction, B1.1) Can write personal letters and notes asking for or conveying simple information of immediate relevance, getting across the point he/she feels to be important. (93) • Online Interaction (Online Conversation and Discussion, B1.1) Can post a comprehensible contribution in an online discussion on a familiar topic of interest, provided that he/she can prepare the text beforehand and use online tools to fill gaps in language and check accuracy. (97) Mediation According to the COE (2018: 103): In mediation, the user/learner acts as a social agent who creates bridges and helps to construct or convey meaning, sometimes within the same language, sometimes from one language to another (cross-linguistic mediation). The focus is on the role of language in processes like creating the space and conditions for communicating and/or learning, collaborating to construct new meaning, encouraging others to construct or understand new meaning, and passing on new information in an appropriate form. The context can be social, pedagogic, cultural, linguistic or professional.
Mediation includes mediating a text, mediating concepts, and mediating communication. There is only one overall scale for mediation. Therefore, 'Can Do' descriptors for the first illustrative scale within each category are provided.
• Mediation (Overall Mediation, B1) Can introduce people from different backgrounds, showing awareness that some questions may be perceived differently, and invite other people to contribute their expertise and experience, their views. Can convey information given in clear, well-structured informational texts on subjects that are familiar or of personal or current interest, although his/her lexical limitations cause difficulty with formulation at times. (105)
• Mediating a text (Relaying specific information in speech, B1) Can relay (in Language B) specific information given in straightforward informational texts (such as leaflets, brochure entries, notices and letters or emails) (written in Language A). (108)
• Mediating concepts (Collaborating in a group, B1) Can invite other people in a group to speak (Facilitating collaborative interaction with peers). Can ask a group member to give the reason(s) for their views (Collaborating to construct meaning). (119)
• Mediating communication (Facilitating pluricultural space, B1) Can help to develop a shared communication culture, by exchanging information in a simple way about values and attitudes to language and culture. (123)
The illustrative scales for all communicative language activities and competences are presented in Sects. 2.2.2.1 and 2.2.2.2 in table form, but readers might benefit from a diagrammatic overview of the activities. The one for receptive listening (Fig. 1.2) is taken from CEFR/CV (COE 2018: 54). They exist for all the modes of language activities, communicative language competences, and plurilingual and pluricultural competence within the CEFR/CV (COE 2018).
Fig. 1.2 Reception activities and strategies (COE 2018: 54). [The diagram groups the reception scales into Overall Listening Comprehension (understanding interaction between other speakers; listening as a member of a live audience; listening to announcements and instructions; listening to audio media and recordings), Audio-visual comprehension (watching TV, film and video), Overall Reading Comprehension (reading correspondence; reading for orientation; reading for information and argument; reading instructions; reading as a leisure activity), and Reception Strategies (identifying cues and inferring).]
Three Types of Scales
The illustrative scales found in the CEFR/CV are comprehensive as they provide "illustrations of competence in the area concerned at different levels" (COE 2018: 41). These scales, however, can be modified for a variety of purposes. In Chap. 2 of this volume, readers are guided through the process of developing user-oriented descriptors to articulate general goals for a curriculum and specific goals for a course. Checklists of descriptors appear in ELPs to articulate a range of communicative language activities at an appropriate level for learners, offer "a 'menu' to negotiate priorities with adult learners in a process of ongoing needs analysis" (COE 2018: 42), and serve as a basis for (self-)assessment. Assessor-oriented scales provide the criteria to evaluate student performance and
include qualitative aspects of language performance (e.g., how well a learner performs) based on the communicative language competences scales or a summary of them (see Sects. 1.2.2 and 1.2.3). Constructor-oriented scales, which guide the construction of tests, are also more detailed than the user-oriented scales. See COE (2001: 37) Sect. “3.8 How to use scales of descriptors of language proficiency” for a detailed discussion of these distinctions.
1.2.2 Competence: General, Communicative Language, Plurilingual, and Pluricultural
The CEFR descriptive scheme (see Fig. 1.1) includes both communicative language activities and strategies within four modes of communication as well as the competences necessary to achieve the goals stated in these activities. It is important to point out that these two dimensions are interdependent. Descriptors of communicative language activities presuppose certain language competences, and specifications of language competences need to be understood in terms of corresponding communicative language activities. In practice, the illustrative scales for language activities outline what language learners can do or should be able to do at the end of a course. The descriptors for communicative language competences, on the other hand, can be used for "developing assessment criteria for how well user/learners are able to perform a particular task: to assess the quality of their production" (COE 2018: 43). As mentioned in the definition of language use (Sect. 1.2), learners draw upon their competences to complete a task. In this book, we are primarily concerned with developing communicative language competence, which will be discussed shortly, but we will start with a brief introduction to the general competences, and end with plurilingual and pluricultural competences.
General competences
General competences involve four aspects:
• Declarative knowledge (savoir) includes (1) a knowledge of the world derived from experience, education and information sources, etc.; (2) sociocultural knowledge—a knowledge of the society and culture of the community or communities in which a language is spoken; and (3) intercultural awareness—knowledge, awareness, and understanding of the relation (similarities and distinctive differences) between the 'world of origin' and the 'world of the target community' (COE 2001: Sect. 5.1.1).
• Skills and know-how (savoir-faire) includes (1) practical skills and know-how related to social skills, living skills, vocational and professional skills, and leisure skills; and (2) intercultural skills and know-how involving the ability to bring the culture of origin and the foreign culture into relation with each other, cultural sensitivity, and the ability to serve as a cultural intermediary and overcome stereotyped relations (COE 2001: Sect. 5.1.2).
• 'Existential' competence (savoir-être) "The communicative activity of users/learners is affected not only by their knowledge, understanding, and skills, but also by selfhood factors connected with their individual personalities,
characterized by the attitudes, motivations, values, beliefs, and cognitive styles and personality types which contribute to their personal identity" (COE 2001: 105).
• Ability to learn (savoir-apprendre) "the ability to observe and participate in new experiences and to incorporate new knowledge into existing knowledge, modifying the latter where necessary" (COE 2001: 106) in such areas as (1) language and communication awareness, (2) general phonetic awareness and skills, (3) study skills, and (4) heuristic skills.
Communicative Language Competences
In addition to the general competences that learners utilize when communicating, learners must also draw upon language-related competences. These fall into the categories of linguistic, sociolinguistic, and pragmatic, which in turn are broken down further into numerous illustrative scales. Linguistic competence involves criteria for range (e.g., vocabulary range) and control (e.g., vocabulary control).
• Linguistic competence includes scales for general linguistic range, vocabulary range, grammatical accuracy, vocabulary control, phonological control, and orthographic control (COE 2001: Sect. 5.2.1).
• Sociolinguistic competence concerns the knowledge and skills required to deal with the social dimension of language use and involves (1) linguistic markers of social relations, (2) politeness conventions, (3) expressions of folk wisdom, (4) register differences, and (5) dialect and accent. These are summarized in a scale for Sociolinguistic Appropriateness (COE 2001: Sect. 5.2.2).
• Pragmatic competences are concerned with the user/learner's knowledge of the principles according to which messages are (a) organized, structured, and arranged (discourse competence); (b) used to perform communicative functions (functional competence); and (c) sequenced according to interactional and transactional schemata (design competence). Pragmatic competence scales include flexibility, turn-taking, thematic development, coherence and cohesion, propositional precision, and spoken fluency.
These competences can be used to evaluate learner production. They have been summarized for this purpose in CEFR Table 3—Qualitative features of spoken language (expanded with phonology) (COE 2018: Appendix 3), and Manual Table C4: Written assessment grid (COE 2018: Appendix 4). The former is discussed briefly in Sect. 1.2.3 and Chap. 3.
Plurilingual and Pluricultural Competence
Within the CEFR, value is given to an individual's cultural and linguistic diversity. Learners as 'social agents' draw upon their plurilingual and pluricultural competence "to fully participate in social and educational contexts, achieving mutual understanding, gaining access to knowledge and in turn further developing their linguistic and cultural repertoire" (COE 2018: 157). (As noted earlier, Plurilingual and Intercultural Education is one focus of the COE's Language Policy. For a discussion of the difference between pluriculturality and interculturality, see Sect. 1.1.2.4.) For the Companion Volume
(COE 2018), scales were created for Building on pluricultural repertoire, Plurilingual comprehension, and Building on plurilingual repertoire. • Building on pluricultural repertoire Many notions that appear in the intercultural competence literature and descriptors are included in this scale; for example, “the need to deal with ambiguity when faced with cultural diversity, adjusting reactions, modifying language, etc.; the need for understanding that different cultures may have different practices and norms, and that actions may be perceived differently by people belonging to other cultures” (COE 2018: 158). The following is a descriptor for the B1 level: Can generally act according to conventions regarding posture, eye contact, and distance from others. (159) • Plurilingual comprehension “The main notion represented by this scale is the capacity to use the knowledge of and proficiency (even partial) in one or more languages as leverage for approaching texts in other languages and so achieve the communication goal” (COE 2018: 160). Here is an example from the B1 level: Can use what he/she has understood in one language to understand the topic and main message of a text in another language (e.g. when reading short newspaper articles on the same theme written in different languages). (160) • Building on plurilingual repertoire “In this scale we find aspects that characterize both the previous scales. As the social agent is building on his/her pluricultural repertoire, he/she is also engaged in exploiting all available linguistic resources in order to communicate effectively in a multilingual context and/or in a classic mediation situation in which the other people do not share a common language” (COE 2018: 161). Here is an example from the B1 level: Can exploit creatively his limited repertoire in different languages in his/her plurilingual repertoire for everyday contexts, in order to cope with an unexpected situation. (162)
1.2.3 The Common Reference Levels
Within the CEFR, proficiency is described in relation to six broad levels (A1, A2, B1, B2, C1, C2). While the illustrative scales are quite detailed, the Common Reference Levels contain summaries of the major characteristics of each level.
• The Global Scale (COE 2001: Table 1; reproduced in Chap. 2 Table 2.1) provides an overview of each of the six levels to describe in holistic terms proficiency levels for Basic Users (A1, A2), Independent Users (B1, B2), and Proficient Users (C1, C2).
• The Self-Assessment Grid (COE 2001: Table 2; updated in CEFR 2018: Appendix 2; reproduced in Chap. 2 Table 2.2) provides a summary of the six levels for receptive, productive, interactive, and mediation activities. These descriptors can be utilized to communicate learning objectives at the curricular
level. This grid is also included in the European Language Portfolio’s Language Passport. • Qualitative features of spoken language (COE 2001: Table 3; updated in CEFR 2018: Appendix 3) is a selective summary of the communicative language competences under the categories of Range, Accuracy, Fluency, Interaction, and Coherence. It is best to think of these levels (A1, A2, B1, B2, C1, C2) as a continuum, as their boundaries are not clearly delineated. Furthermore, they are holistic summaries that serve as simple representations, while the illustrative scales provide a much more nuanced picture. Within the CEFR/CV, the illustrative scales at each level have been further subdivided. For example, the A2 level contains criterion levels (e.g., A2 or A2.1) and plus levels (e.g., A2+ or A2.2). (This distinction is not made in the three tables of the Common Reference Levels). In some contexts, further subdivisions are necessary, such as with learners studying in short-term programs where progress cannot be monitored using broad proficiency bands. Exercise 4.2.4 summarizes the Eaquals (Evaluation and Accreditation of Quality Language Services) descriptors that fulfill this purpose. Following the lead of the Swiss Lingualevel project and the Japanese CEFR-J project, an additional level, Pre-A1, has also been included (COE 2018: 36). The CEFR/CV builds on these projects and others to further refine the descriptive scheme. An overview of the changes made can be found in the CEFR/CV (COE 2018: 45–51).
1.3 History and Criticism of the CEFR
1.3.1 CEFR History and Development
The very first idea for a common framework of reference dates back to the 1970s, when there was a call for a European-wide unit/credit scheme for adult learners of modern languages. Before the CEFR's focus on performance standards for language use (the Council of Europe has often insisted that the CEFR proficiency levels are not 'standards' even though they have sometimes been used as such), it was typical to specify language learning goals in terms of the most frequent vocabulary items and grammar structures. However, due to the variability between languages in Europe, it was believed that "a pan-European system could not be built on this basis" (Trim 2012: 25). Instead, to encourage mobility within Europe, the focus shifted to the ability to communicate within a foreign language environment, and the level of proficiency required to achieve this learning objective was labeled Threshold Level (van Ek 1976; van Ek and Trim 2001). Threshold Level, which serves as the basis for the CEFR B1 level, outlined and detailed "a notional/functional specification of the language knowledge and skills needed to visit or live in another country" (North 2014: 14). It adopted a communicative approach, which considered the learner a
language user, and promoted the development of practical communication skills, but through the specification of the functions (e.g., imparting and seeking factual information) and notions (e.g., concepts and topics language users will refer to) necessary to communicate in a foreign country. Furthermore, the relevant grammar and vocabulary were specified, along with notes concerning pronunciation and intonation. The Threshold Level was influential and its impact widespread (Trim 2012: 26–28). Following its publication in 1976 new exams were introduced (e.g., Cambridge Preliminary English Test), new levels above (e.g., Vantage) and below (e.g., Waystage) Threshold were proposed, and there was a period of rapid development in language teaching in which the focus was on the dissemination of new ideas rather than on European harmonization (e.g., the idea for the European-wide unit/credit scheme mentioned earlier was abandoned) (North 2014: 15). It was at the 1991 COE symposium in Rüschlikon, entitled Transparency and coherence in language learning in Europe: objectives, assessment and certification, that the development of performance standards became the sole focus of learning objectives (although that was not the intention at the time). At the symposium, the COE was asked to consider (Trim 2012: 29): (a) the introduction of the Common European Framework of Reference (CEFR) for the description of objectives and methods for language learning and teaching, curriculum and course design, materials production and language testing and assessment, and (b) the introduction of a European Language Portfolio (ELP), in which individual learners could record not only institutional courses attended and qualifications gained, but also less formal experiences with respect to as wide a range of European languages and cultures as possible. At the symposium, it was agreed that it would be desirable to develop a Common European Framework of Reference in order to: • promote and facilitate cooperation among educational institutions in different countries; • provide a sound basis for the mutual recognition of language qualifications; • assist learners, teachers, course designers, examining bodies and educational administrators to situate and coordinate their efforts. It should be useful both as a common basis for the exchange of information among practitioners and as a basis for critical reflection by practitioners on their current practice and the options open to them. (Trim 2012: 29) Fulfilling these functions is made possible through a framework that is comprehensive, transparent and coherent (this key aspect of the CEFR is discussed in Sect. 1.1.2.3). According to Trim (2012: 29), the CEFR should specify as full a range of language knowledge, skills, and use as possible. This information should be clear, explicit, available, and comprehensible, allowing users to not only describe learning objectives, but to calibrate language proficiency and progress. The
feature that would have the greatest impact on the field of language learning was the six levels of proficiency, with various stakeholders using them to communicate the levels of courses, textbooks, examinations, and qualifications in Europe and beyond (Trim 2012: 30–31), which Trim attributes to the CEFR’s user-friendliness. However, its validity in these areas has been questioned, particularly for tests that lead to recognized qualifications. In response, the Language Policy Division of the COE set up a working party to link examinations to the CEFR, resulting in a manual (COE 2009) (see Chap. 3 of this volume for more information). The COE also encourages research into the production of Reference Level Descriptions of national and regional languages (COE 2019f). This includes English Profile’s descriptions of linguistic ability—vocabulary, grammar, and language functions— across the six levels (discussed in Sect. 5.2.2). Additionally, the Eaquals’ Core Inventory represents an attempt to specify the language implied by the CEFR levels and descriptors. It is stressed, however, that the results of these new initiatives concerning Reference Level Descriptions (RLDs) are not prescriptive lists of language points intended to replace CEFR ‘Can Do’ descriptors (COE 2005). Rather, they represent reference works intended to inform practice and complement the CEFR. In short, now that the performance standards have been extensively refined in the CEFR/CV for pedagogic purposes, the current focus of research addresses criticism of the CEFR by better linking examinations to it and by identifying the language (e.g., grammar and vocabulary) necessary to achieve the CEFR’s ‘Can Do’ descriptors. Criticisms of the CEFR are addressed in more detail in Sect. 1.3.3.
1.3.2 ELP History and Development The ELP (COE 2019c) was conceived at the same time as the CEFR at the symposium held by the COE in 1991. In addition to calling for the development of the Common European Framework of Reference for language learning, it was agreed that the forms and functions of the European Language Portfolio should also be discussed and developed in a working group. The research to develop the descriptors for the CEFR and the ELP took place in Switzerland (Schneider and North 2000) with the ELP prototype produced by Schneider et al. (2000). The ELP should contain a section in which formal qualifications are related to a common European scale, another in which the learner him/herself keeps a personal record of language learning experiences and possibly a third which contains examples of work done. Where appropriate entries should be situated within the Common Framework. (COE 1992: 40, as cited in Little 2012: 12)
As Little (2012) notes, the three-part structure of the portfolio—the passport, biography, and dossier—was anticipated, with the ELP explicitly linked to the CEFR via the ‘Can Do’ descriptors. However, the initial focus of the portfolio was primarily on its reporting function, namely “to provide a detailed cumulative record of the user’s experience of second language and foreign language learning and use” (Little 2012:
9). The importance of the pedagogic function became increasingly visible through the numerous ELP pilot projects that took place between 1998 and 2000 (see Schärer 2000). This was in part intended to address a concern that the reporting function would not be fulfilled unless the ELP was fully integrated into language teaching and learning. It was also difficult to evaluate the merits of the reporting function as this phase "was too short to investigate whether teachers at the next higher level of education agreed with the previous self-assessment of their new learners" (Schärer 2000: 25). Concerns also began to emerge regarding "the degree of validity and reliability of (learner) self-assessment compared to teacher, outside formative or summative evaluation, to final exams and standardized tests." This latter argument may still be true today as "test specialists might argue that self-assessment does not yield sufficiently valid and reliable information to formally assess language competence" (Schärer 2007: 5). However, the ELP was never meant to replace the results of proficiency tests, certificates, or diplomas. From the very beginning, the purpose of the reporting function was "to supplement them by presenting additional information about the (ELP) owner's experience and concrete evidence of his or her foreign language achievements" (Little and Perclová 2001: 3). The main contribution of the pilot projects to the ELP's evolution might be that among the teachers involved in the pilot projects "the ELP was seen, above all, as a means of fostering the reflective processes associated with learner autonomy" (Little 2012: 9).
As noted in Schärer (2012: 52), the ELP-related projects have varied "considerably in focus, objectives, contexts, size, organization, and stage of development." While the number of ELPs reported to be in use is impressive, it is not as high as had been anticipated, and "it is becoming clear that effort and support over a prolonged period are still needed for widespread implementation" (53). Unfortunately, such sustained support has not been forthcoming. After five ECML projects between 2004 and 2011, support and research have shifted toward other areas, such as "the development and implementation of curricula for plurilingual and intercultural education" (54). The ELP's role in this initiative remains unclear, particularly since no descriptor scales exist for these areas. Some resources have also been withdrawn. For example, within Europe, the National ELP Contact Persons group has been disbanded. Support for the ELP thus seems more decentralized, although this void has been filled to a degree through ELP-related publications and research (see the edited volume by Kühn and Perez Cavana 2012). The role of such publications, and of guides like this book, is to help fill this gap. Compared with the research into educational portfolios, however, Álvarez (2012: 125) argues that there is still a "lack of research on the ways in which the ELP has been developed and used" and that the future of the ELP depends in part on an electronic version that takes advantage of available technology—a point taken up in Sect. 4.2.4 regarding e-Portfolios—and the adoption of pedagogical approaches that are more sympathetic to portfolio learning. (There is, in particular, a lack of empirical research on the impact of ELP use on L2 proficiency development. In the US, important research of this kind has been undertaken by Aleidine Moeller, University of Nebraska, focusing on the ELP's American cousin, Linguafolio; see Moeller et al. 2012; Ziegler and Moeller 2012.)
1.3.3 Criticisms of the CEFR
In many ways, the CEFR has fulfilled its goal of serving as the metalanguage for "discussing the complexity of language proficiency and for reflecting on and communicating decisions on learning objectives and outcomes that are coherent and transparent" (COE 2018: 22), particularly with respect to communicating learning objectives. Many curriculum developers, textbook writers, and examination providers have claimed a close relationship between their products and the CEFR (Alderson 2007). However, the CEFR's growing influence has also resulted in intense debate about its role in describing the complexity of language proficiency. Alderson (2007) raises several concerns that center around a lack of empirical research to underpin the CEFR and concrete information (e.g., about specific languages) to inform the development of curricula, textbooks, and tests.
It is first important to acknowledge that "the main limitation of the CEFR descriptors is that they are scaled teacher perceptions of the second language proficiency of learners in relation to given descriptors" (North 2014: 23). Using Item Response Theory (IRT) as the methodology and the Rasch model as the measurement model, "what can be claimed for the CEFR descriptors … is an empirically proven interpretation of difficulty (i.e., teachers' interpretation of the 'level' of a descriptor)" (North 2014: 24). In other words, the content, level, and order of the descriptors are based on teacher perception and are not the product of SLA research, which is Alderson's first concern. However, to be fair, "such research was not available in the mid-1990s" (North 2014: 23) and arguably is still not available today.
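For readers unfamiliar with Rasch measurement, a minimal sketch may help; this illustration is ours rather than part of the sources cited above, and the original descriptor calibration relied on Rasch-family models for rating data rather than this simplest dichotomous form. The basic Rasch model expresses the probability that learner n is judged to fulfil descriptor i as a function of the learner's ability and the descriptor's difficulty, both placed on the same logit scale:

\[ P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)} \]

Here \(\theta_n\) stands for the ability of learner n and \(\delta_i\) for the difficulty of descriptor i (the symbols are illustrative; notation varies across sources). Ordering the calibrated difficulties \(\delta_i\) is what yields an empirically scaled hierarchy of descriptors, and hence the "interpretation of difficulty" referred to above.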
Alderson's (2007) second concern was about a lack of concrete information. The challenge examination providers face when trying to link tests to the CEFR is that "the CEFR does not classify the specific characteristics of texts and tasks at different levels, does not provide inventories of linguistic elements suitable for syllabuses and tests at different levels and does not even mention item types …. The CEFR can do no more than provide a starting point" (North 2014: 36) for test developers. Assessment is discussed in the CEFR to encourage assessment of learner proficiency in relation to the descriptors, but the CEFR itself was never meant to serve as a set of test specifications. Therefore, supplementary resources needed to be developed to fill this need. For example, the COE (2009) published a guide for linking language examinations to the CEFR, and the COE and Association of Language Testers in Europe (ALTE) jointly published the Manual for Language Test Development and Examining, for use with the CEFR (COE-ALTE 2011). Using these resources and others, practical suggestions for basing assessments on the CEFR are given in Chap. 3.
As noted above, the CEFR lacks explicit links to individual languages because it is language independent. In other words, textbook writers and teachers have to decide what language items to include in teaching materials designed around 'Can Do' descriptors. This too is being addressed to a degree by the COE through its call to develop Reference Level Descriptions for national and regional languages, which provide "detailed specifications of content at the different CEFR levels for a given language" (COE 2019f). RLDs are available for Croatian, Czech, English, German, French, Italian, Portuguese, and Spanish (COE 2019f), but Hulstijn (2014: 14) argues that these and other RLDs are at considerably different levels of development due to a lack of necessary funds to give professional guidance and support in the development and validation of high-quality RLDs. For example, comparing 11 inventories of numbers of words in various European languages claimed to be known at the different CEFR levels, Decoo (2011, cited in Hulstijn 2014: 13) found "the numbers diverged enormously, ranging, for instance for the B1 level, between 1500 and 7000 lexical items." The RLDs for English are being developed through the English Profile program (see Sect. 5.2.2 for more information), but North (2014: 23) notes that most corpus analyses … still offer only a glorified snapshot of the aggregated proficiency of groups of learners at one point in time. This may help to confirm and/or elaborate the content of the CEFR descriptors for aspects of competence, but it does not address the issue of documenting the actual process and stages of SLA.
At this point, it is best to think of current research into RLDs as reference works to guide decisions concerning content specification (e.g., grammar and vocabulary), and not as prescriptive lists to be blindly followed. One last concern raised by Alderson (2007: 661) is that there is little empirical evidence to back up the claims made by exam providers, textbook publishers, and curriculum developers as to the link between their products and the CEFR. This problem is exacerbated by the fact that the "Council of Europe has refused to set up … [a] mechanism to validate or even inspect the claims made by examination providers or textbook developers" (Alderson 2007: 661–662). (Validation committees were established by the COE to vet ELPs, but this did not require engagement with commercial enterprises like language testing agencies.) This concern is not unique to the CEFR. Many educational innovations are quickly embraced by various stakeholders to appear up to date, but only tenuous links are made to the innovation itself. It is hoped that guides such as this one will aid readers in determining how closely these products (curriculum, textbooks, tests, etc.) adhere to the CEFR. Returning to the intended purpose of the CEFR, it is important to stress that it should be viewed as a heuristic, or a tool to be used by teachers and learners to develop communicative ability in a foreign language. The degree to which the CEFR is reflected in a particular curriculum, textbook, test, or classroom practice and is effective in helping students learn depends primarily on stakeholders' understanding of the framework and their willingness and ability to incorporate its strengths while also being aware of its limitations. (See North 2014: Sect. 2.3 for more information about criticisms of the CEFR.) In the next section, the use of the CEFR both inside and outside of Europe is explored.
1.4 The CEFR in and Beyond Europe
This section will give a brief overview of the impact the CEFR has had in and beyond Europe, focusing on commonalities observed in its implementation (and adaptation) in a wide variety of contexts (Byram and Parmenter 2012b). While the CEFR is fulfilling its original intention to serve as a point of reference even outside of Europe, contextualized use of the CEFR has resulted in the appropriation of certain aspects of the CEFR to the exclusion of others (e.g., from language neutral to English specific), and reactions to the CEFR's role in educational reform have varied from country to country.
1.4.1 The CEFR in Europe
The impact of the CEFR in Europe and even beyond was much wider than its authors could have anticipated. Its introduction in 2001 resulted in intense discussion on how best to implement it. At first, this took place on the institutional side, within ministries of education and associations for language teaching and testing, before being taken up by exam providers (the language testing agencies were the first to make systematic use of the CEFR), textbook publishers, university-run language center administrations, and independent language schools. The last groups to adopt the CEFR were practitioners: teachers and students. The pace and intensity of CEFR implementation were very different in each country, eliciting enthusiasm and support as well as criticism and confusion (see Sect. 1.3.3). Since it was the political will of the COE to have a point of reference for learning, teaching, and assessing languages, this reaction should not be surprising. Efforts to implement the CEFR were supported by initiatives and projects by leading institutions and organizations, including the ECML (established by the COE), Eaquals, ALTE, the European Association for Language Testing and Assessment, and the European Confederation of Language Centres in Higher Education. The CEFR's publication resulted in a wave of activity in these organizations and associations across many countries to:
• translate the CEFR into 40-plus languages;
• prepare comprehensive user guides (e.g., COE 2001; Goullier 2007);
• prepare guides for using the European Language Portfolio (ELP) (e.g., Little and Perclová 2001; Little et al. 2007; Little 2011), and for developing ELPs (e.g., Schneider and Lenz 2001; Lenz and Schneider 2004b);
• prepare guides for relating language examinations to the CEFR (e.g., COE 2009) and for developing tests for use with the CEFR (COE-ALTE 2011);
• encourage work on specific resources for individual languages through RLDs (COE 2005); and
• update teacher training programs and prepare guides (e.g., Newby et al. 2011; Piccardo et al. 2011).
The CEFR has also been the focus of many conferences and publications, providing proponents and critics with a space to debate the CEFR. Journals, such as Language Learning in Higher Education (LLHE) and Modern Language Journal (MLJ), have even devoted entire issues (LLHE 2011, Vol. 1(1)) or sections (Perspectives in the case of MLJ—Winter 2007/Vol. 91(4)) to this discussion with contributions by leading experts. To list all such resources and publications is simply not possible. As a start, the last section of this chapter introduces readers to important resources (most available online), categorized thematically according to the areas covered in each chapter of this book. Extensive lists of references can also be found at the end of each chapter. To summarize, CEFR adoption has been to a large extent due to top-down efforts at the institutional and national level. As a result, the CEFR has been influential in a wide variety of contexts. Analyzing survey results about CEFR use by member states, Martyniuk and Noijons (2007: 7) commented that
In general, the CEFR seems to have a major impact on language education. It is used—often as the exclusive neutral reference—in all educational sectors. Its value as a reference tool to coordinate the objectives of education at all levels is widely appreciated.
For practitioners, however, Martyniuk and Noijons (2007: 8) acknowledge that some respondents view the CEFR’s impact as quite modest … [as it] does not yet play an important role for the teaching profession at the school level, although it has undeniably contributed to more transparency and coherence in general … [and] the full potential of the CEFR has not yet been realized, partly due to the fact that it is still not very reader-friendly and a greater effort is needed to mediate it to users.
Despite all the efforts to create resources to accommodate the complexity of the CEFR and ensure the challenging content is accessible, "it is still not possible to say that these language policies have been effectively transferred to classrooms or to teaching materials" (Figueras 2012: 478).
Common factors facilitating or inhibiting CEFR adoption in Europe
There are several commonalities between different contexts which can help explain the factors that either facilitate or inhibit CEFR adoption. Byram and Parmenter (2012b) assembled a collection of case studies from inside Europe (covering France, Germany, Bulgaria, and Poland) and outside of Europe (covering Argentina, Colombia, China, Japan, Taiwan, and New Zealand). They provide a systematic analysis of the international impact of the CEFR on policy-makers and academe. They observe that within Europe, there are four basic commonalities:
Commonality 1 Chronological coincidence
A term coined by Goullier (2012) to acknowledge that part of the CEFR's success can be attributed to the timing of "the appearance of the CEFR and national curriculum change, although the impulse for change varies from country to country" (Byram and Parmenter 2012a: 114).
Commonality 2 Entry to the CEFR
It is commonly agreed that most stakeholders first embrace the scales and levels (see Figueras 2012: 470–480), as well as assessment and/or self-assessment in relation to them and the ELP. However, this point also highlights the danger that the CEFR is only seen in terms of its levels and scales.
Commonality 3 Not all aspects of the CEFR are embraced
"There are aspects of the CEFR that are not influential or accepted—in particular the notion of plurilingualism and its implications for curriculum design", and stakeholders mentioned having difficulty with new terminology and problems with translation (Byram and Parmenter 2012a: 114).
Commonality 4 CEFR's increasing influences on assessment systems
Byram and Parmenter (2012a: 114) observe that:
the influences of the CEFR on assessment systems are substantial, and the attention to levels and scales is a consequence of a general demand, by politicians and others, for measured outcomes from all teaching and learning, and this translates into language competences in the case of foreign language teaching. It is in this assessment dimension that the chronological coincidence is particularly favourable to the CEFR, whereas the other two keywords of its subtitle 'learning, teaching' and the ways these are described do not coincide with changes or reforms.
Whether the CEFR is embraced or not is also due to other factors. For example, in France and Germany, resistance has stemmed from the fact that: the CEFR does not give sufficient attention to ‘education’: to the aspects of language which are not only utilitarian, and which are not necessarily measurable. There was also a fear that Europeanisation would lead to reductionism in language education in so far as it would lead to a focus on certification (Byram and Parmenter 2012a: 115).
Perhaps Figueras (2012: 478) summarizes this concern best: “Not all teaching and learning objectives are designed to meet communication needs, and not all assessment is geared to [CEFR-informed] outcomes.” In the next section, the state of CEFR implementation outside of Europe is examined.
1.4.2 The CEFR Beyond Europe Although originally intended for use in Europe, from early on many educators and officials outside of Europe were interested in learning more about the CEFR through participation in such events as the Intergovernmental Language Policy Forum. It was held in Strasbourg in 2007 and entitled The Common European Framework of Reference for Languages and the development of language policies: challenges and responsibilities. This provided member states of the COE “a forum
for discussion and debate on a number of policy issues raised by the very speedy adoption of the CEFR in Europe and the increasingly widespread use of its scales of proficiency levels" (Goullier 2007: 7). However, the diversity of participants in the forum, including delegations from Canada, China, the USA, Australia, and Japan "showed the huge level of interest in the issues raised by the CEFR, both in Europe and beyond Europe's borders" (Goullier 2007: 8). Since this forum, similar discussions about the CEFR's potential, usage, and impact outside of Europe have been ongoing, and questions about how the CEFR can or has "changed the context in which language teaching and assessment of language learning outcomes now take place" (Goullier 2007: 7) are being continually posed (Runnels 2015: 8–9).
The international impact of the CEFR gathered pace roughly a decade after its introduction. In many countries, especially in Asia, the CEFR was studied and adopted by policy-makers in a top-down approach to renew their English language curricula from the ground up. Countries such as China, Thailand, Malaysia, and Vietnam have ambitious goals for internationalization and reforming foreign language education. The CEFR serves as a common basis in these contexts as well, and it has evolved to become an international 'common currency' that provides governments, applied linguists, educators and other stakeholders both inside and outside of Europe with a comprehensive framework linking language learning, teaching, and assessment with a "more real-life oriented approach" (Figueras 2012: 478). This was the intended purpose of the CEFR—to be used as a shared point of reference to compare contextual choices (decisions to adapt the CEFR to meet local needs) and not as a standard (Coste 2007: 12).
Common factors facilitating or inhibiting the adoption of the CEFR outside of Europe
In the previous section, commonalities concerning CEFR adoption within Europe were explained. The focus here is on contexts outside Europe. The commonalities observed by Parmenter and Byram (2012: 258–261) include the following:
Commonality 1 Goullier's 'Chronological Coincidence' is also apparent
Parmenter and Byram (2012: 258) observe:
Whether in policy or academic discourse, the CEFR responds to an identified need or lack in current policy or practice at the time of education reform. The actual perceived need or lack varies from situation to situation, of course, but perceptions of the CEFR as a bridge to facilitate change are constant.
Commonality 2 CEFR reference depends on degrees of distance and familiarity
Parmenter and Byram (2012: 258) write that "the cases from beyond Europe all demonstrate that the CEFR could not be referenced or used if it was too distant or unfamiliar, and it is usually the familiar that is appropriated first." This was the case with its scales in some countries, such as New Zealand and the USA, which were already familiar with the concept of scales.
Commonality 3 Entry point and assessment
Parmenter and Byram (2012: 258) suggest that familiarity also accounts for another commonality also true of Europe, "namely the use of the levels as an entry point, accompanied by a bias toward assessment rather than the learning and teaching aspects of the CEFR."
Commonality 4 The importance of localization
In most cases, the CEFR is seen as providing structure and direction to language education reform that is in response to globalization, but stakeholders struggle with situating the CEFR in relation to national/regional/local policies and systems. Variation between and within these regions and their needs results in contextualized uses of the CEFR that prioritize different aspects.
Commonality 5 From language neutral to English specific
English appears to be emphasized in countries where it is not the dominant language, with the concept of plurilingualism all but ignored. The CEFR is essentially being seen in these countries as a tool for English education, and its influence on assessment, just as it is in Europe, is also substantial. Parmenter and Byram (2012: 260) observe that:
For example, in Taiwan, China, and Japan, all countries where ability to use English in everyday life is rarely required, but ability to pass examinations in English determines educational and employment prospects, the greatest impact of the CEFR has been on English language testing regimes.
While Byram and Parmenter (2012b) help us to understand general trends in CEFR implementation both inside and outside of Europe, there is another important branch of research seeking to identify procedures and strategies which have been shown to facilitate CEFR implementation in different educational contexts. Piccardo et al. (2019: 107), for example, report on the QualiCEFR research project, a comparative study of the CEFR in Canada and Switzerland, which aims “to facilitate the transfer of CEFR-related knowledge and know-how and to support CEFR implementation informed by quality assurance (QA) concepts and procedures.” These results are being utilized in a sister project, ‘A Quality Assurance Matrix for CEFR Use’ (QualiMatrix), which forms part of the 2016–19 program of the European Centre for Modern Languages (ECML) (Piccardo et al. 2019: 122) (see Sect. 1.5.1 for homepage information). Useful practical resources, such as the QualiMatrix, are the focus of Sect. 1.5. Last, it should be mentioned that many countries and institutions worldwide have contributed to the CEFR/CV and the validation of new descriptors. See the CEFR/CV Preface (COE 2018) to learn more about the vast international CEFR network.
1.5 Practical Resources for the CEFR
In this section, we will briefly outline the key organizations and resources referred to throughout this book. In addition to being able to access the key documents directly from the respective websites, readers will be able to stay abreast of future developments. After a brief introduction to the key organizations and their purposes, the various guides and resources will be listed first by source (Sect. 1.5.1) and then thematically (Sect. 1.5.2). Within the COE (www.coe.int), the Language Policy Portal provides access to support for all levels of language education, from primary school to adult learners, and for the promotion of plurilingual and intercultural education (www.coe.int/lang). The COE CEFR website (www.coe.int/lang-CEFR) contains the CEFR and the CEFR/CV as well as a wealth of information and resources concerning the levels, implementation, tests, and the ELP (see Sect. 1.5.1 for more detailed information). The COE also founded the European Centre of Modern Languages (ECML, www.ecml.at) in 1994, which organizes a program of international projects on language education. The ECML aims to help implement effective language teaching policies by: • focusing on the practice of learning and teaching languages; • training multipliers; and • supporting program-related research projects. Many of the guides listed below are the end-product of such projects. Eaquals (www.eaquals.org) is a not-for-profit company whose mission is “to foster excellence in language education across the world by providing guidance and support to teaching institutions and individuals” (Eaquals 2019). Their aims include: • to improve the experience of language learners by developing quality standards for the teaching of modern languages; • to develop practical resources, and offer training and support for those working in the field of modern languages. Available on the Eaquals website are CEFR standardization packs, training materials, case studies, self-help guides and descriptor banks. Other important institutions and projects provide extremely valuable resources for the CEFR besides those listed here. However, including all programs and projects related to the CEFR is unfortunately not possible.
1.5.1 Resources by Institution
This section gives an overview of the most important resources and their sources. Resources listed here are free to access.
Council of Europe
CEFR resources homepage: www.coe.int/lang-CEFR
• CEFR
• CEFR/Companion Volume
Further materials and resources available from https://www.coe.int/en/web/common-european-framework-reference-languages/documents:
• Levels: The CEFR Levels; The CEFR descriptors; Illustrations of levels; Reference Level Descriptions (RLD); Bank of supplementary descriptors
• Learning, teaching, assessment: Tools for curricula; The CEFR in the classroom; Assessment in the classroom
• Tests/examinations: Manual for relating language examinations to the CEFR; Reference supplement to the manual for relating language examinations to the CEFR; Content analysis grids for speaking, writing, listening, and reading; Manual for language test development and examining; ALTE—Guidelines for the development of language for specific purposes tests; Illustrative test tasks and items; Scaled illustrative samples of learner spoken production and interaction from a benchmarking seminar at the Centre international d'Etudes Pédagogiques (CIEP); Illustrative examples of CEFR-leveled writing test tasks and answers
• European language portfolio: Developing an ELP: www.coe.int/portfolio
European Centre for Modern Languages (www.ecml.at)
• European language portfolio: Using the ELP; Training teachers to use the ELP; Implementing and evaluating whole-school ELP projects (listed under the ECML themes of Evaluation and Assessment, and Teacher Competence)
• CEF-ESTIM descriptors: CEFR—Level estimation grid for teachers, http://cefestim.ecml.at
• EPOSTL: European portfolio for student teachers of languages, http://epostl2.ecml.at
• GULT: Guidelines for university language testing, http://gult.ecml.at
• RELEX: Training in relating language examinations to the CEFR, http://relex.ecml.at
• ECEP: Piccardo et al. (2011) Pathways through assessing, learning and teaching in the CEFR, https://www.ecml.at/Resources/ECMLPublications/tabid/277/ID/28/language/en-GB/Default.aspx
• CEFR-QualiMatrix: A quality assurance matrix for CEFR use, https://www.ecml.at/ECML-Programme/Programme2016-2019/QualityassuranceandimplementationoftheCEFR/tabid/1870/language/en-GB/Default.aspx
Eaquals (www.eaquals.org)
• Applying the CEFR in the classroom: A Core Inventory for General English (North et al. 2015/2010); Inventaire des contenus clés aux niveaux du CECR (French). Guidance on and a summary of appropriate language points to be taught at each CEFR level (A1 to C1) for English and French.
• Descriptors: Introduction to the CEFR with checklists of descriptors; Revision and refinement of CEFR descriptors
• CEFR standardization materials: Spoken samples with documentation; Written samples with documentation
• Guidance on curriculum and syllabus development: Eaquals Self Help Guide for Curriculum and Syllabus Design; Eaquals CEFR Curriculum Case Studies
1.5.2 Resources Related to Chapters
This section provides an overview of the important resources for each chapter. A more detailed list of resources is included at the end of each chapter. The resources were selected based on considerations of relevance, access, and cost. Using the titles in an Internet search, it is easy to find the resources online.
Chapter 1: Basics of the CEFR
• CEFR (COE 2001) • CEFR/CV introduction (COE 2018) • Trim (2001) Guide for users provides a summary of the CEFR and advice on its implementation. A good general overview, which is more accessible than the CEFR document • Piccardo et al. (2011) Pathways through assessing, learning and teaching in the CEFR is an ECML guide and 107 handouts for teacher education and reflection about the CEFR. A great resource for deepening knowledge of the CEFR and for fostering reflection on teaching practice • Piccardo, E. (2014). From communicative to action-oriented: A research Pathway. Curriculum Services Canada • Using the CEFR: Principles of Good Practice (Cambridge University Press, Oct. 2011) introduces the CEFR and summarizes the Cambridge approach to CEFR-based assessment. Offers a straightforward explanation of the CEFR • Introductory Guide to the CEFR for English language teachers (English Profile CUP, 2013) is a short, overall introduction to the CEFR. It’s brief and easy to understand • English Profile: Introducing the CEFR for English (English Profile 2011) is an introduction to the English Profile for grammar, functions, and vocabulary • Breakthrough 1990, Waystage 1990, Threshold 1990, Vantage provide specifications such as themes, topics, functions, notions, descriptions of competencies, grammar, and wordlists. They are very useful for planning CEFR-based curricula from the ground up
Chapter 2: Curriculum and course design
• CEFR (COE 2001) Chaps. 4 and 5 • The CEF-ESTIM Grid: CEFR—Level estimation grid for teachers • Reference Level Descriptions (RLDs) for national and regional languages (COE 2019f) • North et al. (2015/2010) A Core Inventory for General English, developed by the British Council and Eaquals, the inventory gives simple lists of functions and grammar as well as example scenarios of lessons at each CEFR level. A good quick reference for language functions and grammar. Has great examples of implementing the CEFR in task-based lesson scenarios • CEFR-QualiMatrix—A quality assurance matrix for CEFR use
Chapter 3: Assessment
• CEFR illustrative scales: 'Can Do' descriptors for course and lesson goals, and learner self-assessment. This document contains all the CEFR 'Can Do' descriptors • Eaquals Descriptors: 'Can Do' descriptors for course and lesson goals, and learner self-assessment. These are well-organized and easy to interpret • COE and Association of Language Testers in Europe (COE-ALTE 2011) Manual for Language Test Development and Examining: For use with the CEFR offers advice on CEFR-aligned test design with an accessible introduction to designing language tests aligned to the CEFR. https://rm.coe.int/manual-for-language-test-development-and-examining-for-use-with-the-ce/1680667a2b • ALTE Descriptors: 'Can Do' descriptors for course and lesson goals, and learner self-assessment. Includes an abundant bank of descriptors that seem readily applicable. (https://www.alte.org/resources/Documents/CanDo%20Booklet%20text%20Nov%202002.pdf) • Illustrative tasks and items: a collection of CEFR-based test tasks and items for reading and listening provided by professional testing organizations. (https://www.coe.int/en/web/common-european-framework-reference-languages/using-illustrative-tasks)
Chapter 4: ELP, learner autonomy
• COE and the European Language Portfolio (https://www.coe.int/en/web/portfolio) • ECML CEFR (https://www.ecml.at/) and ELP (https://elp.ecml.at/) homepages • Booklet: ELP: Guide for Developers (Schneider and Lenz 2001) • Booklet: Introduction to the bank of descriptors for self-assessment in European Language Portfolios (Lenz and Schneider 2004b): https://rm.coe.int/168045b15d • A bank of descriptors for self-assessment in European Language Portfolios (Lenz and Schneider 2004a): https://rm.coe.int/168045b15f • LINCDIRE project (www.lincdireproject.org)
Chapter 5: Learning, teaching, and assessment integrated
• Goullier (2007) Council of Europe tools for language teaching: Common European Framework and Portfolios: A guide to using the two tools (CEFR and ELP) for teaching a modern language and developing learner autonomy • Reference Level Descriptions (RLDs) for national and regional languages (COE 2019f) • North et al. (2015/2010) A Core Inventory for General English (Eaquals)
Chapter 6: Teacher autonomy
• EPOSTL The European Portfolio for Student Teachers of Languages is a reflection tool for language teacher education, mainly for preservice teachers, developed by the ECML: https://www.ecml.at/tabid/277/PublicationID/16/Default.aspx • EPG The European Profiling Grid, developed by Eaquals mainly for in-service teachers, is a framework with descriptors 'spanning six development phases' in a language teacher's professional evolution. Available at: http://egrid.epg-project.eu/en
References Alderson, J. C. A. (2007). The CEFR and the need for more research. The Modern Language Journal, 91(4), 659–662. Álvarez, I. (2012). From paper to the web: the ELP in the digital era. In B. Kühn & M. L. Perez Cavana (Eds.), Perspectives from the European language portfolio: Learner autonomy and self-assessment (pp. 125–142). Oxon: Routledge. Beacco, J.-C., Byram, M., Cavalli, M., Coste, D., Egli Cuenat, M., Goullier, F., et al. (2016). Guide for the development and implementation of curricula for plurilingual and intercultural education. Strasbourg: Council of Europe. Byram, M. (2009). Multicultural societies, pluricultural people and the project of intercultural education. Strasbourg: Council of Europe. https://rm.coe.int/multicultural-societiespluricultural-people-and-the-project-of-interc/16805a223c. Accessed August, 24, 2019. Byram, M., & Parmenter, L. (2012a). Commentary on the European cases. In M. Byram & L. Parmenter (Eds.), The common European framework of reference: The globalisation of language education policy (pp. 114–116). Bristol: Multilingual Matters. Byram, M., & Parmenter, L. (Eds.). (2012b). The common European framework of reference: The globalisation of language education policy. Bristol: Multilingual Matters. Coste, D. (2007). Contextualising uses of the common European framework of reference for languages. Strasbourg: Council of Europe. https://rm.coe.int/contextualising-uses-of-thecommon-european-framework-of-reference-for/16805ab765. Accessed March 18, 2019. Council of Europe. (2001). The common European framework of reference for languages: Learning, teaching, assessment. Cambridge: Cambridge University Press. Council of Europe. (2005). Guide for the production of RLD. Strasbourg: Council of Europe. http://www.coe.int/t/dg4/linguistic/Source/DNR_Guide_EN.pdf. Accessed March 8, 2019. Council of Europe. (2009). Relating language examinations to the common European framework of reference for languages: learning, teaching, assessment (CEFR). Strasbourg: Council of Europe.
Council of Europe. (2018). The common European framework of reference for languages: Learning, teaching, assessment. Companion volume with new descriptors. Strasbourg: Council of Europe. Council of Europe. (2019a). Common European framework of reference for languages: Learning, teaching, assessment (CEFR). https://www.coe.int/en/web/common-european-frameworkreference-languages/home. Accessed September 2, 2019. Council of Europe. (2019b). European centre for modern languages of the council of Europe. https://www.ecml.at. Accessed September 2, 2019. Council of Europe. (2019c). European language portfolio. https://www.coe.int/en/web/portfolio. Accessed November 29, 2019. Council of Europe. (2019d). Platform of resources and references for plurilingual and intercultural education. www.coe.int/lang-platform. Accessed September 2, 2019. Council of Europe. (2019e). A framework of reference for pluralistic approaches to languages and cultures. http://carap.ecml.at/. Accessed September 2, 2019. Council of Europe (2019f). Reference level descriptions. https://www.coe.int/en/web/commoneuropean-framework-reference-languages/reference-level-descriptions. Accessed September 6, 2019. Council of Europe & Association of Language Testers in Europe (2011). Manual for language test development and examining, for use with the CEFR. Strasbourg, Council of Europe. https:// www.alte.org/resources/Documents/ManualLanguageTest-Alte2011_EN.pdf. Accessed March 12, 2019. Eaquals. (2019). Our aims and mission. https://www.eaquals.org/about-eaquals/our-aims-andmission/. Accessed September 6, 2019. Figueras, N. (2012). The impact of the CEFR. English Language Teaching Journal, 66(4), 477– 485. Goullier, F. (2007). Intergovernmental language policy forum. The common European framework of reference for languages (CEFR) and the development of language policies: Challenges and responsibilities. Strasbourg: Council of Europe. Goullier, F. (2012). Policy perspectives from France. In M. Byram & L. Parmenter (Eds.), The Common European Framework of Reference: The globalisation of language education policy (pp. 37–44). Bristol: Multilingual Matters. Hulstijn, J. (2014). The common European framework of reference for languages: A challenge for applied linguistics. International Journal of Applied Linguistics, 165(1), 3–18. Kühn, B., & Perez Cavana, M. L. (Eds.). (2012). Perspectives from the European language portfolio: Learner autonomy and self-assessment. Oxon: Routledge. Language Learning in Higher Education (LLHE). (2011). Special issue: The role of the CEFR and the ELP in higher education, 1(1), i–247. Lenz, P., & Schneider, G. (2004a). A bank of descriptors for self-assessment in European language portfolios. Strasbourg: Council of Europe. https://rm.coe.int/168045b15f. Accessed January 15, 2017. Lenz, P., & Schneider, G. (2004b). Introduction to the bank of descriptors for self-assessment in European language portfolios. Strasbourg: Council of Europe. https://rm.coe.int/168045b15d. Accessed January 15, 2017. Little, D. (2011). The European language portfolio: A guide to the planning, implementation and evaluation of whole-school projects. Strasbourg: Council of Europe. Little, D. (2012). The European language portfolio: history, key concepts, future prospects. In B. Kühn & M. L. Perez Cavana (Eds.), Perspectives from the European language portfolio (pp. 7–21). Oxon: Routledge. Little, D., Hodel, H., Kohonen, V., Meijer, D., & Perclová, R. (2007). 
Preparing teachers to use the European language portfolio: Arguments, materials and resources. Strasbourg: Council of Europe. Little, D., & Perclová, R. (2001). The European language portfolio: a guide for teachers and teacher trainers. Strasbourg: Council of Europe.
Martyniuk, W., & Noijons, J. (2007). Executive summary of results of a survey on the use of the CEFR at national level in the Council of Europe Member States. Strasbourg: Council of Europe. https://www.coe.int/t/dg4/linguistic/Source/Surveyresults.pdf. Accessed January 30, 2019. Modern Language Journal (MLJ). (2011). MLJ Perspectives, 91(4), 641–685. Moeller, A. J., Theiler, J., & Wu, C. (2012). Goal setting and student achievement: A longitudinal study. Modern Language Journal, 96, 153–169. Newby, D., Fenner, A.-B., & Jones, B. (Eds.). (2011). Using the European portfolio for student teachers of languages (EPOSTL). Graz, Austria: European Centre for Modern Languages. North, B., Ortega, A. & Sheehan, S. (2015, 2010). A Core Inventory for General English. British Council/ EAQUALS. https://www.teachingenglish.org.uk/article/british-council-eaquals-coreinventory-general-english-0. Accessed March 29, 2019. North, B. (2014). The CEFR in practice. Cambridge: Cambridge University Press. North, B., Angelova, M., Jarosz, E., & Rossner, R. (2018). Language course planning. Oxford: Oxford University Press. Parmenter, L., & Byram, M. (2012). Commentary on cases beyond Europe. In M. Byram & L. Parmenter (Eds.), The Common European Framework of Reference: The globalisation of language education policy (pp. 158–161). Bristol: Multilingual Matters. Piccardo, E. (2014). From communicative to action-oriented: A research Pathway. Curriculum Services Canada. https://transformingfsl.ca/en/components/from-communicative-to-actionoriented-a-research-pathway/. Accessed February 6, 2018. Piccardo, E., Berchoud, M., Cignatta, T., Mentz, O., & Pamula, M. (2011). Pathways through assessment, learning and teaching in the CEFR. Graz, Austria: European Centre for Modern Languages. Piccardo, E., & North, B. (2019). The Action-oriented approach: A dynamic vision of language education. Bristol: Multilingual Matters. Piccardo, E., North, B., & Maldina, E. (2019). Innovation and reform in course planning, teaching, and assessment: The CEFR in Canada and Switzerland, a comparative study. Canadian Journal of Applied Linguistics, 22(1), 103–128. Richards, J. C. (2013). Curriculum approaches in language teaching: Forward, central and backward design. RELC Journal, 44(1), 5–33. Runnels, J. (2015). CEFR usage and preliminary survey results. FLP SIG Newsletter, 14, 8–18. https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxmbHBza Wd8Z3g6MzdlYzMzNmNkYjY5MzJiNA. Accessed January 19, 2019. Schärer, R. (2000). Final report: European Language Portfolio pilot project phase 1998–2000. Strasbourg, France: Council of Europe. http://www.coe.int/t/dg4/education/elp/elp-reg/Source/ History/ELP_ PilotProject_FinalReport_EN.pdf. Accessed June 26, 2017. Schärer, R. (2007). European language portfolio: Interim report 2007. Council of Europe. https:// rm.coe.int/16804595a6. Accessed July 21, 2017. Schärer, R. (2012). Between vision and reality: Reflection on twenty years of a common European project. In B. Kühn, & M. Perez Cavana. (Eds.), Perspectives from the European language portfolio: Learner autonomy and self-assessment (pp. 45–58). Oxon: Routledge. Schneider, G., & Lenz, P. (2001). European language portfolio: Guide for developers. Modern Languages Division. https://rm.coe.int/1680459fa3. Accessed June 26, 2017. Schneider, G. & North, B. (2000). Fremdsprachen können—was heisst das? Skalen zur Beschreibung, Beurteilung und Selbsteinschätzung der fremdsprachlichen Kommunikationsfähigkeit. 
Chur/Zürich: Nationales Forschungsprogramm 33: Wirksamkeit unserer Bildungssysteme, Verlag Ruegger. Schneider, G., North, B., & Koch, L. (2000). Portfolio européen des langues/Europäisches Sprachenportfolio/ Portfolio europeo delle lingue/European Language Portfolio. Berne: Berner Lehrmittel- und Medienverlag.
Trim, J. L. M. (Ed.) (2001). Common European framework of reference for languages: Learning, teaching, assessment. A guide for users. Strasbourg: Council of Europe. https://rm.coe.int/ CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=0900001680697 848. Accessed February 12, 2018. Trim, J. L. M. (2012). The common European framework of reference for languages and its background: A case study of cultural politics and educational influences. In M. Byram & L. Parmenter (Eds.), The common European framework of reference: The globalisation of language education policy (pp. 14–35). Bristol: Multilingual Matters. van Ek, J. A. (1976). The threshold level in a European unit/credit system for modern language learning by adults. Strasbourg: Council of Europe. van Ek, J. A., & Trim, J. L. M. (2001). Threshold 1990. Cambridge: Cambridge University Press. Ziegler, N., & Moeller, A. J. (2012). Increasing self-regulated learning through the LinguaFolio. Foreign Language Annals, 43, 330–348.
2 Curriculum and Course Design
This chapter first identifies and explains the parts of the Common European Framework of Reference for Languages: learning, teaching, assessment (CEFR, COE 2001) and the CEFR Companion Volume (CEFR/CV, COE 2018) that are essential for curriculum and course design and then demonstrates how to utilize them for these two purposes. Section 2.1 sets out the role of the CEFR in designing curriculum and courses. Then, Sect. 2.2 introduces relevant parts of the CEFR for curriculum design—the holistic view of Common Reference Levels in Sect. 2.2.1 and more detailed illustrative descriptor scales for course design in Sect. 2.2.2. Section 2.3 guides readers step by step through how to utilize the reference levels and illustrative descriptor scales. It explains key concepts behind each descriptor and how to modify them to fit local needs in a principled way. Section 2.4 provides exercises for designing courses, and Sect. 2.5 offers case studies which demonstrate various ways to implement ‘Can Do’ descriptors in local contexts.
2.1 The Role of the CEFR in Curriculum and Course Design
The most salient characteristic of a CEFR-informed curriculum and course design is that it is goal-oriented.1 Concrete and transparent goals and objectives based on learners’ needs are first set out and followed by other details including daily lesson plans, tasks to be performed in the classroom, teaching materials, and assessment methods. The term curriculum in this chapter is used to refer to the overall plan or design for language courses in an institution or program (cf. Diamond 2008; North
1 It is described as “backward design” in the CEFR/CV (COE 2018: 26–27). Richards (2013) explains the approach by comparing it with those of forward and central design.
et al. 2018; Richards 2013). A curriculum presents an overall view and structure of a language program, which consists of a number of courses. Courses are designed following the goals of the curriculum, which enables courses to have coherence and rational sequencing. Designing curricula and courses demands serious consideration and requires making decisions concerning a number of issues, including:
(i) What are the overall goals of language study in the institution?
(ii) What are the domains in which learners will most likely use the target language?
(iii) What types of language activities are learners expected to engage in?
(iv) What specific tasks are learners expected to perform?
(v) What proficiency level are learners expected to attain in a given learning period?
The CEFR functions as a conceptual tool to facilitate decision making on these matters. The CEFR’s various types of descriptor scales provide a foundation for a rational sequence and coherence of courses within a curriculum. They also help curriculum and course designers articulate curriculum goals and course objectives in transparent and concrete terms. This chapter introduces different types of descriptor scales to be utilized for curriculum and course design and explains in detail how to use them for these purposes.
2.2 Types of Descriptor Scales
The CEFR (COE 2001) contains two distinct sets of descriptor scales. The first set, presented in Chap. 3 of the CEFR, gives an overall, holistic view of the Common Reference Levels (A1 through C2). Three holistic scales are presented: the global scale, the self-assessment grid, and the qualitative aspects of spoken language use. The second set of descriptor scales is presented in Chaps. 4 and 5 of the CEFR and is made up of a wide variety of illustrative scales. The scales in Chap. 4 contain concrete and detailed descriptors for communicative language activities and strategies across four modes of communication: reception, production, interaction, and mediation. The scales in Chap. 5 provide descriptors for communicative language competences and plurilingual and pluricultural competences. The illustrative scales from Chaps. 4 and 5 of the CEFR were updated in the CEFR/CV (COE 2018). In this chapter, the holistic scales are utilized for curriculum design, while the detailed illustrative scales are utilized for course design.
2.2.1 Common Reference Levels for Curriculum Design
A curriculum, which comprises a set of courses, describes an overview of the language study at an institution during a given time period and states the ultimate goals of that study. To this end, the CEFR's holistic scales are of great help. The CEFR contains three different holistic scales: a global scale, a self-assessment grid, and the qualitative aspects of spoken language use. In designing curricula, the first two holistic scales are relevant. The global scale presents an overview of the CEFR's six proficiency levels (A1 through C2), as shown in Table 2.1. The self-assessment grid describes the overall proficiency of the six levels in four major modes of language activities: reception, production, interaction, and mediation, as described in Appendix 1. The global scale and the self-assessment grid are used for determining the exit level of the language program and stating the ultimate goals of the curriculum. Designing a curriculum with reference to these holistic scales lays a structural foundation that helps to ensure a rational sequence and coherence among the courses contained therein.
Table 2.1 Common reference levels: global scale (COE 2001: 24)

Proficient user
C2: Can understand with ease virtually everything heard or read. Can summarize information from different spoken and written sources, reconstructing arguments and accounts in a coherent presentation. Can express himself/herself spontaneously, very fluently, and precisely, differentiating finer shades of meaning even in more complex situations.
C1: Can understand a wide range of demanding, longer texts, and recognize implicit meaning. Can express himself/herself fluently and spontaneously without much obvious searching for expressions. Can use language flexibly and effectively for social, academic, and professional purposes. Can produce clear, well-structured, detailed text on complex subjects, showing controlled use of organizational patterns, connectors, and cohesive devices.

Independent user
B2: Can understand the main ideas of complex text on both concrete and abstract topics, including technical discussions in his/her field of specialization. Can interact with a degree of fluency and spontaneity that makes regular interaction with native speakers quite possible without strain for either party. Can produce clear, detailed text on a wide range of subjects and explain a viewpoint on a topical issue giving the advantages and disadvantages of various options.
B1: Can understand the main points of clear standard input on familiar matters regularly encountered in work, school, leisure, etc. Can deal with most situations likely to arise while traveling in an area where the language is spoken. Can produce simple connected text on topics which are familiar or of personal interest. Can describe experiences and events, dreams, hopes and ambitions and briefly give reasons and explanations for opinions and plans.

Basic user
A2: Can understand sentences and frequently used expressions related to areas of most immediate relevance (e.g., very basic personal and family information, shopping, local geography, employment). Can communicate in simple and routine tasks requiring a simple and direct exchange of information on familiar and routine matters. Can describe in simple terms aspects of his/her background, immediate environment, and matters in areas of immediate need.
A1: Can understand and use familiar everyday expressions and very basic phrases aimed at the satisfaction of needs of a concrete type. Can introduce himself/herself and others and can ask and answer questions about personal details such as where he/she lives, people he/she knows, and things he/she has. Can interact in a simple way provided the other person talks slowly and clearly and is prepared to help.
2.2.2 Illustrative Descriptor Scales for Course Design The illustrative descriptor scales of the CEFR/CV (COE 2018), an expanded version of the CEFR, help course designers to consider and decide what language activities learners will be engaged in to perform various tasks using the target language. They also identify the linguistic competences and strategies necessary to perform them.2 The illustrative descriptor scales provide course designers with concrete options of possible language activities as well as the competences and the strategies necessary to perform those activities. The scales are organized under three major categories: communicative language activities and strategies, communicative language competences, and plurilingual and pluricultural competences, as presented in Fig. 2.1. Communicative language competences specify linguistic, sociolinguistic, and pragmatic competences required to carry out communicative language tasks and activities. Communicative language strategies refer to the ability to use communicative language competences and other general competences strategically to effectively and efficiently perform communicative language activities. Plurilingual and pluricultural competences are required to achieve communicative activities successfully and smoothly as a member of a diverse society with people of different social, political, cultural, and linguistic backgrounds. Under each of these three major categories, a vast number of more concrete and specific descriptors are provided. In the subsequent three sections, the descriptor scales within each of these categories are introduced.
2 The illustrative scales referred to for course design in this chapter are all from the CEFR/CV (COE 2018). Only the global scale referred to in Sect. 2.2.1 is from the CEFR (COE 2001), since an updated global scale is not provided in the CEFR/CV.
Fig. 2.1 Organization of illustrative descriptors: communicative language activities and strategies; communicative language competences; plurilingual and pluricultural competences
2.2.2.1 Illustrative Scales for Communicative Language Activities and Strategies
Illustrative scales for communicative language activities are organized under four modes of communication: reception, production, interaction, and mediation. Illustrative scales for communicative language strategies are presented in Fig. 2.2.

Fig. 2.2 Modes of communicative language activities: reception, production, interaction, and mediation activities and strategies
Reception Activities and Strategies
Reception activities are classified into three major categories: listening comprehension, reading comprehension, and audiovisual reception. Each of these categories contains an overall scale and scales for more concrete and subcategorized reception activities. Along with these activity descriptors, a reception strategy scale is provided, as shown in Table 2.2.

Table 2.2 Reception activities and reception strategies
Reception activities:
Listening comprehension
• Overall listening comprehension
• Understanding conversation between other speakers
• Listening as a member of a live audience
• Listening to announcements and instructions
• Listening to the radio and audio recordings
Reading comprehension
• Overall reading comprehension
• Reading correspondence
• Reading for orientation
• Reading for information and argument
• Reading instructions
• Reading as a leisure activity
Audiovisual reception
• Watching TV, film, and video
Reception strategy:
• Identifying cues and inferring (spoken and written)
Production Activities and Strategies
Production activities are further divided into two types of activities: spoken production and written production. Spoken production contains six scales, and written production includes three scales. Strategies for these production activities contain three scales, as listed in Table 2.3.

Table 2.3 Production activities and production strategies
Production activities:
Spoken production
• Overall spoken production
• Sustained monologue: Describing experience
• Sustained monologue: Giving information
• Sustained monologue: Putting a case (e.g., in a debate)
• Public announcement
• Addressing audiences
Written production
• Overall written production
• Creative writing
• Written reports and essays
Production strategies:
• Planning
• Compensating
• Monitoring and repair
Interaction Activities and Strategies
Interaction activities include three types of interaction: spoken interaction, written interaction, and online interaction. For each of these activities, scaled descriptors are provided. Spoken interaction involves ten scales, written interaction three scales, and online interaction two scales. Strategy scales for these interaction activities consist of three strategies: turn-taking, cooperating, and asking for clarification. Table 2.4 lists all of the scales.

Table 2.4 Interaction activities and interaction strategies
Interaction activities:
Spoken interaction
• Overall spoken interaction
• Understanding an interlocutor
• Conversation
• Informal discussion (with friends)
• Formal discussion (meetings)
• Goal-oriented cooperation
• Obtaining goods and services
• Information exchange (a)
• Interviewing and being interviewed
• Using telecommunications (b)
Written interaction
• Overall written interaction
• Correspondence
• Notes, messages, and forms
Online interaction
• Online conversation and discussion
• Goal-oriented transactions and collaboration
Interaction strategies:
• Taking the floor (turn-taking)
• Cooperating
• Asking for clarification

(a) Descriptors which define more monologic speech were moved to the sustained monologue: giving information scale.
(b) A subactivity using telecommunications is added in the CEFR/CV.
Mediation Activities and Strategies
Mediation includes three types of activities: mediating a text, mediating concepts, and mediating communication. Each category of activities includes illustrative scales for more concrete mediation activities. For instance, mediating a text3 includes scales for eight activities, including relaying specific information, explaining data, and note-taking (lectures, seminars, meetings, etc.).4 Mediating concepts includes four subactivities.5 Finally, mediating communication provides scales for such activities as facilitating pluricultural space, acting as an intermediary in informal situations (with friends and colleagues), and facilitating communication in delicate situations such as those involving disagreement. Mediation strategies contain two types of strategies: strategies to explain a new concept and strategies to simplify a text. The former contains three scales, while the latter contains two. Table 2.5 lists all of the mediation activity and strategy scales.

Table 2.5 Mediation activities and strategies
Mediation activities:
Mediating a text
• Relaying specific information in speech/in writing
• Explaining data
• Processing text
• Translating a written text in speech
• Translating a written text in writing
• Note-taking (lectures, seminars, meetings, etc.)
• Expressing a personal response to creative texts (including literature)
• Analysis and criticism of creative texts (including literature)
Mediating concepts
• Facilitating collaborative interaction with peers
• Collaborating to construct meaning
• Managing plenary and group interaction
• Encouraging conceptual talk
Mediating communication
• Facilitating pluricultural space
• Acting as intermediary in informal situations (with friends and colleagues)
• Facilitating communication in delicate situations and disagreement
Mediation strategies:
Strategies to explain a new concept
• Linking to previous knowledge
• Adapting language
• Breaking down complicated information
Strategies to simplify a text
• Amplifying a dense text
• Streamlining a text

3 Descriptors for three subactivities in this category, namely relaying specific information, explaining data, and processing text, are an elaboration of concepts introduced in processing text under Text in the original CEFR (2001: Sect. 4.6.3); cf. COE 2018: 51.
4 This descriptor scale is a modified version of note-taking, which appears under processing text in the CEFR (2001: 96).
5 Descriptors for three subactivities in this category, namely facilitating collaborative interaction with peers, collaborating to construct meaning, and encouraging conceptual talk, are a further development of concepts in the existing cooperating strategies under the interaction strategies scales (cf. COE 2018: 51).
Table 2.6 Communicative language competences
Linguistic competence:
• General linguistic range
• Vocabulary range
• Grammatical accuracy
• Vocabulary control
• Phonological control (overall phonological control; sound articulation; prosodic features)
• Orthographic control
Sociolinguistic competence:
• Sociolinguistic appropriateness
Pragmatic competence:
• Flexibility
• Turn-taking
• Thematic development
• Coherence and cohesion
• Propositional precision
• Spoken fluency
2.2.2.2 Communicative Language Competences
Illustrative scales for communicative language competences are provided in three linguistic subfields: linguistic, sociolinguistic, and pragmatic competences. Linguistic competence consists of two major criteria: range and control. The former criterion specifies the breadth of linguistic knowledge in syntax and lexicon. The latter indicates abilities to execute such linguistic knowledge in performing activities, subcategorized into grammatical accuracy, vocabulary control, phonological control, and orthographic control. Phonological control scales are further divided into three categories: overall phonological control, sound articulation, and prosodic features. Sociolinguistic competence includes a scale for sociolinguistic appropriateness. Pragmatic competence contains six scales: flexibility, turn-taking, thematic development, coherence and cohesion, propositional precision, and spoken fluency, as shown in Table 2.6.

2.2.2.3 Plurilingual and Pluricultural Competences
Illustrative scales for plurilingual and pluricultural competences are developed in the CEFR/CV (COE 2018: 157–162) on the basis of the discussion on plurilingualism and pluriculturalism in the CEFR. The illustrative scales are provided in three areas: building on pluricultural repertoire, plurilingual comprehension, and building on plurilingual repertoire.
2.3 Application of the CEFR: How to Utilize CEFR Descriptors for Curriculum and Course Design
This section explains how the descriptor scales discussed in the previous section can be used as resources for curriculum and course design. Section 2.3.1 explains the use of the two holistic scales, the global scale and the self-assessment grid, for curriculum design. Section 2.3.2 then elucidates, step by step, how to use a wide variety of illustrative descriptor scales for course design.
2.3.1 Curriculum Design: Using Common Reference Levels
The foremost part of curriculum design is to state the ultimate goals of a language program in an institution, which function as a basis for course design. Courses in a curriculum need to be structured coherently and designed to achieve their goals effectively. Before articulating the goals of a curriculum, a curriculum designer needs to be familiar with the salient properties of each proficiency level. For this purpose, he/she can refer to the global scale, which summarizes distinctive characteristics of the different Common Reference Levels (COE 2001, 2018), as well as to Matheidesz and Heyworth (2007) and North (2014). Essential properties of each level, as described by Matheidesz and Heyworth (2007: 26–31), are provided in Table 2.7 (see footnote 6).
Through examining properties unique to each proficiency level, a curriculum designer can determine an overall exit level for a language program or specify different exit levels for specific language skills. Then, he/she may use global scale descriptors from that level to state the goals of the curriculum. For instance, if a designer decides on the B2 level as the exit level, he/she may use all or some of the B2 descriptors below:

Global scale descriptors: B2 (COE 2001: 24)
• Can understand the main ideas of complex text on both concrete and abstract topics, including technical discussions in his/her field of specialization.
• Can interact with a degree of fluency and spontaneity that makes regular interaction with native speakers quite possible without strain for either party.7
• Can produce clear, detailed text on a wide range of subjects and explain a viewpoint on a topical issue giving the advantages and disadvantages of various options.
6 Although implicit, it is clear from the descriptions of each criterial level that they depict distinct features of different domains of communicative language activities. The lower levels (A1–B1) relate more to daily use of a target language, while the upper levels (B2–C2) are more concerned with academic use of a target language (Little 2011).
7 Although the term “native speaker” remains in this descriptor, the CEFR does not regard the performance of an idealized “native speaker” as a point of reference for a certain proficiency level. Hence, such terminology is eliminated in CEFR/CV descriptors.
Table 2.7 Salient features of the six CEFR levels
C2: This is the level of precision and ease with the language, conveying finer shades of meaning precisely by using a wide range of modification devices accurately. Speakers show flexibility in using different linguistic forms to reformulate ideas, to give emphasis, and to differentiate and eliminate ambiguity (Matheidesz and Heyworth 2007: 31).
C1: Is the level of fluent, well-structured language, with a good command of a broad lexical repertoire allowing gaps to be overcome with circumlocutions. Speakers can express themselves fluently and spontaneously, and produce clear, smoothly flowing, well-structured speech, using organizational patterns, connectors, and cohesive devices (Matheidesz and Heyworth 2007: 30).
B2: At this level, effective argument is possible, and speakers can account for and sustain opinions in discussion, and explain viewpoints, advantages, and disadvantages. Users can hold their own in social discourse, with a degree of fluency sufficient for regular interaction with native speakers, and can adjust to changes of direction, style, and emphasis (Matheidesz and Heyworth 2007: 29).
B1: Is concerned with maintaining interaction and getting across what one wants, giving and seeking personal views and opinions, expressing main points comprehensibly, and keeping discourse going, even though there may be frequent pauses. At this level, users can cope flexibly with problems in everyday life, dealing with most situations likely to arise when traveling, and can enter unprepared into conversations on familiar topics (Matheidesz and Heyworth 2007: 28).
A2: Includes the majority of descriptors stating social functions: can greet people, ask how they are and react to news, handle short social exchanges, discuss what to do and where to go, and make arrangements. There are also descriptors on getting out and about, making simple transactions in shops, banks, etc., and getting simple information about travel and services (Matheidesz and Heyworth 2007: 27).
A1: The point at which the learner can interact in a simple way, ask and answer simple questions about themselves, and respond to statements in areas of immediate need, rather than relying purely on a rehearsed repertory of phrases (Matheidesz and Heyworth 2007: 26).
If a designer aims at the B1 level, he/she may refer to the following B1 global scale descriptors:

Global scale descriptors: B1 (COE 2001: 24)
• Can understand the main points of clear standard input on familiar matters regularly encountered in work, school, leisure, etc.
• Can deal with most situations likely to arise while traveling in an area where the language is spoken.
• Can produce simple connected text on topics which are familiar or of personal interest.
• Can describe experiences and events, dreams, hopes and ambitions and briefly give reasons and explanations for opinions and plans.

Since the B1 level is characterized as operational usage of a target language in daily life, the descriptors may need to be modified if the curriculum aims at language use in contexts other than daily life, such as academic contexts. To
contextualize the descriptors for B1 given above to fit an academic context, they can be modified in a number of ways, as exemplified below. First, the topic of the input may need to change, and input types can be specified. Second, the situation where language activities are most likely to take place is specified. Third, the topic of the production activity may be altered. Finally, the fourth original descriptor may be judged irrelevant to the academic context and thus deleted. Modified descriptors for academic language use at the B1 level are exemplified below, where changes are indicated in italics:

Modified global scale descriptors for an academic context: B1
• Can understand the main points of clear standard written or oral input of subjects in his/her field of interest.
• Can deal with most situations likely to arise in a language classroom.
• Can produce clear, detailed text on topics in his/her field of study and make a presentation about it.

If a designer selects the A2 level as a target level, he/she may use all or some of the global descriptors for A2 to state the overall goals of the curriculum:

Global scale descriptors: A2 (COE 2001: 24)
• Can understand sentences and frequently used expressions related to areas of most immediate relevance (e.g., very basic personal and family information, shopping, local geography, employment).
• Can communicate in simple and routine tasks requiring a simple and direct exchange of information on familiar and routine matters.
• Can describe in simple terms aspects of his/her background, immediate environment, and matters in areas of immediate need.

Because the global scale only summarizes the general properties of each proficiency level, a curriculum designer may need more concrete specifications of what learners are to be able to do in different modes of language activities.8 The self-assessment grid, which illustrates six levels of proficiency in four modes of communication (Appendix 1), may be explored. Comparing the global scale with the self-assessment grid, a curriculum designer can identify what language activities each of the global scale descriptors subsumes. For instance, each of the A2 global descriptors shown above includes language activities in different modes of communication, indicated by an arrow in the following:
8 In designing a curriculum, the designer also has to consider how many levels of the given mode of language activities are necessary to achieve the goal of the curriculum. For this decision, he/she should take into consideration the proficiency levels of incoming students as well as the length of the language program for which the curriculum is being designed.
Language activities subsumed in the global scale descriptors for A2
• Can understand sentences and frequently used expressions related to areas of most immediate relevance (e.g., very basic personal and family information, shopping, local geography, employment) → reception activities (listening and reading).
• Can communicate in simple and routine tasks requiring a simple and direct exchange of information on familiar and routine matters → spoken interaction.
• Can describe in simple terms aspects of his/her background, immediate environment, and matters in areas of immediate need → production activities (spoken and written production).

The first descriptor above summarizes reception activities, including listening and reading activities; the second descriptor points to interaction activities; and the final one subsumes spoken and written production activities. Once language activities are identified, a designer may want to know more detailed descriptions of each activity. Then, he/she should examine the self-assessment grid. A2 descriptors for language activities in the four major modes (reception, interaction, production, and mediation) are provided in Table 2.8.
Table 2.8 Self-assessment grid descriptors for A2 (COE 2018: 167–170)
Listening: I can understand phrases and the highest frequency vocabulary related to areas of most immediate personal relevance (e.g., very basic personal and family information, shopping, local geography, employment). I can catch the main point in short, clear, simple messages and announcements.
Reading: I can read very short, simple texts. I can find specific, predictable information in simple everyday material such as advertisements, prospectuses, menus, and timetables, and I can understand short simple personal letters.
Spoken interaction: I can communicate in simple and routine tasks requiring a simple and direct exchange of information on familiar topics and activities. I can handle very short social exchanges even though I cannot usually understand enough to keep the conversation going myself.
Written and online interaction: I can engage in basic social interaction, expressing how I feel, what I am doing, or what I need, and responding to comments with thanks, apology, or answers to questions. I can complete simple transactions such as ordering goods, can follow simple instructions, and can collaborate in a shared task with a supportive interlocutor.
Spoken production: I can use a series of phrases and sentences to describe in simple terms my family and other people, living conditions, my educational background, and my present or most recent job.
Written production: I can write short, simple notes and messages relating to matters in areas of immediate need. I can write a very simple personal letter, for example thanking someone for something.
Mediating a text: I can convey the main point(s) involved in short, simple texts on everyday subjects of immediate interest provided these are expressed clearly in simple language.
Mediating concepts: I can collaborate in simple, practical tasks, asking what others think, making suggestions and understanding responses, provided I can ask for repetition or reformulation from time to time. I can make suggestions in a simple way to move the discussion forward and can ask what people think of certain ideas.
Mediating communication: I can contribute to communication by using simple words to invite people to explain things, indicating when I understand and/or agree. I can communicate the main point of what is said in predictable, everyday situations about personal wants and needs. I can recognize when speakers disagree or when difficulties occur and can use simple phrases to seek compromise and agreement.
The global scale descriptors for the B1 level subsume the following modes of communicative language activities, indicated by arrows:

Language activities subsumed in the global scale descriptors for B1
• Can understand the main points of clear standard input on familiar matters regularly encountered in work, school, leisure, etc. → reception activities (listening and reading).
• Can deal with most situations likely to arise while traveling in an area where the language is spoken → spoken interaction.
• Can produce simple connected text on topics which are familiar or of personal interest → written production.
• Can describe experiences and events, dreams, hopes and ambitions and briefly give reasons and explanations for opinions and plans → production activities (spoken and written production).

B1-level self-assessment grid descriptors are given in Table 2.9.

Table 2.9 Self-assessment grid descriptors for B1 (COE 2018: 167–170)
Listening: I can understand the main points of clear standard speech on familiar matters regularly encountered in work, school, leisure, etc. I can understand the main points of many radio or TV programs on current affairs or topics of personal or professional interest when the delivery is relatively slow and clear.
Reading: I can understand texts that consist mainly of high frequency everyday or job-related language. I can understand the description of events, feelings, and wishes in personal letters.
Spoken interaction: I can deal with most situations likely to arise while traveling in an area where the language is spoken. I can enter unprepared into conversation on topics that are familiar, of personal interest, or pertinent to everyday life (e.g., family, hobbies, work, travel, and current events).
Written and online interaction: I can interact about experiences, events, impressions, and feelings provided that I can prepare beforehand. I can ask for or give simple clarifications and can respond to comments and questions in some detail. I can interact with a group working on a project, provided there are visual aids such as images, statistics, and graphs to clarify more complex concepts.
Spoken production: I can connect phrases in a simple way in order to describe experiences and events, my dreams, hopes, and ambitions. I can briefly give reasons and explanations for opinions and plans. I can narrate a story or relate the plot of a book or film and describe my reactions.
Written production: I can write straightforward connected text on topics which are familiar or of personal interest.
Mediating a text: I can convey information given in clear, well-structured informational texts on subjects that are familiar or of personal or current interest.
Mediating concepts: I can help define a task in basic terms and ask others to contribute their expertise. I can invite other people to speak, to clarify the reason(s) for their views, or to elaborate on specific points they made. I can ask appropriate questions to check understanding of concepts and can repeat back part of what someone has said to confirm mutual understanding.
Mediating communication: I can support a shared communication culture by introducing people, exchanging information about priorities, and making simple requests for confirmation and/or clarification. I can communicate the main sense of what is said on subjects of personal interest, provided that speakers articulate clearly and that I can pause to plan how to express things.
The self-assessment grid descriptors state what learners are expected to be able to do more concretely than those in the global scale. It is important for a curriculum designer to notice the different degrees of specification in the descriptions of the global scale and those of the self-assessment grid and to use them appropriately. He/she may use the former descriptors to demonstrate an overview of a curriculum for incoming students, their parents, and other stakeholders, and the latter descriptors to plan and discuss courses with other teachers. The process of course design will be explicated in detail in the next section.
2.3.2 Course Design: Using Illustrative Descriptor Scales
The foremost part of backward course design is to first articulate the concrete objectives or learning outcomes of a course, which function as a basis for preparing a syllabus that includes specific lesson plans for the course. Illustrative descriptor scales can play an essential guiding role for a course designer in articulating course objectives. He/she can select the illustrative descriptor scales most appropriate for the target learners and then accommodate them to local contexts and constraints. The following three subsections illustrate how to set up course objectives and articulate them. The preparation consists of three stages: (1) selection of the most appropriate scaled descriptors, (2) analysis of the selected descriptors, and (3) modification of them to
suit the course being planned. These three stages of course design are demonstrated in the following subsections. First, the process of how to select descriptors from three different categories of illustrative scales is demonstrated in Sects. 2.3.2.1–2.3.2.3. Then, Sect. 2.3.2.4 analyzes selected descriptors, which is an important step before contextualizing descriptors in a principled way. Finally, Sect. 2.3.2.5 demonstrates how to contextualize them to fit the course one is planning.
2.3.2.1 Selection of Illustrative Descriptors: Communicative Language Activities
Descriptor scales in the CEFR and the CEFR/CV range widely across different types of language activities; altogether 53 scales are available. It is, thus, more efficient and effective to restrict the scope of the scales and target the relevant illustrative descriptors for the course being designed. Options for selection can be constrained by three parameters: the domain of social organization where language activities take place, the mode of communicative language activities, and the target proficiency level(s) of the course. Hence, the following questions are important:
(1) In what domain will learners most likely perform language activities?
(2) What modes of communicative language activities will learners most likely perform?
(3) What is the proficiency level that learners are expected to reach?
An answer to each question subsequently narrows the range of descriptors and facilitates the selection process. That is, the domain constrains modes of language activities. The mode of language activities constrains options of sublanguage activities. An exit proficiency level points to the specific level of descriptors for the selected sublanguage activities. The three parameters are discussed individually below.

Parameter 1: Domain
The domain is the area where communicative language activities take place. The CEFR classifies domains into four types: the personal domain, the public domain, the occupational domain, and the educational domain. They are defined as follows:
• The personal domain, in which the person concerned lives as a private individual, centered on home life with family and friends, and engages in individual practices such as reading for pleasure, keeping a personal diary, pursuing a special interest or hobby, etc.;
• The public domain, in which the person concerned acts as a member of the general public, or of some organization, and is engaged in transactions of various kinds for a variety of purposes;
• The occupational domain, in which the person concerned is engaged in his or her job or profession;
• The educational domain, in which the person concerned is engaged in organized learning, especially (but not necessarily) within an educational institution (COE 2001: 45).
These domains constrain modes of likely communicative language activities. Hence, it is important to select the domain(s) of communicative language activities first when designing courses. Some modes of communication correlate more with one domain than another. For instance, the personal domain calls for interpersonal communicative skills necessary in interactional activities, while the educational domain may require a wider range of modes of communication, including reception, production, and mediation, depending on the specific academic course. The CEFR/CV (COE 2018: Appendix 6) exemplifies situations and contexts in each of the four domains involving online interaction and mediation. This is a helpful resource for course designers to envision situations and contexts associated with a domain in a specific language activity and to identify scaled descriptors according to the domains where learners are expected to perform an intended language task. Once a course designer selects the domain of language activities, that selection will in turn constrain options for modes of language activities, which include various types of sublanguage activities.

Parameter 2: Mode
Communicative language activities are organized under the four major modes of communication: reception, production, interaction, and mediation. Each mode subsumes a few subcategories, such as listening comprehension, reading comprehension, and audiovisual reception in the mode of reception, as shown in Fig. 2.3. Each subcategorized mode of communication includes illustrative descriptor scales for specific communicative language activities. For example, listening comprehension includes five scales, reading comprehension six, and audiovisual reception one. Likewise, each of the other subcategorized modes of communication includes several illustrative descriptor scales for communicative language activities. Among the CEFR/CV's (COE 2018) scales given in Table 2.10, a course designer needs to select the most relevant ones for the course he/she is designing:
Fig. 2.3 Modes of language activities and subcategorized activities: Reception (listening comprehension, reading comprehension, audiovisual reception); Production (spoken production, written production); Interaction (spoken interaction, written interaction, online interaction); Mediation (mediating a text, mediating concepts, mediating communication)
Table 2.10 Modes of language activities and scales for subcategorized activities

Reception activities
Listening comprehension
• Overall listening comprehension
• Understanding conversation between other speakers
• Listening as a member of a live audience
• Listening to announcements and instructions
• Listening to the radio and audio recordings
Reading comprehension
• Overall reading comprehension
• Reading correspondence
• Reading for orientation
• Reading for information and argument
• Reading instructions
• Reading as a leisure activity
Audiovisual reception
• Watching TV, film, and video

Production activities
Spoken production
• Overall spoken production
• Sustained monologue: Describing experience
• Sustained monologue: Giving information
• Sustained monologue: Putting a case (e.g., in a debate)
• Public announcement
• Addressing audiences
Written production
• Overall written production
• Creative writing
• Written reports and essays

Interaction activities
Spoken interaction
• Overall spoken interaction
• Understanding an interlocutor
• Conversation
• Informal discussion
• Formal discussion
• Goal-oriented cooperation
• Obtaining goods and services
• Information exchange
• Interviewing and being interviewed
• Using telecommunications
Written interaction
• Overall written interaction
• Correspondence
• Notes, messages, and forms
Online interaction
• Overall online interaction
• Online conversation and discussion
• Goal-oriented transactions and collaboration

Mediation activities
Mediating a text
• Relaying specific information
• Explaining data
• Processing text
• Translating a written text in speech
• Translating a written text in writing
• Note-taking
• Expressing a personal response to creative texts
• Analysis and criticism of creative texts
Mediating concepts
• Facilitating collaborative interaction with peers
• Collaborating to construct meaning
• Managing plenary and group interaction
• Encouraging conceptual talk
Mediating communication
• Facilitating pluricultural space
• Acting as intermediary in informal situations
• Facilitating communication in delicate situations and disagreement
The selection of a domain points to a certain mode of language activity, which in turn constrains options for concrete illustrative descriptor scales. If the personal domain, where an individual’s private life is concerned, is selected, the interactive mode of communication that includes interactional activities with family and friends may be selected. In addition, reception and production modes may be selected. The former includes reading for pleasure, and the latter may involve writing a diary. If a course designer selects the educational domain, all of the four modes of language activities are relevant. However, some descriptors for each mode would be more pertinent than others. North (2014: 62), for instance, indicates the following descriptor scales of the original CEFR are particularly relevant to tertiary-level language education:
• Reading for orientation;
• Reading for information and argument;
• Listening as a member of an audience;
• Making presentations;
• Formal meetings;
• Summarizing text;
• Writing reports.
In addition to the above scales, a course designer may include the following newly developed mediation activity scales from the CEFR/CV (COE 2018; see Sect. 2.2.2.1):
• Relaying specific information in speech/in writing;
• Explaining data in speech/in writing;
• Processing text in speech/in writing;
• Translating a written text in speech/in writing;
• Note-taking (lectures, seminars, meetings, etc.);
• Expressing a personal response to creative texts;
• Facilitating collaborative interaction with peers;
• Collaborating to construct meaning.
Parameter 3: Target Proficiency Level
The final selection process is to choose the most appropriate level of descriptors for the selected communicative language activities. This is done by determining the target level, or levels, of communicative language activities in a course from the six proficiency levels: A1, A2, B1, B2, C1, and C2. In addition to these six levels, the CEFR/CV added a level below A1, the pre-A1 level, which is mainly for primary and secondary education contexts. It is important to note that the CEFR levels of all selected communicative activities will not necessarily be at the same proficiency level. It is often the case that a higher level is aimed at for reception activities than for production activities. To determine a target level, it is important to consider the following questions:
(1) What is the proficiency level of the learners at the initial stage of the course?
(2) How many hours per semester are allocated to the course you are planning?
(3) How many hours of self-study are expected?
A course designer first needs to grasp the current proficiency level of the learners. He/she then calculates the hours allocated to the course as well as to self-study assigned outside the classroom. Based on the initial proficiency level of learners and the time allocated for the course, a course designer can determine a realistic goal for the course. Several studies estimate the hours required to move from one level to another. For example, van Ek and Trim (2001a, b) estimate 180 h to achieve the B1 level from the A2 level. Takala (2010a, b) introduces a study in Finland which indicates that 300 h of lessons and 100 h of self-study are necessary to reach B1 from A1.1. Furthermore, North (2014) introduces the Eurocentres data, which indicate that 400 h of study are necessary to reach B1 from zero. Although teaching methods and learning dynamics, as well as individual differences in learners' ages, aptitudes, motivation, and working memory, influence the amount of time necessary for a learner to reach a target level (DeKeyser 2012; Robinson 2002; Skehan 2012; Ushioda and Dörnyei 2012), a course designer should make a reasonable and realistic estimate of a target level for each target language skill. If the time allocated to a course is insufficient for learners to develop from one criterial level to another, subdivision of a criterial level using the plus symbol, such as A2+, B1+, and B2+, may be used. The CEFR (COE 2001: 31–32) suggests flexibility to meet local needs. It exemplifies a number of ways to branch a criterial level. One example subdivides the independent user (B) level into five narrower levels: A2+, B1, B1+, B2, and B2+, based on the Swiss empirical studies.9 Figure 2.4 demonstrates such a branching possibility.

It is worth reiterating that learners may be at different proficiency levels depending on the type of language activity. For instance, learners usually possess higher reception proficiency than production proficiency. As such, a course designer may set different proficiency targets for different language activities. Considering these points, a realistic level of proficiency should be determined. Setting all three parameters leads to specific illustrative descriptors. To summarize, the three steps in the selection process of communicative language activities are: (i) selection of domains, (ii) selection of descriptive scales of concrete communicative activities, and (iii) identification of target proficiency level(s) for the learners. Each of these three steps narrows the selection possibilities in the subsequent step. The selection of domains leads to more useful options for descriptive scales to be selected. Identification of the proficiency level points to a specific level of the selected scales. To exemplify the selection process, two scenarios are presented below, preceded by a rough sketch of the study-hours check described above.
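The arithmetic behind such hour-based estimates can be made explicit. The short sketch below is purely illustrative and is not part of the CEFR: the cumulative hour figures in it are hypothetical placeholders (loosely in the spirit of the estimates cited above), and a designer would substitute figures judged appropriate to the local context, teaching method, and learner population.

# Illustrative sketch: a rough feasibility check when setting a target level.
# The hour figures are hypothetical placeholders, not CEFR values; replace them
# with locally appropriate estimates before using the check for planning.

CUMULATIVE_HOURS = {"A1": 100, "A2": 200, "B1": 400, "B2": 600, "C1": 800}

def hours_needed(start: str, target: str) -> int:
    """Hours assumed necessary to move from `start` to `target` under the figures above."""
    return CUMULATIVE_HOURS[target] - CUMULATIVE_HOURS.get(start, 0)

def is_realistic(start: str, target: str, class_hours: float, self_study_hours: float) -> bool:
    """True if the available class and self-study hours cover the assumed requirement."""
    return class_hours + self_study_hours >= hours_needed(start, target)

# Example: 90 class hours plus 60 hours of self-study, starting from A2.
# Under these placeholder figures, A2 -> B1 (200 hours) is not realistic,
# so the designer might set A2+ objectives, or target B1 for reception only.
print(hours_needed("A2", "B1"))          # 200
print(is_realistic("A2", "B1", 90, 60))  # False

Such a check only formalizes the comparison of available hours with an assumed requirement; the pedagogical judgment about which figures to assume remains with the designer.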
⁹ For more information, refer to Appendix B in COE (2001).
Fig. 2.4 Branching possibility of criterial levels (COE 2001: 33): A Basic User (A1, A2, A2+); B Independent User (B1, B1+, B2, B2+); C Proficient User (C1, C2)

Table 2.11 Interaction activities
Spoken interaction: Understanding an interlocutor; Conversation; Informal discussion; Formal discussion; Goal-oriented cooperation; Obtaining goods and services; Information exchange; Interviewing and being interviewed; Using telecommunications
Written interaction: Correspondence; Notes, messages, and forms
Online interaction: Online conversation and discussion; Goal-oriented transactions and collaboration
Scenario 1
The first scenario aims at the development of communicative language skills for daily purposes. Target learners are at the upper primary or lower secondary education level, where a target language is formally taught. Real-world communicative language activities for daily purposes mostly fall in the personal and public domains and consist mainly of interactive activities. There will be some receptive and productive activities in these domains as well, but they are less frequent. The relevant sublanguage activities for this scenario include spoken interaction, written interaction, and online interaction, each of which includes a number of illustrative scales listed in Table 2.11. The first step is to select the domain of the main language activities. If a course designer selects the personal domain as the main context of the language activities, then the following three scales are most relevant: "Understanding an interlocutor", "Conversation", and "Notes, messages and forms". The next step is to identify specific scaled descriptors based on the target proficiency level. Assuming that the proficiency level is A2, the selected descriptors are given in Table 2.12. The selection process for this scenario is presented in Fig. 2.5.
Table 2.12 Descriptors for selected activities at the A2 level

Understanding an interlocutor (COE 2018: 84)
A2+
• Can understand enough to manage simple, routine exchanges without undue effort
• Can generally understand clear, standard speech on familiar matters directed at him/her, provided he/she can ask for repetition or reformulation from time to time
A2
• Can understand what is said clearly, slowly, and directly to him/her in simple everyday conversation; can be made to understand, if the speaker can take the trouble

Conversation (COE 2018: 85)
A2+
• Can establish social contact: greetings and farewells; introductions; and giving thanks
• Can generally understand clear, standard speech on familiar matters directed at him/her, provided he/she can ask for repetition or reformulation from time to time
• Can participate in short conversations in routine contexts on topics of interest
• Can express how he/she feels in simple terms, and express thanks
• Can ask for a favor (e.g., to lend something), can offer a favor, and can respond if someone asks him/her to do a favor for them
A2
• Can handle very short social exchanges but is rarely able to understand enough to keep conversation going of his/her own accord, though he/she can be made to understand if the speaker will take the trouble
• Can use simple everyday polite forms of greeting and address
• Can chat in simple language with peers, colleagues, or members of a host family, asking questions and understanding the answers relating to most routine matters
• Can make and respond to invitations, suggestions, and apologies
• Can express how he/she is feeling using very basic stock expressions
• Can say what he/she likes and dislikes

Notes, messages and forms (COE 2018: 95)
A2+
• Can exchange information by text message, email, or in short letters, responding to questions the other person had (e.g., about a new product or activity)
A2
• Can convey personal information of a routine nature, for example in a short email or letter introducing himself/herself
• Can write very simple personal letters expressing thanks and apology
• Can write short, simple notes, emails, and text messages (e.g., to send or reply to an invitation, to confirm or change an arrangement)
• Can write a short text in a greeting card (e.g., for someone's birthday or to wish them a Happy New Year)
Fig. 2.5 The selection process: scenario 1
Step 1 Domain: Personal domain
Step 2 Mode: Interaction activities (Spoken interaction: Understanding an interlocutor, Conversation; Written interaction: Notes, messages & forms)
Step 3 Proficiency level: A2
Result: A2 descriptors of the selected activities are given in Table 2.12.
Scenario 2
The second scenario aims at the development of the communicative academic language competences and skills necessary for tertiary education. The course prepared for this scenario is a presentation course, and it demands three modes of language activities: reception, production, and mediation. Under these modes, thirty-six illustrative scales are available (see Table 2.10). A course designer needs to consider them and select the most relevant scales for the course he/she is designing. Assuming that the following four activities, reading for information and arguments, writing essays and reports, addressing audiences, and note-taking, are chosen at the B2 level, the selection process is summarized in Fig. 2.6.
Table 2.13 B2 descriptors for selected concrete language activities

Reading for information and arguments (COE 2018: 63)
B2+
• Can obtain information, ideas, and opinions from highly specialized sources within his/her field
• Can understand specialized articles outside his/her field, provided he/she can use a dictionary occasionally to confirm his/her interpretation of terminology
B2
• Can understand articles and reports concerned with contemporary problems in which the writers adopt particular stances or viewpoints
• Can recognize when a text provides factual information and when it seeks to convince readers of something
• Can recognize different structures in discursive text: contrasting arguments, problem–solution presentation, and cause–effect relationships

Addressing audiences (COE 2018: 74)
B2+
• Can give a clear, systematically developed presentation, with highlighting of significant points, and relevant supporting detail
• Can depart spontaneously from a prepared text and follow up interesting points raised by members of the audience, often showing remarkable fluency and ease of expression
B2
• Can give a clear, prepared presentation, giving reasons in support of or against a particular point of view and giving the advantages and disadvantages of various options
• Can take a series of follow-up questions with a degree of fluency and spontaneity which poses no strain for either himself/herself or the audience

Writing essays and reports (COE 2018: 77)
B2+
• Can write an essay or report that develops an argument systematically with appropriate highlighting of significant points and relevant supporting detail
• Can write a detailed description of a complex process
• Can evaluate different ideas or solutions to a problem
B2
• Can write an essay or report which develops an argument, giving reasons in support of or against a particular point of view and explaining the advantages and disadvantages of various options
• Can synthesize information and arguments from a number of sources

Note-taking (lectures, seminars, meetings) (COE 2018: 115)
B2
• Can understand a clearly structured lecture on a familiar subject, and can take notes on points which strike him/her as important, even though he/she tends to concentrate on the words themselves and therefore to miss some information
• Can make accurate notes in meetings and seminars on most matters likely to arise within his/her field of interest
Fig. 2.6 The selection process: scenario 2
Step 1 Domain: Education domain
Step 2 Mode: Reception activities (Reading comprehension: Reading for information & argument); Production activities (Spoken production: Addressing audiences; Written production: Writing essays and reports); Mediation activities (Mediating a text: Note-taking)
Step 3 Proficiency level: B2
Result: B2 descriptors of the selected activities are given in Table 2.13.
As illustrated in the two scenarios, once a course designer selects a domain of language activities, that decision leads to a mode or modes of language activities; descriptive scales under each selected mode are then chosen at the given proficiency level to help articulate course objectives.
2.3.2.2 Selection of Illustrative Descriptors: Communicative Language Competences
In order for learners to perform selected language activities appropriately and effectively, they must acquire the linguistic competences necessary to perform those activities. As shown in 2.2.2.2, three types of communicative language competence descriptors are provided: general linguistic, sociolinguistic, and pragmatic competences. Linguistic competence descriptors are further classified into two major criterial categories: range and control. The former specifies the range of grammatical and lexical knowledge, while the latter indicates the ability to apply that knowledge in various language activities. The key parameter in selecting descriptors is the target level of a course. For example, if a course aims at the A2 level, the linguistic competence descriptors at the A2 level shown in Table 2.14 will be selected. If a course aims at the B2 level, then the linguistic competence scales in Table 2.15 will be selected.
Table 2.14 General linguistic competences at the A2 level

General linguistic range (COE 2018: 131)
A2+
• Has a repertoire of basic language, which enables him/her to deal with everyday situations with predictable content, though he/she will generally have to compromise the message and search for words
A2
• Can produce brief everyday expressions in order to satisfy simple needs of a concrete type: personal details, daily routines, wants and needs, requests for information
• Can use basic sentence patterns and communicate with memorized phrases, groups of a few words and formulae about themselves and other people, what they do, places, possessions, etc.
• Has a limited repertoire of short memorized phrases covering predictable survival situations; frequent breakdowns and misunderstandings occur in non-routine situations

Vocabulary range (COE 2018: 132)
A2+
• Has sufficient vocabulary to conduct routine, everyday transactions involving familiar situations and topics
A2
• Has a sufficient vocabulary for the expression of basic communicative needs
• Has a sufficient vocabulary for coping with simple survival needs
Table 2.15 Communicative linguistic competences at the B2 level

General linguistic range (COE 2018: 131)
B2+
• Can express himself/herself clearly and without much sign of having to restrict what he/she wants to say
B2
• Has a sufficient range of language to be able to give clear descriptions, express viewpoints and develop arguments without much conspicuous searching for words, using some complex sentence forms to do so

Vocabulary range (COE 2018: 132)
B2+
• Can understand and use the main technical terminology of his/her field, when discussing his/her area of specialization with other specialists
B2
• Has a good range of vocabulary for matters connected to his/her field and most general topics
• Can vary formulation to avoid frequent repetition, but lexical gaps can still cause hesitation and circumlocution
• Can produce the appropriate collocations of many words in most contexts fairly systematically
• Can understand and use much of the specialist vocabulary of his/her field but has problems with specialist terminology outside of it
The general linguistic competence descriptors shown in Tables 2.14 and 2.15 are linguistically neutral and in principle apply to any language. However, a course designer may feel that the descriptors are too general and may need to identify more concrete grammatical structures or vocabulary necessary at a given proficiency level in the target language. For this purpose, the Reference Level Descriptions (RLDs) for national and regional languages spoken in Europe are an excellent resource.
They provide detailed descriptions of the linguistic features of a given language that are unique to the different CEFR levels. A list of the RLDs is accessible from the COE Web site.¹⁰ In a parallel development for English, the English Profile program produced the English Vocabulary Profile and the English Grammar Profile. The former contains vocabulary and phrases across the six CEFR levels, while the latter lists distinctive grammatical features of the six proficiency levels. For instance, the simple direct wh-question construction is one of the criterial features of A2, while the indirect question is one of the criterial features of B1. For a more detailed explanation of the criterial features of English, refer to Hawkins and Filipović (2012) and the English Profile Web site (www.englishprofile.org).

Although the selection of communicative language competences here is given only in terms of general linguistic properties, a course designer may also consult the illustrative scales for sociolinguistic as well as pragmatic competences and select appropriate descriptors for the activities he/she is planning for a course.
2.3.2.3 Selection of Illustrative Descriptors: Strategies for Language Activities
Finally, it is important to consider the communicative language strategies which are necessary to perform the selected communicative language activities effectively and efficiently. Illustrative descriptor scales for communicative language strategies are provided under each of the four modes of language activities: reception, production, interaction, and mediation. The key parameters for selecting descriptors for a course are the modes of activities to be performed and the target proficiency level. For example, scenario 1 in Sect. 2.3.2.1 calls for three interaction activities at the A2 level. For that scenario, two interaction strategy scales, "taking the floor" and "asking for clarification," have been selected (see Table 2.16).

Table 2.16 Communicative language strategies for interaction activities

Interaction
Taking the floor (COE 2018: 100)
A2+
• Can use simple techniques to start, maintain, or end a short conversation
• Can initiate, maintain, and close simple, face-to-face conversation
A2
• Can ask for attention
Asking for clarification (COE 2018: 102)
A2+
• Can ask very simply for repetition when he/she does not understand
• Can ask for clarification about keywords or phrases not understood using stock phrases
A2
• Can say he/she did not follow
• Can signal non-understanding and ask for a word to be spelt out
¹⁰ The RLDs are currently available in ten European languages. A detailed explanation of the linguistic features unique to the six levels of the CEFR in each language is available from: https://www.coe.int/en/web/common-european-framework-reference-languages/reference-level-descriptions-rlds-developed-so-far.
For scenario 2, four language activities in three different modes are selected: "reading for information and arguments" (reception), "addressing audiences" and "writing essays and reports" (production), and "note-taking" (mediation). Selected strategy descriptors for reception, production, and mediation at the B2 level are listed in Table 2.17.

Table 2.17 Selected communicative language strategies for reception, production, and mediation activities

Reception: Identifying cues and inferring (spoken and written) (COE 2018: 67)
B2
• Can use a variety of strategies to achieve comprehension, including listening for main points and checking comprehension by using contextual clues

Production: Planning (COE 2018: 78)
B2+
• Can, in preparing for a potentially complicated or awkward situation, plan what to say in the event of different reactions, reflecting on what expression would be appropriate
B2
• Can plan what is to be said and the means to say it, considering the effect on the recipient(s)

Mediation: Streamlining a text (COE 2018: 129)
B2+
• Can simplify a source text by excluding non-relevant or repetitive information and taking into consideration the intended audience
B2
• Can edit a source text by deleting the parts that do not add new information that is relevant to a given audience in order to make the significant content more accessible for them
• Can identify related or repeated information in different parts of a text and merge it in order to make the essential message clearer
The selection process of illustrative descriptor scales demonstrated in the previous subsections involved three types of illustrative descriptor scales: (1) language activity scales, (2) linguistic competence scales, and (3) strategy scales. The selection process for each type of scale, with its key parameters, is summarized below:

(1) Selection of communicative language activity scales. Selection is constrained by three parameters:
(i) Domain where language activities take place: personal, public, educational, and/or occupational;
(ii) Modes of communication: reception, production, interaction, and/or mediation;
(iii) Target proficiency level: A1 through C2.
(2) Selection of communicative linguistic competence scales. Selection is constrained by the target proficiency level.
(3) Selection of strategy scales. Selection is constrained by the language activity mode and the target proficiency level.
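For course designers who keep their shortlist of descriptor scales in a spreadsheet or a small script, this three-step narrowing can also be sketched as a simple filter. The sketch below is illustrative only: the inventory entries, field names, and domain labels are our own working assumptions based on the scales discussed in this chapter, not an official machine-readable version of the CEFR/CV.

```python
# A minimal sketch of the three-parameter selection (domain, mode, level).
# The inventory is a hand-made, illustrative sample, not the full CEFR/CV.
from dataclasses import dataclass

@dataclass
class Scale:
    name: str       # illustrative descriptor scale
    mode: str       # reception | production | interaction | mediation
    domains: set    # domains in which the activity typically occurs
    levels: set     # levels for which descriptors are provided

INVENTORY = [
    Scale("Conversation", "interaction", {"personal", "public"},
          {"A1", "A2", "B1", "B2", "C1", "C2"}),
    Scale("Notes, messages and forms", "interaction", {"personal", "public"},
          {"A1", "A2", "B1", "B2"}),
    Scale("Addressing audiences", "production",
          {"educational", "occupational"},
          {"A1", "A2", "B1", "B2", "C1", "C2"}),
    Scale("Note-taking (lectures, seminars, meetings)", "mediation",
          {"educational", "occupational"},
          {"A2", "B1", "B2", "C1", "C2"}),
]

def select_scales(domain, modes, level):
    """Step 1 filters by domain, step 2 by mode(s), step 3 by target level."""
    return [s.name for s in INVENTORY
            if domain in s.domains and s.mode in modes and level in s.levels]

# Roughly scenario 1: personal domain, interaction activities, target A2.
print(select_scales("personal", {"interaction"}, "A2"))
# Roughly scenario 2: educational domain, production and mediation, target B2.
print(select_scales("educational", {"production", "mediation"}, "B2"))
```

The point of such a sketch is only that each parameter removes options before the next one is applied; the pedagogical judgment about which scales actually matter for a given course remains with the course designer.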
A final note about the selection process
The CEFR/CV suggests that the number of descriptors should be limited to around 20:

… experience suggests that any list used as an instrument for teacher assessment or self-assessment is more effective if it is much shorter (e.g. 10–20 descriptors) and focused on activities of relevance in a particular section or module of the course. (COE 2018: 42–43)
Course objectives are used for different purposes. Learners and teachers may use them to monitor learning progress and to assess degree of achievement. For these purposes, the number of descriptors should not exceed twenty. Too many descriptors are difficult to handle, especially for learner self-assessment. The course objective statements are also used by teachers to make more concrete daily lesson plans. For this purpose, all the relevant descriptors may be presented to them to facilitate their planning. Once illustrative descriptor scales most relevant to the course being planned are selected, the next task is to contextualize them to fit the local context.
2.3.2.4 Analysis of Illustrative Descriptors
To localize selected illustrative descriptors in a principled way, one needs to understand what features each descriptor specifies to distinguish one criterial level from another. At the same time, one needs to know which parts of a descriptor should be kept intact and which parts should be made more specific and localized. The CEFR/CV (COE 2018) explains the key concepts operationalized in each descriptor scale. After introducing them, we will analyze and decompose individual descriptor scales, demonstrating how the key concepts are expressed in each scale. One descriptor scale from each of the four modes of communicative language activities is analyzed in the following subsections. The scales are "reading for information and argument", "addressing audiences", "formal discussion", and "note-taking".

Reception Descriptor Scale
In the reading for information and argument scale, three key concepts are operationalized: "type of texts," "subject of texts," and "depth of understanding" (COE 2018: 63). They are important criterial features which differentiate levels of proficiency in the scale. These features are expressed by different constituents of a descriptor. In the descriptors shown in Table 2.18, verb phrases in bold indicate "depth of understanding," underlined noun phrases specify "types of texts," and italicized phrases express "subject of texts." Additionally, it is important to notice that how the text is written and how one reads it are also specified; these also distinguish one criterial level from another and are indicated by bolded italics. Varying degrees of understanding are indicated by different verb phrases: "understand the finer points and implications" and "understand in detail" at the C levels, but simply "understand" at the B and A levels.
Table 2.18 Reading for information and argument (COE 2018: 63) (emphasis added) C2 C1
B2
B1
A2
Can understand the finer points and implications of a complex report or article even outside his/her area of specialization Can understand in detail a wide range of lengthy, complex texts likely to be encountered in social, professional or academic life, identifying finer points of detail including attitudes and implied as well as stated opinions Can understand in detail lengthy complex texts, whether or not they relate to his/her own area of specialty provided he/she can reread difficult sections Can obtain information, ideas, and opinions from highly specialized sources within his/her field Can understand specialized articles outside his/her field, provided he/she can use a dictionary occasionally to confirm his/her interpretation of terminology Can understand articles and reports concerned with contemporary problems in which the writers adopt particular stances or viewpoints Can recognize when a text provides factual information and when it seeks to convince readers of something Can recognize different structures in discursive text: constructing arguments, problem–solution presentation and cause–effect relationships Can understand straightforward, factual texts on subjects relating to his/her interests or studies Can understand short texts on subjects that are familiar of current interest, in which people give their points of view (e.g., critical contributions to an online discussion forum or readers’ letters to the editor) Can identify the main conclusions in clearly signaled argumentative texts Can recognize the line of argument in the treatment of the issue presented, though not necessarily in detail Can recognize significant points in straightforward newspaper articles on familiar subjects Can understand most factual information that he/she is likely to come across on familiar subjects of interest, provided he/she has sufficient time for rereading Can understand the main points in descriptive notes such as those on museum exhibits and explanatory boards in exhibitions Can identify specific information in simpler written material he/she encounters such as letter, brochures, and short newspaper articles describing events Can follow the general outline of a news report on a familiar type of event, provided that the contents are familiar and predictable Can pick out the main information in short newspaper reports or simple articles in which figures, names, illustrations, and titles play a prominent role and support the meaning of the text Can understand the main points of short texts dealing with everyday topics (e.g., lifestyle, hobbies, sports, weather) Can understand texts describing people, places, everyday life, culture, etc., provided that they are written in simple language Can understand information given in illustrated brochures and maps, e.g., the principal attractions of a city or area Can understand the main points in short news items on subjects of personal interest (e.g., sport and celebrities) Can understand a short factual description or report within his/her own field, provided that it is written in simple language and does not contain unpredictable detail (continued)
66
2 Curriculum and Course Design
Table 2.18 (continued)
A1
PreA1
Can understand most of what people say about themselves in a personal ad or post and what they say they like in other people Can get an idea of the content of simpler informational material and short simple descriptions, especially if there is visual support Can understand short texts on subjects of personal interest (e.g., news flashes about sports, music, travel, stories, etc.) written with simple words and supported by illustrations and pictures Can understand the simplest informational material that consists of familiar words and pictures, such as a fast-food restaurant menu illustrated with photographs or an illustrated story formulated in very simple, everyday words
Proficiency levels in these bands are distinguished in terms of text types and topics: "specialized articles outside his/her field" at the upper B2 level, "straightforward, factual texts on subjects relating to his/her interests or studies" at B1, "short texts dealing with everyday topics" at A2, and "short texts on subjects of personal interest" at A1. How texts are written also differentiates levels: "with simple language" at A2, "with simple words" at A1, and "very simple everyday words" at pre-A1.

Production Descriptor Scale
The scaled descriptors for addressing audiences define the ability to perform two types of speech act: giving a presentation and answering questions. According to the CEFR/CV (COE 2018), three key concepts are manifested in the descriptors: "type of address," "consideration of the audience," and "ability to handle questions" (COE 2018: 74). The first two concepts concern the presentation and the third concerns answering questions. The scale is presented in Table 2.19.

The key concept "type of address" is characterized by two factors in the descriptors. One is attribution, which is expressed by the adjectival phrases modifying "presentation". These phrases define the presentation quality specified for each criterial level: "clear, well-structured" at C1, "clear, systematically developed" at B2+, "clear, prepared" at B2, "prepared" at B1+, "prepared, straightforward" at B1, and "short, rehearsed" at A2, all marked in bold in the descriptors in Table 2.19. The other factor that distinguishes levels is the presentation topic: a complex topic or subject at the C levels, a controversial issue or a topic viewed from two competing perspectives such as merits and demerits at B2, a familiar topic within the presenter's field at B1, and a topic from the presenter's everyday life at A2. These expressions are italicized in Table 2.19.

The concept "consideration of the audience" indicates the manner in which the presentation is delivered, further differentiating the six levels of proficiency. For instance, at C2 the presentation is delivered "confidently and articulately to an audience unfamiliar with the topic" while "structuring and adapting the talk flexibly to meet the audience's needs."
Table 2.19 Addressing audiences (COE 2018: 74) (emphasis added)
C2
• Can present a complex topic confidently and articulately to an audience unfamiliar with it, structuring and adapting the talk flexibly to meet the audience's needs
• Can handle difficult and even hostile questioning
C1
• Can give a clear, well-structured presentation of a complex subject, expanding and supporting points of view at some length with subsidiary points, reasons, and relevant examples
• Can structure a longer presentation appropriately in order to help the audience follow the sequence of ideas and understand the overall argumentation
• Can speculate or hypothesize in presenting a complex subject, comparing and evaluating alternative proposals and arguments
• Can handle interjections well, responding spontaneously and almost effortlessly
B2
• Can give a clear, systematically developed presentation, with highlighting of significant points, and relevant supporting detail
• Can depart spontaneously from a prepared text and follow up interesting points raised by members of the audience, often showing remarkable fluency and ease of expression
• Can give a clear, prepared presentation, giving reasons in support of or against a particular point of view and giving the advantages and disadvantages of various options
• Can take a series of follow-up questions with a degree of fluency and spontaneity which poses no strain for either himself/herself or the audience
B1
• Can give a prepared presentation on a familiar topic within his/her field, outlining similarities and differences (e.g., between products, countries/regions, plans)
• Can give a prepared straightforward presentation on a familiar topic within his/her field which is clear enough to be followed without difficulty most of the time, and in which the main points are explained with reasonable precision
• Can take follow-up questions, but may have to ask for repetition if the speech was rapid
A2
• Can give a short, rehearsed presentation on a topic pertinent to his/her everyday life, and briefly give reasons and explanations for opinions, plans, and actions
• Can cope with a limited number of straightforward follow-up questions
• Can give a short, rehearsed, basic presentation on a familiar subject
• Can answer straightforward follow-up questions if he/she can ask for repetition and if some help with the formulation of his/her reply is possible
A1
• Can read a very short, rehearsed statement—e.g., to introduce a speaker, propose a toast
Pre-A1
No descriptors available
At C1, it is delivered while "expanding and supporting points of view at some length with subsidiary points, reasons, and relevant examples." At the B2 level, the presentation is given "with highlighting of significant points and relevant supporting detail," at B1+ "outlining similarities and differences," and at B1 "clear enough to be followed without difficulty most of the time" and with the main points "explained with reasonable precision." Finally, at A2+ the presenter can "briefly give reasons and explanations." All of these specifications are indicated by bolded italics in Table 2.19.
The criterial feature of the second speech act, "ability to handle questions," is signaled by two factors: the type of questions and comments, and the manner in which they are handled. The former ranges from "difficult and even hostile questions" at C2 to "straightforward follow-up questions" at A2, as indicated in bold in Table 2.19. The latter ranges from responding "spontaneously and almost effortlessly" at the C levels down to A2, where the presenter can answer only "if he/she can ask for repetition and if some help with the formulation of his/her reply is possible." These are indicated by bolded italics in Table 2.19. Various components of a descriptor thus indicate different key concepts, and users of the descriptors should be aware of these concepts because they crucially differentiate the levels of proficiency in the activities described.

Interaction Descriptor Scale
The "formal discussion" scale is one of the interaction activity scales. COE (2018: 87) states that three key concepts are operationalized in the descriptors: "ability to follow the discussion," "type of meeting and topics," and "ability to contribute". In Table 2.20, the concept "ability to follow the discussion" is expressed in the verb phrases marked in bold, "type of meeting and topics" in the italicized phrases, and "ability to contribute" in the underlined phrases. The concepts "ability to follow the discussion" and "type of meeting and topics" vary characteristically from C2 to A2: the C levels require tactful engagement in formal discussion and active debate on complex issues; B2 demands involvement in discussions on topics related to his/her field, while B1 requires similar activities to B2 with some restrictions; and A2 restricts the discussion to topics related to his/her field. The concept "ability to contribute" indicates the manner of expressing one's own opinions, which also crucially distinguishes between levels on the scale. In addition to these three key concepts, the descriptors specify conditions under which learners follow and contribute to the discussion; these are italicized in bold in Table 2.20. Conditions become more prominent as the proficiency level descends: the descriptors at B1 and A2 set conditions under which the designated performance is successful.

Mediation Descriptor Scale
"Note-taking (lectures, seminars, meetings, etc.)" is one of the subactivities of mediation. In the scale, four distinctive features differentiate proficiency levels: "type of note-taking", "type of source text", "accuracy of the notes," and "consideration on the part of the speaker" (COE 2018: 115). The type of note-taking is expressed in bold, and the type of source text is italicized in Table 2.21. Accuracy of the notes is stated only in the higher-level descriptors and is underlined, while considerations regarding the speaker, which define the conditions under which note-taking is possible, are italicized in bold. The type of note-taking differs significantly among the levels, starting with only simple notes at A2, notes for his/her own use at B1, and detailed and accurate notes sufficient not only for oneself but also for others at C1. The source text type and its topics also vary, from familiar topics at A2, through lecture or seminar topics at the B levels, to complex, abstract topics from multiple spoken sources at the C levels. Accuracy of the notes is described only for the upper levels: notes precise enough for other people's use at C1 and for one's own use at B1.
Table 2.20 Formal discussion (meetings) (COE 2018: 87) (emphasis added)
C2
• Can hold his/her own in formal discussion of complex issues, putting an articulate and persuasive argument, at no disadvantage to other speakers
• Can advise on/handle complex, delicate, or contentious issues, provided he/she has the necessary specialized knowledge
• Can deal with hostile questioning confidently, hold on to his/her turn to speak, and diplomatically rebut counterarguments
C1
• Can easily keep up with the debate, even on abstract, complex unfamiliar topics
• Can argue a formal position convincingly, responding to questions and comments and answering complex lines of counterargument fluently, spontaneously, and appropriately
• Can restate, evaluate, and challenge contributions from other participants about matters within his/her academic or professional competence
• Can make critical remarks or express disagreement diplomatically
• Can follow up questions by probing for more detail, and can reformulate questions if these are misunderstood
B2
• Can keep up with an animated discussion, identifying accurately arguments supporting and opposing points of view
• Can use appropriate technical terminology, when discussing his/her area of specialization with other specialists
• Can express his/her ideas and opinions with precision, present and respond to complex lines of argument convincingly
• Can participate actively in routine and non-routine formal discussion
• Can follow the discussion on matters related to his/her field, and understand in detail the points given prominence by the speaker
• Can contribute, account for and sustain his/her opinion, evaluate alternative proposals and make and respond to hypotheses
B1
• Can follow much of what is said that is related to his/her field, provided interlocutors avoid very idiomatic usage and articulate clearly
• Can put over a point of view clearly, but has difficulty engaging in debate
• Can follow argumentation and discussion on a familiar or predictable topic, provided the points are made in relatively simple language and/or repeated, and opportunity is given for clarification
• Can take part in routine formal discussion of familiar subjects which is conducted in clearly articulated speech in the standard dialect and which involves the exchange of factual information, receiving instructions or the discussion of solutions to practical problems
A2
• Can generally follow changes of topic in formal discussion related to his/her field which is conducted slowly and clearly
• Can exchange relevant information and give his/her opinion on practical problems when asked directly, provided he/she receives some help with formulation and can ask for repetition of key points if necessary
• Can say what he/she thinks about things when addressed directly in a formal meeting, provided he/she can ask for repetition of key points if necessary
A1
No descriptor available
Pre-A1
No descriptor available
Table 2.21 Note-taking (lectures, seminars, meetings, etc.) (COE 2018: 115) (emphasis added)
C2
• Can, while continuing to participate in a meeting or seminar, create reliable notes (or minutes) for people who are not present, even when the subject matter is complex and/or unfamiliar
• Is aware of the implications and allusions of what is said and can make notes on them as well as on the actual words used by the speaker
C1
• Can make notes selectively, paraphrasing and abbreviating successfully to capture abstract concepts and relationships between ideas
• Can take detailed notes during a lecture on topics in his/her field of interest, recording the information so accurately and so close to the original that the notes could also be used by other people
• Can make decisions about what to note down and what to omit as the lecture or seminar proceeds, even on unfamiliar matters
• Can select relevant, detailed information and arguments on complex, abstract topics from multiple spoken sources (e.g., lectures, podcasts, formal discussions and debates, interviews, etc.), provided that standard language is delivered at normal speed in one of the ranges of accents familiar to the listener
B2
• Can understand a clearly structured lecture on a familiar subject, and can take notes on points which strike him/her as important, even though he/she tends to concentrate on the words themselves and therefore to miss some information
• Can make accurate notes in meetings and seminars on most matters likely to arise within his/her field of interest
B1
• Can take notes during a lecture which are precise enough for his/her own use at a later date, provided the topic is within his/her field of interest and the talk is clear and well-structured
• Can take notes as a list of key points during a straightforward lecture, provided the topic is familiar, and the talk is both formulated in simple language and delivered in clearly articulated standard speech
• Can note down routine instructions in a meeting on a familiar subject, provided they are formulated in simple language and he/she is given sufficient time to do so
A2
• Can make simple notes at a presentation/demonstration where the subject matter is familiar and predictable and the presenter allows for clarification and note-taking
A1
No descriptor available
Pre-A1
No descriptor available
Some conditions on the part of the speaker/presenter are also described: for instance, at B1 the talk must be "clear and well-structured," while at A2 the presenter must allow for clarification and note-taking. We have seen that the key concepts of each descriptor scale differentiate proficiency levels in various ways: the cognitive demand of written and spoken materials, text genre and topics, the quality of activity performance, the conditions for activity performance, and activity themes. It is important for course designers to recognize and understand the defining role of each key concept in order to modify descriptors to fit the course being planned. Some key concepts are the criterial features that distinguish one level from another, while others broadly define the range of text types and topics.
2.3.2.5 How to Contextualize Illustrative Descriptors to Suit One's Own Course
Illustrative descriptor scales in the CEFR are common reference scales for a wide range of language activities and are therefore context free. A course designer needs to contextualize them to fit the course being planned. The COE encourages users of the CEFR to adapt the descriptors to better fit their own purposes (cf. North 2014: 112–3). However, one should localize descriptors in a principled way, since certain parts of a descriptor indicate criterial features unique to a level that distinguish it from the other levels, as was explained in Sect. 2.3.2.4. In the following subsections, we demonstrate which parts of a descriptor can be modified and in what ways.

Reading for Information and Argument
The descriptor scale for this activity differentiates proficiency levels with three key concepts: depth of understanding, type of texts, and subjects of texts (described in Sect. 2.3.2.4). Among these concepts, depth of understanding is a criterial feature that indicates varying degrees of cognition across the different levels and therefore should not be modified. However, types of texts and subjects of texts are only broadly defined and therefore should be specified according to the course being planned. Table 2.22 demonstrates the text type and topic characteristics at the B levels; the former are underlined and the latter italicized. The text types and topics for each scale are rather general, so a course designer may want to specify them depending on the teaching materials to be used in the course. For this purpose, the CEF-ESTIM grid and the mapping of text types in the British Council–Eaquals Core Inventory for General English¹¹ are helpful resources, as they estimate the CEFR level of texts depending on topic, domain, discourse, text resource, vocabulary, and grammatical structure. One may consult them to specify text genres and topics that meet course planning needs.

Addressing Audiences
The descriptive scale for addressing audiences distinguishes levels with three key concepts: type of address, consideration of the audience, and ability to handle questions. The type of address is characterized by two factors: attributes and presentation topic. The different levels of presentation are defined by their attributes, such as how the presentation is structured. This part of the descriptor should be kept intact since it crucially differentiates one level from another. Presentation topics, however, are only broadly defined, as indicated in Table 2.23. A course planner should specify topics within the range given in the descriptors. For instance, A2 demands topics about a learner's everyday life, so a course designer may select topics such as free time, entertainment, favorite food, and/or sports one enjoys doing and/or watching. At B1, presentation topics should be within learners' fields, and a course designer may choose topics in the humanities, sciences, and/or current social issues depending on learners' fields of study and interests. Or a course designer may let learners choose their own topics based on their fields of study and/or their interests.
¹¹ Although the Core Inventory confines itself to English, mapping text types can be used for teaching languages other than English.
Table 2.22 Characteristics of B-level text types excerpted from COE (2018: 63) (emphasis added)
B2+
• Highly specialized sources within his/her field
• Specialized articles outside his/her field
B2
• Articles and reports concerned with contemporary problems in which the writers adopt particular stances or viewpoints
• Discursive text: contrasting arguments, problem–solution presentation, and cause–effect relationships
B1+
• Straightforward, factual texts on subjects relating to his/her interests or studies
• Short texts on subjects that are familiar or of current interest, in which people give their points of view (e.g., critical contributions to an online discussion forum or readers' letters to the editor)
• Clearly signaled argumentative texts
• Straightforward newspaper articles on familiar subjects
B1
• On familiar subjects of interest
• Descriptive notes such as those on museum exhibits and explanatory boards in exhibitions
Table 2.23 Presentation topics in addressing audiences excerpted from COE (2018: 74) (emphasis added)
C2: A complex topic
C1: A complex subject
B2: (no description of topics)
B1: On a familiar topic within his/her field
A2: A topic pertinent to his/her everyday life; On a familiar subject
A1: To introduce a speaker, propose a toast
Formal Discussion
The descriptor scale for formal discussion involves four key concepts: ability to follow the discussion, ability to contribute to the discussion, meeting type, and discussion topic. The first three present features unique to each level and should not be localized. However, the discussion topic can be made more specific, since the descriptors only define a broad range, as illustrated in Table 2.24, which lists the topics for each level of the scale. At the C levels, any complex issues can be discussed. At B and A2, topics related to learners' specialization are specified; thus, potential topics depend on their fields of study. Some practical issues can also be discussed at A2, so a course designer may select specific topics related to daily life, jobs, and/or school life.

Note-taking
The descriptive scale for note-taking distinguishes levels of proficiency using the following four concepts: type of note-taking, accuracy of the notes, type of source text, and affordances on the part of the speaker. These concepts constitute criterial features for each level and should in principle be kept intact.
Table 2.24 Topics in formal discussion excerpted from COE (2018: 87) (emphasis added)
C2: Complex, delicate or contentious issues
C1: Abstract, complex unfamiliar topics; About matters within his/her academic or professional competence
B2+: His/her area of specialization with other specialists
B2: On matters related to his/her field
B1+: Related to his/her field
B1: On a familiar or predictable topic; Familiar subjects
A2+: Related to his/her field; On practical problems
Table 2.25 Source texts and topics in note-taking excerpted from COE (2018: 115) (emphasis added)
C2: The subject matter is complex and/or unfamiliar
C1: A lecture on topics in his/her field of interest; A lecture or seminar, even on unfamiliar matters; Information and arguments on complex, abstract topics from multiple spoken sources (e.g., lectures, podcasts, formal discussions and debates, interviews, etc.)
B2: A clearly structured lecture on a familiar subject; Meetings and seminars on most matters likely to arise within his/her field of interest
B1+: A lecture, provided the topic is within his/her field of interest
B1: A straightforward lecture, provided the topic is familiar; A meeting on a familiar subject
A2: A presentation/demonstration where the subject matter is familiar and predictable
However, the source texts and their topics should be specified. Table 2.25 lists the source text types and topics from the descriptors, with topics italicized. A course designer can specify whether the source texts should be lectures, meetings, or something else, along with what they should be about. Source text topics can be selected based on learners' interests or from the teaching materials used in a course. Taking notes is usually performed along with other activities such as reading, listening, presenting, and discussing, so a course designer may choose topics related to these other language activities.

In Sect. 2.3, we have considered which parts of descriptors should be kept intact and which should be specified and localized to fit the course being planned, and we have illustrated the importance of contextualizing descriptors in a principled way. The basic principle for customization is to specify the topics and source texts, which are broadly defined, while keeping the criterial features of each level intact.
2.4 Exercises
The three goals of this section are to review the process of how to implement the CEFR scales for a language course that you have taught or taken, to select activities and CEFR descriptors for your ideal language course, and to create CEFR-related descriptors between two CEFR Common Reference Levels.
2.4.1 Exercise 1: Aligning a Course to CEFR Descriptors
This exercise asks you to align a language course you have taught to CEFR descriptors and then design a CEFR-informed course. (If you have not taught classes, you can use a language course you have taken in the past.) (To do this exercise, refer to the CEFR/CV descriptors listed on pp. 54–98 at the following site: https://rm.coe.int/cefr-companion-volume-with-new-descriptors-2018/1680787989.)

Key questions
(1) What language activities were done in the class?
(2) In what mode(s) did these language activities take place: reception, production, interaction, and/or mediation?
(3) Which proficiency level was taught: beginner (A1 and A2), intermediate (B1 and B2), or advanced (C1 and C2)?
(4) Select the most relevant scaled descriptors based on your answers to questions (1) and (2) by completing the following steps:
(i) Select the major language activities: reception (listening comprehension and reading comprehension), production (spoken production and written production), interaction (spoken interaction, written interaction, and online interaction), and/or mediation (mediating a text, mediating concepts, and mediating communication).
(ii) Select sublanguage activities based on your answer to (1). To answer this question, refer to Tables 2.2, 2.3, 2.4, and 2.5 in Sect. 2.2.2.1.
(iii) Select scaled descriptors in the chosen sublanguage activities from the CEFR/CV, and write them in Worksheet 1.
(5) Review the selected descriptors listed in Worksheet 1, and examine if they reflect students’ and/or your target language use needs in a given domain. Recall there are four domains of major language activities: the personal domain, the public domain, the occupational domain, and the educational domain. Create an ideal language course that meets the needs of these learners, choosing language activities and illustrative descriptors for the appropriate target level in Worksheet 2. (Note that there are various constraints when you design a course that are discussed in Chap. 6. In this exercise, design an ideal course that reflects learners’ needs without worrying about constraints.)
Worksheet 1 Alignment of your course to the CEFR
Major language activity:
Proficiency level:
Sublanguage activities | Selected illustrative descriptors
Worksheet 2 Your ideal course
Major language activity:
Proficiency level:
The domain of language use:
Sublanguage activities | Selected illustrative descriptors
2.4.2 Exercise 2: Creating Illustrative Descriptors Between Criterial Levels
One of the difficulties in implementing the CEFR is to adjust illustrative descriptors to fit between levels, for instance between A2 and B1. Analyze the "overall written production" descriptors in Table 2.26, and modify them to create descriptors between A2 and B1.
Table 2.26 CEFR 'Can Do' descriptors (emphasis added)
Overall written production (COE 2018: 75)
B2
• Can write clear, detailed texts on a variety of subjects related to his/her field of interest, synthesizing and evaluating information and arguments from a number of sources
B1
• Can write straightforward connected texts on a range of familiar subjects within his/her field of interest, by linking a series of shorter discrete elements into a linear sequence
A2
• Can write a series of simple phrases and sentences linked with simple connectors like 'and', 'but', and 'because'
A1
• Can give information in writing about matters of personal relevance (e.g., likes and dislikes, family, pets) using simple words and basic expressions
• Can write simple isolated phrases and sentences
Key questions
(1) Examine the tasks, text types, and text topics specified at each level in the overall written production descriptors in Table 2.26.
(2) In Table 2.26, tasks are highlighted in bold, attributes of the tasks are in bolded italics, and topics are in italics. Classify them in Worksheet 3, following the example given for the B2 descriptor.

Worksheet 3: Tasks and attribution of tasks
Overall written production | Tasks | Attribution of tasks | Topics
B2 | Write texts | synthesizing and evaluating information and arguments from a number of sources | a variety of subjects related to his/her field of interest
B1 | | |
A2 | | |
A1 | | |
Descriptors between A2 and B1 level | | |
(3) Analyze the features of the writing activities for each level in Worksheet 3, and create a descriptor which you think fits the level between A2 and B1 in Worksheet 3.¹²
(4) Observe the overall objective of writing at the A2+ level given in the case study from Sabanci University in Sect. 2.5.1, and compare your descriptor(s) with theirs. Note that the overall objective for English courses at Sabanci University is to build students' English competencies for academic purposes.
(5) Compare your descriptors with those developed for writing at the B1 level for the general English courses at the University of Gloucestershire in Table 2.28 in Sect. 2.5.2. Describe similarities and differences between the two.

¹² You may wonder if you can alter existing 'Can Do' descriptors. You can. Remember the following quotation from North (2009: 2): "Don't be shy to reformulate the 'Can Do' descriptors, everybody does; just keep to the philosophy (descriptions of a concrete act) and be aware which CEFR descriptors yours relate to."
2.5 Case Studies and Further Reading
This section introduces two case studies which demonstrate how to implement the CEFR’s (COE 2001) ‘Can Do’ descriptors in course design, in particular to articulate learning outcomes for courses. The first case study from Sabanci University in Turkey is introduced in 2.5.1 and the second from the University of Gloucestershire in England in 2.5.2. More case studies concerned with various contexts across the globe are listed in the further reading in 2.5.3.
2.5.1 Case Study 1: Using Illustrative Descriptors for an Academic Writing Course at the A2+ Level
Eken (2009) reports on a curriculum reform using 'Can Do' descriptors at Sabanci University in Turkey, which took place from 2005 to 2006. The reform attempted to clarify and articulate English course objectives for all stakeholders. Course objectives in the 'Can Do' scheme aim to raise learners' awareness of course objectives and promote communication between learners and teachers as well as among faculty members. The reform produced two types of course objectives: a main objective and subsidiary objectives for the four English skills at each level, based on CEFR and ALTE 'Can Do' descriptors. Eken (2007: 14) acknowledges the usefulness of the CEFR 'Can Do' descriptors, stating that detailed descriptors:
• Allowed us to check progression between levels;
• The process of describing levels increased our awareness of some weakness in our program;
• Important to be clear about own program aims—CEFR does provide some guidance on this in Chapter 4 and 5 but may need additional needs analysis.
She notes there were different aims at the lower levels between the CEFR descriptors and those for her program. The CEFR descriptors below B1 aim more toward daily use of language, while her program aims toward use of English in academic tasks in higher education. This modification of lower-level descriptors is very demanding, as she explains:

The descriptors of lower levels have been challenging to define—CEFR is described in terms of real life tasks e.g. basic transactions in a shop, which is irrelevant in our context due to focus on academic tasks e.g. essay writing comes earlier than in a general language programme. (Eken 2007: 14)
Sabanci University sets the objectives of writing at A2+ as given in Table 2.27. The objectives of writing at the A2+ level developed by Sabanci University seem to contain three layers of concreteness. The goal of the course is expressed in the following 'Can Do' descriptor, "I can write short texts about my studies, using mostly simple sentences," which follows the CEFR's descriptor of overall written production at A2 except for the topic of writing. This goal comprises three subsidiary objectives:

I can understand and follow the process of writing a paragraph or an essay.
I can write an organized paragraph or a simple essay on a factual topic about my studies.
I can use appropriate language.
Table 2.27 Sabanci 2006 Can Do statements (Eken 2007: 20)
Writing at A2+ level
I can write short texts about my studies, using mostly simple sentences
  I can understand and follow the process of writing a paragraph or an essay
    I can analyze the question
    I can organize the ideas into a written outline
    I can write a draft
    I can rewrite my work after feedback from other people
  I can write an organized paragraph or a simple essay on a factual topic about my studies
    I can give examples
    I can give short reasons for ideas or opinions
    I can give advantages and/or disadvantages
    I can organize the information in a logical order
    I can use linkers to connect ideas
  I can use appropriate language
    My language is mostly grammatically correct
    My spelling and punctuation are mostly accurate
    I can use a variety of vocabulary
    I can write in an impersonal style
Furthermore, each subsidiary objective is specified by itemized concrete tasks which elaborate a term in the objective. For instance, "an organized paragraph or simple essay" in the second subsidiary objective is specified by the following tasks:

I can give examples
I can give short reasons for ideas or opinions
I can give advantages and/or disadvantages
I can organize the information in a logical order
I can use linkers to connect ideas
The itemized subtasks for each subsidiary objective help learners to know what they need to be able to do to achieve that objective and, in turn, the goal of the course. This three-layered approach to articulating course objectives is an excellent example of how course objectives can be specified.
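The layered structure described above also lends itself to simple tooling. The following sketch is a minimal illustration, not part of Eken's project: the data structure and function name are invented here, and the objective texts are abbreviated examples taken from Table 2.27. It shows one way a three-layer set of 'Can Do' objectives might be stored as nested data and printed as a learner self-assessment checklist.

```python
# A minimal sketch of a three-layer 'Can Do' objective structure:
# course goal -> subsidiary objectives -> itemized sub-tasks.
# Texts are abbreviated examples; adapt them to your own course.
course_goal = {
    "goal": "I can write short texts about my studies, using mostly simple sentences",
    "objectives": [
        {
            "objective": "I can understand and follow the process of writing a paragraph or an essay",
            "subtasks": [
                "I can analyze the question",
                "I can organize the ideas into a written outline",
                "I can write a draft",
                "I can rewrite my work after feedback from other people",
            ],
        },
        {
            "objective": "I can use appropriate language",
            "subtasks": [
                "My language is mostly grammatically correct",
                "My spelling and punctuation are mostly accurate",
            ],
        },
    ],
}

def print_checklist(goal: dict) -> None:
    """Print the layered objectives as a self-assessment checklist."""
    print(f"Course goal: {goal['goal']}")
    for obj in goal["objectives"]:
        print(f"  [ ] {obj['objective']}")
        for task in obj["subtasks"]:
            print(f"      [ ] {task}")

print_checklist(course_goal)
```

Keeping the objectives in one structure like this makes it easy to generate parallel documents (syllabus outlines, checklists, rubrics) from a single source as the descriptors are revised.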
2.5.2 Case Study 2: Using Illustrative Descriptors for a General English Course

Wall (2004) explored the process of implementing the CEFR's (COE 2001) 'Can Do' descriptors in courses at the English language center at the University of Gloucestershire. Prior to the reform, the center faced issues including a mismatch between learners' needs and the previous courses, as well as a heavy reliance on textbooks in course planning. Hence, they decided to redesign their courses using the CEFR's 'Can Do' descriptors. Wall (2004: 123) lists several advantages to using them:

• The 'Can Do' statements would indicate to students what they were expected to do by the end of the course/level.
• The statements would provide a substantial basis on which tutors could program a course.
• The statements would help tutors to relate individual lessons to the course outcomes.
They developed 'Can Do' descriptors for the general English courses from A1 to C1. Table 2.28 presents their descriptors for writing skills at the B1 level. The 'Can Do' objectives developed for writing activities at the University of Gloucestershire can be classified into two types: the first statement and the rest. The first 'Can Do' descriptor in Table 2.28 states the ultimate goal of the course, observing criterial features of writing competences for the B1 level: "can write simple, logical, and connected texts (100–150 words)" and "on topics which are familiar, or of personal interests, e.g., family, holidays, everyday routine." Although it is more concrete than the CEFR descriptor of overall written production in Table 2.26, it adheres firmly to the criterial features of B1 writing competences. The rest of the descriptors supplement this first one, specifying text types, ways to link sentences logically, and both lexical and grammatical points to be acquired. Finally, the last statement refers to a writing activity, journal writing, which is likely assigned in the course.
Table 2.28 Writing at B1 at the English Centre at the University of Gloucestershire (Wall 2004: 129)
I can write simple, logical, and connected texts (100–150 words) on topics which are familiar, or of personal interests, e.g., family, holidays, everyday routine
I can write a range of text types including letters, narratives, and descriptions of people, objects, and places
I can complete forms and questionnaires at the appropriate level and communicate information
I can write personal texts, which include descriptions of experiences and feelings, e.g., an embarrassing event, the happiest day of my life, a film I enjoyed
I can connect my ideas logically using a limited range of expressions, e.g., linking words and time expressions
I can spell known words accurately
I can punctuate and paragraph appropriately
I can begin to correct my own work and that of others
I can keep a diary which records my problems and progress and in which I practice writing on given topics
Case Study Conclusion

Both case studies presented here demonstrate how to implement 'Can Do' descriptors to meet local needs. Case study 1 presents learning outcomes for a writing course at the A2+ level for academic purposes. The level is between A2 and B1, and CEFR descriptors at these levels primarily aim at daily language usage. Hence, the descriptors presented in case study 1 are localized in two respects: adjusting the context from daily uses to academic uses and creating a level between A2 and B1. This case study can serve as a model answer to Exercise 1. Case study 2 also presents 'Can Do' descriptors for writing. The descriptors in both case studies constitute multiple layers: more general writing competence descriptors and more detailed subsidiary descriptors. The former articulate a course goal and the latter the step-by-step tasks to achieve that goal. This layered approach presents an excellent way to articulate a general goal and the specific tasks which learners need to perform to achieve it.
2.5.3 Further Reading

Numerous case studies that attempt to implement the CEFR in different contexts have appeared. In particular, a recent development at the European Centre for Modern Languages (ECML), the Quality Assurance Matrix for CEFR use (CEFR-QualiMatrix) project, is worth noting. The project produced a Web site called QualiMatrix which supports CEFR-based language learning innovation. It contains a wide variety of case studies where the CEFR is used to innovate the curriculum (https://www.ecml.at/ECML-Programme/Programme2016-2019/). Besides the QualiMatrix, the following are just a few examples from Asia and Europe.
Asia
1. Implementation of the CEFR 'Can Do' descriptors in higher education in Japan: See Shimo et al. (2017).
2. Implementation of the CEFR in a university English curriculum in Vietnam: See Nhung (2017).

Europe
1. Implementation of the CEFR in primary schools in Ireland: See Little and Simpson (2004).
2. Implementation of the CEFR in English courses for teenagers: See Manasseh (2004).
3. Implementation of the CEFR in a language school: See North (2007).

Resources
Primary and supplementary resources for designing curricula and courses are listed below:

Primary resources
• CEFR (Chaps. 4 and 5): https://rm.coe.int/1680459f97
• CEFR/CV: https://rm.coe.int/cefr-companion-volume-with-new-descriptors-2018/1680787989

Supplementary resources
• Reference Level Descriptions (RLDs): https://www.coe.int/en/web/common-european-framework-reference-languages/reference-level-descriptions-rlds-developed-so-far (List itself)
• CEF-ESTIM: http://cefestim.ecml.at/Trainingkit/tabid/2518/language/en-GB/Default.aspx
• British Council–Eaquals Core Inventory for General English: https://englishagenda.britishcouncil.org/continuing-professional-development/cpd-teacher-trainers/british-council-eaquals-core-inventory-general-english
Appendix 1

Common Reference Levels: Self-assessment Grid (COE 2018: 167–170) © Council of Europe

The self-assessment grid reproduced in this appendix sets out first-person 'Can Do' descriptors for each of the Common Reference Levels (A1, A2, B1, B2, C1, and C2) across four modes of communicative language activity: Reception (Listening; Reading), Interaction (Spoken interaction; Written and online interaction), Production (Spoken production; Written production), and Mediation (Mediating a text; Mediating concepts; Mediating communication). The full grid is available in the CEFR Companion Volume (COE 2018: 167–170).
References
Council of Europe. (2001). The common European framework of reference for languages: Learning, teaching, assessment. Cambridge: Cambridge University Press.
Council of Europe. (2018). The common European framework of reference for languages: Learning, teaching, assessment. Companion volume with new descriptors. Strasbourg: Council of Europe. https://rm.coe.int/cefr-companion-volume-with-new-descriptors-2018/1680787989.
DeKeyser, R. M. (2012). Age effects in second language learning. In S. M. Gass & A. Mackey (Eds.), The Routledge handbook of second language acquisition (pp. 442–460). New York: Routledge.
Diamond, R. M. (2008). Designing and assessing courses and curricula: A practical guide (3rd ed.). San Francisco: Jossey-Bass.
Eaquals. (2007). CEFR curriculum case studies: Examples from different contexts of implementing 'Can Do' descriptors from the common European framework of reference. http://www.eaquals.org. Accessed May 14, 2018.
Eken, D. K. (2007). How 'Can Do' statements were used to aid the syllabus development project in Sabanci University, School of Languages. In CEFR curriculum case studies: Examples from different contexts of implementing 'Can Do' descriptors from the common European framework of reference (pp. 12–15). https://www.eaquals.org. Accessed May 14, 2018.
Hawkins, J. A., & Filipović, L. (2012). Criterial features in L2 English. Cambridge: Cambridge University Press.
Little, D. (2011). The common European framework of reference for languages: A research agenda. Language Teaching, 44(3), 381–393.
Little, D., & Simpson, B. L. (2004). Using the CEF to develop an ESL curriculum for newcomer pupils in Irish primary schools. In K. Morrow (Ed.), Insights from the common European framework (pp. 91–108). Oxford: Oxford University Press.
Manasseh, A. (2004). Using the CEF to develop English courses for teenagers at the British Council Milan. In K. Morrow (Ed.), Insights from the common European framework (pp. 109–120). Oxford: Oxford University Press.
Matheidesz, M., & Heyworth, F. (2007). The Eaquals self-help guide for curriculum and syllabus design. www.eaquals.org. Accessed May 14, 2018.
Nhung, P. T. H. (2017). Applying the CEFR to renew a general English curriculum: Successes, remaining issues and lessons from Vietnam. In O'Dwyer et al. (Eds.), Critical, constructive assessment of CEFR-informed language teaching in Japan and beyond. Cambridge: Cambridge University Press.
North, B. (2007). Eurocentres 'Can Do' experiences. In CEFR curriculum case studies: Examples from different contexts of implementing 'Can Do' descriptors from the common European framework of reference (pp. 4–11). http://www.eaquals.org. Accessed May 14, 2018.
North, B. (2014). The CEFR in practice. Cambridge: Cambridge University Press.
North, B., Angelova, M., Jarosz, E., & Rossner, R. (2018). Language course planning. Oxford: Oxford University Press.
Piccardo, E., Berchoud, T., Mentz, O., & Pamula, M. (2011). Pathways through assessing, learning and teaching in the CEFR. European Centre for Modern Languages. https://www.ecml.at/Portals/1/documents/ECML-resources/2011_08_29_ECEP_EN_web.pdf?ver=2018-03-21-093928-570. Accessed May 14, 2018.
Richards, J. C. (2013). Curriculum approaches in language teaching: Forward, central, and backward design. RELC Journal, 44(1), 5–33.
Robinson, P. (2002). Individual differences and instructed language learning. Amsterdam: Benjamins.
Shimo, E., Ramirez, C., & Nitta, K. (2017). A 'Can Do' framework-based curriculum in a university-level English language learning programme: Course goals, activities and assessment. In O'Dwyer et al. (Eds.), Critical, constructive assessment of CEFR-informed language teaching in Japan and beyond (pp. 118–154). Cambridge: Cambridge University Press.
Skehan, P. (2012). Language aptitude. In S. M. Gass & A. Mackey (Eds.), The Routledge handbook of second language acquisition (pp. 381–395). New York: Routledge.
Takala, S. (2010a). Putting the CEFR to good use: Activities and outcomes in Finland. In J. Mader & Z. Urkun (Eds.), Putting the CEFR to good use: Selected articles by the presenters of the IATEFL Testing, Evaluation and Assessment Special Interest Group (TEA SIG) and EALTA conference in Barcelona, Spain, October 29–30, 2010 (pp. 96–105). http://www.ealta.eu.org./documents/resouces/IATEFL_EALTA_Proceedings_2010.pdf. Accessed May 14, 2018.
Takala, S. (2010b). CEFR in Finland: Use and adaptations. Possible implications for Js? PP slides used at Tokyo, 12 December 2010. www.kiesplang.fi. Accessed May 14, 2018.
Ushioda, E., & Dörnyei, Z. (2012). Motivation. In S. M. Gass & A. Mackey (Eds.), The Routledge handbook of second language acquisition (pp. 396–409). New York: Routledge.
van Ek, J. A., & Trim, J. L. M. (2001a). Waystage. Cambridge: Cambridge University Press.
van Ek, J. A., & Trim, J. L. M. (2001b). Threshold. Cambridge: Cambridge University Press.
Wall, P. (2004). Using the CEF to develop English courses for adults at the University of Gloucestershire. In K. Morrow (Ed.), Insights from the common European framework (pp. 121–130). Oxford: Oxford University Press.
3
Assessment
This chapter first outlines the role of assessment as it is presented in the Common European Framework of Reference for Languages (COE 2001). Then this chapter's focus on designing, implementing, and revising CEFR-informed assessments is explained. Next, key concepts in educational assessment are summarized to provide the necessary conceptual background for the following sections. This is followed by brief reviews of CEFR-informed assessment types. For each assessment type, suggestions are made for design, use, and revision, along with a review of available validation studies and of the forms of validity evidence which may be gathered and analyzed to inform revisions to assessment designs and procedures. At the end of the chapter, exercises are provided to help readers take their first steps in designing their own CEFR-informed assessments. The chapter culminates with a presentation and discussion of a case study in which all assessments in an English language curriculum were designed to be CEFR-informed. Finally, suggestions are given for further reading.
3.1 The Role of Assessment in the CEFR
Assessment is one of the main areas for which the CEFR is intended to act as a useful reference for language education practitioners. The importance of this area is reflected in the title of the CEFR document, the Common European Framework of Reference for Languages: learning, teaching, assessment (emphasis added). The CEFR (COE 2001) discusses assessment briefly in Sect. 2.4 and in more detail in Chap. 9. In CEFR Sect. 2.4, three main ways in which the CEFR can be useful for assessment are outlined. These are:
1. for the specification of the content of tests and examinations.
2. for stating the criteria for the attainment of a learning objective, both in relation to the assessment of a particular spoken or written performance, and in relation to continuous teacher-, peer- or self-assessment.
3. for describing the levels of proficiency in existing tests and examinations thus enabling comparisons to be made across different systems of qualifications. (COE 2001: 19)
The third of the three uses outlined above refers to linking existing examinations to the CEFR, which ideally involves standard setting. Standard setting involves setting cut scores on a test to separate levels of test taker performance. Proper standard setting requires considerable resources and technical expertise, beyond what is available to most classroom teachers. Therefore, standard setting is not discussed in detail in this chapter. For those who are interested in more information on linking existing language tests to the CEFR, an excellent resource provided by the Council of Europe (COE) is the manual for relating language examinations to the Common European Framework of Reference for Languages: learning, teaching, assessment (2009). The manual presents and explains procedures for gathering and analyzing evidence in order to claim a relationship between an examination and the CEFR Common Reference Levels. For a general and accessible overview of important issues involved in standard setting, see Chap. 8 of Koretz (2008). For the technically inclined, see Cizek and Bunch (2007) and Cizek (2012) for more sophisticated presentations and discussions of standard setting methods.

This chapter focuses on the first two uses of the CEFR for assessment outlined in its Sect. 2.4 (see the numbered list above) because these are the areas of most immediate relevance and utility to classroom language teachers. At the outset, an important distinction needs to be made between CEFR-linked, CEFR-based, and CEFR-informed assessments. For the purposes of this chapter, assessments which have been linked to the CEFR post hoc, after the assessment has been created, are referred to as CEFR-linked assessments. Assessments which were designed from the very beginning, or a priori, to place learners on the CEFR scales are referred to as CEFR-based. CEFR-based assessments make a strong claim of being able to place learners at a CEFR level for a language skill or of assessing learner mastery of tasks represented by CEFR 'Can Do' descriptors. Finally, assessments which are informed by the philosophy, the descriptor scales, and the supporting resources of the CEFR, but which do not make strong claims of being able to place language learners at CEFR proficiency levels, are referred to as CEFR-informed. It is important to note that designing CEFR-informed assessments remains an inherently challenging task due to the nature of the CEFR. As Harsch and Rupp (2011: 2) explain:

The CEFR represents a synthesis of key aspects about second and foreign language learning, teaching, and assessment. It primarily serves as a consciousness-raising device for anyone working in these areas and as an instrument for the self-assessment of language
ability via calibrated scales. In other words, it is not a how-to guide for developing language tests even though it can serve as a basis for such endeavors.
The CEFR is deliberately underspecified so that it can be flexible enough to serve as a reference framework across multiple languages. Therefore, teachers of individual languages must, in many cases, look beyond the CEFR to supporting documents and other resources to design and validate language-specific, CEFR-informed assessments. The design, implementation, and validation of CEFR-based assessments constitute a rapidly evolving area to which researchers, testing institutions, and educators are continually contributing. As Weir (2005b: 283) stated, "the CEFR was not designed specifically to meet the needs of language testers and that it will require considerable, long-term research, much reflective test development by providers, and prolonged critical interaction between stakeholders in the field to address these deficiencies". We hope that this chapter and this volume can give readers a firm foothold in this area so they can begin to utilize available resources to design and revise effective assessments for their CEFR-informed curricula.
3.2 Some Important Concepts in Assessment
This section outlines some important concepts in assessment, which are essential to understanding the following sections.
3.2.1 Testing, Assessment, and Evaluation

It is crucial to differentiate between the terms testing, assessment, and evaluation, which are often used interchangeably. According to Coombe (2018: 40), a language test is "a set of tasks or activities intended to elicit samples of performance which can be marked or evaluated to provide feedback on a test taker's ability or knowledge". Testing is usually performed under stringent conditions, such as a strict time limit and enforced performance of narrowly and specifically defined tasks. Assessment, on the other hand, represents a wider category of appraisal than testing. Assessment includes language tests but also refers to other ways of eliciting information about language learner ability. In the CEFR, assessment is defined as something which evaluates "the proficiency of the language user" (COE 2001: 177). Lastly, evaluation is an even broader term, which according to the CEFR, "may include the effectiveness of particular methods or materials, the kind and quality of discourse actually produced in the program, learner/teacher satisfaction, teaching effectiveness, etc." (COE 2001: 177). To reiterate, the first term, testing, is the narrowest of the three. Testing refers to a type of assessment which requires language learners to complete a specific, formal task or tasks, which are then graded to indicate learner language ability. The second term, assessment, includes language testing but also encompasses other means of
eliciting information about a learner's language ability that do not necessarily impose the strict conditions of tests. These other types of assessments are often referred to as alternative assessments. They include, for example, self-assessments, portfolios, peer-assessments, and teacher observations. (See Tsagari 2004 for an excellent introduction to alternative assessments.) The third term, evaluation, is the broadest of the three. Evaluation includes ways of appraising not just learner language ability but also all aspects of language program quality.
3.2.2 Assessing Achievement and Proficiency Through Criterion-Referenced and Norm-Referenced Tests

An important distinction is drawn in educational assessment between assessing learner achievement and assessing learner proficiency. In language assessment, achievement tests assess the extent to which learners have mastered specific course content, or learning objectives, between the start and end of a language course. Proficiency tests, on the other hand, assess learner language ability across a range or scale (Brown and Hudson 2002). Achievement tests are generally 'criterion-referenced' and proficiency tests are generally 'norm-referenced'. Criterion-referenced tests assess test taker ability against a set of standards or criteria. Norm-referenced tests, in contrast, aim to separate test taker ability along a normal distribution or bell curve. Norm-referenced tests are concerned with making relative judgments between test takers, whereas criterion-referenced tests involve making absolute judgments about test taker ability. In other words, norm-referenced tests measure a test taker's ability in relation to other test takers by placing them along a proficiency range with numerical scores, for example, by showing that a test taker scored higher than 70% of other test takers. Criterion-referenced tests, in contrast, are used to decide whether test takers meet a defined standard. An example of a criterion-referenced test would be a test that classifies learners as being able or unable to complete a speaking task at the CEFR B1 proficiency level. Tests that claim to place students at a certain CEFR level are, by definition, criterion-referenced tests. Therefore, the language testing sections of this chapter focus on criterion-referenced testing, which is also explained in the CEFR (COE 2001: 184). (See Brown 2005; Carr 2011; Fulcher 2013 for further explanations of the differences between proficiency and achievement tests and between norm-referenced and criterion-referenced tests.)

Criterion-referenced assessments can be very challenging to design, but the process of designing these assessments can also be extremely useful, because it forces educators to think carefully about what the goals of a course really are, and how achievement of these goals can be measured. As Fulcher (2013: 226) states, "establishing standards and introducing a system of standards-based assessment can be exceptionally useful, even challenging and professionally rewarding for teachers".
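The contrast between the two interpretations can be made concrete in a few lines of code. The sketch below is an illustration only: the scores and the cut score are invented and are not taken from the CEFR or from the sources cited above. It contrasts a norm-referenced reading of a score set (where does a test taker sit relative to the group?) with a criterion-referenced reading (does each test taker meet a fixed standard?).

```python
# Illustrative only: the scores and the cut score are invented.
scores = {"Aiko": 62, "Ben": 75, "Chen": 48, "Dina": 88, "Emre": 75}

# Norm-referenced view: where does each test taker sit relative to the others?
def percentile_rank(name: str, all_scores: dict) -> float:
    """Percentage of the other test takers who scored strictly below this test taker."""
    others = [s for n, s in all_scores.items() if n != name]
    below = sum(1 for s in others if s < all_scores[name])
    return 100 * below / len(others)

# Criterion-referenced view: does each test taker meet a fixed standard,
# e.g. a hypothetical minimum score judged to show mastery of a B1 speaking task?
CUT_SCORE = 70

for name, score in scores.items():
    rank = percentile_rank(name, scores)
    meets = "meets" if score >= CUT_SCORE else "does not meet"
    print(f"{name}: scored above {rank:.0f}% of peers; {meets} the criterion")
```

The same raw scores support both readings; what differs is the interpretation the teacher intends to make, which is why the intended interpretation needs to be decided before the assessment is designed.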
3.2.3 Reliability and Dependability

Reliability and dependability refer to the consistency of test scores. Reliability is the term used for the consistency of norm-referenced tests, and dependability is used for criterion-referenced tests. According to Brown (2005: 175), "reliability is defined as the extent to which the results can be considered consistent or stable". Dependability is analogous to reliability, but it refers to the consistency and stability of criterion-referenced test scores (Bachman 2004). Consistency is important because assessment scores should not change depending on when the assessment is administered, on the rater who scores a performance sample, or on which test form a learner takes. Test forms are different versions of a test that have been made from the same specifications (Fulcher 2013). Test forms should therefore be of the same length and style, test the same language skills, and be of approximately equal difficulty. To increase assessment reliability and dependability, it is important to clearly define the construct (language skill) to be assessed and to make sure that the assessment only evaluates the target construct, not other constructs. For example, a listening test task written in difficult language in the L2 might actually assess reading ability more than listening ability. Longer tests are usually more reliable than shorter tests (Fulcher 2013). For clear explanations of test reliability, dependability, and how to calculate these statistics, see Brown (2005), Carr (2011), and Fulcher (2013).
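One commonly reported internal-consistency statistic is Cronbach's alpha. The sketch below is a generic illustration with made-up item scores; it is not a procedure prescribed by the CEFR or by the sources cited above, and the analogous dependability indices for criterion-referenced tests described by Brown (2005) and Bachman (2004) are computed differently.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores).
# Rows are test takers, columns are items; the data here are invented.
from statistics import pvariance

item_scores = [
    [1, 1, 0, 1, 1],  # test taker 1 (1 = correct, 0 = incorrect)
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [1, 1, 1, 1, 0],
]

def cronbach_alpha(matrix):
    k = len(matrix[0])                                   # number of items
    item_vars = [pvariance([row[i] for row in matrix]) for i in range(k)]
    total_var = pvariance([sum(row) for row in matrix])  # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(item_scores):.2f}")
```

Even a rough calculation like this can alert a teacher that a short quiz is producing unstable scores and that more, or better-targeted, items may be needed.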
3.2.4 Summative and Formative Assessment

A further important conceptual distinction in educational assessment is between summative and formative assessment. Summative assessment is conducted at the end of a learning program to assess overall achievement of learning goals. Summative assessment is usually associated with a big final test or a final student report card at the end of a year of study. Formative assessment, on the other hand, is conducted during a program. It is intended to provide feedback to learners and teachers on learning during a course, so that this feedback can be used to help teachers and learners adjust their teaching and learning. As Black and Wiliam (1998a: 2) stated in their influential article, formative assessment provides "information to be used as feedback to modify teaching and learning activities". Formative assessment is also widely referred to as Assessment for Learning (AfL). AfL has been influential in education systems around the world (Leung and Scott 2009; Leong et al. 2018). There is a great deal of evidence that well-implemented formative assessment can lead to improved learning outcomes (Black and Wiliam 1998b; Black et al. 2003, 2004). A further refinement of the concepts of formative assessment, or AfL, has been proposed, which is known as Learning-oriented assessment (LOA) (Purpura 2004). LOA is an attempt to clarify a specific approach to formative assessment, which was considered necessary due to varied and sometimes conflicting definitions and perceptions of formative assessment among researchers (Carless 2007). LOA is an
assessment approach that places more emphasis on the learning aspects than on the measurement aspects of assessment. Carless' (2007) formulation of LOA includes the three components outlined below.

1. Assessment tasks as learning tasks—Tasks should represent real-world applications of the subject matter.
2. Self- and peer-assessment—This can include student involvement in creating assessment criteria, student-to-student feedback on work, and the kind of self-assessment discussed in Sect. 3.3.1 of this chapter.
3. Timely and appropriate feedback—This feedback may come from the teacher or peers, but it is essential that students engage with the feedback actively and meaningfully.

In recent years, LOA has gained increasing attention in the fields of second and foreign language education (e.g., Green 2017; Hamp-Lyons 2017). For a thorough overall explanation of LOA, and how it may be implemented, see Jones and Saville (2016). Concerning LOA's relevance to the CEFR, the three key components of LOA outlined by Carless (2007) align with the principles of the CEFR. Component one aligns with the CEFR's focus on real-world communicative language tasks. Component two aligns with the CEFR's focus on self-assessment. And component three is facilitated by clear CEFR-based program, course, and task goals on which to base feedback. Examples of CEFR-based summative assessments are end-of-course achievement tests aligned to course goals, such as a final writing test or a final speaking test. Although the results of such tests may be used by teachers to modify a course to better address the needs of subsequent cohorts of students, the test results cannot be used during the course to modify teaching and learning. An example of CEFR-informed formative assessment would be regular short quizzes throughout a semester aimed at evaluating the extent to which students have understood target grammar and/or acquired target vocabulary. Another example would be students self-assessing their understanding of lesson content at regular intervals during a semester and adjusting their individual learning plans as a result of these self-assessments (see Sect. 3.3.1 for an explanation of self-assessment). A further example of formative assessment is the portfolio approach described in Chap. 4, where students deliberately collect a body of work that demonstrates their mastery of language tasks at a certain CEFR level, and those tasks' associated CEFR 'Can Do' descriptors, while regularly reflecting on their progress.
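Formative assessment depends on turning regular evidence into usable feedback. The short sketch below is a hypothetical example: the objectives, weekly scores, and the 70% threshold are invented for illustration and are not drawn from the CEFR. It shows one way a teacher might collate formative quiz results by learning objective and flag objectives the class has not yet mastered, so that teaching can be adjusted mid-course.

```python
# Hypothetical weekly quiz results, keyed by learning objective.
# Each value is the proportion of quiz items on that objective answered correctly.
weekly_results = {
    "Describe past events using simple connectors": [0.55, 0.62, 0.68],
    "Ask for and give simple clarifications":        [0.80, 0.85, 0.83],
    "Understand short announcements":                [0.40, 0.48, 0.52],
}

MASTERY_THRESHOLD = 0.70  # invented threshold for "on track"

for objective, scores in weekly_results.items():
    latest = scores[-1]
    trend = latest - scores[0]
    status = "on track" if latest >= MASTERY_THRESHOLD else "needs more class time"
    print(f"{objective}: latest {latest:.0%} (change {trend:+.0%}) -> {status}")
```

The same summary could be shared with learners as part of their self-assessment, which keeps the feedback loop between teaching and learning visible to both sides.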
3.2.5 Validity Theories

Test validation is a complex field with a long tradition in psychology and education. In the twenty-first century, test validation has come to mean gathering and assessing evidence to evaluate whether inferences made based on test results are justifiable. Validity is defined by the latest edition of the American Psychological Association,
American Educational Research Association, and National Council on Measurement in Education's Standards for Educational and Psychological Testing as "the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests" (2014: 11). (See Cumming 2012 for a thorough yet succinct introduction to validation of language assessments.) Inferences made from CEFR-informed classroom assessments would generally be that students have mastered, or partially mastered, a task or tasks described by CEFR 'Can Do' descriptors at a certain CEFR level. For example, a teacher may claim that test takers who achieve a certain score on a speaking test have mastered tasks described by the CEFR A2 spoken interaction 'Can Do' descriptors. At the time of writing, there are two main streams of test validation theory in the language testing community. In the USA, an argument-based approach to test validation dominates, with major proponents including language testing experts such as Bachman and Palmer (2010) and Chapelle (2012). On the other hand, in the UK and Europe, Weir's (2005a) sociocognitive framework has grown to dominate much of the validation research on CEFR-based tests. These two dominant approaches to test validation are each briefly summarized in the following paragraphs, followed by a short summary of the types of evidence that are gathered and evaluated for test validation. Finally, important questions for classroom teachers arising from the test validation literature are presented in Sect. 3.2.5.1.

Argument-based approaches are made up of two main components. Firstly, test users or designers create an argument to explain why they designed or selected a given test and to justify their interpretations and uses of test scores. This stage is called an interpretation/use argument (IUA) (Kane 2013a, b), or an Assessment Use Argument (AUA) (Bachman and Palmer 2010). Secondly, test developers or users gather and analyze evidence to assess the extent to which their proposed interpretations and uses are justified. This stage is called the validity argument (Kane 2013a, b) or the justification stage (Bachman and Palmer 2010). In argument-based approaches to test validation, these two stages are cyclical, with conclusions drawn from the analysis of evidence in the second stage used to guide test revisions in the next cycle of test design and creation. In this chapter, Kane's (2013a, b) terminology of an IUA and a validity argument is used to explain an argument-based approach to validity for test development and revision in Sect. 3.2.6.

Next, the two most influential argument-based approaches currently used in the field of language testing are summarized. The first is Chapelle et al.'s (2008) adaptation of Kane's argument-based approach. Chapelle et al. elaborated a detailed argument for the design and use of the new TOEFL iBT (Educational Testing Service 2019). Their argument was structured around six inferences: domain definition, evaluation, generalization, explanation, extrapolation, and utilization. Each of these inferences is briefly outlined below.

1. Domain definition inference examines how well test content matches the real-world or classroom language tasks that the test aims to assess.
2. Evaluation inference assesses how accurately and appropriately the test is scored in terms of the construct that is intended to be measured.
3. Generalization inference assesses how well scores on a single test can be interpreted as representing test taker ability across the sample of the language domain which the test targets.
4. Explanation inference links performance on a test to a theory of language learning or language proficiency.
5. Extrapolation inference assesses how well performance on the test can be equated with performance in the language domain to be assessed by the test.
6. Utilization inference assesses how well test results fulfill their intended purposes.

Bachman and Palmer (2010) have also detailed an excellent argument-based framework for language test validation, which is becoming increasingly influential. They structure an Assessment Use Argument (AUA) around the following four claims:

(1) that the consequences of the uses of test scores are beneficial;
(2) that decisions made based on test scores take community values and equitability into account;
(3) that interpretations made about test scores are:
    (a) meaningful with respect to a particular learning syllabus, an analysis of the abilities needed to perform a particular task in the target language use (TLU) domain, a general theory of language ability, or any combination of these;
    (b) impartial to all groups of test takers;
    (c) generalizable to the TLU domain about which decisions are to be made;
    (d) relevant to the decisions to be made;
    (e) sufficient for the decisions to be made;
(4) that assessment records are consistent across different assessment tasks and aspects of the direct assessment procedure (e.g., forms, occasions, raters).

The other major approach to language test validation that has been particularly utilized in validation studies of CEFR-based and CEFR-linked tests in Europe (Geranpayeh and Taylor 2013; Khalifa and Weir 2009; Shaw and Weir 2007; Taylor 2011), and increasingly around the world (Florescano et al. 2011; O'Sullivan 2005; Wu and Wu 2010), is Weir's (2005a) sociocognitive framework. This framework details five areas for which validity evidence should be gathered and analyzed. These are:

1. Theory-based validity assesses how well test tasks elicit the same cognitive processes as the real-world language tasks the test is designed to assess.
2. Context validity assesses the extent to which test tasks are representative of the wider sample of target tasks in the test's target language situations.
3. Scoring validity encompasses aspects of test reliability.
4. Criterion-related validity examines correlations between the test and similar tests of the same construct.
5. Consequential validity examines the social consequences of the test.
3.2.5.1 Core Assessment Validity Questions

The test validation frameworks briefly outlined above have considerable overlap and can be difficult to grasp initially for those without a background in testing. In any case, for classroom language teachers who seek to inform their curricula and assessments with the CEFR, it is neither necessary nor practical to conduct full-scale validation studies of their classroom assessments using those frameworks. However, it is useful for classroom teachers to justify their selection, design, and use of classroom assessments to provide some evidence of their utility. As such, the validation frameworks briefly outlined above provide indispensable guidance on important questions to ask when designing and implementing classroom assessments. Important questions for classroom language teachers to consider, arising from the validation frameworks outlined above, follow. If a classroom teacher is able to answer yes to all of these questions and can provide supporting evidence, then this is sufficient to provide a basic validity argument for a classroom test.

1. Does the assessment content match the curriculum content? Does the assessment content match the curriculum goals? Do the assessment tasks closely correspond to the real-world language tasks or the classroom tasks to be assessed?
2. Are the assessment tasks appropriate for the test takers' proficiency levels, background knowledge, and culture?
3. Are the assessment criteria clear? Is the assessment administered in the same way for all test takers? Have test items been thoroughly checked? Are test specifications sufficiently clear for alternate forms to be made? Have raters been sufficiently trained to have a common understanding of a rubric?
4. Is the test reliable or dependable? Does it have sufficient internal consistency? Do raters of productive assessments show sufficient agreement?
5. Are alternate test forms of similar difficulty and content? Are alternate forms made to the same design?
6. Do the assessment tasks focus only on the constructs (language skill(s)) that the assessment is intended to assess?
7. Does the assessment include a wide sample of the tasks and/or target language from the curriculum?
8. Are the assessment results useful for students to identify areas they should focus on to improve their language ability? Do the assessment results provide useful information for teachers to change their teaching approach and lesson content to better meet learners' needs? Do the assessment results provide useful evidence of program efficacy for stakeholders such as administrators and parents?
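For teachers who want to keep a simple record of their answers to these questions, the sketch below may be useful. It is an invented illustration, not a framework taken from Kane, Bachman and Palmer, or Weir: it simply stores each question with a yes/no answer and a note on supporting evidence, and then lists the points for which evidence is still missing.

```python
# A lightweight record of a classroom validity argument.
# Questions are abbreviated from the list above; answers and notes are examples.
checklist = [
    {"question": "Assessment content matches curriculum goals?",
     "answer": True,  "evidence": "Tasks drawn from course 'Can Do' objectives"},
    {"question": "Tasks appropriate for learners' level and background?",
     "answer": True,  "evidence": "Piloted with one class last semester"},
    {"question": "Raters trained to a common understanding of the rubric?",
     "answer": False, "evidence": ""},
    {"question": "Alternate forms of similar difficulty and content?",
     "answer": False, "evidence": ""},
]

print("Outstanding validity evidence to gather:")
for item in checklist:
    if not item["answer"] or not item["evidence"]:
        print(f" - {item['question']}")
```

Kept alongside the assessment specifications, such a record doubles as documentation for the validity argument discussed in the next section.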
3.2.6 The Assessment Development Cycle

Assessment development is often presented as an iterative cycle, in which information obtained at each stage is used to inform later stages, with the results of one cycle informing the following cycle. The assessment development cycle begins with a decision to make an assessment and ends with an overall analysis of the validity evidence gathered during development and implementation. This analysis then informs revisions to the assessment in the next cycle. It is important to note that validity needs to be taken into consideration throughout the assessment development cycle, starting right from stage one, when the IUA is developed. Validity considerations thus inform all stages of the assessment development cycle, as the procedures followed and the documents produced form important lines of evidence for validation. These include how the assessment is administered, assessment results, and how assessment results are presented to both test takers and other stakeholders. The stages of an assessment development cycle are presented in Fig. 3.1. The first six stages of the cycle are adapted from the Council of Europe-ALTE's Manual for Language Test Development and Examining (2011). Stages 7 and 8 have been added to incorporate Kane's (2013a, b) argument-based approach to assessment validation. See the COE-ALTE manual for more detailed explanations of stages 1–6. See Kane (2013a, b) and Chapelle et al. (2008) for more detailed explanations and examples of inferences in validity arguments and the types of evidence that are suitable for assessing different inferences.

(1) Develop/Revise the Assessment
The first stage of the assessment development cycle is to develop the assessment. There are two important components that need to be attended to here. The first is the assessment specifications, which are briefly explained in Sect. 3.2.7. The second is an interpretation/use argument (IUA) (Kane 2013a, b), which describes the proposed interpretations and uses of assessment scores and details the kinds of evidence needed to support inferences for those interpretations and uses (see Sect. 3.2.5). It is also important to try out, or pilot, the assessment tasks. This may be done with first drafts of individual task types after the specifications have been written, or as part of the following stage after the assessment has been completed.

(2) Assemble the Assessment
Once the assessment specifications have been produced, assessment tasks, rubrics, or test forms can be created based on those specifications. Where possible, it is important to have assessments edited by other teachers, or item writers, as this is an important part of quality control. If time and resources permit, it is also advisable to pilot the assessment on a representative group of learners. Also, for tests, it is advisable to produce more test tasks or items than needed, as some tasks and/or items may be found to be wanting in the pilot stage.
Fig. 3.1 Assessment Development Cycle. Adapted with permission from ALTE, © Council of Europe 2011

1. Develop/revise the assessment: Write the assessment specifications. Write an interpretation/use argument (IUA) for the test (Kane 2013a, b) (see Sect. 3.2.5). In subsequent cycles, revise the test specifications based on the validity argument.
2. Assemble the assessment: Make or revise the assessment.
3. Administer the assessment: Have students take the assessment and collect their responses or production.
4. Mark the assessment: Score the assessment and collate scores.
5. Grade the assessment: Put assessment results into easy-to-understand categories such as letter grades or CEFR levels.
6. Report assessment results: Provide assessment results to students and other stakeholders in a meaningful and useful way.
7. Gather validation evidence: Collate validation evidence from assessment development and other sources.
8. Evaluate the validity argument: Analyse the evidence for the validity argument (Kane 2013a, b) (see Sect. 3.2.5). Decide on any necessary changes to the assessment specifications and/or the IUA.
(3) Administer the Assessment
When delivering the assessment, it is important for fairness that all learners take the assessment under the same conditions. This can mean, for example, giving the same amount of time for taking assessment tasks, allowing the same amount of preparation time, and eliminating distractions such as excessive noise. Special accommodations may also need to be made for students with disabilities.

(4) Mark and Grade the Assessment
Where possible, machine scoring of selected response items (see Sect. 3.2.8) can minimize human error. While optical scanners are one option, they can be quite expensive. There are now cheap apps available for Android and Apple devices that produce answer sheets and automatically grade them using a device's camera. For extended production tasks such as essays and spoken interaction tasks, it is important to familiarize raters with the rubric and train them in its use (see Sect. 3.2.10). Using multiple raters for productive tests can increase reliability.

(5) Report Assessment Results
This stage is particularly important for CEFR-informed assessments, as the reporting of results should give learners feedback on their language proficiency in relation to the CEFR scales. For CEFR-based criterion-referenced assessments, it is not enough to simply report a test score. Ideally, results should be reported in terms of learner levels of mastery of CEFR 'Can Do' descriptors. Also, alongside assessment results, learners should be provided with advice and/or resources on how they could further improve their language ability in these areas.

(6) Gather Validation Evidence
It is important to clearly document each stage of the assessment development cycle, as such documentation forms lines of evidence to support the assessment's validity argument. In addition to documenting stages 1–6 of the cycle, it is also beneficial to conduct deliberate validation research on CEFR-informed assessments. This can shed light on test validity, indicating how assessments, their accompanying documents, and processes can be improved to increase validity. The type and amount of validity evidence that should be gathered depends on the type of inferences intended to be made from test scores. Assessments on which important decisions are made, such as university entrance or trade certification, require broad and deep evidence. On the other hand, everyday classroom assessments require less thorough evidential backing. However, it is still useful for classroom teachers to think about, and as far as practical, to gather and analyze evidential backing for their proposed interpretations and uses of test scores. The types of validity evidence that have been used in previous research on CEFR-informed and other language assessments are briefly introduced in Sect. 3.4.

(7) Evaluate the Validity Argument
The final stage of the assessment development cycle is to examine the evidence gathered throughout the test development cycle and evaluate whether it is sufficient to support the desired interpretations and uses of test scores. The validity evidence gathered and its evaluation are known as a validity argument (Kane 2013a, b). If any gaps or
strong rebuttal evidence are found in the validity argument, then changes may need to be made to the test specifications, revisions made to test forms, or further validity evidence may need to be gathered. For classroom assessments, the scope and depth of evidence in the validity argument will naturally be limited by teachers’ time and resource constraints. However, the process of making and evaluating even limited IUA and validity arguments is likely to result in better assessments than not engaging in this process.
3.2.7 Assessment Specifications
As noted in Sect. 3.2.6, the first stage of an assessment development cycle involves designing assessment specifications. Assessment specifications:
• describe the purpose of the assessment;
• explain what inferences the assessment makers intend to make from assessment scores;
• define the construct to be assessed;
• specify the assessment task or tasks;
• detail the assessment format;
• describe how the assessment will be administered and graded; and
• explain how both grades and feedback will be presented to stakeholders.

Assessment specifications are important for standardizing assessments so that alternate assessment forms can be made, and so that assessments can be scored and graded in a consistent manner across classes and teachers. The process of writing assessment specifications also helps teachers to clarify course goals. Finally, assessment specifications are very important for validity because they define the domain and the tasks that the assessment is intended to focus on, so it is important for this information to be communicated clearly to all stakeholders.

When writing assessment specifications, especially for language tests, it can be useful to follow one of the templates available in the literature. For example, Carr (2011) provides some clear and useful test specification templates, suggestions for their use, and important questions to consider when writing test specifications, divided into three areas: (1) test context and purpose, (2) overall test structure, and (3) specifications for individual test tasks. Fulcher (2013) points out that test specifications should also include examples of all test task types to assist test writers, and that the level of detail in task specifications will vary depending on how high stakes the test is. High-stakes commercial tests must have very detailed and specific task specifications, whereas task specifications for classroom assessments will be more general and leave more room for teacher creativity. Finally, it is important to note that writing, using, and revising assessment
specifications is an iterative process (Bachman and Palmer 2010; Carr 2011). Assessment specifications will need to be reviewed and revised regularly based on analysis of validation evidence.
3.2.8 Selected Response Items
Selected response items are sometimes called receptive response items (Brown 2005). Commonly used selected response types are true/false, multiple choice, and matching. Selected response items are generally used for tests of receptive skills such as reading and listening. They are also used to assess knowledge of L2 vocabulary and grammar. As the focus of this chapter is on CEFR-informed assessments, detailed procedures for writing good selected response items, and for checking item quality through peer review and statistical analyses, are not presented here. There are many excellent guides in the literature which cover designing, reviewing, and analyzing selected response test items (see Brown 2005; Carr 2011; Anthony Green 2013; Rodriguez 2016).

One advantage of selected response items is efficient machine scoring, which enables large numbers of tests to be graded quickly. A second advantage of machine scoring is objectivity: selected response items are all graded in exactly the same way. In contrast, human rating of language production is inherently subjective (Downing 2006).
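To illustrate why machine scoring is both fast and objective, the following minimal sketch (in Python) scores selected response items against an answer key. The item labels, answer key, and learner responses are invented placeholders, and the sketch is not tied to any particular scanning app or platform.

```python
# Minimal sketch of automatic scoring for selected response items.
# The answer key and learner responses below are hypothetical examples.

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}

responses = {
    "learner_01": {"Q1": "B", "Q2": "D", "Q3": "C", "Q4": "C"},
    "learner_02": {"Q1": "A", "Q2": "D", "Q3": "A", "Q4": "C"},
}

def score(response: dict, key: dict) -> int:
    """Count items answered exactly as in the key (dichotomous scoring)."""
    return sum(1 for item, correct in key.items() if response.get(item) == correct)

for learner, answers in responses.items():
    print(learner, score(answers, answer_key), "/", len(answer_key))
```

Every response is scored against the same key in the same way, which is the sense in which machine scoring removes rater subjectivity from this item format.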
3.2.9 CEFR-Informed Assessment Rubrics
Assessment rubrics, also referred to as rating scales, are the typical means of assessing language learners' productive language, such as performance on spoken or written tasks. According to Carr's (2011) description, a rating scale "has several levels or score bands, each of which has its own description of what performance looks like at that level" (p. 125). Assessment rubrics are typically divided into two types: holistic and analytic (Brown 2012). In holistic rubrics, a single general scale is used to give an overall performance rating. With analytic rubrics, on the other hand, various aspects of a test taker's language performance are rated separately. Examples of each type of rubric are given in Sect. 3.5.1 of this chapter.

When designing CEFR-informed rubrics, the CEFR scales, with their varied 'Can Do' descriptors for different language skills, provide excellent resources for selecting behavior descriptors for each band or performance level of a rubric. Typically, the assessment designer will first clearly define the test task. For example, a task could be to write an email, plan a weekend activity with a friend, or give an academic presentation. The assessment designer then identifies the language skills and levels from the CEFR that seem to exemplify communicative language performance on the defined test task. After that, CEFR or related descriptors for the rubric score bands are selected, potentially modified, and assigned numerical scores.
Selected 'Can Do' descriptors may need to be modified to better reflect task requirements. Researchers have pointed out that the user-oriented CEFR proficiency scales, which are necessarily broad and general in their descriptions, often need to be made more detailed and specific when used as rating scales in local contexts in order to ensure reliable rating (Alderson 1991; Harsch and Martin 2012). See Sect. 2.3.2.5 for recommendations on how to contextualize 'Can Do' descriptors.

The broad statements in the holistic scales for the CEFR Common Reference Levels (COE 2001) naturally lend themselves to inclusion in holistic rubrics, and the CEFR illustrative descriptor scales naturally lend themselves to inclusion in analytic rubrics. We further suggest that assessment designers check not only the CEFR scales for rubric performance descriptors, but also the Eaquals (2015) bank of descriptors, which provides further useful subdivisions of the A1-C1 levels for superior performance, or plus (+) levels. The ALTE 'Can Do' descriptors can also be consulted, as they have been statistically related to the CEFR levels and cover a different range of communicative situations from the CEFR (Association of Language Testers in Europe 2002). See Chap. 2 of Brown (2012) and Chap. 7 of Carr (2011) for excellent advice on designing and implementing rubrics for language assessment.
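To give a concrete sense of the structure such a rubric takes, the sketch below represents a simple two-category analytic rubric as a data structure, as a teacher might do when building a scoring spreadsheet or tool. The task, categories, band scores, and band descriptions are invented placeholders rather than actual CEFR descriptors; in practice, each band description would be selected or adapted from CEFR or related 'Can Do' scales as described above.

```python
# Hypothetical sketch of an analytic rubric as a data structure. Category names
# and band descriptions are invented placeholders, not actual CEFR descriptors.
analytic_rubric = {
    "task": "Write a short email to arrange a meeting with a colleague",
    "target_level": "B1",
    "categories": {
        "task fulfilment": {
            3: "Covers all required points clearly and appropriately.",
            2: "Covers most required points; some details missing or unclear.",
            1: "Covers few required points; message only partly communicated.",
        },
        "range and accuracy": {
            3: "Sufficient language to express the message; errors rarely impede.",
            2: "Limited but adequate language; errors sometimes impede.",
            1: "Very limited language; errors frequently impede communication.",
        },
    },
}

def total_score(awarded: dict) -> int:
    """Sum the band scores awarded for each category, e.g. {'task fulfilment': 3, ...}."""
    return sum(awarded.values())

print(total_score({"task fulfilment": 3, "range and accuracy": 2}))  # -> 5
```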
3.2.10 Rater Training and Learner Self-assessment Training
Rater training is necessary for raters of productive L2 assessments to reach a common understanding of what kind of learner performance is typical of each band in an assessment rubric. Similarly, learner training is necessary for language learners to accurately assess their own L2 abilities. This section presents recommendations and resources first for rater training for the scoring of productive assessments, and then for learner training in self-assessment.

Rater training is important to ensure similar rubric interpretation and scoring among multiple raters for tests of productive language skills. Rater training can improve the accuracy and consistency of ratings (Davis 2016; Harsch and Rupp 2011). It thus increases the fairness of test scores by lowering the chance that test takers are unfairly advantaged or disadvantaged by the rater(s) they happen to be assigned. The COE-ALTE Manual for Language Test Development and Examining (2011) refers to rater training intended to align rater judgments to an external standard like the CEFR as standardization. It recommends that a rater standardization session begin with familiarizing raters with the relevant CEFR scales. Chapter 3 of the Manual for Relating Language Examinations to the CEFR (COE 2009) provides several excellent and clearly described familiarization procedures. It is also important to familiarize raters with the rubric or rubrics they will use. Raters should then be taken through a guided discussion of examples of ratings of learner performance using the rubric(s). For CEFR-informed tests of productive
language, it is recommended that publicly available, calibrated illustrative examples of spoken and written performance at the target CEFR levels be used (see Sect. 3.3.5 for speaking and Sect. 3.3.6 for writing). This should be followed by trainee raters rating a sample of test taker performances which have been pre-rated. Ideally, the pre-rated performances should come from the test taker population that the raters will assess, and the training session should include discussion of any rating discrepancies and questions that arise.

Similar to rater training, learner training is needed to increase learner understanding of exactly what constitutes mastery of course target 'Can Do' descriptors. Firstly, it is necessary to make goals clear for learners by providing course target 'Can Do' descriptors in easy-to-understand language, preferably in the learners' first language. Secondly, it is recommended that learners be provided with examples of performance which have been calibrated as representative of the target CEFR performance level, along with examples of poor performance which do not meet the criteria for mastery (McMillan and Hearn 2008; Stiggins 2005). As with examples for rater training sessions, ideally these will be graded examples of work from students' peers, provided such samples can be used with their permission. For instance, videoed student presentations, written performance, and/or spoken interaction graded at the A2 and B1 levels could be used. Having students evaluate these examples with the same rubric that teachers use (preferably translated into the students' first language) can raise learners' awareness of mastery requirements.

It may also be beneficial to have learners assess their own work or performance using the rubric before handing their work to the teacher for grading. This will give the teacher a window into learners' level of understanding and their interpretation of the assessment rubric. Another practical strategy for learner training to increase the accuracy of self-assessment is to have learners self-assess their ability on a target 'Can Do' descriptor before and after a task or lesson which focuses on that language skill. In this way, learners experience a task and reflect on their understanding of their own ability to successfully complete the task. This approach can also foster a learning cycle as described in Sect. 5.5.1, which can increase learners' awareness of their own language ability along with their autonomous learning efficacy.
3.3 Types of CEFR-Informed Assessments
This section introduces a range of CEFR-informed assessment types, which language teachers may choose to implement with their classes. For each assessment type, advice is given on design, use, and revision. CEFR resources are suggested, and available exemplary validation studies are briefly discussed.
3.3.1 The CEFR for Self-assessment
Midraj (2018: 1) defines self-assessment as "a learner-centered alternative assessment that at its core involves learners making judgments about their performance, the process of learning, and attitudes based on criteria". Self-assessment is one of the primary intended uses of the CEFR, as self-assessment is intrinsic to learner autonomy, one of the pillars underlying the CEFR's educational philosophy. Little (2009) states that "self-assessment is the hinge on which reflective learning and the development of learner autonomy turn" (p. 3). Black and Wiliam (1998b: 6) also state that "self-assessment by pupils, far from being a luxury, is in fact an essential component of formative assessment".

Many studies have found that self-assessment is relatively accurate when compared with standardized language test results. For example, Blanche and Merino (1989: 315) conducted a wide-ranging review of the literature on self-assessment in language testing, finding strong correlations between self-assessments and a variety of criterion measures ranging "from 0.50 to 0.60", and that higher correlations were "not uncommon". In addition, Ross (1998) conducted a meta-analysis of self-assessment in foreign and second language testing that included studies exploring associations between self-assessment and language tests of reading, listening, and speaking. Ross found moderate to strong correlations of 0.61 for the 23 reading studies examined, 0.65 for listening from 18 correlations, and 0.55 for speaking from 29 studies. These two broad analyses of the literature suggest that self-assessment can provide a reasonably accurate picture of language learners' second or foreign language abilities.

However, at least three factors have been reported to compromise the accuracy of self-assessments of second or foreign language ability. The first is respondents' relative familiarity with the situations or tasks in the 'Can Do' descriptors, that is, whether or not they have actual experience of the situations assessed (Black and Wiliam 1998). The second is differing interpretations of Likert scale categories (Ross 1998; Suzuki 2015). The third is a tendency for less proficient learners to underestimate their abilities and for more proficient learners to overestimate theirs (North and Jones 2009; Suzuki 2015).

Given these factors, it is important to familiarize learners with the language tasks embodied by target 'Can Do' descriptors before asking them to self-assess their abilities. As Fulcher (2013: 71) states, "in order for self- and peer-assessment to work well, it is essential that classroom time be spent on training learners to rate their own work, and the work of their colleagues". Some strategies for learner training to improve the accuracy of learner self-assessment were outlined in Sect. 3.2.10. Secondly, as with all formative assessment, it is essential that learners be provided with support so that they can effectively plan how to move toward mastery of the target skill once a gap between their ability and their goal has been identified (McMillan and Hearn 2008). Strategies for further study may be given as written points at the end of a lesson, in one-to-one consultations between teachers and students, or through written feedback on assignments. Where available, self-access center learning advisors (Mynard and Carson 2012) can help learners formulate,
enact, and reflect on their learning plans. Another possible approach with considerable potential is to have learners engage in peer tutoring, as language learning peers can provide excellent role models and guidance on learning strategies that they have found to be effective (Everhard 2015; Murphey and Arao 2001).
3.3.1.1 Design, Use, and Revision
CEFR 'Can Do' descriptors lend themselves to at least three levels of self-assessment in language programs. The first level, which may be useful at the overall program level, is to have learners self-assess their overall target language ability using the CEFR self-assessment grid. (See the Council of Europe's (2019b) webpage for CEFR self-assessment grids in 32 European languages.) This is a useful way for learners to identify irregularities in their learning profiles (i.e., language skill areas which are stronger or weaker than other skills). For example, they may conclude that their reading abilities are at a higher level than their spoken interaction abilities. This can prompt them to reflect on areas that they want to improve, in order to create an overall personal learning plan.

The second level is that of individual courses. Teachers and course designers may select and/or modify 'Can Do' descriptors from the CEFR or from other CEFR-related sources, such as the Eaquals (2015) or ALTE 'Can Do' descriptors (Association of Language Testers in Europe 2002), to match the communicative goals of a course. These statements can then be used for self-assessment at the start of a course, during a course, and at the end of a course to assist learners to reflect on their progress, and to modify their learning plan based on these reflections.

The third level is that of individual lessons; CEFR 'Can Do' descriptors, or 'Can Do' descriptors from other sources aligned to the CEFR, can be selected and/or modified for specific language tasks in a curriculum. Again, we suggest that learners be given opportunities to self-assess pre-task, and then for self-, peer- and/or teacher assessment at the end of a lesson, or post-task.

Across all of these levels of self-assessment, it is useful to present learners with a simple scale for each 'Can Do' descriptor, such as: I can't do it yet, I can almost do it, I can do it, I can do it easily. Such simple scales can give learners a visual indication of their progress and allow for partial degrees of mastery, which provides more information than a simple yes/no or I can/can't do it. As mentioned previously, pre- and post-task self-assessment can also be used to foster a learning cycle such as the one described in Sect. 5.5.1, in which learners learn to cyclically plan, implement, and reflect on their learning.

3.3.1.2 Validation
At present, there seems to be a paucity of exemplary validation studies on using the CEFR scales for self-assessment. Therefore, a few suggestions follow for types of validation evidence that can be used to improve the design and implementation of self-assessments. These include correlational studies with standardized tests of the self-assessed skill, and correlations and/or comparisons between teacher and peer assessments of the self-assessed language skill. Further validity evidence can be gathered by surveying students on how useful they found their self-assessment to
be, what they found to be difficult, and how effective they found the support provided to create and carry out their individual learning plans based on their ongoing self-assessments. This validity evidence can then be used to improve how self-assessments are conducted and supported, and to provide stakeholders with evidence of the efficacy of self-assessment as part of the formative assessment process.
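As an illustration of the correlational evidence mentioned above, the following sketch computes Pearson and Spearman correlations between learners' self-assessment totals and their scores on a standardized test of the same skill. The data are invented placeholders; a real study would also consider sample size, score distributions, and scatterplots before drawing conclusions.

```python
# Minimal sketch: correlating learner self-assessment with standardized test
# scores as one source of validity evidence. All data here are hypothetical.
from scipy.stats import pearsonr, spearmanr

# Each pair: (self-assessment total on a course 'Can Do' checklist, test score)
pairs = [(18, 52), (25, 61), (30, 70), (22, 58), (35, 80), (28, 66)]

self_assessment = [p[0] for p in pairs]
test_scores = [p[1] for p in pairs]

r, p_value = pearsonr(self_assessment, test_scores)
rho, p_rho = spearmanr(self_assessment, test_scores)

print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```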
3.3.2 Teacher Assessment of CEFR Levels
In addition to learner self-assessment, an alternative to standardized tests for assessing learner progress using the CEFR scales is teacher assessment. Teacher assessment is here defined as teacher judgment of learners' target language ability. Research in general education has found that teacher assessments of student academic ability correlate strongly with performance on standardized tests. Notably, Hoge and Coladarci (1989) reviewed 16 studies, finding a strong median correlation (0.66). Südkamp et al. (2012) also conducted a meta-analysis of 75 studies which correlated teacher judgments with performance on standardized achievement tests, finding a median correlation of 0.53 and an overall mean effect size of 0.63, indicating reasonably good overall accuracy of teacher assessments. A further recent and excellent example of research with strong positive findings is a longitudinal study of educational data of twins in the UK conducted by Rimfeld et al. (2019), which found that teacher assessments correlated strongly with exam scores for English, mathematics, and science for students aged 7–14 (correlations of about 0.7). Rimfeld et al. conclude that "Teachers can reliably and validly monitor students' progress, abilities and inclinations" (p. 1).

However, as with learner self-assessment, the level of teacher familiarity with the CEFR and its scales is likely to affect the accuracy of teacher assessments of student CEFR level. Therefore, it is essential to familiarize teachers with the CEFR levels in the range of their students' abilities to foster valid teacher assessment (North 2009). As noted in Sect. 3.2.10, the Manual for Relating Language Examinations to the CEFR (COE 2009) explains how to familiarize panel members with the CEFR for the purpose of linking exams to the CEFR. These procedures could also be used to familiarize classroom teachers with the CEFR for the purpose of increasing the accuracy and reliability of teacher assessments.
3.3.2.1 Design, Use, and Revision
North (2009) suggests two ways in which teachers can assess the CEFR proficiency levels of their students. The first is for teachers to grade their students on the CEFR course objectives at the end of a language course, such as by giving a student a grade of A2 for reading and A1 for spoken production, etc. North describes this method as simple and quick, but very unreliable. The second way is for teachers to use checklists to indicate what a learner can do at certain intervals during a course. These checklists can include both CEFR-derived 'Can Do' descriptors and other language points. North points out that these checklists require teachers to be
familiar with the students and that the number of points covered by the checklist should be kept to a reasonable number, so that teachers are not overwhelmed.

Using CEFR-related scales and checklists for teacher assessment of student language ability has the advantage of giving students reasonably specific and immediate feedback on their language proficiency. However, we agree with North (2009) that, given the known limitations to the accuracy of teacher assessments, they should be combined with other assessment types, such as standardized tests of productive skills and/or learner self-assessment, to provide a more reliable and accurate picture of learner ability. It is also important to solicit teacher and learner feedback on the practicality and utility of the scales and checklists used for teacher assessment, to ensure that completing such assessments is manageable for teachers and that the information provided is understandable and can thus be put to use by learners.
3.3.2.2 Validation
Although there are few studies investigating the accuracy of teacher judgment of student ability on the CEFR scales, a notable example is Fleckenstein et al. (2018). They investigated the accuracy of teacher judgments of individual students' English language proficiency compared to TOEFL ITP scores. The TOEFL has been linked to the CEFR through a standard setting process (see Tannenbaum and Baron 2011). Fleckenstein et al. (2018) examined rating data from 73 EFL teachers of 1315 German high school students in their final year of study. They found that teacher placement of students in their EFL classes into global CEFR ability levels (A1-C2) was moderately accurate in terms of ranking relative student ability within classes (0.41), but that teachers significantly overestimated their students' overall CEFR level. They observed that "almost all of the students (99.3%) were located on CEFR levels A2, B1, and B2 according to the TOEFL ITP. The teachers, however, allocated most students (95.4%) on levels B1, B2, and C1" (p. 97). More research is needed using further CEFR-linked and CEFR-based proficiency tests as the criteria, along with other methods of teacher assessment, but Fleckenstein et al.'s study reinforces that teacher assessment, while useful, should not be relied upon on its own to judge student CEFR proficiency.

To gain validity evidence for teacher assessments in a language program, feedback can be gathered from teachers and learners. For example, feedback from surveys and/or focus groups can be used to modify the frequency of teacher assessments, the number and selection of checklist points, and the way checklist results are presented to learners. Correlations of teacher assessments with other measures of student language proficiency can also be used.
3.3.3 CEFR-Informed Portfolio Assessments
A portfolio, a structured collection of student work, is a type of alternative assessment. Brown and Hudson (1998: 664) define a portfolio "as purposeful collections of any aspects of students' work that tell the story of their achievements,
skills, efforts, abilities, and contributions to a particular class”. They also provide a useful summary of advantages and disadvantages of portfolios as an assessment instrument. CEFR-informed language portfolios, specifically the European Language Portfolio, are covered thoroughly in Chap. 4 and so are not elaborated on here (see Sect. 4.2).
3.3.4 Available CEFR Level Placement Tests
Designing and validating tests that can reliably place learners across the range of CEFR levels, and simultaneously building a strong validity argument for such tests, are rather monumental tasks. Therefore, it is unreasonable to expect busy classroom teachers to design and validate such tests. Fortunately, most major international standardized language proficiency tests have been linked to the CEFR through standard setting procedures. For example, for English language proficiency the TOEFL (Papageorgiou et al. 2015), IELTS (Lim et al. 2013) and TOEIC tests (Tannenbaum and Wylie 2013) all place learners at CEFR levels. Major commercial tests for other European languages also place learners at CEFR levels.

There are also cheap or free online computer adaptive CEFR level placement tests for English that are easy to administer and have been designed by large organizations with the input of language education experts. For example, the English First Standard English Test (Education First 2014) is a free test of English language proficiency, which takes around 50 min to complete. The Oxford Online Placement Test (OOPT) (Purpura 2010) is affordable and allows easy aggregation and export of learner data. There is also the Cambridge placement test, which takes just 30 min to complete, but which seems to lack readily accessible validation research. Free CEFR placement tests for other European languages, such as the Goethe Institute's online German placement test (2019) and the Lengalia test for Spanish (2019), are also increasingly available online. Teachers are advised to consider factors such as price, ease of administration, practical collation of student results, and available supporting research when choosing a CEFR placement test.

A further option for CEFR level placement is the DIALANG, a diagnostic test developed by a consortium of European higher education institutions with support from the European Commission's Socrates program (Alderson 2005; Alderson and Huhta 2005). The DIALANG is currently maintained by the University of Lancaster (https://dialangweb.lancaster.ac.uk/). It is designed to assess language learner proficiency in 14 European languages. Advantages of the DIALANG are that it is free, it has been systematically designed as a CEFR-based test by language testing experts, and it can be taken by language learners on their smartphones. The DIALANG also has test instructions in 18 languages, including 4 Asian languages. It offers separate assessments in reading, writing, listening, grammatical structures, and vocabulary. Disadvantages of the DIALANG are that it is not designed as a placement test, so it does not allow administrators to easily aggregate test taker data for cohorts of students. The DIALANG is recommended for individual learners as a diagnostic
tool to help them in planning and reflecting on their learning. It may also be used as a placement test in cases where the resources are not available to use a commercial placement test. For example, Baglantzi (2012) found evidence supporting the use of the DIALANG as a placement test for English classes in Greek junior high schools. However, using the DIALANG for placement purposes should be done with caution, as this is not what it was designed for, and there remains little validation research on using the DIALANG as a course placement test.

A final important note is that the results of commercial tests claiming to place learners at CEFR proficiency levels should only be used as a guide. This is partly because CEFR proficiency is multidimensional, and a single score on a single test does not mean that, for example, a learner placed at the B1 level based on an overall test score will demonstrate mastery of all B1 CEFR 'Can Do' descriptors across all language skills, in all situations. Learners are likely to exhibit uneven profiles, meaning that they may demonstrate a higher CEFR level for some language skills in some contexts and a lower CEFR level for other language skills in other contexts. For example, a learner may be at B1 for overall reading ability and at A1 for overall spoken interaction, or a learner may be at B2 when speaking about a familiar work topic but at B1 when speaking about an unfamiliar topic or situation. Furthermore, different commercial tests have been shown to place learners at different CEFR levels (Anthony Green 2018; de Jong 2009). This may be because the process of standard setting is inherently subjective and standard setting results can vary depending on the method used (Koretz 2008). Variation in CEFR level placement between standardized tests may also be due to the CEFR being deliberately underspecified (Milanovic 2009; Weir 2005b). Finally, the test method itself has been shown to influence test scores (Ong 1982); for example, some learners perform better on interview tests and others on multiple choice tests. Therefore, the results of commercial placement tests should be used only as a general guide and combined with additional information from self-assessment, teacher assessment, and other forms of ongoing formative assessment.
3.3.5 CEFR-Informed Speaking Assessments
CEFR-informed speaking assessments may include assessments focusing on spoken production (e.g., presentations) and spoken interaction tasks (e.g., booking a hotel). Methods of assessment can include teacher judgments of in-class communicative language activities and/or more formalized speaking tests. This section focuses on standardized speaking tests. For an overview of general issues involved in testing second or foreign language speaking, see O'Sullivan (2013). An informative overview of speaking test task types can be found in Ockey and Zhi (2015).
3.3.5.1 Design, Use, and Revision
Practically, we recommend that teachers follow these steps when designing a CEFR-informed speaking test.
1. Choose the speaking task or tasks to be assessed from the curriculum or from the real-life situations for which the curriculum aims to prepare learners.
2. Decide on the test format. How many test takers and how many raters will there be? (For example, will it be a roleplay between two students with two raters, a one-on-one test with the teacher acting as an interlocutor and rater, or a student monologue with a single teacher rater?) Will you mimic a task or tasks from an existing test or design your own test tasks from scratch?
3. Write test specifications. These specifications should clearly state the time given for the task, the role of the rater(s), and the spoken language skills the test aims to assess. The CEFR Grid for Speaking (available from https://rm.coe.int/16806979df), produced by the ALTE CEFR SIG, is an excellent tool to use when writing or reviewing specifications for a CEFR-informed speaking test.
4. Write an assessment rubric by selecting and/or modifying CEFR 'Can Do' descriptors. The Qualitative features of spoken language (expanded with phonology) grid in Appendix 3 of the CEFR Companion Volume (COE 2018) and the illustrative scales for spoken interaction may be particularly useful. Teachers can also refer to commercially available exam rubrics for CEFR-aligned speaking tests, such as the Euroexam rubrics (Euroexam 2019). The teacher handbooks and documents on assessing performance at the various CEFR levels for the Cambridge suite of ESOL exams also provide useful references. They include clear descriptions of CEFR-based speaking test tasks across the range of CEFR levels and simplified CEFR-informed speaking test rubrics covering CEFR target levels (see Cambridge English Language Assessment 2014b, 2016d, e, f, 2019; University of Cambridge ESOL Examinations 2012). Cambridge English Language Assessment (2008a, b, 2011a, b) also provides excellent explanations of its speaking assessment scales for CEFR levels A2, B2, C1 and C2.
5. Pilot the test.
6. Gather feedback from the pilot and modify the test specifications as necessary.
7. Train test raters through a standardization session. Examples of calibrated speaking performance at the CEFR levels A2-C2 useful for rater training are available at the Cambridge Assessment (2019) website. Videos of test taker performance on the Cambridge suite of CEFR-informed tests can also be found on YouTube, along with links to examiner grades and notes for the samples. These examples have a strong supporting validity argument (University of Cambridge ESOL Examinations Research and Validation Group 2009). There are also further examples of calibrated spoken performance across the range of CEFR levels for English, French, German, Italian, and Spanish available from the Language Policy Unit of the Council of Europe in DVD format, which are the result of a cross-language benchmarking seminar organized at the Center International d'Etudes Pédagogiques (CIEP) in 2008. These samples can be ordered from the COE webpage, although many of the samples are also available online (https://www.ciep.fr/ressources/ouvrages-cederomsconsacres-a-levaluation-certifications/dvd-productions-orales-illustrant-les-6niveaux-cecrl). These samples also have supporting documentation and a Guide for the organization of a seminar to calibrate examples of spoken performance (see the Council of Europe website at https://www.coe.int/en/web/portfolio/speaking).
8. Run the test.
9. Gather validation evidence on the test.
10. Modify the test specifications and/or test forms as needed based on the validation evidence.
3.3.5.2 Validation
While there are few validation studies of speaking tests claiming alignment to the CEFR scales, two notable examples are Taylor (2011) and Liu and Jia (2017). In the volume Examining Speaking, edited by Taylor (2011), Weir's (2005a) sociocognitive validation framework is used to examine speaking tests in Cambridge's ESOL suite of exams. Overall, the contributors found strong supporting evidence for the validity of the Cambridge ESOL exams in each of the areas of Weir's framework, although they also identified several areas for which more research is needed. Liu and Jia (2017) analyzed and compared the language functions used by test takers during a speaking test to the language functions listed on a test syllabus developed based on the CEFR. They found that most of the language functions on the test syllabus were produced by test takers during the test, which provided strong validity evidence for the test. However, they also found some attenuating evidence that students demonstrated a "lack of interactional functions" (p. 1). Other research by Wisniewski (2018) has found evidence against the validity of the CEFR vocabulary and fluency scales for rating speaking tests, which is a further indication that CEFR scales often need to be modified for use in rubrics.

Areas which teachers may consider examining to assess the validity of their CEFR-informed speaking tests include:
• the match between speaking test tasks and the communicative language goals of the curriculum;
• the match between speaking test tasks and speaking tasks in the curriculum; and/or
• the match between speaking test tasks and the real-world speaking tasks which the curriculum aims to prepare learners for.

It may also be fruitful to compare the types of language produced on speaking test tasks to the types of language produced in the real-world speaking tasks that the test aims to emulate, with a closer match providing stronger validity evidence. Additionally, teachers may investigate test reliability, the relative leniency/harshness of test raters, and the relative difficulty of speaking test tasks across test forms to provide
evidence of consistency of grading and consequent fairness. For those interested in statistical analyses as a source of validity evidence, many-facet Rasch measurement is an excellent way to explore rater consistency, item difficulty, and test taker performance as validity evidence for productive language tests. (See Eckes 2015 for an accessible introduction to many-facet Rasch measurement.)
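For teachers who want a quick preliminary check of rater consistency before (or alongside) a full many-facet Rasch analysis, simpler indices can be computed with standard statistical tools, as in the sketch below. The rater scores are invented placeholders, and quadratically weighted kappa is only one of several possible agreement measures.

```python
# Minimal sketch: a preliminary check of agreement between two raters who
# scored the same speaking performances on a 0-5 rubric. Scores are invented.
# This is not a substitute for many-facet Rasch measurement, which can also
# model task difficulty and rater severity.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

rater_a = [3, 4, 2, 5, 3, 4, 1, 3, 4, 2]
rater_b = [3, 3, 2, 5, 4, 4, 2, 3, 5, 2]

kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
r, _ = pearsonr(rater_a, rater_b)

print(f"Quadratically weighted kappa: {kappa:.2f}")
print(f"Pearson correlation between raters: {r:.2f}")
```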
3.3.6 CEFR-Informed Writing Assessments
CEFR-informed writing assessments focus on written interaction (e.g., emails or letters) or written production (e.g., essays or reports). As with speaking assessments, they should generally be graded by human raters.
3.3.6.1 Design, Use, and Revision
We suggest the following steps when designing, using, and revising CEFR-informed writing assessments.
1. Choose the writing task or tasks to be assessed from the curriculum or from the real-life situations that the curriculum aims to prepare learners for.
2. Decide on the assessment format. What form will the writing prompt take? How much will test takers be expected to write? Will the assessment be computer or paper based? How much time will test takers be given? Will the assessment mimic a task or tasks from an existing test, or will you design your own tasks from scratch?
3. Write assessment specifications. These specifications should include clear statements about the time given for the task, the structure of the prompt, and the written language skills that the task aims to assess. A useful resource for creating writing assessment specifications, or for evaluating existing specifications, is the CEFR Grid for Writing Tasks (available from https://www.alte.org/resources/Documents/CEFRWritingGridv3_1_presentation.doc.pdf) created by the ALTE CEFR SIG.
4. Make an assessment rubric by selecting and/or modifying CEFR 'Can Do' descriptors, 'Can Do' descriptors from other CEFR-related resources such as the Eaquals (2015) descriptors, and/or the Written assessment grid on pages 173–174 of the CEFR Companion Volume (COE 2018). It may also be useful to refer to commercially available examiner rubrics for CEFR-based writing tests. For example, the teacher handbooks and documents on assessing performance at the various CEFR levels for the Cambridge suite of ESOL exams (see Sect. 3.3.5.1) are useful references with simplified CEFR-informed rubrics. Cambridge English Language Assessment (2014a, 2016a, b, c) also provides clear explanations of the writing scales and rubrics used for its exams, along with graded examples of writing output at CEFR levels B1-C2. Finally, Appendix D of Rupp et al.'s (2008) in-depth description and analysis of the development of CEFR-based tests for the German school system provides detailed CEFR-informed rating scales for writing assessments, making it an excellent reference.
5. Pilot the test.
6. Gather feedback from the pilot and modify the test specifications as necessary.
7. Train writing test raters through a standardization session, if feasible. Examples of writing tasks and answers at the CEFR levels A2-C2 for English, French, German, Italian, and Portuguese, made by professional testing organizations for the Preliminary Pilot Version of the Manual for Relating Language Examinations to the CEFR, are available from the Council of Europe website (https://rm.coe.int/168045a0cf). These examples are useful for rater/learner training and for designing writing test tasks.
8. Run the assessment.
9. Gather validation evidence on the test.
10. Modify the assessment specifications and test forms as needed based on the validation evidence.
3.3.6.2 Validation
There is considerably more research available for CEFR-based writing tests than for CEFR-based speaking tests. This may be because of the importance of English academic writing for studying in higher education. A few selected studies are briefly reviewed here to provide examples of the types of validation evidence that teachers may collect for test validation research.

Konrad et al. (2018) conducted mixed methods research into the design of writing tasks for the CEFR A1 and A2 levels by reviewing the literature, examining a small corpus of exam tasks, and surveying small-scale test developers. Their research highlighted the difficulty of designing such tasks, as they found that there is little practical guidance available for designing CEFR-informed writing tasks at these levels. They also found that there was often no clear differentiation between A1 and A2 level tasks in the test tasks that they examined. On a positive note, they further speculated that the new and extended CEFR writing descriptors in the Companion Volume (COE 2018) may offer better guidance for distinguishing between tasks at the A1 and A2 levels.

Harsch and Rupp (2011) investigated standardized writing tasks as part of a large-scale proficiency project to evaluate the English proficiency of German high school students. The writing tasks were designed to test English writing proficiency across the A1-C1 range of CEFR levels (Harsch et al. 2010; Rupp et al. 2008). The researchers found that statistical measures of task difficulty generally corresponded well with the intended CEFR difficulty level of the writing tasks, and that it would be possible to set empirically based cut scores to distinguish between CEFR English writing ability levels in their sample. Harsch and Rupp's study provides strong validation evidence to support the design and implementation of level-specific, criterion-referenced, CEFR-informed writing test tasks that can distinguish between mastery and non-mastery of CEFR-informed writing skills at each level.
Harsch and Martin (2012) investigated how CEFR 'Can Do' descriptors can be adapted to make a writing rating scale in a local context. They used a data-driven approach to adapt their descriptors by monitoring 13 raters using 19 writing tasks during a rater training session. The research focused on rubrics which were developed for each of the CEFR levels A1-C1. The analytic rubrics contained four categories: task fulfillment, organization, vocabulary, and grammar. Writing tasks were given a score for each category and an overall grade of fail, pass, or pass+ for exceptional performance. The researchers used statistical analysis to identify instances of disagreement between raters and problematic tasks, which were then discussed with the raters in groups. When the wording of rating scales was identified as causing a problem, the scales were revised after considering rater input. The resulting scales were further validated through a descriptor sorting exercise. High rater agreement was found using the revised scales in a subsequent rater training session. The authors argue that revising rating scales based on statistical evidence and on consensus reached through rater discussion provides important validity evidence for the resulting rating scales.

As the above three studies illustrate, validity evidence for writing tests can come from detailed documentation of test design and rater training procedures. Further sources of validity evidence may come from statistical analyses, such as the level of agreement between raters and reliability indexes for ratings. Statistical analysis may also confirm or rebut the intended difficulty of CEFR-informed tasks, potentially suggesting the need to revise task specifications.
3.3.7 CEFR-Informed Reading and Listening Assessments
As noted in Sect. 3.1, a challenge when designing reading and listening assessments based around the CEFR scales is that the CEFR does not provide enough information for the design of test specifications. For example, it is unclear what exactly is meant by this part of the A2+ overall reading comprehension descriptor (COE 2018: 60): "simple texts on familiar matters of a concrete type." How long are such texts? What vocabulary and grammar constitute "simple"? Which texts would be considered "concrete" and which abstract? These rather ambiguous terms leave considerable room for interpretation.

A practical way to begin designing CEFR-informed receptive language assessments is to examine tests in the public domain that have a strong validity argument and strong empirical evidence backing their connection to the CEFR. While the specifications for these tests are generally not available in the public domain for proprietary reasons, examining tasks and items from these tests, along with available supporting documentation, can provide a useful starting point for designing CEFR level-specific reading and listening test tasks and items.

Excellent examples of tests that certify learner ability on the CEFR scales for English are the Cambridge suite of CEFR level certification exams. This suite of
exams consists of the Key English Test (KET) at the A2 level, the Preliminary English Test (PET) at the B1 level, the FCE at the B2 level, the CAE at the C1 level, and the CPE at the C2 level. For other languages, the CIEP DELF and DALF are excellent examples for French, covering the full range of CEFR levels, and the Goethe Institut's Goethe-Zertifikat tests are excellent examples for German. For Italian, there are the University for Foreigners of Perugia's Center for Language Evaluation and Certification Certificato di Conoscenza della Lingua Italiana (CELI) exams, and for Portuguese the Assessment Center for Portuguese as a Foreign Language (Centro de Avaliação de Português Língua Estrangeira, CAPLE) offers the CIPLE (A2), DEPLE (B1), DIPLE (B2), DAPLE (C1), and DUPLE (C2).

Illustrative test tasks and items from CEFR-based and CEFR-linked exams are available from the Council of Europe's website, for listening (https://www.coe.int/en/web/common-european-framework-reference-languages/listening-comprehension) and reading (https://www.coe.int/en/web/common-european-framework-reference-languages/reading-comprehension).

Once CEFR-informed reading and/or listening assessments have been completed, an excellent tool for analyzing the match between reading and listening tasks and the CEFR is the Dutch CEFR Grid (Alderson et al. n.d.). The Dutch CEFR Grid for analyzing tests of reading and listening was the result of a project funded by the Dutch Ministry of Education, Culture and Science. It aims to "describe the construct of reading and listening for English, French, and German which should underlie test items, tasks, and whole tests at the six main levels of the CEFR" (Alderson et al. 2006: 4). The grid provides a systematic framework for analyzing reading and listening tasks in terms of their alignment to the CEFR. The grid has been found to be effective for CEFR-linked reading test validation (Wu and Wu 2010).
3.3.7.1 Design, Use, and Revision
This section deals with CEFR-informed reading and listening tests. We suggest that practitioners take the following steps to design and revise such tests.
1. Choose the reading and/or listening tasks to be assessed from the curriculum, or from the real-life situations the curriculum aims to prepare learners for. Review the illustrative tasks from commercial testing organizations available on the Council of Europe's website.
2. Decide on the task formats. How many and what type of reading/listening tasks will be on the test? How much time will test takers be given to answer the questions? Will the test be computer or paper based? Will you mimic a task or tasks from an existing test, or design your own test tasks from scratch?
3. Write test specifications. These specifications should include clear statements about the timing of the task, the structure of the prompt, the types of test taker responses, and the listening or reading skills that the test aims to assess. It will also be useful at this stage to check the specifications using the Dutch CEFR Grid, which was briefly introduced above.
4. Create the first draft of the test. At this stage, it may also be useful to check the grammar and vocabulary of reading and listening passages in the test against Reference Level Descriptions, if available (see Sect. 3.3.8). This will help identify the difficulty level of reading and listening passages and help with simplifying language to suit the target CEFR level of the test if necessary. Tools such as the English Profile Text Inspector (Cambridge University Press 2015), which automatically analyzes the CEFR level of vocabulary in texts, also provide invaluable analysis of test texts (a simple illustration of this kind of vocabulary check follows this list).
5. Review the test draft through peer feedback.
6. Pilot the test.
7. Gather feedback from the pilot, and modify the task specifications as needed.
8. Run the test.
9. Gather validation evidence on the test.
10. Modify the test specifications and forms as needed based on an analysis of the validation evidence.
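To make the vocabulary check suggested in step 4 concrete, the sketch below flags words in a draft passage whose listed CEFR level is above the target level of the test, using a simple word-to-level lookup. The mini word list and passage are invented placeholders; a real check would draw on a published RLD or a tool such as the English Profile Text Inspector.

```python
# Hypothetical sketch: flag words in a draft passage that sit above the target
# CEFR level. The mini word list below is an invented placeholder; a real
# check would draw on an RLD or a resource such as the English Profile wordlists.

CEFR_ORDER = ["A1", "A2", "B1", "B2", "C1", "C2"]

# Invented example entries mapping word -> lowest CEFR level of use.
word_levels = {"weather": "A1", "forecast": "B1", "humidity": "B2", "rain": "A1"}

def flag_above_level(passage: str, target: str) -> list[str]:
    """Return words whose listed level is above the target CEFR level.
    Words missing from the list are skipped rather than guessed."""
    limit = CEFR_ORDER.index(target)
    flagged = []
    for word in passage.lower().split():
        word = word.strip(".,!?")
        level = word_levels.get(word)
        if level is not None and CEFR_ORDER.index(level) > limit:
            flagged.append(word)
    return flagged

print(flag_above_level("The forecast says rain and high humidity.", "A2"))
# -> ['forecast', 'humidity']
```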
3.3.7.2 Validation
Exemplary validation studies have been conducted on the reading and listening sections of the Cambridge suite of CEFR certification tests using Weir's (2005a) sociocognitive framework, and these are available in book form: see Khalifa and Weir (2009) for reading and Geranpayeh and Taylor (2013) for listening.

Using judgments of a panel of experts on tasks from a small sample of tests, Khalifa and Weir's extensive study found that the types of tasks in the Cambridge suite of exams should generally activate the types of level-appropriate cognitive processes explicated in the literature on second language reading, but that there were a few tasks at the CAE (C1) and CPE (C2) which did not appear level appropriate. The authors also found generally strong evidence to support the context, scoring, consequential, and criterion-related components of Weir's framework, although they indicated that more research was needed, for example using verbal protocols to confirm these findings. The authors also suggested adding more detail to the test specifications.

Geranpayeh and Taylor (2013) edited a volume to which various authors contributed validation research within Weir's (2005a) validity framework. This broad and thorough validation study found generally strong validity evidence for the listening sections in the Cambridge suite of CEFR-based certification tests across all the validity facets of the sociocognitive framework. The book also suggested changes which may further increase test validity, such as adding more detail to test specifications and to instructions for item writers and voice actors for recordings, in order to make listening texts closer to real-world speech, and standardizing terminology in the specifications across the test suite. Based on the results, the authors also outlined a future research agenda to further strengthen validity evidence for these exams: for example, researching the effect of presenting listening questions before, between, or after listening passages, which are played twice; how to better
elicit high-level listening processes in the higher level tests; whether more academic scripts should be used for the higher level tests; and the viability of giving learners a scale score with their test results that shows where they sit on an overall numerical proficiency scale from A1 to C2.

Typical sources of validity evidence for reading and listening tests include statistical analyses of the test as a whole and of individual tasks and items, correlations with other tests that claim to measure the same construct, and analyses of the domain coverage of the test. A further source of validity evidence for reading and listening assessments is verbal protocols, in which test takers report on what they are doing and thinking while taking test tasks, to explore whether the cognitive processes used by test takers match those which the test is intended to elicit. See Alison Green (1998) for a thorough introduction to verbal protocol analysis in language testing research.
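As one concrete example of the item-level statistical analyses mentioned above, the sketch below computes two classical indices for dichotomously scored reading or listening items: the facility value (proportion correct) and a simple discrimination index (the point-biserial correlation between item scores and total scores). The response matrix is an invented placeholder, and in practice the total score would usually be corrected by excluding the item being analyzed.

```python
# Minimal sketch of classical item analysis for dichotomously scored items:
# facility (proportion correct) and point-biserial discrimination against the
# (uncorrected) total score. The response matrix below is invented.
from scipy.stats import pointbiserialr

# Rows = test takers, columns = items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
]

totals = [sum(row) for row in responses]
n_items = len(responses[0])

for item in range(n_items):
    item_scores = [row[item] for row in responses]
    facility = sum(item_scores) / len(item_scores)
    r_pb, _ = pointbiserialr(item_scores, totals)
    print(f"Item {item + 1}: facility = {facility:.2f}, discrimination = {r_pb:.2f}")
```

Very easy or very hard items (facility near 1 or 0) and items with low or negative discrimination are candidates for review against the test specifications.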
3.3.8 CEFR-Informed Vocabulary and Grammar Assessments
Due to the communicative language focus of the CEFR, and the fact that the CEFR was designed to cover all European languages in a single framework, it does not provide language-specific grammar points whose mastery may be used to separate learner ability between levels. To address this gap, the Council of Europe has called for the development of Reference Level Descriptions (RLDs), which list the key linguistic features of individual languages at each of the CEFR levels. According to the COE's website (COE 2019a), "The RLDs constitute structured inventories of a language's words and "rules" that are deemed necessary to produce oral and written texts corresponding to the CEFR descriptor scales." A Draft Guide for the Production of RLDs has been produced by the COE (2005). The guide calls for RLDs to be produced based on statistical frequency in texts, analyses of learner language production, and established orders of second language acquisition. To date, RLD projects for eleven languages have been deemed to comply with the production guide, and links to the results of these projects can be found on the COE website (https://www.coe.int/en/web/common-european-framework-reference-languages/reference-level-descriptions-rlds-developed-so-far).

Both L2 grammar proficiency and vocabulary knowledge influence language learners' ability to comprehend language input and to produce comprehensible output. Thus, grammar proficiency will be implicitly assessed in tests of the productive and receptive skills covered so far in this chapter. Grammar and vocabulary use may also be assessed explicitly through a grading category or categories for their use in writing and speaking rubrics. Discrete grammar and vocabulary tests are generally designed using the selected response format briefly explained in Sect. 3.2.8.
Purpura (2004) provides an excellent overall introduction to issues and methods in L2 grammar assessment and Read (2000) is a great general introduction to issues and methods in L2 vocabulary assessment. These two books also give useful overviews and analyses of task and item types for discrete grammar and vocabulary tests (i.e., tests which measure grammar and vocabulary as constructs separate from other facets of language ability).
3.3.8.1 Design, Use, and Revision
We suggest taking the following steps to design and revise CEFR-informed grammar and vocabulary assessments.
1. Choose the grammar points and/or vocabulary to be assessed from the course curriculum. Course vocabulary lists can be constructed, and target grammar points chosen, with reference to RLDs, if available for your target language.
2. Decide on the assessment format. How many and what type of grammar/vocabulary tasks will be on the test? How much time will test takers be given to answer the questions? Will the test be computer or paper based? Will you mimic a task or tasks from an existing test or design your own test tasks from scratch? You may refer to the general L2 testing literature, such as Purpura (2004) for appropriate grammar task and item types and Read (2000) for appropriate vocabulary task and item types.
3. Write the assessment specifications. These should include clear statements about the definition of the construct(s) to be assessed, task time, prompt structure, types of test taker responses, and the range and types of grammar and/or vocabulary that the test aims to assess.
4. Create the first draft of the assessment.
5. Review the assessment draft through peer feedback.
6. Pilot the assessment.
7. Gather feedback from the pilot, such as item analysis and feedback from test takers, then modify the assessment specifications as needed.
8. Run the assessment.
9. Gather validation evidence for the assessment.
10. Modify the assessment specifications and test forms as necessary based on the validation evidence.
3.3.8.2 Validation
To the best of our knowledge, there are no studies to date which have specifically sought to validate discrete grammar or vocabulary assessments as criterion-referenced tests aligned to the CEFR. Therefore, this is an area in which more research is needed. Teachers are advised to use the core validity questions in Sect. 3.2.5.1 to begin to build a validity argument with a view to assessing and improving their CEFR-aligned grammar and vocabulary tests. Sources of evidence are similar to those used for selected response items for reading and listening tests and can include:
• analysis of the match between test and curriculum content,
• analysis of the match of the test specifications to the curriculum’s grammar and vocabulary goals,
• statistical analyses of the test as a whole and/or of individual tasks and items on the test (a simple item-analysis sketch follows this list), and
• think-aloud protocols to assess if the test items only assess the construct in question and are not influenced by other factors.
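For readers who want to carry out such item-level statistical analyses themselves, the sketch below shows one common classical approach: item facility (the proportion of test takers answering an item correctly) and item discrimination (the point-biserial correlation between an item and the total score). The response matrix and variable names are illustrative assumptions, not data from any test described in this chapter.

```python
# Minimal sketch of classical item analysis for a piloted selected-response test.
# Assumed, illustrative data: rows are test takers, columns are items;
# 1 = correct, 0 = incorrect.
from statistics import mean, pstdev

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]


def point_biserial(item, total_scores):
    """Pearson correlation between a dichotomous item and total scores.

    For simplicity the total score here includes the item itself; a
    'corrected' version would subtract the item before correlating.
    """
    mi, mt = mean(item), mean(total_scores)
    si, st = pstdev(item), pstdev(total_scores)
    if si == 0 or st == 0:  # e.g., an item everyone answered the same way
        return 0.0
    cov = mean((i - mi) * (t - mt) for i, t in zip(item, total_scores))
    return cov / (si * st)


for k in range(len(responses[0])):
    item = [row[k] for row in responses]
    facility = mean(item)  # proportion of test takers answering correctly
    discrimination = point_biserial(item, totals)
    print(f"Item {k + 1}: facility = {facility:.2f}, "
          f"discrimination = {discrimination:.2f}")
```

Items with facility values close to 0 or 1, or with low or negative discrimination, are natural candidates for review when the specifications and test forms are revised.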
3.4 Exercises
The aim of this section is for readers to use what they have read in this chapter to plan and begin to design CEFR-informed assessments for a language course.
3.4.1 Exercise 1: Plan a Course Assessment Breakdown
Decide on an assessment breakdown for a course that you have taken, taught or plan to teach. Fill in Worksheet 1 after reading the key questions below.
Key questions
(1) What kind of assessments will you include in your course? Will you include self-assessment? Will the course have students compile a language portfolio? Will the course include either teacher designed or commercial standardized tests? Which language skills will assessments target? Will classroom participation be a part of assessment? What will be the type, frequency, and amount of homework? Consider the amount of time and resources available for writing assessment specifications, writing and piloting test items, and rater training, etc.
(2) How will students be encouraged to plan, monitor, and reflect on their own learning? Will learners be required to submit a learning plan? Will learners be required to correct their own writing mistakes? How will self-assessment be implemented? For example, students may watch a video of their own presentation and grade it or self-assess their abilities against target Can Do statements at the beginning and end of a lesson.
(3) How will each assessment be weighted? That is, what proportion of the final course grade will be from each assessment?
Worksheet 1 Course assessment breakdown
Columns to complete for each assessment: Assessment type; Assessment weighting (% of final course grade); Timing of assessment (e.g., week 5 of course); Description of assessment.
Example Answer
See the assessment breakdown in Sect. 3.5.1 for an example answer.
3.4.2 Exercise 2: Brainstorm Assessment Context and Use Specifications
Choose one assessment from your answers to Worksheet 1. Brainstorm and write ideas for simple assessment specifications for the assessment in Worksheet 2 after reading the key questions. Worksheet 2 is influenced by Carr’s (2011, p. 50) assessment context and specification categories. Some of the key questions are adapted from Carr, and other key questions have been added by the authors of this book.
Key questions
(1) Assessment functions: What judgments will you make from the assessment results? What inferences will you make about learner language ability based on the results? What benefits will the assessment have for stakeholders such as students, teachers, and administrators?
(2) Language skills: What language skills (e.g., vocabulary or listening) does the assessment aim to measure? What do you think are important components of these skills? (E.g., Vocabulary could include knowledge of meaning, form, pronunciation, collocation, conjugation, etc.)
(3) Test type: Is the assessment criterion or norm-referenced? (Most CEFR-based classroom assessments will be criterion-referenced.) Is it formative or summative?
(4) Target language situations: What language situations in the real world outside of the assessment should the assessment results generalize to? What should the test tell you about what test takers can do in those situations? For example, should a speaking test generalize to the test taker’s ability to deal successfully with a customer complaint when working at a hotel reception?
(5) Description of test takers: What kind of learners will take your assessment? What are their age, sex, and cultural and language backgrounds? What knowledge do they probably have?
(6) Practical planning: What resources do you need to make and administer the test? What resources do you have available? For example, how many teachers can work on creating and revising the assessment? What budget is available for printing? What further resources do you need?
Worksheet 2 Assessment context and purpose specifications
Make notes under each of the following categories: Assessment functions; Language skills; Test type; Target language situations; Description of test takers; Practical planning.
Example Answer
Refer to the example answer below as necessary.
• Assessment functions: To assess learner proficiency for spoken interaction on the CEFR scales A1-B1 in the situations covered by the general English (GE) curriculum over one semester
• Language skills: This test aims to assess the construct of spoken interaction implicit in the CEFR scales for spoken language use and spoken interaction
• Test type: Criterion-referenced summative assessment
• Target language situations: The TLU domain for this test is spoken interaction tasks in the GE curriculum
• Description of test takers: The test takers are nearly all Japanese female university students aged 18–20
• Practical planning: A team of three teachers will work on designing and trialing these tests over each semester for use in the following semester
3.4.3 Exercise 3: Brainstorm Task Specifications
Choose one task from the assessment you focused on in Exercise 2. Brainstorm your answers to and make notes for each of the key questions below in Worksheet 3.
Key questions
(1) What language skill or skills does the task focus on?
(2) What is the TLU domain and the communicative task context? For example, does the task focus on an interactive situation such as buying a bus ticket, or does the task focus on decontextualized language such as L1 to L2 vocabulary translation?
(3) What ‘Can Do’ descriptors can you find and/or modify to describe the task requirements?
(4) What is the format of the task? For example, is it a reading passage with multiple choice questions or a pair roleplay?
(5) How long is the task in terms of length of prompts, length of listening passage, number of questions, etc.?
(6) How will the task be graded? Will a rubric be used for grading? Will the teacher grade the task, will it be self-assessed, or will it be peer graded?
Worksheet 3 Brainstorm task specifications
Make notes under each of the following categories: Language skill(s); Task context, TLU domain; Relevant ‘Can Do’ descriptors; Task format; Task length; Task grading.
Example Answer
Refer to the example answer below as necessary.
• Language skill(s): Spoken interaction
• Task context, TLU domain: The TLU domain for the task is the GE curriculum. Specifically, it is classroom spoken interaction tasks that students have engaged in over one semester
• Relevant ‘Can Do’ descriptors: CEFR illustrative scales for grammatical accuracy, phonological control, vocabulary range, and vocabulary control; CEFR scale for qualitative aspects of spoken language use; CEFR illustrative scales for spoken interaction
• Task format: The task is an information exchange between two test takers. One test taker must form questions from short prompts on a question card. The other test taker must answer the questions from a short, printed advertisement information card. This task is the second of three tasks on a longer speaking test
• Task length: 3–4 min
• Task grading: The task is graded by two teachers. One teacher gives instructions to the test takers and uses a holistic rubric for grading. The other teacher observes the test, using an analytic rubric for grading. The holistic grade is weighted at 40% of the total score and the analytic rubric is weighted at 60%
3.4.4 Exercise 4: Make Productive Language Assessment Rubrics
Choose a productive language task from a language course that you teach or that you have taken. This could be a written task (such as an essay or movie review) or a speaking task (such as a presentation or roleplay). Create a CEFR-based grading rubric for the task in Worksheet 4 using the key questions below to guide your answers.
Key questions
(1) What CEFR level(s) does the task target?
(2) Will you use an analytic rubric, a holistic rubric, or both?
(3) For a holistic rubric, which CEFR overall scale or which subscale is most relevant?
(4) For an analytic rubric, which CEFR subscales are relevant? In addition to considering the subscales for productive language, you may also consider relevant subscales for linguistic competence such as grammar, vocabulary, and phonological control in addition to communication strategies. Also consider looking at other CEFR-related scales, such as the Eaquals (2015) descriptors.
(5) Are there any non-linguistic categories that you wish to include in the assessment rubric? For example, eye contact and gestures for a presentation or quality of supporting arguments and evidence for an essay.
(6) For an analytic rubric, which are the five categories that you think best capture the communicative demands of the task?
(7) Choose a CEFR level range for your task, including one level above the task and one level below the task. For example, if you consider mastery of the task to be at the A2 level, your level range would be A1-B1. For a narrower range, you may consider using CEFR plus levels, e.g., A1+ to A2+.
(8) Insert ‘Can Do’ descriptors from the subscales you have chosen in Worksheet 4 for a holistic rubric and in Worksheet 5 for an analytic rubric. Insert your mastery CEFR level ‘Can Do’ descriptors for the score ‘3’ band. Insert the descriptor for one level above in the score ‘5’ band and the descriptor for one level below in the score ‘1’ band. For a test taker population of a fairly homogeneous level, consider using the CEFR plus levels or scales from other CEFR-based sources for score bands ‘2’ and ‘4’, such as the Eaquals (2015) plus levels or the CEFR-J (Tono 2019) sublevels. Choosing a narrower CEFR range can give lower-proficiency learners a greater sense of achievement from higher test scores.
(9) If the descriptors from (8) do not appear to match the task well, consider modifying them as described in Sect. 2.3.2.5. Also try looking to other sources such as the ALTE (2002) ‘Can Do’ descriptors.
(10) Share your rubric(s) with a peer and ask for feedback.
Worksheet 4 Holistic rubric: for each score band (5, 4, 3, 2, 1, 0), write a descriptor.
Worksheet 5 Analytic rubric: for each score band (5, 4, 3, 2, 1, 0), write descriptors under each of your chosen descriptor categories.
Example Answers
Refer to the holistic and analytic speaking test rubrics in Sect. 3.5.1.
3.5 Case Study and Further Reading
3.5.1 Case Study: CEFR-Based Assessments in an English Language Course
Bower et al. (2017) describe and evaluate a curriculum renewal that took place at a language center at a Japanese university. The renewal aimed to fundamentally revise an English language curriculum called the general English (GE) curriculum
to align it with the CEFR. This case study summarizes the role that assessment played in this curriculum renewal. Before beginning the renewal process, the OOPT (see Sect. 3.3.4) was given to a representative sample of students entering the GE curriculum. Results indicated that around half of the incoming students were at overall CEFR A1 proficiency or below, and around half of the students were at overall CEFR A2 level or above. These results led the leaders of the curriculum renewal to decide to create two separate curricula and materials for two course levels: a lower level for A1 learners that aimed to raise them to the A2 level over two years of study and a higher A2 level course that aimed to raise students’ overall ability to the B1 level over two years of study.
Self-assessment was included in the new courses by writing ‘Can Do’ descriptors for the main communicative goal of each lesson. These were adapted from the CEFR ‘Can Do’ descriptors or from CEFR-related ‘Can Do’ scales, such as the Eaquals (2015) or ALTE (2002) descriptors. Learner-friendly, simplified versions of these ‘Can Do’ descriptors were created for students and translated into the students’ native language of Japanese. These simplified learner ‘Can Do’ descriptors in English along with their Japanese translations were included at the start and end of each lesson handout for students to self-assess their mastery of the communicative task described. Teachers were encouraged to have students complete these self-assessments before and after each lesson to stimulate students to reflect on their ability to complete the lesson’s main communicative task. This process is intended to facilitate the learning cycle described in Sect. 5.5.1. Table 3.1 shares an example Can Do lesson checklist from Bower et al. (2017).
Table 3.1 An example of a lesson self-assessment checklist
Check the boxes: I can do it easily / I can do it / I can do it, but I need more practice / I can’t do it
• I can understand a travel blog
• I can talk about travel experiences
• I can write a travel blog about a trip
Learner autonomy in the curriculum was facilitated further through the provision of a collection of Self-Access Learner Center (SALC) learning activities which supported communicative lesson targets and aimed to teach students to learn how to learn (Kodate 2017). Students were required to complete two of these SALC
activities per semester. Completion of these activities made up 10% of their final course grade each semester.
A presentation rubric was designed for use with both course levels that utilized ‘Can Do’ descriptors from the CEFR subscales for addressing audiences and phonological control for levels A1-B1. It also included a column for presentation skills that was written in-house. The rubric descriptors were provided in both English for teachers and in Japanese for student reference. All teachers were expected to use the rubric for grading presentations. However, due to time constraints, no standardization or rater training was done to increase consistency of grading using the rubric across classes.
A writing rubric was also designed for use in grading writing tasks across the curriculum. Three categories of spelling and grammar, content, and organization were chosen from the example rubrics in the PET Handbook for teachers (University of Cambridge ESOL Examinations 2012), and the descriptors provided in the PET handbook were adapted. As with the presentation rubric, all teachers were expected to use this rubric for grading writing tasks. However, due to task prioritization and time limitations, no standardization or rater training was done to increase consistency of grading using the rubric across classes.
Speaking tests for the GE curriculum were designed by analyzing and adapting speaking tasks from the KET and PET, certification tests for English proficiency at the A2 and B1 levels, respectively. Three speaking test tasks were adapted and combined to make an eight- to ten-minute speaking test that was graded by two examiners. The test was a paired format (i.e., two test takers took the test together and spoke to each other to complete tasks). The first task was answering a few short questions from an examiner, the second task was a simple information exchange, and the final task was a pair discussion. One examiner used an analytic rubric and the other used a holistic rubric. The two rubrics’ band descriptors spanned CEFR levels pre-A1–B1. Scores from the two examiners were combined into a single grade. A different speaking test was delivered at the end of each semester with the task content for each test designed based on lesson content from the semester and written according to test specifications. Test raters were trained through a standardization session before administration of each exam. To increase grading objectivity, teachers did not score their own classes.
It is important to note that the following two example speaking test rubrics span a wide range of CEFR proficiency levels (pre-A1–B1). This is because the same rubric was designed to be used across the two course levels (A1-A2 and A2-B1) over two years of study. This approach has the advantage that teachers only had to be trained and standardized in the use of two rubrics. However, a potential disadvantage of using such a broad level range in a rubric is that learners are unlikely to observe progress in their ability, as it takes a long time to move up the CEFR levels. Rubrics that span a narrower range of CEFR levels, such as A2/A2+/B1, may be more beneficial to show students their progress within and across adjacent levels (Tables 3.2 and 3.3).
Table 3.2 An example of a CEFR-based holistic speaking test rubric (Interlocutor—Holistic Rubric)
• Score 5 (B1): Handles communication in everyday situations, despite hesitation. Constructs longer utterances but is not able to use complex language except in well-rehearsed utterances.
• Score 4.5 (A2+): Performance shares features of bands 4 and 5.
• Score 4 (A2): Conveys basic meaning in very familiar everyday situations. Produces utterances which tend to be very short—words or phrases—with frequent hesitation and pauses.
• Score 3.5 (A1+): Performance shares features of bands 3 and 4.
• Score 3 (A1): Has difficulty conveying basic meaning even in very familiar everyday situations. Responses are limited to short phrases or isolated words with frequent hesitation and pauses.
• Score 2 (Pre-A1): Unable to produce the language to complete the tasks.
• Score 1 (Pre-A1): Does not attempt the task.
This rubric is based on the A2 Global achievement rubric from the Cambridge English Key Handbook for Teachers for exams from 2016, page 52, https://www.cambridgeenglish.org/Images/168163-cambridge-english-key-handbook-for-teachers.pdf. Cambridge Assessment English had no involvement in amendments to the original table.
Table 3.3 An example of a CEFR-based analytic speaking test rubric
• Grade 5 (B1). Grammar and vocabulary: Shows a good degree of control of simple grammatical forms; uses a range of appropriate vocabulary when talking about everyday situations. Pronunciation: Is mostly intelligible and has some control of phonological features at both utterance and word levels. Interactive communication: Maintains simple exchanges; requires very little prompting and support.
• Grade 4.5 (A2+): Performance shares features of bands 4 and 5.
• Grade 4 (A2). Grammar and vocabulary: Shows sufficient control of simple grammatical forms; uses appropriate vocabulary to talk about everyday situations. Pronunciation: Is mostly intelligible, despite limited control of phonological features. Interactive communication: Maintains simple exchanges, despite some difficulty; requires prompting and support.
• Grade 3.5 (A1+): Performance shares features of bands 3 and 4.
• Grade 3 (A1). Grammar and vocabulary: Shows only limited control of a few grammatical forms; uses a vocabulary of isolated words and phrases. Pronunciation: Has very limited control of phonological features and is often unintelligible. Interactive communication: Has considerable difficulty maintaining simple exchanges; requires additional prompting and support; unable to ask or respond to most questions.
• Grade 2 (Pre-A1). Grammar and vocabulary: Shows no control of grammatical forms; uses inappropriate vocabulary or mostly Japanese. Pronunciation: Pronunciation is mostly unintelligible.
• Grade 1 (Pre-A1): Does not attempt the task (all categories).
This rubric is based on the assessment scales for Cambridge English: Key from the Cambridge English Key Handbook for Teachers for exams from 2016, page 53, https://www.cambridgeenglish.org/Images/168163-cambridge-english-key-handbook-for-teachers.pdf. Cambridge Assessment English had no involvement in amendments to the original table.
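As a rough illustration of how two raters’ rubric scores might be combined into the single grade mentioned in this case study, the sketch below applies the 40% holistic / 60% analytic weighting used in the worked example in Sect. 3.4.3 and averages the analytic categories of Table 3.3. The weighting, the category names as dictionary keys, and the sample scores are assumptions for illustration; the case study does not specify the exact combination formula used in the GE curriculum.

```python
# Hedged sketch: combine one holistic score and three analytic category
# scores into a single speaking test grade. The 40/60 weighting follows
# the example answer in Sect. 3.4.3, not a formula stated in the case study.
HOLISTIC_WEIGHT = 0.40
ANALYTIC_WEIGHT = 0.60


def combined_speaking_grade(holistic, analytic):
    """Weighted combination of a holistic score and analytic category scores.

    All scores are assumed to sit on the same 1-5 band scale used in the
    rubrics in Tables 3.2 and 3.3.
    """
    analytic_mean = sum(analytic.values()) / len(analytic)
    return HOLISTIC_WEIGHT * holistic + ANALYTIC_WEIGHT * analytic_mean


# Illustrative scores for one test taker
analytic_scores = {
    "grammar and vocabulary": 4.0,
    "pronunciation": 3.5,
    "interactive communication": 4.5,
}
grade = combined_speaking_grade(holistic=4.0, analytic=analytic_scores)
print(f"Combined grade: {grade:.2f}")  # 0.4 * 4.0 + 0.6 * 4.0 = 4.00
```

If a CEFR level rather than a number needs to be reported, the band-to-level correspondences shown in Tables 3.2 and 3.3 (5 = B1, 4 = A2, 3 = A1, 2 and 1 = Pre-A1) could then be applied to the rounded grade.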
Vocabulary quizzes were also administered three times a semester. The quizzes covered words on vocabulary lists for lessons that students had taken. Lesson vocabulary lists were compiled based on lesson content and by referencing Cambridge’s English Vocabulary Profile to select words and phrases mostly at the A2 and B1 levels. The quizzes were cumulative in that each quiz tested students on knowledge of words from all the previous lessons they had taken, including word lists from previous semesters. The vocabulary quizzes had five question types of five questions each for a total of 25 selected response items. Reading and listening tests were also administered to entering students to stream them into classes and again at the end of students’ first and second years of study. Tasks for these tests were designed by analyzing and adapting tasks from the KET and PET. These tests were intended to act as achievement tests for the curriculum reading and listening goals. For a detailed validation study of the placement and achievement functions of these tests, see Bower (2019).
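To make the cumulative design of the vocabulary quizzes described above concrete, the sketch below assembles a 25-item quiz (five question types, five items each) by sampling from the pool of all lesson word lists covered so far. The word lists, the question-type labels, and the sampling approach are illustrative assumptions based only on the description above, not the actual GE curriculum materials.

```python
# Hedged sketch of assembling a cumulative 25-item vocabulary quiz:
# five question types, five items each, drawn from every lesson word list
# covered so far. Word lists and question types are invented for illustration.
import random

lesson_word_lists = {
    "Lesson 1 Travel": ["journey", "luggage", "book a room", "sightseeing"],
    "Lesson 2 Food": ["order", "dessert", "reservation", "spicy"],
    "Lesson 3 Work": ["schedule", "colleague", "apply for", "deadline"],
}

question_types = [
    "definition match",
    "gap fill",
    "collocation choice",
    "L1 to L2 translation",
    "L2 to L1 translation",
]


def build_cumulative_quiz(word_lists, q_types, items_per_type=5, seed=0):
    """Sample items for each question type from all lessons taken so far.

    The same word may reappear under a different question type, which keeps
    the quiz cumulative across lessons and semesters.
    """
    pool = [word for words in word_lists.values() for word in words]
    rng = random.Random(seed)
    return {
        q_type: rng.sample(pool, k=min(items_per_type, len(pool)))
        for q_type in q_types
    }


quiz = build_cumulative_quiz(lesson_word_lists, question_types)
for q_type, words in quiz.items():
    print(q_type, "->", words)
```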
A breakdown of assessment weighting for the 2016 GE first-year courses follows.
Semester 1
• SALC Activities (10% of final course grade). Timing: twice during the semester at teacher discretion. Description: learners visited the SALC and used resources there to complete a CEFR-informed task.
• Unit Homework activities and presentations (45%). Timing: two presentations, one around week 7 and the other around week 15; regular homework. Description: individual, pair, or group presentation; pre-study of key vocabulary for each lesson.
• Vocabulary Tests (10%). Timing: three vocabulary tests, one at the end of each of three course units around week 5, week 10, and week 15. Description: online tests of 25 selected response items.
• Speaking Test (15%). Timing: in the week 16 exam week after the final lesson. Description: pair speaking test graded by two teacher raters.
• Class Participation (20%). Timing: every lesson. Description: teacher assessment of student class participation based on attendance, active participation, and attitude.
Semester 2
• SALC activities (10% of final course grade). Timing: twice during the semester at teacher discretion. Description: learners visited the SALC and used resources there to complete a CEFR-informed task.
• Unit homework activities and presentations (30%). Timing: two presentations, one around week 7 and the other around week 15; regular homework. Description: individual, pair, or group presentation; pre-study of key vocabulary for each lesson.
• Vocabulary tests (10%). Timing: three vocabulary tests, one at the end of each of three course units around week 5, week 10, and week 15. Description: online test of 25 selected response items.
• A standardized reading and listening test (15%). Timing: administered in class around week 14. Description: an extended multiple choice reading and listening test.
• Speaking test (15%). Timing: in the week 16 exam week after the final lesson. Description: pair speaking test graded by two teacher raters.
• Class participation (20%). Timing: every lesson. Description: teacher assessment of student class participation based on attendance, active participation, and attitude.
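For teachers who want to check that a breakdown like the one above behaves as intended, the sketch below computes a final course grade from weighted component scores and verifies that the weights sum to 100%. The weights follow the Semester 2 breakdown; the per-component scores and the assumption that each component is scored out of 100 are illustrative only.

```python
# Hedged sketch: compute a weighted final course grade from the kind of
# assessment breakdown shown above. Weights follow the Semester 2 table;
# the sample scores (on an assumed 0-100 scale) are invented for illustration.
semester2_weights = {
    "SALC activities": 0.10,
    "Unit homework activities and presentations": 0.30,
    "Vocabulary tests": 0.10,
    "Standardized reading and listening test": 0.15,
    "Speaking test": 0.15,
    "Class participation": 0.20,
}


def final_grade(scores, weights):
    """Weighted sum of component scores; weights must add up to 1.0."""
    total_weight = sum(weights.values())
    if abs(total_weight - 1.0) > 1e-9:
        raise ValueError(f"Weights sum to {total_weight:.2f}, expected 1.00")
    return sum(weights[name] * scores[name] for name in weights)


sample_scores = {
    "SALC activities": 100,
    "Unit homework activities and presentations": 82,
    "Vocabulary tests": 76,
    "Standardized reading and listening test": 68,
    "Speaking test": 74,
    "Class participation": 90,
}
print(f"Final grade: {final_grade(sample_scores, semester2_weights):.1f}")
```

The weight check is a quick way to catch a breakdown whose percentages do not add up to 100% before it reaches a syllabus.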
Case Study Conclusion
• This case study provides a good example of attempting to inform all assessments in a course with the philosophy and standards of the CEFR and its related resources.
• Firstly, curriculum content was designed with reference to the Waystage and Threshold documents (van Ek 1998a, b) as well as other CEFR-related resources discussed elsewhere in this book.
• Secondly, content for each assessment was chosen to match course and lesson content as closely as possible, including communicative tasks, themes, vocabulary, and grammar.
• Thirdly, as the assessment designers were teachers, rather than testing specialists, assessments were deliberately modeled on commercial tests which claimed to be CEFR-based. This was to save the time required to design such tests in-house from scratch.
• Fourthly, assessments in the course were intended to match the philosophy of the CEFR, with the CEFR’s action-oriented philosophy reflected in 15% of assessment allocated to an interactive speaking test, 10% each semester to presentations, and 10% to SALC activities that were designed to foster learner autonomy. The fact that a fairly large proportion of assessment was allocated to class participation (20%) also reflects the importance of active participation in classroom L2 communicative activities, which also aligns with the CEFR’s action-oriented approach.
3.5.2 Further Reading
General Language Testing
Brown (2005), Carr (2011), and Fulcher (2013) all provide excellent, accessible, and practical general introductions to the field of language testing for teachers. Xi (2008) provides an overview of validity evidence types which are commonly gathered to support arguments in argument-based approaches to test validation.
Validation of CEFR-based Tests
Detailed validation research on the Cambridge ESOL exams using Weir’s (2005a) sociocognitive framework is useful reading for those interested in exploring validation of CEFR-aligned tests more deeply. For listening tests see Geranpayeh and Taylor (2013), for reading Khalifa and Weir (2009), for speaking Taylor (2011), and for writing Shaw and Weir (2007).
Linking Existing Tests to the CEFR
The Council of Europe has provided a manual on linking examinations to the CEFR (COE 2009), and a volume of studies on using the manual has also been published (Martyniuk 2010). Furthermore, a range of supplementary material to the manual is available online on the Council of Europe website (https://www.coe.int/en/web/common-european-framework-reference-languages/additional-material). The manual and its supplementary texts are excellent resources for linking existing tests to the CEFR proficiency scales. However, the procedures in the manual and many of the techniques presented in the supplementary texts are likely too detailed and technical for teachers’ everyday practical use with most class assessments.
Resources
The following resources are recommended for assessment and testing:
• Overview of CEFR-related scales (https://www.coe.int/en/web/portfolio/overview-of-cefr-related-scales) is a COE webpage that “contains links to a variety of documents presenting the illustrative scales and descriptors from the CEFR as well as other descriptors that have been produced for other purposes such as ELP models.”
• The English Vocabulary Profile (https://www.englishprofile.org/wordlists?) is a resource that facilitates level-appropriate vocabulary selection for lesson materials. It is based on the Cambridge Learner Corpus and the Cambridge English Profile Corpus. It has a search function for words which returns CEFR levels for the word, and further information such as example sentences and CEFR-leveled phrases.
• Text Inspector (https://www.englishprofile.org/wordlists/text-inspector) is an online tool that analyzes the difficulty of English texts in terms of CEFR level.
• The Council of Europe and the Association for Language Testers in Europe’s (ALTE 2011) Manual for Language Test Development and Examining: For use with the CEFR (https://rm.coe.int/manual-for-language-test-development-and-examining-for-use-with-the-ce/1680667a2b) is an accessible introduction to designing language tests aligned to the CEFR.
• ALTE Descriptors (https://www.alte.org/resources/Documents/CanDo%20Booklet%20text%20Nov%202002.pdf) are ‘Can Do’ descriptors for course goals, lesson goals, and learner self-assessment. They are an abundant bank of descriptors that seem readily applicable.
• Illustrative tasks and items (https://www.coe.int/en/web/common-european-framework-reference-languages/using-illustrative-tasks) are a collection of CEFR-based test tasks and items for reading and listening provided by professional testing organizations.
• The Dutch CEFR Construct Grid (https://www.lancaster.ac.uk/fss/projects/grid/) is an online tool which provides a systematic framework for analyzing reading and listening tasks in terms of their relation to the CEFR. The website also provides examples and training on how to use the grid.
• DIALANG (https://dialangweb.lancaster.ac.uk/): Diagnoses language proficiency on the CEFR scales. It is computer administered, free, and covers many European languages.
• The Oxford Online Placement Test (https://www.oxfordenglishtesting.com/DefaultMR.aspx?id=3034&menuId=1) is a reasonably priced CEFR level check.
• The English First Standard English Test (https://www.ef.com/wwen/test3/#/) is a free English CEFR-based level placement test.
• Content grids for speaking and writing (https://www.coe.int/en/web/common-european-framework-reference-languages/relating-examinations-to-the-cefr) are excellent resources for facilitating the design and analysis of CEFR-based tests of productive skills.
• TALE Test Assessment Literacy (http://taleproject.eu/index.php) provides free, open-access, online courses to educate teachers on how to design and implement English language assessments. This appears to be a great resource for language teachers who want to build a solid foundation of language assessment knowledge.
• Euroexam rubrics (http://www.euroexam.com/assessment-and-marking-criteria) are nice examples of CEFR-based rubrics for speaking and writing exams at CEFR levels A1-C1.
• The Cambridge suite of ESOL exams teacher handbooks and explanations of rating scales (see Cambridge English Language Assessment citations in the references for URLs) are excellent resources for designing CEFR-based tests and assessment rubrics.
• Samples of CEFR-calibrated English learner spoken production (https://www.cambridgeenglish.org/research-and-validation/fitness-for-purpose/#A2) are available from Cambridge Assessment English.
• Further calibrated samples of speaking performance in five European languages for all six CEFR levels with accompanying reports and analyses are available on DVD and online (https://www.ciep.fr/ressources/ouvrages-cederoms-consacres-alevaluation-certifications/dvd-productions-orales-illustrant-les-6-niveaux-cecrl). These samples are the result of a cross-language benchmarking seminar organized at the Centre International d’Etudes Pédagogiques (CIEP) in 2008.
• The Guide for the organization of a seminar to calibrate examples of spoken performance (https://www.coe.int/en/web/portfolio/speaking) is an excellent resource for planning and carrying out a standardization session for speaking test rater training.
• Graded examples of CEFR-based writing tasks (https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=090000168045a0cf) are an excellent reference for designing CEFR-informed writing tasks and tests.
• The European Association for Language Testing and Assessment (EALTA) webpage (http://www.ealta.eu.org/resources.htm) provides a range of resources on general language testing and language testing related to the CEFR.
References American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: AERA. Alderson, J. C. (1991). Bands and scores. In J. C. Alderson & B. North (Eds.), Language testing in the 1990s (pp. 71–86). London: Macmillan. Alderson, J. C. (2005). Diagnosing foreign language proficiency: The interface between learning and assessment. London: Continuum. Alderson, J. C, Figueras, N, Kuijper, H, Nold, G, Takala, S, Tardieu, C (no date). The Dutch CEFR Grid. www.lancs.ac.uk/fss/projects/grid. Accessed August 18, 2019. Alderson, J. C., Figueras, N., Kuijper, H., Nold, G., Sauli, T., & Tardieu, C. (2006). Analysing tests of reading and listening in relation to the Common European Framework of Reference: The experience of the Dutch CEFR construct project. Language Assessment Quarterly, 3(1), 3– 30. Alderson, J. C., & Huhta, A. (2005). The development of a suite of computer–based diagnostic tests based on the Common European Framework. Language Testing, 22(3), 301–320. Association of Language Testers in Europe (2002) The ALTE Can Do project: Articles and Can Do descriptors produced by the members of ALTE 1992–2002. https://www.alte.org/resources/ Documents/CanDo%20Booklet%20text%20Nov%202002.pdf. Accessed March 11, 2019. Bachman, L. F. (2004). Statistical analyses for language assessment. Cambridge: Cambridge University Press. Bachman, L. F., & Palmer, A. (2010). Language assessment in practice: Developing language assessments and justifying their use in the real world. Oxford, England: Oxford University Press. Baglantzi, V. (2012). Online diagnostic assessment: Potential and limitations (the case of DIALANG in the Greek junior high school context). Research papers in language teaching and learning, 3(1), 293–310. Black, P., & Wiliam, D. (1998a). Assessment and Classroom Learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7–74. Black, P. and Wiliam, D. (1998b). Inside the black box: raising standards through classroom assessment. Phi Delta Kappan, 80. Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for Learning: Putting it into Practice. Buckingham, UK: Open University Press. Black, P., Harrison, C, Lee, C, Marshall, B. & Wiliam, D. (2004). Working inside the black box: Assessment for Learning in the classroom. Phi Delta Kappan, 86.
Blanche, P., & Merino, B. J. (1989). Self-assessment of foreign-language skills: Implications for teachers and researchers. Language Learning, 39(3), 313–338. Bower, J., Runnels, J., Rutson-Griffiths, A., Schmidt, R., Cook, G., Lusk Lehde, L., & Kodate, A. (2017). Aligning a Japanese university’s English language curriculum and lesson plans to the CEFR-J. In F. O’Dwyer, M, Hunke, A. Imig, N. Nagai, N. Naganuma, & M. G. Schmidt (Eds.), Critical, constructive assessment of CEFR-informed language teaching in Japan and beyond (pp. 176–225). Cambridge, England: Cambridge University Press. Brown, J. D. (2005). Testing in language programs: A comprehensive guide to English language assessment. Singapore: McGraw-Hill. Brown, J. D. (2012). Developing rubrics for language assessment. In J. D. Brown (Ed.), Developing, using and analyzing rubrics in language assessment with case studies in Asian and Pacific languages. Honolulu, HI: National Foreign Languages Resource Center. Brown, J. D., & Hudson, T. (1998). The alternatives in language assessment. TESOL Quarterly, 32 (4), 653–675. Brown, J. D., & Hudson, T. (2002). Criterion-referenced language testing. Cambridge, England: Cambridge University Press. Cambridge Assessment. (2019). Fitness for purpose—Common European Framework of Reference for Languages (CEFR). https://www.cambridgeenglish.org/research-andvalidation/fitness-for-purpose/#A2. Accessed March 11, 2019. Cambridge English Language Assessment. (2008a). Assessing speaking performance—Level A2. https://www.cambridgeenglish.org/images/168617-assessing-speaking-performance-at-levela2.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2008b). Assessing speaking performance—Level B2. https://www.cambridgeenglish.org/images/168619-assessing-speaking-performance-at-levelb2.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2011a). Assessing speaking performance—Level C1. https://www.cambridgeenglish.org/images/168620-assessing-speaking-performance-at-levelc1.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2011b). Assessing speaking performance—Level C2. https://www.cambridgeenglish.org/images/182109-assessing-speaking-performance-at-levelc2.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2014a). Assessing writing performance—level B1. https://www.cambridgeenglish.org/images/231794-cambridge-english-assessing-writingperformance-at-level-b1.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2014b). Cambridge English First: First Certificate in English (FCE) CEFR Level B2 handbook for teachers. https://www.cambridgeenglish.org/ images/cambridge-english-first-handbook-for-teachers.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2016a). Assessing writing performance—Level B2. https://www.cambridgeenglish.org/images/cambridge-english-assessing-writing-performanceat-level-b2.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2016b). Assessing writing performance—level C1. https://www.cambridgeenglish.org/images/cambridge-english-assessing-writing-performanceat-level-c1.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2016c). Assessing writing performance—Level C2. https://www.cambridgeenglish.org/images/cambridge-english-assessing-writing-performanceat-level-c2.pdf. Accessed March 21, 2019. Cambridge English Language Assessment. (2016d). Cambridge English Advanced handbook for teachers for exams from 2016. 
https://www.cambridgeenglish.org/Images/167804-cambridgeenglish-advanced-handbook.pdf. Accessed March 20, 2019. Cambridge English Language Assessment. (2016e). Cambridge English Proficiency handbook for teachers for exams from 2016. https://www.cambridgeenglish.org/Images/168194-cambridgeenglish-proficiency-teachers-handbook.pdf. Accessed 20 March 2019.
Cambridge English Language Assessment. (2016f). Cambridge English Young Learners handbook for teachers: Starters, Movers Flyers. https://www.cambridgeenglish.org/images/ 153612-yle-handbook-for-teachers.pdf. Accessed March 20, 2019. Cambridge English Language Assessment. (2018). A2 Key Handbook for Teachers for exams from 2020. https://www.cambridgeenglish.org/images/504505-a2-key-handbook-2020.pdf. Accessed March 19, 2019. Cambridge University Press. (2015). Text inspector. https://englishprofile.org/wordlists/textinspector. Accessed March 22, 2019. Carr, N. T. (2011). Designing and analyzing language tests. Oxford, England: Oxford University Press. Carless, D. (2007). Learning-oriented assessment: conceptual bases and practical implications. Innovations in Education and Teaching International, 44(1), 57–66. Chapelle, C. A. (2012). Validity argument for language assessment: The framework is simple … Language Testing, 29(1), 19–27. Chapelle, C. A., Enright, M. K., & Jamieson, J. M. (2008). Building a validity argument for the Test of English as a Foreign Language. New York, NY: Routledge. Cizek, C. J. (Ed.). (2012). Setting performance standards: Foundations, methods, and innovations. New York, NY: Routledge. Cizek, G. J., & Bunch, M. B. (2007). Standard setting: A guide to establishing and evaluating performance standards on tests. Thousand Oaks, CA: SAGE Publications. Council of Europe. (2001). Common European framework of reference for languages: Learning, teaching, assessment. Cambridge, England: Cambridge University Press. Coombe, C. (2018). An A to Z of second language assessment: How language teachers understand assessment concepts. London, UK: British Council. Council of Europe. (2005). Guide for the production of RLD. Strasbourg: Language policy division. https://rm.coe.int/090000168077c574. Accessed March 11, 2019. Council of Europe. (2009). Relating language examinations to the Common European framework of reference for languages: Learning, teaching, assessment (CEFR). Strasbourg, France: Council of Europe. Council of Europe. (2018). The Common European Framework of Reference for Languages: learning, teaching, assessment. Companion Volume with new descriptors. Strasbourg: Council of Europe. https://rm.coe.int/cefr-companion-volume-with-new-descriptors-2018/1680787989. Accessed March 18, 2019. Council of Europe. (2019a). CEFR Reference level descriptions language by language: components and forerunners. https://www.coe.int/en/web/common-european-frameworkreference-languages/cefr-reference-level-descriptions-language-by-language-components-andforerunners. Accessed August 24, 2019. Council of Europe. (2019b). Self-assessment Grids (CEFR). Retrieved from https://www.coe.int/ en/web/portfolio/self-assessment-grid. Accessed 11 March, 2019. Council of Europe and Association of Language Testers in Europe. (2011). Manual for language test development and examining: For use with the CEFR. Strasbourg, Council of Europe. https://www.alte.org/resources/Documents/ManualLanguageTest-Alte2011_EN.pdf. Accessed March 12, 2019. Cumming, A. (2012). Validation of language assessments. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics (pp. 6006–6015). Hoboken, NJ: Wiley-Blackwell. Davis, L. (2016). The influence of training and experience on rater performance in scoring spoken language. Language Testing, 33(1), 117–135. De Jong, J. H. A. L. (2009, June). Unwarranted claims about CEF alignment of some international English language tests. Paper presented at EALTA, Turku, Finland. 
http://www.ealta.eu.org/ conference/2009/docs/friday/John_deJong.pdf. Accessed March 11, 2019.
Downing, S. M. (2006). Selected-Response Item Formats in Test Development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (p. 287–301). Lawrence Erlbaum Associates Publishers. Eaquals. (2015). Revision and refinement of CEFR descriptors, available online: https://www. eaquals.org/resources/revision-and-refinement-of-cefr-descriptors/. Eckes, T. (2015). Introduction to many-facet Rasch measurement: Analyzing and evaluating rater-mediated assessments (2nd ed.). Frankfurt am Main: Peter Lang. Education First. (2014). EF SET academic and technical development report. https://www.efset. org/*/media/centralefcom/efset/pdf/EF-SET-Academic-Development-Report.pdf. Accessed March 11, 2019. Educational Testing Service. (2019). About the TOEFL iBT® test. https://www.ets.org/toefl/ibt/ about. Accessed December 26, 2019. Euroexam. (2019). Assessment and Marking Criteria. http://www.euroexam.com/assessment-andmarking-criteria. Accessed March 11, 2019. Everhard, C. J. (2015). Implementing a student peer-mentoring programme for self-access language learning. Studies in Self-Access Learning Journal, 6(3), 300–312. https://sisaljournal. org/archives/sep15/everhard/. Accessed March 11, 2019. Florescano, A. A., O’Sullivan, B., Sanchez Chavez, C., Ryan, D. E., Zamora Lara, E., Santana Martinez, L. A., et al. (2011). Developing affordable ‘local’ tests: the EXAVER project. In Barry O’Sullivan (Ed.), Language testing: Theory and practice (pp. 228–243). Oxford: Palgrave. Fleckenstein, J., Leucht, M., & Köller, O. (2018). Teachers’ Judgement Accuracy Concerning CEFR Levels of Prospective University Students. Language Assessment Quarterly, 15(1), 90–101. Fulcher, G. (2013). Practical language testing. London, England: Routledge. Geranpayeh, A., & Taylor, L. (Eds.). (2013). Examining listening: Research and practice in assessing second language listening. Studies in Language Testing 35. Cambridge: UCLES/Cambridge University Press. Goethe Institut. (2019). Test your German. https://www.goethe.de/en/spr/kup/tsd.html. Accessed March 11, 2019. Green, A. [Alison]. (1998). Verbal protocol analysis in language testing research: A handbook. Cambridge, England: Cambridge University Press. Green, A. [Anthony]. (2017). Learning-oriented Language Test Preparation Materials: A contradiction in terms? Papers in Language Testing and Assessment, 6(1), 112–132. Green, A. [Anthony]. (2018). Linking tests of English for academic purposes to the CEFR: The score user’s perspective. Language Assessment Quarterly, 15(1), 59–74. Green, R. (2013). Statistical analyses for language testers. New York, NY: Palgrave Macmillan. Hamp-Lyons, L. (2017). Language assessment literacy for language learning-oriented assessment. Papers in Language Testing and Assessment, 6(1), 88–111. Harsch, C., & Martin, G. (2012). Adapting CEF-descriptors for rating purposes: Validation by a combined rater training and scale revision approach. Assessing Writing, 17(4), 228–250. Harsch, C., Pant, H. A., & Köller, O. (2010). Calibrating standards-based assessment tasks for English as a first foreign language. Standard-setting procedures in Germany. Münster: Waxmann. Harsch, C., & Rupp, A. A. (2011). Designing and scaling level-specific writing tasks in alignment with the CEFR: A test-centered approach. Language Assessment Quarterly, 8(1), 1–34. Hoge, R. D., & Coladarci, T. (1989). Teacher-based judgements of academic achievement: A review of literature. Review of Educational Research, 59(3), 297–313. Jones, N., & Saville, N. (2016). 
Learning Oriented Assessment: A systemic approach (Studies in Language Testing Vol. 45). Cambridge: UCLES & Cambridge University Press. Kane, M. T. (2013a). The Argument-Based Approach to Validation. School Psychology Review, 42(4), 448–457.
Kane, M. T. (2013b). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73. Khalifa, H. and Weir, C. J. (2009). Examining reading: Research and practice in assessing second language reading. Studies in Language Testing 29, Cambridge: UCLES/Cambridge University Press. Kodate, A. (2017). Developing ELP-informed self-access centre learning materials to support a curriculum aligned to the CEFR. In F. O’Dwyer, M, Hunke, A. Imig, N. Nagai, N. Naganuma, & M. G. Schmidt (Eds.), Critical, constructive assessment of CEFR-informed language teaching in Japan and beyond (pp. 226–246). Cambridge, England: Cambridge University Press. Konrad, E., Holzknecht, F. Schwarz, V., & Spöttl, C. (2018). Assessing writing at Lower Levels: Research findings, task development locally and internationally, and the opportunities presented by the extended CEFR descriptors. ARAGs Research Reports Online. https://www. britishcouncil.org/sites/default/files/konrad_holzknecht_schwarz_spottl_layout.pdf. Accessed March 11, 2019. Koretz, D. (2008). What educational testing really tells us. Cambridge, MA: Harvard University Press. Lengalia. (2019). Spanish placement test. https://www.lengalia.com/en/placement-test.html. Accessed March 11, 2019. Leong, W. S., Ismail, H., Costa, J. S., & Tan, H. B. (2018). Assessment for learning research in East Asian countries. Studies in Educational Evaluation, 59, 270–277. Leung, C., & Scott, C. (2009). Formative assessment in language education policies: Emerging lessons from Wales and Scotland. Annual Review of Applied Linguistics, 29, 64–79. Lim, G. S., Geranpayeh, A., Khalifa, H., & Buckendahl, C. W. (2013). Standard setting to an international reference framework: Implications for theory and practice. International Journal of Testing, 13(1), 32–49. Little, D. (2009). The European Language Portfolio: where pedagogy and assessment meet. Strasbourg: Council of Europe. Liu, L., & Jia, G. (2017). Looking beyond scores: Validating a CEFR-based university speaking assessment in mainland China. Language Testing in Asia, 7(1), 1–16. Martyniuk, W. (Ed.). (2010). Aligning tests with the CEFR: Reflections on using the Council of Europe’s draft manual. Cambridge, England: Cambridge University Press. McMillan, J. H., & Hearn, J. (2008). Student self-assessment: The key to stronger student motivation and higher achievement. Educational Horizons, 87(1), 40–49. Midraj, J. (2018). Self-assessment. In J. I. Liontas, T. International Association & M. DelliCarpini (Eds.), The TESOL encyclopedia of English language teaching. Milanovic, M. (2009). Cambridge ESOL and the CEFR. Cambridge ESOL Research Notes 37, 2– 5. https://www.cambridgeenglish.org/images/23156-research-notes-37.pdf. Accessed March 11, 2019. Murphey, T., & Arao, H. (2001). Reported belief changes through near peer role modeling. TESL-EJ, 5(3), 1–18. Mynard, J., & Carson, L. (2012). Advising in language learning: Dialogue, tools and context. Harlow, UK: Pearson. North, B. (2009). CEFR teacher assessment: A guide for Eaquals Members on implementing CEFR-referenced teacher assessment. https://www.eaquals.org/resources/a-teachers-guide-tocefr-based-assessment-procedures/. Accessed August 27, 2019. North, B., & Jones, N. (2009). Further material on maintaining standards across languages, contexts and administrations by exploiting teacher judgment and IRT scaling. https://rm.coe. int/1680459fa0. Accessed March 11, 2019. Ockey, G. J., & Zhi, L. (2015). 
New and not so new methods for assessing oral communication. Language Value, 7(1), 1–21. Ong, W. (1982). Orality and literacy. London: Methuen.
O’Sullivan, B. (2005). Levels specification project report. Internal report, Zayed University, United Arab Emirates. O’Sullivan, B. (2013). Assessing speaking. In A. J. Kunnan (Ed.), The companion to language assessment (pp. 156–171). London: Wiley. Papageorgiou, S., Tannenbaum, R. J., Bridgeman, B., & Cho, Y. (2015). The Association Between TOEFL iBT® Test Scores and the Common European Framework of Reference (CEFR) Levels (Research Memorandum No. RM-15-06). Princeton, NJ: Educational Testing Service. Purpura, J. E. (2004). Assessing grammar. Cambridge: Cambridge University Press. Purpura, J. E. (2010). The Oxford Online Placement Test: What does it measure and how? Oxford: Oxford University Press. https://www.oxfordenglishtesting.com/uploadedfiles/6_New_Look_ and_Feel/Content/oopt_measure.pdf. Accessed March 11, 2019. Read, J. (2000). Assessing vocabulary. Cambridge: Cambridge University Press. Rimfeld, K., Malanchini, M., Hannigan, J. L., Dale, P. S., Allen, R., Hart, S. A., et al. (2019). Teacher assessments during compulsory education are as reliable, stable and heritable as standardized test scores. The Journal of Child Psychology and Psychiatry, 60(12), 1278–1288. Rodriguez, M. C. (2016). Selected-response item development. In S. Lane, M. Raymond, & T. M. Haladyna (Eds.), Handbook of test development (2nd ed., pp. 259–273). New York, NY: Routledge. Ross, S. (1998). Self-assessment in second language testing: A meta-analysis and analysis of experiential factors. Language Testing, 15(1), 1–20. Rupp, A. A., Vock, M., Harsch, C., & Köller, O. (2008). Developing standards-based assessment tasks for English as a first foreign language—Context, processes and outcomes in Germany. Münster: Waxmann. Shaw, S. D. and Weir, C. J. (2007). Examining writing: Research and practice in assessing second language writing. Studies in Language Testing 26, Cambridge: UCLES/Cambridge University Press. Stiggins, R. J. (2005). From formative assessment to assessment FOR learning: A path to success in standards-based schools. Phi Delta Kappan, 87(4), 324–328. Südkamp, A., Kaiser, J., & Möller, J. (2012). Accuracy of teachers’ judgements of students’ academic achievement: A meta-analysis. Journal of Educational Psychology, 104(3), 743–762. Suzuki, Y. (2015). Self-assessment of Japanese as a second language: The role of experiences in the naturalistic acquisition. Language Testing, 32(1), 63–81. Tannenbaum, R. J., & Baron, P. A. (2011). Mapping TOEFL ITP scores onto the Common European Framework of Reference. Princeton, NJ: Educational Testing Service. Tannenbaum, R. J., & Wylie, E. C. (2013). Mapping TOEIC and TOEIC Bridge Test Scores to the Common European Framework of Reference. In Powers, D. E. (Ed.), The Research Foundation for the TOEIC Tests: A compendium of studies (Vol. 2, pp. 6.16.10). Princeton, NJ: Educational Testing Service. Taylor, L. (Ed.). (2011). Examining speaking: Research and practice in assessing second language speaking. Studies in Language Testing 30. Cambridge: UCLES/Cambridge University Press. Tono, Y. (2019). CEFR-J. http://www.cefr-j.org/index.html. Accessed December 26, 2019. Tsagari, D. (2004). Is there life beyond language testing? An introduction to alternative language assessment. CRILE Working Papers (pp. 58, 1–23). University of Cambridge ESOL Examinations Research and Validation Group. (2009). Examples of Speaking Performance at CEFR Levels A2 to C2 (Taken from Cambridge ESOL’s Main Suite exams) Project overview. 
https://www.cambridgeenglish.org/Images/22649-rvexamples-of-speaking-performance.pdf University of Cambridge ESOL Examinations. (2012). Cambridge English preliminary handbook for teachers. https://www.lang.com.pl/images/egzaminy-cambridge-english/egzaminy-logo/ Cambridge_English_Preliminary__Handbook.pdf. Accessed March 19, 2019. van Ek, J. A. (1998). Threshold 1990 (2nd edn.). Cambridge: Cambridge University Press.
van Ek, J. A., & Trim, J. L. M. (1998). Waystage 1990 (2nd ed.). Cambridge: Cambridge University Press. Weir, C. J. (2005a). Language testing and validation: An evidence-based approach. Hampshire: Palgrave Macmillan. Weir, C. J. (2005b). Limitations of the common European framework for developing comparable examinations and tests. Language Testing, 22(3), 281–300. Wisniewski, K. (2018). The empirical validity of the common European framework of reference scales. An exemplary study for the vocabulary and fluency scales in a language testing context. Applied Linguistics, 39(6), 933–959. Wu, J., & Wu, R. (2010). Relating the GEPT reading comprehension tests to the CEFR. In W. Martyniuk (Ed.), Relating language examinations to the Common European framework of reference for languages: Case studies and reflections on the use of the Council Europe’s draft manual, Studies in language testing 33. Cambridge: CUP. Xi, X. (2008). Methods of test validation. In E. Shohamy & N. H. Hornberger (Eds.), Encyclopedia of language and education (2nd edn, Vol. 7, pp. 177–196) Language Testing and Assessment. New York: Springer.
4 Learner Autonomy and the European Language Portfolio
This chapter discusses the central role of autonomous learning within the Common European Framework of Reference for Languages (CEFR; Council of Europe 2001a) and illustrates how the European Language Portfolio (ELP; COE 2019b) can promote learner autonomy (LA). After reviewing CEFR-related LA literature (Sect. 4.1), the guiding principles and functions of the ELP are first introduced in Sect. 4.2. This includes discussion of the various components of the ELP and how they can be utilized to reflect on and assess language progress, the learning process, and intercultural interactions as well as to set goals for learning. This is exemplified through descriptions of the Standard Adult Language Passport (COE 2019b), the Swiss ELP (COE 2001b),1 various ELP templates (COE 2019b), and implementation guidelines.2 The potential of electronic ELPs is also explored. General guidelines for compiling an ELP (Sect. 4.3) are followed by concrete exercises (Sect. 4.4) that provide a step-by-step guide for creating and reflecting on an ELP. The chapter concludes with three case studies that show how the ELP has evolved (Case Study 1: EPOS—an electronic ELP) and can be supplemented (Case Study 2: Digital materials and self-directed learning and Case Study 3: Developing Intercultural Competence).
1 Permission to reproduce the Swiss ELP (COE 2001b) was granted by Europäisches Sprachenportfolio III, © 2010 Schulverlag plus AG, © Schweizerische Konferenz der kantonalen Erziehungsdirektoren (EDK), weiterführend auf: www.sprachenportfolio.ch.
2 Permission to reproduce the Standard Adult Language Passport (COE 2019b), various ELP templates (COE 2019b), and implementation guidelines was granted by the © Council of Europe.
4.1 The Role of the CEFR and ELP in Autonomous Learning
As discussed in Chaps. 2 and 3 of this volume, the CEFR can be used by educators to organize learning goals cohesively, articulate them clearly, and serve as a guide for assessing learners’ communicative abilities and competences. On its own, learners (and some teachers) may find the CEFR difficult to process due to its length and the relatively challenging nature of the content. Even with lists of revised descriptors that have been tailored to meet learners’ needs, teachers and students require support to realize the full potential of the framework. What is needed is a concrete tool that can help learners understand the core concepts promoted in the CEFR and articulated in detailed lists of descriptors along with how to utilize them to develop awareness of linguistic identity, awareness of cultural identity, and capacity for independent language learning (COE 2011c: 3). The ELP, which was conceived at the same time as the CEFR at the 1991 COE symposium (see Sect. 1.3.2), fulfills these goals through two functions. Its reporting function enables learners to display their ability to use foreign languages in different contexts in a culturally appropriate manner. Its pedagogic function promotes LA through reflection and (self-)assessment of the language learning process and progress. These functions are realized through the ELP’s three components. The language passport provides learners with an overview of their current level in relation to the Common Reference Levels. The language biography utilizes the illustrative scales for goal setting, reflection, and self-assessment of linguistic and intercultural activities. The dossier stores concrete evidence of student ability.
4.1.1 Learner Autonomy and the CEFR

With respect to autonomy, the ELP reflects the aims of the Council of Europe, which include (i) the development of plurilingualism as a life-long process; (ii) the development of the language learner; and (iii) the development of the capacity for independent language learning, according to the Principles and Guidelines (COE 2011c). The ELP is designed to help learners

to achieve a fuller awareness of their developing linguistic and cultural identity … and of themselves as language learners and to develop language learning skills that they can deploy to meet individual needs that arise outside as well as inside formal educational contexts. (COE 2011c: paragraph 1.6 explanatory notes)
Little (2012: 12) notes, however, that “(b)eyond associating learner autonomy with critical awareness of the learning process, the Principles and Guidelines do not explain what they mean by the term, far less how learner autonomy should be operationalized.” These issues are instead addressed through ELP design, the official ELP guide (Little and Perclová 2001) and other resources introduced throughout this chapter. Ultimately, the goal for ELP use, as suggested by Little et al. (2007: 15), is for learners to become autonomous in formal educational
contexts to the extent that "they develop and exercise the capacity to plan, monitor and evaluate their own learning." The teacher's role in encouraging this development is threefold:

• learner involvement—teachers involve students in the language learning process and, over time, increasingly give them ownership of learning objectives and the learning process;
• learner reflection—teachers encourage learners not only to reflect on the learning process and the language itself, but also to self-assess their progress and levels of achievement;
• appropriate language use—teachers must provide opportunities and support for target language use, including the language of reflection and self-assessment.

Again, with teacher guidance and support, the ELP helps to mediate CEFR principles in a meaningful and concrete way for learners. Along with the 'Can Do' descriptors, it focuses learners on communicative language use. If learners' capacity for learning, including their plurilingual and intercultural competence, has been sufficiently developed through using the ELP, autonomous learners can also make progress with other languages that they may choose to learn, and in different linguistic and cultural contexts they may encounter.
4.1.2 Learner Autonomy

Holec (1981) first introduced the concept of LA in a report to the Council of Europe, which informed both the CEFR and ELP, characterizing it as "the ability to take charge of one's learning" (3). Benson (2013: 58) defines autonomy as "the capacity to take control of one's own learning," arguing that the construct of 'control' appears to be more open to empirical investigation than the other terms commonly used in definitions (e.g., 'take charge of', 'take responsibility for'). Both definitions are purposefully vague and mean different things to different people, so Little (1990: 7, cited in Benson 2013: 59) suggests it is useful to also consider what is NOT autonomy.

• Autonomy is not a synonym for self-instruction; in other words, autonomy is not limited to learning without a teacher.
• In the classroom context, autonomy does not entail an abdication of responsibility on the part of the teacher; it is not a matter of letting the learners get on with things as best they can.
• On the other hand, autonomy is not something that teachers do to learners; that is, it is not another teaching method.
• Autonomy is not a single, easily described behavior.
• Autonomy is not a steady state achieved by learners.
It should be noted that autonomy does not mean independence from the teacher. In many ways, developing autonomy is a collaborative process in which "guidance and encouragement [are necessary] to help learners extend and systematize the capacities that they already possess" (Benson 2013: 91). Autonomy is also multidimensional and cannot easily be pinned down, as it "will take different forms for different individuals, and even for the same individual in different contexts or different times" (Benson 2013: 58). Within every learning environment, there will be context-specific constraints, which may or may not be negotiable, that affect the degree to which learners can (and willingly) assume control of their learning.

This chapter is primarily concerned with autonomy in language learning within institutional settings and how teachers can provide their learners with opportunities to exercise autonomy (e.g., freedom from external constraints) and develop their capacity to learn autonomously. One drawback of this focus is that less emphasis is placed on autonomy in non-institutional learning, and the discussion is written from the teacher's perspective. This is also true of the bulk of the literature on autonomy (Benson 2008).

Strong and Weak Pedagogies for Developing Autonomy

One simple but useful distinction for viewing autonomy is the difference between "strong" and "weak" pedagogies. Strong pedagogies are based on the belief that students are already autonomous, while weak ones see students as lacking autonomy. Strong pedagogies focus on "co-creating with students optimal conditions for the exercise of their autonomy," while in weak ones "autonomy is seen as a deferred goal and as a product of instruction rather than something which students are currently ready to exercise directly" (Smith 2003: 120–132, as cited in Benson 2008: 23–24). As teachers, if we choose to employ a "strong" pedagogy, this implies that we favor a process-oriented approach in which we "introduce activities which require students to act autonomously." If we choose a "weak" pedagogy, we prefer a product-oriented approach in which we strive "to produce an autonomous person" (Boud 1981: 30, as cited in Benson 2008: 21), developing learners' ability to lead autonomous lives at the expense, in the short term, of their freedom to exercise that autonomy. In most contexts, this is not an either-or option, and teachers are likely to choose a middle ground that employs aspects of both approaches. This is also true of the ELP. There are numerous templates with varying degrees of structure that can be utilized when compiling an ELP. Later in this chapter, we will discuss these templates and their potential for developing LA.

Even key LA researchers whose work informed the CEFR and ELP (Holec 1981) and ELP teacher guides (Little 1991; Little and Perclová 2001) hold different views. Holec's (1981) stance is that learners' ability to take charge of their learning is not necessarily innate and that the teacher should play an active role in helping learners develop this ability. Furthermore, language teaching was seen by Holec as a distinct objective separate from nurturing LA. In contrast, Little (2012: 13) argues that "learners have some experience with autonomy in the lives they lead outside educational institutions … [and the] teacher's task is to harness her learners' capacity for autonomous behavior to the business of L2 learning." A further
distinction is that Little (2012: 13) views language LA as inseparable from the learner’s developing proficiency in the target language (TL), “for it is constructed and enacted in TL discourse” (see Little 2007 for more on this line of argument). In other words, TL use is central, not only as the language of instruction and within the ELP, but also for student reflection and self-assessment. Thus, it is included as part of the teacher’s role (“promoting appropriate language use;” see Sect. 4.1.1). It is important to keep Little’s stance in mind as he served as lead author on numerous ELP guides and projects referred to throughout this chapter.
4.1.2.1 Three Dimensions of Learner Autonomy

Autonomy has been defined as "the capacity to take control of one's own learning" (Benson 2013: 58). Benson argues that LA is multidimensional and involves the learner exercising control over their learning behavior (learning management), the psychology of learning (cognitive processing) and the learning environment (learning content).

Learning Management

Learning management refers to how learners exercise control over learning behaviors when they plan, organize, and evaluate their learning. This dimension is based on Holec's (1981) report to the Council of Europe, which informed the CEFR and defined autonomy as follows:

To take charge of one's own learning is to have, and to hold, the responsibility for all the decisions concerning all aspects of this learning, that is:
• determining the objectives;
• defining the contents and progressions;
• selecting methods and techniques to be used;
• monitoring the procedure of acquisition properly speaking (rhythm, time, place, etc.);
• evaluating what has been acquired.
The autonomous learner is himself capable of making all these decisions concerning the learning with which he is or wishes to be involved. (Holec 1981: 3, as cited in Benson 2013: 59)
This definition focuses more on observable behaviors than on the underlying capabilities that allow learners to complete these tasks successfully (an aspect dealt with under Cognitive Processing) (Benson 2013: 93). Control over learning behavior can be discussed in terms of self-directed learners, but it can also be addressed proactively through the identification and classification of, and training in, learning strategies (e.g., Oxford 1990; O'Malley and Chamot 1990). (A word of caution: research shows that explicit instruction in strategy use can enhance learning performance, but it may not be effective in enabling learners to develop the capacity for autonomous learning (Benson 2013: 161).) Oxford's (1990) extensive taxonomy of learning strategies is divided into direct and indirect strategies. Direct strategies involve the mental processing of the target
language and can be divided into three groups—memory, cognitive, and compensation.

• Memory strategies, such as grouping or using imagery, have a highly specific function: helping students store and retrieve new information.
• Cognitive strategies, such as summarizing or reasoning deductively, enable learners to understand and produce new language by many different means.
• Compensation strategies, like guessing or using synonyms, allow learners to use the language despite often large gaps in knowledge.

Indirect strategies, on the other hand, provide indirect support for language learning through "focusing, planning, evaluating, seeking opportunities, controlling anxiety, increasing cooperation and empathy, and other means" (Oxford 1990: 151) within the categories of metacognitive, affective and social strategies. Benson (2013: 97) argues that "it is these indirect strategies, rather than strategies in general, that are the potential components of autonomy, because they are concerned with control over the learning process rather than control over language or learning materials." Many of the metacognitive strategies for planning and evaluating learning found in Oxford (1990) are clearly promoted in the ELP. Oxford (1990) provides useful insights into how to develop these and other aspects of indirect strategies that can support and complement the concrete features found in the ELP and the CEFR. For example, within the CEFR learners are viewed as social agents; Oxford's third category, social strategies, could be utilized to promote this view.

Cognitive Processing

Little (1991), who has been extremely influential in the promotion of the ELP, defines autonomy in terms of students' capacity to exercise control over the cognitive processes underlying successful self-management of learning, adding a complementary but vital psychological dimension to Holec's (1981) definition (Benson 2013: 60):

Essentially, autonomy is a capacity – for detachment, critical reflection, decision-making, and independent action. It presupposes, but also entails, that the learner will develop a particular kind of psychological relation to the process and content of his learning. The capacity for autonomy will be displayed both in the way the learner learns and in the way he or she transfers what has been learned to wider contexts. (Little 1991: 4)
Benson (2013: 100) describes three ways to develop this capacity: attention, reflection, and metacognitive knowledge. We will briefly discuss the importance of attention in second language acquisition in Sect. 5.2.3.1 by examining the role of attention in noticing features in the input and monitoring output, using Ellis' (2003: 149) model for the role of explicit knowledge in implicit learning (see Fig. 5.2). Attention is also discussed in relation to task difficulty (Skehan 1998) in Sect. 5.3.3.1, which argues that teachers must seek an appropriate level of task difficulty to enable students to devote attention to language form, be it a focus on accuracy or complexity. Concerning the importance of attention to autonomy, Benson (2013: 103) observes:
If attention is a pre-condition of acquisition, effective language learning may begin with the learner taking control over what is attended to in input. It may also be the case that the processes identified with control over learning management and content … can be seen as broader manifestations of control over attentional resources.
Conscious reflection on the learning process is considered to be a distinctive characteristic of autonomous learning by Little (1997: 24), who argues that "incidental reflection that planning, monitoring and evaluating learning entail[s must be supplemented with] explicitly detached reflection on the process and content of learning." An important role of this reflection is its ability to encourage change, but "questioning fundamental beliefs [about language learning] is exceptional and occurs naturally only at moments of crisis or change" (Benson 2013: 106). Therefore, it is important to keep in mind that reflection alone may not necessarily lead to change. Learners will need to reflect "at appropriate moments in the learning process and (act) upon the results" (Benson 2013: 109), but the learner also needs to be "sufficiently well-informed concerning the new approach (to learning) so that he can see for himself its advantages and disadvantages" (Holec 1980: 41, as cited in Benson 2013: 108). The ELP can play a central role in this process, but the teacher may also want to facilitate and encourage a deeper level of learner reflection.

Reflection on the target language is also crucially important. For example, "Kohonen (1992) views deep learning as a process of hypothesis generation and testing in which reflection plays a crucial role" (Benson 2013: 107). This view of learning as hypothesis generation and testing also underlies the one taken by Skehan and Foster (2001), in which learners must try to use and reflect on language that they have a limited command of in order to develop their language competence (see Sect. 5.2.3.3).

Metacognitive knowledge is the last aspect of cognitive processing, which Wenden (1995: 185) defined as "the stable, statable and sometimes fallible knowledge learners acquire about themselves as learners and the learning process." Without such knowledge, Wenden argues, the effectiveness of strategies concerning the planning, monitoring and evaluation of language learning is 'weak'. Wenden's research (1998: 531) suggests that "learners also need guidance in improving and expanding their knowledge about learning so that they may also become more autonomous in their approach to the learning of their new language." Interestingly, this echoes Perez Cavana's (2012) claim that the ELP, while effective at promoting the metacognitive skills of planning, reflection, and assessment, is less effective at introducing and developing new metacognitive knowledge and skills (see Sect. 4.2.3). Of the three types of metacognitive knowledge—person, strategic, and task knowledge—Benson (2013) argues task knowledge holds the most promise for developing control over the learning process. Wenden (1995: 185) defines task knowledge as "what learners need to know about (i) the purpose of a task, (ii) the task's demands, and (iii) implicit in these considerations, a determination of the kind of task it is." Whether a task involves something as simple as learning new vocabulary items or as complex as the process of learning a new language, according to Benson (2013: 110), the decision to carry out a task implicates
"decisions about content, progression, pace, place and time of learning, the selection and use of cognitive strategies and the criteria selected for evaluation." Metacognitive knowledge about the language and learning process informs these decisions and provides the basis for critical reflection and analysis.

Learning Content

Control over learning content has a situational aspect and acknowledges that language learning is not only enhanced by interaction with others but involves the learner's ability to negotiate the goals, content, resources, and methodology with the teacher and other students (Benson 2013). For learners to be truly autonomous and authentically self-directed, however, they need to be able to control all these aspects, but "in institutional contexts, there are usually social and political dimensions to control of learning" (Benson 2013: 112) that inhibit this freedom. In many cases, negotiation within the classroom may be limited to control over the methodology used with institutionally imposed goals or content, as these aspects may be considered non-negotiable. Littlewood (1999) would describe this as reactive autonomy (e.g., learners are reacting to constraints). Proactive autonomy is where the direction and content of learning (among other things) can be decided by the learner. It is also possible to discuss autonomy in terms of whether negotiation of content results in learners converging on a similar goal or pursuing divergent ones (Ribe 2003). Within the ELP literature, Little and Perclová (2001) argue for a strong form of autonomy in which students are able to choose the learning objectives from a list of 'Can Do' statements. They would consider it unacceptable to blindly follow a textbook syllabus, as this is an act of imposing the textbook author's learning target and implied learning process upon learners. Little (2012) acknowledges that such an approach would require a paradigm shift to realize the full potential of the ELP and the strong form of autonomy he envisions (see Sect. 4.2.3 ELP Implementation Guidelines).

Summary

While it is useful to view the dimensions of learning management, cognitive processing, and learning content separately, they cannot operate independently of each other. Rather, they are interdependent; effective learning management relies on learners being able to control cognitive processes as well as negotiate learning content with other stakeholders. LA has been discussed primarily from the teacher's perspective to consider how teachers can develop this capacity within learners. In different contexts, teachers themselves face barriers that restrict their ability to exercise autonomy. When planning lessons, for example, teachers confront various barriers including prescribed syllabi, predetermined textbooks, and non-negotiable assessment methods such as quizzes and tests. As we will see in Chap. 6 of this volume, teachers play an important mediating role in interpreting these constraints. Little (1995: 178) portrays this role as follows:

The curriculum that she presents to her learners is hers and no one else's; however closely she may seek to follow a prescribed programme, she can only communicate her necessarily unique interpretation of it.
While there are many barriers that may restrict the degree to which teachers can promote autonomy, learners can be encouraged to take control of their learning through using an ELP. In the next section, guidelines and exercises for understanding, compiling and implementing an ELP are introduced.
4.2 European Language Portfolio
This section explains the ELP in four parts. The first part overviews the principles and guidelines behind the ELP. In Sect. 4.2.2, the ELP’s reporting and pedagogic functions and its three key components (passport, biography, and dossier) are discussed in detail, followed by implementation guidelines. Last, the potential of e-portfolios is discussed in Sect. 4.2.4. For more background information about the ELP, including the history behind its development, results of both pilot studies and ECML-related projects, and ELP dissemination, see Chap. 1 of this volume.
4.2.1 ELP: Overview, Principles and Guidelines

The ELP is designed as a tool to familiarize learners, teachers, and other stakeholders with the principles and concepts promoted in the CEFR. The aims of the Council of Europe that the ELP is to reflect and promote are:

• the deepening of mutual understanding among citizens in Europe;
• respect for diversity of cultures and ways of life;
• the protection and promotion of linguistic and cultural diversity;
• the development of plurilingualism as a life-long process;
• the development of the language learner;
• the development of the capacity for independent language learning;
• transparency and coherence in language learning programs. (COE 2011c: 4)

These principles and aims are therefore reflected in the aims and functions of the ELP, which:

• promotes plurilingualism and pluriculturalism;3
• is considered the property of the learner;
• documents and gives value to all language and cultural competences and experiences;
• helps foster LA;
• encourages learner self-assessment and the recording of assessment by other stakeholders (i.e., teachers, educational authorities and examination bodies).

3 The Council of Europe currently promotes intercultural awareness and competence, but this change is not reflected in the ELP: Principles and Guidelines (COE 2011c) or the CEFR/CV (COE 2018). For the remainder of the chapter, the term 'intercultural awareness and competence' will be used. See Sect. 1.1.2.4.

For more information on the history of the ELP and its evolution since its introduction in 2000 (Schneider and Lenz 2001), see Chap. 1 of this volume. It is also important to keep in mind that the official Principles and Guidelines (COE 2011c) have not been updated to reflect the new descriptors released in 2018, which include the CEFR Companion Volume (COE 2018) and descriptors for young learners (Szabo 2018a, b). Additionally, since 2014 ELPs are no longer registered by the Council of Europe, a point that is addressed later in Sect. 4.5.4.
4.2.2 ELP: Functions, Types, Components

An ELP must comprise three parts, which usually appear in the following order:

Language Passport: an overview of the learner's current level in relation to the Common Reference Levels (e.g., global scale and self-assessment grid);
Language Biography: facilitates the learner's involvement in planning, reflecting upon, and assessing the learning process and progress; and
Dossier: a collection of materials to document and illustrate the learner's achievements and experiences.

Before discussing these three parts in detail, the two functions of the ELP, the reporting and the pedagogic function, are introduced based on the guides for ELP developers (Schneider and Lenz 2001) and users (Little and Perclová 2001).
4.2.2.1 The Reporting and Pedagogic Function

The ELP was primarily conceived to fulfill a reporting function—"to document its holder's plurilingual language proficiency and experiences in other languages in a comprehensive, informative, and reliable way" (Schneider and Lenz 2001: 4)—allowing learners to track and share this information with various stakeholders (e.g., when students transfer or move schools, new teachers may find this information extremely useful). The ELP goes beyond simply reporting language proficiency as measured in diplomas, certificates, and examinations. It does so in a way that is coherent, transparent, and comprehensive. The coherency and transparency come from linking proficiency to the Common Reference Levels. The comprehensiveness comes from incorporating a broad range of evidence that may not otherwise be shared with stakeholders, such as examples of both written and spoken language use and participation in study abroad programs.
The pedagogic function, according to the guide for developers (Schneider and Lenz 2001), is to:

• Enhance the motivation of learners to improve their ability to communicate in different languages, learn additional languages, and seek new intercultural experiences.
• Incite and help learners to reflect on their objectives, their ways of learning, and their success in language learning, to plan their learning, and to learn autonomously.
• Encourage learners to enhance their plurilingual and intercultural experience, for example, through contact and visits, reading, use of media, and projects.

The reporting and pedagogic functions differ in that the reporting function focuses more on the product of language learning (e.g., results), while the pedagogic function focuses on the process. The importance of the former is that "the ELP has the potential to play a key role throughout Europe (and beyond) in the attempt to introduce transparency and coherence into the description and documentation of proficiency in modern languages" (Schneider and Lenz 2001: 6), while the latter may facilitate the adoption of certain innovations, particularly the promotion of a reflective attitude through LA. Together, these functions and the ELP in general can catalyze change, particularly in areas not commonly associated with traditional teaching (e.g., self-assessment).
4.2.2.2 Three Fundamental Types of ELPs

Schneider and Lenz (2001: 8) argue that there is good reason to limit the number and diversity of ELPs, the most important being convenience and simplicity for learners, and that "the [national or even transnational] currency value of a model [and possibly the ELP in general] increases when there is little variety and competition among models." However, they point out that it is necessary to adapt the ELP to learner age, groups of learners with special needs, and different environments/traditions. They propose three fundamental ELP stages for different age groups:

(a) Stage 1 Language Portfolios for very young learners up to 10–12 years;
(b) Stage 2 Language Portfolios for use during the remaining years of obligatory schooling (11–15/16 years);
(c) Stage 3 Language Portfolios for young people and adults (15/16 years upward).

Furthermore, these portfolios will vary according to priorities (e.g., the relative importance of the two basic functions of the ELP) and different contextual variables (e.g., the degree to which the ELP is embedded in an institutional context and the cognitive demands an ELP may make, including language complexity). Every ELP should be as self-explanatory as possible, including introductions stating the purpose of each part.
4.2.2.3 The Language Passport

As a tool to promote plurilingualism, intercultural awareness, and competence, the ELP is designed to take account of, give value to, and record learner language and intercultural learning, whether it takes place inside or outside of formal educational contexts. It also records experiences of using second/foreign languages and competences in several languages (COE 2011c). An overview of the learner's proficiency is primarily recorded in the language passport, which fulfills the following purpose:

The Language Passport section provides an overview of the individual's proficiency in different languages at a given point in time; the overview is defined in terms of skills and the Common Reference Levels in the Common European Framework; it records formal qualifications and describes language competences and significant language and intercultural learning experiences; it includes information on partial and specific competence; it allows for self-assessment and the recording of assessment by teachers, educational institutions and examinations boards; it requires that information entered in the Passport states on what basis, when and by whom the assessment was carried out. (COE 2011c)

It is best to think of the language passport as a summary, while the language biography is intended to focus on the process of language learning and the dossier on its products. To facilitate recognition and mobility across Europe, the standard passports for adolescents and adults (the Stage 2 and 3 portfolios mentioned earlier) are available and promoted by the Council of Europe. Therefore, the passport is the least flexible of the three as it must contain a number of "hard" pages which cannot be altered (Schneider and Lenz 2001). These include:

• A front and a back cover that have a common design in all language versions;
• A first double page (Editor: English in one column and French in the second, but one could/should be replaced with the learner's L1) providing a brief explanation of the Council of Europe and its aims; a contact address and the accreditation number of the ELP version to which the passport belongs;
• A double page that links the passport to the ELP and the Common European Framework, and briefly presents the contents of the passport. This also leaves space for the name and picture of its owner;
• A double page for the owner's Profile of language skills, providing grids (skills/levels) for recording the results of self-assessment (using the updated Self-Assessment Grid, COE 2018: 167–170) in up to six languages;
• Two double pages entitled Summary of Language Learning Experiences and intercultural experiences, providing a grid for systematic reporting of
  – School and course-based language learning, language use at work, language contacts, etc., in the student's local region;
  – Course-based language learning, foreign language use for study or at work, etc., in other language regions;
• A double page for listing language certificates and diplomas obtained. (Schneider and Lenz 2001: 17–18)
These "hard" pages may be supplemented with "soft" pages, which in the Swiss ELP for young people and adults include:

• Instruments that help institutions … to describe and to relate language examinations they offer to the Common Reference Levels;
• Forms that can be used to confirm a learner has had relevant intercultural and language learning experiences. (Schneider and Lenz 2001: 18–19)

Table 4.1 gives a brief description of each section in the Standard Adult Language Passport as well as relevant notes from the official document concerning the principles and guidelines (COE 2011c) and additional comments.

CEFR Profiles

When providing learners with a form to record their language skills in relation to the Common Reference Levels (normally the self-assessment grid), it is important to consider the level of detail required from them. A standard profile will usually be based on the self-assessment grid provided in the Companion Volume (Fig. 4.1 is adapted from COE 2018). Note that in the CEFR/CV the levels have been further subdivided, including an additional level below A1 (Pre-A1), and a distinction is made between the 'criterion levels' (e.g., A2 or A2.1) and the 'plus levels' (e.g., A2+ or A2.2) (COE 2018: 36); Written Online Interaction and Mediation have also been added, as discussed in Chap. 2. Since there are relatively large gaps between the levels, it is unlikely that students will see significant improvement in language ability in the short term, which is the nature of summative assessment. A potential alternative is to create a profile in relation to the illustrative scales and levels most appropriate for the students and revise it accordingly (Fig. 4.2 is reproduced from COE 2018: 37).
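For teachers or developers experimenting with an electronic ELP (e-portfolios are discussed in Sect. 4.2.4 and in Case Study 1 on EPOS), a proficiency profile of the kind shown in Fig. 4.1 can be pictured as nothing more than a mapping from communicative activities to CEFR levels. The following sketch is purely illustrative: the sample language, the field names and the sample levels are invented for this example and do not represent any official ELP data format.

# Illustrative sketch only: one possible way an electronic ELP could record a
# learner's proficiency profile per communicative activity (cf. Fig. 4.1).
# All names and level assignments here are hypothetical, not an official schema.

CEFR_LEVELS = ["Pre-A1", "A1", "A2", "A2+", "B1", "B1+", "B2", "B2+", "C1", "C2"]

# A profile is simply a mapping from communicative activity to a CEFR level.
profile_japanese = {
    "Listening comprehension": "B1",
    "Reading comprehension": "B1+",
    "Spoken interaction": "A2+",
    "Written interaction": "B1",
    "Spoken production": "A2+",
    "Written production": "B1",
    "Mediation": "A2",
}

def print_profile(language: str, profile: dict) -> None:
    """Print a simple text version of one row of the self-assessment grid."""
    print(f"Proficiency profile: {language}")
    for activity, level in profile.items():
        assert level in CEFR_LEVELS, f"unknown level: {level}"
        print(f"  {activity:<25} {level}")

print_profile("Japanese", profile_japanese)

Because the profile is updated only periodically, a record of this kind functions as summative evidence; the more fine-grained checklists discussed in the next section serve the ongoing, formative side of self-assessment.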
Table 4.1 Standard Adult Language Passport
(Page | Section | Notes from the Principles and Guidelines (COE 2011c) and additional comments)

Page 3 | Language passport introduction | This must be in either English or French, and in the learner's L1, or, in the case of multilingual contexts, the national language or language of school instruction.

Page 4 | A profile of the languages that the holder has grown up with | This is particularly important in classes with students from a variety of backgrounds who "may have proficiency in one or more second languages that is significantly in advance of what their peer group is likely to achieve in foreign language learning at school" (COE 2011c: 8).

Page 5 | A profile of language skills in relation to the Common European Framework (Self-Assessment grid) | All ELPs should include the self-assessment grid in its entirety as a basic point of reference. However, a grid with further subdivisions may be more appropriate for some learners in the short term. In addition, the C1 and C2 levels may be omitted with some learners (e.g., young learners) so that they do not rate themselves at an unrealistically high level. Learners are also encouraged to record partial competences (e.g., an ability to read a language but not speak it).

Page 6 | A summary of language learning and training experiences in primary, secondary, post-secondary, and/or higher education, in addition to other language courses | A useful summary, but in some classes with a relatively homogeneous group of students there may be little difference between learners (author's comments).

Pages 7–9 | A summary of linguistic and intercultural experiences: using language for study or training / using languages at work / using languages while living or travelling abroad / mediating between languages (e.g., information translation) / other areas of use | This is a summary of significant experiences involving language or culture through study, training, work or travel. Detailed descriptions are typically stored in the biography.

Page 10 | A record of certificates and diplomas | It is also important to include the formal assessment of learners by teachers, educational institutions, and examinations. Ideally, this should be kept separate from the learner's self-assessment and not used to correct it. With respect to young learners, this may need to be revised; for example, very young learners could be encouraged to record successes in other areas, such as in language competitions.

Page 11 | Self-assessment Grid | If the CEFR version has been adapted, this must be clearly stated, and a brief justification for the adaptation should be included.

Fig. 4.1 Sample proficiency profile in relation to the self-assessment grid (adapted from COE 2018): a grid for one language (Japanese in the sample) with columns for the levels Pre-A1, A1, A2, A2+, B1, B1+, B2, B2+ and C1, and rows for listening comprehension, reading comprehension, spoken interaction, written interaction, spoken production, written production and mediation.

Fig. 4.2 Proficiency profile in relation to illustrative scales and levels (reproduced from COE 2018: 37): a profile plotted across reception, interaction, production and mediation at levels ranging from A2 to B2.

4.2.2.4 The Language Biography

In line with official guidelines and the results of pilot studies, the ELP is a tool to promote LA. Its pedagogic function is primarily, but not exclusively (see Kohonen and Westhoff 2003), realized through the language biography. However, unlike the language passport, which is standard and available on the COE website, the form the language biography takes is less fixed, as it varies depending on context, reporting goals, and pedagogic function. This variation can easily be seen in the wide range of templates available on the COE website for Developing an ELP and in the Guide for Developers (Schneider and Lenz 2001). Table 4.2 includes both the principles for the language biography and the COE-provided templates which correspond most closely with those goals. Overlap with the passport section is intentional, as the biography, through its reporting function, reinforces and complements the language passport by providing a more detailed description of what the learner can do in each language along with their experience with and exposure to the languages and their various cultures. The checklists students utilize to assess language progress and determine their next learning target are much more specific than those used in the passport section (i.e., Self-Assessment Grid) as they are
derived from the illustrative scales. Furthermore, they are used on a much more regular basis as part of ongoing formative assessment in contrast to the passport’s periodic summative assessment. In other words, the focus is on the learning process as students reflect on their progress in order to set future learning goals and consider how they learn best. Schneider and Lenz (2001: 20) discuss the biography in terms of the elements it may contain to fulfill the aims outlined in Table 4.2. These elements, discussed in detail below, include:
(A) A personal and more or less detailed biography covering language learning, sociocultural, and intercultural experiences;
(B) Checklists related to the Common Reference Levels;
(C) Checklists or other forms of descriptions of skills and competencies that are not related to the Common Reference Levels;
(D) Planning instruments such as personal descriptions of objectives.

Table 4.2 The language biography principles and their available templates
(The language biography facilitates (COE 2011c: 9) | Available templates)

(a) the learner's involvement in planning, reflecting upon and assessing his or her learning process and progress | Goal setting and learning how to learn; Self-assessment checklists; Intercultural awareness and experience
(b) It encourages the learner to state what he/she can do in each language and to include information on linguistic, cultural and learning experiences gained in and outside formal education contexts
(c) It is organized to promote plurilingualism, i.e., the development of competencies in a number of languages | User's Plurilingual profile
(A) Language Learning, Sociocultural, and Intercultural Experiences

This first item clearly overlaps with the passport, presenting a personal detailed biography concerning linguistic and intercultural experiences. Depending on which section is introduced first (see Sect. 4.2.3—ELP Implementation Guidelines), the descriptions in the biography will either form the basis of those in the passport, as suggested by Schneider and Lenz (2001), or expand upon them. As with the learner's assessment of language proficiency, various stakeholders (such as new teachers) will find the detailed information in the biography useful.

The format to record and reflect on these experiences has varied greatly depending on the learners. Biographies for younger learners tend to rely more on closed forms, are less cognitively demanding, require less extensive written production, and tend to be more visual (e.g., learners are encouraged to either draw or utilize photos) than the ones for adults. Interestingly, among the templates available on the COE website, only 4 of the 21 templates for adult users focus solely on intercultural experiences and awareness (see Language Biography Templates—Intercultural Awareness and Experience; Council of Europe 2011b). Instead, the majority of templates allow the learner to write about both linguistic and intercultural experiences within the same form. This resource also includes 16 templates designed for use with young learners. A word of caution: as the templates come from various validated ELPs, individually they are of high quality, but it is unclear how they link to other pages within the passport and biography, and whether this is done in a cohesive manner.

In Table 4.3, the biography section of the Swiss ELP III (for young people from age 16 and adults) (COE 2001b) is reproduced along with the Standard Adult Language Passport (which is based on the Swiss ELP) (COE 2019b) so that we can see how learners can be encouraged to provide a rich description of their language and intercultural experiences in the biography section.

Table 4.3 Standard Adult Language Passport (COE 2019b) and Swiss ELP biography section

Standard Adult Language Passport (COE 2019b); the learner can only add limited details, such as the name of a language or dates:
• Sect. 1 Profile of Language Skills: a profile of the languages that the holder has grown up with; language(s) I used or use (1) within my family and neighborhood; (2) in my school(s)
• Sect. 2 Summary of Language Learning Experiences: formal education (e.g., post-secondary) or other language courses
• Sect. 3 A summary of linguistic and intercultural experiences. Categories include (1) using language for study or training, (2) at work, (3) while living or travelling abroad, (4) mediating between languages (e.g., information translation), (5) other areas of use

Swiss ELP biography section (excluding Sect. 2.2 checklists), ELP accredited model No. 1.2000 (COE 2001b):
• Sect. 2.1 Personal Language Learning Biography. When you write your Personal Language Learning Biography for other people, it is best to put it in table form. If you want to reflect on your own experiences and progress, it could be more useful to write a more detailed biography. 1. Write down what language(s) you have grown up with and what language areas you have lived in. 2. Give brief information about language learning at school and about courses that you have attended, about the length, frequency, and types of teaching. 3. Put down where and how you are learning or have learnt languages outside of school. 4. Note when and how you use or have used other languages at work, in your studies, with friends, and on trips.
• Sect. 2.3 Information about Important Linguistic and Intercultural Experiences. This is the place for you to give (in more detail than in Sect. 2.1) information about important intercultural experiences and activities which have contributed to widening your knowledge of other countries and the people, society, and culture of foreign language areas. (Prompts given, e.g., intercultural experiences (encounters with the country, culture and speakers of the language).)
• Sect. 2.4 Information about Foreign Language Teaching in Schools and Language Courses
• Sect. 2.5 Objectives, contents, programs. (This can be filled in by the school/course organizer, possibly with the learners' participation.)

Language biography templates to complement the standard passport are discussed in Sect. 4.3.4. One drawback of focusing on the Swiss ELP is that the goal behind its development was "to develop instruments to describe foreign language competence at transfer points in a coherent and
transparent manner" (Schärer 2012: 51), and thus there is less emphasis on learning how to learn. This will be addressed with examples from other ELPs later in this chapter (see Sect. 4.4.2.3).

(B) Checklists Related to the Common Reference Levels

In the passport section, learners assess their language ability in relation to the Self-assessment Grid, which provides them with a profile of their language skills. These levels, however, are extremely broad. Therefore, in the biography section it is necessary to provide learners with lists that are not only appropriate for their levels and context but also more detailed, because they are used on a regular basis as a form of summative assessment (e.g., at the end of a term) and formative assessment (e.g., to reflect on the learning process and progress).

An excellent starting point for compiling an ELP checklist is the original ELP guide for developers (Schneider and Lenz 2001) and an updated chapter and accompanying bank of descriptors (Lenz and Schneider 2004a, b). These documents can be found on the dedicated COE website (COE 2019b)—Developing an ELP—but neither incorporates the new descriptors contained in the CEFR Companion Volume (Council of Europe 2018). However, Lenz and Schneider (2004b) is an excellent guide for developing, calibrating, and adapting descriptors. Their chapter is useful for another reason: it is accompanied by A bank of descriptors for self-assessment in ELPs (Lenz and Schneider 2004a), which contains both the CEFR descriptors (COE 2001a) and descriptors used in a number of validated ELPs. Teachers can draw from this extensive list knowing that these descriptors are clearly related to the scaled CEFR descriptors. They help teachers understand how COE descriptors have been modified into portfolio descriptors. Other lists of ELP descriptors can also be found on the COE CEFR homepage (COE 2019a) under the Levels menu (Bank of supplementary descriptors), but they too need to be supplemented with the new CEFR/CV descriptors (COE 2018). For teachers of young learners, Szabo (2018a, b) provides an overview of how existing ELP descriptors for young learners (aged 7–10 and 11–15) relate to the calibrated CEFR illustrative descriptors.
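In an electronic biography, checklist descriptors of this kind could be stored together with the learner's self-assessment so that goal setting and progress can be reviewed over time. The sketch below is only a hypothetical illustration: the descriptor wording, the status categories and the field names are invented for the example and are not drawn from any validated ELP.

# Illustrative sketch only: one possible way an electronic language biography
# might record a checklist descriptor together with the learner's self-assessment.
# The status categories ("goal", "can do", "can do well") are hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class ChecklistEntry:
    descriptor: str    # 'Can Do' statement adapted from a CEFR illustrative scale
    level: str         # CEFR level to which the descriptor is calibrated
    status: str        # e.g. "goal", "can do", "can do well"
    assessed_on: date  # when the learner last self-assessed this descriptor

entry = ChecklistEntry(
    descriptor="I can describe my daily routine in simple connected sentences.",
    level="A2",
    status="can do",
    assessed_on=date(2020, 6, 15),
)

print(f"[{entry.level}] {entry.descriptor} -> {entry.status} ({entry.assessed_on})")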
same topics as the passport, the form (and lack of space) does not allow learners to extend beyond the entries provided. (However, this weakness could be addressed in part by changing the format). Also, the newest Guide for the Development and Implementation of Curricula for Plurilingual and Intercultural Education (Beacco et al. 2016) rarely mentions the ELP and does not include these templates, reinforcing the claim by Schärer (2012: 54) that the ELP’s future role is unclear within COE initiatives. The CEFR/CV (COE 2018: 28–29, 157–162) does include an introduction to and descriptors for plurilingual and intercultural competence, but these are conceptual in nature and so might be difficult to apply. Future case studies will help exemplify their application. Goullier (2009: 3) discusses the most frequent errors made by ELP developers, acknowledging that few developers received comments from the validation committee about the promotion of plurilingualism (or lack thereof) due to the “difficulty in giving precise indications to developers as to the ways and means of taking this dimension into account.” There is, however, a useful example from the ELC ELP for University students (COE 2011a: 30). Not only is the plurilingual principle emphasized, but “it also raises the learner’s awareness of the number of languages he/she has engaged with in different ways and at different levels of proficiency (see Table 4.4, reproduced from COE 2011a: 30)”. This closely resembles Sects. 2.1 and 2.3 of the Swiss ELP biography section (COE 2001b) (see Table 4.3). The Autobiography of Intercultural Encounters (AIE), developed in 2009 for the Language Policy Division of the Council of Europe, was “designed to support students’ analysis of specific intercultural encounters but is not an integral part of the ELP” (Alvarez 2012: 140). It is not concerned with language learning or the role of language in intercultural encounters. For readers interested in exploring how the AIE can guide student reflection on intercultural encounters, the AIE and explanatory guides are available at www.coe.int/lang. The LOLIPOP ELP did attempt to incorporate a scale designed for self-assessment of intercultural competence (see Kennedy et al. 2011), but this e-portfolio is no longer available online. Learning Techniques and Strategies Learning techniques and strategies are also given only brief mention in Schneider and Lenz (2001). However, the importance of investing time and energy in this area is indicated through; the inclusion of communication strategies in the CEFR, the importance of LA and students taking responsibility for and reflecting on how and why they are learning a foreign language, and the inclusion of learning techniques and strategies in ELPs such as the Swiss ELP. Within the Principles and Guidelines (COE 2011c: 4), the ELP “reflects the Council of Europe’s concern with … the development of the language learner [and] the development of the capacity for independent language learning.” The language biography itself “facilitates the learner’s involvement in planning, reflecting upon and assessing his or her learning process and progress.” This section focuses on documents relating to the learning process. The discussion of planning and learning progress will be reserved for the next section—(D) Planning Instruments Such as Personal Description of Objectives.
Table 4.4 ELC ELP for University students—my language learning biography (COE 2011a: 30)
My language learning biography
Here give a description of your language learning experiences for the languages you know. This information may be useful to other people (e.g., teachers, employers) and will provide a basis on which you may plan your future learning activities.
Which languages have I learned?
– Languages that I learned at school or in courses (give the duration, number of hours, goals, content, teaching methods, textbooks, and where appropriate the kinds of examination.)
– Languages that I grew up with
– Language areas where I have lived
– Use of language while working, in training, studying, travelling, and in my circle of acquaintances
– Language contact through television, radio, the cinema, art, music, books, the press, the Internet, etc.
How have I experienced learning?
– How have I learned well and with pleasure? What was particularly important and enriching?
– In the framework of language learning and the languages I have learned or am learning, what has demotivated me?
The worksheets can be presented chronologically like a curriculum vitae, or separated according to language. Please give the number of years of learning and where possible also the dates.
Users of the framework are encouraged to consider and articulate how they promote the development of students as responsible, independent language learners and users. Learners are encouraged to develop their study skills, heuristic skills, and to accept responsibility for their own learning (COE 2001a: 149):

(a) simply as 'spin-off' from language learning and teaching, without any special planning or provision;
(b) by progressively transferring responsibility for learning from the teacher to the pupils/students and encouraging them to reflect on their learning and to share this experience with other learners;
(c) by systematically raising the learners' awareness of the learning/teaching processes in which they are participating;
(d) by engaging learners as participants in experimentation with different methodological options;
(e) by getting learners to recognize their own cognitive style and to develop their own learning strategies accordingly.
The resource mentioned in the Guide for Developers (Schneider and Lenz 2001: 35) is a reference to the North Rhine-Westphalian (NRW) ELP, which encourages learners to reflect on language learning in the following areas:

• How I organize my work
• How I learn words
• How I improve my pronunciation
• How I improve and assess my listening comprehension
• How I check and improve my reading comprehension
• How I revise and further develop my texts
• How I learn grammar and make sure I do not forget
• What I have decided to do in the future.
(D) Planning Instruments Such as Personal Description of Objectives

In general, ELPs will have planning instruments, as they are strongly recommended in the Principles and Guidelines. Reflection on and assessment of the learning process and learning progress feed naturally into planning and goal setting. Learners will not only want to learn more efficiently, but they might also want to determine, or have a say in, how they learn as well as the content and focus of their learning. Schneider and Lenz (2001) acknowledge that in some contexts student involvement in the planning process might be limited, particularly in school settings with rigidly defined curricula. Regardless, "the planning aspect should in any case be integrated in the measures and materials accompanying the introduction of an ELP, and teacher training in particular" (Schneider and Lenz 2001: 37). As with most pages in the biography section, the format and structure vary considerably. The developers of the Swiss ELP opted for a very open form, placing the onus on learners to (1) decide what they want to learn, (2) identify their personal learning styles/strategies, and (3) reflect on their reasons for learning and priorities in learning (see Table 4.5, reproduced from COE 2011a: 15). In summary, the form and focus of the biography section vary considerably depending on learner needs and learning context. We will now turn our attention to the last section of the ELP, the dossier.
4.2.2.5 The Dossier

The dossier offers learners opportunities to select material to document and illustrate achievements or experiences recorded in the language biography or language passport (COE 2011c). Schneider and Lenz (2001: 38) state that there are two different approaches:

(a) The dossier that serves as a companion to everyday language learning stresses the pedagogic function, an approach favored in Finland as discussed in Kohonen (see Kohonen and Westhoff 2003). It would organize and document the learning process in a very comprehensive manner by including a much larger amount of materials used on a day-to-day basis.
Table 4.5 Swiss ELP for young people and adults— Sect. 2.5 My objectives
My objectives
Formulate the objectives and plans for language learning: the pages can be organized individually.
(1) What do I want to learn?
(2) How do I want to learn?
(3) Why do I want to learn a language, what do I need to be able to do in it, and how would I like to go about it? Am I learning the language for my job, for travel, or for study? Is it more important for me to understand, to read literature or to write? Do I want to attend a course, learn in a tandem partnership, or have a stay in a foreign language area?
(b) The dossier that serves to exemplify, illustrate, and build on information given and claims made in the language biography and in the language passport section stresses the reporting function. Only products that illustrate the learning process and current level of language and intercultural competence would be included.

Developers of specific portfolios are left to decide which approach to favor, but approach (b) is more common. However, it is not clear exactly what materials should be kept in the dossier. Schneider and Lenz (2001: 38) note that in the majority of ELPs, "checklists, formulations of objectives, accounts of language learning or intercultural experiences" are kept in the language biography, while "confirmation of stays abroad, language contacts, etc., language certificates and diplomas" do tend to be stored in the dossier (the Swiss ELP favors the language passport for this material). Portfolio developers should make clear to learners (and other stakeholders who might have access to the dossier) the nature of the materials that have been collected and their intended uses. Areas for consideration include (Schneider and Lenz 2001: 39):

• whether the dossier is intended to be used as (i) a process-oriented or pedagogic dossier (Approach a) or (ii) a product-oriented or reporting dossier (Approach b);
• whether there are separate sections for certificates/diplomas, examples of the student's work, etc.;
• how the information will be presented (i.e., is this work meant to serve as evidence of the proficiency level reached as claimed in the other sections, and if so, has this level been verified by others?);
• whether this work is at an early stage of development (e.g., a draft) or a polished version, which may or may not include feedback/support from teachers;
• the physical format of the dossier, and whether it should in fact be separate from the other sections (due to its larger size).
Table 4.6 Swiss ELP for young people and adults—Dossier
How do I put my Dossier together?
If you want to show the dossier on a specific occasion (for example, when you change schools or when you apply for a job), you can select suitable examples of your own work which show your present level of competence in various languages.
If you want to document your learning process, you should collect pieces of work that show your learning progress during a school year or a language course.
Consider how you can most clearly document your proficiency in different languages. Think, too, about the languages that you have not learnt at school.
Put the names of the items in the List of Pieces of Work in the Dossier, and indicate whether these are individual or group work, typical or best examples, spontaneous or corrected work.
The collection of work should be periodically updated. You decide which examples should stay in the dossier, and which new pieces of work should go in.
You can also keep old pages from the Language Biography (for example, checklists) in the dossier.
List of Pieces of Work in the Dossier
Key to 'Kind of work' codes (column a / column b):
(1) a Individual work / b Group work
(2) a Typical work / b Best work
(3) a Result of spontaneous production / b Final product after correction and redrafting
(4) a An earlier stage of development / b Current stage of development
List columns: Doc. No | Type of document (e.g., letter to a firm) | Language | Date | Kind of work (a)
(a) State in this column the status of the examples provided. Use the abbreviations 2a / 1b, etc.
One issue that has not been raised in official documents concerns ELP use over an extended period of time. No clear guidelines exist for how language biography pages should be renewed, how the contents of the dossier should change, or what should happen to pages or learner products that are superseded. The Swiss ELP for young people and adults (Table 4.6, reproduced from Schneider and Lenz 2001: 40) is an excellent example of Approach b. Clear guidelines are provided for learners (and teachers) about what can be stored in the dossier. More importantly, the nature of the work included is documented (e.g., learners indicate whether a piece of work is a product of individual or group work, typical or best example, spontaneous or corrected work, example of earlier or current stage of development). For those interested in a process-oriented or pedagogic dossier, see Kohonen and Westhoff (2003).
4.2.3 ELP Implementation Guidelines
Implementation guidelines vary according to the learners targeted. Little and Perclová (2001: 16) recommend that adult learners first reflect on their linguistic identities and current proficiency levels in the passport before setting individual learning targets in the biography. Student work is then stored in the dossier and evaluated in the biography, which leads in turn to student goal setting. For younger learners, they suggest reversing the order, starting with the simpler task of storing examples of their work in the dossier before moving on to the biography and later the passport section. In each example in the guide, regardless of learner age or level, it is students who set learning goals after a period of reflection and self-assessment. Self-assessment is in relation to the 'Can Do' descriptors found in the passport (e.g., Common Reference Levels: self-assessment grid) and biography (e.g., the illustrative scales). The former is done less frequently and is a type of summative assessment (e.g., completed at the end of a term), while the latter is a type of formative assessment (e.g., done throughout the term to provide feedback on learning). Keep in mind that these statements may need to be adapted to meet learner needs or levels. With respect to textbooks, the guidelines explicitly state that textbooks are subordinate: following the textbook without considering student needs imposes the textbook author's learning targets and implied learning process on the students. This stance exemplifies how autonomy is central to the ELP and why a paradigm shift is considered necessary to realize the ELP's full potential (Little 2012: 11). Little and Perclová (2001: 3) note, however, that developing autonomy is a gradual process.
4.2.4 e-Portfolios
As discussed in Chap. 1, the number of ELPs in use is not as high as was anticipated when the ELP was developed. This trend is unlikely to be reversed unless there is a sustainable model whose benefits are clear to teachers and students. Even with e-portfolios, significant hurdles remain, such as the need for training, compatibility and connection issues, and managing different files and software types (Álvarez 2012: 133, 138–139). Simply replicating a paper-based ELP in a digital format ignores current technology's potential. For the e-portfolio to be successful, Barrett (2011) argues that elements of social networking which foster students' intrinsic motivation need to be incorporated. Until this is done, we are unlikely to see increased portfolio use through continued reliance on institution-imposed requirements or extrinsic motivation (e.g., using the ELP to gain admission to a school or to find a job) alone. Case Study 1 at the end of the chapter (Sect. 4.5.1) focuses on EPOS, an extensively researched and supported e-portfolio developed in Bremen, Germany. This section explores what social networking and technology can contribute, rather than the technical aspects of designing and implementing an e-portfolio, which are beyond the scope of this chapter.
Early eELP initiatives include such projects as the Eaquals-ALTE electronic ELP, LOLIPOP or Language On Line Portfolio Project (Kennedy et al. 2011), and AIOLE—An Interactive Online Learning Environment (see Sanchez-Villalon and Sanchez-Villalon 2014). Unlike a number of paper-based ELPs that remain available online, these electronic portfolios are, as a quick Internet search reveals, either no longer available or not currently working, illustrating how technical issues and a withdrawal of support have plagued the long-term viability of eELPs. Even when eELPs are up and running, they may not offer public access. With the original ELP, "(a)ccounts of the use of ELP models are rare and descriptions of the development processes of those models are non-existent," according to Álvarez (2012: 138), who calls for the sharing of relevant learning in this area. Research into eELPs and their development is available, particularly with respect to EPOS (Buschmann-Göbels and Kühn 2017; Kühn 2016), but a lack of accessibility inhibits widespread ELP use.
Another challenge is that the ELP itself has a paper-based design. It is possible to replicate this design in digital form—the Swiss ELP is one such example. The mere fact that this eELP is still available indicates that this is a viable approach. In fact, it may be the single most important eELP thanks to its straightforward design and continued availability. Alternatively, an electronic version of the language passport for adult learners, the EUROPASS, is a Council of Europe and European Union initiative to help citizens demonstrate qualifications and competences for education or employment across European borders (see https://europass.cedefop.europa.eu). These resources are important, but they do not take advantage of the full potential of e-portfolios.
In early portfolio literature (see Barrett 2000), the portfolio development process involves collecting, selecting, reflecting, directing/goal setting and presenting. Barrett (2011) supplements these elements with those found in Technology and Social Networking to illustrate the potential of e-portfolios. For example, in the collection stage, technology allows learners to create a digital archive, including formats (e.g., video) that are not possible with paper versions. From this collection, students select work to demonstrate a particular outcome, goal or standard—in our case, this would be evidence in relation to a 'Can Do' descriptor—but with technology students are able to create hyperlinks or embedded documents, potentially resulting in more cohesion. While the process of reflection was considered a means to help learners construct meaning from their selected works, Barrett (2007) argues that technology helps with this construction of meaning by creating new models of storytelling (a personal story of learning progress and process). Finally, presenting involves preparing the material for others, but it is possible to reach a much wider audience when publishing online. Technology, and social networking in particular, can also facilitate collaboration and interactivity. Learners are able to collaborate in group projects through shared online documents (e.g., Google Docs) in ways not previously possible.
Most e-portfolio systems, according to Barrett (2011), emphasize the reporting function (portfolio as product), which she terms the Presentation Portfolio. Evidence is organized thematically around learning goals, such as CEFR illustrative
scales, showcased for an audience, and reflection is retrospective (and written in the past tense), thus resembling summative assessment. The Working Portfolio, on the other hand, prioritizes the pedagogic function (portfolio as process). A specific piece of work or learning experience is reflected on as it happens (and written in the present tense) through the use of a blog or similar function within the e-portfolio, allowing peers (and teachers) to provide feedback quickly (i.e., formative assessment). Blog use is one example where "the boundaries are blurring between e-portfolios and social networks" (Barrett 2011). As with the reporting and pedagogic functions of the ELP, both types and levels of reflection are important. However, Barrett argues that e-portfolios incorporating elements of social networking will be more intrinsically motivating for students. Whether this results in the 'deeper' levels of reflection promoted in the ELP literature (Kohonen 1992; Kohonen and Westhoff 2003) is unclear. Although Barrett's research primarily concerns e-portfolio use in K-12 schools in the USA, readers are strongly encouraged to visit her homepage (http://electronicportfolios.org/) for further information about free online tools and the role that mobile devices play in this process.
In addition to the motivational aspects of social networking with an eELP, Perez Cavana (2012: 143) exemplifies how an electronic portfolio makes it possible to develop the pedagogic function of the ELP in ways that could not have been envisaged in the paper version. Instead of relying on the teacher for an introduction to new learning strategies, ELPs could be designed to include information, links, and other types of input. This would serve not only as a source of new knowledge, but also as scaffolding to facilitate the adoption of learning strategies, something not feasible within the constraints of a paper-based design. While the ELP is effective at promoting the metacognitive skills of planning, reflection, and assessment, Perez Cavana (2012) argues that it is less effective at introducing and developing new metacognitive knowledge and skills outside of what students already know. Addressing this weakness is one way to enhance the ELP's pedagogic function. Other examples include the introduction of learning styles (Perez Cavana 2010) and self-assessment of intercultural competence (LOLIPOP ELP) (Kennedy et al. 2011). As both of these electronic portfolios are no longer available online, an alternative using a learning management system (i.e., Moodle) is presented in Case Study 2 at the end of the chapter (Sect. 4.5.2). Within Europe, two commercially available eELPs are the Swiss ELP and EPOS—the latter is the focus of Case Study 1.
4.3 Application of the CEFR: Creating a Language Portfolio for Your Own Course
Within Europe, it is no longer possible to validate or register a new ELP model. Even when this formal process was in place, it is likely that many teachers experimented with and utilized a validated ELP in their own contexts. For now, when compiling or adapting an ELP for their students, teachers are left to decide
how closely they follow official guidelines. However, an ELP based on these guidelines, explicitly linked to the CEFR scales, and appropriately adapted to learners' needs should help users do three things: (1) understand core CEFR concepts and detailed descriptors, (2) utilize these concepts and descriptors to raise an awareness of both their linguistic and cultural identity, and (3) develop a capacity for independent language learning. Equally important, it can ensure "the quality and credibility of the European Language Portfolio as a pedagogic and reporting tool and of the quality, validity and transparency of individual ELPs in a European context" (COE 2011c: 4). This latter point presents an interesting dilemma, particularly as interest in the ELP has spread beyond Europe and the validation process no longer exists. However, ELP use is certainly not confined to Europe; instead, our goal is to utilize an ELP in a manner appropriate for our individual contexts. In the remaining sections, the focus is on how to prepare a high-quality, credible ELP, with a validated ELP (i.e., the Swiss ELP for young people and adults) serving as a model. This will provide a degree of cohesion and structure and a link to the CEFR scales to meet the transparency criterion. Available templates to supplement or replace relevant sections will be introduced for teachers wanting to customize further. In Sect. 4.4, key questions for validating an ELP serve as a guide to preparing one. Many of the documents referred to in this chapter can be found on the COE homepage: Developing a European Language Portfolio (ELP) (COE 2019b). This homepage includes:
1. Documents on the ELP's origin, guiding principles and history
2. Reports on the ELP project at the European level and at international seminars
3. Lists of accredited and registered ELPs
4. A guide to compiling an ELP model
5. Templates and other resources for compiling an ELP
6. Some key publications on designing and using an ELP
Three key documents that can help teachers to compile an ELP (Goullier 2010a), avoid common mistakes (Goullier 2009), and incorporate and promote both plurilingual and intercultural competence (Goullier 2010b) can be downloaded from this site (COE 2019b). Researchers or educators responsible for compiling an ELP to be used in a wider context such as across a school district should refer to the Application for Validation and Accreditation of an ELP model (Schneider and Lenz 2001: Appendix D; ELP Validation Committee 2005). The summary below provides a useful introduction but cannot serve as a replacement for these documents.
4.3.1 Validated ELPs to Serve as a Model
In some ways, it appears that putting together an ELP has never been easier as there are 118 validated and 23 registered ELPs to draw upon, which include ELPs from national bodies and other organizations (e.g., CERCLES—European Confederation
of Language Centres in Higher Education). It should be noted that not all ELPs are available for download; some links are no longer active and the ELP may not be written in English. These models and templates are available from the COE website, but the ECML website—Using the European Language Portfolio (https://elp.ecml.at/) also has an excellent search function and database of 51 ELPs. When consulting these models, Goullier (2010a: 3) suggests that particular attention should be paid to
• their similarity with the target users,
• the experience acquired with this model,
• and the date of validation, insofar as progress in the design and use of ELPs has gradually been incorporated in the models developed and revised over the years.
Because the models are subject to the rules governing intellectual property, use of part of an ELP model must at the very least include explicit reference in the new model to where this component is from. In other cases, it is necessary to obtain authorization, particularly when using the Council of Europe's intellectual property (Goullier 2010a).
4.3.2 Ensuring the Quality of the New Model
Guidelines to ensure the quality of a new language portfolio (Goullier 2010a: 4–5) have been discussed throughout this chapter. They are reviewed here and are based on a compilation of the most frequent errors observed by the ELP validation committee (Goullier 2009).
• The ELP should be translated into the language(s) of schooling and/or the mother tongue(s) of the users, in addition to either English or French (the official languages of the Council of Europe). Any translation should be checked by a native speaker of that language
• The ELP takes account of the sociolinguistic context by including all the national, regional, or minority languages relevant to the context and ensuring the learning of these languages is explicitly promoted
• The learners' practical use of the language is recognized through the use of communicative language descriptors in the self-assessment checklists. These checklists are adapted and/or supplemented with concrete examples relevant to the learners and their context
• CEFR terminology and official translations of the self-assessment grid are used consistently and accurately
• All the competence levels, the languages used, and the various sections and components of the ELP are used consistently and accurately in a cohesive manner
• The various components and formats are presented cohesively.
4.3.3 Steps to Be Taken When Compiling a New ELP Model
4.3.3.1 The Need for Developing a New Model
It is entirely possible to use a validated ELP model whose quality has been verified by the validation committee. However, most ELPs were written to cover a large group of learners (e.g., young people and adults in Switzerland). Such a model might emphasize one function over the other [the Swiss ELP, for instance, places great importance on the reporting function (Schärer 2012: 51)], give weight to certain aspects over others (e.g., the degree to which 'learning how to learn' is addressed), and its structure (e.g., open format versus highly structured) may reflect the developer's view of autonomy (i.e., learners simply need opportunities to be autonomous versus learners need guidance in developing autonomy).
4.3.3.2 The Specific Needs of a Group of Users Within a Certain Context
As pointed out earlier, the teacher must consider similarities between their users and the learners for whom a particular ELP was developed, the context of learning (with consequences for the way the teacher treats formal and informal learning, the nature and presentation of the descriptors and checklists, etc.), and the sociolinguistic context (with consequences for the choice of languages used in the instructions, the examples and illustrations given, etc.).
4.3.3.3 The Aims and Intended Use of the ELP
Every ELP must include the three separate parts (language passport, language biography, and the dossier), adhere to the aims of each section as prescribed in the Principles and Guidelines, and, in the case of the passport section, utilize the standard version available on the COE website. With respect to the biography section, provided all the important aspects are covered, developers can employ a structure that reflects the priority and weighting they place on each aspect and how they believe these aspects can best be developed.
4.3.4 Relevant Components and Available Templates
The relevant components and available templates for compiling an ELP are provided below. In the exercises, the reader will be guided through the process of choosing the most appropriate materials for their needs.
4.3.4.1 Language Passport
The standard language passport for adults and the language passport templates for young learners are available on the COE website. Please note that they have not been updated recently, but suggestions for the Profile of Language Skills and the new Self-Assessment Grid can be found in the CEFR/CV (COE 2018). Official translations of these documents should appear on the website shortly. For
non-European languages, great care must be taken if a version does not yet exist in the first language of the learners. It is likely that a version will be prepared shortly through either a national ministry of education or prominent research or academic associations that have led previous CEFR-related initiatives in your area. Another alternative is the Swiss ELP for young people and adults (www.languageportfolio.ch), which favors detailed information about examinations taken as well as attestation forms bearing the stamp of an institution (when possible) for, and rich descriptions of, (i) Participation in an Exchange Program, (ii) Participation in Bi-Lingual Teaching/Immersion Teaching, (iii) A Language Learning Stay in a region where the language is spoken, (iv) Playing Host to a Foreign-Language-Speaking Guest from a Partner School, Institution or Family, and (v) Participation in a Sustained Correspondence with a Foreign-Language-Speaking Pen Friend. In some ELPs, the passport is detachable, enabling learners to share it easily with future employers or school administrators.
4.3.4.2 Language Biography
There are numerous areas to be covered in the language biography (see Sect. 4.2.2.3). They are listed below (taken from Goullier 2010a: 6). Reference is given to the corresponding section in the Swiss ELP III (see COE 2001b for information about each section, an explanation of its use, and access to the ELP itself) and to available COE templates (COE 2011b) (see Table 4.7) when possible. The teacher can then decide if they would like to adopt, supplement, or adapt one of the components. (Templates on the COE website are a useful starting point.) It is also important to consider how the components link to the passport section. The benefit of using an existing model is that this has already been done explicitly and cohesively. For each component taken from a validated ELP, include the following reference: The section(s) organizing this presentation or encouraging this reflection are taken from ELP Model(s) No(s)….
4.3.4.3 Dossier
According to the Principles and Guidelines, the dossier allows the learner to select material to document and illustrate achievements or experiences recorded in the language biography or language passport. The form the dossier takes will vary according to whether the reporting or pedagogic function is prioritized (see Sect. 4.2.2.5). The Swiss ELP III provides information on how to use and organize a basic dossier (see Table 4.6 and/or the dossier section of the Swiss ELP website). This will also be discussed in the Exercises.
Table 4.7 Language biography components and corresponding Swiss ELP III sections and COE templates
(Columns: Language biography component | Swiss ELP III section | Supplement from available COE templates)
(a) Presentation to the user of the different parts of the language biography | Explanation on website |
(b) Presentation of the user | 2.1 |
(c) Presentation of the particular language learning situations by the user | 2.1/2.4 |
(d) Reflection on linguistic and cultural diversity in the immediate environment | Explanation on website | See Table 4.4 this chapter—ELC ELP; Intercultural awareness and experience
(e) Self-assessment of competence levels | 2.2 | Self-assessment checklists
(f) Setting language learning objectives | 2.5 | See Tables 4.5/4.12/4.13 this chapter; Goal-setting and Learning How to Learn
(g) Reflection on learning strategies | 2.5 | Goal-setting and Learning How to Learn
(h) Reflection on mediation | |
(i) Reflection on direct or indirect intercultural meetings | 2.3 | Intercultural awareness and experience
(j) Promoting plurilingualism | 2.1 | A Plurilingual Profile; See Table 4.4 this chapter
(k) Explanation of the competence levels in one or more languages when these are not easy for the user to identify with the help of the checklists or in the part describing the linguistic profiles in the language passport | |
(l) Consideration given to the required frequency of use of the different sections | |
4.4 Exercises
The goal of these exercises is to guide the reader through compiling an ELP based on an examination of the Standard Adult Language Passport, a number of templates for the biography and dossier section, and each component of the Swiss ELP. Key questions taken directly from the Application for Validation and Accreditation of an ELP model (Schneider and Lenz 2001: Appendix E; https://rm.coe.int/1680459fa3) are used to guide the discussion. The complete list of questions from the application appears in Appendix 1, providing a more comprehensive overview than is possible here.
4.4.1 Exercise 1: Language Passport
• Compare the Standard Adult Language Passport available on the COE website (https://www.coe.int/en/web/portfolio/templates-of-the-3-parts-of-a-pel) with the Swiss ELP III Language Passport (https://www.languageportfolio.ch) (see Table 4.8).
Key questions: Does the language passport
• allow the recording of formal qualifications and language competencies regardless of whether gained in or outside formal education contexts?
• ensure continuity between different educational institutions, sectors and regions?
Table 4.8 Standard Adult Language Passport (COE) and Swiss ELP III Language Passport
Standard Adult Language Passport (Council of Europe):
Learners can only add limited details, such as name of language or dates
Profile of Language Skills: a profile of the language(s) that the holder has grown up with. Language(s) I used or use (1) within my family and neighborhood; (2) in my school(s)
Summary of Language Learning Experiences for formal education (e.g., post-secondary) or other language courses. A summary of linguistic and intercultural experiences. Categories include (1) using language for study or training, (2) at work, (3) while living or travelling abroad, (4) mediating between languages (e.g., information translation), (5) other areas of use
Swiss ELP III Language Passport (online demo version, www.languageportfolio.ch):
Sect. 1.1 Self-Assessment Grid (no entry required)
Sect. 1.2 Global Scale: calibration of certificates and qualifications to Common European Framework levels. (Students add the name/date of a qualification to the corresponding CEFR level (e.g., B2) and indicate the degree to which the qualification has been calibrated to the CEFR using a 4-point scale (e.g., weakest level * = collective judgment by the teaching staff of the institution concerned))
Sect. 1.3 Examination Description: the student provides detailed information about certificates/qualifications awarded based on examinations
Attestation forms for:
Sect. 1.4 Participation in an Exchange Program
Sect. 1.5 Participation in Bi-Lingual Teaching/Immersion Teaching
Sect. 1.6 A Language Learning Stay in a region where the language is spoken
Sect. 1.7 Playing Host to a Foreign-Language-Speaking Guest from a Partner School, Institution or Family
Sect. 1.8 Participation in a Sustained Correspondence with a Foreign-Language-Speaking Pen Friend
ANSWER The Standard Adult Language Passport provides a more holistic picture of learner ability, covering the same areas as the Swiss ELP III Biography Section (which is no surprise, as the Swiss version served as a model for the standard one). Student input is (intentionally) limited due to space restrictions, but students are meant to expand on these summaries in the biography and dossier sections. Self-assessment of ability (e.g., Profile of Language Skills) is neither linked nor calibrated to an external examination to the same degree as in the Swiss model. Student voices are more prominent in the standard version, indicating that the pedagogic and reporting functions are given equal weighting. Teachers prioritizing the reporting function, however, may prefer the Swiss ELP for young people and adults (Swiss ELP III), as it favors detailed information about examinations taken (Swiss ELP III Sects. 1.2 and 1.3) as well as attestation forms (bearing the stamp of an institution when possible) for, and rich descriptions of, language and cultural experiences (Swiss ELP III Sects. 1.4–1.8). However, it might require more input and guidance from the teacher.
4.4.2 Exercise 2: Language Biography
Here we compare various COE templates for the language biography with the Swiss ELP III equivalents (see Table 4.7).
4.4.2.1 Exercise Language Learning Biography
• Compare the Swiss ELP III (Table 4.9) with the form used in the ELC ELP for University students—My language learning biography (Table 4.10, reproduced from COE 2011a: 30).
Key questions: Does the language biography
• encourage learners to include information on linguistic and cultural experiences gained in and outside formal educational contexts?
• Is the biography organized to promote plurilingualism (i.e., the development of competencies in a number of languages)?
• Are these sections linked cohesively to the language passport or dossier?
ANSWER (Cohesion) The Swiss ELP is clearly divided into different sections (and pages) for the biography. The teacher may need to explain how Sect. 2.1 (like the Standard Adult passport) serves as a summary and Sects. 2.3–2.5 are for learners to expand upon informal linguistic and intercultural experiences, formal education programs and future goals. (Note that each section makes explicit reference to Sect. 2.1.) The ELC ELP provides numerous prompts but no explicit linking to the language passport or later sections. It is a small difference, but the strength of the Swiss ELP is that students may see the connection to other sections of the biography more clearly. Another advantage of the Swiss ELP, an e-portfolio, is the potential to (hyper)link entries. For example, a summary in Sect. 2.1 could be
Table 4.9 Swiss ELP III—language learning biography
Sect. 2.1 Personal language learning biography
When you write your Personal Language Learning Biography for other people, it is best to put it in table form. If you want to reflect on your own experiences and progress, it could be more useful to write a more detailed biography
(1) Write down what language(s) you have grown up with and what language areas you have lived in
(2) Give brief information about language learning at school and about courses that you have attended, about the length, frequency, and types of teaching
(3) Put down where and how you are learning or have learnt languages outside of school
(4) Note when and how you use or have used other languages at work, in your studies, with friends and on trips
Sect. 2.2 ELP checklists
Sect. 2.3 Information about Important Linguistic and Intercultural Experiences
This is the place for you to give (in more detail than in Sect. 2.1) information about important intercultural experiences and activities which have contributed to widening your knowledge of other countries and the people, society, and culture of foreign language areas. (Prompts given (e.g., Intercultural experiences (encounters with the country, culture, and speakers of the language)))
Sect. 2.4 Information about Foreign Language Teaching in Schools and Language Courses
Information about foreign language classes you have taken can be useful, for example, in cases where you have not acquired a diploma. Information about the goals and the duration of the language course gives some indication about the probable level of your language skills. Information about the teaching aids used and about the teaching method is helpful for new teachers in case of a change of school or at the beginning of a new language course. Furthermore, the documentation on the foreign language courses you have taken can also be valuable information for yourself. It can help you to evaluate the teaching method which suits you best so that you can use it for your further language learning. (This information may be provided by the school you attended)
Sect. 2.5 Setting goals for yourself
Looking back on the experiences you have had during language learning is a good way to draw conclusions for further language learning; the checklists might help you clarify which goals you have already reached. It might be very useful to write down your new goals and to plan, perhaps together with a teacher, how to proceed in order to reach them
linked to evidence in a later section, but it is unclear whether this function is embedded in the Swiss ELP. This cohesion is more difficult to attain with a paper-based version like the ELC ELP. This exercise also highlights a weakness of relying on templates: it is unclear how cohesively the ELC template fits into the ELP from which it was taken.
Table 4.10 ELC ELP for University students—my language learning biography
My language learning biography
Here give a description of your language learning experiences for the languages you know. This information may be useful to other people (e.g., teachers, employers) and will provide a basis on which you may plan your future learning activities
Which languages have I learned?
– Languages that I learned at school or in courses (Give the duration, number of hours, goals, content, teaching methods, textbooks, and where appropriate the kinds of examination.)
– Languages that I grew up with
– Language areas where I have lived
– Use of language while working, in training, studying, travelling, and in my circle of acquaintances
– Language contact through television, radio, the cinema, art, music, books, the press, the Internet, etc.
How have I experienced learning?
– How have I learned well and with pleasure? What was particularly important and enriching?
– In the framework of language learning and the languages I have learned or am learning, what has demotivated me?
The worksheets can be presented chronologically like a curriculum vitae, or separated according to language. Please give the number of years of learning and where possible also the dates
Plurilingualism and Intercultural Awareness and Competence
As discussed in Sect. 4.2.2.4, it is important to raise learner awareness of the number of languages they have engaged with. Both forms try to do this. Furthermore, these open-ended forms encourage students to reflect on using another language in a different context and culture. As with many templates intended for a large audience, they are rather generic and the teacher is left to decide how much input or feedback (if any) to provide. Case Study 5.5.2 is an example of how context determines whether a task (i.e., sharing one's culture and customs) can serve as a linguistic and intercultural experience to reflect on and store in the language biography to promote interculturalism or as a language activity to formally assess linguistic competence. Case Study 4.5.3 exemplifies how teacher input can lead to deeper levels of reflection concerning intercultural experiences.
4.4.2.2 Exercise Language Biography—Goal Setting
• Compare the Swiss ELP III (Sect. 2.5) (Table 4.11) with the form used in CercleS ELP for University students (Table 4.12 reproduced from COE 2011a: 26).
Table 4.11 Swiss ELP for young people and adults—Sect. 2.5 My objectives
My objectives
Formulate the objectives and plans for language learning: the pages can be organized individually
(1) What do I want to learn?
(2) How do I want to learn?
(3) Why do I want to learn a language, what do I need to be able to do in it, and how would I like to go about it? Am I learning the language for my job, for travel, or for study? Is it more important for me to understand, to read literature or to write? Do I want to attend a course, learn in a tandem partnership, or have a stay in a foreign language area?
Table 4.12 CercleS ELP for university students
My next language learning target
Language
(1) Learning target (Use the self-assessment grid in the language passport and the checklists in the appendix to formulate your next learning target as precisely as possible.)
(2) How much time can I devote each day/week to achieving my target?
(3) When shall I begin? When do I plan to finish?
(4) How do I intend to achieve my target? For example, can I work alone or do I need to work with other people?
(5) What learning materials do I need?
(6) How shall I know whether or not I have achieved my target? (For example, can I take a test or set and correct a test for myself? Or shall I need to ask my teacher, another learner or a native speaker to assess me? Or can I depend entirely on my own judgment?)
(7) Review of learning progress on or near my target date: Have I achieved my target? In working toward my target have I learnt anything new about (i) the target language or (ii) language learning? What am I going to do with what I have learned?
Key questions: Does the language biography
• facilitate learner involvement in planning?
• facilitate reflection upon and assessment of progress?
ANSWER The Swiss ELP III (Table 4.11) tends toward an open-ended format while the CercleS ELP for university students (Table 4.12) is more structured, guiding students in setting their learning targets. The CercleS ELP encourages
learners to (1) use the CEFR (Self-Assessment Grid and portfolio descriptors) as a basis for setting new targets, (2) consider the amount of time available to achieve the target, (3) set specific dates for self-monitoring, (4) decide on working methods, (5) assess, and (6) reflect on their learning. Which approach do you prefer? The open-ended format may be more appropriate for teachers who believe their learners simply need opportunities to be autonomous, while a highly structured format may be favored by those who believe learners need guidance in developing autonomy. Case Study 5.4.1 exemplifies how the CercleS version can be used to set short-term goals in relation to class material and the illustrative scales.
4.4.2.3 Exercise Language Biography—Learning How to Learn
The templates for Goal Setting and Learning how to learn (available on the COE website—Developing an ELP) are a valuable resource gathered from various ELPs. Individually, these templates are of high quality, but it is unclear how they fit into the ELP from which they were taken. They vary considerably and include narrowly focused checklists, although a majority tend toward an open-ended format where prompts suggest areas for consideration. With these resources, no clear guidelines are given concerning the degree to which teachers are involved in the reflection process, whether they should provide feedback on student reflections, and how the various forms should be used. This is in part because the COE does not seek to impose a specific teaching approach. Rather, decisions concerning how to best adapt the templates to local contexts are left to individual teachers. However, portfolios that are prepared with a well-defined group of learners in mind are likely to be more highly structured (Schneider and Lenz 2001: 37). Teachers with a clear understanding of their students' needs can accommodate them accordingly. The Milestone ELP for young adults and adult migrants is one such example. In this ELP, learners (1) identify and analyze their learning needs and (2) reflect on previous experiences. This allows learners to specify the most effective strategies to meet their learning needs (see Table 4.13, reproduced from COE 2011a: 23).
Table 4.13 Milestone: ELP for young adult/adult migrant—the ways I learn best
The ways I learn best
Here I think about and record the ways I learn best and I describe my learning approaches for different purposes
(1) What I have to learn: New vocabulary
(2) How I learn best: Write it in my notes; use a tape to hear it again and again; my favorite time and place for learning is:
Notes: I need to learn more specialized vocabulary for work
• Compare the Swiss ELP III (Sect. 2.5—My Objectives) (Table 4.11) with the form used in Milestone: ELP for Young adult/adult migrant (Table 4.13, reproduced from COE 2011a: 23).
Key question: Does the language biography
• facilitate reflection on the learning process?
ANSWER Great care must be taken when utilizing these templates. While they do facilitate reflection and self-assessment, according to Perez Cavana (2012: 147), "these types of questionnaire cannot offer any new knowledge in relation to other ways to learn or methods … [and] it seems necessary to have some input in the form of explicit instruction, where different types of learning strategy and ways to use them are presented and explained." Perez Cavana (2012) acknowledges that the inclusion of such material in the paper portfolio is unrealistic due to cost and size constraints. A simple solution is to prompt reflection after students have been explicitly introduced to learning strategies (see Sect. 4.1.2.1). Perez Cavana explores the possibility of addressing this weakness through e-portfolio use (discussed in Sect. 4.2.4). The reader may now want to turn to Case Study 4.5.2, which exemplifies how (in this case, Japanese university) students can be guided through various online learning resources using a learning management system (Moodle) to reflect on their learning process.
4.4.2.4 Exercise Language Biography—Descriptor Checklists
The checklists of 'Can Do' descriptors contained in ELPs were discussed in Sect. 4.2.2.4. However, it is also important to consider how they are presented to learners. Several alternatives exist, including the Eaquals checklists of descriptors discussed here (www.eaquals.org, under Our Expertise/CEFR/Practical Resources), which were first published in 2008 by an Eaquals project group led by Brian North, the co-author of the CEFR. The original six CEFR levels were subdivided into 11 levels to benefit institutions whose students stay for shorter periods or take fewer courses per week. These finer-grained levels will be motivating, as learners will be able to see progress more easily and at shorter intervals. These descriptors are presented in three different ways, so a comparison helps illustrate the benefits and drawbacks of each approach.
• Examine the Eaquals Bank of Descriptors as a Checklist (Table 4.14) (the Swiss ELP III checklists (Sect. 2.2) are similar) and compare them with the Eaquals Bank of Descriptors as Levels (Table 4.15), the Eaquals Bank of Descriptors as Scales (description provided following Table 4.15), and the Swiss ELP III (Sect. 2.2, available online).
Key question: Presenting descriptor checklists
What are the advantages and disadvantages of each format? With respect to the Swiss ELP III, do the checklists
• encourage learners to state what they can do in each language?
Readers might also want to consider
• How do the self-assessment checklists included in an ELP relate to the curriculum/course descriptors from Chap. 2?
• Are the levels and descriptors used compatible with the curriculum being used?
Eaquals Bank of Descriptors as a Checklist
Lists are grouped according to levels (e.g., A2) and descriptors are listed under five general skills (i.e., Listening, Reading, Spoken Interaction, Spoken Production, and Written Production). Descriptors from the illustrative scales are included but are not labeled as such. Below is an example of the descriptors for Listening.
Table 4.14 Eaquals Bank of Descriptors as a Checklist
Eaquals Bank of Descriptors—as Checklists
A2 Listening
I can understand simple information and questions about family, people, homes, work, and hobbies
I can understand what people say to me in simple, everyday conversations, if they speak clearly and slowly and give me help
I can understand short conversations about family, hobbies, and daily life, provided that people speak slowly and clearly
I can follow changes of topic in TV news reports and understand the main information
I can understand short, clear and simple messages at the airport, station, etc. For example: "The train to London leaves at 4:30"
I can understand the main information in announcements if people talk very clearly. For example: weather reports, etc.
(Followed by descriptors for Reading, Spoken Interaction, Spoken Production, Written Production)
Strategies
I can start a conversation
I can say what I don't understand and ask simply for clarification. [+ 2 additional descriptors]
Quality of Language
I have enough vocabulary to communicate in simple everyday situations. I can communicate what I want to say in a simple and direct exchange of limited information; in other situations I generally have to compromise the message. [+ 4 additional descriptors]
Adapted with permission from Eaquals
Table 4.15 Eaquals Bank of Descriptors as Levels
Eaquals Bank of Descriptors—as Levels (A2)
Overall listening: I can understand simple information and questions about family, people, homes, work, and hobbies
Listen to interlocutor: I can understand what people say to me in simple, everyday conversations, if they speak clearly and slowly and give me help
Listen in discussion: I can understand short conversations about family, hobbies, and daily life, provided that people speak slowly and clearly
Listen in audience: (descriptor not reproduced here)
Listen to TV, film: I can follow changes of topic in TV news reports and understand the main information
Listen to announcements: I can understand short, clear and simple messages at the airport, station, etc. For example: "The train to London leaves at 4:30"
Reading scales (presented in the same way): Overall reading, Read correspondence, Read for orientation, Read info and argument, Read instructions, Read literature
Eaquals Bank of Descriptors as Levels
Lists are grouped according to levels (e.g., A2), and descriptors are listed under general skills (e.g., Overall Listening) and the corresponding illustrative scales. As with the descriptors as checklists, the listening descriptors are followed by those for the other skills (e.g., reading).
Eaquals Bank of Descriptors as Scales
Lists grouped according to skills (e.g., Listening) include all corresponding illustrative scales and levels for that particular skill. The table for this scale differs from Table 4.15 in that the A2 Listening descriptors would be preceded by Listening descriptors at the A2+ level and followed by those at the A1+ level.
ANSWER Visually speaking, the first list—Eaquals Bank of Descriptors as Checklists—may be the easiest for students to process initially. All relevant descriptors appear under their corresponding skill and learners can add descriptors. However, checklists do not draw the learners' attention to the different categories that a specific skill can be subdivided into (i.e., the illustrative scales), which is the advantage of listing descriptors as levels, as these do include this information. The drawback of lists organized by level is that they may encourage a belief that a learner is at one level for all skills, even though it is likely that one's receptive skills will be higher than one's productive skills. Last, the advantage of descriptors as scales is that it is easier to see the relationship between different levels of descriptors within each skill. Another important contribution of the Eaquals Bank of Descriptors is the inclusion of Strategies learners may employ at that level (e.g., B1+ level: When I can't think of a word, I can explain what I mean with another word) and Quality of Language, which indicates the level of performance expected (e.g., B1+ level: I have a sufficient range of language to describe unusual and predictable situations and to express my thoughts on abstract or cultural as well as everyday topics (such
as music, films)). Within each list, this information is displayed in the same way as the descriptors of communicative language activities. A fourth descriptor bank—Comparison with CEFR descriptors—resembles Lenz and Schneider (2004a) by comparing the Eaquals 2008 descriptors with those found in the CEFR and various validated ELPs. When drawing up a checklist, the developers must decide not only on an appropriate number of descriptors to include, but also how generic or specific these descriptors should be. As Schneider and Lenz (2001: 32) point out, "having to work through long lists of carefully worded descriptions may become cumbersome, for young learners in particular… The descriptors to be included in checklists should be carefully selected, adapted to the learners if necessary, and presented as attractively as possible." The Swiss ELP for young people and adults (COE 2001b) relies on generic lists with six broad levels (A1–C2). These lists are presented in the same way as the Eaquals Bank of Descriptors as a Checklist, and the descriptors state what learners at a certain level are typically able to do, not everything a learner should be able to do at this level. Space is available for the learner/teacher to add additional descriptors. For each descriptor, there is a column for self-assessment (√ = I can do this under normal circumstances; √√ = I can do this easily), the teacher's assessment, and a future objective (! = This is an objective for me; !! = This is a priority for me). Of course, other ELPs differ: they may use a different evaluation scale for self-assessment (e.g., a scale of 1–10), or allow space for future assessments (e.g., six months later) or for different languages. Regardless of what checklists are chosen, if the descriptors have been revised to meet student needs, then a brief summary of why and how this was done should be included.
4.4.3 Exercise 3: Dossier
The dossier allows learners to select material to document and illustrate achievements or experiences recorded in the language biography or language passport. The form the dossier takes will vary according to whether the reporting or pedagogic function is prioritized (see Sect. 4.2.2.5).
• Examine the Swiss ELP III Dossier [see Table 4.6 and/or the dossier section of the Swiss ELP website (COE 2001b)].
Key question: Does the dossier
• offer learners the opportunity to select materials to document and illustrate achievements or experiences? How accurately does this evidence reflect student ability?
• allow for updating and reorganization?
ANSWER If you feel the need to prioritize the reporting or pedagogic function, you might provide more specific guidelines about how to report the nature of the materials that have been included as well as how these materials can be organized. Case Study 5.5.2 offers a glimpse of the predicament teachers face when evaluating evidence stored by students in the dossier. In a high-stakes speech contest, student performance carried a heavier weighting in the criteria than presentation content, as the presentations and scripts appeared to have been heavily edited (written) by their teachers. When presented as evidence of student ability, it may not be clear how truly representative or accurate this material is.
4.5 Case Studies and Further Reading
This section includes three case studies that show how the ELP has evolved or can be supplemented. Case Study 1 describes EPOS, the most thoroughly researched and supported electronic portfolio. Case Studies 2 and 3 address Perez Cavana's criticism that many paper-based portfolios cannot offer any new knowledge. In Case Study 2, the use of digital materials to promote self-directed learning is offered as a possible solution to the weakness of the 'Learning how to learn' templates discussed in Sect. 4.4 (Exercise 2.3). The focus of Case Study 3 is the development of Intercultural Competence.
4.5.1 Case Study 1: EPOS
The first case study focuses on the e-portfolio Elektronisches Europäisches Portfolio der Sprachen (EPOS—Electronic European Portfolio of Languages) and how it is used in conjunction with a tutorial program at the University of Bremen's Language Centre, as described in Buschmann-Göbels and Kühn (2017). While the Swiss ELP III (for young people from age 16 to adults) focuses on the reporting function, EPOS places greater emphasis on the pedagogic function through self-evaluation, goal setting, a journal to reflect on and document the learning process, a dossier to document evidence of progress, and a biography to reflect on language learning experiences. The long-term viability of EPOS rests on a number of key features. First, there is significant institutional support for this e-portfolio, including the Language Centres of Bremen's four public universities, the Department of Computer Science at the University of Bremen, the Language Council of Bremen, and the EPOS Association (Kühn 2016). Second, much thought was put into ensuring that EPOS took advantage of the digital format through the interconnectedness of key features (e.g., learning objectives from self-evaluation are automatically stored in the EPOS menu), the ability to store different digital formats, and the motivational aspects of online social networking (e.g., the ability for users to comment on each other's pages). Third, EPOS use is supported through the tutorial program where students
receive guidance and encouragement on a regular basis through one-on-one sessions with a tutor. Finally, EPOS clearly follows CEFR and ELP principles and is aligned to the CEFR through the use of 'Can Do' descriptors, namely those of the European Confederation of Language Centres in Higher Education (CercleS, www.cercles.org/en) and the European Language Council/Conseil Européen pour les Langues (ELC/CEL, www.celelc.org).
EPOS's digital format supplements the above foci with a 'Pages' function, allowing users to present learning examples to others, and a 'Group' function which enables users to collaborate on, give feedback on, and share projects, taking advantage of the motivational aspects of collaboration inherent in online social networking, which Barrett (2011) sees as essential for the success of digital portfolios (Fig. 4.3) (see Sect. 4.2.4 for more information).
Fig. 4.3 EPOS at a glance (Buschmann-Göbels and Kühn 2017: 275)
EPOS's potential is realized through the 'Autonomous Language Learning with tutorial advisory service', which is offered through Bremen's higher education institutions' Language Centres. Trained student tutors meet weekly with language learners, offering support in planning, performing and evaluating their learning process. This program is open to learners of any language, learning materials are available for free, and students can receive language credit provided they document their participation and workload.
The concrete goals for the tutorial program are:
• to improve the students' ability to study through individual, personal learning counseling,
• to achieve sustainability in language learning by supporting LA and focusing on action and competence orientation,
• to individualize language courses to meet the needs of heterogeneous groups of students and their different profiles, curricula, and timetables.
The tutor's role is primarily capacity building:
• help set reasonable learning goals for tutees,
• work out a corresponding learning plan,
• support tutees with strategies for autonomous learning,
• recommend suitable materials,
• if necessary, help learners with time management,
• introduce learners to the e-portfolio EPOS and give regular feedback on learning diaries and dossiers,
• support students in their project work.
The use of EPOS in the tutorial program
To help learners plan and reflect upon their learning, EPOS and the tutorial program are used in conjunction as follows:
1. Tutee needs analysis: Using a questionnaire to record learner language learning biographies, the tutors and tutees analyze learning needs.
2. Learners are introduced to the e-portfolio and its main functions.
• Self-evaluation: Several descriptor lists are available for learners to self-assess their level in relation to 'Can Do' descriptors for listening, speaking, reading, and writing. Other competences can be added. A graphic representation of their self-evaluation resembling the proficiency profile (see Fig. 4.2) is also provided.
• Learning objectives: Goals are set according to the ticked descriptors in the self-evaluation section. Learners can phrase these as they wish. These goals are automatically saved in the list of learning objectives in the EPOS menu. This is an example of the interconnectedness of several features.
• Journal: Students reflect on and document the learning process. Guiding questions are available, such as: What learning objective did I work on? What materials did I use? How much progress did I make? What problems did I encounter? What do I want to discuss with my tutor?
• Dossier: Students can store evidence of progress in a wide variety of digital formats. It is also possible to download any data file or data format.
• Biography: Students record and reflect on their language learning experiences.
• Pages: Learners present evidence of their learning to other students and their teachers and tutors. Material from student self-evaluations, journals, and dossiers can be used to create a digital poster which others can comment on.
• Groups: All members of a group share their work and can work on it collaboratively, give feedback, etc.
4.5.2 Case Study 2: Digital Materials and Self-directed Learning
While this case study is not directly related to CEFR and ELP implementation, it describes a class at a private university in Tokyo, Japan, which focuses on self-directed learning (Ohashi 2018). It has been included here because it shares the ELP's goal of training students to become independent learners and draws upon the same body of LA literature (e.g., Little 1991) used in official ELP guides. However, this approach is referred to as self-directed learning; it acknowledges that teachers play a crucial role in guiding and developing LA, and it addresses Perez Cavana's (2012) criticism of the paper-based ELP by utilizing recent developments in LA literature (e.g., Blidi 2017) and language learning-related technology (e.g., Lai 2017) to introduce and develop metacognitive knowledge and skills beyond what students already know. The contents of the 'Self-Directed Learning Course' are as follows:
• Reflecting on students' language learning history
• Sharing resources and methods for English study
• Identifying short-term and long-term goals
• Making SMART plans to reach goals (SMART: Specific, Measurable, Attainable, Relevant, Time-based)
• Completing Planning-Action-Reflection cycles
Utilizing Moodle, students are guided through the contents using online resources. For example, students review at least five language learning histories from an online site (see the Chuo University website, http://c-faculty.chuo-u.ac.jp/~mikenix1/tlr/work/llh/index.html) before making notes about their own language learning history and sharing it in small groups in the following lesson. With respect to the second goal, students are introduced to various English study-related sites for categories such as Listening, Studying Vocabulary, Extensive Reading, and Goal Setting. They are asked to review these sites and present their findings to the class. Goal setting is done in reference to Dörnyei's (2014) imagined future self, exemplified with the teacher's personal goals, and articulated after watching a short video on setting SMART goals.
The final step involves completing the Planning-Action-Reflection cycle:
• Outline long-term and short-term goals
• Identify tasks that will build toward goals
• List the tasks in a SMART way
• Take action outside of class
• Discuss the action (or inaction…) in class
• Reflect on the experience
• Modify plans if necessary
• Continue the cycle.
As with the other online resources, students reflect on and share their experiences of studying outside of class through feedback presentations in the Self-Directed Learning (SDL) Course. In a feedback survey comparing students in the SDL Course and students in a regular course with an SDL strand, Ohashi (2018) found that the SDL Course students reported studying for longer periods both during term and during the university break. While not an ELP-specific case study, Ohashi (2018) addresses many of Perez Cavana's (2012) concerns, exemplifies many of the principles shared by the ELP for developing LA, and does so in a way that could easily be replicated.
4.5.3 Case Study 3: Developing Intercultural Competence
ELP templates for reflecting on linguistic and intercultural experiences tend toward an open-ended format for use with a wide audience. However, the depth of reflection is left to chance, as no guidance is provided in the official guide for the intercultural component of the language biography (COE 2011b)4 as to the type of input (e.g., intercultural theory) or process (e.g., implementation guidelines). The purpose of this case study is to exemplify one teacher's use of relevant theoretical models and practical advice for developing intercultural competence, facilitating deeper levels of reflection in relation to a cultural exchange between Japanese and Taiwanese university students who (with a few exceptions) can only communicate through their second language—English (Yabuta 2019). It is common for universities to provide overseas exchanges for their students, but the level of preparation students undertake varies greatly. By tying a week-long stay in Taiwan to two 15-week intercultural understanding courses, Yabuta was able to introduce two models of intercultural competence. Byram's (1997) model provides an excellent overview as it fully integrates linguistic, sociolinguistic, and discourse competence, while also informing the CEFR stance on intercultural competence (see Sect. 1.1.2.4 of this volume for a discussion of intercultural competence, plurilingualism, and pluriculturalism within the CEFR). This is complemented by Deardorff's (2006) Pyramid Model of Intercultural Competence, which shows how attitudes (e.g., respect for other cultures) influence the knowledge, comprehension, and skills necessary for communicating in a manner respectful of and appropriate in different cultures and contexts. Students envision and (hopefully) achieve a 'desired external outcome.' In other words, to achieve this outcome they envision behaving and communicating effectively and appropriately (based on their intercultural knowledge, skills, and attitudes). This is also made possible through a 'desired internal outcome': an informed frame of reference requiring adaptability, flexibility, an ethnorelative view, and empathy. Practical advice on how this model might be realized is provided through Itoki (2015), the former CEO of Sony Korea, who shares his experience and advice for Japanese people working overseas, covering such topics as:
• Dealing with a new culture/culture shock
• Embracing differences
• Expressing yourself
Students then use this advice to set goals for the cultural exchange with the university students in Taiwan. Later, the students record experiences and monitor motivation in a learner journal during the trip, which includes planned formal activities (e.g., surveying each other about different cultural practices) and social ones (e.g., sightseeing trips). Upon returning to Japan, students reflect on their trip, writing a report about their impressions and experiences in relation to Deardorff's model of intercultural competence and Itoki's advice. They also consider how their experiences (and gains in intercultural competence) might benefit them in the future (Table 4.16).
4 As discussed earlier, the COE intentionally does not provide prescriptive pedagogic guidelines, acknowledging that these decisions are best left to ELP users.
Table 4.16 Yabuta (2019) Student goal setting and reflection on intercultural experience
Pre-trip goal: Be active
What I can do: I can make an effort to talk with Taiwanese students
What I experienced: Outside of class, I tried to converse with the Taiwanese students. I talked about my favorite artist. I introduced popular trends in Japan.
Comment: At first, our communication did not progress smoothly. I did not know how to communicate with someone from a different culture. We both speak English as a second language and had difficulty communicating at times. If I talked a lot, the Taiwanese student would also talk a lot. When we found we liked the same artists, we felt we had something in common.
Post-trip goal: To actively express myself
Future use: When I start work, I can use this (experience). For example, every company has its own atmosphere and environment. I will have to adapt myself (to that environment). I might be able to apply this knowledge.
The purpose of this case study was to illustrate how the introduction of relevant theory and practical advice, coupled with opportunities for reflection, might lead to the development of greater intercultural competence than would otherwise be possible.
Case Study Conclusion
The case studies presented here were chosen as examples of how the ELP can be evolved or complemented. Case Study 1 represents best practice by taking advantage of the motivational aspects of online interaction and promoting autonomous learning through tutorial support. It must be noted that EPOS enjoys significant institutional backing, but public access is limited. Case Studies 2 and 3 address the weakness of using generic ELP templates by illustrating the teacher's role in introducing concrete tools for self-directed learning through online resources, and relevant theory and practical advice for developing intercultural competence. These studies are offered as best practice for implementing key concepts promoted in the ELP.
4.5.4 Further Reading
Official resources for this chapter are presented here. Information from the five main websites for understanding the ELP has been summarized in this chapter. However, these websites contain additional information that anyone interested in further researching the ELP will find useful. Each site provides the most pertinent information for a given theme and includes a general introduction, relevant guides, and information about projects, research, and training. Between sites, there is extensive cross-referencing and overlap. However, it can be a little overwhelming at first, as the presentation of information is not consistent between websites. Since the sites were developed for different purposes, some information is given prominence, while other aspects may be buried at the bottom of a list or absent altogether. It is also important to point out that since 2014, ELPs are no longer being registered by the COE, and it appears that these sites are no longer being maintained and monitored to the degree they once were.
• Council of Europe and the ELP (https://www.coe.int/en/web/portfolio). This site will be of interest to educators primarily concerned with Developing an ELP.
• The ECML CEFR and ELP homepage (https://www.ecml.at/) includes a general introduction, relevant CEFR resources (i.e., practical guides), ongoing ECML projects, and information concerning training/consultancy available through the ECML.
• ECML Using the ELP (https://elp.ecml.at/) contains information on how to use and implement the ELP.
• ECML ELP TT (http://elp-tt2.ecml.at/) is a site dedicated to an ECML project (2004–2007)—Training teachers to use the European Language Portfolio.
• ECML ELP WS (http://elp-wsu.ecml.at/) was an ECML project (2008–2011) tasked with implementing and evaluating whole-school ELP use.
Below is a short compilation of the available resources concerning the ELP in four key areas:
1. Developing and Registering an ELP
2. Using the ELP
3. Teacher Training
4. Whole School Use
1. Developing an ELP
A. The COE Homepage (https://www.coe.int/en/web/portfolio) includes:
• Documents on the ELP's origin, guiding principles and history
• Reports on the ELP project at the European level and international seminars
• Lists of accredited and registered ELPs
• A guide to compiling an ELP model
• Templates and other resources for compiling an ELP
• Some key publications on designing and using an ELP
B. Booklet: ELP: Guide for Developers (Schneider and Lenz 2001)
C. Booklet: Introduction to the bank of descriptors for self-assessment in European Language Portfolios (Lenz and Schneider 2004b)
D. Booklet: A bank of descriptors for self-assessment in European Language Portfolios (Lenz and Schneider 2004a)
E. Booklet: Bergen 'Can Do' Project (Hasselgreen and Blomqvist 2003), a product of the ECML medium-term program of activities for 2000–2003
2. Using the ELP
A. ECML Homepage: http://elp.ecml.at/, a product of the second ECML medium-term program of activities (2004–2007). On this site, you will find:
• ideas and tools for teacher education
• models and case studies of the ELP being used in different educational contexts
• ideas and tools to support the use of the ELP in different educational contexts
• projects run by the Council of Europe's European Centre for Modern Languages in Graz to support use of the ELP
B. Booklet: The European Language Portfolio: a guide for teachers and teacher trainers (Little and Perclová 2001)
C. Booklet: Enhancing the pedagogical aspects of the European Language Portfolio (ELP) (Kohonen and Westhoff 2003)
3. Teacher Training
A. Booklet: Preparing teachers to use the European Language Portfolio: Arguments, materials and resources (Little et al. 2007).
B. Homepage for ELP Teacher Training, with project results and exercises for reflection: http://archive.ecml.at/mtp2/Elp_tt/Results/.
Both the booklet and the homepage are products of the second ECML medium-term program of activities (2004–2007).
4. Whole-School Use
A. ECML Homepage: http://elp-wsu.ecml.at/, a product of the ECML medium-term program (2008–2011). On this site, a guide and case studies are available for using the ELP in whole-school projects.
B. Booklet: The European Language Portfolio: A guide to the planning, implementation and evaluation of whole-school projects (Little 2011).
A useful book
Kühn, B., & Perez Cavana, M. L. (Eds.). (2012). Perspectives from the European language portfolio: Learner autonomy and self-assessment. Abingdon: Routledge.
e-ELP
Few electronic versions of the ELP are available to the public. The most accessible is the ELP Switzerland, available at www.languageportfolio.ch.
Appendix 1 Application for Validation and Accreditation of an ELP Model
Questions appearing in the Application for Validation and Accreditation of an ELP Model (Schneider and Lenz 2001, Appendix E)
1. General/Contact Information of applicant.
2. Your ELP model:
2:1 Is a tool to promote plurilingualism and pluriculturalism?
2:2 Is the property of the learner?
2:3 Values the full range of the learner's language and intercultural competence and experience regardless of whether acquired within or outside formal education?
2:4 Is a tool to promote learner autonomy?
2:5 Has both a pedagogic function to guide and support the learner in the process of language learning and a reporting function to record proficiency in languages?
2:6 Is based on the Common European Framework of Reference with explicit reference to the common levels of competence?
2:7 Encourages learner self-assessment (which is usually combined with teacher assessment) and assessment by educational authorities and examination bodies?
2:8 Incorporates a minimum of common features (outlined in the Guidelines) which make it recognizable and comprehensible across Europe?
2:9 Caters for the specific needs of your target group?
3. Does your ELP passport section
3:1 allow an overview of the individual's proficiency in different languages at a given point in time?
3:2 allow the recording of formal qualifications and all language competencies regardless of whether gained in or outside formal educational contexts?
3:3 allow the recording of significant language and intercultural experiences?
3:4 allow the recording of partial and specific language competence?
3:5 allow the recording of self-assessment, teacher assessment and assessment by educational institutions and examination boards?
3:6 allow the user to record on what basis, when and by whom the assessment was carried out?
3:7 Is the overview defined in terms of skills or competencies as described in the levels of the Common European Framework of Reference? If not, specify how your levels relate to the CEF.
3:8 take account of your learners' needs according to age, learning purposes and contexts, and background?
3:9 ensure continuity between different educational institutions, sectors, and regions?
3:10 respect the European character of the ELP so as to promote mutual recognition of Portfolios within and across national boundaries?
4. Does the language biography
4:1 facilitate the learner's involvement in planning?
4:2 facilitate reflection upon the learning process?
4:3 facilitate reflection upon and assessment of progress?
4:4 encourage learners to state what they can do in each language?
4:5 encourage learners to include information on linguistic and cultural experiences gained in and outside formal educational contexts?
4:6 Is the biography organized to promote plurilingualism, i.e., the development of competencies in a number of languages?
4:7 Do the levels used match the levels in the Common European Framework? If not, explain how they relate.
4:8 Have the descriptors used been tested with the target population—are they transparent and understandable for the target age group?
4:9 Are the levels and descriptors used compatible with the curricula?
4:10 Are the assessment and evaluation criteria in harmony with the Common European Framework?
4:11 Are the levels and descriptors coherent with those used in ELP models in other educational sectors?
4:12 Are there any specific additional descriptors?
5. Does the dossier
5:1 offer the learner the opportunity to select materials to document and illustrate achievements or experiences?
5:2 allow for updating and re-organization?
5:3 encourage the development of plurilingualism?
5:4 encourage creative personal development as a learner and language learner?
6. General Principles
6:1 Is it possible in your context for learners who so wish to obtain and use your ELP? State the distribution channels and the cost involved for an individual learner.
6:2 Is the learner recognized as the owner of his/her ELP?
6:3 Will you ensure that the aims and the purpose of the ELP are understood by the learners and that they can understand the content? How?
6:4 Will the concept of European citizenship be promoted by providing a record of all language competencies and experiences, including, where appropriate, indigenous languages of minorities and languages of migrants?
6:5 Will other ELPs which individual learners may possess and wish to present or maintain be recognized, supported, and valued in your context?
References
Álvarez, I. (2012). From paper to the web: The ELP in the digital era. In B. Kühn & M. L. Perez Cavana (Eds.), Perspectives from the European language portfolio: Learner autonomy and self-assessment (pp. 125–142). Oxon: Routledge. Barrett, H. (2000). Electronic teaching portfolios: Multimedia skills + portfolio development = Powerful professional development. http://electronicportfolios.org/portfolios/aahe2000.html. Accessed June 2, 2018. Barrett, H. (2007). Researching electronic portfolios and learner engagement: The REFLECT initiative. Journal of Adolescent & Adult Literacy, 50(6), 436–449.
Barrett, H. (2011). Balancing the two faces of ePortfolios. British Columbia Ministry of Education: Innovations in Education (2nd edn). https://electronicportfolios.org/balance/. Accessed June 2, 2018. Beacco, J. C., Byram, M., Cavalli, M., Coste, D., Cuenat, M. E., Goullier, F., & Panthier, J. (2016). Guide for the development and implementation of curricula for plurilingual and intercultural education. Strasbourg: Council of Europe. https://rm.coe.int/ CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId= 09000016806ae621. Accessed January 15, 2018. Benson, P. (2008). Teachers’ and learners’ perspectives on autonomy. In T. Lamb & H. Reinders (Eds.), Learner and teacher autonomy: Concepts, realities, and response (Vol. 1, pp. 15–32). Amsterdam: John Benjamins Publishing. Benson, P. (2013). Teaching and researching autonomy. New York: Routledge. Blidi, S. (2017). Collaborative learner autonomy. Singapore: Springer. Boud, D. (1981). Towards student responsibility for learning. In D. Boud (Ed.), Developing student autonomy in learning (pp. 21–38). London: Kogan Page. Buschmann-Göbels, A., & Kühn, B. (2017). A CEFR-informed e-portfolio in blended learning at a German university. In F. O’Dwyer, M. Hunke, A. Imig, N. Nagai, N. Naganuma, & M. G. Schmidt (Eds.), Critical, constructive assessment of CEFR-informed: Language teaching in Japan and beyond (pp. 269–284). Cambridge: Cambridge University Press Byram, M. (1997). Teaching and assessing intercultural communicative competence. New York: Multilingual Matters. Council of Europe. (2001a). The common European framework of reference for languages: Learning, teaching, assessment. Cambridge: Cambridge University Press. Council of Europe (2001b). European language portfolio, accredited model No. 1.2000. Schulverlag plus AG. www.languageportfolio.ch. Accessed Dec 1, 2019. Council of Europe. (2011a). European language portfolio templates and resources—Language biography: Goal setting and learning how to learn. Strasbourg: Council of Europe. https://rm. coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId= 09000016804932c3. Accessed June 17, 2017. Council of Europe (2011b). European language portfolio templates and resources – Language biography: Intercultural experience and awareness templates. Strasbourg: Council of Europe. https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId= 09000016804932c1. Accessed June 17, 2017. Council of Europe (2011c). European language portfolio (ELP): Principles and guidelines, with added explanatory notes. Strasbourg: Council of Europe. https://rm.coe.int/16804586ba. Accessed July 7, 2018. Council of Europe. (2018). The common European framework of reference for languages: Learning, teaching, assessment. Companion volume with new descriptors. Strasbourg: Council of Europe. Council of Europe. (2019a). Common European framework of reference for languages: Learning, teaching, assessment (CEFR). https://www.coe.int/en/web/common-european-frameworkreference-languages/home. Accessed September 2, 2019. Council of Europe. (2019b). European language portfolio. https://www.coe.int/en/web/portfolio. Accessed November 29, 2019. Deardorff, D. (2006). Assessing intercultural competence in study abroad students. In M. Byram & A. Feng (Eds.), Living and studying abroad. Research and practice (pp. 232–276). Bristol: Multilingual Matters Ltd. Dörnyei, Z. (2014). Future self-guides and vision. In K. Csizér & M. Magid (Eds.), The impact of self-concept on language learning (pp. 7–18). 
Bristol: Multilingual Matters. Ellis, R. (2003). Task-based language learning and teaching. Oxford: Oxford University Press.
Goullier, F. (2009). The most frequent errors to be avoided when developing a new ELP model. European language portfolio templates and resources. Strasbourg: Council of Europe. https:// www.coe.int/en/web/portfolio/templates-of-the-3-parts-of-a-pel. Accessed September 11, 2017. Goullier, F. (2010a). How to compile the various components of a European language portfolio. European language portfolio templates and resources. Strasbourg: Council of Europe. https:// www.coe.int/en/web/portfolio/templates-of-the-3-parts-of-a-pel. Accessed September 11, 2017. Goullier, F. (2010b). Taking account of plurilingual and intercultural competence in European language portfolios. Strasbourg: Council of Europe. https://www.coe.int/en/web/portfolio/ templates-of-the-3-parts-of-a-pel. Accessed September 11, 2017. Hasselgreen, A., & Blomqvist, P. (2003). Bergen ‘Can Do’ project (Vol. 1). Strasbourg: Council of Europe. Holec, H. (1980). Learner training: Meeting needs in self-directed learning. In H. B. Altman & C. V. James (Eds.), Foreign language learning: Meeting individual needs (pp. 30–45). Oxford: Pergamon. Holec, H. (1981). Autonomy in foreign language learning. Oxford: Pergamon (First published 1979). Strasbourg: Council of Europe. Itoki, K. (2015). Nihonjin ga kaigai de saiko no shigoto wo suru houhou (A Method for working successfully overseas for Japanese). Tokyo: Eiji Press. Kennedy, F., Bruen, J., & Péchenart, J. (2011). Using an e-portfolio to facilitate the self-assessment of both language and intercultural learning in higher education: A case-study approach. Language Learning in Higher Education, 1(1), 229–247. https:// search.proquest.com/docview/1703448862?accountid=32466. Accessed March 10, 2018. Kohonen, V. (1992). Experiential language learning: Second language learning as cooperative learner education. In D. Nunan (Ed.), Collaborative language learning and teaching (pp. 14– 39). Cambridge: Cambridge University Press. Kohonen, V., & Westhoff, G. (2003). Enhancing the pedagogical aspects of the European language portfolio (ELP). Strasbourg: Council of Europe. Kühn, B. (2016). EPOS–the European e-portfolio of languages. Language Learning in Higher Education, 6(2), 335–354. https://www.researchgate.net/publication/267243829/download. Accessed December 4, 2017. Kühn, B., & Perez Cavana, M. L. (Eds.). (2012). Perspectives from the European language portfolio: Learner autonomy and self-assessment. Oxon: Routledge. Lai, C. (2017). Autonomous language learning with technology: Beyond the classroom. London: Bloomsbury. Lamb, T., & Reinders, H. (Eds.). (2008). Learner and teacher autonomy: Concepts, realities, and response (Vol. 1). Amsterdam: John Benjamins Publishing. Lenz, P., & Schneider, G. (2004a). A bank of descriptors for self-assessment in European language portfolios. Strasbourg: Council of Europe. https://rm.coe.int/168045b15f. Accessed January 15, 2017. Lenz, P., & Schneider, G. (2004b). Introduction to the bank of descriptors for self-assessment in European language portfolios. https://rm.coe.int/168045b15d. Accessed 15 January 2017. Little, D. (1990). Autonomy in language learning. In I. Gathercole (Ed.), Autonomy in language learning (pp. 7–15). London: CILT. Little, D. (1991). Learner autonomy 1: Definitions, issues and problems. Dublin: Authentik. Little, D. (1995). Learning as dialogue: The dependence of learner autonomy on teacher autonomy. System, 23(2), 175–182. Little, D. (1997). Language awareness and the autonomous language learner. Language Awareness, 6(2/3), 93–104. 
Little, D. (2007). Language learner autonomy: Some fundamental considerations revisited. Innovation in Language Learning and Teaching, 1(1), 14–29.
Little, D. (2011). The European language portfolio: A guide to the planning, implementation and evaluation of whole-school projects. Strasbourg: Council of Europe. Little, D. (2012). The European language portfolio: History, key concepts, future prospects. In B. Kühn & M. L. Perez Cavana. (Eds.), Perspectives from the European language portfolio (pp. 7–21). Oxon: Routledge. Little, D., Hodel, H., Kohonen, V., Meijer, D., & Perclová, R. (2007). Preparing teachers to use the European language portfolio: Arguments, materials and resources. Strasbourg: Council of Europe. Little, D., & Perclová, R. (2001). The European language portfolio: A guide for teachers and teacher trainers. Strasbourg: Council of Europe. Littlewood, W. (1999). Defining and developing autonomy in East Asian context. Applied Linguistics, 13(2), 71–94. Ohashi, L. (2018). Supporting out-of-class learning through in-class activities. Paper presented at the meeting of Shinshu Chapter of the Japan Association for Language Teaching (JALT), Nagano, Japan. O’Malley, J. M., & Chamot, A. U. (1990). Learning strategies in second language acquisition. Cambridge: Cambridge University Press. Oxford, R. (1990). Language learning strategies: What every teacher should know. Boston: Heinle & Heinle Publishers. Perez Cavana ML (2010) Assessing learning styles within the European language portfolio (ELP). In ELSIN XV: Exploring styles to enhance learning and teaching in diverse contexts, Proceedings of the 15th Annual Conference of the European Learning Styles Information Network, Universidade de Aveiro. Perez Cavana, M. L. (2012). Fostering strategic, self-regulated learning: the case for a ‘soft’ ELP. In B. Kühn, & M. L. Perez Cavana (Eds.), Perspectives from the European language portfolio: Learner autonomy and self-assessment (pp. 143–160). Oxon: Routledge. Ribe, R. (2003). Tramas in the foreign language classroom: Autopoietic networks for learner growth. In D. Little, J. Ridley, & E. Ushioda (Eds.), Learner autonomy in foreign language classrooms: Teacher, learner, curriculum and assessment (pp. 11–28). Dublin: Authentik. Sanchez-Villalon, A., & Sanchez-Villalon, P. P. (2014). Validation and resources in AIOLE environments for language learning. https://www.researchgate.net/profile/Pedro_SanchezVillalon/publication/282730961_Validation_and_Resources_in_AIOLE_Environments_for_ Language_Learning/links/561a346908ae044edbb1df89.pdf. Accessed May 27, 2018. Schärer, R. (2012). Between vision and reality: Reflection on twenty years of a common European project. In B. Kühn, & M. L. Perez Cavana (Eds.), Perspectives from the European language portfolio: Learner autonomy and self-assessment. (pp. 45–58). Oxon: Routledge. Schneider, G., & Lenz, P. (2001). European language portfolio: Guide for developers. Strasbourg: Council of Europe. https://rm.coe.int/1680459fa3. Accessed June 26, 2017. Skehan, P. (1998). A cognitive approach to language learning. Oxford: Oxford University Press. Skehan, P., & Foster, P. (2001). Cognition and tasks. In P. Robinson (Ed.), Cognition and second language learning (pp. 183–205). Cambridge: Cambridge University Press. Smith, R. C. (2003). Pedagogy for autonomy as (becoming-)appropriate methodology. In D. Palfreyman & R. C. Smith (Eds.), Learner autonomy across cultures: Language education perspectives (pp. 129–146). Basingstoke: Palgrave Macmillan. Szabo, T. (2018a). Collated representative samples of descriptors of language competences developed for young learners. Volume 1: Ages 7–10 2018 edition. 
Strasbourg: Council of Europe. https://rm.coe.int/collated-representative-samples-descriptors-young-learners-volume1-ag/16808b1688. Accessed August 8, 2019. Szabo, T. (2018b). Collated representative samples of descriptors of language competences developed for young learners. Volume 2: Ages 11–15 2018 edition. Strasbourg: Council of Europe. https://rm.coe.int/collated-representative-samples-descriptors-young-learners-volume2-ag/16808b1689. Accessed August 8, 2019.
Wenden, A. (1995). Learner training in context: A knowledge-based approach. System, 23(2), 183–194. Wenden, A. (1998). Metacognitive knowledge and language learning. Applied Linguistics, 19(4), 515–537. Yabuta, Y. (2019). Deep Active Learning wo katsuyou shite ibunka rikai taiken wo sagaka saseru kokoromi. Ibunka communication noryoku no ikusei wo mezashite. (Developing intercultural competence through deep active learning). Journal of the Chubu English Language Education Society, 48, 205–212.
5 Integrating Learning, Teaching, and Assessment
This chapter discusses the role of the Council of Europe (COE)'s Common European Framework of Reference for Languages (CEFR) (2001) and tasks for integrating the topics addressed in this book—course design, assessment, and autonomous learning—within institutions and classrooms, respectively. After reexamining the action-oriented approach and the view of learners as social agents, the role of tasks in connecting the classroom to real-world contexts of language use is explained in relation to the CEFR's concepts of domains, situations, conditions and constraints, and themes. This is followed by a description of how tasks provide students with opportunities for language and strategy use, communicative language competence development, and assessment. In Sect. 5.2, the CEFR as an integrative tool will be discussed using concrete procedures for aligning an existing curriculum to the CEFR, and key resources for linking 'Can Do' descriptors to language (e.g., grammar). This is followed by a thorough review of Task-Based Language Teaching (TBLT) literature to show how an action-oriented approach can be implemented in the classroom. After defining tasks, three approaches to TBLT are introduced together with the role of explicit knowledge (developed in part from grammar/vocabulary instruction) in second language acquisition (Ellis 2003, 2009). The stance taken by Ellis (2003), the weak-interface position, provides the theoretical justification for linking tasks with the language necessary to fulfill these communication acts (Sect. 5.2.2). Guidelines for implementing TBLT and ensuring appropriate task difficulty are then reviewed. Section 5.3 illustrates how CEFR concepts concerning course design, TBLT, and self-assessment can be integrated in a general English university program in Japan (Nagai 2010). The exercises in Sect. 5.4 focus on the measures that can be taken to ensure that a CEFR-informed curriculum is enacted by teachers and learners in a way consistent with the CEFR's philosophy and principles. The case studies in Sect. 5.5 provide further examples of TBLT implementation for developing and assessing communicative language competence (Case Studies 5.5.1 and 5.5.2) and intercultural competence (Case Study 5.5.2).
5.1 The Role of the CEFR in Integrating Learning, Teaching, and Assessment
In this chapter, we will discuss the role of the CEFR for integrating learning, teaching, and assessment. On a larger scale, this role includes facilitating transparency and coherence between institutions, educational sectors, regions, and countries. Here, we are primarily concerned with how it facilitates transparency and coherence between curriculum, teaching, and assessment within an institution. This coordination is made possible through the application of the common descriptive scheme (see CEFR/CV, COE 2018: Fig. 1), Common Reference Levels, and illustrative descriptors defining aspects of the scheme at the different CEFR levels (COE 2001: Chaps. 3 and 4) and the CEFR/CV (COE 2018). In this book, the process of utilizing the CEFR as a tool to assist in the planning and articulation of goals for curricula, courses, and examinations was described separately in Chaps. 2 and 3 along with the ELP’s role for reflecting on the language learning process, progress, and intercultural experiences (Chap. 4). In this chapter, we consider them together. The starting point for this planning is always determining “what the users/learners need to be able to do in the language” (COE 2018: 26). This stance is clearly articulated in the CEFR’s action-oriented approach (Sect. 1.1.2.1). If the CEFR can be used to integrate learning, teaching, and assessment within a curriculum and courses, then communicative tasks can fulfill this role within individual lessons (although some tasks may take several lessons to complete). After reviewing the action-oriented approach, the function of tasks within the CEFR is examined.
5.1.1 Action-Oriented Approach
The aims and objectives of the CEFR (COE 2001, Chap. 1) are realized through its action-oriented approach—that language learning should be directed toward enabling learners to act in real-life situations and be assessed according to their ability to do so in relation to a continuum of proficiency levels (A1–C2). The CEFR is methodology neutral,1 and users are not limited to a task-based approach and its accompanying methodological restrictions. Some forms of task-based learning (Long 2015; Skehan 1998; Willis 1996) primarily promote unfocused tasks (i.e., tasks without a linguistic focus), and linguistic aspects are dealt with through corrective post-task feedback. Furthermore, as North (2014: 108) points out, 'Can Do' descriptors are not limited to language activities—a third of the illustrative scales concern learner competences, including six concerning linguistic competences. While an action-oriented approach may provide teachers with a little more methodological leeway, it still implies that the central pedagogic unit in the classroom is purposeful, collaborative tasks that resemble real-life tasks and whose primary focus is language use, not specific language forms per se. In short, the CEFR "propagates language learning for a social purpose, not an intellectual pursuit" (North 2014: 107).
1 This is a complex issue as the CEFR is a reference document, and so its function is not to advocate one approach to language teaching in preference to others. What is more, the Council of Europe respects subsidiarity: the right of individual member states to determine educational policy and practice. At the same time, however, the CEFR's action-oriented approach has clear pedagogical implications, perhaps chief among them (i) language learning through target language use and (ii) a task-based approach.
5.1.2 Tasks as Integrative Tools
Chap. 2 discussed how to utilize the CEFR to establish course aims (Sect. 2.3.2). The process involved five steps to identify the relevant domains, language activities, levels, communicative competences, and strategies to serve as the foci of a course. Briefly, the steps are:
(1) What domains of language activity are the learners most likely to be involved in?
(2) What communicative language activities do they need to be able to perform?
(3) What is the proficiency level they need to attain?
(4) What language competences are necessary to perform the selected communicative language activities at a given proficiency level?
(5) What strategies enable them to use the language competences identified to perform the target language activity?
Once the course aims have been decided, teachers are responsible for interpreting these aims, considering the most appropriate methodology, and preparing teaching materials that provide students with opportunities for language practice, instruction, and feedback. However, after going through this process, it may be beneficial to revisit the CEFR. For example, Chap. 4 of the CEFR (COE 2001: 43–100) includes a detailed list of the categories needed to describe contexts of language use, which teachers may find useful. These categories are briefly summarized in Sect. 1.2 (they are reproduced here for ease of reference); see CEFR Chap. 4 for more detailed information. The context of language use (CEFR 4.1) includes:
Domains Each act of language use is set in the context of a specific situation within the personal, public, occupational, and/or educational domain (see Chap. 2 for definitions).
Situations The external situations which arise in each domain. See CEFR Table 5 (External context of use: descriptive categories) for examples of different aspects of situations, such as the locations and times in which language use occurs, the institutions and/or persons involved, etc.
Conditions and Constraints The external conditions (e.g., physical and social conditions) under which communication occurs and the various constraints (e.g., time pressure) that users face.
Themes (CEFR Sect. 4.2) The focus of attention in particular communication acts (e.g., the subjects of discourse, conversation, reflection, or composition).
Tasks (CEFR Sect. 4.3) The communication acts a language user undertakes in pursuing individual needs in a given situation (see CEFR Chap. 7 and Sect. 5.2.3).
Language Activities and Strategies (CEFR Sect. 4.4) The illustrative scales for reception, production, interaction, and mediation, and the strategies covering pre-planning, execution, monitoring, and repair action for language activities (described in Chap. 2).
Communicative Language Processes (CEFR Sect. 4.5) The planning, execution, and monitoring of language use.
Texts (CEFR Sect. 4.6) Any piece of language, spoken or written, which users/learners receive, produce, or exchange.
Considering all these aspects when preparing curriculum and course aims may seem overwhelming, but following the process described in Chap. 2 makes the task manageable. Some teachers, however, may face a different challenge: aligning an existing curriculum or course to the CEFR and/or adapting or supplementing teaching materials not designed with the CEFR in mind. This is addressed in Sect. 5.2.1. Regardless of the starting point, teachers will need to design and/or implement tasks. Therefore, the relationship between tasks and the CEFR categories for describing contexts of language use is examined next (Fig. 5.1). Tasks are defined in the CEFR (COE 2001: 10) as follows:
A task is defined as any purposeful action considered by an individual as necessary in order to achieve a given result in the context of a problem to be solved, an obligation to fulfil or an objective to be achieved.
Fig. 5.1 Tasks as an integrative tool. Adapted from Goullier (2007) with permission from © Council of Europe
The Relationship Between Tasks and the Contexts of Language Use (CEFR Sects. 4.1 and 4.2)
The common aspects of this and other definitions of tasks are that meaningful language use bears some resemblance or connection to the real world and results in a concrete outcome (e.g., the solving of a problem). Although the educational domain is the obvious first choice for teachers, "it should be noted that in many situations more than one domain may be involved … and the occupational and educational domains largely coincide" (COE 2001: 45). In other words, when designing tasks for the classroom, teachers often envision future situations that learners will find themselves in, the conditions and constraints under which these tasks will be performed, and the various themes learners will have to deal with. Alternatively, teachers may employ a bottom-up approach, imagining an authentic real-world task and the situation and themes that it implies. Within a classroom, it may not be possible (or appropriate) to replicate the conditions under which these tasks are performed. However, much research has been done in relation to pedagogic tasks and the conditions and constraints affecting task performance, namely task difficulty (e.g., number of participants, type of information), task characteristics (e.g., structured tasks, tasks requiring complex decisions) and task implementation (e.g., time and resources available during preparation/planning), and their effect on performance (see Skehan 1998). Tasks, through the requirement of goal-oriented meaningful language use, have a clear connection to the various contexts of language use described in CEFR Sects. 4.1 and 4.2. Piccardo and North (2019: 141) argue that tasks should be linked to contexts of language use through scenarios (Di Pietro 1987) or simulations2 "in which learners as social agents are real people acting as themselves in realistic contexts." The justification for framing tasks within such scenarios is that students will engage personally with the tasks and exercise more agency by taking responsibility for what needs to be done to reach the mutually agreed upon goals. The danger of utilizing decontextualized tasks and role plays, on the other hand, is that students are forced to adopt artificial roles as other people and will find these tasks neither engaging nor motivating.
The Relationship Between Tasks and Language Activities, Strategies, and Texts
Here again, the connection between tasks and productive, receptive, interactive, and mediation activities is evident. These activities are defined using specific descriptions of language use at various levels and the nature of the texts involved. Modified descriptors can be used as a specific task goal, and performance can be assessed according to the degree to which the task is successfully completed. Texts can be defined in terms of output for productive tasks or input for receptive ones. To carry out communicative tasks, it is necessary to employ communication strategies (COE 2001: 57). It will be useful to review the definition and role of strategies.
2 The course described in Case Study 1 at the end of the chapter (Sect. 5.5.1) is essentially a business simulation which provides a realistic context for cohesively linking various tasks (see Table 5.18) that the students, as new company employees, must complete. This resembles the practical examples provided in Piccardo and North (2019: Chap. 7).
Strategies are a means the language user exploits to mobilize and balance his or her resources, to activate skills and procedures, in order to fulfill the demands of communication in context and successfully complete the task in question in the most comprehensive or most economical way feasible depending on his or her precise purpose (COE 2001: 57).
Both language activities and strategies are dealt with together in CEFR Sect. 4.4, but following Goullier (2007), it might be better to give strategies a more prominent role by listing them separately (as in Fig. 5.1). This parallels the importance given to them in other parts of the CEFR (i.e., there are dedicated illustrative scales for strategies). As mentioned earlier, there are four main types of strategies. For each language activity, they are broken down further. For example, receptive strategies for the four categories are listed as follows, but an illustrative scale only exists for execution.
• Pre-planning: framing (selecting mental set, activating schemata, setting up expectations);
• Execution: identifying cues and inferring from them;
• Monitoring: hypothesis testing—matching cues to schemata;
• Repair Action: revising hypotheses.
Tasks provide learners with an opportunity to apply these strategies. Awareness and development of them can be facilitated through reference to the illustrative scales and explicit instruction.
The Relationship Between Tasks and General and Communicative Language Competences
In order to carry out tasks, users and learners draw upon a number of competences developed in the course of their previous experience. In return, participation in communicative events (including, of course, those events specifically designed to promote language learning), results in the further development of the learner's competences, for both immediate and long-term use. (COE 2001: 101)
The reason for the double-sided arrow in Fig. 5.1 should now be clear. Although tasks provide learners with opportunities to 'draw upon' their general and communicative language competence during meaningful language use, task performance itself and feedback on this performance hopefully lead to gains in these competences. Feedback may come through monitoring (during the task) or reflection (post-task), from students and/or teachers (e.g., corrective feedback). Of course, tasks may also be designed with a specific focus in mind (i.e., form-focused tasks). Linguistic competence (e.g., grammar and vocabulary) may be a separate strand within syllabi where tasks play a less central role (task-supported versus task-based language learning). This is discussed in Sect. 5.2.3.
The Relationship Between Tasks and Assessment
Last, the identifiable output or outcome of a task can be assessed using task completion and performance in relation to 'Can Do' descriptors for language activities and language competences, as well as the qualitative aspects of spoken language use (an updated version of CEFR Table 3 can be found in COE 2018—Appendix 3) and the Written assessment grid (COE 2018—Appendix 4). The methodology for designing a teaching task can also be followed when designing one for assessment, as "the same features that make a valid classroom task will also help to make a valid assessment task" (North 2014: 157).
5.2 CEFR as an Integrative Tool
The CEFR’s role in curriculum and course design is discussed as a ‘top-down’ process in Chap. 2. However, there are also resources for aligning an existing curriculum to the CEFR, for specifying the communicative focus and linguistic content of a course (i.e., syllabus templates) (Sect. 5.2.1) as well as the language (e.g., grammar, vocabulary) implied by and linked to the ‘Can Do’ descriptors (Sect. 5.2.2). These course aims are realized in the classroom through tasks, which are discussed here using the definition, justification, and guidelines for tasks from the CEFR and expanded upon using TBLT and second language acquisition literature (Sect. 5.2.3).
5.2.1 Aligning a Curriculum to the CEFR
Curricular aims can be devised using the global scale (COE 2001: Table 1) and the self-assessment grid (COE 2001: Table 2; COE 2018: Appendix 2) to give an overall picture of a language program. It is also necessary to take into account such factors as learners' needs and current proficiency levels and the time and resources available (as well as externally imposed constraints). Curricular aims then need to be broken down and addressed in courses whose aims are described using the illustrative scales. Most curricular reform, however, may be more modest in the sense that teachers may prefer (or be required) to refine rather than replace an existing curriculum. This approach is also encouraged in the literature; tools are available to aid teachers in aligning a language program to the CEFR. One useful resource is an Eaquals initiative with a self-help guide for curriculum and syllabus design (Matheidesz and Heyworth 2007). This is supplemented by a number of case studies (Eaquals 2008) documenting the application of this guide and CEFR 'Can Do' descriptors in various contexts and summarized by North (2014), who served as the Chair of Eaquals (Evaluation and Accreditation of Quality Language Services) at the time of this initiative, and given book-length treatment in North et al. (2018). When aligning a curriculum to the CEFR, the starting point is critically examining the language program in question. Table 5.1 overviews the areas covered and the available resources (adapted for this book). The guide breaks each area down further, encouraging reflection in reference to CEFR principles, and provides a variety of worksheets and templates that exemplify how to communicate curriculum and syllabus aims and level descriptors in relation to CEFR scales (e.g., creative writing and grammatical accuracy).
Table 5.1 Overview of the curriculum design process and Eaquals self-help guide (columns: the key stages of the curriculum design process; resources available within the self-help guide, Matheidesz and Heyworth 2007)

Key stage: The school's educational philosophy. What does this school believe about learning a language?
Resources: Developing a curriculum statement using CEFR principles
• Examples of school curriculum and syllabi from Eaquals
• Summary of CEFR principles
Task 1: Are the CEFR principles reflected in your own school's educational philosophy?
Task 2: Review and choose an appropriate model from samples of curriculum statements to mimic
Task 3: Write a curriculum statement

Key stage: Objectives. What should students be able to do (CEFR 'Can Do' statements), and what do they need to know at any given level to do it? How does this relate to exams used in the school?
Resources: Developing a framework of levels with reference to the global Scale, the Self-Assessment Grid and descriptor scales
Task 4: Have the CEFR levels been incorporated into your school's levels? Has this been done accurately and meaningfully?
Task 5: Assess curriculum statements from Task 3 in reference to the above scales

Key stage: Methods, techniques. How is this learning to be achieved? What methods and techniques should teachers use in their classroom?
Resources: Formulating a general statement of means used to achieve these objectives
Task 6: Does the methodology statement adopt a task-based approach in formulating teaching aims (see CEFR Chaps. 6 and 7) and use the competence scales?
Task 7: Review the curriculum statement in light of the methodology statement and revise accordingly

Key stage: Syllabus; schemes of work; progress. What language and micro-skills will be learnt? How long is a level likely to take? How are specific periods of teaching (week, month, term) planned? How are lessons planned? How are learners informed about planning?
Resources: Developing your syllabus: adopt CEFR and adapt your syllabus (here, a syllabus is defined as a planning document describing specific units of the learning-teaching program)
Task 8: Review and evaluate course syllabuses using the questions provided in the guide
Task 9: With the help of the syllabus template (Appendix 4), review a syllabus for a course to ensure all the required language elements are listed (e.g., vocabulary items and language functions) and that global learning aims and related subskills are specified
Task 10: Review a scheme of work (a detailed description of a portion of the syllabus, such as a weekly plan). For example, are the short term objectives in line with the course syllabus and curriculum aims?
Task 11: Review a lesson plan for an individual lesson to check if it includes reference to general and specific lesson objectives

Key stage: Assessment; pre/during/post. How are learners placed in classes? How and at what intervals is progress assessed? What assessment is there at the end of the course? What form of certification is given?
Resources: Assessment in the language school. How are the results of teaching assessed? If goals are specified through 'Can Do' descriptors, are learners' actual performances assessed? Are they in relation to the competence scales?
Task 12: Review the curriculum statement, descriptions of methods and syllabus contents. Check whether they apply a performance-based approach to specifying aims and objectives that are compatible with your general principles of teaching

Key stage: Coherence. Achieving coherence at all levels of planning and teaching
All necessary documents appear in the guide's appendices (Matheidesz and Heyworth 2007). Information on the curriculum design process can also be found in North (2014: 119) and North et al. (2018: Chap. 5). In Sect. 5.4, readers are asked to reflect on the curriculum development and implementation process. Developing aims (descriptions of language activities, strategies, and competences) for a specific course is covered in Chap. 2. However, a syllabus template is not provided there, and the language necessary to realize these communication tasks is not identified. Therefore, readers are encouraged to refer to North et al. (2018), which builds upon the Eaquals guide (Matheidesz and Heyworth 2007) and the collection of accompanying case studies (Eaquals 2008). In particular, readers will benefit from the examples and discussion of how to display communicative objectives and the language resources necessary to complete real-world tasks successfully (= grammar, vocabulary). Examples of how communicative objectives (= 'Can Do' descriptors) can be cross-referenced to various resources (e.g., textbook units to practice relevant 'Can Do' descriptors or to study the grammar/vocabulary associated with the descriptors) are also provided. The language resources that should also be included in syllabi are discussed next.
5.2.2 Linking 'Can Do' Descriptors to Language
As mentioned earlier, two common misconceptions (North 2014) are (1) that curriculum or course aims should be defined using descriptors for language activities alone, without the accompanying competence scales, and (2) that course syllabi need not include reference to the language necessary to realize these communication acts. This section overviews some of the resources linking 'Can Do' descriptors to language. Interested readers can take advantage of these resources to better understand what language should be focused on at each CEFR level. The first is the British Council—Eaquals 'Core Inventory for general English' (North et al. 2010),
which attempts to answer this very question. Through an analysis of language implied by CEFR descriptors, content common to the syllabuses of Eaquals members, the content of different series of popular course books, and teacher surveys, the Core Inventory identifies the functions, grammar, discourse markers, vocabulary, and topics common to each CEFR level (e.g., A2). “Each language point appears at the level(s) at which it is considered of most relevance to the learner in the classroom”; however, it is stressed that this Core Inventory is not a detailed guide for course book or exam developers (North et al. 2010: 8). Instead, it is intended as a reference work (not as a practical tool) for teachers and learners. It is also important to point out that there is more agreement about what should be included at the lower levels (A1-B2) than the higher ones (C1-C2).3 While the Core Inventory features language items necessary at a specific CEFR level, the next resource examines how learners are using the language at the different levels. For English, this research project is known as the English Profile Program. It is responsible for producing Reference Level Descriptions (RLDs), resulting in a number of resources including a series of books published by Cambridge University Press (English Profile Studies) and a web site (www.englishprofile.org). (As mentioned in Sect. 1.3.3, RLDs are available in numerous languages on the COE CEFR homepage, but these are at varying states of development). RLDs provide detailed language-specific guidance for CEFR users. English Profile “concentrates on the description of linguistic ability in specific areas of the English language (vocabulary, grammar, language functions, etc.) across all six CEFR levels, using empirical data from learner corpora and curricula to inform its research findings” (CUP 2019). Like the Eaquals guide, English Profile informs teachers about which linguistic features to concentrate on in a course, although it “is not attempting to provide English Language Teaching (ELT) professionals with a definitive set of language points that they should teach at each level” (Harrison 2015: 3). Deciding what language points to include in the syllabus should be based on a needs analysis to identify the tasks and level learners should focus on. The reference works cited here can help teachers align the language focus and task at an appropriate level. Keep in mind that the six Common Reference Levels, defined in terms of performance standards, are based on earlier works describing content specifications. For example, the CEFR B1 level is based on Threshold4 (van Ek 1976; van Ek and Trim 2001). According to North (2014: 14), Threshold is a notional/functional specification of the language knowledge and skills needed to visit or live in another country. It provides lists of relevant situations and texts plus a detailed analysis of the general notions (like space, time, possibility, probability), the specific notions (more akin to topics) and the language functions that people will need in such situations, providing appropriate language exponents, together with an analysis of the requisite syntactic, morphological and phonological content operationalised in them.
3 One reason for this is that the lower levels focus on language learning for general, everyday communication, whereas the higher levels imply a wide range of academic and professional focuses.
4 It is worth noting that the Threshold level focuses on spoken language.
From its origins in content specification for different learner levels, the work of the Council of Europe in the 1980s and 90s increasingly focused on developing a framework of objectives, which ultimately led to the publication of the CEFR in 2001. The research now appears to be coming full circle, focusing once again on content specification, due in part to COE guidelines to develop Reference Level Descriptions (RLDs) for national and regional languages.
5.2.3 Task-Based Language Teaching

Section 5.1 discussed tasks as a tool for integrating key CEFR concepts. Next, this topic is explored in greater detail in relation to the literature on Task-Based Language Teaching (TBLT). The definitions of tasks in the CEFR’s early chapters are comprehensive, describing language use in the real world; they are not limited to classroom and pedagogic tasks. Furthermore, strategies are not restricted to those intended to help learners overcome deficiencies in language competence. They also include strategies that enable users to exploit their resources and skills to fulfill a communicative need. In CEFR Chap. 7, the definition of tasks narrows to focus on classroom tasks that resemble communication acts students are likely to encounter in the real world (e.g., ‘real-life’, ‘target’, ‘rehearsal’ tasks), including pedagogic tasks modified for teaching or testing purposes. Classroom tasks not only serve as a bridge to real-world language performance; they are

based on the principle that language learning will progress most successfully if teaching aims simply to create contexts in which the learner’s natural learning capacity can be nurtured rather than making a systematic attempt to teach the language bit by bit (as in approaches based on a structural syllabus) (Ellis 2009: 222; emphasis added)
Thus, the justification for using tasks is not just their link to real-world contexts, but also their ability to further develop learner communicative competence. Effective TBLT implementation is the focus of this section.5

Definition of and criteria for a pedagogic task

Before further examining the theoretical underpinnings of TBLT, it is useful to review the CEFR definition of a pedagogic task:

Communicative pedagogic tasks … aim to actively involve learners in meaningful communication, are relevant (here and now in the formal learning context), are challenging but feasible (with task manipulation where appropriate), and have identifiable … outcomes. Such tasks may involve ‘metacommunicative’ (sub)tasks, i.e. communication around task implementation and the language used in carrying out the task (COE 2001: 157–8).

5 The definition of task in the CEFR is broader than those found in the TBLT literature and discussed here, as our focus is limited to classroom tasks. Within classroom tasks, Ellis (2017: 508) distinguishes between ‘real-world’ tasks, which prioritize situational authenticity as they are based on real-world target tasks, and ‘pedagogic’ tasks, which prioritize “interactional authenticity (i.e., the kind of natural language processing found in communication in the world outside the classroom)” but which lack situational authenticity.
This definition clearly incorporates the criteria common to most definitions, as outlined in Ellis (2009: 223), namely:

1. The primary focus should be on ‘meaning’ (by which is meant that learners should be mainly concerned with processing the semantic and pragmatic meaning of an utterance).
2. There should be some kind of ‘gap’ (i.e., a need to convey information, to express an opinion, or to infer meaning).
3. Learners should largely have to rely on their own resources (linguistic and non-linguistic) in order to complete the activity.
4. There is a clearly defined outcome other than the use of language (i.e., the language serves as the means for achieving the outcome, not as an end in its own right).

Through meaningful language use, learners can see how pedagogic tasks used ‘here and now’ in the classroom are relevant, because these tasks are generally linked to real-world tasks and ‘Can Do’ descriptors in a transparent manner. The tasks learners encounter should be challenging and feasible because the teacher has tried to ensure an appropriate level of difficulty, enabling the learners to complete the task using their own resources. Last, both definitions include reference to a clearly defined outcome. The inclusion of ‘metacommunicative’ tasks that promote reflection on task performance is a position favored by Willis (1996).

Focused Tasks/Unfocused Tasks/Situational Grammar Exercises

When describing tasks, it is also necessary to consider the language that will be required to complete the task (see Sect. 5.2.2). Some proponents of task-based learning (Skehan 1998) see tasks as the central pedagogic unit and rely on ‘unfocused’ tasks, which provide learners with opportunities to use language to communicate without a predetermined linguistic focus. Long (1985) and Ellis (2003) argue for the judicious use of ‘focused’ tasks designed with a specific linguistic feature in mind, even though that focus initially remains hidden from the learners. Ellis (2017: 511) suggests these tasks “can be used to raise learners’ awareness of the function or semantic meanings of linguistic features”, particularly those representing learning problems that persist even in advanced stages of language learning (e.g., subject-verb agreement). Long (2016: 8) also acknowledges the need to intervene “(t)o deal with persistent errors with a non-salient target language feature.” An activity in which learners are aware of the linguistic focus is a situational grammar exercise (Ellis 2009). In tasks, learner attention should be directed to ‘form’ (accurate language use) while communicating, as opposed to situational grammar exercises, whose focus is generally one pre-determined form (or rule) at a time. This distinction is known as ‘Focus on Form’ versus ‘Focus on FormS,’ respectively. Ellis (2009) stresses that tasks are not of greater pedagogic value than situational grammar exercises, but that it is important to distinguish between them.
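To make the distinction between tasks and situational grammar exercises more concrete, the following sketch is a hypothetical illustration in Python, not drawn from the CEFR or from Ellis: it records a teacher's yes/no judgments against the four criteria above and labels an activity accordingly. The activity names and the decision rule are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """A teacher's yes/no judgments about a classroom activity,
    following the four criteria summarized in Ellis (2009: 223)."""
    name: str
    primary_focus_on_meaning: bool   # learners process semantic/pragmatic meaning
    has_gap: bool                    # information, opinion, or reasoning gap
    relies_on_own_resources: bool    # learners draw on their own (non-)linguistic resources
    outcome_beyond_language: bool    # a defined outcome other than language display
    preannounced_single_form: bool = False  # learners told in advance which form to practice

def classify(activity: Activity) -> str:
    """Label an activity as a task or as something closer to an exercise."""
    criteria = (
        activity.primary_focus_on_meaning,
        activity.has_gap,
        activity.relies_on_own_resources,
        activity.outcome_beyond_language,
    )
    if all(criteria) and not activity.preannounced_single_form:
        return "task (Focus on Form possible while communicating)"
    if activity.preannounced_single_form:
        return "situational grammar exercise (Focus on FormS)"
    missing = [name for name, ok in zip(
        ("meaning focus", "gap", "own resources", "non-linguistic outcome"), criteria) if not ok]
    return "not yet a task: missing " + ", ".join(missing)

# Example: two versions of the same speaking activity
survey = Activity("class survey on phone use", True, True, True, True)
drill = Activity("ask questions using 'How often...?'", True, False, False, False,
                 preannounced_single_form=True)
print(classify(survey))
print(classify(drill))
```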
5.2.3.1 A Comparison of Three Approaches to TBLT and a Justification for Explicit Form Instruction

The CEFR is teaching-methodology neutral in that it does not prescribe using tasks, although it does favor them (CEFR 2001: Chap. 7). This is understandable because there is little agreement about the single best way to implement TBLT. A review of three TBLT approaches—Long (1985), Skehan (1998), and Ellis (2003)—can be found in Ellis (2009: 225) (Table 5.2). All agree on the necessity of natural language use and a Focus on Form, but disagree on the role of learner-centeredness (as manifested in the centrality of small group work),6 the role of focused tasks, and the rejection of more traditional approaches to language teaching.

Looking at Table 5.2, the most striking difference between the three approaches is their stance toward more traditional language teaching approaches. For Ellis (2003), explicit instruction of linguistic features, common in more traditional teaching approaches, plays an important role, as explicit knowledge facilitates developing implicit knowledge, which is

the intuitive knowledge of language that underlies the ability to communicate fluently in the L1. It manifests itself in actual language performance and is only verbalizable if it is converted into explicit knowledge (343).
In what is known as the weak-interface position, Ellis (2003: 106) argues there are two roles for explicit knowledge of a linguistic feature. First, it “serves to prime attention to form in the input and thereby activate the processes involved in the acquisition of implicit knowledge.” The second role is to assist in noticing the gap between what the learner is saying, how the feature is used in the input, and what they know consciously about it. Feedback may also contribute to this awareness. In contrast, proponents of the non-interface position (e.g., Krashen (1981) and the natural approach) argue that explicit knowledge does not lead to implicit knowledge because these two entities are separate and do not interact, while the strong-interface position (e.g., DeKeyser (1998), based on skill-building theory) argues that declarative knowledge can become procedural knowledge.

Teachers who see a prominent role for explicit instruction, and whose syllabus clearly features a linguistic strand that shares equal weighting with tasks, might prefer to describe their methodology as task-supported language teaching (Ellis 2003: 29–30). On the other hand, when tasks are prioritized and serve as the central pedagogic unit for describing the focus and content of the syllabus, and when the linguistic focus occupies a subsidiary role, their methodological stance more closely follows the work of Long (1985, 2015), Skehan (1998), and Willis (1996) and would best be described as adhering to the strong form of TBLT. The role of explicit instruction is but one issue that remains controversial.

6 This is a narrow interpretation of learner-centeredness. Within TBLT, Van den Branden (2006a: 10) argues for a wider view, similar to the position taken in the CEFR (see Chap. 4), namely that the learner “is given a fair share of freedom and responsibility when it comes to negotiating course content, choosing linguistic forms from his own linguistic repertoire during task performance, discussing various options for task performance and evaluating task outcomes.”
Table 5.2 A comparison of three approaches to TBLT (Ellis 2009: 225)

Characteristic | Long (1985) | Skehan (1998) | Ellis (2003)
Natural language use | Yes | Yes | Yes
Learner-centeredness | Yes | Yes | Not necessarily
Focus on form | Yes—corrective feedback | Yes—mainly pre-task | Yes—all phases of TBLT lessons
Tasks | Yes—unfocused and focused (1985) | Yes—unfocused | Yes—unfocused and focused
Rejection of more traditional teaching approaches (e.g., PPP) | Yes | Yes | No
Sometimes, these issues stem from different positions taken by advocates of TBLT (Ellis 2017), while others are critiques of TBLT based on misconceptions that fortunately have been addressed (Ellis 2009; Long 2016). Also, research into TBLT is ongoing. For a summary of the findings which have or have not been incorporated into classroom practice (and have been over-applied), see East (2017). A detailed discussion of the origins, current shape, and potential directions of TBLT can be found in Bygate (2015, 2016) and Ellis (2018). Teachers interested in bridging the gap between theory and practice will benefit from reading Van den Branden (2006b). Readers may also want to visit the International Association for Task-Based Language Teaching’s web site (www.iatblt.org).
5.2.3.2 Task Selection

There are three basic ways to select tasks. The first, used in Chap. 2, is to design courses around CEFR ‘Can Do’ statements, which results in a list of tasks. The second method was alluded to in Sect. 5.2.1—aligning a curriculum to the CEFR. This involves mapping the CEFR onto an existing curriculum and assigning (and modifying) ‘Can Do’ statements to the existing activities. The overview of TBLT outlined in this chapter can help determine the degree to which activities can be described as tasks and how they can be improved. A third method for selecting tasks is based on a thorough needs analysis7 to determine the most relevant domains, language use situations, and tasks. It is described in Van Avermaet and Gysen (2006: 27–28). A distinguishing feature of this paper, and of others that appear in Van den Branden (2006a: 1), is that the pedagogic proposals stem from investigations into the nation-wide implementation of TBLT in Dutch as a second language programs in Belgium rather than from psycholinguistic studies conducted under tightly controlled settings to elaborate our knowledge of second language acquisition, with the former approach offering vital insight into the application of theory to practice.

7 For detailed information on needs analysis, see Long (2005). A simplified version using CEFR descriptors can be found in North et al. (2018).

The steps outlined in Van Avermaet and Gysen (2006) are:
(1) “lists of potentially relevant domains and language use situations were presented to a sample of stakeholders, including the learners involved and other relevant parties. From these lists, the most crucial domains and language use situations were selected” (27).
(2) “the list of selected domains and language use situations was presented to experts in the field, who were asked to refine and complete the list. In this second phase, the contributions of experts and stakeholders may become determining factors, especially when the learners involved have no clear picture of the language use situations that are typical for the selected domains” (27).
(3) A set of tasks was derived from the list of language use situations using (i) observations in the target domain/language use situations, (ii) expert opinions elicited via written and oral surveys, and (iii) a sampling of language learners’ experiences (28).

One drawback to this approach is that it results in long lists of tasks (the problem of specification), in “descriptions of tasks that often do not provide any information about the level of complexity and difficulty of the target tasks” (the problem of complexity), and in the fact that “it cannot be taken for granted that performance of one task implies that a person is able to perform a more or less similar task (the problem of extrapolation)” (Van Avermaet and Gysen 2006: 29). Teachers may find the detailed, concrete descriptions of tasks that emerge from such an analysis useful, as they can serve as the basis for a lesson. However, long lists may at the same time be quite overwhelming. To address this, Van Avermaet and Gysen (2006: 30–31) describe a bottom-up approach to analyze an extensive list of concrete tasks for a given domain, identify parameters common to these tasks, and organize them under a ‘type task,’ defined as “the result of clustering several language tasks, derived from one and the same domain, that share a number of linguistic and non-linguistic features, which are described as settings of parameters.” Parameters for each type task were organized according to:

a. Skills involved: Does the language learner speak, listen, read, or write in the language use situation?
b. Text genre: What kind of message has to be conveyed or understood?
c. Level of information processing: At what level must the linguistic information be cognitively processed?
d. Interlocutor: Who are the language learner’s interlocutor(s)?
e. Contextual support: To what extent is the message embedded in a supporting context?
f. Linguistic features: What specific linguistic items (words, sounds, grammatical rules) are necessary to perform the task?

Once the parameters are identified for a given set of tasks, a type task can be written to articulate general goals.
The following is an example from Van den Branden et al. (2001: 24)—Attainment goals for Dutch as a second language at the beginning of Dutch-medium primary education in Flanders and the Netherlands:

Listening at the descriptive level of information processing: the child is able to understand oral instructions for a physical action in the here and now given by the teacher, e.g. ‘now we all sit down.’
The parallels with the CEFR are obvious. The parameters closely follow those outlined in CEFR’s context of language use (CEFR 4.1; described at the beginning of this chapter), and type tasks bear a close resemblance to CEFR ‘Can Do’ statements. Since CEFR illustrative scales are not exhaustive lists of all possible communication acts, the bottom-up approach described in Van Avermaet and Gysen (2006) and the process for modifying CEFR ‘Can Do’ statements described in Chap. 2 should help the reader design both general and specific task descriptions for their unique needs.
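As a rough illustration of the bottom-up clustering just described, the sketch below is hypothetical: the task descriptions and parameter values are invented, and the field names only paraphrase Van Avermaet and Gysen’s parameters a–f. It groups concrete tasks that share the same parameter settings under a single type task.

```python
from collections import defaultdict

# Each concrete task carries parameter settings paraphrasing Van Avermaet and
# Gysen (2006: 30-31): skill, text genre, level of information processing,
# interlocutor, and contextual support (linguistic features omitted for brevity).
concrete_tasks = [
    {"task": "follow the teacher's instruction to sit in a circle",
     "skill": "listening", "genre": "instruction", "processing": "descriptive",
     "interlocutor": "teacher", "support": "here-and-now context"},
    {"task": "follow the teacher's instruction to put crayons away",
     "skill": "listening", "genre": "instruction", "processing": "descriptive",
     "interlocutor": "teacher", "support": "here-and-now context"},
    {"task": "retell a picture story to a classmate",
     "skill": "speaking", "genre": "narrative", "processing": "restructuring",
     "interlocutor": "peer", "support": "pictures"},
]

# Cluster tasks that share the same parameter settings into one type task.
PARAMETERS = ("skill", "genre", "processing", "interlocutor", "support")
type_tasks = defaultdict(list)
for t in concrete_tasks:
    key = tuple(t[p] for p in PARAMETERS)
    type_tasks[key].append(t["task"])

for settings, tasks in type_tasks.items():
    print(dict(zip(PARAMETERS, settings)))
    print("  covers:", "; ".join(tasks))
```

In this toy example, the first cluster corresponds to a type task close to the attainment goal quoted above: understanding oral instructions for a physical action in the here and now.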
5.2.3.3 Task Difficulty

In addition to classifying tasks according to their parameters, type tasks, or CEFR ‘Can Do’ descriptors, it is also important to consider their difficulty. However, task difficulty is hard to predict as it varies between tasks and according to individual learners’ competences and characteristics. What one learner finds difficult will be easy for another. As a result, it is necessary to cater to a range of abilities within one class. Fortunately, teachers have some control over the task and the conditions and constraints that affect student performance. By examining some of the factors influencing task difficulty, teachers can make the task challenging and feasible. More importantly, learners should be able to both complete the task and focus on the form the message takes.

Task difficulty for receptive tasks, both written and oral, can be determined using standard measures of readability, which take into account vocabulary level and sentence length (Ellis 2017: 512), as well as discourse factors such as cohesion and coherence (e.g., Pitler and Nenkova 2008). A practical complexity scale can be found in Duran and Ramaut (2006: 52–53). This task complexity scale was developed by analyzing tasks and parameters (as exemplified in Van Avermaet and Gysen 2006). The parameters, rated on a three-point scale from simple (1) to complex (3), include World (level of abstraction; degree of visual support; linguistic content), Task (communicative and cognitive processing demands; modality), and Text (vocabulary, syntax, text structure, text length). Unlike generic lists, Duran and Ramaut (2006: 54) argue that these parameters are relative. To give an example from the scale, linguistic content is simple if it has high redundancy and low information density. Complex linguistic content, on the other hand, has high information density and low redundancy. These terms are relative because “the ‘complex’ end refers to the ultimate level of proficiency that has to be attained” by students at the end of a course. In other words, complexity scales vary according to learner proficiency levels.
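As an illustration of the kind of readability measure mentioned above, the sketch below computes the Flesch–Kincaid grade level, one widely used formula combining sentence length and word length. It is offered only as an example of how such measures weight these factors, not as the specific procedure used by the authors cited, and the syllable counter is a rough heuristic.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of vowels as syllables (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

simple = "Phones are useful. We use them every day. They help us talk to friends."
dense = ("The ubiquity of mobile telephony raises complex questions about "
         "electromagnetic exposure, attentional demands, and social interaction.")
print(round(flesch_kincaid_grade(simple), 1))  # low grade: lighter reading/listening load
print(round(flesch_kincaid_grade(dense), 1))   # higher grade: heavier load for a receptive task
```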
Providing task difficulty guidelines for output tasks is more problematic. There is considerable research into how different task types (e.g., narrative or decision making), task characteristics (e.g., task structure, number of elements, and information organization), and task conditions (e.g., pre-task planning opportunities, during-task time conditions, post-task activities, task repetition, monologic/interactive tasks) affect student performance (see Skehan 2016 for an excellent summary). However, these findings tend to be based on studies that vary a small number of elements (e.g., planning time) and measure their influence on student performance.

Another starting point for discussing task difficulty is Robinson’s Cognition Hypothesis (2001, 2011), “which makes central the construct of task complexity and proposes a distinction between resource-directing factors and resource-dispersing factors, with each of these overarching categories then leading to specific performance predictions” (Skehan 2016: 35). This is useful for considering “how different versions of the same task can be developed so that they lead incrementally to a simulation of the target task itself”, the horizontal sequencing of tasks. There is little guidance within Robinson (2001, 2011), or the literature in general, for the vertical sequencing of tasks, or “the order in which specific tasks (or task types) figure in the syllabus” (Ellis 2017: 514). The accuracy of the predictions has also been called into question. Skehan (2016: 37) argues that the factors identified in the framework “do not generate consistency or robust generalizations”. The classroom is a far less controlled environment than the laboratory settings found in research studies, and tasks vary according to numerous factors that interact in ways not clearly understood and are often interpreted by learners differently from how the researcher/teacher intended. Therefore, teachers should apply caution when incorporating theories and research findings into their lessons. Given the current state of understanding, teachers should draw on their experience and intuition when choosing and implementing tasks, a point discussed in Sect. 5.2.3.4.

An appropriate level of difficulty is one at which learners find the task challenging and feasible and can also focus on the form their message takes in addition to task completion. In other words, fluency is not developed at the expense of focusing on the language and its development. It will be useful to clarify what is meant by a focus on meaning (i.e., fluency) and a focus on form (i.e., accuracy and complexity) and their relationship. Skehan (1998) argues that students have a limited amount of cognitive resources to devote to a task (more accurately, limited information-processing capacity), and the amount available varies according to the content and demands of the task and the conditions and constraints under which the task is performed. In all communication acts, the primary focus is on meaning as the language user concentrates on conveying their meaning fluently. If a task is simple, they can also be accurate. However, if the task is challenging, learners may complete the task but at the expense of a focus on form, be it accurate or ambitious (i.e., complex) language use. Accuracy refers to how well the language is produced relative to language norms. When students focus on accuracy, they tend to use structures or language that they have greater control over. This is a conservative approach, where students may use simple language to avoid making mistakes (e.g., an error avoidance strategy) and where they also tend to be less fluent (i.e., it takes longer to convey meaning accurately).
Complexity concerns students’ willingness to use more challenging and difficult language.
Students try to use a wider range of structures and language even though they might not be able to use it accurately or produce it quickly. “The language concerned is at the upper limit of his or her IL system (Interlanguage System), and so reflects hypothesis testing with recently acquired structures” (Skehan and Foster 2001: 190). Basically, learners are willing to make mistakes in order to test the degree to which they can use the recently acquired structures accurately.

Task difficulty is addressed in the CEFR (Sect. 7.3) in recognition that, for students to focus on form and by extension develop their linguistic competence, tasks have to be feasible to complete using their linguistic resources. They will be able to notice features of the input, monitor their output, be receptive to feedback, and deal with breakdowns in communication only if the task has an appropriate level of difficulty. Skehan (1998) argues that while we can influence how much learners focus on form through task selection and implementation, ultimately the learners must be motivated to do the focusing.

Task Difficulty—General Guidelines

Teacher experience is essential for determining task difficulty, although there are general guidelines. Skehan (1992, 1996; see also Skehan and Foster 2001) breaks task difficulty into three areas: language, cognition, and performance conditions. This closely matches the CEFR (Sect. 7.3—Task Difficulty).

• Language refers to the task’s linguistic demands. Teachers consider the difficulty of the grammar, vocabulary, and lexical density (e.g., the amount of information present in the input).
• Cognitive complexity concerns the content of the task and how it is manipulated. In other words, a task is less difficult if students are already familiar with the topic and have done a similar task type.
• Performance conditions include the amount of time students have to prepare for and complete the task, the number of skills involved (e.g., can a task be completed with one skill, or is a combination of skills required?), the number of students involved (e.g., pair work versus group work), how important it is to do the task correctly (e.g., high versus low stakes), and the degree of control participants can exercise.

For more information on task difficulty, see Ellis (2003: Chaps. 3 and 7; 2017), Robinson (2001, 2011), Skehan (1996, 1998, 2016), and Skehan and Foster (2001).
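A teacher comparing candidate tasks might translate these guidelines into a simple rating sheet. The sketch below is hypothetical (neither Skehan nor the CEFR proposes this scoring): each factor is rated from 1 (easy) to 3 (demanding), loosely echoing the three-point scale of Duran and Ramaut (2006), and the ratings are averaged within each of the three areas so that two versions of the same task can be compared.

```python
# Hypothetical rating sheet: 1 = easy, 2 = moderate, 3 = demanding.
def area_scores(ratings: dict) -> dict:
    """Average the 1-3 ratings within each of the three areas of task difficulty."""
    areas = {
        "language": ("grammar", "vocabulary", "lexical_density"),
        "cognition": ("topic_familiarity", "task_type_familiarity"),
        "performance_conditions": ("time_pressure", "skills_combined", "stakes"),
    }
    return {area: round(sum(ratings[f] for f in factors) / len(factors), 2)
            for area, factors in areas.items()}

# Two versions of the same presentation task for comparison.
first_version = {"grammar": 2, "vocabulary": 3, "lexical_density": 2,
                 "topic_familiarity": 3, "task_type_familiarity": 3,
                 "time_pressure": 3, "skills_combined": 3, "stakes": 2}
# Same task after a pre-task phase: topic introduced and planning time given.
with_pre_task_support = dict(first_version, topic_familiarity=1, time_pressure=1)

print(area_scores(first_version))
print(area_scores(with_pre_task_support))  # lower cognitive and performance load
```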
5.2.3.4 Implementing Task-Based Learning

Willis (1996) provides a useful framework for task implementation. It was primarily designed for unfocused tasks, as it is difficult to predict the language used in most tasks (Willis 1996: 33–34). However, it can be used with textbooks to modify situational grammar exercises into focused tasks.
This is achieved by changing the sequence of the traditional PPP methodology, “beginning with the production stage and following up with the presentation and practice stages only if learners demonstrate their inability to use the targeted feature during the production stage” (Ellis 2003: 29–30, referring to Brumfit 1979). This suggestion has its own drawbacks, namely that “presenting and practicing features that learners have failed to use correctly in production may not result in their acquisition if the learners are not developmentally ready to acquire them” (Ellis 2003: 30). Ellis therefore suggests that it may be better to keep these two strands (linguistic and tasks) separate. Another drawback is that explicit instruction may interfere with how students perform a task, as they may interpret it as an opportunity to practice the targeted feature (Ellis 2017: 515).

Returning to Willis’ TBL framework, it can be broken down into three stages:

• The first is the pre-task phase, in which learners prepare to perform a task. This involves an introduction to the topic and task to activate the relevant schemata. Students may also listen to others performing the same or a similar task, providing them with a chance to mine that input for useful vocabulary. Students may also be given time to prepare for the task, which positively influences student production (see Skehan 2016: 38 for a succinct review of research findings in this area). The main goal for this stage is to ensure “the processing load during task execution and monitoring is reduced and the learner’s attention is freer to deal with any unexpected content and/or form-related problems that may arise, thereby increasing the likelihood of successful task completion in both quantitative and qualitative terms” (COE 2001: 159).

• The task cycle: Students perform the task, and teachers monitor their performance, noting both successful performances and errors to focus on in the next phase. As noted in the CEFR (2001: 159), “performance is affected by certain task-related conditions and constraints which can vary from task to task, and the teacher … can control a number of elements in order to adjust the level of task difficulty upwards or downwards.” In the second part of the task cycle, students prepare to report on task performance (orally or in writing) and present these reports to the class. This is a type of ‘metacommunicative’ task promoting reflection on task performance and is also favored in the CEFR (2001: 157–8).

• Post-task phase (Language focus): Learners analyze specific features of the performance (using their reports or transcripts). The linguistic focus may arise during task performance, an incidental or reactive Focus on Form, and be addressed through corrective feedback. Long (1985, 2015) argues that students are most receptive to corrective feedback during task performance, and attention to form should be brief so it does not interrupt the flow of communication. Alternatively, the linguistic focus may have been decided upon beforehand, in which case the teacher has already prepared practice activities. However, Willis (1996) questions the accuracy of these predictions and Long (2015) their merit.

The teacher’s role within these three stages is vitally important; Van den Branden (2016: 167) overviews research on “the decisions and actions that teachers can (and according to some of these publications, should) take to optimally promote students’ learning”, noting that these decisions and actions “may strongly differ from the prescriptions in the pedagogically oriented literature.”
In this section, we have drawn on TBLT literature to illustrate how teachers can select and implement tasks while controlling task difficulty. With this knowledge, teachers can develop learner communicative competence in a way that is consistent with the SLA literature (Ellis 2003, 2009) and the communicative aims of the CEFR.
5.3 Application of the CEFR: Integrating Learning, Teaching, and Assessment
This section explains how to use the CEFR to integrate learning, teaching, and assessment. To do this, scaled descriptors of relevant language activities, strategies, linguistic competences, and study skills are discussed. A Japanese university’s general English course (Nagai 2010, updated to include recent developments; COE 2018) is used to explain this process. It fulfills the following requirements:

(1) The course aims to synthesize the communicative language skills, knowledge, and strategies that learners acquire in the curriculum, integrating them into performances of communicative language activities.
(2) The course promotes autonomous learning through developing relevant study skills.

To fulfill the requirement of synthesizing communicative language skills, knowledge, and strategies, students give presentations using PowerPoint. This presentation course integrates six language activities: spoken production, reading, writing, listening, mediation, and spoken interaction. The integration of the six language activities is schematically illustrated in Fig. 5.2. The main task assigned in the course is a PowerPoint presentation. To pursue this task, several subtasks are used as scaffolding for the final presentation task. These subtasks involve different language activities. Learners first survey web pages concerning the topic they will present on. Based on this survey, they write a 500-word essay and make their PowerPoint slides. During the learner-presenter’s presentation, other students are engaged in a mediation activity, taking notes. After the presentation, they ask and answer questions in a spoken interaction activity.
5.3.1 Learning Outcome Statements

Once the main language activity and sublanguage activities are decided, the specific learning outcomes for the main tasks and subtasks are prepared. The learning outcomes of the course must specify what learners can do in terms of activities as well as what knowledge, strategies, and skills they are to possess. These should be articulated so that learners understand what they are expected to achieve and can monitor their progress.
Fig. 5.2 Integration of six language activities: listening (listening comprehension), mediation (processing text, note-taking), spoken production (presentation), spoken interaction (question and answer), writing (essay writing, making PowerPoint slides), and reading (survey on a topic of presentation)
For teachers, specific learning outcome statements form the basis for lesson plans. Learning outcomes also function as assessment criteria for learner self-assessment and teacher assessment. Thus, these statements integrate learning, teaching, and assessment.
5.3.1.1 Learning Outcome Statements of Tasks

Describing the outcomes of the different tasks follows three steps: (1) specification of the language activities required for the presentations, (2) determination of presentation themes, and (3) production of the outcome statements. Each step is explained in detail below.

(1) Specification of Language Activities

The language activities necessary for presentation tasks are selected and specified through the following three steps. First, consider and examine the language activities discussed in the CEFR Sect. 4.4 (COE 2001: 57–90) and the CEFR/CV (COE 2018: 99–112).8 Second, from the list of language activities, select the most relevant language activities for a presentation task and its scaffolding subtasks. Third, add to the list any activities not listed in the CEFR but necessary for presentation tasks. These three steps result in the following language activities and tasks for the presentation course:

Spoken production activities
• Addressing audiences;
• Speaking using PowerPoint.
8 For newly added mediation activities, specifically mediating a text relevant to a presentation task, please consult the scaled descriptors.
Writing activities
• Writing 500-word essays;
• Making PowerPoint slides.

Reading activities
• Reading for information;
• Texts: newspaper articles and other materials on the Web;
• Reading for gist;
• Reading for detailed understanding.
Listening activities
• Listening as a member of an audience.

Mediation activity
• Taking notes while listening to presentations.

Spoken interaction activities
• Formulating questions for clarification;
• Asking questions;
• Answering questions.

(2) Determination of Presentation Themes

The presentation topics are determined by considering whether they are interesting and appropriate. Here, the example course plans three presentations9 in a 30-lesson semester. The themes of the first two presentations are predetermined: cell phone safety and environmental issues. All the students use cell phones frequently; however, their ubiquitous use raises several issues that learners may be familiar with. Many students are interested in environmental issues, and they represent a broad topic out of which students can easily select something to talk about based on their interests. Students choose a final presentation topic that interests them and is appealing to their audience of fellow students.

(3) Production of Outcome Statements

After deciding the language activities and presentation themes, the expected outcomes of the course are articulated. Learning outcome statements can be produced by examining scaled illustrative descriptors of the activities selected. Such descriptors are listed in the CEFR (Chap. 4); mediation descriptors are in the CEFR/CV. Because the example course is offered to students who have completed general English courses at the tertiary level, B1 descriptors of the selected language activities were used.
9 Ten lessons are allocated for the preparation of each presentation.
These scaled descriptors of language activities include addressing audiences, writing reports and essays, reading for information, listening as a member of a live audience, processing text in writing, and note-taking. The selected descriptors were modified from the following perspectives:
• Language activities;
• The domain of language use;
• The themes of presentation tasks.
Some ‘Can Do’ statements were created when no scaled descriptors of the assigned activities were provided by the CEFR. The lists of modified and added descriptors are given in Tables 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, and 5.9. In each table, the original wording that was modified is followed by an arrow (→) and the modification; added statements appear as separate entries.
5.3.1.2 Learning Outcome Strategy Statements

The CEFR provides several scaled strategy descriptors for production, reception, and interaction activities. The following two scaled strategies were selected and modified to fit the example course (Tables 5.10 and 5.11).
Table 5.3 Spoken production: Addressing audiences (COE 2001: 60)
B1 Can give prepared straightforward presentations on a familiar topic within his/her field → on the safety of cell phones, environmental issues, and the topics of his/her concern which are clear enough to be followed without difficulty most of the time, and in which the main points are explained with reasonable precision, using PowerPoint
Can take follow-up questions, but may have to ask for repetition if the speech was rapid
Table 5.4 Written production: Reports and essays (COE 2001: 62)
B1 Can write short, simple essays on topics of interest → 500-word essays on the safety of cell phones, environmental issues and other topics of interest
Can summarize, report and give his/her opinion about accumulated factual information on familiar routine and non-routine matters within his/her field → on current issues and problems such as the safety of cell phones and environmental issues with some confidence
Can make PowerPoint slides following the 6 × 6 rulea
a The 6 × 6 rule means at most 6 words per line with at most 6 lines in one slide
Table 5.5 Reading reception: Reading for information and argument (COE 2001: 70)
B1 Can recognize significant points in straightforward newspaper articles on familiar topics → newspaper articles and other texts on web pages about current issues and problems such as the safety of cell phones and environmental issues
Table 5.6 Mediation: Processing text in writing (COE 2018: 112)
B1 Can summarize in writing (in Language B) the information and arguments contained in texts (in Language A) on the subjects related to his/her interest → Can summarize in writing in English the information and arguments contained in texts in English on current issues and problems such as the safety of cell phones and environmental issues
Table 5.7 Listening as a member of a live audience (COE 2001: 67)
B1 Can follow a lecture or talk → presentations within his/her own field → on current issues and problems such as the safety of cell phones and environmental issues, provided the subject matter is familiar and the presentation straightforward and clearly structured → provided the presentation is clearly structured and aided by PowerPoint slides
Table 5.8 Mediation: Note-taking (COE 2018: 115)
B1 Can take notes as a list of key points during a straightforward lecture → presentation on current issues and problems such as the safety of cell phones and environmental issues, provided the topic is familiar, and the talk is both formulated in simple language and delivered in clearly articulated standard speech → provided the presentation is clearly outlined and structured as well as aided by PowerPoint slides
Table 5.9 Interaction strategy: Asking for clarification (COE 2001: 87)
B1 Can ask someone → a presenter to clarify or elaborate what he/she has just saida
Can answer most of the questions on the topic of the presentation
a This ‘Can Do’ statement is selected and modified from the interaction strategies section
Table 5.10 Production strategies (planning) (COE 2001: 64)
B1 Can work out how to communicate the main point(s) he/she wants to get across, exploiting any resources available and limiting the message to what he/she can recall or find the means to express
5.3.1.3 Learning Outcome Statements of Linguistic Competences

The CEFR explores communicative language competences, which include linguistic, sociolinguistic, and pragmatic competences. Each competence is further divided into more specific subcategories. For instance, linguistic competence entails lexical, grammatical, semantic, phonological, orthographic, and orthoepic competence. The main aim of the course is to integrate what learners have already acquired, using that knowledge to strategically give presentations. The students are assumed to possess B1 general linguistic competences, except for full control of the vocabulary concerning the presentation topics. Hence, Table 5.12 presents scaled lexical competences selected and modified to represent course outcomes. The CEFR also provides functional competences: fluency and propositional precision, with these two factors determining learners’ functional success. They are valuable for assessing the presentations. Hence, the descriptors for spoken fluency and propositional precision were selected and modified in Table 5.13.
5.3.1.4 Learning Outcome Statements of Study Skills

One of the two requirements of the example course is to help students become independent learners. To attain this goal, students should be able to give presentations successfully when required to do so in other university classes. The CEFR discusses the ability to learn from several perspectives: language and communication awareness, study skills, and heuristics. Referring to the self-regulated learning literature (Zimmerman and Schunk 1989), the following statements were created (Table 5.14).
Table 5.11 Reception strategies (identifying cues and inferring) (COE 2001: 72)
B1 Can identify unfamiliar words from the context on topics related to his/her field and interests → such as the safety of cell phones, environmental issues, and other current issues of his/her interest
Can extrapolate the meaning of occasional unknown words from the context and deduce sentence meaning provided the topic discussed is familiar → provided the topics discussed are the safety of cell phones, environmental issues, and other current issues of his/her interest
Table 5.12 Lexical competence (COE 2001: 112)
Vocabulary range
B1 Has a sufficient vocabulary to express him/herself with some circumlocutions on most topics pertinent to his/her everyday life such as family, hobbies and interests, work, travel, and current events → to express facts and his/her opinions about the safety of cell phones, some environmental issues, and topics pertinent to a social issue of his/her interest
Vocabulary control
B1 Shows good control of elementary vocabulary → vocabulary pertinent to the safety of cell phones, often discussed environmental issues, and frequently reported social issues but major errors still occur when expressing more complex thoughts or handling unfamiliar topics and situations → in a presentation as well as in asking and answering questions about the presentation
Table 5.13 Spoken fluency (COE 2001: 129)
B1 Can express him/herself with relative ease in prepared presentations. Despite some problems with formulation resulting in pauses and ‘cul-de-sacs’, he/she is able to keep going effectively without help
Can keep going comprehensibly, even though pausing for grammatical and lexical planning and repair is very evident, especially in longer stretches of free production → in answering questions about what he/she said in the presentation
Propositional precision
B1 Can explain the main points in an idea or problem → of the presentation with supporting details with reasonable precision
Table 5.14 Study skills
Can list subtasks and procedures to give presentations
Can plan each presentation preparation step, meeting the deadline
Can monitor his/her own progress and revise their plan if necessary
Can rehearse their presentation, consciously checking for weak and strong points
5.3.2 Checklist

The illustrative descriptors discussed in Sects. 5.3.1.1 to 5.3.1.4 are given to learners as a checklist (Table 5.15) so they can monitor their developmental progress, facilitating learner autonomy. Teachers use the same list to make detailed lesson plans and create assessment rubrics (as discussed in Chap. 3).
Table 5.15 Checklist for the presentation course
Self-assessment: Use the following checklist to evaluate the current stage of your ability. Write * if you can do it reasonably well, ** well, and *** very well. Write the date of your assessment as well.

Spoken production
Can give prepared straightforward presentations on the safety of cell phones, environmental issues, and topics of concern that are clear enough to be followed without difficulty most of the time. Main points are explained with reasonable precision using PowerPoint
Can take follow-up questions, but may have to ask for repetition if the speech was rapid

Writing
Can write 500-word essays on the safety of cell phones, environmental issues, and topics on an area of concern
Can summarize, report, and give my opinion about accumulated factual information on current issues and problems such as the safety of cell phones and environmental issues with some confidence
Can make PowerPoint slides following the 6 × 6 rule

Reading
Can recognize significant points in straightforward newspaper articles and other texts on web pages about current issues, problems, and topics of interest

Processing text
Can collate short pieces of information from several sources, summarizing them for later use in essay writing
Can paraphrase short written passages in a simple fashion, using the original text wording and ordering

Listening
Can follow presentations on current issues and problems such as the safety of cell phones and environmental issues, provided the presentation is clearly structured and aided by PowerPoint slides

Note-taking
Can take notes as a list of key points during a straightforward presentation on current issues and problems such as the safety of cell phones and environmental issues, provided the presentation is clearly outlined, structured, and aided by PowerPoint slides

Spoken interaction
Can ask a presenter to clarify or elaborate on what they have just said
Can answer most questions on the topic of my presentation

Production strategies (planning)
Can work out how to communicate the main point(s) I want to get across, exploiting available resources and limiting the message to what I can recall or express

Reception strategies (identifying cues and inferring)
Can identify unfamiliar words from context on topics such as the safety of cellphones, environmental issues, and other current issues of interest
Can occasionally extrapolate the meaning of unknown words from context and deduce sentence meaning provided the topics discussed are the safety of cellphones, environmental issues, and other current issues of interest

Vocabulary range
Has vocabulary sufficient to express facts and opinions about the safety of cellphones, environmental issues, and topics pertinent to a social issue of interest

Vocabulary control
Shows good control of vocabulary pertinent to the safety of cell phones, often discussed environmental issues, and frequently reported social issues in presentation. Can ask and answer questions about a presentation

Spoken fluency
Can express myself with relative ease in a prepared presentation. Despite some problems with formulation resulting in pauses and ‘cul-de-sacs’, I can keep going without help
Can keep going comprehensibly, even though pausing for grammatical and lexical planning and repair is evident, especially in answering questions about what I said in my presentation

Propositional precision
Can explain the main points of the presentation with supporting details with reasonable precision

Study skills
Can list subtasks and procedures to prepare and give presentations
Can plan each step of the preparation for a presentation to meet the deadline
Can monitor my own progress and revise my plan if necessary
Can rehearse my presentation, consciously checking my weak and strong points
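Because learners record a star rating and a date each time they complete the checklist, their self-assessments can also be logged and compared over the semester. The following sketch is a hypothetical illustration of such record-keeping, not part of the course materials described above; the descriptors are shortened and the dates invented.

```python
# Star ratings from the checklist: * = reasonably well, ** = well, *** = very well.
STARS = {"*": 1, "**": 2, "***": 3}

# One learner's repeated self-assessments for two checklist descriptors.
log = [
    ("2020-05-10", "Can give prepared straightforward presentations using PowerPoint", "*"),
    ("2020-06-21", "Can give prepared straightforward presentations using PowerPoint", "**"),
    ("2020-05-10", "Can take notes as a list of key points during a presentation", "*"),
    ("2020-06-21", "Can take notes as a list of key points during a presentation", "***"),
]

def progress(entries):
    """Report the first and latest rating for each descriptor."""
    by_descriptor = {}
    for day, descriptor, stars in sorted(entries):
        by_descriptor.setdefault(descriptor, []).append((day, STARS[stars]))
    for descriptor, ratings in by_descriptor.items():
        first, latest = ratings[0], ratings[-1]
        print(f"{descriptor}: {first[1]} -> {latest[1]} ({first[0]} to {latest[0]})")

progress(log)
```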
5.4 Exercises
This section reflects on the implementation of a CEFR-informed curriculum (Shimo et al. 2017). The process for articulating course goals using the CEFR descriptive scheme is described in Sect. 5.3. Further examples of TBLT for developing and assessing both communicative language competence and intercultural competence are provided in Sect. 5.5. However, a curriculum does not exist until it is enacted within the classroom by teachers and students (Graves 2008). Therefore, the purpose of these exercises is to consider measures that can be taken to ensure a CEFR-informed curriculum is enacted by teachers and learners in a way that is consistent with the CEFR’s philosophy and principles and that realizes the potential of the CEFR’s descriptive scheme.
Using exercises adapted from the Eaquals guide (Matheidesz and Heyworth 2007), readers first reflect on one institution’s plan for integrating learning, teaching, and assessment through a CEFR-informed curriculum (Shimo et al. 2017). Next, compare this reflection with how the curriculum was enacted by the teachers and learners and the degree to which the curriculum development team felt their measures were effective. The three areas for reflection are:

Key Questions
• Defining curriculum and class goals: To what degree did the institution utilize the CEFR descriptive scheme to define curriculum goals?
• Curriculum design process: Were learning outcomes defined using the CEFR descriptors prioritized when designing the curriculum? For example, did the learning outcomes inform decisions concerning teaching content, materials, and methodology? (This process, known as backward design, was introduced in Sect. 1.1.2.2.)
• Involving students and teachers: What measures were taken to ensure teachers and students were familiar with the CEFR and took ownership to enact the CEFR-informed curriculum?

Program Description
The Faculty of Applied Sociology at Kindai University uses a CEFR-informed ‘Can Do’ framework for its English learning program. In addition to improving students’ communicative abilities and developing both learning autonomy and intrinsic motivation through tools and practices from the European Language Portfolio, the specific program goals are to:
(a) Improve the four basic language skills (speaking, writing, listening, and reading) so that students can use English as a communication tool;
(b) Develop a positive attitude among students toward writing their own opinions, making presentations, and discussing issues with others in English;
(c) Improve students’ ability to work on tasks, make presentations in English, and interact with people of different cultural backgrounds on their own initiative; and
(d) Develop autonomy by setting goals based on self-evaluation activities.

The Kindai ‘Can Do’ framework (KCF) is based on CEFR levels (Table 5.16) and illustrative scales (Table 5.17). These were heavily modified, with lists of other descriptors (e.g., TOEIC) also drawn upon.

Implementing the Kindai ‘Can Do’ Framework
The methods and measures utilized to implement the framework include the My Can-Do Handbook (CDH), teacher-chosen textbooks, unified syllabuses and evaluation systems, and class-management information sharing.
• The My Can-Do Handbook is similar to the ELP in that it contains assessment grids and learning record sheets. It also presents the curriculum goals, the KCF framework and associated checklists for self-evaluation, as well as information and software manuals for studying vocabulary and pronunciation, assessing writing, and monitoring the extensive reading program.
Table 5.16 Kindai scale, CEFR scale, and TOEIC score
Kindai level | K-4 | K-3 | K-2 | K-1 | K-Global
CEFR level | A1/A2
TOEIC |