Assessing Academic Literacy in a Multilingual Society
NEW PERSPECTIVES ON LANGUAGE AND EDUCATION Founding Editor: Viv Edwards, University of Reading, UK Series Editors: Phan Le Ha, University of Hawaii at Manoa, USA and Joel Windle, Monash University, Australia. Two decades of research and development in language and literacy education have yielded a broad, multidisciplinary focus. Yet education systems face constant economic and technological change, with attendant issues of identity and power, community and culture. What are the implications for language education of new ‘semiotic economies’ and communications technologies? Of complex blendings of cultural and linguistic diversity in communities and institutions? Of new cultural, regional and national identities and practices? The New Perspectives on Language and Education series will feature critical and interpretive, disciplinary and multidisciplinary perspectives on teaching and learning, language and literacy in new times. New proposals, particularly for edited volumes, are expected to acknowledge and include perspectives from the Global South. Contributions from scholars from the Global South will be particularly sought out and welcomed, as well as those from marginalized communities within the Global North. All books in this series are externally peer-reviewed. Full details of all the books in this series and of all our other publications can be found on http://www.multilingual-matters.com, or by writing to Multilingual Matters, St Nicholas House, 31-34 High Street, Bristol BS1 2AW, UK.
NEW PERSPECTIVES ON LANGUAGE AND EDUCATION: 84
Assessing Academic Literacy in a Multilingual Society
Transition and Transformation
Edited by Albert Weideman, John Read and Theo du Plessis
MULTILINGUAL MATTERS Bristol • Blue Ridge Summit
DOI https://doi.org/10.21832/WEIDEM6201
Library of Congress Cataloging in Publication Data
A catalog record for this book is available from the Library of Congress.
Library of Congress Control Number: 2020028278
British Library Cataloguing in Publication Data
A catalogue entry for this book is available from the British Library.
ISBN-13: 978-1-78892-620-1 (hbk)
Multilingual Matters
UK: St Nicholas House, 31-34 High Street, Bristol BS1 2AW, UK.
USA: NBN, Blue Ridge Summit, PA, USA.
Website: www.multilingual-matters.com
Twitter: Multi_Ling_Mat
Facebook: https://www.facebook.com/multilingualmatters
Blog: www.channelviewpublications.wordpress.com
Copyright © 2021 Albert Weideman, John Read, Theo du Plessis and the authors of individual chapters.
All rights reserved. No part of this work may be reproduced in any form or by any means without permission in writing from the publisher.
The policy of Multilingual Matters/Channel View Publications is to use papers that are natural, renewable and recyclable products, made from wood grown in sustainable forests. In the manufacturing process of our books, and to further support our policy, preference is given to printers that have FSC and PEFC Chain of Custody certification. The FSC and/or PEFC logos will appear on those books where full certification has been granted to the printer concerned.
Typeset by Riverside Publishing Solutions.
Contents
Contributors vii
Abbreviations xi
Introduction: A Global Perspective on the South African Context (John Read and Colleen du Plessis) xiii

Part 1: Conceptual Foundations: Policy, Construct, Learning Potential
1 Institutional Language Policy and Academic Literacy in South African Higher Education – a Two-pronged or Forked-tongue Approach? (Theo du Plessis) 3
2 A Skills-Neutral Approach to Academic Literacy Assessment (Albert Weideman) 22
3 Does One Size Fit All? Some Considerations for Test Translation (Tobie van Dyk, Herculene Kotzé and Piet Murre) 52
4 The Use of Mediation and Feedback in a Standardised Test of Academic Literacy: Theoretical and Design Considerations (Alan Cliff) 75

Part 2: Assessing Academic Literacy at Secondary School Level
5 Basic Education and Academic Literacy: Conflicting Constructs in the South African National Senior Certificate (NSC) Language Examination (Colleen du Plessis) 95
6 How Early Should We Measure Academic Literacy? The Usefulness of an Appropriate Test of Academic Literacy for Grade 10 Students (Jo-Mari Myburgh-Smit and Albert Weideman) 117
7 Design Pathways to Parity Between Parallel Tests of Language Ability: Lessons from a Project (Sanet Steyn) 132

Part 3: Assessing Discipline-Specific Needs at University
8 Generic Academic Literacy Testing: A Logical Precursor for Faculty-Specific Language Teaching and Assessment (Kabelo Sebolai) 151
9 Diagnosing with Care: The Academic Literacy Needs of Theology Students (Avasha Rambiritch, Linda Alston and Marien Graham) 170
10 Assessing Readiness to Write: The Design of an Assessment of Preparedness to Present Multimodal Information (APPMI) (Laura Drennan) 196
11 Postscript: What the Data Tell Us: An Overview of Language Assessment Research in South Africa’s Multilingual Context (Tobie van Dyk) 217

Subject Index 236
Author Index 241
Contributors
Editors
Albert Weideman designs assessments of academic literacy, relating those to a theory of applied linguistics. He is a rated researcher with the National Research Foundation in South Africa and is the author of Beyond Expression: A Systematic Study of the Foundations of Linguistics (Paideia Press, 2009), Academic Literacy: Prepare to Learn (Van Schaik, 2007) and Responsible Design in Applied Linguistics: Theory and Practice (Springer, 2017). Currently Professor of Applied Language Studies and Research Fellow, University of the Free State, he is also chairperson of the Inter-institutional Centre for Language Development and Assessment (ICELDA) and managing director, Language Courses and Tests (LCaT). John Read is Professor Emeritus at the University of Auckland, New Zealand. He is a specialist in vocabulary assessment and the testing of English for academic and professional purposes. He is a former co-editor of the international research journal Language Testing (2002–2006) and served as President of the International Language Testing Association in 2011 and 2012. His major publications are two authored books, Assessing Vocabulary (Cambridge, 2000) and Assessing English Proficiency for University Study (Palgrave Macmillan, 2015), and an edited volume, Post-admission Language Assessment of University Students (Springer, 2016). Theo du Plessis specialises in language policy and planning. Former director of the Unit for Language Facilitation and Empowerment, University of the Free State, he is also editor-in-chief of the eight-volume series Language Policy Studies in South Africa (Van Schaik), and of three issues of SUN Media’s South African Language Rights Monitor series. Associate Editor of Language Matters: Studies in the Languages of Africa (UNISA/Taylor & Francis), he furthermore serves on the editorial board of Language Policy (Springer) and Language, Culture and Curriculum (Taylor & Francis). He is a member of the International Academy of Linguistic Law.
Authors
Linda Alston has extensive experience as an adult educator, and before that as a project officer in a large South African anti-apartheid NGO. She is an experienced academic literacy lecturer, with more than a decade of experience in this field. She works in the Unit for Academic Literacy at the University of Pretoria, where she has gained substantial experience in the design and delivery of online instruction. Alan Cliff is an Associate Professor and Academic Staff Development specialist. He leads research on alignment between curriculum and student assessment – including e-assessment practices – with new and established academics and professional staff. He teaches courses on assessment design and academic literacy. His current research interests are in the use of theories and principles of Dynamic Assessment to facilitate student learning; and in the processes of staff development as ‘literacies practices’ and induction into professional learning communities. Alan contributes to the development of educational assessment policy in the further and higher education sectors nationally. Laura Drennan specialises in academic literacy development, with a particular focus on academic writing. In 2019, she completed the PhD in English language studies and academic literacy development with a thesis entitled ‘Defensibility and accountability: developing a theoretically justifiable academic writing intervention for students at tertiary level’. The thesis examines whether newly developed writing courses offered by the Write Site (the University of the Free State writing centre) comply with the conditions for the responsible design of applied linguistic interventions. She is a lecturer/researcher at the Unit for Language Development, and founder and current head of the Write Site. Colleen du Plessis is a lecturer in English and language curriculum coordinator at the Department of English, University of the Free State. She is an executive member of NExLA (Network of Expertise in Language Assessment) and member of the South African Association for Language Teaching (SAALT). Working in the fields of academic literacy development, language teaching and language testing, she focuses her current research on language teaching challenges following the massification of higher education in developing countries over the last 25 years; on student engagement; and on integrative classroom assessment. Marien Graham is a specialist in nonparametric statistics and statistical process control and a former staff member of the Department of Statistics, University of Pretoria (South Africa), where this research was conducted. She is a rated researcher with the National Research Foundation (South
Africa), co-author of Nonparametric Statistical Process Control (John Wiley & Sons, 2019) and currently Associate Professor in the Department of Science, Mathematics and Technology Education, University of Pretoria. She serves on the editorial advisory board of Scientific Studies and Research, Series Mathematics and Informatics. Herculene Kotzé is a specialist in interpreting, interpreter training (specifically educational interpreting) and descriptive translation studies. In addition, she has a current interest in bibliometric studies. She has over 10 years of experience as a trained conference interpreter from English-Afrikaans and Afrikaans-English. Currently Senior Lecturer and Subject Chairperson for Language Practice on the Potchefstroom Campus of North-West University, she is the former Secretary and current Deputy Chairperson of the South African Linguistics and Applied Linguistics Society, and Associate Editor of their journal, South African Linguistics and Applied Language Studies (SALALS). She is a member of SATI and ATISA. Piet Murre’s research interests lie in the relationship between school subjects and pedagogy, including the academic literacy of students, and their levels of vocabulary knowledge. He has co-edited several books on pedagogy in Dutch, and has presented widely to both academic and educational practitioner audiences on topics in his fields of interest. Jo-Mari Myburgh-Smit’s undergraduate and postgraduate studies in education and in language management and practice led to her research into the academic literacy levels of tertiary education students across South Africa, as well as the examination of academic literacy at secondary school level. On the latter, she completed an MA dissertation at the University of the Free State in 2015, the results of which are also highlighted in this volume. All of these strands have now coalesced in a career that centres on the management of South African Sign Language interpreting for Deaf students at the University of the Free State. Avasha Rambiritch is a lecturer in the Unit for Academic Literacy at the University of Pretoria, where she teaches a number of academic literacy and academic writing modules at undergraduate and postgraduate level. She is the co-coordinator of the writing centre. She has a PhD in Applied Linguistics (Language Practice) and has published research articles in accredited journals on language testing, academic writing and writing centre support, as well as co-authoring a chapter in a book published by Springer. Her research interests include academic writing, the operation of writing centres and social justice in language testing. Kabelo Sebolai has extensive experience in academic literacy instruction and assessment. He has published numerous articles in accredited
journals on academic literacy curriculum design, instruction and testing. He is the former coordinator of the Academic Literacy Programme at the Central University of Technology in Bloemfontein and Research Lead for Academic Literacy Testing in the Centre for Educational Testing for Access and Placement (CETAP) at the University of Cape Town. He is the current Deputy Director of the Language and Communication Development section of the Language Centre at Stellenbosch University and Chairperson of NExLA (Network of Expertise in Language Assessment). Sanet Steyn’s research interests include academic literacy, language assessment and curriculum design. She has completed two MA degrees, one from the Rijksuniversiteit Groningen and another from the University of the Free State, both cum laude. In her current role in CETAP she is responsible for the development and quality of tests such as the academic literacy component of National Benchmark Tests. Her research is currently focused on the design and development of parallel instruments for multilingual contexts and the challenges facing test developers working in these spaces. Tobie van Dyk is Professor in applied linguistics and Director, School of Languages, North-West University (South Africa). He manages the SADiLAR projects of the Inter-institutional Centre for Language Development and Assessment (ICELDA), is a SPAAN Fellow (University of Michigan) and editor of the South African Journal for Language Teaching. He has led several local and international projects with substantial external funding. An experienced scholar in the fields of academic language ability assessment and testing, as well as in course and syllabus design, he focuses his research on language policy, planning and support and fair and unbiased language testing.
Abbreviations
AARP – Alternative Admissions Research Project
AL – Academic literacy
APPMI – Assessment of Preparedness to Produce Multimodal Information
APS – Admission Point Score
AUQA – Australian Universities Quality Agency
CTexT – Centre for Text Technology
CAPS – Curriculum and Assessment Policy Statement
CASS – Continuous Assessment (school-based portfolio work assessed as part of the school-leaving qualification in South Africa)
CEL-ELC – Conseil Européen pour les Langues / European Language Council
CETAP – Centre for Educational Testing for Access and Placement
CIE – Cambridge International Examinations
CHE – Council on Higher Education
CHED – Centre for Higher Education Development
CITO – Centraal Instituut voor Toetsontwikkeling (see TiaPlus)
CSHE – Centre for the Study of Higher Education
DHET – Department of Higher Education and Training
DIF – Differential Item Functioning
EMI – English medium instruction
FAL – First Additional Language
GER – Gross Enrolment Ratio (used to indicate student participation rates at South African higher education and training institutions)
HELP – Higher Education Language Policy
HESA – Higher Education South Africa (also see USAf)
HL – Home Language (see also L1)
HSRC – Human Sciences Research Council
IB – International Baccalaureate
ICELDA – Inter-institutional Centre for Language Development and Assessment
L1 – First language (see also HL)
LCaT – Language Courses and Tests
LPHE – Language Policy for Higher Education
MoE – Ministry of Education
NBT – National Benchmark Test
NBT AL – National Benchmark Test in Academic Literacy
NBTP – National Benchmark Tests Project
NExLA – Network of Expertise in Language Assessment
NQF – National Qualifications Framework
NRF – National Research Foundation
NSC – National Senior Certificate
NSFAS – National Student Financial Aid Scheme
PELAs – Post-entry Language Assessments
PLC – Provincial Language Committee
RNCS – Revised National Curriculum Statement (the South African government school curriculum of 2001)
SAALT – South African Association for Language Teaching
SADC – Southern African Development Community
SAGs – Subject Assessment Guidelines
SAQA – South African Qualifications Authority
SC – Senior Certificate (the previous school-leaving qualification in South Africa, replaced by the NSC in 2008)
STEM – Science, Technology, Engineering and Mathematics
SUN – Stellenbosch University
TAG – Toets van Akademiese Geletterdheidsvlakke (see TALL)
TAGNaS – Toets van Akademiese Geletterdheid vir Nagraadse Studente (see TALPS)
TALA – Test of Advanced Language Ability (see TOGTAV)
TALL – Test of Academic Literacy Levels (see TAG)
TALPS – Test of Academic Literacy for Postgraduate Students (see TAGNaS)
TEL – Test of Emergent Literacy
TiaPlus – Test and Item Analysis Plus (software from CITO)
TOEIC – Test of English for International Communication
TOGTAV – Toets van Gevorderde Taalvaardigheid (Afrikaans equivalent of TALA)
UFS – University of the Free State
Umalusi – Council for Quality Assurance in General and Further Education and Training (the statutorily mandated overseer of the South African school-leaving qualification)
USAf – Universities South Africa (also see HESA)
Introduction: A Global Perspective on the South African Context
John Read and Colleen du Plessis
South African universities face major challenges in meeting the needs of their students in the area of academic language and literacy. The dominant medium of instruction in the universities is English and now, to a much lesser extent, Afrikaans, but only a minority of the national population are native speakers of these languages. Eleven languages can be media of instruction in schools, which makes the transition to tertiary education difficult enough in itself for students from these schools. However, when this is coupled with the chronic under-resourcing of schools for these students that dates back to the apartheid era, it is not surprising that lack of academic preparedness among these students is a daunting issue, and there is a high dropout rate from the first year of university study. The South African context has distinctive features, particularly associated with the political and social history of the country. As compared to the traditional English-speaking countries, speakers of English (and Afrikaans) represent only a small proportion of the domestic population. On the other hand, as a result of increased migration flows and equity policies to enhance the educational participation of indigenous minority students, universities in Australia, New Zealand and the United Kingdom are also facing new challenges posed by an increasingly multilingual intake of students from their own country. Even domestic students who are monolingual in English can no longer be assumed to be academically literate. Thus, academics in English-speaking countries have much to learn from their colleagues in South Africa, who must grapple with the multilingual complexity of their society. South African applied linguists and other educationalists have been analysing the complex issues involved and proposing solutions for the past two decades. Many of their initiatives in research and test development are innovative and potentially of great interest to a wider audience. However,
to date, their work has largely appeared in publications within their own country, with the result that it is not widely known internationally. This book will help to remedy that situation. The particular initiatives to be presented in this collection are those associated with, first, defining the construct of academic literacy that underlies a successful transition by young people from secondary school to higher education and then operationalising the construct in a series of tests for students entering South African universities. The construct is informed by applied linguistic theory and research on the distinctive nature of academic language, and it is generic in two senses: it is not exclusive to English (even though English is the dominant medium of higher education in South Africa and internationally); and it is not specific to particular disciplines. The resulting tests have been administered on a large scale at various universities since the early 2000s, beginning with the English-language Test of Academic Literacy Levels (TALL) for undergraduate students at the University of Pretoria and later extending to other institutions (North-West University, the University of the Free State and Stellenbosch University), and to versions in Afrikaans and for postgraduate students. Although the undergraduate-level tests are administered at the point of university entry, they are not admission tests as such but are primarily intended to have a diagnostic and placement function in order to ensure that students with low levels of academic literacy are directed to language development courses on campus to support their studies in the first year of enrolment.

The International Context
Before we discuss the domestic context of this volume in more detail, it is useful to explore how language issues in the South African universities fit (or do not fit) with trends in higher education elsewhere in the world. Historically, South Africa shares with Australia, New Zealand and Canada the legacy of eventually (after its Dutch beginning) being a British settler society in which English was dominant as an official language in public life, including in education, although – as with Canada – those of British stock shared their privileged social position with settlers of another European heritage, in this case, those of Dutch descent. Unlike in the other three former British Dominions, however, those of both British and Dutch descent represented an elite minority within the national population, with the majority of Black South Africans being largely excluded from educational opportunity beyond a basic level right up to the end of the apartheid era in the early 1990s. Largely, but not entirely: many Black South Africans grasped what opportunities there were to train, for example, as lawyers (like Mandela and Tambo), doctors, social workers, nurses and teachers, forming their own elite. The main point, however, is that, although in one sense South Africa has been an English-speaking
country, it is also a complex multilingual society with 11 official languages which are all used as media of instruction in primary and to some extent in secondary schools. Both English and Afrikaans have long been established as media of instruction in South African universities. In the post-apartheid era English has become the dominant medium, in that all the universities offer their academic programmes in English and some traditionally Afrikaans-medium institutions such as the Universities of Pretoria and Stellenbosch are bilingual. There have been only a small number of initiatives to introduce the other nine official languages into university teaching. At the University of Limpopo there was an experiment with a joint English/Sepedi degree, and academic literacy assessments have been tried out in Sesotho at the Vaal Triangle Campus of North-West University (Butler, 2017). The issue of languages of learning and teaching at school level in South Africa is too complex to deal with here (see Weideman et al. (2017) and du Plessis et al. (2016) for a description of the complications), but it should be noted that, despite an early switch to English as medium of instruction, a mix of languages is in fact employed, to the extent that Bantu languages may remain predominant at school. Even where English is the medium of instruction, as in the majority of secondary schools in South Africa, there are serious concerns about the quality of the English that students from non-English-speaking backgrounds are exposed to through their schooling. Their teachers are typically not very proficient in the language and not in a position to provide a good model of standard English usage as a foundation for the development of academic language proficiency. Cummins’ (2000) widely cited distinction between basic interpersonal communication skills (BICS) and cognitive academic language proficiency (CALP) is very relevant here (see T. du Plessis, Chapter 1 of this volume, for further discussion of the two concepts). At best the students acquire some conversational fluency in the language, but with limited vocabulary knowledge and a lack of ability to express complex ideas. Unfortunately, students completing their secondary education are given an erroneous impression of their English language achievement by the high marks they attain in the school-leaving language examinations, which are analysed in some detail by C. du Plessis (this volume, Chapter 5). This means that students educated through the medium of English and the Bantu languages at the secondary level must still make the transition to study in English if they are admitted to a university. Similar moves to those made in South Africa at tertiary level have been undertaken in the major English-speaking countries to widen participation in higher education on equity grounds through deliberate efforts to recruit students from under-represented groups such as, in the case of New Zealand, Māori and Pasifika students. However, whereas indigenous groups in New Zealand
or Australia are in the minority, Black persons constitute the great majority of the South African population. Looking beyond the traditional English-speaking countries, the situation in South African universities bears some resemblance to a rapidly growing international phenomenon of the past 20 years: the adoption of English as a medium of instruction in the universities of countries where previously students were taught only in the national language. The term English medium instruction (EMI) can be defined in various ways – and the phenomenon itself is operationalised in an even wider variety of academic programmes – but Dearden (2015) offers a definition which fits most of the contemporary literature on the topic: ‘The use of the English language to teach academic subjects in countries or jurisdictions where the first language (L1) of the majority of the population is not English’ (Dearden, 2015: 2). The definition as stated could certainly be applied to South Africa, but we need to understand the driving forces behind EMI in other parts of the world before considering its applicability in the South African context. There have been numerous recent publications – both journal articles and edited collections – which have documented the spread of EMI in universities particularly in Europe (Coleman, 2006; Wächter & Maiworm, 2014), in East Asia (Fenton-Smith et al., 2017) and more internationally (Doiz et al., 2013; Taguchi, 2014). Essentially, there have been two major motivations for EMI in these contexts (see also Walkinshaw et al., 2017). One is the internationalisation of universities, which is seen as being facilitated by access to English as the pre-eminent international language in the modern world, for the benefit of domestic students as well as the wider economy and society. The other motivation is to attract international students, both as sources of revenue for universities through the fees they pay and as agents of internationalisation through the diversity they can bring to the student body (for a critical perspective on these trends, see e.g. Jenkins, 2014). Whereas the United States, the United Kingdom, Australia and Canada have long been the preferred destinations for international students worldwide, they face increasing competition from institutions offering EMI programmes in Western Europe, East Asia and elsewhere. South African universities enrol relatively modest numbers of international students, who come predominantly from countries in the region (MacGregor, 2014), but also from further afield in Africa. Through the Southern African Development Community (SADC), many such students are eligible for domestic tuition fees and are attracted by a wider range of study opportunities than their home country universities can offer.
has been a foreign language. In South Africa, then, the issues in delivering EMI academic programmes in higher education are primarily associated with domestic rather than international students. It is useful to consider the questions that have arisen in other countries around the implementation of EMI. In a recent state-of-the-art article on this topic, Macaro et al. (2018) reviewed 83 studies from Europe, Asia and the Middle East, plus one from Colombia. (Interestingly they found none at all from Africa.) On this basis, they identified six factors that needed to be investigated by researchers in order to understand the extent to which the intended outcomes of EMI instruction, both in terms of language development and content learning, were likely to be achieved. The first question – the adequacy of the language competence of subject lecturers – is less of an issue in South African universities, because of the large pool of English-educated academics in the country. However, the second and third factors are certainly relevant, and in fact, they represent the core concerns of the authors of the present volume:

We need to have an understanding of the level of English proficiency EMI students in HE [Higher Education] need to start with, develop or attain and what are the consequences of students being admitted to courses/lecture rooms with different levels of proficiency, or different types of linguistic knowledge. We need to find out whether differing levels of students’ language proficiency leads to inequalities of opportunity particularly at transition points (e.g. from secondary to tertiary education) where a selection process based on a language test may present insuperable obstacles for perfectly capable content students (e.g. potential future engineers, geographers and medics). (Macaro et al., 2018: 38).
In the studies they reviewed, Macaro et al. found a pervasive concern about the low level of English language proficiency of students (and indeed their lecturers) in EMI programmes. This was reported to seriously hamper the students’ ability to understand lectures, to read textbooks, to participate in seminars – and presumably to complete their assessed writing assignments. Although not explicitly stated, the implication is that, at least for domestic students in the country concerned, there is no minimum level of English required for admission to these programmes and no provision of support for the students’ language development as they undertake their studies. This may reflect the haste with which EMI programmes have been introduced in numerous countries at the behest of policymakers and university managers, without any consultation with academic staff or language specialists on how to implement this new medium of instruction effectively. Interestingly, Macaro et al. (2018: 52) noted as an aside the lack of clarity in the way that different researchers
on EMI conceptualised the nature of the academic language proficiency that students in these programmes needed to acquire. Similarly, in the introduction to their volume on EMI programmes in the Asia-Pacific region, Walkinshaw et al. (2017) identify the same range of problems of implementation. One chapter in the book by Nguyen et al. (2017) addresses the language issues more directly through a case study of a university in Vietnam. In this case, there was an English entry requirement, defined as a minimum score of 500 on the Test of English for International Communication (TOEIC) (www.ets.org/toeic). Not only is this ostensibly a test of English in the workplace, rather than an academic proficiency measure, but also a score of 500 (out of a possible 990) indicates only a modest level of competence in the language – and some students were admitted to the programme without even obtaining that score. It was to be expected then that the students struggled to meet the course requirements, and lecturers routinely code-switched between English and Vietnamese in the classroom in an effort to assist their students’ comprehension of the course content. In another chapter of the book, Humphreys (2017) reflects on the current situation in Australian universities and argues that there are some strong similarities with student experiences of EMI in non-English-speaking countries, especially in degree programmes which attract a high proportion of international student enrolments. In 2006 there was considerable public debate in Australia about evidence that many international students were not able to meet the English requirement for an employment visa after graduating with a Bachelor’s degree, even though they were supposed to have achieved the required score in order to have been admitted to the degree (Birrell, 2006). This prompted a national symposium under the auspices of the federal government, which in turn led to the Good Practice Principles for English Language Proficiency of International Students in Australian Universities (AUQA, 2009). The principles have been influential in challenging Australian universities to review their obligations to ensure that the English language needs of their students – both international and more recently domestic students – are being adequately addressed, upon entry to the institution and during the course of their studies. One practical outcome has been the introduction of what have become known as post-entry (English) language assessments (PELAs), which are designed to identify incoming students who would benefit from an enhancement of their language and literacy skills and to advise or direct the students to participate in language development programmes available on the campus. These essentially local initiatives, which take a variety of forms at different Australian universities, have been extensively documented by Dunworth and her colleagues (Dunworth, 2009; Dunworth et al., 2013) and more recently by Read (2019). In two book-length treatments, Read (2015, 2016) has extended the coverage of such assessments to universities in New Zealand, Canada, the United States, Hong Kong,
Oman – and South Africa, through the work of Albert Weideman and his colleagues, which is now presented more extensively in this volume. Thus, academics in both Australia and South Africa have much to offer to universities in other countries, both in terms of defining the construct of academic literacy and in designing practical assessments, to address the kind of questions that Macaro et al. (2018) formulated on the basis of their review of the research on the current wave of EMI programmes in countries where English has not been a medium of instruction in higher education in the past.

The National Educational Background
We move now to outline the linguistic and educational context in which South African universities have operated during the last 20 years. The South African Constitution (South Africa, 1996) and other forms of legislation, such as the Language-in-Education Policy (Department of Education, 1997), have helped to ensure the elevation and use of the 11 official languages taught as L1 school subjects. Notwithstanding this, policies and legislation have in many spheres not achieved the desired protection or development of all the official languages (Balfour, 2006; De Kadt, 2006; Webb, 2013). English and Afrikaans remain the dominant languages of learning and teaching (LOLTs) at school and university. This means that, compared to speakers of indigenous Bantu languages, first-language speakers of these two languages have ample opportunity to develop their academic literacy in these languages during their years of schooling. The reality is that the official languages have not shared the same historical status and have not developed to the same extent (Alexander, 2013; Kamper, 2006; Louw, 2004; Webb, 2013). There are, for example, different traditions of language teaching and testing among the Bantu languages taught as L1s, and learners of languages with strong oral traditions may not have access to as many written resources as those studying developed languages such as English and Afrikaans (D’Oliveira, 2003; Ministry of Education, 2003). (Note that Afrikaans is recognised by the Department of Basic Education as an ‘African language’; the use of the linguistically correct term ‘Bantu language’ enables a distinction where reference to Afrikaans is not intended.) All of these concerns have had a profound effect on the teaching and learning of languages and the development of academic literacies. Texts tend to be created artificially in some of the Bantu languages for use in the education system rather than for the purposes of public consumption (Umalusi, 2012). In other words, there are not always sufficient authentic materials to draw on for many of the 11 Home Language (HL) subjects. This may have implications for the constructs assessed in the language examinations and the types of tasks included, as well as the
focus of teaching in the respective language classes. Another disparity on the exposure side, used as an explanation for the difference in standards, is that academic meta-language may be a problem for teachers of Bantu languages. It is claimed that this is not a regular part of Bantu language discourse and ‘the context is therefore not as supportive for developing the kind of critical and close reading skills typically associated with the English examinations’ (Umalusi, 2012: 7). Even if this were true – and it rests on the contestable assumption that some languages are (inherently) deficient – it is debatable whether this is a valid reason for not developing critical thinking and analytical ability in the Bantu language classroom. Probably one of the biggest challenges that remains to be addressed is the varying qualifications and capabilities of educators and the extent to which this has the potential to compromise the standard of teaching (Bhorat & Oosthuizen, 2008; Buckler, 2011; Modisaotsile, 2012; Steyn & Mentz, 2008; Van der Berg & Hofmeyr, 2018). Without improved teacher qualifications and instruction in higher-order skills in the classroom, the curricular goal of ensuring that students are prepared for tertiary education as regards their language development will remain a virtual impossibility. In fact, the expansion of cognitive abilities is mentioned in a comprehensive review commissioned by the World Bank as the foremost priority in South African school education (Van der Berg & Hofmeyr, 2018: 3). In this respect, ongoing monitoring of both cognitive processing skills and language development would be essential to gauge whether any improvement was discernible. As far as the training offered to pre-service and in-service educators is concerned, there is clearly a lack of accountability on the part of all stakeholders and a need for independent quality control measures. Despite the variety of state-funded and private support programmes available to educators across the country, there is no evidence of their effectiveness. Further to this, serious capacity constraints hinder the provision of suitable support, whereas the evaluation of service providers is virtually impossible owing to the excessive influence of labour unions (see Van der Berg et al. (2016) for a full discussion of constraining factors). Currently, there is little incentive for educators to improve their knowledge and skills, even though the National Policy Framework for Teacher Education and Development in South Africa (Department of Education, 2007) requires teachers to earn a minimum number of professional development (PD) points over a three-year cycle. Some open-learning platforms for teacher development are available, but the extent to which teachers make use of these is debatable, especially since earning PD points provides only symbolic recognition of further training. The reading crisis in the country, evident in the results of independent assessments of ability, can generally be ascribed to the varying standards of literacy and education in schools, but specifically to the teaching of
reading in Bantu languages. The latter is currently being foregrounded as a critical area of research (Van der Berg et al., 2016). Students who are given a strong foundation in their first languages when starting school are generally able to transition without difficulty to other languages as the medium of instruction in higher grades and at university. Unfortunately, (misguided) sociocultural beliefs about the usefulness of Bantu languages and a general lack of institutional support have contributed towards subtractive bilingualism rather than additive bilingualism in South African schools. Instead of maintaining the first language (L1/HL), in addition to learning a second language, the development of knowledge and ability in the L1/HL has been neglected owing to the emphasis accorded to English as main or only medium of instruction. Currently, there is a growing awareness at higher education institutions that multilingual classrooms and pedagogies may be able to take students further than the unilingual English route adopted up to now (Kaschula, 2013; Tshotsho, 2013). When given sufficient prioritisation and investment, unequally resourced languages can become more prominent and assume a more influential role in society over a period of time (cf. De Kadt, 2006), even if they are unlikely to be in a position to become international languages such as English. Adopting what are now called ‘translanguaging pedagogies’ is, however, often only a confirmation that humans, in communicating with others, will use whatever semiotic resources come readily to hand. Such acknowledgement does not of itself suggest any way out of the dilemma that most students with low academic literacy levels (in whatever language, or mix of languages) find themselves in. Preparing learners for tertiary study is one general aim of the National Senior Certificate (NSC) school curriculum in South Africa. Notwithstanding the laudable efforts made since the transition to democracy in 1994 to eliminate discrepancies in terms of infrastructure, funding and educational standards at schools (Jansen & Taylor, 2003; Spaull, 2012), compared to other countries there is still little return on the amounts invested in education. Even though the country spends approximately 20% of its entire budget on education (Spaull, 2012), far too many learners do not complete even their basic years of schooling. Those who do manage to complete their secondary education battle to find employment, and yet others who proceed to study at higher institutions of learning struggle to pass (Chisholm, 2005; Solidarity Research Institute, 2015). This means that a considerable number of learners are not acquiring the requisite knowledge and abilities needed to perform well in post-secondary education. In order to assist poor students to obtain a university education, in 1999 the National Student Financial Aid Scheme (NSFAS) was introduced by an act of Parliament (Department of Education, 1999). The amount designated to fund students has seen substantial increases since then, but at the same time university fees have risen and government funding for
universities has diminished. This contradictory state of affairs is problematic, as South African universities rely heavily on income from government subsidy and student fees, in addition to generating their own third streams of income (Lewin & Mawoyo, 2014). Although by 2012 just under one million students had been assisted to access university study through government funding, representing an amount of over R25 billion in the form of loans and bursaries (Lewin & Mawoyo, 2014), the success of this initiative is debatable. Even with increased financial support there has been a 72% dropout rate in respect of students on NSFAS bursaries (Wilson-Strydom, 2015). It is clear that money is not the answer to everything and that increasing the financial support available for students can only go so far. This, then, represents the background to the primary issue to be discussed in this volume, which is to screen students entering South African universities from these diverse and often inadequate educational backgrounds to identify those who will need substantial language and literacy support if they are to have any chance of coping with tertiary study through the medium of English.

Outline of the Volume
Most of the papers included here were presented in their original form at colloquia at two major conferences: the conference of the South African Association for Language Teaching (SAALT) at Rhodes University in Grahamstown in July 2017, and the Language Testing Research Colloquium (LTRC) at the University of Auckland, New Zealand, in July 2018. The papers have subsequently been revised and augmented for publication in this volume. The focus of the book is on procedures for assessing the academic language and literacy levels and needs of students, not so much to exclude students from higher education but rather to identify those who would benefit from further development of their ability in order to undertake their degree studies successfully. Thus, these assessments have an important diagnostic function, which requires a careful analysis of the nature of academic discourse, in order to provide the basis for designing tests which tap into relevant components of academic literacy. The chapters in the book include some technical detail on assessment procedures but they are written to be accessible to non-specialists in language testing. Part 1 of the book comprises four chapters which provide theoretical and conceptual foundations for the work reported in later chapters. Theo du Plessis discusses the complete lack of coordination in South African universities between the articulation of institutional language policy and the provision of programmes for students to develop their academic literacy. There is consequently a mismatch between an increasing emphasis at the policy level on recognising the country’s 11 official languages and the reality that the universities are more and more anglicised in practice.
The other three chapters explore the conceptual basis for the design of academic literacy assessments. Albert Weideman argues that the underlying construct must be defined in terms of an applied linguistic understanding of the nature of academic discourse. He outlines a five-phase process designed to guide test developers away from producing conventional skills-based tests towards a ‘skills-neutral’ approach. This design approach builds on a more functional definition of academic literacy which can be operationalised through creative use of the multiple-choice format in tests that can be administered on a large scale. This is followed by the chapter by Tobie van Dyk, Herculene Kotzé and Piet Murre, which considers whether an assessment based on the principles presented by Weideman can be translated for use in a different social and educational context. The authors use the results of administering a Dutch version of the Test of Academic Literacy Levels (TALL) in The Netherlands to explore issues of translation equivalence and equitable decision-making for students from different backgrounds. The final chapter in Part 1, by Alan Cliff, offers a complementary perspective which builds on the Vygotskyan principles of dynamic assessment to assess the learning potential of students. He presents a table of specifications for an operational test of this kind, with some sample test formats. Test trials from both South Africa and Costa Rica have produced promising evidence for the effectiveness of this approach. In Part 2 the focus shifts to the secondary school level to trace the origins of the poor preparation of students for university study. Colleen du Plessis presents numerous concerns about the lack of cognitive-academic content and the varying passing standards across the multiple language versions of the National Senior Certificate (NSC) matriculation examination at Grade 12. She offers specific examples of poorly constructed reading and writing tasks in the exam papers. Thus, the students’ marks are not good predictors of their academic performance at tertiary level. Responding to this situation, Jo-Mari Myburgh-Smit and Albert Weideman argue that academic language needs should be diagnosed earlier than Grade 12, so that deficiencies in academic preparedness could be addressed in the final two years of secondary school. They compared a Grade 12 level Test of Advanced Language Ability (TALA) with one adapted for students in Grade 10 and showed that the adapted test might function effectively to give early warning of language needs. The chapter by Sanet Steyn provides further evidence of the disparities in student performance across the 11 Home Language versions of the NSC examination. The solution that she investigates is based on a generic construct definition for the TALA in English and then pursuing two possible strategies for producing equivalent versions in the other languages. One involves functional translation from English to the other languages, and for the other each language version is written on the basis of a common set of test specifications.
Part 3 explores how measures of academic literacy can be used to inform the design of language interventions in university programmes. Kabelo Sebolai analyses the results of the pre-admission National Benchmark Test of Academic Literacy across faculties at his university and argues that, even though the test is generic rather than discipline-specific in content, the results can guide the provision of academic development opportunities for undergraduate students in each faculty. The next two chapters look at assessments designed for students of particular disciplines. Avasha Rambiritch, Linda Alston and Marien Graham explore the specific literacy needs of Theology students, who typically enter the university with quite limited language ability in English. The authors first undertook a survey of the target student population on their experiences in the first semester of university study, followed by an administration of the Test of Academic Literacy Levels (TALL). The discussion highlights the diagnostic potential of an academic literacy assessment, provided that the students are guided to understand the meaning of the results. In the last chapter Laura Drennan reports on the development of a test for Social Science students entering postgraduate study. The test is based on the same kind of construct definition as the other tests reported in this volume, but uses field-specific texts and tasks. A pilot study produced encouraging evidence that such a test could inform writing interventions to develop the students’ ability to present academic information in written theses and other media at the postgraduate level. In the postscript Tobie van Dyk presents a ‘best evidence review’ of the literature on academic language and literacy assessment in South Africa since 1999. This analysis identifies the prominent authors, avenues of publication and themes of the published work using qualitative and quantitative software tools, together with a comprehensive bibliography.

References

Alexander, N. (2013) Language Policy and National Unity in South Africa/Azania. E-book published by the Estate of Neville Edward Alexander. See http://www.sahistory.org.za/archive/language-policy-and-national-unity-south-africa-neville-alexander (accessed April 2014).
Australian Universities Quality Agency (AUQA) (2009) Good practice principles for English language proficiency for international students in Australian Universities. See http://www.aall.org.au/sites/default/files/Final_Report-Good_Practice_Principles2009.pdf (accessed May 2019).
Balfour, R.J. (2006) ‘A new journey with old symbols’: University language policies, internationalism, multilingualism, and the politics of language development in South Africa and the United Kingdom. In V. Webb and T. du Plessis (eds) The Politics of Language in South Africa. Studies in Language Policy in South Africa (pp. 1–14). Pretoria: Van Schaik.
Bhorat, H. and Oosthuizen, M. (2008) Determinants of Grade 12 pass rates in the post-apartheid South African schooling system. Journal of African Economies 18 (4), 634–666.
Birrell, B. (2006) Implications of low English standards among overseas students at Australian universities. People and Place 14 (4), 53–64. Melbourne: Centre for Population and Urban Research, Monash University.
Buckler, A. (2011) Reconsidering the evidence base, considering the rural: Aiming for a better understanding of the education and training needs of Sub-Saharan African teachers. International Journal of Educational Development 31, 244–250.
Butler, G. (2017) Translating the test of academic literacy levels into Sesotho. Journal for Language Teaching 51 (1), 11–43.
Chisholm, L. (2005) The state of South Africa’s schools. In J. Daniel, R. Southall and J. Lutchman (eds) State of the Nation: South Africa 2004–2005 (pp. 210–226). Cape Town: Human Sciences Research Council.
Coleman, J. (2006) English-medium teaching in European higher education. Language Teaching 39 (1), 1–14.
Cummins, J. (2000) Language, Power, and Pedagogy: Bilingual Children in the Crossfire. Clevedon: Multilingual Matters.
Dearden, J. (2015) English as a medium of instruction – a growing global phenomenon. See https://www.britishcouncil.org/sites/default/files/e484_emi_-_cover_option_3_final_web.pdf (accessed May 2019).
De Kadt, J. (2006) Language development in South Africa – past and present. In V. Webb and T. du Plessis (eds) The Politics of Language in South Africa. Studies in Language Policy in South Africa (pp. 40–56). Pretoria: Van Schaik.
Department of Education (1997) Language-in-Education Policy 14 July 1997, in terms of the National Education Policy Act (27 of 1996). GN 1701, Government Gazette 18546 of 19 December 1997.
Department of Education (1999) National Student Financial Aid Scheme (NSFAS) Act (56 of 1999). Government Gazette 206052 of 19 November 1999.
Department of Education (2007) The National Policy Framework for Teacher Education and Development in South Africa, in terms of the National Education Policy Act (27 of 1996). GN 367, Government Gazette 29832 of 26 April 2007. See https://www.che.ac.za/sites/default/files/publications/NPF_Teacher_Ed_Dev_Apr2007.pdf (accessed June 2019).
Doiz, A., Lasagabaster, D. and Sierra, J.M. (eds) (2013) English-Medium Instruction at Universities: Global Challenges. Bristol: Multilingual Matters.
D’Oliveira, C. (2003) Moving towards multilingualism in South African schools. In P. Cuvelier, T. du Plessis and L. Teck (eds) Multilingualism, Education and Social Integration (pp. 131–140). (Studies in language policy in South Africa; 3). Pretoria: Van Schaik.
Dunworth, K. (2009) An investigation into post-entry English language assessment in Australian universities. Journal of Academic Language and Learning 3 (1), 1–13.
Dunworth, K., Drury, H., Kralik, C., Moore, T. and Mulligan, D. (2013) Degrees of proficiency. See http://degreesofproficiency.aall.org.au/ (accessed May 2019).
du Plessis, C., Steyn, S. and Weideman, A. (2016) Die assessering van huistale in die Suid-Afrikaanse Nasionale Seniorsertifikaateksamen: Die strewe na regverdigheid en groter geloofwaardigheid. LitNet Akademies 13 (1), 425–443.
Fenton-Smith, B., Humphreys, P. and Walkinshaw, I. (eds) (2017) English Medium Instruction in Higher Education in Asia-Pacific. Cham, Switzerland: Springer.
Humphreys, P. (2017) EMI in Anglophone nations: Contradiction in terms or cause for consideration? In B. Fenton-Smith, P. Humphreys and I. Walkinshaw (eds) English Medium Instruction in Higher Education in Asia-Pacific (pp. 93–114). Cham: Springer.
Jansen, J. and Taylor, N. (2003) Educational Change in South Africa 1994–2003: Case Studies in Large-Scale Education Reform. Country Studies Education Reform and Management Publication Series commissioned by the Education Reform and Management thematic group of the World Bank, vol. II (1). See http://www.jet.org.za/publications/research/Jansen%20and%20Taylor_World%20Bank%20report.pdf (accessed February 2016).
Jenkins, J. (2014) English as a Lingua Franca in the International University: The Politics of Academic English Language Policy. Abingdon, UK: Routledge.
Kamper, G. (2006) Implications of the position of indigenous languages in South Africa for the use of indigenous knowledge in community development. Africanus 36 (1), 75–87.
Kaschula, R. (2013) Multilingual teaching and learning models at South African universities: Opportunities and challenges. Seminar presentation, NRF SARCHI Chair: Intellectualisation of African languages, Multilingualism and Education, Rhodes University, Grahamstown, March 2013.
Lewin, R. and Mawoyo, M. (2014) Student access and success: Issues and interventions in South African universities. Report published by Inyathelo: The South African Institute for Advancement, with the support of The Kresge Foundation. See http://www.inyathelo.org.za/knowledge-services/inyathelo-publications/view-all-publications-fordownload.html (accessed April 2016).
Louw, P.E. (2004) Anglicising postapartheid South Africa. Journal of Multilingual and Multicultural Development 25 (4), 318–332.
Macaro, E., Curle, S., Pun, J., An, J. and Dearden, J. (2018) A systematic review of English medium instruction to higher education. Language Teaching 52 (1), 36–76.
MacGregor, K. (2014) Major survey of international students in South Africa. University World News, 6 September. See https://www.universityworldnews.com/post.php?story=20140905134914811 (accessed May 2019).
Ministry of Education (2003) The development of indigenous African languages as mediums of instruction in higher education (Report compiled by the Ministerial Committee appointed by the Ministry of Education, September 2003). See http://www.dhet.gov.za/Management%20Support/The%20development%20of%20Indigenous%20African%20Languages%20as%20mediums%20of%20instruction%20in%20Higher%20Education.pdf (accessed July 2019).
Modisaotsile, B.M. (2012) The failing standard of basic education in South Africa. Africa Institute of South Africa Policy Brief No. 72, March 2012.
Nguyen, H.T., Walkinshaw, I. and Pham, H.H. (2017) EMI programs in a Vietnamese university: Language, pedagogy and policy issues. In B. Fenton-Smith, P. Humphreys and I. Walkinshaw (eds) English Medium Instruction in Higher Education in Asia-Pacific (pp. 37–52). Cham: Springer.
Read, J. (2015) Assessing English Proficiency for University Study. Basingstoke: Palgrave Macmillan.
Read, J. (ed.) (2016) Post-admission Language Assessment of University Students. Cham: Springer.
Read, J. (2019) Postentry English language assessment in universities. In A. Gao (ed.) Second Handbook of English Language Teaching (pp. 395–414). Cham: Springer.
Solidarity Research Institute (2015) Matric report: The South African labour market and the prospects for the matriculants of 2015. See http://www.solidariteit.co.za/wp-content/uploads/2016/01/Matric_Report_2015.pdf (accessed January 2017).
South Africa (1996) Constitution of the Republic of South Africa, 1996. Act 108 of 1996. Pretoria: Government Printers.
Spaull, N. (2012) Education in SA: A tale of two systems. Politicsweb, 31 August. See http://www.politicsweb.co.za/politicsweb/view/politicsweb/en/page71619?oid=323272&sn=Detail (accessed February 2014).
Steyn, H.J. and Mentz, E. (2008) Teacher training in South Africa: The integrated model as viable option. South African Journal of Higher Education 22 (3), 679–691.
Taguchi, N. (ed.) (2014) Special issue: English-medium education in the global society. International Review of Applied Linguistics in Language Teaching 52 (2).
Tshotsho, B.P. (2013) Mother tongue debate and language policy in South Africa. International Journal of Humanities and Social Science 3 (13), 39–44.
Umalusi (2012) The Standards of the National Senior Certificate HL Examinations: A Comparison of South African Official Languages. Pretoria: Umalusi.
Van der Berg, S. and Hofmeyr, H. (2018) Education in South Africa. Washington, DC: World Bank. See https://openknowledge.worldbank.org/handle/10986/30017 (accessed June 2019).
Van der Berg, S., Spaull, N., Wills, G., Gustafsson, M. and Kotzé, J. (2016) Identifying binding constraints in education: Synthesis report for the programme to support pro-poor policy development (PSPPD). Stellenbosch: Research on Socio-Economic Policy (RESEP), University of Stellenbosch. See http://resep.sun.ac.za/wp-content/uploads/2016/05/PSPPD_BICiE-email.pdf (accessed June 2019).
Wächter, B. and Maiworm, F. (2014) English-taught Programmes in European Higher Education: The State of Play in 2014. Bonn: Lemmens.
Walkinshaw, I., Fenton-Smith, B. and Humphreys, P. (2017) EMI issues and challenges in Asia-Pacific higher education: An introduction. In B. Fenton-Smith, P. Humphreys and I. Walkinshaw (eds) English Medium Instruction in Higher Education in Asia-Pacific (pp. 1–18). Cham: Springer.
Webb, V. (2013) African languages in post-1994 education in South Africa: Our own Titanic? Southern African Linguistics and Applied Language Studies 31 (2), 173–184.
Weideman, A., du Plessis, C. and Steyn, S. (2017) Diversity, variation and fairness: Equivalence in national level language assessments. Literator 38 (1). See https://literator.org.za/index.php/literator/article/view/1319 (accessed July 2019).
Wilson-Strydom, M. (2015) University Access and Success: Capabilities, Diversity and Social Justice. Abingdon: Routledge.
1 Institutional Language Policy and Academic Literacy in South African Higher Education – a Two-pronged or Forked-tongue Approach?1,2

Theo du Plessis
Introduction
How do institutional and sectoral language policies interact with language interventions and practices in South African universities? Has enough been done to ensure an alignment between language policy and language development initiatives in higher education? Is the strong movement towards supporting the language development of students to ensure greater academic success adequately supported by sectoral and institutional policies? Do we have anything to learn from examples in other countries? The related question raised in this chapter is where South Africa stands with regard to academic literacy policy in relation to language-in-higher-education policy. South Africa has developed the Language Policy for Higher Education (LPHE), a policy that also contains directives regarding institutional languages and languages of instruction (MoE, 2002). However, regarding academic literacy it merely 'encourages' higher education institutions to give attention to language proficiency and to make provision for academic literacy, thus not reflecting the integrated approach found in what is called HELP (Higher Education Language Policy) in Europe. In fact, in her critique of the LPHE Van der Walt (2004: 150) finds that the policy framework falls short of integrating language policy and planning in the educational landscape. At institutional level Knott (2015: 159) identifies in the case of the reviewed language policy of Nelson Mandela University (the former
Nelson Mandela Metropolitan University) a notable misalignment between institutional language policy and 'multilingual literacies'. As a result of these misgivings, it is not surprising that questions have surfaced about the state of language policy design in the South African higher education landscape. The obvious first question is whether the 2017 draft LPHE by the Department of Higher Education and Training (DHET, 2017) offers a corrected design solution to the problem of misalignment. This entails looking at the evolution of the language policy framework in relation to the discourse about academic literacy in South Africa, mainly in terms of the role of the LPHE and related policy documents in providing an appropriate framework for designing what Weideman (2017) terms a 'responsible' approach to language problems in applied linguistics. In echoing Weideman (2021), one is concerned about cases of misalignment since, at best, these can lead to 'design inefficiencies and at worst to contradictory and conflicting arrangements'. Any misalignment within an institutional language policy between institutional language, language of teaching and learning, academic language and language proficiency would suggest that the language problems identified by the so-called Soudien report3 have not been addressed. This report incidentally concludes as follows about the language situation in higher education in South Africa:

(T)here are unacceptably large numbers of students who are not successful academically because of the 'language problem'. They fail, not because of a lack of intelligence, but because they are unable to express their views in the dominant language of instruction. (Ministerial Committee, 2008: 101)
It recommended a two-pronged policy response to this dilemma, requiring universities to adopt a multilingual approach to (a) institutional communication in the Sintu languages [that is the Bantu languages of Southern Africa; sometimes, somewhat misleadingly, referred to as 'African languages'], and (b) academic literacy through the development of the Sintu languages as academic languages (Ministerial Committee, 2008: 102). What essentially was called for was a redesign of existing institutional language policies that would help to overcome the detrimental outcomes of language policy misalignment alluded to by Weideman.

Language Policy and Planning and Academic Literacy
In order to take this discussion further, I refer first to opinions on the above originating in the Australian and European contexts. Fenton-Smith and Gurney (2016: 73) are of the opinion that a national language policy
for higher education might help to resolve a similar policy dilemma within the Australian context. However, they claim that universities themselves also ought to become involved in ‘academic language policy and planning’, a domain of language-in-education policy and planning that deals with issues of ‘academic language and learning’. The authors foresee that academic language policy and planning is likely to gain prominence in higher education given the world-wide increase in the adoption of institutional language policies that make provision for English-medium instruction, mostly in contexts where English is not the dominant language (Fenton-Smith & Gurney, 2016: 84). If approached as an ‘institution-wide’ or ‘whole-institution’ initiative, such planning could help, inter alia, to realise an ‘institutional language enhancement plan’ (Fenton-Smith et al., 2017: 465). In contrast, considering the European context, the CEL/ELC Working Group has made it pertinently clear that it wishes to assist higher education institutions to develop their own language policies by means of a meta-policy. They nevertheless want each university involved in establishing institution-wide language policies, i.e. policies that deal with the languages of instruction, administration and communication, over and above dealing with the objectives of language programmes and language support services (Lauridsen, 2013: 3). From a language policy perspective this comparison between the Australian and European language policy in higher education points to the need to recognise the cardinal difference between an institutional language policy and an academic language and learning policy in higher education. The CEL/ELC Working Group makes this distinction clear in their recommendations to higher education institutions by separating recommendations on institutional language(s) from those concerned with language proficiency and related issues (Lauridsen, 2013: 11), but nevertheless foreseeing one language policy combining both of these aspects. These recommendations are written in a context where European universities are moving ‘beyond monolingual ideologies’ (Mazak & Carroll, 2016), primarily because of the internationalisation of higher education (Lauridsen, 2013: 3). Although, as stated, these recommendations have been made within the internationalisation context, Australian colleagues do not in the first instance refer to an institutional language policy but are essentially alluding to the dilemma of English proficiency in particular (the Europeans take a more generic approach), as articulated in a recent publication on issues in the Australian higher education landscape written by the Centre for the Study of Higher Education (CSHE) at the University of Melbourne. An entire chapter of this publication is devoted to ‘English language standards and claims of soft marking’ (Arkoudis, 2013). Incidentally the author involved reiterates the need for Australian universities to adopt ‘an institutional-wide strategy’, but one which is aimed specifically at English language learning and development.
Essentially the difference between the European and Australian approaches to a language policy for higher education institutions lies in the degree of recognition of institutional multilingualism. European higher education already has a tradition of bi- and multilingual universities, whereas it is neither the case with nor the intention of Australian universities to move towards such a situation, despite the steady increase of 'indigenous' students since 2001 (see Universities Australia, 2015: 19). However, the value of the Australian case lies in the accentuation of what seems to be a very specific language policy and planning goal, i.e. literacy acquisition (Liddicoat, 2007a: 1), a matter that receives less attention in the case of European higher education. Although the Framework for Language Planning Goals (Kaplan & Baldauf, 2003: 202) does not specifically make provision for literacy acquisition as a goal, one may nevertheless relate it to language-in-education planning, otherwise also described as language acquisition planning (Cooper, 1989); a specific type of language planning alongside language status, corpus and prestige planning. Depending on the context, literacy acquisition could involve different literacy types, including what Fenton-Smith and Gurney (2016: 73) term 'tertiary academic literacy' [own emphasis]. In the European case, on the other hand, the goals seem to be different and more strongly related to language maintenance, language spread and interlingual communication (matters concerning language status planning), language internationalisation (in language corpus planning) and language intellectualisation (language prestige planning) regarding institutional languages. Nevertheless, the European case also includes goals that relate to language-in-education planning, albeit that these are notably different ones: Foreign Language and Second Language programmes are priorities in European education, as well as language maintenance. The addition of literacy acquisition as a goal of language-in-education planning requires our further attention. According to Kaplan and Baldauf (2003: 202), language-in-education planning typically involves policy planning (regarding access, personnel, curriculum, methods and materials, resourcing, community and evaluation) and cultivation planning (regarding language acquisition and reacquisition, language maintenance, Second Language or Foreign Language dynamics and language shift). It therefore stands to reason that cultivation planning for tertiary academic literacy would have to rely on policy planning about literacy, and typically therefore relates to other relevant policies such as access policy, curriculum policy, etc. In fact, Liddicoat (2007a: 1) even points to the interrelatedness of literacy planning and language corpus planning (for a language to require technologies for literacy), language status planning (for a language of literacy and of instruction) and language prestige planning (for promoting
and intellectualising a language of literacy). This interrelatedness strengthens the case for an 'institution-wide' language and literacy policy that is clear about institutional language, taught language (i.e. Foreign Language, etc.) and academic or literacy language, the latter being determined by the language policy-formulation process (Liddicoat, 2007b: 24). In the case of monolingual universities some of these allocations could all refer to the same language, but in the case of bi- or multilingual universities the matter is a bit more complex, hence the need for an overarching language and literacy policy. This requires what Kaplan and Baldauf (1997: 258) describe as a coherent 'language, literacy and communication policy approach to tertiary literacy'. In approaching tertiary academic literacy as a language planning goal, Kaplan and Baldauf (1997: 149) stress that policymakers need to understand what literacy is, understand its dynamics as well as how it changes over time, while recognising the role of educational institutions in disseminating an appropriate literacy. Similarly, Liddicoat (2007b: 26) emphasises the changing conceptualisation of literacy and reiterates the need for the language policy and planning process to deal with the so-called contestations inherent to the literacy field. This requires language policy and planning aimed at literacy to move beyond a print-based conceptualisation that results in a focus on programmes in reading and writing (the so-called autonomous model of literacy) to a more social constructionist understanding of literacy as social practice, recognising the multiplicity of literacies involved. However, according to Liddicoat, many language policy and planning initiatives have actually not moved beyond the former approach. This is confirmed by a recent national audit of academic literacy provision at 35 Australian higher education institutions, which revealed that the enabling educators still largely hold on to what can be described as a 'commonsensical (tacit) notion of what academic literacies means' (Baker & Irwin, 2015). According to the authors of this audit report, academic literacies is understood as 'a multi-faceted, complicated and expansive set of practices that students need for undergraduate study'. The majority of the participants in the audit stress writing, reading and critical thinking as the core elements of academic literacy, which, in the authors' opinion, is actually an umbrella term for a range of related aspects. Liddicoat (2007b: 24) also emphasises the intricate relation between literacy and language selection, whether literacy is conceptualised independently of a particular language (in other words transferable from one language to another) or tied closely to the propagation of a specific language in society (for instance 'English literacy'). The crucial difference between the two approaches is that in the case of the transferable type, literacy education can concentrate and build on home languages, even languages of minority and indigenous groups, whereas
the second approach tends to lead to the equation of literacy with the propagated language, such as one would find in English-speaking countries regarding English. In this sense one should recognise the significant difference between planning for literacy in a first or home language (whether official language or national language or special status language in the case of a minority language or marginalised language), planning for literacy in an additional language (in the case of minority or marginalised communities) or planning for literacy in multiple languages (Liddicoat, 2007a: 2), a situation that often occurs in developing countries. Consequently, Liddicoat (2007b: 18) draws a distinction between planning for national literacies, vernacular literacies, local literacies or biliteracies. One may assume that all of the above would also play a role in tertiary academic literacy policy and planning. Liddicoat raises the issue of language proficiency in academic contexts in relation to the question concerning support for second-language learners, which, according to Cummins (2000: 57), is a 'recurring issue for educational policy in many countries'. He contends that the manner in which language proficiency is conceptualised and how decision makers and teachers relate it to academic development are 'central to many volatile policy issues in education', where second-language users of English are concerned (Cummins, 2000: 58). In an attempt to clarify the critical relationship between language proficiency and academic achievement, he developed the BICS (Basic Interpersonal Communication Skills)/CALP (Cognitive Academic Language Proficiency) dichotomy during the latter part of the 1970s, thereby distinguishing between conversational and academic aspects of language proficiency. The latter aspect relates primarily to academic literacy, while obviously not equating academic literacy to language proficiency. BICS is acquired early in the developmental process while CALP is developed mostly within the formal educational process. Cummins notably identifies the conflation of BICS and CALP as 'a major factor' in creating difficulties for students using more than one language in their education and at different stages and levels. He therefore defines academic language proficiency as 'the extent to which an individual has access to and command of the oral and written academic registers of schooling' (Cummins, 2000: 67). This can also be applied to higher education. Patterson and Weideman (2013) similarly but more broadly refer to the lingual aspects of academic discourse. In order to better understand the language demands underlying academic tasks, Cummins (2000: 68) further distinguishes between cognitively demanding and less demanding language tasks that are interrelated with contextual demands. Simplified, this means, for example, that a conversation outside of the classroom is cognitively far less demanding than participating in a seminar or conceptualising and writing an essay. For Boughey (2013: 27), the BICS/CALP distinction helps to shift the debate in academic development
away from language use per se to language use in 'context reduced' situations and for cognitively demanding tasks. Though influential, the BICS/CALP distinction captures only some dimensions of the typicality of academic discourse, and Cummins (2000: 86ff) himself has responded extensively to major criticisms of his typology; nevertheless, these distinctions do provide a first conceptual step towards getting to grips with the particularity of this kind of language use. This brief overview emphasises the complexity of language policy and planning for tertiary academic literacy, because literacy is a contested phenomenon and not primarily a matter of language proficiency. As shown, even predominantly monolingual universities have apparently not yet managed to adopt a satisfactory policy-driven approach to this unique language planning goal. When considering academic language policy and planning, higher education institutions are challenged in different ways by two opposing but somehow related trends: the Anglicisation of higher education (Coleman, 2006: 3) on the one hand and 'increasing multilingualism' in higher education (Liddicoat, 2016: 232) on the other. It would seem that institutions functioning in an environment of institutional or official multilingualism respond differently to the latter than those functioning within an institutionally or officially monolingual polity but an unofficial multilingual environment. Notably Liddicoat (2016: 238) concludes by emphasising the complexity of language policy and planning at universities: how they have to 'engage' with the internationalisation challenge but also how 'multilingualism may be overlooked' in current language policy and planning initiatives. Either way, higher education institutions are increasingly required to adopt a policy-driven approach to solving their language problems. What is arguably required is therefore a holistic policy-driven approach dealing with both institutional and academic language issues, one that moves beyond 'monolingualisation' or beyond what some prefer to describe as monolingual ideologies.

Academic Literacy Policy in South African Higher Education
Boughey (2013: 26, 31) stresses that within the South African context academic literacy has largely been dealt with as a matter of academic development, a field initially introduced at former English-speaking universities in the early 1980s when students from marginalised groups were admitted to former 'white' universities (Boughey, 2010: 4). This led to the establishment of academic support programmes, an initiative that was soon questioned as students experienced these programmes as 'a function of the general deficiency model related to their social and cultural backgrounds' (Boughey, 2010: 10). The focus subsequently shifted to academic development programmes and, due to the transformation discourse of the 1990s, to institutional development programmes,
involving the integration of student support into mainstream teaching and learning. In practice, student support units or centres evolved over time into academic development units/centres, and eventually into teaching and learning units/centres or higher education development units/centres (Boughey, 2010: 17). The Centre for Higher Education Development at the University of Cape Town is a prime example of this evolution and of how academic support, academic development and staff and curriculum development are combined (CHED, 2018). Boughey (2010: 13 ff) identifies several policy developments that have had an impact on academic development in higher education in post-1994 South Africa, inter alia the introduction of the National Qualifications Framework (NQF) in 1995 in terms of the South African Qualifications Authority Act (South Africa, 1995), the establishment of new (merged and amalgamated) higher education institutions and new institutions during 2001, as well as the introduction of quality assurance in South African higher education (CHE, 2004). Notably, she does not mention any corresponding language policy development in higher education during the period. Boughey nevertheless points out how the mentioned interventions required a range of responses from higher education institutions in relation to, among others, the capacity required for managing and developing teaching and learning, and in the field of academic development. Universities responded by embarking on a range of academic development initiatives, one of them being the introduction of so-called Extended Curriculum Programmes. These are programmes that have been lengthened in order to allow for the inclusion of activities geared towards supporting and helping disadvantaged students. Obviously, this required the establishment of a full-blown academic support infrastructure which resulted in a huge financial burden for universities, since their subsidies were calculated on the basis of a three-year degree programme. In recognition of these interventions, the ministry introduced its Foundation Programme Grants in 2000 (Boughey, 2010: 16). From 2004 onwards state funding for extended curriculum programmes came into being (CHE, 2014: 28). A discussion ensued about these extended programmes, as reflected in relevant studies commissioned by the CHE such as those of Scott et al. (2007), the Task Team on Undergraduate Curriculum Structure (2013) and Shay et al. (2016), to mention the major ones. Also occasionally touching on issues concerning language and academic literacy, these studies have identified the issue of ‘student under-preparedness’ as a complex issue warranting serious attention. Consequently, a new undergraduate curriculum structure with three fundamental elements was proposed, with modifications in respect of duration (extending all current basic degrees and diplomas by one year), flexibility (to allow students freedom to complete the programme in a
shorter time) and standards (retaining and even improving on existing standards by utilising additional curriculum space). General agreement was also reached on the need to improve on poor throughput rates. A number of questions were nevertheless raised, including whether the proposed extended programme would contribute to developing 'language functionality' and how academic literacy would help with addressing missing hierarchical knowledge (CHE, 2014: 17; CHE, 2014: 8). To date the Ministry has not accepted the proposals on an extended curriculum. Nevertheless, 'the twin challenge of academic language and language of instruction (English)' identified in terms of success and throughput (CHE, 2010: 182) remains, although an overemphasis on (English) language proficiency over disciplinary knowledge is challenged. These studies raise an important question, namely whether intellectual rigour and cognitive ability can be judged solely on proficiency in the English language (CHE, 2010: 173). Although it offers only a rather cursory glance at the development of policy on academic literacy, this overview highlights that work has indeed been done on academic literacy (or literacies) policy in higher education since the promulgation of the Higher Education Act in 1997. In fact, academic literacy and academic development have taken centre stage in the discourse on curriculum reform in South Africa and in addressing the key systemic challenges of access, retention and throughput. The initiative of a number of universities in this regard is noteworthy. Also notable is the role the CHE is playing in advancing discussions regarding under-preparedness, academic success and throughput rates. However, what is striking about these developments is the virtual absence of significant cross-referencing to institutional language policy, and this despite the fact that language of instruction and language proficiency do feature in the unfolding discussions, more specifically in relation to proficiency in English as academic language.

Language Policy for Higher Education in South Africa
Relatively more has been written about language policy development since the publication of the LPHE in November 2002 (see MoE, 2002). The LPHE draws on two preceding reports, the Language Policy Framework for South African Higher Education (CHE, 2001), released in terms of Section 5(2)(i) of the Higher Education Act (South Africa, 1997) as advice to the Minister during July 2001, as well as its 'extension', the so-called Gerwel Report on the position of Afrikaans (Gerwel, 2002), released in January 2002. The LPHE obviously also draws on the National Plan for Higher Education released during February 2001 (MoE, 2001). Apart from several scholars providing brief summaries of the period following the publication of the LPHE (inter alia by Abongdia, 2015; Laga Ramoupi, 2014; Madiba, 2010; Turner & Wildsmith-Cromarty,
2014), Maseko (2014) presents the most detailed overview to date of interventions since 2002. Although less critical of the LPHE, Maseko (2014: 28) nevertheless notes that the evolution of language policy in higher education since November 2002 points to the need for institutions to 'rethink the place of these [the Sintu – own insertion] languages' and to adjust their institutional language policies accordingly. She bases this observation on her analysis of the LPHE as well as on the following post-2002 policy interventions:

• two departmental reports:
  • one by a Ministerial Committee appointed in September 2003 on the development of Sintu languages as media of instruction in higher education (Ministerial Committee, 2003);
  • a second by a Task Team appointed in October 2011 on the Charter for Humanities and Social Sciences (DHET, 2011);
• the Green Paper for Post-school Education and Training launched on 12 January 2012 (DHET, 2012a); and
• the appointment on 10 February 2012 of the Ministerial Advisory Panel on the Development of Sintu Languages in Higher Education (DHET, 2012b).

Maseko discusses the essence of each of these interventions in relation to what emanates as an intended 'language policy shift'. With regard to the core provisions of the LPHE, she emphasises the need for the development of the Sintu languages in order to increase access and ensure the academic success of students speaking languages other than English and Afrikaans, and also to promote social cohesion. According to Maseko (2014: 30) the LPHE foresees a role for the Sintu languages in the development of academic literacy. This notion is supported by recommendations from the two above-mentioned reports about the allocation of specific Sintu languages per university for the purpose of developing these as academic languages in the first instance, and integrating them as languages of learning in the second instance (Maseko, 2014: 32). One of the six 'catalytic' projects recommended by the second report is a national multi-disciplinary project on how 'indigenous languages' could support the process of concept formation in the humanities and social sciences, and could enrich social scientific thinking or pedagogy; this project is flagged as a priority (DHET, 2011: 38). A further recommendation is that adjustments need to be made in the way universities are funded in order to show a funding commitment towards language-related programmes (DHET, 2011: 42). Maseko emphasises the centrality of recommendations in the Green Paper on the 'strengthening' of Sintu languages and their development, as well as the shift towards concretising steps towards elevating their
status and extending their use. Essentially the Green Paper proposes three measures, viz. that:

• proficiency in Sintu languages be made a requirement in professional training;
• teacher training be geared towards mother-tongue education; and
• students be encouraged to follow a course in an 'African language'. (Maseko, 2014: 34)

The more recent White Paper on Post-Secondary School Education and Training (DHET, 2013a: 38) retains these recommendations. It also stresses the need for higher education institutions to integrate the Sintu languages into formal programmes, the overall need to reverse the decline of African language departments and the strategic importance of these languages to the work of universities. Finally, Maseko (2014: 35) pays attention to the work of the Ministerial Advisory Panel on the Development of African Languages in Higher Education and to the emerging thinking about steps to concretise the recommendations concerning the implementation of these languages in higher education, and the extension of their role, that have been circulating since 2002; this thinking includes findings from a range of related studies. The Advisory Panel's report to the Minister has been published in the meantime. It concludes with a range of concrete recommendations, inter alia that:

• Both national policy and institutional policies clearly relate policy goals to the development of Sintu languages as subjects (both home language and additional languages);
  • as mediums of instruction; and
  • as support to learning;
• Both the DHET and universities set up structures (such as Language Units) to monitor and evaluate language policy implementation and regularly report on progress;
• Sintu languages be developed as a collaborative effort;
• Comprehensive research be undertaken regarding the intellectualisation of Sintu languages, supported by dedicated and increased funding;
• A national study be undertaken to gain evidence on the evolution of African language departments in South Africa;
• A dynamic partnership be established between the DHET and the Department of Basic Education in order to rally support for meaningful multilingual education where the Sintu languages are prominently featured;
• Terminology development be financially supported;
• Each university establish an infrastructure to develop and translate critical texts into Sintu languages and research their use in pedagogical contexts. (DHET, 2015a: 39–44)
As such, these recommendations represent the most concrete language planning steps so far proposed concerning the position of the Sintu languages in higher education. However, it is rather striking that they do not deal directly with language proficiency or academic literacy, despite the fact that the topic is mentioned in the report. Maseko (2014: 44) concludes that the language policy landscape is 'conducive to facilitating the meaningful integration of the indigenous African languages [other than the Khoesan languages and Afrikaans – own insertion] in all teaching, learning and research practices' in higher education and ensuring that English in particular does not exclude students for whom it is an additional language. At the time of her overview the table was already set for a mind-shift in higher education about the languages used for learning and teaching, and in particular with regard to the future role of the Sintu languages. This requires a far-reaching review of universities' institutional language policies.

The need for language policy reviews also arose from policy interventions related to a parallel transformation process driven by the DHET not discussed by Maseko. These interventions flow from a Ministerial Committee report on transformation and social cohesion and the elimination of discrimination in higher education institutions released on 30 November 2008. Although not focused on language problems per se, this report nevertheless touches on problematic language practices at some universities, where apparently language continues to be used as a mechanism for exclusion and where it could be perceived as a barrier to transformation. Notably the report pertinently highlights the dilemma of non-English-speaking students being disadvantaged because of a lack of language proficiency in the two existing languages of teaching and learning (Ministerial Committee, 2008: 101). The report makes a range of recommendations directed specifically at granting the Minister a larger role in engineering transformation at the targeted institutions. These include that:

• institutional language policy implementation be reviewed;
• a mechanism be established to monitor implementation; and, most importantly,
• universities be requested to report on their commitment to multilingualism and the development of Sintu languages as academic languages. (Ministerial Committee, 2008: 102)

Three noteworthy consequences of these recommendations are:

• the establishment of a higher education forum on transformation (meeting with the Minister on a regular basis);
• the institution of a higher education summit, the first held in 2010 (DHET, 2010) and a second in 2015 (DHET, 2015b); and
• the appointment of a Ministerial Oversight Committee on Transformation in 2013 (DHET, 2013b).

Also, more may be learned about this shifting focus regarding language issues from the reports of the 2010 and 2015 higher education summits. The position of the Sintu languages drew considerable attention at the 2010 summit. A resolution was adopted there that higher education institutions should contribute to the development as well as institutionalisation of these languages as academic languages and that consideration be given to making them compulsory at first-year level (DHET, 2010: 24–25). In comparison, less attention was devoted to language issues at the 2015 summit, although there was discussion of language practices at universities that present a stumbling block to effective teaching and learning (DHET, 2015c: 2). The matter was raised by a report of the Oversight Committee at the summit as part of a discussion about institutional culture at South African universities, as well as students' general dissatisfaction with institutional language policies (DHET, 2015d). The draft revised language policy for higher education published on 23 February 2018 gives an indication of what the Minister expects concerning the linking of language with transformation. Given the repeated suggestions about the need for language policy review, it comes as no surprise that this draft requires all higher education institutions to actually 'revise' their institutional language policies, with the aim of:

• Strengthening 'indigenous official languages' as languages of teaching and learning, research and innovation and science; and
• Ensuring transformation by enhancing the status and roles of previously marginalised languages in fostering institutional inclusivity and social cohesion. (DHET, 2017: 12)

Apart from a two-pronged definition of 'academic language' – the 'language needed by students to do the work in schools', which includes, for example, 'discipline-specific vocabulary, grammar and punctuation, and applications or rhetorical conventions and devices that are typical for a content area' (DHET, 2017: 6) – and a brief mention of language proficiency challenges in the introduction of the draft revised LPHE (DHET, 2017: 9), very little of any substance is provided about academic language policy and planning. Although some guidelines are given regarding the languages of teaching and learning, the draft revised LPHE notably concentrates on matters relating to institutional languages (medium of instruction, administration and research). This shortcoming might still be resolved once the draft is finalised in response to stakeholders' comments.
Discussion
It is rather striking that documents dealing with policy development in academic literacy point to low levels of language proficiency in English, and to some extent also in Afrikaans, as languages of learning and teaching. However, apart from mentioning this dilemma, this study has not uncovered any evidence of corrective interventions in this regard. To the contrary, recent publications emphasise the fact that universities have taken the opposite route by elevating English as the primary language of higher education and, in the process, even removing Afrikaans as an alternative language medium (du Plessis, 2017). Notably, little mention is made in the studied documents of the utilising of the Sintu languages as academic languages. Another striking aspect of the academic literacy discourse is that institutional language policies are not mentioned. One gets the impression of two discourses running parallel to one another. This might have to do with the fact that those involved in academic language development represent a fraternity of their own, what Boughey (2010) refers to as the ‘academic development movement’. It essentially constitutes a bottom-up response to the challenges of language development in South Africa, which differs significantly from the largely top-down approach followed in institutional language policy development; a situation that prompts one to want to say ‘and never the twain shall meet’. The overview of language policy development within higher education in South Africa provided here points to the fact that decision makers are aware of the importance of higher education language policy that responds to these challenges in some way. Much effort went into developing a national framework for university language policies. Apart from a range of reports and other policy outcomes, the LPHE stands out as the central document resulting from extensive policy deliberations since 1994. This applies to both the (initial) 2002 and (revised) 2017 versions. By comparing these two versions one is able to reconstruct the discourse on university language policy over the period. One of the most noticeable trends to have emerged through such a comparison is the complete shift regarding languages of teaching and learning at South African universities. Whereas the 2002 version of the LPHE emphasises the fact that English and Afrikaans are the languages of higher education, the 2017 version has moved away from this position and now over-stresses the use of the Sintu languages at South African higher education institutions. It emphasises the institutionalisation of these languages and gives guidance in this regard by requiring that these languages be included as institutional languages. This implies that these languages are to be used not only as languages of administration but also as languages of learning and teaching.
Another shift in language policy concerns the lesser emphasis on language proficiency in the academic language. Whereas the 2002 version still stresses this issue, the 2017 version offers very little substance in this regard and does not manage to provide a clear correlation between language of learning and teaching on the one hand and academic language on the other. Rather, while the Sintu languages are emphasised in the former role, English is now the designated language in the latter role. Apart from creating an internal policy conflict and thereby undermining all the efforts regarding the elevation of the Sintu languages, this conflict weakens attempts to improve academic literacy. On the basis of the evidence presented in this chapter, one may reach the conclusion that there seems to be a lack of alignment between institutional language policy and academic literacy policy. The LPHE does not provide an adequate overarching framework for both an institutional and an academic language policy. Furthermore, neither the framework for institutional language policy nor the initiatives towards developing academic literacy do enough to establish a next level of alignment, that is, between institutional language policy, language proficiency and academic language policy.

Conclusion
While the overview of language policy development in higher education has not dealt with practice, or what happens at individual institutions, the repetition of the same themes over many years leads one to suspect what Kaschula (2011) in his inaugural address calls the 'Forked Tongue of Multilingualism – policy is in place while implementation remains a challenge'. One could extend this metaphor by applying it to institutional language policy and academic literacy, both of which are approached in a similarly ambivalent manner. Notably, the documented shift in language policy towards institutionalising the Sintu languages as languages of academia does not really correspond to measures regarding academic literacy (especially when restricted by moving to 'English literacy'). Essentially the situation emphasises what Van der Walt (2004: 147) correctly identifies as the need to integrate language policy and planning in the educational landscape, i.e. integrating institutional language policy with other aspects of language-in-education planning (e.g. access policy, curriculum policy, assessment policy, etc.). Through such integration, she argues, the institution would be including language requirements in support of learning in 'a more effective way', which relates to the principle of harmonic alignment between the different policies stated in the introduction to this chapter. Returning to the two central language policy and planning challenges of the 2008 Soudien report on transformation at universities that were
raised in the introduction, namely to include the Sintu languages as institutional languages as well as academic languages, our concluding question is whether meta-policy initiatives have responded appropriately and whether institutions themselves have followed suit. Although the latter has not been discussed in any detail, the overview of academic literacy policy indicates that the establishment of an academic language development movement offers proof of what institutions have done in this regard. However, our analysis has revealed that institutional meta-policy has not managed to address convincingly the challenge of institutionalising the Sintu languages as academic languages. The latest version of the LPHE addresses this shortcoming, but, as noted, in a rather incoherent manner, suggesting that the case of the so-called disjunctive institutional language policy at Nelson Mandela University is probably the norm rather than the exception. However, one would require a more in-depth study of language policy development at different universities in order to reach a final conclusion on the matter. Until such a comprehensive study has been undertaken, one provisionally has to conclude that the quest for harmony between institutional and academic language policy remains just that. It should be disturbing to responsible applied linguists when the largest portion of the South African student population is being disempowered on a daily basis as they are 'largely not [adequately – own insertion] prepared to cope with the typical academic literacy demands of academic study' (Centre for Educational Testing for Access and Placement, 2017: 97). By striving towards a responsible language policy design capable of establishing the kind of harmony that is required, applied linguists can hopefully help more of these students to perform optimally.

Notes

(1) A reworked version of a paper presented as part of symposia on Assessing the academic literacy of university students through post-admission assessments at SAALT 2017 in Grahamstown and a paper delivered at the University of the Free State/University of Namur Workshop III, 'Managing language diversity in educational institutions: Language preservation and empowerment within multilingual contexts', Clarens, South Africa, 11 May 2018.
(2) This work is based on research supported by the National Research Foundation. Any opinion, finding and conclusion or recommendation expressed in this material is that of the author(s) and the NRF does not accept any liability in this regard.
(3) Named after the chairman of the Ministerial Committee that investigated transformation and social cohesion and the elimination of discrimination in public higher education institutions in South Africa (Ministerial Committee, 2008).
References

Abongdia, J.-F.A. (2015) The impact of a monolingual medium of instruction in a multilingual university in South Africa. International Journal of Educational Sciences 8 (3), 473–483.
Arkoudis, S. (2013) English language standards and claims of soft marking. In S. Marginson (ed.) Tertiary Education Policy in Australia (pp. 123–129). Melbourne: Centre for the Study of Higher Education, University of Melbourne.
Baker, S. and Irwin, E. (2015) A National Audit of Academic Literacies Provision in Enabling Courses in Australian Higher Education (HE) (Report compiled for the Association of Academic Language & Learning). Newcastle: English Language and Foundation Studies Centre & Centre for Excellence for Equity in Higher Education, University of Newcastle, Australia.
Boughey, C. (2010) Academic Development for Improved Efficiency in the Higher Education and Training System in South Africa. Midrand: Development Bank of Southern Africa.
Boughey, C. (2013) What are we thinking of? A critical overview of approaches to developing academic literacy in South African higher education. Journal for Language Teaching 47 (2), 25–42.
Centre for Educational Testing for Access and Placement (CETAP) (2017) NBTP National Report: 2017 Intake Cycle. Cape Town: National Benchmark Tests Project, Centre for Higher Education Development.
Centre for Higher Education Development (CHED) (2018) CHED units. See http://www.ched.uct.ac.za/ched/chedunits (accessed May 2019).
Coleman, J. (2006) English-medium teaching in European higher education. Language Teaching 39 (1), 1–14.
Cooper, R. (1989) Language Planning and Social Change. Cambridge: Cambridge University Press.
Council on Higher Education (CHE) (2001) Language Policy Framework for South African Higher Education. Pretoria: Council on Higher Education.
Council on Higher Education (CHE) (2004) Higher Education Quality Committee Founding Document (2nd edn). Pretoria: Council on Higher Education.
Council on Higher Education (CHE) (2010) Access and throughput in South African higher education: Three case studies. CHE Monitor 9 March 2010. Pretoria: Council on Higher Education.
Council on Higher Education (CHE) (2014) Responses to the CHE Task Team's Proposal for Undergraduate Curriculum Reform. Pretoria: Council on Higher Education.
Cummins, J. (2000) Language, Power, and Pedagogy: Bilingual Children in the Crossfire. Clevedon: Multilingual Matters.
Department of Higher Education and Training (DHET) (2010) Report on the Stakeholder Summit on Higher Education Transformation: Called by the Minister of Higher Education and Training, Dr Blade Nzimande, 22–23 April 2010. See http://www.dhet.gov.za/summit/Docs/General/he_transformation_summit_report.pdf (accessed May 2019).
Department of Higher Education and Training (DHET) (2011) Report Commissioned by the Minister of Higher Education and Training for the Charter for Humanities and Social Sciences: Final Report 30 June 2011. Pretoria: Department of Higher Education and Training.
Department of Higher Education and Training (DHET) (2012a) Green Paper for Post-school Education and Training. Pretoria: Department of Higher Education and Training.
Department of Higher Education and Training (DHET) (2012b) Ministerial advisory panel on the development of African languages in higher education. Government Notice No. 103, Government Gazette No. 35028, 10 February 2012. Pretoria: Government Printer.
Department of Higher Education and Training (DHET) (2013a) White Paper on Post-Secondary School Education and Training: Building an Expanded, Effective and Integrated Post-school System. As approved by Cabinet on 20 November 2013. Pretoria: Department of Higher Education and Training.
Department of Higher Education and Training (DHET) (2013b) Public Finance Management Act (1 of 1999). Pretoria: Government Printer.
Department of Higher Education and Training (DHET) (2015a) Report on the Use of African Languages as Mediums of Instruction in Higher Education. Pretoria: Department of Higher Education and Training.
Department of Higher Education and Training (DHET) (2015b) Second National Higher Education Transformation Summit, International Convention Centre, Durban, KwaZulu-Natal, 15–17 October 2015 (Programme). See http://www.dhet.gov.za/summit/Docs/151009_Summit%20Programme_Final.pdf (accessed June 2019).
Department of Higher Education and Training (DHET) (2015c) The 2015 Durban Statement on transformation in higher education, 17 October 2015. See http://www.dhet.gov.za/summit/Docs/2015Docs/2015%20Durban%20HE%20Transformation%20Summit%20Statement.pdf (accessed June 2019).
Department of Higher Education and Training (DHET) (2015d) The transformation of South African higher education. Concept paper prepared for the second national Higher Education Transformation Summit, 2015. See http://www.dhet.gov.za/summit/Docs/2015Docs/Annex%208_TOC_Transformation%20of%20SA%20HE.pdf (accessed May 2016).
Department of Higher Education and Training (DHET) (2017) Draft Language Policy for Higher Education. Pretoria: Department of Higher Education and Training.
du Plessis, T. (2017) Language policy evaluation and review at the University of the Free State. Language Matters 48 (3), 1–21.
Fenton-Smith, B. and Gurney, L. (2016) Actors and agency in academic language policy and planning. Current Issues in Language Planning 17 (1), 72–87.
Fenton-Smith, B., Humphreys, P., Walkinshaw, I., Michael, R. and Lobo, A. (2017) Implementing a university-wide credit-bearing English language enhancement programme: Issues emerging from practice. Studies in Higher Education 42 (3), 463–479.
Gerwel, G. (2002) Report to the Minister of Education AK Asmal by the Informal Committee Convened to Advise on the Position of Afrikaans in the University System. Pretoria: Ministry of Education.
Kaplan, R.B. and Baldauf Jr., R.B. (1997) Language Planning from Practice to Theory. Clevedon: Multilingual Matters.
Kaplan, R.B. and Baldauf Jr., R.B. (2003) Language and Language-in-education Planning in the Pacific Basin. Dordrecht: Kluwer Academic Publishers.
Kaschula, R. (2011) Challenging the forked tongue of multilingualism: Scholarship in African languages at SA universities with specific reference to Rhodes (Professor Russell H. Kaschula, inaugural address). See https://www.ru.ac.za/media/rhodesuniversity/content/equityinstitutionalculture/documents/Challenging_the_forked_tongue_of_multilingualism._Scholarship_in_African_languages_at_SA_Universities_with_specific_reference_to_Rhodes..pdf (accessed July 2019).
Knott, A. (2015) An analysis of a trilingual language policy's dominant skills discourse, theory of language(s) and teaching approaches for university policy. South African Journal of Higher Education 29 (6), 145–166.
Laga Ramoupi, N.L. (2014) African languages policy in the education of South Africa: 20 years of freedom or subjugation? Journal of Higher Education in Africa/Revue de l'enseignement supérieur en Afrique 12 (2), 53–93.
Lauridsen, K.M. (2013) Higher Education Language Policy: Report of the CEL-ELC Working Group. Brussels: Conseil Européen pour les Langues/European Language Council.
Liddicoat, A.J. (2007a) Introduction: Literacy and language planning. In A.J. Liddicoat (ed.) Language Planning and Policy: Issues in Language Planning and Literacy (pp. 1–12). Clevedon: Multilingual Matters.
Liddicoat, A.J. (2007b) Language planning for literacy: Issues and implications. In A.J. Liddicoat (ed.) Language Planning and Policy: Issues in Language Planning (pp. 13–29). Clevedon: Multilingual Matters.
Institutional Language Policy and Academic Literacy in South African Higher Education 21
Liddicoat, A.J. (2016) Language planning in universities: Teaching, research and administration. Current Issues in Language Planning 17 (3–4), 231–241. Madiba, M. (2010) Towards multilingual higher education in South Africa: The University of Cape Town’s experience. The Language Learning Journal 38 (3), 327–346. Maseko, P. (2014) Multilingualism at work in South African higher education: From policy to practice. In L. Hibbert and C. van der Walt (eds) Multilingual Universities in South Africa: Reflecting Society in Higher Education (pp. 28–45). Bristol: Multilingual Matters. Mazak, C. and Carroll, K.S. (2016) Translanguaging in Higher Education: Beyond Monolingual Ideologies. Bristol: Multilingual Matters. Ministerial Committee (2003) The Development of Indigenous African Languages as Mediums of Instruction in Higher Education (Report compiled by the Ministerial Committee appointed by the Ministry of Education in September 2003). Pretoria: Department of Education. Ministerial Committee (2008) Report of the Ministerial Committee on Transformation and Social Cohesion and the Elimination of Discrimination in Public Higher Education Institutions. (30 November 2008, final report). Pretoria: Department of Education. Ministry of Education (MoE) (2001) National Plan for Higher Education. Pretoria: Department of Education. Ministry of Education (MoE) (2002) Language Policy for Higher Education. Pretoria: Department of Education. Patterson, R. and Weideman, A. (2013) The typicality of academic discourse and its relevance for constructs of academic literacy. Journal for Language Teaching 47 (1), 107–123. Scott, I., Yeld, N. and Hendry, J. (2007) A case for improving teaching and learning in South African higher education. Research paper prepared for the Council on Higher Education. Higher Education Monitor No. 6, October 2007. Pretoria: Council on Higher Education. Shay, S., Wolff, K. and Clarence-Fincham, J. (2016) New Generation Extended Curriculum Programmes: Report to the DHET. Cape Town: University of Cape Town. South Africa (1995) South African Qualifications Authority Act (58 of 1995). Pretoria: Government Printer. South Africa (1997) Higher Education Act (101 of 1997) Pretoria: Government Printer. Task Team on Undergraduate Curriculum Structure. (2013) A Proposal for Undergraduate Curriculum Reform in South Africa: The case for a Flexible Curriculum Structure (Discussion document, August 2013). Pretoria: Council on Higher Education. Turner, N. and Wildsmith-Cromarty, R. (2014) Challenges to the implementation of bilingual/multilingual language policies at tertiary institutions in South Africa (1995– 2012). Language Matters 45 (3), 295–312. Universities Australia. (2015) Keep it clever. Policy statement 2016. See https:// www.universitiesaustralia.edu.au/news/policy-papers/Keep-it-Clever--PolicyStatement-2016#.WqEcdslLdPY (accessed July 2019). Van der Walt, C. (2004) The challenge of multilingualism: in response to the language policy for higher education. South African Journal of Higher Education 18 (1), 140–152. Weideman, A. (2017) Responsible Design in Applied Linguistics: Theory and Practice. Cham, Switzerland: Springer. Weideman, A. (2021) A skills-neutral approach to academic literacy assessment. [In this volume].
2 A Skills-Neutral Approach to Academic Literacy Assessment

Albert Weideman
The Issue of Skills-based Design Versus Skills Neutrality
When we design language interventions, we seldom submit to further critical scrutiny what we consider to be the essential components of such interventions. More often than not, our design of these interventions may be based on an unexamined assumption of what the essential elements must be, or, if we do care to scrutinise that assumption, our design may still turn out to be guided by a willingness to compromise with the traditional. Both when we create language curricula or courses, and when we devise language assessments, our default is to assume that they have or must have a conventional format. That means, as a rule, that our language course or test will either be demarcated as a 'speaking', 'listening', 'reading' or 'writing' test, or will be filled with tasks relating to these customary 'skills'. If we do pause for a moment to examine the skills so identified for our language intervention designs, we are soon reassured by the practical examples we encounter of language tests, some quite large and influential, that continue to label tests thus. Whatever merits these practical examples of what language tests look like may otherwise have, their conventional design reassures us that accommodating traditional test designs is not only acceptable, but standard practice. Whatever further misgivings we may momentarily experience are pushed aside by the globally influential rating scales of language ability, which continue to promote the division of language into four different skills, insisting as they do that they should give us defined bands of performance on each of these. Why should we bother to examine critically assumptions that are so pervasive, so intuitively obvious, and apparently so easily identifiable?

In this chapter, I pick up on an issue that has been touched on several times in passing, but that has still, as far as I have been able to determine, not been properly and adequately attended to. It concerns
the question, seldom asked, whether 'to think in terms of "skills"' is at all relevant. The issue was formulated as follows quite some time ago by Bachman and Palmer (1996: 75ff), but one has yet to see it being properly foregrounded in discussions of test design:

    We would thus not consider language skills to be part of language ability at all, but to be the contextualised realisation of the ability to use language in the performance of specific language use tasks. We would … argue that it is not useful to think in terms of 'skills', but to think in terms of specific activities or tasks in which language is used purposefully.
Though one might sense the importance of this observation, one may initially understand it merely as the logical outcome of the shift in language assessment to a communicative view on language, without heeding the true measure of its implications. Surely, it may be argued, in line with the then orthodoxy in language course and test design some 20 years ago, one would expect nothing less from two authoritative practitioners than an endorsement of a number of the main tenets of the communicative approach? What we then forget is that our profession, and the designs language test designers propose, are driven much more strongly by continuity with the past than by innovation (Weideman, 2014: 183ff). The default is not renewal, but retention of the traditional.

Nowhere is this continuity with the past better illustrated, and then ironically so, than in the supposed innovations associated with the beginning of applied linguistics, and its main outcome: the audiolingual method. Considered from the angle of the four 'skills', the development of audiolingualism constitutes not so much a break with the past as a confirmation of their traditional importance. The predecessors of the audiolingual 'revolution' reach back into the 19th century. Its forerunners are evident there in the emphases of the grammar translation method on reading and composition, while in the 20th century one can see its harbingers in the conversion among language teachers to the direct method, with its emphasis on listening and speaking. These two earlier methods, the grammar translation and the direct method, had already identified the two pairs of important language 'skills' – reading and writing ('composition'), listening and speaking – that shaped their 'revolutionary' audiolingual successor. In an important sense, the reification of these four skills in the grammar translation and direct methods therefore merely received a resounding affirmation in the audiolingual method. The only further addition to the employment of the four skills in this case was that the behaviourist and structuralist starting points of audiolingualism, combined with the fieldwork techniques of structuralist-inspired investigations of native American languages, required the teaching of the skills to be done in
a strict sequence. That strict sequence was essentially the same as the procedure for recording a new, strange language in the 'field': listening comes first, then speaking, before one attempts to deal with reading and its basis in orthography.

This reaffirmation of the conventional, as well as the continuity with the past that it endorses, has consequences for language assessment design. When audiolingualism gave way to communicative language teaching, the assessment of the four 'skills' remained largely intact, despite claims such as those by Bachman and Palmer (1996) that were referred to above, and despite attempts at innovation and relevance in test and course design. In fact, the 'communicative revolution', while widely acknowledged in policy and curricula, and apparently enthusiastically endorsed by practitioners, has not had as wide an impact on language intervention designs as might be inferred from its official endorsement (Karavas-Doukas, 1996, 1998; Weideman et al., 2003). Indeed, its widespread misinterpretation and lack of actual implementation not only in Europe, but also in Africa (Heugh, 2013) as well as in Asia (Hong, 2013) are significant. Its lack of adoption is illustrated both by its subsequent fragmentation into task-based and other varieties of the approach, and, more recently, by an acknowledgement of its lack of widespread implementation, with renewed pleas being issued for 'communication-oriented language teaching' (Littlewood, 2014).

The full significance of Bachman and Palmer's (1996) claim that language skills are not really part of language ability becomes a little clearer, furthermore, when one considers the conclusion that Kumaravadivelu (2003: 226) reaches, quoting a number of other observations as well as empirical studies: 'Skill separation is, in fact, a remnant of a bygone era and has very little empirical or experiential justification'. It is interesting to note, too, that right at the outset, proponents of the then new communicative approach to language teaching took a much broader view of language mastery and the media in and through which language communication would take place (Littlewood, 1981: 82–84). In addition to an emphatic recognition of a wider range of communicative media than in conventional language teaching methods, there was renewed attention in early communicative language teaching to the language situations, to the expectable topics one might encounter in those situations and, most importantly, to the functions of language and their different realisations in grammatical form, as these are experienced in real-life lingual interaction.

So while its widespread acceptance has not yet materialised, the case for an approach other than a skills-based one is certainly not new. As one reviewer has pointed out, there are indeed in the international commercial language test arena examples of several tests that have either attempted, or still attempt, to 'integrate' language skills in their tasks. Moreover, there is renewed interest, certainly, in 'reading-into-writing' assessments
(Chan, 2018). The question, however, is: can one disintegrate them in the first instance? It is more difficult to isolate the skills than one would expect. Moreover, if they are integrated, why then are the results still studiously reported in terms of listening, speaking, reading and writing?

It is the purpose of this contribution not to make the case anew, but rather to show how gains can be made in the design of one kind of language intervention, tests of language ability, when we take a skills-neutral view, or what I prefer to call a disclosed view of language. The term 'skills-neutral' may therefore not be the best description, though it is shorter than 'a disclosed view of language'. That view of language, which underlies Bachman and Palmer's (1996) explanations, is inspired by what may broadly be termed sociolinguistic developments (Weideman, 2009a). Though Skehan (1988) had introduced it much earlier, it is a view that was specifically related by Yeld (2001) to Bachman and Palmer's (1996) explanation, and subsequently to its employment in the kinds of academic literacy tests that are being referred to here. The same argument for what Bachman and Palmer (1996) called target language use (TLU) domains is made not only by this early work of Yeld (2001), but also by Patterson and Weideman (2013a). It is true, however, that this disclosed view (see again Weideman, 2009a) of what academic language is has been further enriched by notions of cognitive processing (Cliff, 2014, 2015), but its formulation has, like the early ones, remained a functional view of language ('What do we do with language in academic settings?'), one that stresses the purposes for which academic language is used. So while there are many possible labels for this approach, I shall focus in this chapter specifically on post-admission assessments of academic literacy that adopt a skills-neutral, functional view of language ability, a view that anticipates the use of academic language in interaction with others and for the sake of academic communication in a potential variety of media.

Using a model that describes test design as a process with a number of distinct phases and sub-phases, the rest of this chapter begins by identifying, against this background, those phases of such design in which misunderstanding and misinterpretation may arise. It will be demonstrated that while an early question for test designers might be how a definition of academic literacy may be adequately conceptualised, in order to ensure that the design will be theoretically defensible, or regarded as a valid measure of academic literacy, that is not the first hurdle that a skills-neutral design intention will have to overcome. The definition of the test construct, as will be shown, can subsequently be operationalised in the development of appropriate assessment tasks. Once the test has been constructed and used, it will generate results. To support the argument that the skills-neutral design of these assessments has benefits for its use, empirical data will be presented to demonstrate the consistency or reliability of
the results obtained, and to show how empirical measures may further contribute to the responsible interpretation of test results, as well as to the utility of the tests in respect of diagnostic information. The chapter will conclude by showing how the design principles of reliability, theoretical defensibility, interpretability and usefulness relate, with others, to a theory of applied linguistics.

The Phases of Language Test Development, Their Relation to and Their Relevance for Design Choices
At which stage of the language assessment design process does the choice between a skills-based and a skills-neutral test instrument first occur? And at what other stage or stages may it again become a consideration? There are several models of the process of test design. Read (2015: 176–177) discusses a five-stage process, starting with planning, which is followed by design, operationalisation, trialling and use. Another prominent model is that of Fulcher (2010: 94), who describes test design as a cycle with eight phases that may repeat. While Fulcher's description is valuable and insightful, the main objection to it is that design decisions are often taken before what is identified in this model as the first step in the cycle. What is more, in actual examples of test design (in contrast to the theoretical conceptualisation of how it can or should be done), test designers already imagine a design and may well proceed to develop tasks before they have full clarity about the construct. To illustrate how early influential design decisions are taken about what a language test may look like, I shall therefore rather employ Schuurman's (1972: 404) model of a three-stage design process for technical artefacts, and for the purposes of this chapter add to that another two phases of test design (Weideman, 2009b: 244–245). In that case we may distinguish at least five phases in the process of language assessment design, which in turn, as will become clear in the exposition below, may be constituted by a number of sub-stages. In addition, the model sees the last two or three phases as potentially iterative. Though it gives a broad-brush rendering of the process of developing a language test, it comes closer, in my opinion, to what actually happens during that process, an observation that has now twice been confirmed for me in test design and development initiatives that I have been involved with as far afield as Australia and Singapore. The claim is therefore that the model presented here may well have much wider applicability than merely in relation to the tests used in this instance as illustration; it claims to provide a better and more realistic explanation and understanding of the design process. Before a discussion of the phase(s) at which the choice between a skills-based as opposed to a skills-neutral test is relevant, Figure 2.1 gives a diagrammatic representation of the five phases (adapted from Weideman, 2009b: 244):
Figure 2.1 Five phases of applied linguistic designs
Phase 1
Language problems are often institution-specific, or sector-specific. In the first phase, there is growing awareness of a language problem, in particular by administrative and executive decision-makers within an institution or organisation. That awareness leads to the identification and eventual acknowledgement of the problem as a language problem, and, as further sub-stages, an articulation by the institutional authorities that it needs to be solved, followed by an official decision to allocate resources to the solution desired. In a sophisticated institutional environment, the problem is then presented as an applied linguistic one. Not all institutional environments have such an approach, however. It is important to note that this phase of language test design still has nothing 'scientific' about it. It is a practical, organisational problem that can potentially be subjected to further, more precise identification once applied linguistic attention is devoted to it, but it may in fact also bypass such professional attention. In that case the solution presented will almost certainly be entirely conventional, and most likely skills-based.

When we look at the case of the academic literacy tests that will primarily be referred to in this chapter, the Test of Academic Literacy Levels (TALL) and its associated counterparts in other languages and
at other (e.g. postgraduate) levels, we note that the problem was indeed first identified by university administrators and authorities. What specifically gave rise to its identification was the concern that, in a period of increasing enrolments ('massification') in higher education, some students who were being given access to university education appeared to be unable to meet the language demands of academic study, a concern that was backed by substantial or at least pervasive anecdotal evidence. As a result, these students were presumed to be at risk in terms of language. The problem was thought at that stage to be limited to those students who have to receive instruction through a language that for them is an additional and not their first language. As it turned out, the scope of the problem was wider, and involved a good number of first language users as well. Being academically literate has been likened to the learning of another language (for an overview, see Patterson & Weideman, 2013a), so it should come as no surprise that its strangeness may confound even first language users who are already competent in using 'their' language in other sectors or spheres, but may not have developed their ability to handle academic discourse, the language of the particular sphere of study, to the level required.

What is noteworthy is that already at this stage the identification of the problem, and the way it is articulated, may prejudice the solution. If the problem is couched in familiar but imprecise academic rhetoric ('These students can't write a coherent sentence, let alone a paragraph'), it may be identified as merely a 'writing' problem. That kind of problem formulation suggests that if an intervention can be designed to improve students' writing, the problem will be solved. There is no critical engagement with what 'writing' entails or whether it is helpful to think of academic writing as writing sentences and paragraphs. There is little consideration of what precedes 'writing' and what makes it possible, functionally, cognitively and analytically. Worse, the problem may be couched in pseudo-academic language that views language problems as grammar problems ('These students don't know the first thing about grammar'). In that instance 'grammar' is employed not in the possibly more sophisticated or descriptive sense in which syntacticians would use it, but as a set of prescriptive and unchanging authoritative rules that have unfortunately not been adequately mastered. Such formulations of the problem already anticipate – at least in the eyes of those who make them – the intuitive and obvious solution.

If the solution is somewhat more fortuitously seen as one that must be proposed for a language development problem, the decision may be to design only a language development course. Or there might be a decision only to assess language ability, without any prospect of language development, or perhaps assessment combined with an inappropriate existing language course. Another variation, and one actually experienced, is that the language test may be inappropriate. There is an apparently endless variety to the
misconception of the solution to the problem, or the utilisation of an inappropriate measurement or intervention. Though a skills-based solution may still be the default, such misconception may indeed also go beyond that. What is more, as has been remarked above, the problem may in the unsophisticated case not even be referred for applied linguistic scrutiny. In the case of TALL, the development of a more responsible assessment happened only after the inadequacy and indefensibility of the initial, commercially acquired, skills-based language test had become evident (cf. Van Dyk & Weideman, 2004). What was more fortunate in that case, however, was that there was nothing that prevented harmony from being achieved among the institutional language policy (that required the assessment to be administered), the language development course (that had been left to the design of professional applied linguists) and, eventually, the assessment of language ability in a theoretically defensible and valid way. However, if in the identification of the problem a skills-based alternative had been implied, this sought-after alignment of language policy, adequate language measurement and effective language development might not have been achieved: it could in such a case have ended up in the establishment of a writing centre as the cure-all, or any number of skills-based alternatives.

Phase 2
In the second phase matters are taken further; decisions made in phase one begin to be implemented. If one assumes that there is the desirable institutional alignment of language policy (students are obliged to take a test), a need for language assessment that estimates the language ability of the incoming students, and an opportunity for language development (a language course that utilises the results of the test, and aims to make language development possible), a decision to develop or use a responsibly designed test can be further implemented. In that second stage of language assessment design, we find a first attempt by those tasked with the design of the measurement instrument to synthesise two dimensions, their technical creativity and imagination on the one hand, and their existing theoretical knowledge of how the problem might be approached on the other. However, if the designers are not trained as applied linguists, or even if they are applied linguists but their training lacks critical insight, there is once again a risk that they will choose the conventional route: design (or purchase commercially) a grammar test, or a ‘listening’ test, or a ‘reading’ or ‘writing’ test, or, if undertaken more ambitiously, combinations of these. For the tests used as illustration in this contribution, this initial design work brought together expertise in language test and language course development, in order to achieve an alignment not only
with policy, but also between the measurement of the ability and its further development: the sought-after harmony between language teaching and language learning. The language intervention was already based on a communicative, functional and interactive view of language (for a description of the course, see Weideman, 2003), a view that provides the theoretical justification for the first edition of Weideman (2007). So the initial choice of a skills-based test was not warranted. But had the view of language taken by the course designers remained skills-based, the choice would certainly have been different.

Phase 3
The third stage of language assessment design involves the further development of the initial (preparatory) formulation of an imaginative solution to the problem. In these preparatory plans, the technical fantasy and imagination of the test designer achieve even more prominence, indeed to the extent that they hold sway: they lead, and may at this stage override any theoretical or analytical corrections. The inconclusive nature of this stage of the design is further underlined by its being characterised by a gathering of information on what resources would be available to administer the test. If there are sufficient financial resources to purchase a commercially designed (skills-based) test, administrators may take this route, and the remainder of the process will be entrusted to others. Let us assume, however, that financial and other considerations indicate the need for an institution-specific or sector-specific assessment. Then other logistical issues may come into play: if, for example, the imagined design is for an open-ended format, requiring many markers, the administration of the test would require considerable resources, in particular human resources of some sophistication and competence. Should those resources not be available, the technical fantasy would be aimed at imagining a test in which closed questions, in multiple choice format, can be designed, since then the answers can be machine-scored, and fewer scarce resources would be required. Whatever choice of format is considered, the contemplated design of the test in this preparatory phase is likely to be influenced by a serious weighing of options, in particular a consideration of the technical means available to achieve the technical ends envisaged. Also likely at this stage is experimentation with designed tasks and task types.

For the tests being referred to here, the test construct was still implicit and to some extent unformulated, and had not been deliberately aligned with the envisaged outcomes of the curriculum, which were the enhancement of language learning and the development of academic language. The articulation of the construct still needed to be done in a deliberate way, anticipating the next phase of the design. It follows, therefore, that also at this stage of the design process, there could have
been traditional, executive or administrative forces at work that would have frowned upon experimentation with new task types and formats, and in fact have encouraged a reversion to the conventional, skills-based model of assessment. What mainly sets this phase apart from the next, however, is the prominence here of the relation between technical means and technical ends: are there enough resources available, human and other, to make the imagined test, and, should there be severe resource constraints (e.g. financial constraints, limited time in which to administer the test, or even more limited time to score it), would it still achieve its purpose? If the technical means are not available to achieve the desired technical ends, there is a further design challenge: the applied linguistic plan might have to change.

Phase 4
In the fourth phase of the assessment design we find several sub-stages. First, the test developers seek to discover and articulate a theoretical justification for the preparatory and initial solution proposed, and to do so not only with deliberation, but preferably in a way that makes use of current views of language. Their intention is to articulate a theoretical rationale for the assessment that will provide a credible theoretical defence of what will be measured. It is significant and characteristic for this phase of test design that, unlike in the second and third phases, where a fusion was sought between the technical imagination of the test designers and their existing theoretical knowledge, at this stage new theoretical information, uncovered and gathered by the designers to attempt to improve on their design, may be sought and integrated into the planned solution, the measurement instrument. Since their initial formulation, the theoretical points of departure for the academic literacy tests being referred to here have been further discussed in a number of subsequent analyses (Patterson & Weideman, 2013a, 2013b; Weideman et al., 2016). The prominence given to defining the construct of these tests of academic literacy is fully warranted: without a clear perspective of what it is that gets tested, the test would have little validity. What is more, without a clear definition of what has been assessed, the interpretation of the results would become virtually meaningless. The construct of the tests used here for illustration will be discussed more fully below, but it should be noted that, after the initial implicit idea of what the construct might be, that characterises phase three, the deliberate articulation of the construct, in a way that echoes current views, provides the theoretical justification for the design. It is this reference to current theory that entitles the design to be considered an applied linguistic artefact. Without that, the assessment design is but a plan to test, that may or may not be aligned either with current theory
or with existing curricula for language development. In a word, it is more likely to be a test that is not theoretically defensible.

There is within this fourth phase a second sub-stage. While the definition of the ability that will be assessed is of supreme importance, this phase focuses the design still further, by operationalising the construct. That usually means that the conceptualisation and definition of the construct is taken a few steps further, usually by identifying the components that make up the ability. I return to this operationalisation below. It should be obvious that a sharp delineation of the construct can bring a corrective to the design that might entail substantial reconceptualisation of all prior designs, either of the test as a whole, or of certain task types. Should the construct derive from a functional instead of a skills-based perspective on language ability, and some of the initial designs show a clear bias towards the latter, the corrective in the design of the assessment will be that tasks assessing conventionally designated 'skills' would either have to be substantially modified, or be rejected as invalid measures. In the present case, by defining the construct of academic literacy functionally as well as in terms of analytical and cognitive processes, the test developers adopted a design of the test that was aligned not only with the language instruction that followed, but also with the theoretically most current thinking about what constitutes language learning.

Before describing the third sub-stage in this phase of test construction, one must note that the second sub-stage of this phase of the process is where the theoretically defined construct is first operationalised into components. Those components lead to the identification of task types that potentially tap into the components, and finally there follow detailed specifications for items, including their provisional weighting within those task types (making up the subtests). Once items and subtests so specified have been developed and scrutinised, they can be tested out (piloted). That kind of trial constitutes the third sub-stage in this phase of test construction. The piloting that is undertaken will yield empirical data and results for further consideration and refinement. That information informs the fifth phase of test design and construction.

Phase 5
The endpoint of the fifth and final phase of language test design is the finalisation of the blueprint of the test. Questions that need to be answered upon analysis of the results of the trials include those concerning the quality of both the test as a whole and of its subtests, as well as of the items that make up each task type. Parameters have to be set at test level for consistency or reliability, for subtest intercorrelations
and subtest-test correlations, as well as for item productivity (e.g. in respect of their discriminatory ability, their facility, their degree of 'fit' with the ability of the persons tested and their alignment with the construct). So, test designers use both their theoretically defensible construct and the empirical outcomes of the preliminary trials of their instrument to measure its adequacy and appropriateness as a designed solution to the problem identified in phase one. Should there be empirically identifiable, unanticipated flaws or weaknesses in the design, the test developers can revert to phase four. They may have to re-examine their rationale, rethink their adoption of certain task types or even have to go back to phase three, in order to re-apply their imagination to a redesigned solution. In fact, they may need to make unexpected compromises (Knoch & Elder, 2013: 62ff), since some definitions of language ability may be easier to operationalise than others. That redesigned solution will again have to be tested out. The aim of this modification and subsequent confirmation of the blueprint for the test is to achieve a technical solution that is more adequate, and more appropriate; see also below, the section on 'The Further Specification of a Construct of Academic Literacy'.

In at least four of the five phases of language test design set out above, it is noteworthy that the adoption of a theoretically defensible construct plays a supportive, though pivotal role. It does not lead the process: that role is taken by the technical imagination and fantasy of the test designers. Yet its adoption signals the detour into scientific analysis that completes its applied linguistic design intent. I turn in the next section to the articulation and operationalisation of the construct for the tests of academic literacy under discussion here.

The Construct of Academic Literacy and its Operationalisation
The construct of academic literacy that is relevant here has its own history; for a more complete discussion of its development, one may consult Van Dyk and Weideman (2004), as well as Patterson and Weideman (2013a, 2013b). There is further information in Cliff (2014, 2015). For the sake of conciseness, the most recent definition of academic literacy (Patterson & Weideman, 2013b) is presented below in a format that I have again modified functionally. That further modification emphasises the analytically stamped nature of academic language, and in addition acknowledges a range of processes that make up academic interaction through language. Those processes include the three critically important academic undertakings of gathering analytically qualified information, processing that information and producing new information. Conventionally – and confusingly, if the argument made in this chapter is valid – these processes may have been
conceived of or divided up as an intertwinement of listening, writing, reading and speaking. The important and noteworthy point is that they now also take account of the (thus far underemphasised) cognitive processing that is characteristic of academic language ability. This augmented definition thus foregrounds the sub-processes of language interaction in academic settings, some of which are attentive listening, careful note-taking, effective reading for further information (Scholtz, 2015), engaging in clarifying discussion with peers and lecturers, analytical categorisation, forming critical opinion, processing a sound argument by finding valid evidence for it, planning how to articulate that argument well, and eventually setting it down for the sake of sharing it with others either in live, face-to-face presentation, or in (usually asynchronous) written or other legible (electronic) or visually accessible format. The ability to use language for such purposes may therefore be functionally defined as an ability to:

• understand and use a range of academic vocabulary as well as content or discipline-specific vocabulary in context;
• interpret the use of metaphor and idiom in academic language and perceive connotation, word play and ambiguity;
• understand and use specialised or complex grammatical structures correctly, also texts with high lexical diversity, containing formal prestigious expressions and abstract/technical concepts;
• understand relations between different parts of a text, be aware of the logical development and organisation of an academic text via introductions to conclusions and know how to understand and eventually use language that serves to make the different parts of a text hang together;
• understand the communicative function of various ways of expression in academic language (such as defining, providing examples, inferring, extrapolating, arguing);
• interpret different kinds of text type (genre), and have a sensitivity for the meaning they convey, as well as the audience they are aimed at;
• interpret, use and produce information presented in graphic or visual format in order to think creatively: devise imaginative and original solutions, methods or ideas through brainstorming, mind-mapping, visualisation and association;
• distinguish between essential and non-essential information, fact and opinion, propositions and arguments, cause and effect, and classify, categorise and handle data that make comparisons;
• see sequence and order, and do simple numerical estimations and computations that express analytical information, that allow comparisons to be made and can be applied for the purposes of an argument;
• systematically analyse the use of theoretical paradigms, methods and arguments critically, both in respect of one's own research and that of others;
• interact with texts both in spoken discussion and by noting down relevant information during reading: discuss, question, agree/disagree, evaluate and investigate problems, analyse;
• make meaning of an academic text beyond the level of the sentence; link texts, synthesise and integrate information from a multiplicity of sources with one's own knowledge in order to build new assertions, draw logical conclusions from texts, with a view finally to producing new texts, with an understanding of academic integrity and the risks of plagiarism;
• know what counts as evidence for an argument, extrapolate from information by making inferences and apply the information or its implications to other cases than the one at hand;
• interpret and adapt one's reading/writing for an analytical or argumentative purpose and in light of one's own experience and insight, in order to produce new academic texts that are authoritative yet appropriate for their intended audience.

One can operationalise the above components of the construct in any number of ways. The important point is that they give a functional rather than a skills-based articulation of what constitutes academic literacy. As long as it is understood that the further operationalisation given below is only one of several ways of further specifying what should make up a test that takes the above components into account, and that a different interpretation, or even a different selection of task types (subtests), may of course legitimately be considered or revisited, the measure of streamlining that it entails will be useful. The streamlined rendering of the construct below is aimed at facilitating its eventual accurate and acceptable, adequate and appropriate measurement. In the table (Table 2.1) below, the first column must be read as 'The ability to understand, or interpret, or have knowledge of…'; the second column provides a possible subtest or task type format that may be employed, developed and placed on trial when the pilot test is designed and developed and then experimented with. In the table, these subtests carry the same descriptive names that they usually have as titles of different task types in the range of academic literacy tests that are being used as illustration here. Their listing does not in the least imply that this is an exhaustive range of possible task types: it is at this level of specification that the technical imagination of the language test designer is highly productive, leading to the weighing of alternatives, and the selection of the most desirable. The '…' in the second column indeed serves to signal that more task types and subtests are possible.
Table 2.1 Operationalising the construct

Understand/interpret/have knowledge of | Task type/subtest
Vocabulary and metaphor | Academic vocabulary (one word); Academic vocabulary (two word); Text comprehension (in larger context); Text editing; Grammar and text relations (modified cloze); …
Complex grammar, and text relations | Grammar and text relations (cloze); Scrambled text/organisation of text; Text editing; Making academic arguments; …
Communicative function | Understanding text type and communicative function; Text comprehension; Text type/Register awareness; Grammar and text relations; Scrambled text/organisation of text; …
Text type, including visually presented information | Text type/Register awareness; Text comprehension; Interpreting graphic and visual information; Organising information visually; …
Essential/non-essential information, sequence and numerical distinctions, identifying relevant information and evidence | Text comprehension; Interpreting graphic and visual information; Making academic arguments; …
Employment and awareness of method | Understanding text type and communicative function; Text comprehension; Making academic arguments; …
Inference, extrapolation, synthesis of information and constructing an argument | Making academic arguments; Text comprehension; Scrambled text/organisation of text; Writing task; …
It is noteworthy, furthermore, that the streamlining in Table 2.1 has re-categorised some of the components, reducing them to seven main categories of ability. Such a regrouping and re-categorisation of components will always have to refer back to the original categories in order to produce a balanced and theoretically defensible assessment.

The Further Specification of a Construct of Academic Literacy
The re-categorised, streamlined version of the construct of academic literacy (left-hand column, Table 2.1) is not the end of the process of design. As has been remarked above, a test construct has to be translated by test designers into specifications that include, among other things, the determination of which task types and assessment formats will be used (Davidson & Lynch, 2002). That further process entails that any selection or combination of task types from the streamlined version above needs further specification. In particular, it will have to echo the specific purposes of the test. For example, in a recent test design by Drennan (2019) and others, the purpose was
to develop a test that would measure the level of preparedness of senior undergraduate and junior postgraduate students to write or present new academic information in a visual format. Such presentation would make that information accessible to others (fellow students, lecturers) in their academic interaction with them. Her research problem was to compare students' level of preparedness for academic presentation and writing before and after an intervention offered by the Write Site, an academic writing support centre. For this, she needed to have a reliable and adequate assessment that would measure that level of academic literacy before and after the intervention (see Drennan, this volume).

Using a skills-neutral construct that takes a disclosed and functional view of language, the design team defined the preparedness to present new academic information not only as an important component of academic literacy, but also as more than being prepared to write. The general construct of academic literacy outlined above was defined and made more particular in such a way that writing ability was certainly part of the eventual product of several academic processes, but it was augmented by the idea that the presentation of academic information could also be done in visual formats and in media that go beyond the written word. Thus an Assessment of Preparedness to Produce Multimodal Information (APPMI) was born. The intention with APPMI was not only to utilise the full construct outlined in the previous section, but to select those task types that would measure specifically the two processes that precede the presentation of academic information in face-to-face, visual or written format: the gathering of information, and the processing of that information. In order to make the selection, arguments were produced for which task types would be more prominent in pre-presentation preparation, so as to come up with a defensible range. After selection of potential task types or subtests, further arguments (which will not be given here) were made, in view of the particular test purpose, to give different weightings to the subtests. The draft and provisional design of APPMI is set out in Table 2.2.

APPMI is designed to be an assessment that for logistical reasons will be in multiple choice format; students should be able to complete its 86 items in 1½ hours. Its subtests (especially the longer ones, like text comprehension) will have to take cognisance not only of the components they primarily would test, but of all the components of the original construct. Without that as a background and condition, the validity – in the disclosed sense of the argument and evidence that are presented for the theoretical defensibility of what gets measured – will be undermined.
Table 2.2 Specifications for an assessment of preparedness to present multimodal information (APPMI)

Subtest | Number of items | Weighting/mark
Organisation of text | 5 | 5
Understanding academic vocabulary [two word format] | 6 | 12
Interpreting graphic and visual information | 8 | 8
Organising and categorising information visually | 8 | 8
Understanding text type and communicative function | 8 | 8
Text comprehension | 20 | 20
Making academic arguments/Building arguments | 8 | 16
Grammar and text relations | 16 | 16
Text editing | 7 | 7
Totals | 86 | 100
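Read as data, the blueprint in Table 2.2 is also something that can be checked mechanically. The sketch below is merely an illustration of that idea: a hypothetical rendering of the specifications above in Python, with the subtest names and figures taken from Table 2.2 and the helper names invented for the example; it is not part of the APPMI design process itself.

```python
# A hypothetical rendering of the Table 2.2 blueprint as data, with two sanity
# checks: that item and mark totals match the intended test, and what each
# subtest's weighting per item works out to.
from dataclasses import dataclass


@dataclass
class SubtestSpec:
    name: str
    items: int
    marks: int

    @property
    def marks_per_item(self) -> float:
        return self.marks / self.items


APPMI_BLUEPRINT = [
    SubtestSpec("Organisation of text", 5, 5),
    SubtestSpec("Understanding academic vocabulary (two-word format)", 6, 12),
    SubtestSpec("Interpreting graphic and visual information", 8, 8),
    SubtestSpec("Organising and categorising information visually", 8, 8),
    SubtestSpec("Understanding text type and communicative function", 8, 8),
    SubtestSpec("Text comprehension", 20, 20),
    SubtestSpec("Making academic arguments / Building arguments", 8, 16),
    SubtestSpec("Grammar and text relations", 16, 16),
    SubtestSpec("Text editing", 7, 7),
]


def check_blueprint(blueprint, expected_items=86, expected_marks=100):
    """Confirm that the subtest totals add up to the intended test length and mark allocation."""
    total_items = sum(s.items for s in blueprint)
    total_marks = sum(s.marks for s in blueprint)
    assert total_items == expected_items, f"items: {total_items} != {expected_items}"
    assert total_marks == expected_marks, f"marks: {total_marks} != {expected_marks}"
    for s in blueprint:
        # Double-weighted subtests (e.g. vocabulary, arguments) carry 2 marks per item.
        print(f"{s.name}: {s.items} items, {s.marks} marks ({s.marks_per_item:g} per item)")


if __name__ == "__main__":
    check_blueprint(APPMI_BLUEPRINT)
```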
There is a real design challenge in converting a functionally defined construct in several intervening steps into item specifications for a multiple choice format. Elsewhere, I have given examples of how this may be accomplished (Weideman, 2014, 2017a: 190–191); the point is that, given the technical imagination and creativity that lead test design, we may be able to achieve more with a multiple choice format than is initially apparent. There is no space here to give further examples, but one may also refer to the sample test on the Language Courses and Tests website (LCaT, 2020). While the discussion here has highlighted the closed-ended, multiple choice format, that format is of course not the only one that might be (or is) employed in tests of this nature. Several of the tests of academic literacy mentioned here also have sections that have deliberately been designed as open-ended format assessments (Keyser, 2017; Pot, 2013). The rationale for those designs is not the focus of the current discussion, however.

Beyond the specification of numbers of items, item format and task types above, there are of course even further specifications, relating to detailed and conventional requirements that items must fulfil. The level of specification already evident in Table 2.2 gives an example, however, of how, after a decision to adopt a specific definition of a construct, that construct can be further operationalised and specified. To mount an argument for the defence of the eventual design, there will therefore have to be reference to the 14 original components that defined the construct ('ability to master the analytically stamped language of academic discourse'). In building that defence, usually in the form of what is conventionally known as a validation argument, the analysis will also take cognisance of the difficulty of isolating certain components of academic literacy for the purposes of assessing, through a single subtest, the ability to handle them. Unlike test design in the era of skill separation, there is an awareness here of how mastery
of one component can be measured in a variety of ways (see Table 2.1), and of how a single subtest can in turn yield information on the mastery of several, if not the majority, of components of the construct (see Table 2.2). Every test design has a number of steps, and a retracing or revisiting of the last two or even the last three phases of test design confirms the blueprint for its further development and refinement.

Setting Parameters of Reliability and Other Empirical Conditions
The assessments of academic literacy being used as illustration here have over the past 15 years been used at six Southern African universities, besides on occasion at a number of others in the Netherlands, Belgium, Vietnam, Singapore and Australia. Their primary use has been as assessments of academic literacy, and many studies and analyses of them, reported in more than 70 scholarly articles and several MA dissertations and PhD theses, have been done (Network of Expertise in Language Assessment, 2019). Reliability remains a first requirement of test design. Table 2.3 gives the reliability indices for 12 different administrations over six years of versions of the Test of Academic Literacy Levels (TALL) to almost 32,000 candidates across three campuses of as many universities. The reliability measures, in this case Cronbach alpha, are as calculated by Iteman 3.6 and later versions (Guyer & Thompson, 2011: 23). Other tests of this kind, for example the Test of Academic Literacy for Postgraduate Students (TALPS), follow the same pattern, with alphas of 0.93 even for groups of candidates as small as 34.

Table 2.3 Reliability indices for TALL (2004–2009)

Year | University (campus) | Number of candidates | Reliability
2004 | NWU (Potchefstroom) | 124 | 0.92
2004 | Pretoria (main) | 3833 | 0.95
2005 | NWU (Potchefstroom) | 135 | 0.94
2005 | Pretoria (main) | 3310 | 0.93
2006 | NWU (Potchefstroom) | 143 | 0.93
2006 | Pretoria (main) | 3652 | 0.94
2006 | Stellenbosch | 2952 | 0.91
2007 | Pretoria (main) | 3905 | 0.94
2008 | NWU (Potchefstroom) | 140 | 0.94
2008 | Pretoria (main) | 4325 | 0.94
2008 | Stellenbosch | 4219 | 0.91
2009 | Pretoria (main) | 5191 | 0.94
Total and average | | 31,929 | 0.93
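By way of illustration only: the indices in Table 2.3, and the item-level parameters discussed in the paragraph that follows (facility, or P, and the point-biserial discrimination, rpbis), were produced with Iteman and TiaPlus, whose implementations are not reproduced here. The sketch below shows, under the assumption of a complete matrix of dichotomously scored (0/1) responses, how such classical statistics are conventionally computed; the simulated data are invented purely to make the snippet runnable.

```python
# Textbook-style computation of classical test statistics from a 0/1 response
# matrix (rows = candidates, columns = items). Not the Iteman/TiaPlus code.
import numpy as np


def cronbach_alpha(responses: np.ndarray) -> float:
    """Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)


def item_statistics(responses: np.ndarray):
    """Per item: facility (P, proportion correct) and the point-biserial
    correlation of the item score with the total score (rpbis)."""
    totals = responses.sum(axis=1)
    stats = []
    for i in range(responses.shape[1]):
        item = responses[:, i]
        stats.append((i + 1, item.mean(), np.corrcoef(item, totals)[0, 1]))
    return stats


if __name__ == "__main__":
    # Simulated administration: 500 candidates, 64 items, one underlying ability.
    rng = np.random.default_rng(0)
    ability = rng.normal(size=500)
    difficulty = rng.normal(size=64)
    prob_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
    data = (rng.random((500, 64)) < prob_correct).astype(int)

    print(f"alpha = {cronbach_alpha(data):.2f}")
    for item, p, rpbis in item_statistics(data)[:5]:
        # Parameters of the kind mentioned in the text: 0.15 < P < 0.84, rpbis > 0.15.
        flag = "" if (0.15 < p < 0.84 and rpbis > 0.15) else "  <- outside parameters"
        print(f"item {item:2d}: P = {p:.2f}, rpbis = {rpbis:.2f}{flag}")
```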
While at test level TALL has measured in a wholly satisfactory manner as regards its consistency, parameters should of course also be set, for example, at item level for discrimination value and for facility (P). The high reliability levels of these tests derive not only from the large numbers of candidates involved, but in part also from the parameters set for discrimination (usually an rpbis of higher than 0.15, where rpbis is the item point-biserial correlation of the choice of option with the total score – cf. Guyer & Thompson, 2011: 29–30), and for facility of each item (P values of above 0.15 and below 0.84 in this case).

However high the technically determined consistency of a test is, it is never perfectly reliable, and may still misclassify some candidates, which can result in unfair decisions when the test results are used to enforce a required language policy prescription. Some candidates whose results are in the vicinity of a cut-off point may have been negatively prejudiced by the imprecision, others positively. A calculation by a program such as TiaPlus (CITO, 2005) can inform us of how many potential misclassifications there may have been in a particular administration of the test. For example, in the 2011 administration of TALL, TiaPlus allowed us to identify the potential misclassifications using two measures of reliability, Cronbach alpha and the Greatest Lower Bound (GLB) (CITO, 2005: 17–18). In addition, the calculation takes into account the correlation of the test scores with a parallel test (the Rxx' case, below), and the correlation between the observed scores and the true score (the Rxt case in Table 2.4; see CITO, 2005: 18, 30). Table 2.4 also gives more information on the highly satisfactory average discrimination value (Rit; cf. CITO, 2005: 29) of the items making up the test, the standard deviation of the test and, most importantly, its two measures of test-level reliability, Coefficient Alpha (0.89) and the Greatest Lower Bound (GLB) of 0.92. The latter two measures are used for the further calculation of the number of potential misclassifications in the test:
Table 2.4 Potential misclassifications in administration of TALL (2011)

Number of persons in test | 5383
Number of items | 64
Average test score/P value | 62.4
Average Rit | 0.43
Standard error of measurement | 5.98
Standard deviation | 18.25
Coefficient alpha | 0.89
Greatest Lower Bound and Asymptotic GLB coefficient | 0.92

Misclassifications | Alpha based | GLB based
Rxx' case | 3.8% (204) | 2.7% (148)
Rxt case | 3.3% (178) | 2.4% (128)
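TiaPlus's own procedure for the estimates in Table 2.4 is not reproduced here. The snippet below merely illustrates the general logic behind such figures, under deliberately simplified assumptions (a normal error model, a single hypothetical cut-off score of 50, and a band of one standard error of measurement around it); the counts it produces will therefore not match the table, and the input values are simply those reported above.

```python
# Simplified illustration of how a reliability coefficient translates into a band of
# uncertainty around a cut-off score, within which candidates may be misclassified.
# This is NOT the TiaPlus (CITO) algorithm; cut_score and the z-band are assumptions.
import numpy as np


def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical SEM = SD * sqrt(1 - reliability)."""
    return sd * np.sqrt(1 - reliability)


def at_risk_of_misclassification(scores: np.ndarray, cut_score: float,
                                 sd: float, reliability: float, z: float = 1.0):
    """Boolean mask of candidates whose observed score lies within z SEMs of the cut-off."""
    sem = standard_error_of_measurement(sd, reliability)
    return np.abs(scores - cut_score) < z * sem


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical percentage scores, loosely matching the summary figures in Table 2.4.
    scores = rng.normal(loc=62.4, scale=18.25, size=5383).clip(0, 100)
    mask = at_risk_of_misclassification(scores, cut_score=50, sd=18.25, reliability=0.89)
    just_below = mask & (scores < 50)   # candidates one might offer a second-chance test
    print(f"SEM = {standard_error_of_measurement(18.25, 0.89):.2f}")
    print(f"candidates within one SEM of the cut-off: {mask.sum()} ({100 * mask.mean():.1f}%)")
    print(f"of whom below the cut-off: {just_below.sum()}")
```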
Academic literacy tests like TALL often have a heterogeneous test construct (see the discussion of Figure 2.2, below). As can be seen here, the reliability index more appropriate for use in tests with heterogeneous constructs, GLB, identifies fewer potential misclassifications in total than when the more conservative measure, Cronbach alpha, is used. Nonetheless, for the sake of fairness, test administrators could argue that at most 204 test takers may have been misclassified. Since we may assume that the chance of misclassification is equal in respect of having been advantageously or disadvantageously classified, they could then make a second chance test available to the first 102 candidates (or 1.9% of the test takers) below the cut-off point. Such a fairer treatment of candidates would have been much more difficult to institute if the test items were not in a format that allowed the relatively easy calculation of their discrimination, the overall reliability of the test and other supportive empirical information about its administration. The analyses back up the administrative decisions that may be taken to achieve fairness.

Another measure of consistency is expressed in the factor analysis of TALL 2006 (Figure 2.2, below), for all 7505 candidates at three different South African universities, which shows an expectable pattern – a reasonably flat oval for most items, with some outliers in two subtests – for a test with a heterogeneous construct such as this.

Figure 2.2 Factor analysis of TALL 2006

While this is an indication of relative consistency in a test, there are still other quantifiable measures that relate to fairness. All of the tests being used here as illustration are regularly scrutinised for Differential Item Functioning (DIF), especially to investigate whether certain items discriminate in respect of gender, or of those with a home language other than English (cf. Van der Slik & Weideman, 2010), or, by extension, in respect of race (where language is a proxy for that). The result of this scrutiny has been that there is very little to no DIF present in items, and, where present, it is comparable to international experience.
Several requirements of test quality, in particular the technical consistency of a test and its technical (in the sense of designed, or instrumental) fairness, along with the theoretical defensibility of the test construct, have been discussed above. Of course, as conventional validation studies have demonstrated, these aspects of test design are not unrelated. In fact, the validation of a test or range of tests is currently required to be done not piecemeal, with reference to one or a few kinds of validity, as before, but in a systematic fashion, by producing an integrated multiplicity of arguments for the adequacy of a test. These arguments are unified around a central argument of test quality, and more often than not its centrepiece is the theoretical defensibility of the test construct, or construct validity as it was called in earlier test justification. In this section, as well as in the concluding one, I turn to a discussion of how one may systematically carry out the validation of the tests being used for illustration here. The best examples of systematic and integrated arguments being made for validation derive from those done by Van der Walt and Steyn (2007) and Van Dyk (2010) of the Toets van Akademiese Geletterdheid (TAG, the Afrikaans equivalent of TALL). Van der Walt and Steyn set out their argument in 10 claims, inter alia about the test having consistency, illustrated with reference to reliability indices, as well as claims about construct validity, with reference to factor analyses of the test and its components, more or less in the way done above for other tests. Four further claims relate to the quantitative analyses of answers to a questionnaire administered after the test to test takers to determine their reception and experience of taking the test, as well as their expectations of subsequent requirements and actions that they would be exposed to as a result of the test. Another quantitatively expressed claim deals with the internal correlations of the different test sections satisfying specific criteria. Following Alderson et al. (2005) and other analysts, Van der Walt and Steyn (2007) set the desirable parameters for these inter-subtest correlation coefficients as fairly low, between 0.3 and 0.5, since each subtest is supposed to be testing a slightly different dimension or component of academic literacy. At the same time, one would wish for the correlation between the subtests and the total to be higher, possibly above 0.7. If this claim were part of the validation argument for the newly designed Afrikaans counterpart of the postgraduate test of academic literacy (TALPS), to be known as the Toets van Akademiese Geletterdheid vir Nagraadse Studente (TAGNaS) (Keyser, 2017), then one would need to examine these correlation coefficients to determine whether the test satisfies the parameters set. For example, the refined pilot of the first tier of the three-tier test, TAGNaS, administered to 271 test takers, had the values presented in Table 2.5.
Table 2.5 Subtest intercorrelations of refined pilot version of TAGNaS Test 1

Subtest                          Total test    Subtest 1    Subtest 2    Subtest 3    Subtest 4
1 Scrambled text 1               0.58
2 Scrambled text 2               0.58          0.41
3 Vocabulary (single)            0.69          0.31         0.35
4 Vocabulary (double)            0.46          0.22         0.17         0.28
5 Grammar and text relations     0.87          0.29         0.28         0.41         0.34
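A check of the kind reported in Table 2.5 is also easy to script. The sketch below is a minimal illustration under my own naming (it is not taken from Keyser or from Van der Walt and Steyn): it computes subtest totals from a scored item matrix, derives the subtest intercorrelations and the subtest-total correlations, and flags values that fall outside the parameters mentioned above.

```python
import numpy as np

def subtest_correlation_check(scores, subtest_slices,
                              inter_lo=0.3, inter_hi=0.5, total_min=0.7):
    """Check subtest intercorrelations against the parameters discussed in
    the text. 'subtest_slices' maps subtest names to column slices of a
    persons x items score matrix. Illustrative sketch only."""
    X = np.asarray(scores, dtype=float)
    names = list(subtest_slices)
    sub_totals = np.column_stack([X[:, s].sum(axis=1) for s in subtest_slices.values()])
    grand_total = X.sum(axis=1)
    flags = []
    for i, name in enumerate(names):
        # Correlation of each subtest with the total test score
        r_total = np.corrcoef(sub_totals[:, i], grand_total)[0, 1]
        if r_total < total_min:
            flags.append(f"{name}: correlation with total {r_total:.2f} below {total_min}")
        # Pairwise intercorrelations with the subtests already seen
        for j in range(i):
            r = np.corrcoef(sub_totals[:, i], sub_totals[:, j])[0, 1]
            if not (inter_lo <= r <= inter_hi):
                flags.append(f"{name} vs {names[j]}: intercorrelation {r:.2f} "
                             f"outside [{inter_lo}, {inter_hi}]")
    return flags
```

A stricter variant would correlate each subtest with the total minus that subtest, to avoid inflating the subtest-total correlations.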
This shorter, 38-item, 40-mark test therefore satisfies the parameters set for subtest correlations with the test as a whole in only two cases: subtests 3 and 5. Designers would have to attend to that in further refinements. The same applies to the two indications of there not being sufficient subtest intercorrelations between subtests 1 and 2 (Organisation of text) and subtest 4 (Vocabulary: double), since they are below 0.3. In my experience, however, for subtest intercorrelations in these kinds of academic literacy assessments, that low parameter may profitably be adjusted to 0.2 without other discernible losses in test quality. A final kind of claim suggested by Van der Walt and Steyn (2007) as part of the validation process concerns the degree of fit between the general ability of candidates and the general level of difficulty of test items. This fit is usually calculated using a Rasch analysis. In the subtest on ‘Interpretasie van grafiese en visuele inligting’ (Interpreting graphic and visual information) of the first pilot version of the second tier of TAGNaS (Test 2), for example, there is a demonstrable fit between person ability and item functioning. As Keyser (2017) points out, the analysis that she did employing Winsteps (Linacre, 2006) shows that there is a high degree of fit, with the item estimations spread from –2.3 logits (with item 1 as the lowest) to 2.5 logits (Item 23) in the variable map (Figure 2.3, below).

Figure 2.3 Person-item fit: Interpretasie van visuele inligting (interpretation of visual information), TAGNaS first pilot

As was indicated also in the TiaPlus and Iteman analyses, which are for the greater part based on Classical Test Theory, the problematic items (1, 4 and 23) are again identified as such by the Rasch analysis. They lie too far away on the graphic presentation of the measurement from the lower and upper second standard deviation (T), indicating a lack of fit. It is for the test designers to take a view on what the desirable parameters should be for person-item fit. If, for instance, the parameters of between –3 and +3 logits had been set as acceptable extremes (Van der Walt & Steyn, 2007, claim 2), all the items in this subtest would have been deemed acceptable. As it is, one might well as test developer consider tightening the parameters as suggested here, to require a person-item fit of between –2 and +2.
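For a quick, programmatic first look at item locations before a full Winsteps run, one might proceed roughly as in the sketch below, which uses the crude PROX (normal approximation) estimate of item difficulty in logits and flags items outside a chosen window. The function names and the default window of –2 to +2 logits are mine; a real analysis would of course rely on a dedicated Rasch program such as Winsteps.

```python
import numpy as np

def prox_item_difficulties(responses):
    """Very rough Rasch item difficulties (in logits) via the PROX
    normal approximation; a sketch only, not a substitute for a full
    Winsteps/JMLE analysis."""
    X = np.asarray(responses, dtype=float)          # persons x items, scored 0/1
    totals = X.sum(axis=1)
    # Persons with zero or perfect scores have no finite ability estimate.
    X = X[(totals > 0) & (totals < X.shape[1])]
    p = np.clip(X.mean(axis=0), 1e-6, 1 - 1e-6)     # proportion correct per item
    d = np.log((1 - p) / p)                         # log-odds of an incorrect answer
    d -= d.mean()                                   # centre item difficulties at 0 logits
    # PROX expansion factor, based on the spread of person abilities
    r = X.mean(axis=1)
    b = np.log(r / (1 - r))
    return d * np.sqrt(1 + np.var(b) / 2.89)

def outside_logit_window(difficulties, lower=-2.0, upper=2.0):
    """Return 1-based indices of items falling outside the chosen window."""
    return [i + 1 for i, v in enumerate(difficulties) if v < lower or v > upper]
```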
Other Rasch-based analyses, from the number of iterations required to achieve convergence to the calculations of infit mean square, also proved in this particular case, as in the case of all the other refined versions of the subtests of the TAGNaS first pilot, to be wholly within conventionally accepted parameters (Keyser, 2017). Having now demonstrated that there is a substantial amount of evidence, in fact a multiplicity of sets of evidence, that can be mustered for a validation argument to demonstrate that the test has the desirable quality or effect (called ‘validity’ by a previous orthodoxy; cf. Weideman, 2017b, for a more detailed discussion), one may well ask whether we have not now merely added one more requirement, technical validity, to the list already discussed above: technical consistency, fairness, theoretical defensibility of the technically stamped design to the extent that it can be shown to be based on a current construct, and so forth. What, apart from the argument-based discussion of various sets of evidence introduced as a new orthodoxy by Kane (1992), integrates the apparently still disparate requirements of reliability, validity, fairness and so forth? I turn to this in the final section below.

Principles of Test Design and a Theory of Applied Linguistics
Language tests may productively be conceived of as applied linguistic instruments. It can in fact be claimed, as was done in this chapter, that language tests constitute one of the three main artefacts within the domain of applied linguistic designs: the other two are language courses and language policies. It follows that, where implemented within institutions as designed solutions to large scale or pervasive language problems, there must ideally be alignment among them: the institutional language policy must not be at odds with arrangements for language assessment or for teaching and learning language. As was indicated above in the discussion of the various phases of test design, a disharmony or lack of alignment among institutional language policy, language assessment and the expectable language development intervention, a language course, can lead at best to design inefficiencies and at worst to contradictory and conflicting arrangements. That is not the point I wish to dwell on further here, however; for a further discussion of where such technical harmony has been attempted, and potentially achieved, I would refer those interested to the textbook for an undergraduate academic literacy course that has been designed exactly along the same lines as the relevant language tests discussed above (Weideman, 2003, 2007: xi–xii) and, for a broader discussion, to Sebolai and Huff (2015). On the specific issue of using the diagnostic information that such assessments yield, see Pot (2013) and Pot and Weideman (2015). If we see language assessments as designs or plans to solve a language problem, as applied linguistic instruments, then applied linguistic theory must further be able to inform us of the design requirements
they are subject to. The problem is that we have at present no coherent, let alone systematic, theory of applied linguistics (Weideman, 2017a), and there are even questions about the desirability of having one. Nonetheless, there are seminal distinctions that may begin to help us here. They are based, first, on the assumption that language interventions such as policies, assessments and courses are technically qualified artefacts. Second, they assume that the leading technical function of an instrument such as a language test coheres with other dimensions of our experience, and, third, that the connections between the technical and other aspects of experience may yield broad principles, that for their implementation would need further and insightful interpretation when applied to specific cases. If we focus then on the leading technical aspect of a language test, we may abstract away from that entity the nuclear moment of its qualifying function, the technical. We may then conclude that that abstracted mode, the technical, is characterised by the idea of design, of giving form to, shaping, planning, arranging, facilitating. There are, in the philosophical framework that I am employing here (Strauss, 2009), at least two further fundamental assumptions. The first further claim is that, by reflecting the other dimensions of experience, there are pivotal moments within the technical dimension that express these reflections or analogies. The second is that the different dimensions or aspects of experience can be identified as earlier and later dimensions or aspects of a time order (Strauss, 2009: 77). What is of importance in the current discussion is that such reflections of other dimensions of experience within the technical modality present themselves, when viewed from within this philosophical framework, as pivotal, foundational concepts and ideas. Each reflection within the technical mode of other dimensions of experience potentially harbours a normative moment, a requirement or condition to which factual designs – in our case, the language assessments that our technical fantasy and imagination can conceive of and give shape to – are subject. The requirements mentioned and discussed above all relate to such pivotal technical concepts and ideas. For example, the technical unity within a multiplicity of assessments that we find in a language test is expressed and evaluated quantitatively by such empirical measures as a factor analysis as in Figure 2.2 above. The concept of a technical unity within a multiplicity of technically conceived assessments relates the technical dimension of experience strongly to the numerical mode, where we originally encounter the notion of one and many (more than one). The various subtests that make up the test are parts not of an unrelated set of assessments, but of a whole, echoing in that instance the spatial analogy within the technical. What is more, a test has a describable range: it is designed and intended to measure only a limited language
capacity; in our case, the ability to handle academic discourse. That technically limited design intention clearly also echoes the link between the technical and the spatial. If we now turn to the other prominent requirements for test design discussed above, we should be able to see that the technical consistency of a test is a conceptual reflection of the analogical link between the technical and kinematic dimensions of experience. In the latter dimension of experience, we first encounter the idea of regular or consistent movement; in fact that idea, of consistent movement, is the characterisation that identifies the nuclear meaning of ‘kinematic’ (Strauss, 2009: 88). All concepts of reliability measures of a technical artefact rely on that connection with the kinematic for their articulation. The rudimentary, as yet undisclosed notion of validity (as being made up of concurrent, construct and other kinds of features) relies, in turn, on the concepts we have of the technical effects that a test brings about when administered. The results of a test are the effects that are caused by the measurement, hence the notion of a test yielding adequate results. All of these analogical moments within the technical are related to the connections it has with the sphere of energy-effect, the physical mode of experience. Once that initial notion of technical validity or force has been conceptualised, however, there are further questions that arise: is the quality of the measurement adequate? Are the measurements relatable to valid ideas of what constitutes language ability? Hence, the concept of validity is enriched; we enquire, with reference to the analytical substratum of the technical, about how valid the construct is. The question ‘Is the construct theoretically defensible?’ anticipates a disclosure, an enhancement of the initial concept of validity. At its basis lie the connections among not only the technical and the physical, but certainly also the disclosure of those connections by the analytical mode. A central point of the argument of this chapter has been that only if the construct that is measured by a language test is theoretically defensible, in the sense of closely referring to the language ability – in this case academic literacy – that is being measured, do its results become interpretable. That clearly links the characteristic leading function of a test design with the reflections of the lingual dimension (cf. Weideman, 2009a) within the technical. In the interpretation of test results we have to express their meaning, stating what the results signify. To be able to interpret the results of a test with reference to a currently acceptable perspective on language, the test has to be designed in such a way that it authentically measures that idea of what language ability is. When such an idea reverts to, or, for the sake of avoiding controversy, accommodates earlier notions of language as a diversity of separable skills, the interpretation of test results, their meaning, is undermined. The interpretation of the results of the tests being discussed here does not, therefore, make
claims about listening, speaking, writing, or reading ability, not the least because, as Drennan (this volume) also argues and demonstrates, they cannot so easily be treated distinctly. What is more, as becomes clear when we look at the conceptualisation of the pivotal technical idea of utility, expressed in the analogy of the economic dimension of reality within the technical, the results (and the test itself) will be much less useful if the test measures ability that is compartmentalised in ways that are no longer theoretically current. So test usefulness and efficiency, concepts deriving from the economic sphere, give additional support to the argument for basing language assessments on a skills-neutral view of language ability. Without efficient and effective measurement, diagnostic information will be harder to find; the various studies of this (e.g. Pot, 2013; Pot & Weideman, 2015) show how a disclosed view of what constitutes academic discourse will more effectively yield diagnostic information that may be relevant for further intervention (language course) designs. While it may be so that the subtests referred to in this study have their own limitations regarding diagnostic strength (some may simply be too short, for example), Pot’s (2013) study has shown that the results of these subtests, in combination with a section that culminates in the assessment of a written argument, are highly relevant in supplying diagnostic information. Though these tests are a source of technically useful information, it is a pity that this chapter has not been able to highlight their diagnostic power more, but that was not its primary focus. The final set of analogies within the technical sphere to be discussed here is related to the connection between the design and the uses of its results (effects). Results have to be used with care and compassion, and this kind of use is dependent on their being fair in the first instance. In these ethical and juridical connections, we encounter exactly such ideas. The reference above to the identification of potential misclassifications, the provision of second chance tests to those so mistreated, and the elimination of items that either do not function well, or discriminate unfairly, clearly echo the juridical analogy within the technical sphere. These ideas also echo a concern that the tests of academic literacy that we design should treat test takers with care and compassion, which are originally ethical notions. The discussion in this concluding section of the chapter has centred on how a theory of applied linguistics may begin to inform our conceptualisation of language test design. We all acknowledge that test quality derives from adherence to numerous conditions and requirements. The theoretical framework discussed above presents, though not exhaustively, the beginning of an insight into a coherent set of principles of language test design. It is the thesis of this chapter that responsible test design will be possible if the principles of language assessment design are coherently brought together in a foundational framework of technical concepts and
ideas. Such principles should guide our test-making efforts, the administration of tests and the caring and compassionate use of their results. In this way, fairness, compassion and integrity are not added to tests afterwards, but are integrally part of them, just as are test reliability, validity, meaningfulness, usefulness and other ideas that guide test design.

Note

(1) This chapter is based on a paper presented as part of symposia on Assessing the academic literacy of university students through post-admission assessments at SAALT 2017 in Grahamstown and LTRC 2017 in Bogota, Colombia.
References Alderson, J.C., Clapham, C. and Wall, D. (2005) Language Test Construction and Evaluation. Cambridge: Cambridge University Press. Bachman, L.F. and Palmer, A.S. (1996) Language Testing in Practice: Designing and Developing Useful Language Tests. Oxford: Oxford University Press. Chan, S. (2018) Some evidence of the development of L2 reading-into-writing at three levels. Language Education and Assessment 1 (1), 9–27. DOI: 10.29140/lea.v1n1.44. CITO (2005) TiaPlus User’s Manual. Arnhem: CITO. Cliff, A. (2014) Entry-level students’ reading abilities and what these abilities might mean for academic readiness. Language Matters 45 (3), 313–324. Cliff, A. (2015) The National Benchmark Test in academic literacy: How might it be used to support teaching in higher education? Language Matters 46 (1), 3–21. Davidson, F. and Lynch, B.K. (2002) Testcraft. New Haven, CT: Yale University Press. Drennan, L. (2019) Defensibility and accountability: Developing a theoretically justifiable academic writing intervention for Sociology students at tertiary level. Unpublished PhD thesis, University of the Free State. Drennan, L. (2021) Assessing readiness to write: The design of an Assessment of Preparedness to Present Multimodal Information (APPMI). [In this volume]. Fulcher, G. (2010) Practical Language Testing. London: Hodder Education. Guyer, R. and Thompson, N.A. (2011). User’s Manual for Iteman 4.2. St. Paul, MN: Assessment Systems Corporation. Heugh, K. (2013) Where ‘whole language’ literacy and ‘communicative’ language teaching fail. HSRC Review 11 (1), 14–15. Hong, N.T.P. (2013) A dynamic usage-based approach to second language teaching. Unpublished PhD thesis, Rijksuniversiteit Groningen. Kane, M.T. (1992) An argument-based approach to validity. Psychological Bulletin 112 (3), 527–535. Karavas-Doukas, E.K. (1996) Using attitude scales to investigate teachers’ attitudes to the communicative approach. ELT Journal 50 (3), 187–198. Karavas-Doukas, K. (1998) Evaluating the implementation of educational innovations: Lessons from the past. In P. Rea-Dickins and K. Germaine (eds) Managing Evaluation and Innovation in Language Teaching: Building Bridges (pp. 25–50). London: Longman. Keyser, G. (2017) Die teoretiese begronding vir die ontwerp van ‘n nagraadse toets van akademiese geletterdheid in Afrikaans. Unpublished MA dissertation, University of the Free State. Knoch, U. and Elder, C. (2013) A framework for validating post-entry language assessments (PELAs). Papers in Language Testing and Assessment 2 (2), 48–66. Kumaravadivelu, B. (2003) Beyond Methods: Macrostrategies for Language Teaching. London: Yale University Press.
Language Courses and Tests (LCaT) (2020) Sample test. See https://lcat.design/academicliteracy/sample-test-of-academic-literacy-levels/ (accessed June 2020). Linacre, J.M. (2006) A User’s guide to Winsteps Ministep Rasch-model Computer Programs. Chicago: Winsteps. Littlewood, W. (1981) Communicative Language Teaching: An Introduction. Cambridge: Cambridge University Press. Littlewood, W. (2014) Communication-oriented language teaching: Where are we now? Where do we go from here? Language Teaching 47 (3), 349–362. Network of Expertise in Language Assessment (NExLA) (2019) Bibliography of language assessment. See https://nexla.org.za/research-on-language-assessment/ (accessed June 2019). Patterson, R. and Weideman, A. (2013a) The typicality of academic discourse and its relevance for constructs of academic literacy. Journal for Language Teaching 47 (1), 107–123. DOI: 10.4314/jlt.v47i1.5. Patterson, R. and Weideman, A. (2013b) The refinement of a construct for tests of academic literacy. Journal for Language Teaching 47 (1), 125–151. DOI: 10.4314/jlt.v47i1.6. Pot, A. (2013) Diagnosing academic language ability: An analysis of TALPS. Unpublished MA dissertation. Groningen: Rijksuniversiteit Groningen. Pot, A. and Weideman, A. (2015) Diagnosing academic language ability: Insights from an analysis of a postgraduate test of academic literacy. Language Matters 46 (1), 22–43. DOI: 10.1080/10228195.2014.986665. Read, J. (2015) Assessing English Proficiency for University Study. Basingstoke: Palgrave Macmillan. Scholtz, D. (2015) A comparative analysis of academic literacy specifications for a standardised test and academic literacy requirements for reading and writing in a range of disciplinary contexts. Unpublished DPhil thesis, University of Cape Town. See http://hdl.handle.net/11427/16866 (accessed June 2019). Schuurman, E. (1972) Techniek en toekomst: confrontatie met wijsgerige beschouwingen. Assen, the Netherlands: Van Gorcum. Sebolai, K. and Huff, L. (2015) Academic literacy curriculum renewal at a South African university: A case study. Journal for Language Teaching 49 (1), 333–351. DOI: 10.4314/ jlt.v49i1.13. Skehan, P. (1988) Language testing. Part I. Language Teaching 21 (4), 211–221. Strauss, D.F.M. (2009) Philosophy: Discipline of the Disciplines. Grand Rapids, MI: Paideia Press. Van der Slik, F. and Weideman, A. (2010) Examining bias in a test of academic literacy: Does the Test of Academic Literacy Levels (TALL) treat students from English and African language backgrounds differently? Journal for Language Teaching 44 (2), 106–118. DOI: 10.4314/jlt.v44i2.71793. Van der Walt, J.L. and Steyn, H.S. (2007) Pragmatic validation of a test of academic literacy at tertiary level. Ensovoort 11 (2), 138–153. Van Dyk, T.J. (2010) Konstitutiewe voorwaardes vir die ontwerp en ontwikkeling van ’n toets vir akademiese geletterdheid. Unpublished PhD thesis, University of the Free State. Van Dyk, T.J. and Weideman, A. (2004) Switching constructs: On the selection of an appropriate blueprint for academic literacy assessment. Journal for Language Teaching 38 (1), 1–13. DOI: 10.4314/jlt.v38i1.6024. Weideman, A. (2003) Assessing and developing academic literacy. Per Linguam 19 (1&2), 55–65. Weideman, A. (2007) Academic Literacy: Prepare to Learn (2nd edn). Pretoria: Van Schaik. Weideman, A. (2009a) Beyond Expression: A Systematic Study of the Foundations of Linguistics. Grand Rapids: Paideia Press. See https://albertweideman.files.wordpress. 
com/2016/04/beyond-expression-by-albert-weideman.pdf (accessed June 2019). Weideman, A. (2009b) Constitutive and regulative conditions for the assessment of academic literacy. Southern African Linguistics and Applied Language Studies 27 (3), 235–251. DOI: 10.2989/SALALS.2009.27.3.3.937.
Weideman, A. (2014) Innovation and reciprocity in applied linguistics. Literator 35 (1), 1–10. DOI: 10.4102/lit.v35i1.1074. Weideman, A. (2017a) Responsible Design and Applied Linguistics: Theory and Practice. Cham: Springer. DOI 10.1007/978-3-319-41731-8. Weideman, A. (2017b) The refinement of the idea of consequential validity within an alternative framework for responsible test design. In J. Allan and A.J. Artiles (eds) Assessment Inequalities: Routledge World Yearbook of Education (pp. 218–236). London: Routledge. Weideman, A., Patterson, R. and Pot, A. (2016) Construct refinement in tests of academic literacy. In J. Read (ed.) Post-Admission Language Assessment of University Students (pp. 179–196). Cham: Springer. DOI: 10.1007/978-3-319-39192-2_9. Weideman, A., Tesfamariam, H. and Shaalukeni, L. (2003) Resistance to change in language teaching: Some African case studies. Southern African Linguistics and Applied Language Studies 21 (1&2), 67–76. DOI: 10.2989/16073610309486329. Yeld, N. (2001) Equity, assessment and language of learning: key issues for Higher Education selection and access in South Africa. Unpublished PhD thesis, University of Cape Town.
3 Does One Size Fit All? Some Considerations for Test Translation

Tobie van Dyk, Herculene Kotzé and Piet Murre
Introduction
It is generally acknowledged that students struggle with the demands of higher education, resulting in substantial financial loss due to low throughput rates (Inspectie van het onderwijs, 2019; OECD, 2013; Scott et al., 2013; Van Dyk, 2015; Van Dyk & Van de Poel, 2013). There are many variables influencing study success, among others underpreparedness for university education, difficulties with the transition from school to higher education, financial (in)stability, emotional well-being, motivation, study skills, self-efficacy and educational background (Scott et al., 2007; Simpson, 2001; Zajacova et al., 2005). A variable of particular importance for this, and considered to be critical for study success, is that of adequate levels of academic literacy, which includes, but is not limited to, academic language proficiency (Defazio et al., 2010; Holtzman et al., 2005; Terraschke & Wahid, 2011; Zamel & Spack, 1998). In this regard, Van Dyk and Van de Poel (2013: 56) make a case for an ‘open, non-restricted view of language’ and define academic literacy as ‘being able to use, manipulate, and control language and cognitive abilities for specific purposes and in specific [academic] contexts’. From the above, it could be argued that a significant proportion of undergraduate students require some academic literacy and/or academic language support, especially since they exhibit literacy practices that are often considered inappropriate or inadequate in the context of higher education. This is markedly the case in contexts where students are exposed to intricate, cognitively sophisticated arguments through their reading matter and what they have to listen to during lectures. Moreover, it is often expected that students produce written and oral work at the same level of complexity as the oral and written texts they encounter. One way in which the epistemological access of students to such texts can be improved is to unlock what is currently covert through
our teaching and learning endeavours. Teaching experience, literature surveys, empirical research and testing (as components of certain aspects of, e.g. academic literacy) are all means through which we can determine what students struggle with and which can inform our course design and methodologies. The sections below will consider the use of academic literacy tests, with a specific focus on whether such tests can be transferred easily from one context to another without disadvantaging any particular group of students. Since reliable and valid tests are expensive to design, develop and refine, it is not necessarily feasible or sustainable to start anew in contexts that appear to differ, but also have similar features.

Problem Statement
Carroll (1961: 314) asserts that one of the purposes of language testing is to ‘render information to aid in making intelligent decisions about possible courses of action’. This is reiterated by Bachman (2003, 2009) in his discussion on test use, i.e. that data generated by tests should benefit stakeholders (for purposes of this chapter, they are taken to be students and higher education institutions). The question is whether it is pedagogically responsible merely to use and reuse tests in different, albeit comparable, contexts. Although such a pragmatic approach seems to be justifiable, can such tests still be considered fair and valid, as measuring what they aim to test (Bachman, 1990)? As our title in part states, ‘Does one size fit all?’

Theoretical Frameworks

A framework for academic literacy testing
In 2013, Van Dyk and Van de Poel considered different perspectives on, and the reasoning informing, the concept of ‘academic literacy’. They first took into account the broader pedagogical perspective within which the concept most often occurs. This was followed by an elucidation of how the New Literacy Studies and the Academic Literacies movements respectively contributed to a better understanding of the concept. Their paper continued to build a conceptual basis from which academic literacy could be considered, also taking into account a linguistic viewpoint, and illuminating the relationship between academic literacy, academic language ability and learning. The purpose of this part of the present chapter is to build on the Van Dyk and Van de Poel (2013) publication, but to take the argument further. In essence, the competence to ‘use, manipulate, and control language and cognitive abilities for specific purposes and in specific [academic] contexts’ (Van Dyk & Van de Poel, 2013: 56) refers to empowering students to become part of a culture or sub-culture, associated with a specific
discipline or academia in general (in other words, to become ‘insiders’ as opposed to remaining ‘outsiders’). It follows that as lecturers or teachers we have the responsibility to equip our students with the knowledge, skill and appropriate discourse(s) to, among other things, become familiar with the appropriate ways of thinking about, knowing, categorising, comparing, analysing and critiquing their disciplines (Ballard & Clanchy, 1988; Gee, 1990; Jacobs, 2005; Street, 1998). With the above as point of departure, one needs to consider what an academic literacy test should look like in 2019 and beyond. We take our cue here from the influence of the idea of communicative competence (Canale, 1983; Canale & Swain, 1980; cf. too Halliday, 1978, 1985; Skehan, 1988: 211–213; and Bachman, 1990; Blanton, 1994; Bachman & Palmer, 1996; Weir, 1990) as that relates specifically to language assessment. Similarly, in later decades, Weideman (2003) argues for a nonrestrictive or open view of language, where, firstly, language is considered a social instrument used to mediate and negotiate human interaction in specific contexts. The main function of language is here seen to be communication; moreover, the focus is on the process of using language instead of simply on language itself. Thus, too, as regards the assessment of language ability, McNamara (2004: 767) pleads for an open view of language that ‘will incorporate the impact of factors other than simple knowledge of the language ... [presenting] candidates with test tasks which seem to simulate the communicative demands of the target context, and evaluate performance in terms of criteria operating in that context’. The discussion above confirms that there will be numerous challenges associated with designing language tests, not least the need to test in a fair and unbiased manner. In our test designs, we also need to establish alignment, or technical harmony, between real life language tasks and the way they are appropriated in language tests. The information gathered by tests might then be used to arrive at informed decisions about the ways in which epistemological access can be improved, by revealing what otherwise remains concealed from students entering higher education. A construct for the Test of Academic Literacy Levels (TALL) and the Afrikaans Toets van Akademiese Geletterdheidsvlakke (TAG) was developed taking the above into consideration (cf. Davies et al., 1999; Van Dyk & Weideman, 2004; Weideman, this volume). The test comprises six sections (with the possibility of a seventh). On a macro level the sections contain question types, and on a micro level ten areas or components (Butler, 2017: 23–24) to be taken into consideration. Those sections operating on the macro level include:

Section 1: Scrambled text
In this section students are required to rearrange a number of sentences from a single paragraph in the correct order to form a cohesive
whole. They accordingly have to recognise relations between different parts of the text and use lexical clues provided in the sentences to link them logically to one another.

Section 2: Graphic and visual literacy
For this section, students typically have to interpret information presented in graphic or tabular format and respond to questions focusing on trends, estimations, simple numerical computations and inferences based on these calculations.

Section 3: Text type, genre, register and style
Students are presented with two sets of sentences taken from different text types, genres, registers and styles. They have to match these groups of sentences in terms of similarity.

Section 4: Reading comprehension
Students are presented with a longer text (or texts) usually taken (and adapted) from academic journals, popular scientific magazines and textbooks. They have to classify and compare information, make inferences, search for the hidden message between the lines, note text relations, distinguish between essential and non-essential information, know what counts as evidence, and so forth.

Section 5: Academic vocabulary
In this section of the test, familiarity with typical words frequently occurring in academic texts is tested. The ability to deduce meaning from context is also tested. Both Coxhead’s academic wordlist (Coxhead, 2000) and Nation’s word list (Nation, 2006) are used for this section.

Section 6: Text relations
For this section of the test, students are presented with a modified cloze procedure. It measures the functional knowledge informing students’ language ability, including syntax and grammar, as well as vocabulary and the communicative function of language.

Section 7: Writing (optional)
Students are required to complete a scaffolded piece of academic writing. It can be on a topic similar to the general topic of the test, or, if the test has no general topic, any topic deemed suitable. It measures text
construction (introduction, body, conclusion), referencing, the logical development of a text, and so forth. This section is mostly used for borderline cases only. At the micro level, the test requires that students should be able to perform the several actions with academic language that are outlined in Weideman (this volume; and the references there; for an earlier formulation, see Van Dyk & Weideman, 2004; for revised formulations, cf. too Patterson & Weideman, 2013a, 2013b).

A framework for text translation
For the purposes of this chapter, the suitability of the TALL for use in a different context than the one it was designed for is reviewed, in particular the feasibility of using the test in the Netherlands. It was decided to translate an existing test into Dutch and then determine if it rendered fair and unbiased results. The focus in this section of the chapter is on establishing a framework for test translation that could also be used in other contexts, the (multilingual) South African context in particular (see Steyn, this volume). Interdisciplinary cooperation has long been a reality in the field of translation studies (Munday, 2012: 22–23), especially in recent years. Particularly in the fields of medicine and psychology, the practice of translating existing tests for a variety of purposes is well known. According to Hambleton and Patsula (1998: 153), the notion of test adaptation has become an active area of research, as researchers are increasingly attempting to use existing tests (constructed and validated for use in one particular language) in other cultures and languages. However, the use of a data collection instrument is ‘more involved’ than the reproduction, administering and comparing of results when applied to another cultural group (cf. Hambleton, 1993, 1994; Van de Vijver & Hambleton, 1996). According to Gile (2017: 1436), interdisciplinary work is crucial to the development of both translation and interpreting studies, but he also warns against ‘blind compliance’ with a set of norms functioning in a particular field. He refers specifically to the fields of cognitive psychology, where the testing of a hypothesis becomes an issue due to inter-individual variability, but nevertheless states that experiments should in fact be done in translation studies. It is crucial that if methods are borrowed from other disciplines both the methods and criteria need to be adapted ‘on the basis of a sound understanding of the merits and drawbacks of each’. To attend to this problem, recent studies have focused specifically on the cross-cultural adaptation of tests. In medical fields, in particular, an increased number of international and multicultural projects have highlighted the need for adapted health status measures which can be used in different languages (Beaton et al., 2000: 3186). Cross-cultural
adaptation is defined as the ‘process that looks at both language (translation) and cultural adaptation issues in the process of preparing a questionnaire for use in another setting’ (Beaton et al., 2000: 3186). The issue of cross-cultural research is problematic, specifically due to the quality of the translation and consequently the comparability of the results when doing research in groups which differ culturally and ethnically (Sperber, 2004: 124). Sperber (2004: 124) asserts that the challenge in translating a questionnaire lies in ‘maintaining the meaning and intent of the original items’, and that a literal approach cannot be taken. In addition, he states that the translation of the data collection instrument as a crucial part of the process is unfortunately often neglected and viewed as unimportant. The risk in doing this is that any results obtained during the research process may be invalid. The notion of producing a translation that is fit for a specific purpose is a well-known one and draws on the work of, among others, Hans J. Vermeer, Katharina Reiss and Christiane Nord. From the development of general translation theory (Vermeer) to more specific translation theory (Reiss), a number of functional translation theories were born, among others that of Nord (2018). More recently, the focus of these functionalist approaches has been on their application, for instance, in solving specific translation problems. In this regard, Van Dyk et al. (2011: 159–160) applied this approach successfully to the translation of academic literacy tests by using a multidisciplinary team approach involving, among others, professional translators and qualified and experienced academics when translating an academic literacy test from Afrikaans into English. The conclusions of their study confirmed that using a multidisciplinary team worked well, but that further research should be done in terms of back-translation as an added option when translating this type of test. Back-translation, initially developed as a language-learning tool, is a translation that follows the lexical and syntactic patterning of the source text as closely as possible. This allows for a comparison to be made between a source and target text and is often used to validate the accuracy of a test within a wide range of fields, including psychology, medicine and psychometric questionnaires (Van Dyk et al., 2011: 157). However, several studies have concluded that this method of translation is not sufficient (cf. Fourie & Feinauer, 2005; Lamoureux-Hébert & Morin, 2009; McGorry, 2000; Steyn, this volume), a point supported by Hambleton (2005: 13): ‘[e]vidence of test equivalence provided by a back-translation design is only one of many types of evidence that should be compiled in a test adaptation study’. In the South African context, Butler (2017) reports on the translation of the Test of Academic Literacy Levels from English into Sesotho and isiZulu using back-translation. Overall, Butler (2017: 38; cf. too Steyn, this volume) reports encouraging findings using this method, but confirms Hambleton’s
above statement, as subsequent corrections and adaptations had to be made to the back-translations to ‘create a conceptually accurate Sesotho translation where the abilities tested by the source text were not compromised in the Sesotho translation’. Although back-translation may therefore prove useful, it is not sufficient, and can (and should) be supported by taking further steps to ensure accuracy. In fact, using a multidisciplinary approach (including a team of specialists and back-translation as part of a multifaceted process) to translate tests in various contexts is becoming increasingly common, with positive results (cf. Barchi-Ferreira et al., 2019; Mongkolsirikiet & Akaraborworn, 2019; Ossada et al., 2019). However, determining exactly which steps must be included in a multifaceted approach to translating tests should take cognisance of context, and will therefore probably be determined by a variety of factors. Due to cost and time factors limiting investment in both adaptation and back-translation, we decided to use only the method of adaptation for this project.

A Case Study in Test Translation

The South African context
As noted in the introduction to this chapter, students, particularly first-time entering students, struggle with the demands of higher education. According to the Department of Higher Education and Training (2019: 32ff), success rates should be measured in a nuanced manner that takes into account, for instance, drop-out rates, retention, completion rates, study progress and study duration and transition into the labour market or further studies. In South Africa the average undergraduate completion rate after three years of study is calculated at approximately 31%, and after eight years of study it is approximately 70%. Universities are therefore required to empower and support students to overcome barriers associated with teaching and learning (commonly referred to as increasing epistemological access), as well as increasing formal access. Each institution can decide how it wishes to approach this, but needs to report to the Department of Higher Education and Training annually, and ensure it complies with prescriptions, especially in cases where additional funding was provided, e.g. to employ more staff, invest in multimodal learning and implement tutorial systems to increase study success. In an attempt to support their students more effectively, many universities have introduced access and placement mechanisms. In principle, university access is granted provided the student passes the final secondary school exams with a so-called university exemption (a minority of school leavers actually do pass at this level). The next step is the calculation of the so-called Admission Point Score (APS; based on the
results of the final school exams) with different fields of study requiring prospective students to meet a certain minimum for those particular fields of study – the APS for prospective teachers is, for example, lower than that for prospective medical practitioners. Different fields of study are also allowed to include additional entry requirements, e.g. a certain minimum mark for mathematics and physical sciences if the student wishes to study engineering. Once students have gained formal access to university education, additional mechanisms might be employed, aimed at gathering diagnostic and/or placement data. The use of TALL is a typical example, where first-year students are required to sit the test before the results are used for placement and diagnostic purposes, to support students with focused training on how to achieve success in academia, or, put differently, gain epistemological access to higher education.

The Dutch context
The broader educational context for higher education in the Netherlands in the second decade of the 21st century includes a continuous political interest in the quality of courses and the quality of institutions themselves, as well as accessibility (both formal and epistemological) for students from all walks of life (this links, indeed, to the South African context outlined above). The broader context is often embedded in an economic discourse, in which drop-out rates are seen in financial terms. Drop-out rates vary: according to recent reports, after one year in higher education 15% of students stopped studying altogether (Inspectie van het Onderwijs, 2019: 186), while 24% of teacher education students dropped out (Onderwijsinspectie, 2014); the percentage of students studying education who graduate after eight years of study is calculated at 65%, which is twice the nominal time of a student studying for a Bachelor’s degree (Vereniging hogescholen, 2019). Against this backdrop politicians, as well as university managers, considered employing different entrance selection mechanisms, which eventually, early in the 2010s, led to more rigorous means of selecting students before entrance, or, somewhat less stringently, preparing prospective students better for what studying at a university entails. At Driestar University, a small-scale Dutch University for teacher education in Gouda in the Netherlands, concerns about the preparedness levels of prospective students and the high drop-out rates at such universities led to the implementation of a procedure that utilises different possible means provided by the national policy framework. Driestar offers bachelor courses for secondary teacher education, in which roughly a hundred students enrol annually. One third of these are school-leaving students who study full time; the others are all part-time students aged between 20 and 50, generally people who want to make a career change in order to become teachers.
The selection procedure consists of a selection day, accompanied by so-called written advice. On the selection day, students are firstly provided with general information about a career in teaching: this includes what the characteristics of a good teacher are, basic concepts on teaching and learning matters, the ethos and culture of Driestar University and the curriculum. Secondly, prospective students are required to sit a number of tests: a subject-matter-knowledge test for the particular area of study they wish to enrol for, a passive recognition test of academic English at the 10K frequency band (Laufer et al., 2004), and a translated TALL. The TALL was selected to be part of the test battery, as it tests constructs the university finds important (as noted above). TALL has been rigorously validated, exhibits good reliability and was administered to thousands of first-time entry university students, mainly in South Africa, but also in Botswana, Namibia, Singapore, Vietnam, Australia, Belgium and elsewhere (Van Dyk, 2010). Moreover, it is a multiple choice test (if Section 7 is excluded – cf. section ‘A framework for academic literacy testing’ above), which contributes to simple test administration procedures and objective scoring. Finally, since Afrikaans could be considered a sister language of Dutch, and the specific test was already translated into English, it seemed to be a relatively straightforward and uncomplicated enterprise to translate the test into Dutch, whereas developing a new version would have been much more expensive in terms of time and expertise (language and translation) required. Thirdly, and in the final stage of the selection procedure, three experts (lecturers at Driestar, concerned with entrance and selection) collaboratively formulate a personalised advisory piece for every prospective student, based on the results of the aforementioned tests, a student’s curriculum vitae (particularly in cases where prospective students wish to make a career change), their written reflection about a text on ‘the ideal student’, and a letter in which prospective students are required to motivate why they wish to study education. The advice can be threefold: allowed to enrol; a middle category in which students are allowed to enrol but which then also expressly indicates areas of some concern; or not allowed.

The translation project
TALL was translated into Dutch by a sworn translator, Dutch being her L1, for use in the Driestar context. The translator was familiar with the university. She was given general information about the selection procedure, the purpose of employing TALL and the future test takers. The brief was that the test had to be translated ‘literally’ in principle, as long as it was idiomatically possible in Dutch, and very infrequent or weird constructions were to be avoided. As a second step this first translated version was offered to a group of first time entering students (so not as an entrance test), who commented on the quality of the Dutch
(their L1) and the test in general – this was part of quality control. The third step required a university lecturer (L1 Dutch, L2 English) to scrutinise students’ feedback and to check every item of the test against the English version of the TALL, and the corresponding Afrikaans version (called the TAG). This led to many significant changes in the Dutch version, which will be discussed below. Only this reworked version was administered to prospective students. Note that the method of adaptation described above was considered to be suitable for translating the test into Dutch since both the Afrikaans and English versions were available for cross-checks and quality assurance.

Empirical investigation
For purposes of this study, students enrolled for studies in education in South Africa were used as a sample to compare their results with those enrolled at Driestar University. This was done in an attempt to focus on theoretically comparable groups. In doing so, one could hypothesise that scores should be equivalent and the means and variance in theory be equal (Davies et al., 1999; Koch, 2009). For the quantitative investigation the following statistical analyses were performed: As a first step, a scree test, inclusive of a varimax rotation, was performed for both the South African and Dutch cohorts. For both these, it indicated that the test consists of six factors, which is aligned with the six sections of the test, resulting in confidence that the test measures what it purports to measure, and that it does so across different administrations. With the exception of one section (Reading comprehension), items from a particular section clustered together under one of the aforementioned factors. Once consistency in measurement was established, we also employed t-tests (and ANOVAs – Bartlett’s test to determine differences in variance) to investigate the differences in performance between different genders, races and home language groups. In terms of gender, the difference in variance between males and females was not significant (p = 0.874), so a t-test assuming equal variances could be performed. The result indicated no statistically significant difference between gender groups (p > 0.05) – note Tables 3.1 and 3.2. In terms of race, there were statistically significant differences between four different cohorts of students (Table 3.3).

Table 3.1 Descriptive statistics for comparing performance in TALL between different genders (South African and Dutch students combined)

Variable    N      Mean     Std dev
Male        481    48.52    18.51
Female      574    50.54    18.63
Table 3.2 The t-test for comparing performance in TALL between different genders

        Variances    DF      t Value    Pr > |t|
TALL    Equal        1055    0.70       0.4723
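The gender comparison reported in Tables 3.1 and 3.2 can be reproduced with a few lines of scripting. The sketch below assumes two arrays of TALL total scores; the data generated here are merely illustrative stand-ins, not the study’s data.

```python
import numpy as np
from scipy import stats

# Illustrative stand-in data only (means/SDs loosely echo Table 3.1).
rng = np.random.default_rng(0)
male_scores = rng.normal(48.5, 18.5, 481)
female_scores = rng.normal(50.5, 18.6, 574)

# Bartlett's test: are the two group variances homogeneous?
bart_stat, bart_p = stats.bartlett(male_scores, female_scores)

# Choose an equal- or unequal-variance t-test accordingly.
equal_var = bart_p > 0.05
t_stat, t_p = stats.ttest_ind(male_scores, female_scores, equal_var=equal_var)

print(f"Bartlett p = {bart_p:.3f}; equal variances assumed: {equal_var}")
print(f"t = {t_stat:.2f}, p = {t_p:.4f}")
```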
Table 3.3 Descriptive statistics for comparing performance in TALL between different races (South African and Dutch students combined)

Variable    N      Mean     Std dev
Coloured    94     53.51    12.31
Indian      51     44.15    11.94
Black       289    37.19    16.72
White       621    63.32    15.97
Perhaps this could be attributed to different schooling systems used in South Africa and the differences in preparedness levels between previously advantaged and disadvantaged students. White students typically outperformed black, coloured and Indian students. Coloured students outperformed black and Indian students. Indian students outperformed black students. Interestingly, when data for South African and Dutch students were combined, it rendered similar results, indicating that there is not a significant difference between students who completed secondary school in South Africa and students who completed secondary school in Europe, except when race comes into play (also see the Discussion section below). Bartlett’s test (Table 3.4) indicated no homogeneity of variance in the performance of the different race groups. The difference between white and coloured students for this specific cohort (Table 3.5) was statistically significant (p < 0.0001) and a t-test was performed, which confirmed significant differences. Comparable results were yielded for different home language groups (note that Afrikaans, English, African, Dutch and Other were used as language groupings), which was expected due to the close relationship between race and home language.

Table 3.4 Bartlett’s test for homogeneity of variance in TALL

Source    DF    Chi-square    Pr > ChiSq
          3     117.4

In terms of equating test results, a number of variables had to be taken into consideration, particularly because the ‘same’ test was
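For readers wishing to replicate this kind of group comparison, the sketch below illustrates the general workflow described above (Bartlett's test for homogeneity of variance, followed by a t-test or a one-way ANOVA) in Python with SciPy. The score arrays are hypothetical stand-ins generated from the summary statistics in Tables 3.1 and 3.3; this is not the original analysis script.

```python
# Sketch of the variance check and group comparisons described above.
# The generated arrays are placeholders; the reported analyses used the
# actual TALL scores of the South African and Dutch cohorts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male_scores = rng.normal(48.5, 18.5, 481)     # placeholder scores (cf. Table 3.1)
female_scores = rng.normal(50.5, 18.6, 574)

# Step 1: Bartlett's test for homogeneity of variance
bart_stat, bart_p = stats.bartlett(male_scores, female_scores)

# Step 2: pooled (equal-variance) t-test if variances are homogeneous,
# otherwise Welch's t-test for unequal variances
equal_var = bart_p > 0.05
t_stat, t_p = stats.ttest_ind(male_scores, female_scores, equal_var=equal_var)
print(f"Bartlett p = {bart_p:.4f}, equal variances assumed: {equal_var}")
print(f"t = {t_stat:.2f}, p = {t_p:.4f}")

# For more than two groups (e.g. the four race groups in Table 3.3),
# a one-way ANOVA plays the same role as the t-test:
group_scores = [rng.normal(m, s, n) for m, s, n in
                [(53.5, 12.3, 94), (44.2, 11.9, 51), (37.2, 16.7, 289), (63.3, 16.0, 621)]]
f_stat, f_p = stats.f_oneway(*group_scores)
print(f"ANOVA F = {f_stat:.2f}, p = {f_p:.4f}")
```

The same pattern (variance check first, mean comparison second) underlies the results reported in Tables 3.1 to 3.5.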
Differential item functioning (DIF)
DIF refers to bias at item level. If DIF appears, it could be argued that a student had to rely on abilities other than those being tested by the construct. The South African and Dutch test data were compared in terms of item difficulty and discrimination index. Only seven items showed possible DIF: one from Graphic and visual (question 6), two from Text types (questions 13 and 15), three from Reading comprehension (questions 28, 29 and 35) and one from Vocabulary (question 48). In the Van Dyk et al. (2011) study, 10 questions showed DIF, of which five are exactly the same as in the case at hand (questions 6, 13, 15, 28 and 29). This could be an indication that these items are problematic and need to be replaced, even though they were amended after the 2011 study. The TiaPlus (CITO, 2005) statistical package was used to compute the Mantel-Haenszel statistic for the above-mentioned items, and the results are shown in Table 3.9. If the DIF Stat is smaller than 1, the item could be considered harder for the first subgroup, in this case the Dutch students; the closer to 1, the more equal the two groups are in terms of item difficulty. If the DIF Stat is larger than 1, the item was harder for the second subgroup, the South African students. If one considers these findings in more detail, it becomes apparent that some of them can be attributed to translation errors, grammar issues, phrasing of questions, ambiguity and so forth. These are all discussed in the qualitative section below.

Table 3.9 Descriptive statistics for items showing DIF

             Item 6    Item 13   Item 15   Item 28   Item 29   Item 35   Item 48
DIF stat     3.452     4.147     0.099     3.296     5.822     3.952     7.629
z(stand)     –2.7703   2.619     –2.955    2.695     3.627     2.905     2.678
Harder for   SA        SA        Dutch     SA        SA        SA        SA
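The Mantel-Haenszel procedure behind this kind of DIF statistic can be sketched in a few lines of Python. The function below computes a common odds ratio for a single item, stratified on total test score; the array names are hypothetical, and TiaPlus may scale or correct its DIF Stat differently, so this is an illustration of the underlying technique rather than a reproduction of the package's output.

```python
import numpy as np

def mantel_haenszel_odds_ratio(item_correct, group, total_score):
    """Common odds ratio for one item across score strata.

    item_correct : 0/1 array of correctness on the studied item
    group        : 0 = first subgroup (e.g. Dutch), 1 = second subgroup (e.g. South African)
    total_score  : matching variable used to form the strata
    Values far from 1 suggest differential item functioning.
    """
    num = den = 0.0
    for s in np.unique(total_score):
        stratum = total_score == s
        a = np.sum((group[stratum] == 0) & (item_correct[stratum] == 1))  # group 0 correct
        b = np.sum((group[stratum] == 0) & (item_correct[stratum] == 0))  # group 0 incorrect
        c = np.sum((group[stratum] == 1) & (item_correct[stratum] == 1))  # group 1 correct
        d = np.sum((group[stratum] == 1) & (item_correct[stratum] == 0))  # group 1 incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else float("nan")
```

An odds ratio well above or below 1 flags an item for the kind of qualitative inspection reported in the next section.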
Qualitative investigation

Six problem areas were identified in the first translated version of the test, by analysing the feedback from students and scrutinising the test item by item against the original English version and the translated Afrikaans version. The six areas of concern were scattered across the entire test and not confined to particular sections. Treating these areas by discussing each of the sections, as Butler (2017) has done in detail for the Sesotho-English translation, is therefore not a feasible way of presenting the results in this case. Each of the six problem areas is discussed below and illustrative examples are given from the first translated version (Du. tr.), as compared to the English original (Eng. orig.) and the adapted Dutch translation (Du. alt.). Note that, as the TALL is of course still in use, it is impossible to copy large pieces of text into this chapter. The examples provided, however, indicate the point under discussion.
Correspondence between leading phrase and answers
The first area of concern was that the leading phrase did not always correspond perfectly with the answers in terms of word order (also see Steyn, this volume):

(Eng. orig.) The writer uses the quote (…) to illustrate that
(Du. tr.) De schrijver gebruikt het citaat (…) om te illustreren dat
This translation is correct and the leading phrase as such does not present problems. However, in one of the distractors the English original has been translated too literally, without looking back at the leading phrase:

(Eng. orig.) FB users overdo their enthusiasm
(Du. tr.) de gebruikers van Facebook zijn overdreven enthousiast over (…)
While the Dutch is correct as such when considering the specific distractor, it no longer corresponds seamlessly with the leading phrase, as that requires a change in word order:

(Du. alt.) de gebruikers van Facebook overdreven enthousiast zijn over (…)
Incorrect or unnatural Dutch
A second problem area concerned words or constructions that are either plainly incorrect and would never be used by L1 speakers, or that are grammatically possible but artificial and somewhat stilted:

(Eng. orig.) 'which', 'artificial'
(Du. tr.) 'welk', 'kunstmatig'
Though both Dutch adjectives are correct in terms of a dictionary translation, they can occur with or without a final 'e', depending on the gender of the following noun. In the sentences in which these words were used, they required this final 'e':

(Du. alt.) 'welke', 'kunstmatige'
Unintended ambiguity
The third problem area consists of words or phrases where unintentional ambiguity is introduced in the translation; ambiguity which is absent in the original. In the example below this is combined with incorrect Dutch in the first word in bold:

(Eng. orig.) if we … what proportion would this be of …?
(Du. tr.) Als we …, wat aandeel is dit van …?
Though the first word in bold may seem deceptively close to the original, it is in fact wrong. The word 'wat' as an interrogative pronoun can never be used before a noun in this construction. Consequently, this is considered an instance of a false friend and incorrect Dutch. The second words in bold in the English original indicate a possibility, not a reality, and there is no ambiguity here. In Dutch, in these constructions, possibilities are indicated by another tense (not the present simple) and clearly distinguished from reality as denoted by the 'is' in the Dutch translation. In colloquial spoken Dutch 'is' may sometimes hint at possibility. However, in order to prevent ambiguity, it had to be adapted:

(Du. alt.) Als we …, welk aandeel zou dit zijn van …?
Individual words
A fourth broad area of concern was that of register, word choice in general, academic language use, and shades of meaning in words used in the Dutch translation that do not correspond well enough with the English original. The Dutch translation, not infrequently, contained words that were not wrong, unidiomatic, unnatural or infrequent. However, they were not always the best option available in terms of register or academic jargon. As this is directly related to the constructs the test was designed to measure, it is an important area of concern. Consider the following example:

(Eng. orig.) 'annotation'
(Du. tr.) 'kanttekening'
Though the Dutch 'kanttekening' is indeed a correct translation of the English 'annotation', and is also used in academic texts, it connotes criticism more often than the word in the English original does. In this case there was a Dutch synonym which seems perfectly suitable:

(Du. alt.) 'annotaties'
Correspondence between questions and source texts
In the original test, whenever a keyword was used in a question, it could literally be found in the text (paragraph or sentence) that the question referred to. A noun, for example, was never changed into a verb or adjective. In the Dutch translation this close correspondence was sometimes lost. Both the question and answer options, and the source text they referred to, were correct in terms of idiomatic language. However, at the level of the individual word there were sometimes minor and avoidable discrepancies:

(Du. tr. text) 'verandert' (a finite form)
(Du. tr. question) 'veranderen' (an infinitive)
This was, of course, easy to verify and to adapt:

(Du. alt. question) 'verandert'
Test design
A sixth and final problem area had to do with theoretical considerations on test design and, more particularly, distractor design. Ideally, none of the distractors should stand out from the others in any way, including length, initial letter, prefix or suffix, part of speech, or a problem with the word itself:

(Eng. orig.) A. adaptable B. useable C. divisible D. reconcilable
(Du. tr.) A. geschikt B. bruikbaar C. scheidbaar D. verzoenbaar
The Dutch translation introduces two problems. First, one of the alternatives ('geschikt') lacks the suffix -baar and so clearly stands out, whereas in the English original the suffixes -able and -ible are used in all four distractors. The second problem is related to unnatural Dutch. Both 'scheidbaar' and 'verzoenbaar' are technically possible but very infrequent or even idiosyncratic. It was not possible to find an alternative adjective for 'geschikt'. Therefore, in this case all options were changed to equally frequent and non-artificial verbs:
(Du. alt.) A. aan te passen B. te gebruiken C. te scheiden D. te verzoenen
A second example concerns initial letters. Either these should all be different (so that no option stands out for this reason), or they should all be the same.

(Eng. orig.) 'or makes an ________ of it'. A. annotation B. analogy C. analysis D. anarchy
(Du. tr.) ‘of er ________ bij maakt’. A. kanttekeningen B. analogie C. analyse D. anarchie
Apart from the fact that only the first answer fits in well because of the preposition 'bij', that answer obviously stands out. In this case the preposition had to be incorporated in the answer options, both to avoid this problem and to obtain identical first letters:

(Du. alt.) 'of er ________ maakt'. A. annotaties bij B. analogieën van C. analyses van D. een anarchie van
Discussion
In the problem statement above, the question was raised whether one size fits all. Can a test construct, and tests designed for use in a specific context, be reused in another country, whether for the same purpose (identifying the academic literacy levels of students entering higher education in order to provide appropriate and adequate support) or for a different purpose (as part of a selection battery)? If it works well in the one context, will it be fair and unbiased in the other? From the above it may be concluded that, on the surface, the tests (the original and the translated versions) do not seem to be fair or unbiased. However, if one considers the diagnostic and placement information generated by the tests, the question arises whether it is not acceptable to have the tests 'discriminate' against certain groups in order to establish what kinds of support and scaffolding students need to survive academia. Unquestionably, this brings to the fore the issue of what defines a university and what a university system should look and function like, especially one that is a decolonised and Africanised university in the South African context, and a Western tertiary institution in the Dutch context. In our opinion, students need to be trained to function effectively and efficiently in environments different from their own. It is therefore important to focus on the one without neglecting the other. To this end, Deygers (2019) provides a thorough discussion of fairness and social justice in language assessment that is worthy of further consideration.
The empirical data indicate that students from disadvantaged school backgrounds do perform significantly differently from their more privileged counterparts. In the South African context, where test data are used for diagnostic and/or placement purposes, this use seems to be justified. However, in the Dutch context, where the test is used for selection purposes, it seems unfair. Nonetheless, one should also take into consideration that TALL is but one part of the complete selection process and that the total weighting of the test in final decision making falls within international parameters, i.e. that it should preferably not have a weighting of more than 10% in the final decision to allow access to tertiary education. Fortunately, other variables, tests and judgements are also taken into consideration, specifically in the Dutch context.

Conclusion
This chapter set out to determine whether translated tests could be used in a fair and unbiased manner in different, albeit comparable, contexts. In an attempt to establish test equivalence, both quantitative and qualitative analyses were performed. These were interpreted and argued against theoretical frameworks for (academic literacy) testing and translation theory. It was found that, although there were some hindrances, overall the tests could be considered fair and unbiased, particularly in cases where test data are used to inform decision making about the kinds and amount of support students require to survive academia (i.e. where results are used in low-stakes decision making). However, if the test results are used for high-stakes decisions, e.g. admission to further study, it is of concern that they have the ability to discriminate unfairly against certain cohorts of students, particularly those who have received secondary education in contexts not conducive to developing students to their full academic potential. This reiterates the point that when test instruments are used for high-stakes decisions, they should rather be used in combination with other measures, as was the case in the Dutch study. A further matter of importance is the choice of translator and the brief given to the translator. In the case above, one could infer that the translation was approached with too little deliberation: semantically, the translation was too broad and general in nature, focusing on the meaning of chunks instead of on the specific words and phrases that also appeared in questions. Working with only one, albeit a sworn, translator was not a good approach, as the translator had to employ her second language ability when interpreting the source text. This brings to the fore the value of also including back translation as part of quality assurance, to enable us to assess the quality of the target test more adequately. Back translation, however, is not the be-all and end-all. On its own, it will not bring to light the problems identified above; it is therefore also
necessary for a language course and assessment designer to check the test translation first. Test translators should preferably be part of a multidisciplinary team so that they can be exposed to the knowledge and skill of test designers. Finally, we recommend that attention be paid to developing a test translation protocol, for the South African context in particular, similar to the guidelines provided by the International Test Commission (ITC) (2016). Ultimately all these are small but necessary steps towards the overarching question: What is the predictive validity of the advice given to prospective students on the basis of the TALL scores? In order to do these recipients of the assessment justice, it must be useful for indicating their potential to perform better in future, provided that provision is made for the development of their ability to handle language in tertiary academic settings.

References

Bachman, L.F. (1990) Fundamental Considerations in Language Testing. Oxford: Oxford University Press. Bachman, L.F. (2003) Building and supporting a case for test use. Language Assessment Quarterly 2 (1), 1–34. Bachman, L.F. (2009) Generalizability and research use arguments. In K. Ercikan and W. Roth (eds) Generalizing from Educational Research: Beyond Qualitative and Quantitative Polarization (pp. 127–148). New York: Routledge. Bachman, L.F. and Palmer, A.S. (1996) Language Testing in Practice: Designing and Developing Useful Language Tests. Oxford: Oxford University Press. Ballard, B. and Clanchy, J. (1988) Literacy in the university: An ‘anthropological’ approach. In G. Taylor, B. Ballard, V. Beasley, H.K. Bock, J. Clanchy and P. Nightingale (eds) Literacy by Degrees (pp. 7–23). Milton Keynes: Open University Press. Barchi-Ferreira, A.M., Loureiro, S.R., Torres, A.R., Da Silva, T.D.A., Moreno, A.L., DeSousa, D.A., Chagas, M.H.N., Dos Santos, R.G., Machado-de-Souza, J.P., Chagas, N.M.S., Hallak, J.E.C., Crippa, J.A.S. and Osório, F.L. (2019) Personality Inventory for DSM-5 (PID-5): Cross-cultural adaptation and content validity in the Brazilian context. Trends Psychiatry Psychother 30 May. DOI: 10.1590/2237-6089-2018-0098. Beaton, D.E., Bombardier, C., Guillemin, F. and Ferraz, M.B. (2000) Guidelines for the process of cross-cultural adaptation of self-report measures. Spine 25 (24), 3186–3191. Blanton, L.L. (1994) Discourse, artefacts and the Ozarks: Understanding academic literacy. Journal of Second Language Writing 3 (1), 1–16. Butler, G. (2017) Translating the Test of Academic Literacy Levels into Sesotho. Journal for Language Teaching 51 (1), 11–44. Canale, M. (1983) From communicative competence to communicative language pedagogy. In J.C. Richards and R.W. Schmidt (eds) Language and Communication (pp. 2–14). London: Longman. Canale, M. and Swain, M. (1980) Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics 1 (1), 1–47. Carroll, J.B. (1961) Fundamental considerations in testing for English proficiency of foreign students. In H.B. Allen and R.N. Campbell (eds) (1965) Teaching English as a Second Language: A Book of Readings (pp. 313–330). New York: McGraw Hill. CITO (2005) TiaPlus User’s Manual. Arnhem: CITO. Coxhead, A. (2000) A new academic word list. TESOL Quarterly 34 (2), 213–238.
Davies, A., Brown, A., Elder, C., Hill, K., Lumley, T. and McNamara, T. (1999) Dictionary of Language Testing. Cambridge: Cambridge University Press. Defazio, J., Jones, J., Tennant, F. and Hooke, S.A. (2010) Academic literacy: The importance and impact of writing across the curriculum. Journal of the Scholarship of Teaching and Learning 10 (2), 34–47. Department of Higher Education and Training (2019) Post-School Education and Training Monitor. Pretoria: Department of Higher Education and Training. Deygers, B. (2019) Fairness and social justice in English language assessment. In Second Handbook of English Language Teaching (pp. 1–29). New York: Springer. Fourie, J. and Feinauer, I. (2005) The quality of translated medical research questionnaires. Southern African Linguistics and Applied Language Studies 23 (4), 349–367. Gee, J.P. (1990) Social Linguistics and Literacies: Ideology in Discourse. London: Falmer. Gile, D. (2017) Traditions and innovation in Interpreting Studies: A personal analysis for 2016. Dominios de Lingua@gem, Uberlandia 11 (5), 1424–1439. Halliday, M.A.K. (1978) Language as Social Semiotic. London: Edward Arnold. Halliday, M.A.K. (1985) An Introduction to Functional Grammar. London: Edward Arnold. Hambleton, R.K. (1993) Translating achievement tests for use in cross-national studies. European Journal of Psychological Assessment 9, 57–68. Hambleton, R.K. (1994) Guidelines for adapting educational and psychological tests: A progress report. European Journal of Psychological Assessment 10, 229–244. Hambleton, R.K. (2005) Issues, designs, and technical guidelines for adapting tests into multiple languages and cultures. In R.K. Hambleton, P.F. Merenda and C.D. Speilberger (eds) Adapting Educational and Psychological Tests for Cross-Cultural Assessment (pp. 3–38). New York: Taylor and Francis Psychology Press. Hambleton, R.K. and Patsula, L. (1998) Adapting tests for use in multiple languages and cultures. Social Indicators Research 45, 153–171. Holtzman, J.M., Elliot, N., Biber, C.L. and Sanders, R.M. (2005) Computerized assessment of dental student writing skills. Journal of Dental Education 69 (2), 285–295. Inspectie van het Onderwijs (2019) Rapport De Staat van het Onderwijs 2019: Onderwijsverslag over 2017/2018. Utrecht: Inspectie van het Onderwijs (Annual report of the Dutch Inspectorate on Education in the Netherlands). See https://www. onderwijsinspectie.nl/documenten/rapporten/2019/04/10/rapport-de-staat-van-hetonderwijs-2019 (accessed July 2019). International Test Commission (2016) The ITC Guidelines for Translating and Adapting Tests (2nd edn). See www.InTest.Com.org (accessed July 2019). Jacobs, C. (2005) On being an insider on the outside: New spaces for integrating academic literacies. Teaching in Higher Education 10 (4), 475–487. Koch, E. (2009) The case for bilingual language tests: A study of test adaptation and analysis. South African Linguistics and Applied Language Studies 27 (3), 301–316. Lamoureux-Hébert, M. and Morin, D. (2009) Translation and cultural adaptation of the supports intensity scale in French. American Journal on Intellectual and Developmental Disabilities 114 (1), 61–66. Laufer, B., Elder, C., Hill, K. and Congdon, P. (2004) Size and strength: Do we need both to measure vocabulary knowledge? Language Testing (21), 202–226. McGorry, S. (2000) Measurement in a cross-cultural environment: Survey translation issues. Qualitative Market Research: An International Journal 3 (2), 74–81. McNamara, T. (2004) Language testing. In A. Davies and C. 
Elder (eds) The Handbook of Applied Linguistics (pp. 763–783). Malden, MA: Blackwell. Mongkolsirikiet, K. and Akaraborworn, C. (2019) A revisit of Holton’s HRD Evaluation and Research Model (2005) for Learning Transfer. Journal of Community Development Research (Humanities and Social Sciences) 12 (2), 15–34. Munday, J. (2012) Introducing Translation Studies: Theories and Applications (3rd edn). Routledge: New York.
Nation, I.S.P. (2006) How large a vocabulary is needed for reading and listening? Canadian Modern Language Review 63 (1), 59–82. Nord, C. (2018) Translating as a Purposeful Activity: Functionalist Approaches Explained (2nd edn). Manchester: St. Jerome. OECD (2013) Education at a Glance. See https://www.oecd.org/edu/eag2013%20(eng)– FINAL%2020%20June%20(2013)pdf (accessed July 2019). Onderwijsinspectie (2014) Uitval studenten. See https://www.onderwijsinspectie.nl/ onderwijssectoren/hoger-onderwijs/sectoren/onderwijs/indicatoren/uitval (accessed July 2019). Ossada, V.A.Y., Souza, J.G., Cruz, D.M.C, Campos, L.C.B., Medola, F.O. and Costa, V.S.P. (2019) Cross-cultural adaptation of wheelchair skills test (version 4.3) for wheelchair users and caregivers to the Portuguese language (Brazil). Disability and Rehabilitation: Assistive Technology May 30, 1–8. DOI: 10.1080/17483107.20191604826. Patterson, R. and Weideman, A. (2013a) The typicality of academic discourse and its relevance for constructs of academic literacy. Journal for Language Teaching 47 (1), 107–123. DOI: 10.4314/jlt.v47i1.5. Patterson, R. and Weideman, A. (2013b) The refinement of a construct for tests of academic literacy. Journal for Language Teaching 47 (1), 125151. DOI: 10.4314/jlt.v47i1.6. Scott, I., Ndebele, N., Badsha, N., Figaji, B., Gevers, W. and Pityana, B. (2013) A Proposal for Undergraduate Curriculum Reform in South Africa: The Case for a Flexible Curriculum Structure (Report of the Task Team on Undergraduate Curriculum Structure). Pretoria: Council on Higher Education. Scott, I., Yeld, N. and Hendry, J. (2007) A case for improving teaching and learning in South African higher education. Higher Education Monitor; 6. Pretoria: Council for Higher Education. Skehan, P. (1988) Language testing, Part I. Language Teaching 21 (4), 211–221. Simpson, J.C. (2001) Segregated by subject: Racial differences in the factors influencing academic major between European Americans, Asian Americans, and African, Hispanic and Native Americans. The Journal of Higher Education 72 (1), 63–100. Sperber, A.D. (2004) Translation and validation of study instruments for cross-cultural research. Gastroenterology 126, 124–128. Steyn, S. (2021) Design pathways to parity between parallel tests of language ability: Lessons from a project. [In this volume]. Street, B. (1998) New literacies in theory and practice: What are the implications for language in education? Linguistics in Education 10 (1), 1–24. Terraschke, A. and Wahid, R. (2011) The impact of EAP study on the academic experiences of international postgraduate students in Australia. Journal of English for Academic Purposes 10 (3), 173–182. Van de Vijver, F. and Hambleton, R.K. (1996) Translating tests. European Psychologist 1 (2), 89–99. Van Dyk, T.J. (2010) Konstitutiewe voorwaardes vir die ontwerp van ’n toets van akademiese geletterdheid. Unpublished PhD Thesis, University of the Free State. Van Dyk, T.J. (2015) Tried and tested. Academic literacy tests as predictors of academic success. Tijdschrift voor Taalbeheersing 47 (2), 43–70. Van Dyk, T.J. and Van de Poel, K. (2013) Towards a responsible agenda for academic literacy development.: Considerations that will benefit students and society. Journal for Language Teaching 47 (2), 43–70. Van Dyk, T.J. and Weideman, A. (2004) Switching constructs: On the selection of an appropriate blueprint for academic literacy assessment. Journal for Language Teaching 38 (1), 1–13. Van Dyk, T.J., Van Rensburg, A. and Marais, F. 
(2011) Levelling the playing field: An investigation into the translation of academic literacy tests. Journal for Language Teaching 45 (1), 153–169.
Vereniging Hogescholen (2019) Dashboard studiesucces, uitval en studiewissel. See https:// www.vereniginghogescholen.nl/kennisbank/feiten-en-cijfers/artikelen/dashboardstudiesucces-uitval-en-studiewissel (accessed July 2019). Weideman, A. (2003) Justifying course and task construction: Design considerations for language teaching. Acta Academica 35 (3), 26–48. Weideman, A. (2021) A skills-neutral approach to academic literacy assessment. [In this volume]. Weir, C.J. (1990) Communicative Language Testing. London: Prentice Hall. Weir, C.J. (2005) Language Testing and Validation. Basingstoke: Palgrave Macmillan. Zajacova, A., Lynch, S.M. and Espenshade, T.J. (2005) Self-efficacy, stress and academic success in college. Research in Higher Education 46 (6), 677–706. Zamel, V. and Spack, R. (1998) Preface. In V. Zamel and R. Spack (eds) Negotiating Academic Literacies: Teaching and Learning Across Languages and Cultures. Mahwah, NJ: Lawrence Erlbaum.
4 The Use of Mediation and Feedback in a Standardised Test of Academic Literacy: Theoretical and Design Considerations

Alan Cliff
Introduction
The aim of this chapter is to explore the value of a test of the learning potential of entry-level students wishing to access Higher Education. The chapter explores mediation and feedback as applied in educational testing contexts and considers the merits and demerits of such mediation for the assessment of learning potential. Learning potential is here understood in a fairly specific sense to mean the extent to which a learner can demonstrate the ability to gain new knowledge or learn new cognitive processes in a context where mediation is provided to facilitate that possible learning. The mediation provided consists of explicit feedback to the learner about his/her responses to particular learning tasks; about the qualitative differences between one kind of response to a task and another; and about the extent to which feedback given has been utilised by the learner in responses to subsequent different learning tasks. Through mediation and feedback, a test taker is presented with a new, more complex awareness of what is to be responded to and utilises that awareness in the completion of following tasks. Potential in this sense refers therefore to the externalisation of an intangible (or latent) ability to learn in the form of a response to tasks carefully and consciously constructed to provide the learner with an opportunity to ‘show’ what has been learned. Quantitative or qualitative improvement by the test taker is at minimum a demonstration of the application of
increased awareness (a) of what is required in order to respond to tasks and (b) of the fact that this awareness can be transferred to other learning contexts. Mediation and feedback in the test-taking situation assume the nature of dynamic processes to the extent that they enable and contribute to change in the test-taker’s awareness of the task.

Antecedents of Dynamic Assessment
The development of dynamic approaches to assessment arguably traces its origins to 20th century reaction to the development of psychometric tests of intelligence in the Stanford-Binet tradition (Binet & Simon, 1905). In their early conception, these tests were predicated on the assumption that intelligence was a static and universally understood ability amenable to measurement through a test-taker’s response to a range of verbal and non-verbal tasks undertaken in a controlled experimental environment. It is perhaps worth noting that modern versions of the Stanford-Binet IQ test include a scale for the assessment of ‘fluid intelligence’ – about which more will be said in relation to the work of Cattell (1971) – but early versions of the Stanford-Binet test were critiqued for assuming intelligence to be fixed, applicable to all socio-cultural contexts and amenable to measurement through response to a range of decontextualised, somewhat abstract tasks. Cattell’s Culture Free Intelligence Test (1940) and his subsequent writings on the changeability of intelligence over time (see for example, Cattell, 1971) are arguably analogues for the development of dynamic approaches to assessment. There has been much subsequent debate about the meaning of ‘culture free’ and ‘culture fair’ tests: can any test or individual really be culture free or is it more that educational testing needs to take account of the way ‘culture’ acts as a filter for the test maker and the test taker, i.e. the assessment needs to be culture fair? Cattell’s notions of crystallised and fluid intelligence stem from his theory that some aspects of ability or intelligence are related to education, learning, knowledge and experience (crystallised intelligence) and others are related to the capacity to reason and think logically in novel situations that are independent of acquired knowledge (fluid intelligence). Cattell hypothesised crystallised and fluid intelligence to be discrete components of what he termed general intelligence (or g), a notion first introduced in the work of Spearman (1927). Whether crystallised and fluid intelligence are discrete forms of a more general intelligence is perhaps a moot point – it is likely that fluid intelligence is at least partly influenced by the residual effects of crystallised intelligence. Put differently, an individual’s capacity to reason and think logically in novel situations may not be entirely independent of acquired knowledge, as such acquired knowledge has an impact on performance in novel situations in that it acts as a repository or resource
in the novel situation. Nonetheless, the notion of fluid intelligence presents possibilities for the assessment of learning potential: it is assumed from Cattell’s theory that thinking and behaviour in novel situations requires the learner to call on what is innate or latent – which might be called potential: an ability to think or act that is not visible until a context demands it. Cattell’s notion of fluid intelligence has been linked to attempts to design assessments of ability that purport to be independent of previous learning and therefore culture free. Many of these assessment artefacts incorporate non-verbal tasks using objects, figures or symbols rather than verbal tasks – it is assumed that linguistic ability is related to crystallised intelligence and therefore verbal tasks should be avoided. Non-verbal tasks are claimed to be culture free and related to fluid intelligence. Research has been undertaken in a Costa Rican context to determine the extent to which a non-verbal assessment of fluid intelligence provides evidence that such intelligence is independent of formerly acquired knowledge and behaviour (see Cliff & Montero, 2010). Early evidence is that students taking a test of fluid intelligence perform better on this test than they do on a test of crystallised intelligence. What is not yet apparent, however, is whether a test of fluid intelligence reduces or eliminates the effects of prior knowledge and learning. It is hypothesised that, if test takers from diverse knowledge and experience backgrounds can be shown to perform similarly on the fluid intelligence test, this similarity of performance suggests the effects of diverse backgrounds to have been ameliorated. Antecedents of dynamic assessment have been strongly linked to the pioneering work of Vygotsky (1978, 1986). A summary of what I believe his impact to have been follows:

• Vygotsky highlighted the impact of cultural capital (or crystallised intelligence) on learning for learners from poorly resourced backgrounds; but
• he also demonstrated that it is possible for such learners to develop beyond what they might ordinarily be expected to achieve;
• his work in culturally diverse settings demonstrated the extent to which the ‘dominant’ culture can lead to the marginalisation of ‘other’ cultures; but
• he also demonstrated that it is possible for ‘other’ cultures to be developed through mediation;
• through the notion of the Zone of Proximal Development (ZPD), he provided a framework for understanding that learning is possible if the teacher (or mediator) correctly assesses the level (or zone) at which the learner can most benefit from intervention;
• he demonstrated that meaningful and carefully sequenced feedback can increase the learner’s awareness of what learning is possible;
• his work significantly impacted on conceptions of intelligence as static by focusing on the dynamic and interactive relationship between teacher and learner. The extent to which learners can develop through mediation and interaction is the key to his theory of teaching.
Vygotskian conceptions of change through dynamic processes of mediation and interaction are the context for a number of significant models of learning based on the notion of dynamic interaction. Modern models of dynamic assessment include Budoff’s Learning Potential Measurement approach (Budoff, 1987); Guthke’s Leipzig Learning Test approach (Guthke, 1993); Carlson and Wiedl’s Testing-the-limits approach (Carlson & Wiedl, 1992); Brown’s Graduated Prompt approach (Brown & Ferrara, 1985); Feuerstein’s Mediated Learning Experience approach (Feuerstein et al., 1979; Feuerstein et al., 1980) and Learning Potential Assessment (Hamers et al., 1993). Feuerstein’s approach has enjoyed international attention as a means of assessing the extent to which learners can change in the presence of meaningful, structured and self-conscious mediation. Poehner (2008) summarises the first three attributes of Feuerstein’s theory of Mediated Learning as being the core aspects of mediation:

(1) intentionality: ‘... the adult’s deliberate efforts to mediate the world, an object in it, or an activity for the child’ (Poehner, 2008: 57);
(2) transcendence: ‘... the goal of the MLE is to bring about the cognitive development required for the child to move beyond the “here-and-now” demands of a given activity’ (Poehner, 2008: 59); and
(3) mediation of meaning: ‘... the significance of objects and actions cannot be intuitively understood by the child but must be mediated to him so that relationships and connections become clear’ (Poehner, 2008: 59).

Sternberg and Grigorenko’s (2002) work on the assessment of learning potential draws a distinction between what they term latent capacity and developed ability. According to these authors – and offering here a very brief summary – developed ability refers to what a learner is able to do as a result of learning, instruction and experience; latent capacity is what a learner might be able to do, given ideal or near-ideal instructional circumstances. An assessment of developed ability measures latent capacity only as it is manifest in task performance; the assessment of latent capacity measures the extent to which learners can change as a consequence of being exposed to deliberate and carefully sequenced instruction. In static or conventional assessment, the emphasis is on the assessment of product based on pre-existing knowledge or ability; in dynamic assessment, the emphasis is on the assessment of the psychological and cognitive processes associated with change and development. Drawing together the preceding discussion of the antecedents of dynamic assessment, the following points are made:

• dynamic assessment is at least in part an assessment of learning potential, since its focus is on the extent to which a learner can demonstrate quantitative or qualitative change as a result of receiving mediated instruction;
• potential is here viewed as the extent to which a learner’s performance can change as a result of mediation and feedback;
• interaction and intervention, through explicit or implicit deliberate and sequenced mediation and feedback, represent the cornerstones of dynamic assessment;
• in terms of Cattell’s concept of fluid intelligence, dynamic assessment represents an attempt to assess the extent to which mediation and feedback enable a learner to respond to novel learning contexts and to call on learning that is relatively unrelated to prior knowledge and experience;
• mediation includes presenting the learner with support and learning ‘cues’ that reduce or downplay the role of prior learning or experience;
• mediation and feedback are aimed at enabling the learner to complete successive, more complex tasks through raising the learner’s awareness of the requirements for success in a preceding task or set of tasks.

A Standardised Test Based on Principles of Mediation and Feedback
The rest of this chapter is devoted to a focus on the design and development of a standardised test incorporating fundamental principles of dynamic assessment. The emphasis here is on the notion of a first assessment of learning potential. In this case, the assessment is of the learning potential of entry-level students wishing to access Higher Education. The focus of the test is on the assessment of such students’ capacity to engage with the typical reading, writing and reasoning demands they will face in Higher Education in the medium of instruction of the institution: on the concept of academic literacy. In the South African Higher Education context, the development of tests of academic literacy has an extensive and credible history, starting from the mid-1980s. For approximately 30 years now, standardised tests – inter alia tests of academic literacy – have been aimed at contributing to widening access to Higher Education for students whose performance on conventional achievement tests (such as the school-leaving examination) is not necessarily a reflection of their capacity to engage with typical Higher Education content and tasks. The pioneering work of Yeld (see, for example, Haeck et al., 1997; Yeld, 2001; Yeld & Haeck, 1997) has contributed to the design of tests of academic potential in, for example, academic literacy and mathematics and to the development of a robust, complex construct of academic literacy based on the work of Bachman and Palmer (1996) and others. Testing and research in South Africa have been based on the use of paper-based tests, but explorations in the cognate fields of computer-assisted and online testing as conducted by, for example, Conole and Warburton (2005), Maier et al. (2016) and Zhang (2017) – and
computer-adaptive testing (for example, Thompson, 2017) – have been referenced so as to ensure alignment with this work and that of others in the latter fields. Operationalised in the form of a set of specifications for a test, the construct of Academic Literacy is summarised in Table 4.1 (see also Weideman, this volume):

Table 4.1 Specifications for a test of academic literacy (skill assessed and explanation of skill area)

Vocabulary: Students’ abilities to derive/work out word meanings from their context.
Metaphorical expression: Students’ abilities to understand and work with metaphor in language. This includes their capacity to perceive connotation, word play, ambiguity, idiomatic expressions and so on.
Extrapolation, application and inferencing: Students’ capacities to draw conclusions and apply insights, either on the basis of what is stated in texts or is implied by these texts.
Understanding the communicative function of sentences: Students’ abilities to ‘see’ how parts of sentences/discourse define other parts; or are examples of ideas; or are supports for arguments; or attempts to persuade.
Understanding relations between parts of text: Students’ capacities to ‘see’ the structure and organisation of discourse and argument, by paying attention – within and between paragraphs in text – to transitions in argument; superordinate and subordinate ideas; introductions and conclusions; logical development; anaphoric and cataphoric referencing.
Understanding text genre: Students’ abilities to perceive ‘audience’ in text and purpose in writing, including an ability to understand text register (formality/informality) and tone (didactic/informative/persuasive/etc.).
Separating the essential from the non-essential: Students’ capacities to ‘see’ main ideas and supporting detail; statements and examples; facts and opinions; propositions and their arguments; being able to classify, categorise and ‘label’.
Understanding information presented visually: Students’ abilities to understand graphs, tables, diagrams, pictures, maps, flowcharts.
Understanding basic numerical concepts: Students’ abilities to make numerical estimations; comparisons; calculate percentages and fractions; make chronological references and sequence events/processes; do basic computations.
The construct or ‘blueprint’ for a test of academic literacy is fully explicated in Yeld (2001) and a detailed explanation is not attempted here. The principal features of the approach to the development of the test are that it is: (1) a generic test, designed to provide complementary information to traditional achievement tests (such as the school-leaving examination); (2) developed by national interdisciplinary teams of expertise, to increase both its face and content validity; (3) relatively curriculum independent, so as to downplay the role of prior exposure to knowledge; (4) designed to assess language as a vehicle for academic study and reasoning rather than language per se; (5) developed according to a theme and a set of specifications, so as to ensure
that engagement for the writers can be mediated using ‘scaffolded’ tasks; made progressively more complex; and be authentic to a Higher Education context. Tests of Academic Literacy based on the specifications in Table 4.1 have been widely used in Higher Education in South Africa; have been demonstrated to be an indication of students’ academic potential (cf. Yeld & Haeck, 1997); have been used as mechanisms for widening access to students from poorly resourced educational backgrounds (cf. the work of the Alternative Admissions Research Project at the University of Cape Town); have been shown to have diagnostic and predictive value (see, for example, Cliff et al., 2007; Visser & Hanslo, 2005); and have been useful as selection and placement mechanisms that yield alternate information about students’ ability and capacity (Cliff & Hanslo, 2009; Wadee & Cliff, 2016). Incorporated into the design of such tests are elements of task mediation (such as text as teaching mechanism; examples of tasks given and explained in the test; tasks being made increasingly complex). What the present chapter attempts is an explication of the use of further kinds of mediation, in particular the use of qualitative feedback to test takers on a selected number of their responses and the use of test questions where an answer is provided to the test taker, together with an explanation of what their possible response to the question might imply about their understanding of the task. Before annotated examples of test questions are provided, the chapter looks at assumptions – based on the theoretical discussion earlier – which underpin the test design approach. The following assumptions are incorporated into its design:

• It is assumed that test takers will have had some exposure to academic reading and writing (through their secondary school experience). This knowledge and practice are assumed to be what they already know and can do.
• It is assumed that all learners (and test takers) have the capacity to move from what they already know and can do to what they might be able to know and do, given a context in which they are supported and their learning is mediated inter alia through sequenced and context-sensitive feedback. This movement is at least in some measure a reflection and an externalisation of ‘learning potential’.
• It is assumed that – even in a static test-taking environment, where there are no opportunities for teacher interaction with test takers – explicit feedback about what is being assessed in the test may enable at least some learners to utilise the feedback in further assessment tasks.
• Test takers will benefit to a lesser or greater extent from the feedback given to them in the test-taking situation, and some learners will be able to utilise the feedback in the completion of subsequent tasks. This assumption is resonant of some of the research of Oxford et al. (2014) in which it is clear that ‘good’ language learners are those that adopt ‘better’ strategies for learning, that include being more context aware, and this means they benefit from context cues in learning.
• Providing feedback raises the levels of awareness of at least some learners (or test takers) and sensitises them to what they are required to do in subsequent tasks.
• The extent to which learners will be able to benefit from feedback is likely to be somewhat related to their educational backgrounds, but the assumption is that feedback will create opportunities for learners (and test takers) to demonstrate evidence of learning that is relatively independent of their educational backgrounds. This evidence of learning is arguably evidence of some potential to learn in new situations that are independent of the test-taking situation.
• It is assumed that the feedback given to test takers in the test-taking situation will be based on the ways they would be likely to have responded to certain test tasks.
• It is also assumed that there are a finite number of qualitatively different ways – or categories of response – possible in certain test tasks. Feedback can provide at least some learners (or test takers) with an awareness that some responses to task are more inclusive/appropriate/complex than others. It is assumed that learners (or test takers) will benefit from this awareness in future test tasks.
• Test items in which feedback about the range of possible responses is provided to test takers in the test situation are assumed to be opportunities for learning and are not awarded marks for summative assessment purposes.
• Items further along in a test can be made cognitively more demanding on the assumption that at least some learners will have benefited from increased levels of task awareness and will then be able to cope with the more demanding subsequent tasks.
• It should be possible to measure the impact of mediated feedback in a test-taking situation by comparing the performance of test takers on such a test with the performance of similar test takers who have not benefited from feedback or by comparing the performance of test takers in a mediated feedback context with their performance in a less mediated feedback context.
Mediation and Feedback in a Standardised Test: Analytical Illustrations
For reasons of brevity, the illustrations below are only excerpts from the test. The extract presented here is the first task of an academic literacy test:
What is Reality TV? Reality TV is the name given to the new genre of programmes that feature ‘real’ people in ‘real’ circumstances and are therefore largely unscripted. The latest generation of reality programmes such as Big Brother and Survivor differ from earlier reality programming. These older programmes followed more of a socio-documentary model and tended to be frumpy and low budget and taught viewers how to paint or cook. Their appeal was limited. However, some of the newer shows, which feature dieting competitions such as The Biggest Loser or talent competitions such as Idols, have mass appeal. The challenge for TV producers is to keep up with viewers’ fickle behaviour, short concentration span and continuous demand for the ‘new’.
The instructions in the test inform test takers that they do not need to draw on any information other than what they read in the text itself. The choice of theme for this test and the examples of Big Brother and Survivor are made by a nationally representative group of test developers, whose judgment is that the theme is accessible to all test takers regardless of background. The assumption in a test of this kind, however, is that test takers wanting to access Higher Education may be faced with texts that are outside their range of experience. Hence, the text contains ‘cues’ in context (such as the definition of Reality TV and the use of local examples of the genre) that give the test taker the opportunity to derive meaning. The two questions that follow below – including the feedback – are the first two questions in the test:

(1) In paragraph 1, which sentence gives a definition of Reality TV? Circle your answer.
Sentence 1   Sentence 2   Sentence 3   Sentence 4

Writers usually introduce topics to readers in a broad or general way initially, sometimes with a definition. In this text, the writer explains clearly what Reality TV is.

(2) On the lines below, and in your own words, give the writer’s definition of Reality TV:

If you included in your answer the points about Reality TV being about real people in real situations with no script, you were right. If, however,
you said that Reality TV was about programmes like Big Brother and Survivor, this is not a full answer because these are examples only. They do not explain fully what Reality TV is. If you said that Reality TV is a new genre of programmes, then your answer also wasn’t complete because you are stating what kinds of programmes they are; not how we define them.
The teaching component in the first sentence, where a definition of Reality TV is provided, and the ‘giveaway’ nature of Question 1 – a question requiring the test taker to separate essential from non-essential information (see Table 4.1) – with its accompanying feedback following the question, are deliberately designed to give the test taker entrée to the text. Question 2 affords the test taker the opportunity simply to re-work what has been read, and the feedback given after the question provides the test taker with a set of interpretations of possible responses to the question. The tacit aim here is to sensitise the test taker to the possibility that some responses are qualitatively more appropriate or inclusive than others. A further illustration follows:

(5) Are South African TV dramas like Muvhango, Generations and Scandal scripted or unscripted?

If you said ‘scripted’, you are right, because in these TV dramas a scriptwriter writes the dialogues which the actors memorise and say.

(6) Explain why Reality TV shows are described as ‘unscripted’.
Question 5 above – a question requiring the test taker to make a simple inference (see Table 4.1) based on the text – involves a transfer from the context of the text to examples that are popular in South Africa (in a different context, appropriate examples would have to be inserted). The feedback given after the question aims to assist the test taker to make the necessary application to a slightly newer context. Question 6 requires the test taker to infer from what has gone before a response to a question requiring an inverse understanding to Question 5. The next unmediated example (Question 10 below) requires the test taker to draw inferences from the context of the text to respond to a question about the popularity of Reality TV shows with younger viewers.

‘The invasion of Reality TV has begun and the craze is set to grow owing to its popularity with young audiences. The speed, low costs of production and high ratings have made the genre popular with both TV
production companies and broadcasters. Programmes such as Survivor and Big Brother have proved to be big winners – so what’s the attraction? Reality TV shows are a cheap alternative to other programming. No scriptwriter is required, nor paid actors or complex sets, and they can rake in the ratings. Compare this with the millions per episode required to produce a programme such as Friends.’

(10) In paragraph 2, the writer says that Reality TV is popular with young audiences. Why do you think Reality TV shows might be popular with young audiences? Give three reasons.
The question requires the test taker to use the process of drawing inferences from what has been read – cued in Questions 5 and 6 – and write a response that extrapolates from text and incorporates their personal experiences of watching Reality TV shows. The following question illustrates mediation and feedback in relation to test-takers’ interpretation of non-literal or analogous language (see Table 4.1 earlier):
Writers often use metaphor to make their writing more interesting and to create ‘pictures’ in the reader’s head of some concept they are trying to explain. One example can be found in paragraph 2, where we read about ‘the invasion of Reality TV’. As you know, an invasion usually refers to a military takeover and often involves the use of force.

(11) What would you say is the writer’s purpose in using the word ‘invasion’ with regard to Reality TV?

If you said that Reality TV programmes took over from other programmes or forced other programmes out through their sheer popularity, you were correct. So, the key is to apply the meanings of the words in one situation to a new one.
In the introductory material above question 11, and in the question itself, the test taker is required to grapple with the notion of metaphorical or analogous language. Feedback given after the question indicates to the test taker how the literal meanings of a word might be carried into less literal contexts. Following from question 11, the test taker is presented with an unmediated question relating to analogous reasoning, which is designed to elicit from her/him processes similar to those used in question 11.
(12) Now say what the writer’s purpose is in using the metaphorical phrase ‘rake in the ratings’ (in paragraph 2).
This question requires the test taker to interpret the metaphor in the phrase given in order to respond to the question by suggesting that ratings are being harvested or piled up. The following example requires a test taker to identify and interpret discourse signals from text.

(14) In paragraph 1, we find an example of linking language. The writer uses the word ‘however’ to signal a change in emphasis from the older programmes that were not as popular to the new programmes that are very popular. Read paragraph 3 again. Find an example of linking language in paragraph 3.

Did you find the word ‘Furthermore’ towards the middle of the paragraph? This word signals the addition of information. In this case, the writer is adding reasons for the popularity of reality TV with producers and audiences.
Question 14 requires the test taker to identify an aspect of academic vocabulary or discourse (‘however’), the meaning and purpose of which are explained in the feedback below the question. Further mediation and feedback are provided in relation to the second example: ‘Furthermore’ in the third paragraph of the text. The test as a whole contains more examples where explicit feedback is provided to the test taker – as well as questions which do not contain feedback other than what the test taker can deduce from context. Two examples of this kind are presented here for the sake of comparison with the four examples given above:

(19) In which paragraph/s does the writer focus on the negative aspects of reality TV?
This is a question requiring the test taker to separate essential from less essential or supporting information to identify the paragraph in the text that provides the gist of the negative aspects of Reality TV. (20) A synonym is a word with a similar meaning to another. Which of the following words are synonyms of the word ‘claim’ in paragraph 3? You may choose more than one synonym. Circle your answer/s.
believe    assert    disagree    demand    state

This question requires the test taker to identify the academic discourse meaning of the word 'claim' in order to select the synonymous words from the list.

Test Data Analysis
The following discussion is based on an analysis of test-taker performance on the test illustrated in the previous section of this chapter. The test-taking sample (n = 465) consisted of South African senior secondary-school students who took the test as part of the process of seeking admission to study programmes at the University of Cape Town. In all cases, these were test takers from educationally disadvantaged school backgrounds whose performance in conventional school achievement examinations would be unlikely to be sufficient grounds for their being offered a study place. The test was accordingly an opportunity for them to show evidence of learning potential that their achievement scores would not necessarily reveal. The focus of the current analysis is not on the predictive capacity of such testing; it is on the extent to which mediated testing could be said to enable test takers to perform well on the test, i.e. to show evidence of capacity to learn.

Figure 4.1 presents the item-by-item scores and the total score of each test taker, ranked from highest to lowest test achievement, for the 69 test items that were scored, on a test with a total possible score of 126. The scores are presented as a heat map (a sketch of how such a display can be generated appears after the list below).

Figure 4.1 Heat map of item-level test-taker performance (rows represent test takers and columns represent test items)

The heat map shows the following overall patterns:
• Darker shading is dominant for those test-takers towards the top of the score ranking, and it is especially dominant in relation to the first 13 items (columns) of the test and again for the latter 27 items. The first 13 items on the test are mediation-dense items, so the design expectation is that test-takers would do well on these items. This seems to hold for those test-takers who are closer to the top of the overall ranking. What also seems to hold is that those test-takers towards the top of the overall ranking also perform well (darker shading towards the top right of the figure) on the latter 27 items of the test. These test-takers appear to have benefited from mediation and have also been able to perform well throughout the test.
• What is less easy to understand is these test-takers' dip in performance towards the middle section of the test (depicted by lighter shading). In test design terms, the middle section of the test is mediation-reduced or mediation-absent, so this could account for the dip in performance. But it does not account for the improved performance towards the latter part of the test.
• This latter-section improvement is better understood when test design is considered: the middle section of the test is both mediation-reduced and devoted to visual and numerical literacy items. The latter section of the test is mediation-reduced, but deals with text-based literacy – as was the case in the mediation-dense first section of the test. For test-takers whose overall test performance is towards the top of the ranking, the mediation-dense, text-based literacy items in the first section of the test may have contributed to their performance in the latter section of the test. However, this mediation did not enable these test-takers to perform well on visual and numerical literacy items. This suggests the need for very targeted, context-specific, cognate mediation if overall improved performance is to be achieved.
• Many test-takers appear to have benefited from the mediation-dense items on the test – even a number of those whose overall test performance is relatively poor. What is not yet clear from the data is why these test-takers could not capitalise on the mediation in subsequent sections of the test. Mediation appears beneficial, but only for those test-takers who recognise the relation between the mediation and the subsequent unmediated tasks.
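The heat-map display itself is straightforward to reproduce with standard tools. The sketch below is purely illustrative and is not the analysis used in the study: it assumes a hypothetical, numeric-only comma-separated file, scores.csv, holding one row of item scores per test taker; the file name and column layout are assumptions.

```python
# Illustrative sketch only: build a Figure 4.1-style heat map from a
# hypothetical CSV of item-level scores (one row per test taker, no header).
import numpy as np
import matplotlib.pyplot as plt

scores = np.loadtxt("scores.csv", delimiter=",")   # shape: (test takers, items)

# Rank test takers from highest to lowest total score, as in Figure 4.1.
ranked = scores[np.argsort(scores.sum(axis=1))[::-1]]

fig, ax = plt.subplots(figsize=(10, 6))
im = ax.imshow(ranked, aspect="auto", cmap="Greys")   # darker = higher item score
ax.set_xlabel("Test items (in test order)")
ax.set_ylabel("Test takers (ranked by total score)")
fig.colorbar(im, ax=ax, label="Item score")
fig.savefig("heatmap.png", dpi=150)
```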
Discussion
Qualitative differences in how test takers respond to tasks are assessed by graders of the test working from a scale that 'rewards' more inclusive/accurate/appropriate responses with more marks than less inclusive, accurate or appropriate responses, using a partial-credit assessment model. In line with the assumptions about and underpinnings of this approach to testing, it is assumed that test takers who have benefited from the 'cues' and the qualitative feedback from time to time during the test will have been able to produce more comprehensive responses to unmediated questions making similar or more complex cognitive demands on the test taker than the questions with feedback attached. Questions containing feedback do not contribute to an overall score.

As indicated in the title and argument of this chapter, the primary emphasis here has been on the conceptual underpinnings of and design considerations for the development of a standardised test of academic literacy. However, it should be noted that previous research studies at the Universities of Cape Town and Costa Rica (Cliff & Hanslo, 2009; Cliff & Montero, 2010) have aimed to address the extent to which results on such standardised tests, using principles of mediation and feedback, might be useful in identifying test-takers' potential to learn. Expressed differently, the question becomes: are tests successfully able to surface evidence of learning development when they are explicitly designed to enable test takers to demonstrate such development that might not otherwise be visible in performance on conventional tests of achievement? And can such tests contribute to reducing the 'gap' between 'crystallised intelligence' (developed learning ability) and 'fluid intelligence' (learning ability demonstrated on the basis of responses to tasks where the impact of prior learning has been reduced or compensated for)?

In the Cape Town studies (Cliff & Hanslo, 2009), results on such tests have been shown to explain different amounts of variation in subsequent academic achievement for students from educationally well-resourced backgrounds (high levels of developed learning ability) as opposed to those from educationally poorly resourced backgrounds (lower levels of developed learning ability). For students from well-resourced backgrounds, developed learning ability (in conventional tests of achievement such as the school-leaving examination) contributes significantly to variation in subsequent academic achievement and supersedes evidence of learning ability in tests of learning potential. For students from poorly resourced backgrounds, however, performance on learning potential tests (such as the current academic literacy test) contributes significantly to variation in subsequent academic performance, alongside performance in conventional achievement tests. There appear to be encouraging signs that learning ability has been
surfaced by these tests and that this ability is influential in subsequent academic achievement. In the Costa Rican studies (Cliff & Montero, 2010), results in achievement tests based on high levels of developed learning ability favoured test takers from well-resourced educational backgrounds and the test achievement 'gap' between these test takers and those from poorly resourced educational backgrounds was significant. However, when these two groups of test takers took tests constructed to reduce the role played by developed learning ability, the achievement gap between test takers from well-resourced backgrounds and those from poorly resourced backgrounds was significantly narrowed; indeed, the latter group of test takers performed as well as – or nearly as well as – those from well-resourced educational backgrounds. Results from the Costa Rican studies were based on the implementation of a test of Reasoning with Figures, i.e. a test in which the role played by language (apart from the test instructions) is reduced. Test takers demonstrate the ability to learn by responding to pattern-recognition in the figures instead of to language prompts. Mediation is provided through test takers completing practice examples and being given feedback on these.
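The kind of comparison reported in the Cape Town studies – how much additional variation in subsequent academic achievement a learning-potential measure explains over and above conventional achievement – can be illustrated with two nested regression models. The sketch below uses simulated data and invented variable names; it is not the authors' analysis.

```python
# Illustrative only (simulated data): incremental variance explained by a
# learning-potential (mediated) test score over conventional achievement.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
school_mark = rng.normal(60, 10, n)      # conventional achievement (e.g. school-leaving average)
potential = rng.normal(55, 12, n)        # mediated academic literacy test score
first_year = 0.4 * school_mark + 0.3 * potential + rng.normal(0, 8, n)

base = sm.OLS(first_year, sm.add_constant(school_mark)).fit()
full = sm.OLS(first_year, sm.add_constant(np.column_stack([school_mark, potential]))).fit()

print(f"R-squared, school marks only:           {base.rsquared:.2f}")
print(f"R-squared, school marks plus potential: {full.rsquared:.2f}")
# The difference between the two R-squared values is the incremental variance
# attributable to the learning-potential measure.
```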
Concluding Comments

This chapter has attempted to identify the principal theoretical formulations of the 20th century that have provided a context for the development of dynamic forms of assessment. The argument for dynamic assessment as a form of assessment of potential was made in order to give a framework for a discussion of the design and development features of a standardised test artefact which attempts to incorporate the features of dynamic assessment. The central challenge of incorporating features of dynamic assessment into a static test environment lies in providing test feedback that approximates the role of a mediator. Illustrations of this model were presented and discussed. Early evidence from South African and Costa Rican research studies raises promising possibilities for the use of mediation and feedback as ways of enabling the emergence of learning potential, both in language-dense (South African) and language-reduced (Costa Rica) testing contexts. The presentation of the current chapter represents a conceptual attempt to demonstrate the possible value of mediation and feedback in a test designed to assess test-takers' ability to cope with the contextualised academic literacy demands of higher education reading and writing. Both the South African and Costa Rican sites have as a primary goal the identification of learning potential against the background of highly unequal forms of preparation for higher education studies – where the expectation is that poorly resourced educational backgrounds are highly likely to result in talented students
from those backgrounds being unable to access the content and context of learning. A further conceptual issue which the current chapter raises is the extent to which mediation and feedback in a test-taking situation can be said to enable the emergence of Cattell's fluid intelligence or Feuerstein's mediated response to task demands. A language-dense test such as the one that is the primary focus of the current chapter might arguably be said to require test takers to marshal developed learning capacities rather than ones that emerge as a consequence of test tasks. But the counterargument to this could well be that test cues ameliorate the effects of these developed capacities – and the test is aimed at making a judgment about how the test taker can respond to (academic literacy) tasks that are the domain of authentic higher education contexts. Potential is here judged as the ability to respond to the reading and writing demands of these authentic higher education contexts. The role and outcomes of a language-reduced test alert us to the demands that are placed on students accessing higher education contexts where the language of instruction is not necessarily the same as the first language of these students – but language-reduced tests are arguably not reflective of authentic higher education engagements.

Acknowledgement
The author acknowledges with thanks the statistical work of Andrew Deacon from the Centre for Innovation in Learning and Teaching (CILT) at the University of Cape Town in the development of Figure 4.1.

References

Bachman, L.F. and Palmer, A.S. (1996) Language Testing in Practice. New York: Oxford University Press.
Binet, A. and Simon, T. (1905) New methods for the diagnosis of the intellectual level of subnormals. L'Année Psychologique 11, 191–336.
Brown, A. and Ferrara, R.A. (1985) Diagnosing zones of proximal development. In J.V. Wertsch (ed.) Culture, Communication and Cognition: Vygotskian Perspectives. Cambridge: Cambridge University Press.
Budoff, M. (1987) The validity of learning potential assessment. In C.S. Lidz (ed.) Dynamic Assessment: An Interactive Approach to Evaluating Learning Potential. New York: Guilford.
Carlson, J.S. and Weidl, K.H. (1992) Principles of dynamic assessment: The application of a specific model. Learning and Individual Differences 4, 153–166.
Cattell, R.B. (1940) A culture-free intelligence test. I. Journal of Educational Psychology 31, 161–180.
Cattell, R.B. (1971) Abilities: Their Structure, Growth and Action. Boston: Houghton Mifflin.
Cliff, A., Ramaboa, K. and Pearce, C. (2007) The assessment of entry-level students' academic literacy: Does it matter? Ensovoort 11, 33–48.
Cliff, A. and Hanslo, M. (2009) The design and use of 'alternate' assessments of academic literacy as selection mechanisms in Higher Education. Southern African Linguistics and Applied Language Studies 27 (3), 265–276.
Cliff, A. and Montero, E. (2010) The balance between excellence and equity in an admission test: Contributions of experiences in South Africa and Costa Rica. Ibero-American Journal of Educational Evaluation 3 (2), 8–28.
Conole, G. and Warburton, B. (2005) A review of computer-assisted assessment. Research in Learning Technology 13 (1), 17–31.
Feuerstein, R., Rand, Y. and Hoffman, M.B. (1979) The Dynamic Assessment of Retarded Performers: The Learning Potential Assessment Device, Theory, Instruments, and Techniques. Baltimore: University Park Press.
Feuerstein, R., Rand, Y., Hoffman, M.B. and Miller, R. (1980) Instrumental Enrichment. Baltimore: University Park Press.
Guthke, J. (1993) Current trends in theories and testing of intelligence. In J.H.M. Hamers, K. Sijtsma and A.J.J.M. Ruijssenaars (eds) Learning Potential Assessment: Theoretical, Methodological and Practical Issues. Amsterdam: Swets and Zeitlinger.
Haeck, W., Yeld, N., Conradie, J., Robertson, N. and Shall, A. (1997) A developmental approach to Mathematics testing for university admissions and course placement. Educational Studies in Mathematics 33 (1), 71–91.
Hamers, J.H.M., Sijtsma, K. and Ruijssenaars, A.J.J.M. (eds) (1993) Learning Potential Assessment: Theoretical, Methodological and Practical Issues. Amsterdam: Swets and Zeitlinger.
Maier, U., Wolf, N. and Randler, C. (2016) Effects of a computer-assisted formative assessment intervention based on multiple-tier diagnostic items and different feedback types. Computers & Education 95 (April), 85–98.
Oxford, R.L., Griffiths, C., Longhini, A., Cohen, A.D., Macaro, E. and Harris, V. (2014) Experts' personal metaphors and similes about language learning strategies. System 43 (1), 11–29.
Poehner, M.E. (2008) Dynamic Assessment: A Vygotskian Approach to Understanding and Promoting L2 Development. New York: Springer.
Spearman, C. (1927) The Abilities of Man. London: Macmillan.
Sternberg, R.J. and Grigorenko, E.L. (2002) Dynamic Testing: The Nature and Measurement of Learning Potential. Cambridge: Cambridge University Press.
Thompson, G. (2017) Computer adaptive testing, big data and algorithmic approaches to education. British Journal of Sociology of Education 38 (6), 827–840.
Visser, A. and Hanslo, M. (2005) Approaches to predictive studies: Possibilities and challenges. South African Journal of Higher Education 19 (6), 1160–1176.
Vygotsky, L. (1978) Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
Vygotsky, L.S. (1986) Thought and Language. Cambridge, MA: MIT Press.
Wadee, A.A. and Cliff, A. (2016) Pre-admission tests of learning potential as predictors of academic success of first-year medical students. South African Journal of Higher Education 30 (2), 264–278.
Weideman, A. (2021) A skills-neutral approach to academic literacy assessment. [In this volume]
Yeld, N. (2001) Equity, assessment and language of learning: Key issues for Higher Education selection and access in South Africa. Unpublished PhD thesis, University of Cape Town.
Yeld, N. and Haeck, W. (1997) Educational histories and academic potential: Can tests deliver? Assessment and Evaluation in Higher Education 22 (1), 5–16.
Zhang, Z. (2017) Student engagement with computer-generated feedback: A case study. ELT Journal 71 (3), 317–328.
5 Basic Education and Academic Literacy: Conflicting Constructs in the South African National Senior Certificate (NSC) Language Examination Colleen du Plessis
Elusive Equitability of Learning Opportunity
School-leaving examination papers are typically used to fulfil a gatekeeper function. Each year the results of exit-level examinations are employed for placement and access purposes at tertiary institutions, a practice that may be deemed fair and legitimate where credible assessment protocols are adhered to and the quality of instruction provided can be considered comparable. However, research shows that these last two aspects are particularly contentious in the South African context. Equitability of learning opportunity is definitely lacking (Balfour, 2015; Chisholm, 2005; du Plessis & du Plessis, 2015; Modisaotsile, 2012; Solidarity Research Institute, 2015; Spaull, 2012), and the annual Grade 12 results of the national school-leaving examination are viewed with suspicion (John, 2012; Moodley, 2014; Parker, 2012; Weideman et al., 2017). The introduction of placement or admission tests such as the Test of Academic Literacy Levels (TALL) and National Benchmark Tests (NBTs) at South African universities (CETAP, 2012) affirms the lack of confidence placed in the standard and examination results of the National Senior Certificate (NSC) school-leaving qualification. In light of the above, other measures of knowledge and ability have to be used when admitting students to tertiary study,
even if candidates hail from well-resourced schools with reasonably high standards of education. The reasons for the discrepancy between school-leaving examination results and actual levels of ability can be attributed in part to conflicting constructs in the government school curriculum, and in the case of this chapter, the school language subjects in particular. Language subjects offered at what is usually called first language level (L1), but referred to as Home Language (HL) level in South Africa, and second language level (L2), referred to as First Additional Language (FAL) level, are expected to be of an appropriately high standard to equip students to use language effectively in higher education contexts. This is particularly relevant in respect of the learning of English, since this is the dominant language of learning and teaching (LOLT) at both secondary school and tertiary level in South Africa. However, as commendable as the objectives of the government school curriculum may be, it is the Grade 12 school-leaving language examination that provides us with tangible evidence of actual standards and literacy levels attained through language instruction at secondary school. In this sense, language assessment serves as an important quality control mechanism, and the quality of the instruments used to measure language learning and level of ability – in the case of the present discussion, the L1 language examination papers – is thus just as important as that of the language education provided.

The effects of the basic school education provided to students are manifest in the high drop-out rates at tertiary institutions (see also Myburgh-Smit & Weideman, this volume). The first year of study appears to be the most precarious: around 50% of first-year university students in South Africa discontinue their studies (Wilson-Strydom, 2015). Although these students represent different races and cohorts, Black and Coloured students are particularly vulnerable. Under the National Party regime during the 1950 to mid-1980s period there was a gradual increase in the number of bachelor and certificate qualifications for Black and Coloured students. Since then the situation has deteriorated to the extent that the number of bachelor and certificate qualifications for this group of students is now reported to be lower than what it was in the 1950s (Statistics SA, 2016: 13). This dismal situation is attributed in part to the 'poor educational foundations received throughout the schooling system, but especially at primary school level, as well as the affordability of post-secondary education' (Statistics SA, 2016: 13).

Despite the enormous costs of tertiary education, plans for free education are being tabled and enrolments at universities are scheduled to grow annually by about 1.9% to reach 'a projected Gross Enrolment Ratio (GER) / participation rate of 21.2%' and a total headcount of 1,087,281 (Department of Higher Education and Training, 2016: 6) (see Table 5.1). As evident from the enrolment figures, universities and institutions of higher learning are struggling to meet the required number of
enrolments, let alone the throughput rates envisaged by government. On top of that, we should keep in mind that around half of all students are expected not to graduate from university (Lewin & Mawoyo, 2014: 9).

Table 5.1 Enrolment planning at South African universities (2009–2020) (Department of Higher Education and Training, 2016: 7)

Ministerial statement on student enrolment planning 2009–2013 (actual enrolments):
  2009: 837,776   2010: 892,936   2011: 938,201   2012: 953,373   2013: 983,698

Ministerial statement on student enrolment planning 2014/15–2019/20:
  2014 actual enrolment: 969,154
  2014 projected target: 1,002,110
  Deviation from 2014 target: –32,956 (–3.3%)
  Projected target for 2019/2020 (planned): 1,087,281

By contrast, under the newly introduced common school curriculum more students are passing their matriculation year than in 2008, but this does not mean that the quality of school education is improving. Although a downward trend is evident in the above table, the Department of Basic Education interprets matters differently. In the message of the Minister of Basic Education, Angie Motshekga, at the time of the release of the 2015 NSC results, the following was stated:

The Class of 2015 is the largest cohort of Grade 12 learners to participate in public examinations in South Africa, since the inception of the democratic dispensation. The significant increase of 117 798 candidates in the 2015 enrolment confirms a higher throughput rate of learners, a challenge which the sector has been dealing with during the last few years. The increase in the number of learners achieving the NSC, from 403 874 in 2014 to 455 825 in 2015, (an increase of 51 951 learners), attests to an improved efficiency of the system. The increase in the number of learners qualifying for admission to Bachelor Studies, from 150 752 to 166 263, is also extremely encouraging and points to quality improvements in the system. (Department of Basic Education, 2015b: 3)
It is unclear how an increase in the number of candidates can be interpreted as evidence of improved quality of education. Similarly, throughput improvement can only be claimed by comparing the number of those who finish with the number of those who started school 12 years earlier. The downward trend in pass rates from 2013 onwards that we see in Figure 5.1 does not bode well and is unlikely to be related to higher standards of education or increased difficulty of NSC examination papers.

Figure 5.1 Annual NSC pass rate from 2008–2015 (Department of Basic Education, 2015a: 42)

Inadequate school preparation for post-school education and weak language abilities are cited as major contributing factors to the high proportion of students who drop out of tertiary study or fail to graduate (Gumede, 2017; Letseka & Pitsoe, 2014; McKay, 2016). In light thereof, the objective of this chapter is to examine the complicity of the school language curriculum and assessment protocol in not equipping students for tertiary study, although other factors beyond the scope of this discussion also play a role (cf. Naidoo et al., 2014; Wilson-Strydom, 2015).
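The distinction drawn here between an examination pass rate and a cohort throughput rate can be made concrete with a small, purely hypothetical calculation; the number of candidates who wrote and the size of the Grade 1 cohort below are invented, and only the 2015 pass count is taken from the ministerial statement quoted above.

```python
# Hypothetical illustration of pass rate versus cohort throughput rate.
nsc_passes_2015 = 455_825        # learners achieving the NSC in 2015 (from the quotation above)
wrote_exam_2015 = 650_000        # hypothetical number of candidates who wrote the examination
grade1_cohort_2003 = 1_200_000   # hypothetical Grade 1 enrolment twelve years earlier

pass_rate = nsc_passes_2015 / wrote_exam_2015          # what the annual announcement reports
throughput = nsc_passes_2015 / grade1_cohort_2003      # what the chapter argues should be considered

print(f"Pass rate:       {pass_rate:.1%}")    # about 70%
print(f"Throughput rate: {throughput:.1%}")   # about 38%
```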
A Myriad of Curriculum Changes and Varying Instructional Practices

By amalgamating 18 disparate education departments into one national department of education and introducing a common curriculum, the current South African government has attempted to provide school students with equitable educational opportunity. In the previous political dispensation there were separate syllabuses and examinations for school subjects. However, despite the common curriculum, there are still varying practices in the teaching and assessment of the language subjects:

… different communities of practice have evolved with different assumptions, not only about standards, but also about the purposes of language teaching and assessment. (Umalusi, 2012a: 12)
Although the Bantu languages used a common syllabus prior to 1989, shortly before the transition to democracy separate syllabuses were developed for most of these languages (Umalusi, 2012a: 13). Standardising the languages was a core issue at the time and in fact remains a controversial matter (Webb, 2008). While much corpus development has taken place,
acceptance of the standardised languages seems to be problematic. Also, educators are not necessarily communicatively competent in the standardised varieties, since there are rural and urban varieties in use. Apart from the late standardisation of the indigenous languages by Provincial Language Committees (PLCs) for each province in conjunction with lexicography development units and university language departments, different teaching methodologies have applied in schools. In a report published by the Council for Quality Assurance in General and Further Education and Training (Umalusi), it is stated that greater emphasis has been accorded to structuralism and the formal teaching of grammar in the Afrikaans and Bantu language classrooms, while the English curriculum, which has drawn heavily on the British system, has devoted more attention to aspects of critical literacy (Umalusi, 2012a). The introduction of a common outcomes-based curriculum in 1997 for all the official languages, the Senior Certificate (SC) (NATED 550 curricula; see Fiske & Ladd, 2004; Umalusi, 2012a), was an attempt to eliminate some of the disparities and provide common standards for language teaching and assessment. Further attempts were made to strengthen the school curriculum in 2001 with the introduction of the Revised National Curriculum Statement (RNCS) in terms of which all languages would henceforth be offered on both Standard and Higher Grade. Greater emphasis was once again placed on communicative competence, and learners were expected to study the same number of prescribed works in each language. At this stage examination papers were set provincially. Besides the common curriculum and assessment standards, school-based continuous assessment (CASS) was introduced in a further attempt to establish equity and balance in the assessment component, as opposed to relying solely on the results which learners obtained in exit-level examinations. However, huge discrepancies have been reported between the marks awarded as part of CASS and the summative Grade 12 examination marks (Mncwago, 2015). The SC was replaced in 2008 by the outcomes-based National Senior Certificate (NSC) exit-level qualification with its curriculum comprising the National Curriculum Statement (NCS) (Umalusi, 2010: 11). For the first time a common Grade 12 examination was set nationally, although the literature paper continued to be set provincially until 2009 (Umalusi, 2012a: 13). A further notable change was the abolition of separate Standard and Higher Grade subjects. Future examination papers would have to include tasks that would distinguish higher achieving students from those performing on lower levels. To enable this, the Subject Assessment Guidelines (SAGs) specified the cognitive abilities that were to be assessed on different levels and the kinds of questions that should be asked. Despite all the recurriculation initiatives, the standard of the L1 programme and exit-level examination remains questionable, as we will see.
One of the main tasks of the Council for Quality Assurance in General and Further Education and Training (Umalusi – a name of Nguni origin that refers to a 'guardian of assets') is to ensure the quality of educational assessments under its jurisdiction, which naturally includes the Grade 12 exit-level examinations. Since the establishment of this statutory body in 2001, considerable time and resources have been invested in quality assurance and research studies related to improving the standards of the curricula and the respective examination papers. Notwithstanding these efforts, there are still discrepancies in standards between the various language examinations, and the sets of scores obtained for these are not comparable across all languages and years, even though the language papers are based on the same subject assessment guidelines (du Plessis et al., 2016). For example, on average learners who offer English and Afrikaans at HL level score lower marks than those who offer other languages at this level. Varying degrees of difficulty and levels of cognitive demand in the examination papers have been cited as reasons for some of the disparities, as well as uncertainty about whether the same constructs are being measured (Umalusi, 2012a).

The variation in averages is not the only point of concern. Compared to non-language school subjects, the average percentage of students who pass the HL subjects is exceptionally high. The following table reflects the average pass rates obtained per HL over the period 2009–2015 (Department of Basic Education, 2010, 2012, 2015a):

Table 5.2 Average percentage of students achieving a pass of 40% or more per L1 (2009–2015)

Language    2009   2010   2011   2012   2013   2014   2015   Seven-year average
English     93.2   92.8   94.0   95.3   96.8   95.1   93.8   94.4
Afrikaans   94.5   97.2   98.1   98.3   97.9   96.9   97.3   97.2
Sotho       97.5   99.0   99.3   99.7   99.7   99.5   99.4   99.2
Zulu        98.6   99.1   99.4   99.4   99.7   99.4   99.4   99.3
Pedi        98.5   99.3   99.1   99.6   99.6   99.3   99.4   99.3
Tsonga      98.9   99.1   99.3   99.2   99.5   99.5   99.5   99.3
Swati       98.8   99.2   99.4   99.3   99.3   99.6   99.4   99.3
Setswana    99.3   99.4   99.4   99.7   99.7   99.8   99.6   99.6
Xhosa       99.7   99.7   99.8   99.9   99.9   99.8   99.6   99.8
Ndebele     99.8   99.8   99.9   99.9   99.9   99.9   99.8   99.9
Venda       99.8   99.8   99.9   99.9   100.0  100.0  99.9   99.9
Average     98.1   98.6   98.9   99.1   99.3   99.0   98.8   98.8
The results reflected in Table 5.2 suggest that the English and Afrikaans language papers are more challenging than the rest.

Table 5.3 Average national NSC pass rate per key subject 2009–2015 (compiled from information provided by Department of Basic Education, 2012, 2015a)

Subject                 2009   2010   2011   2012   2013   2014   2015   Ave
Mathematics             46.0   47.4   46.3   54.0   59.1   53.5   49.1   50.8
Physical Sciences       36.8   47.8   53.4   61.3   67.4   61.5   58.6   55.3
Accounting              61.5   62.8   61.6   65.6   65.7   68.0   59.6   63.5
Economics               71.6   75.2   64.0   72.8   73.9   68.9   68.2   70.7
Agricultural Sciences   51.7   62.6   71.3   73.7   80.7   82.6   76.9   71.4
Life Sciences           65.5   74.6   73.2   69.5   73.7   73.8   70.4   71.5
Geography               72.3   69.2   70.0   75.8   80.0   81.3   77.0   75.1
Business Studies        71.9   71.1   78.6   77.4   81.9   77.9   75.7   76.4
History                 72.2   75.8   75.9   86.0   87.1   86.3   84.0   81.0
Mathematical Literacy   74.7   86.0   85.9   87.4   87.1   84.1   71.4   82.4
English FAL             92.7   94.5   96.2   97.8   98.8   82.8   97.1   94.3

It is
always difficult to compare performance across different language papers, but the exceptionally high pass rates for the language subjects in general are particularly conspicuous when contrasted with the average pass rates in other school subjects. The average obtained for English L2 (First Additional Language) is commensurate with that of the English and Afrikaans L1 results. In Table 5.3 , the term ‘key subjects’ refers to those school subjects with the highest numbers of students, i.e. the most popular subjects (Department of Basic Education, 2015a: 19). Average scores obtained for language subjects may vary considerably from those of STEM subjects (Biology, Chemistry, Physics, Mathematics), but not to the degree reflected in Table 5.3. A variation of as much as 44% between some subjects is unacceptably high and suggests too basic a level of language learning that does not support academic literacy in other subjects. What makes matters worse is that the pass rates per key subject are based on students who attained 30% or more in the mentioned subjects. The required pass percentages for language subjects (only 40% for L1 and 30% for L2) run counter to the notion of academic literacy. Such low-slung pass requirements cannot serve as credible indications of language ability or mastery of subject content. From the preceding discussion we can see that changes to the school curriculum have not been sufficient to improve students’ academic literacy levels. Other initiatives are necessary. Although only a small proportion of school students will progress to tertiary study, the objective of the NSC remains to enable learners to advance as far as possible in all areas of learning, and not only to achieve a basic functional level of knowledge and ability.
Basic and Higher Education: Opposite Poles on the Learning Continuum
The NCS Curriculum and Assessment Policy Statement (CAPS) for both first and second language subjects is aimed at preparing learners for tertiary study. It is premised on the principles of social transformation and the redress of educational imbalances, facilitating engaging and critical learning and the attainment of ‘high knowledge and high skills’ in all school subjects (Department of Basic Education, 2011a: 4–5). Specific objectives include the ability to identify and solve problems in creative ways, and the ability to organise and evaluate information. The emphasis on critical thinking and analytical problem solving is a defining feature of academic literacy in higher education contexts (Patterson & Weideman, 2013; Sebolai, 2016). Notwithstanding the foregrounding of such abilities in the prescribed school curriculum, there is little evidence of higher order and critical thinking in the NSC language examination papers. In a report on the standards of the L1 papers (Umalusi, 2012a), weightings of cognitive demand were found to vary considerably across the different language papers and some papers were deemed to be too easy. However, owing to the subjective nature of the evaluation, inconsistencies with the way the taxonomy was applied and the absence of statistical data, it was impossible to compare the standards across languages. Since then a number of studies have pointed out deficiencies in the language curriculum and its associated assessment protocol (du Plessis, 2014; McCusker, 2014; Weideman et al., 2017). There appears to be an inherent contradiction between the notions of basic education and academic literacy at school level. The former presupposes a lower level of learning and literacy more attuned to what Cummins refers to as ‘basic interpersonal communicative skills (BICS)’ (Cummins, 2000: 58) or the kind of language used for general conversational purposes. Academic literacy, on the other hand, presupposes the ability to negotiate academic discourse with very specific lexical and structural demands. Clearly, the policy objective was not to limit students to a basic language education; this may be an unintended consequence of the implementation of a ‘one-shoe-fits-all’ approach to teaching and learning. The earlier Senior Certificate (SC) curriculum used a system of higher and standard grade to differentiate between students who planned to go to university and those who preferred to take a technical, vocational or alternative route. Similarly to the SC, school-leaving qualifications offered by Cambridge International Examinations (CIE) and the International Baccalaureate (IB) differentiate between candidates of different aptitudes and interests (for further information: http://www.ibo.org and http://www. cie.org.uk). These are both qualifications that are recognised as credible international education programmes of long standing. They are operational in more than 125 countries, including a number of African countries
that share with South Africa common contextual challenges. The IB offers both Standard Level and Higher Level, while the CIE distinguishes between what is referred to as 'the AS Level and the A Level' (Umalusi, 2010: 6). Cambridge also offers what is called a Pre-University curriculum (Pre-U), in addition to the A and AS levels in the advanced course component. It would thus seem that even though the A and AS levels are aimed at preparing learners for university study, there is a need for additional differentiation in terms of learning content and needs for certain degree courses, and these are catered for through the Pre-U curriculum.

In a benchmarking study commissioned by Higher Education South Africa (HESA) to set admission requirements for candidates who had obtained international qualifications from the above-mentioned institutions and who wished to study at a South African university, the research team found that the qualifications were not comparable to the NSC owing to the divergent nature and foci of the curricula. Nonetheless, the NSC, CIE A-level and the qualifications offered by the IB were all deemed adequate for admission purposes to higher education (Umalusi, 2010: 165). Further to this, since English at L2 level formed part of the investigation, but not English as L1, it can be inferred that in the opinion of HESA (now Universities South Africa, USAf) English First Additional Language (FAL) is sufficient for the purposes of preparing for a tertiary education in South Africa. However, research studies such as those done by the Alternative Admissions Research Project (AARP) show that the standard of English FAL is not adequate for university purposes (Cliff & Hanslo, 2009; Fleisch et al., 2015). The need to make provision in the NSC for students pursuing different post-school avenues is further borne out by studies at a number of universities showing that it is not only English L2 learners who battle with the language demands of higher education, but English and Afrikaans L1 learners as well (Van Dyk et al., 2007; Rambiritch, 2012). While benchmarking practices such as the above can be useful, it is clear from the HESA study that supplementary data are needed to provide a fuller picture of educational standards for the purposes of informing admission practices at tertiary institutions. Evidence-based reflection on the kind of information that the NSC qualification provides is possible through close scrutiny of curricula and examination constructs.

Examining the NSC L1 Examination: What Does the School Language Mark Represent?
Apart from the general aims, the NSC Curriculum and Assessment Policy Statement (CAPS) identifies the following specific goals for the learning of languages. Learning a language should enable learners to:
• acquire the language skills required for academic learning across the curriculum;
• listen, speak, read/view and write/present the language with confidence and enjoyment. These skills and attitudes form the basis for lifelong learning;
• use language appropriately, taking into account audience, purpose and context;
• express and justify, orally and in writing, their own ideas, views and emotions confidently in order to become independent and analytical thinkers;
• use language and their imagination to find out more about themselves and the world around them. This will enable them to express their experiences and findings about the world orally and in writing;
• use language to access and manage information for learning across the curriculum and in a wide range of other contexts. Information literacy is a vital skill in the 'information age' and forms the basis for life-long learning; and
• use language as a means for critical and creative thinking; for expressing their opinions on ethical issues and values; for interacting critically with a wide range of texts; for challenging the perspectives, values and power relations embedded in texts; and for reading texts for various purposes, such as enjoyment, research and critique. (Department of Basic Education, 2011a: 9)

Based on the objectives of the curriculum and a close study of learning outcomes, teaching approaches and content, the superordinate construct of the L1 language examination (for a full discussion see du Plessis, 2017) has been conceptualised as aiming at:

… the assessment of a differentiated language ability in a number of discourse types involving typically different texts, and a generic ability incorporating task-based functional and formal aspects of language. (du Plessis et al., 2013: 20)
Since the L1 language examination is considered to be the highest level of assessment at school level, the next section looks at how the above construct has been articulated in a selection of English L1 and Afrikaans L1 examination papers. English and Afrikaans are also the two dominant languages of learning and teaching at school and university. Table 5.4 provides the blueprint for calculating the overall L1 language examination score (a worked illustration of how the components combine follows the table). More than one third of the final mark (37.5%) derives from school-based assessment, which by Umalusi's own admission cannot be trusted (Umalusi, 2012b). Moreover, the assessment of other components of the examination is also potentially subjective and unreliable, as validation studies on the English language papers have pointed out (du Plessis, 2014; du Plessis & Weideman, 2014; du Plessis et al., 2016).
Table 5.4 Sub-sections of the L1 exit-level examination (Department of Basic Education, 2011a)

Paper 1: Language in context (Section A: Reading comprehension, 30; Section B: Summary writing, 10; Section C: Language structures and conventions, 30) – 70 marks, 17.5% of the final pass mark
Paper 2: Literature (Section A: Poetry; Section B: Novel; Section C: Drama) – 80 marks, 20% of the final pass mark
Paper 3: Writing (Section A: Essay, 50 marks; Section B and/or C: Transactional texts, 50 marks) – 100 marks, 25% of the final pass mark
Paper 4: Oral (oral tasks assessed during the course of the year by teachers) – 50 marks, 12.5% of the final pass mark
School-based continuous assessment (tests, examinations and assignments graded by teachers) – 100 marks, 25% of the final pass mark
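As a purely illustrative sketch of how the Table 5.4 weightings combine into a final mark, the following calculation aggregates invented component scores; the marks shown are hypothetical.

```python
# Hypothetical illustration of how the Table 5.4 components combine into a final mark.
components = {
    # component: (marks obtained, marks available)
    "Paper 1: Language in context": (48, 70),
    "Paper 2: Literature": (52, 80),
    "Paper 3: Writing": (61, 100),
    "Paper 4: Oral": (40, 50),
    "School-based continuous assessment": (72, 100),
}

obtained = sum(got for got, _ in components.values())
available = sum(total for _, total in components.values())   # 400 marks in all
final_mark = 100 * obtained / available

# Paper 4 and continuous assessment are teacher-assessed: 150 of the 400 marks.
school_based_share = (50 + 100) / available
print(f"Final mark: {final_mark:.1f}%, school-based share of the weighting: {school_based_share:.1%}")
```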
The following section compares the main constructs and sub-abilities assessed in the English and Afrikaans L1 papers, based on the findings of two separate research studies, and the degree to which examination scores display reliability or consistency of measurement (cf. Green, 2014: 63) as indicators of language mastery and academic literacy. The oral and literature components (examination papers 2 and 4 in Table 5.4) fall beyond the scope of this analysis. It should, however, be noted that the literature curriculum requires very little reading of set work compared to the prescribed texts in the IB and CIE. The main distinguishing feature between L1 and L2 in the NSC is that L1 students are required to study one novel, a drama and a selection of poems, while the L2 students choose two genres from a selection of four: a novel, drama, selection of poems or selection of short stories (Department of Basic Education, 2011a, 2011b). There is thus no obligation to study a full novel at L2 level. In comparison, the Cambridge Pre-University curriculum requires as many as eight novels to be studied (www.cie.org.uk/cambridgepreu) to attain the required level for tertiary study.

The construct of language in context as articulated in Paper 1
The three sections of the examination paper include tasks that assess the ability to understand written and visual texts, the ability to summarise a short text, and knowledge of grammar and language structures. Serious discrepancies have been found in the English L1 papers over the five-year period studied (2008–2012) and a lack of conceptual clarity of constructs has been identified (du Plessis, 2017). Content or context validity, traditionally cited in language testing literature as referring
to sufficient representation of language-related tasks and content of an authentic nature, with due consideration of the language and communicative demands made on the interlocutor and the conditions for task performance (Cumming & Berwick, 1996; Hughes, 2003; Weir, 2005), was compromised by the inclusion of unsuitable texts and inadequate item specifications. For example, the reading comprehension tasks contained texts that were poorly written and covered topics that could potentially be biased in terms of culture and gender. Moreover, many items were problematic owing to poor formulation, repetition of parts of the item prompts in the memorandum answers, inappropriate answers in the memoranda and allowing copying from the reading passage where discussion was required. Far too many questions required the mere expression of an opinion without critical evaluation or substantiation of point of view (just over half of the items analysed) resulting in an overrepresentation of this basic ability (du Plessis, 2014). The combination of so many problematic aspects weakens the construct validity of the papers, i.e. the extent to which traits of a linguistic and cognitive nature are assessed, and the alignment of task and item types with theories on communicative competence and language processing (Cumming & Berwick, 1996; Hughes, 2003; Weir, 2005).

In a comparative study of the Afrikaans L1 papers (2012–2015), the quality of texts selected for the examination papers was found to be better, with no cultural bias, but there was a need for a broader variety of topics (literacy themes tended to dominate). As in the case of the English papers, much of the emphasis was on a literal understanding of texts. Table 5.5 shows the most common sub-abilities assessed in Section A over the mentioned periods and their proportionate percentage of occurrence in the papers analysed.

Table 5.5 Dominant item types in Section A – Reading for comprehension

English Paper 1: Express own opinion (50%); Understand literal meaning (28%); Draw conclusions (15%); Attend to word choice (13%); Distinguish between denotation and connotation (12%); Compare and contrast (10%); Evaluate a statement (8%); Respond to images (7%)

Afrikaans Paper 1: Understand literal meaning (29%); Understand common phrases/idioms (12%); Express own opinion (11%); Make an inference (10%); Understand cause and effect (8%); Read for main ideas (8%); Draw conclusions (7%)

In respect of the potential reliability of scoring in Section A, no discrepancies were identified between items and answers in the marking
memoranda of the Afrikaans papers, unlike in the case of the English papers. The potential for reliable marking was high, as more than 80% of the items involved objective measurement, even though most were open-ended items (not multiple choice). The English papers, on the other hand, were based on the principle of global marking of a potentially subjective nature, and only 30% of the items could be scored objectively. Most items contributed between 2 and 4 marks each and no indication was given to examinees or markers how marks would be earned. There can be little mention of reliability of scoring where the acceptability of answers is left largely to the judgement of individual scorers. This approach could in fact be interpreted by some critics as a deliberate strategy to ensure easy allocation of marks and high pass rates. It constitutes poor assessment practice and is to be discouraged at all costs.

Section B, the summary writing task, was particularly problematic in both the Afrikaans and English papers and could hardly be considered representative of the construct of high language ability. This task did not reflect language and cognitive processing normally associated with summary writing (cf. Yu, 2013), an essential skill at tertiary level. Examinees were required to write a summary of 80–90 words of a short 350-word text. The Afrikaans papers required using own sentences, but in any order. In the English papers lifting phrases from the original text was allowed, examinees did not have to produce a coherent summary and no penalties were applied for exceeding the prescribed length, all aspects indicative of serious assessment shortcomings.

Section C, the last section of Paper 1, had, in the English papers, a strong visual literacy rather than language focus. The inclusion of images such as cartoons in this part of the examination is potentially unfair, since these often require cultural and extraneous knowledge. Across all years of analysis (2008–2012), items were identified that were problematic mainly owing to cultural or gender bias, poor formulation and deficiencies in the memoranda. Examinees were required to carry out the following task based on Figure 5.2 (past examination papers are obtainable from https://www.education.gov.za/):

Figure 5.2 Illustration of lower order recall of information in English HL Paper 1, November 2008, p. 14

Item 4.2.2 Comment on the use of humour in the titles of the books. (2 marks)

Memorandum: Still necessary to consult books in order to operate computers. The computer, on the other hand, has to consult a book that explain (language error in original memorandum) how to deal with 'dummies' or people who are not familiar with computers.

It is obvious that the answer in the memorandum simply repeats the titles of the books. There is no mention, for example, of the irony that
humans invented computers, but need manuals to understand them, or that human beings themselves cannot be understood fully, even with a manual. The negative or at least ironical connotations associated with the word ‘dummies’ are also not mentioned in the memorandum. Such items do not reflect higher order thinking, but lower order recall, even bland repetition, of information. Again, this weakens the construct validity of the English L1 paper as an assessment of high language ability. The Afrikaans papers that were analysed contained an appropriate selection of lexically rich texts. The emphasis was on communicative meaning making, but grammar and correct language use were assessed to a much greater extent (43% of the marks) than in the English papers (14% of the marks). The constructs thus differed considerably between the two papers. The potential for high reliability in this section was good for both the Afrikaans and English papers, with objective scoring applying to more than 90% of the examination items in the Afrikaans papers, and around 72% of the items in the English papers owing to the inclusion of grammar items that could be scored objectively. Viewed as a whole, the analysis of tasks and items in Paper 1 shows that only rudimentary abilities are assessed in the English and Afrikaans papers. There is no critical literacy or advanced reading ability involved and the examination results for this component will not provide a reliable indication of ‘cognitive academic language
proficiency (CALP)’ (Cummins, 2000: 58; see Chapter 3 for a full discussion) or the kind of advanced literacy needed to transition well to tertiary study. The construct of writing as articulated in Paper 3
The analysis of examination items in the English L1 papers (2008–2012) and Afrikaans L1 papers (2012–2015) revealed a lack of conceptual clarity of the construct of writing on the part of the examiners. As a result only a generic writing ability could be assessed and the distinction between transactional and creative writing was obscured. The intention of the curriculum to have a multiplicity of genres, registers and discourse modes assessed was thus undermined. There was furthermore little difference between L1 level and L2 kind of writing in the examination (cf. du Plessis & Weideman, 2014). At the level of a first language paper, writing should be directed more towards the attainment of CALP than basic communicative ability. Much of the focus in writing assessment at this higher level should fall on the 'originality of thought, the development of ideas, and the soundness of the writer's logic' (Weigle, 2002: 5). The emphasis in writing assessment at L1 level also does not fall explicitly on language knowledge, but on other more strategic (metacognitive) aspects that indicate how the candidate is able to use language in highly differentiated manners. However, the lack of writing specifications and the kinds of topics included in Paper 3 provide little opportunity to demonstrate such ability. The sample essay topics in Table 5.6 create the impression of an advanced writing ability, but the topics can be reduced to basic descriptive writing, as the prescribed rubric used to evaluate the essay indeed demonstrates.

Table 5.6 Typical essay writing prompts in the 2015 NSC English L1 and Afrikaans L1 examination papers

Afrikaans (translated from the original): Choose one topic and write an essay of 400–450 words.
1.1 This is where we spend time together, play, live and laugh.
1.2 Catch these moments in a diary!
1.3 Reading opens doors.
1.4 Happiness is like a pair of spectacles that so often one desperately searches for, when what is missing is right beneath one's eyes! (Fliegende Blätter)
1.5 I think back to that Christmas eve when I stood before the shop window and heard a voice speak to me suddenly. It changed my life …
1.6 Write an essay about one of the following visual prompts. Provide your own title.

English: Choose one topic and write an essay of 400–450 words.
1.1 There was no possibility of taking a walk that day.
1.2 The past is a foreign country.
1.3 'When she transformed into a butterfly, the caterpillars spoke not of her beauty, but of her weirdness. They wanted her to change back into what she always had been'. 'But she had wings'. (Dean Jackson)
1.4 Gold is the dust that blinds all eyes.
1.5 'There's a time for daring and there's a time for caution, and a wise man understands which is called for'. (In Dead Poets Society)
1.6 Select one picture and write an essay in response … Give your essay a title.

There is another problem. The kinds of essay topics that have been the feature of Paper 3 since 2008 are not comparable and some require poetic and philosophical writing ability for the creation of literary artefacts far beyond the competence of most candidates. This goes against the principles of natural language use and situational authenticity (Weir, 2005). Creative composition constitutes a distinct kind of imaginative construct that is unsuitable for timed examination contexts. It is also unlikely to be expected of examinees in post-school domains. Furthermore, there is a sharp discrepancy with the transactional writing component (Section B). This part requires elementary writing ability and prompts vary between creative composition topics and dialogues, and absurdities such as writing agendas and minutes for imaginary meetings. What is completely missing are tasks that require producing genres of writing relevant to tertiary environments, a neglected area of academic development at school level (Bharuthram & McKenna, 2012).

Another problematic aspect related to Paper 3 is its high weighting compared to the remaining examination papers, and the potential for
subjective and unreliable scoring. As pointed out, the writing prompts provided are not of comparable challenge, which creates an unfair basis for comparing performance. Research shows that the NSC does not assist students to develop strong reading or writing skills and that this obstructs their ability to master subject content at university (Van Rooy & Coetzee-Van Rooy, 2015). The artificial separation of language skills in the curriculum and examination is part of the problem. Writing is a multi-faceted construct
simultaneously involving many of the sub-abilities already assessed in Paper 1, strengthening the case for an integrated approach rather than separate papers devoted to creative writing. In fact, it can be argued that the process of producing a written text is so intertwined with prior processes of finding information (by listening, enquiring, discussing, reading and so forth) and processing that information (again by digesting it, provisionally organising and analysing it, presenting it by articulating it, discussing and summarising it) that it would be difficult to separate it from other 'skills' in the first instance (Weideman, 2013). What is more, such separation can in fact impede rather than facilitate the instruction and development of writing, as well as its imaginative and adequate assessment.

From the preceding discussion of examination content, it is clear that the decision to accommodate all school students' diverse educational needs through a generic curriculum and examination is not a sound approach. The end result can only be a messy confusion of widely divergent constructs and capabilities. It would be difficult to infer that students who obtain high marks for the language papers will be able to cope with the differential language ability required in university courses. A further complication is that institutions of higher learning are given the erroneous impression that incoming students with high scores in the NSC language examinations are adequately prepared for academic reading and writing.
University Requirements with Regard to Grade 12 Language Achievement
All South African universities use an admissions point (AP) system on the basis of which students may be admitted to programmes of study. Table 5.7 illustrates how the school language score is used to determine a comparable university achievement level and admission point at one tertiary institution, the University of the Free State (UFS).

Table 5.7 Number of admission points generated by NSC pass mark per examination subject (University of the Free State, 2017a: 10)

NSC Level       UFS achievement level   UFS admission points
7 (80%–100%)    8 (90%–100%)            8
                7 (80%–89%)             7
6 (70%–79%)     6 (70%–79%)             6
5 (60%–69%)     5 (60%–69%)             5
4 (50%–59%)     4 (50%–59%)             4
3 (40%–49%)     3 (40%–49%)             3
2 (30%–39%)     2 (30%–39%)             2
1 (0%–29%)                              1 (Life Orientation 60%+)
Table 5.8 Grade 12 language mark and AP requirements for academic programmes (University of the Free State, 2017b)

Faculty | Grade 12 English/Afrikaans mark | AP
Eco & Man Sciences | Level 4 (50%) for BCom and BAdmin | 30
Humanities | Level 4 (50%); Level 5 (60%) in L1 for Philosophy | 24 (extended programme); 30
Health Sciences | Level 5 (60%) for MBChB, BMedSc, B.Sc (Dietetics/Physiotherapy), BOccTher, BOptom; Level 4 (50%) for BSocSci (Nursing), BBiok, BSCD (Sport Coaching and Development) | 30 (Sport C.); 34 (Diet., Biokinetics); 36 (all other)
Agric & Nat Sciences | Level 3 (40%) for preparatory and extended programmes; Level 4 (50%) for all other degree programmes | 20 (preparatory programme); 24 (extended programme); 34 (Physics & Engineering); 30 (all other)
Education | Level 3 (40%) for university access programme; Level 4 (50%) for all other degree programmes | 20 (access programme); 25 (extended programme); 30 (all other)
Law | Level 4 (50%) for extended programme; Level 6 (70%) for LLB | 25 (extended programme); 33 (LLB)
Theology | Level 4 (50%) | 20 (higher certificate); 30 (all other)
Prospective students are required to offer six school subjects. Most academic programmes require a minimum composite score of 30. The AP scores for students on the university access, preparatory and bridging programmes (all developmental streams to prepare for and support mainstream study), as well as the extended learning programmes, are lower (see Table 5.8). Note should be taken that no distinction is made between L1 and L2 at the UFS in terms of the admission requirements and points awarded, with the exception of the major in Philosophy. The fact that no distinction is made between language ability at L1 or L2 level is problematic, but it lends further credence to the distrust of the NSC results and to the reliance instead on the results of tests such as the NBTs and TALL when admitting and placing students.

A number of studies have attempted to determine the extent to which the NSC examination serves as a predictor of university success (Cliff & Hanslo, 2009; Fleisch et al., 2015; Sebolai, 2016). In one such study, Van Rooy and Coetzee-Van Rooy (2015) find that the overall NSC average is a better predictor than the respective language papers, but that the NSC results as a whole only predict success in the case of students who attain an average of 65% or higher. If we consider the confusion of constructs and the potential unreliability of scoring, the results of the English L1 examination in particular are not to be trusted.
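To make the mechanics of the AP system concrete, the sketch below converts NSC percentage marks into UFS admission points along the lines of Table 5.7 and checks them against a faculty threshold from Table 5.8. It is an illustration only: the function names and the candidate's marks are invented, and the special treatment of Life Orientation is left out.

def ufs_points(percentage: float) -> int:
    """Convert an NSC subject percentage into UFS admission points (after Table 5.7)."""
    bands = [(90, 8), (80, 7), (70, 6), (60, 5), (50, 4), (40, 3), (30, 2)]
    for lower_bound, points in bands:
        if percentage >= lower_bound:
            return points
    return 1  # 0%-29%; assumed here to earn a single point

def admission_points(marks: dict) -> int:
    """Sum the points over the six subjects a prospective student offers."""
    return sum(ufs_points(mark) for mark in marks.values())

# Invented marks for a hypothetical applicant to the LLB (Level 6 language mark, AP of 33).
marks = {"English HL": 72, "Afrikaans FAL": 65, "Mathematics": 58,
         "History": 81, "Geography": 66, "Life Sciences": 74}
ap = admission_points(marks)
meets_llb = ap >= 33 and marks["English HL"] >= 70
print(ap, meets_llb)  # 33 True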
The Sum of All Fears
The very basic level of education that the NSC provides simultaneously to students with different aptitudes and interests is unsuitable for preparing students for higher education. Furthermore, in light of the disparate schooling contexts and the general lack of credibility of the NSC results, there is consensus among South African scholars that the school results should not be used as the sole criterion for selecting or placing students at university. The state of school education in South Africa is deplorable and runs contrary to the ideal of creating a socially just and equitable society. School education should lay the foundation for the advancement of critical literacies and higher forms of knowledge and ability. It is a contradiction in itself to list higher order thinking and academic literacy objectives in the prescribed school curriculum, but for all practical purposes to defer these to tertiary level instruction. Rather than working in tandem, basic education and academic literacy become conflicting constructs. Responsible instructional and assessment practices support the principles of social justice and equality of opportunity, imperatives in any transformation initiative (see also Van Dyk et al., this volume, Discussion section). Of equal importance is their role as custodians of educational standards that enhance the credibility of school and university qualifications. The suspicion that something may also be amiss at tertiary level derives from criticism from the private sector about the language competencies and literacy capabilities of university graduates. Perhaps the inference to be drawn here is that leaving the matter of academic literacy to higher education institutions is a case of much (being done, but) too late for many students.

References

Balfour, R.J. (2015) Education in a New South Africa: Crisis and Change. Cambridge: Cambridge University Press.
Bharuthram, S. and McKenna, S. (2012) Students' navigation of the uncharted territories of academic writing. Africa Education Review 9 (3), 581–594.
Centre for Educational Testing for Access and Placement (CETAP) (2012) CETAP. See http://nbt.uct.ac.za/ (accessed June 2019).
Chisholm, L. (2005) The state of South Africa's schools. In J. Daniel, R. Southall and J. Lutchman (eds) State of the Nation: South Africa 2004-2005 (pp. 210–226). Cape Town: Human Sciences Research Council.
Cliff, A. and Hanslo, M. (2009) The design and use of 'alternate' assessments of academic literacy as selection mechanisms in higher education. Southern African Linguistics and Applied Language Studies 27 (3), 265–276.
Cumming, A. and Berwick, R. (eds) (1996) Validation in Language Testing. Clevedon: Multilingual Matters.
Cummins, J. (2000) Language, Power and Pedagogy: Bilingual Children in the Crossfire. Clevedon: Multilingual Matters.
Department of Basic Education (2010) Report on the National Senior Certificate examination results. Pretoria: Department of Basic Education. See http://www.umalusi.org.za/docs/research/2010/iq_overview_report.pdf (accessed December 2017).
Department of Basic Education (2011a) Curriculum and Assessment Policy Statement: Grades 10-12 English HL. Pretoria: Department of Basic Education.
Department of Basic Education (2011b) Curriculum and Assessment Policy Statement: Grades 10-12 English FAL. Pretoria: Department of Basic Education.
Department of Basic Education (2012) National Senior Certificate Examination technical report. See https://www.education.gov.za/Portals/0/Documents/Reports/TECHNICAL%20REPORT%20LQ%2011-01-2013%20-%2012h25.pdf?ver=2013-03-12-225642-000 (accessed November 2017).
Department of Basic Education (2015a) National Senior Certificate Examination technical report. See https://www.education.gov.za/Portals/0/Documents/Reports/2015%20NSC%20Technical%20Report.pdf?ver=2016-01-05-050208-000 (accessed November 2017).
Department of Basic Education (2015b) National Senior Certificate Information Booklet. Pretoria: Department of Basic Education.
Department of Higher Education and Training (2016) Annual monitoring report on the projected 2014 targets of the ministerial statement on student enrolment planning, 2014/15 – 2019/20. See http://www.dhet.gov.za/Reports%20Doc%20Library/Annual%20monitoring%20report%20Projected%202014%20targets%20on%20student%20enrolment%20planning%20-%20Ministerial%20statement.pdf (accessed November 2017).
du Plessis, C. (2014) Issues of validity and generalisability in the Grade 12 English HL examination. Per Linguam 30 (2), 1–19.
du Plessis, C. (2017) Developing a theoretical rationale for the attainment of greater equivalence of standard in the Grade 12 Home Language exit-level examinations. Unpublished PhD thesis, University of the Free State.
du Plessis, C. and du Plessis, T. (2015) Dealing with disparities: The teaching and assessment of official languages at first language level in the grade 12 school-leaving phase in South Africa. Language, Culture and Curriculum 28 (3), 209–225. DOI: 10.1080/07908318.2015.1083999
du Plessis, C. and Weideman, A. (2014) Writing as construct in the Grade 12 HL curriculum and examination. Journal for Language Teaching 48 (2), 127–147.
du Plessis, C., Steyn, S. and Weideman, A. (2013) Towards a construct for assessing high level language ability in Grade 12. Unpublished report submitted to Umalusi.
du Plessis, C., Steyn, S. and Weideman, A. (2016) Die assessering van huistale in die Suid-Afrikaanse Nasionale Seniorsertifikaateksamen – die strewe na regverdigheid en groter geloofwaardigheid. LitNet Akademies 13 (1), 425–443.
Fiske, E.B. and Ladd, H.F. (2004) Elusive Equity: Education Reform in Post-apartheid South Africa. Washington, DC: Brookings Institution Press.
Fleisch, B., Schöer, V. and Cliff, A. (2015) When signals are lost in aggregation: A comparison of language marks and competencies of entering university students. South African Journal of Higher Education 29 (5), 156–178.
Green, A. (2014) Exploring Language Assessment and Testing. New York: Routledge.
Gumede, M. (2017) Universities put focus on high drop-out rate. Business Day, 28 March. See https://www.businesslive.co.za/bd/national/education/2017-03-28-universities-put-focus-on-high-drop-out-rate/ (accessed November 2017).
Hughes, A. (2003) Testing for Language Teachers (2nd edn). Cambridge: Cambridge University Press.
John, V. (2012) Matric pass rate may be deceiving. Mail & Guardian, 6 January. See http://www.mg.co.za/article/2012-01-06-matric-pass-rate-may-be-deceiving (accessed June 2019).
Letseka, M. and Pitsoe, V. (2014) The challenges and prospects of access to higher education at UNISA. Studies in Higher Education 39 (10), 1942–1954. DOI: 10.1080/03075079.2013.823933
Lewin, R. and Mawoyo, M. (2014) Student access and success: Issues and interventions in South African universities (Report published by Inyathelo: The South African Institute for Advancement, with the support of The Kresge Foundation). See http://www.inyathelo.org.za/knowledge-services/inyathelo-publications/view-all-publications-for-download.html (accessed April 2016).
McCusker, D. (2014) What's in the CAPS Package? A comparative study of the National Curriculum Statement (NCS) and the Curriculum and Assessment Policy Statement (CAPS), FET Phase: English HL (Research cluster report presented to Umalusi).
McKay, T.M. (2016) Academic success, language, and the four year degree: A case study of a 2007 cohort. South African Journal of Higher Education 30 (4), 190–209. DOI: 10.20853/30-4-570
Mncwago, J.B. (2015) An exploration of the discrepancy between classroom-based assessment and external summative assessment in English First Additional Language in Grade 12. Unpublished Master's dissertation, Stellenbosch University.
Modisaotsile, B.M. (2012) The failing standard of basic education in South Africa. Africa Institute of South Africa: Policy Brief No. 72, March 2012.
Moodley, V. (2014) Quality and inequality in the assessment of visual literacy in Grade 12 examination papers across six South African languages. Language Matters: Studies in the Languages of Africa 45 (2), 204–223.
Myburgh-Smit, J. and Weideman, A. (2021) How early should we measure academic literacy? The usefulness of an appropriate test of academic literacy for Grade 10 students. [In this volume].
Naidoo, U., Flack, P.S., Naidoo, I. and Essack, S.Y. (2014) Secondary school factors relating to academic success in first year Health Science students. South African Journal of Higher Education 28 (4), 1332–1343.
Parker, F. (2012) Do the maths: Results not in line with SA's ambitions. Mail & Guardian, 10 January. See http://www.mg.co.za/article/2012-01-10-do-the-maths-matric-results-not-in-line-with-sas-ambitions/.
Patterson, R. and Weideman, A. (2013) The typicality of academic discourse and its relevance for constructs of academic literacy. Journal for Language Teaching 47 (1), 107–123.
Rambiritch, A. (2012) Transparency, accessibility and accountability as regulative conditions for a postgraduate test of academic literacy. Unpublished PhD thesis, University of the Free State.
Sebolai, K. (2016) The incremental validity of three tests of academic literacy in the context of a South African university of technology. Unpublished PhD thesis, University of the Free State. See http://hdl.handle.net/11660/5408 (accessed November 2017).
Solidarity Research Institute (2015) Matric report: The South African labour market and the prospects for the matriculants of 2015. See http://www.solidariteit.co.za/wp-content/uploads/2016/01/Matric_Report_2015.pdf (accessed January 2017).
Spaull, N. (2012) Education in SA: A tale of two systems. Politicsweb, 31 August. See https://www.politicsweb.co.za/news-and-analysis/education-in-sa-a-tale-of-two-systems (accessed February 2019).
Statistics South Africa (2016) Educational enrolment and achievement, 2016 (Education Series Vol. III). See http://www.statssa.gov.za/publications/Report%2092-01-03/Report%2092-01-032016.pdf (accessed November 2017).
Umalusi (2010) Evaluating the South African National Senior Certificate in Relation to Selected International Qualifications: A Self-Referencing Exercise to Determine the Standing of the NSC. Joint research project undertaken by Umalusi and Higher Education South Africa (HESA). Pretoria: Umalusi.
Umalusi (2012a) The Standards of the National Senior Certificate HL Examinations: A Comparison of South African Official Languages. Pretoria: Umalusi.
Umalusi (2012b) Technical Report on the Quality Assurance of the Examinations and Assessment of the National Senior Certificate (NSC). Pretoria: Umalusi.
University of the Free State (2017a) Kovsies Prospectus. See https://www.ufs.ac.za/docs/librariesprovider31/default-document-library/kovsies-2017-prospectus-705.pdf?sfvrsn=2 (accessed June 2019).
University of the Free State (2017b) Yearbooks. See https://www.ufs.ac.za/templates/yearbooks (accessed December 2017).
Van Dyk, T.J., Van Dyk, L., Blanckenberg, H.C. and Blanckenberg, J. (2007) Van bevreemdende diskoers tot toegangsportaal: E-leer as aanvulling tot 'n akademiese geletterdheidskursus (From alienating discourse to access discourse: E-learning as a supplement to an academic literacy course). Ensovoort 11 (2), 154–172.
Van Dyk, T., Kotzé, H. and Murre, P. (2021) Does one size fit all? Some considerations for test translation. [In this volume].
Van Rooy, B. and Coetzee-Van Rooy, S. (2015) The language issue and academic performance at a South African University. Southern African Linguistics and Applied Language Studies 33 (1), 31–46. DOI: 10.2989/16073614.2015.1012691
Webb, V. (2008) Overview of issues at stake. In M. Lafon and V. Webb (eds) The Standardization of African Languages: Language Political Realities. Proceedings of a CENTREPOL Workshop held at the University of Pretoria on March 29, 2007 (pp. 8–21).
Weideman, A. (2013) Academic literacy interventions: What are we not yet doing, or not yet doing right? Journal for Language Teaching 47 (2), 11–23.
Weideman, A., du Plessis, C. and Steyn, S. (2017) Diversity, variation and fairness: Equivalence in national level language assessments. Literator 38 (1), 1–9. DOI: 10.4102/lit.v38i1.1319
Weigle, S.C. (2002) Assessing Writing. Cambridge: Cambridge University Press.
Weir, C.J. (2005) Language Testing and Validation: An Evidence-Based Approach. New York: Palgrave Macmillan.
Wilson-Strydom, M. (2015) University Access and Success: Capabilities, Diversity and Social Justice. Abingdon: Routledge.
Yu, G. (2013) The use of summarization tasks: Some lexical and conceptual analyses. Language Assessment Quarterly 10, 96–109.
6 How Early Should We Measure Academic Literacy? The Usefulness of an Appropriate Test of Academic Literacy for Grade 10 Students

Jo-Mari Myburgh-Smit and Albert Weideman
The Interface Between Teaching Language at School and the Language Demands of Higher Education
A global phenomenon of the last decade of the previous century, the massification of higher education, presented several further challenges to the tertiary education sector in South Africa. The way that these challenges have been met over the last 20 years, in particular our institutional responses to them, now needs re-examination and reflection, in fact a reconsideration of what we have done and what still needs to be done. According to Cliff et al. (2003: 1) and the Centre for Higher Education Trust (2017), we have seen a considerable, yet constant, increase in university applications and enrolments over the last few decades since the shift in the political landscape of the country at the end of the previous century. Similarly, over the course of only two decades, the education system has witnessed a 300% increase in the number of black students who complete their tertiary studies (Jeffery, 2014). Although this is a most welcome transformation, it also raises the question of whether the system itself has adapted and transformed in order to support this immense influx of students. According to an article in the Mail & Guardian (Nkosi, 2015), only 18% of those exiting the secondary school system at the end of Grade 12 are accepted for university study, and of that 18% between 50% and 60%
drop out during their first year. Additionally, studies indicate that most students do not complete their qualifications in the prescribed time, and that students who drop out of tertiary educational institutions are unlikely ever to return (Scholtz, 2017: 28; see also du Plessis, this volume). Although many reasons exist for these worrisome statistics and drop-out figures, such as financial setbacks, the strain of maintaining new relationships and changes in the choice of what are considered to be relevant subjects and qualifications, the main reason often cited is student under-preparedness. One of the major contributors to this is considered to be a lack of preparedness to handle the language demands of tertiary education. The first consideration in this chapter will therefore be: How prepared are students who intend to enrol in higher education when they arrive at these academic institutions? What kind of preparation, in terms of language to be used for academic purposes, have they been exposed to?

The official set of guidelines used by South African secondary school language teachers is called the Curriculum and Assessment Policy Statement (CAPS). The CAPS document for Home Language prescribes that students should master a 'high standard of language' (Department of Basic Education, 2011: 9) to enable them to acquire access to 'further or Higher Education or the world of work' (Department of Basic Education, 2011: 9). Although the significance of competently interacting with academic discourse is acknowledged in the CAPS document, the absence of a comprehensive definition of academic discourse or academic literacy is evident and 'can easily lead to a misalignment between the aims of the curriculum and the subsequent assessment of students' attempts at the realisation of these aims' (Myburgh, 2015: 2). Treated, according to that curriculum, as a discourse type by the Department of Basic Education (2011), academic language is one of six material lingual spheres (Weideman, 2009: 39) that should be mastered by secondary school pupils. It follows that academic discourse must then be clearly conceived and defined in order to facilitate meaningful language pedagogy and produce theoretically defensible academic literacy tests, as well as useful and insightful test results.

Du Plessis (2017) discusses in detail the aims of the South African curriculum. Within the current curriculum two levels of language teaching can be found, namely Home Language (HL) and First Additional Language (FAL). This distinction is especially significant within the multilingual context of South Africa, where children often grow up with more than one language spoken at home. Home Language should, however, not be seen as the language used exclusively by a student at home. Within CAPS, it is explained that Home Language and First Additional Language refer to the two different levels at which a given language is instructed and assessed, with Home Language being perceived as the
more difficult of the two (du Plessis, 2017: 103). Du Plessis (2017: 103) thus also distinguishes between 'two levels of proficiency' within HL, namely social and academic/educational. According to CAPS, a combination of these two levels of proficiency should enable students to gain access to higher education and the world of work, as mentioned earlier. These goals should furthermore form the foundation on which language instruction is based and subsequent language assessment takes place. The question, however, is whether the current assessments are indeed aligned with the aims prescribed in the CAPS document.

In her research, du Plessis (2017, this volume) has extensively analysed Paper 1 (Language) and Paper 3 (Writing) of the November Grade 12 Home Language assessments of 2008 to 2012. Each paper and its applicable memorandum were examined in great detail. For example, for Text Comprehension (Section A of Paper 1), she found that of the 60 items which were analysed, 21 items were problematic. These 21 items were inadequate for various reasons, including that the options given in the memorandum were illogical, ambiguous, outdated, incomplete, repetitive and/or extremely subjective in nature (du Plessis, 2017: 126–127). It is furthermore noted that a change in the choice of reading texts is evident since 2010, popular themes being those of religion, sport and politics. Du Plessis (2017: 129) mentions that these types of themes are generally discouraged in large scale assessments, since it can be argued that they may propagandise or be perceived as excluding some cultures. Open-ended questions are to be found in abundance in Section A, which is further cause for concern (du Plessis, 2017: 132). Many additional problems for Section B (Summarising in your own words) and Section C (Language in context) are also to be observed. Du Plessis's findings indicate a serious discrepancy between the aims mentioned in CAPS and the assessment of those aims. This not only lessens the credibility of the Grade 12 exit examination results, but also contributes to the under-preparedness students face when reaching university. As a further example, one of the main concerns regarding Paper 3 (Writing) pointed out by du Plessis (2017: 150) is that while CAPS places much focus on the writing process, this process cannot be authentically reproduced in an examination setting, thus creating an immediate misalignment between the aims given in CAPS and the subsequent assessment of these aims.

The major conclusion in all these analyses is that problems abound at the interface between language instruction at school and the language demands faced by new entrants into tertiary education. The aim of this chapter is to propose certain measures that might be adopted to ease these difficulties. Its argument is that we need an adequate assessment, administered earlier than in the last year of school, and it examines and reports on the performance of two language assessments that might potentially be useful in this regard. Below, the design of a more
appropriate and credible measurement and its subsequent administration and refinement, as well as when these types of academic literacy assessments should take place, will therefore be discussed.

Designing the Appropriate Measurement
If we desire to measure language ability appropriately and adequately, there are several considerations that influence what we do. The design of theoretically defensible literacy assessments, for example, should be guided by the articulation of a test construct (Weideman, 2011: 100). A test construct can be defined as an outline that describes the purpose of the test in adequate detail, and with reference to some theoretically current idea. In other words, the 'construct of a test must include a clear theoretical definition of the ability that is intended to be measured by the test' (Myburgh, 2015: 29) and should guide the design of the test from as early a point as possible in order to deliver a theoretically defensible measuring instrument.

Defining an ability can be challenging. It appears relatively unproblematic for test designers to begin with a recognition that academic literacy tests attempt to measure a student's ability to engage with academic discourse. What is subsequently needed is a detailed definition of academic discourse and an articulation of the kinds of activities that are characteristic of effectively engaging with academic discourse. According to Patterson and Weideman (2013: 118), academic discourse can be defined as 'all lingual activities associated with academia, the output of research being perhaps the most important. The typicality of academic discourse is derived from the unique distinction-making activity which is associated with the analytical or logical mode of experience'. This definition is significant, as it not only attempts to describe the essence of academic discourse, but also provides an intent or purpose for employing it: the definition implies that specific activities or abilities are required for the functional use of academic discourse. In her work, Blanton (1994: 226) has, for example, listed eight demands students are presented with when engaging with academic discourse:

(1) interpreting texts in light of their own experience and their own experience in light of texts;
(2) agreeing or disagreeing with texts in light of experience;
(3) linking texts to each other;
(4) synthesising texts, and using their synthesis to build new assertions;
(5) extrapolation from texts;
(6) creating their own texts, doing any of the above;
(7) talking and writing about doing any or all of the above; and
(8) doing numbers 6 and 7 in such a way as to meet the expectations of their audience.
An even more detailed list of components of academic literacy (Weideman et al., 2016: 7) defines the mastery of academic discourse as the ability of students to:

(1) understand a range of academic vocabulary in context;
(2) interpret and use metaphor and idiom, and perceive connotation, word play and ambiguity;
(3) understand relations between different parts of a text, be aware of the logical development of (an academic) text, via introductions to conclusions, and know how to use language that serves to make the different parts of a text hang together;
(4) interpret different kinds of text type (genre), and show sensitivity for the meaning that they convey and the audience that they are aimed at;
(5) interpret, use and produce information presented in graphic or visual format;
(6) make distinctions between essential and non-essential information, fact and opinion, propositions and arguments, distinguish between cause and effect, classify, categorise and handle data that make comparisons;
(7) see sequence and order, do simple numerical estimations and computations that are relevant to academic information, that allow comparisons to be made, and can be applied for purposes of an argument;
(8) know what counts as evidence for an argument, extrapolate from information by making inferences, and apply the information or its implications to other cases than the one at hand;
(9) understand the communicative function of various ways of expression in academic language (such as defining, providing examples, arguing); and
(10) make meaning (e.g. of an academic text) beyond the level of the sentence.

These components may then be converted into task types or subtests by test designers for academic literacy assessments. Task types might for example include cloze tests, c-procedure, dictionary definitions, longer reading passages, academic writing tasks, scrambled texts, vocabulary knowledge, identifying register and text type, and interpreting and understanding visual and graphic information. Furthermore, one task type can measure more than one component at a time. For example, if you wanted to test whether a student could make meaning beyond the sentence, you could use either a subtest of Register and text type, or a longer reading passage (Text comprehension) or even a Scrambled text. In turn, a scrambled text task can also be used to test textuality, and for understanding text type (Van Dyk & Weideman, 2004:
18–19). Through the careful selection of mutually supporting task types, a feasible test can be created which can be administered and completed quickly and efficiently. Of course, the exact weighting and precise format of each task type, and the items that make it up, all contribute to a deliberately constructed academic literacy assessment. A good part of the deliberation therefore is to decide how many marks need to be allocated to which components of academic literacy, whether the test will be suitable and appropriate for the target population, and how the test should be administered.

Two tests were used in this investigation. The first test was the Test of Advanced Language Ability (TALA), which was designed specifically for Grade 12 students, and is discussed in this volume by Steyn (also 2018). The second test was taken from a test book by Weideman and Van Dyk (2014) and then adapted and shortened using the design considerations referred to above. A high school teacher helped with this process, and the test was ultimately designed to be appropriate for Grade 10 students. The test that we chose initially had a total of 100 marks, but it was then modified to have a subtest framework similar to that of TALA. This was done by utilising the test specifications of TALA. Table 6.1 records these test specifications (Steyn, 2018: 37).

Table 6.1 identifies the five subtests as Scrambled text, Vocabulary knowledge, Understanding graphs and visual information, Text comprehension and Grammar and text relations. The subtests each measure more than one of the components pertaining to academic literacy. What is more, as we have noted above, each one of the identified components of academic literacy may potentially be measured by more than one subtest of the same test. Textuality, for example, can be measured by means of a subtest such as Scrambled text, Text comprehension or Grammar and text relations, or all of them. In this case, the Scrambled text in the modified test was kept exactly the same, since the original also constituted five marks. The remaining subtests were all modified in light of the specifications listed above. As for TALA, and for the same reasons (Steyn, 2018; this volume), a multiple choice test format was used. This format is ideal in situations where test results have to be made available as soon as possible after a test has been taken, or where tests are administered to large groups of test takers, as in the present case. Moreover, it trumps the scoring of the essay format which, even when done with great care and consideration, is wholly subjective in nature (Green, 2014: 178).

The description of the target population and logistical considerations determine the length of the test, as well as the Flesch reading ease of the texts used in it. For example, the Flesch reading ease of texts for Grade 10 students should be above 50% (Steyn, 2010: 5), which is one rough indication that the texts are appropriate for Grade 10 students in terms of vocabulary and sentence length.
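For readers who wish to reproduce this screening step, the standard Flesch reading ease formula combines average sentence length and average syllables per word: 206.835 minus 1.015 times (words per sentence) minus 84.6 times (syllables per word), with higher scores indicating easier text. The sketch below is a rough stand-in only: the vowel-group syllable counter is a simplifying assumption, not the procedure used in the study.

import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of adjacent vowels as syllables.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

sample = ("The test was administered to Grade 10 students. "
          "Each passage was screened for vocabulary and sentence length.")
print(round(flesch_reading_ease(sample), 1))  # scores above roughly 50 suggest an accessible text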
Table 6.1 TALA test specifications
(Columns: Subtest and general task type; Component measured/potentially measured; Specifications for items (60 marks): guidelines for questions)

Subtest and general task type: A Scrambled text in which the candidate is given an altered sequence of sentences and must determine the correct order in which these sentences must be placed.
Component measured/potentially measured: Textuality: cohesion and grammar, understand relations between different parts of a text, be aware of the logical development of an academic text, via introductions to conclusions, and know how to use language that serves to make the different parts of a text hang together; Sequence and order; Understanding text type (genre); Communicative function; Making meaning beyond the sentence.
Specifications for items: (5) Sequencing [Candidates use their knowledge of the relations between different parts of the text and of the logical development of an academic text to determine the correct order.]

Subtest and general task type: Vocabulary knowledge is tested in the form of multiple choice questions.
Component measured/potentially measured: Vocabulary comprehension: understand and use a range of academic vocabulary as well as content or discipline-specific vocabulary in context (however, limited to a single sentence).
Specifications for items: (10) Vocabulary in context (use); Handling metaphor and idiom (optional).

Subtest and general task type: The Interpreting graphs and visual information subtest consists of questions on graphs and simple numerical computations.
Component measured/potentially measured: Understanding text type (genre); Understanding graphic and visual information; Distinguish between essential and non-essential information, fact and opinion, propositions and arguments, cause and effect, and classify, categorise and handle data that make comparisons; Numerical computation; Extrapolation and application; Making meaning beyond the sentence.
Specifications for items: (8) Trends: perceived trends in sequence, proportion and size; predictions and estimations based on trends; averages across categories, etc. Proportions: identify proportions expressed in terms of fractions or percentages; compare proportions expressed in terms of fractions or percentages, e.g. biggest difference or smallest difference. Comparisons between individual readings within a category in terms of fraction, percentage or the reading in the relevant unit (e.g. in grams or millions of tonnes); comparisons between the combined readings of two or more categories in terms of fractions, percentage or the reading in the relevant unit; differences between categories; comparisons of categories. Inferencing/extrapolation based on the given graphic information.

Subtest and general task type: In the Text comprehension section, candidates must answer questions about the given text.
Component measured/potentially measured: Vocabulary comprehension; Understanding metaphor and idiom and vocabulary in use; Distinguish between essential and non-essential information, fact and opinion, propositions and arguments, cause and effect, and classify, categorise and handle data that make comparisons; Extrapolation and application; Think critically (analyse the use of techniques and arguments) and reason logically and systematically; Interact with texts: discuss, question, agree/disagree, evaluate, research and investigate problems, analyse, link texts, draw logical conclusions from texts and then produce new texts; Synthesise and integrate information from a multiplicity of sources with one's own knowledge in order to build new assertions; Communicative function; Making meaning beyond the sentence; Textuality (cohesion and grammar); Understanding text type (genre).
Specifications for items: (25) Essential: distinction making: categorisation, comparison, distinguish between essential and non-essential (5); inferencing/extrapolation, e.g. identify cause and effect (3); comparing text with text (2); vocabulary in context (5); handling metaphor, idiom and word play (1); another (4) from any of these.

Subtest and general task type: In the Grammar and text relations section the questions require the candidate to determine where words may have been deleted and which words belong in certain places in a given text that has been systematically mutilated.
Component measured/potentially measured: Vocabulary comprehension; Textuality (cohesion and grammar); Understanding text type (genre); Communicative function.
Specifications for items: (12) Determined by the specific item. The text is systematically mutilated – one cannot predict beforehand which components will be measured, but a good range is possible and indicated. Possible (5) of the following: Communicative function, e.g. defining/concluding; Cohesion/cohesive ties; Sequencing/text organisation and structure; Calculation.
The current National Senior Certificate examinations, the secondary school exit level assessments for languages, possess none of these deliberate attempts at assessing academic language ability as a specific kind of discourse (du Plessis and Steyn, both in this volume). As a result, these assessments are misaligned with the curriculum (Department of Basic Education, 2011), and are not (yet) designed to inform institutions of higher education about the levels of academic literacy of their prospective students (du Plessis et al., 2013). It is
therefore not surprising that most South African universities currently require their own measures of preparedness in this regard. How these tests are administered and employed, however, is still contentious, as will be noted in the next section.

From Task Design to Administration and Refinement
The administration of academic literacy tests plays a pivotal role in the process of academic literacy assessment. Ensuring that test takers are aware of the purpose and format of a test beforehand is something often overlooked by test administrators. Studies indicate that students mostly dread taking tests, especially when these are used as high stakes measuring instruments. The results of high stakes tests are used to grant or deny test takers opportunities, such as for further study, that may have a lasting effect on test takers. The results of the National Benchmark Tests (NBTs), used by 16 universities across South Africa in 2016 (Scholtz, 2017: 28), are often – some would claim inappropriately – used as a high stakes test where the test results are used to deny or grant prospective students access to university (Myburgh, 2015: 5), instead of serving as an additional measure of academic preparedness. The results of the NBTs are meant to provide tertiary institutions with more information to assist with the process of placing struggling students on appropriate academic language interventions, with the additional aims of encouraging and assisting students to obtain their qualifications successfully (Scholtz, 2017: 27). Their intention, thus defined, is therefore that of a medium stakes rather than a high stakes assessment. High stakes tests, by contrast, can easily demotivate test takers, or even lead to the unfair stigmatisation of test takers who do not perform as anticipated.

For the tests used in the current study, we have wished to acknowledge the imperative that tests should be deliberately designed. Although this is merely an initial consideration in such deliberate assessment design, underlying this study are empirical data analyses that show how likely the assessment is to conform to the designers' parameters for the reliability of the eventual test, as well as its ability to discriminate adequately between different levels of ability. In light of the conventional neglect in South Africa of considering test consistency and adequacy, such attention to these qualities is essential. The statistical programs used, two versions of Iteman (3.6 and 4.3: Guyer & Thompson, 2011) and TiaPlus (CITO, 2005), also calculate the Differential Item Functioning (DIF) of items and, at test level, the correlations among subtests, and the correlation of each subtest with the overall test. These measures begin to ensure that there are technical mechanisms to refine the test, and enhance the quality of its versions subsequent to the initial pilot of the Grade 10 test.
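By way of illustration only (the analyses in this study were run with Iteman and TiaPlus, not with this code), the sketch below computes comparable classical statistics on a small invented matrix of scored responses: item difficulty as the proportion correct, item discrimination as the corrected item-rest correlation (a point-biserial stand-in for the biserial indices reported later), and Cronbach's alpha for internal consistency.

import numpy as np

responses = np.array([   # rows = candidates, columns = items (invented 0/1 data)
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
])

difficulty = responses.mean(axis=0)      # proportion correct per item
totals = responses.sum(axis=1)           # total score per candidate

def item_rest_correlation(item, total):
    # Correlate the item with the score on the remaining items.
    rest = total - item
    return float(np.corrcoef(item, rest)[0, 1])

discrimination = [item_rest_correlation(responses[:, i], totals)
                  for i in range(responses.shape[1])]

k = responses.shape[1]
alpha = (k / (k - 1)) * (1 - responses.var(axis=0, ddof=1).sum()
                         / totals.var(ddof=1))

print(difficulty.round(2), np.round(discrimination, 2), round(alpha, 2))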
In this instance, we decided to modify the 10 items which did not score desirably. Modifications to items included lessening possible ambiguity in word choices and decreasing or increasing the difficulty of items by rearranging the positions of the possible answers given, or by changing the answer options altogether. This indicates that a second administration of the test is required to gather test results that can again be analysed, to ensure that the item modifications were successful. As important as the content of the test is the issue of deciding when the test should be administered. This will be discussed in the following section.

When to Test
A further question to be addressed is whether academic literacy testing should take place sooner rather than later. Van Rooy and Coetzee-Van Rooy (2015: 3) have noted the discrepancy between what is taught at school and what is expected at university in terms of language ability. They found, moreover, that academic literacy assessments are very useful in identifying which students should enrol for academic literacy interventions. At the same time, performance on academic literacy courses or academic interventions of some duration yields a more accurate representation of students' overall academic performance than school language results, and they are also better predictors of successful completion of the first year of study (Van Rooy & Coetzee-Van Rooy, 2015).

Various reasons can be advanced for testing academic literacy levels at an earlier stage than university enrolment. Emergent literacy, for example, is becoming ever more relevant. An early literacy test called TEL (Test of Emergent Literacy), which assesses the language and communicative skills of preschool children (whose home language is not English, although they attend an English-medium school), has been designed by Gruhn and Weideman (2017). Together with other experts, they argue that 'emergent literacy skills especially build the foundation for subsequent academic achievement' (Gruhn & Weideman, 2017: 25). Exploring the idea of testing preschool children for skills associated with academic literacy gives further substance to the notion of testing prospective higher education students at earlier stages than university enrolment. Additional considerations include that school pupils might simply have more time at their disposal to remedy low academic literacy levels than tertiary education students have, and that those opportunities that exist for the remediation of low academic literacy levels after school, such as academic literacy courses and interventions, are expensive, resource-intensive and time consuming.

Deciding which students should be enrolled for academic interventions can also be considered a contentious issue. Even though underprepared students can possibly be identified by using tests like
the NBTs, it could be argued that it is ill advised to have an already struggling student register for an additional class or course in the form of an academic literacy intervention, since such an intervention will add to the workload of a student already identified as being at risk. An academic literacy intervention, which is designed to help students with low academic literacy levels, can consequently run the risk of becoming an unwelcome and additional burden. It would simply be more beneficial to tertiary education institutions, places of work, parents and, most importantly, the students themselves to identify and remedy low academic literacy levels sooner rather than later, a point that we shall return to below. This is especially important for students who desire to be accepted into university or other higher education institutions and whose enrolment depends on their performance in high stakes language tests. Grade 12 students possibly better understand the significance of academic literacy tests and interventions, as well as the importance of furthering their education by attending higher education institutions, than their younger counterparts (Myburgh, 2015: 113). In the study being referred to here (Myburgh, 2015), however, an attempt was made, for the reasons given above, to use a test of language ability some time earlier: in Grade 10.

Test Results and Uses
In this study two different academic literacy tests were administered at two schools, both based in Bloemfontein. Both tests were based on the same construct (see the section 'Designing the Appropriate Measurement' above). The first test, as we noted above, was the Test of Advanced Language Ability (TALA), which was designed specifically for Grade 12 students. The second test was adapted and shortened using the design considerations referred to above, and the specifications for TALA. The tests were eventually administered to Grade 10 students: 243 took TALA and 240 took the adapted test. The aim of the administration of these tests was to determine whether a responsibly designed academic literacy test could predict university preparedness more accurately than school marks could.

Three comparisons were eventually made. The results of the two academic literacy tests were compared to the students' Home Language mark, their average Grade 10 mark, and also their average mark excluding the Home Language mark (Myburgh, 2015: 71). The reason for using the average mark for learners' performance across all their different school subjects is that universities use an aggregate index of their eventual Grade 12 exit examination results, the so-called Admission Point Score (APS), as a prime indicator of whether students should be given access to further study. The Grade 10 average score was used in this study because it has a similar basis (performance across the full range of school subjects) to the APS.
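A minimal sketch of the comparison just described, with invented numbers standing in for the real data: each candidate's literacy test score is correlated with the Home Language mark, the Grade 10 average, and the average excluding Home Language.

import numpy as np

# Invented records: columns are test %, Home Language %, Grade 10 average %, average without HL %.
records = np.array([
    [55, 62, 60, 59],
    [71, 68, 70, 71],
    [40, 51, 48, 47],
    [83, 74, 78, 79],
    [62, 65, 66, 66],
    [47, 58, 55, 54],
])

test_scores = records[:, 0]
labels = ["Home Language mark", "Grade 10 average", "Average without HL"]
for label, column in zip(labels, range(1, 4)):
    r = np.corrcoef(test_scores, records[:, column])[0, 1]
    print(f"r(test, {label}) = {r:.2f}")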
When the test results are compared, we note that the unrefined adapted test presented better results than the refined TALA test. Even though TALA was designed with great care and deliberation, and obtained extremely useful results during its first pilot, it most probably should not be administered to students any younger than Grade 12, since TALA was ultimately designed for Grade 12 students, and as an exit-level test. In this administration, TALA still obtained a creditable Cronbach alpha of 0.82, which exceeds the benchmark score of 0.7 (Myburgh, 2015: 78) that would have been aimed for. As measured by Iteman 3.6, a satisfactory mean biserial index of 0.31 was also attained by TALA, which exceeds the desired level of 0.15 (du Plessis, 2012: 18). As regards its mean percentage correct, which should be more or less 50% (Weideman, 2011: 105), TALA scored a value of 42%, which again indicates that the test probably was too difficult for the Grade 10 students on whom it was tested.

In turn, the adapted test obtained a more impressive Cronbach alpha of 0.896, which attests to the technical consistency of the test. A mean biserial score of 0.43 was also attained, as well as a mean percentage correct of 55%, indicating that the test might have been slightly easier than it should have been (Myburgh, 2015: 79), though not by much: technically, items in the middle range of difficulty offer the best prospects for discriminating levels of ability. With a mean biserial correlation of 0.43, that should not be an issue. The other consideration is to ensure that the test is neither unreasonably difficult nor unrealistically easy, which has to do with the need to fit the range of overall test difficulty to the range of student ability.

In Table 6.2, the results of these two tests are correlated with the students' English Home Language mark, and their average performance in Grade 10. Ultimately, the English Home Language mark still correlated more closely with the students' average mark, obtaining a value of 0.82. The adapted test came in second, with a score of 0.78, and TALA finished last with a somewhat unexpectedly low 0.46 (Myburgh, 2015: 89). Even though the study did not turn out entirely as expected, there are a number of positives. The results indicate that the second, adapted test can be refined and administered once more. If the refinement process is repeated, it is sure to increase the accuracy of the test. Moreover, the strong correlation of the results with average performance (what is
Table 6.2 Correlation analysis results

                          Average without English   TALA (p)   Adapted test (p)   English (p)
Average without English   1.00000                   0.45512 (