Advances and Innovations in University Assessment and Feedback
A Festschrift in Honour of Professor Dai Hounsell
Edited by Carolin Kreber, Charles Anderson, Noel Entwistle and Jan McArthur
© editorial matter and organisation Carolin Kreber, Charles Anderson, Noel Entwistle and Jan McArthur, 2014
© the chapters their several authors, 2014

Edinburgh University Press Ltd
The Tun – Holyrood Road
12 (2f) Jackson’s Entry
Edinburgh EH8 8PJ
www.euppublishing.com

Typeset in 11/15 Adobe Garamond by Servis Filmsetting Ltd, Stockport, Cheshire, and printed and bound in Great Britain by CPI Group (UK) Ltd, Croydon CR0 4YY

A CIP record for this book is available from the British Library

ISBN 978 0 7486 9454 9 (hardback)
ISBN 978 0 7486 9455 6 (webready PDF)

The right of the contributors to be identified as authors of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988 and the Copyright and Related Rights Regulations 2003 (SI No. 2498).
Contents
List of Tables and Figures vii
Acknowledgements viii
Foreword by Professor Sir Timothy O’Shea ix
Introduction
Noel Entwistle, Carolin Kreber, Charles Anderson and Jan McArthur 1

Part A Changing Perspectives on the Nature and Purposes of Assessment
1 Shifting Views of Assessment: From Secret Teachers’ Business to Sustaining Learning
David Boud 13
2 Flourishing amid Strangeness and Uncertainty: Exploring the Meaning of ‘Graduateness’ and its Challenges for Assessment
Carolin Kreber 32
3 Assessment for Learning Environments: A Student-Centred Perspective
Liz McDowell and Kay Sambell 56

Part B Students’ Perceptions of Assessment and Feedback
4 Perceptions of Assessment and their Influences on Learning
Noel Entwistle and Evangelia Karagiannopoulou 75
5 Students’ and Teachers’ Perceptions of Fairness in Assessment
Telle Hailikari, Liisa Postareff, Tarja Tuononen, Milla Räisänen and Sari Lindblom-Ylänne 99
6 Perceptions of Assessment Standards and Student Learning
Michael Prosser 114

Part C Reconceptualising Important Facets of Assessment
7 Only Connect? Communicating Meaning through Feedback
Charles Anderson 131
8 Learning from Assessment Events: The Role of Goal Knowledge
D. Royce Sadler 152
9 The Learning–Feedback–Assessment Triumvirate: Reconsidering Failure in Pursuit of Social Justice
Jan McArthur 173

Part D Innovations in Assessment Practices
10 Guiding Principles for Peer Review: Unlocking Learners’ Evaluative Skills
David Nicol 197
11 Disruptions and Dialogues: Supporting Collaborative Connoisseurship in Digital Environments
Clara O’Shea and Tim Fawns 225
12 Understanding Students’ Experiences of Being Assessed: The Interplay between Prior Guidance, Engaging with Assessments and Receiving Feedback
Velda McCune and Susan Rhind 246

Notes on the Contributors 264
Index 270
Tables and Figures
Table 2.1 Conceptualising the nature of ‘graduateness’ through philosophical accounts of authenticity 46
Table 4.1 Correlations between perceptions of environment and other variables (British) 81
Table 4.2 Correlations between perceptions of environment and other variables (Greek) 82
Table 6.1 Factor analysis of student learning experience questionnaires 118
Table 6.2 Correlation matrix of perceptions of assessment items with approaches to studying scores and grade point averages 119
Table 10.1 Principles of good peer review design 209
Figure 4.1 Influences of assessment and teaching on the quality of learning outcomes 88
Figure 5.1 The three-layer model of knowledge and understanding 105
Figure 6.1 Adapted 3-P model of student learning 116
Figure 12.1 The revised guidance and feedback loop 252
Acknowledgements
Permission has been granted by John Wiley and Sons for the use of the diagram in Figure 4.1, which originally appeared in the British Journal of Educational Psychology, and by the Higher Education Research and Development Association at the University of New South Wales for the diagram reproduced as Figure 6.1. Taylor and Francis kindly granted permission to make use of an essay published in the journal Teaching in Higher Education (‘Rationalising the Nature of “Graduateness” through Philosophical Accounts of Authenticity’). Individual sections of this essay were incorporated into Chapter Two. The editors would like to thank the Principal, Professor Tim O’Shea, for his enthusiasm and support for this book, as well as Edinburgh University Press for their professionalism and help in making this publication a success.
Foreword
I am very pleased that four such distinguished editors have worked together to create this valuable Festschrift in honour of Professor Dai Hounsell. Dai has made profound contributions to research on university assessment and feedback, which have had a major impact both in Scotland and internationally. He engages powerfully in one-to-one conversation and as a speaker at conferences, and his co-edited book, The Experience of Learning, as well as his book chapters, journal articles and commissioned guidebooks on assessment and feedback, have all had great impact. His work has been distinguished by his clarity of thinking and careful and persistent concern with the student experience, which has enabled him to identify and illuminate best practice in support of the diversity of students who now study in universities. This Festschrift is a very fitting tribute to Dai’s history of thoughtful and principled contributions to the theory and practice of assessment and feedback.

Assessment and feedback is one of the most important and demanding topics on the current higher education research agenda. As this book shows, the role of assessment has continued to morph and change, and technological innovations have made new forms of feedback possible. I am particularly pleased to see Carolin Kreber’s thoughtful chapter, which deals with the purposes of higher education, as well as the final two chapters, which draw directly on Dai’s work and describe exciting technology-supported innovations in the design and implementation of new modes of assessment and feedback. All the chapters are valuable in different ways, and I warmly commend this book to all teachers and researchers concerned with university education.

Professor Sir Timothy O’Shea
Principal and Vice Chancellor
University of Edinburgh
Introduction
Noel Entwistle, Carolin Kreber, Charles Anderson and Jan McArthur
This book brings together recent developments in thinking about assessment and feedback in university education, written by international experts in the field, and is devised as a tribute to Professor Dai Hounsell, whose research has been of international importance and influence. Dai has not only contributed to the research on assessment and feedback, but has, over the years, been actively involved in working with colleagues to find ways of improving practice. This book reflects these two aspects. It has been organised into four parts, focusing, first, on the broader view of the nature and purposes of assessment in higher education and, second, on how students perceive their experiences of assessment and feedback, before considering how assessment may have to be reconceptualised, in order to match the demands of university education in the twenty-first century. The final part looks more directly at the implications for practice through discussion of recent innovations in assessment. Most of the authors were able to reflect on some aspect of Dai’s academic experience and research, which is wide-ranging and has had significant international impact.

Dai Hounsell’s Main Contributions to Research and Academic Development

An important formative period in Dai’s involvement in this research area began when he was appointed information and research officer at Lancaster
University in 1972. The post initially had an emphasis on informing academic staff about recent developments in the theory and practice of university teaching, but his work led him to develop a research role – in particular, through organising and participating in a series of international conferences that focused on new research in teaching and learning at a time when this field was still relatively undeveloped. These conferences brought together researchers from different countries and one in particular led to the publication of The Experience of Learning, co-edited by Dai. This involvement in, and organisation of, the series of conferences also enabled a sabbatical year with Ference Marton in Gothenburg to imbibe the ideas and atmosphere of an influential research group and later to see first-hand the work of the Tertiary Education Institute at the University of Queensland. While at Lancaster, he also became involved in a major United Kingdom (UK) Social Science Research Council programme on how students learn, which had international impact.

Out of this experience came a specific interest in assessment as an important ‘driver’ of student learning and recognition of the importance of the quality of feedback on students’ assessed work. His doctoral thesis looked at students’ conceptions of essay writing in two contrasting university departments and showed that it was not just the actual assessment arrangements, requirements and formats that affected how students went about their learning, but also how those students conceptualised their experiences, which varied markedly between the departments and among individual students. The functions and processes of essay writing featured prominently in many of his subsequent publications.

Dai’s engagement with research, however, in no way reduced his concern with the importance of helping academic staff to make use of research findings and with the often inadequate ways in which evidence regarding the outcomes of innovations was being reported. His appointment in 1985 to the post of founding director of the Centre for Teaching, Learning and Assessment at the University of Edinburgh led him to focus more broadly on the effects of the overall teaching environment on the quality of student learning, while maintaining his concern about the effects of assessment and feedback. The publication, in 1996, of an inventory of assessment practices in Scottish higher education enabled a much wider dissemination of ideas about innovatory methods, while a series of handbooks for tutors and demonstrators
drew attention to the need to train doctoral students for these roles. In 1998, Dai became coordinating editor of ‘Teaching and Learning’ for the international journal Higher Education and was involved in the nationwide activities that led to the evaluation of teaching quality in Scotland and considerations of the role that research can, and should, have in defining the nature of quality in terms of the experiences of students.

In 2001, he became co-director of a large-scale project within the UK Teaching and Learning Research Programme. An important feature of this project was to draw attention to the differences in the relationships between teaching and learning that are found in contrasting subject areas. The project drew colleagues in twenty-five course units across eleven universities into partnership in carrying out the research and reflecting on the findings; through its website and publications, it has had an international influence on ideas about enhancing teaching–learning environments. More recently, he was responsible for producing four handbooks on assessment for the Quality Assurance Agency for Higher Education in Scotland, which were widely circulated and reinforced through national and institutional workshops. And with his growing reputation, he also attracted increasing numbers of doctoral students, who, in turn, are now engaging in high quality research on assessment and feedback and disseminating good assessment practice in their own countries.

Being appointed Vice-Principal for Academic Enhancement at the University of Edinburgh enabled him to play an active role in promoting ideas about teaching and learning across the university and, in particular, to draw attention to the value of innovative methods of assessment and the need to provide timely and supportive feedback on students’ coursework. In his last two years as a Vice-Principal, he gave many presentations on these themes in British universities and, over the years, a large number of keynote addresses to overseas conferences, as well as workshops in Europe and Australia, activities that indicate just how important he felt it was to translate research into improved practice.
The Main Themes Considered in this Book

Part A Changing Perspectives on the Nature and Purposes of Assessment

The first part of the book maps out present thinking on the nature and purpose of assessment. It aims to show how the learning, knowledge, skills and qualities required of students to do well in higher education, as well as in their future professional practice and civic life, demand new purposes for assessment, which can be met only by changing the nature of assessment practices.

In Chapter 1, ‘Shifting Views of Assessment: From Secret Teachers’ Business to Sustaining Learning’, David Boud offers a historical overview of how assessment practices have changed over the past four decades. He observes that despite some drastic changes and shifts in positive directions, the emphasis on formative assessment has lessened today, in that now much assessment ‘counts’ towards a final mark. He calls for a new agenda in assessment practices – one that focuses explicitly on developing students’ capacity for judgement. This capacity for judgement is associated with the emergent agenda of ‘sustainable assessment’, which suggests that the purpose of assessment is not just to generate grades or influence present learning, but importantly to build students’ capacity to learn and assess themselves beyond the current task, for example, as they are engaged in lifelong learning in their professional practice.

In Chapter 2, ‘Flourishing amid Strangeness and Uncertainty: Exploring the Meaning of “Graduateness” and its Challenges for Assessment’, Carolin Kreber undertakes a philosophical inquiry into the purposes of higher education at the present time and derives several implications for assessment. She takes as a starting point the understanding that the future is uncertain and thus concludes that a main task of higher education is to prepare students to cope with this uncertainty. This could be done through placing a stronger focus on graduate attributes, which refer to the key skills, abilities and dispositions that students are expected to attain. Present ideas on the educational purposes of universities and, by implication, the notion of ‘graduateness’, she proposes, are usefully understood through the philosophical concept of ‘authenticity’. Assessment tasks that prepare students for an uncertain and complex world are
those that promote the students’ authenticity by fostering students’ openness to experience, as well as their moral awareness and social engagement.

In Chapter 3, ‘Assessment for Learning Environments: A Student-Centred Perspective’, Liz McDowell and Kay Sambell focus on the importance of assessment for learning. Rather than simply adding new assessment techniques to existing entrenched practices that are seen to encourage learner dependence on their teachers and assign learners a passive role in the assessment and learning process, the authors argue that, to be successful, changes towards assessment for learning have to be more comprehensive, focusing on the entire learning environment. Based on their work at the Centre for Excellence in Assessment for Learning at Northumbria University, they introduce six elements of a productive and positive assessment for learning environment designed to intentionally foster student autonomy in learning – in particular, learning for the longer term.

Part B Students’ Perceptions of Assessment and Feedback

In this second part of the book, we move from a broad focus on assessment from a historical and theoretical perspective to look at a perspective drawn from the perceptions and experiences of both students and university teachers of their everyday teaching and studying.

In Chapter 4, ‘Perceptions of Assessment and their Influences on Learning’, Noel Entwistle and Evangelia Karagiannopoulou trace the development of ideas about students’ perceptions of teaching and assessment and the effects these have been found to have on their approaches to learning and studying. The authors report a series of studies that show substantial differences among students in their perceptions of the same assessment process, as well as exploring more general relationships that indicate how perceptions influence the extent to which students adopt deep or surface approaches that often lead to qualitative differences in levels of attainment. The final sections look in more detail at students’ individual reactions to an open-book exam, drawing attention to how the assessment produced either angry reactions to a mismatch between tutors’ and students’ views about the purposes of assessment or the satisfaction that came from a ‘meeting of minds’.

In Chapter 5, ‘Students’ and Teachers’ Perceptions of Fairness in Assessment’, Telle Hailikari, Liisa Postareff, Tarja Tuononen, Milla Räisänen
and Sari Lindblom-Ylänne from the University of Helsinki report on a series of interviews with students and their teachers that focus on how staff design and carry out assessments, showing limitations in the teachers’ knowledge of technical aspects of assessment practices and a consequent failure to use clearly defined assessment criteria in marking. This leads to uncertainty among the staff about their reasons for awarding particular grades and a feeling among students that the system is unfair, even though the students believe that only teachers can have the expertise to make those assessments. The concept of ‘fairness in assessment’ is explored in relation to the need to establish reliability and validity in the procedure, leading to a discussion of the implications for effective practice.

In Chapter 6, ‘Perceptions of Assessment Standards and Student Learning’, Michael Prosser makes an important distinction between course grades and learning outcomes, arguing that grade levels, without adequate descriptors, provide no effective guidance either for teachers, regarding how to carry out assessments, or for students, regarding how to improve the quality of their work. He argues that it is important to provide full grade descriptors, based on characteristics of submitted work, which make clear the differences between the allocated levels. Examples of such schemes are provided to suggest a need for a conceptual basis to underpin the different levels described, using the structure of observed learning outcomes (SOLO) taxonomy as a way of justifying the levels identified. He also argues that students need to be made aware of the rationale underlying the system of assessment adopted and to be given examples of other students’ work that make those differences clear.

Part C Reconceptualising Important Facets of Assessment

The third part of the book explores the ways in which changing conceptualisations of certain important facets of assessment affect how it should be carried out. Each chapter involves a detailed theoretical analysis of a key aspect of assessment, which is then linked to the wider experience of learning within higher education. Each chapter reflects a commitment to enable students to have a greater sense of agency within, and through, their assessment and feedback experiences and an understanding of the importance of viewing these experiences as dynamic and ongoing.

In Chapter 7, ‘Only Connect? Communicating Meaning through
Feedback’, Charles Anderson considers the importance of the sensitive tailoring of our approaches to communication for specific situations and audiences, applying this, in particular, to the practice of feedback. The theoretical basis for this chapter lies in the work of the Norwegian psycholinguist Ragnar Rommetveit, whose work on intersubjective communication also influenced Dai Hounsell. Anderson addresses the significant lack of analysis of the nature of communication in the literature on assessment and feedback. Drawing on Rommetveit, Anderson places particular emphasis on context and perspective in terms of meaning-making. This then has implications for how we give, and how students understand, feedback, particularly within different disciplinary and professional learning contexts.

In Chapter 8, ‘Learning from Assessment Events: The Role of Goal Knowledge’, Royce Sadler takes a fresh look at an enduring assessment issue – why is it that students often fail to undertake an assessment task in the way that it has been specified? Sadler suggests that students need to be supported to develop ‘goal knowledge’ of what a final assessment should look like, in order to be able to engage with the specified task. Sadler adds to the notion of goal knowledge that of response genre, which needs to be chosen to enable students and assessors to understand the type of assessment required within a certain context. In developing these ideas, Sadler draws particularly on the work of Boulding to explore the relationship between the knowledge a person brings to a situation (such as an assessment task) and the ways in which that knowledge changes through multiple and layered processes of interaction while carrying out that task.

In Chapter 9, ‘The Learning–Feedback–Assessment Triumvirate: Reconsidering Failure in Pursuit of Social Justice’, Jan McArthur draws on the critical theory of Adorno to explore the implications of complex knowledge and the positive associations of apparent failure. She argues that in order to achieve the triumvirate of learning–feedback–assessment found in Dai Hounsell’s work, we need to reconsider the notion of ‘failure’. She suggests that failure can sometimes be part of a pedagogical process, in which students are asked to engage with complex knowledge in a critical way. Assessment practices should allow for an iterative process of engagement with knowledge, rather than being judged entirely on a first submission. In that way, students are then able to respond to feedback with a strong sense of personal
agency and become both authors and arbiters of their own learning. Enabling students to take greater control over their learning and encouraging forms of knowledge that can help promote social change, she argues, are means with which to promote greater social justice.

Part D Innovations in Assessment Practices

One of the central themes running through the final part of the book is a concern with exploring how students can be effectively involved in the assessment process in ways that can foster both their immediate progress and longer-term development.

In Chapter 10, ‘Guiding Principles for Peer Review: Unlocking Learners’ Evaluative Skills’, David Nicol closely examines how learners’ capacities for self-regulation may be strengthened by developing their skills in making evaluative judgements. He identifies peer review as a key means for developing evaluative capacities; sets out principles that can inform the design of peer review; and illustrates how these principles can be put into action.

In Chapter 11, ‘Disruptions and Dialogues: Supporting Collaborative Connoisseurship in Digital Environments’, Clara O’Shea and Tim Fawns consider the challenges posed for students and tutors by a move to multimodal assessments. They also explain how learning, teaching and assessment practices can be recast to meet these challenges, building on the affordances of digital environments. They present an analytical case study of a course that used a class-wide, wiki-based assignment to scaffold and support collaborative learning and assessment – a course to whose design and teaching Dai Hounsell himself contributed. The chapter provides a fine-grained account of how ‘feedforward, cumulative assessment and developing connoisseurship’ can allow one to ‘move idealised principles of collaborative, dialogic and multimodal learning into practice’.

Finally, in Chapter 12, ‘Understanding Students’ Experiences of Being Assessed: The Interplay between Prior Guidance, Engaging with Assessments and Receiving Feedback’, Velda McCune and Susan Rhind again stress the importance of students’ active involvement in guidance and feedback processes. For students in domains of professional study, they consider how such active involvement can build their identities as legitimate professional practitioners and maximise the impact of their learning experiences on their future
practice. In addition, drawing on the ‘communities of practice’ literature, they explain why even able students may struggle to grasp what constitutes high quality work. A central contribution of the chapter is the development of Dai Hounsell’s model of the ‘guidance and feedback loop’ to provide greater emphasis on what students bring to the assessment process, in terms of their prior experiences, learner identities and imagined future trajectories.
1 Shifting Views of Assessment: From Secret Teachers’ Business to Sustaining Learning
David Boud
Introduction
Despite common assumptions that assessment practices in higher education are enduring, the past forty years have seen remarkable changes. A key change has been from the dominance of unseen end-of-year examinations, through the move to ‘continuous assessment’ and on to a range of diverse assessment approaches. Another notable change has been from assessment weightings being regarded as confidential to the transparency of assessment standards and criteria of today. Assessment has thus shifted in some positive directions. Unfortunately, during the same period, the emphasis on what we now call formative assessment has lessened: from a general acceptance that all courses involve the production of considerable written work that is used purely to aid learning, we now have regimes based on the assumption that all assessment must ‘count’ towards final marks or a grade point average.

My aim in this chapter is to briefly sketch these developments with the intention of projecting forward to explore emergent assessment practices. These are practices that move beyond current innovations in areas such as authentic assessment, self- and peer assessment and improved feedback to students. They represent new views of assessment based upon developing students’ capacity for judgement and involve practices that emphasise an active role for students beyond the production of written work. The chapter will
explore this emerging agenda and consider what changes might be possible, given the continuing dominance of accountability mechanisms that have had the effect of constraining the development of assessments for learning.

Assessment as Taken-For-Granted

One of the problems of discussing assessment is that we all have a prior or pre-existing conception of what it is and thus an immediate reaction to it, established through our early, formative experiences. Sometimes assessment has touched us deeply; sometimes it has left bruises (Falchikov and Boud 2007). While changes may occur around us, our point of reference is often what assessment was like when we were most influenced by it. This conception can easily get locked in and provide a personal yardstick against which we continue to judge assessment. It is important to resurface these prior events and experiences, as they influence what we regard as legitimate. In some cases, we see them as the gold standard of assessment; in others, we resolve never to subject our students to the practices that impacted badly on us.

Many of the changes in assessment that have occurred over the past half century are reflected in my own biography: first, as an undergraduate student in England and then later as an academic, mainly in Australia. There have been minor differences of emphasis between the two countries from time to time, but the main trajectory of assessment is similar. When I entered university there were two main activities that we now label as ‘assessment’. First, there were set tasks that were completed mostly out of class time, but occasionally within it. These were commonly handed in and ‘marked’. A record may have been kept of these marks, but as students we were not very conscious of them beyond the point of the return of our work. Marking usually involved assigning numbers or grades along with some form of brief comment. Work of this kind was commonplace. We completed it because it was the normal expectation of what students did at that time. Second, there were examinations. These occurred at the end of the year and sometimes at the end of each term. These were unseen tests undertaken under examination conditions. No notes were allowed and all information that was needed had to be recalled from memory (Black 1968). No details of examination performance, other than the final mark, were made available to students. Degree
performance was based predominantly on final-year examination results. The course I took in physics was part of a wave of innovations in the 1960s in that it also included a final-year project, which was also marked and contributed in part to the final degree classification. While different disciplines and different universities used variants on this approach, these variations were minor, and the mix of regular, marked work with modest comments that did not count towards final grades and grading based on examinations was commonplace. In the language we use today, there was a clear separation between formative and summative assessment.

For me, the most influential assessment events were not exams, but the ones that did not feel like assessment at all at the time. The first was a project conducted over many weeks in pairs in a laboratory during the final semester of first year, in which we had to design and conduct experimental work on a problem, the solution to which was unknown or at least not easily located in texts by undergraduates. It was far from the stereotypical recipe-like lab problem that was common at the time. The second assessment event was a substantial final-year project in theoretical physics that involved me in exploring a new way of looking at statistical mechanics. What these activities did for me was to give me a taste of the world of research, rather than learn more subject matter. It showed me that it was possible to contribute to the building of knowledge in a small way and not just to absorb it. While it led to a resolve not to undertake physics research, it was also influential in my becoming a researcher.

During my undergraduate years there was a substantial degree of secrecy about assessment processes. We were not told what the criteria for marking would be, and how different subjects would be weighted into final results was confidential. The head of the department in which I studied (Lewis Elton) first took the then daring step of formally disclosing the weightings of the elements that would comprise our degree classification in the year of my graduation. Assessment was secret teachers’ business: it was not the position of students to understand the basis on which they would be judged.

Over the late 1960s and the early 1970s, a campaign for what was known as ‘continuous assessment’ was mounted by student organisations (Rowntree 1977). Their argument was that: (1) it was unfair to base degree performance on a limited number of examinations taken at the end of the year/course, as
examination anxiety among some students led them to underperform; (2) assessment should be spread throughout a course to even out both workload and anxiety; and (3) multiple points of judgement should be used and final grades should be a weighted accumulation of assessments taken across the curriculum, which should be disclosed to students. Assessment for certification moved from one or two points late in a programme to a continuous sampling over the years. The battle for ‘continuous assessment’ was comprehensively won by students, and this practice is now so universal that the term for it is fading from our common language. Later, the massive expansion of higher education that occurred without a commensurate increase in unit resources meant that the amount of regular coursework that could be marked was severely reduced. Continuous assessment commonly transformed into two or three events within each subject per semester. In the Western world, it appears that Oxford University and Cambridge University are rare exceptions that continue to maintain traditional methods of assessment.

Every change in assessment has unintended consequences, and the move to continuous assessment has had quite profound ones. First, students have come to expect that all tasks that they complete will contribute in some way towards their final grades. The production of work for the purpose of learning alone, with no extrinsic end, has been inhibited. ‘Will it count?’ is a phrase commonly heard when asking students to complete a task. This shift also indicates a change in the relationship and contract between teachers and students. Trust that work suggested by teachers will necessarily be worthwhile has disappeared in an economy of grades. Second, having separate events for formative assessment and summative assessment has become unsustainable. When all work is summative, space for formative assessment is diminished. Poor work and mistakes from which students could have learned in the past and consequently surpassed are now inscribed on their records and weighted in their grade point average. Space for learning is eroded when all work is de facto final work. The dominance of the summative is well illustrated by the curious phenomenon, pervasive in the US literature, of referring to everything other than tests and examinations as ‘alternative assessment’ or ‘classroom assessment’, as if tests and examinations are the gold standard that defines the concept of assessment. Anything else is not quite the real thing; they are merely alternatives or confined to what a teacher might do in the classroom.
The Educational Measurement Revolution

Alongside the primarily social change to continuous assessment, other forces outside the immediate community of higher education were influencing assessment and seeking to position it quite differently. In the 1960s and 1970s, the impact of the educational measurement revolution (for example, Ebel 1972) began to influence higher education assessment. The proposition articulated by educational testing specialists from a psychometric background was a simple one. In summary, they regarded student assessment as a folk practice ripe for a scientific approach. If only assessment could be treated as a form of psychological measurement, then a vast array of systematic techniques and strategies could be applied to it. Measurement assumptions were brought to bear on it. The prime assumption was that of a normal distribution of performance within any given group. Whatever qualities were being measured, the results must follow the pattern of a bell curve. If this assumption could be made, then all the power of parametric statistics could be applied to assessment.

The educational measurement revolution was taken up with more enthusiasm in some quarters than others. The impact on psychology departments was high, and later medical education was strongly influenced by this tradition, but many disciplines were not touched at all. I recall joining the University of New South Wales in the late 1970s and discovering that grades in each subject in the School of Psychology were not only required to fit a normal distribution, but that this included an expectation that a requisite number of students needed to fail each subject in conformity with the normal distribution. (It took many years to acknowledge that the selection process into higher education – and into later years of the programme – severely skewed the distribution and made these assumptions invalid.) While not all disciplines shared the enthusiasm of the psychologists, norm-referenced assessment became firmly established. Students were judged against the performance of other students in a given cohort, not against a fixed standard.

The impact of educational measurement still lingers today. Notions of reliability and validity are commonly used in assessment discussions and the multiple-choice test – a key technique from this period – has become ubiquitous. The metaphor of measurement became entrenched for a considerable
time and is only recently being displaced. For example, it is interesting to note that between the first (2000) and second editions (2006) of the UK Quality Assurance Agency Code of Practice on Assessment of Students, the use of measurement in the definition of assessment was removed.

A less obvious influence from this period is that once student assessment became the subject of scrutiny beyond the immediate context of its use, it would forever be the object of critical gaze. Assessment was no longer a taken-for-granted adjunct to teaching, but deserved consideration of its own as a separate feature. Assessment was to be discussed independently of disciplinary content or the teaching that preceded it. Assessment methods became the focus of attention, as if they were free-standing and the most important element of the process.

Widening the Agenda

Following the period of influence of educational measurement, there have been a number of other shifts of emphasis of greater or lesser effect. The first major ones were the incremental moves from norm-referenced testing to a criterion-referenced and standards-based approach. It is impossible to date this precisely or even cite key events that mark the transition, but there has been a comprehensive shift, at least at the level of university policy, from judging students against each other to judging them against a fixed standard using explicit criteria. Desired learning outcomes that have to be disclosed to students are widespread as required features of course and programme documentation, and, increasingly, assessment tasks are expected to be accompanied by explicit criteria used to judge whether standards have been met. Prompted by the Organisation for Economic Co-operation and Development (OECD) agenda to create minimum standards for equivalent courses across countries, there have been both national (www.olt.gov.au/system/files/resources/Resources_to_assist_discipline_communities_define_TLOs.pdf) and international initiatives to document threshold programme standards (http://www.unideusto.org/tuning/).

Linked to this, the second shift of emphasis has been the influence of an outcome-oriented approach and a focus on what students can do as a result of their higher education. Until the 1990s, assessment had focused strongly on what students knew. Students were judged primarily on their understanding
of the specific knowledge domain of the subjects they were studying. There was some emphasis on practical activities in professional courses and project work in later years, but the main emphasis was on what knowledge and academic skills students could demonstrate through assessment tasks. In some quarters, this has been represented as bringing the debate about competencies and capabilities into higher education. Vocational education and training systems have established very strong views about organising learning around explicit operational competencies, but higher education has taken a weaker view, which embraces a more holistic approach to competencies. The focus has been on outcomes, but not on reducing these to a behavioural level of detail. Various initiatives have led to an emphasis on transferable skills, generic attributes or core competencies (for example, Hughes and Barrie 2010) – that is, skills that all graduates should develop, irrespective of discipline.

There has also been an increased emphasis on making assessment tasks more authentic (Wiggins 1989) – that is, creating tasks that have more of the look and feel of the world of practice than activities that would only be found within an educational institution. This involves, for example, replacing the essay with a range of different writing tasks that involve academic writing adapted for particular contexts. Students learn not to perfect the standard academic essay, but to write in different genres for different audiences. This emphasis on authenticity has also permeated beyond vocational courses to ones which may be regarded as more conventionally academic.

These changes to widen the notion of assessment have positioned it as an indicator not of what students know, but of what they can do. And not only what they can do, but what they can do in a variety of different contexts. What is important here is not the various facets of learning, but how they can be put together into meaningful and complex tasks; the kind of tasks that professional practitioners encounter after they graduate.

Dilemmas and Contradictions in Assessment Practice

So, today, we have a single term – assessment – that is normally used without qualification to refer to ideas with quite different purposes. It means the grading and certification of students to provide a public record of achievement in summary form – summative assessment. It also means the engagement of
students in activities from which they will derive information that will guide their subsequent learning – formative assessment. However, the tasks associated with each have collapsed together. All set tasks now seem to have a dual purpose. Severe problems are created by this arrangement, as it serves neither end very well. Let us take two examples.

First, for the purposes of certification it may be satisfactory for grades to be used to summarise quite complex judgements about what has been achieved. These can be recorded on a transcript and provide a simple overview of patterns of accomplishment. However, this does not work for formative purposes. A grade or mark has little informational content. A ‘C’ on one assignment tells the student nothing in itself about what might be needed for a ‘B’ to be gained in the next assessment. Even when detailed grade descriptors are added, they only reveal what has been done, not what needs to be done. For purposes of aiding learning, rich and detailed information is needed about what criteria have and have not been met, what is required for better subsequent performance and what steps a student might take to get there. For certification, summary grades are normally sufficient; for learning, much more detail is needed. Indeed, there is the suggestion in the research literature (for example, Black and Wiliam 1998) that the provision of a grade may distract students from engaging with more detailed information about their work.

There is a second tension between the two purposes of assessment. It involves the timing of assessment. For purposes of certification, in decision-making for graduation, employment and scholarships, assessment needs to represent what a student can do on the completion of their studies. Difficulties that a student may have experienced in earlier stages of their course – and which have been fully overcome – should not affect the representation of what a student will be able to do. The implication of this thinking is that assessment for certification should occur late in the process of study. Returning to assessment for learning, does late assessment help? The answer is, clearly, no. Information for improvement is needed during the process of study, not after completion of the course. Indeed, early information is most needed to ensure misconceptions are not entrenched and academic skills can be developed effectively. For certification purposes, assessment needs to be loaded later in courses; for learning, it needs to be loaded earlier.
While logic might demand that these two purposes be separated so that both can be done well without the compromises that are required when each is associated with the other, it is now unrealistic to imagine that we can reverse the clock and return to a simpler time when different activities were used for formative and summative purposes. The demands that summative assessment cover a much wider range of outcomes than in the past, along with reductions in resources per student, mean that there may be little scope for additional formative activities.

This pessimistic view of the overwhelming dominance of assessment for certification purposes needs to be balanced by the rediscovery and consequent re-emergence of discussion on formative assessment. The review paper by Black and Wiliam (1998) on formative assessment was one of the very few from the wider realm of educational research that has had an impact on higher education. Many authors took the momentum of this initiative to seek to reinstate the importance of formative assessment (for example, Yorke 2003). However, it is difficult to know the extent to which the considerable discussions of formative assessment in the higher education literature have embedded themselves into courses. Like many innovations that have been well canvassed with positive outcomes (for example, self- and peer assessment), there are many reports of practice, but, unlike the initiatives mentioned above, little sense that the uptake of this idea has been extensive.

In summary, it is apparent that the present state of assessment in practice is often a messy compromise between incompatible ends. Understanding assessment now involves appreciating the tensions and dilemmas between demands of contradictory purposes.

The Emerging Agenda of Assessment

Feedback

Notwithstanding the dilemmas and contradictions of two different purposes of assessment operating together, where is the assessment agenda moving and why might it be moving in that direction? If we look at what students are saying, we could conclude that the greatest issue for them is feedback or, rather, their perceptions of its inadequacy. In student surveys across universities in both Australia and the UK, the top concern is assessment and
feedback (Krause et al. 2009; HEFCE 2011). This is commonly taken to mean that students are dissatisfied by the extent, nature and timing of the comments made on their work. We should be wary, though, of coming too readily to an interpretation of what is meant. As an illustration, surprisingly, students at the University of Oxford also complained of a lack of feedback, even though they were getting prompt, detailed and useful comments on their work (a defining characteristic of the Oxford tutorial system). However, they were concerned that the formative information that they received, while helping them improve their work, did not enable them to judge how well they were tracking for the entirely separate formal examinations conducted at the end of their second and third years (Oxford Learning Institute 2013).

This concern with feedback has led to a range of responses. At the crudest level, there are stories in circulation of pro vice chancellors urging teaching staff to ensure that they use the word feedback at every opportunity when commenting on anything that might be used by students to help them in assessed tasks, so that they remember this when filling in evaluation surveys. More importantly, the concern has prompted researchers to explore differences in interpretation between staff and students as to what they mean by ‘feedback’ (Adcroft 2011). More substantially again, in some cases the concern has led universities to appoint senior personnel to drive improvement and mount systematic and research-based interventions designed to improve feedback in many forms (Hounsell 2007). The initiatives and substantial website developed by Dai Hounsell and his colleagues are particularly notable in this regard (http://www.enhancingfeedback.ed.ac.uk/). Most important for the present discussion, it has prompted scholars to revisit the origins of feedback and question what feedback is and how it might be conducted effectively (Nicol and Macfarlane-Dick 2006; Hattie and Timperley 2007; Boud and Molloy 2013; Merry et al. 2013).

The use of language in assessment lets us down again. As discussed earlier, the term ‘assessment’ is used in everyday language to mean quite different things, but for the term ‘feedback’ the problem is even more severe. We use the term ‘feedback’ in the world of teaching and learning to refer to the information provided to students, mainly by teachers, about their work. This use of the word appears ignorant of the defining characteristic of feedback when used in disciplines such as engineering or biology. Feedback is not just
an input into a system, such as ‘teacher comments on student work’. In engineering and biology, a signal can only be termed ‘feedback’ if it influences the system and this influence can be detected. Thus, a heating system switches on when the temperature falls below a given level and switches off when a higher temperature is reached. The signal from the thermometer to the heater can only be called part of a ‘feedback system’ if it detectably influences the output. If we apply this example to teaching and learning, we can only call the ‘hopefully useful information’ transmitted from teacher to student feedback when it results in some change in student behaviour, which is then manifest in what students subsequently do. The present emphasis on what the teacher writes and when they give it to the student needs to be replaced with a view of feedback that considers what students do with this information and how this changes their future work (Hounsell et al. 2008). Feedback does not exist if students do not use the information provided to them (Boud and Molloy 2012).

We might speculate on how it is that academics who appreciate what feedback is in their own disciplinary area manage to so thoroughly change their understanding of it in teaching; however, it is more fruitful to focus on the implications of a clearer conception of feedback for higher education practice. The first implication is that we must focus attention on the student as an entity to be influenced, rather than solely on the teacher seeking to influence. The second is that we must focus not just on the single act of information provision at one single point in time (important as that might still be), but on what occurs subsequently: what effects are produced? When might these effects be demonstrated? Third, we must be conscious that students are not passive systems, but conscious, thinking agents of their own destiny. How can student agency influence the processes of feedback?

My view is that we are at a point of fruitful innovation that will lead on from a reappraisal of what we mean by feedback to concrete changes in practice. The starting point should focus on course design. Changing what teachers do when confronted with student work in itself presents limited options. Looking at where tasks occur, what their nature is, how they relate to learning outcomes and what follows in future tasks during the semester may start to make a difference. Most importantly, change will only occur if there are active steps in place to monitor what changes take place in students’
work following input from others. Information on the performance of later tasks gives invaluable information to teachers about the effectiveness of their earlier inputs. The design of courses might start with consideration of how feedback loops can be adequately incorporated throughout the semester, so that, for example, students can learn, demonstrate their learning, receive comments about it, act on these comments and produce subsequent work on multiple occasions. Claims by students that feedback is inadequate can only be effectively countered by showing that it does make a difference.

Developing Judgement

While feedback might be prominent publicly, a more fundamental issue concerning assessment is emerging. As it becomes increasingly apparent that assessment is probably the most powerful shaper of what students do and how they see themselves, the question arises: does assessment have the necessary effect on students that is desired for higher education? As there are two purposes, it is likely to have two kinds of effect. First, for certification purposes, does it adequately and fairly portray students’ learning outcomes from a course? Second, does it lead students to focus their attention and efforts on what they will most need on graduation?

While the generic attributes agenda is focusing attention on the features of a graduate needed across all programmes and how these might be developed, there is an underpinning issue that affects all other outcomes. Namely, how does assessment influence the judgements that students make about their own work? Clearly, graduates need to know things, do things and be able to work with others to get things done. But they also need to be able to know the limits of their knowledge and capabilities and be able to tell whether they can or cannot address the tasks with which they are faced. In other words, they need to be able to make judgements about their own work (Joughin 2009). Without this capacity to make effective judgements, they cannot plan and monitor their own learning or work with others to undertake the tasks at hand. How well do their courses equip them to do this? In particular, what contribution does assessment make?

A precursor to this focus on student judgement is found in the literature on student self-assessment and self-directed learning (Knowles 1973). Particularly since John Heron’s seminal article in 1981 (Heron 1988), there
has been a flourishing of studies about student self-assessment, often in conjunction with peer assessment. Unfortunately, much of this literature has been preoccupied with seeking to demonstrate that students can make judgements of their grades similar to those of their teachers (Boud 1995). While many students can do this reasonably well, research has identified that students in introductory classes and those whose performance is weaker than average tend to overrate themselves, and that students in advanced classes and those whose performance is above average tend to underrate themselves. Regrettably, this research focus is an outcome of thinking about certification, not learning. The implicit – and sometimes explicit – aim of this research appears to be trying to judge if student marks could be substituted for teachers’ marks. This is, in the end, a fruitless endeavour for at least two reasons. First, there are likely to be different marks generated by students depending on how great the consequences will be for them. Second, the generation of marks does not address exactly what students are and are not able to judge in their own work.

Viewed from the perspective of assessment for learning, the problem changes dramatically. Of course, students on first encounter with new material will not be good judges of their own performance. As at the beginning of the process they will not sufficiently appreciate the criteria they need to apply to their work, it is understandable that they may err on the side of generosity towards themselves – they just do not know that their work is not good enough. As they gain a greater understanding of what they are studying, they will increasingly appreciate the criteria that generate successful work and be able to apply such criteria to their own work. Once they are sufficiently aware of the complexity of what they are doing and are conscious of their own limitations prior to having reached a level of mastery, they will be sparing in their own judgements and tend to underrate themselves. Self-assessment, then, should be seen as a marker of how well students are tracking in developing the capacity to judge their own work. We should not be dismayed that they are over- or underrating, as this is just an indicator of progress in a more important process: that of calibrating their own judgement (Boud, Lawson and Thompson 2013).
Assessment as Equipping Students for Future Challenges

This leads us to a view about where assessment for learning is heading. Unlike those who seek to set standards and articulate competencies to be achieved, my starting point, like that of Barnett (1997), is that the future is unknown and necessarily unknowable to us (see also Kreber in this volume). Acceptance of this creates constraints and possibilities for what we do in higher education. Of course, new knowledge, skills and dispositions will be required by our students in the future that cannot possibly be acquired now. So, whatever else, we must prepare students to cope with the unknown and build their capacity to learn when the props of a course – curriculum, assignments, teachers, academic resources – are withdrawn. What, then, does that imply for what and how we assess?

Returning to our original distinction between the purposes of assessment as certifying achievement (summative assessment) and aiding learning (formative assessment), it is possible to see a third purpose: fostering lifelong learning. It could reasonably be argued that this latter purpose is merely a subset of formative assessment; however, there are merits in separating it out. Formative assessment, both in the literature and in practice, is predominantly concerned with assisting students to cope with the immediate learning demands of what they will encounter in summative assessment. Acknowledgement may be given to its longer-term importance, but strategies to support that are not so common.

To provide a focus for this idea, I coined the term ‘sustainable assessment’. Following the form of a well-known definition of sustainable development, sustainable assessment was described as ‘[a]ssessment that meets the needs of the present without compromising the ability of students to meet their own future learning needs’ (Boud 2000, 151). This does not refer to assessment being sustainable for the staff who have to carry the marking load, though that is also desirable. Rather, it clearly positions sustainability in terms of student learning. It focuses on assessment tasks that not only fulfil their present requirement – to generate grades or direct immediate learning – but also contribute to the building of students’ capacity to learn and assess themselves beyond the current task. Thus, for example, a sustainable assessment task might involve students in identifying the criteria that are appropriate for
judging the task in hand and planning what further activities are required in the light of their current performance against these criteria. It does not imply that they receive no assistance in these processes, but it does mean that these are not specified in advance by a teacher. Sustainable assessment may not involve wholesale changes in assessment tasks, but it does require changes in the pedagogic practices that accompany them, especially with regard to feedback. Hounsell (2007) and Carless et al. (2011) have taken the notion of sustainable assessment and applied it further to the practices of feedback.

An Agenda for Assessment Change

If we were to generate a set of ideas for assessment that would respond to the demands of coping with uncertainty and prepare students for a future in which learning was a central feature of their lives, what might they include?

Feature 1: Becoming Sustainable

As discussed above, acts of assessment will need to look beyond the immediate content and context to what is required after the end of the course. A view is needed that is not a simple projection of present content and practices, but encompasses what is required for students to face new challenges. A key element of this must be a strong focus on avoiding the creation of dependency on current staff or courses. While assessment to please the teacher supposedly no longer takes place, any residue of this must be addressed. The more insidious challenge is to ensure that assessment does not involve always looking to teachers for judgement. Multiple sources to aid judgement must become a normal part of assessment regimes.

Feature 2: Developing Informed Judgements

As argued before, students must develop the capacity to make judgements about their own learning; otherwise, they cannot be effective learners, whether now or in the future. This means that assessment should focus on the process of informing students’ own judgements, as well as on others making judgements on their work for summative purposes (Boud and Falchikov 2007). The development of informed judgement thus becomes the sine qua non of assessment. Whatever else it might do, this is needed to ensure that graduates cope well with the future. We should be aware that summative assessment
alone is too risky and does not equip students for new challenges. By its nature, it tends to be backward-looking, and it is not a strong predictor of future accomplishment. Assessment, then, is more important than grading and must be evaluated on the basis of what kinds of students it produces. Of course, opportunities for developing informed judgements need to be staged across a programme, as isolated opportunities to practise this will not plausibly lead to its development. Therefore, thinking about assessment across course modules becomes not just desirable, but essential.

Feature 3: Constructing Reflexive Learners

We have come a long way from when assessment was a secretive business to which students were blind. Transparency is now a key feature. However, there is a difference between openness and involvement. If students are to develop their own judgements and if assessment events are the focus of such judgements, students need to become active players. They need to understand what constitutes successful work, be able to demonstrate this and judge whether what they have produced meets appropriate standards. Students must necessarily be involved in assessment, because they need to know how to do it for themselves. Assessment, then, needs to position students to see themselves as learners who are proactive, generative and drive their own learning.

An example of this is in the use of rubrics. Providing a rubric that specifies the learning outcomes and the criteria associated with each task, to be used by tutors when marking, offers limited scope for reflexivity: students follow the path of others. If, in contrast, the task prompts students to construct and use a rubric, they become actively involved in making decisions about what constitutes suitable criteria. This requires them to demonstrate that a learning outcome has been addressed, identify signs in completed work that indicate what has and has not been achieved and then gather evidence from other parties through seeking and utilising feedback. Fostering reflexivity and self-regulation is not something to be relegated to a limited number of tasks, but should be made manifest through every aspect of a course. The programme and all its components need to construct the reflexive learner.
Feature 4: Forming the Becoming Practitioner

Finally, assessment needs to shape the becoming practitioner. All students become practitioners of one kind or another. It is only in particular professional or vocational courses that it is known what kind of practitioner they are likely to become. There are common characteristics of all those who practise in society: they address issues, they formulate them in terms of addressable problems and they make judgements. Assessment to this end needs to help students calibrate their own judgements (Boud, Lawson, and Thompson 2013). Learners act on their belief in their own judgements; if these judgements are flawed, this is more serious than particular gaps in knowledge. Assessment, then, needs to contribute to students developing the necessary confidence and skills that will enable them to manage their own learning and assessment. Understanding is not sufficient; showing that they can perform certain tasks is not enough. Capable beginning practitioners need to be able to become increasingly sophisticated in judging their work. In particular, they need to be able to do so when working effectively with others, in order to assist each other in their learning and mutually develop informed judgement.

This view provides a substantive agenda for further changes in assessment. Any one of the particular elements mentioned above can be seen in the literature, but they are rarely seen in concert, even less so across a curriculum.

Conclusion

In conclusion, what would assessment that helped meet future challenges look like? It would start by focusing on the impact of assessment on learning as an essential assessment characteristic. It would position students as active learners, seeking an understanding of standards and feedback. It would develop their capacity to make judgements about learning, including that of others. It would involve treating students more as partners and less as subjects in assessment discussions. And it would contribute to building learning and assessment skills beyond the particular course.

Of course, the first question to be asked is: would such assessment practice be more demanding for teachers? It would require us to think much more clearly about what changes in students we expect courses to influence. It would also require an initial investment in redesigning courses, as well
as a redistribution of where we focus our efforts. But following this initial adjustment, it could potentially lead to a more satisfying use of our time, with less time and energy spent on repetitive tasks and more on what really makes a difference.

References

Adcroft, A. 2011. ‘The Mythology of Feedback.’ Higher Education Research and Development 30, no. 4: 405–19.
Barnett, R. 1997. Higher Education: A Critical Business. Buckingham: The Society for Research into Higher Education and Open University Press.
Black, P. J. 1968. ‘University Examinations.’ Physics Education 3: 93–9.
Black, P., and D. Wiliam. 1998. ‘Assessment and Classroom Learning.’ Assessment in Education 5, no. 1: 7–74.
Boud, D. 1995. Enhancing Learning through Self Assessment. London: Kogan Page.
Boud, D. 2000. ‘Sustainable Assessment: Rethinking Assessment for the Learning Society.’ Studies in Continuing Education 22, no. 2: 151–67.
Boud, D., and N. Falchikov. 2007. ‘Developing Assessment for Informing Judgement.’ In Rethinking Assessment for Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 181–97. London and New York: Routledge.
Boud, D., and E. Molloy, eds. 2013. Feedback in Higher and Professional Education. London: Routledge.
Boud, D., R. Lawson, and D. Thompson. 2013. ‘Does Student Engagement in Self-Assessment Calibrate their Judgement Over Time?’ Assessment and Evaluation in Higher Education. Accessed December 10, 2013. doi:10.1080/02602938.2013.769198.
Carless, D., D. Salter, M. Yang, and J. Lam. 2011. ‘Developing Sustainable Feedback Practices.’ Studies in Higher Education 36, no. 5: 395–407.
Ebel, R. L. [1965] 1972. Essentials of Educational Measurement. Englewood Cliffs: Prentice-Hall.
Falchikov, N., and D. Boud. 2007. ‘Assessment and Emotion: The Impact of Being Assessed.’ In Rethinking Assessment for Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 114–56. London: Routledge.
Hattie, J., and H. Timperley. 2007. ‘The Power of Feedback.’ Review of Educational Research 77: 81–112.
Heron, J. 1988. ‘Assessment Revisited.’ In Developing Student Autonomy in Learning, edited by D. Boud, 77–90. London: Kogan Page.
Higher Education Funding Council for England. 2011. The National Student Survey: Findings and Trends 2006–2010. Bristol: Higher Education Funding Council for England.
Hounsell, D. 2007. ‘Towards More Sustainable Feedback to Students.’ In Rethinking Assessment for Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 101–13. London and New York: Routledge.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. ‘The Quality of Guidance and Feedback to Students.’ Higher Education Research and Development 27, no. 1: 55–67.
Hughes, C., and S. Barrie. 2010. ‘Influences on the Assessment of Graduate Attributes in Higher Education.’ Assessment and Evaluation in Higher Education 35, no. 3: 325–34.
James, R., K. L. Krause, and C. Jennings. 2009. ‘The First Year Experience in Australian Universities: Findings from a Decade of National Studies.’ Accessed December 10, 2013. http://www.cshe.unimelb.edu.au/research/experience/docs/FYE_Report_1994_to_2009.pdf.
Joughin, G., ed. 2009. Assessment, Learning and Judgement in Higher Education. Dordrecht: Springer.
Knowles, M. S. 1973. The Adult Learner: A Neglected Species. Houston: Gulf Publishing Company.
Merry, S., M. Price, D. Carless, and M. Taras, eds. 2013. Reconceptualising Feedback in Higher Education. London: Routledge.
Nicol, D., and D. Macfarlane-Dick. 2006. ‘Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.’ Studies in Higher Education 31, no. 2: 199–218.
Oxford Learning Institute. 2013. ‘Assessment and Feedback.’ Accessed May 16, 2013. http://www.learning.ox.ac.uk/support/teaching/resources/assess/.
Rowntree, D. 1977. Assessing Students: How Should We Know Them? London: Harper and Row.
Wiggins, G. 1989. ‘A True Test: Toward More Authentic and Equitable Assessment.’ Phi Delta Kappan 70, no. 9 (May): 703–13.
Yorke, M. 2003. ‘Formative Assessment in Higher Education: Moves towards Theory and the Enhancement of Pedagogic Practice.’ Higher Education 45, no. 4: 477–501.
2 Flourishing amid Strangeness and Uncertainty: Exploring the Meaning of ‘Graduateness’ and its Challenges for Assessment[1]

Carolin Kreber

[1] A shorter version of this chapter was previously published in Teaching in Higher Education (Kreber, C. 2014. ‘Rationalising the Nature of “Graduateness” through Philosophical Accounts of Authenticity.’ Teaching in Higher Education 19, no. 1. Accessed December 10, 2013. doi:10.1080/13562517.2013.860114).

Introduction
During his tenure as Vice-Principal for Academic Enhancement at The University of Edinburgh, Professor Hounsell took leadership of the sector’s engagement with the Scottish enhancement theme ‘Graduates for the 21st Century’ (QAA Scotland 2008–2011), thereby contributing to a deeper understanding of the meaning of ‘graduateness’ and the related notion of graduate attributes (Hounsell 2011a, 2011b). Determining the nature of ‘graduateness’ requires us to stand back and ask: what ought to be the educational purposes of universities at this time? And, more specifically: does the world we live in today render these purposes markedly different to, or perhaps broader than, those of earlier times?

In recent years these questions have been widely debated and many universities have begun to draw up statements of so-called ‘generic graduate attributes’ that stand for the key skills, abilities and dispositions students are expected to attain over the course of their university study. While often inspired by the employability agenda, a broader sense of the educational
purposes of higher education is, at times, acknowledged in the literature on graduate attributes. For example, Bowden et al. (2000) defined ‘graduate attributes’ as:

the qualities, skills and understandings a university community agrees its students should develop during their time with the institution. These attributes include but go beyond the disciplinary expertise or technical knowledge that has traditionally formed the core of most university courses. They are qualities that also prepare graduates as agents of social good in an unknown future. (Cited in Hounsell 2011b)
Although there is variation among the graduate attribute statements drawn up by different institutions, there also tends to be agreement on some core or ‘enabling’ attributes (Barrie 2004). These include, for example, being capable of continuing learning in a world that is uncertain (see also Boud in this volume), having an inquiry orientation and being capable of contributing effectively to civic life in a global context (see Hughes and Barrie 2010).

In this chapter, I undertake an inquiry into the nature of ‘graduateness’ that is inspired by philosophical accounts of the notion of ‘authenticity’. The intent is to offer a theory-based – specifically, a philosophically based – rationale for the singling out of certain core attributes as critical today and to offer an outline of the conditions that assessment practices would need to meet in order to promote these attributes. My way into this chapter is to introduce two distinct, yet, I suggest, complementary ideas on the pedagogical purposes of higher education. One is represented here by the work of Sullivan and Rosin (2008), the other by that of Barnett (1997, 2005, 2006, 2007, 2009). Together, they inspire a particular understanding of ‘graduateness’.

Within the North American context, philosophers William Sullivan and Matthew Rosin proposed that the main educational purpose of higher education today is to encourage students’ ‘participation in meaning-giving practices’ (Sullivan and Rosin 2008, 25), calling this ‘a new agenda for higher education’ (124). Meaning-giving practices, the authors suggest, rely on connecting and identifying with something that is larger than oneself. Students’ participation in meaning-giving practices, they argue, is fostered through the cultivation of practical reasoning, which combines the intellectual ability typically promoted through higher education with moral purpose and
community engagement. Practical reasoning is, then, not so much about the development of a skill as about the development of a certain kind of person, ‘a person disposed towards questioning and criticizing for the sake of more informed and responsible engagement’ (Sullivan and Rosin 2008, xvi).

In the United Kingdom (UK), philosopher and theorist of higher education Ron Barnett similarly emphasises the ‘person’ dimension of learning and the importance of higher education in fostering not only critical thinking, but ‘critical being’. However, for Barnett, the key challenge for higher education lies not in students developing moral commitments or becoming more socially engaged, although such qualities are clearly desirable. The core disposition to be acquired through higher education is an inner capacity to cope with two distinct phenomena of our times: epistemological uncertainty and complexity, which, together, he suggests, result in an existential experience of ‘strangeness’. This disposition, according to Barnett, is fundamental or a prerequisite to attaining qualities[2] such as moral commitment and social engagement. Although Barnett’s language is at times a little heavy-going and may not appeal to everyone, I think there is an important idea expressed in his work that deserves consideration when thinking about the nature of ‘graduateness’ and the assessment practices most likely to support it.

[2] For a discussion of the distinction drawn between dispositions and qualities, see Barnett (2007, 2009). Dispositions are more fundamental and open the door for possible qualities. Put differently, qualities describe the particular direction or shape that dispositions might take.

My intent in this chapter is to first attempt a synthesis of the above two ideas on the pedagogical purposes of higher education (and therefore of ‘graduateness’) and then explore their implications for assessment in higher education. Running through this chapter are the concepts of ‘authenticity’ (Barnett 2007; Kreber 2013) and ‘practical reasoning’ (Hounsell 2011b; Sullivan and Rosin 2008), as well as ‘uncertainty’, ‘complexity’ and ‘strangeness’ (Barnett 2004, 2005). My argument will lead me to conclude that assessment and feedback practices that require students to question and defend their own and others’ knowledge claims, as well as apply, broaden and challenge their academic knowledge through experiential learning (whereby they relate abstract academic content to concrete real-life issues of social relevance), can foster learning
for the longer term. These practices also promote transformative learning, leading to a deeper understanding of subject matter and self and an identity grounded in a sense of responsible agency, commitment and authenticity.

Coping with ‘Strangeness’ and Uncertainty

The notion of ‘super-complexity’ is employed by Barnett (2004, 252) as a shorthand for referring to the multi-level challenges students are exposed to in making sense of their experiences. The notion is particularly helpful for unpacking the deeper meaning of ‘graduateness’. Barnett suggests that there are two different challenges that students need to grapple with, which stand in a hierarchical relationship to one another. The first is the challenge brought about by the rapid and ongoing advancements in knowledge. What to believe or consider as ‘true’ is constantly being called into question due to these changes, leading to awareness that the future is unknown or unpredictable. Universities, according to Barnett, have a responsibility to develop in students the capacity to cope with this epistemological uncertainty. However, this challenge is made still more complex – indeed, is made super-complex – by the increase in specialisations. It is one thing to prepare students for the reality that knowledge is uncertain as one advances through further scholarship or discoveries within a particular field or discipline; it is quite another to prepare them for the additional challenge of different disciplinary specialisations producing often incompatible frameworks and discourses through which to interpret this knowledge (this is not just a reflection of what is happening within the academy, but, importantly, also outside it, reflecting the condition of globalisation). ‘This is an age that is replete with multiplying and contradicting interpretations of the world’, Barnett (2007, 36–7) observes.

Given this super-complexity, Barnett (2005) contends, the important task of higher education is to cultivate in students ‘human capacities needed to flourish amid “strangeness”’ (794). Such flourishing, he suggests, can be supported through ‘a pedagogy of affirmation’ (Barnett 2005, 795) or, as stated elsewhere, a pedagogy of ‘solicitude’ (Barnett 2007, 129) or, as stated again elsewhere, ‘a pedagogy of human being’ (Barnett 2004, 247). What is important, Barnett adds, is that students are encouraged not only to endure the strangeness associated with super-complexity, but to become part of it. ‘For
ultimately’, he suggests, ‘the only way, amid strangeness, to become fully human, to achieve agency and authenticity, is the capacity to go on producing strangeness by and for oneself’ (Barnett 2005, 794). This is an intriguing and intuitively compelling statement, but what does it mean? What would it entail if students were to develop the capacity to produce ‘strangeness’ by and for themselves?

There are four ideas here that require unpacking. The first is the idea of ‘strangeness’; the second that of authenticity, which is made possible through ‘strangeness’; the third is the idea that coping with ‘strangeness’ requires authenticity; and the fourth is the idea that authenticity means producing ‘strangeness’ by and for oneself. I will address each of these in turn. I will then show that fostering authenticity through higher education means more than promoting critical thinking skills and abilities; even more than what we typically understand by critical thinking dispositions. It is also, importantly, a matter of developing a personal disposition towards openness to experiences – that is, openness to one’s own possibilities or flourishing – and developing the qualities of personal commitment and responsible engagement. I will argue that the notion of ‘graduateness’ as defined in this chapter, underpinned by the above disposition and qualities, can be usefully understood through three rather different perspectives on authenticity[3]: the existential, the critical and the communitarian. My intent is not so much to present these ideas on the pedagogical purposes of higher education – or ‘graduateness’ – as new, but rather to provide a rationale for them; a rationale grounded in philosophical discourse, the latter not commonly reflected in the assessment literature.
[3] I note that the notion of authenticity has been used in the higher education literature in different ways, at times with little consideration of its underlying philosophical assumptions. However, different philosophical traditions, too, interpret the notion in distinct ways. Despite the complexity of the notion of authenticity, the absence of a single definition or construct and the fact that contributors to this volume use the notion differently (referring, for example, to the authenticity of the student, the authenticity of the student’s experience or the authenticity of the learning task [meaning its correspondence to the demands of the ‘real’ world]), the notion of ‘authenticity’, in all these interpretations, can be helpful for thinking about teaching, learning and assessment in higher education in fresh and innovative ways.
Strangeness

The notion of ‘strangeness’ is linked to, but is also stronger than, uncertainty. When we say we are uncertain, we want to convey that we have not fully worked things out yet, but we hold open the possibility that we might do so in the future. ‘Strangeness’, however, suggests that our previous ways of understanding do not serve us well or that too many contradictory alternatives present themselves simultaneously. ‘Strangeness’ makes us feel disoriented and uprooted. When we experience a situation as strange, it typically arouses certain emotions in us. This is usually accompanied by a feeling of uneasiness, disquiet and anxiety. ‘Strangeness’, therefore, is reminiscent of Freud’s ‘uncanny’ (Bayne 2008) or what the Germans call ‘das Unheimliche’. When something feels strange or ‘unheimlich’ we no longer feel at home, but rather pushed out of our comfort zone. The reality of ‘super-complexity’ (described earlier) produces such strangeness. It is not just intellectual uncertainty; it is an experience that also affects us on a deeper level, both personally and emotionally. It thus affects the core of our being. However, while deeply challenging, the experience of ‘strangeness’ does not stifle our development, but, paradoxically, is essential for helping us achieve the full potential of our being. Why should this be the case? And why should contributing to even more ‘strangeness’ be the solution?

Authenticity made possible through strangeness

The modern world is characterised by rapid and constant change and often conflicting discourses, leading not only to a sense that the future is uncertain, but also, at times, to a sense of an unfamiliar, uncontrollable present. It would be nice to be able to rely on long-held assumptions, beliefs and conventions that have stood the test of time and continue to provide comfort and security. But doing so is often not an option, unless we are willing to deceive ourselves. A sense of insecurity and uncertainty brings with it a feeling of ‘strangeness’. Yet it is precisely this experience of strangeness that makes us think about our assumptions and reconsider their validity. To face the reality of ‘strangeness’, we must muster the courage to question received wisdom and convention ourselves. This is both an emotional and an intellectual challenge. It is the challenge of authenticity.

From an existential perspective, becoming authentic implies that we
become aware of our own unique purposes and possibilities in life, thereby becoming authors of our own life – ‘beings-for-themselves’ – who take responsibility for our actions and stand by our inner commitments (Malpas 2003; Sherman 2003). Heidegger saw the great task and possibility of being human in the freedom we possess to break free from self-deception or das Man (often translated as ‘the they’), whereby ‘the they’ refers to our unexamined ways of being (what ‘one’ does or believes). Broadly similarly, Sartre spoke of us living in ‘bad faith’, thereby referring to our tendency to want to take for granted and leave intact conventional ways of doing things, although we are on some level aware that we are deceiving ourselves. The key point to hold on to is that it is only when we encounter ‘strangeness’ that we ourselves begin to question what until then we took for granted. For Heidegger (1962), anxiety (felt as a result of ‘strangeness’) opens up new possibilities, and it is through anxiety that a person stands a chance to move towards greater authenticity. The very process of reaching authenticity provokes anxiety, but anxiety is also a prerequisite to authenticity. This point will be explored next.

Coping with strangeness requires authenticity

Barnett argues that encountering anxiety is a condition of what it means to be a student and that this is similar to what we feel when we participate in the world of contradictory discourses and ill-defined problems, making us aware of the impossibility of ready-made solutions. What higher education needs to provide are ways of helping students live with this anxiety or strangeness produced by ‘super-complexity’. To cope with this ‘super-complexity’ or with ‘strangeness’, Barnett (2007) argues, student being itself has to become more complex. This complexity of being is grounded in a willingness to challenge oneself and throw oneself forward towards authenticity. Complex being is open to experience. It is open to its own possibilities, to its own striving for authenticity.

Authenticity implies producing strangeness by and for oneself

Elsewhere, Barnett argued that:

A genuine higher learning is subversive in the sense of subverting the student’s taken-for-granted world . . . A genuine higher education is unsettling; it is not meant to be a cosy experience. It is disturbing because, ultimately, the student comes to see that things could always be other than they are. (Barnett 1990, 155)
A caring teacher, therefore, will encourage students to challenge themselves, to move out of their comfort zone, to come into themselves and thus to achieve their own possibilities or full potential of being. But to do so, the student must want to challenge him or herself; the student must want to grow. The student who cannot accept feedback that points out his or her mistakes is limited in his or her potential for growth. A certain disposition is required.

Being able to contribute to ‘strangeness’ or producing ‘strangeness’ means being able to accept that problems are multidimensional and interdependent and that further research may not simplify, but only reveal additional complexity. It is an acceptance that experts themselves may not find the answers to some problems or will disagree on what they believe the best answer is. Being able to contribute ‘strangeness’ means being able to participate in these discourses and not feel threatened by them. It means to speak for oneself, as there is no authority to rest on. Becoming authentic in this way means having the courage to let go of the need for prior confirmation and to contribute one’s own alternatives. It means becoming the author of one’s life. Of course, in this process, other voices are taken into account, but one is not determined by these (Baxter Magolda 1999). What is at stake here is something that goes beyond being able to think critically.

Beyond Critical Thinking

Critical thinking, in the literature often discussed as being distinctive to particular disciplines and contexts (for example, Donald 2002; Hounsell and Anderson 2009; McPeck 1990), is typically described in terms of cognitive abilities and skills (for example, Watson and Glaser 1984). However, several educationalists and philosophers of education have also highlighted the importance of certain critical thinking dispositions (for example, Ennis 1962; Facione, Sanchez, and Facione 1994; Norris 1992; Perkins, Jay, and Tishman 1993; Paul 1990; Siegel 1988). Norris (1992) pointed out that an individual will use the ability for critical thinking only if so disposed. Several specific dispositions have been identified, such as: trying to be well informed; seeking reasons; taking account of the whole situation and so forth (Ennis 1962) or being intellectually careful; being inclined to wonder, problem-find and investigate; evaluating reasons; being meta-cognitive and so forth (Perkins, Jay, and Tishman 1993). Facione, Sanchez, and Facione (1994) identified
seven dispositions that are fundamental to critical thinking, including open-mindedness, inquisitiveness, systematicity, analyticity, truth-seeking, critical thinking self-confidence and maturity. There have also been attempts at singling out an overarching disposition. For Paul (1990), for example, this is captured in the notion of ‘fairmindedness’, while Siegel (1988) speaks of ‘critical-spiritedness’. Dispositions are seen to make critical thinking possible or to influence its direction.

Critical thinking skills, abilities and dispositions are all involved in making autonomous, rational choices. However, when students become authentic, rather than merely autonomous, they develop the capacity to make choices that are bound up with their own inner motives (Bonnett 1976, 1978; Bonnett and Cuypers 2003). They become personally invested in their choices and feel a deep inner commitment to them. It is through developing a commitment to a claim (or larger cause) that students become authentic and thus able to cope with strangeness. In the play and film Educating Rita, we learn that it is not sufficient for Rita to know how to critique a text through the conventions of literary criticism; the much more profound learning occurs for Rita when she becomes capable of critiquing herself and her real motives. To develop commitment, students have to become self-critical, which also involves learning to self-assess their work.

Barnett (2007, 160), too, suggests that ‘[b]ecoming authentic in higher education is none other than the formation of critical being’. Importantly, critical being is not just a matter of intellectual dispositions, although, as David Nicol highlights in this volume, these are indeed significant. The student needs to internalise critical voices, not just repeat them. The student needs to take a stance and be willing to do so. Authenticity therefore involves, or brings about, what is typically described as critical thinking (or intellectual) abilities and dispositions,[4] but it also goes one step beyond. The person who is becoming authentic is invested in, and committed to, his or her own choices, and this requires a more fundamental disposition: it requires openness to experience, openness to one’s own possibilities of being, openness to have previous experiences challenged and reconstructed (Dunne 2011) and a willingness to engage with ‘strangeness’.

[4] On my interpretation, Barnett would prefer the term ‘qualities’ to describe what others call critical thinking dispositions.
The Role of the Teacher

What might be the role of the teacher in this context? The role of the teacher, according to Barnett (2007), is to provide space for risk. It is important that students learn to take risks, with all the implications, but to do so with support. Heidegger’s notion of teaching as ‘letting learn’ comes to mind. Heidegger also emphasises the difference between ‘leaping in’ and ‘leaping ahead’ (Zimmerman 1986, 94–5). Leaping in for someone means to take over responsibility for this person. Such acts of taking over, even if well intended, diminish the person’s authenticity. Think of the supervisor who tells the struggling student what argument to make in his or her thesis. Or think of the teacher who feels sorry for the shy student in the seminar and therefore never expects him or her to contribute in class. Or think of the teacher who sees his or her role exclusively as one of responding to learners’ expressed needs, thereby never challenging them out of their comfort zone. While leaping in for the other means to take over responsibility for the other, leaping ahead means to let the other take responsibility for him or herself. For a student to take over responsibility may well require the support of a caring teacher who believes in the student’s abilities and thereby instils confidence. For teachers who have the important interests of the student at heart, rather than the student’s expressed needs (for a distinction, see, for example, Brookfield 1986, 2005), the notion of leaping ahead is particularly helpful.

However, to be clear, leaping ahead does not – indeed cannot – manifest itself as a general strategy. The teacher committed to preparing students to flourish amid strangeness and uncertainty will be flexible and not treat all students the same. He or she will know when more support and when more challenge is required. A ‘pedagogy of solicitude’ (Barnett 2007) or ‘caring’ is, of necessity, flexible in this way and mindful of the particular needs of individual students. Indeed, when to offer support, when to stand back and when to challenge will always depend on the particular student that the teacher is working with. While I propose here as an essential and general goal of higher education that students move towards authenticity, the approaches we might take in order to invite individual students into their authenticity must be rooted in attention to the particularity of the case (Kreber 2013). Critical observers will argue that seeing students as
individuals and attending to their idiosyncratic needs is a common theme in the teacher professionalism literature and is not an idea born out of the present analysis of authenticity. This is, of course, true; however, the point here is that the student’s authenticity, by definition, is unique to each student, thereby demanding this level of professionalism in teaching. There are important questions for assessment, the obvious one being whether all students should be expected to do the same assessments or whether there can be flexibility built into courses that allows students to choose; the extent to which this is possible will, in part, depend on the subject matter. However, before turning to the implications for assessment, I shall introduce two additional dimensions of authenticity, as these help us move the discussion forward.

Broadening the Concept of Authenticity: Towards a Synthesis of Two Core Ideas on the Pedagogical Purposes of Higher Education

In addition to the existential dimension discussed above, authenticity has two more dimensions that are relevant to this chapter. One is the critical dimension (for example, Habermas 1983), while the other is the communitarian (for example, Taylor 1991). Although the three dimensions are related, for analytical purposes it makes sense to discuss them separately.

The critical dimension highlights emancipation from ideology and hegemonic assumptions. It adds to the existential dimension the idea of critical consciousness raising (Freire 1971; Habermas 1983). Viewed from a critical perspective, we move towards greater authenticity as we become conscious of how socially learned truths (ideas we pick up through interactions with our various communities that we then uncritically assimilate) influence how we make sense of the world. So rather than equating authenticity with pure self-experience, the critical dimension suggests that authenticity demands that people recognise how their views of the world have been shaped by the conditions or structures inherent in the contexts in which events were experienced. Examples of socially constructed assumptions that students might unconsciously hold and could come to question include: experts have all the answers; there is only one truth; I am not worthy of studying at this university; only texts from white European Anglo-Saxon cultures are of any value to my learning; I am especially deserving of my privileges and so
on.[5] There are, of course, also assumptions that are more discipline-specific. To mention just one example, one might think of a student who begins to critically reflect on why certain explanatory frameworks have come to determine how we understand a given discipline or problem, what alternative ways of knowing are concealed by this and what the implications of this concealment are.

[5] While these assumptions will appear ‘obviously wrong’ to most readers, the point is that they appear ‘self-evidently true’ to some students. For a more comprehensive discussion of self-evidently true assumptions, see, for example, Brookfield (2005).

The communitarian perspective (Taylor 1991) highlights that it is our engagement in meaning-giving practices that have evolved in our communities that gives us identity and purpose in life. What ultimately provides a person’s life with meaning, it is argued, is not just the furthering of his or her self-interests, but the person contributing to something larger than him or herself with which he or she identifies. Articulations of ‘graduate attributes’ often include statements regarding critical thinking and problem-solving skills, but sometimes, as we saw at the beginning of this chapter, they also make reference to ethical, social and professional understandings; collaboration, teamwork and leadership; and global citizenship (Hounsell 2011b; Hughes and Barrie 2010). When Martha Nussbaum (1997, 52) argues that ‘[w]e must educate people who can operate as world citizens with sensitivity and understanding’, she clearly assumes that graduates ought to be able to think critically about the information and arguments they encounter; however, her main concern is whether university graduates will choose to employ their critical reasoning abilities for the common good. Such choices necessitate an ability to think critically and a disposition to do so when it is called for (Norris 1992), but what is required goes one step further. Such choices also necessitate a willingness to take risks, be courageous and, importantly, be able to feel compassion towards those in need. Similar to Sullivan and Rosin (2008), Nussbaum (1997) expresses a concern with moral commitment and responsible community engagement. In his keynote presentation at the Eighth Enhancement Themes Conference in Edinburgh entitled ‘Critical Thinking and Beyond’, Hounsell (2011b), citing Sullivan and Rosin (2008), makes the point that:
‘Critical thinking’ means standing apart from the world and establishing reasons and causes. This is a necessary aspect of practical reasoning, but is not sufficient for responsible judgment. Education must also give students access to valued practices for engaging the world more mindfully.
Practical reasoning is a type of meaning-making that moves back and forth between the particularities of the case at hand and the generality of abstract principles (derived from different disciplinary traditions or newer areas of specialisation) (Sullivan and Rosin 2008). Practical reasoning involves accurately assessing a given situation and making an appropriate decision, while abandoning the security offered by rules and regulatives (Dunne and Pendlebury 2003, 198). Practical reasoning therefore also requires personal investment in one’s actions – we might also say, leaning on Bonnett (1978) and Bonnett and Cuypers (2003), that it requires authenticity.

At the beginning of this chapter, I argued that there are two different, albeit complementary, ideas on the educational purposes of universities and hence on the notion of ‘graduateness’. One draws on European existentialism, emphasising how human being is affected by, and deals with, the challenges of being in the world; the other is grounded in the North American tradition, emphasising community engagement. Both ideas, I argue, are equally important to ‘graduateness’ and both are usefully conceptualised under the broader notion of authenticity that I sketched out. All three dimensions of this broader notion of authenticity involve a certain sense of self-knowledge and criticality; however, the focus is slightly different in each. The existential dimension of authenticity addresses the development of the fundamental disposition of being open to experience (and one’s own possibilities) and willingly seizing opportunities for change and development arising from these experiences (Barnett 2004); in other words, a capacity to cope with, and contribute to, ‘strangeness’. ‘Strangeness’ propels us to question assumptions, which opens up the possibility for authenticity. The relationship between authenticity and ‘strangeness’ is reciprocal: by becoming more authentic, being becomes capable of coping with, and contributing to, strangeness, and it is the experience of ‘strangeness’ itself that helps being become more authentic. The critical dimension of authenticity stresses reflection on socially constructed, and often uncritically assimilated, assumptions
of how the world should be. The communitarian dimension emphasises that authenticity involves an appreciation of our social interrelatedness, thereby associating authenticity also with moral commitments and engagement in meaning-giving practices within the community. The critical and communitarian dimensions of authenticity are involved, in particular, in developing the qualities of moral commitment and responsible engagement (Sullivan and Rosin 2008). These two dimensions have more to do with developing a particular kind of person. The disposition and qualities stand in a particular relationship to one another. The disposition is foundational. In order to become any kind of person, the person, of necessity, first has to be open. Becoming requires openness or complexity of being.

The notion of ‘graduateness’ is usefully understood through all three dimensions of authenticity: the existential, the critical and the communitarian. Of course, separating these three dimensions makes sense only for analytical purposes. Achieving one’s full potential of being involves critically questioning one’s choices and recognising that one’s own flourishing hinges on the flourishing of others. Nonetheless, our social interrelatedness – in the sense of it giving rise to moral commitments and social responsibility – is most central to both the critical and communitarian dimensions.[6] Table 2.1 is my attempt at a graphical illustration of what has been argued. The table also shows how central attributes frequently found in institutional graduate attribute statements relate to the proposed model – for example, the foundational or so-called ‘enabling’ attributes identified by Hughes and Barrie (2010).

[6] However, it would not be true to say that our social relatedness does not matter to the existential dimension, as for Heidegger ‘the they’ also stands for the social world we are part of that provides orientation and guidance on how to live together (but the emphasis there is different).
Table 2.1 Conceptualising the nature of ‘graduateness’ through philosophical accounts of authenticity

Three interrelated dimensions of authenticity:
- Existential: how human being is affected by, and deals with, the challenges of being in the world.
- Critical: how people become conscious of socially constructed and often uncritically assimilated assumptions.
- Communitarian: how people come to appreciate their social interrelatedness and find purpose in meaning-giving practices in the world (for what ultimately provides a person’s life with meaning is contributing to, and identifying with, something larger than him or herself).

Purposes of higher education that go beyond ‘critical thinking’:
- Barnett (2005): develop more complex being – the capacity for coping with, and producing, ‘strangeness’ (existential dimension).
- Sullivan and Rosin (2008): develop a particular kind of person who can reason practically, combining academic ability with moral purpose and community engagement (critical and communitarian dimensions).

The nature of core graduate attributes that follow from these dimensions:
- Existential: a fundamental disposition of being open to experience.
- Critical and communitarian: the fundamental disposition allows for the qualities of moral commitment and responsible engagement.

Linkages to ‘enabling attributes’ (Hughes and Barrie 2010):
- Existential: being capable of continuing learning in a world that is uncertain; having an inquiry orientation.
- Critical: having an inquiry orientation; being capable of contributing effectively to civic life.
- Communitarian: being capable of contributing effectively to civic life; having an inquiry orientation.
Implications for Assessment

Increasingly, institutions are exploring ways of supporting, measuring and documenting the attainment of graduate attributes through appropriate pedagogies and assessment (Hughes and Barrie 2010); for example, by integrating graduate attributes directly into programme validation, thereby requiring the programme team to clearly articulate how, when and where in the programme students have the opportunity to develop these attributes (Hounsell 2011a). Below I offer a few suggestions for how the fundamental disposition and the two qualities I identified as being among the core attributes of ‘graduateness’ can be promoted through assessment and feedback practices. So-called ‘authentic’ assessment, referring to assessment tasks that are situated within, or correspond to, the ‘real world’ or appropriate social, professional and disciplinary contexts, can at times be helpful with this; however, the ‘authentic’ assessment literature (for example, Neumann, King, and Carmichael 2007; Svinicki 2005; Wiggins 1989) is informed by very different assumptions about the meaning of ‘authenticity’ than those underlying the conceptual framework depicted in Table 2.1. In light of the previous discussion, I suggest that the intent of feedback and assessment practices should be to support students’ capacity to cope with uncertainty by becoming more open to experiences, reconstruction and change (Barnett 2005) and by developing practical reasoning (Sullivan and Rosin 2008).

A question that needs to be asked early on is whether assessment could, indeed, be counterproductive to the attainment of certain attributes – in the sense that it might hinder rather than enable students’ development of desired qualities. When assessment confuses the demonstration of a certain performance with the cultivation of qualities such as an inner motivation towards moral commitment and social engagement, it is far from certain that the same performance would be observed in contexts where no immediate reward (such as a grade or report card) ensues from the activity. Assessment tasks might convey the impression to students that the value lies exclusively in the doing, not in the reasons behind the doing. However, the key idea behind Sullivan and Rosin’s (2008) proposal to make the development of practical reasoning the new agenda for higher education is, of course, that it requires students to make their reasons explicit. In doing so, they must reflect critically on the abstract knowledge acquired in courses, in order to make sense of and, crucially, make judgements about, particular events, situations or problems in real life. For a concrete example of how students can make their practical reasoning explicit, I refer readers to Kandlebinder (2007).

Practical reasoning requires the ability to think critically about abstract knowledge, but it is, crucially, also a moving back and forth between the
particulars of the case and the abstract principle. Imagine an engineer in the process of determining whether and how to build a new high-speed railway system in a complex environment. Will his or her thinking be limited to reasoning about the applicability of the abstract rule to the concrete technical problem? He or she surely will ponder the applicability of abstract rules or systemised knowledge to the particular case; however, he or she will do this in a particular way. As we saw earlier, practical reasoning is about the development of a certain kind of person (see, for example, Dunne 1993). By itself, the mediation of abstract knowledge and the particulars of the situation will not bring about social commitment and responsible engagement. Importantly, practical reasoning is neither simply a matter of applying abstract rules, nor is it just about establishing the most effective means for the achievement of certain ends; it involves deliberation on the ends themselves (‘what is the most desirable thing to do?’). It asks what is to be done here, not just what is the most effective way to achieve X. Practical reasoning is precisely not limited to scientific, instrumental reasoning or ‘techne’ (Dunne 1993). Involving students in practical reasoning around issues concerning their own and others’ lives, therefore, implies that students reflect on their own roles and responsibilities with regard to the wider community they are part of and which they are there to serve through the professional practices many of them enter into after graduation. Practical reasoning involves deliberation on means and ends. Engaged in practical reasoning, our engineer would look beyond the technical side of the problem and explore also how the lives of humans, animals and other organisms are affected by his or her decisions, and the reasons for those decisions. Importantly, practical reasoning is tied to the action.

The key point for assessment is to get students to work on concrete, real-life issues that require them to draw on abstract disciplinary knowledge, but also to reflect on what constitutes the most meaningful course of action in the given situation, taking into account the specifics of the case, which include the perspectives of the various stakeholder groups that are implicated. This reflective process is helped by an exchange of ideas where knowledge claims can be identified and deliberated. MacIntyre (1987, 24) once remarked that ‘one can only think for oneself if one does not think by oneself’. True reflectivity, MacIntyre suggests, cannot occur independently of input from others. Critical reflection depends on other people helping us become aware of our
assumptions and call these into question. Peer assessment and tutor assessment are therefore also required to challenge assumptions. However, while empowering, coming out of the closet with one’s own convictions and having these assessed by others can be immensely anxiety-provoking. Another key point for assessment, therefore, is to require students to make their knowledge claims (and values) explicit, thereby articulating the reasons for their judgements. It would, then, seem that fundamental to encouraging the process of practical reasoning is that students know how to both receive and provide feedback. Indeed, as Hounsell (2007, 110) argued, ‘expertise in feedback – accomplishment in learning from and with it, and in being skilled at deploying it constructively’ needs to be recognised as an important learning outcome of higher education in itself.

That feedback should not be dismissive should go without saying. Feedback provided in the form of questions that ask the student ‘how did you arrive at this conclusion?’, ‘what might be the practical implications of your argument?’, ‘have you considered the perspectives of others?’, ‘could you say more about why you think the potential outcomes of your decision are defensible?’ and ‘have you considered such and such as alternatives?’ is preferable to brief judgmental comments on a student’s work (as in, ‘your conclusion doesn’t make sense to me’, ‘your thinking is flawed’ or even ‘what you are saying leaves me utterly confused’). Comments of the latter kind, it should be obvious, are more likely to shut down rather than encourage reflection. The characteristics of good feedback have been explored and discussed widely, and I shall not delve into them here (but for good evidence-based insights, see, for example, Nicol and Macfarlane-Dick 2006; Hattie and Timperley 2007; Hounsell 2008; Hounsell et al. 2008).

Being able to provide useful feedback is, then, clearly not just a matter of having a sound knowledge base from which to make judgements regarding the validity of claims put forward by a student (although this is important too), but also – and crucially – a matter of having empathy for the one being assessed. Peer assessment in higher education is perhaps also valuable precisely because the assessor knows what being assessed is like. Tutors, at times, lack this empathy. It might be argued that students are not critical enough, towards either themselves or their peers. But is this necessarily the case? This leads me to revisit the point Hounsell (2007) made regarding students needing to develop a capacity for receiving feedback and also for self-assessing their
work. Both of these are critically important for developing the fundamental disposition of openness to experience and change. Note, as well, that David Nicol emphasises the importance of peer assessment (see this volume) and self-assessment (Nicol 2010; Nicol and Macfarlane-Dick 2006), demonstrating how students' evaluative judgements can be enhanced. Nicol (2010, 1) describes the students' ability to evaluate critically the quality and impact of their own work as 'the underpinning requirement for all attribute development'. Boud (2007, 20) similarly argues that: 'A view of assessment that places informing judgement centrally is able to include key graduate attributes as an intrinsic part of what assessment is for'. So how does a student become open to feedback from peers and tutors – indeed, willingly subject him or herself to feedback and/or assessment? First, it seems essential that the student needs to care. More specifically, the student needs to care about two different things: he or she needs to care about the claims he or she has made; and he or she needs to care about the feedback offered by the assessment, recognising the latter as an opportunity to check his or her understanding and improve on it. The two are related, in the sense that the more I care about my claims, the more I care about the feedback I receive on these claims (unless my motivation is entirely extrinsic, in which case I care about the feedback in order to get a good mark or to please the teacher, but do not care about my claims). Barnett (2007) emphasises the importance of pedagogies, including assessments, becoming spaces where students have the opportunity to take risks, develop their own claims and put them forward. Essays, to mention just one example, qualify as permissive spaces that allow students to approach a topic in their own way, but so do group and individual presentations, posters, blogs and so forth. Assessment tasks need to include the requirement of making choices, a requirement to commit oneself to a claim. However, this only works if the student is able to trust that risk-taking is what it takes to learn, grow or improve. Preferring security, as in hiding behind and repeating ready-made answers by others and never putting forward one's own considered opinions/claims, hinders important learning. Importantly, never making mistakes, never being wrong is simply not what life is about. It is essential, therefore, that students can trust that they are allowed to make mistakes and that mistakes are construed as a powerful basis for learning (see also McArthur, in this volume). Accepting this would
seem fundamental for being able to receive feedback and use it to improve upon one's learning. For students not to shy away from challenges, they must be invited into a process of critique. This also means that tutors should be perceived not just as knowledge experts, but also as co-learners/inquirers. As I argued elsewhere, Buber's I–Thou has some relevance for higher education teaching (Kreber 2013). While the teacher's position of power (being the one who assesses) must not be denied and is inevitable, a trusting tutor–student relationship can be promoted by balancing position power with two other forms of power: expert and referent power (French and Raven 1959). In conclusion, I suggest that feedback and assessment practices that support students' capacity to cope with uncertainty by becoming more open to experiences and change (Barnett 2005) and develop the practical reasoning essential for moral commitment and social responsibility (Sullivan and Rosin 2008) fulfil six conditions:

1. They provoke students to critically reflect on their assumptions, beliefs and values and thus afford them opportunities to move beyond frames of reference that limit how they make meaning of their experiences;
2. they require and encourage students to take risks, take a stance and 'go public' with their knowledge claims, subjecting these, willingly, to the critical scrutiny of others in an environment characterised by trust;
3. they develop the students' own evaluative capability (Nicol 2010) and help students 'to construct themselves as active subjects' (Boud 2007, 18) who make personal judgements about their own learning and consider this an integral part of the learning process;
4. they help students develop 'expertise in feedback – accomplishment in learning from and with it, and being skilled in deploying it constructively' (Hounsell 2007, 110);
5. they require students to become personally invested in the issues they are learning about in university courses and to consider what these mean to them on a personal level and how they can make a difference in the world; and
6. they engage students in experiential ways of learning, requiring them to relate abstract academic content to concrete, real-life issues of social relevance.
Together, these conditions foster learning for the longer term (Boud 2007; also see this volume) and also learning that is transformative, leading to a deeper understanding of subject matter and a more complex self, and thus an identity grounded in a sense of responsible agency and moral commitment.

References

Barnett, R. 1990. The Idea of Higher Education. Buckingham: The Society for Research into Higher Education and Open University Press.
Barnett, R. 1997. Higher Education: A Critical Business. Buckingham: The Society for Research into Higher Education and Open University Press.
Barnett, R. 2004. 'Learning for an Unknown Future.' Higher Education Research and Development 23, no. 3: 247–60.
Barnett, R. 2005. 'Recapturing the Universal in the University.' Educational Philosophy and Theory 37, no. 6: 785–96.
Barnett, R. 2006. 'Graduate Attributes in an Age of Uncertainty.' In Graduate Attributes, Learning and Employability, edited by P. Hager and S. Holland, 49–65. Dordrecht: Springer.
Barnett, R. 2007. A Will to Learn. Buckingham: Society for Research into Higher Education and Open University Press.
Barnett, R. 2009. 'Being a Graduate in the 21st Century.' Paper presented at The Twenty-First Century Graduate Conference, University of Edinburgh, Edinburgh, April 24, 2009.
Barrie, S. 2004. 'A Research-Based Approach to Generic Attributes Policy.' Higher Education Research and Development 23, no. 3: 261–75.
Baxter-Magolda, M. 1999. Creating Contexts for Learning and Self-Authorship: Constructive Developmental Pedagogy. Nashville: Vanderbilt University Press.
Bayne, S. 2008. 'Uncanny Spaces for Higher Education: Teaching and Learning in Virtual Worlds.' ALT-J, Research in Learning Technology 16, no. 3: 197–205.
Bonnett, M. 1976. 'Authenticity, Autonomy and Compulsory Curriculum.' Cambridge Journal of Education 6, no. 3: 107–21.
Bonnett, M. 1978. 'Authenticity and Education.' Journal of Philosophy of Education 12: 51–61.
Bonnett, M., and S. Cuypers. 2003. 'Autonomy and Authenticity in Education.' In The Blackwell Guide to the Philosophy of Education, edited by N. Blake, P. Smeyers, R. Smith, and P. Standish, 326–40. Oxford: Blackwell Publishing.
Boud, D. 2007. 'Reframing Assessment as if Learning were Important.' In Rethinking
Assessment in Higher Education, edited by D. Boud and N. Falchikov, 14–26. London and New York: Routledge.
Bowden, J., G. Hart, B. King, K. Trigwell, and O. Watts. 2000. 'Generic Capabilities of ATN University Graduates.' Canberra: Australian Government Department of Education, Training and Youth Affairs.
Brookfield, S. 1986. Understanding and Facilitating Adult Learning. San Francisco: Jossey-Bass.
Brookfield, S. 2005. The Power of Critical Theory: Liberating Adult Learning and Teaching. San Francisco: Jossey-Bass.
Donald, J. G. 2002. Learning To Think: Disciplinary Perspectives. San Francisco: The Jossey-Bass Higher and Adult Education Series.
Dunne, J. 1993. Back to the Rough Ground: 'Phronesis' and 'Techne' in Modern Philosophy and in Aristotle. London: University of Notre Dame Press.
Dunne, J. 2011. 'Professional Wisdom in "Practice".' In Towards Professional Wisdom: Practical Deliberation in the People Professions, edited by L. Bondi, D. Carr, C. Clark, and C. Clegg, 13–26. Farnham: Ashgate Publishing Limited.
Dunne, J., and S. Pendlebury. 2003. 'Practical Reason.' In The Blackwell Guide to the Philosophy of Education, edited by N. Blake, P. Smeyers, R. Smith, and P. Standish, 194–211. Oxford: Blackwell Publishing.
Ennis, R. H. 1996. 'Critical Thinking Dispositions: Their Nature and Assessability.' Informal Logic 18, nos. 2–3: 165–82.
Facione, P. A., C. A. Sanchez, and N. C. Facione. 1994. Are College Students Disposed to Think? Millbrae: California Academic Press.
Freire, P. 1971. Pedagogy of the Oppressed. New York: Continuum.
French, J. R. P., and B. Raven. 1959. 'The Bases of Social Power.' In Group Dynamics, edited by D. Cartwright and A. Zander, 150–67. New York: Harper and Row.
Habermas, J. 1983. Theorie des Kommunikativen Handelns, Vols 1–2. Frankfurt: Suhrkamp.
Hattie, J., and H. Timperley. 2007. 'The Power of Feedback.' Review of Educational Research 77, no. 1: 81–112.
Heidegger, M. [1927] 1962. Being and Time, translated by John Macquarrie and Edward Robinson. London: SCM Press.
Hounsell, D. 2007. 'Towards More Sustainable Feedback to Students.' In Rethinking Assessment in Higher Education, edited by D. Boud and N. Falchikov, 101–13. London and New York: Routledge.
Hounsell, D. 2008. 'The Trouble with Feedback.' Interchange 2: 1–10. Centre for Teaching, Learning and Assessment, the University of Edinburgh. Accessed
August 8, 2013. http://www.docs.hss.ed.ac.uk/iad/Learning_teaching/Academic_teaching/Resources/Interchange/spring2008.pdf.
Hounsell, D. 2011a. 'Graduates for the 21st Century: Integrating the Enhancement Themes.' The Quality Assurance Agency for Higher Education. Accessed August 7, 2013. http://www.enhancementthemes.ac.uk/docs/publications/graduates-for-the-21st-century-institutional-activities.PDF.
Hounsell, D. 2011b. 'To Critical Thinking and Beyond.' Presentation at the Eighth Enhancement Themes Conference, Heriot-Watt University, Edinburgh, March 2–3.
Hounsell, D., and C. Anderson. 2009. 'Ways of Thinking and Practising in Biology and History: Disciplinary Aspects of Teaching and Learning Environments.' In The University and its Disciplines: Teaching and Learning Within and Beyond Disciplinary Boundaries, edited by C. Kreber, 71–84. London and New York: Routledge.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. 'The Quality of Guidance and Feedback to Students.' Higher Education Research and Development 27, no. 1: 55–67.
Hughes, C., and S. Barrie. 2010. 'Influences on the Assessment of Graduate Attributes in Higher Education.' Assessment and Evaluation in Higher Education 35, no. 3: 325–34.
Kandlbinder, P. 2007. 'Writing about Practice for Future Learning.' In Rethinking Assessment in Higher Education, edited by D. Boud and N. Falchikov, 159–66. London and New York: Routledge.
Kreber, C. 2013. Authenticity in and through Teaching in Higher Education: The Transformative Potential of the Scholarship of Teaching. London and New York: Routledge.
MacIntyre, A. 1987. 'The Idea of an Educated Public.' In Education and Values: The Richard Peters Lectures, edited by G. Haydon, 15–36. London: Institute of Education, University of London.
Malpas, J. 2003. 'Martin Heidegger.' In The Blackwell Guide to Continental Philosophy, edited by R. C. Solomon and D. L. Sherman, 143–62. Oxford: Blackwell Publishing.
McPeck, J. E. 1981. Critical Thinking and Education. Oxford: Martin Robertson and Company Ltd.
Norris, S. P., ed. 1992. The Generalizability of Critical Thinking: Multiple Perspectives on an Educational Ideal. New York: Teachers College.
Newmann, F., M. King, and D. L. Carmichael. 2007. 'Authentic Instruction and
Assessment: Common Standards for Rigor and Relevance in Teaching Academic Subjects.' Accessed August 7, 2013. http://centerforaiw.com/sites/centerforaiw.com/files/Authentic-Instruction-Assessment-BlueBook.pdf.
Nicol, D. 2010. 'The Foundation for Graduate Attributes: Developing Self-Regulation through Self and Peer Assessment.' QAA Scotland, Enhancement Themes. Accessed December 10, 2013. http://www.enhancementthemes.ac.uk/resources/publications/graduates-for-the-21st-century.
Nicol, D., and D. Macfarlane-Dick. 2006. 'Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.' Studies in Higher Education 31, no. 2: 199–218.
Nussbaum, M. 1997. Cultivating Humanity: A Classical Defense of Reform in Liberal Education. Cambridge, MA: Harvard University Press.
Paul, R. 1990. Critical Thinking: What Every Person Needs to Survive in a Rapidly Changing World. Rohnert Park: Center for Critical Thinking and Moral Critique.
Perkins, D. N., E. Jay, and S. Tishman. 1993. 'Beyond Abilities: A Dispositional Theory of Thinking.' Merrill-Palmer Quarterly 39, no. 1: 1–21.
Sherman, D. L. 2003. 'Jean-Paul Sartre.' In The Blackwell Guide to Continental Philosophy, edited by R. C. Solomon and D. L. Sherman, 163–87. Oxford: Blackwell Publishing.
Siegel, H. 1988. Educating Reason: Rationality, Critical Thinking and Education. New York: Routledge.
Sullivan, W. M., and M. S. Rosin. 2008. A New Agenda for Higher Education: Shaping a Life of the Mind for Practice. San Francisco: Carnegie Foundation for the Advancement of Teaching and Jossey-Bass.
Svinicki, M. 2005. 'Authentic Assessment: Testing in Reality.' New Directions for Teaching and Learning 100: 23–9. Accessed December 10, 2013. http://onlinelibrary.wiley.com/doi/10.1002/tl.167/pdf.
Taylor, C. 1991. The Ethics of Authenticity. Cambridge, MA: Harvard University Press.
Watson, G., and E. M. Glaser. 1984. The Watson-Glaser Critical Thinking Appraisal. San Antonio: The Psychological Corporation.
Wiggins, G. 1989. 'A True Test: Toward More Authentic and Equitable Assessment.' Phi Delta Kappan 70, no. 9 (May): 703–13.
Zimmerman, M. 1986. Eclipse of the Self: The Development of Heidegger's Concept of Authenticity (rev. edn). Athens: Ohio University Press.
3 Assessment for Learning Environments: A Student-Centred Perspective
Liz McDowell and Kay Sambell
Introduction
Assessment for learning is arguably the most widespread development in assessment practice in recent years. Many universities have introduced assessment for learning initiatives designed to enhance assessment and improve student learning. Some institutions focus on single specific aspects of assessment for learning, such as feedback to students on their work. Others have a broader scope, such as an aim to improve formative assessment using a range of methods. Although many policies, strategies and practices are centred on assessment for learning, it is not always clear on what basis the term is being used. Paul Black, a leading researcher and developer of assessment, claims that assessment for learning has become 'a free brand name to attach to any practice' (Black 2006, 11). It is not only that assessment for learning practices are varied; the same is true of the thinking behind assessment for learning, the underlying pedagogical approaches and philosophies and the nature of the experiences offered to students. The varying views of assessment for learning can be located on a spectrum ranging from those based on assessment techniques, which use new assessment practices to manage student behaviour, to those that focus on transformations in learning environments to foster student engagement and self-directed learning. In this chapter, we examine different versions of assessment for learning and the student roles that they enable.
The Development of Assessment for Learning

Assessment has been moving to the forefront of higher education pedagogy for a number of years. Assessment now takes its place in the phrase 'teaching, learning and assessment', which integrates these three components as a holistic practice. Birenbaum (1996) reflected on the changes in pedagogy linked to changes in assessment. The 'old' pedagogic approach, which she termed a 'testing culture', drew on behaviourist perspectives, with assessment based fundamentally on testing and the measurement of how much of a defined, decontextualised body of knowledge students could remember or apply. This testing was typified by exams, where students faced unknown questions in a time-constrained context and without reference to any resources or guidance materials. In contrast, Birenbaum used the term 'assessment culture' to describe the new assessment that is integrated with learning and teaching and takes different forms, being more authentic, challenging and engaging for students and functioning as a learning activity as well as a means of demonstrating knowledge. Not only were the methods of assessment different, but the two cultures also developed their own images of students.

In the [assessment culture] the perceived student position with regard to the evaluation process changes from that of a passive, powerless, often oppressed, subject who is mystified by the process, to an active participant who shares responsibility in the process, practices self-evaluation, reflection and collaboration, and conducts a continuous dialogue with the teacher. (Birenbaum 1996, 7)
Brown and Knight (1994) described a similar shift towards assessment for learning, suggesting that '[f]rom assessment as something done to students, we move to assessment as something done with students' (38). Their theory of formative assessment is based upon creating constructivist learning environments that 'provide a model for self-directed learning and hence for intellectual autonomy' (38). They suggest that students need to negotiate their areas of strength and weakness with tutors, thereby being encouraged to appraise their own performances. This is based on the view that the student is motivated to learn and is not solely focused on marks or grades. Students are framed as taking their own progress seriously and having an active role to play
in the assessment process. Students act on feedback and discuss it, use feedback to shape later work and handle the complexity of criterion-referenced assessment, taking an ipsative approach where they review progress on the basis of their own achievements over time, rather than 'comparing marks' with others, which is what we might see within a testing culture.

Using Assessment for Learning in Practice: Techniques and New Learning Environments

As described above, there remain different views and practices in assessment. In relation to assessment for learning, there is similarly a spread of differing perspectives. The key differences are between a focus on assessment techniques and methods and a focus on learning environments.

Assessment Techniques

Often specific assessment techniques are introduced by teachers as a response to perceived pedagogic problems. They are added to the usual ways of going about things – effectively, attempts at making improvements in relatively quick and easy ways that are not significantly disruptive of existing teaching, learning and assessment practices. These interventions do not require a rethinking of underpinning philosophy or principles. The student position often remains one of being directed and controlled – a 'subject' in the assessment process. These versions of assessment for learning thus retain a powerful position for teachers, where students have little encouragement to take more control and direct their own learning. A focus on pedagogic techniques designed to elicit an appropriate student response tends to include practices such as requiring students to engage in reflection by setting and grading reflective writing tasks, or requiring students to specify their individual feedback requirements. These exemplify the kinds of approach to assessment for learning where a range of techniques are used to manage assessment and, importantly, to manage student behaviour, since the students are perceived as needing to be closely directed and controlled in order to enable them to achieve. A focus on techniques, often adopted to address immediate teacher-defined problems, may develop in order to ensure efficiency in learning. There is limited scope for student self-direction. In this approach, the main lens
through which assessment for learning is seen is that of testing or summative assessment, with the emphasis on the design of tests or assignments, feedback on student work and developing student adherence to assessment criteria. Feedback on students' work is an aspect of assessment where there are multiple possible interventions intended to address perceived problems. In the United Kingdom (UK), concerns about inadequate feedback on student work have been exacerbated by the status of the annual National Student Survey (http://www.thestudentsurvey.com/), where the survey questions on students' satisfaction with assessment and feedback always receive comparatively low scores. As a result, many academics and institutions perceive an urgent need to improve feedback. Assessment for learning has so far had a limited impact on 'the feedback problem', because the focus has been on techniques to address current problems – perhaps providing a 'quick fix', rather than rethinking the purpose and nature of feedback. It is still the case that feedback is normally given in writing to individual students some time after they have submitted a piece of work. In this context, students often claim that feedback is too late, too limited and difficult to understand and apply. At the same time, tutors question the value of feedback, as they find that many students do not even read it. Tutors' efforts to improve feedback and thereby engage students do not appear to be having a significant impact on student performance. It is, then, easy for academics with many other demands on their time in addition to providing feedback to become disheartened (Hounsell 2007a). The assumption is often made that students are not interested in feedback and, consequently, are also not interested in improving their learning and academic work. Typical interventions relating to feedback include:

1. the standardisation of feedback – for example, using checklists and grade descriptors;
2. the reduction of delays in receiving feedback;
3. tutor feedback to groups rather than individuals, often for exam feedback; and
4. innovations in the forms of feedback, such as using podcasts to provide audio feedback.
These types of interventions are intended to engage students who are assumed to be reluctant to pay attention to feedback. They are not fundamental changes to the processes that have been in use for some time and do not change the positions of the students or tutors, who continue to operate within a process that distances them from each other and does not promote dialogue. Some techniques are used to more or less 'force' students to pay some attention to feedback as a way of resolving take-up problems. For example, Taras (2010) has used various strategies where tutor marks are withheld, but students are required to consider and respond to feedback comments (tutor and peer), with the tutor marks only delivered once students have submitted their response. In another example, Bloxham and Campbell (2010) required students to submit assignments with questions to the tutor requesting feedback on particular points or concerns. These kinds of processes aim to fix a problem – that is, the lack of student engagement with feedback – by requiring participation. This technique again positions students as learners who must be closely controlled and directed. It is not, however, the only possible approach. For example, students could be permitted to have a say. Teachers and students could work collaboratively to devise a method that will provide useful feedback, changing the student role to one of active seeker of good feedback. The Assessment Standards Knowledge exchange (ASKe) research group (http://www.brookes.ac.uk/aske/) trialled a number of interventions designed to help students to understand feedback and apply assessment criteria to their own work. As part of this programme, Rust et al. (2003) found that students did not grasp the key messages from marking criteria grids and written protocols, because a large proportion of lecturers' knowledge and understanding of the meanings of criteria and standards is actually tacit, created via participation in assessment as a situated social practice. To address this problem, the ASKe group initially developed a 'structured intervention' to share teachers' knowledge. This intervention took the form of an optional marking workshop, which actively engaged large groups of first-year undergraduate business studies students with assessment criteria and standards in order to 'support the effective transfer of both explicit and tacit assessment knowledge' (O'Donovan et al. 2004, 325). The ASKe action-research project sought to explore the impact of the workshops on students' performance,
which they tracked over time. They demonstrated that 'after the intervention, there was a significant difference between the results of [students] who participated in the workshop and those that did not' (Rust et al. 2003, 156). They concluded that investing 'time and rigour in the consideration of the transfer process of assessment knowledge . . . will enable assessment for learning' (O'Donovan et al. 2004, 333 [emphasis in original]) and argued that:

through a relatively simple intervention incorporating a combination of explicit articulation and socialisation processes a considerable amount may be achieved in developing a shared understanding and, consequently, in improving student performance – and that this improvement may last over time and be transferable, albeit possibly only in relatively similar contexts. (Rust et al. 2003, 162)
Arguably, this approach revealed a tendency to treat teachers' marking practices, and the construction of standards and criteria that underpin them, as a single notion to be 'transferred' to the students – an assumption that has subsequently been called into question by Bloxham, Boyd, and Orr (2011). The ASKe intervention is predominantly perceived to 'work' when students become socialised into an accepted way of thinking about particular assessment tasks performed within the university. The focus of the initial research design, and the language used to explain the approach, tend towards a paradigm of transmission or teacher telling, with a focus on the performance goals of marks and grades (Dweck 2000), rather than on learning goals or the development of students' self-regulatory capacity for the longer term. Such a focus risks framing the student predominantly as a strategist playing the assessment game (and, perhaps, someone who needs to conform to others' demands), rather than as a developing individual. It is important to recognise the extent to which the authors of this intervention propose it as an element of a 'conceptual framework . . . that encompasses a spectrum of tacit and explicit processes' (O'Donovan et al. 2004, 325 [emphasis in original]). Later work positioned the model as an explicit part of course design, rather than as a stand-alone intervention (Rust et al. 2005). Moreover, it is important to acknowledge the extent to which they emphasise the constructivist aspects of student sensemaking, via active involvement, around assessment criteria and standards, thus moving towards the development of a new learning environment.
Intervention approaches are often seen as useful and may be well received by students. Working with criteria and standards helps students to understand what their lecturers require. However, learning to meet requirements in the immediate context of their programme can lead to 'criteria compliance' and 'students who are more dependent on their teachers rather than less' (Willis 2011, 402 [emphasis in original]). This suggests that additional activities are needed to enable students to acquire a broader perspective on self-evaluation for the longer term. However, students are often perceived as being highly focused on marks, to the extent that they may not even undertake activities that do not accrue marks, particularly as the concept of the student as customer is gaining more purchase in higher education (Molesworth, Nixon, and Scullion 2009). A further problem related to learning and assessment that teachers identify is the negative backwash effect that assessment – of the summative kind – can often have on student learning, where students may rely on memorisation or take short-cuts, rather than fully engaging with the subject matter in their learning. The main assessment for learning technique developed to address this problem is constructive alignment (Biggs 1996). Its aim is to ensure that students undertake the kinds of learning that are important in their subject and mirror the kinds of activity that a subject specialist or professional would undertake. It was developed in recognition that the usual assessment could allow students to succeed using memorisation or poor learning. Constructive alignment, in contrast, emphasises authentic learning and testing. It relies on assessment that directs students towards the kinds of learning goals and processes that are more representative of meaningful learning in their particular subject area. The aim is to entrap students into a positive approach to learning. With careful design, teaching, learning and assessment can all consistently drive students towards deep (high quality) rather than surface learning. Biggs and Tang (2007) use the explicit statement of learning outcomes and the design of authentic summative assessment tasks to ensure that students learn the right kinds of things in appropriate ways. They claim that a student's activities in learning are 'primed' by the direction given and clarity about what is required. This approach has been very influential, with wide impact in UK higher education, where the appropriate use of explicit
learning outcomes is a major component of the quality assurance system. What is not much discussed is the actual role of the student. It seems that students are expected to change their behaviour on the basis of signals received, but there is no clear model of how the student may actually change and what role they play as learners. As Boud and Molloy (2012, 4) identify, the implication is that students take no active role other than making changes determined by their lecturers 'without conscious volition on [the students'] part'. They have almost been painted out of the picture.

Assessment for Learning: Transformation through Learning Environments

Some versions of assessment for learning develop an approach, beyond the application of new techniques, by creating learning environments that can transform teaching, learning and assessment. A learning environment consists of various components: 'all of the physical surroundings, psychological or emotional conditions, and social or cultural influences affecting [learning]' (Hiemstra 1991, 8), and such an environment is

fostered by considerations at all levels from formal assessment tasks and requirements to the configurations of teaching and learning spaces, the gestures of teachers, the questions of learners and teachers and the climate of cooperation between students. (Boud and Molloy 2012, 708 [emphasis in original])
However, to develop a learning environment into an assessment for learning environment requires a more specific set of conditions. An assessment for learning environment has the capacity to transform student and staff interactions and dialogue, to create a positive learning environment integrating teaching, learning and assessment, and to encourage the student self-direction that helps students develop autonomy as learners within the university and beyond. Assessment, learning and teaching are integrated. Assessment for learning environments offer a more radical alternative to the use of techniques and interventions. Many authors agree that the key purpose of assessment for learning is to develop students' capacities for learning independently in the longer term as graduates (Boud and Falchikov 2006, 2007; Carless et al. 2011; Nicol and Macfarlane-Dick 2006). Assessment for learning environments are characterised by addressing the development
of student autonomy and by the use of dialogic and participative approaches, moving away from 'monologue' and 'teacher telling' (Nicol 2010). The overarching aim, from this perspective, is to develop students as self-directed learners contributing, discovering and constructing. Dialogue here should be understood not merely as opportunities to speak and discuss, but also as responsible and active participation in teaching, learning and assessment activities by students in collaboration with teachers. The goal is to transform learning environments to enhance current learning and practice and also to assist in engaging students in assessment and, more broadly, in their learning. Trowler and Trowler (2010, 15) concluded a review of student engagement by saying that purposive educational activities, including those associated with assessment, were needed to improve student engagement and performance. Clouder et al. (2012, 2) claim that assessment has the potential '. . . to enable students to engage with peers and tutors, to gain personal insight, to feel valued and supported and above all feel that they "fit in" as part of a learning community, and, as such, can succeed in higher education'. Assessment for learning interventions have some limitations. What students tend to learn is how to adopt the procedures, standards and requirements of an assessment system in order to do well in their programmes of study. The student is positioned as someone engaging actively in learning, but chiefly controlled by the teachers, who set the framework and ground rules for learning. The student task is mainly about conforming to immediate expectations. Students tend to be seen as individual learners, rather than as a collective learning community. It has been argued that this type of student engagement is not sustainable assessment in the sense of enabling students to direct and manage their own learning beyond university (Boud and Falchikov 2006). To achieve sustainable assessment, it is particularly important to engage students in feedback and formative assessment more broadly than in their roles as 'receivers' who are subject to instruction and judgement by their lecturers. Black and Wiliam (1998, 106), reviewing the field of assessment and learning, note the 'vigorous advocacy in the formative assessment literature of the particular benefits of student involvement in the processes of feedback'. They conclude that it is ultimately the students who must understand, derive meaning and make sense by undertaking assessment tasks. To do this, they need to play a more active, responsible role
and not just respond to the demands made upon them (Black and Wiliam 1998). Assessment for learning has a forward-looking focus. The purpose of assessment is not merely to judge students or help them to manage their learning in the current assessment system. Assessment is also used to develop students as genuinely active partners, so that they may be involved alongside teachers in activities such as setting tasks or developing and evaluating assessment criteria. Such an emphasis can be justified because the capability it develops is itself a valuable outcome of higher education. Capabilities can be carried forward into the workplace, so that 'expertise in feedback – accomplishment in learning from and with it and being skilled at deploying it constructively – would in itself become an outcome of higher education' (Hounsell 2007b, 110). Nicol and Macfarlane-Dick (2006) also give primary importance to students' capabilities in self-regulation of their learning and their ability to monitor and direct their own learning. One key activity is giving, in addition to receiving, feedback. This is a very potent learning activity, especially when it has the salience of a real context, such as self- or peer assessment, rather than a simulated one. Boud and Falchikov (2006) see this approach as sustainable assessment leading to long-term learning, but argue that it needs to take place within an appropriate assessment regime or environment.

Assessment for Learning in Practice

The general messages from assessment for learning research are clear in relation to productive learning environments. However, more detail is also needed on ways in which aspirational goals can be turned into guidance for practice. The Centre for Excellence in Assessment for Learning at Northumbria University has identified the components constituting an assessment for learning environment (Sambell, McDowell, and Montgomery 2013): six qualities that support learning and assessment and, as a whole, provide an authentic learning experience and promote self-direction by students. The six elements work together as a whole, but each of them also provides a starting point, which can be selected according to the best fit in terms of the particular circumstances of the programme and discipline. The elements overlap and all of them pull in the same direction. The whole model encompasses: formative assessment; formal
feedback through systems such as tutor comment; informal or intrinsic feedback through dialogue and participation; authentic assessment tasks and processes; balancing and integrating summative and formative assessment; and opportunities for student self-direction. Below we offer an overview of the qualities of an assessment for learning environment, addressing each of the six elements.

Authentic Assessment

Students are often assessed using familiar methods that seem to offer reliability. Although there has been a diversification of types of assessment in recent years (Hounsell 2007a, 2007b), traditional closed-book exams and the coursework forms that are conventional for their specific discipline are still the mainstay of most university summative assessment. Yet we know that many approaches to assessment are disadvantageous to student learning, encouraging students to memorise materials or produce formulaic written coursework that seems like 'just hoops to jump through'. Using types of assessment that are much more like the 'real things' that academics or professionals in the field do can engage students in much more meaningful ways.

Balancing Summative and Formative Assessment

Summative assessment must be carried out effectively, but should not be the only or dominant assessment that students encounter in their programme. If summative assessment is dominant, students may focus entirely on accumulating marks and 'learning for the test' and fail to engage with other valuable learning opportunities. An unbalanced focus on marks and grades leads to student engagement in learning that is qualitatively different to engagement in genuine learning.

Creating Opportunities for Practice and Rehearsal

Students should be able to practise and improve their knowledge, skills and understanding before they are summatively assessed. This applies to the formats of assessment, where students should be able to try out a less common format, such as an oral presentation, use their feedback and try to improve their performance before it 'counts' for marks. Equally important are opportunities for students to try out their developing knowledge and
understanding, thus building confidence before they have to demonstrate this in the high-stakes context of being marked. Learning environments that stress assessment for learning provide a variety of group and individual low-stakes activities that enable this.

Designing Formal Feedback to Improve Learning

Well-designed and planned feedback is essential to students' learning. However, there are limitations to the conventional ways in which universities provide feedback; often, it is provided in the form of tutor-written comments on individual students' marked work. It is important to build in other kinds of formal feedback from tutors, more frequently and at earlier stages, adopting dialogic approaches so that, for instance, comments are received before final submissions. In this way, the comments can 'feedforward' (Hounsell et al. 2008) directly into refinements and revisions of future work. It is also important to draw on other sources of feedback, including self and peer review and reflection.

Designing Opportunities for Informal Feedback

Active, collaborative and dialogic approaches to teaching, learning and assessment enable an intrinsic supply of 'informal' feedback to benefit student learning. As students work together, discuss ideas and methods and interact with teachers, they can test out their ideas and skills, see how other students go about things and begin to absorb the standards and requirements of their subjects.

Developing Students as Self-Assessors and Effective Lifelong Learners

If students are to be active in their own learning, they need to be able to make decisions for themselves, decide what approaches to take and evaluate their own progress. If we want students to be active participants in assessment processes, we need to help them to develop assessment literacy. Ultimately, as graduates and professionals, students need to take over for themselves much of the assessment that lecturers currently do for them and also be skilled at drawing on the resources of peers and colleagues to support their ongoing development.
Examples of Practice

Examples of assessment for learning in practice illustrate how the qualities of assessment for learning environments matter. The first illustrates how feedback can be improved through changes to the learning environment, rather than through narrower interventions. Some interventions simply make it a requirement for students to demonstrate that they have paid attention to feedback. In contrast, Hounsell's (2007a) approach is to transform feedback into 'feedforward', a much more effective approach to supporting student learning that is both theoretically justifiable and practically feasible. In order to adopt the 'feedforward' approach, students must have an opportunity to use feedback. Their engagement can be stimulated if they have an opportunity to apply any comments or guidance that they have obtained. They are not likely to see the point of trying to apply specific guidance to a different assignment in several weeks' or months' time, perhaps on a different topic and with a different lecturer. Students are much more likely to apply feedback on academic work that is undertaken in stages, with comments and guidance at each stage of the learning process. In another approach, a tutor offers anticipatory feedback before an end-point assessment by giving generalised suggestions related to a forthcoming exam. Another example is the concept of the 'patchwork text', where a carefully designed series of writing tasks is undertaken by students. The students then bring their 'patches' into a class session, where they can review their writing within a small peer group and with the tutor. In this scenario, the students engage in forms of self-assessment as they evaluate their own work in response to feedback from diverse sources and also engage in peer assessment by offering comments on the work of other students. Finally, the patches may be submitted as part of an end-point summative assessment (Sambell, McDowell, and Montgomery 2013, 58). This example shows how successful change may be related to changes in the learning environment, rather than to assessment alone. The important elements here are based on the understanding that students learning in groups or participating in whole-class dialogue have substantial opportunities to test out their ideas and thoughts, to gauge responses, to build on the ideas of fellow students and, perhaps, to achieve a higher quality of output than they might have done alone. Participation in a group offers
a continuous supply of feedback as the work is ongoing. Hounsell (2007a) terms this 'intrinsic feedback', while Sambell, McDowell, and Montgomery (2013) call it 'informal feedback'. This happens when students display and exchange their ideas with each other. On their own or in small groups, they may present their work to other groups or the whole class by means of an oral presentation, poster or similar, thereby providing an opportunity for feedback from peers and tutors. Perhaps as importantly, it also provides students with experience in giving feedback, not just receiving it. Hounsell (2007a) raises the issue of the congruence of the whole teaching, learning and assessment environment, encompassing extrinsic feedback coming after the assessment activity and intrinsic feedback coming during the activity. Specifically, he suggests that 'intrinsic feedback occurs incidentally and concurrently and is woven into day-to-day teaching-learning encounters' (Hounsell 2007a, 108). Again, this stresses the importance of dialogue and interaction amongst staff and students and of changes in the learning, teaching and assessment context. It also raises questions about the quality of staff–student 'encounters' and prompts us to think about whether anything that could reasonably be termed an 'encounter' is even included in many current learning contexts, with the large lecture being an obvious example of a limited, though not impossible, opportunity for staff–student encounters.

Conclusion

Assessment for learning has been a very positive addition to our ways of thinking about teaching, learning and assessment. The model of an assessment for learning environment given here can be a starting point for many innovations. However, as we have stressed, assessment for learning is not just a technique to be dropped into 'normal' teaching. Indeed, adopting assessment for learning means that some quite fundamental educational concerns need to be addressed. Teachers need to be able to take risks. Approaching assessment in the same way that it has always been done will not lead to a good learning environment. Learning, teaching and assessment need to be seen in a different, more integrated way. If students are to gain the most benefit from assessment, then it needs to be integrated with learning and teaching. Teachers also need to be willing to share power with their students, as assessment becomes more of a partnership with them. By emphasising
student self-direction, we help our students to take charge of assessment for themselves, rather than be controlled by it. This is an attitude and skill that will stand our students in good stead beyond the university.

References

Biggs, J. 1996. 'Enhancing Teaching through Constructive Alignment.' Higher Education 32, no. 3: 347–64.
Biggs, J., and C. Tang. 2007. Teaching for Quality Learning at University (3rd edn). Maidenhead: Open University Press.
Birenbaum, M. 1996. 'Assessment 2000: Towards a Pluralistic Approach to Assessment.' In Alternatives in Assessment of Achievements, Learning Processes and Prior Knowledge, edited by M. Birenbaum and F. J. R. C. Dochy, 3–29. Boston: Kluwer.
Black, P. 2006. 'Assessment for Learning: Where is it Now? Where is it Going?' In Improving Student Learning through Assessment, edited by C. Rust, 9–20. Oxford: Oxford Centre for Staff and Learning Development.
Black, P., and D. Wiliam. 1998. 'Assessment and Classroom Learning.' Assessment in Education 5, no. 1: 7–74.
Bloxham, S., and L. Campbell. 2010. 'Generating Dialogue in Assessment Feedback: Exploring the Use of Interactive Cover Sheets.' Assessment and Evaluation in Higher Education 35, no. 3: 291–300.
Bloxham, S., P. Boyd, and S. Orr. 2011. 'Mark my Words: The Role of Assessment Criteria in UK Higher Education Grading.' Studies in Higher Education 36, no. 6: 655–70.
Boud, D., and N. Falchikov. 2006. 'Aligning Assessment with Long Term Learning.' Assessment and Evaluation in Higher Education 31, no. 4: 399–413.
Boud, D., and N. Falchikov, eds. 2007. Rethinking Assessment in Higher Education: Learning for the Longer Term. London: Routledge.
Boud, D., and E. Molloy. 2012. 'Rethinking Models of Feedback for Learning: The Challenge of Design.' Assessment and Evaluation in Higher Education 38, no. 6: 698–712.
Brown, S., and P. Knight. 1994. Assessing Learners in Higher Education. London: Kogan Page.
Carless, D. 2013. 'Sustainable Feedback and the Development of Student Self-Evaluative Capacities.' In Reconceptualising Feedback in Higher Education: Developing Dialogue with Students, edited by S. Merry, M. Price, D. Carless, and M. Taras, 113–22. London: Routledge.
asse ssm ent f or lea rni ng envi r o n me nts | 71 Carless, D., D. Salter, M. Yang, and J. Lam. 2011. ‘Developing Sustainable Feedback Practices.’ Studies in Higher Education 36, no. 4: 395–407. Clouder, L., C. Broughan, S. Jewel, and G. Steventon, eds. 2012. Improving Student Engagement and Development through Assessment. London: Routledge Dweck, C. S. 2000. Self Theories: Their Role in Motivation, Personality and Development. Philadelphia: Psychology Press. Gibbs, G., and C. Simpson. 2004. ‘Conditions under which Assessment Supports Students’ Learning.’ Learning and Teaching in Higher Education 1, no. 1: 3–31. Hiemstra, R. 1991. ‘Aspects of Effective Learning Environments.’ New Directions for Adult and Continuing Education 50: 5–12. Hounsell, D. 2007a. ‘Towards More Sustainable Feedback to Students.’ In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 101–13. London and New York: Routledge. Hounsell, D. 2007b. ‘Innovative Assessment across the Disciplines: An Analytical Review of the Literature.’ York: Higher Education Academy. Molesworth, M., E. Nixon, and R. Scullion. 2009. ‘Having, Being and Higher Education: The Marketisation of the University and the Transformation of the Student into Consumer.’ Teaching in Higher Education 14, no. 3: 277–87. Nicol, D. J. 2010. ‘From Monologue to Dialogue: Improving Written Feedback Processes in Mass Higher Education.’ Assessment and Evaluation in Higher Education 35, no. 5: 501–17. Nicol, D. J., and D. Macfarlane-Dick. 2006. ‘Formative Assessment and SelfRegulated Learning: A Model and Seven Principles of Good Feedback Practice.’ Studies in Higher Education 31, no. 22: 199–218. O’Donovan, B., M. Price, and C. Rust. 2004. ‘Know What I Mean? Enhancing Student Understanding of Assessment Standards and Criteria.’ Teaching in Higher Education 9, no. 3: 325–35. O’Donovan, B., M. Price, and C. Rust. 2008. ‘Developing Student Understanding of Assessment Standards: A Nested Hierarchy of Approaches.’ Teaching in Higher Education 9, no. 3: 145–58. Price, M. 2005. ‘Assessment Standards: The Role of Communities of Practice and the Scholarship of Assessment.’ Assessment and Evaluation in Higher Education 30, no. 3: 215–30. Rust, C., B. O’Donovan, and M. Price. 2005. ‘A Social Constructivist Assessment Process Model: How the Research Literature Shows Us This Could Be Best Practice.’ Assessment and Evaluation in Higher Education 30, no. 3: 231–40. Rust, C., M. Price, and B. O’Donovan. 2003. ‘Improving Students’ Learning
by Developing their Understanding of Assessment Criteria and Processes.' Assessment and Evaluation in Higher Education 28, no. 2: 147–64.
Sambell, K., L. McDowell, and C. Montgomery. 2013. Assessment for Learning in Higher Education. London: Routledge.
Taras, M. 2010. 'Student Self-Assessment: Processes and Consequences.' Teaching in Higher Education 15, no. 2: 199–209.
Trowler, P., and V. Trowler. 2010. 'Research and Evidence Base for Student Engagement.' Higher Education Academy. Accessed December 10, 2013. http://www.heacademy.ac.uk/resources/detail/studentengagement/Research_and_evidence_base_for_student_engagement.
Willis, J. 2011. 'Affiliation, Autonomy and Assessment for Learning.' Assessment in Education: Principles, Policy & Practice 18, no. 4: 399–415.
4 Perceptions of Assessment and their Influences on Learning
Noel Entwistle and Evangelia Karagiannopoulou
Introduction
Dai Hounsell's early research on conceptions of essay writing played an important part in helping us to see the different perceptions of assessment shown by students. Since then, he has been at pains to highlight how students' perceptions of the purposes and expected standards of assessed work may differ – sometimes quite markedly – from those of their lecturers, arguing that there is a need, in both research and everyday practice, to gain a firmer sense of how students understand assessments and their demands. This chapter introduces his study within the context of other research developed around the same time, in order to trace the development of ideas on the influences of perceptions of assessment on the quality of student learning. These early studies were initially based on in-depth student interviews regarding their experiences of studying, bringing in the concepts describing deep and surface approaches to learning and introducing the idea of students having, to differing degrees, a strategic approach when studying and preparing for examinations. The interviews also looked in detail at students' experiences of carrying out assessed work and how they perceived their tutors' comments. Subsequently, the concepts and categories identified in the qualitative research were used to develop questionnaires that enabled relationships between differing approaches to studying
and perceptions of teaching and assessment to be explored among large samples of students.

Approaches to Learning and Perceptions of Teaching and Assessment

Some of the early work on these perceptions was carried out at the University of Lancaster as part of a major Social Science Research Council project into how students learn and study (Entwistle and Ramsden 1983; Hounsell 1984). The project was inspired by the work of Ference Marton and his research team in Gothenburg (Marton and Säljö 1976, 1984), who introduced the distinction between deep and surface approaches to learning. The Swedish research focused mainly on students reading an academic article and analysing their understanding of it, while the Lancaster work explored more generally how students described their everyday studying. Through both interviews and questionnaires the two contrasting approaches to learning were confirmed, but an additional dimension was needed to explain students' experiences. This was described as a strategic approach, geared towards doing well in examinations and coursework, as illustrated in the following interview extract:

I try to think ahead when I'm studying. I know what has to be done and I make sure I can get hold of whatever I need to do it . . . I try to look over the lectures from time to time and, when it comes to the exams, I look systematically through previous exam papers to decide what seem to be the key topics and then revise those intensively to make sure I'm ready for what they're likely to ask . . . In the exams, it's a bit like a performance, being on a stage, being aware of the audience, and trying to please them. (Entwistle and Entwistle 2003, 26)
The effect of formal assessment on approaches to learning was not considered in the original study, which involved reading an article in a naturalistic experiment. However, the wider focus of the Lancaster study made clear just how important perceptions of assessment could be. This effect can be illustrated through the comments of two students, who explained their individual approaches to short-answer questions (SAQs) and essay writing:

[With these short-answer questions], I hate to say it, but what you've got to do is have a list of the 'facts'; you write down ten important points and
With that essay . . . I wrote for the lecturer, with an image of the marker in mind, the personality, the person. I find that's important, to know who's going to be marking your paper . . . You see an essay is an expression of thought, really, but that's not what they're after; they're after a search through the library, I think, and a cribbing of other people's ideas. (Ramsden 1984, 144, 151)
The research group in Sweden also explored differences among students in terms of their awareness of what they meant by 'learning' (Säljö 1979). For some students, learning was simply taken for granted, but for others it had become 'thematised':

learning is something that can be explicitly talked about and discussed and can be the object of conscious planning and analysis. In learning, these people realise that there are, for example, alternative strategies or approaches which may be useful or suitable in various situations depending on, for example, time available, interest, demands of teachers and anticipated tests. (Säljö 1979, 446)
Miller and Parlett (1974) had found something similar in relation to the nature of assessment. Some students were aware that they could improve their performance in an exam by being 'cue-conscious' – in other words, being 'perceptive and receptive to "cues" sent out by staff – things like picking up hints about exam topics, [and] noticing which aspects of the subject staff favoured' (1974, 52). For such students, assessment was a game with strategies that could be used to improve the marks obtained, although for most students it was just something they had to do and put up with. A student with a strategic, cue-seeking approach explained:

I play the examination game. The examiners play it too . . . The technique involves knowing what is going to be in the exam and how it's going to be marked. You can acquire these techniques from sitting in the lecturer's class, getting ideas from his point of view, the form of the notes, and the books he has written – and this is separate to picking up the actual work content. (Miller and Parlett 1974, 52)
It is important to note, in this extract, the existence of what seem to be two separate focuses of attention: one on the content; the other on what is likely to pay off in assessment terms. This suggests a tension in the student's mind between learning the subject and passing the examination, with rather different strategies being involved for each (Entwistle and Entwistle 1991, 208).

Within the Lancaster project, Dai Hounsell's main focus was on students' conceptions of essay writing in coursework, as seen in their interview comments (Hounsell 1984). His analyses suggested that students exhibited two very different conceptions of essay writing that paralleled the distinction between deep and surface approaches to learning. Some students saw essays as depending, essentially, on a good supply of detailed information that simply had to be delivered to the assessor in the essay, whereas other students recognised the importance of developing an integrated analysis of the topic, well supported by evidence, in order to provide a cogent argument.

Of course, tutors expect students to produce 'cogent arguments' and so provide feedback on the essays they mark to improve the students' essay-writing skills. However, students often find these comments obscure, because tutors employ a disciplinary discourse in which terms like 'well-structured', 'analytical' or 'descriptive' are taken for granted. Students are therefore left with little idea of what they actually need to do to improve their work. As Hounsell explained:

Where students' conceptions of essay-writing are qualitatively different from those of their tutors, communication cannot readily take place because the premisses underlying the two disparate conceptions are not shared or mutually understood. Students misconstrue a tutor's comments or guidance or fail to grasp the import of these because they do not have a grasp of the assumptions about the nature of academic discourse underlying what is being conveyed to them. Similarly, tutors fail to acknowledge the subtle interplay between what is said and what is taken for granted, and so do not seek to close the gap between their own and the students' understanding of expectations. (Hounsell 1987, 114)
The importance of feedback on assignments being timely, supportive and comprehensible has been a continuing theme throughout Dai Hounsell's thinking in both research and academic development (Hounsell 2010), and the effects of students' perceptions of assessment on their approaches to learning became important for subsequent research using both qualitative and quantitative methods.

The interviews from the Lancaster project were used in constructing two inventories. The first was the Approaches to Studying Inventory (ASI) (Entwistle and Ramsden 1983), which has since been revised to create the Approaches and Study Skills Inventory for Students (ASSIST). Sets of items have been chosen to describe deep, surface and strategic approaches, which have been verified through factor analyses in many studies within several countries (Entwistle 2009; Entwistle, McCune, and Tait 2013). The second inventory was developed from items chosen to indicate students' perceptions of teaching and assessment – the Course Perceptions Questionnaire (CPQ) (Entwistle and Ramsden 1983). This was subsequently developed into the Course Experience Questionnaire (CEQ) (Ramsden 1991), containing five scales – good teaching, clear goals and standards, generic skills, (in)appropriate workloads and (in)appropriate assessment.

ASSIST has since been used in conjunction with the CEQ to explore the relationships between approaches to learning and perceptions of teaching in large, cross-disciplinary samples of students. These show that perceptions of courses as being well taught, with appropriate workloads and assessment, are associated with higher levels of deep approach and lower levels of surface approach (Richardson 2005), although it seems that the relationships and causality can work in either direction, or in both. Students who already have deep approaches are more likely to perceive the courses they experience more favourably, and courses that are given high student ratings on aspects of teaching tend to support understanding and also increase levels of deep approach in the students taking them (Richardson 2006). In particular, courses rated highly on 'teaching for understanding and encouraging learning' have been found to be associated with higher scores on the deep approach (Karagiannopoulou and Milienos forthcoming).

The specific aspects of teaching and assessment that affect approaches to learning were explored as part of a major ESRC study – the ETL Project, co-directed by Dai Hounsell and Noel Entwistle, using an adapted version of ASSIST.
Students were also given a questionnaire designed to capture their perceptions of the overall teaching–learning environments they had experienced, including their perceptions of teaching, assessment and feedback on assignments. This was called the Experiences of Teaching and Learning Questionnaire (ETLQ) (ETL Project 2006). In the early stages of this project, the analysis of one cross-disciplinary sample showed the relationships between students' approaches to learning and studying, their perceptions of both assessment and feedback, and their self-ratings of knowledge acquired and achievement (Entwistle, McCune, and Hounsell 2002, Table 1). The meaning of the two scales on assessment can be seen from the items included:

Feedback on assignments
• The feedback given on my work helped me to improve my ways of learning and studying.
• Staff gave me the support I needed to help me complete the set work for this course unit.
• The feedback given on my set work helped to clarify things I hadn't fully understood.

Assessment for understanding
• You had to really understand the subject to get good marks in this course unit.
• To do well in this course unit, you had to think critically about the topics.
• The set work helped me to make connections to my existing knowledge or experience.
Table 4.1 shows the correlations between scales designed to measure students' perceptions of teaching and assessment, their approaches to learning, and their self-ratings of the knowledge acquired during a specific module and of the grades they obtained. The strongest relationships with the deep approach appeared where teachers were perceived to be encouraging understanding and assessing it. Correlations with 'feedback on assignments' and 'staff enthusiasm and support' were also substantial. A similar pattern of relationships was found between the perceptions and 'monitoring studying', but lower values were found with 'organised studying', suggesting that habits of studying are less likely to be affected by perceptions of teaching and assessment.
Table 4.1 Correlations between perceptions of environment and other variables (British).

ETLQ scales                        Approaches to learning and studying      Perceived outcomes
                                   Deep      Monitoring  Organised  Surface   Knowledge  Self-rating:
                                   approach  studying    studying   approach  acquired   grades
Perceptions of teaching–learning environment:
  Encouraging understanding         .45        .40         .23       −.36       .43        .20
  Feedback on assignments           .33        .31         .21       −.30       .41        .26
  Assessment for understanding      .45        .33         .24       −.31       .42        .20
  Staff enthusiasm and support      .34        .36         .23       −.26       .44        .23
Self-rating: grades                 .26        .17         .20       −.38       .31       1.00

(Source: Based on a sample of 216 British undergraduate students reported in Entwistle, McCune, and Hounsell 2002)
Table 4.2 Correlations between perceptions of environment and other variables (Greek).

ETLQ scales                        Approaches to learning and studying   Perceived outcomes
                                   Deep      Strategic  Surface   Knowledge  Self-rating:  GPA
                                   approach  approach   approach  acquired   grades
Perceptions of teaching–learning environment:
  Encouraging understanding         .30        .32       −.10       .33        .15         .13
  Assessment for understanding      .41        .37       −.15       .44        .14         .14
  Staff enthusiasm and support      .30        .40       −.07       .44        .18         .16
Self-rating: grades                 .20        .39       −.19       .27       1.00         .69
Grade Point Average (GPA)           .32        .38       −.18       .22        .69        1.00

(Source: Based on a sample of 250 Greek undergraduates reported in Karagiannopoulou and Milienos 2013)
The correlations with the self-rating of grades tended to be lower, with the strongest coming from an avoidance of a surface approach, while deep and organised studying showed positive relationships, although less strongly. Of the perceptions scales, the highest correlations with grades came with effectiveness of feedback on assignments, as well as staff enthusiasm and support. The relationships with 'surface approach' were all substantially negative, and this pattern has been confirmed in many other studies.

Recently, a study in Greece also used ASSIST alongside the ETLQ (Karagiannopoulou and Milienos 2013). A reanalysis of those data for this chapter allowed comparisons to be drawn between the patterns of relationships within a different university system (see Table 4.2). The general pattern of relationships was remarkably similar to that in the ESRC study, although the weaker correlations between the surface approach and both self-rated grades and actual grades suggest less detriment to achievement through the use of surface approaches, while the perceptions of supportive teaching also show less positive effects on the deep approach.

Taken as a whole, this area of research shows that students' perceptions of assessment and feedback are consistently related to the approaches to learning and studying they adopt, which in turn affect their academic achievement (see Biggs and Tang 2011; Entwistle 2009). However, the strength and direction of these effects do vary with the method of assessment used and with how their teachers perceive the function of assessment, as we shall see later in this chapter.

Perceptions of Different Types of Assessment in Relation to Approaches to Learning

Patrick Thomas, a researcher visiting the Lancaster research team, used the ASI to investigate how approaches to learning changed when the type of examination was altered from an essay examination to multiple choice questions (MCQs) and then back again to essays (Thomas and Bain 1984). The introduction of MCQs led the class as a whole to show increased levels of surface approaches, along with reduced deep approaches; reintroducing the essay exam reversed that pattern again. This same effect has been found in other studies, which also show that students generally perceive essay-type exams as demanding understanding, and MCQs and SAQs as requiring rote memorisation and reproductive answers (Scouller 1998).
These effects can, however, be mitigated, or even removed, by explaining the purposes for which each method is being used, as was found in the ESRC project. Careful explanations of the differing functions of MCQs and SAQs can enable students to avoid misperceptions about the purposes of these different assessment methods and so tailor their revision to the specific requirements of the different exam formats (Reimann and Xu 2005). Misperceptions of the functions of different types of assessment can create serious difficulties for students, as will become clear later on.

Dai Hounsell pioneered attempts to encourage academic staff in Scotland to introduce more imaginative forms of assessment through an inventory of methods expected to encourage high quality learning (Hounsell, McCulloch, and Scott 1996). These ideas have been developed further to take account of the interactions between assessment and the feedback provided for students (Hounsell, Xu, and Tai 2007; Hounsell et al. 2008). Working with academics within the ETL Project (mentioned earlier) showed that there were often major obstacles in the way of adopting new approaches to assessment and, more recently, a range of studies has investigated the influence of new modes of assessment on approaches to learning, but these effects have proved to be variable. Some studies found a clear association with deep approaches, but others suggested either little effect or even an unexpected shift towards surface approaches. For example, one study reported that students who had found portfolio assessment interesting and stimulating tended to adopt deep approaches (Segers, Gijbels, and Thurlings 2008). But Gijbels and Dochy (2006) reported that a formative mode of assessment appeared to increase surface approaches, while another study suggested a paradoxical effect, in that even when students perceived that a new form of assessment was encouraging them to adopt deep approaches, they still shifted their actual approaches towards surface strategies (Gijbels, Segers, and Struyf 2008).

This series of studies indicates the complexity of the interactions involved. Unclear goals in new modes of assessment, such as portfolio assessment, may encourage students to revert to the 'safer' habits of surface learning (Segers, Gijbels, and Thurlings 2008). In any case, the demands being made by teachers in any new approach need to be explained carefully (Segers, Gijbels, and Thurlings 2008), and students need time to adapt their perceptions and approaches to those new requirements (Segers, Nijhuis, and Gijselaers 2006).
Perceptions of Assessment Targets and their Influence on Learning

Students' perceptions of assessment relate not just to the types of assessment they face, but also to what they believe tutors will reward through the marks awarded. At university level, as in the senior years of schooling, teachers generally expect students to reach a satisfactory conceptual understanding of the topics they are studying. This becomes the target understanding for which students are expected to be aiming (Smith 1998). However, students often find the target obscure, as conceptual understanding can only be reached once students have sufficient previous knowledge to recognise what the target really means (Meno's paradox). Students gradually come to realise what is required through what is said in lectures and tutorials and from feedback on assignments, with this combination of experiences gradually creating a more accurate perception of the expected target understanding. Students, however, may have their own agenda for learning in trying to develop their own personal understanding, which may or may not coincide with the target set by the teacher (Smith 1998). Sometimes the disparity between these two forms of understanding creates a tension between what is formally required and what students hope to achieve for themselves, as we shall see in some interview extracts later on.

The specific targets that teachers set are inevitably affected by their views about the nature of teaching and how they believe it affects student learning. Those conceptions vary markedly, not only across disciplines, but also among individual teachers (Entwistle 2009). The main distinction parallels the deep/surface dichotomy in students' approaches to learning. At one extreme, some teachers focus narrowly on the information to be conveyed to students, with the academic content viewed solely in terms of their own experience of the discipline. At the other extreme, teachers try to see the content from the perspective of the students and focus more directly on encouraging the students' conceptual development (Prosser and Trigwell 1999).

In a more recent study of academics' experiences of their teaching and of how they conceptualise their own discipline, Prosser and his colleagues (2007) came up with the intriguing finding that differences in approaches to teaching are related to the way in which the lecturer thinks about the subject – the extent to which it is seen in a broadly integrated way or as discrete packages.
At one extreme, the subject is seen as a series of topics or issues, with little or no attention being paid to the discipline as a whole. When the subject is seen in this way, lecturers tend to talk about 'delivering' discrete 'packages' of information to students. In such a scenario, there is little opportunity for students to see how they might integrate what they learn into a larger field of knowledge; what they know is likely to remain a series of isolated facts. At the other extreme, when the subject matter is seen by an academic as a coherent whole, students are more likely to be helped into a relationship with the field as a whole and to experience and develop a personal understanding of that whole (Prosser, Martin, and Trigwell 2007, 56).

Although it is possible to categorise university teachers' conceptions of teaching and assessment in this way, it is also important to recognise how these conceptions may change over time, with increasing experience and awareness of students' perceptions of their teaching and of the consequences of differing methods of assessment for students' learning (Entwistle and Walker 2002). These conceptions also depend on the level of the students being taught, as well as on any constraints in the conditions under which students are taught (Entwistle, Karagiannopoulou, and Ólafsdóttir forthcoming).

Almost inevitably, academics' views about appropriate assessment methods will parallel their conceptions of teaching. However, academic staff do not have the same freedom in choosing assessment methods as in deciding on their own teaching approaches. There will thus be occasions where staff who want to teach for understanding face assessment requirements that make it difficult to reward understanding appropriately, as we found in the ESRC project (Hounsell and Entwistle 2005). We thus have situations where university teachers feel constrained in the extent to which they can use assessment to support high quality learning and where students feel a tension between what they believe they have to do in assessment tasks and the understandings that they would like to reach for themselves. This experience was described, pointedly, by an Australian lecturer who was interviewed as part of the study mentioned above (Prosser, Martin, and Trigwell 2007):
I did a workshop last year on assessment, different modes and what students expect out of it, just problematising it and thinking about different structures, and from that I came away with what I thought were some great ideas about assessment that I could fit into my course. [However,] the faculty set such restrictive assessment requirements that I can't do these new things that are supposedly able to give students more choice and are able to re-invigorate and re-energise the learning process. (Prosser, personal communication)
Looking more broadly at students' perceptions of both teaching and assessment, and thinking about what influences the quality of learning they undertake, Figure 4.1 indicates the complexity of the interactions that are taking place, based on a review of earlier studies (Entwistle and Smith 2002). To avoid confusion, single arrows are used to show a possible sequence of events and influences, although the actual causal relationships are likely to be more complex than the model conveys. The interactions between students and staff depend on the overall context within which these take place. The target understanding presented to the students is influenced by the teachers' beliefs about teaching and learning, which then affect how the formally specified assessment requirements are interpreted and conveyed to the students through the methods of teaching and assessment. In turn, the students' perceptions of the target understanding depend on their previous knowledge and their perceptions of the teaching and assessment provided by the department as a whole. But they are also affected by what they perceive their teachers to be setting as the target understanding and by how much freedom to explore their own understanding they believe their teachers will allow within the defined assessment criteria, as we shall see later. The satisfaction felt by the students will then depend on the match, or lack of it, between the formally assessed learning outcome and the personal understanding they have developed.

To the complexity of this model have to be added the difficulties involved in making changes in the assessment system. Even where alteration or change is possible, the effects of introducing innovative procedures will be uncertain. Some of this complexity can be illustrated by looking in detail at the experiences of students as they met an examination procedure designed to foster academic understanding, namely open-book exams.
[Figure 4.1, a diagram, is not reproduced here. On the student's side, it links influences on personal understanding from the student's experiences – current knowledge and understanding, and perception of the teaching and assessment – through motivation and approach to studying, comprehension of topics and target, and strategy, effort and engagement, to the personal understanding developed by the student. On the teacher's side, it links the teacher's subject-matter knowledge and attitudes and beliefs about teaching and learning, through the interpretation of target understanding, the choice of teaching mode and method, the choice of topics and learning materials, and the type of formative assessments used, to the target understanding presented by the teacher and the formally agreed specification of the target. The question of a match then arises between the formal assessed learning outcome, as seen in the teacher's perception of the student's assessed work, and the student's personal understanding, with both developmental trends set within the departmental, institutional and cultural ethos.]

Figure 4.1 Influences of assessment and teaching on the quality of learning outcomes. (Source: Developed from Figure 3 in Entwistle and Smith 2002)
Open-Book Exams and their Influence on Perceptions and Approaches

Open-book assessment puts an emphasis on thoroughness and understanding, on thinking and analysing, and enhances higher order cognitive skills (Biggs and Tang 2011; Zoller and Ben-Chaim 1988; Zoller, Ben-Chaim, and Kamm 1997). Students taking open-book exams tend to report higher levels of motivation, better engagement with the tasks in structuring and mastering content, and greater optimism about the forthcoming exam than those taking traditional exams (Broyles, Cyr, and Korsen 2005; Theophilides and Koutselini 2000), where the emphasis is on memorisation and the reproduction of factual information (McDowell 1995).
But, as we have seen earlier, new approaches to assessment may not achieve their expected goals, which may be the result of a mismatch between tutors' goals and those of some students. A recent study in Greece, using ASSIST, indicated that students who reported predominantly deep approaches were more likely to prefer open-book exams, yet many of them also showed lower scores on the strategic scales, indicating a form of dissonance that was associated with lower levels of achievement (Karagiannopoulou and Milienos 2013). In an earlier interview study of twenty, mainly female, psychology students, Karagiannopoulou (2010) found that they reported differing reactions to the open-book format, often with elements of both deep and surface approaches being identified. A case study of four of the students was then carried out (Karagiannopoulou and Entwistle 2013). A review of the comments that students in this analysis had made, seen in relation to the themes emerging in this chapter, proved valuable in exploring how students' predispositions and intentions in learning interacted with their perceptions of the exam and their experiences of teaching. It seemed that some students with a predisposition to learn by rote had misperceived the purposes of the open-book exam and made poor attempts, while others, with an intention to understand for themselves, were either helped or hindered by the way in which their tutor reacted to their attempts to develop an independent personal understanding.

Misperceptions of the Learning Requirements for the Open-Book Exam

The effect of misunderstanding the learning requirements for an open-book exam can be seen in the reaction of a student who declared herself to be the 'kind of person who believes that I can trust only the "formal" sources of knowledge, [as] personal experiences and trusting yourself to develop [your own] understanding may lead you to the wrong path'. When she encountered open-book exams, she believed that all she had to do was to read her lecture notes through (reflecting her 'formal' sources) and then bring them with her to the exam:
I attended the lectures and had a good set of classroom notes. I read through my notes three or four times, and I sat the exams. I didn't get into much depth because I could look up any information I needed to develop my answer. I failed, [although] I thought I had revised all I needed to succeed in the exams, since the lectures were almost copied in my notes . . . Next September, I sat for an exam on the same subject. I repeated the process. I revised the same notes. I had read them through many times. I thought that classroom notes were the 'key' to passing, [as] tutors usually expect an answer close to their lectures. I felt I had understood the content, and I used in my answer the information presented in the lectures. I failed once again.
This experience of repeated failure forced the student to rethink her ways of revising, but she still remained unsure of the opportunities for learning that an open-book exam offered her:

[This year] I sat for the exams having all this material with me and I got B. I read through the question and I started writing down the answer. My notes were almost useless in the exam. I read them through once and I put them aside. I answered the question from the beginning to the end, non-stop, almost automatically. Hopefully, I was not off the track. I can't say that I have a clear idea of what was the crucial thing that, this time, made my answer so good. I have to admit that I'm never clear of what the tutor wants us to write in an answer, how she approaches an issue.
Conflict between the Tutor's and the Student's Perceptions of the Exam Demands

Another student described a fundamental clash between the personal understanding she wanted to develop for herself and what she believed the tutor required. The student described herself as someone who 'always tries to make sense of the information, to draw my own conclusions . . . That's me. This happens irrespective of what the tutors present in the class'. She was clear in her own mind that the open-book exam should have suited her way of learning, but she found that the tutors were expecting a reproductive approach in the exams and thus experienced a form of 'destructive friction' (Vermunt and Verloop 1999, 270) that left her angry and disillusioned:
Tutors ask us to attend the lectures consistently to develop our own understanding, but they don't eventually value it. They want us to express our own point of view, but they eventually want us to reproduce their ideas . . . [In the past] I presented my own perspective, I was critical of the theories approved of, and I failed . . . Now, I sit for the exam taking mainly into account the classroom notes in order to develop my answer close to the tutors' ideas, adding a few personal thoughts to the ideas presented in the lectures, if necessary . . . I can't be bothered to present my own view or interpretations . . . to present any personal understanding; I don't believe that it is any good for me, no better grade [and] no appreciation of my attempts [to present my own views] . . . I feel humiliated: we are human beings, and [yet] they treat us like machines; we're asked to regurgitate knowledge . . . I'm a kind of person who is always seeking meaning, but I can't be bothered any more to develop my understanding . . . in ways approved by them. It's my own business.
The strength of this student’s reaction illustrates the power of destructive friction emerging from discord between student and tutor regarding their differing perceptions of what higher education should involve. Another student also resented the need to fit in with the tutor’s way of thinking: To pass the exam you need to read between the lines, what is underlying the lecturer’s [position] . . . to be in touch with her way of thinking . . . [but] this is not clear at all. It seems to be an authoritarian relationship. She builds up an argument or an approach and we should align our thinking to her own understanding and perspective.
A ‘Meeting of Minds’ through Matching Exam Demands with Personal Understanding Our final illustration comes from another student, who wanted to understand ideas for herself in a more directly conceptual way. She explained that: I always try to understand the issue at hand . . . thinking a lot about which may be the most important concepts or ideas in the material . . . I mainly focus my revision on them: I definitely don’t focus on details.
She saw her tutors as providing the conceptual framework within which she could develop her own understanding more fully:
Tutors try to get us into a new way of thinking through teaching and examples they present using the theory . . . using new theoretical constructs to understand things around . . . issues, but mainly developing a rationale . . . This is their main concern . . . They're concerned about us being able to think critically on the issues we have been taught and be able to make sense of the world, of our lives using the knowledge we have been taught . . . [and] to value for ourselves the process of seeking meaning and real understanding per se. They also expect us to be able to look out for the relevant literature and build up our own understanding of an issue through the lens of the underlying parameters. This is close to what they want [us] to present in the exams . . . parameters that underlie an issue, to tell a story . . . It's not so much about getting into details in full.
She saw the tutor’s conceptual framework not as a restriction, but as an opportunity to develop her own thinking within an appropriately academic way of thinking. She recognised that tutors are, in her experience, encouraging her to develop her own ideas: They’re concerned about us being able to think critically on the issues we have been taught . . . [and] to value for ourselves the process of seeking meaning and real understanding per se . . . It’s not all about education, but self-development . . . I try to think what [the tutor] appears to perceive as important and what she emphasizes in the lectures. It’s all about the main concepts and ideas that make up her perspective and understanding. I get into it, and then this is what I take into account when thinking about possible questions and how to answer them . . . It’s the ‘know-how’ experiences I get from the lectures that enable me to approach any relevant issue . . . But I also think of possible questions which I myself perceive as significant after the lecture experience . . . what I myself perceive as worth knowing. [I try] to take a critical stance on the material: the germ of it can be found in tutor’s thinking . . . which is ‘feeding’ mine. I have a direction, her perspective, you start with the tutor’s perspective, you bring in previous knowledge and experiences that get you to a different end from where you started.
Conclusion

These extracts have been used to illustrate the range of perceptions of assessment that students can have in relation to the rules of the assessment game, and how these interact with perceptions of teaching, as well as with their feelings about the teachers themselves, to affect their approaches to learning. Negative feelings aroused by the experience of destructive friction are likely to interfere with any attempts to develop personal understanding, while the experience of a 'meeting of minds', where teachers, teaching and assessment are all simultaneously in tune with each other, will enhance the quality of personal understanding. Of course, the extracts used above illustrate only a restricted range of reactions to a specific form of assessment, but taken in conjunction with the earlier findings from larger samples, we can see more clearly why students' individual and collective perceptions need to be taken into account in interpreting the findings of studies that seek to generalise about the effects of different kinds of assessment processes.

This area of research complements Dai Hounsell's contributions to the field of assessment and feedback by drawing attention to the variations in students' experiences of assessment and in the feelings aroused by either consonance or dissonance in those experiences. Assessment and feedback are not procedures that we can expect to have uniform effects: they are events that students interpret in terms of their own individual motives and feelings. The variability in perceptions reflects the individual student's previous experience and aspirations, in addition to the individual teacher's convictions about the purpose of assessment, as indicated earlier in our heuristic model (see Figure 4.1). It is also important to recognise that the intention to understand and the need to be successful in assignments and exams arouse strong feelings in students, and that these may pull in opposite directions on occasion. But if students feel confident enough to follow their own preferred ways of learning and believe that their efforts will be fairly rewarded through the assessment procedure, then the learning experience is more likely to be both enjoyable and successful.
References

Biggs, J. B., and C. Tang. 2011. Teaching for Quality Learning at University (4th edn). Buckingham: Open University Press and SRHE.
Broyles, I. L., P. R. Cyr, and N. Korsen. 2005. 'Open Book Tests: Assessment of Academic Learning in Clerkships.' Medical Teacher 27: 456–62.
Entwistle, N. J. 2009. Teaching for Understanding at University: Deep Approaches and Distinctive Ways of Thinking. Basingstoke: Palgrave Macmillan.
Entwistle, N. J., and A. C. Entwistle. 1991. 'Contrasting Forms of Understanding for Degree Examinations: The Student Experience and its Implications.' Higher Education 22: 205–27.
Entwistle, N. J., and D. M. Entwistle. 2003. 'Preparing for Examinations: The Interplay of Memorising and Understanding, and the Development of Knowledge Objects.' Higher Education Research and Development 22: 19–42.
Entwistle, N. J., E. Karagiannopoulou, and A. Ólafsdóttir. 2014. 'Contributions of Different Levels of Analysis to Research into Experiences of University Learning and Teaching.' The Psychology of Education Review. Forthcoming.
Entwistle, N. J., and P. Ramsden. 1983. Understanding Student Learning. London: Croom Helm.
Entwistle, N. J., and C. A. Smith. 2002. 'Personal Understanding and Target Understanding: Mapping Influences on the Outcomes of Learning.' British Journal of Educational Psychology 72: 321–42.
Entwistle, N. J., and P. Walker. 2002. 'Strategic Alertness and Expanded Awareness in Sophisticated Conceptions of Teaching.' In Teacher Thinking, Beliefs and Knowledge in Higher Education, edited by N. Hativa and P. Goodyear, 15–40. Dordrecht: Kluwer.
Entwistle, N. J., V. McCune, and J. Hounsell. 2002. 'Approaches to Studying and Perceptions of University Teaching–Learning Environments: Concepts, Measures and Preliminary Findings.' Occasional Report 1. ETL Project. Accessed October 4, 2013. www.etl.tla.ed.ac.uk/docs/ETLreport1.pdf.
Entwistle, N. J., V. McCune, and H. Tait. 2013. 'Approaches and Study Skills Inventory for Students: Report of the Development and Use of the Inventory.' Accessed October 4, 2013. www.etl.tla.ed.ac.uk/questionnaires/ASSIST.pdf.
ETL Project. 2006. 'Introduction to the ETL Project Questionnaires.' Accessed October 4, 2013. www.etl.tla.ed.ac.uk/questionnaires/scoringkey.pdf.
Gijbels, D., and F. Dochy. 2006. 'Students' Assessment Preferences and Approaches to Learning: Can Formative Assessment Make a Difference?' Educational Studies 32: 399–409.
Gijbels, D., M. Segers, and E. Struyf. 2008. 'Constructivist Learning Environments and the (Im)possibility to Change Students' Perceptions of Assessment Demands and Approaches to Learning.' Instructional Science 36: 431–43.
Gulikers, J., T. Bastiaens, P. Kirschner, and L. Kester. 2006. 'Relations between Student Perceptions of Assessment Authenticity, Study Approaches and Learning Outcome.' Studies in Educational Evaluation 32: 381–400.
Gulikers, J. T. M., L. Kester, P. A. Kirschner, and T. J. Bastiaens. 2008. 'The Effect of Practical Experience on Perceptions of Assessment Authenticity, Study Approach, and Learning Outcome.' Learning and Instruction 18: 172–86.
Hounsell, D. J. 1984. Students' Conceptions of Essay Writing. Unpublished PhD thesis, University of Lancaster, Lancaster.
Hounsell, D. J. 1987. 'Essay Writing and the Quality of Feedback.' In Student Learning: Research in Education and Cognitive Psychology, edited by J. T. E. Richardson, M. W. Eysenck, and D. Warren-Piper, 109–19. Milton Keynes: Open University Press and SRHE.
Hounsell, D. J. 2010. 'Reshaping Feedback and Assessment.' Academy Connect 3. Accessed October 4, 2013. www.heacademy.ac.uk/resources/detail/ourwork/ipp/Issue3_DaiHounsell.
Hounsell, D. J., and N. J. Entwistle. 2005. 'Enhancing Teaching–Learning Environments in Undergraduate Courses.' Final Report to the Economic and Social Research Council on TLRP Project L139251099. Accessed October 4, 2013. www.etl.tla.ed.ac.uk/docs/ETLfinalreport.pdf.
Hounsell, D., M. McCulloch, and M. Scott. 1996. The ASSHE Inventory: Changing Assessment Practices in Scottish Higher Education. Accessed October 4, 2013. www.ed.ac.uk/schools-departments/institute-academic-development/learning-teaching/staff/advice/assessment/approaches/strategies-inventory.
Hounsell, D., R. Xu, and C. M. Tai. 2007. Monitoring Students' Experiences of Assessment. Scottish Enhancement Themes: Guides to Integrative Assessment, no. 1. Gloucester: Quality Assurance Agency for Higher Education.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. 'The Quality of Guidance and Feedback to Students.' Higher Education Research and Development 27: 55–67.
Karagiannopoulou, E. 2010. 'Effects of Classroom Learning Experiences and Examination Type on Students' Learning.' Psychology: The Journal of the Hellenic Psychological Society 17: 325–42.
Karagiannopoulou, E., and N. J. Entwistle. 2013. 'Influences on Personal Understanding: Intentions, Approaches to Learning, Perceptions of Assessment, and a "Meeting of Minds".' Psychology Teaching Review 19 (2): 80–96.
Karagiannopoulou, E., and F. S. Milienos. 2013. 'Exploring the Relationship between Experienced Students' Preferences for Open- and Closed-Book Examinations, Approaches to Learning and Achievement.' Educational Research and Evaluation 19: 271–96.
Karagiannopoulou, E., and F. S. Milienos. 2014. 'Testing Two Path Models to Explore Relationships between Students' Experiences of the Teaching–Learning Environment, Approaches to Learning and Academic Achievement.' Educational Psychology. Forthcoming.
Marton, F., and R. Säljö. 1976. 'On Qualitative Differences in Learning: I. Outcome and Process.' British Journal of Educational Psychology 46: 4–11.
Marton, F., and R. Säljö. 1984. 'Approaches to Learning.' In The Experience of Learning, edited by F. Marton, D. J. Hounsell, and N. J. Entwistle, 71–89. Edinburgh: Scottish Academic Press. Accessed October 4, 2013. www.tla.ed.ac.uk/resources/EOL.html.
Miller, C. M. L., and M. Parlett. 1974. Up to the Mark: A Study of the Examination Game. London: Society for Research into Higher Education.
Northedge, A., and J. McArthur. 2009. 'Guiding Students into a Discipline: The Significance of a Teacher.' In The University and its Disciplines, edited by C. Kreber, 107–18. London: Routledge.
Prosser, M., and K. Trigwell. 1999. Understanding Learning and Teaching: The Experience in Higher Education. Buckingham: Open University Press and Society for Research into Higher Education.
Prosser, M., E. Martin, and K. Trigwell. 2007. 'Academics' Experiences of their Teaching and of their Subject Matter.' In Student Learning and University Teaching, edited by N. J. Entwistle and P. D. Tomlinson, 49–60. Leicester: British Psychological Society.
Ramsden, P. 1984. 'The Context of Learning in Academic Departments.' In The Experience of Learning, edited by F. Marton, D. J. Hounsell, and N. J. Entwistle, 198–216. Edinburgh: Scottish Academic Press.
Ramsden, P. 1991. 'A Performance Indicator of Teaching Quality in Higher Education: The Course Experience Questionnaire.' Studies in Higher Education 16: 129–50.
Reimann, N., and R. Xu. 2005. 'Introducing Multiple-Choice Alongside Short-Answer Questions into the End-of-Year Examination: The Impact on Student Learning in First Year Economics.' Accessed May 1, 2013. www.etl.tla.ed.ac.uk/publications.html.
Richardson, J. T. E. 2005. 'Students' Perceptions of Academic Quality and Approaches to Studying in Distance Education.' British Educational Research Journal 31: 1–21.
Richardson, J. T. E. 2006. 'Investigating the Relationship between Variations in Students' Perceptions of their Academic Environment and Variations in Study Behaviour in Distance Education.' British Journal of Educational Psychology 76: 867–93.
Säljö, R. 1979. 'Learning in the Learner's Perspective. I – Some Common-Sense Conceptions.' Research Report 76. Gothenburg: University of Gothenburg, Department of Education.
Scouller, K. 1998. 'The Influence of Assessment Method on Students' Learning Approaches: Multiple Choice Question Examination Versus Assignment Essay.' Higher Education 35: 453–72.
Segers, M., D. Gijbels, and M. Thurlings. 2008. 'The Relationship between Students' Perceptions of Portfolio Assessment Practice and their Approaches to Learning.' Educational Studies 34: 35–44.
Segers, M., R. Martens, and P. Van den Bossche. 2008. 'Understanding How a Case-Based Assessment Instrument Influences Student Teachers' Learning Approaches.' Teaching and Teacher Education 24: 1751–64.
Segers, M., J. Nijhuis, and W. Gijselaers. 2006. 'Redesigning a Learning and Assessment Environment: The Influence on Students' Perceptions of Assessment Demands and their Learning Strategies.' Studies in Educational Evaluation 32: 223–42.
Smith, C. A. 1998. Personal Understanding and Target Understanding: Their Relationships through Individual Variations and Curricular Influences. Unpublished PhD thesis, University of Edinburgh, Edinburgh.
Theophilides, C., and M. Koutselini. 2000. 'Study Behaviour in the Closed-Book and the Open-Book Examination: A Comparative Analysis.' Educational Research and Evaluation 6: 379–93.
Thomas, P. R., and J. D. Bain. 1984. 'Contextual Dependence of Learning Approaches: The Effects of Assessments.' Human Learning 3: 227–40.
Vermunt, J. D., and N. Verloop. 1999. 'Congruence and Friction Between Learning and Teaching.' Learning and Instruction 9: 257–80.
Zoller, U., and D. Ben-Chaim. 1988. 'Interaction between Examination Type, Anxiety State, and Academic Achievement in College Science: An Action-Oriented Research.' Journal of Research in Science Teaching 26: 65–77.
Zoller, U., D. Ben-Chaim, and S. Kamm. 1997. 'Examination-Type Preferences of College Science Students and their Faculty in Israel and USA: A Comparative Study.' School Science and Mathematics 97: 3–12.
5 Students’ and Teachers’ Perceptions of Fairness in Assessment Telle Hailikari, Liisa Postareff, Tarja Tuononen, Milla Räisänen and Sari Lindblom-Ylänne
Introduction
Earlier chapters in this book have already made clear the significant role that assessment plays in the learning process, established that students are sensitive to what will be assessed and elucidated how assessment is undertaken. Indeed, assessment significantly influences students' ways of studying by forming a 'hidden curriculum' (Brown, Bull, and Pendlebury 1997; Struyven, Dochy, and Janssens 2005; Hodgson and Pang 2012), which guides the direction and extent of their efforts. Moreover, grades have a profound impact on students' sense of their own abilities and achievements (Sadler 2009). However, there is evidence that teachers do not necessarily have the competencies necessary to assess in a valid and reliable manner (Prosser and Trigwell 1999; Parpala and Lindblom-Ylänne 2007; Postareff et al. 2012). Students may thus obtain good grades in exams without ever reaching a firm understanding of the course's fundamental ideas (Ramsden 2003; Segers, Dochy, and Cascallar 2003; Struyven, Dochy, and Janssens 2005). Fleming (1999) has noted that, even though grades play such a major role in students' lives and are widely used as indicators of the quality of higher education, their role as objective indicators of learning outcomes is rarely questioned (see, also, Knight 2002).
Recently, an extensive set of projects has been carried out at the Centre for Research and Development of Higher Education at the University of Helsinki, designed to explore a variety of issues related to the experiences of students and intended to help university teachers provide a more supportive learning environment for their students. In this chapter, we look specifically at students' and teachers' experiences of the fairness of assessment. We also explore issues related to course grades, which are often used as objective indicators of the quality of learning, to see whether these grades sufficiently mirror students' actual learning achievements. The idea of 'fairness' is important, because this term is often used when students express their feelings about assessment (Sambell, McDowell, and Brown 1997). From a research perspective, the issue of 'fairness' raises questions about the reliability and validity of the methods used to measure levels of attainment in higher education, and these links form the major focus of our current study.

In this chapter, we explore the issue of fair assessment through a review of the relevant literature, before outlining the importance of validity and reliability in relation to it. We then look at the notion of 'fairness' in terms of the necessary alignment between the defined aims of a course and the criteria used to assess them, and the importance of those criteria being made explicit to students. This leads to a description of our research project and an overview of our findings from a series of separate analyses, and finally to an explication of what we see as the implications of our findings.

What is Fair Assessment?

'Fair assessment' can be seen in terms of the two main concepts used to describe measurements in social science – namely, reliability and validity – with the implication that the grades awarded in assessment should be consistent, irrespective of the marker and the conditions under which the assessment has taken place, and that they should validly reflect what they are intended to assess. In our view, the definition of validity should also include the concept of 'grade integrity' (Sadler 2009, 807), which means the extent to which grades correspond to the quality, breadth and depth of students' academic achievement. For grade integrity to occur, students' examination answers should not be compared with those of other students, as in the norm-referenced approach, but rather be matched against the assessment criteria for the course (as argued in the following chapter by Prosser).
Moreover, a student's individual history or previous achievements should not affect assessment results (Sadler 2009). Segers, Dochy, and Gijbels (2010) also suggest that, in addition to reliability and validity, fair assessment should include the transparency and authenticity of assessment. 'Transparency' describes an assessment process that is clear and comprehensible to all participants, while 'authenticity' implies that assignments should assess the knowledge and skills that are needed in realistic contexts, as opposed to being restricted to the academic context.

Students' experiences of the fairness of assessment seem to be directly related to the specific assessment method that is used. New modes of assessment have been regarded as having a more positive influence on student learning than traditional approaches and, as part of that experience, are likely to affect students' perceptions of the fairness of assessment (Segers, Dochy, and Gijbels 2010). However, there is no conclusive evidence supporting this claim, and concerns about fairness can be found in the cases of both traditional and newer forms of assessment. Sambell, McDowell, and Brown (1997) showed that traditional assessment methods, such as pencil-and-paper exams held at the end of a study module, were perceived as inaccurate measurements of learning that directed students towards memorisation, rather than understanding. And students complained that success in traditional exams was dependent on external factors, such as their physical condition on the exam day, test anxiety and their level of stress, rather than their actual ability. While there are examples of more flexible alternative methods of assessment being seen as fairer and as having a positive effect on learning (see also Struyven et al. 2005), there is also evidence to the contrary. Mogey and others (2007) found that students were concerned about the fairness of alternative ways of undertaking essay examinations, such as through the use of computers. Similar findings were reported by Segers and Dochy (2001) regarding the use of self- and peer assessment.

Students' perceptions of fairness appear linked to their attitudes about whether or not the assessment positively affects their learning (Segers, Dochy, and Cascallar 2003; Sambell, McDowell, and Brown 1997). Thus, if assessment practices are to support high quality learning, one of the central criteria is that they must be experienced as being fair.
Previous studies have shown that students perceive fair assessment as something that accurately measures complex skills and qualities and provides genuinely valid measures of what they deem to be meaningful learning (Sambell, McDowell, and Brown 1997; Kniveton 1996). Furthermore, differences in students' perceptions of fairness appear to be mediated by different test and evaluation conditions, such as perceived workloads (Chambers 1992; Drew 2001), and by individual differences in learning approaches (Struyven, Dochy, and Janssens 2005; Segers, Nijhuis, and Gijselaers 2006).

A severe challenge for the validity and reliability of assessment is that many university teachers do not see assessment as an essential part of the teaching–learning process. It has been shown that teachers often lack the awareness and skills concerning alternative methods of carrying out assessment and so rely mainly on conventional assessment methods (Parpala and Lindblom-Ylänne 2007; Postareff et al. 2012; Ramsden 2003). A study by MacLellan (2001), for example, emphasised the importance of teachers' awareness of assessment. The study compared students' and tutors' perceptions of assessment and revealed significant differences between the two groups' perceptions. One reason for this discrepancy appeared to lie in the teachers' beliefs that they were assessing a fuller range of learning processes than they actually were. As a result, students did not believe that they were being given opportunities to advance their own learning through assessment. Indeed, many had a strong sense of inequitable treatment, as we saw in the previous chapter by Entwistle and Karagiannopoulou (this volume).

Alignment between Aims and Assessment Criteria in Promoting Fair Assessment

Another important issue affecting both validity and the perception of fairness in assessment is the extent to which assessment criteria are aligned with the defined aims of a course. The concept of constructive alignment was introduced by Biggs (1996) to indicate the importance of aligning teaching and assessment with the specified aims of each course being taught. The term 'constructive' was used to indicate that the aims should be in line with constructivist principles of learning – in other words, encouraging students to develop understanding for themselves.
Applied to fairness in assessment, this principle implies that assessment criteria which explicitly reward personal understanding should be designed to make clear to students the main aims of the course and how these are to be evaluated in assessments.

There are a number of studies evidencing the importance of ensuring equivalent expectations between teachers and students regarding assessment through the use of explicit criteria. In a study of students' and teachers' perceptions of the level of difficulty of multiple choice questions, for example, Lingard and others (2009) found that fewer than half of the students recognised the level of assessed knowledge being required of them, possibly leading them to focus on lower-level knowledge and skills than they should have when studying. McCune and Hounsell (2005) also found that students were uncertain about what was expected of them and felt that assessment criteria were unclear. Similarly, MacLellan (2001) showed that almost 80 per cent of students perceived that assessment was carried out using implicit, rather than explicit, criteria. These results are worrying, because if students are not aware of the criteria by which they are assessed, they will not know what they should do to achieve the desired goals or to improve their learning. This might also result in a perception of unequal treatment by the examiners (see, for example, MacLellan 2001). Drew (2001) showed that students wanted to know exactly what was expected of them, in terms of assessment, learning and grades. That study emphasised the importance of clear expectations, clear briefings and clear assessment criteria in promoting learning through assessment. Setting clear criteria can also help teachers to assess students' answers in a more valid and reliable way (Yorke, Bridges, and Woolf 2000).

The Relationship between Course Grades and Learning Outcomes and Issues of Validity and Reliability

The relative absence of research on the validity and reliability of assessment in the field of higher education is rather surprising, given the importance of this issue. In particular, more research is needed that combines teachers' and students' experiences of assessment, especially concerning the reliability and validity of assessment and what grades actually assess.
investigating teachers’ and students’ experiences of the validity of exams and consistency (reliability) of grading, as well as teachers’ descriptions of the criteria used to assess exams, and seeing the extent to which teachers’ and students’ views correspond with each other. This chapter draws on interview data with undergraduate students and their teachers in two fundamentally different disciplines, bioscience and theology. Four courses were investigated in the biosciences and one in theology. The courses lasted around seven weeks and were taught through lectures supported by active assignments and discussions. A traditional end-of-course exam required students to answer questions that dealt with the content of the lectures and additional course reading. In each of the courses, learning objectives were outlined in the description of the curriculum, but teachers had discretionary power as to whether or not they used these with their students. These learning objectives may still be aimed at a very general level and thus do not necessarily provide students with concrete information about what is expected of them. In the Finnish context, there is no requirement to design any specific assessment criteria, although teachers are encouraged to do so. Altogether, five teachers and fifty-four volunteer students were interviewed, using a stimulated recall (SR) method (see Lyle 2003), at the end of their course, after their exams had been assessed and graded. The teachers were asked to read a selection of the students’ answers for which they had given low, average and high grades and could refer to these papers during the subsequent interviews, if they wished. The students were asked to read their own exam answers, which showed them the grade awarded and, occasionally, contained remarks made by the teacher. Both groups were then able to respond to questions about the exam on the basis of a refreshed familiarity with their answers. The student interviews focused on their experiences of the assessment, but also included questions relating to their learning and studying. Students were asked how they had prepared for the exam and what kind of knowledge and understanding they believed they needed to demonstrate in the exam. In addition, they were asked what grade they had expected and what they thought about the fairness of the assessment. During the interviews with teachers, they were asked why they had set the particular exam questions, what kind of knowledge and understanding they were trying to assess and
how they had assessed the students’ learning achievements. They were also asked about the assessment criteria used for the different grades and why they had given specific grades for particular answers. Our analysis of the type and level of knowledge and understanding measured by the exams was based on a modified version of Hailikari’s (2009) model of knowledge and understanding. This model has three layers (see Figure 5.1). The first layer presents different levels of knowledge or understanding: reproducing, describing, integrating, applying and creating. The second, ‘indicator’, layer demonstrates verbs that describe the cognitive process within the different levels. The third layer demonstrates the range of levels of understanding, from knowing, to understanding and finally to applying and creating knowledge. The model was used as a tool within both teacher and student interviews and for analysing the data. These analyses identified three main themes that related to experiences of ‘fairness’ in assessment and also to issues of validity and reliability. These themes, which are discussed below, were:

1. teachers’ and students’ awareness of what is being assessed;
2. divergence in teachers’ and students’ experiences of grading; and
3. students’ trust in the teacher’s fairness in assessment.

[Figure 5.1 The three-layer model of knowledge and understanding. The first layer sets out five levels (reproducing, describing, integrating, applying, creating); the second, ‘indicator’, layer attaches verbs to each level (recognising, enumerating, remembering, recalling; defining, reproducing, understanding the meaning of the concept; understanding concepts and their interrelations, classifying, comparing; problem solving, application of knowledge, producing, implementing; a new perspective, creating knowledge); the third layer maps these levels onto a range of understanding running from knowing, through understanding, to applying and creating. (Source: Adapted from Hailikari 2009)]
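For readers who want to work with the model computationally, for instance when coding interview transcripts against its layers, the sketch below encodes the categories of Figure 5.1 as a simple lookup structure. It is illustrative only: the level names and indicator verbs are taken from the figure, the alignment of levels to bands follows our reading of the figure, and the dictionary layout and helper function are hypothetical conveniences rather than part of Hailikari’s (2009) instrument.

# Illustrative encoding of the three-layer model in Figure 5.1.
# Level names, indicator verbs and bands come from the figure itself;
# the structure and the classify() helper are hypothetical additions.

KNOWLEDGE_MODEL = {
    "reproducing": {
        "indicators": ["recognising", "enumerating", "remembering", "recalling"],
        "band": "knowing",
    },
    "describing": {
        "indicators": ["defining", "reproducing",
                       "understanding the meaning of the concept"],
        "band": "knowing",
    },
    "integrating": {
        "indicators": ["understanding concepts and their interrelations",
                       "classifying", "comparing"],
        "band": "understanding",
    },
    "applying": {
        "indicators": ["problem solving", "application of knowledge",
                       "producing", "implementing"],
        "band": "applying and creating",
    },
    "creating": {
        "indicators": ["a new perspective", "creating knowledge"],
        "band": "applying and creating",
    },
}


def classify(indicator):
    """Return the (level, band) whose indicator list contains the given verb."""
    for level, info in KNOWLEDGE_MODEL.items():
        if indicator in info["indicators"]:
            return level, info["band"]
    return None


print(classify("comparing"))  # -> ('integrating', 'understanding')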
A range of analyses has already been carried out, but so far has been reported mainly in student dissertations, which are predominantly in Finnish and so not available for an international audience, while other articles are still under consideration. Here, we provide an outline of the findings.

Teachers’ and Students’ Awareness of What is Being Assessed

The analyses of the teachers’ responses looked at their pedagogical awareness of what they are actually assessing, the level of assessed knowledge and the clarity of the assessment criteria. Our analyses of the interviews showed that many teachers were not aware that their ways of assessing could affect their students’ learning; the assessment was often not aligned with the course objectives and, even where it was, teachers did not necessarily assess what had been specified in the objectives. All these aspects threatened both the reliability and the validity of the assessment process. We also found that teachers assessed mostly lower levels of knowledge and understanding, such as the ability to reproduce and describe the knowledge, so students were able to earn high grades by doing no more than memorising their course materials. This last result is in line with previous research, which shows that teachers find it difficult to differentiate between students’ repetition of their understanding and the creation of their own understanding (Prosser and Trigwell 1999). Moreover, teachers may intend to assess students’ critical understanding and yet still believe that understanding can be measured by assessing how well students can recall information (Postareff et al. 2012). Although some teachers in our study clearly had not reflected on what and how they planned to assess, others were more aware of what they wanted to measure and their exams consequently encouraged advanced levels of understanding, such as integrating knowledge. Our analyses also revealed that students were not really aware of the kind of knowledge that the exam was measuring. Our results showed variation in students’ evaluations of the level of knowledge for the same question, ranging from the repetition of knowledge to the integration and application of it. However, most of the students believed that the exam mostly measured repetition of knowledge, which again suggested problems with either the validity of the assessment or the students’ misperceptions of what was required. The validity of assessment was also threatened by the fact that students were aware that the exam questions were often the same as those used in previous years, so focused their revision on memorising specific content from the course, without attempting to understand it more deeply. In one of the courses, the teacher had told the students that the exam would be based on the lectures and that it could be easily passed by active attendance at the lectures. In this instance, the students neither anticipated a very demanding exam, nor invested much effort in studying for it. Thus, students’ perceptions of the demands of the exam could be seen to be strongly guiding their learning processes, as seen in previous research (Biggs and Tang 2007; Brown, Bull, and Pendlebury 1997). Our results are in line with this previous research, showing that teachers’ conceptions of assessment and their assessment practices are often traditional and conventional. This lack of awareness of more sophisticated forms of assessment could explain the focus on the reproduction of knowledge through summative assessment in exams (Postareff et al. 2012). Sambell and others (1997) have shown that such assessment methods direct students towards memorisation, whereas for constructive alignment in assessment, the expected learning outcomes should be aligned with assessment in a way that promotes deep levels of understanding (Postareff et al. 2012). Our findings also suggest that teachers’ assessment practices had, in some cases, been influenced by a lack of clear assessment criteria for grading, which would indicate severe problems for both the reliability and validity of assessment. Both teachers and students felt that the criteria were unclear, seeming to be flexible and constantly changing. The results also suggest that students’ lack of awareness of the assessment criteria made it difficult for them to understand why they received certain grades. Sadler (2009) has suggested that the reliability of assessment is threatened if students are not made aware of the assessment criteria. They might then have very different views from the teacher of what will be assessed and what level of understanding is required. McCune and Hounsell (2005) have reported similar difficulties, with students being found to be uncertain about course requirements and assessment criteria. Students need to be guided towards deeper understanding so as to reach higher levels of academic achievement, as well as improved academic self-belief and study pace (Hailikari 2009).
Lack of preset assessment criteria also influenced students’ experiences, with some students believing that the assessment was not based on preset criteria and being unaware of how grades or marks for the examination were assigned. Most theology students thought that their grades reflected their efforts and that the teacher assessed the exam according to criteria relating to differing levels of understanding, rather than just the memorisation of facts. The teacher described taking into account the content of the answer and the way in which students had demonstrated their own thinking, as well as the overall nature of the assessment, but without any clear and preset assessment criteria. The fairness of this assessment was also seen to be problematic, because the teacher assessed students’ answers by comparing them to each other, using the principles of norm-referenced assessment by ‘grading on the curve’, which can affect the level of grading awarded (Sadler 2009). A similar tendency was also found with other teachers in our study. Our results suggest that a lack of clear, preset criteria makes teachers more likely to base the grades awarded on subjective judgments. As a consequence, judgments then tend to be more affected by tiredness and mood, so reducing the reliability of the assessment process, as Sadler (2009) has also indicated.

Divergence in Students’ and Teachers’ Experiences of Grading

Our results revealed that there was a considerable divergence in teachers’ and students’ perceptions of the extent to which grades reflected students’ achievements. In one of our studies, the teacher’s and the students’ experiences of assessment were compared with each other. Both the teacher, in retrospect, and the students felt that the grades did not reflect the quality of the actual learning achievements, with the teacher having given either overly high or overly low grades to students. In some instances, the teacher’s and the students’ views about grades were completely opposite, with the student expecting a higher grade and the teacher thinking that the grade should have been lower. This divergence might well have been caused by the lack of preset assessment criteria (as already discussed), which makes it difficult for both the teacher and students to evaluate learning achievements in a consistent way. A discrepancy between the students’ self-reported learning outcomes and the course grades was also shown in another study. Some students received high grades, even though they themselves reported that they had not understood much of the content and had relied on isolated facts. This lack of alignment between learning outcomes and assessment suggested, in Sadler’s (2009) terms, that ‘grade integrity’ was not achieved. This has also been found in previous studies (Ramsden 2003; Segers, Dochy, and Cascallar 2003; Struyven, Dochy, and Janssens 2005). The results reported so far imply that, in the current situation, grades do not necessarily reflect the quality of learning achievements and should therefore be treated cautiously as indicators of it. In other studies, however, we found that the grades did reflect the students’ learning outcomes rather well. In other words, good grades reflected a higher level of thinking and understanding: in these courses, the teachers had preset assessment criteria and the students were aware of what was expected from them.

Students’ Trust in Teachers’ Fairness in Assessment

Interestingly, our studies found that although the students felt the assessment criteria to be unclear and they were not aware of why they had received certain grades, they still considered the assessment to be fair, trusting the assessment conducted by their teacher. The students might not always have understood how the teacher assessed the exam answers, but they still trusted the fairness of assessment. Our results showed that, in this instance at least, students considered that only the teacher had the expertise to make judgments of their learning achievements. It was surprising that none of the students asked for justifications of their grades. If they experienced something unclear in assessment, they tried to find explanations in terms of their own actions or perceived abilities and so might begin to doubt their own abilities as students. To summarise this section, both the validity and reliability of assessment are related to experiences of the fairness of assessment. A lack of clear assessment criteria for grading, and a lack of correspondence between the grades and what students believed to be their level of achievement, caused a sense of unfairness regarding the assessment. When assessment was valid and reliable and the criteria were clear to the students, both the teachers and students were likely to recognise that it was a fair assessment. Lack of feedback on their work meant that students did not know why they received a certain grade. Still, they did not question the assessment or the grades given, but rather believed they had
misjudged their own competence if the grade did not correspond with their expectations. The results of our studies also provide clear empirical evidence of the backwash effect of assessment, as students changed their learning according to the assessment of the course. This is in line with previous results indicating that assessment steers student learning by forming a hidden curriculum (Brown, Bull, and Pendlebury 1997; Struyven, Dochy, and Janssens 2005; Hodgson and Pang 2012).

Conclusion: Implications for Practice

Our results suggest that it is important to raise awareness among academics of the importance of the validity and reliability of assessment and its effects on studying and learning. Without pedagogical awareness, it is difficult for teachers to vary their assessment practices and assess appropriate levels of knowledge. Furthermore, teaching staff can improve the perceived fairness of their assessment by producing clear assessment criteria and being more explicit with students regarding how their grades are determined. If they really do want students to develop their own understandings of academic material, then the aims and assessment criteria need to be congruent (or constructively aligned) to reward high-level cognitive processes, and this congruence needs to be extended from course level to the institutional level (McCune and Hounsell 2005). The goal should be to promote a culture that fosters the development of assessment at the institutional level, whereas at the moment teachers may feel that university assessment regulations and established departmental patterns of assessment limit their possibilities to develop assessment (Anderson and Hounsell 2007). Institutional encouragement and support from the broader higher education community is therefore needed to put these ideas into action. We suggest that pedagogical awareness can be adopted and shared through collaborative practices in the teaching community of the discipline. Sharing positive experiences in collaborative settings may result in higher pedagogical awareness throughout the entire community. Our findings also draw attention to the importance of students being provided with adequate feedback that helps them make informed judgments about their own abilities and equips them for future demands (Hounsell et al. 2008). This applies in relation to individual pieces of work, but it is
also important to help students to understand the nature of the assessment process in general, in ways such as those suggested by Prosser (this volume). Such awareness allows students to consider the grades they are awarded more critically and to ask for justifications for them where necessary. Overall, our findings, taken in conjunction with previous research findings, make a strong case for believing that many current practices of assessment in higher education open up the validity and reliability of grading procedures to question. The serious implications of such fallibility in grading cannot be overestimated, given not only the status currently given to grades both administratively and in research studies, but also the effects that inaccurate and unfair grading can have on the confidence and self-image of students.

References
Anderson, C., and D. Hounsell. 2007. ‘Knowledge Practices: “Doing the Subject” in Undergraduate Courses.’ The Curriculum Journal 18: 463–78.
Biggs, J. B. 1996. ‘Enhancing Teaching through Constructive Alignment.’ Higher Education 32: 347–64.
Biggs, J., and C. Tang. 2007. Teaching for Quality Learning at University (3rd edn). Maidenhead: Society for Research into Higher Education/Open University Press.
Brown, G., J. Bull, and M. Pendlebury. 1997. Assessing Student Learning in Higher Education. London: Routledge.
Chambers, E. 1992. ‘Workload and Quality of Student Learning.’ Studies in Higher Education 17: 141–54.
Drew, S. 2001. ‘Perceptions of What Helps Students Learn and Develop in Education.’ Teaching in Higher Education 6: 309–31.
Fleming, N. 1999. ‘Biases in Marking Students’ Written Work: Quality?’ In Assessment Matters in Higher Education: Choosing and Using Diverse Approaches, edited by S. Brown and A. Glasner, 83–92. Maidenhead: Society for Research into Higher Education & Open University Press.
Hailikari, T. 2009. Assessing University Students’ Prior Knowledge. Implications for Theory and Practice. PhD dissertation, University of Helsinki, Helsinki.
Hodgson, P., and M. Y. C. Pang. 2012. ‘Effective Formative E-Assessment of Student Learning: A Study on a Statistics Course.’ Assessment and Evaluation in Higher Education 37: 215–25.
Hounsell, D., and J. Hounsell. 2007. ‘Teaching–Learning Environments in Contemporary Mass Higher Education.’ In Student Learning and University Teaching, edited by N. Entwistle and P. Tomlinson, 91–111. Leicester: British Psychological Society.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. ‘The Quality of Guidance and Feedback to Students.’ Higher Education Research and Development 27: 55–67.
Knight, P. 2002. ‘Summative Assessment in Higher Education: Practices in Disarray.’ Studies in Higher Education 27: 275–86.
Kniveton, B. 1996. ‘Student Perceptions of Assessment Methods.’ Assessment and Evaluation in Higher Education 21: 229–37.
Lingard, J., L. Minasian-Batmanian, G. Vella, I. Cathers, and C. Gonzalez. 2009. ‘Do Students with Well-Aligned Perceptions of Question Difficulty Perform Better?’ Assessment & Evaluation in Higher Education 34: 603–19.
Lyle, J. 2003. ‘Stimulated Recall: A Report on its Use in Naturalistic Research.’ British Educational Research Journal 29: 861–78.
MacLellan, E. 2001. ‘Assessment for Learning: The Differing Perceptions of Tutors and Students.’ Assessment and Evaluation in Higher Education 26: 307–18.
McCune, V., and D. Hounsell. 2005. ‘The Development of Students’ Ways of Thinking and Practising in Three Final-Year Biology Courses.’ Higher Education 49: 255–89.
Mogey, N., G. Sarab, J. Haywood, S. van Heyningen, D. Dewhurst, D. Hounsell, and R. Neilson. 2007. ‘The End of Handwriting? Using Computers in Traditional Essay Examinations.’ Journal of Computer Assisted Learning 24: 39–46.
Parpala, A., and S. Lindblom-Ylänne. 2007. ‘University Teachers’ Conceptions of Good Teaching in the Units of High-Quality Education.’ Studies in Educational Evaluation 33: 355–70.
Postareff, L., V. Virtanen, N. Katajavuori, and S. Lindblom-Ylänne. 2012. ‘Academics’ Conceptions of Assessment and their Assessment Practices.’ Studies in Educational Evaluation 38: 84–92.
Prosser, M., and K. Trigwell. 1999. Understanding Learning and Teaching: The Experience in Higher Education. Buckingham: SRHE and Open University Press.
Ramsden, P. 2003. Learning to Teach in Higher Education (2nd edn). London: Routledge.
Sadler, R. 2009. ‘Grade Integrity and the Representation of Academic Achievement.’ Studies in Higher Education 34: 807–26.
Sambell, K., L. McDowell, and S. Brown. 1997. ‘“But is it Fair?”: An Exploratory Study of Student Perceptions of the Consequential Validity of Assessment.’ Studies in Educational Evaluation 23: 349–71.
Segers, M., and F. Dochy. 2001. ‘New Assessment Forms in Problem-Based Learning: The Value Added of the Students’ Perspective.’ Studies in Higher Education 26: 327–43.
Segers, M., F. Dochy, and E. Cascallar. 2003. ‘The Era of Assessment Engineering.’ In Optimising New Modes of Assessment: In Search of Qualities and Standards, edited by M. Segers, F. Dochy, and E. Cascallar, 1–12. Dordrecht: Kluwer Academic Publishers.
Segers, M., F. Dochy, and D. Gijbels. 2010. ‘Impact of Assessment on Students’ Learning Strategies and Implications for Judging Assessment Quality.’ In International Encyclopedia of Education (3rd edn), edited by P. Peterson, E. Baker, and B. McGaw, 196–201. Oxford: Elsevier.
Segers, M., J. Nijhuis, and W. Gijselaers. 2006. ‘Redesigning the Learning and Assessment Environment: The Influence on Students’ Perceptions of Assessment Demands and their Learning Strategies.’ Studies in Educational Evaluation 32: 223–42.
Struyven, K., F. Dochy, and S. Janssens. 2005. ‘Students’ Perceptions about Evaluation and Assessment in Higher Education: A Review.’ Assessment and Evaluation in Higher Education 30: 331–47.
Yorke, M., P. Bridges, and H. Woolf. 2000. ‘Mark Distributions and Marking Practices in UK Higher Education: Some Challenging Issues.’ Active Learning in Higher Education 1: 7–27.
6 Perceptions of Assessment Standards and Student Learning Michael Prosser
Introduction
The relationship between assessment and student learning has been a major issue in education for a long time. The argument has been that what is assessed constitutes the curriculum for many students and how it is assessed constitutes the learning process. The ideas behind, and the relationship between, summative and formative aspects of assessment have been much discussed. More recently, the issue of feedback and its relationship to quality assurance (Williams and Kane 2008) and student learning (Hattie and Timperley 2007) has been at the forefront of these discussions. Substantial and influential research on assessment and feedback in higher education has been carried out by Dai Hounsell. I first came across Dai’s earlier work at a joint conference of the Society for Research into Higher Education and the Cognitive Psychology Section of the British Psychological Society in 1985. At that conference, Dai presented a paper on “Essay Writing and the Quality of Feedback” (Hounsell 1987). Since then, Dai has written extensively on assessment and feedback, recently culminating in a six-step model for feedback (Hounsell et al. 2008). In much of this work, Dai has focused on the processes of feedback and on the criteria of assessment and feedback in relation to those criteria. The quality, quantity, frequency and timeliness of feedback have all been considerations at the forefront of his work, although the issues of standards of assessment, students’ understanding of standards and how that understanding relates to the quality of student learning have not been central for him. In this chapter, I wish to draw upon some of Royce Sadler’s ideas, in particular his focus on standards and students’ understanding of standards of assessment, which complement many of the ideas to which Dai has also referred. I should point out, however, that Royce may not agree with all of the ideas presented here. In his recent work, he has made a strong case that the concepts of ‘criteria’ and ‘standards’ are often confused in the assessment literature in higher education, with criteria often equated with standards; as a result of this, he has made a clear distinction between criteria and standards (Sadler 2005). By a criterion he means:

a distinguishing property or characteristic of anything, by which its quality can be judged or estimated or by which a decision or classification may be made.

By a standard he means:

A definite level of excellence or attainment, or a definite degree of any quality viewed as a prescribed object of endeavour or as the recognized measure of what is adequate for some purpose, so established by authority, custom, or consensus. (Sadler 2005, 189)
So, a criterion is defined in terms of the characteristic(s) by which something is judged, while a standard is the level of attainment reached. Most criterion-referenced assessment systems do not include a statement of standards: they detail characteristics by which something is judged. However, crucially, they rarely describe the level of attainment required to distinguish a pass from a fail, or what is required to achieve higher grades. In this chapter, I also wish to stress that it is students’ understanding of standards that is important in terms of their learning. Of course, it can be argued that the validity of assessments requires the assessor to be clear about the standards being applied, but here I wish to argue that it is the students’ understanding of assessment standards that is important for student learning. If students are unclear about what constitutes the difference in the quality of work required for different grades, they are more likely to believe that the students receiving a higher grade either are ‘smarter’ or ‘work harder’, rather than there being a qualitative difference in the standard of work being achieved.
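To make the distinction concrete, the sketch below models a single criterion with an attached scale of standards. It is purely illustrative: the class names and the sample wordings are hypothetical and are not drawn from Sadler (2005); the point is simply that a criteria-only scheme stops at the first object, while a standards-based scheme also specifies the levels of attainment.

from dataclasses import dataclass

@dataclass
class Criterion:
    """A property by which quality is judged (e.g. 'use of evidence')."""
    name: str

@dataclass
class Standard:
    """A definite level of attainment on a criterion."""
    grade: str
    description: str

# Hypothetical example. A criteria-only rubric supplies just the
# criterion; a standards-based rubric also states what each grade
# looks like on that criterion.
evidence = Criterion("use of evidence")
standards = [
    Standard("fail", "assertions made without supporting evidence"),
    Standard("pass", "relevant evidence cited but not evaluated"),
    Standard("distinction", "evidence critically weighed and integrated"),
]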
In essence, I shall argue that it is a combination of Dai Hounsell’s ideas about feedback – its quality and timeliness – and Royce Sadler’s ideas about students’ awareness and understanding of standards that, together, indicate how best to support the quality of student learning in higher education.

Student Learning

Over the last thirty years or so, there has been a substantial amount of research into students’ learning in higher education – from the perspective of the student. The major point of departure of this research has been that it is the way in which students perceive and understand the teaching and learning context that they experience that relates to how and what they learn. Different students experience the same context in different ways, and it is the ways in which the individual students themselves experience that context that relate to how and what they learn. Prosser and Trigwell have summarised much of this research in terms of an adaptation of the 3-P (Presage, Process, Product) model of student learning (after Dunkin and Biddle 1974; see also Biggs 1978). Their model (Figure 6.1) summarises the relationship between the presage influences (students’ existing characteristics and the teaching and learning context provided to the students), the process influences (students’ approaches to learning and their perceptions of the teaching and learning context) and the product (student learning outcomes).

[Figure 6.1 Adapted 3-P model of student learning. Presage: characteristics of the student (e.g. previous experiences, current understanding) and the course and departmental learning context (e.g. course design, teaching methods, assessment). Process: students’ perceptions of context (e.g. good teaching, clear goals) and students’ approaches to learning (how they learn, e.g. surface/deep). Product: students’ learning outcomes (what they learn: quantity/quality). (Source: Prosser et al. 1994)]

The major point of departure of this model from many other models of student learning is the stress placed on the importance of how students perceive and understand the teaching and learning context, not just the context itself (Prosser et al. 1994; Prosser and Trigwell 1999, 2007, 2009). The model suggests that student learning outcomes, in terms of both quality and quantity, relate to how they approach their studies. The key variation in approaches to learning identified in the model is that between a surface approach and a deep approach. A surface approach is one in which students tend to focus on short-term reproduction, often to meet assessment requirements, while a deep approach is one in which students focus on longer-term meaning and understanding. The way in which individual students within a class approach their studies is not determined directly by the way in which the teachers design and teach their courses (‘Course and departmental learning context’ in the model), but rather by how individual students perceive and understand that particular context. Their perceptions are formed through an interaction between their prior experiences and the way in which the course is designed and taught. Among the key issues in their perceptions are how they perceive the quality of teaching and feedback, how they understand what is being assessed, how they perceive their ability to cope with the size of the workload and, finally and most importantly for this chapter, how they perceive and understand the goals and standards of assessment in the course (Prosser and Trigwell 1999). As an example of these interactions, a recent study at the University of Hong Kong (Prosser 2013) carried out a factor analysis to show the associations between indicators of perceptions, approaches and outcomes, producing two factors distinguishing positive and negative groupings. Table 6.1 shows that in Factor 1 the outcome measures of overall satisfaction, Grade Point Average (GPA) and perceptions of achievement of the university’s aims were positively associated with a deep approach to studying and perceptions of quality teaching (including feedback), as well as clarity of goals and assessment standards. Factor 2 shows that high scores on surface approaches to studying, assessment being seen to measure reproduction rather than understanding and workloads being found to be inappropriately heavy were
all associated with low GPAs. These results are entirely consistent with the model and with the research reviewed in Prosser and Trigwell (1999).

Table 6.1 Factor analysis of student learning experience questionnaires.

Scale                                                          Factor 1   Factor 2
Perceptions of Context:
  Good teaching (feedback and motivation)                        .77
  Clear goals and standards                                      .69
  Inappropriate assessment (assessing reproduction,
    not understanding)                                                       .49
  Inappropriate workload (too much to learn and understand)                  .77
Approaches to Learning:
  Surface Approach (short-term reproduction)                                 .77
  Deep Approach (long-term understanding and application)        .48
Learning Outcomes:
  University aims                                                .69
  Grade Point Average                                            .41        −.49
  Overall satisfaction                                           .75
(Notes: N = 2123. Factor loadings less than .4 omitted.)

Assessment Standards and Student Learning

As early as 1983, Entwistle and Ramsden identified students’ perceptions and understanding of assessment standards as important with regard to students’ adoption of deep approaches to study. In that study, based upon interviews and student inventories, they developed the Course Perceptions Questionnaire (CPQ), which includes a scale labelled ‘clear goals and standards’, with one of the core items being: ‘It’s always easy here to know the standard of work expected of you’. This questionnaire was later revised by Ramsden to produce the Course Experience Questionnaire (CEQ) (Wilson, Lizzio, and Ramsden 1997), which, along with subsequent versions, has been widely adopted by universities as a key performance indicator, including at the University of Hong Kong. As part of its curriculum renewal process, the University of Hong Kong introduced the CEQ as a key component of its performance indicators to monitor the introduction of a new curriculum (Prosser 2013). Among the items included were:
Item 2: The teachers normally give helpful feedback on my progress.
Item 22: The teachers put a lot of time into commenting on my work.
Item 7: It is always easy to know the standard of work expected.
Item 19: The teachers made it clear right from the start what they expected from the students.

Table 6.2 Correlation matrix of perceptions of assessment items with approaches to studying scores and grade point averages.

Perceptions of assessment               Surface   Deep   GPA
Item 2: Feedback (helpful)                .01      .23    .14
Item 22: Feedback (quantity)              .05      .30    .11
Item 7: Clear assessment standards        .10      .21    .20
Item 19: Clear goals                      .09      .20    .20
GPA                                      −.11      .22
(Note: Correlations of .10 and above are significant at the 0.01 level.)
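For readers interested in reproducing this kind of analysis, the sketch below computes an item-by-scale correlation matrix of the form shown in Table 6.2. It is a minimal illustration under stated assumptions: the file name and column names are hypothetical placeholders standing in for the CEQ item responses, the surface and deep approach scale scores and the GPAs used in the study.

import pandas as pd

# Hypothetical data set: one row per student, with responses to the four
# CEQ assessment items, approach-to-studying scale scores and GPA. All
# column names are placeholders, not those of the original study.
df = pd.read_csv("student_responses.csv")

items = ["item2_feedback_helpful", "item22_feedback_quantity",
         "item7_clear_standards", "item19_clear_goals"]
scales = ["surface_approach", "deep_approach", "gpa"]

# Pearson correlations of each item (and GPA) with the scale scores,
# mirroring the layout of Table 6.2.
corr = df[items + scales].corr().loc[items + ["gpa"], scales]
print(corr.round(2))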
An earlier administration of the questionnaire also included Biggs’ Study Process Questionnaire (Biggs 1987), providing scale scores on surface and deep approaches to learning and studying. The questionnaires were administered to final year students, and the sample for analysis included students from three broad fields of study: arts and social science, business and economics, science and engineering, with a sample size of 604 complete returns. Table 6.2 shows the results of a correlation analysis of the four assessment items with surface and deep approaches to study and the students’ GPAs. Although most of the correlations are statistically significant in this large sample, the levels of association of the two feedback items are negligible with the surface approach scores and low with GPAs, but are considerably higher with deep approach scores, particularly with: ‘The teachers made it clear right from the start what they expected from the students’. However, the relationship of the feedback items with GPAs is low, indicating that the feedback has a stronger relationship with approach than it does with outcome, although deep approach itself is related to GPA (0.22). (It should be noted in passing that correlations of scales with items are always lower than they are with other scales, as item scores are much less reliable.) Looking at the correlation between the ‘goals and standards’ items and
the scales, the relationship with surface approach is low, but somewhat higher with both deep approach and GPA. The important point is that the correlations with clear assessment standards are as substantial as they are with the two feedback items, so that, in this sample at least, students’ perceptions of how clear they are regarding the standards of assessment are as important to students’ learning processes and outcomes as are their perceptions of the quality and quantity of feedback. And so, as students’ perceptions of the clarity of the standards of assessment are important to their learning, how can students be assisted in making sense of the standards?

Standards and Grade Descriptors

In recent years, there has been a substantial move away from norm-based approaches to assessment towards more criteria- or standards-based approaches. There have been a number of reasons for this. Princeton University, for example, adopted a standards-based assessment system, with a restriction on the percentage of ‘A’ grades being awarded, as a way of addressing the problem of grade inflation. Another reason for adopting standards-based approaches has been calls by the public to justify the standards being applied. But from the perspective of student learning, it has been argued that students deserve to be graded on the basis of the quality of the work they produce, not on how they perform relative to other students. It is also argued that students should be informed at the start of a course about the criteria and standards by which they will be assessed. So, how do we help our students develop an understanding of the standards expected? From my perspective, we cannot expect to be able to help our students until we have articulated the standards to ourselves as teachers. This is not something that university academics tend to do, at least not explicitly. The old aphorism often applies: ‘I know a first class mind when I see it’. The debate over the future and role of honours degrees in the United Kingdom, Hong Kong and Australia often turns around the issue of standards. It is also argued that there should be some sort of equivalence of standards across courses (a unit of study) within a programme. Without this, it makes little sense to combine the results from different courses into an honours grade or GPA. While this sounds appropriate in principle, how do we work towards articulating standards, and how do we help to ensure some sort of equivalence across a whole programme of study? One suggestion is in the use of
‘grade descriptors’. These are broad verbal statements about the general standards being applied that provide a qualitative description of each grade. The variation between grades is then expected to be described as a qualitative, not (just) quantitative, variation among students. Such descriptors can be developed at the institutional, programme and course level, but need to be aligned with each other. In this way, some equivalence across programmes and between courses within programmes can be expected. Institutional-level grade descriptors are necessarily general and need to be carefully contextualised within individual programmes and courses. Marking criteria – or marking rubrics – would then need to be developed for each assessment item, aligned with the course-level grade descriptor.

Examples of Grade Descriptors in Current Use

An example of a set of grade descriptors at an institutional level is that from the University of Queensland (2013):

Final Grade Descriptors
1. Fail. Fails to demonstrate most or all of the basic requirements of the course.
2. Fail. Demonstrates clear deficiencies in understanding and applying fundamental concepts; communicates information or ideas in ways that are frequently incomplete or confusing and give little attention to the conventions of the discipline.
3. Fail. Demonstrates superficial or partial or faulty understanding of the fundamental concepts of the field of study and limited ability to apply these concepts; presents undeveloped or inappropriate or unsupported arguments; communicates information or ideas with lack of clarity and inconsistent adherence to the conventions of the discipline.
4. Pass. Demonstrates adequate understanding and application of the fundamental concepts of the field of study; develops routine arguments or decisions and provides acceptable justification; communicates information and ideas adequately in terms of the conventions of the discipline.
5. Credit. Demonstrates substantial understanding of fundamental concepts of the field of study and ability to apply these concepts in a variety of contexts; develops or adapts convincing arguments and provides coherent justification; communicates information and ideas clearly and fluently in terms of the conventions of the discipline.
6. Distinction. As for 5, with frequent evidence of originality in defining and analysing issues or problems and in creating solutions; uses a level, style and means of communication appropriate to the discipline and the audience.
7. High Distinction. As for 6, with consistent evidence of substantial originality and insight in identifying, generating and communicating competing arguments, perspectives or problem solving approaches; critically evaluates problems, their solutions and implications.
The standards of assessment on all programmes and courses within the University of Queensland undergraduate studies are now expected to align with these standards. An equivalent set of grade descriptors at faculty level is that of the Faculty of Arts at the University of Hong Kong:

A: Excellent. Students with this grade must show evidence of original thought, strong analytical and critical abilities as well as a thorough grasp of the topic from background reading and analysis; should demonstrate excellent organizational, rhetorical and presentational skills. For courses with a predominant language focus, and within the related level of proficiency, students display excellent performance and knowledge in areas such as grammar and vocabulary, oral and aural competency.

B: Good to very good. Students must also be critical and analytical, but not necessarily original in their thinking. They show an adequate grasp of the topic from background reading and analysis, and demonstrate strong organisational, rhetorical, and presentational skills. For courses with a predominant language focus, students should display very good performance and knowledge in areas such as grammar and vocabulary, and oral and aural competency.

C: Satisfactory to reasonably good. Students must show a reasonable grasp of their subject, but most of their information is derivative, with rather little evidence of critical thinking. They should nevertheless demonstrate fair organisational, rhetorical, and presentational skills. For courses with a predominant language focus, students should display reasonable performance and knowledge in areas such as grammar and vocabulary, oral and aural competency.

D: Barely satisfactory result. Students have assembled the bare minimum of information, which is poorly digested and not very well organised in its presentation. There is no evidence of critical thinking. For courses with a predominant language focus, students display minimal performance and knowledge in areas such as grammar and vocabulary, and oral and aural competencies are barely satisfactory.

F: Unsatisfactory. Students have demonstrated poor knowledge and understanding of the subject, a lack of coherence and organization, and answers are largely irrelevant. Work fails to reach degree level. For courses with a predominant language focus, and within the related level of proficiency, students display poor performance and knowledge in areas such as grammar and vocabulary, and their oral and aural competencies are not satisfactory. (Adapted from University of Hong Kong 2013)
Finally, an example of grade descriptors at course level comes from the Animal Biology department at the University of Cambridge (2013):

A. Outstanding. Excellent insight into the practical aims; exceptionally good organisation and presentation; critical treatment of the results. The Discussion would be very clearly written and show evidence of originality.

B. Good. Full understanding of the practical aims; coherent organisation; clear presentation; accurate answers to the questions. The Discussion would be a complete and critical response to the prompts and questions in the hand-out.

C. Satisfactory. Good in parts, but with important points omitted. Might also have defects in presentation, or be not very well written. Reasonably competent, but might show misunderstanding of the material: significant inaccuracies or errors.

D. Poor. Some knowledge of the material is evident, but there are serious deficiencies in understanding, organisation, clarity, or accuracy. Write-ups that are unduly brief would also fall into this category.
A Conceptual Basis for Grade Descriptors

If we examine these and other sets of grade descriptors, there are two different ways in which they are structured: either more quantitative or more qualitative. The more quantitative ones use qualifiers such as: excellent; good; acceptable; some; and none. If the grade descriptors do not go beyond such categories, then students are not being helped to develop their understanding of the standards required. They need to know, for example, how ‘good’ is distinguished from ‘excellent’. The descriptors need to go beyond these broad terms by including more qualitative descriptions of the variation between the grades awarded. Indeed, they are easier to defend if there is a clear conceptual basis underlying them, either specific to the particular discipline or in more general terms, drawing on ideas from educational research. One such general description, for example, directly relevant to assessment in higher education, has been provided by Biggs in his SOLO taxonomy (Biggs and Tang 2011). SOLO, the ‘Structure of the Observed Learning Outcome’ taxonomy, describes a hierarchically ordered set of categories, increasing in complexity, which describe the relative accuracy, coherence and integration of the material. It ranges from students missing the point altogether to students being able to place the ideas being taught into a wider context. The outcome levels are described as:

Pre-structural: misses the point, introducing only irrelevant aspects.
Uni-structural: reproduces one, or very few, relevant aspects.
Multi-structural: reproduces most relevant aspects, but is unable to relate them into a coherent whole.
Relational: is able to relate and integrate most relevant aspects.
Extended Abstract: is able to use additional ideas or information, or a theoretical perspective, to place the integrated aspects into a broader whole.
This general structure is evident in the grade descriptors illustrated previously, as well as in many other sets of grade descriptors reviewed by the author. Fail grades are often described in pre-structural or uni-structural ways, being able to reproduce some aspects, but with deficiencies and misunderstandings. A pass-level performance is seen in multi-structural terms, being able to reproduce or use most aspects, but not in a coherent way. The middle-level grades involve relational terms, being able to reproduce or use most aspects coherently, while the top grades often bring in extended abstract elements, such as clear insight, originality and so on. Sadler (2005), however, has warned that such descriptors cannot be used as ends in themselves. Providing students and staff with descriptors alone is not sufficient, as at both an institutional and programme level, in particular, there can be many interpretations of the meaning of the grade descriptors, even when described in qualitative terms. The grade descriptors need to be developed by teaching staff within departments through discussion and debate, using real examples of student work. In this way, they can move towards a more common understanding of the standards. But this is not an easy process. In practice, engaging staff in such discussions can be hard going initially, although once engaged with real examples of student work they may well become actively involved. For this chapter, the main issue is how such grade descriptors can then be used to help students develop their understanding of the standards being described. Of course, students do not deliberately submit incoherent or multi-structural responses, but helping them to understand what is involved in providing more complete answers is not served by simply showing them examples of coherent and original work. Indeed, that can be off-putting and even demoralising. Neither is it helpful to inform students, intended as feedback, that a particular grade descriptor or aligned marking rubric calls for a more ‘integrated account’ or to suggest that the essay or laboratory report is ‘not coherent’ or ‘shows no originality’. Such comments criticise without providing any real help in trying to improve, as Dai Hounsell argued in his early research into essay writing (Hounsell 1987). This issue of the use of language in helping students to understand learning goals, and so make their perception of the purpose of assessments clearer, will be taken up again in subsequent chapters. So, how are students to be assisted in understanding the underlying rationale of the grade descriptors and so see more clearly how to develop their ways of presenting their ideas in more convincing ways? One way is to ask students to judge and rate examples of assessments previously graded at the different levels (for example, multi-structural, relational and extended abstract levels) and discuss their conclusions with other students, leading to
a whole-class discussion; indeed, this method has been found to bring home the variation between the grade descriptors in a powerful manner (Rust, Price, and O’Donovan 2003). Individual students can thus be helped to understand how to improve their own performance through recognising the significance of the nature of the variations that they have seen among other students’ assessed work.

Conclusion

In this chapter I have shown that research since the early 1980s has been consistently demonstrating that students’ perceptions of clear (or unclear) goals and standards in assessment are related to the approaches to learning and studying that they adopt and so to their achievement levels. Indeed, our recent study in Hong Kong showed quite specifically that students who said that the assessment goals and standards of assessment were not clear to them were more likely to be adopting surface approaches to their learning and studying and thus reaching lower levels of learning outcomes (GPAs). As we saw in this chapter, there has been much progress in making standards of marking more consistent by defining grade levels explicitly and using marking schemes that are directly aligned with the goals of the course. And yet the rationale for the use of these methods is not always clear to staff, let alone to students. The main thrust of this chapter is thus to argue that it is important for university teachers to articulate the standards of assessment that they are using more clearly, making these much clearer for both themselves and their students. However, there are difficulties in communicating reasons behind the awarding of different grades, which leaves students unclear as to how to improve their performance. But students can be assisted in understanding rationales behind marking schemes and standards – namely, by applying these schemes and standards to examples of other students’ graded work. Supporting the development of students’ understanding of assessment standards is needed to complement and supplement the provision of high quality feedback provided to students (and for which Dai Hounsell has persuasively argued over many years). Indeed, some researchers have argued that if we can help our students to better understand the standards being used, there may be less need for detailed and time-consuming feedback on assessments
(see, for example, Sadler 2005). This may be taking the argument too far, but I would argue that activities aimed at helping our students understand the standards of assessment more clearly are as important as the provision of timely and helpful feedback in our efforts to enhance students’ learning processes and outcomes.

References
Biggs, J. B. 1978. ‘Individual and Group Differences in Study Processes.’ British Journal of Educational Psychology 48: 266–79.
Biggs, J. B. 1987. Student Approaches to Learning and Studying. Hawthorn: Australian Council for Educational Research.
Biggs, J. B., and C. Tang. 2011. Teaching for Quality Learning at University. Maidenhead: Open University Press.
Dunkin, M. J., and B. J. Biddle. 1974. The Study of Teaching. New York: University Press of America.
Entwistle, N. J., and P. Ramsden. 1983. Understanding Student Learning. London: Croom Helm.
Hattie, J., and H. Timperley. 2007. ‘The Power of Feedback.’ Review of Educational Research 77: 81–112.
Hounsell, D. 1987. ‘Essay Writing and the Quality of Feedback.’ In Student Learning: Research in Education and Cognitive Psychology, edited by J. T. E. Richardson, M. W. Eysenck, and D. Warren Piper, 101–8. Milton Keynes: SRHE and Open University Press.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. ‘The Quality of Guidance and Feedback to Students.’ Higher Education Research and Development 27: 55–67.
Prosser, M. 2013. ‘The Four Year Degree in Hong Kong: An Opportunity for Quality Enhancement.’ In Enhancing Quality in Higher Education: International Perspectives, edited by G. Gordon and R. Land, 201–12. London: Routledge International.
Prosser, M., and K. Trigwell. 1999. Understanding Learning and Teaching: The Experience in Higher Education. Buckingham: Open University Press.
Prosser, M., and K. Trigwell. 2007. Understanding Learning and Teaching: The Experience in Higher Education (Chinese edn). Peking: Peking University Press.
Prosser, M., and K. Trigwell. 2009. Understanding Learning and Teaching: The Experience in Higher Education (Arabic edn). Riyadh: Obeikan Research and Development.
Prosser, M., K. Trigwell, E. Hazel, and P. Gallagher. 1994. ‘Students’ Experiences of Teaching and Learning at the Topic Level.’ Research and Development in Higher Education 16: 305–10.
Rust, C., M. Price, and B. O’Donovan. 2003. ‘Improving Students’ Learning by Developing their Understanding of Assessment Criteria and Processes.’ Assessment and Evaluation in Higher Education 28: 147–64.
Sadler, D. R. 2005. ‘Interpretations of Criteria-Based Assessment and Grading in Higher Education.’ Assessment and Evaluation in Higher Education 30: 175–94.
The University of Cambridge. 2008–2009. Animal Biology Course Handbook, 2008–2009. Accessed October 18, 2013. www.zoo.cam.ac.uk/degree/1banimal/handbook.pdf.
The University of Hong Kong. 2013. ‘General Expectations of Student’s Performance at the Various Grades.’ Accessed May 2, 2013. arts.hku.hk/BAprogramme/assessment/A20_906%20amended.pdf.
The University of Queensland. 2013. ‘Grading System (as at 3.10.07).’ Accessed May 2, 2013. ppl.app.uq.edu.au/content/3.10.07-grading-system.
Williams, J., and D. Kane. 2008. Exploring the National Student Survey: Assessment and Feedback Issues. York: Higher Education Academy. Accessed April 11, 2013. www.heacademy.ac.uk/assets/documents/nss/nss_assessment_and_feedback_issues.pdf.
Wilson, K. L., A. Lizzio, and P. Ramsden. 1997. ‘The Development, Validation and Application of the Course Experience Questionnaire.’ Studies in Higher Education 22: 33–53.
7 Only Connect? Communicating Meaning through Feedback Charles Anderson
Background
A
key driver of Dai Hounsell’s research, scholarship and teaching has been a concern with communication and audience. This focus is very evident in his academic writing, staff development guides, online teaching and engaging, lively public performances. Across these different genres one can discern a deft tailoring of the forms of writing or speech to the specific purpose and audience. Communication and audience design have also been an analytic focus in his research work. One of his contributions to the field of assessment has been to delineate the forms and functions of different types of writing on assessment and to construct a clear, coherent typology of genres of publications on assessment (Hounsell et al. 2007). The themes of audience design, perspective-taking and communication have also been central to his research and development work on feedback and guidance on assessment. Over the last decade there has been considerable advocacy to provide more dialogical forms of feedback to students. Here, Dai Hounsell has been well ahead of advances in this field, as this is a cause that he has espoused from his doctoral studies in the early 1980s onwards. In his thesis, for example, he pointed to the need for empathetic perspective-taking on the part of lecturers if appropriate feedback and guidance is to be provided: ‘. . . a central task of teaching may be to “unthink” one’s conception
of essay-writing and put oneself in the frame of mind of a student to whom this conception is not simply unfamiliar but also formidably difficult to grasp’ (Hounsell 1984, 355). From the start of his academic career, he recognised both the inadequacies of any view of communication that, explicitly or implicitly, treats it as a direct transmission of meaning, and the complexities involved in achieving common reference. He observed that the evidence of his thesis work revealed that ‘it is misleading to assume that a common pool of meanings exists. Indeed, as we noted, a recognition by tutors of the problem of intersubjectivity would seem to be a prerequisite to genuine communication about expectations’ (Hounsell 1984, 355). In coming to this view of the need to negotiate, rather than assume the transmission of, meaning, Dai Hounsell has been strongly influenced by the seminal work of Rommetveit. In his own words:

I had found it fruitful to draw on the ideas of the Norwegian psycholinguist Rommetveit (1974, 1979), for whom intersubjective communication, if it is to be effective, depends on both participants in a dialogue having a common set of underlying premises or assumptions. (Hounsell 2003, 70)
Aims and Focus

This chapter follows Dai Hounsell’s lead by drawing on the work of Ragnar Rommetveit to address what I view as a significant gap in the current literature on assessment in higher education. A range of current writing on assessment has highlighted the difficulties that can arise in achieving a common understanding of feedback commentary between students and staff, the contextualised character of meaning and the tacit elements of academic discourse. The body of work on ‘academic literacies’ has also raised awareness of the need for feedback to be well attuned to the contrasting forms of writing required across different disciplines (for example, Lea and Street 1998). While these writings raise important issues concerning forms of literacies and communication in relation to feedback, the literature on assessment and feedback in higher education has not tended to be underpinned by any developed theory of communication. This chapter aims to begin to address this gap by highlighting the value of Rommetveit’s subtle account of word meaning
and communication for thinking through the challenges of sharing framing perspectives on a topic and creating sufficient common understanding. The chapter also draws on the work of another Scandinavian sociolinguist, Per Linell. His theorising of communication follows very similar contours to the framework mapped out by Rommetveit and offers insights into the nature of meaning-making and dialogue that seem particularly pertinent to higher education settings. The chapter first examines Rommetveit’s view of word meaning, including the stress that he places on its contextualised nature and the efforts required to achieve a sufficiently common understanding between the partners to a communicative act, such as feedback on assignments. Rommetveit’s emphasis that all mutuality in communication requires agreement on the perspective from which a phenomenon or state of affairs is viewed is then considered, with connections being drawn to the research that Dai Hounsell has conducted on the ways of thinking and practising (WTPs) of particular disciplines (McCune and Hounsell 2005; Hounsell and Anderson 2009). This leads to an examination of the particular demands on thought and communication that characterise higher education, including the demand for a dialogue with self and others that involves intensive questioning. The chapter concludes by shifting focus to the question of how acts of communication around assessed work can be designed to allow students in professional domains to bring their experiences into a coherent, meaningful pattern that is framed within a developmental narrative oriented towards the future.

The Dynamic Creation of Meaning

Creating Meaning through Communication

A common, albeit not universal, move among linguists over the past few decades has been to reject a semantics that posits fixed, literal meanings for both individual words and larger units of language (for example, Prior 1998). Gee (2008, 7), for example, argues robustly that ‘the cultural model that words have fixed meanings in terms of concepts or definitions stored in people’s heads is misguided’. Moving away from assumptions of set, literal word meanings in effect
‘opens up’ the language of assessment, challenging the notion that it can be a transparent medium of exchange. This point is brought out incisively by Sadler in his discussion of ‘indeterminacy in the use of preset criteria’:

In ordinary discourse, words need to be interpreted in context . . . Word-based criteria have the same versatility as words in general, but naturally are subject to the same limitations. Even those that may appear to have obvious meanings and straightforward implications for grading often stimulate debate when their meanings are probed. A consequence of the lack of unique meanings is that, within the same context, criteria may be interpreted differently by different teachers. They can also be interpreted differently by the same teacher in different assessment contexts. (Sadler 2009, 169)
A subtle analysis of how meaning may be created and sufficient common reference established in everyday settings and institutional encounters emerges from the corpus of Rommetveit’s work (for example, Rommetveit 1968, 1974, 1990; Rommetveit and Blakar 1979). Central to Rommetveit’s view of communication is a focus on the semantic potentialities of language. Rommetveit points up how participants will bring different experiences and knowledge to an act of communication and consequently need to engage in an active process of sense-making, rather than simply transmit and decode information. He portrays the construction of meaning as a dialogical process where ‘commonality is established when two persons construct a temporarily shared world by engaging in dialogue. This involves a coordination of both attention and intention’ (Farr and Rommetveit 1995, 271). This stress on the need for intersubjectivity has to be qualified, however. Contrary to some treatments of his concept of intersubjectivity in the educational literature, Rommetveit recognised that ‘complementarity rarely implies a complete synchrony of the participants’ intentions and thoughts’ (Collins and Marková 1995, 256). Indeed, he has trenchantly critiqued Habermas’s ideal type conceptualisation of ‘pure intersubjectivity’ (Wertsch 1998, 114 [emphasis in original]). While mutual understanding in communication may always remain partial, the depth and extent of the meeting of minds that is required varies considerably across situations (Linell 1995, 182). In particular, as Foppa (1995, 151) has noted, the stakes are especially
high in instructional dialogues where more stringent demands are placed on the commonality of understanding that has to be reached. In a preceding publication, McCune and I, following other writers (for example, Northedge and McArthur 2009), have acknowledged the difficulty in achieving states of intersubjectivity in the conditions of higher education where the demands of communication are particularly exacting (Anderson and McCune 2013, 292). At the same time, however, we have argued that a focus on the indeterminacy of discourse allows meaning to be created, rather than lost, in translation. On this theme, we described (Anderson and McCune 2013, 292) how Newman, Griffin, and Cole (1989) regarded such indeterminacy as leaving ‘room for movement and change’ (11) and observed that:

Just as the children do not have to know the full cultural analysis of a tool to begin using it, the teacher does not have to have a complete analysis of the children’s understanding of the situation to start using their actions in the larger system. (1989, 63)
An emphasis on the indeterminacy of discourse and the likely incompleteness of the intersubjectivity that can be achieved in higher education settings highlights the potential difficulties that may arise in communication of, and around, feedback in higher education. At the same time, though, it suggests one way in which Meno’s paradox can be addressed and how, over time and with sufficient interaction, students can be drawn into active participation ‘in the larger system’ of the practices and discourses of a discipline. This may involve the deployment of ‘hybrid’ discourses in learning, teaching and assessment encounters (Anderson and McCune 2013, 291), with effective feedback involving ‘an interplay between taking out an expert’s view of a subject to students, in terms that novices are likely to understand, and drawing in students’ [everyday lexis and] more common-sense understandings towards expert positions within the discipline’ (Anderson 1997, 192 [emphasis in original]). Such a conception of students’ entry into the discourses of higher education, implying a gradual transformation of meaning-making and the forms in which it occurs, would seem to accentuate the desirability of seeing ‘feedback’ as an ongoing process of dialogue, rather than as an isolated act of communication. Here, Dai Hounsell’s conceptualisation of guidance and
feedback as an integrated cycle gives clear pointers as to how a continuing, coherent dialogue around feedback can be taken forwards (Hounsell et al. 2008; McCune and Rhind, this volume). His concern with highlighting the distinctive features of thinking and discourse in individual disciplines (McCune and Hounsell 2005; Hounsell and Anderson 2009) also reminds us that this dialogue may take rather different forms, both across and within the social sciences, humanities and the sciences. I return to this topic in the section on perspectivity in communication.

Constraints on Meaning Potentials

An emphasis on the semantic potentialities of language does not mean, however, that Rommetveit presents speaking and listening, reading and writing as solipsistic acts of creation and interpretation. In terms which resonate with Bakhtin’s (1981) description of ‘centrifugal’ and ‘centripetal’ forces in language use, and with the Prague semioticians’ portrayal of the interplay between static and dynamic aspects of language (Marková 1990, 134–5), Rommetveit observes that:

The semantic system inherent in our everyday language is orderly and borders on our knowledge of the world, yet ambiguous and open. The order exists in the form of constraints upon semantic potentialities, however, and not in unequivocal ‘literal meanings’. (1979, 153 [emphasis in original])
These constraints on semantic potentials originate in part from the shaping effects of a context ‘where “context” entails intersubjective contracts, ongoing discourse and a horizon of background experience’ (Hanks 1996, 86). Consonant with a dialogical view of meaning-making, contexts are not regarded simply as containers for action, but rather as having to be created to a considerable extent by their participants. Over the following two sections, central elements of Linell’s and Rommetveit’s conceptualisation of the nature of constraints on meaning potentials are set out.

Practices of Meaning Fixation

At this point in the argument, the objection might be raised that surely in academic life the aim customarily is to achieve a fixed, highly specialised terminology that escapes the ambiguities inherent in everyday discourse.
Rommetveit himself fully recognises the relative fixation of meaning within ‘domains such as technological, professional and scientific discourse’ (Rommetveit 1990, 101). However, more is at stake here than a straightforward creation of ‘literal’ meaning that can then be easily communicated to, and readily understood by, students. Linell describes in detail how such comparative fixity of meaning is achieved only through the deployment of specific discursive practices (Linell 1992). In his own words:

things in social life are recontextualized into these specific (scientific, etc.) contexts that are designed for decontextualization. At the same time, this means that such decontextualization is never absolute; it is always embedded within an activity context. Thus, scientific theory-building, as well as activities in science labs and experimental settings, are ‘local’ or ‘context bound’. Scientific practices, including natural scientific ones, are therefore both monologizing (and decontextualizing) and context bound. (Linell 2003, 227)
This account, in which fixity of meaning is established through the operation of particular intellectual and discursive practices that are always instantiated in, and learned within, a specific context, suggests that a fixed body of academic terminology cannot simply be provided to students. Rather, lecturers, by various means, including the provision of feedback, need to assist students to become fluent in the meaning-making practices associated with particular disciplines; to begin to participate in the discursive repertoire of a subject.

Perspectivity in Communication

Before turning to Dai Hounsell’s central concern over the last decade with how students can best be inducted into the practices of individual disciplines, it is necessary first to examine another key element of Rommetveit’s theory of communication. Building on, but also extending, the account given by the phenomenological philosophers (Husserl 1973) of the perspectival structure of perception and cognition, Rommetveit gives a central role to perspectivity in communication (Graumann 1990, 109). He notes that ‘the very identity of any given state of affairs is contingent upon the position from which it is
viewed’ (Rommetveit 1990, 87) and that ‘in order to decide whether what is asserted about any particular state of affairs is true, we must in principle first identify the position from which it is viewed and brought into language’ (89). In other words, a crucial matter for the coordination of attention and intention, according to Rommetveit, is the sufficient sharing of perspectives on the matter that is under discussion (Rommetveit 1990; Graumann 1990). Linell succinctly captures the main thrust of Rommetveit’s exposition of the centrality of perspectives in communication in the following sentence: ‘Convergence of attention (joint reference) implies a shared point-of-view’ (Linell 1995, 180). Dai Hounsell’s research has highlighted the fact that students quite commonly do not share their lecturers’ perspectives on assessment (Hounsell 1997), and this theme is pursued in depth in the chapter in this volume by Entwistle and Karagiannopoulou. This is not a surprising finding when one takes on board Sadler’s analysis of the demands posed by academic tasks (Sadler 1989, 2009). A single assignment will typically pose multiple, complex demands that are not at all easily articulated into discrete criteria. Over the last decade, a central focus in both Dai Hounsell’s research and his academic development activities has been the close examination of the disciplinary perspectives, the ways of thinking and practising (WTPs) of particular subject domains, that students need to engage with if they are to transform their understanding of a subject (McCune and Hounsell 2005; Hounsell and Anderson 2009). Central to this work has been the recognition that ‘domain knowledge exists in dynamic relationship with the practices that are implicated in its creation, interpretation and use’ (Anderson and Hounsell 2007, 463). Accordingly, to be effective, feedback needs to work to unpack these practices, as far as this is possible, and to draw students into deploying them. Drawing students into a set of disciplinary practices commonly entails assisting them in taking a particular epistemological orientation towards that domain’s content and knowledge-making procedures (Anderson and Hounsell 2007, 469), giving them a sense of ‘what counts as “evidence” and the processes of creating, judging and validating knowledge’ (2007, 469). Without a degree of sharing of this framing epistemological perspective, there may be an insufficient meeting of minds around feedback, as Price et al.
(2013, 44) pointedly remind us: ‘tutors expecting a dialogue on the contestability of some disciplinary concepts may “fail” if faced by perplexed students in “dualist” mode who expect corrective feedback that tells them “the right answer”’. Attending to the WTPs of individual disciplines also brings into focus the intricate connection between the form and content of knowledge (Anderson and Hounsell 2007, 463). The content of a domain and the discursive practices within which it is communicated are indivisibly connected (472). As Richardson (1991, 184) has noted: ‘We do not learn the “content” of science and then learn the appropriate expository forms in which to write and speak about it’. When feedback is seen as centrally concerned with performing disciplinary ways of thinking and representation, it necessarily has to be seen as a very active and interactive process.

Miscommunication as a Dialogic Process: Epistemic Responsibility and Power

Turning to a different topic, a corollary of this dialogical view of meaning-making is that miscommunication and misunderstanding must also be seen as ‘dialogically constituted and collectively generated’ (Linell 1995, 185). In Linell’s formulation: ‘A “misunderstanding” must be seen as interactionally constituted, and a matter of collective miscommunication. Conversely, miscommunication involves both misrepresentation (misleading expression) and misunderstanding’. A straightforward but central implication of this view of failure in communication is that one is required not simply to look separately at how lecturers can avoid feedback that students may find misleading and how students themselves can avoid misunderstandings of lecturers’ commentaries. Rather, attention needs to be given to the communicative exchange as a whole. In addition, a dialogic view of communication and miscommunication would seem to imply that both parties to the communication have some responsibility for achieving sufficiently common meaning. However, in exchanges concerning assessment and feedback, the relationships between lecturers and students are customarily distinctly asymmetrical. In particular, it is lecturers who have the power to set the terms on which a topic that features in feedback should be construed. In Rommetveit’s terms, lecturer-to-student feedback is a situation marked by a very uneven ‘distribution of epistemic
responsibility, i.e. responsibility for making sense of the talked-about state of affairs and bringing it into language’ (Rommetveit 1990, 98 [emphasis in original]). Accordingly, with such unequal positions, the onus for the success of communications regarding feedback would seem to lie much more with the lecturer. Asymmetries in perspective-setting and positional inequalities clearly can strongly inhibit dialogue around feedback. Indeed, scholars such as McArthur and Huxham (2013) who espouse a ‘critical pedagogy’ approach have given close attention to how such asymmetries can be reduced or transformed in everyday practice in higher education. McArthur and Huxham advocate taking a particularly wide definition of feedback and encouraging dialogue that can clarify and transform understanding throughout all of the activities of a course, arguing that such transformational dialogue ‘cannot be achieved when feedback is bound to formal assessment because the latter inevitably involves parameters and certainties that restrict the unbounded dialogue we seek to nourish’ (McArthur and Huxham 2013, 100).

The Interrogative Mode

While much recent work on assessment and feedback has stressed the need for dialogue in and around feedback (Carless 2013; McArthur and Huxham 2013; Nicol 2010), there has been less attention given to the forms in which dialogue in higher education is conducted. Here, Barnett provides a useful corrective when he writes that:

Perhaps the dominant strand to the form of life that is academic life is not – as some might say – that it is dialogical or even that it is critical but rather that it is interrogative. That is to say, any utterance proffered in academic life is susceptible to questioning from other parties. Any utterance has its place in an interrogative space. (Barnett 2007, 34)
He goes on to observe that: ‘It is a capacity to live well with questioning coming at one – whether affirming or rebuking – that marks out the criticality that, for many, lies at the heart of higher education’ (2007, 34). As an important qualification to Barnett’s observation, it should be noted that the nature of the ‘questioning coming at one’, the kind of ‘criticality’ one is expected to display, may vary considerably across disciplines.
These reflections from Barnett would seem to carry implicitly a number of important messages concerning communication of, and around, feedback. One straightforward but crucial matter that they raise is what fora – face-to-face and online – are provided to students where such questioning activity is fostered. Here, Nicol (2009, 2010) has presented both evidence that this can, to a certain degree, be achieved in mass higher education and a road map detailing how such questioning practices can be fostered. Following a different line of response to Barnett’s observations, one can consider the extent to which a particular feedback communication itself both exemplifies questioning and fosters it. Wertsch (1998), building on the work of Lotman (1988), points to how individual texts can be marked by a univocal function, where the emphasis is on the clear transmission of meaning and achieving mutual understanding, as opposed to a dialogical function, where the accent is on ‘dynamism, heterogeneity, and conflict among voices’ (Wertsch 1998, 115). In texts where the dialogic function is dominant ‘the focus is on how an interlocutor might use texts as thinking devices and respond to them in such a way that new meanings are generated’ (1998, 115). It could be argued that the balance between the univocal and dialogical functioning of a feedback text (oral or written) may need to vary considerably depending on the immediate purpose of the feedback communication, the subject area involved and the level of performance that a student has achieved. However, if the practice of critical, continuous questioning that characterises higher education is to be developed, it is clearly helpful if lecturers keep centrally in mind the role that their commentary on students’ work can play as a ‘thinking device’, rather than simply as a receptacle of judgements and instructions. Whether or not a feedback comment acts effectively as a thinking device is likely to depend not simply on the form in which it is cast, but also on the work that students are expected to do with this feedback and the wider system of activities into which it is incorporated; here again Nicol (2009, 2010) provides very useful pointers for practice. Writing on language and learning at school level, Nystrand captures what, to my mind, is a key feature that needs to be in place in any educational setting if dialogic feedback messages are to realise their potential as thinking devices. He notes the need for educational settings to provide ‘significant and serious
epistemic roles to students that the students themselves can value’ (Nystrand 1997, 72; also see Wertsch 1998, 119–24). Another facet of the interrogative character of higher education discourse that requires consideration in relation to feedback is the potential threat that a questioning, judging mode of communication can pose to an individual’s sense of competence and self. A feedback communication can be viewed as being a commentary both on the work that has been assessed and, at least implicitly, on the person who has produced the work. Some studies (for example, Young 2000) suggest that whether or not students view feedback as a judgement on themselves and their capacities rather than on their immediate performance is related to their level of self-esteem as learners. Those with low self-esteem are more susceptible to regarding feedback as a judgement on their ability than those with high self-esteem. The American social psychologist Carol Dweck (1999) has established that individuals who hold a fixed theory of intelligence are more likely to have concerns about their performance and about being perceived as competent. For such individuals, the judgements given in feedback may be perceived as a threat to their public face of competence. By contrast, those individuals who possess a view of abilities as being malleable do not tend to have the same self-protective concerns and are more inclined to persist on tasks, seek out challenges and want to exploit opportunities for learning. Building on the work of Dweck (1999), Knight and Yorke (2003) urge university teachers to be alert to this distinction between fixedness and malleability in students’ views of their abilities when assessing work and ‘to encourage “fixed” students in the direction of malleability’ (129).

Norms of Communication: Trust

The observations made in the preceding paragraph on how assessments may threaten an individual’s sense of self bring into focus the normative dimensions of feedback communications. Taking these normative dimensions into account, it can be argued that acting to develop understanding and to spark engaged questioning cannot, or at least should not, be the sole goals of commentary. Graumann (1995, 19 [emphasis in original]) reminds us that ‘essential for the concept of dialogue is a certain degree of mutual trust, i.e. an intrinsically moral dimension of dialogues’. An incisive treatment of ‘trust and its role in facilitating dialogic feedback’
(Carless 2013, 90) has been provided by Carless, who notes how ‘[t]rusting virtues such as empathy, tact and a genuine willingness to listen are ways in which positive feedback messages can flourish and more critical ones be softened’ (2013, 90). Drawing on the work of Reina and Reina (2006), he establishes the importance of the distinction between competence trust and communication trust in relation to feedback. He argues that trust in an assessor’s competence builds an ethos that is conducive to the sharing of ideas and that ‘students need quality input from competent trustworthy sources, as well as potentially less refined feedback from peers’ (Carless 2013, 92). These claims concerning competence trust are supported by empirical work, such as a study conducted by Poulos and Mahony (2008), where it was found that students’ perceptions of the ‘credibility of feedback’ (145) were centrally influenced by their perceptions of lecturers’ abilities and whether or not the lecturers exhibited ‘bias’ (152). Empathy and respect form the bedrock of communication trust (Carless 2013, 92), qualities that clearly need to be displayed on the part of both lecturers and students if open and productive dialogue around formative feedback is to be achieved. If one wishes actively to encourage students to debate and challenge their own and each other’s ideas, there would seem to be a concomitant need to foster communication trust, so that they can, in Barnett’s words, ‘live well with questioning’ (2007, 34). Forms of teaching and assessment that push students to work at the growing edge of their competence mean that they will, at the same time, be very near their frontier of incompetence and hence vulnerable. Sound arguments have been made for students themselves taking on the role of assessors (see Sadler 2013), with both Prosser and Nicol (in this volume), for example, developing a strong case for the value of reviewing peers’ work, although in rather different ways. Carless himself sees distinct advantages in giving students a greater role as assessors, but notes that: ‘For these potentials to be realized, students need to invest faith in their peers and allow their own work to be critiqued, i.e. placing themselves in a vulnerable state’ (Carless 2013, 93). Thus, effective, appropriate engagement in assessment would seem to involve not simply developing individual skills in reviewing and giving feedback, but also mutual adherence to a set of normative assumptions concerning how feedback should be given and received,
which require: commitment to the enterprise; communication and competence trust; and a degree of courage. Following this set of norms in turn can be seen to require a particular practice of giving and receiving critique and of argument. What is at stake in this particular practice of critique and argument is illustrated in the following quotation from an accomplished university teacher whom I interviewed as part of a study of university discussion groups. This particular teacher placed strong emphasis on the value of developing an ‘impersonal’ style of argument for students’ wider education. She observed that:

being able to divorce subject matter and reasonable discourse from personalities is very important: and, you know, you do this partly through having diplomatic skills. So that you can argue very cogently against a point of view without involving either yourself or the other person, I think, is very important. (Anderson 1995, 269)
The Lens of Narrative

I now want to shift the focus of discussion to another analytical lens that can be of value in considering assessment and feedback as acts of communication. There has been a burgeoning movement over the last decade in advocating ‘sustainable’ assessment and feedback practices (Boud 2000; Hounsell 2007) – that is, assessment and feedback that will ‘also meet the longer-term need of equipping students for a lifetime of learning’ (Boud and Falchikov 2007, 7). In this volume, McCune and Rhind highlight the ‘significance for assessment of students’ imagined trajectories in relation to professional communities’. Focusing on students’ future performance as professionals brings to the fore the questions of how assessment and feedback practices can assist in the construction of a coherent, future-oriented sense of oneself as a learner and how connections can be interwoven between past, present and future learning. How can these practices best support students to meet the interrelated challenges of gaining new forms of ‘knowing, acting and being’ (Barnett and Coate 2005, 59–65) and a sense of themselves as being able to engage with the demands of professional life? In a study that is currently being written up, my colleague Pauline Sangster and I have found the techniques of narrative analysis to be a productive
means of examining the feedback given in tutorial interactions between university lecturers and trainee teachers. The use of such narrative techniques also revealed how an experienced tutor of novice teachers was not simply giving ‘feedback’ on their immediate performance in class, but providing them with a refiguring of their teaching performance that constructed a developmental narrative of their actions. In brief, this study involved the close analysis of transcripts of the lengthy discussions that took place between an experienced university tutor and each of ten students in teacher training immediately after she had observed each of them give a lesson. In a chapter setting out the methodology that informed our study, we have described how we moved from the initial adoption of a thematic analysis to a narrative analysis, in response to the fact that what was happening in these tutoring encounters was not a simple revisiting of events on which the tutor then provided feedback, but rather a more complex joint weaving of an account that brought meaning to the events of the session. There was a narrative shaping that, in the words of Salmon and Riesmann (2008, 78), ‘entails imposing a meaningful pattern on what would otherwise be random and disconnected’. These sessions were marked by a strong concern with prospective meaning-making and professional development. We found that this prospective meaning-making could be delineated in terms of the devices that are customarily employed to analyse literary narratives: story and the way it is plotted; characterisation; narrative structure; narrative perspective; setting; tone; themes; and discourses. We looked, for example, at how the students characterised themselves and their pupils in these post-observation sessions and found that the plots of student teachers’ narratives were shaped by these characterisations. There were distinct differences in the plots of students who did not represent themselves as having key responsibility for how events played out in the classroom as opposed to those who portrayed themselves as able to influence, and as having responsibility for, classroom events (Anderson and Sangster 2010, 138). When the tutor’s activities were analysed through a narrative lens, it was very clear that she was acting to plot students’ activities and experiences, including those of failure, in a way that foregrounded future development. We have noted how her narrative actions
could be viewed as assisting students to gain a sense of coherence and meaningful pattern to their experience and to construct a developmental narrative of their actions and being as teachers marked by: orientating towards the future; treating the immediate past as a resource for learning; positioning themselves within the community of teachers; and presenting themselves as active, potentially powerful agents. These sessions could be seen as not simply occasions for giving students feedback on their teaching, but as being settings for interactively creating particular narratives of professional formation that refigured experience. (Anderson and Sangster 2010, 139)

It will be interesting to explore in future research the extent to which narrative methods of analysis prove to be productive tools for examining the form and content of feedback in other fields of professional formation – say, for the veterinary science students who feature in the chapter by McCune and Rhind. University teachers in specific professional domains could also consider how they may best assist students in constructing a developmental narrative that is well tailored to their own professional context. If such efforts are to be successful, they need to be underpinned by the key conceptual change that is highlighted in the preceding quotation – a move from seeing feedback simply in terms of a static commentary on preceding performance to a more dynamic process of creating a coherent sense of possibilities for development that weaves together past and present performance with an envisaged future professional identity. In the words of the preceding quotation, the university teacher’s task can be seen as enabling students to create ‘particular narratives of professional formation that refigure[d] experience’ (Anderson and Sangster 2010, 139).
It will be interesting to explore in future research the extent to which narrative methods of analysis prove to be productive tools for examining the form and content of feedback in other fields of professional formation – say, for the veterinary science students who feature in the chapter by McCune and Rhind. University teachers in specific professional domains could also consider how they may best assist students in constructing a developmental narrative that is well tailored to their own professional context. If such efforts are to be successful, they need to be underpinned by the key conceptual change that is highlighted in the preceding quotation – a move from seeing feedback simply in terms of a static commentary on preceding performance to a more dynamic process of creating a coherent sense of possibilities for development that weaves together past and present performance with an envisaged future professional identity. In the words of the preceding quotation, the university teacher’s task can be seen as enabling students to create ‘particular narratives of professional formation that refigure[d] experience’ (Anderson and Sangster 2010, 139). References Anderson, C. 1995. Learning to Discuss, Discussing to Learn: A Study of Tutorial Groups in a Faculty of Social Sciences. Unpublished PhD thesis, University of Edinburgh, Edinburgh. Anderson, C. 1997. ‘Enabling and Shaping Understanding through Tutorials.’ In The Experience of Learning (2nd edn), edited by F. Marton, D. J. Hounsell, and N. J. Entwistle, 184–97. Edinburgh: Scottish Academic Press. Anderson, C., and D. J. Hounsell. 2007. ‘Knowledge Practices: “Doing the Subject” in Undergraduate Courses.’ The Curriculum Journal 18: 463–78.
Anderson, C., and V. McCune. 2013. ‘Fostering Meaning: Fostering Community.’ Higher Education 66: 283–96.
Anderson, C., and P. Sangster. 2010. ‘A Developing Narrative: Analysing Teachers in Training/Tutor Conferences.’ In Using Analytical Frameworks for Classroom Research: Collecting Data and Analysing Narrative, edited by S. Rodrigues, 125–43. London and New York: Routledge.
Bakhtin, M. M. 1981. The Dialogic Imagination: Four Essays, edited by M. Holquist, translated by C. Emerson and M. Holquist. Austin: University of Texas Press.
Barnett, R. 2007. ‘Assessment in Higher Education: An Impossible Mission?’ In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 29–40. London and New York: Routledge.
Barnett, R., and K. Coate. 2005. Engaging the Curriculum in Higher Education. Buckingham: Open University Press and SRHE.
Boud, D. 2000. ‘Sustainable Assessment: Rethinking Assessment for the Learning Society.’ Studies in Continuing Education 22: 151–67.
Boud, D., and N. Falchikov. 2007. ‘Introduction: Assessment for the Longer Term.’ In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 3–13. London and New York: Routledge.
Carless, D. 2013. ‘Trust and its Role in Facilitating Dialogic Feedback.’ In Feedback in Higher and Professional Education: Understanding it and Doing it Well, edited by D. Boud and E. Molloy, 90–103. London and New York: Routledge.
Collins, S., and I. Marková. 1995. ‘Complementarity in the Construction of a Problematic Utterance in Conversation.’ In Mutualities in Dialogue, edited by I. Marková, C. Graumann, and K. Foppa, 238–63. Cambridge: Cambridge University Press.
Dweck, C. S. 1999. Self-Theories: Their Role in Motivation, Personality and Development. Philadelphia: Psychology Press.
Farr, R., and R. Rommetveit. 1995. ‘The Communicative Act: An Epilogue to Mutualities in Dialogue.’ In Mutualities in Dialogue, edited by I. Marková, C. Graumann, and K. Foppa, 264–74. Cambridge: Cambridge University Press.
Foppa, K. 1995. ‘On Mutual Understanding and Agreement in Dialogues.’ In Mutualities in Dialogue, edited by I. Marková, C. Graumann, and K. Foppa, 149–75. Cambridge: Cambridge University Press.
Gee, J. P. 2008. Social Linguistics and Literacies: Ideology in Discourses (3rd edn). London and New York: Routledge.
Graumann, C. F. 1990. ‘Perspectival Structure and Dynamics in Dialogues.’ In The Dynamics of Dialogue, edited by I. Marková and K. Foppa, 105–26. London and New York: Harvester Wheatsheaf.
Graumann, C. F. 1995. ‘Commonality, Mutuality, Reciprocity: A Conceptual Introduction.’ In Mutualities in Dialogue, edited by I. Marková, C. Graumann, and K. Foppa, 1–24. Cambridge: Cambridge University Press.
Hanks, W. F. 1996. Language and Communicative Practices. Oxford and Boulder: Westview Press.
Hounsell, D. J. 1984. Students’ Conceptions of Essay-Writing. Unpublished PhD thesis, University of Lancaster, Lancaster.
Hounsell, D. J. 1997. ‘Contrasting Conceptions of Essay-Writing.’ In The Experience of Learning (2nd edn), edited by F. Marton, D. J. Hounsell, and N. J. Entwistle, 106–25. Edinburgh: Scottish Academic Press.
Hounsell, D. 2003. ‘Student Feedback, Learning and Development.’ In Higher Education and the Lifecourse, edited by M. Slowey and D. Watson, 67–78. Maidenhead: Open University Press and SRHE.
Hounsell, D. J. 2007. ‘Towards More Sustainable Feedback to Students.’ In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 101–13. London and New York: Routledge.
Hounsell, D. J., and C. Anderson. 2009. ‘Ways of Thinking and Practising in Biology and History: Disciplinary Aspects of Teaching and Learning Environments.’ In The University and its Disciplines: Teaching and Learning Within and Beyond Disciplinary Boundaries, edited by C. Kreber, 71–83. Abingdon and New York: Routledge.
Hounsell, D. J., V. McCune, J. Hounsell, and J. Litjens. 2008. ‘The Quality of Guidance and Feedback to Students.’ Higher Education Research and Development 27: 55–67.
Hounsell, D. J., S. Blair, N. Falchikov, J. Hounsell, M. Huxham, and M. Klampfleitner. 2007. Innovative Assessment Across the Disciplines: An Analytical Review of the Literature. York: Higher Education Academy.
Husserl, E. 1973. Experience and Judgment. Evanston: Northwestern University Press.
Knight, P. T., and M. Yorke. 2003. Assessment, Learning and Employability. Maidenhead: Open University Press and SRHE.
Lea, M. R., and B. V. Street. 1998. ‘Student Writing in Higher Education: An Academic Literacies Approach.’ Studies in Higher Education 23: 157–72.
Linell, P. 1992. ‘The Embeddedness of Decontextualization in the Contexts of Social Practices.’ In The Dialogical Alternative: Towards a Theory of Language and Mind, edited by A. H. Wold, 253–71. Oslo: Scandinavian University Press.
Linell, P. 1995. ‘Troubles with Mutualities: Towards a Dialogical Theory of Understanding and Miscommunication.’ In Mutualities in Dialogue, edited by I. Marková, C. Graumann, and K. Foppa, 176–213. Cambridge: Cambridge University Press.
Linell, P. 2003. ‘Dialogical Tensions: On Rommetveitian Themes of Minds, Meanings, Monologues and Languages.’ Mind, Culture and Activity 10: 219–29.
Lotman, Y. M. 1988. ‘Text Within a Text.’ Soviet Psychology 26: 32–51.
McArthur, J., and M. Huxham. 2013. ‘Feedback Unbound: From Master to Usher.’ In Reconceptualising Feedback in Higher Education: Developing Dialogue with Students, edited by S. Merry, M. Price, D. Carless, and M. Taras, 92–102. London and New York: Routledge.
McCune, V., and D. J. Hounsell. 2005. ‘The Development of Students’ Ways of Thinking and Practising in Three Final-Year Biology Courses.’ Higher Education 49: 255–89.
Marková, I. 1990. ‘A Three-Step Process as a Unit of Analysis in Dialogue.’ In The Dynamics of Dialogue, edited by I. Marková and K. Foppa, 129–46. London and New York: Harvester Wheatsheaf.
Newman, D., P. Griffin, and M. Cole. 1989. The Construction Zone: Working for Cognitive Change in School. Cambridge and New York: Cambridge University Press.
Nicol, D. 2009. ‘Assessment for Learner Self-Regulation: Enhancing Achievement in the First Year Using Learning Technologies.’ Assessment and Evaluation in Higher Education 34: 335–52.
Nicol, D. 2010. ‘From Monologue to Dialogue: Improving Written Feedback Processes in Mass Higher Education.’ Assessment and Evaluation in Higher Education 35: 501–17.
Northedge, A., and J. McArthur. 2009. ‘Guiding Students into a Discipline: The Significance of the Teacher.’ In The University and its Disciplines: Teaching and Learning Within and Beyond Disciplinary Boundaries, edited by C. Kreber, 107–18. Abingdon and New York: Routledge.
Nystrand, M. 1997. Opening Dialogue: Understanding the Dynamics of Language and Learning in the English Classroom. New York: Teachers College Press.
Poulos, A., and M. J. Mahony. 2008. ‘Effectiveness of Feedback: The Students’ Perspective.’ Assessment and Evaluation in Higher Education 33: 143–54.
Price, M., K. Handley, B. O’Donovan, C. Rust, and J. Millar. 2013. ‘Assessment Feedback: An Agenda for Change.’ In Reconceptualising Feedback in Higher Education: Developing Dialogue with Students, edited by S. Merry, M. Price, D. Carless, and M. Taras, 41–53. London and New York: Routledge.
Prior, P. A. 1998. Writing/Disciplinarity: A Sociohistoric Account of Literate Activity in the Academy. London and Mahwah: Lawrence Erlbaum Associates.
Reina, D. S., and M. L. Reina. 2006. Trust and Betrayal in the Workplace: Building Effective Relationships in your Organization. San Francisco: Berrett-Koehler.
Richardson, P. 1991. ‘Language as Personal Resource and as Social Construct: Competing Views of Literacy Pedagogy in Australia.’ Educational Review 43: 171–89.
Rommetveit, R. 1968. Words, Meanings and Messages: Theory and Experiments in Psycholinguistics. New York: Academic Press.
Rommetveit, R. 1974. On Message Structure: A Framework for the Study of Language and Communication. London and New York: Wiley.
Rommetveit, R. 1979. ‘On Negative Rationalism in Scholarly Studies of Verbal Communication and Dynamic Residuals in the Construction of Human Intersubjectivity.’ In Studies of Language, Thought and Verbal Communication, edited by R. Rommetveit and R. M. Blakar, 147–61. London, New York, San Francisco: Academic Press.
Rommetveit, R. 1990. ‘On Axiomatic Features of a Dialogical Approach to Language and Mind.’ In The Dynamics of Dialogue, edited by I. Marková and K. Foppa, 83–104. London and New York: Harvester Wheatsheaf.
Rommetveit, R., and R. M. Blakar, eds. 1979. Studies of Language, Thought and Verbal Communication. London, New York, San Francisco: Academic Press.
Sadler, D. R. 1989. ‘Formative Assessment and the Design of Instructional Systems.’ Instructional Science 18: 119–44.
Sadler, D. R. 2009. ‘Indeterminacy in the Use of Pre-Set Criteria for Assessment and Grading.’ Assessment and Evaluation in Higher Education 34: 159–79.
Sadler, D. R. 2013. ‘Opening Up Feedback: Teaching Learners To See.’ In Reconceptualising Feedback in Higher Education: Developing Dialogue with Students, edited by S. Merry, M. Price, D. Carless, and M. Taras, 54–63. London and New York: Routledge.
Salmon, P., and C. K. Riesmann. 2008. ‘Looking Back on Narrative Research: An Exchange.’ In Doing Narrative Research, edited by M. Andrews, C. Squire, and M. Tamboukou, 78–85. London, New Delhi, Los Angeles: Sage.
Wertsch, J. V. 1998. Mind as Action. Oxford and New York: Oxford University Press.
Young, P. 2000. ‘“I Might as Well Give Up”: Self-Esteem and Mature Students’ Feelings About Feedback on Assignments.’ Journal of Further and Higher Education 24: 409–18.
8 Learning from Assessment Events: The Role of Goal Knowledge
D. Royce Sadler
Introduction
The trigger issue for this chapter is a phenomenon that is familiar to many higher education teachers and markers across a wide variety of courses and fields. It is this: when presented with well-formulated and specified assessment tasks requiring extended written responses, certain students consistently fail to undertake the task that is actually specified. Not uncommonly, these students focus on the subject matter itself, rather than on what is to be done with it. They assiduously search for all available information, compile it carefully and expect at least a passing mark. Markers with initial expectations that the responses will be in line with a literal interpretation of the assessment task become frustrated and disappointed when confronted with a steady stream of pedestrian works that do not attend to the set task and represent only lower-order cognitive processes. In many cases, they adjust their marking expectations accordingly. Conversely, when they encounter a student response that tackles the stated problem directly, they liven up and lean towards assigning bonus marks. Academics can also be baffled when they teach students over a sequence of courses and find themselves repeatedly telling the same students that they have not answered the question. Somehow, the point never seems to get across. This can occur even with students whose depth of knowledge and
thinking, when probed conversationally or observed during class interactions, suggests they should be capable of considerably better performance. On a different tack, what if assessment tasks are not well formulated and specified, but vague and technically deficient? Students then have to guess what was in the examiner’s mind when the task was set. An incorrect guess (or no conscious guess at all) again sets the stage for a significant mismatch between marker expectations and what the student produces. This results not only in inaccurate appraisals, but also in lost opportunities for students to develop higher-order learning. This chapter sets out an analysis of these problems and proposes some strategies aimed at reducing their frequency. Of course, there are always high-performing students who produce high-quality responses every time. They know what to do without detailed explanation, even though creating their responses might require great effort. The term ‘essay’ refers here to a term paper or assignment that is a piece of integrated prose on a single topic or subject, making sense in its own right and typically running between 1,000 and 5,000 words. The essay medium and format are useful for communicating ideas and reasoning and can be shaped to serve a variety of purposes and audiences. Although essays are the main concern, many of the principles apply to other divergent or ‘open’ student responses, such as answers to examination questions (if they are lengthy, complex and substantial enough) and seminar presentations. Parts of the chapter also have relevance to other academic works, for which the specific purpose may be set largely by the producer, such as dissertations, journal articles, monographs and book chapters.

Analysis of a Specimen Assessment Task

The starting point for discussion is the assessment task itself, which is assumed to be externally set. A suggestion appearing in many books on assessing student achievement is that the response format and the type of learning tested are inherently related. That is not so. The choice of format does not determine the type of outcome assessed. Essays, oral presentations, short-answer and multiple-choice (or other ‘objective’) formats can all be used to assess higher-order or lower-order learning. However, they differ in their effectiveness, efficiency and convenience. Multiple-choice tests are problematic on a
number of counts, but, compared with essays, are generally easier to devise for testing lower-order than higher-order learning and are usually more efficient. Essays can be used for both, but require careful planning and execution if they are to assess higher-order learning. Clearly, developing an argument or constructing an original critical analysis necessitates extended prose. Whether a particular argument or analysis is best suited to a written piece of work, an oral presentation, a visual display or a combination of media depends on the situation. The point being made here is that a decision on the most appropriate assessment item type should correspond to the specific learning to be assessed. An essay format does not in itself predispose students to demonstrate higher-order outcomes. That has to be conveyed by the task specifications. If we start from the premise that examiners have a fairly clear idea of the types of higher-order knowledge and skills that they wish to assess, this will be reflected in the evidence they need to determine the nature and extent of learning that has occurred. The design of the assessment task and the written specifications of the expected ‘product’ type (which constitutes the ‘goal’ for students) follow on from those considerations. Design and specifications are mentioned separately to emphasise that an examiner’s sound design intentions may be subverted by inadequate specifications. For the purposes of discussion, here are specimen task specifications based on an actual question from an invigilated examination paper in an advanced undergraduate course. The reference is fictitious and the wording has been modified, but the essential structure is retained. Assume that the issue had not been dealt with during the course.

In 2004, Markham and Leollar argued that much of the logic and many of the intellectual and institutional assumptions that served earlier decades of theoretical development in X are at best marginally satisfactory today. Can you think of any earlier theories of X (or related fields) to which their argument is especially applicable? What practical or theoretical implications does their perspective have for the present?
Strictly speaking, the first of the two questions asked here could be answered with a single word. Such a response would not impress the typical marker, but how would a zero mark fare in a formal appeal by a student? The best
students, of course, respond with a properly fleshed-out answer. In general, non-trivial responses are derived from what students think the examiner had in mind. Sometimes their assumptions will be valid, sometimes not. Clearly, this first question is specified inadequately. The explanation for this could be that, in devising the task, the examiner focused too strongly on subject matter content and curriculum coverage, rather than on what students should do with that content. The second question in the specimen task (about implications) may or may not be more demanding in terms of depth and complexity; it depends on how the students interpret it in the light of their knowledge and capability. As it is stated, the question sets up no intellectual task or challenge. Student responses could legitimately range from an expanded list of the implications to a full-blown analytic treatment that not only identifies implications, but also deals with the hows, whys and wherefores. Furthermore, equally able students could attempt the task in different ways, and all of them could be legitimate. Two further sets of important questions arise. The first set applies to an examiner who is also the marker. What was the examiner intending when creating the task? How definite was it? If, during marking, a brilliant student response is encountered, does the marker – only at that point – recognise that it effectively fills out what they had really hoped for when writing the specifications? Does that particular response then act, even subliminally, as the reference point for later marking? Does the examiner believe that, if at least a handful of students produce work that is superbly on-target, the adequacy of task specifications is thereby established? (Would not just a ‘handful’ be evidence for the opposite conclusion?) The second set of questions is relevant to weaker students. What do they notice or attend to in the task description? How does that shape their thinking and their manner of responding? How do they start on their works? How could more of these students acquire whatever knowledge is necessary to do what the high performers do? After written work is appraised and marked, a marker may provide written comments (feedback) explaining the reasons for the mark awarded, commenting on strengths and weaknesses and providing advice for constructing future works. Research indicates that students are often confused
about what was expected by an essay assessment task, that markers’ feedback fails to clarify it, and that they are often no wiser about what to do in future. Hounsell (1987) analysed student conceptions of what an essay is and what essay writing involves in the disciplines of psychology and history. Using student interviews as raw data, he asked them about their reactions to feedback. Their bewilderment is all too common.

Ellie: I felt, in actual fact, I’d covered the area very comprehensively, by trying to bring in as many angles as I could. I tried to cover all the different areas. But one of the tutor’s criticisms was why did I just keep going from one to another. But I thought that’s what I was supposed to do . . . [F]rom the comments on the essay, I gathered the tutor wanted me to argue, about something, but I mean, by presenting the material as the research had demonstrated, it was a mild form of argument. I wasn’t going to get aggressive, in an essay. (114–15)

Gail: I felt pretty satisfied with it. I thought I’d get a brilliant mark for it. I was really put off when I saw [the tutor’s comments on my essay]. I just thought [the question] was ‘What limits a person’s ability to do two things at once?’ Not why, or how it was done. What I did I thought was very relevant. I just answered the question, which the tutor didn’t think was right, ‘cos the tutor wanted ‘how’ and ‘why’ factors, and I didn’t quite answer that. (115)

Pattie: [What did I think the tutor was looking for in this essay?] Ah . . . well, this is what’s confusing me. I know the tutor likes concise work, but doesn’t like generalisations, and doesn’t like too much detail, although I think on the whole he’d like more detail than generalisations. And because it was such a general question, I thought, ‘oh help!’ I don’t know what he’s looking for. (115)
Inadequacies regarding the ways in which feedback is formulated and delivered have been extensively researched and are not summarised here. Instead, attention is directed to some reasons for failing to capitalise on the potential for learning from assessment events, which can be traced back to a lack of student knowledge about the intellectual purpose to be served by an essay response. This is explored by introducing the concept of response genre.
Goal Knowledge and Response Genre

In an earlier article published over thirty years ago (Sadler 1983), the general idea of goal knowledge and its progressive clarification were outlined in the context of learning during a course. Similar principles can be applied to the production of extended responses to set assessment tasks. An idea of what a final work should look like is referred to here as 'goal knowledge'. Logically, students need to have some goal knowledge to even begin developing their responses in a deliberate and intelligent way. Something has to guide their steps, choices and organisation of material; something has to tell them which operations or processes are more promising than others for achieving the desired end. Books on how to go about writing essays typically contain specific advice on how to analyse the set topic, plan the essay, search out information, sort and evaluate it and finally put it all together. Some include useful heuristics for analysing the assessment task: 'Circle all the content words (often nouns); this is what has to be written about. Underline all the process words (often verbs); these set out the operations which have to be performed on the content'. Goal knowledge goes beyond that to embrace the structure of the response as a whole. In practice, both the end and the means are developed interactively, being clarified and given meaning as work proceeds. Unambiguously stated assessment task specifications do not directly determine the character of a particular student's final structure of response. Systems and rules may help, but the final shape of the response is the student's own and has to be invented. As the end point of development is approached, the competent producer ensures that the essential character of the response corresponds to what is required. It may well turn out to bear little resemblance to the initial conception, because it will have unfolded along the way. Competent producers typically value the learning that occurs throughout this process, because of the satisfaction they experience in pushing into new territory. For a given body of content, the flexibility of the essay form allows it to serve a variety of purposes. If an essay is in fact a critical review, its content and structure reflect that. Causal investigations, narratives and mere descriptions do not qualify as critiques. Knowing how to shape an essay so
that it serves a specified end is the major component of goal knowledge. A fundamental aspect of a student's finished response is the extent to which it is a member of the class of responses called for in the task design and specifications. This class is denoted here by the term 'response genre'. Additional examples of response genres are: analyses; arguments (or the making of cases); applications (of theoretical ideas to practical contexts); position statements; comparisons (of two or more things); extrapolations; professional advice (to clients or patients); evaluations; interpretations; and assumptions (underlying conclusions, policies or practices). Each genre is identified by the essential character of its members (what they are) or by the purpose they serve (what they do), not by an explicit definition, a set of boundaries, a particular format or a list of components. Formally, a response genre consists of the universe of all possible responses that could be constructed and recognised as attempts at answering the question, addressing the issue or solving the problem stated in the task specifications, regardless of the quality of these responses. A particular response genre is, by definition, open to a whole range of quite different responses, each of which is an instantiation and all constructed to serve the same purpose. This is important for students and markers to appreciate, and it is good reason to exercise caution in using a model answer as the exemplary work with which all student responses should be compared. The number of possible response genres is indefinitely large, and genres can be created at will. Some have common names (such as 'critique'). Others are identified by a description of the purpose to be served. Here is an example of one that does not have a simple, straightforward label. Suppose students are given a set of propositions, certain factual information and a single conclusion. The assessment task requires them to identify the assumptions that would be necessary for the conclusion to hold. The marker's opening question has to be about whether what the students have identified are actually assumptions. Unless they are, logically it will not be possible to move on to the next step, which is to evaluate them – as assumptions – in terms of, say, their validity and sufficiency. A comprehensive, coherent, well-written exploration of the propositions and facts would be situated in the wrong genre and would be impossible to appraise within the intended parameters. Deciding whether a particular student response belongs to a nominated
genre calls for an all-things-considered, in-context decision that is necessarily holistic. Why is this important? One of two non-negotiable components of a person's command over a body of knowledge is their subject-matter content knowledge in the specific field – the fundamental terminology, facts, relationships, applicability and so on. Assessing such knowledge typically makes use of sampling, because exhaustive testing is usually impractical, and sampling provides sufficient evidence for strong inferences about the whole. However, a lot more than the ability to reproduce information is required for the level of command to be determined. For students to demonstrate high-level competence, they need to be able to select from, process and orchestrate information across the relevant knowledge domain and transform it, on demand, into new knowledge constructions within different genres to serve a variety of purposes. Another way of saying this is that proper assessment of command or competence involves sampling not only subject matter content, but also response genres. That is why assessment task specifications should identify, directly or indirectly, the response genre required, with no place for guesswork. The student's task is then to create a response that delivers on the requirement. At the same time, specifications must not contain recipe-like instructions or formulas on how to produce an in-genre response. The idea of evaluating within genre is so natural and common in daily life that it only needs to be explicitly raised when confusion would otherwise occur. For example, in evaluating footwear, some commonly used criteria are shock absorption, durability and slip resistance. Before launching into an evaluation of a particular model of footwear according to these three criteria, a prior question is fundamental. Should the footwear be assessed as hiking boots, athletic shoes, dress shoes or slippers? Each class of footwear has its own purpose (and, in this case, label). A particular model of footwear has to be judged as a member of its class. Matters are often less obvious for assessments of educational outcomes involving essays, which is why response genre as a concept has pedagogical potency. High-performing students consistently create their responses within the relevant genre as a matter of course. They rarely give the matter conscious thought; it is so obvious. However, the same cannot necessarily be said of lesser performers. They often lack an awareness of response genre as a concept, even though tutors' feedback may have told
them on multiple occasions that they have not answered the question. For otherwise diligent students who simply do not know enough about the basic idea of responding within genre to be able to use feedback in improving their work, this situation is remediable. Academics have a clear obligation to induct students into how to: distinguish one response genre from another; identify or deduce the response genre from the task specifications; judge the quality of different responses within a specified genre; and establish when one of their own responses is 'in genre'. These pedagogical imperatives need to be addressed through exposure to a wide variety of genres, specific discussion and practical experience, not through continual 'telling' by way of detailed feedback. Some ideas for doing this in a broadly parallel situation are outlined in Sadler (2013). Academics also need to cooperate with colleagues who teach cognate courses to arrive at a common policy across a broad front. All this is predicated, of course, on the assumption that the assessment task design requires higher-order thinking and skills to be applied and that the task specifications make clear the end to be served.

Boulding's Concept of 'Image'

What is the mechanism by which a person's knowledge of response genre can steer them through the construction process under conditions of progressive clarification and determination? In 1956, Kenneth Boulding published a small book entitled The Image: Knowledge in Life and Society. The first chapter of this book provides a constructive framework for exploring this question. Boulding was interested in the complexity of a person's 'held' knowledge at any given point in their lives – how it comes about through a multitude of layered interactions with a wide range of sources and how it is progressively modified and re-modified through ongoing interactions. To the extent that the sum total of each person's past encounters and experiences with things, places, times, people, situations, words and emotions is unique up to the point of interest, held knowledge is both personal and idiosyncratic. Boulding developed a broad-brush conception of knowledge-in-the-round and termed this a person's 'image' of the world. (The term is not meant to imply visualisation.) The following extracts compiled from Boulding's chapter are verbatim,
and the italics are all in the original. In many places, the ellipses represent substantial gaps in wording (but without, it is hoped, misrepresentation of Boulding's thought). He explained:

Knowledge has an implication of validity, of truth. What I am talking about is what I believe to be true; my subjective knowledge. It is this Image that largely governs my behaviour . . . A hundred and one things may happen [to me]. As each event occurs . . . it alters my knowledge structure or my image. And as it alters my image, I behave accordingly . . . behavior depends on the image . . . One thing is clear. The image is built up as a result of all past experience of the possessor of the image. Part of the image is the history of the image itself . . . From the moment of birth if not before, there is a constant stream of messages entering the organism from the senses . . . Every time a message reaches [a person, the] image is likely to be changed in some degree by it, and as [the] image is changed . . . behavior patterns will be changed likewise. (5–7)
Boulding argued that the image must be distinguished carefully from the messages that reach it and went on to analyse the effects that messages can have on the image that is current at any moment in time. Messages consist of information and, depending on the recipient and the state of the image, they may leave it unaffected, change it in some systematic way or, in some dramatic cases, lead to its abrupt and comprehensive restructuring or reorganisation as a whole. It may occur that the latter comes about only after a series of earlier related messages have been rejected as untrue, threatening or contradictory. If doubts concerning their validity eventually arise and continue, the scene is set for a single later message to lead to wholesale overturning or revision. A fourth possibility for the impact of a message is that it can clarify or make more certain some part of the image or, alternatively, it can introduce uncertainty or tentativeness where none existed before. In particular, ‘[i]mages of the future must be held with a degree of uncertainty, and as time passes and as the images become closer to the present, the messages that we receive inevitably modify them, both as to content and as to certainty’ (11). For Boulding, subjective knowledge structures consist not only of images of fact, but also of value – the valuations placed on objects, events and ideas. People may hold different value scales for different purposes, and the ‘value
scales of any individual or organisation are perhaps the most important single element determining the effect of the messages it receives on . . . [the held] image of the world' (12). In other words, valuations placed on messages received serve as filters or mediators, and these affect the impact that messages have on the image. Favourably received messages may seem to have no impact, but may nevertheless increase the stability of the image, making it more resistant to unfavourable messages. Furthermore,

. . . if a group of people . . . share . . . images of the world . . . which . . . are . . . roughly identical, and if this group of people are exposed to much the same set of messages in building up images of the world, the value systems of all individuals must be approximately the same. (15–16)
Boulding himself made a connection between the 'image' and the role of teachers in seeking to encourage and assist sophisticated learning to flourish:

The accumulation of knowledge is not merely the difference between messages taken in and messages given out. It is not like a reservoir; it is rather an organization which grows through an active internal organizing principle . . . Knowledge grows also because of inward teachers as well as outward messages. As every good teacher knows, the business of teaching is not that of penetrating the student's defences with the violence or loudness of the teacher's messages. It is, rather, that of co-operating with the student's own inward teacher whereby the student's image may grow in conformity with that of [the] outward teacher. (18)
Largely by definition, higher education is about helping students develop deep knowledge, competence and proficiency in complex fields. Boulding's characterisation of a somewhat malleable 'image' of reality, which people – including teachers and learners – carry around with them in their heads and which has such a defining effect on their behaviour, is both refreshing and illuminating. The challenge for academic teachers and higher education institutions is to design and create optimum conditions for goal knowledge to grow. This includes recognising the pivotal importance for students of developing goal knowledge, which can guide their intelligent production of original, complex works.
Changing the Scale of Boulding's 'Image'

Boulding's ideas provide a way of thinking through what goes on in competent producers' minds and what guides them when their initial ideas and tentative plans cannot be expressed as detailed blueprints for action. His 'image' was big-picture. In this section, it is reconfigured to the scale of student responses to assessment tasks. Consider, first, a relatively competent student for whom the response genre identified in the task specifications has registered in their mind and is well enough understood for them to know whether a finished work of their own (or of others) would meet the necessary task requirements. That knowledge is fundamental, but it is not necessarily a sufficient guide on where to begin, how production can be steered and when the work is 'complete'. They do not know in advance exactly where their conceptualisations or reasoning will eventually lead them, but they have an image of what their final response, indistinct though it may be at first, could look like. That image is about the future; it is held by the student. The task specifications identify the response genre, but do not supply an image for it. That image has to be supplied by the student. If asked, the student may be able to describe theirs in a nebulous way, yet 'know' it better as a mind image; however, it cannot be 'known' in its fullest sense until their work is brought to its final form and 'complete'. For students with confidence in their ability to create works within the genre required, an initial image, even when somewhat formless or murky, enables them to make a start. They know that it is not unusual to find a major difference between their initial 'soft' image (Boulding's image of the future) and the form that their response eventually takes. Exceptions apart, they know there is not just one possible final form, but an indefinite number, and are not necessarily unnerved by that. They have been in similar situations many times before. Which final form their response will take is of little consequence, so long as it serves the intended purpose and is of quality. Uncertainty and tentativeness at the interface of complex decision points go with the territory and are perfectly normal. Competent students know that during production their image will become progressively clearer, despite any temporarily blurry patches. They learn and refine tactics and moves experimentally and may be unaware of which aspect dominates at any particular moment.
Metaphorically, at least, an internal dialogue between their image and messages coming in (from their own verbalisations or from external sources) continually shapes the image, and the production process is facilitated by the act of writing. Not infrequently, there are elements of discovery during the journey. As the work progresses, tentative aspects of the image are recognised, confirmed, disconfirmed or modified. Their image of the final work is sufficient to provide both direction and motivation towards fruition, but remains in the future until the end of the process. Always, the purpose to be served presides over the whole construction process. This portrayal of the process is consistent with Entwistle's (1995) research into the experience of students as they prepared to sit invigilated examinations and then, during the examination, selected, assembled, organised and put together their material in response to actual examination tasks. Various students who were interviewed reported their experiences of dissecting the question, entertaining emerging and temporary forms and structures for their responses, iterative planning, a sense of fluidity and the undergirding sense of needing to shape their final responses to the specific requirements of the task. What about students who regularly fail to address the assessment task? For whatever reasons, they do not manage to appreciate the centrality or character of the response that has been asked for. Therefore, a significant task for academics is to figure out how a larger proportion of them can develop sufficiently rich and adaptable 'images' of academic 'objects' to produce works that serve a wide variety of purposes intelligently, on demand, unaided and of consistently high quality. The challenge for these students is to master the interface between their image at any point and the final form as they learn during, and on later reflection about, the experience of production.

Obstacles to the Development of Goal Knowledge

Three assessment policies and practices have been widely implemented with the best of intentions, but often turn out to limit the growth of goal knowledge among students who have a history of not correctly answering the question. These assessment policies and practices are related to feedback; preset criteria and standards; and approaches to marking. Changing the prevailing state of affairs requires a different perspective.
Feedback

In the context of complex learning, feedback often has to be complex to be effective. Ordinarily, the marker as sender of a feedback message knows more about the assessment task and how to appraise student responses than the student as receiver. The information transfer is therefore markedly asymmetric. This imbalance in knowledge and approach comes through strongly in research on feedback, including in two of the publications by Hounsell (1987, 2007). During the twenty-year period between these two publications, feedback to students enjoyed a high profile, and it still does. Its visibility and policy implications are reinforced by regular surveys of student satisfaction, giving it considerable accountability potential in evaluations of teaching. Hounsell (2007) found that '[s]tudents' concerns about guidance and feedback ranged widely, encompassing not only the consistency and helpfulness of tutors' comments but the timing and frequency of feedback and the adequacy of guidance about assessment expectations and criteria' (102). The relatively low impact that feedback has on student learning can be partly attributed to its employment of the transmission model of teaching; it is predominantly about telling (Sadler 2010a). Whatever the total set of reasons, academics find the low impact of feedback disturbing, despite ongoing improvements in its quality, particularly its scope, language, terminology, focus, tone and clarity. When an assessor's judgement is made, one of the reference points is likely to be the marker's image of a high quality response. That image may have a number of shaping antecedents: expectations upon reading the task specifications; a model answer; the marker's personal image of how they 'would' have answered it (which is unlikely to be a sharp image); supplied criteria sheets or rubrics; and judgements already made of other students' responses. What cannot feasibly be explored is the full range of approaches that conceivably could have been taken to produce a response within the required genre. To the extent that a marker's image and that of a student differ in a particular case, the feedback may strike the student as being at cross purposes. Markers' feedback naturally relates to the marker's image at the time and is dependent on the marker's value filter as to which aspects of the work are noticed and attended to – that is, which characteristics count as evaluative data.
A complementary issue is the learning that can come about through the impact that markers' messages have on the learner, given the learner's own value filters. When markers offer advice about how a work could have been done better, they treat what the student has done as evidence from which to infer the nature of the student's image for the completed work – and want to help them better achieve that end. A faulty inference leads to feedback offered in good faith, but summarily dismissed by the student. That and the 'feedback-as-telling' aspect together imply that feedback is unlikely to have much impact in helping off-target students create future works that fall within a nominated response genre.

Criteria and Standards

Several strategic initiatives aimed at improving the amount of learning gained from assessment events have been undertaken in recent years. One of these initiatives is to make appraisal processes more transparent to students by setting out the criteria and 'standards' in list form or as rubrics before students begin work on their responses. This approach is now a major component of what is termed 'feedforward'. The expectation is that students will tailor their work accordingly and score higher marks and grades. For the marker as well, fixed criteria and standards are intended to facilitate marking and can be used to justify judgements and provide systematic feedback. Numerous research studies have inquired into how best to familiarise students with the nature and application of criteria and standards. By 'criteria' is meant the properties or qualities of student responses. Qualities are one thing; overall quality is a critical but different attribute. The fundamental question is not how well a piece of work rates on individual criteria or on all of them together, but its effectiveness in achieving its purpose. This requires a judgement as to whether it is situated within the nominated response genre and, if it is, about its quality. Making 'relevance' one of the criteria is not the way to check for genre membership. All material in the piece of work may be strictly relevant to the specified topic, but not all of it may advance (and in that sense be relevant to) the purpose to be served by the work as a whole. The common practice of formally setting criteria in advance and emphasising them as the principal tools for appraisal sounds logical, but is subject
to a significant drawback: it interposes a grille between the assessor and the student on one side and the work on the other. Assessor and student both focus on the criteria. This grille is obtrusive in its placement and detracts from a consideration of the response genre. For many students, it not only leads to a myopic perspective, but also reduces their awareness of the nominated genre. This is especially the case for students who are strategic in their approach, by which is meant that they concentrate their attention on what will earn high marks. The way to do this is to score well on all the criteria. To take a hypothetical example, a student may believe that 'scholarly work' is characterised by excellent English, an appropriate use of technical or sophisticated language, comprehensiveness in coverage and full referencing. This student works at these criteria assiduously, weaves in everything they can find out about a topic, goes beyond the set readings (known to be highly valued), includes copious references and ensures that the writing and structure are near-flawless. The anticipation of a high mark goes with this high level of attention to the specific criteria. Disappointment sets in when the high mark does not eventuate. To cap it all, the marker's feedback goes beyond the set criteria and refers to such things as the 'analysis' not being 'penetrating enough', the work making 'too many sweeping generalisations' or the need for examples to 'clinch a point'. Simply put, the grille gets in the way. It limits the student's ability to see beyond to what really matters – the overall effectiveness of the work in achieving the stated purpose. Vocabulary, writing quality, comprehensiveness and referencing play vitally important roles, but they must be recognised for what they are: necessary but not sufficient. They are tools in the production of high quality works within the required response genre. However, to the extent that they become goals in themselves, the characteristics of the finished work are subordinated to the criteria without an awareness of the corresponding compromises and limitations thus imposed. Appraising complex written works requires subjective judgements, which bring multiple interlocking criteria into play. However, the criteria need to emerge out of a holistic appraisal of the work (Sadler 2009). Emphasising preset criteria not only detracts from the disposition of both student (as producer) and marker (as appraiser) to attend to whether the work belongs to the nominated response genre, but also makes genre membership awkward to raise as a key issue after the event.
Parallels to this phenomenon have been observed and commented upon for over half a century in a variety of social science fields in which human agency plays a crucial role, especially in the evaluation of complex social and other policies, programmes, initiatives and phenomena. Using indicators as the principal evaluation tool leads programme managers to prioritise decisions and actions that maximise scores on the indicators, even if this reduces the effectiveness of the programme. Strathern's (1997) way of describing this outcome is widely quoted, succinct and telling: 'When a measure becomes a target, it ceases to be a good measure' (308). Students need to learn how to work on the other side of the grille. Academics need to teach students how to create their responses according to a single criterion, which is pre-specified and overrides all others. This 'capital C' Criterion is obvious once it is stated: how well the student response achieves the purpose set out in the assessment task specifications (that is, how well the work solves the problem posed, addresses the issue designated, answers the question asked, executes the procedure required or demonstrates the performance nominated). With that unitary criterion as a non-negotiable given, academics can then be wide open as to how the students respond. The singular Criterion needs to be emphasised, because many students have been conditioned out of appreciating its absolute necessity. Too often, those who fail to get a grip on it can nevertheless manage to 'earn' a passing score by meeting most or all of the 'small c' criteria listed in the rubric or scoring sheet. If students can be brought to an appreciation that literally every aspect of their responses contributes to how well the work achieves the set purpose and is therefore fundamental to a determination of its quality, their approach to producing it is likely to change radically. They become the switching 'holists–serialists' that characterise consistently good producers, alternately attending to large- and small-scale aspects.

Marking and Assessment Culture

Higher-order knowledge and skills occupy a constitutive place in higher education as a social enterprise. Although lists of them vary, common inclusions are critical analysis; synthesis; problem-solving; originality; initiative and creativity; information location and evaluation; and effective communication. All
of these are important in a wide variety of disciplines, fields and professions at both undergraduate and postgraduate levels. Collectively known as graduate learning outcomes, generic attributes or something equivalent, they are regularly championed by academics, employers and higher education leaders, authorities and institutions. The challenge is to ensure they are acquired and demonstrated across a steadily diversifying student body. It was mentioned earlier that many students who submit consistently low-level responses lying outside the set response genre nevertheless receive at least passing marks. This is rationalised on a variety of grounds. Some markers argue that many current students are not capable of doing advanced work, but nevertheless benefit from the experience of higher education and in any case need a qualification. Some consider that students who put in substantial effort will no doubt have learned something in the process and deserve to pass purely on that basis. Yet others say that, while it is disappointing to have students submit low-level work, this is not really too serious an outcome, because there is no guarantee that those students will eventually work directly in the fields of their degrees anyway. All these reasons are connected with aspects of the learning environment over which individual academics and programme managers have considerable control. A further reason given is a reaction to certain externally set parameters. Expedience pushes for nearly all students to be passed, because in today's climate high failure rates can produce negative consequences. Institutions, departments, programme directors and individual academics all have vested interests in high retention rates, high student satisfaction ratings and minimum times to graduate, in order to protect entry levels, fee income, funding from the public purse and institutional status. The benefit for students who have not acquired higher-order knowledge and skills and yet pass course after course is that they accumulate credit and finally graduate. The costs are twofold: they are not equipped as they ought to be; and the general public's confidence in higher education is jeopardised.

Directions for Change

Reforming current assessment culture and practice in the light of the material discussed above has four main requirements. First, assessment task designers need to construct challenging tasks that demand higher-order cognitive and
professional skills. Second, the accompanying task specifications must be models of clarity and must identify the required response genre. The onus is squarely on the examiner to get both of these first two elements correct and in place. The third, which is conditional on the first two being met, is that students must be educated to distinguish a variety of response genres and know how to respond to a literal interpretation of assessment task specifications. The fourth is the most radical, and thus the one most likely to generate an instant negative reaction. It is that markers engage in detailed evaluation of only those responses that are within genre, rather than simply marking down those that are not. Being within genre is thus made into a pre-emptive condition for markers to be able to judge quality. Responses not eligible for appraisal need to be returned to students, the reasons explained verbally, examples of conforming and non-conforming works compared and discussed and adequate opportunity given for resubmission without penalty. Such action may need to be repeated only a very small number of times with a given student for the door to be unlocked, the concept caught on to and the event turned into a breakthrough learning experience. This action would not be penalising students unfairly; it would be actively teaching them how to master the higher-order objectives that their specific courses state they will achieve. Is it unfair to other students who do not resubmit? It could be, under certain conditions. If the tradition has been that an essay submitted during a course is marked and the mark counted towards the final course mark or grade, it probably is unfair. But a major problem lies in the practice of accumulating marks (which, it turns out, is a relatively recent practice). Non-accumulation is a sound policy for other reasons as well, as explained in Sadler (2010b). If the assessment event is purely formative with no accumulation at all, the whole assessment dynamic is reconfigured. Purely formative events have high stakes for learning, but zero stakes for grading. The next concern is: does that not completely remove the student's incentive for trying? Again, it is the dynamic that has to change. It would provide a strong incentive if students were convinced that, for the assessments carried out late in the course for grade determination, responses that were out of genre would be ineligible for marking, regardless of their other qualities. Students would then be more likely to substitute the higher response-genre goal for the lesser criterion-oriented goal. In practice, students
adapt pragmatically to hard constraints, provided they know their settings. To reject such proposals out of hand is not only to legitimate existing practice, but also to pass up a significant opportunity for teachers and students together to pursue, and for students to achieve, higher-order outcomes.

Conclusion

Higher education graduates in all disciplines, fields and professions should be able to carry out an appropriate range of previously unseen complex tasks whenever required, with an appropriate degree of independence and to a consistently satisfactory standard. To take this aspiration seriously involves treating the development of higher-order capability as constitutive of good teaching for all students. This includes students who have previously exhibited a pattern of failing to respond to assessment tasks according to a literal interpretation of the task specifications. Learning how to reverse this pattern of behaviour is critically important, but, for many students, does not seem to come naturally. To expand the proportion of students who develop transferable higher-order knowledge and skills requires concerted action on three fronts. The first is the design of assessment tasks that require higher-order thinking, together with task specifications written so as to make that expectation explicit. The second is ensuring that all students develop knowledge of what it means to produce responses directed to a specific purpose and are equipped to create such responses. The third involves replacing three common assessment practices associated with feedback, preset criteria and marking, which are potential inhibitors of sophisticated learning. Ultimately, success will be contingent upon academics accepting responsibility for creating higher-order assessment tasks and specifications, requiring students to produce in-genre responses as a condition for a pass and inducting students into how to produce valid responses that measure up.

References

Boulding, K. E. 1956. The Image: Knowledge in Life and Society. Ann Arbor: University of Michigan Press.
Entwistle, N. 1995. 'Frameworks for Understanding as Experienced in Essay Writing and in Preparing for Examination.' Educational Psychologist 30: 47–54.
Hounsell, D. 1987. 'Essay Writing and the Quality of Feedback.' In Student Learning: Research in Education and Cognitive Psychology, edited by J. T. E. Richardson, M. W. Eysenck, and D. Warren-Piper, 109–19. Milton Keynes: Open University Press and Society for Research into Higher Education.
Hounsell, D. 2007. 'Towards More Sustainable Feedback to Students.' In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 101–13. Abingdon and New York: Routledge.
Sadler, D. R. 1983. 'Evaluation and the Improvement of Academic Learning.' Journal of Higher Education 54: 60–79.
Sadler, D. R. 2009. 'Indeterminacy in the Use of Preset Criteria for Assessment and Grading.' Assessment and Evaluation in Higher Education 34: 159–79.
Sadler, D. R. 2010a. 'Beyond Feedback: Developing Student Capability in Complex Appraisal.' Assessment and Evaluation in Higher Education 35: 535–50.
Sadler, D. R. 2010b. 'Fidelity as a Precondition for Integrity in Grading Academic Achievement.' Assessment and Evaluation in Higher Education 35: 727–43.
Sadler, D. R. 2013. 'Opening Up Feedback: Teaching Learners to See.' In Reconceptualising Feedback in Higher Education: Developing Dialogue with Students, edited by S. Merry, M. Price, D. Carless, and M. Taras, 54–63. London: Routledge.
Strathern, M. 1997. '"Improving Ratings": Audit in the British University System.' European Review 5: 305–21.
9 The Learning–Feedback–Assessment Triumvirate: Reconsidering Failure in Pursuit of Social Justice
Jan McArthur
Introduction
In doctoral education it is the norm to emphasise that a student must make an original contribution to their subject area. Also commonplace at that level is an approach to learning based on an iterative process of exploring, trying out ideas, trying again, critiquing and correcting work along the way. In other words, at the doctoral level – the highest level of formal education – we allow students to fail and try again as a natural part of the process towards making their own original contribution. In this chapter I argue that both of these aspects of doctoral education are also relevant to all other levels of higher education. Further, I suggest that making this connection has social justice implications, in terms of both an individual student's learning experience and the broader nature of knowledge engaged with in higher education. I then propose a further link, based on the work of Dai Hounsell, which suggests that a reconceptualisation of failure as part of an educational process is key to achieving the triumvirate of learning–feedback–assessment that marks Dai's significant contribution to higher education scholarship. I propose a reconsideration of failure in two ways. First, in terms of the stigma attached to failure by current assessment procedures and the underlying pedagogical assumptions behind them. I argue that it is rare when
engaging with the complex knowledge that should define higher education to grasp it fully, correctly or critically first time around. We should have systems of assessment that reflect this and approaches to student learning that encourage greater individual agency when dealing with moments of apparent failure. Second, from a critical theory perspective, I consider failure in terms of that which does not meet the expected or accepted norms. To what extent are our assessment practices based on students telling us, or doing for us, what we expect them to do? There is a lot of discussion about critical thinking, critical analysis and critical literacy in higher education, but just how critical do we allow students to be, especially when governed by our assessment systems? My own exploration of the relationship between conceptions of failure and achieving the learning–feedback–assessment triumvirate is grounded within critical theory and critical pedagogy, and much of the discussion in this chapter draws on the work of the early critical theorist Theodor Adorno (for example, Adorno 1973, 2005a, 2005b). This brings a further set of interrelationships into play, alongside the aspects of the triumvirate. Critical pedagogy is based on a commitment to the interrelationships between education and society. Education is shaped by the social world and, in turn, education helps to shape that social world. In addition, the critical aspect of such pedagogy, drawn from critical theory, carries a belief that society as it is currently organised is unjust (Brookfield 2003) and that the aims of education should be to move towards greater social justice (McLean 2006). Understood through this lens, the triumvirate takes on a particular form, and the role of failure as a pedagogical process begins to have meaning that spans both individual learning and the achievement of social justice in wider society and higher education. The link between the individual and the social comes, I argue, in a commitment to higher education as a place in which complex knowledge is studied, generated and critiqued (McArthur 2013). In addition, the rich engagement with complex knowledge requires a going beyond what already exists. This, surely, is the meaning of the critical engagement that is, rightly, so strongly stressed as being at the core of academic work in higher education. To describe a theory does not require creating anything new, and as such it is quite possible to get the description right the first time around. However, to critique, evaluate or synthesise a theory does require a going beyond what already exists – the students must contribute something of their
own thinking. This is a riskier process, where initial, even repeated, failure is likely. It is not the purpose of this chapter to consider particular examples of assessment, but rather to probe underlying assumptions behind assessment practices and how these relate to learning. Indeed, as Adorno often stressed, it is not the purpose of critical theory to tell anyone what to do, but rather to contribute to the process of thinking about what is to be done and why. As such, critical theory can perform a far more active and challenging role than any 'how to' guide. However, it is my aim to link the theoretical discussion of failure and the triumvirate to some of the key issues that – based on this analysis – we may need to reconsider in our assessment practices. This chapter has four clearly defined sections, each of which provides a different facet of my overall argument. In the next section, I outline the importance of Hounsell's work on learning, feedback and assessment to my argument about the positive pedagogical value of failure. I then consider some of the powerful social constructions of the notion of failure and how these can influence, often negatively, learning experiences. I follow this with a different perspective on failure, based on Adorno's critical theory. Finally, I draw these facets together to discuss the role of failure in students' active engagement with complex knowledge within higher education.

Hounsell and the Learning–Feedback–Assessment Triumvirate

A key aspect of Dai Hounsell's contribution to higher education scholarship has been the link between assessment and learning that he has championed and researched throughout much of his career. Dai was in the vanguard that rejected assessment as merely 'that bit at the end' and moved it to its rightful place at the heart of student learning. Assessment and learning are intertwined on many levels: it is through assessment that one learns what to learn and how to learn; and assessment itself should be an act of learning. Hence, an assessment experience should not simply repeat what is already known, but be a creative opportunity through which students extend and build upon previous learning. In recent years the interrelationship between assessment and learning has been enhanced by Dai's leading work on a third element that further reinforces this link: feedback. Dai Hounsell's work over many years has been critical in establishing the
intertwined nature of assessment, feedback and learning, such that we can describe these in terms of an essential triumvirate. Note that this is more than simply stating that these aspects relate to one another, as in saying that good feedback improves learning and hopefully leads to students' improved assessment performance. Rather, the triumvirate alerts us to the importance of understanding each aspect not as separate, asynchronous stages in the pedagogical process, but rather as co-existing facets, often synchronous in nature and mutually reinforcing. The triumvirate is also a dynamic entity, its underlying logic based on learning, feedback and assessment as active and ongoing processes. While assessment may have some role in providing an evaluation of learning at a particular moment, it is also part of the process that shapes learning and thus should inevitably be forward-looking. So, too, feedback within the triumvirate is not an account of what happens at the end of the learning story, but is itself integral to learning. In this way, the learning–feedback–assessment triumvirate suggests a reconsideration of traditional distinctions between formative and summative assessment and feedback, because any assessment or feedback should be part of the learning process and hence formative (even if a summative purpose sits alongside). Indeed, Hounsell (2007) argued that the formative/summative divide is 'familiar but rather shop-worn' (103) and that the distinction between high- and low-stakes assessment might more usefully open up 'fresh perspectives and insights' (103). Learning occurs through a series of interconnected and multifaceted encounters, and assessment and feedback need to reflect this. As Hounsell also argues:

feedback is likely to have much greater longevity if that particular assignment or assessment is imminently to be repeated, forms part of a linked chain of assessments within a module or course unit, or enhances students' evolving grasp of a core component (a key concept, say, or skill in interpreting data) of a wider programme of study. (Hounsell 2007, 104)
Achieving this triumvirate, however, remains something of an elusive goal within higher education, inhibited by powerful barriers and contrary practices. The tendency to treat learning, feedback and assessment separately and then to hope they will somehow coalesce into a coherent student experience remains commonplace. Evidence for this can be found in a number
of contradictions within higher education. For example, theories of learning suggest that learning is a developmental process that takes place over time; however, courses are increasingly compressed into short-term units that may or may not connect with past or future learning. Similarly, assessment is understood to influence how learning occurs, and yet traditional exams and essays remain the norm. Many United Kingdom (UK) institutions have actively championed the need for better feedback; however, this is frequently done in terms of an 'add on' to an existing course or assessment, rather than appreciating that for feedback to be effective what is required is a thoughtful reconsideration of the whole learning and teaching experience – as implied by the learning–feedback–assessment triumvirate. Achieving the triumvirate requires both an uncoupling of feedback from its exclusive association with assessment (McArthur and Huxham 2013) and a new sense of the interrelatedness of all three as part of a learning experience. The purpose of meaningful feedback is to suggest ways to move beyond what has been achieved to a certain point and to consider how it can be done better or differently in the future (see Hounsell 2007). Therefore, such feedback is a process that must involve an open and positive acceptance by the student that the current piece of work is not as successful as it could be – and this is the foundation upon which I suggest we need to rehabilitate failure. We need to elevate the concept of failure to mean more than simply an absence of success; instead, failure needs to be understood as an important pedagogical phase, particularly when engaging with complex knowledge in critical ways.

Social Constructions of Failure

Failure and success are powerful notions in our social lives, both prosaic and profound, and subject to shared understandings and wide variations. We may fail to catch the fly that has got into our house, while succeeding in getting pregnant. We may fail to save our marriage, while succeeding in finding a car space at the supermarket. So wildly various are the potential uses of these terms that it is tempting to think they must now be empty of substantive meaning; however, this is not the case. The social status that comes with being a 'success' (the definition of which is often optional, rather like the current use of the term 'celebrity') should not be underestimated. Similarly, the impact on one's identity of being labelled a 'failure' can be painful and damaging.
The novelist Julian Barnes encapsulates this in a short reflection on failure that he wrote for The Guardian newspaper:

When I was growing up, failure presented itself as something clear and public: you failed an exam, you failed to clear the high-jump bar. And in the grown-up world, it was the same: marriages failed, your football team failed to gain promotion from what was then the Third Division (South). Later, I realised that failure could also be private and hidden: there was emotional, moral, sexual failure; the failure to understand another person, to make friends, to say what you meant. But even in these new areas, the binary system applied: win or lose, pass or fail. It took me a long time to understand the nuances of success and failure, to see how they are often intertwined, how success to one person is failure to another. (Athill et al. 2013)
In seeking to reconsider failure, my intention is not to blithely dismiss the importance for any student of failing a course, exam or assignment. Nor is it meant to be simply about playing with words – to rejig failure so that failure is good, failure is everywhere; although, in that vein, the following quote from George Orwell’s novel Nineteen Eighty-Four does rather capture the line on which I am trying to balance: ‘In this game that we’re playing, we can’t win. Some kinds of failures are better than other kinds, that’s all’ (Orwell 2003, 153). For all this uncertainty, fuzziness and different usages, success and failure really do matter – socially and academically. Jackson (2010) describes how ‘academic “success” is valued so highly and promoted so strongly in contemporary UK society that fears of academic failure are commonplace in schools’ (41). Such fears have enormous personal, academic and social implications. Reay and Wiliam (1999) relate the example of a primary school girl called Hannah who has to sit her SATs (standard assessment tasks). Hannah describes being ‘really scared’ and fears that if she does not do well, then ‘I’ll be a nothing’ (345). Similarly, in the higher education context, Falchikov and Boud (2007) report how students who do not appear to do well in assessments describe themselves in terms such as being ‘the problem’ (149). Reconsidering understandings of failure is, therefore, also about challenging prevailing relationships of power in education. To what extent does
a 'failure' reflect on the actions of the student or on the actions and choices of the teacher? Academics also need to reconsider student failure in terms of their own teaching practice. The sources of failure and academic underachievement do not lie in what the student does alone. At the same time, students do need a more active sense of their own role in the learning–feedback–assessment triumvirate. There are two different responses to perceived failure that a student can make (notwithstanding the role here, too, of the teacher). Passive failure is experienced as a one-off event, with few or no connections made to how that failure occurred or how it could be different in future. Students have little sense of their own agency in being able to shape their learning experience and hence the assessment result. Students who experience failure in this way often resubmit their work with few changes and a sort of 'fingers crossed' approach that they might be graded a little better the second time around. As a result, passive failure is often doomed to be repeated. In contrast, critical failure is failure that leads to a different (better) outcome: failure through which we learn and thus achieve potentially more than if the failure had not been experienced. This form of failure is exercised by students who take active control of their own learning, a necessary part of which is a clear-sighted ability to recognise where and when their learning falls short of its potential.

Adorno's Critical Theory and the Links between Engagement, Knowledge and Failure

Adorno is a great champion of failure. He associates failure, in intellectual and artistic terms, with a striving to go beyond what already exists. My own refusal to simply change the discourse – to call failure something more pleasing and appealing – is very much the result of an Adornean take on critical theory in which embracing that which is easier is regarded with suspicion. To re-label failure as something less unpleasant risks creating a distortion that also robs failure of its potency in terms of both learning and understanding the social world. There are two aspects of Adorno's work that I want to bring to this consideration of failure, social justice and the learning–feedback–assessment triumvirate. These are, first, understanding the complex nature of the knowledge engaged with within higher education and the social world within
which that knowledge exists. The second is Adorno's association of failure with authenticity in relation to this epistemological and social complexity. In another context, I have written about the nature of knowledge in terms of 'virtuous mess and wicked clarity' (McArthur 2012b, 419). Here, the emphasis was on the dangers of falsely tidying up reality to produce illusionary notions of neatness and coherence. I believe this can be usefully extended to our understanding of the purposes and possibilities of assessment and learning and of the role that failure plays. The danger of privileging success over experiences of failure in some educational contexts is that it can implicitly – and inadvertently – privilege what is easy to know over that which is complex, difficult or elusive. My perspective on the complex nature of knowledge has been directly influenced by Adorno's work on negative dialectics and non-identity. While these terms may seem rather abstruse, I would like to briefly outline why I have found them so useful in understanding the nature of knowledge engaged with in higher education. The two terms, negative dialectics and non-identity, lie in a pivotal relationship (Cook 2008) based upon an 'ultimately imperfect match between thought and thing' (Wilson 2007, 71). Adorno (2005a) argues that: 'Whoever wants to define the concept precisely easily destroys that which he is aiming at' (143). This is because we can only truly understand an object through the mutual dialectic between the universal and the particular aspects of that object. For example, there is a universal aspect to being a cat, but there is also an individual aspect to being a particular cat. Extending this into the human realm, the significance of this distinction is hopefully even more apparent. Thus, while there is a unity of identity in being a member of the working class, for example, or in being a woman or a parent, there are also myriad individual aspects that go beyond any form of identity grouping, in order to reflect who a particular person is. Now extending this further to how we can know or understand the social world, an Adornean perspective on knowledge sees this as a process that is made up of multiple aspects that can never be easily tied down to one moment or thing. This is not the same as a relativist position: Adorno is firmly located in a modernist realm. An Adornean approach does not deny the possibility of meaning or understanding, but rather stresses that meaning is difficult. Adorno explains: 'A self-opinionated epistemology that
insists on precision where it is not possible to iron out ambiguities, sabotages our understanding and helps to perpetuate the bad by zealously prohibiting reflection upon whether progress is taking place or not’ (Adorno 2006, 141).

This perspective on how we can, and should, understand the social world has a direct implication for the place of failure in the process of learning and knowing. For without failure, one cannot really understand – or, at least, understanding is limited to the partial and superficial. Adorno can give no greater compliment than to refer to Kant’s Critique of Pure Reason as a great failure: ‘With this the Critique of Pure Reason represents the first great attempt – and one doomed to failure – to master through mere concepts all that cannot be mastered by concepts’ (Adorno 2001, 234). Supposedly successful understandings – those that paper over uncertainties, gaps and fissures and which are able to display a high level of self-confidence on this basis – are unlikely to be genuine. A demystified understanding, in contrast, accepts the aspects of what cannot be known, or known easily, as part of a genuine engagement with a subject. The implication for how students learn, and how they are then assessed, lies in challenging the notion that certainty and confidence are always epistemologically virtuous. For example, a student asked to write an essay on the relationship between poverty and health is able to draw on research to make some conclusions on a general level; however, to display a genuine understanding would also involve recognising that much remains unknown about the many different ways in which the link between poverty and health plays out in the lives of individuals.

Occasionally, Adorno makes an explicit link between his critical theory and the practice of teaching and learning. For example, introducing his lectures on sociology, he tells his students that ‘academic study differs emphatically from school work in that it does not proceed step-by-step in a mediated, unbroken line. It advances by leaps, by sudden illuminations’ (Adorno 2002, 5n6). In learning through this experience of ‘leaps’ and ‘sudden illuminations’, students need to be able to accept that at times they will fail to understand some aspects of the object of their study. Sometimes this failure is a process, and illumination will come later, as different aspects of what they are studying come into play within their minds. At other times, what is required is an acceptance that some aspects of knowing can never be fully tied down at any particular moment, or within a particular assessment.
Indeed, for Adorno, this refusal to make every aspect of thought clear is conveyed in his assertion that: ‘The injunction to practise intellectual honesty usually amounts to the sabotage of thought’ (Adorno 2005b, 80). In this aphorism from his visceral and painful work, subtitled Reflections on a Damaged Life, Adorno suggests the social justice implications of knowledge that remains tied purely to the familiar:

For if honest ideas unfailingly boil down to mere repetition, whether of what was there beforehand or of categorical forms, then the thought . . . will always incur a certain guilt. It breaks the promise presupposed by the very form of judgement. (81)
So to merely repeat knowledge, without bringing one’s own judgement to that knowledge, is to be complicit in perpetuating possibly unreflective assumptions and distortions. It seems to me that there is a clear link between this position taken by Adorno and the rather common exhortation within universities for students to be engaged in critical thinking, rather than mere description. This criticality of thought should extend through all levels of higher education, from the first undergraduate year onwards. Where I find Adorno’s work helpful is in enabling us as teachers to understand that simply saying students should be critical rather than descriptive may not be enough, if the contexts in which learning and assessment take place are shaped by hidden impulses to sustain what is already known. I fear that critical thought can be tamed when it becomes one among several points on a checklist of acceptable and predetermined learning outcomes (see also McArthur 2012a, 2013).

Adorno also makes a direct link between necessary failure and authenticity. Note, however, that Adorno’s use of authenticity and that of Heidegger, outlined in Kreber’s earlier chapter, are somewhat different. Just as in his negative dialectics meaning cannot be neatly or easily reached, so for Adorno authenticity is achieved through not clarifying or providing a more complete and comprehensive meaning than is actually possible. As Paddison (2004) explains: ‘Adorno sees authenticity in the failed attempt to achieve coherence, integration, and consistency in a fractured world’ (218). Hence, authenticity becomes located ‘in the unflinching encounter with the fragmentation and contradictions of modernity’ (213).
As Paddison (2004) notes, authenticity, like other concepts in Adorno’s work, is best understood in terms of a constellation of ideas – the constellation being a recurrent theme in Adorno’s work, openly borrowed from his friend Walter Benjamin. Included in Adorno’s meaning of authenticity are everyday senses such as ‘original’, ‘real thing’ and ‘unique’, as well as the authority of the real compared with the fake (Paddison 2004, 201). In my own experience, students often crave this authenticity, but it is regularly denied to them by the formalities of higher education. For example, in work that I previously undertook with an honours-level course on ecology (see McArthur and Huxham 2011), students argued for more time to be spent learning outside the formal classroom environment and for assessments therefore also to be based outside the classroom. As one student explained, if asked to identify a particular species of flower from a pristine example in a textbook, she would have little trouble succeeding in the set task. However, this would in no way be useful to her future practice of identifying a dirty, partially trodden example in a real meadow. This student, I suggest, was arguing for ‘the authority of the real compared with the fake’, as suggested above by Paddison’s analysis of Adorno. Success in identifying from the fake is relatively easy; the authentic may be harder to identify – it may involve some failures along the way – but this is the essence of its greater value.

Authenticity and failure are linked to a dynamic temporal understanding, which can also be aligned with the notion of education. For Adorno, a work of music should not be understood as a static Being (Sein), but as a Becoming (Werden) – ‘an historical unfolding’ (Paddison 2004, 203). The importance of understanding being rooted in a dynamic sense is also reflected in Adorno’s negative dialectics, which rests on an acceptance of its failures, as well as on hope (though such hope is expressed through an unflinching gaze, not a romantic one). Failure, of some sort, is an almost inevitable consequence of striving towards achievement in a modern world characterised by deep-set and complex contradictions. This is, therefore, a certain kind of failure. It is:

a kind of failure which is not simply the result of technical inadequacy on the part of the composer but rather comes from the impossibility of succeeding in the task to be faced, a task which must be undertaken nevertheless. (Paddison 2004, 216)
It is in Adorno’s works on music that his attitudes to failure are most clearly articulated. In a discussion of Beethoven, Adorno distinguishes between the accidental failure of ‘lesser works’ and the inevitable failure of anything that aspires further:

Art works of the highest rank are distinguished from the others not through their success – for in what have they succeeded? – but through the manner of their failure . . . whereas the failure of lesser works is accidental, a matter of mere subjective incapacity. A work of art is great when it registers a failed attempt to reconcile objective antinomies. That is its truth and its ‘success’: to have come up against its own limit. In these terms, any work of art which succeeds through not reaching this limit is a failure. (Adorno, Beethoven: The Philosophy of Music, 99–100, quoted in Paddison 2004, 217)
Again, we can draw a parallel here with the experience of the ecology student attempting the relatively unchallenging task of identifying a specimen in a glossy book, compared with the more authentic but challenging – and more open to failure – situation in a muddy meadow.

Here, then, is the key idea from Adorno’s critical theory that I wish to apply to this discussion of failure and the achievement of the learning–feedback–assessment triumvirate. Complex understanding cannot be achieved through successfully tying down a succession of discrete pieces of knowledge in a neat way. There is a holistic and dynamic nature to such understanding that is hard to capture and likely to involve a necessary element of some sort of failure. Striving to achieve more than what is already known cannot be judged by static and one-dimensional notions of success. Great philosophy, art and music do not flourish in an environment in which failure is stigmatised and learning never ventures beyond what already is.

Engagement with Complex Knowledge: Failure as a Pedagogical Process

One of the defining features of higher education is the complex nature of the knowledge discussed, critiqued and generated within this sector (McArthur 2013). In the previous section, I outlined how my understanding of knowledge is based upon Adorno’s critical theory. In turn, this feature of higher education should influence how we approach assessment and learning, for
complex knowledge is unlikely to be mastered the first time around; rather, it requires a process of engagement in which ‘success’ at every given stage is unlikely. And yet we have assessment systems that privilege first-time learning over the more complex intellectual process of movement back and forth, upon which credible and solid academic knowledge is actually based. I suggest a resonance between my argument and that of Kreber (this volume) regarding the importance of mistake-making to students’ learning.

The modularisation of learning within higher education, albeit in the name of student choice, has had dire consequences for failure as a pedagogical process. Modularisation has pushed learning into smaller, discrete bundles, each with a summative assessment that often comes just a few weeks after learning begins. Financial pressures within universities have also led to the introduction of resit fees and tougher regulations around resits generally, adding further pressure on students attempting to achieve critical and complex understandings within often truncated course situations. This is not to deny the practical dimensions – students cannot be allowed to keep failing endlessly – but rather to suggest that the prevailing systems address this with rather blunt instruments, seemingly guided more by financial practicalities than by pedagogical need.

At the heart of the learning–feedback–assessment triumvirate is the active engagement of students in their own learning. In order to genuinely take control of their learning, students require a fearless capacity for self-evaluation. This, in turn, requires both failure and success to be understood in dynamic ways, often linked to recognition of the complex nature of knowledge. Neither success nor failure should close down learning; rather, both should be springboards to an ongoing cycle of reflection and intellectual challenge.

Social theories of learning emphasise the way in which meaning is constructed over time through an iterative process. Rather than ‘receiving’ knowledge, students must construct it for themselves. This does not mean ‘making up’ just anything, but rather that there is a path to learning about a concept or idea that involves an interaction between the student and the knowledge. Freire (1996) exemplified this in his critique of ‘bankable’ knowledge. Here, he refers to knowledge being treated as something that can be passed over or transmitted. The student is largely regarded as an empty vessel to be filled with knowledge by the teacher. Assessment, then, becomes an act of withdrawing that
knowledge and passing it back to the teacher, largely unchanged. Success, in this context, is assured by little more than passive compliance. In contrast, there is a link between students’ active engagement and the processes of mistake-making or failure (McArthur 2012a, 2013). I have suggested the metaphor of a palimpsest to consider the nature of students’ meaningful engagements with complex knowledge. A palimpsest is:

Manuscript or piece of writing material on which later writing has been superimposed on effaced earlier writing. Something reused or altered but still bearing visible traces of its earlier form. (Oxford English Dictionary 2010)
I suggest that the metaphor of the palimpsest brings with it a notion of making mistakes, failing and trying again as part of an authentic engagement with real knowledge, rather than some sterile, pale imitation:

[Palimpsest] emphasises that exciting, creative and transformative knowledge should be a bit messy, scribbled out, changed, re-thought and rewritten. It should bear the marks of mistakes, revisions, rethinks, challenges and even misunderstandings. Knowledge, and our engagement with it, is then likely to be stronger and richer for that, and reflective of the relationship between knower and knowledge. (McArthur 2012a, 742)
A clear consequence of this is that students are unlikely always to fulfil this meaning-making task on their first encounter with new or complex knowledge. Hence, in a positive sense, the process of meaning-making is likely to involve failures along the way. I suggest that this position is implicit in the triumvirate of learning–feedback–assessment, but now needs to be made more explicit. The implicit aspect is evident in Hounsell’s (2007) conceptualisation of sustainable feedback. An important feature of such feedback is its usefulness for future learning and activity, rather than being merely a record of a past activity that is now out of date and unlikely to be repeated. To achieve such sustainable feedback, I suggest we may need to reconsider curriculum design in order to enable learning to occur over extended time frames, challenging the current system of assessment and feedback tightly constrained within small, individual courses. One way to do this is to build engagement with feedback into a course structure, so that students are supported, especially
through the provision of dedicated time and space, to make the iterative connections between past performance, self-evaluation and future tasks. The value of this looking forward, as suggested by Hounsell’s sustainable feedback, must surely also be grounded in looking back – which I think is a typically Adornean dichotomy. We learn for the future by taking a clear eye to what we have already done, thus learning from both its successes and its failures. To borrow from Marx: students who do not learn from the mistakes of their assessments are likely to repeat them.

Sadly, many students do not learn from their mistakes. One reason for this, I suggest, lies in feedback approaches that point out what is wrong, and perhaps even what might be considered better, but do not make a link between past and future achievements. If failure and success become disembodied from one another, then the student has no basis from which to move from one to the other. The phenomenon of failing again in a resit exam or essay happens far more often than many may realise. In talking to students who have passed or failed again on their resits, a common story emerges. First, the experience of failure is absolutely ‘gutting’, painful and one for which few university undergraduates are prepared. In addition, a lack of understanding of the assessment criteria – and hence an inability to visualise what a successful essay or exam looks like (see Sadler, this volume) – often leads students to take a ‘better luck next time’ approach to a resit. Such students can lack both the emotional toughness and the academic know-how to confront the nature of the failure and therefore to be able to learn from it. In contrast, students who are able to work through the initial disappointment and despair of failing are often able to turn it into a double learning experience. For these students, who are compelled to leave nothing to chance on their resit, the failure enables them to engage with, and understand, the expectations of the assessment far more clearly than they had done previously. This then becomes a rich learning moment. Students need to be encouraged to think of their resit not in terms of simply getting through a particular course, but as a foundation for stronger academic success in the longer term. The same support should also be available for students who scrape through on a pass or who underachieve at any point on the grading structure.

Key to the reconsideration of failure and success in an academic learning
context is the understanding of the temporal nature of both of these outcomes. I suggest that both failure and success are pedagogically empty notions when understood as ends in themselves. In contrast, both need to be considered in terms of future learning and application if they are to be useful. For failure, this means that the experience is appreciated in terms of how the student responds to it (be it in actual grade terms or as a sense of personal underachievement). Does the student grasp this as a formative experience? Is he or she able to do so? What forms of support or examples have students been given to enable them to grasp failure as a formative learning stage? For success, the solution lies in considering how students build on a successful engagement with knowledge or a particular task. What do they do next? To what extent does the specific example of ‘success’ lay the foundation for successful application and engagement in an ongoing way?

By relieving students of the burden of success at every step of their learning, we can actually give them considerable power over their experiences and engagement. Again, my own experience of undergraduate teaching is that many undergraduates come to university with far more enthusiasm and original thinking than is often recognised. We promise much in terms of critical thinking and scholarly curiosity at the start of courses, but then quickly fall into the same old grind towards an end-of-semester assessment. I explicitly begin my courses by trying to give students a sense of the many learning opportunities and possibilities that will be revealed through the lectures, readings and course discussions. I try to emphasise a sense of focus within each course, but not boundaries. However, I am also aware that as each semester progresses, both my students and I start to become harnessed to the assessment requirements, which loom ever larger as the small number of teaching weeks quickly passes by. The exciting discussions that characterised the first few sessions can be lost through the sheer unexpected speed with which the end of semester comes into focus and learning seems to be traded for assessment. As such, even formative feedback can take on a mechanistic character, addressing the details of a specific assignment rather than being the pedagogical dialogue it should ideally be (see McArthur and Huxham 2013).

There are ways to mitigate this, and I have tried several. I have had some success building in more low-stakes activities (which
therefore lessen anxieties about perceived failure) that combine a sense of open exploration with building towards the summative assessment. However, I suggest that there is still a limit to what can be achieved while the formalities of higher education learning, particularly as represented through assessment practices, are based on relatively short and sometimes unrelated courses.

This suggested rethinking of failure should not be confused with a saccharine desire to shield students from the reality of not having met a grade. It is useful here to borrow from Clegg and Rowland’s (2010) work on kindness in education, in which they stress that kindness should not be confused with a lack of rigour, but rather regarded as an academic virtue, as suggested by the idea of a ‘critical friend’. Students do not require such a critical friend simply to engage well with their own work; the critical friend is vital if students are to go beyond mere description of what already exists. In other words, such critique is important to challenge students to have the confidence to bring something of themselves to their work and thus transcend mere description of what is already known. Another perspective on this can be found in Greer’s (1981) work on the relative lack of great women artists in the history of painting. Greer names flattery, or false praise, as one of the barriers to achievement, noting that a young woman with a desire to paint did not need to fear criticism, but rather ‘poisonous praise’ (68). Greer’s argument can be extended to a learning context to reinforce the importance of students taking responsibility for, and control of, their own learning if they are to achieve their full potential. This does not negate the role of teacher or assessor, but rather places it alongside that of the student in terms of importance.

Traditional forms of assessment place the assessor as the key actor and arbiter. However, unless students can evaluate their own work, they cannot learn to their full potential. The student who lacks the confidence to say that she has failed to achieve her best in a piece of work inevitably holds herself back from achieving anything better in the future. Indeed, there is a form of alienation between the creator of the work and the work itself if the work’s only worth is located outside the student. This situation is much harder to sustain once a student is encouraged and enabled to critically challenge existing knowledge and bring it into conversation with his or her own analysis. It is only through this process that we can guard against the alienation of a student from his or her own thinking.
This is, then, interrelated with the increasing commodification of assessment, where the value lies in the exchange value attributed to a certain degree classification, rather than in what students have learned and how they may go on to apply this in the social – as well as, crucially, the economic – world:

Rather than higher education being a journey or transformative experience, it is simply a packaging and marketing process: the degree is the shiny ribbon on the top of the box . . . The idea that people may engage in higher education to develop and realize their potential as human beings appears quaint and anachronistic. (McArthur 2011b, 742)
However, what I conceptualise as alienation, Boud – perhaps less controversially – refers to as passivity:

The fundamental problem of the dominant view for assessment is that it constructs learners as passive subjects. That is, students are seen to have no role other than to subject themselves to the assessment acts of others, to be measured and classified. They conform to the rules and procedures of others to satisfy the needs of an assessment bureaucracy: they present themselves at set times for examinations over which they have little or no influence and they complete assignments which are, by and large, determined with little or no input from those being assessed. (Boud 2007, 17)
While Boud rightly argues that students need greater control over the processes of their learning and assessment, my argument, drawing on Adorno’s critical theory, seeks to go a step beyond this. Here, I extend my notion of failure still further and consider what it means to seek to know or understand beyond what is already known. This aspect of my reconsideration of failure has very important social justice repercussions, as it helps address the perennial dilemma of how we seek to achieve greater social justice from within a prevailing social system that we perceive to be unjust. Adorno is important here, I believe, because of his emphasis on finding ways to stand, however briefly or partially, outside the mainstream or status quo (see also McArthur 2011a, 2013) or, in Adorno’s words, ‘the value of a thought is measured by its distance from the continuity of the familiar’ (Adorno 2005b, 80). Near the start of a series of lectures on moral philosophy, Adorno warns his students:
So if I am going to throw stones at your heads, if you will allow the expression, it will be better if I say so at the outset than for me to leave you under the illusion that I am distributing bread. (Adorno 2000, 2)
To extend the metaphor, students who sit receiving gifts of bread from their teachers never have to confront failure, but nor do they learn. Learning should be a difficult process. However, this does not mean that assessments need to be difficult simply by being abstruse, high stakes or disconnected from students’ lived experiences of the knowledge they are studying.

Conclusion

The pressure to tie down knowledge to that which is already known has permeated not only students’ learning, but also our own practice as researchers. There are clear parallels between the epistemological assumptions behind predetermined learning outcomes for students and the requirements of modern research grants, in which a clear statement of what will be discovered, and how it can be useful, is required before any enquiry begins (McArthur 2013). Modern approaches to research funding preclude the unexpected, because such research involves a risk of failure, which is utterly forbidden within an increasingly audit-based academic culture (see Strathern 2000). In this way, the social justice implications of arguing for a new appreciation of failure as both an epistemological and a pedagogical process become apparent. There is an impact upon social justice within universities: if we as academics have become tamed in our own engagement with knowledge by the temptations offered by those who control research funding, then how can we rigorously and authentically support our students in their own creative, risky and real engagement with knowledge? And for broader society, what are the implications of students trained only to walk within the lines of established knowledge? In continuing to purport that the purposes of higher education involve critical engagement, while limiting the capacity for such engagement through systems and procedures that favour assessments that are easy to mark, easy to audit and broken up into easily managed units, we do a great disservice to our students and to those in wider society who are in need of the professional, artistic or creative expertise that should be nurtured within higher education.
However, if we bring the rehabilitation of failure as a pedagogical process together with the work of Dai Hounsell, which I have summarised using the learning–feedback–assessment triumvirate, I believe we establish an especially helpful combination that thoughtfully challenges the big issues, as well as considering practical realisations. Hounsell has established that only by understanding all of these aspects of the student experience holistically can we support genuine and authentic engagement. The approach suggested by this perspective is still hard for some academics to understand and implement. Radical reform of assessment is not easy, especially when a little light tinkering can appear to tick all the right boxes required by institutional policies or quality agendas. To achieve the triumvirate requires embracing challenge, thinking differently and a possibly painful reconsideration of what the academic (student and teacher) role entails. One aspect of this, I suggest, is the reconsideration of failure. We must move beyond a system in which the heavy weight of a fear of failure, and the stigmatisation of failure, puts brakes on the very process of learning. The complexity of knowledge is a pedagogical and social matter, and it cannot be addressed without a reconceptualisation of failure in this context. Thus, in Adorno’s words: ‘Thought waits to be woken one day by the memory of what has been missed, and to be transformed into teaching’ (Adorno 2005b, 81).

References

Adorno, T. W. 1973. Negative Dialectics. London: Routledge and Kegan Paul.
Adorno, T. W. 2000. Problems of Moral Philosophy. Cambridge: Polity.
Adorno, T. W. [1959] 2001. Kant’s Critique of Pure Reason. Cambridge: Polity.
Adorno, T. W. 2002. Introduction to Sociology. Cambridge: Polity.
Adorno, T. W. [1963 and 1969] 2005a. Critical Models. New York: Columbia University Press.
Adorno, T. W. [1951] 2005b. Minima Moralia. London: Verso.
Adorno, T. W. 2006. History and Freedom: Lectures 1964–1965. Cambridge: Polity.
Athill, D., M. Atwood, J. Barnes, A. Enright, H. Jacobson, W. Self, and L. Shriver. 2013. ‘Falling Short: Seven Writers Reflect on Failure.’ The Guardian, June 21.
Boud, D. 2007. ‘Reframing Assessment as if Learning were Important.’ In Rethinking Assessment in Higher Education, edited by D. Boud and N. Falchikov, 14–25. Abingdon: Routledge.
Brookfield, S. 2003. ‘Putting the Critical Back into Critical Pedagogy: A Commentary on the Path of Dissent.’ Journal of Transformative Education 1: 141–9.
Clegg, S., and S. Rowland. 2010. ‘Kindness in Pedagogical Practice and Academic Life.’ British Journal of Sociology of Education 31: 719–35.
Cook, D. 2008. ‘Theodor W. Adorno: An Introduction.’ In Theodor Adorno: Key Concepts, edited by D. Cook, 3–19. Stocksfield: Acumen.
Falchikov, N., and D. Boud. 2007. ‘Assessment and Emotion: The Impact of Being Assessed.’ In Rethinking Assessment in Higher Education, edited by D. Boud and N. Falchikov, 144–55. Abingdon: Routledge.
Freire, P. 1996. Pedagogy of the Oppressed. London: Penguin.
Greer, G. 1981. The Obstacle Race. London: Picador.
Hounsell, D. 2007. ‘Towards More Sustainable Feedback to Students.’ In Rethinking Assessment in Higher Education, edited by D. Boud and N. Falchikov, 101–13. Abingdon: Routledge.
Jackson, C. 2010. ‘Fear in Education.’ Educational Review 62: 39–52.
McArthur, J. 2011a. ‘Exile, Sanctuary and Diaspora: Mediations between Higher Education and Society.’ Teaching in Higher Education 16: 579–89.
McArthur, J. 2011b. ‘Reconsidering the Social and Economic Purposes of Higher Education.’ Higher Education Research & Development 30: 737–49.
McArthur, J. 2012a. ‘Against Standardised Experience: Leaving Our Marks on the Palimpsests of Disciplinary Knowledge.’ Teaching in Higher Education 17: 485–96.
McArthur, J. 2012b. ‘Virtuous Mess and Wicked Clarity: Struggle in Higher Education Research.’ Higher Education Research & Development 31: 419–30.
McArthur, J. 2013. Rethinking Knowledge in Higher Education: Adorno and Social Justice. London: Bloomsbury.
McArthur, J., and M. Huxham. 2011. Sharing Control: A Partnership Approach to Curriculum Design and Delivery. York: Higher Education Academy.
McArthur, J., and M. Huxham. 2013. ‘Feedback Unbound: From Master to Usher.’ In Reconceptualising Feedback in Higher Education, edited by S. Merry, D. Carless, M. Price, and M. Taras, 92–102. London: Routledge.
McLean, M. 2006. Pedagogy and the University. London and New York: Continuum.
Orwell, G. [1949] 2003. Nineteen Eighty-Four. London: Penguin.
Oxford English Dictionary. 2010. ‘Palimpsest.’ In Oxford English Dictionary, edited by A. Stevenson. Oxford: Oxford University Press. Accessed December 19, 2013. www.oxfordreference.com.
Paddison, M. 2004. ‘Authenticity and Failure in Adorno’s Aesthetics of Music.’ In The Cambridge Companion to Adorno, edited by T. Huhn, 198–221. Cambridge: Cambridge University Press.
10 Guiding Principles for Peer Review: Unlocking Learners’ Evaluative Skills

David Nicol
Focus of the Chapter
Enhancing students’ capacity to regulate their own learning, independently of the teacher, is a central goal in higher education. All learners can and do self-regulate; however, those more effective at self-regulation assume greater responsibility for their academic performance and produce higher quality work. A pivotal construct underpinning learner self-regulation is evaluative judgement. To regulate one’s own learning calls on a sophisticated capacity to make evaluative judgements about the quality of academic work as it is being produced. This chapter identifies peer review as the most productive platform for the development of evaluative skills and hence for learner self-regulation. Peer review is defined as an arrangement whereby students produce a written assignment and then review and write comments on assignments produced by their peers in the same topic domain. This chapter synthesises recent research on peer review in relation to the development of evaluative skills and the elaboration of knowledge. From this, it proposes a set of guiding principles for the design of peer review and provides some practical suggestions as to how each principle might be implemented.
Introduction

This chapter is dedicated to Dai Hounsell, who has made a significant contribution to our thinking about assessment and feedback in higher education over many years (Hounsell 2003, 2007; Hounsell et al. 2008). Not only has Dai carried out important research in this area, which has helped reshape current conceptions of assessment and feedback, but he has also been particularly focused on the actual practices of assessment within and across disciplines. Indeed, in a recent paper, Dai synthesised large bodies of research on assessment and feedback in different disciplines so as to identify and catalogue innovative approaches that others might adopt or adapt (Hounsell et al. 2007). One aspect of Dai’s more recent work has been to promote a greater role for students in assessment practices (Hounsell 2008), for example, in using teacher feedback, in formulating assessment questions, in actively using assessment criteria and in assessing their own learning progress. This chapter builds upon and extends this aspect by looking at how students’ evaluative skills might be developed not through being assessed or being given feedback by others, but through engaging in evaluative acts and delivering feedback themselves.

The recent literature on assessment and feedback in higher education now, more than ever before, emphasises the need to develop students’ self-regulatory abilities (Andrade 2010; Boud and Molloy 2013; Sadler 2010, 2013). Students must be equipped with the skills to think for themselves, to set their own goals, to monitor and evaluate their own work in relation to these goals and to make improvements to their work while it is being produced. They must also be able to carry out such regulatory activities in collaboration with others, for example, where performance goals and tasks are shared. It is also well recognised by researchers that developing this capacity for self-regulation and co-regulation cannot be achieved through assessment practices that are solely carried out and controlled by teachers, or where the primary conception of feedback is that of teacher transmission. Indeed, all contributors to this volume emphasise an active role for students in learning and assessment processes.

A pivotal construct underpinning the idea of self-regulation is that of evaluative judgement. The students’ capacity to regulate their own learning
fundamentally depends on their ability to make valid and informed evaluative judgements about the quality of their own work, whether produced individually or in collaboration with others. There is a growing body of literature, both nationally and internationally, on evaluative judgement, and it is strongly represented in the chapters of this book. In addition, in a recent document entitled Assessment 2020: Seven Propositions for Assessment Reform in Higher Education, a group of Australian researchers and academics has proposed evaluative judgement as the building block for recasting assessment practices:

Assessment is the making of judgements about how students’ work meets appropriate standards. Teachers, markers and examiners have traditionally been charged with this responsibility. However, students themselves need to develop the capacity to make judgements about their own work and that of others in order to become effective and continuing learners and practitioners. (Boud and Associates 2010, 1)
In my own work, I have also focused on assessment practices as the locus for developing students’ capacity to make evaluative judgements. Indeed, in 2006, I reinterpreted the research literature on formative assessment and feedback in higher education and positioned it within a model of self-regulation that emphasised evaluative judgement (Nicol and Macfarlane-Dick 2006). This model places student judgement, in the form of self-assessment, at the centre of all learning events. There were two reasons for this positioning. First, students are always monitoring and evaluating their own work and generating inner feedback as they engage in academic tasks. Those more effective at self-regulation produce better internal feedback and/or are more able to use the feedback they generate to achieve their desired goals (Butler and Winne 1995). Second, even when feedback is provided by others, if it is to influence their current and subsequent learning, students must engage in acts of assessment themselves; they must evaluate the external feedback they receive and generate internal feedback from it (Nicol 2009). More specifically, they must decode the feedback message, internalise it and compare and evaluate it with reference to their own work. As Andrade (2010) puts it, students themselves are always the definitive source of all feedback processes.

Based on the model described in the preceding paragraph, over a number of years I have been researching ways of strengthening students’ ability to
assess and become better at regulating their own learning. This chapter, while building on this earlier work, takes a slightly different stance. Instead of putting self-assessment centre stage, the focus is on the varied processes involved in peer review: a scenario where students evaluate the work of their peers and produce a feedback commentary. The purpose of this chapter is to provide some new insights into how evaluative judgement might be conceptualised and effectively developed through peer review. As well as adding to the current theory and literature, the chapter provides two practical outputs. First, it presents a set of principles of good peer review practice for the development of evaluative judgement. Prior work has established the value of principles in making complex research findings accessible to busy practitioners who do not have time to read and interpret the educational literature. Second, it provides specific examples of how these principles might be instantiated in a range of different contexts. Earlier research has also shown that practice examples can provide useful entry points for practitioners who wish to implement principles within their own discipline (Nicol and Draper 2009). Elsewhere, I have provided a fuller discussion of the value of principles and examples (Nicol 2013c).

Evaluative Judgement and Knowledge Construction

The concept of evaluative judgement is receiving increasing attention in the higher education literature. Cowan (2010), for example, claims that:

. . . a more specific emphasis should be placed in undergraduate education on the explicit development of the ability to make evaluative judgements. This higher level cognitive ability is . . . the foundation for much sound successful and professional development throughout education, and in lifelong development. (323)
Cowan maintains that evaluative judgement underpins both decision-making and reflective practice in the professions. He also highlights its relevance to the informal choices we make throughout life. Cowan’s notion of evaluative judgement brings into focus the idea of critical thinking – a skill and disposition that all university courses claim to develop. Bensley (1998, 5) defines critical thinking as ‘reflective thinking involving the evaluation of evidence relevant to some claim so that a sound
conclusion can be drawn from the evidence’. In a similar vein, Halpern (2003) points out that the term critical in critical thinking describes thinking that emphasises evaluation. Evaluative judgement, it could be argued, is the cornerstone of critical thinking in all disciplines; it is involved in distinguishing arguments from assertions, finding the central question, appraising the form and qualities of evidence, making sound predictions from theories, generating good hypotheses, constructing convincing arguments, comparing the quality of different things – texts, arguments, objects – expressing one’s reactions to texts, considering multiple perspectives and so on (Bensley 1998).

Sadler (2010, 2013) discusses the concept of evaluative judgement, which he calls appraisal, from a feedback perspective. His concern is that telling students about the quality of their work through the delivery of teacher feedback is not an effective approach to helping them become competent producers of quality work by themselves. For this, they need an appreciation of what high-standard work is, skills in judging the quality of the work they are producing against this high standard and a repertoire of tactics and moves that they can draw on to make improvements. Sadler (2010) claims that if we wish to develop students’ competence in making evaluative judgements about academic work, then we should give them appraisal experiences similar to those of their teachers.

Boud’s interest in evaluative judgement derives from his position that assessment and feedback in higher education should serve a long-term purpose (Boud 2007; Boud and Molloy 2013). Although these processes should help students perform better in the present, they should also prepare them for life beyond university and for future employment settings. Boud thus sees a dual role for assessment: it is both about informing students’ judgements and about making judgements of them. In order to develop students’ capacity to make informed judgements, Boud advocates a greater use of self-assessment and a stronger role for teachers and peer communities in helping students calibrate their judgements.

Taking a wider radius, I have highlighted the role that evaluative judgement plays in the fostering of graduate attributes. In 2010, I analysed the documented attribute statements from a range of universities and showed that evaluative judgement is the underpinning process behind each attribute (Nicol 2010a). For example, students cannot develop ethical awareness by
being told about ethics; rather, they must learn to evaluate situations from an ethical perspective and make ethical decisions. Similarly, students cannot develop communication skills by being told about them – they must learn to evaluate the quality of their own communications and those received from others. From this analysis, I argued that if universities focused their attention on developing the student’s own evaluative capability, this would provide the foundation for almost all attribute development.

Giving students experience in making evaluative judgements does not just strengthen their evaluative capabilities; it also brings into play cognitive processes that usually result in their elaborating existing knowledge or constructing new knowledge in a specific topic domain (Chi 2009). When they make judgements, students interact with subject content: they process it, think about it, compare it with alternative content – real or internally generated – take different perspectives on it and create new knowledge that was not contained in the material being judged. Moreover, depending on the circumstances, and particularly on the depth of mental processing, this new conceptual and procedural knowledge will be incorporated into existing knowledge networks and will become personal capital that can be used by students and adapted and applied to new learning contexts. Hence, the act of making evaluative judgements is actually a ‘knowledge-building’ process.

To elaborate further, the act of making evaluative judgements always involves comparisons of one thing with another, as there is no such thing as an absolute judgement (Laming 2004). In making judgements, one reference point for the comparison is always the evaluator’s personal construct in the domain of the work to be judged. For example, when a teacher appraises the quality of the argument in a student’s essay assignment, she uses her past experience of appraising similar assignments to make her evaluative response. This is true even when she compares one student’s assignment with another or against criteria. Hence, making comparative judgements usually involves the generation of new knowledge – for example, new insights about similarities and differences between the current referent and those experienced before – that will elaborate, confirm, add to or change the evaluator’s personal construct. While this new knowledge will be internal to the evaluator, there are advantages to externalising these constructive outputs in writing (Chi 2009). One reason is that this is likely to result in deeper processing and
greater elaboration; the second is that once the judgements are externalised, they become new materials that can be examined and from which further new knowledge might be inferred and constructed.

The research and theoretical frameworks discussed above provide the background for this chapter. The emphasis is on the importance of developing students’ evaluative abilities and, through this, their knowledge and skills base. The sections that follow, drawing on my own research and other recent publications, identify why peer review is an ideal tool with which to develop these attributes.

Scope and Terminology

In this chapter, peer review is defined as an arrangement whereby students produce a written assignment and then review and comment on assignments produced by peers in the same topic domain. The assumption is that this written work is of a complex and open-ended nature – an essay, a report, a case study, a design and so on – and that the review response is also a written text. In many implementations of peer review, however, these written texts could be the output of prior peer or teacher–student discussions. So the basic peer review sequence is that students write an assignment, evaluate the assignments of others, produce a written feedback response and receive written feedback responses from others on their own assignment. The criteria for the reviewing activity may or may not be provided in advance.

As described in the last paragraph, the focus of this chapter is squarely on peer review, not peer marking or peer grading. Peer marking and grading refer to scenarios where students assign a mark or grade to a peer’s work and this mark contributes to the peer’s overall results. The term peer assessment in the published literature is sometimes synonymous with peer marking or grading, sometimes with peer review and sometimes with both together, so, for clarity, it is not used in this chapter. Although I am assuming that students do not provide a mark or grade, the reviewing activity itself might be graded by a teacher to encourage participation or to help students learn to calibrate their judgements. It should also be noted that asking students to mark the work of their peers does not necessarily invoke the same cognitive and knowledge-building processes as requiring them to produce a feedback commentary. Marking can be carried out without deep analysis, whereas formulating
a commentary usually activates quite sophisticated thinking and writing skills. Caution about marking is also warranted because research shows that asking students to mark their peers’ work often undermines the benefits to be obtained from reviewing (Kaufman and Schunn 2011; Nicol, Thomson, and Breslin 2013).

Why Use Peer Review as the Platform to Develop Evaluative Judgement?

There are four key features of peer review – as implied by the definition provided above – that make it a suitable educational method for developing students’ skills in making evaluative judgements. First, reviewing the work of peers engages students directly in multiple acts of evaluative judgement: they scrutinise and evaluate a range of works of different quality that have been produced by fellow students to the same or a similar brief.

Second, when students review the work of their peers, they invariably reflect back on their own work and consider ways of improving it (Nicol, Thomson, and Breslin 2013; Nicol 2013a, 2013b). Hence, reviewing others’ work actually develops students’ skills in evaluating their own work. This feature of peer review derives from the fact that, before reviewing the work of peers, students will already have spent considerable time producing work in the same topic domain themselves. This makes peer reviewing quite different from scenarios where students merely read and evaluate an academic paper or another topic-related text, as these would not necessarily elicit the same kinds of reflective processes. It also suggests that having students produce an assignment in the same topic domain as that to be reviewed is a crucial precondition for securing the maximum learning benefits from peer review.

Third, in reviewing the work of their peers, students not only make judgements about others’ work, but also express those judgements through written feedback commentaries, as per the definition above. Providing such feedback explanations or justifications builds students’ knowledge, as it calls on them to revisit and rehearse their current understandings in the topic domain and to construct and reconstruct them, which adds to and elaborates their existing knowledge base (Chi 2009; Roscoe and Chi 2008). Furthermore, peer review provides a platform for developing students’ skills not just in learning to interpret criteria and standards provided
by others, but also in formulating criteria and standards for themselves. These latter skills are vital if students are to develop their own concept of quality and to have the confidence and conviction to make judgements about the quality of their own work and that of others (Sadler 2010, 2013). What follows is an elaboration of these points, drawing on current research.

Exercising Judgement, Reflection and Learning Transfer

In a number of recent studies, I have shown that when students produce and review work in the same topic domain, they engage in multiple and overlapping acts of evaluation, both about the work produced by others and, in many different ways, about their own work (Nicol, Thomson, and Breslin 2013; see also http://www.reap.ac.uk/PEERToolkit.aspx). When students evaluate the work of their peers, evidence shows that the main reference point for this evaluation is their own work. They compare the work they have produced – or, more accurately, an internal mental representation of that work – with the peers’ work, and they actively transfer ideas generated through this comparative process to inform their thinking about their own work. For example, students report seeing things in their peers’ work – different approaches to the task, alternative arguments, perspectives or solution strategies, or errors or gaps – that they can use to inform and enhance their own work. Moreover, if they have an opportunity to update their own work, students will invariably do so, even before they receive feedback reviews from their peers.

However, in reviewing the work of peers, students do not only compare their own work with that of peers. In situations where there is more than one peer assignment, they also make comparative evaluations across these assignments, drawing on what is good in one assignment to inform their thinking about, and their comments on, another – while at the same time always reflecting back on the work they have produced themselves (Nicol, Thomson, and Breslin 2013). This finding suggests that, up to a certain point, the more assignments students are asked to review, the richer the evaluative processes they engage in and the more likely they are to be exposed to works of different levels of quality and to engage in productive learning transfer.

In many peer review scenarios, students are asked to comment on other students’ work in relation to a set of criteria – a rubric – provided by the
teacher. This brings into play a third evaluative process relevant to the development of evaluative judgement: the comparison of each peer assignment against criteria and the production of a response. What is notable in peer review, however, is that even while using teacher-provided criteria to frame their review responses, students still reflect back on their own work – that is, while they are applying criteria to others’ work, they are also, directly or indirectly, applying the same criteria to their own work. This point is elaborated further below.

Making Judgements, Commenting and Knowledge-Building

Recent research on peer review has shown that producing feedback reviews for peers might be more beneficial for students’ learning and knowledge production than receiving feedback reviews from peers (Cho and MacArthur 2011; Cho and Cho 2011; Nicol, Thomson, and Breslin 2013). A critical consideration, however, is that to realise these benefits fully, student-reviewers must produce a written explanation for their evaluative judgements. Producing explanations is a constructive learning activity, which requires that reviewers generate and articulate ideas that go beyond the peer’s text (Chi 2009). Indeed, Cho and MacArthur (2011), in a controlled study, compared students’ own written work after they had: (1) reviewed and commented on texts written by peers; (2) read some peer texts; or (3) read some unrelated articles. They found that students who had reviewed and commented on works written by peers outperformed those who had either simply read peer texts or read unrelated articles. In other words, producing feedback explanations helped enhance and build students’ own knowledge and understanding to the extent that there was consequential transfer. This finding is consistent with the extensive work of Roscoe and Chi (2008) on peer tutoring, which shows that when student-tutors produce explanations for peers, they revisit, rehearse, evaluate and improve their own understanding of the topic. It is also congruent with other research showing that asking students to make explicit the meaning of texts they are reading, by giving verbal explanations to others, promotes deeper understanding and knowledge production: in doing this, students realise that there are gaps in their own understanding and they create new knowledge to fill those gaps (Chi et al. 1994).
Engagement with Criteria and Standards: Developing a Concept of Quality

Sadler has, over a number of years, been interested in how students learn to recognise and produce quality work and in the role that criteria and standards play in such learning (Sadler 1989, 2010, 2013). In addressing this issue, he has recently drawn on studies of experts, analysing how they make evaluative judgements and make use of criteria and standards (Sadler 2010). Sadler observes that experts make holistic, multi-criteria judgements: they compare the work they are evaluating against an internal construct of quality – an internal standard – and when they produce an evaluative response they invoke criteria. This internal conception of what quality is develops through repeated experience of judging many works of different levels of quality in a particular domain. Moreover, even when experts are provided with a list of criteria with which to inform their judgements, these are never used in isolation; they are always combined with internal tacit criteria. Such internal criteria are not formulated in advance; rather, they emerge while experts are judging works, born of an interaction between the experts’ internal constructs of quality and their evaluation of the work being appraised. For example, in scrutinising any piece of work, even though multiple criteria will be brought to bear in parallel rather than sequentially, particular features of the work might still become more salient than others in evaluative decisions.

From his analysis, Sadler contends that if students are to develop expertise in making evaluative judgements, they must develop their own personal constructs of quality. He also notes that, given the complexity of the interactions between internal and external criteria, students will not acquire such constructs merely through being given statements of criteria by their teachers. Sadler (2013) identifies three requirements that would directly help students develop a personal construct of quality in any domain. First, students should be exposed to a range of works of different quality in that domain, where some are of a high standard. Second, they must gain practice in comparing these works with each other and with those of high quality, which will help refine their concept of quality. Third, they must express their judgements through feedback commentaries, as this will give them practice in formulating criteria
In my own studies (Nicol 2013a; Nicol, Thomson, and Breslin 2013), I have found that when students review and comment on the work of their peers, this calls on processes of judgement that replicate those of experts and that meet Sadler's requirements. As noted earlier, a key feature of reviewing is that students make direct comparisons of their own work with works produced by peers. This involves them in making holistic judgements using multiple criteria, with their own work acting as the initial standard. They also compare one peer's work against another and with their own, which enriches and multiplies their holistic experiences. In addition, in producing comments on each peer's work, students must formulate criteria to justify and express their judgements. Hence, the process of reviewing helps students refine and develop their own internal concept of quality standards, as well as giving them experience in generating criteria. These mental processes occur whether or not the teacher provides criteria, although they are more clearly evidenced when students are not given pre-formulated criteria. Where the teacher provides specific criteria, other processes come into play. In particular, engagement with teacher-provided criteria might extend the range of the students' own criteria and/or help them to calibrate their own judgements. In Nicol, Thomson and Breslin (2013) we therefore conjectured that the benefits of reviewing might be twofold, with students generating 'richer criteria than those provided by the teacher but sounder criteria than those they might be able to formulate themselves' (17).

Receiving Peer Reviews and Evaluative Judgement

Finally, in discussions of peer review and its value in developing evaluative judgement, the focus is naturally on the act of reviewing. However, peer review is a reciprocal process in which students both produce reviews and receive reviews from their peers. Historically, most research on peer review has concentrated on the receipt of reviews and on the benefits that arise when students receive feedback from multiple peers (for example, Topping 1998; Cho and MacArthur 2010); these include a greater quantity of feedback than teachers can provide, and feedback of a different type, in a language and tone that is often more understandable. However, the quality of the feedback received is not the primary interest in this chapter. Rather, the concern is with how the receipt of feedback from peers might develop students' evaluative competence. From that perspective, what is important is how students interact with, and respond to, received feedback. This point is returned to later.
Principles and Practice of Effective Peer Review

As signposted earlier, what follows is a set of design principles for good practice in peer review (see Table 10.1). These are based on a synthesis of current research and a logical analysis of reviewing processes. In implementing these principles, the aim is to give students experience in making evaluative judgements about the quality of academic works produced by peers that are in the same topic domain as those the students have produced themselves; that the works are in the same topic domain helps ensure that students will make inner comparative judgements of the peers' work with their own, and that this will assist them to develop their own concept of quality (Nicol, Thomson, and Breslin 2013). However, this requirement that students review works within the same topic domain does not necessarily mean they must review exactly the same assignment. For example, students might produce work in the same topic area, but with different students focusing on different aspects of that topic, or the same topic might be tackled from different perspectives by different groups of students. The important point is that the assignment that is produced and that which is reviewed overlap in their subject content to the extent that students are likely to reflect back on their own work as a result of the reviewing process.

In the sections that follow, each principle is analysed in terms of its contribution to developing students' evaluative skills and to enhancing their disciplinary knowledge and expertise. As well as briefly commenting on each principle and on how its formulation has been informed by recent research, each section ends with some suggestions about how that principle might be implemented. These briefly sketched examples serve to amplify the meaning of the principle, while at the same time providing models for practitioners wishing to implement peer review themselves or to refine and enhance the practices they have already implemented in their own disciplinary contexts.

Table 10.1 Principles of good peer review design.
Good peer review design should:
1. encourage an atmosphere of trust and respect;
2. use a range of different perspectives for the review tasks;
3. give practice in formulating criteria and identifying standards;
4. require well-reasoned written explanations for feedback responses;
5. facilitate dialogue around the object and the quality of reviews;
6. integrate self-review activities into peer review designs;
7. encourage critical evaluations of received reviews; and
8. provide inputs that help reviewers calibrate their judgements.

The administrative burden associated with implementing peer review can be greatly reduced by using software such as the Workshop module in Moodle or PeerMark in the Turnitin suite. Such software is not specifically discussed here, even though many of the implementation approaches suggested below could usefully be supported by it. Readers are referred to Honeychurch et al. (2012) for more information on peer review software.
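To give a concrete flavour of the kind of administrative task such software automates, the sketch below shows one simple way of allocating reviewers so that every student gives and receives the same number of reviews without ever reviewing their own work. It is a minimal illustration in Python, with hypothetical student identifiers, and is not the allocation method actually used by Moodle or Turnitin.

```python
import random

def allocate_reviews(students, reviews_per_student):
    """Assign each student a set of peers to review, avoiding self-review.

    Uses a simple rotation scheme: shuffle the class once, then each
    student reviews the next k students around the circle, so everyone
    gives and receives exactly k reviews.
    """
    n = len(students)
    if not 0 < reviews_per_student < n:
        raise ValueError("reviews_per_student must be between 1 and n - 1")
    order = students[:]           # copy so the caller's list is untouched
    random.shuffle(order)         # randomise who ends up next to whom
    return {
        order[i]: [order[(i + k) % n] for k in range(1, reviews_per_student + 1)]
        for i in range(n)
    }

# Example: a class of five students, each reviewing two peers.
print(allocate_reviews(["ana", "ben", "cui", "dev", "eva"], 2))
```

A rotation of this kind is only one possible design choice; reciprocal-pair schemes or allocation by topic group would serve equally well, and dedicated tools add workflow features (deadlines, anonymity, rubric forms) that a sketch like this leaves out.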
First Principle: Encourage an Atmosphere of Trust and Respect

This first principle is about setting the stage for peer review and about addressing potential student concerns. Peer review is not common practice in higher education. Hence, students might be unsure or concerned about what it involves and why teachers are implementing it. Some may initially think that it is a way of easing the teacher's burden in providing feedback reviews or marking. Others might be concerned about whether they, or their peers, are able to provide useful feedback, given their lack of expertise and experience. Still others might be concerned about sharing their good ideas with peers and, especially, about plagiarism.

Peer review fundamentally upsets traditional power relations. In peer review, students become partners in assessment processes, and this shift in the balance of power, with the teacher giving up some authority, might not be welcomed by all. If peer review is to be successful, there must be commitment from students and a willingness to share and collaborate. Hence, academics wishing to introduce peer review are advised to invest time making sure students are clear about its purpose and to ensure that their early experiences with the process are positive. Specific approaches might include: (1) explaining why peer review is being implemented and what students will get out of it; (2) illustrating how peer review operates in professional contexts and in life beyond university; (3) clarifying that reviewing is not about finding fault with, and undermining the work of, others, also showing students examples of both constructive and less constructive feedback critiques; (4) emphasising that students will still learn even if they receive poor reviews, as it is the reviewing experience itself that matters; (5) dealing with concerns about copying by emphasising that learning, at its best, is a collaborative endeavour or by explaining how you have designed the activities so that plagiarism is not an issue; and (6) making it clear that you are not asking students to mark others' work.
Many of these ideas might be more effectively introduced by organising workshops where students discuss the ideas themselves, rather than by simply providing written or oral explanations of them. For example, students in groups might identify and discuss the merits of producing and receiving reviews, and how these processes differ, before engaging in peer review activities.

Second Principle: Use a Range of Different Perspectives for the Review Tasks

When teachers review assignments produced by students, they normally comment on what is good and weak about them and what could be improved, with such comments justified through rational argument or evidence. In this approach, teachers are essentially evaluating the quality of the students' work in relation to an assignment brief, which will usually have been specified in advance, often through a list of criteria. This scenario is normally replicated in peer review – that is, students assume the teacher's role and evaluate the extent to which the work of fellow students meets the assignment specification (Sadler 2013). While there is much to commend in this approach in terms of helping students develop a more robust conception of quality relative to a specific assignment brief, it does not capitalise on the full possibilities for learning and expertise development that peer review affords.

Peer review is about developing the students' capacity to make evaluative judgements and, through exercising such judgements, to build new knowledge and understanding. Both these aims require that students are given opportunities to evaluate peer work, not just from the perspective from which it was produced, but also from other reference points and perspectives. Competent practitioners and experts are able to evaluate work from many different vantage points. They can do this because they have developed a highly structured and interconnected knowledge base which can be flexibly accessed depending on the situation or context of application (deCorte 1988).
Asking students to review peer work from a range of different perspectives will help them elaborate and refine their own knowledge networks, while at the same time enabling them to hone and sharpen their evaluative skills. Reviewing tasks should therefore, where possible, expose students to a rich range of perspectives, as well as give them practice in shifting perspectives. This might be achieved within a single review task or across a set of review tasks. It can also be achieved even where the interest is primarily in helping students improve the quality of their work relative to the assignment brief. Five perspectives, with possible variations, are identified here, each affording different advantages in terms of knowledge elaboration and skills development. Readers will be able to build on these and identify further perspectives appropriate to their context.

The first perspective, which I refer to as the 'holistic' perspective, involves asking students to review and comment on the work as a whole. Experts and teachers make holistic judgements about work and performances, yet arguably university students do not gain much practice in this (Sadler 1989). There are, however, many ways of addressing this issue; for example, students might be asked to summarise the work produced by peers, to identify hidden assumptions in the work, or to identify and comment on the centre of gravity in the writing or the most compelling argument. The second perspective is the 'stakeholder' perspective, with students asked to take a particular role in reviewing or, indeed, more than one role. In nursing, for example, they might be asked to comment on the work from the perspective of the nurse, the physician, the hospital manager, the patient and so on. The third perspective is the 'reader-response' perspective, where students are asked to give their reactions to, and feelings about, the peer text as they read it (for example, 'My impression is that the introduction of this second issue clouded the argument'), rather than to make definitive judgements about it (for example, 'this argument is unconvincing'). Students are highly receptive to such non-judgemental comments, as they help them to grasp the difference between their writing intentions and the actual effects of their writing on others (Lunsford 1997). In this scenario it is important that reviewers acknowledge that their responses are subjective and offer no explicit suggestions for improvement. The fourth perspective is the 'graduate attributes' perspective, which can take many forms, depending on the particular attribute that one wishes to develop. For example, the focus might be ethical awareness, in which case students might review the work of their peers from an ethical perspective. The fifth perspective is the 'contrastive' perspective, where, as the term suggests, students are asked to comment on an assignment from a vantage point quite different from that which guided its production – for example, from a different theoretical position. This would heighten the possibilities for the construction of new knowledge and bring into play quite new vantage points for evaluative judgements.
Third Principle: Give Practice in Identifying Quality and Formulating Criteria

Students must develop their own internal construct of quality if they are to produce quality work themselves and be able to judge the quality of others' work. Traditional approaches to helping students develop their understanding of quality range from involving students actively in interpreting criteria supplied by teachers, to students negotiating criteria with teachers, to students developing their own criteria (Price and O'Donovan 2006). These approaches can be easily implemented within most peer review designs. However, valuable as they are, such approaches are not the most effective way to develop students' own conception of quality, nor their ability to produce or recognise high quality work, as they all assume that what constitutes quality can be externally codified and specified in advance (Sadler 2007). Instead of focusing all our efforts on trying to develop students' understanding of teacher-provided or pre-specified criteria, the family of approaches advocated under this principle focus on developing the students' own ability to make holistic judgements about quality and to rationalise those judgements through the identification and articulation of criteria (Sadler 2013). The assumption is that criteria will emerge through formal consideration of the qualities of different works and that, through such processes, even tacit criteria will be elaborated. The essential conditions are that students have opportunities to make judgements of multiple works of differing quality in the same topic domain, with some works of a high standard, and that criteria are allowed to emerge from those judgements, rather than be specified beforehand. Given these conditions, I would like to suggest a number of possible approaches.
First, within practical limits, the number of reviews that students carry out should be increased. This would extend the range of works to which students are exposed and make it more likely that they encounter some works of high quality. A second approach, which would secure a similar end, would be to insert one or more examples of high quality work produced by the teacher, or by students from previous cohorts, into the set of assignments being reviewed and, after reviewing, to engage students in discussions of these examples. The latter would help students externalise, as criteria, the basis of their evaluative decisions, which would build their knowledge base. Third, if one wished to enrich the students' experience of making holistic judgements, they might be asked to compare a number of peer assignments, including their own, and to rank them in order of quality. If students were asked to explain their ranking decisions, this would call for discussions about both criteria and standards. A fourth approach that would enhance the production of criteria by students would be to require them to carry out reviews without giving them criteria to work from, but to identify and record the criteria that emerge for them during the reviewing task. The criteria that are recorded might usefully be compared afterwards with teacher-provided criteria. A further approach would be to provide students with examples of assignments from previous cohorts that all meet the required criteria, ask them to review and rank these, and then discuss why some are still of a higher quality than others. This would make transparent the interplay between criteria and standards, and between analytic and holistic judgements.

Fourth Principle: Require Well-Reasoned Written Explanations for Feedback Responses

There are a number of reasons for requiring students to produce written feedback explanations to account for their evaluative judgements. First, as noted in the last section, providing explanations makes explicit the criteria – including the tacit criteria – which students have used to inform their judgements. Second, providing feedback explanations directly engages students in revisiting and rehearsing their current knowledge and in constructing new knowledge in the discipline (Roscoe and Chi 2008; Nicol 2013). Additionally, externalising explanations in writing creates new outputs that students can reflect upon and from which they might infer further new knowledge (Chi 2009). Lastly, producing explanations helps develop the students' own writing abilities and their acquisition of a disciplinary vocabulary and discourse, especially that associated with critical analysis, argumentation and reasoning.
A key question that arises is what kinds of written responses should be sought from student–reviewers. In most cases, what is required is that students provide an elaborated rationale to justify their evaluative judgements. The form of this will depend on the review perspectives and on whether criteria are supplied or not. However, one would recommend that: (1) student–reviewers be advised that what is required is an extended written response, rather than a single word answer (for example, 'in a paragraph, comment on . . .'); (2) students be asked for a constructive commentary – for example, to provide suggestions for improvement or to highlight alternative perspectives or approaches, rather than a critique, where the latter means providing an account of what is wrong or deficient in the peer's work; (3) students carry out reviews in pairs or groups and provide a reflective report highlighting where members of the pair or group agreed or disagreed in their judgements – arguably, such discussions will trigger considerable knowledge elaboration; and (4) the genre for the review output be varied so as to develop students' writing skills and their experience in writing for different audiences – for example, they might provide a newspaper article, a letter to the author or a non-evaluative reader response.

Fifth Principle: Facilitate Dialogue around the Peer Review Process

All aspects of the peer review process can be enhanced through dialogue, both peer dialogue and teacher–peer dialogue. Dialogue is a means of enriching both the evaluative and the knowledge-building processes that are elicited through peer review activities (Nicol 2010b). Dialogue in such peer contexts involves students in constructing, reconstructing and co-constructing meanings together. For example, students might be asked to make judgements collaboratively, which will involve them in negotiating their evaluative responses. Such co-regulation of responses not only triggers knowledge elaboration, but also helps students develop collaborative skills that are relevant to their future professional lives. Dialogue can also help bolster students' confidence when they make evaluative judgements, as they can check out and discuss their judgements, and the reasons for them, with others. Peer dialogue is especially valuable, as it can help attenuate the teacher's voice and strengthen the students' voice during review activities. In effect, it helps shift responsibility for making judgements to the students themselves.
Dialogue can be harnessed at different points in the review process: before students begin reviewing (for example, to articulate the review criteria), when they produce the assignment for review (for example, the assignment could be a group task), when they construct the review commentaries, or even after the receipt of reviews. It can be organised as a classroom activity or in an online context. Specific approaches to integrating dialogue include: (1) asking students to produce the assignment as a group and then having individual students review a number of group assignments; importantly, this approach will increase the number of reviews each group receives; (2) following up the first approach with groups writing a reflective account of how they responded to the multiple individual reviews they received; this would further enhance dialogue, as it would require that students discuss the received reviews; (3) asking students to formulate questions for the peer reviewer when they submit their assignment; the reviewer might then be asked to address the questions posed, as well as to provide their own review responses; and (4) sequencing the peer review activities so that later reviewers can see the comments of earlier reviewers when they add their own; later reviewers might highlight where they agree or disagree with earlier comments, thereby enriching the range of review responses. Further ideas include getting students to work in pairs or groups to establish a particular perspective for the reviews, or engaging students in post-review discussion with teachers about the quality of their reviews.

Sixth Principle: Integrate Self-Review Activities into Peer Review Designs

A key purpose of implementing peer review is to develop the students' capacity to make evaluative judgements about the quality of their own work, not just about the quality of the work of their peers. Peer review naturally builds this self-evaluative capability, as students cannot avoid comparing their work with that of their peers and reflecting on how their own work might be improved (Nicol, Thomson, and Breslin 2013). Indeed, research has shown that students produce better quality work in the same topic domain after participating in reviewing activities (Cho and Cho 2011).
In the published literature, many researchers advocate self-review or self-assessment as a platform for the development of students' evaluative skills. The rationale is that students are already engaging in evaluations of their own work as they produce it, and therefore it is only logical to try to strengthen this ability (Nicol and Macfarlane-Dick 2006). Possible approaches include making self-review an explicit requirement – for example, by having students review their own work against some specified criteria before submission. However, there are limitations to this approach: it is often difficult for students to make accurate or informed judgements about the quality of their own work, as they might not be able to take an objective stance on work they have just produced or to view it from another perspective (Eva and Regehr 2005; Nicol 2013). Peer review helps overcome these limitations, as it provides students with new inputs, in the form of external reference points, which can help them see their own work in a new light. In effect, reviewing the work of peers puts students in a position where they are likely to 'notice' aspects of their own work that require attention or that could be improved, rather than being told about them by others through the transmission of feedback comments (Sadler 2010).

Despite its limitations when used in isolation, self-review therefore still has a useful role to play in peer review implementations. In particular, when integrated into peer review designs, self-review can help ensure that the learning transfer that occurs through reviewing is consolidated and strengthened. For example, students might externalise their learning from reviewing others' work by subsequently reviewing their own work. The following are some approaches to the integration of self-review activities into peer review designs: (1) after completing a number of reviews, students are asked to produce a written review commentary on their own work – this approach can give teachers insight into what students are learning from reviewing; it also helps to address concerns about plagiarism, as students are not asked to update their own work; (2) students produce an action plan stating how they will improve their future work after they have reviewed the work of peers; (3) students review their own work and then compare the reviews they receive from peers with these self-reviews, producing an account of what they have learned; and (4) students review works produced by peers by posing questions about those works, rather than by providing explanatory comments; peers then provide answers to these questions before updating their work – answering the questions will activate self-review processes in the assignment producer.
Seventh Principle: Encourage Critical Evaluations of Received Reviews

This principle concerns the receipt of feedback reviews from peers and the circumstances under which this helps develop students' evaluative skills and their knowledge networks. The core argument is that received reviews will only achieve this purpose when students actually process the feedback they receive, by critically evaluating it and/or by producing a response to it. Making an evaluative response to received feedback might mean summarising it, contesting it, discussing it with others, or using the information it contains to update the student's own assignment. Peer review is a natural context for requiring evaluative responses to received reviews, as these are invariably provided on draft work. This principle is, however, also highly relevant to teacher feedback. Indeed, the failure to implement this principle is arguably the main reason why there is so much dissatisfaction with teacher feedback in higher education, among both staff and students (Draper 2013; Nicol 2013).

Some approaches to using received reviews to develop students' knowledge and evaluative skills include: (1) asking students to respond to the multiple reviews that they receive from their peers by commenting on their quality – for example, by identifying the merits and limitations of each review; (2) asking students to preface their assignment submission with three questions that they specifically wish to receive feedback on, then getting them to comment on whether the feedback they received helped address these questions; and (3) when students submit a subsequent assignment, asking them to submit a cover sheet outlining how the feedback they received on earlier assignments has informed the current submission (Hughes 2011; Draper 2013).

Eighth Principle: Provide Inputs that Help Reviewers Calibrate their Judgements

The role of the teacher in peer review is to design learning activities that develop students' ability to make their own judgements of quality, and to provide inputs that help students calibrate the quality of these judgements.
By inputs, I mean external information that students can use as a comparison against which to evaluate the quality of their own review responses. The purpose of such inputs is specifically to heighten students' awareness of the standards that apply in their disciplinary area. One such input might be teacher feedback comments on the quality of the students' own feedback reviews. However, given that the definitive source of all feedback is ultimately the students themselves (Andrade 2010), teacher feedback will not suffice as the only, or the main, strategy for the calibration of students' evaluative skills. More will be achieved if, in designing peer review activities, students are given opportunities to engage with actual examples of work of a high standard, and are also provided with the chance to compare and evaluate their own reviews of peer work against high quality reviews produced by others, particularly by experts (Molloy and Boud 2013).

The following are some approaches to ensuring that students learn to make sound evaluative judgements and to produce high quality feedback reviews: (1) provide feedback on the quality of students' reviews, stating what is good and what might be improved, and noting alternative perspectives they might consider; (2) ask pairs of students to review the same peer assignment, then to compare and discuss their reviews and produce an agreed response – then ask them to compare their response with a teacher-provided review or against selected high quality reviews; (3) have students, in class, compare and discuss reviews produced by others, producing notes on their merits and weaknesses; and (4) scaffold the students' reviewing activities by providing them with a menu of teacher feedback comments or a menu of teacher feedback questions – the kinds of questions that the teacher would ask about the work; this will bring into play both teacher-produced and student-produced criteria and standards within the reviewing task.

Conclusion

This chapter has proposed and discussed a theoretical rationale for the development of students' evaluative skills through peer review, seen as an arrangement whereby students produce an assignment and then review and comment on assignments produced by peers in the same topic domain. Drawing on recent research, it has also identified a number of guiding principles for peer review and has illustrated, through some practical suggestions, how these principles might be implemented.
As such, this chapter has both a theoretical and a practical orientation: theoretical, in that it has synthesised the research to advance current thinking; practical, in that it has offered concrete ideas for practitioners wishing to implement new classroom activities centred on peer review – activities which should themselves generate further research data and lead to further developments of theory. This chapter is a contribution to a volume that celebrates a great scholar, innovator and practitioner, Dai Hounsell, whose own work has also bridged theory and practice and opened up new avenues of investigation. As ever, I look forward to discussing and developing these ideas and many others with Dai in the future.

Resources

Readers interested in peer review design can find further information and resources, including a peer review design toolkit, at http://www.reap.ac.uk/PEERToolkit.aspx.

Acknowledgements

Two projects on peer review funded by Jisc in the United Kingdom helped me develop my ideas for this chapter. I would like to thank Lisa Gray, Marianne Sheppard and Sarah Davies of Jisc for their support with these projects.

References

Andrade, H. I. 2010. 'Students as the Definitive Source of Formative Assessment: Academic Self-Assessment and the Self-Regulation of Learning.' Paper presented at the Northeastern Educational Research Association (NERA) conference, Connecticut, October 20–22, 2010.
Bensley, D. A. 1998. Critical Thinking in Psychology: A Unified Skills Approach. Pacific Grove: Brooks/Cole.
Boud, D. 2007. 'Reframing Assessment as if Learning was Important.' In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 14–25. London: Routledge.
Boud, D., and Associates. 2010. Assessment 2020: Seven Propositions for Assessment Reform in Higher Education. Sydney: Australian Learning and Teaching Council. Accessed October 11, 2013. http://www.assessmentfutures.com.
Boud, D., and E. Molloy. 2013. 'Rethinking Models of Feedback for Learning: The Challenge of Design.' Assessment and Evaluation in Higher Education 38, no. 6: 698–712.
Butler, D. L., and P. H. Winne. 1995. 'Feedback and Self-Regulated Learning: A Theoretical Synthesis.' Review of Educational Research 65, no. 3: 245–81.
Chi, M. T. H. 2009. 'Active–Constructive–Interactive: A Conceptual Framework for Differentiating Learning Activities.' Topics in Cognitive Science 1: 73–105.
Chi, M. T. H., N. de Leeuw, M. H. Chiu, and C. LaVancher. 1994. 'Eliciting Self-Explanations Improves Understanding.' Cognitive Science 18: 439–77.
Cho, K., and C. MacArthur. 2010. 'Student Revision with Peer and Expert Reviewing.' Learning and Instruction 20, no. 4: 328–38.
Cho, K., and C. MacArthur. 2011. 'Learning by Reviewing.' Journal of Educational Psychology 103, no. 1: 73–84.
Cho, Y. H., and K. Cho. 2011. 'Peer Reviewers Learn from Giving Comments.' Instructional Science 39, no. 5: 629–43.
Cowan, J. 2010. 'Developing the Ability for Making Evaluative Judgements.' Teaching in Higher Education 15, no. 3: 323–34.
deCorte, E. 1988. 'New Perspectives on Learning and Teaching in Higher Education.' In Goals and Purposes of Higher Education in the 21st Century, edited by A. Burgen, 112–32. London: Jessica Kingsley.
Draper, S. 2013. 'What if Feedback Only Counted If the Learner Used It?' Paper presented at the International Enhancement Themes conference, Crowne Plaza Hotel, Glasgow, June 11–13. Accessed July 11, 2013. http://www.psy.gla.ac.uk/~steve/DraperUsedFbck.pdf.
Eva, K. W., and G. Regehr. 2005. 'Self-Assessment in the Health Professions: A Reformulation and Research Agenda.' Academic Medicine 80, no. 10: 46–54.
Halpern, D. F. 2003. Thought and Knowledge: An Introduction to Critical Thinking. Mahwah: Lawrence Erlbaum Associates.
Honeychurch, S., N. Barr, C. Brown, and J. Hamer. 2012. 'Peer Assessment Assisted by Technology.' Paper presented at the Computer Assisted Assessment (CAA) conference, University of Southampton, Southampton, July 10–11, 2012. Accessed October 11, 2013. http://caaconference.co.uk/pastConferences/2012/caa2012_submission_28b.pdf.
Hounsell, D. 2003. 'Student Feedback, Learning and Development.' In Higher Education and the Lifecourse, edited by M. Slowey and D. Watson, 66–78. Buckingham: SRHE and Open University Press.
Hounsell, D. 2007. 'Towards More Sustainable Feedback to Students.' In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 101–13. London: Routledge.
Hounsell, D. 2008. 'The Trouble with Feedback: New Challenges, Emerging Strategies.' TLA Interchange 1, no. 2: 1–9.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. 'The Quality of Guidance and Feedback to Students.' Higher Education Research and Development 27, no. 1: 55–67.
Hounsell, D., N. Falchikov, J. Hounsell, M. Klampfleitner, M. Huxham, K. Thomson, and S. Blair. 2007. 'Innovative Assessment across the Disciplines: An Analytic Review of the Literature.' York: The Higher Education Academy. Accessed October 11, 2013. http://www.heacademy.ac.uk/assets/documents/research/innovative_assessment_lr.pdf.
Hughes, G. 2011. 'Aiming for Personal Best: A Case for Introducing Ipsative Assessment in Higher Education.' Studies in Higher Education 36, no. 3: 353–67.
Kaufman, J. H., and C. D. Schunn. 2011. 'Students' Perceptions about Peer Assessment for Writing: Their Origin and Impact on Revision Work.' Instructional Science 39: 387–406.
Laming, D. 2004. Human Judgement: The Eye of the Beholder. London: Thomson.
Lunsford, R. 1997. 'When Less is More: Principles for Responding in the Disciplines.' In Writing to Learn: Strategies for Assigning and Responding to Writing across the Disciplines, edited by M. Sorcinelli and P. Elbow, 91–104. San Francisco: Jossey-Bass.
Molloy, E., and D. Boud. 2013. 'Changing Conceptions of Feedback.' In Feedback in Higher and Professional Education: Understanding It and Doing It Well, edited by D. Boud and E. Molloy, 11–33. London and New York: Routledge.
Nicol, D. 2009. 'Assessment for Learner Self-Regulation: Enhancing Achievement in the First Year using Learning Technologies.' Assessment and Evaluation in Higher Education 34, no. 3: 335–52.
Nicol, D. 2010a. 'The Foundation for Graduate Attributes: Developing Self-Regulation through Self and Peer Assessment.' QAA Scotland. Accessed October 11, 2013. http://www.enhancementthemes.ac.uk/resources/publications/graduates-for-the-21st-century.
Nicol, D. 2010b. 'From Monologue to Dialogue: Improving Written Feedback in Mass Higher Education.' Assessment and Evaluation in Higher Education 35, no. 5: 501–17.
Nicol, D. 2011. 'Developing the Students' Ability to Construct Feedback.' QAA Scotland. Accessed October 11, 2013. http://www.enhancementthemes.ac.uk/pages/docdetail/docs/publications/developing-students-ability-to-construct-feedback.
Nicol, D. 2013a. 'Resituating Feedback from the Reactive to the Proactive.' In Feedback in Higher and Professional Education: Understanding It and Doing It Well, edited by D. Boud and E. Molloy, 34–49. London and New York: Routledge.
Nicol, D. 2013b. 'Peer Review: Putting Feedback Processes in Students' Hands.' Perspectives on Pedagogy and Practice, no. 4: 111–23. Accessed October 11, 2013. http://eprints.ulster.ac.uk/26926/1/PERSPECTIVE_ON_PEDAGOGY_AND_PRACTICE.pdf.
Nicol, D. 2013c. 'Assessment and Feedback Principles: Rationale and Formulation.' Accessed October 11, 2013. http://www.reap.ac.uk/TheoryPractice/Principles.aspx.
Nicol, D., and D. Macfarlane-Dick. 2006. 'Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.' Studies in Higher Education 31, no. 2: 199–218.
Nicol, D., and S. Draper. 2009. 'A Blueprint for Transformational Organisational Change in Higher Education: REAP as a Case Study.' In Education through Technology-Enhanced Learning, edited by T. Mayes, D. Morrison, H. Meller, P. Bullen, and M. Oliver, 191–207. York: Higher Education Academy. Accessed October 11, 2009. http://www.reap.ac.uk/reap/public/Papers/NIcol_Draper_transforming_assessment_feedback.pdf.
Nicol, D., A. Thomson, and C. Breslin. 2013. 'Rethinking Feedback Practices in Higher Education: A Peer Review Perspective.' Assessment and Evaluation in Higher Education 39, no. 1: 102–22. doi: 10.1080/02602938.2013.795518.
Price, M., and B. O'Donovan. 2006. 'Improving Student Performance through Enhanced Student Understanding of Criteria and Feedback.' In Innovative Assessment in Higher Education, edited by C. Bryan and K. Clegg, 100–9. London: Routledge.
Roscoe, R., and M. Chi. 2008. 'Tutor Learning: The Role of Explaining and Responding to Questions.' Instructional Science 36: 321–50.
Sadler, D. R. 1989. 'Formative Assessment and the Design of Instructional Systems.' Instructional Science 18, no. 2: 119–44.
Sadler, D. R. 2007. 'Perils in the Meticulous Specification of Goals and Assessment Criteria.' Assessment in Education 14, no. 3: 387–92.
Sadler, D. R. 2010. 'Beyond Feedback: Developing Student Capability in Complex Appraisal.' Assessment and Evaluation in Higher Education 35, no. 5: 535–50.
Sadler, D. R. 2013. 'Opening Up Feedback: Teaching Learners to See.' In Reconceptualising Feedback in Higher Education: Developing Dialogue with Students, edited by S. Merry, M. Price, D. Carless, and M. Taras, 54–63. London: Routledge.
Topping, K. 1998. 'Peer Assessment between Students in Colleges and Universities.' Review of Educational Research 68, no. 3: 249–76.
11 Disruptions and Dialogues: Supporting Collaborative Connoisseurship in Digital Environments Clara O’Shea and Tim Fawns
Introduction
Over the last decade, higher education has grappled with the integration of digital environments into assessment and feedback processes. Tools such as blogs and wikis open up new ways for individuals to work with content and with each other. This technological development has coincided with a turn towards learner-centredness: an increasing emphasis on the co-constructed nature of meaning-making and the value of peer feedback (Hounsell et al. 2008), on peer review practices (Nicol, this volume), on self-regulation (Gibbs and Simpson 2004), and an emerging appetite for student involvement in assessment design (Nicol and Macfarlane-Dick 2006; Carless 2007). As a result, educators have been prompted to reflect on the nature, purpose and appropriateness of educational practices and to consider shifting the balance from assessment of learning to assessment for learning, as advocated by Hounsell, Xu, and Tai (2007a, 2007b).

Multimodal assessments can be particularly disruptive to past assumptions about the nature of assessment, since to construct or advance an argument they require meaning to be created between multiple modes of communication (such as image, text and animation) and between creator and audience (Sorapure et al. 2005). The student must guide the reader to piece together different components in such a way that each not only complements, but is dependent on, the others – a skill that lies outside traditional academic literacy (Archer 2010; Goodfellow and Lea 2007).
Tutors and students generally have only a somewhat vague grasp of what represents academic quality within emerging multimodal practices, or of how to produce a multimodal product that conforms to assessment criteria and other requirements (for example, word count) as these are traditionally understood (Goodfellow and Lea 2005; Bayne and Ross 2013). This lack of clarity results in many students opting for more traditional assessment forms, rather than embracing new media as a novel way of forming and articulating arguments. When used, multimodalities are often treated in less risky ways – for instance, as platforms for the presentation of linear, essay-like work (Hemmi, Bayne, and Land 2009; Swan, Shen, and Hiltz 2006).

In the Manifesto for Teaching Online, Ross et al. (2011) claim that 'assessment is a creative crisis as much as it is a statement of knowledge'. It is a crisis not only for students, but also for educators, as multimodal work must be engaged with in a more interpretive and non-traditional way. This is not meant as a negative statement, but rather as an embracing of risk as a potential catalyst for opportunities and rewards (see McArthur, this volume). Indeed, it is a crisis that can lead to greater pedagogical creativity. While uncertainty and inexperience may present a challenge, multimodal content creation can, and should, be used to question power relations, support risky ventures and redefine the boundaries of academic discourse.

In this chapter, we argue that Dai Hounsell's elucidation of feedforward, cumulative assessment and developing connoisseurship provides a conceptual frame with which to move idealised principles of collaborative, dialogic and multimodal learning into practice. The following case study is an exploration of a course that uses a class-wide, wiki-based assignment to scaffold and support group learning and assessment. This chapter is dedicated to Dai, who had a key role not only in the conceptual framework, but also in the design and teaching of the course, as part of the University of Edinburgh's MSc in Digital Education.

A Balanced Approach

An oft-held tenet in the assessment literature is that students must come to share 'a concept of quality roughly similar to that held by the teacher' (Sadler 1989, 121). However, notions of quality are highly contextualised and can be expressed in a myriad of forms (Bloxham and Boyd 2007; Sadler 2010).
In multimodal assessment, creating shared understandings becomes particularly demanding, and the perceived risks of failing may stifle creative forms of academic expression. A safe and supportive environment is required in which students can develop their skills and understandings of multimodal authorship. There is a need for students to help tutors to interpret their work in a way that fulfils the criteria, and for tutors to clarify their understanding of quality in this new context. Transparent, open and dialogic experience with a wide range of works, including the application of assessment criteria, can lead to a more fundamental understanding of quality than simply reading an assignment brief or set of marking criteria (Hounsell, Xu, and Tai 2007b; Hounsell 2008).

Collaborative assessments, in which students co-generate and co-author work, offer particularly rich opportunities for developing shared understandings (Hounsell 2008). For one thing, the multidimensional nature of assessment as both process and product is made more explicit. The affordances for interaction made possible by the combination of interfaces, environments and actors in digital environments (Bloomfield, Latham, and Vurdubakis 2010) can be exploited to develop an understanding of the complex processes of collaboration and co-authoring. Social tools, such as wikis, facilitate documentation and dialogue around the workings that lead to the final, synthesised product (Williams, Brown, and Benson 2013). For example, each edit or revision can be traced and revisited by all members of the group. Visual conventions, such as writing in different colours, can be adopted to clarify who has done what, so that other group members know whom to approach for further information or explanation. Comment functions allow a separation of process-related reflections from the written work. Access to product and process can be made available to peers and tutors to provide opportunities for peer feedback (McCune and Hounsell 2005; Sadler 2010) and vicarious learning (Mayes et al. 2002).
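To make the traceability described above concrete, the sketch below models, in Python, the kind of record a wiki keeps behind the scenes: a list of attributable revisions plus a separate stream of process comments. It is a deliberately simplified, hypothetical data model rather than the schema of any actual wiki platform; the class and field names are our own invention.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Revision:
    """One traceable edit: who changed the page, when, and to what."""
    author: str
    timestamp: datetime
    content: str

@dataclass
class WikiPage:
    """A co-authored page whose history and process talk stay inspectable."""
    title: str
    revisions: list[Revision] = field(default_factory=list)
    comments: list[tuple[str, str]] = field(default_factory=list)  # (author, reflection)

    def edit(self, author: str, content: str) -> None:
        """Record a new revision rather than overwriting the old text."""
        self.revisions.append(Revision(author, datetime.now(), content))

    def comment(self, author: str, note: str) -> None:
        """Keep process-related reflections separate from the written work."""
        self.comments.append((author, note))

    def current_text(self) -> str:
        return self.revisions[-1].content if self.revisions else ""

    def contributors(self) -> set[str]:
        """Who has done what: whom to approach for further explanation."""
        return {r.author for r in self.revisions}
```

Because every edit is appended rather than destructive, any group member or tutor can revisit earlier revisions, attribute contributions and follow the commentary that accompanied them, which is precisely what makes the collaborative process, and not just the product, open to feedback.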
It is not just the feedback that a student receives from his/her peers that is valuable here but, rather, as Nicol argues earlier in this book, the creating of feedback for others. This affords the opportunity to practise appraisal and communication skills through providing commentary, and it feeds directly into the development of students' ability to judge the quality of their own product and thus to work towards improving that level of quality (Carless 2007). It is the development of this evaluative acumen that leads to what Hounsell et al. (2007) call 'connoisseurship'. While not unproblematic, digital tools can aid this process by providing access to, and the ability to comment on, other students' work, encouraging ongoing conversations and an iterative refinement of ideas.

Importantly, these experiences are most effective when there is the opportunity for development from one learning experience or assessment to the next, so that feedback and guidance from tutors and peers can be directly applied to subsequent work. 'Feedforward' increases the value of feedback by creating that iterative development (or 'feedback loop') in which feedback from one task feeds directly into the next (Hounsell, Xu, and Tai 2007b). For us, with an emphasis on a learning-oriented approach (Carless 2007), feedforward provides an excellent way of achieving a balance between formative and summative assessment. It embraces dialogue, reflection and transparency, supporting the student's development and confidence within both their particular subject area and the assessment process. Equally, it encourages educators to ensure that teaching, learning and assessment processes are constructively aligned (Biggs 2003). By embracing a multimodal approach to feedforward (for example, by including audio, video or imagery), tutors can also model aspects of multimodal academic literacy to their students.

'Online Assessment' and the Class-Wide Wiki Assignment

We present here a detailed analysis of 'Online Assessment' – a course on the University of Edinburgh's MSc in Digital Education that covers the subject fields of learning, assessment and digital environments. The design of this course draws on the educational theory outlined above, using the affordances of online environments to inform assessment practices that foster multimodal academic literacy and assist students in meeting the challenges of multimodal collaborative work. We are extremely fortunate to have Dai Hounsell as one of the tutors on this course, and his influence can be seen in the approaches we have taken towards developing connoisseurship.

The course is delivered entirely online over a duration of twelve weeks and contains two formal assessment activities: a class wiki assignment and an individual critical review (weighted at 25 per cent and 75 per cent of the overall grade respectively).
Here we focus on the class wiki assignment, which was developed to create an opportunity for students to experience and reflect on the problematic nature of online collaborative assessment. As Hemmi, Bayne, and Land note:

Wiki textuality has the potential to be radically different from more orthodox, non-digital modes of writing within formal higher education, in that the wiki space is one which is fundamentally unstable and collectively produced, with a tendency to problematize conventional notions of authorship and ownership. (2009, 27)
By introducing challenges of 'unorthodox' collaboration, the wiki assignment was designed to help students think through some practicalities of applying the principles concerning assessment and feedback put forward in the literature on digital environments. The assignment, completed over weeks four to nine, was supported by earlier activities that were also situated in the wiki environment. These activities were designed to support students' development of technological skills and to promote reflection on social and group writing practices. For instance, during an orientation week, students edited text and added images and links to the wiki. In week one, they used the comment facility, while in weeks two to three they made their first attempt at collaboratively co-authoring a summary and critique of an academic paper. These early activities not only provided scaffolds for the technological and social practices around the wiki, but also allowed tutors an opportunity to diagnose the particular support needs of the cohort.

For the assignment itself, students were asked to collaboratively author a response to one of five challenging topic statements, known as 'The Big 5'. Short and seemingly simple, these statements asked students to draw on course themes in overlapping, complex and nuanced ways. For example, a successful response to the statement 'collaboration is just bringing together multiple individual efforts' required a balanced and critical examination of a range of literature, as well as of the statement itself. The nineteen students allocated themselves into groups of up to four members based on the topic statement that most interested them. They also signed up to act as a 'critical friend' (a nominated peer reviewer) for one or two other groups and were encouraged to read and think across all five topics. Help was available throughout the course from tutors, via email, a discussion forum and Skype.
Halfway through the assignment, each group was asked to nominate three elements of their work on which they would like feedforward. Tutors provided this feedforward in the form of a seven- to ten-minute audio discussion for each group, along with generalised written commentary on the wiki as a whole. These were shared on the class forum and were accessible to all students. As well as modelling the critical friend role, this process was intended to provide supportive and critical guidance in preparation for final submission.

As Forte and Bruckman (2007) argue, there is an underlying tension between individual assessment and collaborative work. By giving each student a class-wide grade, rather than a group or individual grade, we hoped that they would not only experience a collaborative assessment, but would also engage with work beyond their group's topic. Further, by engaging in a critical fashion with the work of their peers, we hoped that students would develop a sense of connoisseurship around what counts as quality work (Hounsell, Xu, and Tai 2007b) and that this would, in turn, encourage students to feel more confident in creating and engaging with other multimodal assignments in the future.

Methodology

We have taken a constructionist stance for this research, as we strongly believe that meaning is constructed in, and through, our interactions with participants within a social context (Cousins 2009). We gathered and analysed a range of qualitative data on the thoughts and experiences of the nineteen students in the class, all of whom participated in the study. The main form of data generation was email interviews at the middle and end of the wiki assessment activity. The asynchronous nature of the interviews enabled ongoing reflective discussions between researchers and individual participants exploring complex issues (Berger and Paul 2011). Our interviews aimed 'to provide an environment conducive to the production of the range and complexity of meanings that might occur to all interview participants' (Holstein and Gubrium 2004, 152). Alongside the email interviews, data generation included the discussion forum, where in-depth, class-wide discussion of theoretical and practical issues took place, and the wiki, where the argument itself was formulated.
While we acknowledge the subjectivity involved in evaluating our own course, our position as tutor–researchers enabled more open and free-flowing discussions with participants throughout the data generation and analysis stages. Thus, although we started our thematic analysis (Boyatzis 1998) by generating conceptualisations of the students' experience from the literature and from our own experiences of coordinating the course over several years, the coding and memo-ing process developed through our ongoing interactions with the participants. The concepts that emerged from that process of testing and re-testing formed the basis for understanding the nature of engagement with the assignment and the factors behind this engagement.

Discussion

Dialogues and Disruptions

In our exploration of the themes that arose from our research, we found two interweaving and overarching concepts: 'dialogues' and 'disruptions'. Disruptions reflect the various types of destabilisation that occurred due to the combination of digital environments and tools, physical distance, requirements for collaborative and multimodal authorship, and other aspects of the assessment structure. While often related to confusions and uncertainties, disruptions are not negative aspects of the teaching and learning process. Rather, they are indicative of students' engagement with troublesome and transformative threshold concepts (Meyer and Land 2003) as they negotiate new and previously inaccessible ways of understanding. Disruptions are as essential to course design as their counterpart, dialogues.

Dialogues refer to the rebalancing, if not stabilising, activities aimed at bridging the gap between confusions and uncertainties, on the one hand, and a sense of clarity and shared purpose, on the other. They are ways of working on the threshold, the processes through which students and tutors help to establish and shift shared understandings. Dialogues, then, are attempts to negotiate disruptions. These two broad themes have a push/pull quality to them, each bringing about the other, both culminating in a new moment of disruption or a new opportunity for dialogue. The interplay between dialogues and disruptions brings students through a network of threshold concepts related to ways of thinking, practising and developing their evaluative acumen (connoisseurship).
Supporting Success

For us, as tutors, there were three main components to ensuring a successful outcome for the group-authored class wiki assignment as both an assessment for and of learning. These were: the development of individual connoisseurship; the development of group and class-wide connoisseurship (with an emphasis on successful peer engagement through the role of the 'critical friend'); and, finally, a product of appropriate quality that demonstrated critical engagement with the topic.

As expected, the most problematic component for students was the collaborative work that underlies communal connoisseurship. As noted by other researchers, this type of work can be very challenging for students, as it is at odds with their prior assessment experiences (McCune and Rhind, this volume; Swan, Shen, and Hiltz 2006). Students were aware, however, that this discomfort was part of the intention behind the assignment, which aimed to provide first-hand experience of, and insight into, the tensions of online assessment.

Supporting students through this challenging, complex and, admittedly, idealistic design demanded more than post-submission feedback. It required multi-level, iterative and targeted scaffolding, not just at the formal feedforward point, but throughout the whole course as informal, just-in-time tutor support (for example, through forum posts, wiki comments, emails and Skype chats). In our experience, the needs of each cohort are different and cannot be fully predicted, and guidance is best tailored to groups and individuals as we get to know them during the course. We took an unfolding, dialogic approach to clarifying the task and guiding students towards our expectations with respect to the assessment criteria. For some students, this approach worked well: 'The design of the course has clearly been very well thought out, with each week leading nicely into the next'. However, getting the balance right for each individual student will always be a challenge. One student, in particular, 'would have liked clearer parameters regarding approximate goal length of wiki project early on' and felt that 'as a wider class we had to initially "beg" this information over the forums'.
A Cumulative Approach

Importantly, supporting the three components of success described above involved designing earlier learning activities that laid the groundwork for the skills and practices that would be needed. For instance, an initial task using the wiki to reflect on prior experiences of assessment was intended to help students reconcile their preconceptions with the demands of an unfamiliar format. A later task involved critiquing various assessment frameworks and principles through a low-risk practice attempt at small group co-authoring. This was intended to make the upcoming disruptions more manageable and, thus, lower the perceived risk. Students were also encouraged to take their first step towards critical friendship at this juncture, as a way of gaining insight into their own work. For some, this initial low-risk group work effectively enabled positive group experiences later, although as one student noted: ‘Group cohesion – this was great from day one, stayed great and ended great. Why? Possibly because the formative group work activity around frameworks eased us into it . . . Possibly though, we’re a natural gelling group’. For others, it highlighted difficulties they might face later on with higher-risk group work. One student, in particular, questioned the relationship between individual work, such as writing content, and contributions to group processes:

Overall, I would rate myself as a fail on this task. Although I did make some suggestions and offered some ideas about how to develop the writing, my contribution in terms of content was negligible. However, I did, I think, raise some useful meta discussion about the group process . . .
It is worth noting that the other group members disagreed vociferously with this student’s self-evaluation, arguing against being overly critical and instead unpicking the processes needed to ensure successful group work in future. As Vassell, Amin, and Winch (2008) have found, wiki contributions, such as edits, number of words and comments, may not be the best indicators of engagement across the group, as members may have different roles, foci and forms of engagement with the task. Within-group discussions, such as the one described here, prepared the way for the assignment phase, opening up reflections on the collaborative process and future ways of working. Students
began to brainstorm solutions in relation to role allocation, choice of technologies, preferred working styles and so on.

Negotiating Risk

One challenge of online distance learning is that practical issues, such as technical problems and conflicting schedules and priorities, can undermine the fast and efficient flow of dialogue (Felder and Brent 2001). In this case, students noted that a reliance on asynchronous discussion could make achieving consensus a slow and painstaking process. Time pressures were exacerbated by a tendency to overestimate the scope of the task. Although the assignment was generally considered to be low-risk (being worth only 25 per cent of the overall grade for the course), a disproportionate amount of energy was spent on it relative to other course tasks. This may be an inherent issue with group work, where achieving consensus can be time-consuming (Karasavvidis 2010). While acknowledging these difficulties, students generally considered group processes to be worthwhile. As one student argued:

On this particular occasion . . . I think the class-wide mark is a brilliant idea. It promotes the exact skills it is meant to foster without the disadvantages of group-marks . . . I would definitely prefer individual marks if the assessment was more ‘high-stakes’.
As has been found in previous research into online group work (for example, see Rovai 2001; Kreijns, Kirschner, and Jochems 2003; Vratulis and Dobson 2008), before effective co-authoring could take place, appropriate group management strategies and social conventions had to be developed. In the case of one group, a particular technological affordance (Skype voice conferencing) helped them to negotiate strategies for group cohesion: At first I thought that a group assessment would be a little daunting as the experience in the activity groupwork was not collaborative or cooperative, however the Group Assessment went very well especially after we all met using VOIP (Skype), this helped to break the ice somewhat and is vital for good group gelling. We all had a turn as group leader for a week and felt that we could talk to each other regarding anything. [Virtual] Group hugs were also a good part of this and boosting each other when things got on top of us.
The challenges of ‘project management’ seemed more clearly felt in groups where individuals did not have that sense of shared purpose essential to academic alignment (Davies 2009). Some students perceived a loss of autonomy, which demotivated them and obstructed their engagement in the very practices that would have led to connoisseurship: ‘I have to say however, that I have lost a lot of my zest for this Big 5 assignment, knowing that only the cohort’s overall efforts will be rewarded and not the group or each individual’. This demotivation had serious implications, as this particular group struggled to motivate the student and thus tackle the assignment. There is a balancing act, then, between assessment as a driver and motivator (as noted by Boud 1995; Knight 2002) and ensuring that an assessment is perceived as neither too high nor too low a risk. In the same way, another student felt that the needs of the group sometimes subsumed those of the individual. She eloquently described her feelings regarding this ‘disheartening’ tension between individual and group needs:

. . . when you feel that you cannot be ‘you’ but have to become ‘us’ – accept things that you don’t agree with, adopt a writing style and structure that says nothing about who you are and how you feel and think. If the rapport between members of a group is good, then perhaps one would not feel all this.
Vratulis and Dobson (2008) note that group members do not always have equal rights. They describe a case where the struggle to express individual positions within wiki co-authorship highlighted an unrealised group hierarchy. The student expressing the inherent tensions above seemed to have come to a similar conclusion – that individual voices can be suppressed within the negotiation of consensus. Interestingly, when another group negotiated a successful collaborative work process, the choice of process created its own form of disruption. The group moved from the communal space of the wiki to the space of Google Docs (an online collaborative writing tool). Their rationale was that comments could be added without disrupting the document’s flow, there was a synchronous chat feature and it felt ‘more intuitive’, as it involved a single document (whereas wikis are dynamically structured by linking multiple
pages together). Although this group was able to develop conventions of its own, the decision to use an alternative environment meant that most of the working process was not accessible to tutors or the other groups. This reduced the amount of feedback they received during the development of their work and also made it more difficult to apply successful aspects of other groups’ working processes to their own. Indeed, ‘outsiders’ to the group could only leave anonymous comments, which undermined the opportunity for extended dialogue with critical friends. The affordances of Google Docs changed the possibilities for the kinds of writing, group work and even learning that could take place (Norman 1988).

Collaborative Connoisseurship

In keeping with the findings of Stacey (2007) and Naismith, Lee, and Pilkington (2011), the most prominent disruption of the learning process was unlearning certain ways of working individually. Individual authorship and ownership were destabilised by the requirement for negotiating direction, co-synthesis of knowledge and the expression of that knowledge. Stepping back from an individual perspective on the work required a challenging shift in appraisal. One student described the struggle to move from a position where ‘I’m seeing the class-wide feedback and thinking (in a paranoid fashion) “which bits apply to me and which bits don’t”’ to one that positions group synthesis as the priority. Another student spoke of a past strategy, reminiscent of the diligent isolate of Pieterse and Thompson (2010), where an individual works independently of the group dynamic:

Normally when I ‘was forced’ to work in a group/pairs (those rare times), I used to do all the work, cause *I’m very shy/not proud to say*, I thought that the other/s wouldn’t produce work which is up to ‘my’ standards.
The students, however, recognised that individual approaches could lead to a less integrated and less coherent outcome. As one student noted: ‘The danger of group writing could be that we end up with an ugly duckling instead of a swan due to different types of writing styles, and being intimidated by editing others’ work’. Wheeler, Yeomans, and Wheeler (2008) recommend that students be prepared to accept that once an idea has been published on the wiki, it no
longer belongs to the originator, but to the group. The tutor’s role, then, is to encourage the editing, amending and challenging of group ideas, while ensuring that this engagement is ‘not a breach of trust but an act of responsibility and mutuality’ (Hemmi, Bayne, and Land 2009, 28). In order to do this, students had to move away from individual notions of self-regulation and toward interdependence (as found in previous research, such as Karasavvidis 2010). One student was ‘flummoxed’ by:

. . . not making sure I had a clear idea of what was expected of me as an individual and as a group member. This resulted in me waiting too long and not prioritising the activities I should have done in tandem with other members, and therefore left me way behind, consequently being unable to contribute much of interest.
A form of ‘group connoisseurship’ was required. In other words, individual understandings of quality needed to align with a co-constructed group understanding of what constituted ‘good’ work in relation to their shared task (Naismith, Lee, and Pilkington 2011; HEA 2012). This group connoisseurship was eventually achieved, for each group and for the class as a whole, through the often challenging and destabilising dialogic/disruptive dynamics that were a necessary part of criticising and editing the work of others. Interestingly, while the process of editing the work of others is an important aspect of asynchronous collaboration (Wheeler, Yeomans, and Wheeler 2008), there is still limited understanding within the assessment literature regarding how this is approached by groups and how remote collaborators develop a group tone. Directly editing the work of others was the most difficult process for the students and was generally done only when external pressures built up to the extent that immediate action was necessary:

We were ‘editing’ initially by Skyping and discussing/debating what we’d found. This was great – very scholarly and collegiate, but what we weren’t doing was touching each others’ writing directly . . . By the end we were really doing this in anger and I think we got a common toned and joined up feeling, but this came from literally the last 5 days or so.
In line with the findings of Hemmi, Bayne, and Land (2009), the forms of criticism that did take place between group members were often tentative,
with a cautious tone, soft language and social niceties used to defuse discomfort and risk. Technology-based conventions also needed to be negotiated, and the affordances of the tools employed in this course offered ways of performing this gentle approach, as demonstrated here – ‘I have used all of the sections highlighted in yellow but didn’t want to delete them just in case you weren’t happy with my changes ; )’.

Collaborative Critiques

One way of moving a group forward was by giving a constructive critique of another group’s work. This sharpened students’ critical awareness of their own group’s work and offered new ways of articulating and dealing with critiques within their own group. The critical friends, then, were tasked with working towards multi-level alignment, both within their own group and between groups. However, until the formal tutor feedforward, students were tentative in performing the critical friend role – for example, ‘I’m just calling by to say hi as one of your critical friends. Happy to help any way you want feedback/discussion but also don’t want to be distracting you too soon in your work flow’. Prior to this, most groups were still negotiating roles, social cohesion and the direction of their argument, so while some critical friend input (for example, suggestions, resources, challenging questions) was useful, it was difficult for students outside the group to know when to step in. Indeed, there was a tension between critical friend behaviour as a performance indicator and as a scaffolding device: ‘[I] want to help with relevant comments but want to avoid making “show off” comments only to “prove” I’m a “committed” Critical Friend’. It may have been that until the guiding influence of modelled feedback demonstrated ways of constructively critiquing, students engaged more in a ritual fashion, completing critical friend duties in order to avoid negative consequences (Zyngier and May 2004). Notably, most groups made significant progress after the formal tutor feedforward was given midway through the assignment. Tutors provided feedforward in audio format, where the imprecision and tonal variance of verbal speech conveyed the contextual, provisional nature of knowledge (Gould and Day 2013). We adopted a conversational approach, including moments of disagreement and consensus-building that highlighted the subjectivity of assessment in a way that linear text could not. This style of feedforward
was an integral part of scaffolding the task, and the response to it was consistently positive: ‘The feedforward experience has been extremely constructive to my learning process. I guess it was the first time that the feed(back) I received, came before actually handing in the final piece. As I said, all very formative!’ The feedforward was able to bridge uncertainties or discomfort in relation to the ways the multimodal product could meet the assessment criteria. For example, tutor feedforward across all the groups emphasised the development of academic discourse and argumentation in combination with critical use of the literature. Post tutor feedforward, critical friends were able to play a more useful role in that final developmental stage, as demonstrated by feedback like this:

this is a very well researched piece of work. it covers a lot of points and give breadth and depth to the topic. however, what i didn’t see explicitly coming through very much was your argument. what is your group’s take on this matter? . . . what i think you have here is a literature review, but i wonder if this is what the task is about?
Tutor feedforward had an impact not just on the direction of work, but on the confidence with which groups worked on different aspects of their argument. Some students found that tutor feedforward affirmed their own sentiments and empowered them to negotiate the direction their argument was taking within their group: ‘p.s. Thanks for the Group 2 wiki feedback. Bang on. Difficult for me as I’ve been making similar noises. The art of negotiation is the key’. Others were looking for affirmation and struggled to find it within feedback that was tailored to the group, rather than the individual: I think it was the idea of not being able to compare with others . . . to do with defining personal success against the rest of the class – where does one’s mark lie in comparison to others. I have to say that during the process of writing I didn’t really think about that aspect . . . but I find myself now, faced with the generic class mark and feedback, searching for my little bit of personal feedback. And really, I think it probably comes down to looking for reassurance . . . that I’m up to the necessary standard for the next ‘individual’ assignment.
Despite our attempts at encouraging less hierarchical forms of dialogue, the uncertainty and unfamiliarity of the task may have reinforced a perception
of tutor feedback as qualitatively different from peer feedback, particularly in terms of the authority that students attributed to it, either seeing tutors as sources of knowledge or as assessors. Engaging with tutor feedforward across the entire class opened students up further to a range of exemplars of current work in progress. Students were also able to see what finished products looked like by engaging with previous cohorts’ collaborative assignments. However, this previous work did not feature integrated multimodal authorship that took advantage of the digital environment, and this might have impeded progress towards greater multimodality. A lack of clarity around what constituted good quality multimodal academic discourse may have restricted opportunities for groups to develop shared multimodal expression. In our next iteration of the course, we intend to scaffold that engagement with multimodality earlier and more iteratively. Overall, however, we were pleasantly surprised by how much students came, by the end of the assignment, to engage with each other’s work as critical friends. Indeed, we think that this, alongside their engagement with the tutor feedforward, may explain the quality of work we saw in the individual critical review assignments completed after the group wiki task.

Conclusion

The collaborative element of the wiki assignment was disruptive in a number of ways. As previously mentioned, disruption is not inherently negative; rather, it can create opportunities for deep learning and a more nuanced understanding. The structure of this assignment, including the class-wide grade, created a level of dissonance between individual priorities and goals, those of the group and those of the class as a whole. Further dissonance emerged between traditional criteria for appraising the quality of individual work, the appraisal of individual contributions to collaborative work and the quality of a group-authored product. This complexity made it necessary for students to closely examine the processes behind the co-construction of meaning and expression, as well as the affordances of the digital environments that framed their interactions. The nature of the digital environment also created opportunities for multimodal expression within the students’ perceived parameters of the assessment of academic discourse. Departing from traditional, text-based
modes of academic writing challenged notions of authorship, authority and shared understandings of quality. When situated within a high-stakes assessment context, this disruption may have increased the perception of risk. Our approach to supporting students through these disruptions was to design a cumulative learning and assessment structure that iteratively added layers of complexity to their understanding of quality. This strategy placed a strong emphasis on dialogue between members of the learning community, making particular use of peer feedback and tutor feedforward – a dialogue that took place across multiple technologies and genres. The ultimate goal was to develop learners’ connoisseurship at individual, group and class level. The extent to which this was successful could be thought of as how well students – with our support – managed to resolve and rebalance the disruptions created by the assessment. In this respect, our results were mixed. Although there was very limited multimodal discourse, the class as a whole managed to produce a high quality academic submission, as did each of the groups. Yet at an individual level, there were cases where a student’s contribution to group content, quality and/or cohesion fell short of our design intentions. We have learned that balancing the tension between individual and collaborative connoisseurship requires more than just an eye for the right design choices. It also needs skilled facilitation, taking advantage of teachable moments and taking account of very specific cohort needs. Challenging as this is, with a flexible, dialogic approach, a highly engaging and transformative course can emerge. One student said of our course: ‘[I]t’s a challenge but useful indeed. The meta-cognitive thinking throughout OA is impressive. I love it. Now, all is getting even more interesting and engaging as I started to look at the other topics as a critical friend’.

With Dai’s scholarship and expert involvement in the Online Assessment course, we have had a wonderful opportunity to develop our own connoisseurship. This chapter, we hope, takes the critical friendship he has kindly extended to us and shares it with others.

References

Archer, A. 2010. ‘Multimodal Texts in Higher Education and the Implications for Writing Pedagogy.’ English in Education 44, no. 3: 201–13.
Bayne, S., and J. Ross. 2013. ‘Posthuman Literacy in Heterotopic Space: A Pedagogic Proposal.’ In Literacy in the Digital University, edited by R. Goodfellow and M. Lea, 95–110. London: Routledge.
Berger, R., and M. S. Paul. 2011. ‘Using E-Mail for Family Research.’ Journal of Technology in Human Services 29: 197–211.
Biggs, J. 2003. ‘Aligning Teaching for Constructing Learning.’ The Higher Education Academy. Accessed November 5, 2013. http://www.bangor.ac.uk/adu/the_scheme/documents/Biggs.pdf.
Bloomfield, B., Y. Latham, and T. Vurdubakis. 2010. ‘Bodies, Technologies and Action Possibilities: When is an Affordance?’ Sociology 44, no. 3: 419–20.
Bloxham, S., and P. Boyd. 2007. Developing Effective Assessment in Higher Education. Maidenhead: Open University Press.
Boud, D. 1995. ‘Assessment and Learning: Contradictory or Complementary?’ In Assessment for Learning in Higher Education, edited by P. Knight, 35–48. London: Kogan Page/SEDA.
Boyatzis, R. E. 1998. Transforming Qualitative Information: Thematic Analysis and Code Development. Thousand Oaks: Sage.
Carless, D. 2007. ‘Learning-Oriented Assessment: Conceptual Bases and Practical Implications.’ Innovations in Education and Teaching International 44, no. 1: 57–66.
Cousin, G. 2009. Researching Learning in Higher Education. Abingdon: Routledge.
Davies, W. M. 2009. ‘Groupwork as a Form of Assessment: Common Problems and Recommended Solutions.’ Higher Education 58, no. 4: 563–84.
Felder, R., and R. Brent. 2001. ‘Effective Strategies for Cooperative Learning.’ Journal of Cooperation & Collaboration in College Teaching 10, no. 2: 63–9.
Forte, A., and A. Bruckman. 2007. ‘Constructing Text: Wiki as a Toolkit for (Collaborative?) Learning.’ Proceedings of the 2007 International Symposium on Wikis, Montreal, October 21–5, 2007. Accessed November 10, 2013. http://www.andreaforte.net/ForteBruckmanConstructingText.pdf.
Gibbs, G., and C. Simpson. 2004. ‘Conditions Under Which Assessment Supports Students’ Learning.’ Learning and Teaching in Higher Education 1, no. 1: 3–31.
Goodfellow, R., and M. R. Lea. 2005. ‘Supporting Writing for Assessment in Online Learning.’ Assessment & Evaluation in Higher Education 30, no. 3: 261–71.
Goodfellow, R., and M. R. Lea. 2007. Challenging E-Learning in the University. Maidenhead: Open University Press.
Gould, J., and P. Day. 2013. ‘Hearing You Loud and Clear: Student Perspectives of Audio Feedback in Higher Education.’ Assessment & Evaluation in Higher Education 38, no. 5: 554–66.
Hemmi, A., S. Bayne, and R. Land. 2009. ‘The Appropriation and Repurposing of Social Technologies in Higher Education.’ Journal of Computer Assisted Learning 25, no. 1: 19–30.
Higher Education Academy. 2012. A Marked Improvement: Transforming Assessment in Higher Education. York: HEA.
Holstein, J., and J. Gubrium. 2004. ‘The Active Interview.’ In Qualitative Research: Theory, Method and Practice, edited by D. Silverman, 140–61. London: Sage.
Hounsell, D. 2008. ‘The Trouble with Feedback: New Challenges, Emerging Strategies.’ Interchange 2: 1–10.
Hounsell, D., R. Xu, and C.-M. Tai. 2007a. Integrative Assessment: Monitoring Students’ Experience of Assessment (Scottish Enhancement Themes: Guides to Integrative Assessment, no. 1). Gloucester: QAA. Accessed December 13, 2013. http://www.enhancementthemes.ac.uk/docs/publications/monitoring-students-experiences-of-assessment.pdf.
Hounsell, D., R. Xu, and C.-M. Tai. 2007b. Balancing Assessment of and Assessment for Learning (Scottish Enhancement Themes: Guides to Integrative Assessment, no. 2). Gloucester: QAA. Accessed December 13, 2013. http://www.enhancementthemes.ac.uk/docs/publications/guide-no-2---balancing-assessment-of-and-assessment-for-learning.pdf.
Karasavvidis, I. 2010. ‘Wiki Uses in Higher Education: Exploring Barriers to Successful Implementation.’ Interactive Learning Environments 18, no. 3: 219–31.
Knight, P. 2002. ‘Summative Assessment in Higher Education: Practices in Disarray.’ Studies in Higher Education 27, no. 3: 275–86.
Kreijns, K., P. A. Kirschner, and W. Jochems. 2003. ‘Identifying the Pitfalls for Social Interaction in Computer-Supported Collaborative Learning Environments: A Review of the Research.’ Computers in Human Behavior 19: 335–53.
Landow, G. 2006. Hypertext 3.0: Critical Theory and New Media in an Era of Globalization. Baltimore: Johns Hopkins University Press.
Mayes, T., F. Dineen, J. McKendree, and J. Lee. 2002. ‘Learning From Watching Others Learn.’ In Networked Learning: Perspectives and Issues, edited by C. Steeples and C. Jones, 213–27. London: Springer.
McCune, V., and D. Hounsell. 2005. ‘The Development of Students’ Ways of Thinking and Practising in Three Final-Year Biology Courses.’ Higher Education 49, no. 3: 255–89.
Meyer, J., and R. Land. 2006. ‘Threshold Concepts and Troublesome Knowledge.’ In Overcoming Barriers to Student Understanding: Threshold Concepts and Troublesome Knowledge, edited by J. Meyer and R. Land, 3–18. London: Routledge.
Naismith, L., B.-H. Lee, and R. M. Pilkington. 2011. ‘Collaborative Learning with a Wiki: Differences in Perceived Usefulness in Two Contexts of Use.’ Journal of Computer Assisted Learning 27: 228–42.
Nicol, D. J., and D. Macfarlane-Dick. 2006. ‘Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.’ Studies in Higher Education 31, no. 2: 199–218.
Norman, D. 1988. The Psychology of Everyday Things. New York: Basic Books.
Pieterse, V., and L. Thompson. 2010. ‘Academic Alignment to Reduce the Presence of “Social Loafers” and “Diligent Isolates” in Student Teams.’ Teaching in Higher Education 15, no. 4: 355–67.
Ross, J., S. Bayne, H. Macleod, and C. O’Shea. 2011. ‘Manifesto for Teaching Online.’ Accessed December 13, 2013. http://onlineteachingmanifesto.wordpress.com.
Rovai, A. 2001. ‘Building Classroom Community at a Distance: A Case Study.’ Educational Technology Research and Development 49, no. 4: 33–48.
Sadler, D. R. 1989. ‘Formative Assessment and the Design of Instructional Systems.’ Instructional Science 18, no. 2: 119–44.
Sadler, D. R. 2010. ‘Beyond Feedback: Developing Student Capability in Complex Appraisal.’ Assessment & Evaluation in Higher Education 35, no. 5: 535–50.
Sorapure, M., P. Takayoshi, M. Zoetewey, J. Staggers, and K. Yancey. 2005. ‘Between Modes: Assessing Student New Media Compositions.’ Kairos 10, no. 2: 1–15.
Stacey, E. 2007. ‘Collaborative Learning in an Online Environment.’ The Journal of Distance Education 14, no. 2: 14–33.
Swan, K., J. Shen, and R. Hiltz. 2006. ‘Assessment and Collaboration in Online Learning.’ Journal of Asynchronous Learning Networks 10, no. 1: 45–62.
Vassell, C., N. Amin, and S. Winch. 2008. ‘Evaluating the Use of Wikis in Student Group Work within Blackboard.’ Paper presented at the 9th annual conference of the Subject Centre for Information and Computer Sciences, Liverpool Hope University, Liverpool, August 26–8, 2008. Accessed April 6, 2009. http://www.ics.heacademy.ac.uk/events/9th-annual-conf/Papers/Proceedings/Proceedings%20Full.pdf#page=128.
Vratulis, V., and T. M. Dobson. 2008. ‘Social Negotiations in a Wiki Environment: A Case Study with Pre-Service Teachers.’ Educational Media International 45: 285–94.
Wheeler, S., P. Yeomans, and D. Wheeler. 2008. ‘The Good, the Bad and the Wiki: Evaluating Student-Generated Content for Collaborative Learning.’ British Journal of Educational Technology 39, no. 6: 987–95.
Williams, B., T. Brown, and R. Benson. 2013. ‘Feedback in the Digital Environment.’ In Feedback in Higher and Professional Education: Understanding It and Doing It Well, edited by D. Boud and E. Molloy, 125–39. London and New York: Routledge.
Zyngier, D., and W. May. 2004. ‘Key-Makers: Advancing Student Engagement through Changed Teaching Practice.’ Accessed October 27, 2009. http://www.fmpllen.com.au/documents/Keymakers_Final_Report_November_2004.pdf.
12
Understanding Students’ Experiences of Being Assessed: The Interplay between Prior Guidance, Engaging with Assessments and Receiving Feedback

Velda McCune and Susan Rhind

Introduction
We were delighted to have the opportunity to contribute to this Festschrift, as we have both drawn extensively on Professor Hounsell’s rich contributions to the literature on assessment while working over the years to understand students’ experiences of being assessed. For the first author of this co-authored chapter, Velda McCune, this began during my PhD studies, when I was considering psychology students’ experiences of essay writing and found Professor Hounsell’s work in this area to be an excellent source of inspiration. Since then, I have had the privilege to work with Professor Hounsell on projects looking at students’ experiences of oral presentations and on the Enhancing Teaching–Learning in Undergraduate Courses Project (ETL), to which we will return in more detail later in this chapter. As part of the ETL Project, we considered how to conceptualise what makes for high quality assessed work in particular subject areas in higher education and how students might come to grasp this. The insights from this work were particularly valuable when I started collaborating with Susan Rhind, my co-author of this chapter, to explore students’ experiences of assessment and feedback in veterinary medicine. We were intrigued to explore why these particularly high-achieving and committed students should be so worried about having difficulties understanding what was required in their assessments, and
Professor Hounsell’s work helped us to illuminate and further explore what was happening. For the second author, Susan Rhind, this became a major focus of research interest, since it was clear both from large-scale surveys, such as the National Student Survey in the United Kingdom (UK), and from more local evaluations that these concerns were impacting significantly on the student experience. This interest led to the development of a number of practical interventions in the local context. These interventions were always underpinned by a wider desire to understand more fully students’ experiences of assessment and feedback as they progress and develop within their disciplinary community. This chapter is focused on the interplay between higher education students’ prior perspectives on, and experiences of, assessment, the guidance they are given on their assessments and the feedback they receive, as Professor Hounsell’s insights into the importance of seeing the interconnections between these different aspects of the assessment process have been central to our work. We draw out the broad implications of Professor Hounsell’s research in this area and expand on them from our own findings and related literature. Drawing on the ETL Project, we explore how the nature of high quality learning can be conceptualised and what this means for students coming to grips with what is intended by the guidance and feedback they are given. We then consider the ‘guidance and feedback loop’, as described in Professor Hounsell’s work from the ETL Project (Hounsell et al. 2008). Given the particularly intriguing questions pertaining to why grasping the nature of high quality learning should be so challenging for veterinary medicine students, who are among the most highly qualified individuals in the UK, we place particular emphasis on our shared research work with these learners. In this context, we explore how guidance and feedback may contribute to students’ induction into a particular professional practice community. This leads us to a revision of the guidance and feedback loop, which places greater emphasis on what students bring to the assessment process, in terms of their prior experiences, learner identities and imagined future trajectories. The actively engaged and situated student is placed at the centre of our model and the importance of assessment for future learning is highlighted. We draw out key themes relating to how assessment processes can shape students’ sense of belonging and participation in particular contexts. We then conclude by
considering how research into assessment and feedback can be applied in practice to enhance learning in higher education, with an emphasis on professional areas, such as veterinary medicine.

The Challenges for Students in Grasping What Makes for High Quality Work

It is clear from the literature on learning in higher education that students can benefit from, and value, guidance and feedback on how to improve their assessed work (Black and Wiliam 1998; Hounsell et al. 2008). Additionally, it seems that even experienced learners at doctoral level may find it challenging to make sense of how to learn effectively and produce high quality work in particular contexts (Cotterall 2011). Likewise, in the context of highly qualified students of veterinary medicine, Smith (2009) identified that a lack of understanding of what was expected of them in their assessments was a common issue for students. This uncertainty encompassed how much they were expected to study, the level of detail they were expected to learn and what would be expected of them in assessments. If learning were only a matter of students grasping in generic terms what makes for good assessed work and then building up their subject area knowledge to map onto this generic understanding, we might reasonably expect that students in higher education would find it a simple matter to know how to succeed in their assessments. If this were the case, students would only need guidance on key concepts and relevant subject knowledge. As making sense of assessments does not seem to be this simple, it is important to consider what underlies students’ difficulties in grasping what makes for high quality work. One way to explain these themes in the literature is to see learning in higher education as being very much a matter of students coming to grasp the particular contextualised practices of specific subject area communities. The work of Lave and Wenger on communities of practice (Lave and Wenger 1991; Wenger 1998) has been particularly influential and valuable in this regard, although limitations of this perspective have also been noted (Anderson and McCune 2013; Cotterall 2011). The work on communities of practice was one of the literatures underpinning research that was taken forward as part of the Enhancing Teaching–Learning Environments in Undergraduate Courses Project (ETL), funded by
the Economic and Social Research Council Teaching and Learning Research Programme from 2001 to 2005. Professor Hounsell was co-director of this project, along with Professor Noel Entwistle. One of the key strands of this project investigated how high quality learning might be understood and conceptualised within particular undergraduate course settings. This work drew out the importance of considering the ways of thinking and practising (WTPs) of particular academic disciplines as enacted locally. WTPs were intended as a means of encapsulating the rich complexity of what students may learn through their engagement with their studies. This would incorporate more explicit aspects of subject area knowledge and understanding, but also more tacit elements of what might be involved in being, or becoming, a participant in a particular community (Hounsell and Anderson 2009; McCune and Hounsell 2005). This could include the norms, values and discourses of an academic community, as expressed through assessed work. It is likely that these elements are learned gradually through participation and shifts in identity on the part of the learner (Wenger 1998). Thus, the perspective on high quality learning presented through WTPs helps to illuminate why even experienced learners may struggle at first to grasp what is expected of them in a new context and, as such, look to guidance and feedback on assessments as key to learning what is valued locally.

The Guidance and Feedback Loop

Against this background of the challenges that students may face in making sense of what is required if they are to produce work that is accepted as high quality in a given context, it becomes clear that close consideration of the learning processes around assessment is of fundamental importance. Professor Hounsell led an important contribution in this area through treating guidance and feedback in higher education as an integrated cycle – the guidance and feedback loop – set out as distinct but connected stages (Hounsell et al. 2008). The guidance and feedback loop mapped out in detail the interplay between students’ prior experiences of assessment, the guidance they received on assessments, the feedback they were given and how this related to future assessed work. Understanding guidance and feedback as interrelated steps in a cycle, which also encompassed students’ prior experiences of assessment, offered new insight into where trouble spots were likely
to arise. It became easier to see how weaknesses in the earlier guidance stages, coupled with unfamiliar assessment tasks, might then place greater pressure on the feedback stage in the cycle. The initial research underpinning the guidance and feedback loop drew on a subset of the data from the ETL Project, comprising questionnaire and group interview data from first- and final-year students studying undergraduate course units in the biosciences across three different universities in the UK. The data set brought together 782 student questionnaires, with twenty-three group interviews involving sixty-nine participants (further details of the data collection and settings can be found in Hounsell et al. 2008). Within the questionnaire data, the students who responded generally indicated positive perceptions of the teaching–learning environments of the course units. Looking in closer detail at specific questions relating to guidance and feedback on assessment suggested that clarity about what was expected in assessed work was a concern for a substantial minority of students. The scores on questions relating to feedback tended to be the lowest, indicating that receiving feedback which improved learning or clarified matters not understood was more problematic. It is important to note, however, that the students would not necessarily have received all of their feedback at the time that the questionnaire was administered. The interview data were used to provide more fine-grained insights into the students’ experiences of guidance and feedback on assessments. After the initial themes had been identified in the data, it was decided that the findings could best be described as an iterative cycle or loop representing a series of interrelated steps, which would be relevant across assessment regimes. Within the initial study by Hounsell et al. (2008), the following steps were identified:

Step 1: Students’ Prior Experiences of Cognate Assessments. The data relevant to this step indicated considerable variation in how familiar students were with the types of assessment they were experiencing in the course units and how confident they were in tackling them.

Step 2: Preliminary Guidance about Expectations and Requirements. Such guidance was presented across all of the course units and was often received positively, but some comments from students suggested misunderstanding or insufficient guidance.
Step 3: Ongoing Guidance and Clarification of Expectations. This took various forms, including self-test questions or requests to staff for further advice. One issue arising here was that the onus could be on students to seek help, and some felt uncertain about doing this.

Step 4: Feedback on Performance and Achievement. Paucity or inconsistency of feedback on coursework was quite a common concern here.

Step 5: Supplementary Support. This refers to the possibility that students could seek additional guidance about their feedback. Again, the onus tended to be on students to ask for this support.

Step 6: Feedforward. This is where the learner takes what has been learned from a particular guidance, assessment and feedback experience and applies it to a subsequent assessment. Opportunities were lost here for some of the students when feedback was not provided in time to feed into later work.
In our subsequent work exploring feedback in the context of veterinary medicine (Hughes, McCune, and Rhind 2013), this model was further developed to capture the additional discipline-specific themes emerging from the data. The key alterations made to the model involved amendments to the step ‘students’ prior experiences of cognate assessments’, which we renamed ‘experiences and attitudes that students bring to the assessment process’. These experiences and attitudes were shown to be heavily influenced both by competitiveness, which the students identified as a key theme, and by the professional expectations arising from their appreciation that their studies are vocational in nature. In this, we were acknowledging that the ‘feedforward’ aspects have relevance not only for informing future assessment tasks, but also for students’ future careers. In relation to the roles students may take up after they leave higher education, we have been considering the growing emphasis in the literature on the part that assessment plays in preparing students for future learning and developing their capacity to self-regulate their learning (Boud and Falchikov 2006; Crisp 2012; Nicol and Macfarlane-Dick 2006). This has led us to revise the guidance and feedback loop models from our previous research. The new version of the model is presented in Figure 12.1. We have adapted the end step of the model to focus on ‘the student’s integration of what is learned into future practice’, rather than simply considering the feedforward into subsequent assignments.
Figure 12.1 The revised guidance and feedback loop.
We have also revised the model to emphasise the importance of the student’s active engagement in each step. In previous versions of the model, there was more of a sense of the guidance and feedback being led by the teacher, which did not fully acknowledge the potential of student involvement in assessment as a means of developing the capacity for lifelong learning and greater self-regulation (Boud and Falchikov 2006; Nicol and Macfarlane-Dick 2006). Beaumont, O’Doherty, and Shannon (2011) describe a ‘dialogic feedback cycle’ relating to students’ learning in schools and colleges and explore the differences in how guidance and feedback are enacted in these contexts as compared with higher education. They emphasise the importance of seeing ‘feedback as part of a dialogic guidance process rather than a summative event’ (2011, 671 [emphasis in original]), which sits well with our perspective on the guidance and feedback loop. Beaumont, O’Doherty, and Shannon (2011) note how the students in their study often experienced a challenging
transition from school and college settings, where there was ongoing iterative dialogue around multiple drafts of assignments, to university settings, where the emphasis was more summative and less dialogic. This reminds us of the essential role of the teacher in enabling access to the practices of particular subject areas (Northedge 2003), even in higher education contexts, where we are emphasising the active and increasingly self-regulated role of the learner. We have now placed ‘what the student brings to assessment’ at the centre of the model, based on a growing realisation that the earlier guidance and feedback loop had not acknowledged how all aspects of the learning process involve learners’ developing identities and imagined trajectories in relation to current and future communities of practice (Anderson and McCune 2013). In the context of the students of veterinary medicine, this could be seen in their emotional responses to the guidance and feedback process and in their concern with how these processes informed their learning for future professional roles (Hughes, McCune, and Rhind 2013). In the next section of this chapter, we look more closely at experiences of belonging and participation through assessment, with particular emphasis on the veterinary context.

Belonging and Participating through Guidance, Feedback and Engaging with Assessment Processes

While the guidance and feedback loop provides a useful heuristic for analysing how the whole assessment process can contribute to students’ growing mastery of the ways of thinking and practising in their particular subject area, it does not offer detailed insight into the ways in which guidance and feedback on assessments can enhance students’ sense of belonging and legitimate participation within higher education subject-area communities. In this part of the chapter, we explore two key themes in relation to how guidance and feedback practices can enable learners’ participation in those higher education contexts, such as veterinary medicine, which involve trajectories leading towards professional roles. We begin by considering how guidance and feedback practices can contribute to, or inhibit, students’ sense that they are valued and competent participants in the higher education communities that overlap with, and relate to, their future professional communities. We then consider the challenges of communication inherent in supporting students to grasp the partly tacit practices underpinning what makes for high
quality work in a particular higher education context, an area that has been less well addressed in the literature on communities of practice (Anderson and McCune 2013). As we have noted, our understanding of subject area communities in higher education and learners’ identification with these communities can be usefully framed in relation to Lave and Wenger’s work on communities of practice (Lave and Wenger 1991; Wenger 1998; Wenger, White, and Smith 2009). This work provides a valuable theoretical frame within which to interpret students’ learning, provided that certain limitations are taken into account (Anderson and McCune 2013). Broadly defined, communities of practice are contexts within which participants work together toward common goals in a manner that allows for the development of shared practice and shared learning (Wenger 1998). Wenger’s recent work, in particular, notes the fluid and overlapping nature of these communities (Wenger, White, and Smith 2009). This is a good fit with professional contexts, such as veterinary medicine, in which there are multiple practitioner communities that partially overlap with the higher education communities within which students learn. Learners’ trajectories, their sense of where their future lies in relation to particular communities and how they have arrived there are important in shaping what learners will find meaningful and relevant in their studies. An important part of students’ trajectories and identities in relation to professional communities is their sense of familiarity and competence in these settings (Wenger 1998). In our research focused on veterinary medicine, we identified some of the ways in which guidance and feedback can contribute to, or inhibit, students’ sense of being competent and valued. Smith (2009) found that students’ desire for feedback related not only to assisting them in gauging their achievement within specific courses, but also to measuring how well they were progressing in terms of preparation for their profession. The awareness of this trajectory seemed to add extra pressure to the students, who often doubted their own ability to judge their personal progress. This finding is consistent with an overall system that does not, as suggested by Sadler, ‘make explicit provision for the acquisition of evaluative expertise’ (Sadler 1989, 119). The power of the spoken word in encouraging students’ sense of belonging
and legitimate participation through feedback has been demonstrated both by ourselves and by others researching audio feedback (Lunt and Curran 2010; Ice et al. 2007; Rhind et al. 2013). Veterinary students identified the importance of the sense of being recognised and feeling personally known by staff. Furthermore, they identified that the personal nature of the feedback was both encouraging and motivating and had the capacity to build confidence. This may be particularly important in highly competitive disciplines such as veterinary medicine, where research also highlights that the transition into a cohort of similarly high-achieving students is challenging (Smith 2009; Hughes, McCune, and Rhind 2013). Returning to the argument about the importance of supporting students to judge their own work more accurately and gain access to expert practice, audio feedback has also shown benefits in this regard, with students commenting that feedback delivered in this manner helped them gain insight into the examiners’ views and process of marking (Rhind et al. 2013). We have repeatedly found that veterinary students consider the ‘gold standard’ method of feedback to be individual discussion with members of staff (Hughes, McCune, and Rhind 2013; Rhind et al. 2013); hence, the positive results we have demonstrated with audio feedback are likely to relate, at least in part, to students feeling that they are receiving the same individualised and personal input that is the valued feature of individual discussions. Studies in other disciplines, however, have shown written comments to be the most favoured feedback ‘method’ (see, for example, Ferguson 2011, a study across several education programmes in which written comments were seen as most favourable). This suggests that the student view is heavily influenced by the specific assessment and disciplinary context. One of the most powerful ways for students to engage with the assessment process is through developing skills as assessors themselves (Boud and Falchikov 2006). Having the student in the position of examiner clearly has the potential to impact on each stage of the revised guidance and feedback loop (see Figure 12.1) by adding richly to the central emphasis on ‘what the student brings to assessment’. One example of this in veterinary medicine has been in the context of students authoring multiple choice questions for their peers. Students found the process of authoring questions and answering or commenting on each other’s questions very beneficial for revision
purposes (Rhind and Pettigrew 2012). This study also indicated that students in earlier years have a greater need for reassurance in their role as examiners, whereas students who have progressed further in the curriculum have more confidence in the abilities of their peer group in general. This finding is also consistent with the students gaining confidence in their positions as legitimate participants in their assessment community. Moving on to consider the communicative challenges inherent in guidance and feedback, the literature provides many examples of difficulties in communication regarding expectations of what constitutes high quality work, particularly in relation to essay writing. In his earlier work on essay writing, Professor Hounsell emphasised the context-specific nature of academic writing and the value of considering authentic experiences of writing situated within students’ day-to-day learning experiences in their subject area (Hounsell 1988). In this work, he illustrated key differences in high quality writing between subject areas and the qualitatively different conceptions of what makes for good essay writing held by students within the same subject area. Setting out the implications for feedback, he noted:

From a student perspective, however, learning to engage in academic discourse entails not merely a sensitivity to particular disciplinary frames, but apprehending a mode of discourse which is essentially tacit in character. The implications of this for teaching-learning transactions are considerable, especially as far as feedback to students is concerned. In the Hounsell (1988) study, such feedback was predominantly in the form of general guidelines, circulated at the beginning of a course of study, or written comments on individual essays. As I have argued elsewhere (Hounsell, 1985), we should not assume that feedback of this kind has a meaning which is self-evident. The comments made are more than simply particularised observations; they allude to a tacit mode of written academic discourse, and may thus remain opaque to students whose premises for discourse are fundamentally at variance with those of their tutors. (Hounsell 1988, 173 [emphasis in original])
Many subsequent studies have raised similar concerns relating to qualitative differences in students’ conceptions of essay writing in particular subject areas (see, for example, Campbell, Smith, and Brooker 1998; Prosser and Webb 1994).
For the veterinary medicine students we interviewed, some of the most challenging aspects of feedback related to perceived inconsistencies in marking and feedback given on written (short-answer and essay) questions (Hughes, McCune, and Rhind 2013). Linked to this was a strong theme in the data of students struggling to gauge teacher expectations and receiving differing advice from academics in advance of assessment. This highlights the point that at the heart of a sound assessment and feedback policy there must be a level of ‘assessment literacy’ (Price et al. 2012) that is shared by both staff and students. Price et al. describe assessment literacy as a set of ‘knowledge, skills and competencies’ that encompasses both technical and conceptual aspects of assessment. Assessment literacy, it is argued, is a key enabler of further learning. The authors’ definition of assessment in this context includes feedback. However, we consider it helpful to explicitly consider ‘feedback literacy’ as a separate, albeit intimately linked, entity, given the complex and perhaps differing competencies that need to be developed by the community as a whole in order to address many of the issues we raise in this chapter. Assessment and feedback literacy, while having generic aspects, will also have distinct disciplinary dimensions, as they relate to the trajectory experienced by the student within their discrete academic community. Anderson, in Chapter 7, and Anderson and McCune (2013) suggest that the communicative challenges faced by learners and academic staff in higher education may be partly explained through the literature which emphasises that spoken and written language do not offer transparent and unambiguous communication. Instead, language is seen as offering multiple possible interpretations, requiring shared understanding to be achieved through participants creating sufficiently shared frames of reference (see, for example, Rommetveit 1974). Northedge (2003) uses teaching materials from an Open University course to illustrate how the complex language of academic texts can be impenetrable for students, even when they have access to the dictionary definition of all the words included in a given piece of writing. What the students lack, Northedge argues, is sufficient understanding of the history of debate and discussion in their subject area to allow them to understand the particular ways in which these words are being used to create meaning in context. On this basis, Northedge emphasises the importance of teachers finding ways of providing students with a frame of reference to render the texts
This can be done through conversation, during which teachers work to establish a commonly understood example with which to frame the discussion of complex ideas. This broad idea of finding points of common understanding in order to provide entry into specialist discourse is also highly relevant for assessed work. Northedge (2003) notes the importance of the teacher’s role in coaching students as they learn to speak and write in the manner of more experienced practitioners of a particular subject area. He suggests creating assignments that give students a basis for their own meaning-making – for example, by working with case studies that students can understand and use as a basis from which to form and share their own opinions. Giving guidance and feedback on assessed work involves establishing shared understanding, perhaps around exemplars of past student work. Northedge also suggests that an important role for feedback is to connect with, and respond to, the meanings that students have tried to create. A sense of gradually and sensitively shaping students’ responses, such that they come closer to the ways of communicating demonstrated by expert practitioners, may also be important (Anderson 1997).

Conclusions and Implications

The research we have reviewed in this chapter makes clear the great significance of Professor Hounsell’s contribution to our understanding of why even highly qualified and motivated students may struggle to grasp what makes for high quality work in their subject areas, and of what might best be done to support their learning. Drawing together Professor Hounsell’s work with our own research and the wider literature, we emphasise how students are engaged in a process of gradually developing mastery over the tacit practices of their learning communities and of the overlapping subject area and professional communities they may join in the future. This being the case, we must expect that students will not find what is required of them transparent, and it is therefore of fundamental importance that we attend to how their developing mastery and future learning can be supported through all of the stages of the guidance and feedback loop. In our revised version of the guidance and feedback loop, we have placed what students bring to assessment at the centre of the model. This acknowledges the significance for assessment of students’ imagined trajectories in relation to professional communities. It also signals the central importance of students’ active engagement in assessment processes for their future learning.
These themes are particularly salient for students in professional contexts, such as veterinary medicine. Veterinary medical students are likely to have a particularly strong sense of their imagined future trajectories within their learner identities, as most expect to go on to work in the profession. This being the case, any perceived threat to their intended path created by difficulties in grasping assessment expectations could be deeply troubling. Our research has shown that these students feel pressurised by the understanding that what they learn at university must set them up for life and is important for their future career (Smith 2009). Veterinary students’ earlier learning identities will also typically encompass an experience of having previously been among the strongest students in their peer group, which can lead to a sense of disquiet when becoming part of a new peer group of similarly high-achieving individuals and encountering difficulties for perhaps the first time (Hughes, McCune, and Rhind 2013). This makes these students a particularly interesting group to consider in potential future research into assessment experiences.

In terms of future research, close analysis of the ways in which shared meaning is achieved through whole cycles of the guidance and feedback loop would be an important further step. This would ideally involve longitudinal studies analysing relevant written guidance, observing relevant interactions and interviewing students and staff about their perceptions of the process. Longitudinal studies focusing on the extent to which assessment practices in professional contexts feed into future learning beyond higher education would be challenging to achieve, but highly relevant to developing our understanding in this area. Given the current interest in the National Student Survey in the UK, and the negative perceptions of feedback sometimes expressed in this survey, it would be valuable to explore more deeply whether improvements to shared meaning-making around assessment generate more positive responses to these kinds of external surveys.

Professor Hounsell has taken a strong leading role in educational development work with academic staff throughout his career. The implications of the research reported in this chapter for supporting colleagues as they guide students’ learning are considerable. Part of the task lies in enabling colleagues to understand why it is that students struggle to grasp what is required of them.
Without an understanding that making sense of what makes for high quality work requires rich, ongoing participation in relevant academic and professional practice, colleagues may become frustrated when students struggle to improve. This may lead to the belief that students are putting in insufficient effort or are not engaging actively with the guidance and feedback they are given. This, in turn, may sap colleagues’ motivation to find the time for supporting assessed work within their busy professional lives. While there will be some students who are less engaged in their academic work, it is important that academics are aware of other possible explanations for learners’ difficulties. Giving our academic colleagues a strong sense of the need to work to achieve meaning and shared understanding with students, rather than assuming that comments given by experienced members of an academic community will have a transparent meaning to students, would also be an important step.

Once the roots of the challenges that students face are better understood, the guidance and feedback loop provides a valuable heuristic for working with colleagues to analyse how guidance and feedback practices may be developed. Close consideration of each step in the loop to identify potential trouble spots can provide a clear framework for enhancement. Engaging with the guidance and feedback loop also emphasises the important interrelationships between aspects of guidance and feedback that might otherwise be treated as distinct issues. Treating them separately may result in missed opportunities: for example, where negative comments about feedback given in questionnaires such as the National Student Survey are addressed only by providing more endpoint feedback, when some of the issues might be better resolved at other points in the cycle.

Our current version of the guidance and feedback loop, as presented in Figure 12.1, was developed particularly with professional learning in mind (although we are aware that many of the points made would be equally relevant to other learning situations). This has led us to emphasise the importance of students’ active involvement in guidance and feedback processes as a means of developing their identities as legitimate, professional practitioners and of maximising the impact of their learning experiences on their future practice. We hope this will prove useful in future in supporting colleagues to see the full relevance of assessment and feedback practices for students’ professional development. We have also emphasised the centrality of what students bring to all aspects of the learning processes around assessment.
This could provide valuable educational development opportunities to engage with colleagues about how learners’ identities, prior experiences and beliefs need to be taken into account as we work to enhance all aspects of guidance and feedback. There are additional possibilities for future research examining how educational development work using this model affects academic staff’s perspectives and practices, as well as student learning. A further area of research interest would be to explore more deeply how students’ mindsets and psychological profiles relate to the revised guidance and feedback loop. In doing so, we could begin to explore the heterogeneity that exists within the student cohort, even within a specific discipline, to provide a better understanding of individual student trajectories and support needs as they develop their own assessment and feedback literacies.

Acknowledgements

The Principal’s Teaching Award Scheme at the University of Edinburgh funded the projects exploring feedback in veterinary medicine and audio feedback. The Enhancing Teaching–Learning Environments in Undergraduate Courses Project (ETL) was funded by the Teaching and Learning Research Programme of the UK Economic and Social Research Council. This project was undertaken by a team from the Universities of Coventry, Durham and Edinburgh in collaboration with partner departments. Members of the project team over this period were Charles Anderson, Liz Beaty, Adrian Bromage, Glynis Cousin, Kate Day, Noel Entwistle, Dai Hounsell, Jenny Hounsell, Ray Land, Judith Litjens, Velda McCune, Erik Meyer, Jennifer Nisbet, Nicola Reimann and Hilary Tait.

References

Anderson, C. 1997. ‘Enabling and Shaping Understanding through Tutorials.’ In The Experience of Learning (2nd edn), edited by F. Marton, D. J. Hounsell, and N. J. Entwistle, 184–97. Edinburgh: Scottish Academic Press.
Anderson, C., and V. McCune. 2013. ‘Fostering Meaning: Fostering Community.’ Higher Education 66: 283–96.
Beaumont, C., M. O’Doherty, and L. Shannon. 2011. ‘Reconceptualising Assessment Feedback: A Key to Improving Student Learning?’ Studies in Higher Education 36: 671–87.
Black, P., and D. Wiliam. 1998. ‘Assessment and Classroom Learning.’ Assessment in Education: Principles, Policy and Practice 5: 7–74.
Boud, D., and N. Falchikov. 2006. ‘Aligning Assessment with Long-Term Learning.’ Assessment and Evaluation in Higher Education 31: 399–413.
Campbell, J., D. Smith, and R. Brooker. 1998. ‘From Conception to Performance: How Undergraduate Students Conceptualise and Construct Essays.’ Higher Education 36: 449–69.
Cotterall, S. 2011. ‘Doctoral Students Writing: Where’s the Pedagogy?’ Teaching in Higher Education 16: 413–25.
Crisp, G. 2012. ‘Integrative Assessment: Reframing Assessment Practice for Current and Future Learning.’ Assessment and Evaluation in Higher Education 37: 33–43.
Ferguson, P. 2011. ‘Student Perceptions of Quality Feedback in Teacher Education.’ Assessment and Evaluation in Higher Education 36: 51–62.
Hounsell, D. 1988. ‘Towards an Anatomy of Academic Discourse: Meaning and Context in the Undergraduate Essay.’ In The Written World: Studies in Literate Thought and Action, edited by R. Säljö, 161–77. Berlin: Springer-Verlag.
Hounsell, D., and C. Anderson. 2009. ‘Ways of Thinking and Practising in Biology and History: Disciplinary Aspects of Teaching and Learning Environments.’ In The University and its Disciplines: Teaching and Learning within and beyond Disciplinary Boundaries, edited by C. Kreber, 71–83. New York: Routledge.
Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. ‘The Quality of Guidance and Feedback to Students.’ Higher Education Research and Development 27: 55–67.
Hughes, K., V. McCune, and S. Rhind. 2013. ‘Academic Feedback in Veterinary Medicine: A Comparison of School Leaver and Graduate Entry Cohorts.’ Assessment and Evaluation in Higher Education 38: 167–82.
Ice, P., R. Curtis, P. Phillips, and J. Wells. 2007. ‘Using Asynchronous Audio Feedback to Enhance Teaching Presence and Students’ Sense of Community.’ Journal of Asynchronous Learning Networks 11: 3–25.
Lave, J., and E. Wenger. 1991. Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
Lunt, T., and J. Curran. 2010. ‘“Are You Listening Please?” The Advantages of Electronic Audio Feedback Compared to Written Feedback.’ Assessment and Evaluation in Higher Education 35: 759–69.
McCune, V., and D. Hounsell. 2005. ‘The Development of Students’ Ways of Thinking and Practising in Three Final-Year Biology Courses.’ Higher Education 49: 255–89.
Nicol, D., and D. Macfarlane-Dick. 2006. ‘Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.’ Studies in Higher Education 31: 199–218.
Northedge, A. 2003. ‘Enabling Participation in Academic Discourse.’ Teaching in Higher Education 8: 169–80.
Price, M., C. Rust, B. O’Donovan, K. Handley, and R. Bryant. 2012. Assessment Literacy: The Foundation for Improving Student Learning. Oxford: Oxford Centre for Staff and Learning Development.
Prosser, M., and C. Webb. 1994. ‘Relating the Process of Undergraduate Essay Writing to the Finished Product.’ Studies in Higher Education 19: 125–38.
Rhind, S. M., and G. W. Pettigrew. 2012. ‘Peer Generation of Multiple Choice Questions: Student Engagement and Experiences.’ Journal of Veterinary Medical Education 39: 375–79.
Rhind, S. M., G. W. Pettigrew, J. Spiller, and G. T. Pearson. 2013. ‘Experiences with Audio Feedback in a Veterinary Curriculum.’ Journal of Veterinary Medical Education 40: 12–18.
Rommetveit, R. 1974. On Message Structure: A Framework for the Study of Language and Communication. London and New York: Wiley.
Sadler, R. 1989. ‘Formative Assessment and the Design of Instructional Systems.’ Instructional Science 18: 119–44.
Smith, K. 2009. Exploring Student Perceptions of Academic Feedback in Veterinary Medicine. Unpublished Master’s thesis, University of Edinburgh.
Wenger, E. 1998. Communities of Practice: Learning, Meaning and Identity. Cambridge: Cambridge University Press.
Wenger, E., N. White, and J. D. Smith. 2009. Digital Habitats: Stewarding Technology for Communities. Portland: CPsquare.
Notes on the Contributors
Charles Anderson is a Senior Lecturer and Deputy Head of the Institute for Education, Community and Society at the University of Edinburgh. His research has ranged across a number of aspects of higher education and secondary school education, while maintaining a central focus on textual practices and communication.

David Boud is Professor of Adult Education in the Faculty of Arts and Social Sciences at the University of Technology, Sydney. He has published extensively on teaching, learning and assessment in higher and professional education. In the area of assessment, he has been a pioneer in promoting learning-centred approaches to assessment, particularly as regards student self-assessment (Enhancing Learning through Self-Assessment), building assessment skills for long-term learning (Rethinking Assessment in Higher Education: Learning for the Longer Term) and new ways of approaching feedback (Effective Feedback in Professional and Higher Education). He is also an Australian Learning and Teaching Council Senior Fellow (for more details, see: www.assessmentfutures.com).

Noel Entwistle is Professor Emeritus of Education at the University of Edinburgh and has been the editor of the British Journal of Educational Psychology (BJEP) and of Higher Education. He has honorary degrees from the Universities of Gothenburg and Turku and holds an Oeuvre Award from the European Association for Research in Learning and Instruction.
His main research interests continue to be in student learning and understanding at university level. He is co-editor of a Palgrave Macmillan series on universities in the twenty-first century, and his recent publications include Student Learning and University Teaching (BJEP, 2007, as editor) and Teaching for Understanding at University (2009), along with a chapter in Enhancing the Quality of Learning (2012).

Tim Fawns is eLearning Coordinator for Clinical Psychology at the University of Edinburgh and a graduate of the MSc in eLearning. Tim’s primary research interests include technological influences on semantic and episodic memory, online group dynamics and the uses of media in learning and identity construction.

Telle Hailikari is a Senior Researcher at the Centre for Research and Development in Higher Education at the University of Helsinki, Finland. Her doctoral dissertation focused on prior knowledge assessment and its relation to student achievement. Her research areas cover assessment practices, students’ study progression, factors impeding and enhancing studying, the role of emotions in learning, students’ approaches to learning, and procrastination.

Evangelia Karagiannopoulou holds a doctorate from the University of London and is currently an Assistant Professor in Psychology of Education at the University of Ioannina, Greece. Her published work has mainly focused on learning and assessment in higher education, with many articles in both Greek and international journals exploring psychological constructs in relation to learning. Her recent research interests concern the influences of teaching and assessment on learning, understanding and achievement, including the importance of a ‘meeting of minds’ between tutor and students to give students the confidence to understand key ideas for themselves.

Carolin Kreber is Professor of Higher Education at the University of Edinburgh, where she is also Director of the Higher Education Research Group.
Her recent book, Authenticity in and through Teaching in Higher Education (2013), explores teaching and learning in higher education through the lens of authenticity. Her present work is concerned with professional learning within the academy, interpretations of professionalism and preparation for practice to create a more just and sustainable future.

Sari Lindblom-Ylänne is Professor of Higher Education and Director of the Centre for Research and Development in Higher Education at the University of Helsinki, Finland. She was President of the European Association for Research on Learning and Instruction (EARLI) from 2009 to 2011 and is now President-Elect of the World Education Research Association (WERA); she will be President for the years 2014 to 2016. Sari Lindblom-Ylänne is actively involved in many international research projects. Her research focuses on student learning and teaching at university – for example, on approaches to learning and teaching, self-regulation, self-efficacy beliefs, motivation towards studying, assessment practices and quality enhancement in higher education.

Jan McArthur is a Lecturer in Higher Education at the University of Edinburgh. She holds a PhD in Educational Research from Lancaster University and has taught in higher education in both Australia and the United Kingdom. Her research interests span the nature and purposes of higher education, social justice within and throughout higher education and dialogue/student voice within assessment, learning and feedback. She has a particular interest in critical theory and its applications to higher education research and practice, especially the work of Theodor Adorno, as demonstrated in her recent book, Rethinking Knowledge within Higher Education: Adorno and Social Justice.

Velda McCune is Senior Lecturer and Deputy Director at the Institute for Academic Development at the University of Edinburgh. She completed her undergraduate and PhD studies at the University of Edinburgh, developing a focus on student learning in higher education. In her current role, she leads a team that undertakes development work with staff and students relating to university learning and teaching. Her research focuses on teaching–learning environments and students’ experiences of learning in higher education.
Liz McDowell has undertaken extensive research in topics related to the student and staff experience of learning, teaching and, most prominently, assessment. She was the director of the national Centre for Excellence in Teaching and Learning (CETL) in Assessment for Learning from 2005 to 2010. Liz was also the founding director of an influential series of international conferences held in conjunction with the European Association for Research in Learning and Instruction. Throughout her career, she has linked research and practice, working in various academic development roles. She is a National Teaching Fellow and has held a personal chair as Professor in Academic Practice at Northumbria University.

David Nicol is Emeritus Professor of Higher Education at the University of Strathclyde, Scotland. He is also Visiting Professor at the University of Ulster and Adjunct Professor at the Swinburne University of Technology, Australia. David’s research is in the areas of assessment and feedback, eLearning developments and change management in higher education. Some of David’s work can be accessed through the Re-Engineering Assessment Practices website (see: www.reap.ac.uk).

Clara O’Shea is an Associate Lecturer at the University of Edinburgh. Her work has spanned a variety of disciplines, developing curricula for online and blended environments. Her research on digital environments focuses on curriculum design, assessment and feedback practices, notions of space and place, and identity development.

Liisa Postareff is a Senior Lecturer in University Pedagogy at the Centre for Research and Development in Higher Education at the University of Helsinki, Finland. Her research areas cover teaching and learning in higher education – more specifically, teachers’ approaches to teaching and teacher development, self-efficacy beliefs, students’ approaches to learning, the interaction between teaching and learning, assessment of student learning and the role of emotions in teaching and learning. She is a member of the editorial board of New Approaches in Learning Research and a reviewer for a number of scientific journals.
She is also an active member of the European Association for Research on Learning and Instruction (EARLI) Special Interest Group (SIG) on higher education.

Michael Prosser has recently retired as Professor and Executive Director of the Centre for the Enhancement of Teaching and Learning at the University of Hong Kong. His research on teaching and learning in higher education has been widely published and cited. He has twice been co-editor of Higher Education Research and Development (HERD) and has also been an Associate Editor of the British Journal of Educational Psychology (BJEP). He has been elected a life member of the Higher Education Research and Development Society of Australasia (HERDSA) and the International Society for the Scholarship of Teaching and Learning (ISSOTL) and has been in the Institute for Scientific Information (ISI) list of the top 1 per cent of cited authors in the field of social sciences for several years.

Milla Räisänen is a doctoral student at the Centre for Research and Development in Higher Education at the University of Helsinki, Finland. Her doctoral dissertation focuses on the development of university students’ self-regulation skills – more specifically, on how students’ self-regulation relates to different approaches to learning, self-efficacy beliefs and emotions. Her other research areas include the assessment of students’ learning outcomes.

Susan Rhind is Chair of Veterinary Medical Education and Director of Veterinary Teaching at the Royal (Dick) School of Veterinary Studies. She graduated from Glasgow Veterinary School in 1990 and, following three years in general practice, studied for a PhD in immunology at the University of Edinburgh. She subsequently specialised as a pathologist, becoming a member of the Royal College of Pathologists in 2000. In recent years, she has developed a major interest in all aspects of veterinary education. Her current areas of research include assessment, feedback and graduate transition into practice.

D. Royce Sadler is currently Senior Assessment Scholar at the Teaching and Educational Development Institute, University of Queensland.
He is also Professor Emeritus of Higher Education at Griffith University. His long interest in improving the efficiency and effectiveness of learning has resulted in publications on formative assessment and the place of feedback within it, with several journal articles and book chapters focused specifically on those aspects. Recent articles on summative assessment include a suite with a dominant emphasis on academic achievement standards in higher education and the case for a particular approach to assuring them.

Kay Sambell has published widely in the field of teaching, learning and assessment in higher education. She has directed a number of large-scale research and development projects aimed at investigating and enhancing the student experience and has led research on student engagement at the Centre for Excellence in Teaching and Learning (CETL) in Assessment for Learning. In 2002, Kay was awarded a UK National Teaching Fellowship for her work on innovative assessment, and she currently holds a personal chair as Professor of Learning and Teaching at Northumbria University.

Tarja Tuononen is a doctoral student at the Centre for Research and Development in Higher Education at the University of Helsinki, Finland. Her research focuses on the development of students’ generic skills and their relation to learning and work experience in both university and working-life contexts. Tarja Tuononen has also studied the assessment of student learning, and she is a researcher on the LEARN project, which concentrates on developing research-based quality tools for learning.
Index
Adorno, T., 7, 174, 175, 179–84, 190–1
alienation (student), 189–90
‘alternative’ or ‘classroom’ assessment, 16
Anderson, C., 6–7, 131–51
appraisal, 201, 227, 240; see also evaluative judgement
Approaches and Study Skills Inventory for Students (ASSIST), 79, 80, 83, 89
Approaches to Studying Inventory (ASI), 79, 83
assessment, 1–9
  characteristics, 51
  confidentiality/secrecy, 13, 15
  as creative crisis, 226
  criterion referenced, 17–18, 120; see also criteria; standards
  dilemmas and contradictions, 19–21
  emerging agendas, 21–4
  and examinations, 14
  fairness, 6, 100–11
  as game, 61, 77, 93
  and lifelong learning see sustainable assessment
  multimodal, 8, 225–6, 228, 240
  norm-based, 17–18, 120
  preconceptions about, 14
  prior experience of, 9, 117, 233, 247, 249, 250, 251, 261
  and projects, 15
  purpose, 47
  reliability, 100, 102, 110, 111
  students’ engagement in, 259; see also feedback: students’ giving and receiving; peer review
  students’ perceptions, 75–93, 87, 88 fig 4.1
  targets, 85–8 fig 4.1
  teachers’ perceptions, 5, 86–7, 88 fig 4.1, 107
  and timing, 20
  transparent, 13, 101
  types see ‘alternative’ assessment; assessment for learning; ‘authentic’ assessment; collaborative assessment; continuous assessment; feedforward assessment; formative assessment; portfolio assessment; summative assessment; sustainable assessment
  validity, 102, 110, 111
assessment culture, 57, 168–9
assessment for certification see summative assessment
assessment for learning, 57–8, 62, 80
  environments, 5, 63–7
  techniques and practice, 56, 58–63, 65–9
  see also formative assessment
assessment literacy, 67, 257
Assessment Standards Knowledge exchange (ASKe), 60–1
assessment tasks
  analysis of example, 153–6
  failure correctly to undertake, 7, 152–3, 168–71
  image of response to, 163–4
  low-risk vs high-risk, 176, 233, 234, 235, 241, 254
  specification of response genre required, 170
  task awareness, 102, 105, 106–8
Assessment 2020, 199
attributes
  generic, 19, 24, 169
  graduate, 4, 32–3, 45–52, 201–2; see also ‘graduateness’
‘authentic’ assessment, 47, 66, 101
authenticity, 4–5, 19, 34
  in Adorno, 182–3
  and assessment, 47, 66, 101
  communitarian dimension, 36, 42, 43–4, 45
  critical dimension, 36, 42–3, 45
  existential dimension, 36, 37–40, 44
  and failure, 180, 184
  and ‘graduateness’, 33, 45, 46 table 2.1
  in Heidegger, 38
  and strangeness, 36, 37–9
backwash effect, 62, 110
Barnett, R., 26, 33, 34, 35–6, 38, 40, 41, 50, 140–1, 143
Bensley, D. A., 200–1
Biggs, J. B., 62, 102, 119, 124
Birenbaum, M., 57
Boud, D., 4, 13–31, 52, 190, 201
Boulding, K., 7, 160–3
Brown, S., 57
care (student)
  about claims made for work, 50
  about feedback, 50
collaborative assessments, 8, 227, 229, 230, 238
collaborative connoisseurship, 8, 232, 236–8, 241
collaborative peer review, 215–16
collaborative work processes, 8, 232, 235–6, 240
communication
  intersubjective, 7, 131, 132
  and meaning creation, 133–6
  miscommunication and misunderstanding, 139–40
  perspectivity in, 137–9
  and trust, 142–3
communities of practice, 8, 248, 249, 253, 254
connoisseurship, 226, 228
  collaborative, 8, 232, 236–8, 241
  individual, 232, 241
  of quality work see high quality work
constructive alignment, 62, 102, 107
continuous assessment, 13, 15–16
core competencies see attributes: generic
Course Experience Questionnaire (CEQ), 79, 118
Course Perceptions Questionnaire (CPQ), 79, 118
Cowan, J., 200
criteria, 109, 110, 115, 134, 166
  alignment with course aims, 102–3
  explicit, 18, 103
  and peer review, 204, 205, 206, 213–14
  preset, 108–9, 134, 167, 171
  tacit, 103, 207, 208, 213, 214
  unitary criterion, 168
criteria compliance, 62
criteria formulation (by students), 208
critical being, formation of, 34, 40
critical failure, 179
critical friend role, 189, 229, 230, 232, 238
critical pedagogy, 140, 174
critical theory, 174, 179–84
critical thinking/understanding, 39–40, 48, 106, 182, 200–1
cumulative assessment see feedforward assessment
deep vs surface learning, 5, 76, 79, 81–2 tables 4.1–2, 83, 84, 85, 117–18 table 6.1, 119 table 6.2, 126
dialogue, 231–2, 241
  and feedback, 80, 140, 141, 252–3
  and peer review, 215–16
digital environments, affordances, 8, 227, 234
dispositions vs qualities, 34n
disruptions, 231, 236, 240, 241
editing others’ work, 236–7
educational measurement revolution, 17–18
Enhancing Teaching–Learning Environments in Undergraduate Courses Project (ETL), 79–83, 84, 86, 246, 247, 248–9, 250–1, 261
Entwistle, N. J., 5, 75–98, 80, 102, 118, 138
essay writing, 77, 78–9, 84, 153–6, 157–8, 256
evaluative judgement, 8, 198–9
  and graduate attributes, 201–2
  importance of multiple acts, 204, 205
  and knowledge construction, 200–3
  and peer review, 204–5
  see also connoisseurship
exam demands, varied perceptions, 89–92
examiners (faculty)
  expectations, 154–5
  feedback, 155–6
examiners (student) see peer assessment
Experiences of Teaching and Learning Questionnaire (ETLQ), 80, 81 table 4.1, 83
failure
  accidental, 184
  and authenticity, 180, 184
  correctly to undertake assessment task, 7, 152–3, 168–71
  critical, 179
  and future learning and application, 188
  inevitable/necessary, 181, 182, 184
  passive, 179
  as pedagogical process, 7, 174, 177, 192
  risk of, 191, 227
  social constructions of, 177–9
  and social justice, 7–8, 173, 190, 191
  to achieve grades, 173–4, 189
  to choose response genre, 170
  to meet norms, 174
Fawns, T., 8, 225–45
feedback, 21–4, 49, 51, 165–6, 177
  anticipatory, 68
  care about, 50
  and course design, 23–4
  credibility, 143
  and dialogue, 80, 140, 141, 252–3
  and epistemic responsibility, 139–50
  and future professional identity, 146
  improvements in, 22
  inadequacy, 21–2, 109, 257
  informal, 67, 69
  intrinsic, 66, 67, 69
  literacy about, 257
  meaning and purpose, 22–3, 47, 59–60
  openness to, 50
  and patchwork text, 68
  perceptions of, 1, 5–6, 119 table 6.2, 143
  spoken, 254–5
  students’ giving and receiving, 8, 23, 49, 65, 144, 206, 227
  sustainable, 186–7
  univocal function, 00
  written, 204, 206, 214–15, 255
feedforward assessment, 68, 226, 228, 230, 232, 238–40, 241, 251
formative assessment, 13, 15, 19, 20, 21, 57
  vs summative assessment, 16, 66, 176
future challenges, equipping for see learning: lifelong
genre responses see response genre
goal knowledge, 7, 157–8, 162, 165–9
  and language, 125, 134, 135, 257–8
Google Docs, 237–8
grade descriptors, 6, 121–6
Grade Point Average (GPA), 16, 82 table 4.2, 117–18 table 6.1, 119 table 6.2, 120
grades, 99, 100
  divergence of teachers’ and students’ experience, 108–9
  integrity, 100, 109
  and students’ self-assessment, 108–9
  vs learning outcomes, 6, 108–9
graduate attributes, 4, 32–3, 45–52, 201–2
‘graduateness’, 4, 32, 36, 44
  and authenticity, 33, 45, 46 table 2.1
group work
  individual vs group needs, 235
  low-risk vs high-risk tasks, 233
  see also collaborative assessments; collaborative connoisseurship; collaborative peer review; collaborative work processes
guidance and feedback loop, 24, 247, 249–53, 252 fig 12.1, 258, 259–60
  and students’ grasp of nature of high quality work, 253–4
  and students’ self-image, 253, 254–5
Hailikari, T., 5–6, 99–113
Halpern, D. F., 201
Heidegger, M., 38, 41, 45n, 182
hidden curriculum, 99, 110
high quality work, grasp of requirements for, 9, 207–8, 213, 226, 230, 248–9, 253–4
Hounsell, D., 1–3
  and communication and audience, 131
  and connoisseurship, 226
  and essay writing, 78, 256
  and feedback, 79, 114–15, 131–2, 135–6, 176, 186–7, 198
  and feedforward assessment, 226
  and guidance and feedback loop, 9, 247, 249–51
  and learning–feedback–assessment triumvirate, 173, 175–7, 192
  and ways of thinking and practising (WTPs), 133, 138–9, 249
identity, 9, 253–4
  future professional, 146; see also imagined trajectories
  negative, 180
image
  of marker, 165
  of response to assessment task, 163–4
  of world, 160–2
imagined trajectories (of future professional identity), 9, 144, 253, 254, 258
interdependence, 237
intersubjectivity, 132, 134, 135
intervention, structured, 60
judgements
  capacity for making, 4, 13, 24–5, 27–9
  evaluative see evaluative judgement
Karagiannopoulou, E., 5, 75–98
Knight, P., 57
knowledge
  complex nature, 7, 35, 37, 179–80, 184–6, 192
  as palimpsest, 186
  and social justice, 182
  see also goal knowledge; subject knowledge
knowledge and understanding, model of, 105 fig 5.1
knowledge construction, 159, 202–3
knowledge transfer, 171, 206
Kreber, C., 4–5, 26, 32–55, 182, 185
language interpretation, problems of, 125, 134, 135, 257–8
Lave, J., 248, 254
learning, 116–18
  3-P model, 116 fig 6.1
  individual learning and unlearning, 236–7
  lifelong, 4, 26, 67, 200, 252; see also sustainable assessment
  modularisation of, 185
  strategic approach, 76, 77, 82 table 4.2
  transformative, 35, 52, 186, 241
learning outcomes, 18–19, 62–3, 117
  taxonomy, 6, 124
  vs course grades, 6, 108–9
learning–feedback–assessment triumvirate, 7, 173, 175–7, 185, 192
Lindblom-Ylänne, S., 5–6, 99–113
Linell, P., 133, 137, 138, 139
McArthur, J., 7–8, 50, 136, 144, 173–94, 226, 232
McCune, V., 8–9, 136, 144, 246–63
McDowell, L., 5, 56–72
Marton, F., 76
meaning creation, 133–6, 145, 185, 258
meaning fixation, 136–7
meaning potentials, 136
meaning-giving practices, 33, 43, 45
multimodal assessment, 8, 225–6, 228, 240
multiple-choice questions (MCQs)/tests, 17, 83–4, 103, 153–4
narrative analysis, 144–6
Nicol, D., 8, 40, 50, 143, 197–224, 225
Nussbaum, M., 43
‘Online Assessment’ course, 228–9, 241
open-book exams, 5, 87–91
O’Shea, C., 8, 225–45
outcomes see learning outcomes
pass mark, consequences of unjustified award, 169
passivity, 5, 57, 179, 186, 190
peer assessment, 49–50, 65, 68, 203–4, 255–6
peer review, 8, 143, 197, 229, 241
  collaborative, 215–16
  comparative, 205, 208
  and criteria, 204, 205, 206, 213–14
  and critical friend role, 229, 230, 232, 238
  design resources, 220
  and development of concept of high quality work, 213
  and dialogue, 215–16
  and evaluation of reviews received, 218
  and evaluative judgement, 204–5
  importance of giving and receiving, 208–9
  and learning transfer, 205
  principles, 209 table 10.1, 210–19
  and self-assessment, 204, 205, 208, 216–18
  software resources, 210
  and standards, 204, 205
  teacher’s role, 218–19
  and trust and respect, 210–11
  vs peer marking/grading, 203–4
  and written feedback, 204, 214–15
peer tutoring, 206
portfolio assessment, 84
Postareff, L., 5–6, 99–113
practical reasoning, 34, 44, 47–8, 51
practice and rehearsal, opportunities for, 66–7
professional formation, 146
Prosser, M., 6, 111, 114–28, 143
quality see high quality work
Räisänen, M., 5–6, 99–113
Ramsden, P., 118
response genre, 7, 158–60, 163, 170, 171
Rhind, S., 8–9, 136, 144, 232, 246–63
risk
  of failure, 191, 227
  high vs low risk assessment tasks, 176, 233, 234, 235, 241, 254
risk-taking
  student, 41, 43, 50, 51
  teacher, 69
Rommetveit, R., 7, 132, 133, 134, 136, 137–8, 139
Rosin, M., 33–4, 43, 47
rubrics, 28, 121, 165–6
Sadler, D. R., 7, 115, 125, 134, 152–72, 187, 201, 207–8, 254
Sambell, K., 5, 56–72
scaffolding, 8, 219, 226, 229, 232, 239, 240
self-assessment (student), 24–5, 49–50, 65, 67, 185, 199
  and grading, 108–9
  and peer review, 204, 205, 208, 216–18
  and view of own abilities, 142
self-image (student), 142, 253, 254–5
self-regulation of learning, 8, 24, 28, 65, 197, 198, 199, 200
short-answer questions (SAQs), 76–7, 84, 153, 257
skills, transferable, 19, 171
Skype, 230, 232, 234
social justice
  and failure, 7–8, 173, 190, 191
  and knowledge, 182
standards, 115, 120, 126
  and grade descriptors, 125–6
  and peer review, 204, 205
  students’ perceptions, 116, 118–20
strangeness, 34, 35–9, 44
structure of observed learning outcomes (SOLO) taxonomy, 6, 124
Study Process Questionnaire, 119
subject knowledge, 248, 249
success, 188, 232
Sullivan, W., 33–4, 43, 47
summative assessment, 15, 19, 20, 21
  vs formative assessment, 16, 66, 176
super-complexity, 35, 37
surface learning see deep vs surface learning
sustainable assessment, 4, 26–7, 64, 65, 144
teaching
  students’ perceptions of, 5, 87, 88 fig 4.1
  teachers’ approaches and role, 41–2, 69, 86
testing culture vs assessment culture, 57, 58
Thomas, P., 83
trust
  and communication, 142–3
  in peer review, 210–11
  in teachers’ fairness, 109–10
Tuononen, T., 5–6, 99–113
uncertainty, 4, 6, 34–5, 37, 41, 47, 51, 161, 163, 226, 235, 248
understanding
  of goals, 125, 134, 135, 257–8
  miscommunication and misunderstanding, 139–40
  model of, 105 fig 5.1
  personal, 85–6, 87, 89–91, 93, 103
  shared, 61, 177, 227, 231, 241, 257–8, 260
  of targets, 85, 87
veterinary medicine students, 246–7, 251, 253, 255, 257, 259
ways of thinking and practising (WTPs), 138–9, 249
Wenger, E., 248, 254
wiki assignments, 8, 227, 228–9, 232–3, 235, 239–40
word meaning, 132, 133